Compare commits

539 Commits

Author SHA1 Message Date
mergify[bot]
e012a05b08 Add RecentItems metrics (#20484) (#20486)
(cherry picked from commit e13ed8a627)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-10-06 16:51:19 -06:00
Justin Starry
aa5e23799f Optimize stakes cache and rewards at epoch boundaries (backport #20432) (#20475)
* Optimize stakes cache and rewards at epoch boundaries (backport #20432)

* fix conflicts
2021-10-06 16:25:30 -04:00
sakridge
44dbc51f8c Backport tpu vote feature (#20459) 2021-10-06 18:50:12 +02:00
mergify[bot]
f20280620a fix syntax error in bash_profile (#20385)
If there is no newline at the end of the file, this export is glued to the rest of the code and generates a syntax error like this:

```bash
if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fiexport PATH="/Users/user/.local/share/solana/install/active_release/bin:$PATH"
```

(cherry picked from commit 87c0d8d9e7)

Co-authored-by: OleG <emptystamp@gmail.com>
2021-10-06 10:49:08 +00:00
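
For contrast, a sketch of the corrected ~/.bash_profile after this fix, assuming the installer now ensures the export begins on a fresh line:

```bash
# Fixed: the export starts on its own line, so `fi` and `export`
# are no longer glued together.
if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fi
export PATH="/Users/user/.local/share/solana/install/active_release/bin:$PATH"
```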
mergify[bot]
b9b81def35 Make rewards tracer async friendly (backport #20452) (#20455)
* Make rewards tracer async friendly (#20452)

(cherry picked from commit 250a8503fe)

# Conflicts:
#	Cargo.lock
#	ledger-tool/Cargo.toml
#	ledger-tool/src/main.rs
#	programs/stake/src/stake_state.rs
#	runtime/src/bank.rs

* fix conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2021-10-06 07:09:40 +00:00
mergify[bot]
972d101f6e Default --rpc-bind-address to 127.0.0.1 when --private-rpc is provided and --bind-address is not (#20427)
(cherry picked from commit 221343e849)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-10-05 01:13:31 +00:00
Justin Starry
b2f7be0351 Revert "cargo fmt"
This reverts commit be43512bae.
2021-10-01 00:55:49 -04:00
Justin Starry
be43512bae cargo fmt 2021-10-01 00:51:43 -04:00
Tyera Eulberg
5df386ae0e Bump version to v1.6.28 (#20300) 2021-09-28 20:23:27 +00:00
Tyera Eulberg
c5d174c170 Use single feature (#20289) 2021-09-28 04:30:11 -06:00
Tyera Eulberg
9b6ec0b6d5 v1.6: Restore ability for programs to upgrade themselves (#20263)
* Make helper associated fn

* Add feature definition

* Restore program-id write lock when upgradeable loader is present; restore bpf upgrade-self test

* Remove spurious comment
2021-09-27 22:29:59 +00:00
Justin Starry
1a88a9eb0e resolve conflicts 2021-09-25 22:11:58 -04:00
dependabot[bot]
66a9e6bcb4 chore: bump tiny-bip39 from 0.8.0 to 0.8.1 (#20136)
* chore: bump tiny-bip39 from 0.8.0 to 0.8.1

Bumps [tiny-bip39](https://github.com/maciejhirsz/tiny-bip39) from 0.8.0 to 0.8.1.
- [Release notes](https://github.com/maciejhirsz/tiny-bip39/releases)
- [Changelog](https://github.com/maciejhirsz/tiny-bip39/blob/master/CHANGELOG.md)
- [Commits](https://github.com/maciejhirsz/tiny-bip39/compare/v0.8.0...v0.8.1)

---
updated-dependencies:
- dependency-name: tiny-bip39
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <anatoly+githubjenkins@solana.io>
(cherry picked from commit 038d77347a)

# Conflicts:
#	Cargo.lock
#	clap-utils/Cargo.toml
#	cli/Cargo.toml
#	keygen/Cargo.toml
#	sdk/Cargo.toml
2021-09-25 22:11:58 -04:00
Dmitri Makarov
2d0c0bc8ba Bump bpf-tools version to 1.12 2021-09-25 22:11:58 -04:00
Dmitri Makarov
2b6962c063 Bump bpf-tools version to 1.11 (#18079)
Avoid including local paths in compiler generated ELF files
2021-09-25 22:11:58 -04:00
Dmitri Makarov
ed37cfed1c Update RUSTFLAGS if it is set, instead of overriding in build-bpf 2021-09-25 22:11:58 -04:00
Dmitri Makarov
8f1a08e15c Remove O2 option for compiling BPF programs (#17865)
rustc's default compiler optimization level is O3. This change removes
the option that overrides the default optimization level, because it
is safe to do so. Code generation is incorrect in some cases
because of link-time optimizations, which remain disabled for
compiling BPF programs. In addition, this commit updates the expected
instruction counts for the assert_instruction_count test.
2021-09-25 22:11:58 -04:00
Dmitri Makarov
c8d9e6c8c8 Add separate rules to generate dependencies files for C BPF build 2021-09-25 22:11:58 -04:00
Dmitri Makarov
3a1b2c8c9e Bump BPF tools to v1.10 2021-09-25 22:11:58 -04:00
Dmitri Makarov
77e6cd35ee Add BPF rustc option to reduce the optimizations to safer level (#17590) 2021-09-25 22:11:58 -04:00
Dmitri Makarov
6c390aeb67 Fix missing builtins in C programs linking with compiler_builtins (#17475) 2021-09-25 22:11:58 -04:00
Dmitri Makarov
cde248ae73 Bump bpf-tools version to 1.9
- upgrade rustc to 1.52.1 and clang to 12.0
2021-09-25 22:11:58 -04:00
Dmitri Makarov
9bc20a8cbb Bump bpf-tools version to 1.8 2021-09-25 22:11:58 -04:00
Dmitri Makarov
6fd959c6d2 Bump bpf-tools version to 1.7 (#17176) 2021-09-25 22:11:58 -04:00
Dmitri Makarov
94e50b61e8 Add llvm feature option to compile for Solana BPF target (#16495) 2021-09-25 22:11:58 -04:00
mergify[bot]
4113acbf24 Resolve zeroize_derive audit warning by bumping version (backport #20182) (#20189)
* Resolve zeroize_derive audit warning by bumping version (#20182)

* Revert "temporarily disable new audit"

This reverts commit 3dfbd95ddc.

* Bump version of zeroize_derive from v1.0.0 to v1.2.0

(cherry picked from commit 0c62a6fe3f)

# Conflicts:
#	ci/do-audit.sh

* resolve conflict

Co-authored-by: Justin Starry <justin@solana.com>
2021-09-25 09:45:32 -04:00
Trent Nelson
572c0af699 runtime: remove inactive delegation from stakes cache 2021-09-22 20:39:34 -06:00
sakridge
0a3bf17bb5 Only allow votes when root distance gets too high #2 (#19991)
Updates from master
2021-09-22 18:25:47 +02:00
Tao Zhu
431a82276e allow vote to have 1 or 2 sigs (#20082) 2021-09-22 03:13:44 +00:00
mergify[bot]
14505f5bfd Fix memo handling and "blocks" key formatting (#20044) (#20079)
* Fix memo handling and "blocks" key formatting

* Skip memo equality check

* Define slot_to_tx_by_addr_key

Co-authored-by: Justin Starry <justin@solana.com>
(cherry picked from commit 795dde109c)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-09-21 21:43:59 +00:00
Tao Zhu
c17c74021c sigverify to identify and mark simple vote transaction (#20021)
* sigverify to identify and mark simple vote transaction

* check vote tx at get_packet_offsets to cover both cpu and gpu paths

* add pubkey_len to PacketOffsets to reduce the redundant bytes counting
2021-09-20 23:00:33 -05:00
mergify[bot]
cfc0e2d61f Make bigtable delete-slots actually usable (#20037) (#20041)
(cherry picked from commit 22f45dca80)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-09-20 23:26:33 +00:00
mergify[bot]
b7fc704092 Optimize RPC pubsub for multiple clients with the same subscription (backport #18943) (#19986)
* Optimize RPC pubsub for multiple clients with the same subscription (#18943)

* reimplement rpc pubsub with a broadcast queue

* update tests for new pubsub implementation

* fix: fix review suggestions

* chore(rpc): add additional pubsub metrics

* integrate max subscriptions check into SubscriptionTracker to reduce locking

* separate subscription control from tracker

* limit memory usage of items in pubsub broadcast queue, improve error handling

* add more pubsub metrics

* add final count metrics to pubsub

* add metric for total number of subscriptions

* fix small review suggestions

* remove by_params from SubscriptionTracker and add node_progress_watchers map instead

* add subscription tracker tests

* add metrics for number of pubsub notifications as a counter

* ignore clippy lint in TokenCounter

* fix underflow in token counter

* reduce queue capacity in pubsub tests

* fix(rpc): fix test timeouts

* fix race in account subscription test

* Add RpcSubscriptions::new_for_tests

Co-authored-by: Pavel Strakhov <p.strakhov@iconic.vc>
Co-authored-by: Nikita Podoliako <n.podoliako@zubr.io>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
(cherry picked from commit 65227f44dc)

* Fix conflicts (and standardize naming to make future subscription backports easier)

Co-authored-by: Pavel Strakhov <ri@idzaaus.org>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-09-20 07:32:34 +00:00
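
A minimal sketch of the broadcast-queue idea described in the entry above, using tokio's broadcast channel; the names and the use of tokio here are assumptions, not the actual RPC pubsub code:

```rust
use tokio::sync::broadcast;

// One sender per unique subscription; each client holds its own receiver,
// so a notification is produced once and fanned out to all clients instead
// of being generated per-client. (The real implementation adds subscription
// tracking, metrics, and memory limits.)
#[tokio::main]
async fn main() {
    let (tx, _rx0) = broadcast::channel::<String>(16);

    // Two RPC clients with the same subscription share the one queue.
    let mut client_a = tx.subscribe();
    let mut client_b = tx.subscribe();

    tx.send("slot notification".to_string()).unwrap();

    assert_eq!(client_a.recv().await.unwrap(), "slot notification");
    assert_eq!(client_b.recv().await.unwrap(), "slot notification");
}
```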
sakridge
8bf5cdb349 bump 1.6 version (#19988) 2021-09-18 00:09:39 +00:00
sakridge
eaa6aa0998 Only allow votes when root distance gets too high (#19916) 2021-09-17 22:28:11 +02:00
mergify[bot]
3b3822846b Add delete subcommand to ledger-tool bigtable (#19931) (#19962)
* Add `delete` subcommand to `ledger-tool bigtable` command

* feedback

(cherry picked from commit c71fab6cb3)

Co-authored-by: Justin Starry <justin@solana.com>
2021-09-17 04:25:36 +00:00
mergify[bot]
da7185e2ed rpc: performance fix for getProgramAccounts (backport #19941) (#19949)
* rpc: performance fix for getProgramAccounts (#19941)

* rpc: performance fix for getProgramAccounts

The accounts were gradually pushed into a vector, which produced
significant slowdowns for very large responses.

* rpc: rewrite loops using iterators

Co-authored-by: Christian Kamm <ckamm@delightful-solutions.de>
(cherry picked from commit f1bbf1d8b0)

# Conflicts:
#	core/src/rpc.rs

* fix conflicts

* Fix conflicts

Co-authored-by: Christian Kamm <mail@ckamm.de>
Co-authored-by: Justin Starry <justin@solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-09-16 23:09:11 +00:00
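
A minimal sketch of the iterator rewrite described in the entry above (illustrative types; the real code builds RPC account responses):

```rust
// Build the response in one pass with collect(), which can pre-size the
// allocation from the iterator's size hint, instead of growing a Vec
// push-by-push inside a loop.
fn encode_accounts(lamports: &[u64]) -> Vec<String> {
    // Before (slower for very large responses):
    //   let mut out = Vec::new();
    //   for l in lamports { out.push(l.to_string()); }
    lamports.iter().map(|l| l.to_string()).collect()
}
```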
mergify[bot]
ccef24c44e Add banking metrics for buffered and dropped packets (#19902) (#19926)
(cherry picked from commit ca3f147670)

Co-authored-by: Justin Starry <justin@solana.com>
2021-09-16 00:39:02 +00:00
Tyera Eulberg
0c6a133b63 Restore feature declaration (#19922) 2021-09-15 16:45:59 -06:00
Justin Starry
7873f6fb30 Report consumed_buffered_packets_count stat to metrics (#19899) 2021-09-15 07:21:20 -05:00
mergify[bot]
7cba9b8f8f Add an info log to indicate the node has reached supermajority and print the active stake percentage (#19893) (#19897)
(cherry picked from commit 4ff50519ff)

Co-authored-by: Michael <68944931+michaelh-laine@users.noreply.github.com>
2021-09-15 07:55:17 +00:00
Michael Vines
ca55bce522 Bump version to 1.6.26 (#19894) 2021-09-14 23:29:33 -06:00
Tyera Eulberg
7de8a55b54 Use f64 for stake math in get_stake_percent_in_gossip (#19892) 2021-09-14 19:17:59 -06:00
Tyera Eulberg
7232b01a02 Bump version to v1.6.25 (#19880) 2021-09-14 20:33:13 +00:00
Tyera Eulberg
a4ebbc9f55 Remove demote_program_write_locks feature (#19877)
* Remove demote_program_write_locks feature

* Update test
2021-09-14 12:25:42 -06:00
mergify[bot]
bf7c2f79c1 filters for recent contact-infos when checking for live stake (backport #19204) (#19873)
* filters for recent contact-infos when checking for live stake (#19204)

Contact-infos are saved to disk:
https://github.com/solana-labs/solana/blob/9dfeee299/gossip/src/cluster_info.rs#L1678-L1683

and restored on validator start-up:
https://github.com/solana-labs/solana/blob/9dfeee299/core/src/validator.rs#L450

Staked-node entries will not expire until an epoch later. So when the
validator checks for online stake, it erroneously picks up
contact-infos restored from disk, which breaks the entire
wait-for-supermajority logic:
https://github.com/solana-labs/solana/blob/9dfeee299/core/src/validator.rs#L1515-L1561

This commit adds an extra check for the age of contact-info entries and
filters out old ones.

(cherry picked from commit 7a789e0763)

# Conflicts:
#	core/src/validator.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-09-14 18:21:56 +00:00
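
A minimal sketch of the age check described in the entry above; the fields here are illustrative, not the actual ContactInfo layout:

```rust
struct ContactInfo {
    stake: u64,
    wallclock_ms: u64, // when the node last gossiped this entry
}

// Only count stake from contact-infos gossiped recently, so stale entries
// restored from disk don't inflate the online-stake estimate.
fn online_stake(peers: &[ContactInfo], now_ms: u64, max_age_ms: u64) -> u64 {
    peers
        .iter()
        .filter(|ci| now_ms.saturating_sub(ci.wallclock_ms) <= max_age_ms)
        .map(|ci| ci.stake)
        .sum()
}
```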
Stephen Akridge
d401c3f5ab Bump version 2021-09-14 08:12:02 -07:00
sakridge
83d08223fd More transaction forwarding (#19834) 2021-09-14 16:08:53 +02:00
mergify[bot]
6bb63f9abf Silence vercel github comments (#19827) (#19832)
(cherry picked from commit 8ac12c29ed)

Co-authored-by: Justin Starry <justin@solana.com>
2021-09-13 17:51:39 -05:00
behzad nouri
d8293abc64 debug logs when crds table trim failed (#18307)
There are reports of this error possibly being spammy:
https://discord.com/channels/428295358100013066/689412830075551748/859441080054710293

This commit changes the log level to debug and additionally adds a new
metric to understand the frequency of this error.

(cherry picked from commit 9d983a34a0)
2021-09-10 20:02:13 -07:00
Michael Vines
4d074a6716 Reduce wait for supermajority threshold back to 80%
(cherry picked from commit 4386e09710)
2021-09-09 21:44:42 -07:00
mergify[bot]
3eee222667 Return error if Transaction contains writable executable or ProgramData accounts (backport #19629) (#19729)
* Return error if Transaction contains writable executable or ProgramData accounts (#19629)

* Return error if Transaction locks an executable as writable

* Return error if a ProgramData account is writable but the upgradable loader isn't present

* Remove unreachable clause

* Fixup bpf tests

* Review comments

* Add new TransactionError

* Disallow writes to any upgradeable-loader account when loader not present; remove is_upgradeable_loader_present exception for all other executables

(cherry picked from commit 38bbb77989)

# Conflicts:
#	programs/bpf/tests/programs.rs
#	runtime/src/accounts.rs
#	runtime/src/bank.rs
#	sdk/src/transaction.rs
#	storage-proto/proto/transaction_by_addr.proto
#	storage-proto/src/convert.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-09-09 03:06:02 +00:00
mergify[bot]
a78c3c07f2 filters crds values in parallel when responding to gossip pull-requests (backport #18877) (#19450)
* filters crds values in parallel when responding to gossip pull-requests (#18877)

When responding to gossip pull-requests, filter_crds_values takes a lot of time
while holding a read-lock:
https://github.com/solana-labs/solana/blob/f51d64868/gossip/src/crds_gossip_pull.rs#L509-L566

This commit filters crds values in parallel using rayon thread-pools.

(cherry picked from commit f1198fc6d5)

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-09-08 17:09:35 +00:00
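
A minimal sketch of the parallel filtering described in the entry above, with an illustrative bitmask predicate standing in for the real CRDS bloom-filter check:

```rust
use rayon::prelude::*;

// Score values against the pull-request filter on a rayon thread-pool
// instead of a sequential scan while the read-lock is held.
fn filter_values(values: &[u64], mask: u64) -> Vec<u64> {
    values
        .par_iter()
        .filter(|&&v| v & mask == 0) // keep values the requester doesn't have
        .copied()
        .collect()
}
```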
mergify[bot]
53f8e58300 Demote write locks on transaction program ids (backport #19593) (backport #19633) (#19637)
* Demote write locks on transaction program ids (backport #19593) (#19633)

* Demote write locks on transaction program ids (#19593)

* Add feature

* Demote write lock on program ids

* Fixup bpf tests

* Update MappedMessage::is_writable

* Comma nit

* Review comments

(cherry picked from commit decec3cd8b)

# Conflicts:
#	core/src/banking_stage.rs
#	core/src/cost_model.rs
#	core/src/cost_tracker.rs
#	ledger-tool/src/main.rs
#	program-runtime/src/instruction_processor.rs
#	programs/bpf/tests/programs.rs
#	programs/bpf_loader/src/syscalls.rs
#	rpc/src/transaction_status_service.rs
#	runtime/src/accounts.rs
#	runtime/src/bank.rs
#	runtime/src/message_processor.rs
#	sdk/benches/serialize_instructions.rs
#	sdk/program/src/message/mapped.rs
#	sdk/program/src/message/sanitized.rs
#	sdk/src/transaction/sanitized.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
(cherry picked from commit fcda5d4a7d)

# Conflicts:
#	cli-output/src/display.rs
#	core/src/transaction_status_service.rs
#	program-test/src/lib.rs
#	programs/bpf_loader/src/syscalls.rs
#	runtime/src/accounts.rs
#	runtime/src/bank.rs
#	runtime/src/message_processor.rs
#	sdk/benches/serialize_instructions.rs
#	sdk/program/src/message.rs
#	sdk/src/feature_set.rs
#	transaction-status/src/parse_accounts.rs

* Replace feature and fix conflicts

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-09-04 09:10:57 +00:00
mergify[bot]
b4fdce9443 Populate memo in blockstore signatures-for-address (#19515) (#19604)
* Add TransactionMemos column family

* Traitify extract_memos

* Write TransactionMemos in TransactionStatusService

* Populate memos from column

* Dedupe and add unit test

(cherry picked from commit 5fa3e5744c)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-09-03 18:38:57 +00:00
mergify[bot]
39fe52fc06 Fix clippy (#17214) (#19609)
Newer clippy from +nightly has new errors about where annotations can and
cannot go. This commit fixes two of them.

(cherry picked from commit b074e86ad2)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2021-09-03 17:00:25 +00:00
mergify[bot]
db65b4e641 Populate memo in bigtable transaction structs (backport #19512) (#19606)
* Populate memo in bigtable transaction structs (#19512)

* Populate memo in bigtable transaction structs

* Preface memos with len

(cherry picked from commit f4ae450f34)

# Conflicts:
#	transaction-status/src/parse_instruction.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-09-03 09:26:08 +00:00
mergify[bot]
fe1c20c689 excludes epoch-slots from nodes with unknown or different shred version (backport #17899) (#19551)
* excludes epoch-slots from nodes with unknown or different shred version (#17899)

Inspecting the TDS gossip table shows that crds values of nodes with
different shred-versions are creeping in. Their epoch-slots are
accumulated in ClusterSlots, causing bogus slots very far from the
current root which are not purged, and so ClusterSlots keeps consuming
more memory:
https://github.com/solana-labs/solana/issues/17789
https://github.com/solana-labs/solana/issues/14366#issuecomment-769896036
https://github.com/solana-labs/solana/issues/14366#issuecomment-832754654

This commit updates ClusterInfo::get_epoch_slots to discard entries
from nodes with an unknown or different shred-version.

Follow-up commits will patch gossip not to waste bandwidth and memory
on crds values of nodes with a different shred-version.

(cherry picked from commit 985280ec0b)

# Conflicts:
#	core/src/cluster_info.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-09-01 18:50:56 +00:00
mergify[bot]
6566a0f6c8 short cuts expiration check if origin's contact-info is still valid (#17918) (#19552)
Crds::find_old_labels can skip checking values' timestamps if the
origin's contact-info hasn't expired yet:
https://github.com/solana-labs/solana/blob/985280ec0/gossip/src/crds.rs#L394-L408

(cherry picked from commit cca46308bc)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-09-01 18:47:40 +00:00
mergify[bot]
3606de56f7 Fix tests that make assumptions about tx fee rate (#19538) (#19543)
(cherry picked from commit 4fb43bbd90)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-09-01 05:57:30 +00:00
mergify[bot]
9a005855dc persists repair-peers cache across repair service loops (backport #18400) (#19526)
* persists repair-peers cache across repair service loops (#18400)

The repair-peers cache is reset each time the repair service loop runs,
and so is computed repeatedly for the same slots:
https://github.com/solana-labs/solana/blob/d2b07dca9/core/src/repair_service.rs#L275

This commit uses an LRU cache to persist repair-peers for each slot.
In addition to the LRU eviction rules, each entry also has a 10-second
TTL to avoid re-using outdated data.

(cherry picked from commit a0551b4054)

# Conflicts:
#	core/src/repair_service.rs
#	core/src/serve_repair.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-08-31 21:17:22 +00:00
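
A minimal sketch of the cache described in the entry above, with illustrative types (the real code uses an LRU cache; eviction is omitted here for brevity):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct RepairPeersCache {
    ttl: Duration, // e.g. 10 seconds, per the commit message
    entries: HashMap<u64, (Instant, Vec<String>)>, // slot -> (inserted_at, peers)
}

impl RepairPeersCache {
    fn get_or_compute(
        &mut self,
        slot: u64,
        compute: impl FnOnce() -> Vec<String>,
    ) -> &[String] {
        let now = Instant::now();
        let stale = match self.entries.get(&slot) {
            Some((inserted_at, _)) => now.duration_since(*inserted_at) > self.ttl,
            None => true,
        };
        if stale {
            // Recompute repair peers only when the entry is missing or expired,
            // instead of on every pass of the repair-service loop.
            self.entries.insert(slot, (now, compute()));
        }
        &self.entries.get(&slot).unwrap().1
    }
}
```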
mergify[bot]
70bdcf06b4 locks gossip only once in push_epoch_slots (backport #17355) (#19484)
* locks gossip only once in push_epoch_slots

push_epoch_slots unlocks and re-locks gossip when iterating
over epoch-slot indices, which is wasteful:
https://github.com/solana-labs/solana/blob/0486df02b/core/src/cluster_info.rs#L915-L929

(cherry picked from commit 9339a6f8f3)

# Conflicts:
#	core/src/cluster_info.rs

* removes redundant slots sort in push_epoch_slots

(cherry picked from commit 9471ba61c5)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-08-29 17:09:59 +00:00
mergify[bot]
f53f273585 parallelizes gossip packets receiver with processing of requests (backport #17647) (#19474)
* parallelizes gossip packets receiver with processing of requests (#17647)

Gossip packet processing is composed of two stages:
  * The first is consuming packets from the socket, deserializing,
    sanitizing and verifying them:
    https://github.com/solana-labs/solana/blob/7f0349b29/gossip/src/cluster_info.rs#L2510-L2521
  * The second is actually processing the requests/messages:
    https://github.com/solana-labs/solana/blob/7f0349b29/gossip/src/cluster_info.rs#L2585-L2605

The former does not acquire any locks and so can be parallelized with
the latter, allowing better pipelining properties and lower latency in
responding to gossip requests or propagating messages.

(cherry picked from commit cab30e2356)

# Conflicts:
#	core/src/cluster_info.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-08-27 22:13:14 +00:00
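
A minimal sketch of the two-stage pipeline described in the entry above; the names are illustrative, not the actual gossip code:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (verified_tx, verified_rx) = mpsc::channel::<Vec<u8>>();

    // Stage 1: consume from the socket, deserialize, sanitize, verify.
    // This stage holds no locks, so it can keep draining the socket
    // while stage 2 works.
    let receiver = thread::spawn(move || {
        for raw in ["ping", "pull-request", "push-message"] {
            let packet = raw.as_bytes().to_vec();
            if verified_tx.send(packet).is_err() {
                break;
            }
        }
    });

    // Stage 2: actually process the requests/messages; only this stage
    // touches shared gossip state.
    let processor = thread::spawn(move || {
        for packet in verified_rx {
            println!("processing {} bytes", packet.len());
        }
    });

    receiver.join().unwrap();
    processor.join().unwrap();
}
```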
mergify[bot]
e0584b77c5 Fixing missing pubsub notification for programSubscribe and logsSubscribe (backport #19092) (#19455)
* Fixing missing pubsub notification for programSubscribe and logsSubscribe (#19092)

#18587: programSubscribe is missing notifications randomly. The issue has two causes:

* Not all rooted slots get OptimisticallyConfirmed notifications.
* The OptimisticallyConfirmed notifications can be out of order for slots: for slots A and B with A < B, the notification for B can be seen before A's.

Summary of changes:

Changed OptimisticallyConfirmedBankTracker to send notifications for parent banks if they have not been notified yet. A new variable, last_notified_slot, is used to track that.

Tests: with my validator running against testnet, the subscription was failing 75% of the time before the fix; with the fix, it passes consistently, using the program mentioned in #18587.

(cherry picked from commit 1a372a792e)

# Conflicts:
#	core/src/optimistically_confirmed_bank_tracker.rs
#	core/src/rpc_subscriptions.rs

* Fix conflicts

Co-authored-by: Lijun Wang <83639177+lijunwangs@users.noreply.github.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-08-27 01:31:51 +00:00
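
A minimal sketch of the parent-notification fix described in the entry above, with illustrative types: when slot B is confirmed before its parent A, walk the parent chain and notify every slot above the last_notified_slot watermark, oldest first.

```rust
use std::collections::HashMap;

fn notify_with_parents(
    slot: u64,
    parents: &HashMap<u64, u64>, // slot -> parent slot
    last_notified_slot: &mut u64,
    notify: &mut impl FnMut(u64),
) {
    // Collect this slot plus any ancestors that were never notified.
    let mut chain = Vec::new();
    let mut s = slot;
    while s > *last_notified_slot {
        chain.push(s);
        match parents.get(&s) {
            Some(&parent) => s = parent,
            None => break,
        }
    }
    // Notify in ascending order so A's notification always precedes B's.
    for s in chain.into_iter().rev() {
        notify(s);
    }
    *last_notified_slot = (*last_notified_slot).max(slot);
}
```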
Tyera Eulberg
39a5431790 Bump jsonrpc crates and remove old tokio (#19454) 2021-08-26 23:53:44 +00:00
sakridge
715c5e64c4 Bump version to 1.6.23 (#19447) 2021-08-26 18:09:15 +00:00
carllin
7956f04fa5 Backport Accounts Fixes #16838 and the test #17038 (#19412)
* reclaims unref accounts from index (#16838)

* Test account index and store alignment (#17038)

* Use ReclaimResult::Default() instead of building subtypes

* Add test to ensure account_db store and index are aligned

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
Co-authored-by: steviez <steven@solana.com>
2021-08-25 14:40:53 -07:00
mergify[bot]
2825f82bee Add parameter to allow setting max-retries for SendTransaction rpc (backport #19387) (#19415)
* Add parameter to allow setting max-retries for SendTransaction rpc (#19387)

* Add parameter to cap rpc send retries for a tx

* Add parameter to docs

(cherry picked from commit 7482861f4b)

# Conflicts:
#	banks-server/src/banks_server.rs
#	core/src/rpc.rs
#	core/src/send_transaction_service.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-08-25 16:47:47 +00:00
mergify[bot]
45e3cd373b Bump crossbeam-epoch (backport #19378) (#19388)
* Bump to get off yanked crate (#19378)

(cherry picked from commit 82ea4891fd)

# Conflicts:
#	Cargo.lock
#	programs/bpf/Cargo.lock

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-08-24 08:16:08 +00:00
Brooks Prumo
2b4b260c71 Update crossbeam-deque to 0.8.1 (#19361) (#19381)
(cherry picked from commit 2ccbe471ae)
2021-08-23 23:10:35 +00:00
Trent Nelson
866fd22fde ci: pin patched solana crates version for downstream project builds 2021-08-18 22:08:12 -06:00
Tyera Eulberg
d23df85410 Remove pin-project-lite warning 2021-08-18 22:08:12 -06:00
Tyera Eulberg
839cdecfd2 Bump assert_cmd and remove audit ignore 2021-08-18 22:08:12 -06:00
Tyera Eulberg
cc6296b1fa Add audit ignores 2021-08-18 22:08:12 -06:00
Tyera Eulberg
0c2a85a903 Update hyper 0.14 2021-08-18 22:08:12 -06:00
Trent Nelson
011fe72aa2 Bump version to v1.6.22 2021-08-18 22:08:12 -06:00
mergify[bot]
40fc14471d Really start caching by fixing swapped CAS... (#18842) (#18969)
(cherry picked from commit 611af87fdb)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-07-29 16:52:46 +00:00
mergify[bot]
7fc85b8c9b Fix erroneous default start_slot (#18948) (#18951)
(cherry picked from commit 578f2aa22b)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-07-29 03:56:54 +00:00
Tyera Eulberg
4d77ac1688 Bump version to v1.6.21 (#18953) 2021-07-28 23:47:03 +00:00
sakridge
77bdb45d4a Sigverify refactor (#18873) 2021-07-23 22:32:09 +02:00
sakridge
3539849a7f Add voting service (#18552) (#18781) 2021-07-22 22:26:04 +02:00
Tyera Eulberg
8c28f9b63e Exclude stubbed ProgramCosts column from compaction (#18840) 2021-07-22 17:56:23 +00:00
mergify[bot]
3346843a87 token: Swap new token program id for consistency on all networks (#18823) (#18836)
(cherry picked from commit d6f5945653)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-07-22 09:55:39 +00:00
Jon Cinque
007c49ff2b feature: add new token program feature (v1.6 backport of #18780) (#18804)
* feature: add new token program feature

* Fixup test

* Cargo fmt

* Add back whitespace for cargo fmt

* Revert file totally
2021-07-21 21:58:28 +02:00
mergify[bot]
bbd386884d Disambiguate archive_snapshot_package IO error sources (#18797)
(cherry picked from commit a4c3db51fc)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-21 19:32:19 +00:00
Jon Cinque
ba8426e0fd 1.6: Bump crates to 1.6.20 (#18805) 2021-07-21 17:44:41 +02:00
Tao Zhu
1017c4851a backport new column families from master to 1.6 (#18743)
* backporting bank_hash and program_costs column families from master to 1.6 for rocksdb backward compatibility

* missed a line to allow dead code

* include code for purge
2021-07-17 10:59:42 -06:00
Trent Nelson
d7b381c90c Bump version to v1.6.19 2021-07-17 08:57:44 +00:00
mergify[bot]
870a7e8458 CI Tweaks (backport #18738) (#18741)
* ci: fix typo

(cherry picked from commit 96a7cedaca)

* ci: suppress cargo tree output

(cherry picked from commit 59cd0556ef)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-17 07:08:28 +00:00
mergify[bot]
6f661dd8a9 excludes private ip addresses (#18739)
(cherry picked from commit e316586516)

# Conflicts:
#	core/src/broadcast_stage.rs
#	core/src/cluster_info.rs

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-07-17 03:48:07 +00:00
mergify[bot]
2e6d03c41f Use rustup default profile (#18727) (#18730)
(cherry picked from commit 2ec81f627d)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-07-16 21:13:39 +00:00
Trent Nelson
3dbdaa5341 Bump version to v1.6.18 2021-07-16 09:57:58 +00:00
mergify[bot]
8f3ce5fc57 Cli configurable validators (backport #18630) (#18665)
* rpc: more params for `GetVoteAccountsConfig`

(cherry picked from commit bf90ea282a)

# Conflicts:
#	docs/src/developing/clients/jsonrpc-api.md

* cli: allow returning more `solana validators`

(cherry picked from commit a4a24b6531)

# Conflicts:
#	Cargo.lock
#	cli/Cargo.toml
#	cli/src/cluster_query.rs

Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-16 07:45:49 +00:00
Trent Nelson
49ac17a595 nonce: Unify NonceError with SystemError 2021-07-16 01:35:38 -06:00
mergify[bot]
63d7fdb4bd Gate libsecp256k1 update (backport #18656) (#18700)
* hijack secp256k1 enablement feature plumbing for libsecp256k1 upgrade

* Bump libsecp256k1 to v0.5.0

* gate libsecp256k1 upgrade to v0.5.0

Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-16 03:34:13 +00:00
Michael Vines
e15721f22d Drop default_on_eof attribute from Reward struct
(cherry picked from commit 33718e5fb4)
2021-07-14 12:33:25 -07:00
mergify[bot]
17177a41c7 Cli: expose last valid block height (#18620) (#18626)
* Add Fees struct to client

* Add complete RpcClient::get_fees methods

* Switch cli to last_valid_block_height

(cherry picked from commit 8ad4ffdee5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-07-13 07:18:42 +00:00
Michael Vines
84f0234151 rebase 2021-07-12 18:16:25 -07:00
Michael Vines
b629291848 Record vote account commission with voting/staking rewards and surface in RPC
(cherry picked from commit 4098af3b5b)

# Conflicts:
#	docs/src/developing/clients/jsonrpc-api.md
#	runtime/src/bank.rs
2021-07-12 18:16:25 -07:00
mergify[bot]
240895d387 storage-proto: Rework source generation (#18583)
(cherry picked from commit 899b09872b)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-11 03:11:13 +00:00
mergify[bot]
50d510e4c7 Update ouroboros [fix potential UB] (backport #18567) (#18572)
* chore: bump ouroboros from 0.5.1 to 0.9.3 (#18189)

* chore: bump ouroboros from 0.5.1 to 0.9.3

Bumps [ouroboros](https://github.com/joshua-maros/ouroboros) from 0.5.1 to 0.9.3.
- [Release notes](https://github.com/joshua-maros/ouroboros/releases)
- [Commits](https://github.com/joshua-maros/ouroboros/commits)

---
updated-dependencies:
- dependency-name: ouroboros
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

* Api changes

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>

* CI: Ignore inconsistent_struct_constructor lint

This lint was introduced at `warning`, which is an excessively high
level for cosmetics, and later demoted to `pedantic`

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-10 21:06:42 +00:00
mergify[bot]
868e757d9d CI: Make BPF test suite a first-class citizen (backport #18535) (#18570)
* CI: Extricate BPF tests from stable-perf

(cherry picked from commit 1eab0773af)

* CI: Dump BPF assembly listings and upload as artifact

(cherry picked from commit f1996ca0f3)

# Conflicts:
#	ci/test-stable.sh

Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-10 19:24:27 +00:00
mergify[bot]
92b2b8dae7 Add storage-proto build.rs and readme (backport #18353) (#18561)
* Add storage-proto build.rs and readme (#18353)

* Use build.rs for storage-proto generation

* Add readme

* Single use statements

(cherry picked from commit c2e7d39154)

# Conflicts:
#	Cargo.lock
#	storage-proto/build-proto/Cargo.lock
#	storage-proto/build-proto/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-07-09 22:12:01 +00:00
mergify[bot]
a53fe7b779 Bump prost, prost-types, and tonic (backport #18537) (#18557)
* Bump prost, prost-types, and tonic (#18537)

* Bump prost+tonic and accommodate generated service changes

* Unignore advisory

* Fixup .proto error list

(cherry picked from commit 761de8b1a3)

# Conflicts:
#	Cargo.lock
#	storage-bigtable/Cargo.toml
#	storage-proto/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-07-09 19:37:51 +00:00
mergify[bot]
a9d6b90e9a Show grcov version as well (#18549) (#18550)
(cherry picked from commit a5b91ef4c3)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-07-09 12:54:51 +00:00
mergify[bot]
88c5d6b10c featurize_policy_update (backport #18492) (#18501)
* featurize_policy_update (#18492)

(cherry picked from commit ccdf93e2b8)

# Conflicts:
#	runtime/benches/message_processor.rs
#	runtime/src/message_processor.rs

* fix conflicts

* nudge

Co-authored-by: Jack May <jack@solana.com>
2021-07-08 22:21:37 +00:00
mergify[bot]
9891cc6a17 Remove dead solana airdrop parameters (#18520) (#18523)
(cherry picked from commit f39ffa69f6)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-07-08 22:03:13 +00:00
mergify[bot]
d18a08471e Temporarily ignore prost-types advisory (backport #18525) (#18526)
* Temporarily ignore prost-types audit (#18525)

(cherry picked from commit 6188283ba6)

* Bump tokio

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-07-08 14:04:06 -06:00
mergify[bot]
50393adadd Record parent slot to reconstruct fork tree from influxdb (#18482) (#18487)
(cherry picked from commit d69f469b83)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-07-08 02:53:08 +00:00
mergify[bot]
1be989c5d2 adds fallback logic if retransmit multicast fails (backport #17714) (#18498)
* adds fallback logic if retransmit multicast fails (#17714)

In retransmit-stage, based on the packet.meta.seed and resulting
children/neighbors, each packet is sent to a different set of peers:
https://github.com/solana-labs/solana/blob/708bbcb00/core/src/retransmit_stage.rs#L421-L457

However, the current code errors out as soon as a multicast call fails,
which skips all the remaining packets:
https://github.com/solana-labs/solana/blob/708bbcb00/core/src/retransmit_stage.rs#L467-L470

This can exacerbate packet loss in turbine.

This commit:
  * keeps iterating over the retransmit packets loop even if some
    intermediate sends fail.
  * adds a fallback to UdpSocket::send_to if multicast fails.

Recent discord chat:
https://discord.com/channels/428295358100013066/689412830075551748/849530845052403733

(cherry picked from commit be957f25c9)

# Conflicts:
#	core/src/cluster_info.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-07-07 22:36:32 +00:00
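
A minimal sketch of the fallback described in the entry above; `multicast_send` is a stand-in name, not a real solana API signature:

```rust
use std::io;
use std::net::{SocketAddr, UdpSocket};

// If the batched multicast send fails, fall back to best-effort per-peer
// unicast instead of erroring out and skipping the remaining packets.
fn retransmit_packet(socket: &UdpSocket, packet: &[u8], peers: &[SocketAddr]) {
    if multicast_send(socket, packet, peers).is_err() {
        for peer in peers {
            // Ignore individual failures so remaining peers are still attempted.
            let _ = socket.send_to(packet, peer);
        }
    }
}

// Stand-in for the multicast path (always fails here, to exercise the fallback).
fn multicast_send(_s: &UdpSocket, _p: &[u8], _peers: &[SocketAddr]) -> io::Result<()> {
    Err(io::Error::new(io::ErrorKind::Other, "multicast failed"))
}
```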
mergify[bot]
6bc914989b Update verify policy (backport #18459) (#18490)
* Update verify policy (#18459)

(cherry picked from commit 44289e6728)

# Conflicts:
#	runtime/src/message_processor.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-07-07 18:45:58 +00:00
mergify[bot]
9a7ea1229b Add vote/stake checked instructions (backport #18345) (#18456)
* Add vote/stake checked instructions

(cherry picked from commit ee219ffa47)

# Conflicts:
#	programs/stake/src/stake_instruction.rs
#	sdk/program/src/stake/instruction.rs
#	sdk/src/feature_set.rs

* Fix set-lockup custodian index

(cherry picked from commit 544f62c92f)

* Add parsing for new stake instructions; clean up confusing test args

(cherry picked from commit 9b302ac0b5)

# Conflicts:
#	transaction-status/src/parse_stake.rs

* Add parsing for new vote instructions

(cherry picked from commit 39bac256ab)

* Add VoteInstruction::AuthorizeChecked test

(cherry picked from commit b8ca2250fd)

* Add Stake checked tests

(cherry picked from commit 74e89a3e3e)

# Conflicts:
#	programs/stake/src/stake_instruction.rs

* Fix conflicts and accommodate old apis in backport

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-07-07 03:14:54 +00:00
mergify[bot]
d8a2de1fd9 Remove obsolete stake-monitor command (backport #18020) (#18461)
* Remove obsolete stake-monitor command

(cherry picked from commit f859a39b86)

# Conflicts:
#	Cargo.lock
#	scripts/cargo-install-all.sh
#	stake-monitor/Cargo.toml
#	stake-monitor/src/lib.rs

* Fix conflicts

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-07-07 01:13:48 +00:00
Trent Nelson
6859516126 sdk: add is_interactive() method to Signer trait
(cherry picked from commit 2af5ec4f57)
2021-07-06 11:13:53 -07:00
mergify[bot]
57b69b5804 Cli: expose --with-memo to nonce and stake commands (backport #18404) (#18409)
* Cli: expose `--with-memo` to nonce and stake commands (#18404)

* Fmt memo_arg

* Add --with-memo to nonce and stake cli commands

(cherry picked from commit 1dd730d685)

# Conflicts:
#	cli/src/stake.rs

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-07-05 00:34:36 +00:00
mergify[bot]
9a5a9ff633 SDK: Add test for illegal Pubkey::create_with_seed owners (#18401)
(cherry picked from commit 216983c50e)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-07-03 01:10:39 +00:00
sakridge
06b1c980d4 Bump version to v1.6.17 (#18393) 2021-07-02 19:40:37 +00:00
Trent Nelson
86c26f8432 Revert "Clean up build warning"
This reverts commit 17a173ebb5.

(cherry picked from commit d269975784)
2021-07-01 19:09:08 -07:00
mergify[bot]
78d44ae215 More detailed voting timings in replay stage (#18229) (#18246)
(cherry picked from commit 5d08bf9aa3)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-07-01 14:10:04 +00:00
Trent Nelson
31dc79a4f9 Bump version to v1.6.16 2021-06-30 22:53:51 -06:00
Michael Vines
5c2dab8055 solana validators output now includes average skip rate
(cherry picked from commit 52290dbd35)
2021-06-30 17:54:17 -07:00
mergify[bot]
0ecd7e5c90 Fixed an issue where set_roots was done repeatedly for the same set; it is now done per chunk. (#18314) (#18336)
(cherry picked from commit a67d26a1e8)

Co-authored-by: Lijun Wang <83639177+lijunwangs@users.noreply.github.com>
2021-06-30 23:49:38 +00:00
mergify[bot]
df8cf37b89 test-validator: hold rent constant with --slots-per-epoch (#18317)
(cherry picked from commit 02b14caa5f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-06-30 08:30:39 +00:00
mergify[bot]
56a4fc3dd2 Use timeout to allow RpcClient to retry initial transaction confirmation (backport #18311) (#18315)
* Use timeout to allow RpcClient to retry initial transaction confirmation (#18311)

* Tidying: relocate function

* Use proper helper method for RpcClient commitment

* Add RpcClientConfig

* Add configurable confirm_transaction_initial_timeout

* Use default 5s timeout for initial tx confirmation

(cherry picked from commit 9d4428d3d8)

* Fixup deprecated method

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-06-30 04:51:11 +00:00
mergify[bot]
5380623f32 tx-status: Don't assume a memo instruction succeeded (#18287)
(cherry picked from commit 7babf28ef7)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-06-29 03:02:03 +00:00
mergify[bot]
313e8bddd4 Cli epoch-info: generate epoch-completed time from block times (#18258) (#18284)
* Generate epoch-completed time from block times

* Add annotation when block times not available

(cherry picked from commit f2b0d562b0)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-06-28 23:17:29 +00:00
mergify[bot]
7d68c44307 Don't update if already an executable (#18252)
(cherry picked from commit 2fbedd834f)

# Conflicts:
#	runtime/src/message_processor.rs

Co-authored-by: Jack May <jack@solana.com>
2021-06-27 19:21:02 +00:00
mergify[bot]
6b9a529cda Bump borsh from 0.8.1 to 0.9.0 (backport #18230) (#18237)
* Bump borsh from 0.8.1 to 0.9.0 (#18230)

(cherry picked from commit 7ed2cf30a5)

# Conflicts:
#	programs/bpf/Cargo.lock

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-06-25 23:10:01 +00:00
mergify[bot]
12eea11f93 Add entrypoint to run CI locally (backport #18131) (#18216)
* ci: use versioned cargo wrapper for crate ordering

(cherry picked from commit 554002b73c)

* ci: nvidia persistence mode isn't a hard requirement

(cherry picked from commit f213e48067)

* sdk: ensure `ld` can find criterion when running BPF tests

(cherry picked from commit 7ee39fcb0f)

* ci: give localnet nodes a more time to startup

(cherry picked from commit 278a241db3)

* ci: add downstream build wrapper

(cherry picked from commit 761e324982)

* ci: add wrapper script for running ci locally

Linux only for now

(cherry picked from commit 0bc38153ca)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-06-25 05:18:15 +00:00
mergify[bot]
d2a0445fff Update README.md (#18191) (#18194)
(cherry picked from commit 3362ac06b5)

Co-authored-by: Hao-Cher Hong <rax333j@gmail.com>
2021-06-24 15:47:05 +00:00
Trent Nelson
232ba8473d fix build broken by 37f618f 2021-06-23 01:43:11 -06:00
Trent Nelson
a455c8a5af ci: isolate release builds 2021-06-23 06:02:39 +00:00
mergify[bot]
07865a97ce Add metrics for rpc send-transaction failures (backport #18156) (#18163)
* Add metrics for rpc send-tx failures (#18156)

(cherry picked from commit 64cff8c5a1)

# Conflicts:
#	core/src/rpc.rs

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-06-23 00:00:10 -06:00
Trent Nelson
37f618fc62 programs/config: Disallow duplicate signers 2021-06-22 23:04:24 -06:00
mergify[bot]
86ff6f82f8 chore(pubkey): remove dead code (#18161)
(cherry picked from commit 755b7c7aee)

Co-authored-by: hrls <viktor.kharitonovich@gmail.com>
2021-06-23 03:12:30 +00:00
mergify[bot]
57baf7f79b Add memory operation syscalls (backport #16447) (#18149)
* Add memory operation syscalls (#16447)

(cherry picked from commit 2b50529265)

# Conflicts:
#	programs/bpf/Cargo.lock
#	programs/bpf/rust/sysvar/tests/lib.rs
#	programs/bpf/tests/programs.rs
#	programs/bpf_loader/src/syscalls.rs
#	sdk/src/feature_set.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-06-22 19:57:51 +00:00
mergify[bot]
e259388069 Move stake_weighted_timestamp module (backport #18114) (#18119)
* Move stake_weighted_timestamp module (#18114)

* Move timestamp module into runtime

* Less public

* Remove unused enum

(cherry picked from commit 19fe1dd463)

# Conflicts:
#	runtime/src/bank.rs
#	runtime/src/lib.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-06-22 00:28:02 +00:00
Trent Nelson
06d6e357ae Bump version to v1.6.15 (#18108) 2021-06-21 14:23:43 -06:00
Tyera Eulberg
7759210ff3 Add logging when RpcHealthStatus::Unknown (#18099) 2021-06-21 11:40:15 -06:00
Trent Nelson
16c42a7b30 docs: flesh out validator network requirements 2021-06-21 11:14:42 -06:00
Trent Nelson
d0f08cf25b docs: don't suggest cloud instances for validators 2021-06-21 11:14:42 -06:00
Trent Nelson
0ed9f7144c sdk: refactor pda generation 2021-06-21 10:16:49 -06:00
mergify[bot]
b17c2f451a Add additional subscription metrics (#18071) (#18075)
(cherry picked from commit 83a6c669a5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-06-18 23:56:17 +00:00
mergify[bot]
4aedc086e5 fix loader instruction checker (#18047) (#18073)
(cherry picked from commit d18e02ef44)

Co-authored-by: Jack May <jack@solana.com>
2021-06-18 21:06:23 +00:00
mergify[bot]
0afb330db0 validator: expose max active pubsub subscriptions to CLI (#18035)
(cherry picked from commit 5efc48fc69)

# Conflicts:
#	core/src/rpc_pubsub_service.rs

Co-authored-by: Trent Nelson <trent@solana.com>
2021-06-18 00:35:26 +00:00
Tyera Eulberg
1201ef172e Bump version to v1.6.14 (#18050) 2021-06-17 20:42:10 +00:00
mergify[bot]
b63a65bc21 validator: run poh speed test earlier in start up (#18023)
(cherry picked from commit 5bc6c89adc)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-06-17 00:51:27 +00:00
mergify[bot]
392d2dbd8a metrics: Don't unwrap client instantiation errors (#18018)
(cherry picked from commit 5cc073420a)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-06-16 22:02:43 +00:00
Tyera Eulberg
4733d6dfc3 v1.6: Properly handle block_height in Bigtable bincode deserialization (#17992)
* Default block_height on eof

* Add comment to prevent future errors
2021-06-16 00:55:19 +00:00
Ryo Onodera
337de51088 Bump version to v1.6.13 (#17972) 2021-06-15 23:47:22 +09:00
mergify[bot]
24ee0b3934 Avoid full-range compactions with periodic filtered b.g. ones (backport #16697) (#17956)
* Avoid full-range compactions with periodic filtered b.g. ones (#16697)

* Update rocksdb to v0.16.0

* Promote the infrequent and important log to info!

* Force background compaction by ttl without manual compaction

* Fix test

* Support no compaction mode in test_ledger_cleanup_compaction

* Fix comment

* Make compaction_interval customizable

* Avoid major compaction with periodic filtering...

* Address lazy_static, special cfs and range check

* Clean up a bit and add comment

* Add comment

* More comments...

* Config code cleanup

* Add comment

* Use .conflicts_with()

* Nullify unneeded delete_range ops for special CFs

* Some clean ups

* Clarify the locking intention

* Ensure special CFs' consistency with PurgeType::CompactionFilter

* Fix comment

* Fix bad copy paste

* Fix various types...

* Don't use tuples

* Add a unit test for compaction_filter

* Fix typo...

* Remove flag and just use new behavior always

* Fix wrong condition negation...

* Doc. about no set_last_purged_slot in purge_slots

* Write a test and fix off-by-one bug....

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Follow up to github review suggestions

* Fix line-wrapping

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 1f97b2365f)

# Conflicts:
#	Cargo.lock
#	ledger/src/blockstore_db.rs

* Fix conflicts

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-06-15 08:49:13 +00:00
mergify[bot]
ff8f78199d Bump spl-token to v3.1.1 (backport #17951) (#17957)
* Bump spl-token to v3.1.1 (#17951)

(cherry picked from commit b7de369992)

# Conflicts:
#	Cargo.lock
#	account-decoder/Cargo.toml
#	accounts-cluster-bench/Cargo.toml
#	programs/bpf/Cargo.lock
#	rpc/Cargo.toml
#	tokens/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-06-15 07:54:09 +00:00
mergify[bot]
b524e0a1a7 add data point for cap mismatch (#17746) (#17752)
(cherry picked from commit f6fb8906c7)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-06-15 05:28:40 +00:00
mergify[bot]
7dcecdd285 Account for duplicate before a bank is frozen or replayed (#17866) (#17882)
(cherry picked from commit afafa624a3)

Co-authored-by: carllin <carl@solana.com>
2021-06-11 07:28:22 +00:00
mergify[bot]
151f025bee Update a dangling devnet endpoint doc (#17836) (#17838)
(cherry picked from commit 2dfb5b7579)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-06-08 16:07:13 +00:00
Tyera Eulberg
edc83c0543 v1.6: Bump jsonrpc crates (#17799)
* Bump jsonrpc crates

* Update error text
2021-06-07 18:09:09 +00:00
mergify[bot]
b777bbf7db system-program: Remove zero lamport check on transfers (backport #17726) (#17763)
* system-program: Remove zero lamport check on transfers (#17726)

* system-program: Move lamports == 0 check on transfers

* Address feedback

* Update stake split to explicitly allocate + assign

* Update stake tests referring to split instruction

* Revert whitespace

* Update split instruction index in test

* Remove unnecessary `assign_with_seed` from `split_with_seed`

* Fix stake instruction parser

* Update test to allow splitting into account with lamports

(cherry picked from commit 8f5e773caf)

# Conflicts:
#	runtime/src/system_instruction_processor.rs
#	sdk/src/feature_set.rs

* Resolve merge conflicts

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-06-07 12:55:57 +00:00
mergify[bot]
a29344e681 Document ProgramTest::new and fix ProgramTest::add_program (#17754) (#17767)
* document ProgramTest::new

* simplify ProgramTest::new doc-string

* make ProgramTest::add_program noisier

`add_program` (and `new`, implicitly) now prints a warning when the user
supplies a bogus program name to a ProgramTest and invokes `test-bpf`.

Additionally, it is now impossible to ask for a regular `test` and for
the generated ProgramTest to load BPF code instead of native code.
Previously, this was caused by a precedence issue: BPF code would always
be preferred over native if the program name was valid, regardless of
user choice.

(cherry picked from commit 2aaf55795f)

Co-authored-by: xuoe <alex@psi.io>
2021-06-06 05:56:24 +00:00
Jack May
1bce8a99a2 Add more CPI call depth tests (#17657) 2021-06-02 00:22:29 -07:00
Tyera Eulberg
3a3454d788 Bump version to v1.6.12 (#17651) 2021-06-01 21:40:36 -06:00
mergify[bot]
0e3131f2b4 Purge expired BlockHeight data from blockstore (backport #17634) (#17640)
* Purge expired BlockHeight data from blockstore (#17634)

* Purge expired BlockHeight data from blockstore

* Also call compact_storage and add comment....

(cherry picked from commit 96cdbfdcc0)

# Conflicts:
#	ledger/src/blockstore_db.rs

* Fix conflict

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-06-02 00:04:12 +00:00
mergify[bot]
27997653f1 Rework #17486 (backport #17566) (#17596)
* Revert "Improve missing default signer error messaging (#17486)"

This reverts commit 6d40d0d141.

(cherry picked from commit ca8c1c6c42)

* Improve missing default filepath signer error messaging

(cherry picked from commit 06a926f2f4)

* CI: temporarily skip spl downstream build

(cherry picked from commit d01b4f80f9)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-31 17:27:12 +00:00
mergify[bot]
c3f66dcfa7 Make initialize public (#17605) (#17606)
(cherry picked from commit 2896fc3987)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-31 07:45:34 -07:00
Jack May
6a2377dd50 Disable read-only optimization features (#17583)
* Disable RO optimization features

* nudge
2021-05-28 21:55:37 +00:00
mergify[bot]
8b1a1d9c99 test-validator: add an arg to control faucet genesis balance (#17581)
(cherry picked from commit 974a96738a)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-28 10:43:31 -07:00
mergify[bot]
01e2d5cd35 Add block height to ConfirmedBlock structs (backport #17523) (#17534)
* Add block height to ConfirmedBlock structs (#17523)

* Add BlockHeight CF to blockstore

* Rename CacheBlockTimeService to be more general

* Cache block-height using service

* Fixup previous proto mishandling

* Add block_height to block structs

* Add block-height to solana block

* Fallback to BankForks if block time or block height are not yet written to Blockstore

* Add docs

* Review comments

(cherry picked from commit ab581dafc2)

# Conflicts:
#	core/src/replay_stage.rs
#	core/src/tvu.rs
#	core/src/validator.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-26 22:42:46 -07:00
mergify[bot]
8b61ba4d8d Add missing fields from getClusterNodes documentation (backport #17501) (#17502)
* Add missing fields from getClusterNodes documentation

(cherry picked from commit 3d40ec3c88)

# Conflicts:
#	docs/src/developing/clients/jsonrpc-api.md

* rebase

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-26 21:38:27 -07:00
mergify[bot]
f364956d15 Plumb transaction-level rewards (aka "rent debits") into the getTransaction RPC method (backport #17528) (#17532)
* Plumb transaction-level rewards (aka "rent debits") into the `getTransaction` RPC method

(cherry picked from commit 9541411c15)

# Conflicts:
#	docs/src/developing/clients/jsonrpc-api.md

* rebase

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-26 21:33:30 -07:00
mergify[bot]
cedd00e82e Fix typo in docs (#17531)
(cherry picked from commit 7dfc1d9790)

Co-authored-by: Felipe Lima <felipe.lima@gmail.com>
2021-05-27 03:13:21 +00:00
mergify[bot]
3b22f5b833 simulateTransaction RPC method can now return accounts modified by the simulation (backport #17499) (#17526)
* simulateTransaction can now return accounts modified by the simulation

(cherry picked from commit cbce440af4)

# Conflicts:
#	rpc/src/parsed_token_accounts.rs

* rebase

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-27 00:06:05 +00:00
mergify[bot]
9c549a3ccf Add custom error for tx-history queries when node does not support (backport #17494) (#17522)
* Add custom error for tx-history queries when node does not support (#17494)

(cherry picked from commit 6abe089740)

# Conflicts:
#	core/src/rpc.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-26 21:37:26 +00:00
mergify[bot]
f4cf7d2c84 Add last valid block height to rpc Fees (backport #17506) (#17507)
* Add last valid block height to rpc Fees (#17506)

* Add last_valid_block_height to fees rpc

* Add getBlockHeight rpc

* Update docs

(cherry picked from commit e9bc1c6b07)

# Conflicts:
#	client/src/rpc_request.rs
#	docs/src/developing/clients/jsonrpc-api.md

* Fix conflicts and a-z docs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-26 16:25:41 +00:00
mergify[bot]
b9834ed9eb docs: Add find_program_address and example (#17515) (#17517)
(cherry picked from commit bb72ab7f1b)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-05-26 15:49:53 +00:00
mergify[bot]
2781d69319 runtime: add rent debit charges to block metadata (backport #17504) (#17513)
* runtime: add rent debit charges to block metadata

(cherry picked from commit 97eab7edf9)

* add tests from `RentDebits`

(cherry picked from commit 2a6c5ed0ac)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-26 15:39:38 +00:00
mergify[bot]
4d58a0e200 Add a hacky shell for fun code reading (#17503) (#17505)
(cherry picked from commit 7ce910f459)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-05-26 06:39:26 +00:00
mergify[bot]
b37e5c8a36 Improve missing default signer error messaging (#17486) (#17500)
(cherry picked from commit 6d40d0d141)

Co-authored-by: Jack May <jack@solana.com>
2021-05-26 02:51:36 +00:00
mergify[bot]
b06bfeec8d Add a flag to simulateTransaction to use most recent blockhash (backport #17485) (#17497)
* Add a flag to simulateTransaction to use most recent blockhash

(cherry picked from commit 96cef5260c)

* rename flag

(cherry picked from commit e14f3eb529)

* sigVerify conflicts with replace, add tests

(cherry picked from commit 660d37aadf)

Co-authored-by: Justin Starry <justin@solana.com>
2021-05-26 01:49:52 +00:00
mergify[bot]
02c4170357 Update sysvar docs (#17493) (#17495)
(cherry picked from commit 4eb6deee2d)

Co-authored-by: Jack May <jack@solana.com>
2021-05-26 00:22:32 +00:00
mergify[bot]
24a21d0ba6 docs: Add RPC node HW recommendations (#17490)
(cherry picked from commit 64bfc14a75)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-25 23:22:26 +00:00
Tyera Eulberg
ae1687bc0a Bump version to v1.6.11 (#17484) 2021-05-25 15:35:50 -06:00
mergify[bot]
5d4654d2f4 docs: Add inner instruction and cross-program invocation (#17476) (#17479)
(cherry picked from commit a03230338a)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-05-25 17:09:11 +00:00
mergify[bot]
d69c1d6db6 docs: budget program is gone, link to SPL Token multisig (#17478)
(cherry picked from commit 2019558f03)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-25 16:09:40 +00:00
mergify[bot]
fa65107460 Avoid ip_echo_server unwrap (#17445)
(cherry picked from commit 30b60a976b)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-24 14:40:47 -06:00
Tyera Eulberg
dd2d119d2b v1.6: Ensure cluster-confirmed roots are set on boot (#17442)
* Add blockstore-root-scan for api nodes on boot

* Ensure cluster-confirmed root and parents are set as root in blockstore in load_frozen_forks()

* Plumb rpc-scan-and-fix-roots validator flag
2021-05-24 20:16:37 +00:00
mergify[bot]
b7dc7d859c removes Crds::lookup and lookup_versioned (backport #17438) (#17441)
* removes Crds::lookup and lookup_versioned (#17438)

(cherry picked from commit e867d7f3b8)

* patches push_epoch_slots for backport

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-24 20:10:44 +00:00
mergify[bot]
0db23fee53 encapsulates purged values bookkeeping into crds module (#17265) (#17436)
For all code paths (gossip push, pull, purge, etc.) that remove or
override a crds value, it is necessary to record the hash of values purged
from the crds table, in order to exclude them from subsequent pull-requests;
otherwise the next pull request will likely return outdated values,
wasting bandwidth:
https://github.com/solana-labs/solana/blob/ed51cde37/core/src/crds_gossip_pull.rs#L486-L491

Currently this is done all over the place in multiple modules, and this
has caused bugs in the past where purged values were not recorded.

This commit encapsulates this bookkeeping in the crds module, so that any
code path which removes or overrides a crds value also records the hash
of the purged value in-place.

(cherry picked from commit 9d112cf41f)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-24 15:04:24 +00:00
mergify[bot]
7e073e64a3 indexes crds votes by insert order (#17340) (#17435)
Crds::get_votes is scanning over all votes in the crds table only to
return those inserted since the given cursor:
https://github.com/solana-labs/solana/blob/2ae57c172/core/src/crds.rs#L250-L266

Having votes indexed by insert order avoids the table scan and will be
more efficient.
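
A sketch of the scheme with hypothetical types: a monotonically increasing
ordinal per insert turns "votes since cursor" into a range scan instead of
a full-table scan.

```rust
use std::collections::BTreeMap;

#[derive(Default)]
struct VoteIndex {
    next_ordinal: u64,
    by_insert_order: BTreeMap<u64, Vec<u8>>, // ordinal -> vote payload
}

impl VoteIndex {
    fn insert(&mut self, vote: Vec<u8>) {
        self.by_insert_order.insert(self.next_ordinal, vote);
        self.next_ordinal += 1;
    }

    // Return only votes inserted since the cursor, then advance the cursor,
    // so repeated polls never rescan the whole table.
    fn get_votes(&self, cursor: &mut u64) -> Vec<&Vec<u8>> {
        let votes = self.by_insert_order.range(*cursor..).map(|(_, v)| v).collect();
        *cursor = self.next_ordinal;
        votes
    }
}
```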

(cherry picked from commit 060332c704)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-24 14:55:09 +00:00
mergify[bot]
7d438e5c28 rolls back min number of bloom items for debug builds (#17420) (#17421)
coverage ci builds have become flaky, presumably because of the
overhead added in https://github.com/solana-labs/solana/pull/17236
for very small test clusters.

This commit uses a smaller min number of bloom items, conditioned on
whether debug assertions are enabled or not.
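
One plausible shape for such a conditional sizing, with made-up numbers:

```rust
// Hypothetical values; the point is conditioning the constant on whether
// debug assertions are enabled, so tiny coverage/CI clusters stay cheap.
fn min_num_bloom_items() -> usize {
    if cfg!(debug_assertions) {
        512 // debug/coverage builds: small test clusters
    } else {
        65_536 // release builds: production sizing
    }
}
```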

Previous attempt at fixing the flakiness:
https://github.com/solana-labs/solana/pull/17408

(cherry picked from commit 5567305a5f)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-23 18:07:38 +00:00
mergify[bot]
83cc44953d increases timeout duration for gossip discover (backport #17408) (#17414)
* increases timeout duration for gossip discover

(cherry picked from commit d6496376ce)

* uses Duration type for gossip discover timeout

(cherry picked from commit cf1acfb021)

# Conflicts:
#	core/src/gossip_service.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-22 20:53:56 +00:00
mergify[bot]
215928445c records hash of timed-out pull responses (#17410)
Gossip should record hash of pull responses which are timed out and
fail to insert:
https://github.com/solana-labs/solana/blob/ed51cde37/core/src/crds_gossip_pull.rs#L397-L400

so that they are excluded from the next pull request:
https://github.com/solana-labs/solana/blob/ed51cde37/core/src/crds_gossip_pull.rs#L486-L491

otherwise the next pull request will likely include the same timed out
values and waste bandwidth.

(cherry picked from commit a7870cda8d)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-22 18:35:16 +00:00
mergify[bot]
de6de7a367 rpc: add context toggle to getProgramAccounts (#17399) (#17403)
* fix(rpc): return context in get_program_accounts

* doc(rpc): document withContext flag

* fix(rpc): fix comment

Co-authored-by: Michael Vines <mvines@gmail.com>

* fix(rpc): fix doc

Co-authored-by: Michael Vines <mvines@gmail.com>

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit d41266e4e9)

Co-authored-by: Nikita <bananaelecitrus@gmail.com>
2021-05-22 08:28:19 +00:00
mergify[bot]
e8c054b1f4 account-decoder: don't use strings to convert between Pubkey types (#17391) (#17398)
* account-decoder: don't use strings to convert between Pubkey types

* transaction-status: don't use strings to convert between Pubkey types

(cherry picked from commit 51178ccb33)

Co-authored-by: Alexander Polakov <polachok@users.noreply.github.com>
2021-05-22 01:24:02 +00:00
mergify[bot]
df08ba5dcd SetLockup now requires the authorized withdrawer when the lockup is not in force (#17394)
(cherry picked from commit 96cde36784)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-21 21:17:06 +00:00
mergify[bot]
72d038ecd8 Remove const qualifier from syscall out-parameters (#17382) (#17395)
(cherry picked from commit 8758e9ed82)

Co-authored-by: Christian Machacek <39452430+machacekch@users.noreply.github.com>
2021-05-21 20:45:35 +00:00
mergify[bot]
b08c0caefe adds metric for turbine retransmit tree mismatch (backport #17351) (#17392)
* adds metric for turbine retransmit tree mismatch

In order to remove port-based forwarding logic in turbine, we need to
first track how often the turbine retransmit/broadcast trees mismatch
across nodes.
One consistency condition is that if the node is on the critical path
(i.e. the first node in each neighborhood), then we expect that the
packet arrives at tvu socket as opposed to tvu-forwards.

This commit adds a metric to track how often the above condition is not met.

(cherry picked from commit 71de021177)

* removes the nested for loop from retransmit-stage

The code can be simplified by just flattening the vector of packets.

(cherry picked from commit ff0e623d30)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-21 20:08:12 +00:00
mergify[bot]
8fe5b41f5f Stake merge inactive lockup (backport #17376) (#17390)
* stake: plumb `can_merge_expired_lockups` feature flag

(cherry picked from commit 74ac6ab80f)

* stake: merge accounts with mismatched, but expired lockups

(cherry picked from commit 019bccab51)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-21 20:00:59 +00:00
mergify[bot]
25333abd96 extends crds values timeouts if stakes are unknown (#17261) (#17389)
If stakes are unknown, then timeouts will be short, resulting in values
being purged from the crds table, and consequently higher pull-response
load when they are obtained again from gossip. In particular, this slows
down validator start where almost all values obtained from entrypoint
are immediately discarded.
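
A sketch of the fallback under assumed names (`default_timeout_ms` is
hypothetical): an empty stakes map selects the long, epoch-scale timeout.

```rust
use std::collections::HashMap;

type Pubkey = [u8; 32];

// If no stakes are known yet (e.g. right after start, when only the
// entrypoint has been contacted), use the long timeout so freshly pulled
// values are not purged before they can be used.
fn default_timeout_ms(stakes: &HashMap<Pubkey, u64>, short_ms: u64, epoch_ms: u64) -> u64 {
    if stakes.is_empty() {
        epoch_ms
    } else {
        short_ms
    }
}
```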

(cherry picked from commit 2adce67260)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-21 17:29:37 +00:00
mergify[bot]
f09a100e60 retains one node-instance per pubkey (#17187) (#17386)
crds table retains up to 32 node-instance values per pubkey. This is
because if there are multiple running instances of the same node,
then we want gossip to propagate node-instance values associated with
both instances, therefore the corresponding label/key includes the
randomly generated token in addition to the pubkey:
https://github.com/solana-labs/solana/blob/9c42a89a4/core/src/crds_value.rs#L448
https://github.com/solana-labs/solana/pull/14037

As a result, the number of such values per pubkey is effectively
unbounded, requiring custom mitigations implemented in:
https://github.com/solana-labs/solana/pull/14467
but still taking redundant extra memory and bandwidth.

This commit instead retains only one node-instance per pubkey by
extending crds values override logic. If a crds value is of type
node-instance, it will always override an existing one with the same key
if it has more recent starting timestamp (not wallclock). As a result,
gossip will always propagate the node-instance with more recent
timestamp. Since the check_duplicate logic will stop the node with older
timestamp, this change should preserve existing functionality.
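
A sketch of the override rule with simplified, hypothetical types; the
tie-break on the random token is an assumption of this sketch:

```rust
use std::cmp::Ordering;

struct NodeInstance {
    pubkey: [u8; 32],
    timestamp: u64, // when this instance of the node started (not wallclock)
    token: u64,     // random per-instance token
}

impl NodeInstance {
    // A value overrides an existing one with the same pubkey if its instance
    // started more recently, so gossip converges on one instance per key.
    fn overrides(&self, other: &Self) -> bool {
        assert_eq!(self.pubkey, other.pubkey);
        match self.timestamp.cmp(&other.timestamp) {
            Ordering::Greater => true,
            Ordering::Less => false,
            Ordering::Equal => self.token > other.token, // deterministic tie-break
        }
    }
}
```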

(cherry picked from commit 0aa7824884)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-21 17:20:36 +00:00
mergify[bot]
7cc96dc20f Update getrandom bpf dependency (#17388)
(cherry picked from commit 8c073b2c94)

Co-authored-by: Jack May <jack@solana.com>
2021-05-21 16:56:09 +00:00
mergify[bot]
40c95dde4f prioritizes more recent values in pull responses (#17238) (#17384)
On the receiving end, the outdated values are discarded, and they will
only waste bandwidth:
https://github.com/solana-labs/solana/blob/3f0480d06/core/src/crds_gossip_pull.rs#L385-L400

This is also exacerbating validator start, since the entrypoint is
returning old values in pull responses, and the validator immediately
discards those; resulting in huge delay until the validator obtains
contact-info of the entrypoint and is able to adopt shred-version and
fully start.

(cherry picked from commit 5e6b00fe98)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-21 16:03:04 +00:00
mergify[bot]
0d38f11998 bumps up min number of bloom items in gossip pull requests (#17236) (#17383)
When a validator starts, it has an (almost) empty crds table and it only
sends one pull-request to the entrypoint. The bloom filter in the
pull-request targets 10% false rate given the number of items. So, if
the `num_items` is very wrong, it makes a very small bloom filter with a
very high false rate:
https://github.com/solana-labs/solana/blob/2ae57c172/runtime/src/bloom.rs#L70-L80
https://github.com/solana-labs/solana/blob/2ae57c172/core/src/crds_gossip_pull.rs#L48

As a result, it is very unlikely that the validator obtains entrypoint's
contact-info in response. This exacerbates how long the validator will
loop on:
    > Waiting to adopt entrypoint shred version
https://github.com/solana-labs/solana/blob/ed51cde37/validator/src/main.rs#L390-L412

This commit increases the min number of bloom items when making gossip
pull requests. Effectively this will break the entrypoint crds table
into 64 shards, one pull-request for each, a larger bloom filter for
each shard, and increases the chances that the response will include
entrypoint's contact-info, which is needed for adopting shred version
and validator start.
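
The textbook bloom-filter sizing behind this, as a hedged sketch
(`num_shards` is a hypothetical helper; the real crds filters shard by
power-of-two mask bits, so the constants differ):

```rust
// For n items and target false-positive rate p:
//   bits   m = -n * ln(p) / (ln 2)^2
//   hashes k = (m / n) * ln 2
fn bloom_params(num_items: usize, false_rate: f64) -> (usize, usize) {
    let n = num_items.max(1) as f64;
    let ln2 = std::f64::consts::LN_2;
    let m = (-n * false_rate.ln() / (ln2 * ln2)).ceil();
    let k = ((m / n) * ln2).round().max(1.0);
    (m as usize, k as usize)
}

// Raising the minimum item count forces more shards, each with its own
// adequately sized filter, instead of one undersized filter.
fn num_shards(num_items: usize, min_items: usize, max_items_per_filter: usize) -> usize {
    (num_items.max(min_items) + max_items_per_filter - 1) / max_items_per_filter
}
```

For example, `bloom_params(64, 0.1)` yields roughly 307 bits and 3 hash
functions; with a much larger minimum item count, the same table is split
across many filters instead of one tiny, high-false-rate filter.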

(cherry picked from commit e8b35a4f7b)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-21 15:21:52 +00:00
mergify[bot]
3e89cb6b43 programs/stake: cancel deactivate (backport #17344) (#17375)
* programs/stake: cancel deactivate (#17344)

fix: remove stray println

add error for inconsistent input.

fix: lamports don't need to match when redelegating to same vote account

Improve comments

bump

Apply suggestions from code review

Add assert in test

Use stake_program_v4

Co-Authored-By: Trent Nelson <trent.a.b.nelson@gmail.com>

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
(cherry picked from commit 662c2aaeec)

# Conflicts:
#	programs/stake/src/stake_instruction.rs
#	programs/stake/src/stake_state.rs

* Fix conflicts

Co-authored-by: jon-chuang <9093549+jon-chuang@users.noreply.github.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-21 10:41:13 +00:00
mergify[bot]
5025c7c983 Prevent withdrawing Initialized stake account to rent-exempt reserve (backport #17366) (#17370)
* Prevent withdrawing Initialized stake account to zero stake (#17366)

(cherry picked from commit 91f2b6185e)

# Conflicts:
#	programs/stake/src/stake_instruction.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-21 10:23:20 +00:00
mergify[bot]
c3fafda981 clap-utils: Fix signer resolution on Windows (#17371)
(cherry picked from commit e320af99a0)

# Conflicts:
#	clap-utils/src/keypair.rs

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-21 07:27:49 +00:00
mergify[bot]
3f6964d264 InvokeContext: Add get_sysvar() helper to sdk (backport #17360) (#17368)
* Add get_sysvar() helper to sdk

(cherry picked from commit 2c99b23ad7)

# Conflicts:
#	runtime/src/message_processor.rs
#	sdk/src/process_instruction.rs

* Resolve conflicts

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-21 03:42:20 +00:00
mergify[bot]
b1d294de75 Add stake_program_v4 feature (#17356)
(cherry picked from commit a1a0d6f84b)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-20 22:32:30 +00:00
mergify[bot]
2b34800870 docs: Update transaction expiration time (#17347) (#17349)
(cherry picked from commit ddfc15b9f2)

Co-authored-by: Justin Starry <justin@solana.com>
2021-05-20 15:29:44 +00:00
mergify[bot]
e9c3e0b0ee datapoint for verify_snapshot_bank (#17306) (#17339)
(cherry picked from commit 75335b4f58)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-05-20 04:37:10 +00:00
mergify[bot]
0e9fe0847f Optimize aligned memory used by the runtime (backport #17324) (#17334)
* Optimize aligned memory used by the runtime (#17324)

(cherry picked from commit 477898f682)

# Conflicts:
#	cli/Cargo.toml
#	programs/bpf/Cargo.toml
#	programs/bpf_loader/Cargo.toml
#	programs/bpf_loader/src/syscalls.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-05-19 23:21:47 +00:00
mergify[bot]
d2e98cb531 prunes received-cache only once per unique owner's key (#17039) (#17337)
(cherry picked from commit 0e646d10bb)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-19 22:50:42 +00:00
mergify[bot]
32681e2739 removes manual trait impl for contact-info (#17332) (#17335)
The current implementations use only the id and disregard other fields,
in particular wallclock. This can lead to bugs where an outdated
contact-info shadows or overrides a current one because they compare
equal.
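
A sketch of the bug class being removed, with assumed field names: a
hand-written PartialEq that compares only the id makes a stale snapshot
indistinguishable from a fresh one.

```rust
#[derive(Clone)]
struct ContactInfo {
    id: [u8; 32],
    wallclock: u64, // when this info was produced
    // ... gossip/tvu/tpu addresses, etc.
}

// The hazardous manual impl: any two snapshots of the same node compare
// equal, even when one is outdated, so an old value can shadow a new one
// in sets, maps, and dedup logic.
impl PartialEq for ContactInfo {
    fn eq(&self, other: &Self) -> bool {
        self.id == other.id // disregards wallclock and every other field
    }
}
```

Deriving the impl, or removing it so call sites must state which fields
they mean, eliminates the shadowing.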

(cherry picked from commit 13b032b2d4)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-19 22:33:32 +00:00
mergify[bot]
dc0b21fa83 patches flaky test_new_mark_creation_time (#17288) (#17336)
(cherry picked from commit f7b0184f81)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-19 22:22:24 +00:00
mergify[bot]
c4e770e2f8 Add C Serialization Tests for #17217 (#17294) (#17297)
(cherry picked from commit f15dd1b4ef)

Co-authored-by: Jack May <jack@solana.com>
2021-05-19 22:14:23 +00:00
mergify[bot]
f80af6dc1c adds gossip metrics for number of staked nodes (#17330) (#17333)
(cherry picked from commit e7073ecab1)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-19 20:41:07 +00:00
mergify[bot]
36ac9b3bb1 Fix typo (#17326) (#17331)
(cherry picked from commit f1b4a0a2e0)

Co-authored-by: Ulrich Stark <8657779+ulrichstark@users.noreply.github.com>
2021-05-19 17:46:50 +00:00
mergify[bot]
282c98a82a Validator progress bars are now rendered when stdout is not a terminal (#17323)
(cherry picked from commit 305d9dd3f4)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-19 08:49:17 +00:00
mergify[bot]
11f6c04990 get_program_accounts_with_config() now correctly defaults to RpcClient's commitment level (backport #17312) (#17315)
* get_program_accounts_with_config() now correctly defaults to RpcClient's commitment level

(cherry picked from commit 63b97729e6)

# Conflicts:
#	client/src/rpc_client.rs

* Update rpc_client.rs

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-18 21:19:19 +00:00
mergify[bot]
18c4d13ab2 fix test (#17310) (#17314)
(cherry picked from commit cfcae50022)

# Conflicts:
#	programs/bpf/c/src/ser/ser.c

Co-authored-by: Jack May <jack@solana.com>
2021-05-18 20:05:00 +00:00
mergify[bot]
d2e907655f Add Contextual Search (#17299) (#17300)
* this should prevent other-language results from appearing in the search area

(cherry picked from commit c65c4475f6)

Co-authored-by: Ryan M. Shea <8948187+rmshea@users.noreply.github.com>
2021-05-18 06:32:51 +00:00
mergify[bot]
e182afa50f Minor test cleanup and comments (backport #17283) (#17295)
* Minor test cleanup and comments (#17283)

(cherry picked from commit bcbe155575)

# Conflicts:
#	runtime/src/rent_collector.rs

* Fix conflicts

* More clean cherry-pick...

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-05-18 03:50:28 +00:00
mergify[bot]
00d1cb0333 Clear release cache for stable-perf (#17287) (#17296)
(cherry picked from commit 7ea1131090)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-17 23:38:24 +00:00
mergify[bot]
bab82ab632 Update keypair configuration output (backport #17277) (#17285)
* Update keypair configuration output

While going through the tutorial to start a validator, I noticed that the output I received from running...

```
solana config set --keypair ~/validator-keypair.json
```

...differed from the output shown in the docs. Wondering whether the docs are out of date, I thought I'd propose an update just in case.

(cherry picked from commit 02157f4753)

* Update docs/src/running-validator/validator-start.md

(cherry picked from commit de76adbdf3)

Co-authored-by: Chris Bellew <cjbellew@gmail.com>
Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2021-05-17 17:38:31 +00:00
mergify[bot]
b9ba312975 Add two more testnet entrypoints (#17282)
(cherry picked from commit 1f322b8a9c)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-17 16:32:36 +00:00
mergify[bot]
c5ff373965 fixed getProgramAccounts fields list (#17278) (#17279)
(cherry picked from commit 611628a402)

Co-authored-by: Marcin Zawiejski <dragmz@gmail.com>
2021-05-17 14:46:59 +00:00
mergify[bot]
51a2d93a0d Remove duplicate std::net reference (#17254) (#17266)
(cherry picked from commit d6ab4196ea)

Co-authored-by: Sebastian Ibarguen <sebasibarguen@users.noreply.github.com>
2021-05-17 01:15:22 +00:00
Tyera Eulberg
409ac4dcfa Bump version to v1.6.10 (#17250) 2021-05-15 01:47:56 +00:00
mergify[bot]
9e42883d4b Fix a bug in input deserialization in the C SDK (#17217) (#17249)
When the input contains more accounts than the user has requested to be deserialized, and one of the excess ones is a dup, the input pointer is not adjusted correctly.

Compare the lines added by this commit to line 401: "input += 7; // padding". Since the input data layout does not depend on the number of accounts the user wants to deserialize, this adjustment by 7 bytes must happen in both branches.

(cherry picked from commit e02b4e1192)

Co-authored-by: Christian Machacek <39452430+machacekch@users.noreply.github.com>
2021-05-15 00:10:02 +00:00
mergify[bot]
e41460d500 feat: update api urls (backport #17186) (#17248)
* feat: update api urls

(cherry picked from commit 0f3045fb68)

* fix: cluster test

(cherry picked from commit ae5a10dffd)

* docs: update old devnet and testnet url references

(cherry picked from commit ec621e71dc)

* fix: update devnet and testnet urls

(cherry picked from commit 7be3171f4a)

Co-authored-by: Josh Hundley <josh.hundley@gmail.com>
2021-05-15 00:08:24 +00:00
steviez
9aacd0f3c3 Zero pad data shreds on fetch from blockstore (#17147)
* Zero pad data shreds on fetch from blockstore

This is a partial backport of #16602 to allow compatibility with that change.

* Remove size check and resize shreds to consistent length
2021-05-14 16:18:00 -05:00
mergify[bot]
3f908306a3 test-validator: Hint at airdrop when wallet is unavailable (#17235)
(cherry picked from commit 2c8dde7224)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-14 18:32:48 +00:00
mergify[bot]
8093586b78 docs: remove missing link (#17212) (#17230)
(cherry picked from commit 5e642a174c)

Co-authored-by: Laptev Stanislav <42931743+dubalda@users.noreply.github.com>
2021-05-14 15:51:22 +00:00
mergify[bot]
a08a6d55fa test-validator: Display genesis hash in dashboard (backport #17216) (#17225)
* rpc: plumb shred_version through RpcContactInfo

(cherry picked from commit 67e6a3106f)

* test-validator: Display more cluster info in dash

(cherry picked from commit 754c708473)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-14 09:56:27 +00:00
mergify[bot]
802c5fcb00 Update clusters.md (#17220) (#17221)
(cherry picked from commit 26afc7620b)

Co-authored-by: joeaba <77398477+joeaba@users.noreply.github.com>
2021-05-14 04:39:04 +00:00
mergify[bot]
8749a97b94 Remove bloat from secondary indexes (#17048) (#17219)
(cherry picked from commit 239ab8799c)

Co-authored-by: carllin <carl@solana.com>
2021-05-14 04:04:55 +00:00
mergify[bot]
4313240b1b Return error for excluded secondary-index keys (backport #17193) (#17215)
* Return error for excluded secondary-index keys (#17193)

* Add runtime helpers to check secondary indexes for key

* Add custom rpc error

* Check secondary-index key inclusion in rpc

* Clone complete AccountSecondaryIndexes into rpc to avoid bank query

(cherry picked from commit 27004f1b76)

# Conflicts:
#	core/src/rpc.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-13 23:04:01 +00:00
mergify[bot]
24bae00560 docs: Add docs for solana-test-validator (backport #17199) (#17211)
* docs: Add docs for `solana-test-validator`

(cherry picked from commit 768a2ebe9d)

* Update docs/src/developing/test-validator.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 056c1a7b50)

* Update docs/src/developing/test-validator.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 5b13d4057b)

* Update docs/src/developing/test-validator.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 38d7e9a4c4)

* Update docs/src/developing/test-validator.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit e08687acfd)

* Update docs/src/developing/test-validator.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 3214105a21)

* Update docs/src/developing/test-validator.md

(cherry picked from commit 7868df3211)

* Update docs/src/developing/test-validator.md

(cherry picked from commit 3e0c0abb53)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-13 18:41:16 +00:00
Trent Nelson
fae0a6307d docs: add hackathon banner 2021-05-13 05:03:15 +00:00
mergify[bot]
9753f1a6ca Add bip32 support to solana-keygen recover (#17180) (#17189)
* Fix spelling

* Add validator for  SignerSources

* Add helper to generate Keypair from supporting SignerSources

* Add bip32 support to solana-keygen recover

* Make SignerSourceKind const strs, use for Debug impl and URI schemes

(cherry picked from commit b437b0a49d)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-12 20:48:53 +00:00
mergify[bot]
8ad1554bc9 Update devnet and testnet endpoints (#17188) (#17191)
(cherry picked from commit 597373f5fa)

Co-authored-by: joeaba <77398477+joeaba@users.noreply.github.com>
2021-05-12 20:25:05 +00:00
mergify[bot]
2367f561dc include/exclude keys on account secondary index (backport #17110) (#17179)
* include/exclude keys on account secondary index (#17110)

* AccountSecondaryIndexes.include/exclude

* use normal scan if key is not indexed

* add a test to ask for a scan for an excluded secondary index

* fix up cli args

(cherry picked from commit 7d96f78821)

# Conflicts:
#	runtime/src/accounts_db.rs
#	runtime/src/accounts_index.rs

* resolve merge conflicts

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-05-12 15:09:46 +00:00
mergify[bot]
21d41b976b Move Signer types out of the signature module (backport #17099) (#17177)
* sdk: Move `Signer` trait to own module

(cherry picked from commit af6f3d776e)

* sdk: Move `Keypair` to `signer` module

(cherry picked from commit 0eba6eb401)

* sdk: Move `Presigner` to `signer` module

(cherry picked from commit 12bf6c06c3)

* sdk: Move `NullSigner` to `signer` module

(cherry picked from commit b71e4bdc61)

* sdk: Move `signers` module into `signer` module

(cherry picked from commit 967840aed6)

* sdk: keypair - drop superfluous iter()

(cherry picked from commit dbac38702a)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-11 20:44:54 +00:00
mergify[bot]
3303ead54d Add Keccak256 syscall and sdk support (backport #16498) (#17157)
* Add Keccak256 syscall and sdk support (#16498)

(cherry picked from commit 8eb05d6ed4)

# Conflicts:
#	Cargo.lock
#	programs/bpf/Cargo.lock
#	programs/bpf/rust/sha/Cargo.toml
#	programs/bpf/tests/programs.rs
#	programs/bpf_loader/Cargo.toml
#	sdk/program/Cargo.toml
#	sdk/program/src/lib.rs
#	sdk/src/feature_set.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-05-11 09:31:16 +00:00
mergify[bot]
f91d7da5a4 sdk: Add get_instance_packed_len for variable-size types (#17092) (#17153)
* sdk: Add get_instance_packed_len for variable-size types

* Add comment for get_packed_len

* Add more tests

(cherry picked from commit 4b60b2863e)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-05-11 09:16:14 +00:00
mergify[bot]
b2dad84d05 Update web-wallet.md to add phantom with fixed link (#17161) (#17163)
* Update web-wallet.md to add phantom with fixed link

Update web-wallet.md to add phantom with fixed link

* Update web-wallets.md for phantom

removing trailing whitespaces

* Update docs/src/wallet-guide/web-wallets.md

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 4625231e30)

Co-authored-by: chaseeb <chaseeb@gmail.com>
2021-05-11 04:46:41 +00:00
mergify[bot]
a7b2939bc8 SignerSource: rename input scheme to prompt, default to bip44 solana base key (#17154) (#17159)
* Rename ask to prompt

* Default to Solana bip44 base if no derivation-path

* Add SignerSource legacy field, support legacy ASK

* Update docs

* Fix docs: validator currently doesn't support uri SignerSources

(cherry picked from commit a5ec3a0547)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-11 02:43:49 +00:00
mergify[bot]
ea3b783b63 fix c program deploy help (#17152) (#17156)
(cherry picked from commit 82fb6712e7)

Co-authored-by: Jack May <jack@solana.com>
2021-05-10 23:36:16 +00:00
mergify[bot]
733ef4b0b8 type AccountSecondaryIndexes = HashSet (backport #17108) (#17149)
* type AccountSecondaryIndexes = HashSet (#17108)

(cherry picked from commit f39dda00e0)

# Conflicts:
#	runtime/benches/accounts.rs
#	runtime/src/accounts.rs
#	runtime/src/accounts_db.rs
#	runtime/src/accounts_index.rs

* resolve merge errors

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-05-10 20:55:33 +00:00
mergify[bot]
0cf83887c6 Move block-time caching earlier (#17109) (#17150)
* Require that blockstore block-time only be a recognized slot, instead of a root

* Move cache_block_time to after Bank freeze

* Single use statement

* Pass transaction_status_sender by reference

* Remove unnecessary slot-existence check before caching block time altogether

* Move block-time existence check into Blockstore::cache_block_time, Blockstore no longer needed in blockstore_processor helper

(cherry picked from commit 6e9deaf1bd)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-10 20:31:56 +00:00
mergify[bot]
094271be7d indexes crds values by their insert order (backport #16809) (#17132)
* indexes crds values by their insert order

(cherry picked from commit dfa3e7a61c)

* reads gossip push messages off crds ordinal index

Having an ordinal index on crds values based on insert order allows to
efficiently filter values using a cursor. In particular
CrdsGossipPush::push_messages hash-map can be replaced with a cursor,
saving on the bookkeepings, purging, etc

(cherry picked from commit 22c02b917e)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-10 00:00:00 +00:00
mergify[bot]
efc3c0d65f Add a make target to run the readelf utility on a compiled program (#17131)
The readelf utility (already shipped with the solana tools) shows meta-information about ELF files, such as symbol tables. It is useful for investigating "unresolved symbol" errors that crop up at runtime.

This commit also fixes the objdump flags (two dashes are required and there is no "color" option) as well as a few typos.

(cherry picked from commit ff95e2aaa6)

Co-authored-by: Christian Machacek <39452430+machacekch@users.noreply.github.com>
2021-05-09 02:46:49 +00:00
mergify[bot]
0300eea0d6 Fix syscalls in the C SDK failing at runtime when compiled as C++ (#17124) (#17126)
Some syscalls are wrongly declared "static" in solana_sdk.h, which makes clang++ assume they are local to the compilation unit. It therefore ignores the extern "C" {} block and mangles their names. While that doesn't break C++ compilation, the syscall fails at runtime with something along the lines of "ELF error: Unresolved symbol (_ZL26sol_create_program_addressPK13SolSignerSeediPK9SolPubkeyS4_)".

(cherry picked from commit 6927d0c77e)

Co-authored-by: Christian Machacek <39452430+machacekch@users.noreply.github.com>
2021-05-08 17:27:56 +00:00
mergify[bot]
b03186e3c6 Add chinese translations to docs (#17125) (#17127)
* import zh translations

* Fix broken links

* fix whitespace

(cherry picked from commit a1df57a4ea)

Co-authored-by: Justin Starry <justin@solana.com>
2021-05-08 17:09:51 +00:00
Michael Vines
65e1b881f9 Bump version to v1.6.9 2021-05-08 06:28:08 +00:00
mergify[bot]
28b9e5b572 getBlockProduction now correctly reports block production (#17116)
(cherry picked from commit d6c076f1b6)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-08 04:19:39 +00:00
mergify[bot]
072e884c24 solana-validator exit now uses process::exit() to ensure prompt termination (#17107)
(cherry picked from commit ec2b06d81d)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-07 18:49:36 +00:00
mergify[bot]
dc95663de7 Add ledger-tool for restoring roots to the Roots CF (#17045) (#17091)
* Add ledger-tool for restoring roots to the Roots CF

* Print successful repair data, and repair in chunks

* Add parameter to limit num slots checked for root repair

(cherry picked from commit ddfbae260f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-06 21:28:28 +00:00
mergify[bot]
a73303be22 Fixing a broken link in the docs (#16975) (#17085)
(cherry picked from commit 40c31f87e0)

Co-authored-by: Jordan Sexton <jordan@jordansexton.com>
2021-05-06 16:05:37 +00:00
mergify[bot]
330f42c375 implements cursor for gossip crds table queries (#16952) (#17084)
VersionedCrdsValue.insert_timestamp is used for fetching crds values
inserted since last query:
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L1197-L1215
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L1274-L1298

So it is crucial that insert_timestamp does not go backward in time when
new values are inserted into the table. However std::time::SystemTime is
not monotonic, or due to workload, lock contention, thread scheduling,
etc, ... new values may be inserted with a stalled timestamp way in the
past. Additionally, reading system time for the above purpose is
inefficient/unnecessary.

This commit adds an ordinal index to crds values indicating their insert
order. Additionally, it implements a new Cursor type for fetching values
inserted since last query.

(cherry picked from commit fa86a335b0)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-06 15:23:01 +00:00
mergify[bot]
5d088c7d06 Dump rent_collector/inflation with ledger-tool cap (#17069) (#17081)
(cherry picked from commit d19526e6c2)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-05-06 11:48:21 +00:00
mergify[bot]
5c9495f955 CLI: Print gossip nodes with cli-output crate (#17072)
(cherry picked from commit cb5e000615)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-06 09:07:36 +00:00
Michael Vines
fb163187b5 RpcClient now respects the retry-after server response header when getting rate limited
(cherry picked from commit 7d1637d89a)
2021-05-05 19:34:18 -07:00
mergify[bot]
970bba495f Add --tower argument to specify where tower files are persisted (#17060)
(cherry picked from commit 9ba2c53b85)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-05-05 20:37:36 +00:00
Stephen Akridge
9761af201b Don't recognize temp snapshots as possible snapshots to open
(cherry picked from commit 3e0fed48e7)
2021-05-05 09:32:54 -07:00
mergify[bot]
7600be946a SDK: Factor out pubkey on-curve test to a helper (#16935)
(cherry picked from commit cfc1cb1aee)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-05 06:19:19 +00:00
Michael Vines
524b380a71 Bump version to 1.6.8 2021-05-04 12:46:57 -07:00
Sebastian.Bor
ebb5fc1285 chore: conflate use statement
(cherry picked from commit 6d11d5dd9f)
2021-05-04 10:28:45 -07:00
Sebastian.Bor
4cfb3dcc7b fix: add bpf_loader_upgradeable to ProgramTest default builtins
(cherry picked from commit 4ede5117f9)
2021-05-04 10:28:45 -07:00
Ruud van Asseldonk
e8fff4561e Document that Transaction::sign might panic (#17026)
(cherry picked from commit 9abfa65920)
2021-05-04 09:06:36 -07:00
Jeff Washington (jwash)
b56e66310d Revert "reclaims unref accounts from index (#16838) (#17005)"
This reverts commit 3e43b042eb.
2021-05-04 08:48:13 -07:00
mergify[bot]
bda3bd1557 Correct days/year (#17024) (#17033)
(cherry picked from commit 46d2755205)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-04 11:22:26 +00:00
mergify[bot]
7723673038 test-validator: Plumb --limit-ledger-size (#17027)
(cherry picked from commit f17b80236f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-05-04 10:09:53 +00:00
mergify[bot]
c69e667f5e Update web3.js import sample (#17022)
(cherry picked from commit 9ff17a1c18)

Co-authored-by: Colin Gray <colin@cgray.dev>
2021-05-04 06:49:20 +00:00
mergify[bot]
6157860c0a Implement Bip32 for seed-phrase/passphrase signing (backport #16942) (#17018)
* Implement Bip32 for seed-phrase/passphrase signing (#16942)

* Add Keypair helpers for bip32 derivation

* Plumb bip32 for SignerSourceKind::Ask

* Support full-path querystring

* Use as_ref

* Add public wrappers for from_uri cases

* Support master root derivations (and fix too-deep print)

* Add ask:// HD documentation

* Update ASK elsewhere in docs

(cherry picked from commit 694c674aa6)

# Conflicts:
#	programs/bpf/Cargo.lock

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-04 03:58:23 +00:00
mergify[bot]
32fc4e3d0f Add keys (backport #17014) (#17015)
* Rotate keys

(cherry picked from commit b2778f34f5)

* Key rotation

(cherry picked from commit b948a18841)

* Add keys

(cherry picked from commit 6318705607)

Co-authored-by: publish-docs.sh <maintainers@solana.com>
2021-05-04 01:34:53 +00:00
Michael Vines
ee0c0c4a59 Add ALL support to withdraw-stake subcommand
(cherry picked from commit cf779c63c5)
2021-05-03 13:55:35 -07:00
Ryan M. Shea
356117819c Add hackathon banner (#17010) 2021-05-03 19:47:34 +00:00
mergify[bot]
834c96a374 validates gossip addresses before sending pull-requests (backport #16748) (#17009)
* uses Mutex instead of RwLock for ping_cache

(cherry picked from commit 2231017b35)

* validates gossip addresses before sending pull-requests

IP addresses need to be validated before sending packets to them.
This commit sends a ping packet to nodes before any pull requests.
Pull requests are then only sent to the nodes which have responded with
the correct hash of their respective ping packet.
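
A sketch of the gating with hypothetical types (the real ping cache also
tracks tokens and expiry):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

#[derive(Default)]
struct PingCache {
    // Addresses that answered a ping with the correct hash of its token.
    verified: HashMap<SocketAddr, bool>,
}

impl PingCache {
    fn record_pong(&mut self, from: SocketAddr, hash_matches: bool) {
        self.verified.insert(from, hash_matches);
    }

    fn is_verified(&self, addr: &SocketAddr) -> bool {
        self.verified.get(addr).copied().unwrap_or(false)
    }
}

// Only verified addresses ever receive a pull-request.
fn pull_request_targets(cache: &PingCache, peers: Vec<SocketAddr>) -> Vec<SocketAddr> {
    peers.into_iter().filter(|p| cache.is_verified(p)).collect()
}
```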

(cherry picked from commit 7cea2c4466)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-03 19:40:02 +00:00
mergify[bot]
2195d980a2 patches local pending push messages processing (#16833) (#17007)
process_push_messages writes local pending push messages to the crds
table, but it discards the return value:
https://github.com/solana-labs/solana/blob/cf779c63c/core/src/crds_gossip.rs#L96-L102

In order to exclude outdated values from the next pull-request, we need
to record the hash of values purged/overridden by the local push
messages, otherwise pull-responses will return outdated values back to
the node:
https://github.com/solana-labs/solana/blob/c1829dd00/core/src/crds_gossip_pull.rs#L447-L452

Additionally, gossip packets arrive and are processed out of order. So,
local pending push messages should be flushed *before* generating bloom
filters for pull-requests, preventing pull-responses from returning the same
values back to the node itself. This requires flipping the order of
generating pull and push messages:
https://github.com/solana-labs/solana/blob/cf779c63c/core/src/cluster_info.rs#L1757-L1762

Both above bugs cause redundant traffic and bandwidth waste in gossip
pull-responses.

(cherry picked from commit a698e34744)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-05-03 17:27:38 +00:00
mergify[bot]
3e43b042eb reclaims unref accounts from index (#16838) (#17005)
(cherry picked from commit 6381ee38eb)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-05-03 17:10:09 +00:00
mergify[bot]
851742e5d9 Update sysvars.md (#16998) (#16999)
a typo

(cherry picked from commit 43ccaf14b0)

Co-authored-by: Max Block <40041609+max-block@users.noreply.github.com>
2021-05-03 09:58:20 +00:00
mergify[bot]
c6c7feb0c2 Retry latest vote if expired (#16735) (#16927)
(cherry picked from commit b5d30846d6)

Co-authored-by: carllin <carl@solana.com>
2021-05-03 05:13:29 +00:00
Justin Starry
1fde69ef48 Docs cleanup (#16997) 2021-05-03 02:59:03 +00:00
mergify[bot]
894bedcae7 Remove errant backslash (#16994) (#16995)
(cherry picked from commit d7166c5778)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-02 20:20:30 +00:00
mergify[bot]
47f15eaa03 Corrected typo in calling between programs document (backport #16991) (#16993)
* Corrected typo in calling between programs document (#16991)

* Corrected typo in calling between programs document

* corrected another typo

Co-authored-by: Srinivas Valekar <srinivasvalekar@Srinivass-MacBook-Pro.local>
(cherry picked from commit c003f8e93c)

# Conflicts:
#	docs/src/developing/programming-model/calling-between-programs.md

* Fix conflict

Co-authored-by: srinivas valekar <srinivas.valekar@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-05-02 18:40:41 +00:00
mergify[bot]
b0c0739db9 Allow SetUpgradeAuthority instruction in CPI calls (backport #16676) (#16954)
* Allow SetUpgradeAuthority instruction in CPI calls (#16676)

* feat: allow SetAuthority in CLI calls

* chore: clippy match_like_matches_macro

* chore: clippy match_like_matches_macro

* chore: rename CLI to CPI

* chore: move check for cpi authorised instruction to syscalls

* chore: add set_upgrade_authority cpi test

* chore: assert upgrade authority was changed

* feat: gate set_upgrade_authority via cpi with a feature

* chore: move feature to the end of the list

* chore: remove white spaces

* chore: remove white spaces

* chore: update comment to rerun build

(cherry picked from commit 1a658c7f31)

# Conflicts:
#	programs/bpf/Cargo.toml
#	programs/bpf_loader/src/syscalls.rs
#	sdk/src/feature_set.rs

* chore: fix merge conflicts

Co-authored-by: Sebastian Bor <Sebastian_Bor@hotmail.com>
2021-04-30 20:47:38 +00:00
mergify[bot]
c3dc23e84a docs: fix copy-pasta breaking typo in getRecentBlockhash example (#16962)
(cherry picked from commit 3d98321b38)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-30 04:24:40 +00:00
mergify[bot]
cc7fc447a4 Distinguish max replayed and max observed vote (#16936) (#16956)
(cherry picked from commit 5981399612)

Co-authored-by: carllin <carl@solana.com>
2021-04-30 00:48:56 +00:00
mergify[bot]
a401b2b4cf Refactor SignerSource to expose DerivationPath to other kinds of signers (backport #16933) (#16941)
* Refactor SignerSource to expose DerivationPath to other kinds of signers (#16933)

* One use statement

* Add stdin uri scheme

* Convert parse_signer_source to return Result

* A-Z deps

* Convert Usb data to Locator

* Pull DerivationPath out of Locator

* Wrap SignerSource to share derivation_path

* Review comments

* Check Filepath existence, readability in parse_signer_source

(cherry picked from commit d6f30b7537)

# Conflicts:
#	sdk/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-29 09:11:56 +00:00
mergify[bot]
d8c66c8981 Add skip rate to solana validators (#16939)
(cherry picked from commit d640ac143b)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-29 07:14:26 +00:00
Michael Vines
49a415414f Add getBlockProduction RPC method 2021-04-28 21:38:53 -07:00
mergify[bot]
6c540d2ada Fixup rpc-endpoints (#16924) (#16930)
(cherry picked from commit 783bd79e9d)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-28 14:44:16 -06:00
joeaba
da62ebac1a Update rpc-endpoints.md (#16926) 2021-04-29 00:12:42 +05:30
mergify[bot]
25aee12502 retains peer's contact-info when making pull requests (#16715) (#16907)
ClusterInfo::new_pull_requests has to look up contact-infos:
https://github.com/solana-labs/solana/blob/a1ef2bd74/core/src/cluster_info.rs#L1663-L1673

when it was already available when making pull requests:
https://github.com/solana-labs/solana/blob/a1ef2bd74/core/src/crds_gossip_pull.rs#L232

(cherry picked from commit 25054bfd35)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-28 14:54:54 +00:00
mergify[bot]
d8e8528797 removes delayed crds inserts when upserting gossip table (#16806) (#16905)
It is crucial that VersionedCrdsValue::insert_timestamp does not go
backward in time:
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/crds.rs#L67-L79

Otherwise methods such as get_votes and get_epoch_slots_since will
break, which will break their downstream flow, including vote-listener
and optimistic confirmation:
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L1197-L1215
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L1274-L1298

For that, Crds::new_versioned is intended to be called "atomically" with
Crds::insert_versioned (as the comment already says):
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/crds.rs#L126-L129

However, currently this is violated in the code. For example,
filter_pull_responses creates VersionedCrdsValues (with the current
timestamp), then acquires an exclusive lock on gossip, then
process_pull_responses writes those values to the crds table:
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L2375-L2392

Depending on the workload and lock contention, the insert_timestamps may
well be in the past when these values finally are inserted into gossip.

To avoid such scenarios, this commit:
  * removes Crds::new_versioned and Crds::insert_versioned.
  * makes VersionedCrdsValue constructor private, only invoked in
    Crds::insert, so that insert_timestamp is populated right before
    insert.

This will improve insert_timestamp monotonicity as long as Crds::insert
is not called with a stalled timestamp. Following commits may further
improve this by calling timestamp() inside Crds::insert, and/or
switching to std::time::Instant which guarantees monotonicity.

(cherry picked from commit 1ac2a8cfa5)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-28 13:36:21 +00:00
mergify[bot]
ed8c796877 moves cluster-info metrics to a separate module (#16883) (#16898)
(cherry picked from commit b17d5eeaee)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-28 04:18:46 +00:00
mergify[bot]
ec750cf3eb Add allowed-ip list to faucet (#16891) (#16897)
(cherry picked from commit 36574c30ef)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-28 03:32:37 +00:00
mergify[bot]
4a35053fba uses current timestamp when flushing local pending push queue (#16808) (#16896)
local_message_pending_push_queue is recording timestamps at the time the
value is created, and uses that when the pending values are flushed:
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L321
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/crds_gossip.rs#L96-L102

which is then used as the insert_timestamp when inserting values in the
crds table:
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/crds_gossip_push.rs#L183

The flushing may happen 100ms after the values are created (or even
later if there is lock contention). This will cause non-monotone
insert_timestamps in the crds table (where time goes backward),
hindering the usability of insert_timestamps for other computations.

For example both ClusterInfo::get_votes and get_epoch_slots_since rely
on monotone insert_timestamps when values are inserted into the table:
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L1197-L1215
https://github.com/solana-labs/solana/blob/ec37a843a/core/src/cluster_info.rs#L1274-L1298

This commit removes timestamps from local_message_pending_push_queue and
uses current timestamp when flushing the queue.

(cherry picked from commit b468ead1b1)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-28 01:59:29 +00:00
mergify[bot]
9797178ad1 Refactor remote-wallet path parsing (backport #16798) (#16894)
* SDK: More conversions for `Pubkey`

(cherry picked from commit 9b7120bf73)

* SDK: More conversion for `DerivationPath`

(cherry picked from commit 722de942ca)

* remote-wallet: Add helpers for locating remote wallets

(cherry picked from commit 64fcb792c2)

* remote-wallet: Plumb `Locator` into `RemoteWalletInfo`

(cherry picked from commit 3d12be29ec)

* remote-wallet: `derivation-path` crate doesn't like empty trailing child indexes

(cherry picked from commit 4ce4f04c58)

* remote-wallet: Move `Locator` to its own module

(cherry picked from commit cac666d035)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-28 01:20:41 +00:00
mergify[bot]
dbc58455df Retain alloc'd and updated data in cpi (backport #16850) (#16890)
* Retain alloc'd and updated data in cpi (#16850)

(cherry picked from commit 9b3a59f030)

# Conflicts:
#	programs/bpf_loader/src/syscalls.rs
#	sdk/src/feature_set.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-04-27 23:01:43 +00:00
mergify[bot]
4a3f851e49 Enable multiple payers in accounts-cluster-bench (#16889) (#16892)
* Enable multiple payer keypairs

* Suppress tx creation if batch size == 0

* Suppress logs when waiting to create txs

* Double airdrop threshold to prevent stall when closing accounts

(cherry picked from commit 283f587afe)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-27 22:43:06 +00:00
mergify[bot]
5a3bf5c90e limits to data_header.size when combining shreds' payloads (backport #16708) (#16870)
* limits to data_header.size when combining shreds' payloads (#16708)

Shredder::deshred is ignoring data_header.size when combining shreds' payloads:
https://github.com/solana-labs/solana/blob/37b8587d4/ledger/src/shred.rs#L940-L961

Also adding more sanity checks on the alignment of data shred indices.
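
A sketch of the fixed recombination with simplified shred fields:

```rust
struct DataShred {
    index: u32,       // position of this shred in the slot
    size: u16,        // data_header.size: bytes of valid payload
    payload: Vec<u8>, // fixed-size buffer, possibly zero padded at the end
}

// Combine payloads while honoring each shred's declared size, and check
// that indices are contiguous as a sanity check on shred alignment.
fn deshred(shreds: &[DataShred]) -> Result<Vec<u8>, String> {
    let mut data = Vec::new();
    for (i, shred) in shreds.iter().enumerate() {
        if shred.index as usize != shreds[0].index as usize + i {
            return Err(format!("non-contiguous shred index: {}", shred.index));
        }
        let size = shred.size as usize;
        if size > shred.payload.len() {
            return Err(format!("declared size {} exceeds payload", size));
        }
        data.extend_from_slice(&shred.payload[..size]); // not the zero padding
    }
    Ok(data)
}
```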

(cherry picked from commit 0f3ac51cf1)

# Conflicts:
#	ledger/src/shred.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-27 14:44:58 +00:00
mergify[bot]
de6ec11efc records hash of values purged by expired pull-responses (#16800) (#16871)
process_pull_responses should record hash of values purged by expired
responses (as well as unexpired ones):
https://github.com/solana-labs/solana/blob/c1829dd00/core/src/crds_gossip_pull.rs#L385-L387

otherwise, these values are not excluded from following pull-requests
(from likely different nodes):
https://github.com/solana-labs/solana/blob/c1829dd00/core/src/crds_gossip_pull.rs#L447-L452

and would waste bandwidth should they be included in subsequent
pull-responses.

(cherry picked from commit 3b8d6b59fb)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-27 13:26:11 +00:00
mergify[bot]
7aec87c086 Add getVoteAccounts RPC method parameter to restrict results to a single vote account (#16859)
(cherry picked from commit 59fc33635a)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-27 05:43:44 +00:00
mergify[bot]
eabc21c23a block-production subcommand now uses SlotHistory sysvar when possible (#16858)
(cherry picked from commit b66a68975b)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-27 05:32:59 +00:00
mergify[bot]
713f346211 Fix limit-ledger-size syntax (#16856) (#16857)
(cherry picked from commit 3af8cb0150)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-27 04:41:35 +00:00
Michael Vines
a81bc0ecf8 solana leader-schedule -um works again
(cherry picked from commit c2becbc0a8)
2021-04-26 17:32:05 -07:00
mergify[bot]
a3f1580b8b Update bpf loader info on native-programs docs (#16840) (#16845)
* Update bpf loader info on native-programs docs

* Link to program deployment docs

(cherry picked from commit 5eb5d9b2f5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-26 20:41:36 +00:00
mergify[bot]
4f20798654 removes old runtime feature gates in gossip and turbine (#16633) (#16828)
(cherry picked from commit 9706512115)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-26 18:40:42 +00:00
mergify[bot]
0ecd1755a6 docs: getInflationReward rpc output fields should be in lower camel case (#16802) (#16805)
(cherry picked from commit ec37a843a4)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-04-24 19:37:55 +00:00
mergify[bot]
57dd8a555a Disable flaky test_poh_service (#16772) (#16797)
(cherry picked from commit 63436cc2bf)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-24 04:38:27 +00:00
Michael Vines
f64cd4a75a Show last vote/root behind distance in solana validators output
(cherry picked from commit c1829dd00b)
2021-04-23 20:12:15 -07:00
mergify[bot]
2ce6c86c2a runtime: checked math for Bank::withdraw (#16788)
(cherry picked from commit be29568318)

# Conflicts:
#	runtime/src/bank.rs

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-24 00:25:41 +00:00
Michael Vines
ff9573714b get_packed_len() now correctly handles u32/i32 types
(cherry picked from commit 1500011fc6)
2021-04-23 13:52:10 -07:00
mergify[bot]
826111cf79 Restore text wrapping (#16776) (#16780)
(cherry picked from commit da58f20a99)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-23 17:34:47 +00:00
mergify[bot]
f31f1d0f52 fix reference to Rust Restrictions section (#16763) (#16775)
(cherry picked from commit e9a616cfc2)

Co-authored-by: strykerin <dacosta.pereirafabio@gmail.com>
2021-04-23 17:02:27 +00:00
mergify[bot]
e220f7067b docs: fix formatting issue (#16761) (#16774)
(cherry picked from commit c217ee3a00)

Co-authored-by: strykerin <dacosta.pereirafabio@gmail.com>
2021-04-23 17:02:18 +00:00
mergify[bot]
d9726e61bc retains crds values if the origin is still active (#16576) (#16771)
Local timestamps are updated for records associated with a pubkey if the
origin is still active:
https://github.com/solana-labs/solana/blob/c8ed14c64/core/src/crds.rs#L301-L311

However this is done inconsistently: on some gossip paths (pull requests
and pull responses) but not on all (e.g. push messages). Additionally
update_record_timestamp is inefficient since there can be ~800 values
associated with each pubkey.

This commit updates record timestamps only on contact-infos; and
instead utilizes the origin's timestamp when purging old values.

(cherry picked from commit 2c82f2154d)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-23 16:42:54 +00:00
mergify[bot]
786fa4f22e removes first_coding_index from erasure recovery code (#16646) (#16770)
first_coding_index is the same as the set_index and so is redundant:
https://github.com/solana-labs/solana/blob/37b8587d4/ledger/src/blockstore_meta.rs#L49-L60

(cherry picked from commit 03194145c0)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-23 13:21:27 +00:00
mergify[bot]
5b74678e37 Ingest votes from gossip into fork choice (#16560) (#16724)
(cherry picked from commit 4c94f8933f)

Co-authored-by: carllin <carl@solana.com>
2021-04-23 07:20:10 +00:00
mergify[bot]
d203bd1998 Add TPU client for sending txs to the current leader tpu port (#16736) (#16762)
* Add TPU client for sending txs to the current leader tpu port

* Update tpu_client.rs

(cherry picked from commit 75b8434b76)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-23 02:50:30 +00:00
mergify[bot]
5f5fa38d85 program-test: Add large bootstrap stake for realistic warmups (backport #16739) (#16741)
* program-test: Add large bootstrap stake for realistic warmups (#16739)

(cherry picked from commit f4214637a9)

# Conflicts:
#	program-test/Cargo.toml

* Fix merge conflict

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-04-22 23:07:52 +00:00
mergify[bot]
fadf1efa41 Update getLeaderSchedule options (#16749) (#16752)
(cherry picked from commit 636b5987af)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-22 21:02:46 +00:00
mergify[bot]
0269fffa5a Remove unactivated ristretto syscall (backport #16727) (#16745)
* Remove unactivated ristretto syscall (#16727)

(cherry picked from commit be4df39a4c)

# Conflicts:
#	programs/bpf/Cargo.lock
#	programs/bpf/rust/ristretto/Cargo.toml
#	programs/bpf/tests/programs.rs
#	programs/bpf_loader/src/syscalls.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-04-22 18:33:27 +00:00
mergify[bot]
50e441a9ed Update secp instruction link in docs (#16729) (#16733)
(cherry picked from commit b22c13dcd7)

Co-authored-by: Jack May <jack@solana.com>
2021-04-22 04:53:38 +00:00
mergify[bot]
9413051053 Clean up "APR" language around inflation rewards (#16732)
(cherry picked from commit b8b54567b1)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-22 03:29:52 +00:00
mergify[bot]
13e176a633 getLeaderSchedule now supports filtered results based on validator identity (#16731)
(cherry picked from commit 6004c0abf5)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-22 02:29:01 +00:00
mergify[bot]
9268239c75 Make metrics tests independent of RUST_LOG env var (#16710) (#16730)
Previously, running the tests with RUST_LOG=none would fail, because the
env logger would set its filter level to reject all log messages, and
incrementing a counter only happens if the global logger has at least
the specified log level. Having the tests behave differently when
RUST_LOG is set is surprising; they should be self-contained.

Nix's buildRustPackage sets RUST_LOG="" to make the build logs less
verbose. I have trouble packaging Solana for Nix because of this, and I
believe making the tests independent of the environment is a good
solution for this.

(cherry picked from commit 3f92abedd5)

Co-authored-by: Ruud van Asseldonk <dev@veniogames.com>
2021-04-22 01:41:07 +00:00
mergify[bot]
e51d7af847 verify_pubkey() now takes a ref (#16725)
(cherry picked from commit 91b6888e15)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-21 23:22:13 +00:00
mergify[bot]
147ba1de69 Update float docs (#16695) (#16726)
(cherry picked from commit bb2b4c7e0b)

Co-authored-by: Jack May <jack@solana.com>
2021-04-21 22:55:00 +00:00
mergify[bot]
7cc709c82a CLI: Make pay subcommand a proper alias of transfer (#16721)
(cherry picked from commit 63957f0677)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-21 22:40:11 +00:00
mergify[bot]
a2395e8730 Add --seed support to delegate-stake and withdraw-stake commands (#16717)
(cherry picked from commit ba9a502e7e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-21 21:49:39 +00:00
mergify[bot]
ae605f8f02 expands number of erasure coding shreds in the last batch in slots (backport #16484) (#16707)
* expands number of erasure coding shreds in the last batch in slots (#16484)

Number of parity coding shreds is always less than the number of data
shreds in FEC blocks:
https://github.com/solana-labs/solana/blob/6907a2366/ledger/src/shred.rs#L719

Data shreds are batched in chunks of 32 shreds each:
https://github.com/solana-labs/solana/blob/6907a2366/ledger/src/shred.rs#L714

However the very last batch of data shreds in a slot can be small, in
which case the loss rate can be exacerbated.

This commit expands the number of coding shreds in the last FEC block in
slots to: 64 - number of data shreds; so that FEC blocks are always 64
data and parity coding shreds each.

As a consequence of this, the last FEC block has more parity coding
shreds than data shreds. So for some shred indices we will have a coding
shred but no data shreds. This should not cause any kind of overlapping
FEC blocks as in:
https://github.com/solana-labs/solana/pull/10095
since this is done only for the very last batch in a slot, and the next
slot will reset the shred index.
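
A sketch of the batch sizing using the constants described above (names
are assumptions):

```rust
const DATA_SHREDS_PER_FEC_BLOCK: usize = 32;
const MAX_FEC_SET_SIZE: usize = 64; // data + parity coding shreds combined

fn num_coding_shreds(num_data: usize, is_last_batch_in_slot: bool) -> usize {
    assert!(0 < num_data && num_data <= DATA_SHREDS_PER_FEC_BLOCK);
    if is_last_batch_in_slot {
        // Pad the (possibly small) final batch up to 64 shreds total, so its
        // parity count can exceed its data count.
        MAX_FEC_SET_SIZE - num_data
    } else {
        // Full batches: one parity coding shred per data shred.
        num_data
    }
}
```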

(cherry picked from commit 37b8587d4e)

# Conflicts:
#	core/benches/shredder.rs
#	ledger/src/shred.rs

* removes backport merge conflicts

* ignore the flaky test for now

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-21 15:25:26 +00:00
mergify[bot]
ea2cc90215 Improve net scripts (backport #16699) (#16700)
* Pass limit-ledger-size value

(cherry picked from commit 51b748408c)

* Initialize non-bootstrap nodes with faucet address

(cherry picked from commit 053120e04c)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-21 08:45:11 +00:00
mergify[bot]
e15ddbb979 Add port and gossip options to solana-test-validator (#16696) (#16698)
(cherry picked from commit 0924c2d070)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-21 03:46:52 +00:00
mergify[bot]
bbd8bd2e74 Enforce host aligned memory for program regions (backport #16590) (#16683)
* Enforce host aligned memory for program regions (#16590)

(cherry picked from commit 08d5253651)

# Conflicts:
#	cli/Cargo.toml
#	programs/bpf/Cargo.toml
#	programs/bpf/benches/bpf_loader.rs
#	programs/bpf/tests/programs.rs
#	programs/bpf_loader/Cargo.toml
#	programs/bpf_loader/src/lib.rs

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-04-21 01:47:00 +00:00
mergify[bot]
a5794efe16 getVoteAccounts: Limit the length of the epoch_credits array (#16692)
(cherry picked from commit 34addee882)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-20 22:55:17 +00:00
mergify[bot]
27095378fa Remove unwrap from bpf_loader serialization (#16645) (#16649)
(cherry picked from commit 2409bb18f3)

Co-authored-by: Jack May <jack@solana.com>
2021-04-20 16:35:45 +00:00
mergify[bot]
a8836649cb uses current local timestamp when recording purged values (#16675)
CrdsGossipPull.purged_values is meant to record recently purged values
so that they are excluded from imminent pull requests, until the entire
cluster has synced to the updated value:
https://github.com/solana-labs/solana/blob/c826cddbb/core/src/crds_gossip_pull.rs#L449-L454

However, VersionedCrdsValue.local_timestamp represents the local time
when the value was last updated, and given that crds values may have
different timeouts based on stake, it does not necessarily represent how
recently the value was purged:
https://github.com/solana-labs/solana/blob/c826cddbb/core/src/crds.rs#L75-L76

As such, recording the current local timestamp when purging values is
more appropriate. Additionally, purge_purged assumes that purged_values
is sorted by timestamp when draining the old ones, which is not true if
those timestamps are VersionedCrdsValue.local_timestamp:
https://github.com/solana-labs/solana/blob/c826cddbb/core/src/crds_gossip_pull.rs#L563-L571
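
A standalone sketch of the fix's intent, with hypothetical types in place of the real crds structures: record wall-clock "now" at purge time, so the deque stays sorted by purge time and draining from the front is sound.

```rust
use std::collections::VecDeque;

type ValueHash = u64; // stand-in for the real crds value hash

/// Recently purged values, appended with the current local timestamp so
/// the deque stays sorted by purge time.
#[derive(Default)]
struct PurgedValues(VecDeque<(ValueHash, u64 /* purge time, ms */)>);

impl PurgedValues {
    /// Record "now", not VersionedCrdsValue.local_timestamp.
    fn record(&mut self, hash: ValueHash, now: u64) {
        self.0.push_back((hash, now));
    }
    /// Draining old entries from the front is only correct because
    /// `record` keeps the deque sorted by purge time.
    fn purge_purged(&mut self, now: u64, timeout: u64) {
        while matches!(self.0.front(), Some(&(_, t)) if now.saturating_sub(t) > timeout) {
            self.0.pop_front();
        }
    }
}

fn main() {
    let mut purged = PurgedValues::default();
    purged.record(42, 1_000);
    purged.record(43, 2_000);
    purged.purge_purged(3_500, 2_000); // drops the entry purged at t=1000
    assert_eq!(purged.0.len(), 1);
}
```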

(cherry picked from commit bc90e04e64)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-20 12:42:30 +00:00
mergify[bot]
cc81830f13 CLI: Limit stake-history output by default (#16673)
(cherry picked from commit f91de6a84d)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-20 11:36:15 +00:00
mergify[bot]
558a46f5d5 RPC: use finalized as default pubsub commitment level (#16659) (#16666)
* RPC: use finalized as default pubsub commitment level

* update docs

* Fix tests

(cherry picked from commit a7e65c0034)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-20 09:47:50 +00:00
mergify[bot]
57add5366e Expand a couple docs sections (backport #16664) (#16671)
* docs: Flesh out address verification in integration guide

(cherry picked from commit d575450ef0)

* docs: Expand native program descriptions

(cherry picked from commit 12678a819d)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-20 09:33:40 +00:00
mergify[bot]
5057aaddc0 Send votes to next leader's TPU instead of our TPU (#16663)
(cherry picked from commit c8b474cd0b)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-20 08:45:58 +00:00
mergify[bot]
3865219085 Remove unwrap (#16652) (#16657)
(cherry picked from commit 01786f684e)

Co-authored-by: Jack May <jack@solana.com>
2021-04-20 04:45:37 +00:00
mergify[bot]
6da06654ff Wrap derivation_path::DerivationPath (backport #16609) (#16651)
* Wrap derivation_path::DerivationPath (#16609)

* Replace custom DerivationPath impl

* Add method to parse full-path from str with hardening

* Convert Bip44 to trait

* Hoist more work on derivation-path

* Privatize Bip44 trait

(cherry picked from commit 185bbf2db5)

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-20 00:50:16 +00:00
mergify[bot]
4a375acebc Add --sort argument to solana validators (backport #16640) (#16655)
* Add --sort argument to `solana validators`

(cherry picked from commit b66faf7e80)

* Add line numbers to `solana validators` output

(cherry picked from commit 818c3198c1)

* Print the header as a footer when there's a large number of validators to show

(cherry picked from commit 1824b5a2ce)

* Add --number argument

(cherry picked from commit f14cf3ed1a)

* Prefix current validators with nbsp for easier sed-ing

(cherry picked from commit 568438aa6f)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-20 00:42:19 +00:00
mergify[bot]
9fcd465928 solana validators: Restore the meaning of "credits" in the JSON output (#16647)
(cherry picked from commit 1b63bdaf44)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-19 21:08:12 +00:00
mergify[bot]
1f8ef5e640 solana validators now shows current epoch credits instead of lifetime credits (#16639)
(cherry picked from commit f5f06904c3)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-19 18:52:27 +00:00
Michael Vines
a1b0f2f681 Increase test timeout 2021-04-19 04:12:16 +00:00
Michael Vines
f59d4f29d9 clippy 2021-04-19 04:12:16 +00:00
Michael Vines
b379004c3b Upgrade to Rust 1.51.0 2021-04-19 04:12:16 +00:00
mergify[bot]
25491780df Remove unnecessary clone (#16621)
(cherry picked from commit 6907a2366e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-17 18:31:18 +00:00
mergify[bot]
4354ad3299 Documentation typo for language (#16620)
(cherry picked from commit 5399faaf53)

Co-authored-by: Guillaume Claret <dev@clarus.me>
2021-04-17 15:20:54 +00:00
Trent Nelson
4e94446fc3 Bump version to v1.6.7 2021-04-16 23:31:30 +00:00
mergify[bot]
d99795c000 Move derivation path into sdk (#16603) (#16607)
* Move DerivationPath to sdk

* Remove eprintln

(cherry picked from commit 52f4b96a80)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-16 23:23:14 +00:00
mergify[bot]
fe775a9716 CLI BIP32 prep: KeypairUrl refactor (backport #16592) (#16605)
* clap-utils: Rename KeypairUrl to SignerSource

(cherry picked from commit 09dcc9ea04)

* clap-utils: Reduce SignerSource's visibility

(cherry picked from commit c5ab3ba6f1)

* clap-utils: Use `uriparse` crate to parse `SignerSource`

(cherry picked from commit 5d1ef5d01d)

* clap-utils: Add explicit schemes for `ask` and `file` `SignerSource`s

(cherry picked from commit 6444f0e57b)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-16 21:14:31 +00:00
mergify[bot]
ac76a75937 Feature-gate hash-based duplicate transaction check (#16601)
(cherry picked from commit 285f3c9d56)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-16 19:59:55 +00:00
mergify[bot]
6c1678244f docs: Fix typo in program deploy instructions (#16572) (#16575)
(cherry picked from commit c8ed14c647)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-16 05:58:31 +00:00
mergify[bot]
63a9f33be1 Don't parse uninitialized system/nonce accounts (#16584) (#16587)
(cherry picked from commit ba77e48c12)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-15 23:39:16 +00:00
mergify[bot]
c9da91cb1c Rotate CODECOV_TOKEN (#16579)
(cherry picked from commit a535c0e129)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-15 16:15:20 +00:00
mergify[bot]
b3488e0139 Cli: move airdrop to rpc requests (#16557) (#16564)
* Add recent_blockhash to requestAirdrop

* Move tx confirmation to separate method

* Add RpcClient airdrop methods

* Request cli airdrop via RpcClient

* Pass optional faucet_addr into TestValidator and fix tests

* Update client/src/rpc_client.rs

Co-authored-by: Michael Vines <mvines@gmail.com>

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 7dfb51c0b4)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-15 07:35:19 +00:00
mergify[bot]
f3814a0478 docs: freshen and clarify rent-exempt dev description (#16562)
(cherry picked from commit 76ce28c723)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-15 04:35:08 +00:00
mergify[bot]
5e8d8cfb49 fix transaction spelling (#16558) (#16559)
(cherry picked from commit 1f29031b9d)

Co-authored-by: strykerin <dacosta.pereirafabio@gmail.com>
2021-04-15 02:24:26 +00:00
mergify[bot]
ad37276d83 dl-utils: use wide_msg everywhere for truncation on narrow terminals (#16555)
(cherry picked from commit e61b4b7d70)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-15 01:01:55 +00:00
mergify[bot]
719db7eed0 uses timeouts based on stake for filtering pull responses (#16549) (#16551)
filter_pull_responses is using the default timeout when discarding pull
responses (except for ContactInfo):
https://github.com/solana-labs/solana/blob/f804ce63c/core/src/crds_gossip_pull.rs#L349-L350

But the purging code uses timeouts based on stake:
https://github.com/solana-labs/solana/blob/f804ce63c/core/src/cluster_info.rs#L1867-L1870

So the crds value will not be purged from the sender's table and will be
sent again over the next pull request.
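
A minimal sketch of the fix, assuming a per-origin, stake-based timeout map like the one the purging code uses; the types are stand-ins, not the actual crds_gossip_pull API:

```rust
use std::collections::HashMap;

type Pubkey = u64; // stand-in for solana_sdk::pubkey::Pubkey

/// Filter pull responses with the same stake-based, per-origin timeouts
/// the purging code uses, falling back to the default only for unknown
/// (unstaked) origins.
fn filter_pull_responses(
    responses: Vec<(Pubkey, u64 /* wallclock, ms */)>,
    timeouts: &HashMap<Pubkey, u64>,
    default_timeout: u64,
    now: u64,
) -> Vec<(Pubkey, u64)> {
    responses
        .into_iter()
        .filter(|(origin, wallclock)| {
            let timeout = timeouts.get(origin).copied().unwrap_or(default_timeout);
            now.saturating_sub(*wallclock) <= timeout
        })
        .collect()
}

fn main() {
    let timeouts = HashMap::from([(1, 60_000)]); // staked node: longer timeout
    let kept = filter_pull_responses(vec![(1, 0), (2, 0)], &timeouts, 15_000, 30_000);
    assert_eq!(kept, vec![(1, 0)]); // unstaked origin 2 is discarded
}
```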

(cherry picked from commit d92721aab9)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-14 21:43:48 +00:00
mergify[bot]
4ddb72a32d prioritizes contact-infos in pull responses (#16541) (#16550)
Expired crds values where the contact-info does not exist are wasted:
https://github.com/solana-labs/solana/blob/f804ce63c/core/src/crds_gossip_pull.rs#L353-L378
and then are sent again over the next pull-request.

Also, the stake of the first response (which can be anything) is used to
weight all pull-responses to a node, while the rest of the responses can
have different stakes.
https://github.com/solana-labs/solana/blob/f804ce63c/core/src/cluster_info.rs#L2231
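
One way to realize the prioritization, sketched with a stand-in enum rather than the real CrdsData: a stable sort that moves ContactInfo values to the front before processing.

```rust
/// Stand-in for the real CrdsData; only the discriminant matters here.
#[derive(Debug, PartialEq)]
enum CrdsData {
    ContactInfo(u64 /* origin */),
    Vote(u64),
}

/// Process ContactInfo values first so other values from the same origin
/// are not discarded for lack of a contact-info entry.
fn prioritize(mut responses: Vec<CrdsData>) -> Vec<CrdsData> {
    // `false` sorts before `true`, so ContactInfo comes first; the sort is stable.
    responses.sort_by_key(|v| !matches!(v, CrdsData::ContactInfo(_)));
    responses
}

fn main() {
    let out = prioritize(vec![CrdsData::Vote(7), CrdsData::ContactInfo(7)]);
    assert_eq!(out, vec![CrdsData::ContactInfo(7), CrdsData::Vote(7)]);
}
```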

(cherry picked from commit f35a6a8be0)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-14 20:14:22 +00:00
mergify[bot]
ff1171338f Fix channel panic in tests (#16503) (#16543)
* Fix channel panic

* Add exit signal to PohRecorder because Crossbeam doesn't drop objects inside a dropped channel

(cherry picked from commit f0c150cfb9)

Co-authored-by: carllin <carl@solana.com>
2021-04-14 19:04:31 +00:00
mergify[bot]
28683b0ad8 Fix sanity test flakiness by prebuilding binaries (#16530) (#16547)
* Fix sanity test flakiness by prebuilding binaries

* ignore shellcheck

* bump

* nudge

* simplify

(cherry picked from commit 328e7690f3)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-14 18:52:15 +00:00
Michael Vines
4ef3a679a4 Bump version to v1.6.6 2021-04-14 10:27:02 -07:00
Connor McFarlane
e02bcbdae2 Other hostname changes
(cherry picked from commit eddfe06a00)
2021-04-14 10:08:29 -07:00
Connor McFarlane
7b0187a148 Correct gossip hostname
(cherry picked from commit d684ec00aa)
2021-04-14 10:08:29 -07:00
Michael Vines
e92283c8d2 Add --faucet-port option
(cherry picked from commit f804ce63c2)
2021-04-14 09:39:27 -07:00
Jack May
ef3781d4ee fix cross-merge (#16535) 2021-04-14 10:16:24 +00:00
mergify[bot]
6da4bec41d Return sysvars via syscalls (bp #16422) (#16497)
* Return sysvars via syscalls (#16422)

(cherry picked from commit fa83f3bd73)

* bad merge

* Fix branch diffs

* nudge

Co-authored-by: Jack May <jack@solana.com>
2021-04-14 05:33:27 +00:00
mergify[bot]
31ed985fd0 RpcClient no longer panics in a tokio multi-threaded runtime (#16393)
(cherry picked from commit a4f0d8636a)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-14 03:17:33 +00:00
mergify[bot]
cdc10712b1 Bump scripts to current commitment variants (#16526) (#16527)
(cherry picked from commit 3bfae8e829)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-14 01:51:56 +00:00
mergify[bot]
935a836a7d bump solana_rbpf from 0.2.5 to 0.2.7 (backport #16515) (#16525)
* bump solana_rbpf from 0.2.5 to 0.2.7 (#16515)

(cherry picked from commit f7eadd9d70)

# Conflicts:
#	cli/Cargo.toml
#	programs/bpf/Cargo.toml
#	programs/bpf_loader/Cargo.toml
#	programs/bpf_loader/src/syscalls.rs

* Fix conflicts

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Jack May <jack@solana.com>
2021-04-13 23:53:48 +00:00
mergify[bot]
97ba3cbeb0 Cleanup unsupported sysvars (backport #16390) (#16517)
* Cleanup unsupported sysvars (#16390)

* Cleanup unsupported sysvars

* fix ser description

(cherry picked from commit 92f4018b07)

# Conflicts:
#	runtime/src/bank.rs

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-04-13 23:28:08 +00:00
mergify[bot]
e8ca35f9ec Deprecate RpcClient methods, RpcRequest variants (#16516) (#16519)
* Deprecate RpcClient methods, RpcRequest variants

* Update cli to getSupply

(cherry picked from commit ccb11a939f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-13 22:23:39 +00:00
Michael Vines
d5aae9a8af Derive PartialEq for RpcStakeActivation 2021-04-13 12:33:46 -07:00
mergify[bot]
3bb8016a40 Remove blake3 from bpf program dependencies (#16506) (#16509)
(cherry picked from commit f641429056)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-13 11:18:26 +00:00
Justin Starry
579065443a v1.6: Use blake3 message hash in status cache (#16507) 2021-04-13 16:57:20 +08:00
Trent Nelson
81d636c2bf Merge pull request from GHSA-fmvj-vqp5-qqh9
* Sanitize permissions

* Forbid creating directories under ledger/rocksdb/

* hardened_unpack: Disallow dirs under rocksdb/ in genesis

* hardened_unpack: expand valid genesis entry test coverage

* hardened_unpack: rework old-style bsd directory entry rejection

Co-authored-by: Ivan Mironov <mironov.ivan@gmail.com>
2021-04-12 23:56:37 -06:00
Michael Vines
6a7ce8500b canonicalize authorized voter filepath
(cherry picked from commit 05ad979a2d)
2021-04-12 20:01:56 -07:00
mergify[bot]
8ee294639a validator: Add authorized-voter add/remove-all commands (bp #16492) (#16496)
* Clean up build warning

(cherry picked from commit 17a173ebb5)

* Add authorized-voter add/remove-all commands

(cherry picked from commit 2229b70c4e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-13 00:07:06 +00:00
Tyera Eulberg
b275f65ef1 Add address_cache and exclude loopback from ip limit (#16491) 2021-04-12 20:31:30 +00:00
mergify[bot]
37c2b68677 poll checking for new record in poh service after every batch of hashes instead of busy waiting (#16167) (#16486)
* poll waiting in poh service after every batch of hashes

* clippy

(cherry picked from commit 414c7070cb)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-04-12 19:07:23 +00:00
mergify[bot]
d9944c8ae3 TransactionRecorder uses unique channel so we can use Recv instead of RecvTimeout (#16195) (#16485)
* time

* new channel each call

* new channel every time

(cherry picked from commit 5eff23db0c)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-04-12 18:56:22 +00:00
mergify[bot]
6c8bbdca0a Allow fork choice to support multiple versions of a slot (#16266) (#16480)
(cherry picked from commit dc7030ffaa)

Co-authored-by: carllin <carl@solana.com>
2021-04-12 09:14:02 +00:00
Michael Vines
10e8f3ab32 Fix up App formatting
(cherry picked from commit ef30943c5c)
2021-04-11 23:36:31 -07:00
mergify[bot]
8c0b0f235e docker: Expose all ports in Dockerfile, add back localnet.sh (#16401) (#16474)
* docker: Expose all ports in Dockerfile, add back localnet.sh

* Add documentation for where to find containers

* Obliterate script

(cherry picked from commit 448d5be79f)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-04-11 20:17:08 +00:00
mergify[bot]
ec8ba76e4d Fix account copy step in program test message processor (#16469) (#16472)
(cherry picked from commit 278c125d99)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-11 20:31:22 +08:00
mergify[bot]
60fba7be75 Track gossip vote updates per hash for replay stage (#16421) (#16468)
* Track gossip vote updates per hash for replay stage

(cherry picked from commit 99b3aab703)

Co-authored-by: carllin <carl@solana.com>
2021-04-11 09:33:28 +00:00
mergify[bot]
24075ceeff Fill in not-yet-finalized block-time if possible (#16460) (#16463)
(cherry picked from commit 8bc0bdd40b)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-09 21:48:16 +00:00
mergify[bot]
f7ef1e68b0 patches bug in banking stage where buffered packets are never retained (#16276) (#16458)
banking_stage::handle_forwarding is retaining buffered packets with an
empty index list, so nothing is held:
https://github.com/solana-labs/solana/blob/6f3926b64/core/src/banking_stage.rs#L520

(cherry picked from commit 701fc93343)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-09 18:44:32 +00:00
mergify[bot]
723e7f11b9 Simplify some pattern-matches (#16402) (#16446)
When those match an exact combinator on Option / Result.

Tool-aided by [comby-rust](https://github.com/huitseeker/comby-rust).

(cherry picked from commit b08cff9e77)

Co-authored-by: François Garillot <4142+huitseeker@users.noreply.github.com>
2021-04-08 20:45:01 +00:00
mergify[bot]
f7211d3c07 Cli: use get_inflation_rewards and limit epochs queried (#16408) (#16444)
* Fix block-with-limit when no finalized blocks are found

* Enable confirmed commitment in getInflationReward

* Use get_inflation_rewards in cli

* Line up rewards output

* Add range validator

* Change cli epoch arg -> num epochs

* Add solana inflation rewards subcommand

* Consolidate epoch rewards meta

(cherry picked from commit bb9d2fd07a)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-08 18:16:04 +00:00
mergify[bot]
6234090361 Fix cargo-build/test-bpf --workspace (bp #16431) (#16432)
* Fix cargo-build/test-bpf --workspace (#16431)

(cherry picked from commit 878e52f0b9)

# Conflicts:
#	ci/test-stable.sh

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-04-08 16:55:21 +00:00
mergify[bot]
a001c1c8f6 CI: Let cargo-install-all.sh resolve stable (#16430)
(cherry picked from commit 388ce12207)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-07 21:30:08 +00:00
mergify[bot]
7f62f4f621 CLI: Fix rent panic (#16417) (#16426)
* CLI: Fix `rent` panic on non-numeric input (+monikers)

* Update cli/src/cluster_query.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Update cli/src/cluster_query.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Update cli/src/cluster_query.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit c5c3ae0203)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-07 18:04:20 +00:00
mergify[bot]
8334a76e5b docs: Validator SOL reqs followup (#16424)
(cherry picked from commit 117860218f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-07 16:15:20 +00:00
mergify[bot]
eadab5e2f0 No wallclock throttle tests (#16396) (#16399)
(cherry picked from commit 1219842a96)

Co-authored-by: carllin <carl@solana.com>
2021-04-07 11:05:51 +00:00
mergify[bot]
8bb7b53f3b Speed up net.sh builds (bp #16360) (#16420)
* Speed up net.sh builds (#16360)

* Speed up net.sh builds

* feedback

* Update net/net.sh

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* feedback

* fix

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 6cd4bc5e60)

# Conflicts:
#	scripts/cargo-install-all.sh

* fix

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-07 09:03:53 +00:00
Trent Nelson
8c7b8e8c5d docs: Add validator SOL reqs
(cherry picked from commit 0e42a35e4f)
2021-04-06 22:47:48 -06:00
mergify[bot]
a2857928a4 Rpc: introduce get_inflation_reward rpc call (#16278) (#16410)
* feat: introduce get_inflation_reward rpc call

* fix: style suggestions

* fix: more style changes and match how other rpc functions are defined

* feat: get reward for a single epoch

* feat: default to the most recent epoch

* fix: don't factor out get_confirmed_block

* style: introduce from impl for RpcEncodingConfigWrapper

* style: bring commitment into variable

* feat: support multiple pubkeys for get_inflation_reward

* feat: add get_inflation_reward to rpc client

* feat: return rewards in order

* fix: rename pubkeys to addresses

* docs: introduce jsonrpc docs for get_inflation_reward

* style: early return in map (not sure which is more idiomatic)

* fix: call the rpc client function args addresses as well

* fix: style

* fix: filter out only addresses we care about

* style: make this more idiomatic

* fix: change rpc client epoch to optional and include some docs edits

* feat: filter out rent rewards in get_inflation_reward

* feat: add option epoch config param to get_inflation_reward

* feat: rpc client get_inflation_reward takes epoch instead of config and some filter staking and voting rewards

(cherry picked from commit e501fa5f0b)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-04-07 02:26:45 +00:00
mergify[bot]
f6780d72b1 Faucet: repurpose cap and slice args to apply to single IPs (bp #16381) (#16400)
* Faucet: repurpose cap and slice args to apply to single IPs (#16381)

* Single use stmt

* Log request IP

* Switch cap and slice to apply per IP

* Use SOL in logs, error msgs

* Use thiserror instead of overloading io::Error

* Return memo transaction for requests that exceed per-request-cap

* Handle faucet memos in cli

* Add some docs, esp about memo transaction

* Use SOL symbol & standardize memo

Co-authored-by: Michael Vines <mvines@gmail.com>

* Differentiate faucet tx-length errors

* Populate signature in cli airdrop memo case

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 03d3ae1cb9)

# Conflicts:
#	Cargo.lock
#	client/Cargo.toml
#	faucet/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-06 08:48:10 +00:00
mergify[bot]
5d003c6dab Use spl-memo v3.0.1 (#16384) (#16397)
* Use memo v3.0.1, which simplifies id imports

* tree

(cherry picked from commit ae7bc8299d)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-06 05:12:04 +00:00
mergify[bot]
79ee0e06b2 Cluster info shred spies (bp #16389) (#16395)
* cluster-info: Don't subtract non-shred spies from node count

(cherry picked from commit b6b08706b9)

* cluster-info: Get rid of some integer math while we're here

(cherry picked from commit b71875df61)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-06 01:37:16 +00:00
mergify[bot]
443f132de5 Add cluster state verifier logging (#16330) (#16336)
* Add cluster state verifier logging

* Add duplicate-slots iterator to ledger tool

(cherry picked from commit 4e5ef6bce2)

Co-authored-by: carllin <carl@solana.com>
2021-04-06 01:25:12 +00:00
mergify[bot]
95299e43a2 validator: Use a const for wait for supermajority threshold (#16392)
(cherry picked from commit 7a2a39093d)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-06 00:53:05 +00:00
mergify[bot]
0cf1894ede issue #10831: added --with-memo option to all cli commands that submit (bp #16291) (#16387)
* issue #10831: added --with-memo option to all cli commands that submit (#16291)

* issue #10831: added --with-memo option to all cli commands that submit
transactions. Also, improve the block command to show a UTF-8 string
instead of integer values for memo program data.

* Fixed tests and changed some syntax according to feedback.

* Use spl_memo id (all versions where applicable) instead of hardcoding id.

* Update Cargo.toml in programs/bpf.

* Update formatting via cargo fmt.

* Update to use spl_memo version 3.0.1, which simplifies package imports

(cherry picked from commit 364af3a3e0)

# Conflicts:
#	cli-output/Cargo.toml
#	cli/Cargo.toml

* Fix conflicts

Co-authored-by: bji <bryan@ischo.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-05 23:14:16 +00:00
mergify[bot]
f2f4f28c0b merkle-tree: fix build when targeting bpf (bp #16335) (#16342)
* merkle-tree: Add Xargo.toml

(cherry picked from commit a1d9b53cd7)

* merkle-tree: Get `Hash` et. al from program instead of sdk

(cherry picked from commit ddc0a16cec)

* merkle-tree: Use `matches` crate when targeting eBPF

(cherry picked from commit a44c32694f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-05 21:57:26 +00:00
Michael Vines
57da68d563 Update Cargo.toml 2021-04-05 14:02:34 -07:00
Michael Vines
8d2337ccf8 Update Cargo.toml 2021-04-05 14:02:34 -07:00
Michael Vines
270749185c Adjust tokio version to just "1"
(cherry picked from commit 43feef7362)

# Conflicts:
#	faucet/Cargo.toml
#	net-utils/Cargo.toml
2021-04-05 14:02:34 -07:00
Michael Vines
6184254416 Reduce test-validator ledger size
(cherry picked from commit b242f82696)
2021-04-05 09:24:49 -07:00
mergify[bot]
c8bb13b3f7 Fixup AncestorIterator method (bp #16357) (#16359)
* Fixup iterator method (#16357)

(cherry picked from commit 1a13d22984)

* Only get Blockstore::last_root once (#16362)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-05 05:27:58 +00:00
Tyera Eulberg
5da83c1491 Bump version to v1.6.5 (#16361) 2021-04-04 22:00:40 -06:00
Michael Vines
b04ce80255 Add channel version check
(cherry picked from commit 527adbed34)
2021-04-04 13:53:50 -07:00
sakridge
a788021181 Bump version to v1.6.4 (#16345) 2021-04-04 13:31:35 -07:00
mergify[bot]
553e9fb8cd Set ticks_per_slot higher for banking stage tests (#16094) (#16356)
Tests are timing out because the bank hit the MaxTickHeight and
will not process the transactions.

(cherry picked from commit 96ccc40f0a)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-04-04 20:29:53 +00:00
Michael Vines
f1bc7ec4fa wait-for-restart-window works again for unstaked nodes
(cherry picked from commit a679aebc82)
2021-04-04 12:59:13 -07:00
mergify[bot]
581181e87f Fix test_replay_commitment_cache (#16131) (#16355)
(cherry picked from commit 9b94741290)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-04-04 19:41:45 +00:00
mergify[bot]
36e1f9fae8 Bump bpf-tools to version v1.5 (#16331) (#16350)
The new version of bpf-tools eliminates the separate
rust-bpf-sysroot. The Rust standard libraries for the BPF target are
built in tree when the compiler is built.  The standard libraries code
is slightly more optimized and some reduction of compute budget can be
expected with this version of bpf-tools.

(cherry picked from commit 1359bceb5d)

Co-authored-by: Dmitri Makarov <dmakarov@users.noreply.github.com>
2021-04-04 16:58:52 +00:00
Justin Starry
f10ae394c8 Remove unprocessed transactions from log notifications (#16349)
(cherry picked from commit 0596cf5405)
2021-04-04 09:38:23 -07:00
mergify[bot]
f7905d369a Throttle PoH ticks by cumulative slot time (#16139) (#16315)
* Throttle PoH ticks by cumulative slot time

* respond to pr feedback

* saturating sub

* updated comment

(cherry picked from commit 4f4cffbd03)

# Conflicts:
#	core/src/poh_recorder.rs

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-04-03 23:40:46 +00:00
mergify[bot]
ef079d202b Wait for 90 percent of stake before starting (#16340) (#16344)
(cherry picked from commit 3429785d9b)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-04-03 22:50:58 +00:00
Michael Vines
b7efc2373c wait-for-restart-window now indicates how far away the next restart window is
(cherry picked from commit c8c89dd5f7)
2021-04-02 23:23:16 -07:00
Michael Vines
d3b50bc55b Remove UNSTABLE warning from logsSubscribe 2021-04-02 12:54:20 -07:00
mergify[bot]
8fd3465f8a Cleanup use (bp #16327) (#16328)
* Cleanup use (#16327)

(cherry picked from commit dee655df35)

# Conflicts:
#	Cargo.lock
#	program-test/Cargo.toml

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-04-02 19:54:00 +00:00
mergify[bot]
23b9e6eae3 add metric for ticks from poh_recorder.record (#16047) (#16312)
(cherry picked from commit 2fc609a294)

# Conflicts:
#	core/src/poh_recorder.rs

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-04-02 18:50:50 +00:00
mergify[bot]
fe1a977f9e Parse SPL associated-token-account instructions (bp #16318) (#16321)
* Parse SPL associated-token-account instructions (#16318)

(cherry picked from commit a902505810)

# Conflicts:
#	transaction-status/Cargo.toml

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-02 02:38:28 +00:00
mergify[bot]
5e538eff7c metrics for poh_recorder.record (#15998) (#16317)
(cherry picked from commit ddc758439e)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-04-02 02:35:09 +00:00
mergify[bot]
3efe4b5478 increase timeout in TransactionRecorder.record (#16133) (#16314)
(cherry picked from commit 06ac0fe9a3)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-04-02 00:58:52 +00:00
mergify[bot]
90e0d4fefe poh record metrics (#16092) (#16313)
(cherry picked from commit f68860a643)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-04-01 21:27:16 +00:00
behzad nouri
2e983fb39f pushes addresses instead of insert 2021-04-01 11:14:05 -06:00
mergify[bot]
527b20fbbd nit: fix variable names (#16283) (#16295)
(cherry picked from commit aa45e81b3e)

Co-authored-by: Jack May <jack@solana.com>
2021-04-01 09:24:52 +00:00
mergify[bot]
a0c4b4e5fc Rpc: enable getConfirmedSignaturesForAddress2 to return confirmed (not yet finalized) data (#16281) (#16293)
* Update blockstore method to allow return of unfinalized signature

* Support confirmed sigs in getConfirmedSignaturesForAddress2

* Add deprecated comments

* Update docs

* Enable confirmed transaction-history in cli

* Return real confirmation_status; fill in not-yet-finalized block time if possible

(cherry picked from commit da27acabcc)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-01 06:30:36 +00:00
mergify[bot]
282315a721 Rpc: fix getConfirmedTransaction slot (#16288) (#16290)
* Fix transaction blockstore apis

* Update blockstore apis in rpc

(cherry picked from commit 18bd47dbe1)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-01 04:58:53 +00:00
mergify[bot]
b8198f8cc5 removes OrderedIterator and transaction batch iteration order (#16153) (#16285)
In TransactionBatch,
https://github.com/solana-labs/solana/blob/e50f59844/runtime/src/transaction_batch.rs#L4-L11
lock_results[i] is aligned with transactions[iteration_order[i]]:
https://github.com/solana-labs/solana/blob/e50f59844/runtime/src/bank.rs#L2414-L2424
https://github.com/solana-labs/solana/blob/e50f59844/runtime/src/accounts.rs#L788-L817

However load_and_execute_transactions is iterating over
  lock_results[iteration_order[i]]
https://github.com/solana-labs/solana/blob/e50f59844/runtime/src/bank.rs#L2878-L2889
and then returning i as the index of the retryable transaction.

If iteration_order is [1, 2, 0], and i is 0, then:
  lock_results[iteration_order[i]] = lock_results[1]
which corresponds to
  transactions[iteration_order[1]] = transactions[2]
so neither i = 0, nor iteration_order[i] = 1 gives the correct index for the
corresponding transaction (which is 2).

This commit removes OrderedIterator and transaction batch iteration order
entirely. There is only one place in blockstore processor which the
iteration order is not ordinal:
https://github.com/solana-labs/solana/blob/e50f59844/ledger/src/blockstore_processor.rs#L269-L271
It seems like, instead of using an iteration order, it could shuffle entry
transactions in place.
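
The mismatch can be reproduced with a toy batch (stand-in data, not the real TransactionBatch types):

```rust
fn main() {
    let transactions = ["tx0", "tx1", "tx2"];
    let iteration_order = [1usize, 2, 0];
    // By the invariant above, lock_results[i] belongs to
    // transactions[iteration_order[i]]:
    let lock_results = ["lock-for-tx1", "lock-for-tx2", "lock-for-tx0"];

    let i = 0;
    // Buggy access: lock_results[iteration_order[i]] is lock_results[1],
    // which belongs to transactions[iteration_order[1]] = transactions[2].
    assert_eq!(lock_results[iteration_order[i]], "lock-for-tx2");
    // Neither i (0) nor iteration_order[i] (1) is the right tx index (2):
    assert_eq!(transactions[iteration_order[iteration_order[i]]], "tx2");
}
```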

(cherry picked from commit 3f63ed9a72)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-01 01:28:01 +00:00
mergify[bot]
68ad2dcce1 Use more performant copy (#16282) (#16284)
(cherry picked from commit ad7f8e7f23)

Co-authored-by: Jack May <jack@solana.com>
2021-04-01 01:08:01 +00:00
mergify[bot]
e87c3421bc Update overview.md (#16280)
fix link which was broken/wrong

(cherry picked from commit c723251575)

Co-authored-by: Huge <mr.huge@seznam.cz>
2021-03-31 22:05:24 +00:00
mergify[bot]
20754a7115 Drop write lock on sysvars (#15497) (#16233)
* Drop write lock on sysvars

* adds env var for demoting sysvar write lock demotion

* moves demote logic to is_writable

* feature gates sysvar write lock demotion

* adds builtins to write lock demotion

* adds system program id to builtins

* adds Feature111...

* adds an abi-freeze test

* mvines set of builtin program keys

Co-authored-by: Michael Vines <mvines@gmail.com>

* update tests

* adds bpf loader keys

* Add test sysvar

* Plumb demote_sysvar to is_writable

* more plumbing of demote_sysvar_write_locks to is_writable

* patches test_program_bpf_instruction_introspection

* hard codes demote_sysvar_write_locks to false for serialization/encoding methods

* Revert "hard codes demote_sysvar_write_locks to false for serialization/encoding methods"

This reverts commit ae3e2d2e777437bddd753933097a210dcbc1b1fc.

* change the hardcoded ones to demote_sysvar_write_locks=true

* Use data_as_mut_slice

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 54c68ea83f)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-03-31 20:23:20 +00:00
mergify[bot]
8a57ee181e Cleanup nits (bp #16211) (#16237)
* Cleanup nits (#16211)

(cherry picked from commit f84e88f0a2)

# Conflicts:
#	programs/bpf/Cargo.lock
#	programs/bpf/rust/sysvar/Cargo.toml

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-03-31 10:01:18 +00:00
mergify[bot]
4e6b5a9808 Fix BPF ELF layout (#16256) (#16261)
* Fix BPF ELF layout

* whitespace

(cherry picked from commit bcd89dd34c)

Co-authored-by: Jack May <jack@solana.com>
2021-03-31 09:56:57 +00:00
mergify[bot]
f24fbde43b Helpful const and Arg doc (#16248) (#16252)
(cherry picked from commit 67b747938f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-31 06:32:18 +00:00
Michael Vines
47f60c7607 Validator monitor now displays the max retransmit slot
(cherry picked from commit aac18d7564)
2021-03-30 21:57:23 -07:00
mergify[bot]
8b307ed409 security policy: Add out-of-scope section (bp #16249) (#16251)
* security policy: Add out-of-scope section

(cherry picked from commit e9e46ff521)

* Update SECURITY.md

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 700ebde474)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-31 04:49:17 +00:00
mergify[bot]
cf21719a07 Add get_max_retransmit_slot/get_max_shred_insert_slot to RpcClient (#16243)
(cherry picked from commit 2a1639836a)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-31 01:09:11 +00:00
mergify[bot]
3157b464c4 Align ProcessInstruction error handling (#16232) (#16238)
(cherry picked from commit ce7f7c2b6c)

Co-authored-by: Jack May <jack@solana.com>
2021-03-30 21:55:08 +00:00
mergify[bot]
2581db5748 docs: Reduce airdrop examples to 1 SOL (#16241)
(cherry picked from commit 2bcfbad653)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-30 21:52:42 +00:00
Trent Nelson
634959b3ab Bump version to v1.6.3 2021-03-30 16:17:47 +00:00
Trent Nelson
03b21f2e9d Bump version to v1.6.2 2021-03-30 00:06:01 -06:00
carllin
cc5565b17e Setup ReplayStage confirmation scaffolding for duplicate slots (#9698)
(cherry picked from commit 52703badfa)
2021-03-29 22:07:14 -06:00
mergify[bot]
50beef0b15 Allow incomplete features in frozen-abi (#16205)
(cherry picked from commit 9ba9d2a8ae)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-30 03:46:10 +00:00
mergify[bot]
06a54e1423 remove old code (#15988) (#15993)
(cherry picked from commit 9760fded2d)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-03-30 00:50:27 +00:00
mergify[bot]
4d731ecd08 eliminate lock on record (#15929) (#16073)
* eliminate lock on record

* use same error as MaxHeightReached

* clippy

* review feedback

* refactor should_tick code

* pr feedback

(cherry picked from commit 57ba86c821)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-03-30 00:46:13 +00:00
mergify[bot]
ee06789a66 sdk: Add try_from_slice_unchecked for Borsh (#16098) (#16158)
* sdk: Add try_from_slice_unchecked for Borsh

* Add tests

* Rename + clarify comment

* Rename back to unchecked

(cherry picked from commit cffa851e0f)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-03-29 23:15:34 +00:00
mergify[bot]
2dabe1d706 Add handling to close accounts to many-accounts bench (#16199) (#16201)
* gitignore farf

* Improve cli args

* Use derived addresses for accounts

* Add parameter to close every nth account created

(cherry picked from commit 1d145e1fc2)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-29 22:54:09 +00:00
Tyera Eulberg
3b1279a005 Future-aware enum name 2021-03-29 14:58:35 -06:00
mergify[bot]
5c9f85f28d Rpc: enable getConfirmedBlocks and getConfirmedBlocksWithLimit to return confirmed (not yet finalized) data (#16161) (#16198)
* Add commitment config capabilities

* Use rpc limit if no end_slot provided

* Limit to actually finalized blocks

* Support confirmed blocks in getConfirmedBlocks and getConfirmedBlocksWithLimit

* Update docs

* Add client plumbing

* Rename config enum

(cherry picked from commit 60ed8e2892)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-29 19:53:17 +00:00
mergify[bot]
e12dd46ef3 Derive PartialEq for StakeActivationState (#16196)
(cherry picked from commit 4e7bd45d4c)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-29 18:16:44 +00:00
mergify[bot]
c4fa03b478 Status cache improvements (#16174) (#16178)
(cherry picked from commit 5e5b63712b)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-03-29 10:11:16 -07:00
mergify[bot]
9fb749deb7 Print the rust version when building bpf programs (#16181) (#16183)
(cherry picked from commit abada56ba1)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-29 07:18:55 +00:00
mergify[bot]
bd48344de2 Fix handling of invoked ix accounts in program-test (#16170) (#16176)
(cherry picked from commit 27ab415ecc)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-29 01:55:11 +00:00
mergify[bot]
78e54f1d2c Implement mnemonic support for solana-keygen grind (solana-labs#9325) (#16108) (#16173)
* Implement mnemonic support for solana-keygen grind (solana-labs#9325)

* Updated to include feedback from review.

* Renaming as per review feedback

* Fixed an incorrectly transcribed underscore

* Properly re-use string constants.

(cherry picked from commit e50f598449)

Co-authored-by: bji <bryan@ischo.com>
2021-03-28 07:05:17 +00:00
mergify[bot]
76a6576976 sdk: Use u32::MAX from std to unbreak BPF builds (#16171) (#16172)
(cherry picked from commit aabe186e3f)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-27 17:05:53 +00:00
mergify[bot]
92ec1ae255 Switch to a single use (#16169)
(cherry picked from commit 16e4ccca13)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-27 06:58:31 +00:00
Michael Vines
0d203728cc Add RpcClient::get_stake_activation() 2021-03-26 22:33:06 -07:00
mergify[bot]
625773e5b8 Rpc: enable getConfirmedBlock and getConfirmedTransaction to return confirmed (not yet finalized) data (bp #16142) (#16160)
* Rpc: enable getConfirmedBlock and getConfirmedTransaction to return confirmed (not yet finalized) data (#16142)

* Add Blockstore block and tx apis that allow unrooted responses

* Add TransactionStatusMessage, and send on bank freeze; also refactor TransactionStatusSender

* Track highest slot with tx-status writes complete

* Rename and unpub fn

* Add commitment to GetConfirmed input configs

* Support confirmed blocks in getConfirmedBlock

* Support confirmed txs in getConfirmedTransaction

* Update sigs-for-addr2 comment

* Enable confirmed block in cli

* Enable confirmed transaction in cli

* Review comments

* Rename blockstore method

(cherry picked from commit 433f1ead1c)

# Conflicts:
#	core/src/replay_stage.rs

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-27 04:51:53 +00:00
mergify[bot]
a4cb1e45ae Only print skipped leader slot message when the node is actually leader (#16156) (#16164)
Also, check vote signature after the vote is signed

(cherry picked from commit 60b4771fc6)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-03-27 02:03:10 +00:00
mergify[bot]
8aded2778e Bump bpf-tools to version v1.4 (#16152) (#16154)
(cherry picked from commit 658ddd1c9c)

Co-authored-by: Dmitri Makarov <dmakarov@users.noreply.github.com>
2021-03-26 20:51:25 +00:00
mergify[bot]
d940c5b1a3 Skip leader slots until a vote lands (#15607) (#16147)
(cherry picked from commit b99ae8f334)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-03-26 19:07:24 +00:00
Trent Nelson
1be045df94 sq: optimize
(cherry picked from commit 482c027d3b)
2021-03-25 21:31:52 -06:00
Trent Nelson
86191911c7 perf: use saturating/checked integer arithmetic
(cherry picked from commit 834fae684b)
2021-03-25 21:31:52 -06:00
mergify[bot]
8f852d8a6b makes test_pull_request_time_pruning smaller (#16128) (#16144)
(cherry picked from commit b041b55028)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-26 01:20:26 +00:00
Kristofer Peterson
68a439f8da Refactored ShortU16Visitor::visit_seq() to reject overflows, extra leading zeros and ensure one-to-one encoding. 2021-03-26 01:20:22 +00:00
Trent Nelson
e021832708 sdk: ShortU16 - rename variables for clarity
ShortU16's implementation embeds its usage as the length of a
ShortVec, confusingly referring to both a 'len' and a 'size'
at the same time.
2021-03-26 01:20:22 +00:00
Trent Nelson
87b11aa187 sdk: Add ShortU16 deser test 2021-03-26 01:20:22 +00:00
mergify[bot]
7475a6f444 makes turbine peer computation consistent between broadcast and retransmit (#14910) (#16143)
get_broadcast_peers is using tvu_peers:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/broadcast_stage.rs#L362-L370
which is potentially inconsistent with retransmit_peers:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/cluster_info.rs#L1332-L1345

Also, the leader does not include its own contact-info when broadcasting
shreds:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/cluster_info.rs#L1324
but on the retransmit side, the slot leader is removed only _after_ neighbors and
children are computed:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/retransmit_stage.rs#L383-L384
So the turbine broadcast tree is different between the two stages.

This commit:
* Removes retransmit_peers. Broadcast and retransmit stages will use tvu_peers
  consistently.
* Retransmit stage removes slot leader _before_ computing children and
  neighbors.
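
A sketch of the invariant this establishes, with stand-in types: both stages derive the peer set the same way, excluding the leader up front, before any tree computation.

```rust
type Pubkey = u64; // stand-in

/// One peer set for both stages: start from tvu_peers and drop the slot
/// leader *before* computing neighbors/children, so broadcast and
/// retransmit derive the same turbine tree.
fn turbine_peers(mut tvu_peers: Vec<Pubkey>, slot_leader: &Pubkey) -> Vec<Pubkey> {
    tvu_peers.retain(|peer| peer != slot_leader);
    tvu_peers
}

fn main() {
    let peers = turbine_peers(vec![1, 2, 3], &2);
    assert_eq!(peers, vec![1, 3]); // leader excluded on both sides
}
```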

(cherry picked from commit 570fd3f810)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-26 00:16:48 +00:00
mergify[bot]
86ce650661 Add timeout for local cluster partition tests (bp #16123) (#16137)
* Add timeout for local cluster partition tests (#16123)

* Add timeout for local cluster partition tests

* fix optimistic conf test logs

* Bump instruction count assertions

(cherry picked from commit e817a6db00)

# Conflicts:
#	local-cluster/Cargo.toml

* Fix conflict

Co-authored-by: Justin Starry <justin@solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-25 22:56:05 +00:00
mergify[bot]
4dc5a53014 Show bpf-tools download progress (#16135)
(cherry picked from commit 07273bfa9e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-25 20:55:11 +00:00
mergify[bot]
5e35cf3536 program: Correct clamp in Message::signer_keys() (#16114)
(cherry picked from commit 8b3de72e2a)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-25 17:53:34 +00:00
Trent Nelson
e8a8d1efb3 clap-utils: Allow NullSigners outside sign-only mode
(cherry picked from commit 7f0ac6a67c)
2021-03-25 11:10:53 -06:00
mergify[bot]
defd9238fa Simplify account.rent_epoch handling for sysvar rent (bp #16049) (#16118)
* Simplify account.rent_epoch handling for sysvar rent (#16049)

* Add some code for special local testing

* Add comment to store_account_and_update_capitalization

* Simplify account.rent_epoch handling for sysvar rent

* Introduce *_for_test functions

* Add deprecation messages to existing api

(cherry picked from commit 6d5c6c17c5)

# Conflicts:
#	sdk/src/native_loader.rs

* Fix conflicts

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-03-25 17:17:43 +09:00
mergify[bot]
5f061dcea1 Support getBlockTime for unfinalized blocks (#16103) (#16110)
(cherry picked from commit a8ef29df27)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-25 04:18:00 +00:00
mergify[bot]
e6ee27a738 Add Exodus as Solana Mobile app option (#16100) (#16101)
* Add Exodus as Solana Mobile app option

* Update docs/src/wallet-guide/apps.md

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit ad47c63f27)

Co-authored-by: Davey <35187388+davidzelaya@users.noreply.github.com>
2021-03-24 21:34:58 +00:00
mergify[bot]
dd2d25d698 limits CrdsGossipPull::pull_request_time size (#15793) (#16097)
There is no pruning logic on CrdsGossipPull::pull_request_time
https://github.com/solana-labs/solana/blob/79ac1997d/core/src/crds_gossip_pull.rs#L172-L174
potentially allowing this to take too much memory.

Additionally, CrdsGossipPush::last_pushed_to is pruning recent push
timestamps:
https://github.com/solana-labs/solana/blob/79ac1997d/core/src/crds_gossip_push.rs#L275-L279
instead of the older ones.
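
A sketch of the bounded-map behavior such a fix implies, with stand-in types: when over capacity, evict the oldest timestamps, not the most recent ones.

```rust
use std::collections::HashMap;

type Pubkey = u64; // stand-in

/// Capacity-bounded pull_request_time: when over capacity, evict the
/// *oldest* entries (the last_pushed_to bug was pruning the recent ones).
fn record_request_time(times: &mut HashMap<Pubkey, u64>, node: Pubkey, now: u64, cap: usize) {
    times.insert(node, now);
    while times.len() > cap {
        let oldest = times
            .iter()
            .min_by_key(|(_, &t)| t)
            .map(|(&node, _)| node)
            .expect("len > cap > 0, so the map is non-empty");
        times.remove(&oldest);
    }
}

fn main() {
    let mut times = HashMap::new();
    for (node, t) in [(1, 10), (2, 20), (3, 30)] {
        record_request_time(&mut times, node, t, 2);
    }
    assert!(!times.contains_key(&1)); // oldest entry evicted
}
```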

Co-authored-by: Nathan Hawkins <utsl@utsl.org>
(cherry picked from commit a6c23648cb)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-24 20:05:04 +00:00
Dmitri Makarov
9096c3df02 Adjust BPF test programs instruction counts 2021-03-24 11:59:59 +01:00
Dmitri Makarov
9f94c2a9a0 Bump bpf-tools to version v1.3
This brings in the fix for increased compute budget that wasn't caught
when bpf-tools v1.2 were released.
2021-03-24 11:59:59 +01:00
Dmitri Makarov
34213da9f4 Bump bpf-tools to v1.2 and get rid of xargo 2021-03-24 11:59:59 +01:00
mergify[bot]
c3c4991c44 rpc: add getSlotLeaders method (#16057) (#16079)
(cherry picked from commit e7fd7d46cf)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-23 19:27:18 +00:00
mergify[bot]
9d37a33dcd buffers data shreds to make larger erasure coded sets (bp #15849) (#16074)
* buffers data shreds to make larger erasure coded sets (#15849)

Broadcast stage batches up to 8 entries:
https://github.com/solana-labs/solana/blob/79280b304/core/src/broadcast_stage/broadcast_utils.rs#L26-L29
which will be serialized into some number of shreds and chunked into FEC
sets of at most 32 shreds each:
https://github.com/solana-labs/solana/blob/79280b304/ledger/src/shred.rs#L576-L597
So depending on the size of entries, FEC sets can be small, which may
aggravate loss rate.
For example 16 FEC sets of 2:2 data/code shreds each have higher loss
rate than one 32:32 set.

This commit broadcasts data shreds immediately, but also buffers them
until it has a batch of 32 data shreds, at which point 32 coding shreds
are generated and broadcasted.
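
A standalone sketch of this buffering (placeholder types, not the real broadcast-stage API): data shreds go out immediately, while coding shreds are generated only once 32 buffered data shreds are available.

```rust
const FEC_SET_SIZE: usize = 32;

type Shred = u32; // placeholder for the real Shred type

#[derive(Default)]
struct CodingBuffer {
    pending: Vec<Shred>,
}

impl CodingBuffer {
    /// Returns coding shreds to broadcast for every complete 32-shred batch.
    fn push_data_shreds(&mut self, data: Vec<Shred>) -> Vec<Shred> {
        self.pending.extend(data);
        let mut coding = Vec::new();
        while self.pending.len() >= FEC_SET_SIZE {
            let batch: Vec<Shred> = self.pending.drain(..FEC_SET_SIZE).collect();
            coding.extend(generate_coding_shreds(&batch)); // 32:32 FEC set
        }
        coding
    }
}

/// Placeholder for Reed-Solomon parity generation (one coding shred per
/// data shred in a full set).
fn generate_coding_shreds(data: &[Shred]) -> Vec<Shred> {
    data.to_vec()
}

fn main() {
    let mut buf = CodingBuffer::default();
    assert!(buf.push_data_shreds((0..20).collect()).is_empty()); // buffered, sent without coding
    assert_eq!(buf.push_data_shreds((20..40).collect()).len(), 32); // one full 32:32 set
}
```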

(cherry picked from commit 4f82b897bc)

# Conflicts:
#	ledger/src/shred.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-23 18:23:09 +00:00
mergify[bot]
a04ca03fee renames is_last_in_fec_set back to is_last_data (#15848) (#16075)
https://github.com/solana-labs/solana/pull/10095
renamed is_last_data to is_last_in_fec_set. However, the code shows that
this is actually meant to indicate where the serialized data is
complete:
https://github.com/solana-labs/solana/blob/420174d3d/ledger/src/shred.rs#L599-L600
https://github.com/solana-labs/solana/blob/420174d3d/ledger/src/shred.rs#L229-L231

There are multiple FEC sets for each serialized `&[Entry]`, and this flag
does not mark the last shred in each FEC set (only the very last one, by
overlap). So the name is wrong and confusing.

(cherry picked from commit 3b85cbc504)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-23 16:59:47 +00:00
mergify[bot]
64ce4a6203 solana transfer now requires --allow-unfunded-recipient if the recipient doesn't exist (bp #16060) (#16067)
* transfer now requires --allow-unfunded-recipient if the recipient doesn't exist

(cherry picked from commit 3dff5c9dee)

* Avoid RPC in `--sign-only` mode

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
(cherry picked from commit 6271665ba6)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-23 03:54:42 +00:00
mergify[bot]
7ac3c9ec76 Handle blockstore insert dup checks (#16051) (#16066)
(cherry picked from commit d76ad33597)

Co-authored-by: carllin <carl@solana.com>
2021-03-23 00:49:10 +00:00
mergify[bot]
7d91515e8d Make getStakeActivation response consistent for undelegated accounts (#16038) (#16040)
(cherry picked from commit 2ec24d438f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-19 22:07:30 +00:00
mergify[bot]
4e3f2c3d2d program-test: Fix warp and staking issue (#16002) (#16031)
Since program-test creates a test genesis and then adds fees and rent,
some of the genesis accounts get rent-collected after warping.  Most
notably, `StakeConfig` gets rent-collected, causing any stake operations
to fail after warp.  This fix creates genesis with the `Rent` and
`FeeRateGovernor` actually used by the bank.
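
A standalone illustration of the failure mode, with made-up numbers and a simplified Rent (not the actual solana_sdk type): an account funded before the real rent schedule is applied can fall below the rent-exempt minimum and get rent-collected after warping forward.

```rust
struct Rent {
    lamports_per_byte_year: u64,
    exemption_threshold_years: u64, // simplified; illustrative only
}

impl Rent {
    fn minimum_balance(&self, data_len: u64) -> u64 {
        data_len * self.lamports_per_byte_year * self.exemption_threshold_years
    }
}

fn main() {
    let rent = Rent { lamports_per_byte_year: 3_480, exemption_threshold_years: 2 };
    let lamports_funded_at_genesis = 960; // chosen without consulting `rent`
    // After warping an epoch forward, this account is not rent-exempt and
    // gets rent-collected -- the StakeConfig failure described above.
    assert!(lamports_funded_at_genesis < rent.minimum_balance(100));
}
```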

(cherry picked from commit 6cc22e62d4)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-03-19 14:54:58 +00:00
mergify[bot]
8b67ba6d3d docs: SIGUSR1 killing wrapper shell scripts (#16009)
(cherry picked from commit 07dc522981)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-19 07:45:34 +00:00
mergify[bot]
c2ce68ab90 Sanitize instruction index when loading instruction from sysvar (#15942) (#16004)
(cherry picked from commit 4c5660ba7a)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-19 02:48:41 +00:00
mergify[bot]
fe87cb1cd1 Update to reqwest 0.11.2 (#16000)
(cherry picked from commit 02b81dd05d)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-18 22:12:01 +00:00
mergify[bot]
1c8f6a836a cli cleanup (#15990) (#15997)
(cherry picked from commit 067b390194)

Co-authored-by: Jack May <jack@solana.com>
2021-03-18 20:03:04 +00:00
mergify[bot]
3d5ff7968e rpc: Add config options limiting getConfirmedBlock response data (#15970) (#15995)
* Add new confirmed block struct

* Add RpcConfirmedBlockConfig options

* Configure block response based on new options

* Add client api, use in cli fetch_epoch_rewards

* Update docs

* Apply review suggestions

(cherry picked from commit aa54c468ea)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-18 19:33:01 +00:00
Tyera Eulberg
d6160f7744 Avoid panic when validator doesn't have performance samples (#15976)
(cherry picked from commit ba33c9e18e)
2021-03-18 08:28:31 -07:00
mergify[bot]
5e9ce99abf remote-wallet: Expose Ledger app settings (#15978)
(cherry picked from commit 2dabcac0da)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-18 09:13:24 +00:00
mergify[bot]
ebd6fe7acb Avoid a panic when --slots-per-epoch is less than MINIMUM_SLOTS_PER_EPOCH (#15975)
(cherry picked from commit 4ab98fff02)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-18 07:17:50 +00:00
mergify[bot]
9e91a2c2fd Add Close instruction and tooling to upgradeable loader (#15887) (#15972)
(cherry picked from commit 7f500d610c)

Co-authored-by: Jack May <jack@solana.com>
2021-03-18 06:02:57 +00:00
Michael Vines
899f57962a Add --slots-per-epoch argument
(cherry picked from commit 04c99cf7ea)
2021-03-17 17:25:51 -07:00
Michael Vines
3176b00e57 Add --slots-per-epoch validator
(cherry picked from commit c06ff47a90)
2021-03-17 17:25:51 -07:00
Jeff Washington (jwash)
08b9da8397 drop poh lock after record (#15930)
(cherry picked from commit 5460fb10a2)
2021-03-17 17:24:53 -07:00
mergify[bot]
2bc21ecba2 Allow unbounded wallclock processing time in tests (#15961) (#15966)
(cherry picked from commit f548a04fae)

Co-authored-by: carllin <carl@solana.com>
2021-03-18 00:22:06 +00:00
Jeff Washington (jwash)
5b2a65fab3 add metrics for tick producer and poh_recorder (#15931)
(cherry picked from commit 40997d0aef)
2021-03-17 16:36:50 -07:00
mergify[bot]
f5d56eabf3 Build full SPL in CI (bp #15886) (#15964)
* Build full SPL in CI

(cherry picked from commit 82269f1351)

* Avoid changing signature of ProgramTest::add_account

(cherry picked from commit 03180b502d)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-17 22:46:55 +00:00
Michael Vines
af45efb62c Notify the user when the --mint, --bpf-program, or --clone arguments are ignored
(cherry picked from commit 59c19d9fbf)
2021-03-17 14:10:14 -07:00
mergify[bot]
f528cda832 Ignore flaky test_banking_stage_entries_only and test_banking_stage_entryfication (#15959)
(cherry picked from commit 8a9b51952e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-17 20:34:30 +00:00
mergify[bot]
eeef9f4e59 Separate snapshot location (bp #15840) (#15956)
* Add option for separate snapshot location

(cherry picked from commit 6126878f509c69e23480a5ec22b3271e2b16e072)
(cherry picked from commit 0209d334bd)

* Apply suggestions from code review

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit cfb01e26dd)

* add missed suggestion

(cherry picked from commit a43b3674c7)

* Revert to snapshots

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 0b42379ed7)

* Revert to snapshots 2

(cherry picked from commit 20b53eb4b4)

* Revert to removing only tmp-

(cherry picked from commit a5d144b00f)

Co-authored-by: DimAn <diman@diman.io>
Co-authored-by: DimAn <andiman7000@gmail.com>
2021-03-17 20:25:18 +00:00
Michael Vines
32124b59e9 Download snapshot files with a tmp- prefix so they'll automatically be cleaned up if interrupted
(cherry picked from commit 58b980f9cd)
2021-03-17 10:18:18 -07:00
Michael Vines
aa9772f9c0 Replace solana-program-test when building example-helloworld 2021-03-17 09:08:41 -07:00
mergify[bot]
5f183bd773 Add helper for paring down signers to those required by a tx message (bp #15899) (#15938)
* sdk: Add accessor for signer pubkeys of a tx message

(cherry picked from commit bf33ce8906)

* clap-utils: Add helper to `CliSignerInfo` for getting signers for a message

(cherry picked from commit 4e99f1e634)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-17 07:48:47 +00:00
mergify[bot]
2238e5001b solana-install init can now select a pre-release from Github (#15936)
(cherry picked from commit d9176c1903)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-17 04:31:55 +00:00
mergify[bot]
79fa7ef55c CLI: Support dumping the TX message in sign-only mode (#15933)
(cherry picked from commit 672e9c640f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-17 04:13:21 +00:00
mergify[bot]
07df827411 Bump tokio to 1.1 (#15926) (#15928)
(cherry picked from commit 654449ce91)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-16 23:29:55 +00:00
mergify[bot]
a259ff0e72 Wallclock BankingStage Throttle (#15731) (#15890)
(cherry picked from commit c1ba265dd9)

Co-authored-by: carllin <carl@solana.com>
2021-03-16 21:12:59 +00:00
mergify[bot]
d7d3e767e7 fix: compute pre/post token balances on all accounts if token program present (#15900) (#15923)
* fix: compute pre/post token balances on all accounts if token program present

* fix: skip token program in balance query

* fix: prevent program ids from being collected

(cherry picked from commit 61112d4826)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-03-16 18:23:29 +00:00
mergify[bot]
6e8aa9af17 nit: fix spelling (#15908) (#15911)
(cherry picked from commit 5760cf0f41)

# Conflicts:
#	sdk/src/feature_set.rs

Co-authored-by: Jack May <jack@solana.com>
2021-03-16 10:58:39 -07:00
mergify[bot]
0236de7bc8 Encourage use of the default --ledger location (#15921)
(cherry picked from commit 1c261d293f)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-16 16:58:14 +00:00
mergify[bot]
899bd1572a Show flags for accounts in tx by solana confirm (#15804) (#15906)
* Show flags for accounts in tx by solana confirm

* Address review comments

* Improve comment a bit

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Further apply review suggestions

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 74aa32175b)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-03-16 10:43:06 +00:00
mergify[bot]
97ec4cd44e Cli: better estimate of epoch time elapsed/remaining (#15893) (#15918)
* Add rpc_client api for getRecentPerformanceSamples

* Prep fn for variable avg slot time

* Use recent-perf-samples to more-accurately estimate epoch completed times

* Spell out average

(cherry picked from commit 3726358f51)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-16 09:58:51 +00:00
mergify[bot]
5500970a7e Add cargo-bpf-test --no-run flag, matching cargo-test (#15916)
(cherry picked from commit eb19e11688)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-16 09:45:35 +00:00
Michael Vines
caea04d8d5 Pin solana crate versions to prevent downstream users from accidentally mixing crate versions 2021-03-16 08:41:28 +00:00
Michael Vines
b1a90c3580 =1.6.1 2021-03-16 08:41:28 +00:00
mergify[bot]
5bd4e38345 Charge compute budget for bytes passed via cpi (#15874) (#15905)
(cherry picked from commit ad9901d7c6)

Co-authored-by: Jack May <jack@solana.com>
2021-03-16 07:57:32 +00:00
mergify[bot]
fddba08571 Improve Instruction::new deprecation warning (#15896)
(cherry picked from commit 8567b41d5f)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-16 05:18:31 +00:00
Michael Vines
87963764fa Export tokio for program-test clients
(cherry picked from commit 430ed6d774)
2021-03-15 22:14:17 -07:00
mergify[bot]
b691a159dd increment_cargo_version.sh tune ups (bp #15880) (#15892)
* Disallow version bump with dirty working tree

(cherry picked from commit 853e735edf)

* Ignore `not_paths` for `*.md` files when bumping version

(cherry picked from commit 510760d81b)

* Also ignore `*/node_modules/*` paths when bumping version

(cherry picked from commit 2bf46b789f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-16 02:07:46 +00:00
mergify[bot]
5af1d48be8 Display actual account length (#15875) (#15884)
(cherry picked from commit 60e5fd11c9)

Co-authored-by: Jack May <jack@solana.com>
2021-03-16 01:01:25 +00:00
mergify[bot]
3b3ec3313f Fix real_number_string_trimmed zero-decimal behavior (#15873) (#15877)
* Add failing test

* Don't strip zeroes from zero-decimal amounts

* Add zero-case test

(cherry picked from commit c40bd5f394)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-15 21:33:01 +00:00
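Editor's note — the behavior under test, as a self-contained sketch; the real helper lives in the account-decoder crate, and this mirrors its contract rather than its exact code. Trailing zeros are only stripped when the mint actually has decimal places:

```rust
// Sketch of the fixed trimming rule: format `amount` with `decimals`
// fractional digits, then strip trailing zeros and a dangling decimal
// point -- but leave zero-decimal amounts untouched.
fn real_number_string_trimmed(amount: u64, decimals: u8) -> String {
    if decimals == 0 {
        // Zero-decimal mints: "10" must stay "10", not be trimmed to "1".
        return amount.to_string();
    }
    let decimals = decimals as usize;
    let padded = format!("{:0width$}", amount, width = decimals + 1);
    let (int_part, frac_part) = padded.split_at(padded.len() - decimals);
    format!("{}.{}", int_part, frac_part)
        .trim_end_matches('0')
        .trim_end_matches('.')
        .to_string()
}

fn main() {
    assert_eq!(real_number_string_trimmed(10, 0), "10"); // the fixed zero-decimal case
    assert_eq!(real_number_string_trimmed(1_500, 3), "1.5");
    assert_eq!(real_number_string_trimmed(1_000, 3), "1");
}
```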
Michael Vines
be00246fb5 Bump version to v1.6.1 2021-03-15 14:47:58 -06:00
Michael Vines
1d80ba9edf Update cargo lock files on version bump 2021-03-15 14:47:58 -06:00
mergify[bot]
4bcf976ecd Fix delinquent stake display (#15839)
(cherry picked from commit eab182188a)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-13 20:27:25 +00:00
1653 changed files with 108669 additions and 219587 deletions


@@ -1,28 +0,0 @@
name: Explorer_build&test_on_PR
on:
pull_request:
branches:
- master
paths:
- 'explorer/**'
jobs:
check-explorer:
runs-on: ubuntu-latest
defaults:
run:
working-directory: explorer
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-node@v2
with:
node-version: '14'
cache: 'npm'
cache-dependency-path: explorer/package-lock.json
- run: npm i -g npm@7
- run: npm ci
- run: npm run format
- run: npm run build
- run: npm run test


@@ -1,22 +0,0 @@
name : explorer_preview
on:
workflow_run:
workflows: ["Explorer_build&test_on_PR"]
# types:
# - completed
jobs:
explorer_preview:
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' }}
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: amondnet/vercel-action@v20
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }} # Required
github-token: ${{ secrets.GITHUB_TOKEN }} #Optional
vercel-org-id: ${{ secrets.ORG_ID}} #Required
vercel-project-id: ${{ secrets.PROJECT_ID}} #Required
working-directory: ./explorer
scope: ${{ secrets.TEAM_ID }}


@@ -1,46 +0,0 @@
name: Explorer_production_build&test
on:
push:
branches: [master]
paths:
- 'explorer/**'
jobs:
Explorer_production_build_test:
runs-on: ubuntu-latest
defaults:
run:
working-directory: explorer
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-node@v2
with:
node-version: '14'
cache: 'npm'
cache-dependency-path: explorer/package-lock.json
- run: npm i -g npm@7
- run: npm ci
- run: npm run format
- run: npm run build
- run: npm run test
Explorer_production_deploy:
needs: Explorer_production_build_test
runs-on: ubuntu-latest
defaults:
run:
working-directory: explorer
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: amondnet/vercel-action@v20
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }} # Required
github-token: ${{ secrets.GITHUB_TOKEN }} #Optional
vercel-args: '--prod' #for production
vercel-org-id: ${{ secrets.ORG_ID}} #Required
vercel-project-id: ${{ secrets.PROJECT_ID}} #Required
working-directory: ./explorer
scope: ${{ secrets.TEAM_ID }}


@@ -1,208 +0,0 @@
name : minimal
on:
push:
branches: [master]
pull_request:
branches: [master]
jobs:
Export_Github_Repositories:
runs-on: ubuntu-latest
env:
VERCEL_TOKEN: ${{secrets.VERCEL_TOKEN}}
GITHUB_TOKEN: ${{secrets.PAT_ANM}}
COMMIT_RANGE: ${{ github.event.before}}...${{ github.event.after}}
steps:
- name: Checkout repo
uses: actions/checkout@v2
with:
fetch-depth: 2
- run: echo "COMMIT_DIFF_RANGE=$(echo $COMMIT_RANGE)" >> $GITHUB_ENV
# - run: echo "$COMMIT_DIFF_RANGE"
- name: Set up Python
uses: actions/setup-python@v2
with:
GITHUB_TOKEN: ${{secrets.PAT_ANM}}
if: ${{ github.event_name == 'push' && 'cron'&& github.ref == 'refs/heads/master'}}
- name: cmd
run : |
.travis/export-github-repo.sh web3.js/ solana-web3.js
macos-artifacts:
needs: [Export_Github_Repositories]
strategy:
fail-fast: false
runs-on: macos-latest
if : ${{ github.event_name == 'api' && 'cron' || 'push' || startsWith(github.ref, 'refs/tags/v')}}
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup | Rust
uses: ATiltedTree/setup-rust@v1
with:
rust-version: stable
- name: release artifact
run: |
source ci/rust-version.sh
brew install coreutils
export PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
greadlink -f .
source ci/env.sh
rustup set profile default
ci/publish-tarball.sh
shell: bash
- name: Cache modules
uses: actions/cache@master
id: yarn-cache
with:
path: node_modules
key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
restore-keys: ${{ runner.os }}-yarn-
# - To stop from uploading on the production
# - uses: ochanje210/simple-s3-upload-action@master
# with:
# AWS_ACCESS_KEY_ID: ${{ secrets.AWS_KEY_ID }}
# AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY}}
# AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
# SOURCE_DIR: 'travis-s3-upload1'
# DEST_DIR: 'giitsol'
# - uses: ochanje210/simple-s3-upload-action@master
# with:
# AWS_ACCESS_KEY_ID: ${{ secrets.AWS_KEY_ID }}
# AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY}}
# AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
# SOURCE_DIR: './docs/'
# DEST_DIR: 'giitsol'
windows-artifact:
needs: [Export_Github_Repositories]
strategy:
fail-fast: false
runs-on: windows-latest
if : ${{ github.event_name == 'api' && 'cron' || 'push' || startsWith(github.ref, 'refs/tags/v')}}
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup | Rust
uses: ATiltedTree/setup-rust@v1
with:
rust-version: stable
release-artifact:
needs: windows-artifact
runs-on: windows-latest
if : ${{ github.event_name == 'api' && 'cron' || github.ref == 'refs/heads/master'}}
steps:
- name: release artifact
run: |
git clone git://git.openssl.org/openssl.git
cd openssl
make
make test
make install
openssl version
# choco install openssl
# vcpkg integrate install
# refreshenv
- name: Checkout repository
uses: actions/checkout@v2
- uses: actions/checkout@v2
- run: choco install msys2
- uses: actions/checkout@v2
- run: |
openssl version
bash ci/rust-version.sh
readlink -f .
bash ci/env.sh
rustup set profile default
bash ci/publish-tarball.sh
shell: bash
- name: Cache modules
uses: actions/cache@v1
id: yarn-cache
with:
path: node_modules
key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
restore-keys: ${{ runner.os }}-yarn-
# - To stop from uploading on the production
# - name: Config. aws cred
# uses: aws-actions/configure-aws-credentials@v1
# with:
# aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
# aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# aws-region: us-east-2
# - name: Deploy
# uses: shallwefootball/s3-upload-action@master
# with:
# folder: build
# aws_bucket: ${{ secrets.AWS_S3_BUCKET }}
# aws_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
# aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# destination_dir: /
# bucket-region: us-east-2
# delete-removed: true
# no-cache: true
# private: true
# Docs:
# needs: [windows-artifact,release-artifact]
# runs-on: ubuntu-latest
# env:
# GITHUB_TOKEN: ${{secrets.PAT_NEW}}
# GITHUB_EVENT_BEFORE: ${{ github.event.before }}
# GITHUB_EVENT_AFTER: ${{ github.event.after }}
# COMMIT_RANGE: ${{ github.event.before}}...${{ github.event.after}}
# steps:
# - name: Checkout repo
# uses: actions/checkout@v2
# with:
# fetch-depth: 2
# - name: docs
# if: ${{github.event_name == 'pull_request' || startsWith(github.ref, 'refs/tags/v')}}
# run: |
# touch .env
# echo "COMMIT_RANGE=($COMMIT_RANGE)" > .env
# source ci/env.sh
# .travis/channel_restriction.sh edge beta || exit 0
# .travis/affects.sh docs/ .travis || exit 0
# cd docs/
# source .travis/before_install.sh
# source .travis/script.sh
# - name: setup-node
# uses: actions/checkout@v2
# - name: setup-node
# uses: actions/setup-node@v2
# with:
# node-version: 'lts/*'
# - name: Cache
# uses: actions/cache@v1
# with:
# path: ~/.npm
# key: ${{ runner.OS }}-npm-cache-${{ hashFiles('**/package-lock.json') }}
# restore-keys: |
# ${{ runner.OS }}-npm-cache-2
# auto_bump:
# needs: [windows-artifact,release-artifact,Docs]
# runs-on: ubuntu-latest
# steps:
# - name : checkout repo
# uses: actions/checkout@v2
# with:
# fetch-depth: '0'
# - name: Bump version and push tag
# uses: anothrNick/github-tag-action@1.26.0
# env:
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# WITH_V: true
# DEFAULT_BUMP: patch


@@ -1,84 +0,0 @@
name: Web3
on:
push:
branches: [ master ]
paths:
- "web3.js/**"
pull_request:
branches: [ master ]
paths:
- "web3.js/**"
jobs:
# needed for grouping check-web3 strategies into one check for mergify
all-web3-checks:
runs-on: ubuntu-latest
needs:
- check-web3
steps:
- run: echo "Done"
web3-commit-lint:
runs-on: ubuntu-latest
# Set to true in order to avoid cancelling other workflow jobs.
# Mergify will still require web3-commit-lint for automerge
continue-on-error: true
defaults:
run:
working-directory: web3.js
steps:
- uses: actions/checkout@v2
with:
# maybe needed for base sha below
fetch-depth: 0
- uses: actions/setup-node@v2
with:
node-version: '16'
cache: 'npm'
cache-dependency-path: web3.js/package-lock.json
- run: npm ci
- name: commit-lint
if: ${{ github.event_name == 'pull_request' }}
run: bash commitlint.sh
env:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
check-web3:
runs-on: ubuntu-latest
defaults:
run:
working-directory: web3.js
strategy:
matrix:
node: [ '12', '14', '16' ]
name: Node ${{ matrix.node }}
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: ${{ matrix.node }}
cache: 'npm'
cache-dependency-path: web3.js/package-lock.json
- run: npm i -g npm@7
- run: npm ci
- run: npm run lint
- run: |
npm run build
ls -l lib
test -r lib/index.iife.js
test -r lib/index.cjs.js
test -r lib/index.esm.js
- run: npm run doc
- run: npm run codecov
- run: |
sh -c "$(curl -sSfL https://release.solana.com/edge/install)"
echo "$HOME/.local/share/solana/install/active_release/bin" >> $GITHUB_PATH
PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"
solana --version
- run: npm run test:live-with-test-validator

.gitignore (vendored): 2 changes

@@ -27,5 +27,3 @@ log-*/
/spl_*.so
.DS_Store
# scripts that may be generated by cargo *-bpf commands
**/cargo-*-bpf-child-script-*.sh


@@ -4,71 +4,24 @@
#
# https://doc.mergify.io/
pull_request_rules:
- name: label changes from community
conditions:
- author≠@core-contributors
- author≠mergify[bot]
- author≠dependabot[bot]
actions:
label:
add:
- community
- name: request review for community changes
conditions:
- author≠@core-contributors
- author≠mergify[bot]
- author≠dependabot[bot]
# Only request reviews from the pr subscribers group if no one
# has reviewed the community PR yet. These checks only match
# reviewers with admin, write or maintain permission on the repository.
- "#approved-reviews-by=0"
- "#commented-reviews-by=0"
- "#changes-requested-reviews-by=0"
actions:
request_reviews:
teams:
- "@solana-labs/community-pr-subscribers"
- name: automatic merge (squash) on CI success
conditions:
- and:
- status-success=buildkite/solana
- status-success=Travis CI - Pull Request
- status-success=ci-gate
- label=automerge
- author≠@dont-squash-my-commits
- or:
# only require travis success if docs files changed
- status-success=Travis CI - Pull Request
- -files~=^docs/
- or:
# only require explorer checks if explorer files changed
- status-success=check-explorer
- -files~=^explorer/
- or:
- and:
- status-success=all-web3-checks
- status-success=web3-commit-lint
# only require web3 checks if web3.js files changed
- -files~=^web3.js/
actions:
merge:
method: squash
# Join the dont-squash-my-commits group if you won't like your commits squashed
- name: automatic merge (rebase) on CI success
conditions:
- and:
- status-success=buildkite/solana
- status-success=Travis CI - Pull Request
- status-success=ci-gate
- label=automerge
- author=@dont-squash-my-commits
- or:
# only require explorer checks if explorer files changed
- status-success=check-explorer
- -files~=^explorer/
- or:
# only require web3 checks if web3.js files changed
- status-success=all-web3-checks
- -files~=^web3.js/
actions:
merge:
method: rebase
@@ -97,19 +50,35 @@ pull_request_rules:
label:
add:
- automerge
- name: v1.8 backport
- name: v1.4 backport
conditions:
- label=v1.8
- label=v1.4
actions:
backport:
ignore_conflicts: true
branches:
- v1.8
- name: v1.9 backport
- v1.4
- name: v1.5 backport
conditions:
- label=v1.9
- label=v1.5
actions:
backport:
ignore_conflicts: true
branches:
- v1.9
- v1.5
- name: v1.6 backport
conditions:
- label=v1.6
actions:
backport:
ignore_conflicts: true
branches:
- v1.6
- name: v1.7 backport
conditions:
- label=v1.7
actions:
backport:
ignore_conflicts: true
branches:
- v1.7


@@ -23,12 +23,12 @@ jobs:
depth: false
script:
- .travis/export-github-repo.sh web3.js/ solana-web3.js
- .travis/export-github-repo.sh explorer/ explorer
- &release-artifacts
if: type IN (api, cron) OR tag IS present
name: "macOS release artifacts"
os: osx
osx_image: xcode12
language: rust
rust:
- stable
@@ -36,9 +36,6 @@ jobs:
- source ci/rust-version.sh
- PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
- readlink -f .
- brew install gnu-tar
- PATH="/usr/local/opt/gnu-tar/libexec/gnubin:$PATH"
- tar --version
script:
- source ci/env.sh
- rustup set profile default
@@ -64,12 +61,6 @@ jobs:
- <<: *release-artifacts
name: "Windows release artifacts"
os: windows
install:
- choco install openssl
- export OPENSSL_DIR="C:\Program Files\OpenSSL-Win64"
- source ci/rust-version.sh
- PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
- readlink -f .
# Linux release artifacts are still built by ci/buildkite-secondary.yml
#- <<: *release-artifacts
# name: "Linux release artifacts"
@@ -77,12 +68,56 @@ jobs:
# before_install:
# - sudo apt-get install libssl-dev libudev-dev
# explorer pull request
- name: "explorer"
if: type = pull_request AND branch = master
language: node_js
node_js:
- "node"
cache:
directories:
- ~/.npm
before_install:
- .travis/affects.sh explorer/ .travis || travis_terminate 0
- cd explorer
script:
- npm run build
- npm run format
# web3.js pull request
- name: "web3.js"
if: type = pull_request AND branch = master
language: node_js
node_js:
- "lts/*"
services:
- docker
cache:
directories:
- ~/.npm
before_install:
- .travis/affects.sh web3.js/ .travis || travis_terminate 0
- cd web3.js/
- source .travis/before_install.sh
script:
- ../.travis/commitlint.sh
- source .travis/script.sh
# docs pull request
- name: "docs"
if: type IN (push, pull_request) OR tag IS present
language: node_js
node_js:
- "lts/*"
- "node"
services:
- docker


@@ -20,13 +20,13 @@ if [[ ! -f "$basedir"/commitlint.config.js ]]; then
exit 1
fi
if [[ -z $COMMIT_RANGE ]]; then
echo "Error: COMMIT_RANGE not defined"
if [[ -z $TRAVIS_COMMIT_RANGE ]]; then
echo "Error: TRAVIS_COMMIT_RANGE not defined"
exit 1
fi
cd "$basedir"
echo "Checking commits in COMMIT_RANGE: $COMMIT_RANGE"
echo "Checking commits in TRAVIS_COMMIT_RANGE: $TRAVIS_COMMIT_RANGE"
while IFS= read -r line; do
echo "$line" | npx commitlint
done < <(git log "$COMMIT_RANGE" --format=%s -- .)
done < <(git log "$TRAVIS_COMMIT_RANGE" --format=%s -- .)

Cargo.lock (generated): 4459 changes

File diff suppressed because it is too large.


@@ -1,9 +1,7 @@
[workspace]
members = [
"accountsdb-plugin-interface",
"accountsdb-plugin-manager",
"accountsdb-plugin-postgres",
"accounts-cluster-bench",
"bench-exchange",
"bench-streamer",
"bench-tps",
"accounts-bench",
@@ -11,7 +9,6 @@ members = [
"banks-client",
"banks-interface",
"banks-server",
"bucket_map",
"clap-utils",
"cli-config",
"cli-output",
@@ -19,13 +16,11 @@ members = [
"core",
"dos",
"download-utils",
"entry",
"faucet",
"frozen-abi",
"perf",
"validator",
"genesis",
"genesis-utils",
"gossip",
"install",
"keygen",
@@ -36,6 +31,7 @@ members = [
"log-analyzer",
"merkle-root-bench",
"merkle-tree",
"stake-o-matic",
"storage-bigtable",
"storage-proto",
"streamer",
@@ -43,29 +39,30 @@ members = [
"metrics",
"net-shaper",
"notifier",
"poh",
"poh-bench",
"program-test",
"programs/address-lookup-table",
"programs/address-lookup-table-tests",
"programs/secp256k1",
"programs/bpf_loader",
"programs/compute-budget",
"programs/budget",
"programs/config",
"programs/exchange",
"programs/failure",
"programs/noop",
"programs/ownable",
"programs/stake",
"programs/vest",
"programs/vote",
"rbpf-cli",
"remote-wallet",
"rpc",
"ramp-tps",
"runtime",
"runtime/store-tool",
"sdk",
"sdk/cargo-build-bpf",
"sdk/cargo-test-bpf",
"send-transaction-service",
"scripts",
"stake-accounts",
"sys-tuner",
"tokens",
"transaction-dos",
"transaction-status",
"account-decoder",
"upload-perf",
@@ -74,18 +71,11 @@ members = [
"cli",
"rayon-threadlimit",
"watchtower",
"replica-node",
"replica-lib",
"test-validator",
"rpc-test",
"client-test",
]
exclude = [
"programs/bpf",
]
# TODO: Remove once the "simd-accel" feature from the reed-solomon-erasure
# dependency is supported on Apple M1. v2 of the feature resolver is needed to
# specify arch-specific features.
resolver = "2"
[profile.dev]
split-debuginfo = "unpacked"


@@ -1,6 +1,6 @@
<p align="center">
<a href="https://solana.com">
<img alt="Solana" src="https://i.imgur.com/IKyzQ6T.png" width="250" />
<img alt="Solana" src="https://i.imgur.com/OMnvVEz.png" width="250" />
</a>
</p>
@@ -19,18 +19,12 @@ $ source $HOME/.cargo/env
$ rustup component add rustfmt
```
When building the master branch, please make sure you are using the latest stable rust version by running:
Please make sure you are always using the latest stable rust version by running:
```bash
$ rustup update
```
When building a specific release branch, you should check the rust version in `ci/rust-version.sh` and if necessary, install that version by running:
```bash
$ rustup install VERSION
```
Note that if this is not the latest rust version on your machine, cargo commands may require an [override](https://rust-lang.github.io/rustup/overrides.html) in order to use the correct version.
On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc. On Ubuntu:
```bash
@@ -38,12 +32,6 @@ $ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang make
```
On Mac M1s, make sure you set up your terminal & homebrew [to use](https://5balloons.info/correct-way-to-install-and-use-homebrew-on-m1-macs/) Rosetta. You can install it with:
```bash
$ softwareupdate --install-rosetta
```
## **2. Download the source code.**
```bash
@@ -57,6 +45,11 @@ $ cd solana
$ cargo build
```
## **4. Run a minimal local cluster.**
```bash
$ ./run.sh
```
# Testing
**Run the test suite:**
@@ -116,13 +109,13 @@ send us that patch!
All claims, content, designs, algorithms, estimates, roadmaps,
specifications, and performance measurements described in this project
are done with the Solana Foundation's ("SF") good faith efforts. It is up to
are done with the Solana Foundation's ("SF") best efforts. It is up to
the reader to check and validate their accuracy and truthfulness.
Furthermore, nothing in this project constitutes a solicitation for
Furthermore nothing in this project constitutes a solicitation for
investment.
Any content produced by SF or developer resources that SF provides, are
for educational and inspirational purposes only. SF does not encourage,
for educational and inspiration purposes only. SF does not encourage,
induce or sanction the deployment, integration or use of any such
applications (including the code comprising the Solana blockchain
protocol) in violation of applicable laws or regulations and hereby


@@ -18,24 +18,24 @@ Expect a response as fast as possible, within one business day at the latest.
We offer bounties for critical security issues. Please see below for more details.
Loss of Funds:
$2,000,000 USD in locked SOL tokens (locked for 12 months)
$500,000 USD in locked SOL tokens (locked for 12 months)
* Theft of funds without users signature from any account
* Theft of funds without users interaction in system, token, stake, vote programs
* Theft of funds that requires users signature - creating a vote program that drains the delegated stakes.
Consensus/Safety Violations:
$1,000,000 USD in locked SOL tokens (locked for 12 months)
$250,000 USD in locked SOL tokens (locked for 12 months)
* Consensus safety violation
* Tricking a validator to accept an optimistic confirmation or rooted slot without a double vote, etc..
Other Attacks:
$400,000 USD in locked SOL tokens (locked for 12 months)
$100,000 USD in locked SOL tokens (locked for 12 months)
* Protocol liveness attacks,
* Eclipse attacks,
* Remote attacks that partition the network,
DoS Attacks:
$100,000 USD in locked SOL tokens (locked for 12 months)
$25,000 USD in locked SOL tokens (locked for 12 months)
* Remote resource exaustion via Non-RPC protocols
RPC DoS/Crashes:
@@ -51,27 +51,13 @@ The following components are out of scope for the bounty program
* Attacks that require social engineering
Eligibility:
* The participant submitting the bug report shall follow the process outlined within this document
* The participant submitting the bug bounty shall follow the process outlined within this document
* Valid exploits can be eligible even if they are not successfully executed on the cluster
* Multiple submissions for the same class of exploit are still eligible for compensation, though may be compensated at a lower rate, however these will be assessed on a case-by-case basis
* Participants must complete KYC and sign the participation agreement here when the registrations are open https://solana.com/validator-registration. Security exploits will still be assessed and open for submission at all times. This needs only be done prior to distribution of tokens.
Payment of Bug Bounties:
* Payments for eligible bug reports are distributed monthly.
* Bounties for all bug reports submitted in a given month are paid out in the middle of the
following month.
* The SOL/USD conversion rate used for payments is the market price at the end of
the last day of the month for the month in which the bug was submitted.
* The reference for this price is the Closing Price given by Coingecko.com on
that date given here:
https://www.coingecko.com/en/coins/solana/historical_data/usd#panel
* For example, for all bugs submitted in March 2021, the SOL/USD price for bug
payouts is the Close price on 2021-03-31 of $19.49. This applies to all bugs
submitted in March 2021, to be paid in mid-April 2021.
* Bug bounties are paid out in
[stake accounts](https://solana.com/staking) with a
[lockup](https://docs.solana.com/staking/stake-accounts#lockups)
expiring 12 months from the last day of the month in which the bug was submitted.
Notes:
* All locked tokens can be staked during the lockup period
<a name="process"></a>
## Incident Response Process


@@ -1,30 +1,31 @@
[package]
name = "solana-account-decoder"
version = "1.10.0"
version = "1.6.28"
description = "Solana account decoder"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-account-decoder"
license = "Apache-2.0"
edition = "2021"
edition = "2018"
[dependencies]
base64 = "0.12.3"
bincode = "1.3.3"
bs58 = "0.4.0"
bincode = "1.3.1"
bs58 = "0.3.1"
bv = "0.11.1"
Inflector = "0.11.4"
lazy_static = "1.4.0"
serde = "1.0.131"
serde = "1.0.122"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-config-program = { path = "../programs/config", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.0" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
serde_json = "1.0.56"
solana-config-program = { path = "../programs/config", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
solana-stake-program = { path = "../programs/stake", version = "=1.6.28" }
solana-vote-program = { path = "../programs/vote", version = "=1.6.28" }
spl-token-v2-0 = { package = "spl-token", version = "=3.1.1", features = ["no-entrypoint"] }
thiserror = "1.0"
zstd = "0.9.0"
zstd = "0.5.1"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -17,10 +17,8 @@ pub mod validator_info;
use {
crate::parse_account_data::{parse_account_data, AccountAdditionalData, ParsedAccount},
solana_sdk::{
account::{ReadableAccount, WritableAccount},
clock::Epoch,
fee_calculator::FeeCalculator,
pubkey::Pubkey,
account::ReadableAccount, account::WritableAccount, clock::Epoch,
fee_calculator::FeeCalculator, pubkey::Pubkey,
},
std::{
io::{Read, Write},
@@ -30,7 +28,6 @@ use {
pub type StringAmount = String;
pub type StringDecimals = String;
pub const MAX_BASE58_BYTES: usize = 128;
/// A duplicate representation of an Account for pretty JSON serialization
#[derive(Serialize, Deserialize, Clone, Debug)]
@@ -63,17 +60,6 @@ pub enum UiAccountEncoding {
}
impl UiAccount {
fn encode_bs58<T: ReadableAccount>(
account: &T,
data_slice_config: Option<UiDataSliceConfig>,
) -> String {
if account.data().len() <= MAX_BASE58_BYTES {
bs58::encode(slice_data(account.data(), data_slice_config)).into_string()
} else {
"error: data too large for bs58 encoding".to_string()
}
}
pub fn encode<T: ReadableAccount>(
pubkey: &Pubkey,
account: &T,
@@ -82,34 +68,33 @@ impl UiAccount {
data_slice_config: Option<UiDataSliceConfig>,
) -> Self {
let data = match encoding {
UiAccountEncoding::Binary => {
let data = Self::encode_bs58(account, data_slice_config);
UiAccountData::LegacyBinary(data)
}
UiAccountEncoding::Base58 => {
let data = Self::encode_bs58(account, data_slice_config);
UiAccountData::Binary(data, encoding)
}
UiAccountEncoding::Binary => UiAccountData::LegacyBinary(
bs58::encode(slice_data(&account.data(), data_slice_config)).into_string(),
),
UiAccountEncoding::Base58 => UiAccountData::Binary(
bs58::encode(slice_data(&account.data(), data_slice_config)).into_string(),
encoding,
),
UiAccountEncoding::Base64 => UiAccountData::Binary(
base64::encode(slice_data(account.data(), data_slice_config)),
base64::encode(slice_data(&account.data(), data_slice_config)),
encoding,
),
UiAccountEncoding::Base64Zstd => {
let mut encoder = zstd::stream::write::Encoder::new(Vec::new(), 0).unwrap();
match encoder
.write_all(slice_data(account.data(), data_slice_config))
.write_all(slice_data(&account.data(), data_slice_config))
.and_then(|()| encoder.finish())
{
Ok(zstd_data) => UiAccountData::Binary(base64::encode(zstd_data), encoding),
Err(_) => UiAccountData::Binary(
base64::encode(slice_data(account.data(), data_slice_config)),
base64::encode(slice_data(&account.data(), data_slice_config)),
UiAccountEncoding::Base64,
),
}
}
UiAccountEncoding::JsonParsed => {
if let Ok(parsed_data) =
parse_account_data(pubkey, account.owner(), account.data(), additional_data)
parse_account_data(pubkey, &account.owner(), &account.data(), additional_data)
{
UiAccountData::Json(parsed_data)
} else {
@@ -204,10 +189,8 @@ fn slice_data(data: &[u8], data_slice_config: Option<UiDataSliceConfig>) -> &[u8
#[cfg(test)]
mod test {
use {
super::*,
solana_sdk::account::{Account, AccountSharedData},
};
use super::*;
use solana_sdk::account::{Account, AccountSharedData};
#[test]
fn test_slice_data() {


@@ -1,27 +1,25 @@
use {
crate::{
use crate::{
parse_bpf_loader::parse_bpf_upgradeable_loader,
parse_config::parse_config,
parse_nonce::parse_nonce,
parse_stake::parse_stake,
parse_sysvar::parse_sysvar,
parse_token::{parse_token, spl_token_id},
parse_token::{parse_token, spl_token_id_v2_0},
parse_vote::parse_vote,
},
inflector::Inflector,
serde_json::Value,
solana_sdk::{instruction::InstructionError, pubkey::Pubkey, stake, system_program, sysvar},
std::collections::HashMap,
thiserror::Error,
};
use inflector::Inflector;
use serde_json::Value;
use solana_sdk::{instruction::InstructionError, pubkey::Pubkey, system_program, sysvar};
use std::collections::HashMap;
use thiserror::Error;
lazy_static! {
static ref BPF_UPGRADEABLE_LOADER_PROGRAM_ID: Pubkey = solana_sdk::bpf_loader_upgradeable::id();
static ref CONFIG_PROGRAM_ID: Pubkey = solana_config_program::id();
static ref STAKE_PROGRAM_ID: Pubkey = stake::program::id();
static ref STAKE_PROGRAM_ID: Pubkey = solana_stake_program::id();
static ref SYSTEM_PROGRAM_ID: Pubkey = system_program::id();
static ref SYSVAR_PROGRAM_ID: Pubkey = sysvar::id();
static ref TOKEN_PROGRAM_ID: Pubkey = spl_token_id();
static ref TOKEN_PROGRAM_ID: Pubkey = spl_token_id_v2_0();
static ref VOTE_PROGRAM_ID: Pubkey = solana_vote_program::id();
pub static ref PARSABLE_PROGRAM_IDS: HashMap<Pubkey, ParsableAccount> = {
let mut m = HashMap::new();
@@ -114,14 +112,12 @@ pub fn parse_account_data(
#[cfg(test)]
mod test {
use {
super::*,
solana_sdk::nonce::{
use super::*;
use solana_sdk::nonce::{
state::{Data, Versions},
State,
},
solana_vote_program::vote_state::{VoteState, VoteStateVersions},
};
use solana_vote_program::vote_state::{VoteState, VoteStateVersions};
#[test]
fn test_parse_account_data() {


@@ -1,11 +1,9 @@
use {
crate::{
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
UiAccountData, UiAccountEncoding,
},
bincode::{deserialize, serialized_size},
solana_sdk::{bpf_loader_upgradeable::UpgradeableLoaderState, pubkey::Pubkey},
};
use bincode::{deserialize, serialized_size};
use solana_sdk::{bpf_loader_upgradeable::UpgradeableLoaderState, pubkey::Pubkey};
pub fn parse_bpf_upgradeable_loader(
data: &[u8],
@@ -92,7 +90,9 @@ pub struct UiProgramData {
#[cfg(test)]
mod test {
use {super::*, bincode::serialize, solana_sdk::pubkey::Pubkey};
use super::*;
use bincode::serialize;
use solana_sdk::pubkey::Pubkey;
#[test]
fn test_parse_bpf_upgradeable_loader_accounts() {


@@ -1,19 +1,15 @@
use {
crate::{
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
validator_info,
},
bincode::deserialize,
serde_json::Value,
solana_config_program::{get_config_data, ConfigKeys},
solana_sdk::{
pubkey::Pubkey,
stake::config::{self as stake_config, Config as StakeConfig},
},
};
use bincode::deserialize;
use serde_json::Value;
use solana_config_program::{get_config_data, ConfigKeys};
use solana_sdk::pubkey::Pubkey;
use solana_stake_program::config::Config as StakeConfig;
pub fn parse_config(data: &[u8], pubkey: &Pubkey) -> Result<ConfigAccountType, ParseAccountError> {
let parsed_account = if pubkey == &stake_config::id() {
let parsed_account = if pubkey == &solana_stake_program::config::id() {
get_config_data(data)
.ok()
.and_then(|data| deserialize::<StakeConfig>(data).ok())
@@ -41,7 +37,7 @@ fn parse_config_data<T>(data: &[u8], keys: Vec<(Pubkey, bool)>) -> Option<UiConf
where
T: serde::de::DeserializeOwned,
{
let config_data: T = deserialize(get_config_data(data).ok()?).ok()?;
let config_data: T = deserialize(&get_config_data(data).ok()?).ok()?;
let keys = keys
.iter()
.map(|key| UiConfigKey {
@@ -91,10 +87,11 @@ pub struct UiConfig<T> {
#[cfg(test)]
mod test {
use {
super::*, crate::validator_info::ValidatorInfo, serde_json::json,
solana_config_program::create_config_account, solana_sdk::account::ReadableAccount,
};
use super::*;
use crate::validator_info::ValidatorInfo;
use serde_json::json;
use solana_config_program::create_config_account;
use solana_sdk::account::ReadableAccount;
#[test]
fn test_parse_config() {
@@ -104,7 +101,11 @@ mod test {
};
let stake_config_account = create_config_account(vec![], &stake_config, 10);
assert_eq!(
parse_config(stake_config_account.data(), &stake_config::id()).unwrap(),
parse_config(
&stake_config_account.data(),
&solana_stake_program::config::id()
)
.unwrap(),
ConfigAccountType::StakeConfig(UiStakeConfig {
warmup_cooldown_rate: 0.25,
slash_penalty: 50,
@@ -124,7 +125,7 @@ mod test {
10,
);
assert_eq!(
parse_config(validator_info_config_account.data(), &info_pubkey).unwrap(),
parse_config(&validator_info_config_account.data(), &info_pubkey).unwrap(),
ConfigAccountType::ValidatorInfo(UiConfig {
keys: vec![
UiConfigKey {


@@ -1,9 +1,7 @@
use {
crate::{parse_account_data::ParseAccountError, UiFeeCalculator},
solana_sdk::{
use crate::{parse_account_data::ParseAccountError, UiFeeCalculator};
use solana_sdk::{
instruction::InstructionError,
nonce::{state::Versions, State},
},
};
pub fn parse_nonce(data: &[u8]) -> Result<UiNonceState, ParseAccountError> {
@@ -44,16 +42,14 @@ pub struct UiNonceData {
#[cfg(test)]
mod test {
use {
super::*,
solana_sdk::{
use super::*;
use solana_sdk::{
hash::Hash,
nonce::{
state::{Data, Versions},
State,
},
pubkey::Pubkey,
},
};
#[test]


@@ -1,14 +1,10 @@
use {
crate::{
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount,
},
bincode::deserialize,
solana_sdk::{
clock::{Epoch, UnixTimestamp},
stake::state::{Authorized, Delegation, Lockup, Meta, Stake, StakeState},
},
};
use bincode::deserialize;
use solana_sdk::clock::{Epoch, UnixTimestamp};
use solana_stake_program::stake_state::{Authorized, Delegation, Lockup, Meta, Stake, StakeState};
pub fn parse_stake(data: &[u8]) -> Result<StakeAccountType, ParseAccountError> {
let stake_state: StakeState = deserialize(data)
@@ -136,7 +132,8 @@ impl From<Delegation> for UiDelegation {
#[cfg(test)]
mod test {
use {super::*, bincode::serialize};
use super::*;
use bincode::serialize;
#[test]
fn test_parse_stake() {


@@ -1,13 +1,10 @@
#[allow(deprecated)]
use solana_sdk::sysvar::{fees::Fees, recent_blockhashes::RecentBlockhashes};
use {
crate::{
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount, UiFeeCalculator,
},
bincode::deserialize,
bv::BitVec,
solana_sdk::{
};
use bincode::deserialize;
use bv::BitVec;
use solana_sdk::{
clock::{Clock, Epoch, Slot, UnixTimestamp},
epoch_schedule::EpochSchedule,
pubkey::Pubkey,
@@ -15,12 +12,10 @@ use {
slot_hashes::SlotHashes,
slot_history::{self, SlotHistory},
stake_history::{StakeHistory, StakeHistoryEntry},
sysvar::{self, rewards::Rewards},
},
sysvar::{self, fees::Fees, recent_blockhashes::RecentBlockhashes, rewards::Rewards},
};
pub fn parse_sysvar(data: &[u8], pubkey: &Pubkey) -> Result<SysvarAccountType, ParseAccountError> {
#[allow(deprecated)]
let parsed_account = {
if pubkey == &sysvar::clock::id() {
deserialize::<Clock>(data)
@@ -96,9 +91,7 @@ pub fn parse_sysvar(data: &[u8], pubkey: &Pubkey) -> Result<SysvarAccountType, P
pub enum SysvarAccountType {
Clock(UiClock),
EpochSchedule(EpochSchedule),
#[allow(deprecated)]
Fees(UiFees),
#[allow(deprecated)]
RecentBlockhashes(Vec<UiRecentBlockhashesEntry>),
Rent(UiRent),
Rewards(UiRewards),
@@ -134,7 +127,6 @@ impl From<Clock> for UiClock {
pub struct UiFees {
pub fee_calculator: UiFeeCalculator,
}
#[allow(deprecated)]
impl From<Fees> for UiFees {
fn from(fees: Fees) -> Self {
Self {
@@ -220,17 +212,14 @@ pub struct UiStakeHistoryEntry {
#[cfg(test)]
mod test {
#[allow(deprecated)]
use solana_sdk::sysvar::recent_blockhashes::IterItem;
use {
super::*,
solana_sdk::{account::create_account_for_test, fee_calculator::FeeCalculator, hash::Hash},
use super::*;
use solana_sdk::{
account::create_account_for_test, fee_calculator::FeeCalculator, hash::Hash,
sysvar::recent_blockhashes::IterItem,
};
#[test]
fn test_parse_sysvars() {
let hash = Hash::new(&[1; 32]);
let clock_sysvar = create_account_for_test(&Clock::default());
assert_eq!(
parse_sysvar(&clock_sysvar.data, &sysvar::clock::id()).unwrap(),
@@ -250,16 +239,19 @@ mod test {
SysvarAccountType::EpochSchedule(epoch_schedule),
);
#[allow(deprecated)]
{
let fees_sysvar = create_account_for_test(&Fees::default());
assert_eq!(
parse_sysvar(&fees_sysvar.data, &sysvar::fees::id()).unwrap(),
SysvarAccountType::Fees(UiFees::default()),
);
let recent_blockhashes: RecentBlockhashes =
vec![IterItem(0, &hash, 10)].into_iter().collect();
let hash = Hash::new(&[1; 32]);
let fee_calculator = FeeCalculator {
lamports_per_signature: 10,
};
let recent_blockhashes: RecentBlockhashes = vec![IterItem(0, &hash, &fee_calculator)]
.into_iter()
.collect();
let recent_blockhashes_sysvar = create_account_for_test(&recent_blockhashes);
assert_eq!(
parse_sysvar(
@@ -269,10 +261,9 @@ mod test {
.unwrap(),
SysvarAccountType::RecentBlockhashes(vec![UiRecentBlockhashesEntry {
blockhash: hash.to_string(),
fee_calculator: FeeCalculator::new(10).into(),
fee_calculator: fee_calculator.into(),
}]),
);
}
let rent = Rent {
lamports_per_byte_year: 10,


@@ -1,37 +1,35 @@
use {
crate::{
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount, StringDecimals,
},
solana_sdk::pubkey::Pubkey,
spl_token::{
};
use solana_sdk::pubkey::Pubkey;
use spl_token_v2_0::{
solana_program::{
program_option::COption, program_pack::Pack, pubkey::Pubkey as SplTokenPubkey,
},
state::{Account, AccountState, Mint, Multisig},
},
std::str::FromStr,
};
use std::str::FromStr;
// A helper function to convert spl_token::id() as spl_sdk::pubkey::Pubkey to
// A helper function to convert spl_token_v2_0::id() as spl_sdk::pubkey::Pubkey to
// solana_sdk::pubkey::Pubkey
pub fn spl_token_id() -> Pubkey {
Pubkey::new_from_array(spl_token::id().to_bytes())
pub fn spl_token_id_v2_0() -> Pubkey {
Pubkey::new_from_array(spl_token_v2_0::id().to_bytes())
}
// A helper function to convert spl_token::native_mint::id() as spl_sdk::pubkey::Pubkey to
// A helper function to convert spl_token_v2_0::native_mint::id() as spl_sdk::pubkey::Pubkey to
// solana_sdk::pubkey::Pubkey
pub fn spl_token_native_mint() -> Pubkey {
Pubkey::new_from_array(spl_token::native_mint::id().to_bytes())
pub fn spl_token_v2_0_native_mint() -> Pubkey {
Pubkey::new_from_array(spl_token_v2_0::native_mint::id().to_bytes())
}
// A helper function to convert a solana_sdk::pubkey::Pubkey to spl_sdk::pubkey::Pubkey
pub fn spl_token_pubkey(pubkey: &Pubkey) -> SplTokenPubkey {
pub fn spl_token_v2_0_pubkey(pubkey: &Pubkey) -> SplTokenPubkey {
SplTokenPubkey::new_from_array(pubkey.to_bytes())
}
// A helper function to convert a spl_sdk::pubkey::Pubkey to solana_sdk::pubkey::Pubkey
pub fn pubkey_from_spl_token(pubkey: &SplTokenPubkey) -> Pubkey {
pub fn pubkey_from_spl_token_v2_0(pubkey: &SplTokenPubkey) -> Pubkey {
Pubkey::new_from_array(pubkey.to_bytes())
}
@@ -118,7 +116,6 @@ pub fn parse_token(
#[derive(Debug, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "camelCase", tag = "type", content = "info")]
#[allow(clippy::large_enum_variant)]
pub enum TokenAccountType {
Account(UiTokenAccount),
Mint(UiMint),


@@ -1,11 +1,9 @@
use {
crate::{parse_account_data::ParseAccountError, StringAmount},
solana_sdk::{
use crate::{parse_account_data::ParseAccountError, StringAmount};
use solana_sdk::{
clock::{Epoch, Slot},
pubkey::Pubkey,
},
solana_vote_program::vote_state::{BlockTimestamp, Lockout, VoteState},
};
use solana_vote_program::vote_state::{BlockTimestamp, Lockout, VoteState};
pub fn parse_vote(data: &[u8]) -> Result<VoteAccountType, ParseAccountError> {
let mut vote_state = VoteState::deserialize(data).map_err(ParseAccountError::from)?;
@@ -123,7 +121,8 @@ struct UiEpochCredits {
#[cfg(test)]
mod test {
use {super::*, solana_vote_program::vote_state::VoteStateVersions};
use super::*;
use solana_vote_program::vote_state::VoteStateVersions;
#[test]
fn test_parse_vote() {


@@ -1,22 +1,24 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-accounts-bench"
version = "1.10.0"
version = "1.6.28"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
log = "0.4.14"
rayon = "1.5.1"
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
log = "0.4.11"
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.6.28" }
solana-runtime = { path = "../runtime", version = "=1.6.28" }
solana-measure = { path = "../measure", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
solana-version = { path = "../version", version = "=1.6.28" }
rand = "0.7.0"
clap = "2.33.1"
crossbeam-channel = "0.4"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,19 +1,15 @@
#![allow(clippy::integer_arithmetic)]
#[macro_use]
extern crate log;
use {
clap::{crate_description, crate_name, value_t, App, Arg},
rayon::prelude::*,
solana_measure::measure::Measure,
solana_runtime::{
use clap::{crate_description, crate_name, value_t, App, Arg};
use rayon::prelude::*;
use solana_measure::measure::Measure;
use solana_runtime::{
accounts::{create_test_accounts, update_accounts_bench, Accounts},
accounts_db::AccountShrinkThreshold,
accounts_index::AccountSecondaryIndexes,
ancestors::Ancestors,
},
solana_sdk::{genesis_config::ClusterType, pubkey::Pubkey},
std::{env, fs, path::PathBuf},
accounts_index::{AccountSecondaryIndexes, Ancestors},
};
use solana_sdk::{genesis_config::ClusterType, pubkey::Pubkey};
use std::{env, fs, path::PathBuf};
fn main() {
solana_logger::setup();
@@ -62,12 +58,11 @@ fn main() {
if fs::remove_dir_all(path.clone()).is_err() {
println!("Warning: Couldn't remove {:?}", path);
}
let accounts = Accounts::new_with_config_for_benches(
let accounts = Accounts::new_with_config(
vec![path],
&ClusterType::Testnet,
AccountSecondaryIndexes::default(),
false,
AccountShrinkThreshold::default(),
);
println!("Creating {} accounts", num_accounts);
let mut create_time = Measure::start("create accounts");
@@ -92,19 +87,17 @@ fn main() {
num_slots,
create_time
);
let mut ancestors = Vec::with_capacity(num_slots);
ancestors.push(0);
let mut ancestors: Ancestors = vec![(0, 0)].into_iter().collect();
for i in 1..num_slots {
ancestors.push(i as u64);
ancestors.insert(i as u64, i - 1);
accounts.add_root(i as u64);
}
let ancestors = Ancestors::from(ancestors);
let mut elapsed = vec![0; iterations];
let mut elapsed_store = vec![0; iterations];
for x in 0..iterations {
if clean {
let mut time = Measure::start("clean");
accounts.accounts_db.clean_accounts(None, false, None);
accounts.accounts_db.clean_accounts(None);
time.stop();
println!("{}", time);
for slot in 0..num_slots {
@@ -123,9 +116,6 @@ fn main() {
solana_sdk::clock::Slot::default(),
&ancestors,
None,
false,
None,
false,
);
time_store.stop();
if results != results_store {


@@ -1,8 +1,8 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-accounts-cluster-bench"
version = "1.10.0"
version = "1.6.28"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,28 +10,25 @@ publish = false
[dependencies]
clap = "2.33.1"
log = "0.4.14"
log = "0.4.11"
rand = "0.7.0"
rayon = "1.5.1"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.0" }
solana-client = { path = "../client", version = "=1.10.0" }
solana-core = { path = "../core", version = "=1.10.0" }
solana-faucet = { path = "../faucet", version = "=1.10.0" }
solana-gossip = { path = "../gossip", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-net-utils = { path = "../net-utils", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-test-validator = { path = "../test-validator", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
rayon = "1.4.1"
solana-account-decoder = { path = "../account-decoder", version = "=1.6.28" }
solana-clap-utils = { path = "../clap-utils", version = "=1.6.28" }
solana-client = { path = "../client", version = "=1.6.28" }
solana-core = { path = "../core", version = "=1.6.28" }
solana-measure = { path = "../measure", version = "=1.6.28" }
solana-logger = { path = "../logger", version = "=1.6.28" }
solana-net-utils = { path = "../net-utils", version = "=1.6.28" }
solana-faucet = { path = "../faucet", version = "=1.6.28" }
solana-runtime = { path = "../runtime", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
solana-transaction-status = { path = "../transaction-status", version = "=1.6.28" }
solana-version = { path = "../version", version = "=1.6.28" }
spl-token-v2-0 = { package = "spl-token", version = "=3.1.1", features = ["no-entrypoint"] }
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "=1.10.0" }
solana-local-cluster = { path = "../local-cluster", version = "=1.6.28" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,40 +1,41 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, value_t, values_t_or_exit, App, Arg},
log::*,
rand::{thread_rng, Rng},
rayon::prelude::*,
solana_account_decoder::parse_token::spl_token_pubkey,
solana_clap_utils::input_parsers::pubkey_of,
solana_client::{rpc_client::RpcClient, transaction_executor::TransactionExecutor},
solana_faucet::faucet::{request_airdrop_transaction, FAUCET_PORT},
solana_gossip::gossip_service::discover,
solana_runtime::inline_spl_token,
solana_sdk::{
use clap::{crate_description, crate_name, value_t, values_t_or_exit, App, Arg};
use log::*;
use rand::{thread_rng, Rng};
use rayon::prelude::*;
use solana_account_decoder::parse_token::spl_token_v2_0_pubkey;
use solana_clap_utils::input_parsers::pubkey_of;
use solana_client::rpc_client::RpcClient;
use solana_core::gossip_service::discover;
use solana_faucet::faucet::{request_airdrop_transaction, FAUCET_PORT};
use solana_measure::measure::Measure;
use solana_runtime::inline_spl_token_v2_0;
use solana_sdk::{
commitment_config::CommitmentConfig,
instruction::{AccountMeta, Instruction},
message::Message,
pubkey::Pubkey,
rpc_port::DEFAULT_RPC_PORT,
signature::{read_keypair_file, Keypair, Signer},
signature::{read_keypair_file, Keypair, Signature, Signer},
system_instruction, system_program,
timing::timestamp,
transaction::Transaction,
},
solana_streamer::socket::SocketAddrSpace,
solana_transaction_status::parse_token::spl_token_instruction,
std::{
cmp::min,
};
use solana_transaction_status::parse_token::spl_token_v2_0_instruction;
use std::{
net::SocketAddr,
process::exit,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
atomic::{AtomicBool, AtomicU64, Ordering},
Arc, RwLock,
},
thread::sleep,
thread::{sleep, Builder, JoinHandle},
time::{Duration, Instant},
},
};
// Create and close messages both require 2 signatures; if transaction construction changes, update
// this magic number
const NUM_SIGNATURES: u64 = 2;
pub fn airdrop_lamports(
client: &RpcClient,
faucet_addr: &SocketAddr,
@@ -53,8 +54,8 @@ pub fn airdrop_lamports(
id.pubkey(),
);
let blockhash = client.get_latest_blockhash().unwrap();
match request_airdrop_transaction(faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
let (blockhash, _fee_calculator) = client.get_recent_blockhash().unwrap();
match request_airdrop_transaction(&faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
Ok(transaction) => {
let mut tries = 0;
loop {
@@ -98,6 +99,160 @@ pub fn airdrop_lamports(
true
}
// signature, timestamp, id
type PendingQueue = Vec<(Signature, u64, u64)>;
struct TransactionExecutor {
sig_clear_t: JoinHandle<()>,
sigs: Arc<RwLock<PendingQueue>>,
cleared: Arc<RwLock<Vec<u64>>>,
exit: Arc<AtomicBool>,
counter: AtomicU64,
client: RpcClient,
}
impl TransactionExecutor {
fn new(entrypoint_addr: SocketAddr) -> Self {
let sigs = Arc::new(RwLock::new(Vec::new()));
let cleared = Arc::new(RwLock::new(Vec::new()));
let exit = Arc::new(AtomicBool::new(false));
let sig_clear_t = Self::start_sig_clear_thread(&exit, &sigs, &cleared, entrypoint_addr);
let client =
RpcClient::new_socket_with_commitment(entrypoint_addr, CommitmentConfig::confirmed());
Self {
sigs,
cleared,
sig_clear_t,
exit,
counter: AtomicU64::new(0),
client,
}
}
fn num_outstanding(&self) -> usize {
self.sigs.read().unwrap().len()
}
fn push_transactions(&self, txs: Vec<Transaction>) -> Vec<u64> {
let mut ids = vec![];
let new_sigs = txs.into_iter().filter_map(|tx| {
let id = self.counter.fetch_add(1, Ordering::Relaxed);
ids.push(id);
match self.client.send_transaction(&tx) {
Ok(sig) => {
return Some((sig, timestamp(), id));
}
Err(e) => {
info!("error: {:#?}", e);
}
}
None
});
let mut sigs_w = self.sigs.write().unwrap();
sigs_w.extend(new_sigs);
ids
}
fn drain_cleared(&self) -> Vec<u64> {
std::mem::take(&mut *self.cleared.write().unwrap())
}
fn close(self) {
self.exit.store(true, Ordering::Relaxed);
self.sig_clear_t.join().unwrap();
}
fn start_sig_clear_thread(
exit: &Arc<AtomicBool>,
sigs: &Arc<RwLock<PendingQueue>>,
cleared: &Arc<RwLock<Vec<u64>>>,
entrypoint_addr: SocketAddr,
) -> JoinHandle<()> {
let sigs = sigs.clone();
let exit = exit.clone();
let cleared = cleared.clone();
Builder::new()
.name("sig_clear".to_string())
.spawn(move || {
let client = RpcClient::new_socket_with_commitment(
entrypoint_addr,
CommitmentConfig::confirmed(),
);
let mut success = 0;
let mut error_count = 0;
let mut timed_out = 0;
let mut last_log = Instant::now();
while !exit.load(Ordering::Relaxed) {
let sigs_len = sigs.read().unwrap().len();
if sigs_len > 0 {
let mut sigs_w = sigs.write().unwrap();
let mut start = Measure::start("sig_status");
let statuses: Vec<_> = sigs_w
.chunks(200)
.flat_map(|sig_chunk| {
let only_sigs: Vec<_> = sig_chunk.iter().map(|s| s.0).collect();
client
.get_signature_statuses(&only_sigs)
.expect("status fail")
.value
})
.collect();
let mut num_cleared = 0;
let start_len = sigs_w.len();
let now = timestamp();
let mut new_ids = vec![];
let mut i = 0;
let mut j = 0;
while i != sigs_w.len() {
let mut retain = true;
let sent_ts = sigs_w[i].1;
if let Some(e) = &statuses[j] {
debug!("error: {:?}", e);
if e.status.is_ok() {
success += 1;
} else {
error_count += 1;
}
num_cleared += 1;
retain = false;
} else if now - sent_ts > 30_000 {
retain = false;
timed_out += 1;
}
if !retain {
new_ids.push(sigs_w.remove(i).2);
} else {
i += 1;
}
j += 1;
}
let final_sigs_len = sigs_w.len();
drop(sigs_w);
cleared.write().unwrap().extend(new_ids);
start.stop();
debug!(
"sigs len: {:?} success: {} took: {}ms cleared: {}/{}",
final_sigs_len,
success,
start.as_ms(),
num_cleared,
start_len,
);
if last_log.elapsed().as_millis() > 5000 {
info!(
"success: {} error: {} timed_out: {}",
success, error_count, timed_out,
);
last_log = Instant::now();
}
}
sleep(Duration::from_millis(200));
}
})
.unwrap()
}
}
struct SeedTracker {
max_created: Arc<AtomicU64>,
max_closed: Arc<AtomicU64>,
@@ -118,7 +273,7 @@ fn make_create_message(
.into_iter()
.map(|_| {
let program_id = if mint.is_some() {
inline_spl_token::id()
inline_spl_token_v2_0::id()
} else {
system_program::id()
};
@@ -135,12 +290,12 @@ fn make_create_message(
&program_id,
)];
if let Some(mint_address) = mint {
instructions.push(spl_token_instruction(
spl_token::instruction::initialize_account(
&spl_token::id(),
&spl_token_pubkey(&to_pubkey),
&spl_token_pubkey(&mint_address),
&spl_token_pubkey(&base_keypair.pubkey()),
instructions.push(spl_token_v2_0_instruction(
spl_token_v2_0::instruction::initialize_account(
&spl_token_v2_0::id(),
&spl_token_v2_0_pubkey(&to_pubkey),
&spl_token_v2_0_pubkey(&mint_address),
&spl_token_v2_0_pubkey(&base_keypair.pubkey()),
)
.unwrap(),
));
@@ -148,8 +303,8 @@ fn make_create_message(
instructions
})
.flatten()
.collect();
let instructions: Vec<_> = instructions.into_iter().flatten().collect();
Message::new(&instructions, Some(&keypair.pubkey()))
}
@@ -157,48 +312,42 @@ fn make_create_message(
fn make_close_message(
keypair: &Keypair,
base_keypair: &Keypair,
max_created: Arc<AtomicU64>,
max_closed: Arc<AtomicU64>,
max_closed_seed: Arc<AtomicU64>,
num_instructions: usize,
balance: u64,
spl_token: bool,
) -> Message {
let instructions: Vec<_> = (0..num_instructions)
.into_iter()
.filter_map(|_| {
.map(|_| {
let program_id = if spl_token {
inline_spl_token::id()
inline_spl_token_v2_0::id()
} else {
system_program::id()
};
let max_created_seed = max_created.load(Ordering::Relaxed);
let max_closed_seed = max_closed.load(Ordering::Relaxed);
if max_closed_seed >= max_created_seed {
return None;
}
let seed = max_closed.fetch_add(1, Ordering::Relaxed).to_string();
let seed = max_closed_seed.fetch_add(1, Ordering::Relaxed).to_string();
let address =
Pubkey::create_with_seed(&base_keypair.pubkey(), &seed, &program_id).unwrap();
if spl_token {
Some(spl_token_instruction(
spl_token::instruction::close_account(
&spl_token::id(),
&spl_token_pubkey(&address),
&spl_token_pubkey(&keypair.pubkey()),
&spl_token_pubkey(&base_keypair.pubkey()),
spl_token_v2_0_instruction(
spl_token_v2_0::instruction::close_account(
&spl_token_v2_0::id(),
&spl_token_v2_0_pubkey(&address),
&spl_token_v2_0_pubkey(&keypair.pubkey()),
&spl_token_v2_0_pubkey(&base_keypair.pubkey()),
&[],
)
.unwrap(),
))
)
} else {
Some(system_instruction::transfer_with_seed(
system_instruction::transfer_with_seed(
&address,
&base_keypair.pubkey(),
seed,
&program_id,
&keypair.pubkey(),
balance,
))
)
}
})
.collect();
@@ -214,11 +363,10 @@ fn run_accounts_bench(
iterations: usize,
maybe_space: Option<u64>,
batch_size: usize,
close_nth_batch: u64,
close_nth: u64,
maybe_lamports: Option<u64>,
num_instructions: usize,
mint: Option<Pubkey>,
reclaim_accounts: bool,
) {
assert!(num_instructions > 0);
let client =
@@ -226,10 +374,10 @@ fn run_accounts_bench(
info!("Targeting {}", entrypoint_addr);
let mut latest_blockhash = Instant::now();
let mut last_blockhash = Instant::now();
let mut last_log = Instant::now();
let mut count = 0;
let mut blockhash = client.get_latest_blockhash().expect("blockhash");
let mut recent_blockhash = client.get_recent_blockhash().expect("blockhash");
let mut tx_sent_count = 0;
let mut total_accounts_created = 0;
let mut total_accounts_closed = 0;
@@ -257,33 +405,16 @@ fn run_accounts_bench(
let executor = TransactionExecutor::new(entrypoint_addr);
// Create and close messages both require 2 signatures, fake a 2 signature message to calculate fees
let mut message = Message::new(
&[
Instruction::new_with_bytes(
Pubkey::new_unique(),
&[],
vec![AccountMeta::new(Pubkey::new_unique(), true)],
),
Instruction::new_with_bytes(
Pubkey::new_unique(),
&[],
vec![AccountMeta::new(Pubkey::new_unique(), true)],
),
],
None,
);
loop {
if latest_blockhash.elapsed().as_millis() > 10_000 {
blockhash = client.get_latest_blockhash().expect("blockhash");
latest_blockhash = Instant::now();
if last_blockhash.elapsed().as_millis() > 10_000 {
recent_blockhash = client.get_recent_blockhash().expect("blockhash");
last_blockhash = Instant::now();
}
message.recent_blockhash = blockhash;
let fee = client
.get_fee_for_message(&message)
.expect("get_fee_for_message");
let fee = recent_blockhash
.1
.lamports_per_signature
.saturating_mul(NUM_SIGNATURES);
let lamports = min_balance + fee;
for (i, balance) in balances.iter_mut().enumerate() {
@@ -300,7 +431,7 @@ fn run_accounts_bench(
if !airdrop_lamports(
&client,
&faucet_addr,
payer_keypairs[i],
&payer_keypairs[i],
lamports * 100_000,
) {
warn!("failed airdrop, exiting");
@@ -310,7 +441,6 @@ fn run_accounts_bench(
}
}
// Create accounts
let sigs_len = executor.num_outstanding();
if sigs_len < batch_size {
let num_to_create = batch_size - sigs_len;
@@ -332,7 +462,7 @@ fn run_accounts_bench(
mint,
);
let signers: Vec<&Keypair> = vec![keypair, &base_keypair];
Transaction::new(&signers, message, blockhash)
Transaction::new(&signers, message, recent_blockhash.0)
})
.collect();
balances[i] = balances[i].saturating_sub(lamports * txs.len() as u64);
@@ -345,27 +475,22 @@ fn run_accounts_bench(
}
}
if close_nth_batch > 0 {
let num_batches_to_close =
total_accounts_created as u64 / (close_nth_batch * batch_size as u64);
let expected_closed = num_batches_to_close * batch_size as u64;
let max_closed_seed = seed_tracker.max_closed.load(Ordering::Relaxed);
// Close every account we've created with seed between max_closed_seed..expected_closed
if max_closed_seed < expected_closed {
let txs: Vec<_> = (0..expected_closed - max_closed_seed)
if close_nth > 0 {
let expected_closed = total_accounts_created as u64 / close_nth;
if expected_closed > total_accounts_closed {
let txs: Vec<_> = (0..expected_closed - total_accounts_closed)
.into_par_iter()
.map(|_| {
let message = make_close_message(
payer_keypairs[0],
&payer_keypairs[0],
&base_keypair,
seed_tracker.max_created.clone(),
seed_tracker.max_closed.clone(),
1,
min_balance,
mint.is_some(),
);
let signers: Vec<&Keypair> = vec![payer_keypairs[0], &base_keypair];
Transaction::new(&signers, message, blockhash)
let signers: Vec<&Keypair> = vec![&payer_keypairs[0], &base_keypair];
Transaction::new(&signers, message, recent_blockhash.0)
})
.collect();
balances[0] = balances[0].saturating_sub(fee * txs.len() as u64);
@@ -381,7 +506,7 @@ fn run_accounts_bench(
}
count += 1;
if last_log.elapsed().as_millis() > 3000 || count >= iterations {
if last_log.elapsed().as_millis() > 3000 {
info!(
"total_accounts_created: {} total_accounts_closed: {} tx_sent_count: {} loop_count: {} balance(s): {:?}",
total_accounts_created, total_accounts_closed, tx_sent_count, count, balances
@@ -396,83 +521,6 @@ fn run_accounts_bench(
}
}
executor.close();
if reclaim_accounts {
let executor = TransactionExecutor::new(entrypoint_addr);
loop {
let max_closed_seed = seed_tracker.max_closed.load(Ordering::Relaxed);
let max_created_seed = seed_tracker.max_created.load(Ordering::Relaxed);
if latest_blockhash.elapsed().as_millis() > 10_000 {
blockhash = client.get_latest_blockhash().expect("blockhash");
latest_blockhash = Instant::now();
}
message.recent_blockhash = blockhash;
let fee = client
.get_fee_for_message(&message)
.expect("get_fee_for_message");
let sigs_len = executor.num_outstanding();
if sigs_len < batch_size && max_closed_seed < max_created_seed {
let num_to_close = min(
batch_size - sigs_len,
(max_created_seed - max_closed_seed) as usize,
);
if num_to_close >= payer_keypairs.len() {
info!("closing {} accounts", num_to_close);
let chunk_size = num_to_close / payer_keypairs.len();
info!("{:?} chunk_size", chunk_size);
if chunk_size > 0 {
for (i, keypair) in payer_keypairs.iter().enumerate() {
let txs: Vec<_> = (0..chunk_size)
.into_par_iter()
.filter_map(|_| {
let message = make_close_message(
keypair,
&base_keypair,
seed_tracker.max_created.clone(),
seed_tracker.max_closed.clone(),
num_instructions,
min_balance,
mint.is_some(),
);
if message.instructions.is_empty() {
return None;
}
let signers: Vec<&Keypair> = vec![keypair, &base_keypair];
Some(Transaction::new(&signers, message, blockhash))
})
.collect();
balances[i] = balances[i].saturating_sub(fee * txs.len() as u64);
info!("close txs: {}", txs.len());
let new_ids = executor.push_transactions(txs);
info!("close ids: {}", new_ids.len());
tx_sent_count += new_ids.len();
total_accounts_closed += (num_instructions * new_ids.len()) as u64;
}
}
}
} else {
let _ = executor.drain_cleared();
}
count += 1;
if last_log.elapsed().as_millis() > 3000 || max_closed_seed >= max_created_seed {
info!(
"total_accounts_closed: {} tx_sent_count: {} loop_count: {} balance(s): {:?}",
total_accounts_closed, tx_sent_count, count, balances
);
last_log = Instant::now();
}
if max_closed_seed >= max_created_seed {
break;
}
if executor.num_outstanding() >= batch_size {
sleep(Duration::from_millis(500));
}
}
executor.close();
}
}
fn main() {
@@ -524,14 +572,14 @@ fn main() {
.help("Number of transactions to send per batch"),
)
.arg(
Arg::with_name("close_nth_batch")
Arg::with_name("close_nth")
.long("close-frequency")
.takes_value(true)
.value_name("BYTES")
.help(
"Every `n` batches, create a batch of close transactions for
the earliest remaining batch of accounts created.
Note: Should be > 1 to avoid situations where the close \
"Send close transactions after this many accounts created. \
Note: a `close-frequency` value near or below `batch-size` \
may result in transaction-simulation errors, as the close \
transactions will be submitted before the corresponding \
create transactions have been confirmed",
),
@@ -548,7 +596,7 @@ fn main() {
.long("iterations")
.takes_value(true)
.value_name("NUM")
.help("Number of iterations to make. 0 = unlimited iterations."),
.help("Number of iterations to make"),
)
.arg(
Arg::with_name("check_gossip")
@@ -561,12 +609,6 @@ fn main() {
.takes_value(true)
.help("Mint address to initialize account"),
)
.arg(
Arg::with_name("reclaim_accounts")
.long("reclaim-accounts")
.takes_value(false)
.help("Reclaim accounts after session ends; incompatible with --iterations 0"),
)
.get_matches();
let skip_gossip = !matches.is_present("check_gossip");
@@ -590,7 +632,7 @@ fn main() {
let space = value_t!(matches, "space", u64).ok();
let lamports = value_t!(matches, "lamports", u64).ok();
let batch_size = value_t!(matches, "batch_size", usize).unwrap_or(4);
let close_nth_batch = value_t!(matches, "close_nth_batch", u64).unwrap_or(0);
let close_nth = value_t!(matches, "close_nth", u64).unwrap_or(0);
let iterations = value_t!(matches, "iterations", usize).unwrap_or(10);
let num_instructions = value_t!(matches, "num_instructions", usize).unwrap_or(1);
if num_instructions == 0 || num_instructions > 500 {
@@ -623,7 +665,6 @@ fn main() {
Some(&entrypoint_addr), // find_node_by_gossip_addr
None, // my_gossip_addr
0, // my_shred_version
SocketAddrSpace::Unspecified,
)
.unwrap_or_else(|err| {
eprintln!("Failed to discover {} node: {:?}", entrypoint_addr, err);
@@ -644,32 +685,22 @@ fn main() {
iterations,
space,
batch_size,
close_nth_batch,
close_nth,
lamports,
num_instructions,
mint,
matches.is_present("reclaim_accounts"),
);
}
#[cfg(test)]
pub mod test {
use {
super::*,
solana_core::validator::ValidatorConfig,
solana_faucet::faucet::run_local_faucet,
solana_local_cluster::{
use super::*;
use solana_core::validator::ValidatorConfig;
use solana_local_cluster::{
local_cluster::{ClusterConfig, LocalCluster},
validator_configs::make_identical_validator_configs,
},
solana_measure::measure::Measure,
solana_sdk::{native_token::sol_to_lamports, poh_config::PohConfig},
solana_test_validator::TestValidator,
spl_token::{
solana_program::program_pack::Pack,
state::{Account, Mint},
},
};
use solana_sdk::poh_config::PohConfig;
#[test]
fn test_accounts_cluster_bench() {
@@ -685,11 +716,11 @@ pub mod test {
};
let faucet_addr = SocketAddr::from(([127, 0, 0, 1], 9900));
let cluster = LocalCluster::new(&mut config, SocketAddrSpace::Unspecified);
let cluster = LocalCluster::new(&mut config);
let iterations = 10;
let maybe_space = None;
let batch_size = 100;
let close_nth_batch = 100;
let close_nth = 100;
let maybe_lamports = None;
let num_instructions = 2;
let mut start = Measure::start("total accounts run");
@@ -700,112 +731,10 @@ pub mod test {
iterations,
maybe_space,
batch_size,
close_nth_batch,
close_nth,
maybe_lamports,
num_instructions,
None,
false,
);
start.stop();
info!("{}", start);
}
#[test]
fn test_create_then_reclaim_spl_token_accounts() {
solana_logger::setup();
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
let test_validator = TestValidator::with_custom_fees(
mint_pubkey,
1,
Some(faucet_addr),
SocketAddrSpace::Unspecified,
);
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
// Created funder
let funder = Keypair::new();
let latest_blockhash = rpc_client.get_latest_blockhash().unwrap();
let signature = rpc_client
.request_airdrop_with_blockhash(
&funder.pubkey(),
sol_to_lamports(1.0),
&latest_blockhash,
)
.unwrap();
rpc_client
.confirm_transaction_with_spinner(
&signature,
&latest_blockhash,
CommitmentConfig::confirmed(),
)
.unwrap();
// Create Mint
let spl_mint_keypair = Keypair::new();
let spl_mint_len = Mint::get_packed_len();
let spl_mint_rent = rpc_client
.get_minimum_balance_for_rent_exemption(spl_mint_len)
.unwrap();
let transaction = Transaction::new_signed_with_payer(
&[
system_instruction::create_account(
&funder.pubkey(),
&spl_mint_keypair.pubkey(),
spl_mint_rent,
spl_mint_len as u64,
&inline_spl_token::id(),
),
spl_token_instruction(
spl_token::instruction::initialize_mint(
&spl_token::id(),
&spl_token_pubkey(&spl_mint_keypair.pubkey()),
&spl_token_pubkey(&spl_mint_keypair.pubkey()),
None,
2,
)
.unwrap(),
),
],
Some(&funder.pubkey()),
&[&funder, &spl_mint_keypair],
latest_blockhash,
);
let _sig = rpc_client
.send_and_confirm_transaction(&transaction)
.unwrap();
let account_len = Account::get_packed_len();
let minimum_balance = rpc_client
.get_minimum_balance_for_rent_exemption(account_len)
.unwrap();
let iterations = 5;
let batch_size = 100;
let close_nth_batch = 0;
let num_instructions = 4;
let mut start = Measure::start("total accounts run");
let keypair0 = Keypair::new();
let keypair1 = Keypair::new();
let keypair2 = Keypair::new();
run_accounts_bench(
test_validator
.rpc_url()
.replace("http://", "")
.parse()
.unwrap(),
faucet_addr,
&[&keypair0, &keypair1, &keypair2],
iterations,
Some(account_len as u64),
batch_size,
close_nth_batch,
Some(minimum_balance),
num_instructions,
Some(spl_mint_keypair.pubkey()),
true,
);
start.stop();
info!("{}", start);


@@ -1,19 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-interface"
description = "The Solana AccountsDb plugin interface."
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-accountsdb-plugin-interface"
[dependencies]
log = "0.4.11"
thiserror = "1.0.30"
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,20 +0,0 @@
<p align="center">
<a href="https://solana.com">
<img alt="Solana" src="https://i.imgur.com/IKyzQ6T.png" width="250" />
</a>
</p>
# Solana AccountsDb Plugin Interface
This crate enables an AccountsDb plugin to be plugged into the Solana Validator runtime to take actions
at the time of each account update; for example, saving the account state to an external database. The plugin must implement the `AccountsDbPlugin` trait. Please see `accountsdb_plugin_interface.rs` for the interface definition.
The plugin should produce a `cdylib` dynamic library, which must expose a `C` function `_create_plugin()` that
instantiates the implementation of the interface.
The `solana-accountsdb-plugin-postgres` crate provides an example of how to create a plugin which saves the account data into an
external PostgreSQL database.
More information about Solana is available in the [Solana documentation](https://docs.solana.com/).
Still have questions? Ask us on [Discord](https://discordapp.com/invite/pquxPsq)
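For orientation, a minimal plugin might look like the sketch below (the crate must be built as a `cdylib`; the plugin name is illustrative, and every trait method other than `name` falls back to its default implementation):

```rust
use solana_accountsdb_plugin_interface::accountsdb_plugin_interface::AccountsDbPlugin;

#[derive(Debug)]
struct NoopPlugin;

impl AccountsDbPlugin for NoopPlugin {
    fn name(&self) -> &'static str {
        "noop-plugin" // illustrative
    }
}

/// The symbol the validator resolves from the cdylib to instantiate the plugin.
#[no_mangle]
#[allow(improper_ctypes_definitions)]
pub unsafe extern "C" fn _create_plugin() -> *mut dyn AccountsDbPlugin {
    Box::into_raw(Box::new(NoopPlugin))
}
```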


@@ -1,189 +0,0 @@
/// The interface for AccountsDb plugins. A plugin must implement
/// the AccountsDbPlugin trait to work with the runtime.
/// In addition, the dynamic library must export a "C" function _create_plugin which
/// creates the implementation of the plugin.
use {
solana_sdk::{signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::TransactionStatusMeta,
std::{any::Any, error, io},
thiserror::Error,
};
impl Eq for ReplicaAccountInfo<'_> {}
#[derive(Clone, PartialEq, Debug)]
/// Information about an account being updated
pub struct ReplicaAccountInfo<'a> {
/// The Pubkey for the account
pub pubkey: &'a [u8],
/// The lamports for the account
pub lamports: u64,
/// The Pubkey of the owner program account
pub owner: &'a [u8],
/// This account's data contains a loaded program (and is now read-only)
pub executable: bool,
/// The epoch at which this account will next owe rent
pub rent_epoch: u64,
/// The data held in this account.
pub data: &'a [u8],
/// A global monotonically increasing atomic number, which can be used
/// to tell the order of the account update. For example, when an
/// account is updated in the same slot multiple times, the update
/// with higher write_version should supersede the one with lower
/// write_version.
pub write_version: u64,
}
/// A wrapper to future-proof ReplicaAccountInfo handling.
/// If there were a change to the structure of ReplicaAccountInfo,
/// there would be new enum entry for the newer version, forcing
/// plugin implementations to handle the change.
pub enum ReplicaAccountInfoVersions<'a> {
V0_0_1(&'a ReplicaAccountInfo<'a>),
}
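// For example, a consumer matches on the version wrapper so that a
// future variant (say, V0_0_2) becomes a compile-time prompt to handle
// the new layout. A sketch, with an illustrative field access:
//
//     fn handle_update(account: ReplicaAccountInfoVersions) {
//         match account {
//             ReplicaAccountInfoVersions::V0_0_1(info) => {
//                 let _lamports = info.lamports;
//             }
//         }
//     }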
#[derive(Clone, Debug)]
pub struct ReplicaTransactionInfo<'a> {
pub signature: &'a Signature,
pub is_vote: bool,
pub transaction: &'a SanitizedTransaction,
pub transaction_status_meta: &'a TransactionStatusMeta,
}
pub enum ReplicaTransactionInfoVersions<'a> {
V0_0_1(&'a ReplicaTransactionInfo<'a>),
}
/// Errors returned by plugin calls
#[derive(Error, Debug)]
pub enum AccountsDbPluginError {
/// Error opening the configuration file; for example, when the file
/// is not found or when the validator process has no permission to read it.
#[error("Error opening config file. Error detail: ({0}).")]
ConfigFileOpenError(#[from] io::Error),
/// Error in reading the content of the config file or the content
/// is not in the expected format.
#[error("Error reading config file. Error message: ({msg})")]
ConfigFileReadError { msg: String },
/// Error when updating the account.
#[error("Error updating account. Error message: ({msg})")]
AccountsUpdateError { msg: String },
/// Error when updating the slot status
#[error("Error updating slot status. Error message: ({msg})")]
SlotStatusUpdateError { msg: String },
/// Any custom error defined by the plugin.
#[error("Plugin-defined custom error. Error message: ({0})")]
Custom(Box<dyn error::Error + Send + Sync>),
}
/// The current status of a slot
#[derive(Debug, Clone)]
pub enum SlotStatus {
/// The highest slot of the heaviest fork processed by the node. Ledger state at this slot is
/// not derived from a confirmed or finalized block, but if multiple forks are present, is from
/// the fork the validator believes is most likely to finalize.
Processed,
/// The highest slot having reached max vote lockout.
Rooted,
/// The highest slot that has been voted on by a supermajority of the cluster, i.e. is confirmed.
Confirmed,
}
impl SlotStatus {
pub fn as_str(&self) -> &'static str {
match self {
SlotStatus::Confirmed => "confirmed",
SlotStatus::Processed => "processed",
SlotStatus::Rooted => "rooted",
}
}
}
pub type Result<T> = std::result::Result<T, AccountsDbPluginError>;
/// Defines an AccountsDb plugin, to stream data from the runtime.
/// AccountsDb plugins must describe desired behavior for load and unload,
/// as well as how they will handle streamed data.
pub trait AccountsDbPlugin: Any + Send + Sync + std::fmt::Debug {
fn name(&self) -> &'static str;
/// The callback called when a plugin is loaded by the system,
/// used for doing whatever initialization is required by the plugin.
/// The _config_file contains the name of the config file.
/// The config must be in JSON format and
/// include a field "libpath" indicating the full path
/// name of the shared library implementing this interface.
fn on_load(&mut self, _config_file: &str) -> Result<()> {
Ok(())
}
/// The callback called right before a plugin is unloaded by the system
/// Used for doing cleanup before unload.
fn on_unload(&mut self) {}
/// Called when an account is updated at a slot.
/// When `is_startup` is true, it indicates the account is loaded from
/// snapshots when the validator starts up. When `is_startup` is false,
/// the account is updated during transaction processing.
#[allow(unused_variables)]
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()> {
Ok(())
}
/// Called when all accounts have been notified during startup.
fn notify_end_of_startup(&mut self) -> Result<()> {
Ok(())
}
/// Called when a slot status is updated
#[allow(unused_variables)]
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()> {
Ok(())
}
/// Called when a transaction is updated at a slot.
#[allow(unused_variables)]
fn notify_transaction(
&mut self,
transaction: ReplicaTransactionInfoVersions,
slot: u64,
) -> Result<()> {
Ok(())
}
/// Check if the plugin is interested in account data
/// Default is true -- if the plugin is not interested in
/// account data, please return false.
fn account_data_notifications_enabled(&self) -> bool {
true
}
/// Check if the plugin is interested in transaction data
/// Default is false -- if the plugin is not interested in
/// transaction data, please return false.
fn transaction_notifications_enabled(&self) -> bool {
false
}
}


@@ -1 +0,0 @@
pub mod accountsdb_plugin_interface;


@@ -1,31 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-manager"
description = "The Solana AccountsDb plugin manager."
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[dependencies]
bs58 = "0.4.0"
crossbeam-channel = "0.5"
libloading = "0.7.2"
log = "0.4.11"
serde = "1.0.131"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-metrics = { path = "../metrics", version = "=1.10.0" }
solana-rpc = { path = "../rpc", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
thiserror = "1.0.30"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,180 +0,0 @@
/// Module responsible for notifying plugins of account updates
use {
crate::accountsdb_plugin_manager::AccountsDbPluginManager,
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
ReplicaAccountInfo, ReplicaAccountInfoVersions,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_runtime::{
accounts_update_notifier_interface::AccountsUpdateNotifierInterface,
append_vec::{StoredAccountMeta, StoredMeta},
},
solana_sdk::{
account::{AccountSharedData, ReadableAccount},
clock::Slot,
},
std::sync::{Arc, RwLock},
};
#[derive(Debug)]
pub(crate) struct AccountsUpdateNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl AccountsUpdateNotifierInterface for AccountsUpdateNotifierImpl {
fn notify_account_update(&self, slot: Slot, meta: &StoredMeta, account: &AccountSharedData) {
if let Some(account_info) = self.accountinfo_from_shared_account_data(meta, account) {
self.notify_plugins_of_account_update(account_info, slot, false);
}
}
fn notify_account_restore_from_snapshot(&self, slot: Slot, account: &StoredAccountMeta) {
let mut measure_all = Measure::start("accountsdb-plugin-notify-account-restore-all");
let mut measure_copy = Measure::start("accountsdb-plugin-copy-stored-account-info");
let account = self.accountinfo_from_stored_account_meta(account);
measure_copy.stop();
inc_new_counter_debug!(
"accountsdb-plugin-copy-stored-account-info-us",
measure_copy.as_us() as usize,
100000,
100000
);
if let Some(account_info) = account {
self.notify_plugins_of_account_update(account_info, slot, true);
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify-account-restore-all-us",
measure_all.as_us() as usize,
100000,
100000
);
}
fn notify_end_of_restore_from_snapshot(&self) {
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-end-of-restore-from-snapshot");
match plugin.notify_end_of_startup() {
Err(err) => {
error!(
"Failed to notify the end of restore from snapshot, error: {} to plugin {}",
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully notified the end of restore from snapshot to plugin {}",
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-end-of-restore-from-snapshot",
measure.as_us() as usize
);
}
}
}
impl AccountsUpdateNotifierImpl {
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
AccountsUpdateNotifierImpl { plugin_manager }
}
fn accountinfo_from_shared_account_data<'a>(
&self,
meta: &'a StoredMeta,
account: &'a AccountSharedData,
) -> Option<ReplicaAccountInfo<'a>> {
Some(ReplicaAccountInfo {
pubkey: meta.pubkey.as_ref(),
lamports: account.lamports(),
owner: account.owner().as_ref(),
executable: account.executable(),
rent_epoch: account.rent_epoch(),
data: account.data(),
write_version: meta.write_version,
})
}
fn accountinfo_from_stored_account_meta<'a>(
&self,
stored_account_meta: &'a StoredAccountMeta,
) -> Option<ReplicaAccountInfo<'a>> {
Some(ReplicaAccountInfo {
pubkey: stored_account_meta.meta.pubkey.as_ref(),
lamports: stored_account_meta.account_meta.lamports,
owner: stored_account_meta.account_meta.owner.as_ref(),
executable: stored_account_meta.account_meta.executable,
rent_epoch: stored_account_meta.account_meta.rent_epoch,
data: stored_account_meta.data,
write_version: stored_account_meta.meta.write_version,
})
}
fn notify_plugins_of_account_update(
&self,
account: ReplicaAccountInfo,
slot: Slot,
is_startup: bool,
) {
let mut measure2 = Measure::start("accountsdb-plugin-notify_plugins_of_account_update");
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-update-account");
match plugin.update_account(
ReplicaAccountInfoVersions::V0_0_1(&account),
slot,
is_startup,
) {
Err(err) => {
error!(
"Failed to update account {} at slot {}, error: {} to plugin {}",
bs58::encode(account.pubkey).into_string(),
slot,
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully updated account {} at slot {} to plugin {}",
bs58::encode(account.pubkey).into_string(),
slot,
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-update-account-us",
measure.as_us() as usize,
100000,
100000
);
}
measure2.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify_plugins_of_account_update-us",
measure2.as_us() as usize,
100000,
100000
);
}
}


@@ -1,75 +0,0 @@
/// Managing the AccountsDb plugins
use {
libloading::{Library, Symbol},
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::AccountsDbPlugin,
std::error::Error,
};
#[derive(Default, Debug)]
pub struct AccountsDbPluginManager {
pub plugins: Vec<Box<dyn AccountsDbPlugin>>,
libs: Vec<Library>,
}
impl AccountsDbPluginManager {
pub fn new() -> Self {
AccountsDbPluginManager {
plugins: Vec::default(),
libs: Vec::default(),
}
}
/// # Safety
///
/// This function loads the dynamically linked library specified in the path. The library
/// must perform any necessary initialization.
pub unsafe fn load_plugin(
&mut self,
libpath: &str,
config_file: &str,
) -> Result<(), Box<dyn Error>> {
type PluginConstructor = unsafe fn() -> *mut dyn AccountsDbPlugin;
let lib = Library::new(libpath)?;
let constructor: Symbol<PluginConstructor> = lib.get(b"_create_plugin")?;
let plugin_raw = constructor();
let mut plugin = Box::from_raw(plugin_raw);
plugin.on_load(config_file)?;
self.plugins.push(plugin);
self.libs.push(lib);
Ok(())
}
/// Unload all plugins and loaded plugin libraries, making sure to fire
/// their `on_unload()` methods so they can do any necessary cleanup.
pub fn unload(&mut self) {
for mut plugin in self.plugins.drain(..) {
info!("Unloading plugin for {:?}", plugin.name());
plugin.on_unload();
}
for lib in self.libs.drain(..) {
drop(lib);
}
}
/// Check if there is any plugin interested in account data
pub fn account_data_notifications_enabled(&self) -> bool {
for plugin in &self.plugins {
if plugin.account_data_notifications_enabled() {
return true;
}
}
false
}
/// Check if there is any plugin interested in transaction data
pub fn transaction_notifications_enabled(&self) -> bool {
for plugin in &self.plugins {
if plugin.transaction_notifications_enabled() {
return true;
}
}
false
}
}


@@ -1,196 +0,0 @@
use {
crate::{
accounts_update_notifier::AccountsUpdateNotifierImpl,
accountsdb_plugin_manager::AccountsDbPluginManager,
slot_status_notifier::SlotStatusNotifierImpl, slot_status_observer::SlotStatusObserver,
transaction_notifier::TransactionNotifierImpl,
},
crossbeam_channel::Receiver,
log::*,
serde_json,
solana_rpc::{
optimistically_confirmed_bank_tracker::BankNotification,
transaction_notifier_interface::TransactionNotifierLock,
},
solana_runtime::accounts_update_notifier_interface::AccountsUpdateNotifier,
std::{
fs::File,
io::Read,
path::{Path, PathBuf},
sync::{Arc, RwLock},
thread,
},
thiserror::Error,
};
#[derive(Error, Debug)]
pub enum AccountsdbPluginServiceError {
#[error("Cannot open the plugin config file")]
CannotOpenConfigFile(String),
#[error("Cannot read the plugin config file")]
CannotReadConfigFile(String),
#[error("The config file is not in a valid JSON format")]
InvalidConfigFileFormat(String),
#[error("Plugin library path is not specified in the config file")]
LibPathNotSet,
#[error("Invalid plugin path")]
InvalidPluginPath,
#[error("Cannot load plugin shared library")]
PluginLoadError(String),
}
/// The service managing the AccountsDb plugin workflow.
pub struct AccountsDbPluginService {
slot_status_observer: Option<SlotStatusObserver>,
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
accounts_update_notifier: Option<AccountsUpdateNotifier>,
transaction_notifier: Option<TransactionNotifierLock>,
}
impl AccountsDbPluginService {
/// Creates and returns the AccountsDbPluginService.
/// # Arguments
/// * `confirmed_bank_receiver` - The receiver for confirmed bank notification
/// * `accountsdb_plugin_config_file` - The config file path for the plugin. The
/// config file controls the plugin responsible
/// for transporting the data to external data stores. It is defined in JSON format.
/// The `libpath` field should point to the full path of the dynamic shared library
/// (.so file) to be loaded. The shared library must implement the `AccountsDbPlugin`
/// trait, and it shall export a `C` function `_create_plugin` which creates the
/// implementation of `AccountsDbPlugin` and returns it to the caller.
/// The definition of the remaining JSON fields is up to the concrete plugin implementation;
/// they are usually used to configure the connection information for the external data store.
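/// A sketch of such a config file (the path and the extra field are
/// illustrative; only `libpath` is interpreted by the loader):
///
/// ```json
/// {
///     "libpath": "/path/to/libplugin.so",
///     "host": "127.0.0.1"
/// }
/// ```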
pub fn new(
confirmed_bank_receiver: Receiver<BankNotification>,
accountsdb_plugin_config_files: &[PathBuf],
) -> Result<Self, AccountsdbPluginServiceError> {
info!(
"Starting AccountsDbPluginService from config files: {:?}",
accountsdb_plugin_config_files
);
let mut plugin_manager = AccountsDbPluginManager::new();
for accountsdb_plugin_config_file in accountsdb_plugin_config_files {
Self::load_plugin(&mut plugin_manager, accountsdb_plugin_config_file)?;
}
let account_data_notifications_enabled =
plugin_manager.account_data_notifications_enabled();
let transaction_notifications_enabled = plugin_manager.transaction_notifications_enabled();
let plugin_manager = Arc::new(RwLock::new(plugin_manager));
let accounts_update_notifier: Option<AccountsUpdateNotifier> =
if account_data_notifications_enabled {
let accounts_update_notifier =
AccountsUpdateNotifierImpl::new(plugin_manager.clone());
Some(Arc::new(RwLock::new(accounts_update_notifier)))
} else {
None
};
let transaction_notifier: Option<TransactionNotifierLock> =
if transaction_notifications_enabled {
let transaction_notifier = TransactionNotifierImpl::new(plugin_manager.clone());
Some(Arc::new(RwLock::new(transaction_notifier)))
} else {
None
};
let slot_status_observer =
if account_data_notifications_enabled || transaction_notifications_enabled {
let slot_status_notifier = SlotStatusNotifierImpl::new(plugin_manager.clone());
let slot_status_notifier = Arc::new(RwLock::new(slot_status_notifier));
Some(SlotStatusObserver::new(
confirmed_bank_receiver,
slot_status_notifier,
))
} else {
None
};
info!("Started AccountsDbPluginService");
Ok(AccountsDbPluginService {
slot_status_observer,
plugin_manager,
accounts_update_notifier,
transaction_notifier,
})
}
fn load_plugin(
plugin_manager: &mut AccountsDbPluginManager,
accountsdb_plugin_config_file: &Path,
) -> Result<(), AccountsdbPluginServiceError> {
let mut file = match File::open(accountsdb_plugin_config_file) {
Ok(file) => file,
Err(err) => {
return Err(AccountsdbPluginServiceError::CannotOpenConfigFile(format!(
"Failed to open the plugin config file {:?}, error: {:?}",
accountsdb_plugin_config_file, err
)));
}
};
let mut contents = String::new();
if let Err(err) = file.read_to_string(&mut contents) {
return Err(AccountsdbPluginServiceError::CannotReadConfigFile(format!(
"Failed to read the plugin config file {:?}, error: {:?}",
accountsdb_plugin_config_file, err
)));
}
let result: serde_json::Value = match serde_json::from_str(&contents) {
Ok(value) => value,
Err(err) => {
return Err(AccountsdbPluginServiceError::InvalidConfigFileFormat(
format!(
"The config file {:?} is not in a valid JSON format, error: {:?}",
accountsdb_plugin_config_file, err
),
));
}
};
let libpath = result["libpath"]
.as_str()
.ok_or(AccountsdbPluginServiceError::LibPathNotSet)?;
let config_file = accountsdb_plugin_config_file
.as_os_str()
.to_str()
.ok_or(AccountsdbPluginServiceError::InvalidPluginPath)?;
unsafe {
let result = plugin_manager.load_plugin(libpath, config_file);
if let Err(err) = result {
let msg = format!(
"Failed to load the plugin library: {:?}, error: {:?}",
libpath, err
);
return Err(AccountsdbPluginServiceError::PluginLoadError(msg));
}
}
Ok(())
}
pub fn get_accounts_update_notifier(&self) -> Option<AccountsUpdateNotifier> {
self.accounts_update_notifier.clone()
}
pub fn get_transaction_notifier(&self) -> Option<TransactionNotifierLock> {
self.transaction_notifier.clone()
}
pub fn join(self) -> thread::Result<()> {
if let Some(mut slot_status_observer) = self.slot_status_observer {
slot_status_observer.join()?;
}
self.plugin_manager.write().unwrap().unload();
Ok(())
}
}


@@ -1,6 +0,0 @@
pub mod accounts_update_notifier;
pub mod accountsdb_plugin_manager;
pub mod accountsdb_plugin_service;
pub mod slot_status_notifier;
pub mod slot_status_observer;
pub mod transaction_notifier;


@@ -1,81 +0,0 @@
use {
crate::accountsdb_plugin_manager::AccountsDbPluginManager,
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::SlotStatus,
solana_measure::measure::Measure,
solana_metrics::*,
solana_sdk::clock::Slot,
std::sync::{Arc, RwLock},
};
pub trait SlotStatusNotifierInterface {
/// Notified when a slot is optimistically confirmed
fn notify_slot_confirmed(&self, slot: Slot, parent: Option<Slot>);
/// Notified when a slot is marked frozen.
fn notify_slot_processed(&self, slot: Slot, parent: Option<Slot>);
/// Notified when a slot is rooted.
fn notify_slot_rooted(&self, slot: Slot, parent: Option<Slot>);
}
pub type SlotStatusNotifier = Arc<RwLock<dyn SlotStatusNotifierInterface + Sync + Send>>;
pub struct SlotStatusNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl SlotStatusNotifierInterface for SlotStatusNotifierImpl {
fn notify_slot_confirmed(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Confirmed);
}
fn notify_slot_processed(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Processed);
}
fn notify_slot_rooted(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Rooted);
}
}
impl SlotStatusNotifierImpl {
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
Self { plugin_manager }
}
pub fn notify_slot_status(&self, slot: Slot, parent: Option<Slot>, slot_status: SlotStatus) {
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-update-slot");
match plugin.update_slot_status(slot, parent, slot_status.clone()) {
Err(err) => {
error!(
"Failed to update slot status at slot {}, error: {} to plugin {}",
slot,
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully updated slot status at slot {} to plugin {}",
slot,
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-update-slot-us",
measure.as_us() as usize,
1000,
1000
);
}
}
}
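On the plugin side, these transitions arrive through the `update_slot_status` callback. A minimal sketch that reacts only once a slot can no longer be rolled back (assumes the `log` crate; the plugin name and logging are illustrative):

```rust
use solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
    AccountsDbPlugin, Result, SlotStatus,
};

#[derive(Debug)]
struct RootTracker;

impl AccountsDbPlugin for RootTracker {
    fn name(&self) -> &'static str {
        "root-tracker" // illustrative
    }

    fn update_slot_status(
        &mut self,
        slot: u64,
        parent: Option<u64>,
        status: SlotStatus,
    ) -> Result<()> {
        // React only once the slot has been rooted and is final.
        if let SlotStatus::Rooted = status {
            log::info!("rooted slot {} (parent: {:?})", slot, parent);
        }
        Ok(())
    }
}
```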


@@ -1,80 +0,0 @@
use {
crate::slot_status_notifier::SlotStatusNotifier,
crossbeam_channel::Receiver,
solana_rpc::optimistically_confirmed_bank_tracker::BankNotification,
std::{
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread::{self, Builder, JoinHandle},
},
};
#[derive(Debug)]
pub(crate) struct SlotStatusObserver {
bank_notification_receiver_service: Option<JoinHandle<()>>,
exit_updated_slot_server: Arc<AtomicBool>,
}
impl SlotStatusObserver {
pub fn new(
bank_notification_receiver: Receiver<BankNotification>,
slot_status_notifier: SlotStatusNotifier,
) -> Self {
let exit_updated_slot_server = Arc::new(AtomicBool::new(false));
Self {
bank_notification_receiver_service: Some(Self::run_bank_notification_receiver(
bank_notification_receiver,
exit_updated_slot_server.clone(),
slot_status_notifier,
)),
exit_updated_slot_server,
}
}
pub fn join(&mut self) -> thread::Result<()> {
self.exit_updated_slot_server.store(true, Ordering::Relaxed);
self.bank_notification_receiver_service
.take()
.map(JoinHandle::join)
.unwrap()
}
fn run_bank_notification_receiver(
bank_notification_receiver: Receiver<BankNotification>,
exit: Arc<AtomicBool>,
slot_status_notifier: SlotStatusNotifier,
) -> JoinHandle<()> {
Builder::new()
.name("bank_notification_receiver".to_string())
.spawn(move || {
while !exit.load(Ordering::Relaxed) {
if let Ok(slot) = bank_notification_receiver.recv() {
match slot {
BankNotification::OptimisticallyConfirmed(slot) => {
slot_status_notifier
.read()
.unwrap()
.notify_slot_confirmed(slot, None);
}
BankNotification::Frozen(bank) => {
slot_status_notifier
.read()
.unwrap()
.notify_slot_processed(bank.slot(), Some(bank.parent_slot()));
}
BankNotification::Root(bank) => {
slot_status_notifier
.read()
.unwrap()
.notify_slot_rooted(bank.slot(), Some(bank.parent_slot()));
}
}
}
}
})
.unwrap()
}
}


@@ -1,93 +0,0 @@
/// Module responsible for notifying plugins of transactions
use {
crate::accountsdb_plugin_manager::AccountsDbPluginManager,
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
ReplicaTransactionInfo, ReplicaTransactionInfoVersions,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_rpc::transaction_notifier_interface::TransactionNotifier,
solana_runtime::bank,
solana_sdk::{clock::Slot, signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::TransactionStatusMeta,
std::sync::{Arc, RwLock},
};
/// This implementation of TransactionNotifier is passed to the rpc's TransactionStatusService
/// at the validator startup. TransactionStatusService invokes the notify_transaction method
/// for new transactions. The implementation in turn invokes notify_transaction on each
/// plugin with transaction notifications enabled, as managed by the AccountsDbPluginManager.
pub(crate) struct TransactionNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl TransactionNotifier for TransactionNotifierImpl {
fn notify_transaction(
&self,
slot: Slot,
signature: &Signature,
transaction_status_meta: &TransactionStatusMeta,
transaction: &SanitizedTransaction,
) {
let mut measure = Measure::start("accountsdb-plugin-notify_plugins_of_transaction_info");
let transaction_log_info =
Self::build_replica_transaction_info(signature, transaction_status_meta, transaction);
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
if !plugin.transaction_notifications_enabled() {
continue;
}
match plugin.notify_transaction(
ReplicaTransactionInfoVersions::V0_0_1(&transaction_log_info),
slot,
) {
Err(err) => {
error!(
"Failed to notify transaction, error: ({}) to plugin {}",
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully notified transaction to plugin {}",
plugin.name()
);
}
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify_plugins_of_transaction_info-us",
measure.as_us() as usize,
10000,
10000
);
}
}
impl TransactionNotifierImpl {
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
Self { plugin_manager }
}
fn build_replica_transaction_info<'a>(
signature: &'a Signature,
transaction_status_meta: &'a TransactionStatusMeta,
transaction: &'a SanitizedTransaction,
) -> ReplicaTransactionInfo<'a> {
ReplicaTransactionInfo {
signature,
is_vote: bank::is_simple_vote_transaction(transaction),
transaction,
transaction_status_meta,
}
}
}
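A plugin opts in to these notifications by overriding `transaction_notifications_enabled`; a sketch (assumes the `log` crate; the plugin name is illustrative):

```rust
use solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
    AccountsDbPlugin, ReplicaTransactionInfoVersions, Result,
};

#[derive(Debug)]
struct TxLogger;

impl AccountsDbPlugin for TxLogger {
    fn name(&self) -> &'static str {
        "tx-logger" // illustrative
    }

    // Transaction notifications are disabled by default; opt in here.
    fn transaction_notifications_enabled(&self) -> bool {
        true
    }

    fn notify_transaction(
        &mut self,
        transaction: ReplicaTransactionInfoVersions,
        slot: u64,
    ) -> Result<()> {
        let ReplicaTransactionInfoVersions::V0_0_1(info) = transaction;
        log::info!("slot {}: tx {} (vote: {})", slot, info.signature, info.is_vote);
        Ok(())
    }
}
```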


@@ -1,39 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-postgres"
description = "The Solana AccountsDb plugin for PostgreSQL database."
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[lib]
crate-type = ["cdylib", "rlib"]
[dependencies]
bs58 = "0.4.0"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.5"
log = "0.4.14"
postgres = { version = "0.19.2", features = ["with-chrono-0_4"] }
postgres-types = { version = "0.2.2", features = ["derive"] }
serde = "1.0.131"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-metrics = { path = "../metrics", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
thiserror = "1.0.30"
tokio-postgres = "0.7.4"
[dev-dependencies]
solana-account-decoder = { path = "../account-decoder", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,5 +0,0 @@
This is an example implementing the AccountsDb plugin for PostgreSQL database.
Please see `src/accountsdb_plugin_postgres.rs` for the format of the plugin's configuration file.
To create the schema objects for the database, please use `scripts/create_schema.sql`.
`scripts/drop_schema.sql` can be used to tear down the schema objects.


@@ -1,184 +0,0 @@
/**
* This plugin implementation for PostgreSQL requires the following tables
*/
-- The table storing accounts
CREATE TABLE account (
pubkey BYTEA PRIMARY KEY,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- The table storing slot information
CREATE TABLE slot (
slot BIGINT PRIMARY KEY,
parent BIGINT,
status VARCHAR(16) NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- Types for Transactions
CREATE TYPE "TransactionErrorCode" AS ENUM (
'AccountInUse',
'AccountLoadedTwice',
'AccountNotFound',
'ProgramAccountNotFound',
'InsufficientFundsForFee',
'InvalidAccountForFee',
'AlreadyProcessed',
'BlockhashNotFound',
'InstructionError',
'CallChainTooDeep',
'MissingSignatureForFee',
'InvalidAccountIndex',
'SignatureFailure',
'InvalidProgramForExecution',
'SanitizeFailure',
'ClusterMaintenance',
'AccountBorrowOutstanding',
'WouldExceedMaxAccountCostLimit',
'WouldExceedMaxBlockCostLimit',
'UnsupportedVersion',
'InvalidWritableAccount'
);
CREATE TYPE "TransactionError" AS (
error_code "TransactionErrorCode",
error_detail VARCHAR(256)
);
CREATE TYPE "CompiledInstruction" AS (
program_id_index SMALLINT,
accounts SMALLINT[],
data BYTEA
);
CREATE TYPE "InnerInstructions" AS (
index SMALLINT,
instructions "CompiledInstruction"[]
);
CREATE TYPE "TransactionTokenBalance" AS (
account_index SMALLINT,
mint VARCHAR(44),
ui_token_amount DOUBLE PRECISION,
owner VARCHAR(44)
);
CREATE TYPE "RewardType" AS ENUM (
'Fee',
'Rent',
'Staking',
'Voting'
);
CREATE TYPE "Reward" AS (
pubkey VARCHAR(44),
lamports BIGINT,
post_balance BIGINT,
reward_type "RewardType",
commission SMALLINT
);
CREATE TYPE "TransactionStatusMeta" AS (
error "TransactionError",
fee BIGINT,
pre_balances BIGINT[],
post_balances BIGINT[],
inner_instructions "InnerInstructions"[],
log_messages TEXT[],
pre_token_balances "TransactionTokenBalance"[],
post_token_balances "TransactionTokenBalance"[],
rewards "Reward"[]
);
CREATE TYPE "TransactionMessageHeader" AS (
num_required_signatures SMALLINT,
num_readonly_signed_accounts SMALLINT,
num_readonly_unsigned_accounts SMALLINT
);
CREATE TYPE "TransactionMessage" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[]
);
CREATE TYPE "TransactionMessageAddressTableLookup" AS (
account_key BYTEA[],
writable_indexes SMALLINT[],
readonly_indexes SMALLINT[]
);
CREATE TYPE "TransactionMessageV0" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[],
address_table_lookups "TransactionMessageAddressTableLookup"[]
);
CREATE TYPE "LoadedAddresses" AS (
writable BYTEA[],
readonly BYTEA[]
);
CREATE TYPE "LoadedMessageV0" AS (
message "TransactionMessageV0",
loaded_addresses "LoadedAddresses"
);
-- The table storing transactions
CREATE TABLE transaction (
slot BIGINT NOT NULL,
signature BYTEA NOT NULL,
is_vote BOOL NOT NULL,
message_type SMALLINT, -- 0: legacy, 1: v0 message
legacy_message "TransactionMessage",
v0_loaded_message "LoadedMessageV0",
signatures BYTEA[],
message_hash BYTEA,
meta "TransactionStatusMeta",
updated_on TIMESTAMP NOT NULL,
CONSTRAINT transaction_pk PRIMARY KEY (slot, signature)
);
/**
* The following is for keeping historical data for accounts and is not required for the plugin to work.
*/
-- The table storing historical data for accounts
CREATE TABLE account_audit (
pubkey BYTEA,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
CREATE INDEX account_audit_account_key ON account_audit (pubkey, write_version);
CREATE FUNCTION audit_account_update() RETURNS trigger AS $audit_account_update$
BEGIN
INSERT INTO account_audit (pubkey, owner, lamports, slot, executable, rent_epoch, data, write_version, updated_on)
VALUES (OLD.pubkey, OLD.owner, OLD.lamports, OLD.slot,
OLD.executable, OLD.rent_epoch, OLD.data, OLD.write_version, OLD.updated_on);
RETURN NEW;
END;
$audit_account_update$ LANGUAGE plpgsql;
CREATE TRIGGER account_update_trigger AFTER UPDATE OR DELETE ON account
FOR EACH ROW EXECUTE PROCEDURE audit_account_update();
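Since the `account` table keeps only the latest version of each account, the audit table is what a consumer would query to replay an account's history. A sketch using the `postgres` crate from this package's dependencies (connection parameters are illustrative):

```rust
use postgres::{Client, NoTls};

// Prints every recorded version of one account, oldest write first,
// walking the (pubkey, write_version) index created above.
fn account_history(pubkey: &[u8]) -> Result<(), postgres::Error> {
    let mut client = Client::connect(
        "host=127.0.0.1 port=5433 user=solana dbname=solana", // illustrative
        NoTls,
    )?;
    for row in client.query(
        "SELECT slot, write_version, lamports FROM account_audit \
         WHERE pubkey = $1 ORDER BY write_version",
        &[&pubkey],
    )? {
        let slot: i64 = row.get(0);
        let write_version: i64 = row.get(1);
        let lamports: i64 = row.get(2);
        println!("slot {} write_version {} lamports {}", slot, write_version, lamports);
    }
    Ok(())
}
```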


@@ -1,25 +0,0 @@
/**
* Script for cleaning up the schema for PostgreSQL used for the AccountsDb plugin.
*/
DROP TRIGGER account_update_trigger ON account;
DROP FUNCTION audit_account_update;
DROP TABLE account_audit;
DROP TABLE account;
DROP TABLE slot;
DROP TABLE transaction;
DROP TYPE "TransactionError" CASCADE;
DROP TYPE "TransactionErrorCode" CASCADE;
DROP TYPE "LoadedMessageV0" CASCADE;
DROP TYPE "LoadedAddresses" CASCADE;
DROP TYPE "TransactionMessageV0" CASCADE;
DROP TYPE "TransactionMessage" CASCADE;
DROP TYPE "TransactionMessageHeader" CASCADE;
DROP TYPE "TransactionMessageAddressTableLookup" CASCADE;
DROP TYPE "TransactionStatusMeta" CASCADE;
DROP TYPE "RewardType" CASCADE;
DROP TYPE "Reward" CASCADE;
DROP TYPE "TransactionTokenBalance" CASCADE;
DROP TYPE "InnerInstructions" CASCADE;
DROP TYPE "CompiledInstruction" CASCADE;


@@ -1,802 +0,0 @@
# This is a reference configuration file for the PostgreSQL database version 14.
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: B = bytes Time units: us = microseconds
# kB = kilobytes ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
data_directory = '/var/lib/postgresql/14/main' # use data in another directory
# (change requires restart)
hba_file = '/etc/postgresql/14/main/pg_hba.conf' # host-based authentication file
# (change requires restart)
ident_file = '/etc/postgresql/14/main/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/14-main.pid' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
listen_addresses = '*'
port = 5433 # (change requires restart)
max_connections = 200 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP settings -
# see "man tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;
# 0 selects the system default
#client_connection_check_interval = 0 # time between checks for client
# disconnection while running queries;
# 0 for never
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = scram-sha-256 # scram-sha-256 or md5
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'
#krb_caseins_users = off
# - SSL -
ssl = on
#ssl_ca_file = ''
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
#ssl_crl_file = ''
#ssl_crl_dir = ''
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1.2'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 1GB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#huge_page_size = 0 # zero for system default
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#hash_mem_multiplier = 1.0 # 1-1000.0 multiplier on hash table work_mem
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB # min 64kB
#max_stack_depth = 2MB # min 100kB
#shared_memory_type = mmap # the default is the first option
# supported by the operating system:
# mmap
# sysv
# windows
# (change requires restart)
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# (change requires restart)
#min_dynamic_shared_memory = 0MB # (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kilobytes, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 64
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 2 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#backend_flush_after = 0 # measured in pages, 0 disables
effective_io_concurrency = 1000 # 1-1000; 0 disables prefetching
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel operations
#parallel_leader_participation = on
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = minimal # minimal, replica, or logical
# (change requires restart)
fsync = off # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
synchronous_commit = off # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux and FreeBSD)
# fsync
# fsync_writethrough
# open_sync
full_page_writes = off # recover from partial page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_compression = off # enable compression of full-page writes
#wal_init_zero = on # zero-fill new WAL files
#wal_recycle = on # recycle WAL files
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#wal_skip_threshold = 2MB
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
max_wal_size = 1GB
min_wal_size = 80MB
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = '' # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
# %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
#archive_cleanup_command = '' # command to execute at every restartpoint
#recovery_end_command = '' # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = '' # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = '' # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = '' # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = '' # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the primary and on any standby that will send replication data.
max_wal_senders = 0 # max number of walsender processes
# (change requires restart)
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#wal_keep_size = 0 # in megabytes; 0 disables
#max_slot_wal_keep_size = -1 # in megabytes; -1 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Primary Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a primary server.
#primary_conninfo = '' # connection string to sending server
#primary_slot_name = '' # replication slot on sending server
#promote_trigger_file = '' # file name whose presence ends recovery
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_create_temp_slot = off # create temp slot if primary_slot_name
# is not set
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from primary
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0 # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_async_append = on
#enable_bitmapscan = on
#enable_gathermerge = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_incremental_sort = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_memoize = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_parallel_hash = on
#enable_partition_pruning = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#jit = on # allow JIT compilation
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#plan_cache_mode = auto # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (Windows):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
#log_min_duration_sample = -1 # -1 is disabled, 0 logs a sample of statements
# and their durations, > 0 logs only a sample of
# statements running at least this number
# of milliseconds;
# sample fraction is determined by log_statement_sample_rate
#log_statement_sample_rate = 1.0 # fraction of logged statements exceeding
# log_min_duration_sample to be logged;
# 1.0 logs all such statements, 0.0 never logs
#log_transaction_sample_rate = 0.0 # fraction of transactions whose statements
# are logged regardless of their duration; 1.0 logs all
# statements from all transactions, 0.0 never logs
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_autovacuum_min_duration = -1 # log autovacuum activity;
# -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%m [%p] %q%u@%d ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %b = backend type
# %p = process ID
# %P = process ID of parallel group leader
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %Q = query ID (0 if none or not computed)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_recovery_conflict_waits = off # log standby recovery conflict waits
# >= deadlock_timeout
#log_parameter_max_length = -1 # when logging statements, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_parameter_max_length_on_error = 0 # when logging an error, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Etc/UTC'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
cluster_name = '14/main' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_activity_query_size = 1024 # (change requires restart)
#track_counts = on
#track_io_timing = off
#track_wal_io_timing = off
#track_functions = none # none, pl, all
stats_temp_directory = '/var/run/postgresql/14-main.pg_stat_tmp'
# - Monitoring -
#compute_query_id = auto
#log_statement_stats = off
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
# before vacuum; -1 disables insert
# vacuums
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
# size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_table_access_method = 'heap'
#default_tablespace = '' # a tablespace name, '' uses the default
#default_toast_compression = 'pglz' # 'pglz' or 'lz4'
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#idle_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_table_age = 150000000
#vacuum_freeze_min_age = 50000000
#vacuum_failsafe_age = 1600000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_failsafe_age = 1600000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Etc/UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1 # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C.UTF-8' # locale for system error message
# strings
lc_monetary = 'C.UTF-8' # locale for monetary formatting
lc_numeric = 'C.UTF-8' # locale for number formatting
lc_time = 'C.UTF-8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#local_preload_libraries = ''
#session_preload_libraries = ''
#shared_preload_libraries = '' # (change requires restart)
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#extension_destdir = '' # prepend path when loading extensions
# and shared objects (added by Debian)
#gin_fuzzy_search_limit = 0
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#recovery_init_sync_method = fsync # fsync, syncfs (Linux 5.8+)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
include_dir = 'conf.d' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here


@@ -1,74 +0,0 @@
use {log::*, std::collections::HashSet};
#[derive(Debug)]
pub(crate) struct AccountsSelector {
pub accounts: HashSet<Vec<u8>>,
pub owners: HashSet<Vec<u8>>,
pub select_all_accounts: bool,
}
impl AccountsSelector {
pub fn default() -> Self {
AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts: true,
}
}
pub fn new(accounts: &[String], owners: &[String]) -> Self {
info!(
"Creating AccountsSelector from accounts: {:?}, owners: {:?}",
accounts, owners
);
let select_all_accounts = accounts.iter().any(|key| key == "*");
if select_all_accounts {
return AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts,
};
}
let accounts = accounts
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
let owners = owners
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
AccountsSelector {
accounts,
owners,
select_all_accounts,
}
}
pub fn is_account_selected(&self, account: &[u8], owner: &[u8]) -> bool {
self.select_all_accounts || self.accounts.contains(account) || self.owners.contains(owner)
}
/// Check if any account is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_accounts || !self.accounts.is_empty() || !self.owners.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_create_accounts_selector() {
AccountsSelector::new(
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
&[],
);
AccountsSelector::new(
&[],
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
);
}
}
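
For reference, the selector semantics above can be exercised with a small crate-internal test. This is a hypothetical sketch, not part of the diff; it assumes it lives in the same file as `AccountsSelector` (so the private type is in scope via `super`) and reuses the crate's existing `bs58` dependency. The pubkey is a placeholder:

```rust
#[cfg(test)]
mod selector_usage_sketch {
    use super::*;

    // Hypothetical usage sketch; not part of the original plugin sources.
    #[test]
    fn wildcard_and_owner_selection() {
        // "*" in the accounts list selects every account regardless of key or owner.
        let select_all = AccountsSelector::new(&["*".to_string()], &[]);
        assert!(select_all.is_account_selected(b"any-pubkey", b"any-owner"));

        // An owner-based selector matches only accounts owned by a listed key.
        let owner = "9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string();
        let by_owner = AccountsSelector::new(&[], &[owner.clone()]);
        let owner_bytes = bs58::decode(&owner).into_vec().unwrap();
        assert!(by_owner.is_account_selected(b"any-pubkey", &owner_bytes));
        assert!(!by_owner.is_account_selected(b"any-pubkey", b"other-owner"));

        // An empty selector is disabled and selects nothing.
        assert!(!AccountsSelector::new(&[], &[]).is_enabled());
    }
}
```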


@@ -1,437 +0,0 @@
use solana_measure::measure::Measure;
/// Main entry for the PostgreSQL plugin
use {
crate::{
accounts_selector::AccountsSelector,
postgres_client::{ParallelPostgresClient, PostgresClientBuilder},
transaction_selector::TransactionSelector,
},
bs58,
log::*,
serde_derive::{Deserialize, Serialize},
serde_json,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPlugin, AccountsDbPluginError, ReplicaAccountInfoVersions,
ReplicaTransactionInfoVersions, Result, SlotStatus,
},
solana_metrics::*,
std::{fs::File, io::Read},
thiserror::Error,
};
#[derive(Default)]
pub struct AccountsDbPluginPostgres {
client: Option<ParallelPostgresClient>,
accounts_selector: Option<AccountsSelector>,
transaction_selector: Option<TransactionSelector>,
}
impl std::fmt::Debug for AccountsDbPluginPostgres {
fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct AccountsDbPluginPostgresConfig {
pub host: Option<String>,
pub user: Option<String>,
pub port: Option<u16>,
pub connection_str: Option<String>,
pub threads: Option<usize>,
pub batch_size: Option<usize>,
pub panic_on_db_errors: Option<bool>,
}
#[derive(Error, Debug)]
pub enum AccountsDbPluginPostgresError {
#[error("Error connecting to the backend data store. Error message: ({msg})")]
DataStoreConnectionError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
DataSchemaError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
ConfigurationError { msg: String },
}
impl AccountsDbPlugin for AccountsDbPluginPostgres {
fn name(&self) -> &'static str {
"AccountsDbPluginPostgres"
}
/// Do initialization for the PostgreSQL plugin.
///
/// # Format of the config file:
/// * The `accounts_selector` section allows the user to control account selection.
/// "accounts_selector" : {
/// "accounts" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// or:
/// "accounts_selector" = {
/// "owners" : \["pubkey-1", "pubkey-2", ..., "pubkey-m"\]
/// }
/// Accounts satisfying either the accounts condition or the owners condition will be selected.
/// When only owners is specified,
/// all accounts belonging to the owners will be streamed.
/// The accounts field supports a wildcard to select all accounts:
/// "accounts_selector" : {
/// "accounts" : \["*"\],
/// }
/// * "host", optional, specifies the PostgreSQL server.
/// * "user", optional, specifies the PostgreSQL user.
/// * "port", optional, specifies the PostgreSQL server's port.
/// * "connection_str", optional, the custom PostgreSQL connection string.
/// Please refer to https://docs.rs/postgres/0.19.2/postgres/config/struct.Config.html for the connection configuration.
/// When `connection_str` is set, the values in "host", "user" and "port" are ignored. If `connection_str` is not given,
/// `host` and `user` must be given.
/// * "threads" optional, specifies the number of worker threads for the plugin. A thread
/// maintains a PostgreSQL connection to the server. The default is '10'.
/// * "batch_size" optional, specifies the batch size of bulk insert when the AccountsDb is created
/// from restoring a snapshot. The default is '10'.
/// * "panic_on_db_errors", optional, contols if to panic when there are errors replicating data to the
/// PostgreSQL database. The default is 'false'.
/// * "transaction_selector", optional, controls if and what transaction to store. If this field is missing
/// None of the transction is stored.
/// "transaction_selector" : {
/// "mentions" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// The `mentions` field supports a wildcard to select all transactions or all 'vote' transactions:
/// For example, to select all transactions:
/// "transaction_selector" : {
/// "mentions" : \["*"\],
/// }
/// To select all vote transactions:
/// "transaction_selector" : {
/// "mentions" : \["all_votes"\],
/// }
/// # Examples
///
/// {
/// "libpath": "/home/solana/target/release/libsolana_accountsdb_plugin_postgres.so",
/// "host": "host_foo",
/// "user": "solana",
/// "threads": 10,
/// "accounts_selector" : {
/// "owners" : ["9oT9R5ZyRovSVnt37QvVoBttGpNqR3J7unkb567NP8k3"]
/// }
/// }
fn on_load(&mut self, config_file: &str) -> Result<()> {
solana_logger::setup_with_default("info");
info!(
"Loading plugin {:?} from config_file {:?}",
self.name(),
config_file
);
let mut file = File::open(config_file)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let result: serde_json::Value = serde_json::from_str(&contents).unwrap();
self.accounts_selector = Some(Self::create_accounts_selector_from_config(&result));
self.transaction_selector = Some(Self::create_transaction_selector_from_config(&result));
let result: serde_json::Result<AccountsDbPluginPostgresConfig> =
serde_json::from_str(&contents);
match result {
Err(err) => {
return Err(AccountsDbPluginError::ConfigFileReadError {
msg: format!(
"The config file is not in the JSON format expected: {:?}",
err
),
})
}
Ok(config) => {
let client = PostgresClientBuilder::build_pararallel_postgres_client(&config)?;
self.client = Some(client);
}
}
Ok(())
}
fn on_unload(&mut self) {
info!("Unloading plugin: {:?}", self.name());
match &mut self.client {
None => {}
Some(client) => {
client.join().unwrap();
}
}
}
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()> {
let mut measure_all = Measure::start("accountsdb-plugin-postgres-update-account-main");
match account {
ReplicaAccountInfoVersions::V0_0_1(account) => {
let mut measure_select =
Measure::start("accountsdb-plugin-postgres-update-account-select");
if let Some(accounts_selector) = &self.accounts_selector {
if !accounts_selector.is_account_selected(account.pubkey, account.owner) {
return Ok(());
}
} else {
return Ok(());
}
measure_select.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-select-us",
measure_select.as_us() as usize,
100000,
100000
);
debug!(
"Updating account {:?} with owner {:?} at slot {:?} using account selector {:?}",
bs58::encode(account.pubkey).into_string(),
bs58::encode(account.owner).into_string(),
slot,
self.accounts_selector.as_ref().unwrap()
);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database."
.to_string(),
},
)));
}
Some(client) => {
let mut measure_update =
Measure::start("accountsdb-plugin-postgres-update-account-client");
let result = { client.update_account(account, slot, is_startup) };
measure_update.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-client-us",
measure_update.as_us() as usize,
100000,
100000
);
if let Err(err) = result {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!("Failed to persist the update of account to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
}
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-main-us",
measure_all.as_us() as usize,
100000,
100000
);
Ok(())
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()> {
info!("Updating slot {:?} at with status {:?}", slot, status);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.update_slot_status(slot, parent, status);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the update of slot to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<()> {
info!("Notifying the end of startup for accounts notifications");
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.notify_end_of_startup();
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to notify the end of startup for accounts notifications. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_transaction(
&mut self,
transaction_info: ReplicaTransactionInfoVersions,
slot: u64,
) -> Result<()> {
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => match transaction_info {
ReplicaTransactionInfoVersions::V0_0_1(transaction_info) => {
if let Some(transaction_selector) = &self.transaction_selector {
if !transaction_selector.is_transaction_selected(
transaction_info.is_vote,
transaction_info.transaction.message().account_keys_iter(),
) {
return Ok(());
}
} else {
return Ok(());
}
let result = client.log_transaction_info(transaction_info, slot);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the transaction info to the PostgreSQL database. Error: {:?}", err)
});
}
}
},
}
Ok(())
}
/// Check if the plugin is interested in account data
/// Default is true -- if the plugin is not interested in
/// account data, please return false.
fn account_data_notifications_enabled(&self) -> bool {
self.accounts_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
/// Check if the plugin is interested in transaction data
fn transaction_notifications_enabled(&self) -> bool {
self.transaction_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
}
impl AccountsDbPluginPostgres {
fn create_accounts_selector_from_config(config: &serde_json::Value) -> AccountsSelector {
let accounts_selector = &config["accounts_selector"];
if accounts_selector.is_null() {
AccountsSelector::default()
} else {
let accounts = &accounts_selector["accounts"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
let owners = &accounts_selector["owners"];
let owners: Vec<String> = if owners.is_array() {
owners
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
AccountsSelector::new(&accounts, &owners)
}
}
fn create_transaction_selector_from_config(config: &serde_json::Value) -> TransactionSelector {
let transaction_selector = &config["transaction_selector"];
if transaction_selector.is_null() {
TransactionSelector::default()
} else {
let accounts = &transaction_selector["mentions"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
TransactionSelector::new(&accounts)
}
}
pub fn new() -> Self {
Self::default()
}
}
#[no_mangle]
#[allow(improper_ctypes_definitions)]
/// # Safety
///
/// This function returns the AccountsDbPluginPostgres pointer as trait AccountsDbPlugin.
pub unsafe extern "C" fn _create_plugin() -> *mut dyn AccountsDbPlugin {
let plugin = AccountsDbPluginPostgres::new();
let plugin: Box<dyn AccountsDbPlugin> = Box::new(plugin);
Box::into_raw(plugin)
}
#[cfg(test)]
pub(crate) mod tests {
use {super::*, serde_json};
#[test]
fn test_accounts_selector_from_config() {
let config = "{\"accounts_selector\" : { \
\"owners\" : [\"9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin\"] \
}}";
let config: serde_json::Value = serde_json::from_str(config).unwrap();
AccountsDbPluginPostgres::create_accounts_selector_from_config(&config);
}
}
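
To make the config-file format documented on `on_load` concrete, here is a hypothetical sketch of the same two-stage parsing the plugin performs: the file is read once as an untyped `serde_json::Value` for the selector sections, then deserialized into the typed `AccountsDbPluginPostgresConfig` for the connection settings. All values are placeholders:

```rust
#[cfg(test)]
mod config_parsing_sketch {
    use super::*;

    // Hypothetical sketch; mirrors on_load's two-stage parsing.
    #[test]
    fn parse_plugin_config() {
        let contents = r#"{
            "libpath": "/path/to/libsolana_accountsdb_plugin_postgres.so",
            "host": "localhost",
            "user": "solana",
            "threads": 10,
            "accounts_selector": { "owners": ["9oT9R5ZyRovSVnt37QvVoBttGpNqR3J7unkb567NP8k3"] },
            "transaction_selector": { "mentions": ["all_votes"] }
        }"#;

        // Stage 1: untyped JSON drives the account/transaction selectors.
        let value: serde_json::Value = serde_json::from_str(contents).unwrap();
        let accounts_selector =
            AccountsDbPluginPostgres::create_accounts_selector_from_config(&value);
        assert!(accounts_selector.is_enabled());

        // Stage 2: typed connection settings; fields serde does not know are ignored.
        let config: AccountsDbPluginPostgresConfig = serde_json::from_str(contents).unwrap();
        assert_eq!(config.host.as_deref(), Some("localhost"));
        assert_eq!(config.threads, Some(10));
    }
}
```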


@@ -1,4 +0,0 @@
pub mod accounts_selector;
pub mod accountsdb_plugin_postgres;
pub mod postgres_client;
pub mod transaction_selector;


@@ -1,910 +0,0 @@
#![allow(clippy::integer_arithmetic)]
mod postgres_client_transaction;
/// A concurrent implementation for writing accounts into PostgreSQL in parallel.
use {
crate::accountsdb_plugin_postgres::{
AccountsDbPluginPostgresConfig, AccountsDbPluginPostgresError,
},
chrono::Utc,
crossbeam_channel::{bounded, Receiver, RecvTimeoutError, Sender},
log::*,
postgres::{Client, NoTls, Statement},
postgres_client_transaction::LogTransactionRequest,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPluginError, ReplicaAccountInfo, SlotStatus,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_sdk::timing::AtomicInterval,
std::{
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
Arc, Mutex,
},
thread::{self, sleep, Builder, JoinHandle},
time::Duration,
},
tokio_postgres::types,
};
/// The maximum number of asynchronous requests allowed in the channel, to avoid excessive
/// memory usage. The downside: calls can get blocked once this threshold is reached.
const MAX_ASYNC_REQUESTS: usize = 40960;
const DEFAULT_POSTGRES_PORT: u16 = 5432;
const DEFAULT_THREADS_COUNT: usize = 100;
const DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE: usize = 10;
const ACCOUNT_COLUMN_COUNT: usize = 9;
const DEFAULT_PANIC_ON_DB_ERROR: bool = false;
struct PostgresSqlClientWrapper {
client: Client,
update_account_stmt: Statement,
bulk_account_insert_stmt: Statement,
update_slot_with_parent_stmt: Statement,
update_slot_without_parent_stmt: Statement,
update_transaction_log_stmt: Statement,
}
pub struct SimplePostgresClient {
batch_size: usize,
pending_account_updates: Vec<DbAccountInfo>,
client: Mutex<PostgresSqlClientWrapper>,
}
struct PostgresClientWorker {
client: SimplePostgresClient,
/// Indicates whether the accounts notification during startup is done.
is_startup_done: bool,
}
impl Eq for DbAccountInfo {}
#[derive(Clone, PartialEq, Debug)]
pub struct DbAccountInfo {
pub pubkey: Vec<u8>,
pub lamports: i64,
pub owner: Vec<u8>,
pub executable: bool,
pub rent_epoch: i64,
pub data: Vec<u8>,
pub slot: i64,
pub write_version: i64,
}
pub(crate) fn abort() -> ! {
#[cfg(not(test))]
{
// standard error is usually redirected to a log file, cry for help on standard output as
// well
eprintln!("Validator process aborted. The validator log may contain further details");
std::process::exit(1);
}
#[cfg(test)]
panic!("process::exit(1) is intercepted for friendly test failure...");
}
impl DbAccountInfo {
fn new<T: ReadableAccountInfo>(account: &T, slot: u64) -> DbAccountInfo {
let data = account.data().to_vec();
Self {
pubkey: account.pubkey().to_vec(),
lamports: account.lamports() as i64,
owner: account.owner().to_vec(),
executable: account.executable(),
rent_epoch: account.rent_epoch() as i64,
data,
slot: slot as i64,
write_version: account.write_version(),
}
}
}
pub trait ReadableAccountInfo: Sized {
fn pubkey(&self) -> &[u8];
fn owner(&self) -> &[u8];
fn lamports(&self) -> i64;
fn executable(&self) -> bool;
fn rent_epoch(&self) -> i64;
fn data(&self) -> &[u8];
fn write_version(&self) -> i64;
}
impl ReadableAccountInfo for DbAccountInfo {
fn pubkey(&self) -> &[u8] {
&self.pubkey
}
fn owner(&self) -> &[u8] {
&self.owner
}
fn lamports(&self) -> i64 {
self.lamports
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch
}
fn data(&self) -> &[u8] {
&self.data
}
fn write_version(&self) -> i64 {
self.write_version
}
}
impl<'a> ReadableAccountInfo for ReplicaAccountInfo<'a> {
fn pubkey(&self) -> &[u8] {
self.pubkey
}
fn owner(&self) -> &[u8] {
self.owner
}
fn lamports(&self) -> i64 {
self.lamports as i64
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch as i64
}
fn data(&self) -> &[u8] {
self.data
}
fn write_version(&self) -> i64 {
self.write_version as i64
}
}
pub trait PostgresClient {
fn join(&mut self) -> thread::Result<()> {
Ok(())
}
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError>;
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError>;
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError>;
fn log_transaction(
&mut self,
transaction_log_info: LogTransactionRequest,
) -> Result<(), AccountsDbPluginError>;
}
impl SimplePostgresClient {
fn connect_to_db(
config: &AccountsDbPluginPostgresConfig,
) -> Result<Client, AccountsDbPluginError> {
let port = config.port.unwrap_or(DEFAULT_POSTGRES_PORT);
let connection_str = if let Some(connection_str) = &config.connection_str {
connection_str.clone()
} else {
if config.host.is_none() || config.user.is_none() {
let msg = format!(
"\"connection_str\": {:?}, or \"host\": {:?} \"user\": {:?} must be specified",
config.connection_str, config.host, config.user
);
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::ConfigurationError { msg },
)));
}
format!(
"host={} user={} port={}",
config.host.as_ref().unwrap(),
config.user.as_ref().unwrap(),
port
)
};
match Client::connect(&connection_str, NoTls) {
Err(err) => {
let msg = format!(
"Error in connecting to the PostgreSQL database: {:?} connection_str: {:?}",
err, connection_str
);
error!("{}", msg);
Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError { msg },
)))
}
Ok(client) => Ok(client),
}
}
fn build_bulk_account_insert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
let mut stmt = String::from("INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) VALUES");
for j in 0..batch_size {
let row = j * ACCOUNT_COLUMN_COUNT;
let val_str = format!(
"(${}, ${}, ${}, ${}, ${}, ${}, ${}, ${}, ${})",
row + 1,
row + 2,
row + 3,
row + 4,
row + 5,
row + 6,
row + 7,
row + 8,
row + 9,
);
if j == 0 {
stmt = format!("{} {}", &stmt, val_str);
} else {
stmt = format!("{}, {}", &stmt, val_str);
}
}
let handle_conflict = "ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
stmt = format!("{} {}", stmt, handle_conflict);
info!("{}", stmt);
let bulk_stmt = client.prepare(&stmt);
match bulk_stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
fn build_single_account_upsert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) \
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9) \
ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
fn build_slot_upsert_statement_with_parent(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO slot (slot, parent, status, updated_on) \
VALUES ($1, $2, $3, $4) \
ON CONFLICT (slot) DO UPDATE SET parent=excluded.parent, status=excluded.status, updated_on=excluded.updated_on";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the slot update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(stmt) => Ok(stmt),
}
}
fn build_slot_upsert_statement_without_parent(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO slot (slot, status, updated_on) \
VALUES ($1, $2, $3) \
ON CONFLICT (slot) DO UPDATE SET status=excluded.status, updated_on=excluded.updated_on";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the slot update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(stmt) => Ok(stmt),
}
}
/// Internal function for updating or inserting a single account
fn upsert_account_internal(
account: &DbAccountInfo,
statement: &Statement,
client: &mut Client,
) -> Result<(), AccountsDbPluginError> {
let lamports = account.lamports() as i64;
let rent_epoch = account.rent_epoch() as i64;
let updated_on = Utc::now().naive_utc();
let result = client.query(
statement,
&[
&account.pubkey(),
&account.slot,
&account.owner(),
&lamports,
&account.executable(),
&rent_epoch,
&account.data(),
&account.write_version(),
&updated_on,
],
);
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
Ok(())
}
/// Update or insert a single account
fn upsert_account(&mut self, account: &DbAccountInfo) -> Result<(), AccountsDbPluginError> {
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
Self::upsert_account_internal(account, statement, client)
}
/// Insert accounts in batch to reduce network overhead
fn insert_accounts_in_batch(
&mut self,
account: DbAccountInfo,
) -> Result<(), AccountsDbPluginError> {
self.pending_account_updates.push(account);
if self.pending_account_updates.len() == self.batch_size {
let mut measure = Measure::start("accountsdb-plugin-postgres-prepare-values");
let mut values: Vec<&(dyn types::ToSql + Sync)> =
Vec::with_capacity(self.batch_size * ACCOUNT_COLUMN_COUNT);
let updated_on = Utc::now().naive_utc();
for j in 0..self.batch_size {
let account = &self.pending_account_updates[j];
values.push(&account.pubkey);
values.push(&account.slot);
values.push(&account.owner);
values.push(&account.lamports);
values.push(&account.executable);
values.push(&account.rent_epoch);
values.push(&account.data);
values.push(&account.write_version);
values.push(&updated_on);
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-prepare-values-us",
measure.as_us() as usize,
10000,
10000
);
let mut measure = Measure::start("accountsdb-plugin-postgres-update-account");
let client = self.client.get_mut().unwrap();
let result = client
.client
.query(&client.bulk_account_insert_stmt, &values);
self.pending_account_updates.clear();
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-us",
measure.as_us() as usize,
10000,
10000
);
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-count",
self.batch_size,
10000,
10000
);
}
Ok(())
}
/// Flush any leftover accounts in the batch that were not processed in the last batch
fn flush_buffered_writes(&mut self) -> Result<(), AccountsDbPluginError> {
if self.pending_account_updates.is_empty() {
return Ok(());
}
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
for account in self.pending_account_updates.drain(..) {
Self::upsert_account_internal(&account, statement, client)?;
}
Ok(())
}
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating SimplePostgresClient...");
let mut client = Self::connect_to_db(config)?;
let bulk_account_insert_stmt =
Self::build_bulk_account_insert_statement(&mut client, config)?;
let update_account_stmt = Self::build_single_account_upsert_statement(&mut client, config)?;
let update_slot_with_parent_stmt =
Self::build_slot_upsert_statement_with_parent(&mut client, config)?;
let update_slot_without_parent_stmt =
Self::build_slot_upsert_statement_without_parent(&mut client, config)?;
let update_transaction_log_stmt =
Self::build_transaction_info_upsert_statement(&mut client, config)?;
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
info!("Created SimplePostgresClient.");
Ok(Self {
batch_size,
pending_account_updates: Vec::with_capacity(batch_size),
client: Mutex::new(PostgresSqlClientWrapper {
client,
update_account_stmt,
bulk_account_insert_stmt,
update_slot_with_parent_stmt,
update_slot_without_parent_stmt,
update_transaction_log_stmt,
}),
})
}
}
impl PostgresClient for SimplePostgresClient {
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
trace!(
"Updating account {} with owner {} at slot {}",
bs58::encode(account.pubkey()).into_string(),
bs58::encode(account.owner()).into_string(),
account.slot,
);
if !is_startup {
return self.upsert_account(&account);
}
self.insert_accounts_in_batch(account)
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
info!("Updating slot {:?} at with status {:?}", slot, status);
let slot = slot as i64; // postgres only supports i64
let parent = parent.map(|parent| parent as i64);
let updated_on = Utc::now().naive_utc();
let status_str = status.as_str();
let client = self.client.get_mut().unwrap();
let result = match parent {
Some(parent) => client.client.execute(
&client.update_slot_with_parent_stmt,
&[&slot, &parent, &status_str, &updated_on],
),
None => client.client.execute(
&client.update_slot_without_parent_stmt,
&[&slot, &status_str, &updated_on],
),
};
match result {
Err(err) => {
let msg = format!(
"Failed to persist the update of slot to the PostgreSQL database. Error: {:?}",
err
);
error!("{:?}", msg);
return Err(AccountsDbPluginError::SlotStatusUpdateError { msg });
}
Ok(rows) => {
assert_eq!(1, rows, "Expected one rows to be updated a time");
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
self.flush_buffered_writes()
}
fn log_transaction(
&mut self,
transaction_log_info: LogTransactionRequest,
) -> Result<(), AccountsDbPluginError> {
self.log_transaction_impl(transaction_log_info)
}
}
struct UpdateAccountRequest {
account: DbAccountInfo,
is_startup: bool,
}
struct UpdateSlotRequest {
slot: u64,
parent: Option<u64>,
slot_status: SlotStatus,
}
#[warn(clippy::large_enum_variant)]
enum DbWorkItem {
UpdateAccount(Box<UpdateAccountRequest>),
UpdateSlot(Box<UpdateSlotRequest>),
LogTransaction(Box<LogTransactionRequest>),
}
impl PostgresClientWorker {
fn new(config: AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
let result = SimplePostgresClient::new(&config);
match result {
Ok(client) => Ok(PostgresClientWorker {
client,
is_startup_done: false,
}),
Err(err) => {
error!("Error in creating SimplePostgresClient: {}", err);
Err(err)
}
}
}
fn do_work(
&mut self,
receiver: Receiver<DbWorkItem>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
panic_on_db_errors: bool,
) -> Result<(), AccountsDbPluginError> {
while !exit_worker.load(Ordering::Relaxed) {
let mut measure = Measure::start("accountsdb-plugin-postgres-worker-recv");
let work = receiver.recv_timeout(Duration::from_millis(500));
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-worker-recv-us",
measure.as_us() as usize,
100000,
100000
);
match work {
Ok(work) => match work {
DbWorkItem::UpdateAccount(request) => {
if let Err(err) = self
.client
.update_account(request.account, request.is_startup)
{
error!("Failed to update account: ({})", err);
if panic_on_db_errors {
abort();
}
}
}
DbWorkItem::UpdateSlot(request) => {
if let Err(err) = self.client.update_slot_status(
request.slot,
request.parent,
request.slot_status,
) {
error!("Failed to update slot: ({})", err);
if panic_on_db_errors {
abort();
}
}
}
DbWorkItem::LogTransaction(transaction_log_info) => {
if let Err(err) = self.client.log_transaction(*transaction_log_info) {
error!("Failed to update transaction: ({})", err);
if panic_on_db_errors {
abort();
}
}
}
},
Err(err) => match err {
RecvTimeoutError::Timeout => {
if !self.is_startup_done && is_startup_done.load(Ordering::Relaxed) {
if let Err(err) = self.client.notify_end_of_startup() {
error!("Error in notifying end of startup: ({})", err);
if panic_on_db_errors {
abort();
}
}
self.is_startup_done = true;
startup_done_count.fetch_add(1, Ordering::Relaxed);
}
continue;
}
_ => {
error!("Error in receiving the item {:?}", err);
if panic_on_db_errors {
abort();
}
break;
}
},
}
}
Ok(())
}
}
pub struct ParallelPostgresClient {
workers: Vec<JoinHandle<Result<(), AccountsDbPluginError>>>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
initialized_worker_count: Arc<AtomicUsize>,
sender: Sender<DbWorkItem>,
last_report: AtomicInterval,
}
impl ParallelPostgresClient {
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating ParallelPostgresClient...");
let (sender, receiver) = bounded(MAX_ASYNC_REQUESTS);
let exit_worker = Arc::new(AtomicBool::new(false));
let mut workers = Vec::default();
let is_startup_done = Arc::new(AtomicBool::new(false));
let startup_done_count = Arc::new(AtomicUsize::new(0));
let worker_count = config.threads.unwrap_or(DEFAULT_THREADS_COUNT);
let initialized_worker_count = Arc::new(AtomicUsize::new(0));
for i in 0..worker_count {
let cloned_receiver = receiver.clone();
let exit_clone = exit_worker.clone();
let is_startup_done_clone = is_startup_done.clone();
let startup_done_count_clone = startup_done_count.clone();
let initialized_worker_count_clone = initialized_worker_count.clone();
let config = config.clone();
let worker = Builder::new()
.name(format!("worker-{}", i))
.spawn(move || -> Result<(), AccountsDbPluginError> {
let panic_on_db_errors = *config
.panic_on_db_errors
.as_ref()
.unwrap_or(&DEFAULT_PANIC_ON_DB_ERROR);
let result = PostgresClientWorker::new(config);
match result {
Ok(mut worker) => {
initialized_worker_count_clone.fetch_add(1, Ordering::Relaxed);
worker.do_work(
cloned_receiver,
exit_clone,
is_startup_done_clone,
startup_done_count_clone,
panic_on_db_errors,
)?;
Ok(())
}
Err(err) => {
error!("Error when making connection to database: ({})", err);
if panic_on_db_errors {
abort();
}
Err(err)
}
}
})
.unwrap();
workers.push(worker);
}
info!("Created ParallelPostgresClient.");
Ok(Self {
last_report: AtomicInterval::default(),
workers,
exit_worker,
is_startup_done,
startup_done_count,
initialized_worker_count,
sender,
})
}
pub fn join(&mut self) -> thread::Result<()> {
self.exit_worker.store(true, Ordering::Relaxed);
while !self.workers.is_empty() {
let worker = self.workers.pop();
if worker.is_none() {
break;
}
let worker = worker.unwrap();
let result = worker.join().unwrap();
if result.is_err() {
error!("The worker thread has failed: {:?}", result);
}
}
Ok(())
}
pub fn update_account(
&mut self,
account: &ReplicaAccountInfo,
slot: u64,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
if self.last_report.should_update(30000) {
datapoint_debug!(
"postgres-plugin-stats",
("message-queue-length", self.sender.len() as i64, i64),
);
}
let mut measure = Measure::start("accountsdb-plugin-posgres-create-work-item");
let wrk_item = DbWorkItem::UpdateAccount(Box::new(UpdateAccountRequest {
account: DbAccountInfo::new(account, slot),
is_startup,
}));
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-create-work-item-us",
measure.as_us() as usize,
100000,
100000
);
let mut measure = Measure::start("accountsdb-plugin-posgres-send-msg");
if let Err(err) = self.sender.send(wrk_item) {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!(
"Failed to update the account {:?}, error: {:?}",
bs58::encode(account.pubkey()).into_string(),
err
),
});
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-send-msg-us",
measure.as_us() as usize,
100000,
100000
);
Ok(())
}
pub fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
if let Err(err) = self
.sender
.send(DbWorkItem::UpdateSlot(Box::new(UpdateSlotRequest {
slot,
parent,
slot_status: status,
})))
{
return Err(AccountsDbPluginError::SlotStatusUpdateError {
msg: format!("Failed to update the slot {:?}, error: {:?}", slot, err),
});
}
Ok(())
}
pub fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
info!("Notifying the end of startup");
// Ensure all items in the queue have been received by the workers
while !self.sender.is_empty() {
sleep(Duration::from_millis(100));
}
self.is_startup_done.store(true, Ordering::Relaxed);
// Wait for all worker threads to be done with flushing
while self.startup_done_count.load(Ordering::Relaxed)
!= self.initialized_worker_count.load(Ordering::Relaxed)
{
info!(
"Startup done count: {}, good worker thread count: {}",
self.startup_done_count.load(Ordering::Relaxed),
self.initialized_worker_count.load(Ordering::Relaxed)
);
sleep(Duration::from_millis(100));
}
info!("Done with notifying the end of startup");
Ok(())
}
}
pub struct PostgresClientBuilder {}
impl PostgresClientBuilder {
pub fn build_pararallel_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<ParallelPostgresClient, AccountsDbPluginError> {
ParallelPostgresClient::new(config)
}
pub fn build_simple_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<SimplePostgresClient, AccountsDbPluginError> {
SimplePostgresClient::new(config)
}
}
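
The `ParallelPostgresClient` above is an instance of a common bounded-channel fan-out pattern: a capped `crossbeam_channel` (sized by `MAX_ASYNC_REQUESTS`) gives senders backpressure, each worker drains it with a receive timeout so it can periodically poll an exit flag, and shutdown flips the flag and joins the threads. A minimal self-contained sketch of just that pattern, independent of the PostgreSQL specifics and with placeholder names and sizes:

```rust
use {
    crossbeam_channel::{bounded, RecvTimeoutError},
    std::{
        sync::{
            atomic::{AtomicBool, Ordering},
            Arc,
        },
        thread,
        time::Duration,
    },
};

fn main() {
    // A bounded channel blocks senders once it is full, providing backpressure.
    let (sender, receiver) = bounded::<u64>(1024);
    let exit = Arc::new(AtomicBool::new(false));

    let mut workers = Vec::new();
    for i in 0..4 {
        let receiver = receiver.clone();
        let exit = exit.clone();
        workers.push(thread::spawn(move || {
            // Like do_work above: poll the exit flag between receive timeouts.
            while !exit.load(Ordering::Relaxed) {
                match receiver.recv_timeout(Duration::from_millis(500)) {
                    Ok(item) => println!("worker-{} processed item {}", i, item),
                    Err(RecvTimeoutError::Timeout) => continue,
                    Err(RecvTimeoutError::Disconnected) => break,
                }
            }
        }));
    }

    for item in 0..10u64 {
        sender.send(item).unwrap();
    }

    // Flipping the flag does not drain the queue; items still buffered may go
    // unprocessed. The plugin handles draining separately (notify_end_of_startup).
    exit.store(true, Ordering::Relaxed);
    for worker in workers {
        worker.join().unwrap();
    }
}
```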


@@ -1,194 +0,0 @@
/// The transaction selector is responsible for filtering transactions
/// in the plugin framework.
use {log::*, solana_sdk::pubkey::Pubkey, std::collections::HashSet};
pub(crate) struct TransactionSelector {
pub mentioned_addresses: HashSet<Vec<u8>>,
pub select_all_transactions: bool,
pub select_all_vote_transactions: bool,
}
#[allow(dead_code)]
impl TransactionSelector {
pub fn default() -> Self {
Self {
mentioned_addresses: HashSet::default(),
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Create a selector based on the mentioned addresses
/// To select all transactions use ["*"] or ["all"]
/// To select all vote transactions, use ["all_votes"]
/// To select transactions mentioning specific addresses use ["<pubkey1>", "<pubkey2>", ...]
pub fn new(mentioned_addresses: &[String]) -> Self {
info!(
"Creating TransactionSelector from addresses: {:?}",
mentioned_addresses
);
let select_all_transactions = mentioned_addresses
.iter()
.any(|key| key == "*" || key == "all");
if select_all_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let select_all_vote_transactions = mentioned_addresses.iter().any(|key| key == "all_votes");
if select_all_vote_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let mentioned_addresses = mentioned_addresses
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
Self {
mentioned_addresses,
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Check if a transaction is of interest.
pub fn is_transaction_selected(
&self,
is_vote: bool,
mentioned_addresses: Box<dyn Iterator<Item = &Pubkey> + '_>,
) -> bool {
if !self.is_enabled() {
return false;
}
if self.select_all_transactions || (self.select_all_vote_transactions && is_vote) {
return true;
}
for address in mentioned_addresses {
if self.mentioned_addresses.contains(address.as_ref()) {
return true;
}
}
false
}
/// Check if any transaction is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_transactions
|| self.select_all_vote_transactions
|| !self.mentioned_addresses.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_select_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[pubkey1.to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_using_wildcard() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["*".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_all() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_vote_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all_votes".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
#[test]
fn test_select_no_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[]);
assert!(!selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
}
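
As with the accounts selector, the transaction selector composes naturally with an iterator of (is_vote, account keys) pairs, which is how `notify_transaction` uses it. A hypothetical crate-internal sketch, with placeholder keys:

```rust
#[cfg(test)]
mod selector_filter_sketch {
    use super::*;

    // Hypothetical sketch; not part of the original plugin sources.
    #[test]
    fn filter_vote_transactions() {
        // "all_votes" selects every vote transaction and nothing else.
        let selector = TransactionSelector::new(&["all_votes".to_string()]);
        let mentioned = Pubkey::new_unique();

        let transactions = vec![(true, vec![mentioned]), (false, vec![mentioned])];
        let selected: Vec<_> = transactions
            .iter()
            .filter(|(is_vote, keys)| {
                selector.is_transaction_selected(*is_vote, Box::new(keys.iter()))
            })
            .collect();

        // Only the vote transaction passes the filter.
        assert_eq!(selected.len(), 1);
        assert!(selected[0].0);
    }
}
```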


@@ -1,8 +1,8 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-banking-bench"
version = "1.10.0"
version = "1.6.28"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,21 +10,20 @@ publish = false
[dependencies]
clap = "2.33.1"
crossbeam-channel = "0.5"
log = "0.4.14"
crossbeam-channel = "0.4"
log = "0.4.11"
rand = "0.7.0"
rayon = "1.5.1"
solana-core = { path = "../core", version = "=1.10.0" }
solana-gossip = { path = "../gossip", version = "=1.10.0" }
solana-ledger = { path = "../ledger", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-perf = { path = "../perf", version = "=1.10.0" }
solana-poh = { path = "../poh", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
rayon = "1.5.0"
solana-core = { path = "../core", version = "=1.6.28" }
solana-clap-utils = { path = "../clap-utils", version = "=1.6.28" }
solana-streamer = { path = "../streamer", version = "=1.6.28" }
solana-perf = { path = "../perf", version = "=1.6.28" }
solana-ledger = { path = "../ledger", version = "=1.6.28" }
solana-logger = { path = "../logger", version = "=1.6.28" }
solana-runtime = { path = "../runtime", version = "=1.6.28" }
solana-measure = { path = "../measure", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
solana-version = { path = "../version", version = "=1.6.28" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -1,37 +1,38 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, value_t, App, Arg},
crossbeam_channel::unbounded,
log::*,
rand::{thread_rng, Rng},
rayon::prelude::*,
solana_core::banking_stage::BankingStage,
solana_gossip::cluster_info::{ClusterInfo, Node},
solana_ledger::{
use clap::{crate_description, crate_name, value_t, App, Arg};
use crossbeam_channel::unbounded;
use log::*;
use rand::{thread_rng, Rng};
use rayon::prelude::*;
use solana_core::{
banking_stage::{create_test_recorder, BankingStage},
cluster_info::ClusterInfo,
cluster_info::Node,
poh_recorder::PohRecorder,
poh_recorder::WorkingBankEntry,
};
use solana_ledger::{
blockstore::Blockstore,
genesis_utils::{create_genesis_config, GenesisConfigInfo},
get_tmp_ledger_path,
},
solana_measure::measure::Measure,
solana_perf::packet::to_packet_batches,
solana_poh::poh_recorder::{create_test_recorder, PohRecorder, WorkingBankEntry},
solana_runtime::{
};
use solana_measure::measure::Measure;
use solana_perf::packet::to_packets_chunked;
use solana_runtime::{
accounts_background_service::AbsRequestSender, bank::Bank, bank_forks::BankForks,
cost_model::CostModel,
},
solana_sdk::{
};
use solana_sdk::{
hash::Hash,
signature::{Keypair, Signature},
signature::Keypair,
signature::Signature,
system_transaction,
timing::{duration_as_us, timestamp},
transaction::Transaction,
},
solana_streamer::socket::SocketAddrSpace,
std::{
sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex, RwLock},
};
use std::{
sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex},
thread::sleep,
time::{Duration, Instant},
},
};
fn check_txs(
@@ -77,7 +78,7 @@ fn make_accounts_txs(
.into_par_iter()
.map(|_| {
let mut new = dummy.clone();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen::<u8>()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
if !same_payer {
new.message.account_keys[0] = solana_sdk::pubkey::new_rand();
}
@@ -168,9 +169,8 @@ fn main() {
let (verified_sender, verified_receiver) = unbounded();
let (vote_sender, vote_receiver) = unbounded();
let (tpu_vote_sender, tpu_vote_receiver) = unbounded();
let (replay_vote_sender, _replay_vote_receiver) = unbounded();
let bank0 = Bank::new_for_benches(&genesis_config);
let bank0 = Bank::new(&genesis_config);
let mut bank_forks = BankForks::new(bank0);
let mut bank = bank_forks.working_bank();
@@ -189,7 +189,7 @@ fn main() {
genesis_config.hash(),
);
// Ignore any pesky duplicate signature errors in the case where we are using a single payer
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen::<u8>()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
fund.signatures = vec![Signature::new(&sig[0..64])];
let x = bank.process_transaction(&fund);
x.unwrap();
@@ -199,20 +199,19 @@ fn main() {
if !skip_sanity {
//sanity check, make sure all the transactions can execute sequentially
transactions.iter().for_each(|tx| {
let res = bank.process_transaction(tx);
let res = bank.process_transaction(&tx);
assert!(res.is_ok(), "sanity test transactions error: {:?}", res);
});
bank.clear_signatures();
//sanity check, make sure all the transactions can execute in parallel
let res = bank.process_transactions(transactions.iter());
let res = bank.process_transactions(&transactions);
for r in res {
assert!(r.is_ok(), "sanity parallel execution error: {:?}", r);
}
bank.clear_signatures();
}
let mut verified: Vec<_> = to_packet_batches(&transactions, packets_per_chunk);
let mut verified: Vec<_> = to_packets_chunked(&transactions, packets_per_chunk);
let ledger_path = get_tmp_ledger_path!();
{
let blockstore = Arc::new(
@@ -220,21 +219,15 @@ fn main() {
);
let (exit, poh_recorder, poh_service, signal_receiver) =
create_test_recorder(&bank, &blockstore, None);
let cluster_info = ClusterInfo::new(
Node::new_localhost().info,
Arc::new(Keypair::new()),
SocketAddrSpace::Unspecified,
);
let cluster_info = ClusterInfo::new_with_invalid_keypair(Node::new_localhost().info);
let cluster_info = Arc::new(cluster_info);
let banking_stage = BankingStage::new(
&cluster_info,
&poh_recorder,
verified_receiver,
tpu_vote_receiver,
vote_receiver,
None,
replay_vote_sender,
Arc::new(RwLock::new(CostModel::default())),
);
poh_recorder.lock().unwrap().set_bank(&bank);
@@ -314,10 +307,11 @@ fn main() {
tx_total_us += duration_as_us(&now.elapsed());
let mut poh_time = Measure::start("poh_time");
poh_recorder
.lock()
.unwrap()
.reset(bank.clone(), Some((bank.slot(), bank.slot() + 1)));
poh_recorder.lock().unwrap().reset(
bank.last_blockhash(),
bank.slot(),
Some((bank.slot(), bank.slot() + 1)),
);
poh_time.stop();
let mut new_bank_time = Measure::start("new_bank");
@@ -361,10 +355,10 @@ fn main() {
if bank.slot() > 0 && bank.slot() % 16 == 0 {
for tx in transactions.iter_mut() {
tx.message.recent_blockhash = bank.last_blockhash();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen::<u8>()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
tx.signatures[0] = Signature::new(&sig[0..64]);
}
verified = to_packet_batches(&transactions.clone(), packets_per_chunk);
verified = to_packets_chunked(&transactions.clone(), packets_per_chunk);
}
start += chunk_len;
@@ -386,7 +380,6 @@ fn main() {
);
drop(verified_sender);
drop(tpu_vote_sender);
drop(vote_sender);
exit.store(true, Ordering::Relaxed);
banking_stage.join().unwrap();

View File

@@ -1,27 +1,30 @@
[package]
name = "solana-banks-client"
version = "1.10.0"
version = "1.6.28"
description = "Solana banks client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-banks-client"
edition = "2021"
edition = "2018"
[dependencies]
borsh = "0.9.1"
bincode = "1.3.1"
borsh = "0.9.0"
borsh-derive = "0.9.0"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.10.0" }
solana-program = { path = "../sdk/program", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
tarpc = { version = "0.26.2", features = ["full"] }
mio = "0.7.6"
solana-banks-interface = { path = "../banks-interface", version = "=1.6.28" }
solana-program = { path = "../sdk/program", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
tarpc = { version = "0.24.1", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
[dev-dependencies]
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-banks-server = { path = "../banks-server", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.6.28" }
solana-banks-server = { path = "../banks-server", version = "=1.6.28" }
[lib]
crate-type = ["lib"]

View File

@@ -5,33 +5,31 @@
//! but they are undocumented, may change over time, and are generally more
//! cumbersome to use.
use borsh::BorshDeserialize;
use futures::{future::join_all, Future, FutureExt};
pub use solana_banks_interface::{BanksClient as TarpcClient, TransactionStatus};
use {
borsh::BorshDeserialize,
futures::{future::join_all, Future, FutureExt},
solana_banks_interface::{BanksRequest, BanksResponse},
solana_program::{
use solana_banks_interface::{BanksRequest, BanksResponse};
use solana_program::{
clock::Slot, fee_calculator::FeeCalculator, hash::Hash, program_pack::Pack, pubkey::Pubkey,
rent::Rent, sysvar::Sysvar,
},
solana_sdk::{
rent::Rent, sysvar,
};
use solana_sdk::{
account::{from_account, Account},
commitment_config::CommitmentLevel,
message::Message,
signature::Signature,
transaction::{self, Transaction},
transport,
},
std::io::{self, Error, ErrorKind},
tarpc::{
client::{self, NewClient, RequestDispatch},
context::{self, Context},
serde_transport::tcp,
ClientMessage, Response, Transport,
},
tokio::{net::ToSocketAddrs, time::Duration},
tokio_serde::formats::Bincode,
};
use std::io::{self, Error, ErrorKind};
use tarpc::{
client::{self, channel::RequestDispatch, NewClient},
context::{self, Context},
rpc::{ClientMessage, Response},
serde_transport::tcp,
Transport,
};
use tokio::{net::ToSocketAddrs, time::Duration};
use tokio_serde::formats::Bincode;
// This exists only for backward compatibility
pub trait BanksClientExt {}
@@ -61,16 +59,11 @@ impl BanksClient {
self.inner.send_transaction_with_context(ctx, transaction)
}
#[deprecated(
since = "1.9.0",
note = "Please use `get_fee_for_message` or `is_blockhash_valid` instead"
)]
pub fn get_fees_with_commitment_and_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, u64)>> + '_ {
#[allow(deprecated)]
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, Slot)>> + '_ {
self.inner
.get_fees_with_commitment_and_context(ctx, commitment)
}
@@ -92,14 +85,6 @@ impl BanksClient {
self.inner.get_slot_with_context(ctx, commitment)
}
pub fn get_block_height_with_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Slot>> + '_ {
self.inner.get_block_height_with_context(ctx, commitment)
}
pub fn process_transaction_with_commitment_and_context(
&mut self,
ctx: Context,
@@ -133,38 +118,28 @@ impl BanksClient {
/// Return the fee parameters associated with a recent, rooted blockhash. The cluster
/// will use the transaction's blockhash to look up these same fee parameters and
/// use them to calculate the transaction fee.
#[deprecated(
since = "1.9.0",
note = "Please use `get_fee_for_message` or `is_blockhash_valid` instead"
)]
pub fn get_fees(
&mut self,
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, u64)>> + '_ {
#[allow(deprecated)]
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, Slot)>> + '_ {
self.get_fees_with_commitment_and_context(context::current(), CommitmentLevel::default())
}
/// Return the cluster Sysvar
pub fn get_sysvar<T: Sysvar>(&mut self) -> impl Future<Output = io::Result<T>> + '_ {
self.get_account(T::id()).map(|result| {
let sysvar = result?
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Sysvar not present"))?;
from_account::<T, _>(&sysvar)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Failed to deserialize sysvar"))
})
}
/// Return the cluster rent
pub fn get_rent(&mut self) -> impl Future<Output = io::Result<Rent>> + '_ {
self.get_sysvar::<Rent>()
self.get_account(sysvar::rent::id()).map(|result| {
let rent_sysvar = result?
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Rent sysvar not present"))?;
from_account::<Rent, _>(&rent_sysvar).ok_or_else(|| {
io::Error::new(io::ErrorKind::Other, "Failed to deserialize Rent sysvar")
})
})
}
/// Return a recent, rooted blockhash from the server. The cluster will only accept
/// transactions with a blockhash that has not yet expired. Use the `get_fees`
/// method to get both a blockhash and the blockhash's last valid slot.
#[deprecated(since = "1.9.0", note = "Please use `get_latest_blockhash` instead")]
pub fn get_recent_blockhash(&mut self) -> impl Future<Output = io::Result<Hash>> + '_ {
self.get_latest_blockhash()
self.get_fees().map(|result| Ok(result?.1))
}
/// Send a transaction and return after the transaction has been rejected or
@@ -217,18 +192,12 @@ impl BanksClient {
self.process_transactions_with_commitment(transactions, CommitmentLevel::default())
}
/// Return the most recent rooted slot. All transactions at or below this slot
/// are said to be finalized. The cluster will not fork to a higher slot.
/// Return the most recent rooted slot height. All transactions at or below this height
/// are said to be finalized. The cluster will not fork to a higher slot height.
pub fn get_root_slot(&mut self) -> impl Future<Output = io::Result<Slot>> + '_ {
self.get_slot_with_context(context::current(), CommitmentLevel::default())
}
/// Return the most recent rooted block height. All transactions at or below this height
/// are said to be finalized. The cluster will not fork to a higher block height.
pub fn get_root_block_height(&mut self) -> impl Future<Output = io::Result<Slot>> + '_ {
self.get_block_height_with_context(context::current(), CommitmentLevel::default())
}
/// Return the account at the given address at the slot corresponding to the given
/// commitment level. If the account is not found, None is returned.
pub fn get_account_with_commitment(
@@ -324,41 +293,6 @@ impl BanksClient {
// Convert Vec<Result<_, _>> to Result<Vec<_>>
statuses.into_iter().collect()
}
pub fn get_latest_blockhash(&mut self) -> impl Future<Output = io::Result<Hash>> + '_ {
self.get_latest_blockhash_with_commitment(CommitmentLevel::default())
.map(|result| {
result?
.map(|x| x.0)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "account not found"))
})
}
pub fn get_latest_blockhash_with_commitment(
&mut self,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<(Hash, u64)>>> + '_ {
self.get_latest_blockhash_with_commitment_and_context(context::current(), commitment)
}
pub fn get_latest_blockhash_with_commitment_and_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<(Hash, u64)>>> + '_ {
self.inner
.get_latest_blockhash_with_commitment_and_context(ctx, commitment)
}
pub fn get_fee_for_message_with_commitment_and_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
message: Message,
) -> impl Future<Output = io::Result<Option<u64>>> + '_ {
self.inner
.get_fee_for_message_with_commitment_and_context(ctx, commitment, message)
}
}
pub async fn start_client<C>(transport: C) -> io::Result<BanksClient>
@@ -366,31 +300,29 @@ where
C: Transport<ClientMessage<BanksRequest>, Response<BanksResponse>> + Send + 'static,
{
Ok(BanksClient {
inner: TarpcClient::new(client::Config::default(), transport).spawn(),
inner: TarpcClient::new(client::Config::default(), transport).spawn()?,
})
}
pub async fn start_tcp_client<T: ToSocketAddrs>(addr: T) -> io::Result<BanksClient> {
let transport = tcp::connect(addr, Bincode::default).await?;
Ok(BanksClient {
inner: TarpcClient::new(client::Config::default(), transport).spawn(),
inner: TarpcClient::new(client::Config::default(), transport).spawn()?,
})
}
#[cfg(test)]
mod tests {
use {
super::*,
solana_banks_server::banks_server::start_local_server,
solana_runtime::{
use super::*;
use solana_banks_server::banks_server::start_local_server;
use solana_runtime::{
bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache,
genesis_utils::create_genesis_config,
},
solana_sdk::{message::Message, signature::Signer, system_instruction},
std::sync::{Arc, RwLock},
tarpc::transport,
tokio::{runtime::Runtime, time::sleep},
};
use solana_sdk::{message::Message, signature::Signer, system_instruction};
use std::sync::{Arc, RwLock};
use tarpc::transport;
use tokio::{runtime::Runtime, time::sleep};
#[test]
fn test_banks_client_new() {
@@ -405,7 +337,7 @@ mod tests {
// `runtime.block_on()` just once, to run all the async code.
let genesis = create_genesis_config(10);
let bank = Bank::new_for_tests(&genesis.genesis_config);
let bank = Bank::new(&genesis.genesis_config);
let slot = bank.slot();
let block_commitment_cache = Arc::new(RwLock::new(
BlockCommitmentCache::new_for_tests_with_slots(slot, slot),
@@ -418,12 +350,10 @@ mod tests {
let message = Message::new(&[instruction], Some(&mint_pubkey));
Runtime::new()?.block_on(async {
let client_transport =
start_local_server(bank_forks, block_commitment_cache, Duration::from_millis(1))
.await;
let client_transport = start_local_server(bank_forks, block_commitment_cache).await;
let mut banks_client = start_client(client_transport).await?;
let recent_blockhash = banks_client.get_latest_blockhash().await?;
let recent_blockhash = banks_client.get_recent_blockhash().await?;
let transaction = Transaction::new(&[&genesis.mint_keypair], message, recent_blockhash);
banks_client.process_transaction(transaction).await.unwrap();
assert_eq!(banks_client.get_balance(bob_pubkey).await?, 1);
@@ -438,7 +368,7 @@ mod tests {
// server-side functionality is available to the client.
let genesis = create_genesis_config(10);
let bank = Bank::new_for_tests(&genesis.genesis_config);
let bank = Bank::new(&genesis.genesis_config);
let slot = bank.slot();
let block_commitment_cache = Arc::new(RwLock::new(
BlockCommitmentCache::new_for_tests_with_slots(slot, slot),
@@ -447,18 +377,13 @@ mod tests {
let mint_pubkey = &genesis.mint_keypair.pubkey();
let bob_pubkey = solana_sdk::pubkey::new_rand();
let instruction = system_instruction::transfer(mint_pubkey, &bob_pubkey, 1);
let message = Message::new(&[instruction], Some(mint_pubkey));
let instruction = system_instruction::transfer(&mint_pubkey, &bob_pubkey, 1);
let message = Message::new(&[instruction], Some(&mint_pubkey));
Runtime::new()?.block_on(async {
let client_transport =
start_local_server(bank_forks, block_commitment_cache, Duration::from_millis(1))
.await;
let client_transport = start_local_server(bank_forks, block_commitment_cache).await;
let mut banks_client = start_client(client_transport).await?;
let (recent_blockhash, last_valid_block_height) = banks_client
.get_latest_blockhash_with_commitment(CommitmentLevel::default())
.await?
.unwrap();
let (_, recent_blockhash, last_valid_slot) = banks_client.get_fees().await?;
let transaction = Transaction::new(&[&genesis.mint_keypair], message, recent_blockhash);
let signature = transaction.signatures[0];
banks_client.send_transaction(transaction).await?;
@@ -466,8 +391,8 @@ mod tests {
let mut status = banks_client.get_transaction_status(signature).await?;
while status.is_none() {
let root_block_height = banks_client.get_root_block_height().await?;
if root_block_height > last_valid_block_height {
let root_slot = banks_client.get_root_slot().await?;
if root_slot > last_valid_slot {
break;
}
sleep(Duration::from_millis(100)).await;

View File

@@ -1,18 +1,22 @@
[package]
name = "solana-banks-interface"
version = "1.10.0"
version = "1.6.28"
description = "Solana banks RPC interface"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-banks-interface"
edition = "2021"
edition = "2018"
[dependencies]
serde = { version = "1.0.131", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
tarpc = { version = "0.26.2", features = ["full"] }
mio = "0.7.6"
serde = { version = "1.0.122", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
tarpc = { version = "0.24.1", features = ["full"] }
[dev-dependencies]
tokio = { version = "1", features = ["full"] }
[lib]
crate-type = ["lib"]

View File

@@ -1,18 +1,13 @@
#![allow(deprecated)]
use {
serde::{Deserialize, Serialize},
solana_sdk::{
use serde::{Deserialize, Serialize};
use solana_sdk::{
account::Account,
clock::Slot,
commitment_config::CommitmentLevel,
fee_calculator::FeeCalculator,
hash::Hash,
message::Message,
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction, TransactionError},
},
};
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
@@ -33,17 +28,12 @@ pub struct TransactionStatus {
#[tarpc::service]
pub trait Banks {
async fn send_transaction_with_context(transaction: Transaction);
#[deprecated(
since = "1.9.0",
note = "Please use `get_fee_for_message_with_commitment_and_context` instead"
)]
async fn get_fees_with_commitment_and_context(
commitment: CommitmentLevel,
) -> (FeeCalculator, Hash, Slot);
async fn get_transaction_status_with_context(signature: Signature)
-> Option<TransactionStatus>;
async fn get_slot_with_context(commitment: CommitmentLevel) -> Slot;
async fn get_block_height_with_context(commitment: CommitmentLevel) -> u64;
async fn process_transaction_with_commitment_and_context(
transaction: Transaction,
commitment: CommitmentLevel,
@@ -52,22 +42,12 @@ pub trait Banks {
address: Pubkey,
commitment: CommitmentLevel,
) -> Option<Account>;
async fn get_latest_blockhash_with_context() -> Hash;
async fn get_latest_blockhash_with_commitment_and_context(
commitment: CommitmentLevel,
) -> Option<(Hash, u64)>;
async fn get_fee_for_message_with_commitment_and_context(
commitment: CommitmentLevel,
message: Message,
) -> Option<u64>;
}
#[cfg(test)]
mod tests {
use {
super::*,
tarpc::{client, transport},
};
use super::*;
use tarpc::{client, transport};
#[test]
fn test_banks_client_new() {

View File

@@ -1,22 +1,24 @@
[package]
name = "solana-banks-server"
version = "1.10.0"
version = "1.6.28"
description = "Solana banks server"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-banks-server"
edition = "2021"
edition = "2018"
[dependencies]
bincode = "1.3.3"
bincode = "1.3.1"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.10.0" }
tarpc = { version = "0.26.2", features = ["full"] }
log = "0.4.11"
mio = "0.7.6"
solana-banks-interface = { path = "../banks-interface", version = "=1.6.28" }
solana-runtime = { path = "../runtime", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
solana-metrics = { path = "../metrics", version = "=1.6.28" }
tarpc = { version = "0.24.1", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
tokio-stream = "0.1"

View File

@@ -1,28 +1,24 @@
use {
bincode::{deserialize, serialize},
futures::{future, prelude::stream::StreamExt},
solana_banks_interface::{
use crate::send_transaction_service::{SendTransactionService, TransactionInfo};
use bincode::{deserialize, serialize};
use futures::{
future,
prelude::stream::{self, StreamExt},
};
use solana_banks_interface::{
Banks, BanksRequest, BanksResponse, TransactionConfirmationStatus, TransactionStatus,
},
solana_runtime::{bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache},
solana_sdk::{
};
use solana_runtime::{bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache};
use solana_sdk::{
account::Account,
clock::Slot,
commitment_config::CommitmentLevel,
feature_set::FeatureSet,
fee_calculator::FeeCalculator,
hash::Hash,
message::{Message, SanitizedMessage},
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction},
},
solana_send_transaction_service::{
send_transaction_service::{SendTransactionService, TransactionInfo},
tpu_info::NullTpuInfo,
},
std::{
convert::TryFrom,
};
use std::{
io,
net::{Ipv4Addr, SocketAddr},
sync::{
@@ -31,24 +27,22 @@ use {
},
thread::Builder,
time::Duration,
},
tarpc::{
context::Context,
serde_transport::tcp,
server::{self, Channel, Incoming},
transport::{self, channel::UnboundedChannel},
ClientMessage, Response,
},
tokio::time::sleep,
tokio_serde::formats::Bincode,
};
use tarpc::{
context::Context,
rpc::{transport::channel::UnboundedChannel, ClientMessage, Response},
serde_transport::tcp,
server::{self, Channel, Handler},
transport,
};
use tokio::time::sleep;
use tokio_serde::formats::Bincode;
#[derive(Clone)]
struct BanksServer {
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
transaction_sender: Sender<TransactionInfo>,
poll_signature_status_sleep_duration: Duration,
}
impl BanksServer {
@@ -60,13 +54,11 @@ impl BanksServer {
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
transaction_sender: Sender<TransactionInfo>,
poll_signature_status_sleep_duration: Duration,
) -> Self {
Self {
bank_forks,
block_commitment_cache,
transaction_sender,
poll_signature_status_sleep_duration,
}
}
@@ -81,7 +73,7 @@ impl BanksServer {
.map(|info| deserialize(&info.wire_transaction).unwrap())
.collect();
let bank = bank_forks.read().unwrap().working_bank();
let _ = bank.try_process_transactions(transactions.iter());
let _ = bank.process_transactions(&transactions);
}
}
@@ -89,7 +81,6 @@ impl BanksServer {
fn new_loopback(
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
poll_signature_status_sleep_duration: Duration,
) -> Self {
let (transaction_sender, transaction_receiver) = channel();
let bank = bank_forks.read().unwrap().working_bank();
@@ -104,12 +95,7 @@ impl BanksServer {
.name("solana-bank-forks-client".to_string())
.spawn(move || Self::run(server_bank_forks, transaction_receiver))
.unwrap();
Self::new(
bank_forks,
block_commitment_cache,
transaction_sender,
poll_signature_status_sleep_duration,
)
Self::new(bank_forks, block_commitment_cache, transaction_sender)
}
fn slot(&self, commitment: CommitmentLevel) -> Slot {
@@ -127,16 +113,16 @@ impl BanksServer {
self,
signature: &Signature,
blockhash: &Hash,
last_valid_block_height: u64,
last_valid_slot: Slot,
commitment: CommitmentLevel,
) -> Option<transaction::Result<()>> {
let mut status = self
.bank(commitment)
.get_signature_status_with_blockhash(signature, blockhash);
while status.is_none() {
sleep(self.poll_signature_status_sleep_duration).await;
sleep(Duration::from_millis(200)).await;
let bank = self.bank(commitment);
if bank.block_height() > last_valid_block_height {
if bank.slot() > last_valid_slot {
break;
}
status = bank.get_signature_status_with_blockhash(signature, blockhash);
@@ -147,11 +133,11 @@ impl BanksServer {
fn verify_transaction(
transaction: &Transaction,
feature_set: &Arc<FeatureSet>,
libsecp256k1_0_5_upgrade_enabled: bool,
) -> transaction::Result<()> {
if let Err(err) = transaction.verify() {
Err(err)
} else if let Err(err) = transaction.verify_precompiles(feature_set) {
} else if let Err(err) = transaction.verify_precompiles(libsecp256k1_0_5_upgrade_enabled) {
Err(err)
} else {
Ok(())
@@ -162,21 +148,16 @@ fn verify_transaction(
impl Banks for BanksServer {
async fn send_transaction_with_context(self, _: Context, transaction: Transaction) {
let blockhash = &transaction.message.recent_blockhash;
let last_valid_block_height = self
let last_valid_slot = self
.bank_forks
.read()
.unwrap()
.root_bank()
.get_blockhash_last_valid_block_height(blockhash)
.get_blockhash_last_valid_slot(&blockhash)
.unwrap();
let signature = transaction.signatures.get(0).cloned().unwrap_or_default();
let info = TransactionInfo::new(
signature,
serialize(&transaction).unwrap(),
last_valid_block_height,
None,
None,
);
let info =
TransactionInfo::new(signature, serialize(&transaction).unwrap(), last_valid_slot);
self.transaction_sender.send(info).unwrap();
}
@@ -184,18 +165,11 @@ impl Banks for BanksServer {
self,
_: Context,
commitment: CommitmentLevel,
) -> (FeeCalculator, Hash, u64) {
) -> (FeeCalculator, Hash, Slot) {
let bank = self.bank(commitment);
let blockhash = bank.last_blockhash();
let lamports_per_signature = bank.get_lamports_per_signature();
let last_valid_block_height = bank
.get_blockhash_last_valid_block_height(&blockhash)
.unwrap();
(
FeeCalculator::new(lamports_per_signature),
blockhash,
last_valid_block_height,
)
let (blockhash, fee_calculator) = bank.last_blockhash_with_fee_calculator();
let last_valid_slot = bank.get_blockhash_last_valid_slot(&blockhash).unwrap();
(fee_calculator, blockhash, last_valid_slot)
}
async fn get_transaction_status_with_context(
@@ -238,35 +212,32 @@ impl Banks for BanksServer {
self.slot(commitment)
}
async fn get_block_height_with_context(self, _: Context, commitment: CommitmentLevel) -> u64 {
self.bank(commitment).block_height()
}
async fn process_transaction_with_commitment_and_context(
self,
_: Context,
transaction: Transaction,
commitment: CommitmentLevel,
) -> Option<transaction::Result<()>> {
if let Err(err) = verify_transaction(&transaction, &self.bank(commitment).feature_set) {
if let Err(err) = verify_transaction(
&transaction,
self.bank(commitment).libsecp256k1_0_5_upgrade_enabled(),
) {
return Some(Err(err));
}
let blockhash = &transaction.message.recent_blockhash;
let last_valid_block_height = self
.bank(commitment)
.get_blockhash_last_valid_block_height(blockhash)
let last_valid_slot = self
.bank_forks
.read()
.unwrap()
.root_bank()
.get_blockhash_last_valid_slot(blockhash)
.unwrap();
let signature = transaction.signatures.get(0).cloned().unwrap_or_default();
let info = TransactionInfo::new(
signature,
serialize(&transaction).unwrap(),
last_valid_block_height,
None,
None,
);
let info =
TransactionInfo::new(signature, serialize(&transaction).unwrap(), last_valid_slot);
self.transaction_sender.send(info).unwrap();
self.poll_signature_status(&signature, blockhash, last_valid_block_height, commitment)
self.poll_signature_status(&signature, blockhash, last_valid_slot, commitment)
.await
}
@@ -279,47 +250,17 @@ impl Banks for BanksServer {
let bank = self.bank(commitment);
bank.get_account(&address).map(Account::from)
}
async fn get_latest_blockhash_with_context(self, _: Context) -> Hash {
let bank = self.bank(CommitmentLevel::default());
bank.last_blockhash()
}
async fn get_latest_blockhash_with_commitment_and_context(
self,
_: Context,
commitment: CommitmentLevel,
) -> Option<(Hash, u64)> {
let bank = self.bank(commitment);
let blockhash = bank.last_blockhash();
let last_valid_block_height = bank.get_blockhash_last_valid_block_height(&blockhash)?;
Some((blockhash, last_valid_block_height))
}
async fn get_fee_for_message_with_commitment_and_context(
self,
_: Context,
commitment: CommitmentLevel,
message: Message,
) -> Option<u64> {
let bank = self.bank(commitment);
let sanitized_message = SanitizedMessage::try_from(message).ok()?;
bank.get_fee_for_message(&sanitized_message)
}
}
pub async fn start_local_server(
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
poll_signature_status_sleep_duration: Duration,
) -> UnboundedChannel<Response<BanksResponse>, ClientMessage<BanksRequest>> {
let banks_server = BanksServer::new_loopback(
bank_forks,
block_commitment_cache,
poll_signature_status_sleep_duration,
);
let banks_server = BanksServer::new_loopback(bank_forks, block_commitment_cache);
let (client_transport, server_transport) = transport::channel::unbounded();
let server = server::BaseChannel::with_defaults(server_transport).execute(banks_server.serve());
let server = server::new(server::Config::default())
.incoming(stream::once(future::ready(server_transport)))
.respond_with(banks_server.serve());
tokio::spawn(server);
client_transport
}
@@ -348,22 +289,11 @@ pub async fn start_tcp_server(
.map(move |chan| {
let (sender, receiver) = channel();
SendTransactionService::new::<NullTpuInfo>(
tpu_addr,
&bank_forks,
None,
receiver,
5_000,
0,
);
SendTransactionService::new(tpu_addr, &bank_forks, receiver);
let server = BanksServer::new(
bank_forks.clone(),
block_commitment_cache.clone(),
sender,
Duration::from_millis(200),
);
chan.execute(server.serve())
let server =
BanksServer::new(bank_forks.clone(), block_commitment_cache.clone(), sender);
chan.respond_with(server.serve()).execute()
})
// Max 10 channels.
.buffer_unordered(10)

View File

@@ -1,3 +1,7 @@
#![allow(clippy::integer_arithmetic)]
pub mod banks_server;
pub mod rpc_banks_service;
pub mod send_transaction_service;
#[macro_use]
extern crate solana_metrics;

View File

@@ -1,23 +1,21 @@
//! The `rpc_banks_service` module implements the Solana Banks RPC API.
use {
crate::banks_server::start_tcp_server,
futures::{future::FutureExt, pin_mut, prelude::stream::StreamExt, select},
solana_runtime::{bank_forks::BankForks, commitment::BlockCommitmentCache},
std::{
use crate::banks_server::start_tcp_server;
use futures::{future::FutureExt, pin_mut, prelude::stream::StreamExt, select};
use solana_runtime::{bank_forks::BankForks, commitment::BlockCommitmentCache};
use std::{
net::SocketAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
},
thread::{self, Builder, JoinHandle},
},
tokio::{
};
use tokio::{
runtime::Runtime,
time::{self, Duration},
},
tokio_stream::wrappers::IntervalStream,
};
use tokio_stream::wrappers::IntervalStream;
pub struct RpcBanksService {
thread_hdl: JoinHandle<()>,
@@ -103,11 +101,12 @@ impl RpcBanksService {
#[cfg(test)]
mod tests {
use {super::*, solana_runtime::bank::Bank};
use super::*;
use solana_runtime::bank::Bank;
#[test]
fn test_rpc_banks_server_exit() {
let bank_forks = Arc::new(RwLock::new(BankForks::new(Bank::default_for_tests())));
let bank_forks = Arc::new(RwLock::new(BankForks::new(Bank::default())));
let block_commitment_cache = Arc::new(RwLock::new(BlockCommitmentCache::default()));
let exit = Arc::new(AtomicBool::new(false));
let addr = "127.0.0.1:0".parse().unwrap();

View File

@@ -0,0 +1,343 @@
// TODO: Merge this implementation with the one at `core/src/send_transaction_service.rs`
use log::*;
use solana_metrics::{datapoint_warn, inc_new_counter_info};
use solana_runtime::{bank::Bank, bank_forks::BankForks};
use solana_sdk::{clock::Slot, signature::Signature};
use std::{
collections::HashMap,
net::{SocketAddr, UdpSocket},
sync::{
mpsc::{Receiver, RecvTimeoutError},
Arc, RwLock,
},
thread::{self, Builder, JoinHandle},
time::{Duration, Instant},
};
/// Maximum size of the transaction queue
const MAX_TRANSACTION_QUEUE_SIZE: usize = 10_000; // This seems like a lot but maybe it needs to be bigger one day
pub struct SendTransactionService {
thread: JoinHandle<()>,
}
pub struct TransactionInfo {
pub signature: Signature,
pub wire_transaction: Vec<u8>,
pub last_valid_slot: Slot,
}
impl TransactionInfo {
pub fn new(signature: Signature, wire_transaction: Vec<u8>, last_valid_slot: Slot) -> Self {
Self {
signature,
wire_transaction,
last_valid_slot,
}
}
}
#[derive(Default, Debug, PartialEq)]
struct ProcessTransactionsResult {
rooted: u64,
expired: u64,
retried: u64,
failed: u64,
retained: u64,
}
impl SendTransactionService {
pub fn new(
tpu_address: SocketAddr,
bank_forks: &Arc<RwLock<BankForks>>,
receiver: Receiver<TransactionInfo>,
) -> Self {
let thread = Self::retry_thread(receiver, bank_forks.clone(), tpu_address);
Self { thread }
}
fn retry_thread(
receiver: Receiver<TransactionInfo>,
bank_forks: Arc<RwLock<BankForks>>,
tpu_address: SocketAddr,
) -> JoinHandle<()> {
let mut last_status_check = Instant::now();
let mut transactions = HashMap::new();
let send_socket = UdpSocket::bind("0.0.0.0:0").unwrap();
Builder::new()
.name("send-tx-svc".to_string())
.spawn(move || loop {
match receiver.recv_timeout(Duration::from_secs(1)) {
Err(RecvTimeoutError::Disconnected) => break,
Err(RecvTimeoutError::Timeout) => {}
Ok(transaction_info) => {
Self::send_transaction(
&send_socket,
&tpu_address,
&transaction_info.wire_transaction,
);
if transactions.len() < MAX_TRANSACTION_QUEUE_SIZE {
transactions.insert(transaction_info.signature, transaction_info);
} else {
datapoint_warn!("send_transaction_service-queue-overflow");
}
}
}
if Instant::now().duration_since(last_status_check).as_secs() >= 5 {
if !transactions.is_empty() {
datapoint_info!(
"send_transaction_service-queue-size",
("len", transactions.len(), i64)
);
let bank_forks = bank_forks.read().unwrap();
let root_bank = bank_forks.root_bank();
let working_bank = bank_forks.working_bank();
let _result = Self::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
}
last_status_check = Instant::now();
}
})
.unwrap()
}
fn process_transactions(
working_bank: &Arc<Bank>,
root_bank: &Arc<Bank>,
send_socket: &UdpSocket,
tpu_address: &SocketAddr,
transactions: &mut HashMap<Signature, TransactionInfo>,
) -> ProcessTransactionsResult {
let mut result = ProcessTransactionsResult::default();
transactions.retain(|signature, transaction_info| {
if root_bank.has_signature(signature) {
info!("Transaction is rooted: {}", signature);
result.rooted += 1;
inc_new_counter_info!("send_transaction_service-rooted", 1);
false
} else if transaction_info.last_valid_slot < root_bank.slot() {
info!("Dropping expired transaction: {}", signature);
result.expired += 1;
inc_new_counter_info!("send_transaction_service-expired", 1);
false
} else {
match working_bank.get_signature_status_slot(signature) {
None => {
// Transaction is unknown to the working bank, it might have been
// dropped or landed in another fork. Re-send it
info!("Retrying transaction: {}", signature);
result.retried += 1;
inc_new_counter_info!("send_transaction_service-retry", 1);
Self::send_transaction(
&send_socket,
&tpu_address,
&transaction_info.wire_transaction,
);
true
}
Some((_slot, status)) => {
if status.is_err() {
info!("Dropping failed transaction: {}", signature);
result.failed += 1;
inc_new_counter_info!("send_transaction_service-failed", 1);
false
} else {
result.retained += 1;
true
}
}
}
}
});
result
}
fn send_transaction(
send_socket: &UdpSocket,
tpu_address: &SocketAddr,
wire_transaction: &[u8],
) {
if let Err(err) = send_socket.send_to(wire_transaction, tpu_address) {
warn!("Failed to send transaction to {}: {:?}", tpu_address, err);
}
}
pub fn join(self) -> thread::Result<()> {
self.thread.join()
}
}
#[cfg(test)]
mod test {
use super::*;
use solana_sdk::{
genesis_config::create_genesis_config, pubkey::Pubkey, signature::Signer,
system_transaction,
};
use std::sync::mpsc::channel;
#[test]
fn service_exit() {
let tpu_address = "127.0.0.1:0".parse().unwrap();
let bank = Bank::default();
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
let (sender, receiver) = channel();
let send_transaction_service =
SendTransactionService::new(tpu_address, &bank_forks, receiver);
drop(sender);
send_transaction_service.join().unwrap();
}
#[test]
fn process_transactions() {
let (genesis_config, mint_keypair) = create_genesis_config(4);
let bank = Bank::new(&genesis_config);
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
let send_socket = UdpSocket::bind("0.0.0.0:0").unwrap();
let tpu_address = "127.0.0.1:0".parse().unwrap();
let root_bank = Arc::new(Bank::new_from_parent(
&bank_forks.read().unwrap().working_bank(),
&Pubkey::default(),
1,
));
let rooted_signature = root_bank
.transfer(1, &mint_keypair, &mint_keypair.pubkey())
.unwrap();
let working_bank = Arc::new(Bank::new_from_parent(&root_bank, &Pubkey::default(), 2));
let non_rooted_signature = working_bank
.transfer(2, &mint_keypair, &mint_keypair.pubkey())
.unwrap();
let failed_signature = {
let blockhash = working_bank.last_blockhash();
let transaction =
system_transaction::transfer(&mint_keypair, &Pubkey::default(), 1, blockhash);
let signature = transaction.signatures[0];
working_bank.process_transaction(&transaction).unwrap_err();
signature
};
let mut transactions = HashMap::new();
info!("Expired transactions are dropped..");
transactions.insert(
Signature::default(),
TransactionInfo::new(Signature::default(), vec![], root_bank.slot() - 1),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert!(transactions.is_empty());
assert_eq!(
result,
ProcessTransactionsResult {
expired: 1,
..ProcessTransactionsResult::default()
}
);
info!("Rooted transactions are dropped...");
transactions.insert(
rooted_signature,
TransactionInfo::new(rooted_signature, vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert!(transactions.is_empty());
assert_eq!(
result,
ProcessTransactionsResult {
rooted: 1,
..ProcessTransactionsResult::default()
}
);
info!("Failed transactions are dropped...");
transactions.insert(
failed_signature,
TransactionInfo::new(failed_signature, vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert!(transactions.is_empty());
assert_eq!(
result,
ProcessTransactionsResult {
failed: 1,
..ProcessTransactionsResult::default()
}
);
info!("Non-rooted transactions are kept...");
transactions.insert(
non_rooted_signature,
TransactionInfo::new(non_rooted_signature, vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert_eq!(transactions.len(), 1);
assert_eq!(
result,
ProcessTransactionsResult {
retained: 1,
..ProcessTransactionsResult::default()
}
);
transactions.clear();
info!("Unknown transactions are retried...");
transactions.insert(
Signature::default(),
TransactionInfo::new(Signature::default(), vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert_eq!(transactions.len(), 1);
assert_eq!(
result,
ProcessTransactionsResult {
retried: 1,
..ProcessTransactionsResult::default()
}
);
}
}
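A hedged sketch of the sending side of the service, mirroring the API above; the wrapper function and address are placeholders, and the sketch assumes the imports already at the top of this file:

```rust
use std::sync::mpsc::channel;

// Illustrative only: wire a sender to the service and hand it one transaction.
// In a real caller the sender stays alive; dropping it makes the retry thread
// exit on its next receive, as in the `service_exit` test above.
fn send_one(
    bank_forks: &Arc<RwLock<BankForks>>,
    signature: Signature,
    wire_transaction: Vec<u8>,
    last_valid_slot: Slot,
) {
    let tpu_address = "127.0.0.1:1024".parse().unwrap(); // placeholder address
    let (sender, receiver) = channel();
    let service = SendTransactionService::new(tpu_address, bank_forks, receiver);
    // Queued transactions are sent immediately and re-checked every ~5 seconds
    // until they root, fail, or outlive `last_valid_slot`.
    sender
        .send(TransactionInfo::new(signature, wire_transaction, last_valid_slot))
        .unwrap();
    drop(sender);
    service.join().unwrap();
}
```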

bench-exchange/.gitignore vendored Normal file
View File

@@ -0,0 +1,4 @@
/target/
/config/
/config-local/
/farf/

bench-exchange/Cargo.toml Normal file
View File

@@ -0,0 +1,38 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-bench-exchange"
version = "1.6.28"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
clap = "2.33.1"
itertools = "0.9.0"
log = "0.4.11"
num-derive = "0.3"
num-traits = "0.2"
rand = "0.7.0"
rayon = "1.5.0"
serde_json = "1.0.56"
serde_yaml = "0.8.13"
solana-clap-utils = { path = "../clap-utils", version = "=1.6.28" }
solana-core = { path = "../core", version = "=1.6.28" }
solana-genesis = { path = "../genesis", version = "=1.6.28" }
solana-client = { path = "../client", version = "=1.6.28" }
solana-faucet = { path = "../faucet", version = "=1.6.28" }
solana-exchange-program = { path = "../programs/exchange", version = "=1.6.28" }
solana-logger = { path = "../logger", version = "=1.6.28" }
solana-metrics = { path = "../metrics", version = "=1.6.28" }
solana-net-utils = { path = "../net-utils", version = "=1.6.28" }
solana-runtime = { path = "../runtime", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
solana-version = { path = "../version", version = "=1.6.28" }
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "=1.6.28" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

bench-exchange/README.md Normal file
View File

@@ -0,0 +1,479 @@
# token-exchange
Solana Token Exchange Bench
If you can't wait, jump to [Running the exchange](#Running-the-exchange) to
learn how to start and interact with the exchange.
### Table of Contents
[Overview](#Overview)<br>
[Premise](#Premise)<br>
[Exchange startup](#Exchange-startup)<br>
[Order Requests](#Order-Requests)<br>
[Order Cancellations](#Order-cancellations)<br>
[Trade swaps](#Trade-swaps)<br>
[Exchange program operations](#Exchange-program-operations)<br>
[Quotes and OHLCV](#Quotes-and-OHLCV)<br>
[Investor strategies](#Investor-strategies)<br>
[Running the exchange](#Running-the-exchange)<br>
## Overview
An exchange is a marketplace where one asset can be traded for another. This
demo shows one way to host an exchange on the Solana blockchain by
emulating a currency exchange.
The assets are virtual tokens held by investors who may post order requests to
the exchange. A Matcher monitors the exchange and posts swap requests for
matching orders. All the transactions can execute concurrently.
## Premise
- Exchange
- An exchange is a marketplace where one asset can be traded for another.
The exchange in this demo is the on-chain program that implements the
tokens and the policies for trading those tokens.
- Token
- A virtual asset that can be owned and traded, and that holds intrinsic value
relative to other assets. There are four types of tokens in this demo: A,
B, C, and D. Each one may be traded for another.
- Token account
- An account owned by the exchange that holds a quantity of one type of token.
- Account request
- A request to create a token account
- Token request
- A request to deposit tokens of a particular type into a token account.
- Asset pair
- A struct with fields Base and Quote, the two Tokens that make up a
trading pair. The Base or 'primary' asset is the
numerator and the Quote is the denominator for pricing purposes.
- Order side
- Describes which side of the market an investor wants to place a trade on. The
options are "Bid" or "Ask": a bid is an offer to purchase the Base asset of
the AssetPair for a sum of the Quote asset, and an ask is an offer to sell the
Base asset for the Quote asset.
- Price ratio
- An expression of the relative prices of two tokens. Calculated with the Base
Asset as the numerator and the Quote Asset as the denominator. Ratios are
represented as fixed point numbers. The fixed point scaler is defined in
[exchange_state.rs](https://github.com/solana-labs/solana/blob/c2fdd1362a029dcf89c8907c562d2079d977df11/programs/exchange_api/src/exchange_state.rs#L7)
- Order request
- A Solana transaction sent by a trader to the exchange to submit an order.
Order requests are made up of the token pair, the order side (bid or ask),
quantity of the primary token, the price ratio, and the two token accounts
to be credited/deducted. An example trade request looks like "T AB 5 2"
which reads "Exchange 5 A tokens to B tokens at a price ratio of 1:2" A fulfilled trade would result in 5 A tokens
deducted and 10 B tokens credited to the trade initiator's token accounts.
Successful order requests result in an order.
- Order
- The result of a successful order request. Orders are stored in
accounts owned by the submitter of the order request. They can only be
canceled by their owner but can be used by anyone in a trade swap. They
contain the same information as the order request.
- Price spread
- The difference in price between two matching orders. The spread is the
profit of the Matcher initiating the swap request.
- Match requirements
- Policies that result in a successful trade swap.
- Match request
- A request to fill two complementary orders (bid/ask), resulting, if successful,
in a trade being created.
- Trade
- A successful trade is created from two matching orders that meet
swap requirements; the match is submitted in a Match Request by a Matcher and
executed by the exchange. A trade may not wholly satisfy one or both of the
orders in which case the orders are adjusted appropriately. Upon execution,
tokens are distributed to the traders' accounts and any overlap or
"negative spread" between orders is deposited into the Matcher's profit
account. All successful trades are recorded in the data of a new solana
account for posterity.
- Investor
- Individual investors who hold a number of tokens and wish to trade them on
the exchange. Investors operate as Solana thin clients who own a set of
accounts containing tokens and/or order requests. Investors post
transactions to the exchange in order to request tokens and post or cancel
order requests.
- Matcher
- An agent who facilitates trading between investors. Matchers operate as
Solana thin clients who monitor all the orders looking for a trade
match. Once found, the Matcher issues a swap request to the exchange.
Matchers are the engine of the exchange and are rewarded for their efforts by
accumulating the price spreads of the swaps they initiate. Matchers also
provide current bid/ask price and OHLCV (Open, High, Low, Close, Volume)
information on demand via a public network port.
- Transaction fees
- Solana transaction fees are paid for by the transaction submitters who are
the Investors and Matchers.
## Exchange startup
The exchange is up and running when it reaches a state where it can take
investors' trades and Matchers' match requests. To achieve this state the
following must occur in order:
- Start the Solana blockchain
- Start the thin-client
- The Matcher subscribes to change notifications for all the accounts owned by
the exchange program id. The subscription is managed via Solana's JSON RPC
interface.
- The Matcher starts responding to queries for bid/ask price and OHLCV
The Matcher responding successfully to price and OHLCV requests is the signal to
the investors that trades submitted after that point will be analyzed. <!--This
is not ideal, and instead investors should be able to submit trades at any time,
and the Matcher could come and go without missing a trade. One way to achieve
this is for the Matcher to read the current state of all accounts looking for all
open orders.-->
Investors will initially query the exchange to discover their current balance
for each type of token. If the investor does not already have an account for
each type of token, they will submit account requests. Matchers will likewise
request accounts to hold the tokens they earn by initiating trade swaps.
```rust
/// Supported token types
pub enum Token {
A,
B,
C,
D,
}
/// Supported token pairs
pub enum TokenPair {
AB,
AC,
AD,
BC,
BD,
CD,
}
pub enum ExchangeInstruction {
/// New token account
/// key 0 - Signer
/// key 1 - New token account
AccountRequest,
}
/// Token accounts are populated with this structure
pub struct TokenAccountInfo {
/// Investor who owns this account
pub owner: Pubkey,
/// Current number of tokens this account holds
pub tokens: Tokens,
}
```
For this demo, investors or Matchers can request more tokens from the exchange at
any time by submitting token requests. Outside of a demo, an exchange of this type
would instead provide a way to convert a third-party asset into tokens.
To request tokens, investors submit transfer requests:
```rust
pub enum ExchangeInstruction {
/// Transfer tokens between two accounts
/// key 0 - Account to transfer tokens to
/// key 1 - Account to transfer tokens from. This can be the exchange program itself,
/// the exchange has a limitless number of tokens it can transfer.
TransferRequest(Token, u64),
}
```
## Order Requests
When an investor decides to exchange a token of one type for another, they
submit a transaction to the Solana Blockchain containing an order request, which,
if successful, is turned into an order. Orders do not expire but are
cancellable. <!-- orders should have a timestamp to enable trade
expiration --> When an order is created, tokens are deducted from a token
account and the order acts as an escrow. The tokens are held until the
order is fulfilled or canceled. If the direction is `To`, then `tokens`
are deducted from the primary account; if it is `From`, then `tokens`
multiplied by `price` are deducted from the secondary account. Orders are
no longer valid once their `tokens` count reaches zero, at which point they
can no longer be used. <!-- Could support refilling orders, so order
accounts are refilled rather than accumulating -->
```rust
/// Direction of the exchange between two tokens in a pair
pub enum Direction {
/// Trade first token type (primary) in the pair 'To' the second
To,
/// Trade first token type in the pair 'From' the second (secondary)
From,
}
pub struct TradeRequestInfo {
/// Direction of trade
pub direction: Direction,
/// Token pair to trade
pub pair: TokenPair,
/// Number of tokens to exchange; refers to the primary or the secondary depending on the direction
pub tokens: u64,
/// The price ratio the primary price over the secondary price. The primary price is fixed
/// and equal to the variable `SCALER`.
pub price: u64,
/// Token account to deposit tokens on successful swap
pub dst_account: Pubkey,
}
pub enum ExchangeInstruction {
/// order request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - Token account associated with this trade
TradeRequest(TradeRequestInfo),
}
/// Trade accounts are populated with this structure
pub struct TradeOrderInfo {
/// Owner of the order
pub owner: Pubkey,
/// Direction of the exchange
pub direction: Direction,
/// Token pair indicating two tokens to exchange, first is primary
pub pair: TokenPair,
/// Number of tokens to exchange; primary or secondary depending on direction
pub tokens: u64,
/// Scaled price of the secondary token given the primary is equal to the scale value
/// If scale is 1 and price is 2 then ratio is 1:2 or 1 primary token for 2 secondary tokens
pub price: u64,
/// Account the tokens were sourced from. The trade account holds the tokens in escrow
/// until they are consumed by one or more swaps or the trade is canceled.
pub src_account: Pubkey,
/// Account the tokens will be deposited into on a successful trade
pub dst_account: Pubkey,
}
```
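To make the escrow rule described above concrete, here is a small illustrative sketch built on the `Direction` enum just defined; the helper function and the `SCALER` value of 1 are assumptions for the example, not part of the program:

```rust
/// Illustrative only: how many tokens an order escrows, per the rule above.
/// Assumes the fixed-point SCALER is 1 for readability.
const SCALER: u64 = 1;

fn escrow_amount(direction: &Direction, tokens: u64, price: u64) -> u64 {
    match direction {
        // `To`: escrow `tokens` from the primary account.
        Direction::To => tokens,
        // `From`: escrow `tokens * price` (scaled) from the secondary account.
        Direction::From => tokens * price / SCALER,
    }
}

// The request "T AB 5 2" escrows escrow_amount(&Direction::To, 5, 2) == 5 A tokens;
// an "F AB 5 2" order would escrow 5 * 2 = 10 B tokens.
```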
## Order cancellations
An investor may cancel a trade at any time, but only trades they own. If the
cancellation is successful, any tokens held in escrow are returned to the
account from which they came.
```rust
pub enum ExchangeInstruction {
/// order cancellation
/// key 0 - Signer
/// key 1 - order to cancel
TradeCancellation,
}
```
## Trade swaps
The Matcher is monitoring the accounts assigned to the exchange program and
building a trade-order table. The order table is used to identify
matching orders that could be fulfilled. When a match is found, the
Matcher should issue a swap request. A swap request may not satisfy the entirety
of either order, but the exchange will greedily fulfill as much of it as it can. Any leftover tokens
in either account will keep the order valid for further swap requests in
the future.
Matching orders are defined by the following swap requirements:
- Opposite polarity (one `To` and one `From`)
- Operate on the same token pair
- The price ratio of the `From` order is greater than or equal to the `To` order
- There are sufficient tokens to perform the trade
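A hedged sketch of these requirements as a predicate over the `TradeOrderInfo` struct shown earlier (illustrative only; a `PartialEq` derive on `TokenPair` is assumed, and the on-chain program remains the authority on the real checks):

```rust
// Illustrative only: encodes the four swap requirements listed above.
fn orders_match(to: &TradeOrderInfo, from: &TradeOrderInfo) -> bool {
    matches!(to.direction, Direction::To)        // opposite polarity...
        && matches!(from.direction, Direction::From)
        && to.pair == from.pair                  // ...on the same token pair,
        && from.price >= to.price                // `From` ratio >= `To` ratio,
        && to.tokens > 0                         // with tokens still escrowed
        && from.tokens > 0                       // on both sides.
}
```

On a successful match, the exchange fills as much of both orders as it can and leaves any remainder escrowed, which is how partially satisfied orders stay live for later swaps.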
Orders can be written in the following format:
`investor direction pair quantity price-ratio`
For example:
- `1 T AB 2 1`
- Investor 1 wishes to exchange 2 A tokens to B tokens at a ratio of 1 A to 1
B
- `2 F AB 6 1.2`
- Investor 2 wishes to exchange A tokens from 6 B tokens at a ratio of 1 A
from 1.2 B
An order table could look something like the following. Notice how the columns
are sorted low to high and high to low, respectively. Prices are exaggerated and kept
whole for clarity.
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 2 4 | 2 F AB 2 8 |
| 2 | 1 T AB 1 4 | 2 F AB 2 8 |
| 3 | 1 T AB 6 6 | 2 F AB 2 7 |
| 4 | 1 T AB 2 8 | 2 F AB 3 6 |
| 5 | 1 T AB 2 10 | 2 F AB 1 5 |
As part of a successful swap request, the exchange will credit tokens to the
Matcher's account equal to the difference in the price ratios of the two orders.
These tokens are considered the Matcher's profit for initiating the trade.
The Matcher would initiate the following swap on the order table above:
- Row 1, To: Investor 1 trades 2 A tokens to 8 B tokens
- Row 1, From: Investor 2 trades 2 A tokens from 8 B tokens
- Matcher takes 8 B tokens as profit
Both row 1 trades are fully realized, table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 1 4 | 2 F AB 2 8 |
| 2 | 1 T AB 6 6 | 2 F AB 2 7 |
| 3 | 1 T AB 2 8 | 2 F AB 3 6 |
| 4 | 1 T AB 2 10 | 2 F AB 1 5 |
The Matcher would initiate the following swap:
- Row 1, To: Investor 1 trades 1 A token to 4 B tokens
- Row 1, From: Investor 2 trades 1 A token from 4 B tokens
- Matcher takes 4 B tokens as profit
Row 1 From is not fully realized, table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 6 6 | 2 F AB 1 8 |
| 2 | 1 T AB 2 8 | 2 F AB 2 7 |
| 3 | 1 T AB 2 10 | 2 F AB 3 6 |
| 4 | | 2 F AB 1 5 |
The Matcher would initiate the following swap:
- Row 1, To: Investor 1 trades 1 A token to 6 B tokens
- Row 1, From: Investor 2 trades 1 A token from 6 B tokens
- Matcher takes 2 B tokens as profit
Row 1 To is now fully realized, table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 5 6 | 2 F AB 2 7 |
| 2 | 1 T AB 2 8 | 2 F AB 3 5 |
| 3 | 1 T AB 2 10 | 2 F AB 1 5 |
The Matcher would initiate the following last swap:
- Row 1, To: Investor 1 trades 2 A token to 12 B tokens
- Row 1, From: Investor 2 trades 2 A token from 12 B tokens
- Matcher takes 2 B tokens as profit
Table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 3 6 | 2 F AB 3 5 |
| 2 | 1 T AB 2 8 | 2 F AB 1 5 |
| 3 | 1 T AB 2 10 | |
At this point the lowest To price is higher than the highest From price, so
no more swaps will be initiated until new orders come in.
```rust
pub enum ExchangeInstruction {
/// Trade swap request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - 'To' order
/// key 3 - `From` order
/// key 4 - Token account associated with the To Trade
/// key 5 - Token account associated with From trade
/// key 6 - Token account in which to deposit the Matcher profit from the swap.
SwapRequest,
}
/// Swap accounts are populated with this structure
pub struct TradeSwapInfo {
/// Pair swapped
pub pair: TokenPair,
/// `To` order
pub to_trade_order: Pubkey,
/// `From` order
pub from_trade_order: Pubkey,
/// Number of primary tokens exchanged
pub primary_tokens: u64,
/// Price the primary tokens were exchanged for
pub primary_price: u64,
/// Number of secondary tokens exchanged
pub secondary_tokens: u64,
/// Price the secondary tokens were exchanged for
pub secondary_price: u64,
}
```
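For illustration, a client-side sketch that assembles a `SwapRequest` instruction with the seven accounts in the documented order. The `exchange_program_id` parameter, the bincode encoding, and a `serde::Serialize` derive on `ExchangeInstruction` are all assumptions, not part of this design:
```rust
use solana_sdk::{
    instruction::{AccountMeta, Instruction},
    pubkey::Pubkey,
};

/// Hypothetical helper: build a SwapRequest instruction, ordering the
/// account metas exactly as the key list above documents.
#[allow(clippy::too_many_arguments)]
fn swap_request_ix(
    exchange_program_id: Pubkey,
    signer: Pubkey,
    swap_account: Pubkey,
    to_order: Pubkey,
    from_order: Pubkey,
    to_token_account: Pubkey,
    from_token_account: Pubkey,
    profit_account: Pubkey,
) -> Instruction {
    let accounts = vec![
        AccountMeta::new(signer, true),              // key 0 - Signer
        AccountMeta::new(swap_account, false),       // key 1 - swap record
        AccountMeta::new(to_order, false),           // key 2 - `To` order
        AccountMeta::new(from_order, false),         // key 3 - `From` order
        AccountMeta::new(to_token_account, false),   // key 4 - `To` tokens
        AccountMeta::new(from_token_account, false), // key 5 - `From` tokens
        AccountMeta::new(profit_account, false),     // key 6 - Matcher profit
    ];
    // Assumes ExchangeInstruction derives serde::Serialize.
    Instruction::new_with_bincode(
        exchange_program_id,
        &ExchangeInstruction::SwapRequest,
        accounts,
    )
}
```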
## Exchange program operations
Putting all the commands together from above, the following operations will be
supported by the on-chain exchange program:
```rust
pub enum ExchangeInstruction {
/// New token account
/// key 0 - Signer
/// key 1 - New token account
AccountRequest,
/// Transfer tokens between two accounts
/// key 0 - Account to transfer tokens to
/// key 1 - Account to transfer tokens from. This can be the exchange program itself;
/// the exchange has a limitless number of tokens it can transfer.
TransferRequest(Token, u64),
/// Order request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - Token account associated with this trade
TradeRequest(TradeRequestInfo),
/// Order cancellation
/// key 0 - Signer
/// key 1 - Order to cancel
TradeCancellation,
/// Trade swap request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - `To` order
/// key 3 - `From` order
/// key 4 - Token account associated with the To Trade
/// key 5 - Token account associated with From trade
/// key 6 - Token account in which to deposit the Matcher profit from the swap.
SwapRequest,
}
```
## Quotes and OHLCV
The Matcher will provide current bid/ask price quotes based on trade activity
and will also provide OHLCV data over some time window. The details of how the
bid/ask price quotes are calculated are yet to be decided.
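Since the calculation is undecided, the following is only a sketch of a per-window OHLCV accumulator the Matcher could keep; the `Ohlcv` type and its methods are hypothetical:
```rust
/// Open/high/low/close prices plus traded volume for one time window.
struct Ohlcv {
    open: u64,
    high: u64,
    low: u64,
    close: u64,
    volume: u64,
}

impl Ohlcv {
    /// Start a window at the first observed trade.
    fn new(price: u64, tokens: u64) -> Self {
        Self {
            open: price,
            high: price,
            low: price,
            close: price,
            volume: tokens,
        }
    }

    /// Fold one subsequent trade into the current window.
    fn record(&mut self, price: u64, tokens: u64) {
        self.high = self.high.max(price);
        self.low = self.low.min(price);
        self.close = price;
        self.volume += tokens;
    }
}
```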
## Investor strategies
To make a compelling demo, the investors need to provide interesting trade
behavior. Something as simple as a randomly twiddled baseline would be a
minimum starting point.
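A sketch of that baseline, using the two-argument `gen_range` from the rand 0.7 API that appears elsewhere in this changeset; the helper name and parameters are hypothetical:
```rust
use rand::{thread_rng, Rng};

/// Jitter a price up or down around a fixed baseline for each new order,
/// staying within `baseline - max_twiddle ..= baseline + max_twiddle`.
fn next_order_price(baseline: u64, max_twiddle: u64) -> u64 {
    let twiddle = thread_rng().gen_range(0, 2 * max_twiddle + 1);
    (baseline + twiddle).saturating_sub(max_twiddle)
}
```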
## Running the exchange
The exchange bench posts trades and swap matches as fast as it can.
You might want to bump the duration up to 60 seconds and the batch size to
1000 for better numbers. You can modify those in
client_demo/src/demo.rs::test_exchange_local_cluster.
The following command runs the bench:
```bash
$ RUST_LOG=solana_bench_exchange=info cargo test --release -- --nocapture test_exchange_local_cluster
```
To also see the cluster messages:
```bash
$ RUST_LOG=solana_bench_exchange=info,solana=info cargo test --release -- --nocapture test_exchange_local_cluster
```

bench-exchange/src/bench.rs (new file, 1028 lines; diff suppressed because it is too large)

bench-exchange/src/cli.rs (new file, 221 lines)

@@ -0,0 +1,221 @@
use clap::{crate_description, crate_name, value_t, App, Arg, ArgMatches};
use solana_core::gen_keys::GenKeys;
use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::signature::{read_keypair_file, Keypair};
use std::net::SocketAddr;
use std::process::exit;
use std::time::Duration;
pub struct Config {
pub entrypoint_addr: SocketAddr,
pub faucet_addr: SocketAddr,
pub identity: Keypair,
pub threads: usize,
pub num_nodes: usize,
pub duration: Duration,
pub transfer_delay: u64,
pub fund_amount: u64,
pub batch_size: usize,
pub chunk_size: usize,
pub account_groups: usize,
pub client_ids_and_stake_file: String,
pub write_to_client_file: bool,
pub read_from_client_file: bool,
}
impl Default for Config {
fn default() -> Self {
Self {
entrypoint_addr: SocketAddr::from(([127, 0, 0, 1], 8001)),
faucet_addr: SocketAddr::from(([127, 0, 0, 1], FAUCET_PORT)),
identity: Keypair::new(),
num_nodes: 1,
threads: 4,
duration: Duration::new(u64::max_value(), 0),
transfer_delay: 0,
fund_amount: 100_000,
batch_size: 100,
chunk_size: 100,
account_groups: 100,
client_ids_and_stake_file: String::new(),
write_to_client_file: false,
read_from_client_file: false,
}
}
}
pub fn build_args<'a, 'b>(version: &'b str) -> App<'a, 'b> {
App::new(crate_name!())
.about(crate_description!())
.version(version)
.arg(
Arg::with_name("entrypoint")
.short("n")
.long("entrypoint")
.value_name("HOST:PORT")
.takes_value(true)
.required(false)
.default_value("127.0.0.1:8001")
.help("Cluster entry point; defaults to 127.0.0.1:8001"),
)
.arg(
Arg::with_name("faucet")
.short("d")
.long("faucet")
.value_name("HOST:PORT")
.takes_value(true)
.required(false)
.default_value("127.0.0.1:9900")
.help("Location of the faucet; defaults to 127.0.0.1:9900"),
)
.arg(
Arg::with_name("identity")
.short("i")
.long("identity")
.value_name("PATH")
.takes_value(true)
.help("File containing a client identity (keypair)"),
)
.arg(
Arg::with_name("threads")
.long("threads")
.value_name("<threads>")
.takes_value(true)
.required(false)
.default_value("1")
.help("Number of threads submitting transactions"),
)
.arg(
Arg::with_name("num-nodes")
.long("num-nodes")
.value_name("NUM")
.takes_value(true)
.required(false)
.default_value("1")
.help("Wait for NUM nodes to converge"),
)
.arg(
Arg::with_name("duration")
.long("duration")
.value_name("SECS")
.takes_value(true)
.default_value("60")
.help("Seconds to run benchmark, then exit; default is forever"),
)
.arg(
Arg::with_name("transfer-delay")
.long("transfer-delay")
.value_name("<delay>")
.takes_value(true)
.required(false)
.default_value("0")
.help("Delay between each chunk"),
)
.arg(
Arg::with_name("fund-amount")
.long("fund-amount")
.value_name("<fund>")
.takes_value(true)
.required(false)
.default_value("100000")
.help("Number of lamports to fund to each signer"),
)
.arg(
Arg::with_name("batch-size")
.long("batch-size")
.value_name("<batch>")
.takes_value(true)
.required(false)
.default_value("1000")
.help("Number of transactions before the signer rolls over"),
)
.arg(
Arg::with_name("chunk-size")
.long("chunk-size")
.value_name("<cunk>")
.takes_value(true)
.required(false)
.default_value("500")
.help("Number of transactions to generate and send at a time"),
)
.arg(
Arg::with_name("account-groups")
.long("account-groups")
.value_name("<groups>")
.takes_value(true)
.required(false)
.default_value("10")
.help("Number of account groups to cycle for each batch"),
)
.arg(
Arg::with_name("write-client-keys")
.long("write-client-keys")
.value_name("FILENAME")
.takes_value(true)
.help("Generate client keys and stakes and write the list to YAML file"),
)
.arg(
Arg::with_name("read-client-keys")
.long("read-client-keys")
.value_name("FILENAME")
.takes_value(true)
.help("Read client keys and stakes from the YAML file"),
)
}
#[allow(clippy::field_reassign_with_default)]
pub fn extract_args(matches: &ArgMatches) -> Config {
let mut args = Config::default();
args.entrypoint_addr = solana_net_utils::parse_host_port(
matches.value_of("entrypoint").unwrap(),
)
.unwrap_or_else(|e| {
eprintln!("failed to parse entrypoint address: {}", e);
exit(1)
});
args.faucet_addr = solana_net_utils::parse_host_port(matches.value_of("faucet").unwrap())
.unwrap_or_else(|e| {
eprintln!("failed to parse faucet address: {}", e);
exit(1)
});
if matches.is_present("identity") {
args.identity = read_keypair_file(matches.value_of("identity").unwrap())
.expect("can't read client identity");
} else {
args.identity = {
let seed = [42_u8; 32];
let mut rnd = GenKeys::new(seed);
rnd.gen_keypair()
};
}
args.threads = value_t!(matches.value_of("threads"), usize).expect("Failed to parse threads");
args.num_nodes =
value_t!(matches.value_of("num-nodes"), usize).expect("Failed to parse num-nodes");
let duration = value_t!(matches.value_of("duration"), u64).expect("Failed to parse duration");
args.duration = Duration::from_secs(duration);
args.transfer_delay =
value_t!(matches.value_of("transfer-delay"), u64).expect("Failed to parse transfer-delay");
args.fund_amount =
value_t!(matches.value_of("fund-amount"), u64).expect("Failed to parse fund-amount");
args.batch_size =
value_t!(matches.value_of("batch-size"), usize).expect("Failed to parse batch-size");
args.chunk_size =
value_t!(matches.value_of("chunk-size"), usize).expect("Failed to parse chunk-size");
args.account_groups = value_t!(matches.value_of("account-groups"), usize)
.expect("Failed to parse account-groups");
if let Some(s) = matches.value_of("write-client-keys") {
args.write_to_client_file = true;
args.client_ids_and_stake_file = s.to_string();
}
if let Some(s) = matches.value_of("read-client-keys") {
assert!(!args.write_to_client_file);
args.read_from_client_file = true;
args.client_ids_and_stake_file = s.to_string();
}
args
}


@@ -0,0 +1,3 @@
pub mod bench;
pub mod cli;
mod order_book;


@@ -0,0 +1,83 @@
#![allow(clippy::integer_arithmetic)]
pub mod bench;
mod cli;
pub mod order_book;
use crate::bench::{airdrop_lamports, create_client_accounts_file, do_bench_exchange, Config};
use log::*;
use solana_core::gossip_service::{discover_cluster, get_multi_client};
use solana_sdk::signature::Signer;
fn main() {
solana_logger::setup();
solana_metrics::set_panic_hook("bench-exchange");
let matches = cli::build_args(solana_version::version!()).get_matches();
let cli_config = cli::extract_args(&matches);
let cli::Config {
entrypoint_addr,
faucet_addr,
identity,
threads,
num_nodes,
duration,
transfer_delay,
fund_amount,
batch_size,
chunk_size,
account_groups,
client_ids_and_stake_file,
write_to_client_file,
read_from_client_file,
..
} = cli_config;
let config = Config {
identity,
threads,
duration,
transfer_delay,
fund_amount,
batch_size,
chunk_size,
account_groups,
client_ids_and_stake_file,
read_from_client_file,
};
if write_to_client_file {
create_client_accounts_file(
&config.client_ids_and_stake_file,
config.batch_size,
config.account_groups,
config.fund_amount,
);
} else {
info!("Connecting to the cluster");
let nodes = discover_cluster(&entrypoint_addr, num_nodes).unwrap_or_else(|_| {
panic!("Failed to discover nodes");
});
let (client, num_clients) = get_multi_client(&nodes);
info!("{} nodes found", num_clients);
if num_clients < num_nodes {
panic!("Error: Insufficient nodes discovered");
}
if !read_from_client_file {
info!("Funding keypair: {}", config.identity.pubkey());
let accounts_in_groups = batch_size * account_groups;
const NUM_SIGNERS: u64 = 2;
airdrop_lamports(
&client,
&faucet_addr,
&config.identity,
fund_amount * (accounts_in_groups + 1) as u64 * NUM_SIGNERS,
);
}
do_bench_exchange(vec![client], config);
}
}


@@ -0,0 +1,134 @@
use itertools::EitherOrBoth::{Both, Left, Right};
use itertools::Itertools;
use log::*;
use solana_exchange_program::exchange_state::*;
use solana_sdk::pubkey::Pubkey;
use std::cmp::Ordering;
use std::collections::BinaryHeap;
use std::{error, fmt};
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct ToOrder {
pub pubkey: Pubkey,
pub info: OrderInfo,
}
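// Reversed comparison: BinaryHeap is a max-heap, so reversing on price
// makes the heap pop the lowest-priced `To` order first, matching the
// low-to-high To column described in the exchange design.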
impl Ord for ToOrder {
fn cmp(&self, other: &Self) -> Ordering {
other.info.price.cmp(&self.info.price)
}
}
impl PartialOrd for ToOrder {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct FromOrder {
pub pubkey: Pubkey,
pub info: OrderInfo,
}
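// Natural comparison: the max-heap pops the highest-priced `From` order
// first, matching the high-to-low From column in the exchange design.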
impl Ord for FromOrder {
fn cmp(&self, other: &Self) -> Ordering {
self.info.price.cmp(&other.info.price)
}
}
impl PartialOrd for FromOrder {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
#[derive(Default)]
pub struct OrderBook {
// TODO scale to x token types
to_ab: BinaryHeap<ToOrder>,
from_ab: BinaryHeap<FromOrder>,
}
impl fmt::Display for OrderBook {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(
f,
"+-Order Book--------------------------+-------------------------------------+"
)?;
for (i, it) in self
.to_ab
.iter()
.zip_longest(self.from_ab.iter())
.enumerate()
{
match it {
Both(to, from) => writeln!(
f,
"| T AB {:8} for {:8}/{:8} | F AB {:8} for {:8}/{:8} |{}",
to.info.tokens,
SCALER,
to.info.price,
from.info.tokens,
SCALER,
from.info.price,
i
)?,
Left(to) => writeln!(
f,
"| T AB {:8} for {:8}/{:8} | |{}",
to.info.tokens, SCALER, to.info.price, i
)?,
Right(from) => writeln!(
f,
"| | F AB {:8} for {:8}/{:8} |{}",
from.info.tokens, SCALER, from.info.price, i
)?,
}
}
write!(
f,
"+-------------------------------------+-------------------------------------+"
)?;
Ok(())
}
}
impl OrderBook {
// TODO
// pub fn cancel(&mut self, pubkey: Pubkey) -> Result<(), Box<dyn error::Error>> {
// Ok(())
// }
pub fn push(&mut self, pubkey: Pubkey, info: OrderInfo) -> Result<(), Box<dyn error::Error>> {
check_trade(info.side, info.tokens, info.price)?;
match info.side {
OrderSide::Ask => {
self.to_ab.push(ToOrder { pubkey, info });
}
OrderSide::Bid => {
self.from_ab.push(FromOrder { pubkey, info });
}
}
Ok(())
}
pub fn pop(&mut self) -> Option<(ToOrder, FromOrder)> {
if let Some(pair) = Self::pop_pair(&mut self.to_ab, &mut self.from_ab) {
return Some(pair);
}
None
}
pub fn get_num_outstanding(&self) -> (usize, usize) {
(self.to_ab.len(), self.from_ab.len())
}
fn pop_pair(
to_ab: &mut BinaryHeap<ToOrder>,
from_ab: &mut BinaryHeap<FromOrder>,
) -> Option<(ToOrder, FromOrder)> {
let to = to_ab.peek()?;
let from = from_ab.peek()?;
if from.info.price < to.info.price {
debug!("Trade not viable");
return None;
}
let to = to_ab.pop()?;
let from = from_ab.pop()?;
Some((to, from))
}
}


@@ -0,0 +1,115 @@
use log::*;
use solana_bench_exchange::bench::{airdrop_lamports, do_bench_exchange, Config};
use solana_core::{
gossip_service::{discover_cluster, get_multi_client},
validator::ValidatorConfig,
};
use solana_exchange_program::{
exchange_processor::process_instruction, id, solana_exchange_program,
};
use solana_faucet::faucet::run_local_faucet_with_port;
use solana_local_cluster::{
local_cluster::{ClusterConfig, LocalCluster},
validator_configs::make_identical_validator_configs,
};
use solana_runtime::{bank::Bank, bank_client::BankClient};
use solana_sdk::{
genesis_config::create_genesis_config,
signature::{Keypair, Signer},
};
use std::{process::exit, sync::mpsc::channel, time::Duration};
#[test]
#[ignore]
fn test_exchange_local_cluster() {
solana_logger::setup();
const NUM_NODES: usize = 1;
let config = Config {
identity: Keypair::new(),
duration: Duration::from_secs(1),
fund_amount: 100_000,
threads: 1,
transfer_delay: 20, // 15
batch_size: 100, // 1000
chunk_size: 10, // 200
account_groups: 1, // 10
..Config::default()
};
let Config {
fund_amount,
batch_size,
account_groups,
..
} = config;
let accounts_in_groups = batch_size * account_groups;
let cluster = LocalCluster::new(&mut ClusterConfig {
node_stakes: vec![100_000; NUM_NODES],
cluster_lamports: 100_000_000_000_000,
validator_configs: make_identical_validator_configs(&ValidatorConfig::default(), NUM_NODES),
native_instruction_processors: [solana_exchange_program!()].to_vec(),
..ClusterConfig::default()
});
let faucet_keypair = Keypair::new();
cluster.transfer(
&cluster.funding_keypair,
&faucet_keypair.pubkey(),
2_000_000_000_000,
);
let (addr_sender, addr_receiver) = channel();
run_local_faucet_with_port(faucet_keypair, addr_sender, Some(1_000_000_000_000), 0);
let faucet_addr = addr_receiver
.recv_timeout(Duration::from_secs(2))
.expect("run_local_faucet")
.expect("faucet_addr");
info!("Connecting to the cluster");
let nodes =
discover_cluster(&cluster.entry_point_info.gossip, NUM_NODES).unwrap_or_else(|err| {
error!("Failed to discover {} nodes: {:?}", NUM_NODES, err);
exit(1);
});
let (client, num_clients) = get_multi_client(&nodes);
info!("clients: {}", num_clients);
assert!(num_clients >= NUM_NODES);
const NUM_SIGNERS: u64 = 2;
airdrop_lamports(
&client,
&faucet_addr,
&config.identity,
fund_amount * (accounts_in_groups + 1) as u64 * NUM_SIGNERS,
);
do_bench_exchange(vec![client], config);
}
#[test]
fn test_exchange_bank_client() {
solana_logger::setup();
let (genesis_config, identity) = create_genesis_config(100_000_000_000_000);
let mut bank = Bank::new(&genesis_config);
bank.add_builtin("exchange_program", id(), process_instruction);
let clients = vec![BankClient::new(bank)];
do_bench_exchange(
clients,
Config {
identity,
duration: Duration::from_secs(1),
fund_amount: 100_000,
threads: 1,
transfer_delay: 20, // 0;
batch_size: 100, // 1500;
chunk_size: 10, // 1500;
account_groups: 1, // 50;
..Config::default()
},
);
}


@@ -1,8 +1,8 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-bench-streamer"
version = "1.10.0"
version = "1.6.28"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,11 +10,11 @@ publish = false
[dependencies]
clap = "2.33.1"
solana-clap-utils = { path = "../clap-utils", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-net-utils = { path = "../net-utils", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.6.28" }
solana-streamer = { path = "../streamer", version = "=1.6.28" }
solana-logger = { path = "../logger", version = "=1.6.28" }
solana-net-utils = { path = "../net-utils", version = "=1.6.28" }
solana-version = { path = "../version", version = "=1.6.28" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,38 +1,32 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, App, Arg},
solana_streamer::{
packet::{Packet, PacketBatch, PacketBatchRecycler, PACKET_DATA_SIZE},
streamer::{receiver, PacketBatchReceiver},
},
std::{
cmp::max,
net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket},
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
mpsc::channel,
Arc,
},
thread::{sleep, spawn, JoinHandle, Result},
time::{Duration, SystemTime},
},
};
use clap::{crate_description, crate_name, App, Arg};
use solana_streamer::packet::{Packet, Packets, PacketsRecycler, PACKET_DATA_SIZE};
use solana_streamer::streamer::{receiver, PacketReceiver};
use std::cmp::max;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::mpsc::channel;
use std::sync::Arc;
use std::thread::sleep;
use std::thread::{spawn, JoinHandle, Result};
use std::time::Duration;
use std::time::SystemTime;
fn producer(addr: &SocketAddr, exit: Arc<AtomicBool>) -> JoinHandle<()> {
let send = UdpSocket::bind("0.0.0.0:0").unwrap();
let mut packet_batch = PacketBatch::default();
packet_batch.packets.resize(10, Packet::default());
for w in packet_batch.packets.iter_mut() {
let mut msgs = Packets::default();
msgs.packets.resize(10, Packet::default());
for w in msgs.packets.iter_mut() {
w.meta.size = PACKET_DATA_SIZE;
w.meta.set_addr(addr);
w.meta.set_addr(&addr);
}
let packet_batch = Arc::new(packet_batch);
let msgs = Arc::new(msgs);
spawn(move || loop {
if exit.load(Ordering::Relaxed) {
return;
}
let mut num = 0;
for p in &packet_batch.packets {
for p in &msgs.packets {
let a = p.meta.addr();
assert!(p.meta.size <= PACKET_DATA_SIZE);
send.send_to(&p.data[..p.meta.size], &a).unwrap();
@@ -42,14 +36,14 @@ fn producer(addr: &SocketAddr, exit: Arc<AtomicBool>) -> JoinHandle<()> {
})
}
fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketBatchReceiver) -> JoinHandle<()> {
fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketReceiver) -> JoinHandle<()> {
spawn(move || loop {
if exit.load(Ordering::Relaxed) {
return;
}
let timer = Duration::new(1, 0);
if let Ok(packet_batch) = r.recv_timeout(timer) {
rvs.fetch_add(packet_batch.packets.len(), Ordering::Relaxed);
if let Ok(msgs) = r.recv_timeout(timer) {
rvs.fetch_add(msgs.packets.len(), Ordering::Relaxed);
}
})
}
@@ -81,7 +75,7 @@ fn main() -> Result<()> {
let mut read_channels = Vec::new();
let mut read_threads = Vec::new();
let recycler = PacketBatchRecycler::default();
let recycler = PacketsRecycler::new_without_limit("bench-streamer-recycler-shrink-stats");
for _ in 0..num_sockets {
let read = solana_net_utils::bind_to(ip_addr, port, false).unwrap();
read.set_read_timeout(Some(Duration::new(1, 0))).unwrap();
@@ -98,7 +92,6 @@ fn main() -> Result<()> {
recycler.clone(),
"bench-streamer-test",
1,
true,
));
}


@@ -1,36 +1,36 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-bench-tps"
version = "1.10.0"
version = "1.6.28"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
bincode = "1.3.1"
clap = "2.33.1"
log = "0.4.14"
rayon = "1.5.1"
serde_json = "1.0.72"
serde_yaml = "0.8.21"
solana-core = { path = "../core", version = "=1.10.0" }
solana-genesis = { path = "../genesis", version = "=1.10.0" }
solana-client = { path = "../client", version = "=1.10.0" }
solana-faucet = { path = "../faucet", version = "=1.10.0" }
solana-gossip = { path = "../gossip", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-metrics = { path = "../metrics", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-net-utils = { path = "../net-utils", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
log = "0.4.11"
rayon = "1.5.0"
serde_json = "1.0.56"
serde_yaml = "0.8.13"
solana-clap-utils = { path = "../clap-utils", version = "=1.6.28" }
solana-core = { path = "../core", version = "=1.6.28" }
solana-genesis = { path = "../genesis", version = "=1.6.28" }
solana-client = { path = "../client", version = "=1.6.28" }
solana-faucet = { path = "../faucet", version = "=1.6.28" }
solana-logger = { path = "../logger", version = "=1.6.28" }
solana-metrics = { path = "../metrics", version = "=1.6.28" }
solana-measure = { path = "../measure", version = "=1.6.28" }
solana-net-utils = { path = "../net-utils", version = "=1.6.28" }
solana-runtime = { path = "../runtime", version = "=1.6.28" }
solana-sdk = { path = "../sdk", version = "=1.6.28" }
solana-version = { path = "../version", version = "=1.6.28" }
[dev-dependencies]
serial_test = "0.5.1"
solana-local-cluster = { path = "../local-cluster", version = "=1.10.0" }
serial_test = "0.4.0"
solana-local-cluster = { path = "../local-cluster", version = "=1.6.28" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,26 +1,25 @@
use {
crate::cli::Config,
log::*,
rayon::prelude::*,
solana_client::perf_utils::{sample_txs, SampleStats},
solana_core::gen_keys::GenKeys,
solana_faucet::faucet::request_airdrop_transaction,
solana_measure::measure::Measure,
solana_metrics::{self, datapoint_info},
solana_sdk::{
use crate::cli::Config;
use log::*;
use rayon::prelude::*;
use solana_client::perf_utils::{sample_txs, SampleStats};
use solana_core::gen_keys::GenKeys;
use solana_faucet::faucet::request_airdrop_transaction;
use solana_measure::measure::Measure;
use solana_metrics::{self, datapoint_info};
use solana_sdk::{
client::Client,
clock::{DEFAULT_MS_PER_SLOT, DEFAULT_S_PER_SLOT, MAX_PROCESSING_AGE},
clock::{DEFAULT_S_PER_SLOT, MAX_PROCESSING_AGE},
commitment_config::CommitmentConfig,
fee_calculator::FeeCalculator,
hash::Hash,
instruction::{AccountMeta, Instruction},
message::Message,
pubkey::Pubkey,
signature::{Keypair, Signer},
system_instruction, system_transaction,
timing::{duration_as_ms, duration_as_s, duration_as_us, timestamp},
transaction::Transaction,
},
std::{
};
use std::{
collections::{HashSet, VecDeque},
net::SocketAddr,
process::exit,
@@ -30,7 +29,6 @@ use {
},
thread::{sleep, Builder, JoinHandle},
time::{Duration, Instant},
},
};
// The point at which transactions become "too old", in seconds.
@@ -47,12 +45,14 @@ pub type Result<T> = std::result::Result<T, BenchTpsError>;
pub type SharedTransactions = Arc<RwLock<VecDeque<Vec<(Transaction, u64)>>>>;
fn get_latest_blockhash<T: Client>(client: &T) -> Hash {
fn get_recent_blockhash<T: Client>(client: &T) -> (Hash, FeeCalculator) {
loop {
match client.get_latest_blockhash_with_commitment(CommitmentConfig::processed()) {
Ok((blockhash, _)) => return blockhash,
match client.get_recent_blockhash_with_commitment(CommitmentConfig::processed()) {
Ok((blockhash, fee_calculator, _last_valid_slot)) => {
return (blockhash, fee_calculator)
}
Err(err) => {
info!("Couldn't get last blockhash: {:?}", err);
info!("Couldn't get recent blockhash: {:?}", err);
sleep(Duration::from_secs(1));
}
};
@@ -239,19 +239,19 @@ where
let shared_txs: SharedTransactions = Arc::new(RwLock::new(VecDeque::new()));
let blockhash = Arc::new(RwLock::new(get_latest_blockhash(client.as_ref())));
let recent_blockhash = Arc::new(RwLock::new(get_recent_blockhash(client.as_ref()).0));
let shared_tx_active_thread_count = Arc::new(AtomicIsize::new(0));
let total_tx_sent_count = Arc::new(AtomicUsize::new(0));
let blockhash_thread = {
let exit_signal = exit_signal.clone();
let blockhash = blockhash.clone();
let recent_blockhash = recent_blockhash.clone();
let client = client.clone();
let id = id.pubkey();
Builder::new()
.name("solana-blockhash-poller".to_string())
.spawn(move || {
poll_blockhash(&exit_signal, &blockhash, &client, &id);
poll_blockhash(&exit_signal, &recent_blockhash, &client, &id);
})
.unwrap()
};
@@ -271,7 +271,7 @@ where
let start = Instant::now();
generate_chunked_transfers(
blockhash,
recent_blockhash,
&shared_txs,
shared_tx_active_thread_count,
source_keypair_chunks,
@@ -391,22 +391,6 @@ fn generate_txs(
}
}
fn get_new_latest_blockhash<T: Client>(client: &Arc<T>, blockhash: &Hash) -> Option<Hash> {
let start = Instant::now();
while start.elapsed().as_secs() < 5 {
if let Ok(new_blockhash) = client.get_latest_blockhash() {
if new_blockhash != *blockhash {
return Some(new_blockhash);
}
}
debug!("Got same blockhash ({:?}), will retry...", blockhash);
// Retry ~twice during a slot
sleep(Duration::from_millis(DEFAULT_MS_PER_SLOT / 2));
}
None
}
fn poll_blockhash<T: Client>(
exit_signal: &Arc<AtomicBool>,
blockhash: &Arc<RwLock<Hash>>,
@@ -418,7 +402,7 @@ fn poll_blockhash<T: Client>(
loop {
let blockhash_updated = {
let old_blockhash = *blockhash.read().unwrap();
if let Some(new_blockhash) = get_new_latest_blockhash(client, &old_blockhash) {
if let Ok((new_blockhash, _fee)) = client.get_new_blockhash(&old_blockhash) {
*blockhash.write().unwrap() = new_blockhash;
blockhash_last_updated = Instant::now();
true
@@ -556,16 +540,16 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
self.len(),
);
let blockhash = get_latest_blockhash(client.as_ref());
let (blockhash, _fee_calculator) = get_recent_blockhash(client.as_ref());
// re-sign retained to_fund_txes with updated blockhash
self.sign(blockhash);
self.send(client);
self.send(&client);
// Sleep a few slots to allow transactions to process
sleep(Duration::from_secs(1));
self.verify(client, to_lamports);
self.verify(&client, to_lamports);
// retry anything that seems to have dropped through cracks
// again since these txs are all or nothing, they're fine to
@@ -580,7 +564,7 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
let to_fund_txs: Vec<(&Keypair, Transaction)> = to_fund
.par_iter()
.map(|(k, t)| {
let instructions = system_instruction::transfer_many(&k.pubkey(), t);
let instructions = system_instruction::transfer_many(&k.pubkey(), &t);
let message = Message::new(&instructions, Some(&k.pubkey()));
(*k, Transaction::new_unsigned(message))
})
@@ -633,7 +617,7 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
return None;
}
let verified = if verify_funding_transfer(&client, tx, to_lamports) {
let verified = if verify_funding_transfer(&client, &tx, to_lamports) {
verified_txs.fetch_add(1, Ordering::Relaxed);
Some(k.pubkey())
} else {
@@ -748,8 +732,8 @@ pub fn airdrop_lamports<T: Client>(
id.pubkey(),
);
let blockhash = get_latest_blockhash(client);
match request_airdrop_transaction(faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
let (blockhash, _fee_calculator) = get_recent_blockhash(client);
match request_airdrop_transaction(&faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
Ok(transaction) => {
let mut tries = 0;
loop {
@@ -906,16 +890,8 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
// pay for the transaction fees in a new run.
let enough_lamports = 8 * lamports_per_account / 10;
if first_keypair_balance < enough_lamports || last_keypair_balance < enough_lamports {
let single_sig_message = Message::new_with_blockhash(
&[Instruction::new_with_bytes(
Pubkey::new_unique(),
&[],
vec![AccountMeta::new(Pubkey::new_unique(), true)],
)],
None,
&client.get_latest_blockhash().unwrap(),
);
let max_fee = client.get_fee_for_message(&single_sig_message).unwrap();
let fee_rate_governor = client.get_fee_rate_governor().unwrap();
let max_fee = fee_rate_governor.max_lamports_per_signature;
let extra_fees = extra * max_fee;
let total_keypairs = keypairs.len() as u64 + 1; // Add one for funding keypair
let total = lamports_per_account * total_keypairs + extra_fees;
@@ -948,19 +924,17 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
#[cfg(test)]
mod tests {
use {
super::*,
solana_runtime::{bank::Bank, bank_client::BankClient},
solana_sdk::{
client::SyncClient, fee_calculator::FeeRateGovernor,
genesis_config::create_genesis_config,
},
};
use super::*;
use solana_runtime::bank::Bank;
use solana_runtime::bank_client::BankClient;
use solana_sdk::client::SyncClient;
use solana_sdk::fee_calculator::FeeRateGovernor;
use solana_sdk::genesis_config::create_genesis_config;
#[test]
fn test_bench_tps_bank_client() {
let (genesis_config, id) = create_genesis_config(10_000);
let bank = Bank::new_for_tests(&genesis_config);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let config = Config {
@@ -981,7 +955,7 @@ mod tests {
#[test]
fn test_bench_tps_fund_keys() {
let (genesis_config, id) = create_genesis_config(10_000);
let bank = Bank::new_for_tests(&genesis_config);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let keypair_count = 20;
let lamports = 20;
@@ -1004,7 +978,7 @@ mod tests {
let (mut genesis_config, id) = create_genesis_config(10_000);
let fee_rate_governor = FeeRateGovernor::new(11, 0);
genesis_config.fee_rate_governor = fee_rate_governor;
let bank = Bank::new_for_tests(&genesis_config);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let keypair_count = 20;
let lamports = 20;


@@ -1,13 +1,11 @@
use {
clap::{crate_description, crate_name, App, Arg, ArgMatches},
solana_faucet::faucet::FAUCET_PORT,
solana_sdk::{
fee_calculator::FeeRateGovernor,
use clap::{crate_description, crate_name, App, Arg, ArgMatches};
use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::fee_calculator::FeeRateGovernor;
use solana_sdk::{
pubkey::Pubkey,
signature::{read_keypair_file, Keypair},
},
std::{net::SocketAddr, process::exit, time::Duration},
};
use std::{net::SocketAddr, process::exit, time::Duration};
const NUM_LAMPORTS_PER_ACCOUNT_DEFAULT: u64 = solana_sdk::native_token::LAMPORTS_PER_SOL;


@@ -1,20 +1,13 @@
#![allow(clippy::integer_arithmetic)]
use {
log::*,
solana_bench_tps::{
bench::{do_bench_tps, generate_and_fund_keypairs, generate_keypairs},
cli,
},
solana_genesis::Base64Account,
solana_gossip::gossip_service::{discover_cluster, get_client, get_multi_client},
solana_sdk::{
fee_calculator::FeeRateGovernor,
signature::{Keypair, Signer},
system_program,
},
solana_streamer::socket::SocketAddrSpace,
std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc},
};
use log::*;
use solana_bench_tps::bench::{do_bench_tps, generate_and_fund_keypairs, generate_keypairs};
use solana_bench_tps::cli;
use solana_core::gossip_service::{discover_cluster, get_client, get_multi_client};
use solana_genesis::Base64Account;
use solana_sdk::fee_calculator::FeeRateGovernor;
use solana_sdk::signature::{Keypair, Signer};
use solana_sdk::system_program;
use std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc};
/// Number of signatures for all transactions in ~1 week at ~100K TPS
pub const NUM_SIGNATURES_FOR_TXS: u64 = 100_000 * 60 * 60 * 24 * 7;
@@ -46,7 +39,7 @@ fn main() {
let keypair_count = *tx_count * keypair_multiplier;
if *write_to_client_file {
info!("Generating {} keypairs", keypair_count);
let (keypairs, _) = generate_keypairs(id, keypair_count as u64);
let (keypairs, _) = generate_keypairs(&id, keypair_count as u64);
let num_accounts = keypairs.len() as u64;
let max_fee =
FeeRateGovernor::new(*target_lamports_per_signature, 0).max_lamports_per_signature;
@@ -75,14 +68,13 @@ fn main() {
}
info!("Connecting to the cluster");
let nodes = discover_cluster(entrypoint_addr, *num_nodes, SocketAddrSpace::Unspecified)
.unwrap_or_else(|err| {
let nodes = discover_cluster(&entrypoint_addr, *num_nodes).unwrap_or_else(|err| {
eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err);
exit(1);
});
let client = if *multi_client {
let (client, num_clients) = get_multi_client(&nodes, &SocketAddrSpace::Unspecified);
let (client, num_clients) = get_multi_client(&nodes);
if nodes.len() < num_clients {
eprintln!(
"Error: Insufficient nodes discovered. Expecting {} or more",
@@ -96,7 +88,7 @@ fn main() {
let mut target_client = None;
for node in nodes {
if node.id == *target_node {
target_client = Some(Arc::new(get_client(&[node], &SocketAddrSpace::Unspecified)));
target_client = Some(Arc::new(get_client(&[node])));
break;
}
}
@@ -105,7 +97,7 @@ fn main() {
exit(1);
})
} else {
Arc::new(get_client(&nodes, &SocketAddrSpace::Unspecified))
Arc::new(get_client(&nodes))
};
let keypairs = if *read_from_client_file {
@@ -143,7 +135,7 @@ fn main() {
generate_and_fund_keypairs(
client.clone(),
Some(*faucet_addr),
id,
&id,
keypair_count,
*num_lamports_per_account,
)


@@ -1,24 +1,20 @@
#![allow(clippy::integer_arithmetic)]
use {
serial_test::serial,
solana_bench_tps::{
use serial_test::serial;
use solana_bench_tps::{
bench::{do_bench_tps, generate_and_fund_keypairs},
cli::Config,
},
solana_client::thin_client::create_client,
solana_core::validator::ValidatorConfig,
solana_faucet::faucet::run_local_faucet_with_port,
solana_gossip::cluster_info::VALIDATOR_PORT_RANGE,
solana_local_cluster::{
};
use solana_client::thin_client::create_client;
use solana_core::{cluster_info::VALIDATOR_PORT_RANGE, validator::ValidatorConfig};
use solana_faucet::faucet::run_local_faucet_with_port;
use solana_local_cluster::{
local_cluster::{ClusterConfig, LocalCluster},
validator_configs::make_identical_validator_configs,
},
solana_sdk::signature::{Keypair, Signer},
solana_streamer::socket::SocketAddrSpace,
std::{
};
use solana_sdk::signature::{Keypair, Signer};
use std::{
sync::{mpsc::channel, Arc},
time::Duration,
},
};
fn test_bench_tps_local_cluster(config: Config) {
@@ -26,19 +22,13 @@ fn test_bench_tps_local_cluster(config: Config) {
solana_logger::setup();
const NUM_NODES: usize = 1;
let cluster = LocalCluster::new(
&mut ClusterConfig {
let cluster = LocalCluster::new(&mut ClusterConfig {
node_stakes: vec![999_990; NUM_NODES],
cluster_lamports: 200_000_000,
validator_configs: make_identical_validator_configs(
&ValidatorConfig::default(),
NUM_NODES,
),
validator_configs: make_identical_validator_configs(&ValidatorConfig::default(), NUM_NODES),
native_instruction_processors,
..ClusterConfig::default()
},
SocketAddrSpace::Unspecified,
);
});
let faucet_keypair = Keypair::new();
cluster.transfer(


@@ -1,88 +0,0 @@
# Leader Duplicate Block Slashing
This design describes how the cluster slashes leaders that produce duplicate
blocks.
Leaders that produce multiple blocks for the same slot increase the number of
potential forks that the cluster has to resolve.
## Primitives
1. gossip_root: Nodes now gossip their current root
2. gossip_duplicate_slots: Nodes can gossip up to `N` duplicate slot proofs.
3. `DUPLICATE_THRESHOLD`: The minimum percentage of stake that needs to vote on a fork with version `X` of a duplicate slot, in order for that fork to become votable.
## Protocol
1. When WindowStage detects a duplicate slot proof `P`, it checks the new `gossip_root` to see if `<= 1/3` of the nodes have rooted a slot `S >= P`. If so, it pushes a proof to `gossip_duplicate_slots` to gossip. WindowStage then signals ReplayStage about this duplicate slot `S`. These proofs can be purged from gossip once the validator sees `> 2/3` of the nodes gossiping roots `R > S`.
2. When ReplayStage receives the signal for a duplicate slot `S` from `1)` above, the validator monitors gossip and replay, waiting for `>= DUPLICATE_THRESHOLD` votes for the same hash, which implies the same version of the slot. If this condition is met for some version with hash `H` of slot `S`, this is then known as the `duplicate_confirmed` version of the slot.
Before a duplicate slot `S` is `duplicate_confirmed`, it's first excluded from the vote candidate set in the fork choice rules. In addition, ReplayStage also resets PoH to the *latest* ancestor of the *earliest* `non-duplicate/confirmed_duplicate_slot`, so that block generation can start happening on the earliest known *safe* block.
Some notes about the `DUPLICATE_THRESHOLD` (expressed below as a fraction of the total stake). In the cases below, assume `DUPLICATE_THRESHOLD = 52%`:
a) If less than `2 * DUPLICATE_THRESHOLD - 1` of the network is malicious, then there can only be one such `duplicate_confirmed` version of the slot: two versions each confirmed by `DUPLICATE_THRESHOLD` of the stake would require the overlapping `2 * DUPLICATE_THRESHOLD - 1` of the stake to have voted for both versions, which only malicious nodes do. With `DUPLICATE_THRESHOLD = 52%`, this is a malicious tolerance of `4%`.
b) The liveness of the network is at most `1 - DUPLICATE_THRESHOLD - SWITCH_THRESHOLD`, because at least `SWITCH_THRESHOLD` of the stake must be voting on a different fork in order to switch off of a duplicate fork that has `< DUPLICATE_THRESHOLD` stake voting on it and is *not* `duplicate_confirmed`. For `DUPLICATE_THRESHOLD = 52%` and `SWITCH_THRESHOLD = 38%`, this implies a liveness tolerance of `10%`.
For example in the situation below, validators that voted on `2` can't vote any further on fork `2` because it's been removed from fork choice. Now slot 6 better have enough stake for a switching proof, or the network halts.
```text
|-------- 2 (51% voted, then detected this slot was a duplicate and removed this slot from fork choice)
0---|
|---------- 6 (39%)
```
3. Switching proofs need to be extended to allow including vote hashes from different versions of the same slot (detected through `1)`). Right now this is not supported, since switching proofs can
only be built using votes from banks in BankForks, and two different versions of the same slot cannot
simultaneously exist in BankForks. For instance:
```text
|-------- 2
|
0------------- 1 ------ 2'
|
|---------- 6
```
Imagine versions 2 and 2' of the slot each have `DUPLICATE_THRESHOLD / 2` of the votes on them, so neither duplicate can be confirmed. At most slot 6 has `1 - DUPLICATE_THRESHOLD / 2` of the votes
on it, which is less than the switching threshold. Thus, in order for validators voting on `2` or `2'` to switch to slot 6 and make progress, they need to incorporate votes from the other version of the slot into their switching proofs.
### The repair problem.
Now what happens if one of the following occurs:
1) Due to network blips/latencies, some validators fail to observe the gossip votes before they are overwritten by newer votes? Then some validators may conclude a slot `S` is `duplicate_confirmed` while others don't.
2) Due to lockouts, no version of duplicate slot `S` reaches `duplicate_confirmed` status, but one of its descendants may reach `duplicate_confirmed` after those lockouts expire, which by definition, means `S` is also `duplicate_confirmed`.
3) People who are catching up and don't see the votes in gossip encounter a dup block and can't make progress.
We assume that, given an eventually stable network, if at least one correct validator has observed that `S` is `duplicate_confirmed` and `S` is part of the heaviest fork, then eventually all validators will observe some descendant of `S` as `duplicate_confirmed`.
This problem we need to solve is modeled simply by the below scenario:
```text
1 -> 2 (duplicate) -> 3 -> 4 (duplicate)
```
Assume the following:
1. Due to gossiping duplicate proofs, we assume everyone will eventually see duplicate proofs for 2 and 4, so everyone agrees to remove them from fork choice until they are `duplicate_confirmed`.
2. Due to lockouts, `> DUPLICATE_THRESHOLD` of the stake votes on 4, but not 2. This means at least `DUPLICATE_THRESHOLD` of the stake has the "correct" version of both slots 2 and 4.
3. However, the remaining `1 - DUPLICATE_THRESHOLD` of the stake has the wrong version of 2. This means in replay, their slot 3 will be marked dead, *even though the faulty slot is 2*. The goal is to get these validators on the right fork again.
Possible solution:
1. Change `EpochSlots` to signal when a bank is frozen, not when a slot is complete. If we see > `DUPLICATE_THRESHOLD` have frozen the dead slot 3, then we attempt recovery. Note this does not mean that all `DUPLICATE_THRESHOLD` have frozen the same version of the bank, it's just a signal to us that something may be wrong with our version of the bank.
2. Recovery takes the form of a special repair request, `RepairDuplicateConfirmed(dead_slot, Vec<(Slot, Hash)>)`, which specifies a dead slot, and then a vector of `(slot, hash)` of `N` of its latest parents.
3. The repairer sees this request and responds with the correct hash only if any element of the `(slot, hash)` vector is both `duplicate_confirmed` and the hash doesn't match the requester's hash in the vector.
4. Once the requester sees the "correct" hash is different than their frozen hash, they dump the block so that they can accept a new block, and ask the network for the block with the correct hash.
Of course the repairer might lie to you, and you'll get the wrong version of the block, in which case you'll end up with another dead block and repeat the procedure.
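A sketch of the request shape described in steps 2 and 3 above; the type and the responder helper are hypothetical, not the implemented wire format:
```rust
use solana_sdk::{clock::Slot, hash::Hash};

/// Hypothetical repair request: "my `dead_slot` will not replay; here are
/// the `(slot, hash)` pairs of N of its latest parents as I see them."
struct RepairDuplicateConfirmed {
    dead_slot: Slot,
    ancestors: Vec<(Slot, Hash)>,
}

/// Responder side of step 3: answer with the correct hash only if some
/// ancestor is duplicate_confirmed locally and the requester's hash for
/// it differs.
fn respond(
    req: &RepairDuplicateConfirmed,
    duplicate_confirmed_hash: impl Fn(Slot) -> Option<Hash>,
) -> Option<(Slot, Hash)> {
    req.ancestors.iter().find_map(|(slot, their_hash)| {
        duplicate_confirmed_hash(*slot)
            .filter(|our_hash| our_hash != their_hash)
            .map(|our_hash| (*slot, our_hash))
    })
}
```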


@@ -1,29 +0,0 @@
[package]
name = "solana-bucket-map"
version = "1.10.0"
description = "solana-bucket-map"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-bucket-map"
readme = "../README.md"
repository = "https://github.com/solana-labs/solana"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
license = "Apache-2.0"
edition = "2021"
[dependencies]
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
memmap2 = "0.5.0"
log = { version = "0.4.11" }
solana-measure = { path = "../measure", version = "=1.10.0" }
rand = "0.7.0"
fs_extra = "1.2.0"
tempfile = "3.2.0"
[lib]
crate-type = ["lib"]
name = "solana_bucket_map"
[[bench]]
name = "bucket_map"


@@ -1,77 +0,0 @@
#![feature(test)]
macro_rules! DEFINE_NxM_BENCH {
($i:ident, $n:literal, $m:literal) => {
mod $i {
use super::*;
#[bench]
fn bench_insert_baseline_hashmap(bencher: &mut Bencher) {
do_bench_insert_baseline_hashmap(bencher, $n, $m);
}
#[bench]
fn bench_insert_bucket_map(bencher: &mut Bencher) {
do_bench_insert_bucket_map(bencher, $n, $m);
}
}
};
}
extern crate test;
use {
rayon::prelude::*,
solana_bucket_map::bucket_map::{BucketMap, BucketMapConfig},
solana_sdk::pubkey::Pubkey,
std::{collections::hash_map::HashMap, sync::RwLock},
test::Bencher,
};
type IndexValue = u64;
DEFINE_NxM_BENCH!(dim_01x02, 1, 2);
DEFINE_NxM_BENCH!(dim_02x04, 2, 4);
DEFINE_NxM_BENCH!(dim_04x08, 4, 8);
DEFINE_NxM_BENCH!(dim_08x16, 8, 16);
DEFINE_NxM_BENCH!(dim_16x32, 16, 32);
DEFINE_NxM_BENCH!(dim_32x64, 32, 64);
/// Benchmark insert with Hashmap as baseline for N threads inserting M keys each
fn do_bench_insert_baseline_hashmap(bencher: &mut Bencher, n: usize, m: usize) {
let index = RwLock::new(HashMap::new());
(0..n).into_iter().into_par_iter().for_each(|i| {
let key = Pubkey::new_unique();
index
.write()
.unwrap()
.insert(key, vec![(i, IndexValue::default())]);
});
bencher.iter(|| {
(0..n).into_iter().into_par_iter().for_each(|_| {
for j in 0..m {
let key = Pubkey::new_unique();
index
.write()
.unwrap()
.insert(key, vec![(j, IndexValue::default())]);
}
})
});
}
/// Benchmark insert with BucketMap with N buckets for N threads inserting M keys each
fn do_bench_insert_bucket_map(bencher: &mut Bencher, n: usize, m: usize) {
let index = BucketMap::new(BucketMapConfig::new(n));
(0..n).into_iter().into_par_iter().for_each(|i| {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![(i, IndexValue::default())], 0)));
});
bencher.iter(|| {
(0..n).into_iter().into_par_iter().for_each(|_| {
for j in 0..m {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![(j, IndexValue::default())], 0)));
}
})
});
}


@@ -1,491 +0,0 @@
use {
crate::{
bucket_item::BucketItem,
bucket_map::BucketMapError,
bucket_stats::BucketMapStats,
bucket_storage::{BucketStorage, Uid, DEFAULT_CAPACITY_POW2, UID_UNLOCKED},
index_entry::IndexEntry,
MaxSearch, RefCount,
},
rand::{thread_rng, Rng},
solana_measure::measure::Measure,
solana_sdk::pubkey::Pubkey,
std::{
collections::hash_map::DefaultHasher,
hash::{Hash, Hasher},
marker::PhantomData,
ops::RangeBounds,
path::PathBuf,
sync::{
atomic::{AtomicUsize, Ordering},
Arc, Mutex,
},
},
};
#[derive(Default)]
pub struct ReallocatedItems {
// Some if the index was reallocated
// u64 is the random value associated with the new index
pub index: Option<(u64, BucketStorage)>,
// Some for a data bucket reallocation
// u64 is data bucket index
pub data: Option<(u64, BucketStorage)>,
}
#[derive(Default)]
pub struct Reallocated {
/// > 0 if reallocations are encoded
pub active_reallocations: AtomicUsize,
/// actual reallocated bucket
/// mutex because bucket grow code runs with a read lock
pub items: Mutex<ReallocatedItems>,
}
impl Reallocated {
/// specify that a reallocation has occurred
pub fn add_reallocation(&self) {
assert_eq!(
0,
self.active_reallocations.fetch_add(1, Ordering::Relaxed),
"Only 1 reallocation can occur at a time"
);
}
/// Return true IFF a reallocation has occurred.
/// Calling this takes conceptual ownership of the reallocation encoded in the struct.
pub fn get_reallocated(&self) -> bool {
self.active_reallocations
.compare_exchange(1, 0, Ordering::Acquire, Ordering::Relaxed)
.is_ok()
}
}
// >= 2 instances of BucketStorage per 'bucket' in the bucket map. 1 for index, >= 1 for data
pub struct Bucket<T> {
drives: Arc<Vec<PathBuf>>,
//index
pub index: BucketStorage,
//random offset for the index
random: u64,
//storage buckets to store SlotSlice up to a power of 2 in len
pub data: Vec<BucketStorage>,
_phantom: PhantomData<T>,
stats: Arc<BucketMapStats>,
pub reallocated: Reallocated,
}
impl<T: Clone + Copy> Bucket<T> {
pub fn new(
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
stats: Arc<BucketMapStats>,
) -> Self {
let index = BucketStorage::new(
Arc::clone(&drives),
1,
std::mem::size_of::<IndexEntry>() as u64,
max_search,
Arc::clone(&stats.index),
);
Self {
random: thread_rng().gen(),
drives,
index,
data: vec![],
_phantom: PhantomData::default(),
stats,
reallocated: Reallocated::default(),
}
}
pub fn bucket_len(&self) -> u64 {
self.index.used.load(Ordering::Relaxed)
}
pub fn keys(&self) -> Vec<Pubkey> {
let mut rv = vec![];
for i in 0..self.index.capacity() {
if self.index.uid(i) == UID_UNLOCKED {
continue;
}
let ix: &IndexEntry = self.index.get(i);
rv.push(ix.key);
}
rv
}
pub fn items_in_range<R>(&self, range: &Option<&R>) -> Vec<BucketItem<T>>
where
R: RangeBounds<Pubkey>,
{
let mut result = Vec::with_capacity(self.index.used.load(Ordering::Relaxed) as usize);
for i in 0..self.index.capacity() {
let ii = i % self.index.capacity();
if self.index.uid(ii) == UID_UNLOCKED {
continue;
}
let ix: &IndexEntry = self.index.get(ii);
let key = ix.key;
if range.map(|r| r.contains(&key)).unwrap_or(true) {
let val = ix.read_value(self);
result.push(BucketItem {
pubkey: key,
ref_count: ix.ref_count(),
slot_list: val.map(|(v, _ref_count)| v.to_vec()).unwrap_or_default(),
});
}
}
result
}
pub fn find_entry(&self, key: &Pubkey) -> Option<(&IndexEntry, u64)> {
Self::bucket_find_entry(&self.index, key, self.random)
}
fn find_entry_mut(&self, key: &Pubkey) -> Option<(&mut IndexEntry, u64)> {
Self::bucket_find_entry_mut(&self.index, key, self.random)
}
fn bucket_find_entry_mut<'a>(
index: &'a BucketStorage,
key: &Pubkey,
random: u64,
) -> Option<(&'a mut IndexEntry, u64)> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i % index.capacity();
if index.uid(ii) == UID_UNLOCKED {
continue;
}
let elem: &mut IndexEntry = index.get_mut(ii);
if elem.key == *key {
return Some((elem, ii));
}
}
None
}
fn bucket_find_entry<'a>(
index: &'a BucketStorage,
key: &Pubkey,
random: u64,
) -> Option<(&'a IndexEntry, u64)> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i % index.capacity();
if index.uid(ii) == UID_UNLOCKED {
continue;
}
let elem: &IndexEntry = index.get(ii);
if elem.key == *key {
return Some((elem, ii));
}
}
None
}
fn bucket_create_key(
index: &BucketStorage,
key: &Pubkey,
elem_uid: Uid,
random: u64,
) -> Result<u64, BucketMapError> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i as u64 % index.capacity();
if index.uid(ii) != UID_UNLOCKED {
continue;
}
index.allocate(ii, elem_uid).unwrap();
let mut elem: &mut IndexEntry = index.get_mut(ii);
elem.key = *key;
// These will be overwritten after allocation by callers.
// Since this part of the mmapped file could have previously been used by someone else, there can be garbage here.
elem.ref_count = 0;
elem.storage_offset = 0;
elem.storage_capacity_when_created_pow2 = 0;
elem.num_slots = 0;
//debug!( "INDEX ALLOC {:?} {} {} {}", key, ii, index.capacity, elem_uid );
return Ok(ii);
}
Err(BucketMapError::IndexNoSpace(index.capacity_pow2))
}
pub fn addref(&mut self, key: &Pubkey) -> Option<RefCount> {
let (elem, _) = self.find_entry_mut(key)?;
elem.ref_count += 1;
Some(elem.ref_count)
}
pub fn unref(&mut self, key: &Pubkey) -> Option<RefCount> {
let (elem, _) = self.find_entry_mut(key)?;
elem.ref_count -= 1;
Some(elem.ref_count)
}
fn create_key(&self, key: &Pubkey) -> Result<u64, BucketMapError> {
Self::bucket_create_key(&self.index, key, IndexEntry::key_uid(key), self.random)
}
pub fn read_value(&self, key: &Pubkey) -> Option<(&[T], RefCount)> {
//debug!("READ_VALUE: {:?}", key);
let (elem, _) = self.find_entry(key)?;
elem.read_value(self)
}
pub fn try_write(
&mut self,
key: &Pubkey,
data: &[T],
ref_count: u64,
) -> Result<(), BucketMapError> {
let best_fit_bucket = IndexEntry::data_bucket_from_num_slots(data.len() as u64);
if self.data.get(best_fit_bucket as usize).is_none() {
// fail early if the data bucket we need doesn't exist - we don't want the index entry partially allocated
return Err(BucketMapError::DataNoSpace((best_fit_bucket, 0)));
}
let index_entry = self.find_entry_mut(key);
let (elem, elem_ix) = match index_entry {
None => {
let ii = self.create_key(key)?;
let elem: &mut IndexEntry = self.index.get_mut(ii);
(elem, ii)
}
Some(res) => res,
};
elem.ref_count = ref_count;
let elem_uid = self.index.uid(elem_ix);
let bucket_ix = elem.data_bucket_ix();
let current_bucket = &self.data[bucket_ix as usize];
if best_fit_bucket == bucket_ix && elem.num_slots > 0 {
// in place update
let elem_loc = elem.data_loc(current_bucket);
let slice: &mut [T] = current_bucket.get_mut_cell_slice(elem_loc, data.len() as u64);
assert!(current_bucket.uid(elem_loc) == elem_uid);
elem.num_slots = data.len() as u64;
slice.clone_from_slice(data);
Ok(())
} else {
// need to move the allocation to a best fit spot
let best_bucket = &self.data[best_fit_bucket as usize];
let cap_power = best_bucket.capacity_pow2;
let cap = best_bucket.capacity();
let pos = thread_rng().gen_range(0, cap);
for i in pos..pos + self.index.max_search() {
let ix = i % cap;
if best_bucket.uid(ix) == UID_UNLOCKED {
let elem_loc = elem.data_loc(current_bucket);
if elem.num_slots > 0 {
current_bucket.free(elem_loc, elem_uid);
}
elem.storage_offset = ix;
elem.storage_capacity_when_created_pow2 = best_bucket.capacity_pow2;
elem.num_slots = data.len() as u64;
//debug!( "DATA ALLOC {:?} {} {} {}", key, elem.data_location, best_bucket.capacity, elem_uid );
if elem.num_slots > 0 {
best_bucket.allocate(ix, elem_uid).unwrap();
let slice = best_bucket.get_mut_cell_slice(ix, data.len() as u64);
slice.copy_from_slice(data);
}
return Ok(());
}
}
Err(BucketMapError::DataNoSpace((best_fit_bucket, cap_power)))
}
}
pub fn delete_key(&mut self, key: &Pubkey) {
if let Some((elem, elem_ix)) = self.find_entry(key) {
let elem_uid = self.index.uid(elem_ix);
if elem.num_slots > 0 {
let data_bucket = &self.data[elem.data_bucket_ix() as usize];
let loc = elem.data_loc(data_bucket);
//debug!( "DATA FREE {:?} {} {} {}", key, elem.data_location, data_bucket.capacity, elem_uid );
data_bucket.free(loc, elem_uid);
}
//debug!("INDEX FREE {:?} {}", key, elem_uid);
self.index.free(elem_ix, elem_uid);
}
}
pub fn grow_index(&self, current_capacity_pow2: u8) {
if self.index.capacity_pow2 == current_capacity_pow2 {
let mut m = Measure::start("grow_index");
//debug!("GROW_INDEX: {}", current_capacity_pow2);
let increment = 1;
for i in increment.. {
//increasing the capacity by ^4 reduces the
//likelihood of a re-index collision of 2^(max_search)^2
//1 in 2^32
let index = BucketStorage::new_with_capacity(
Arc::clone(&self.drives),
1,
std::mem::size_of::<IndexEntry>() as u64,
// *2 causes rapid growth of index buckets
self.index.capacity_pow2 + i, // * 2,
self.index.max_search,
Arc::clone(&self.stats.index),
);
let random = thread_rng().gen();
let mut valid = true;
for ix in 0..self.index.capacity() {
let uid = self.index.uid(ix);
if UID_UNLOCKED != uid {
let elem: &IndexEntry = self.index.get(ix);
let new_ix = Self::bucket_create_key(&index, &elem.key, uid, random);
if new_ix.is_err() {
valid = false;
break;
}
let new_ix = new_ix.unwrap();
let new_elem: &mut IndexEntry = index.get_mut(new_ix);
*new_elem = *elem;
/*
let dbg_elem: IndexEntry = *new_elem;
assert_eq!(
Self::bucket_find_entry(&index, &elem.key, random).unwrap(),
(&dbg_elem, new_ix)
);
*/
}
}
if valid {
let sz = index.capacity();
{
let mut max = self.stats.index.max_size.lock().unwrap();
*max = std::cmp::max(*max, sz);
}
let mut items = self.reallocated.items.lock().unwrap();
items.index = Some((random, index));
self.reallocated.add_reallocation();
break;
}
}
m.stop();
self.stats.index.resizes.fetch_add(1, Ordering::Relaxed);
self.stats
.index
.resize_us
.fetch_add(m.as_us(), Ordering::Relaxed);
}
}
pub fn apply_grow_index(&mut self, random: u64, index: BucketStorage) {
self.random = random;
self.index = index;
}
fn elem_size() -> u64 {
std::mem::size_of::<T>() as u64
}
pub fn apply_grow_data(&mut self, ix: usize, bucket: BucketStorage) {
if self.data.get(ix).is_none() {
for i in self.data.len()..ix {
// insert empty data buckets
self.data.push(BucketStorage::new(
Arc::clone(&self.drives),
1 << i,
Self::elem_size(),
self.index.max_search,
Arc::clone(&self.stats.data),
))
}
self.data.push(bucket);
} else {
self.data[ix] = bucket;
}
}
/// grow a data bucket
/// The application of the new bucket is deferred until the next write lock.
pub fn grow_data(&self, data_index: u64, current_capacity_pow2: u8) {
let new_bucket = BucketStorage::new_resized(
&self.drives,
self.index.max_search,
self.data.get(data_index as usize),
std::cmp::max(current_capacity_pow2 + 1, DEFAULT_CAPACITY_POW2),
1 << data_index,
Self::elem_size(),
&self.stats.data,
);
self.reallocated.add_reallocation();
let mut items = self.reallocated.items.lock().unwrap();
items.data = Some((data_index, new_bucket));
}
fn bucket_index_ix(index: &BucketStorage, key: &Pubkey, random: u64) -> u64 {
let uid = IndexEntry::key_uid(key);
let mut s = DefaultHasher::new();
uid.hash(&mut s);
//the locally generated random will make it hard for an attacker
//to deterministically cause all the pubkeys to land in the same
//location in any bucket on all validators
random.hash(&mut s);
let ix = s.finish();
ix % index.capacity()
//debug!( "INDEX_IX: {:?} uid:{} loc: {} cap:{}", key, uid, location, index.capacity() );
}
/// grow the appropriate piece. Note this takes an immutable ref.
/// The actual grow is set into self.reallocated and applied later on a write lock
pub fn grow(&self, err: BucketMapError) {
match err {
BucketMapError::DataNoSpace((data_index, current_capacity_pow2)) => {
//debug!("GROWING SPACE {:?}", (data_index, current_capacity_pow2));
self.grow_data(data_index, current_capacity_pow2);
}
BucketMapError::IndexNoSpace(current_capacity_pow2) => {
//debug!("GROWING INDEX {}", sz);
self.grow_index(current_capacity_pow2);
}
}
}
/// if a bucket was resized previously with a read lock, then apply that resize now
pub fn handle_delayed_grows(&mut self) {
if self.reallocated.get_reallocated() {
// swap out the bucket that was resized previously with a read lock
let mut items = ReallocatedItems::default();
std::mem::swap(&mut items, &mut self.reallocated.items.lock().unwrap());
if let Some((random, bucket)) = items.index.take() {
self.apply_grow_index(random, bucket);
} else {
// data bucket
let (i, new_bucket) = items.data.take().unwrap();
self.apply_grow_data(i as usize, new_bucket);
}
}
}
pub fn insert(&mut self, key: &Pubkey, value: (&[T], RefCount)) {
let (new, refct) = value;
loop {
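// try_write reports which piece (index or data bucket) is out of
// space; grow that piece, apply the deferred swap, then retry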
let rv = self.try_write(key, new, refct);
match rv {
Ok(_) => return,
Err(err) => {
self.grow(err);
self.handle_delayed_grows();
}
}
}
}
pub fn update<F>(&mut self, key: &Pubkey, mut updatefn: F)
where
F: FnMut(Option<(&[T], RefCount)>) -> Option<(Vec<T>, RefCount)>,
{
let current = self.read_value(key);
let new = updatefn(current);
if new.is_none() {
self.delete_key(key);
return;
}
let (new, refct) = new.unwrap();
self.insert(key, (&new, refct));
}
}
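
The read-lock grow protocol above is easiest to see from the caller's side. Here is a minimal sketch using the APIs in this diff (`BucketMap::try_insert`, `BucketMap::get_bucket`, `BucketApi::grow`), mirroring the `bucket_map_test_insert2` test further down; a usage illustration, not part of the source:

```rust
use solana_bucket_map::bucket_map::{BucketMap, BucketMapConfig};
use solana_sdk::pubkey::Pubkey;

fn insert_with_manual_grow(index: &BucketMap<u64>, key: &Pubkey) {
    // try_insert holds only a read lock; on failure it returns the
    // BucketMapError naming the piece that ran out of space
    while let Err(err) = index.try_insert(key, (&[0u64], 0)) {
        // grow also runs under a read lock and stashes the resized
        // bucket in `reallocated`; the next write lock swaps it in
        index.get_bucket(key).grow(err);
    }
}

fn main() {
    let index = BucketMap::new(BucketMapConfig::new(1 << 1));
    let key = Pubkey::new_unique();
    insert_with_manual_grow(&index, &key);
    assert_eq!(index.read_value(&key), Some((vec![0u64], 0)));
}
```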


@@ -1,148 +0,0 @@
use {
crate::{
bucket::Bucket, bucket_item::BucketItem, bucket_map::BucketMapError,
bucket_stats::BucketMapStats, MaxSearch, RefCount,
},
solana_sdk::pubkey::Pubkey,
std::{
ops::RangeBounds,
path::PathBuf,
sync::{
atomic::{AtomicU64, Ordering},
Arc, RwLock, RwLockWriteGuard,
},
},
};
type LockedBucket<T> = RwLock<Option<Bucket<T>>>;
pub struct BucketApi<T: Clone + Copy> {
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
pub stats: Arc<BucketMapStats>,
bucket: LockedBucket<T>,
count: Arc<AtomicU64>,
}
impl<T: Clone + Copy> BucketApi<T> {
pub fn new(
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
stats: Arc<BucketMapStats>,
count: Arc<AtomicU64>,
) -> Self {
Self {
drives,
max_search,
stats,
bucket: RwLock::default(),
count,
}
}
/// Get the items for bucket
pub fn items_in_range<R>(&self, range: &Option<&R>) -> Vec<BucketItem<T>>
where
R: RangeBounds<Pubkey>,
{
self.bucket
.read()
.unwrap()
.as_ref()
.map(|bucket| bucket.items_in_range(range))
.unwrap_or_default()
}
/// Get the Pubkeys
pub fn keys(&self) -> Vec<Pubkey> {
self.bucket
.read()
.unwrap()
.as_ref()
.map_or_else(Vec::default, |bucket| bucket.keys())
}
/// Get the values for Pubkey `key`
pub fn read_value(&self, key: &Pubkey) -> Option<(Vec<T>, RefCount)> {
self.bucket.read().unwrap().as_ref().and_then(|bucket| {
bucket
.read_value(key)
.map(|(value, ref_count)| (value.to_vec(), ref_count))
})
}
pub fn bucket_len(&self) -> u64 {
self.bucket
.read()
.unwrap()
.as_ref()
.map(|bucket| bucket.bucket_len())
.unwrap_or_default()
}
pub fn delete_key(&self, key: &Pubkey) {
let mut bucket = self.get_write_bucket();
if let Some(bucket) = bucket.as_mut() {
bucket.delete_key(key)
}
}
fn get_write_bucket(&self) -> RwLockWriteGuard<Option<Bucket<T>>> {
let mut bucket = self.bucket.write().unwrap();
if bucket.is_none() {
*bucket = Some(Bucket::new(
Arc::clone(&self.drives),
self.max_search,
Arc::clone(&self.stats),
));
} else {
let write = bucket.as_mut().unwrap();
write.handle_delayed_grows();
self.count.store(write.bucket_len(), Ordering::Relaxed);
}
bucket
}
pub fn addref(&self, key: &Pubkey) -> Option<RefCount> {
self.get_write_bucket()
.as_mut()
.and_then(|bucket| bucket.addref(key))
}
pub fn unref(&self, key: &Pubkey) -> Option<RefCount> {
self.get_write_bucket()
.as_mut()
.and_then(|bucket| bucket.unref(key))
}
pub fn insert(&self, pubkey: &Pubkey, value: (&[T], RefCount)) {
let mut bucket = self.get_write_bucket();
bucket.as_mut().unwrap().insert(pubkey, value)
}
pub fn grow(&self, err: BucketMapError) {
// grows are special - they get a read lock and modify 'reallocated'
// the grown changes are applied the next time there is a write lock taken
if let Some(bucket) = self.bucket.read().unwrap().as_ref() {
bucket.grow(err)
}
}
pub fn update<F>(&self, key: &Pubkey, updatefn: F)
where
F: FnMut(Option<(&[T], RefCount)>) -> Option<(Vec<T>, RefCount)>,
{
let mut bucket = self.get_write_bucket();
bucket.as_mut().unwrap().update(key, updatefn)
}
pub fn try_write(
&self,
pubkey: &Pubkey,
value: (&[T], RefCount),
) -> Result<(), BucketMapError> {
let mut bucket = self.get_write_bucket();
bucket.as_mut().unwrap().try_write(pubkey, value.0, value.1)
}
}


@@ -1,8 +0,0 @@
use {crate::RefCount, solana_sdk::pubkey::Pubkey};
#[derive(Debug, Default, Clone)]
pub struct BucketItem<T> {
pub pubkey: Pubkey,
pub ref_count: RefCount,
pub slot_list: Vec<T>,
}


@@ -1,529 +0,0 @@
//! BucketMap is a mostly contention free concurrent map backed by MmapMut
use {
crate::{bucket_api::BucketApi, bucket_stats::BucketMapStats, MaxSearch, RefCount},
solana_sdk::pubkey::Pubkey,
std::{convert::TryInto, fmt::Debug, fs, path::PathBuf, sync::Arc},
tempfile::TempDir,
};
#[derive(Debug, Default, Clone)]
pub struct BucketMapConfig {
pub max_buckets: usize,
pub drives: Option<Vec<PathBuf>>,
pub max_search: Option<MaxSearch>,
}
impl BucketMapConfig {
/// Create a new BucketMapConfig
/// NOTE: BucketMap requires that max_buckets is a power of two
pub fn new(max_buckets: usize) -> BucketMapConfig {
BucketMapConfig {
max_buckets,
..BucketMapConfig::default()
}
}
}
pub struct BucketMap<T: Clone + Copy + Debug> {
buckets: Vec<Arc<BucketApi<T>>>,
drives: Arc<Vec<PathBuf>>,
max_buckets_pow2: u8,
pub stats: Arc<BucketMapStats>,
pub temp_dir: Option<TempDir>,
}
impl<T: Clone + Copy + Debug> Drop for BucketMap<T> {
fn drop(&mut self) {
if self.temp_dir.is_none() {
BucketMap::<T>::erase_previous_drives(&self.drives);
}
}
}
impl<T: Clone + Copy + Debug> std::fmt::Debug for BucketMap<T> {
fn fmt(&self, _f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
#[derive(Debug)]
pub enum BucketMapError {
/// (bucket_index, current_capacity_pow2)
DataNoSpace((u64, u8)),
/// current_capacity_pow2
IndexNoSpace(u8),
}
impl<T: Clone + Copy + Debug> BucketMap<T> {
pub fn new(config: BucketMapConfig) -> Self {
assert_ne!(
config.max_buckets, 0,
"Max number of buckets must be non-zero"
);
assert!(
config.max_buckets.is_power_of_two(),
"Max number of buckets must be a power of two"
);
// this should be <= 1 << DEFAULT_CAPACITY or we end up searching the same items over and over - probably not a big deal since it is so small anyway
const MAX_SEARCH: MaxSearch = 32;
let max_search = config.max_search.unwrap_or(MAX_SEARCH);
if let Some(drives) = config.drives.as_ref() {
Self::erase_previous_drives(drives);
}
let mut temp_dir = None;
let drives = config.drives.unwrap_or_else(|| {
temp_dir = Some(TempDir::new().unwrap());
vec![temp_dir.as_ref().unwrap().path().to_path_buf()]
});
let drives = Arc::new(drives);
let mut per_bucket_count = Vec::with_capacity(config.max_buckets);
per_bucket_count.resize_with(config.max_buckets, Arc::default);
let stats = Arc::new(BucketMapStats {
per_bucket_count,
..BucketMapStats::default()
});
let buckets = stats
.per_bucket_count
.iter()
.map(|per_bucket_count| {
Arc::new(BucketApi::new(
Arc::clone(&drives),
max_search,
Arc::clone(&stats),
Arc::clone(per_bucket_count),
))
})
.collect();
// A simple log2 function that is correct if x is a power of two
let log2 = |x: usize| usize::BITS - x.leading_zeros() - 1;
Self {
buckets,
drives,
max_buckets_pow2: log2(config.max_buckets) as u8,
stats,
temp_dir,
}
}
fn erase_previous_drives(drives: &[PathBuf]) {
drives.iter().for_each(|folder| {
let _ = fs::remove_dir_all(&folder);
let _ = fs::create_dir_all(&folder);
})
}
pub fn num_buckets(&self) -> usize {
self.buckets.len()
}
/// Get the values for Pubkey `key`
pub fn read_value(&self, key: &Pubkey) -> Option<(Vec<T>, RefCount)> {
self.get_bucket(key).read_value(key)
}
/// Delete the Pubkey `key`
pub fn delete_key(&self, key: &Pubkey) {
self.get_bucket(key).delete_key(key);
}
/// Update Pubkey `key`'s value with 'value'
pub fn insert(&self, key: &Pubkey, value: (&[T], RefCount)) {
self.get_bucket(key).insert(key, value)
}
/// Update Pubkey `key`'s value with 'value'
pub fn try_insert(&self, key: &Pubkey, value: (&[T], RefCount)) -> Result<(), BucketMapError> {
self.get_bucket(key).try_write(key, value)
}
/// Update Pubkey `key`'s value with function `updatefn`
pub fn update<F>(&self, key: &Pubkey, updatefn: F)
where
F: FnMut(Option<(&[T], RefCount)>) -> Option<(Vec<T>, RefCount)>,
{
self.get_bucket(key).update(key, updatefn)
}
pub fn get_bucket(&self, key: &Pubkey) -> &Arc<BucketApi<T>> {
self.get_bucket_from_index(self.bucket_ix(key))
}
pub fn get_bucket_from_index(&self, ix: usize) -> &Arc<BucketApi<T>> {
&self.buckets[ix]
}
/// Get the bucket index for Pubkey `key`
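/// e.g. with 256 buckets (max_buckets_pow2 == 8), the top 8 bits of the
/// big-endian u64 read from the key select the bucket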
pub fn bucket_ix(&self, key: &Pubkey) -> usize {
if self.max_buckets_pow2 > 0 {
let location = read_be_u64(key.as_ref());
(location >> (u64::BITS - self.max_buckets_pow2 as u32)) as usize
} else {
0
}
}
/// Increment the refcount for Pubkey `key`
pub fn addref(&self, key: &Pubkey) -> Option<RefCount> {
let ix = self.bucket_ix(key);
let bucket = &self.buckets[ix];
bucket.addref(key)
}
/// Decrement the refcount for Pubkey `key`
pub fn unref(&self, key: &Pubkey) -> Option<RefCount> {
let ix = self.bucket_ix(key);
let bucket = &self.buckets[ix];
bucket.unref(key)
}
}
/// Look at the first 8 bytes of the input and reinterpret them as a u64
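/// e.g. a key whose first byte is 0x80 yields 1u64 << 63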
fn read_be_u64(input: &[u8]) -> u64 {
assert!(input.len() >= std::mem::size_of::<u64>());
u64::from_be_bytes(input[0..std::mem::size_of::<u64>()].try_into().unwrap())
}
#[cfg(test)]
mod tests {
use {
super::*,
rand::{thread_rng, Rng},
std::{collections::HashMap, sync::RwLock},
};
#[test]
fn bucket_map_test_insert() {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.update(&key, |_| Some((vec![0], 0)));
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
}
#[test]
fn bucket_map_test_insert2() {
for pass in 0..3 {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
let bucket = index.get_bucket(&key);
if pass == 0 {
index.insert(&key, (&[0], 0));
} else {
let result = index.try_insert(&key, (&[0], 0));
assert!(result.is_err());
assert_eq!(index.read_value(&key), None);
if pass == 2 {
// call try_insert again - should still return an error
let result = index.try_insert(&key, (&[0], 0));
assert!(result.is_err());
assert_eq!(index.read_value(&key), None);
}
bucket.grow(result.unwrap_err());
let result = index.try_insert(&key, (&[0], 0));
assert!(result.is_ok());
}
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
}
}
#[test]
fn bucket_map_test_update2() {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.insert(&key, (&[0], 0));
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
index.insert(&key, (&[1], 0));
assert_eq!(index.read_value(&key), Some((vec![1], 0)));
}
#[test]
fn bucket_map_test_update() {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.update(&key, |_| Some((vec![0], 0)));
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
index.update(&key, |_| Some((vec![1], 0)));
assert_eq!(index.read_value(&key), Some((vec![1], 0)));
}
#[test]
fn bucket_map_test_update_to_0_len() {
solana_logger::setup();
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.update(&key, |_| Some((vec![0], 1)));
assert_eq!(index.read_value(&key), Some((vec![0], 1)));
// sets len to 0, updates in place
index.update(&key, |_| Some((vec![], 1)));
assert_eq!(index.read_value(&key), Some((vec![], 1)));
// sets len to 0, doesn't update in place - finds a new place, which causes us to no longer have an allocation in data
index.update(&key, |_| Some((vec![], 2)));
assert_eq!(index.read_value(&key), Some((vec![], 2)));
// sets len to 1, doesn't update in place - finds a new place
index.update(&key, |_| Some((vec![1], 2)));
assert_eq!(index.read_value(&key), Some((vec![1], 2)));
}
#[test]
fn bucket_map_test_delete() {
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
for i in 0..10 {
let key = Pubkey::new_unique();
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
}
}
#[test]
fn bucket_map_test_delete_2() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
for i in 0..100 {
let key = Pubkey::new_unique();
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
}
}
#[test]
fn bucket_map_test_n_drives() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
for i in 0..100 {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
}
}
#[test]
fn bucket_map_test_grow_read() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
let keys: Vec<Pubkey> = (0..100).into_iter().map(|_| Pubkey::new_unique()).collect();
for k in 0..keys.len() {
let key = &keys[k];
let i = read_be_u64(key.as_ref());
index.update(key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(key), Some((vec![i], 0)));
for (ix, key) in keys.iter().enumerate() {
let i = read_be_u64(key.as_ref());
//debug!("READ: {:?} {}", key, i);
let expected = if ix <= k { Some((vec![i], 0)) } else { None };
assert_eq!(index.read_value(key), expected);
}
}
}
#[test]
fn bucket_map_test_n_delete() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
let keys: Vec<Pubkey> = (0..20).into_iter().map(|_| Pubkey::new_unique()).collect();
for key in keys.iter() {
let i = read_be_u64(key.as_ref());
index.update(key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(key), Some((vec![i], 0)));
}
for key in keys.iter() {
let i = read_be_u64(key.as_ref());
//debug!("READ: {:?} {}", key, i);
assert_eq!(index.read_value(key), Some((vec![i], 0)));
}
for k in 0..keys.len() {
let key = &keys[k];
index.delete_key(key);
assert_eq!(index.read_value(key), None);
for key in keys.iter().skip(k + 1) {
let i = read_be_u64(key.as_ref());
assert_eq!(index.read_value(key), Some((vec![i], 0)));
}
}
}
#[test]
fn hashmap_compare() {
use std::sync::Mutex;
solana_logger::setup();
let maps = (0..2)
.into_iter()
.map(|max_buckets_pow2| {
let config = BucketMapConfig::new(1 << max_buckets_pow2);
BucketMap::new(config)
})
.collect::<Vec<_>>();
let hash_map = RwLock::new(HashMap::<Pubkey, (Vec<(usize, usize)>, RefCount)>::new());
let max_slot_list_len = 3;
let all_keys = Mutex::new(vec![]);
let gen_rand_value = || {
let count = thread_rng().gen_range(0, max_slot_list_len);
let v = (0..count)
.into_iter()
.map(|x| (x as usize, x as usize /*thread_rng().gen::<usize>()*/))
.collect::<Vec<_>>();
let rc = thread_rng().gen::<RefCount>();
(v, rc)
};
let get_key = || {
let mut keys = all_keys.lock().unwrap();
if keys.is_empty() {
return None;
}
let len = keys.len();
Some(keys.remove(thread_rng().gen_range(0, len)))
};
let return_key = |key| {
let mut keys = all_keys.lock().unwrap();
keys.push(key);
};
let verify = || {
let mut maps = maps
.iter()
.map(|map| {
let mut r = vec![];
for bin in 0..map.num_buckets() {
r.append(
&mut map.buckets[bin]
.items_in_range(&None::<&std::ops::RangeInclusive<Pubkey>>),
);
}
r
})
.collect::<Vec<_>>();
let hm = hash_map.read().unwrap();
for (k, v) in hm.iter() {
for map in maps.iter_mut() {
for i in 0..map.len() {
if k == &map[i].pubkey {
assert_eq!(map[i].slot_list, v.0);
assert_eq!(map[i].ref_count, v.1);
map.remove(i);
break;
}
}
}
}
for map in maps.iter() {
assert!(map.is_empty());
}
};
let mut initial = 100; // put this many items in to start
// do random operations: insert, update, delete, add/unref in random order
// verify consistency between hashmap and all bucket maps
for i in 0..10000 {
if initial > 0 {
initial -= 1;
}
if initial > 0 || thread_rng().gen_range(0, 5) == 0 {
// insert
let k = solana_sdk::pubkey::new_rand();
let v = gen_rand_value();
hash_map.write().unwrap().insert(k, v.clone());
let insert = thread_rng().gen_range(0, 2) == 0;
maps.iter().for_each(|map| {
if insert {
map.insert(&k, (&v.0, v.1))
} else {
map.update(&k, |current| {
assert!(current.is_none());
Some(v.clone())
})
}
});
return_key(k);
}
if thread_rng().gen_range(0, 10) == 0 {
// update
if let Some(k) = get_key() {
let hm = hash_map.read().unwrap();
let (v, rc) = gen_rand_value();
let v_old = hm.get(&k);
let insert = thread_rng().gen_range(0, 2) == 0;
maps.iter().for_each(|map| {
if insert {
map.insert(&k, (&v, rc))
} else {
map.update(&k, |current| {
assert_eq!(current, v_old.map(|(v, rc)| (&v[..], *rc)), "{}", k);
Some((v.clone(), rc))
})
}
});
drop(hm);
hash_map.write().unwrap().insert(k, (v, rc));
return_key(k);
}
}
if thread_rng().gen_range(0, 20) == 0 {
// delete
if let Some(k) = get_key() {
let mut hm = hash_map.write().unwrap();
hm.remove(&k);
maps.iter().for_each(|map| {
map.delete_key(&k);
});
}
}
if thread_rng().gen_range(0, 10) == 0 {
// add/unref
if let Some(k) = get_key() {
let mut inc = thread_rng().gen_range(0, 2) == 0;
let mut hm = hash_map.write().unwrap();
let (v, mut rc) = hm.get(&k).map(|(v, rc)| (v.to_vec(), *rc)).unwrap();
if !inc && rc == 0 {
// can't decrement rc=0
inc = true;
}
rc = if inc { rc + 1 } else { rc - 1 };
hm.insert(k, (v.to_vec(), rc));
maps.iter().for_each(|map| {
if thread_rng().gen_range(0, 2) == 0 {
map.update(&k, |current| Some((current.unwrap().0.to_vec(), rc)))
} else if inc {
map.addref(&k);
} else {
map.unref(&k);
}
});
return_key(k);
}
}
if i % 1000 == 0 {
verify();
}
}
verify();
}
}


@@ -1,18 +0,0 @@
use std::sync::{atomic::AtomicU64, Arc, Mutex};
#[derive(Debug, Default)]
pub struct BucketStats {
pub resizes: AtomicU64,
pub max_size: Mutex<u64>,
pub resize_us: AtomicU64,
pub new_file_us: AtomicU64,
pub flush_file_us: AtomicU64,
pub mmap_us: AtomicU64,
}
#[derive(Debug, Default)]
pub struct BucketMapStats {
pub index: Arc<BucketStats>,
pub data: Arc<BucketStats>,
pub per_bucket_count: Vec<Arc<AtomicU64>>,
}


@@ -1,343 +0,0 @@
use {
crate::{bucket_stats::BucketStats, MaxSearch},
memmap2::MmapMut,
rand::{thread_rng, Rng},
solana_measure::measure::Measure,
std::{
fs::{remove_file, OpenOptions},
io::{Seek, SeekFrom, Write},
path::PathBuf,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
},
};
/*
1 2
2 4
3 8
4 16
5 32
6 64
7 128
8 256
9 512
10 1,024
11 2,048
12 4,096
13 8,192
14 16,384
23 8,388,608
24 16,777,216
*/
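// (capacity_pow2 -> number of cells, per the table above; the default
// below starts a bucket at 32 cells)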
pub const DEFAULT_CAPACITY_POW2: u8 = 5;
/// A Header UID of 0 indicates that the header is unlocked
pub(crate) const UID_UNLOCKED: Uid = 0;
pub(crate) type Uid = u64;
#[repr(C)]
struct Header {
lock: AtomicU64,
}
impl Header {
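// claim the cell by atomically swapping UID_UNLOCKED for `uid`; fails
// if some other uid already holds the lock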
fn try_lock(&self, uid: Uid) -> bool {
Ok(UID_UNLOCKED)
== self
.lock
.compare_exchange(UID_UNLOCKED, uid, Ordering::AcqRel, Ordering::Relaxed)
}
fn unlock(&self) -> Uid {
self.lock.swap(UID_UNLOCKED, Ordering::Release)
}
fn uid(&self) -> Uid {
self.lock.load(Ordering::Acquire)
}
}
pub struct BucketStorage {
path: PathBuf,
mmap: MmapMut,
pub cell_size: u64,
pub capacity_pow2: u8,
pub used: AtomicU64,
pub stats: Arc<BucketStats>,
pub max_search: MaxSearch,
}
#[derive(Debug)]
pub enum BucketStorageError {
AlreadyAllocated,
}
impl Drop for BucketStorage {
fn drop(&mut self) {
let _ = remove_file(&self.path);
}
}
impl BucketStorage {
pub fn new_with_capacity(
drives: Arc<Vec<PathBuf>>,
num_elems: u64,
elem_size: u64,
capacity_pow2: u8,
max_search: MaxSearch,
stats: Arc<BucketStats>,
) -> Self {
let cell_size = elem_size * num_elems + std::mem::size_of::<Header>() as u64;
let (mmap, path) = Self::new_map(&drives, cell_size as usize, capacity_pow2, &stats);
Self {
path,
mmap,
cell_size,
used: AtomicU64::new(0),
capacity_pow2,
stats,
max_search,
}
}
pub fn max_search(&self) -> u64 {
self.max_search as u64
}
pub fn new(
drives: Arc<Vec<PathBuf>>,
num_elems: u64,
elem_size: u64,
max_search: MaxSearch,
stats: Arc<BucketStats>,
) -> Self {
Self::new_with_capacity(
drives,
num_elems,
elem_size,
DEFAULT_CAPACITY_POW2,
max_search,
stats,
)
}
pub fn uid(&self, ix: u64) -> Uid {
assert!(ix < self.capacity(), "bad index size");
let ix = (ix * self.cell_size) as usize;
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
return hdr.as_ref().unwrap().uid();
}
}
pub fn allocate(&self, ix: u64, uid: Uid) -> Result<(), BucketStorageError> {
assert!(ix < self.capacity(), "allocate: bad index size");
assert!(UID_UNLOCKED != uid, "allocate: bad uid");
let mut e = Err(BucketStorageError::AlreadyAllocated);
let ix = (ix * self.cell_size) as usize;
//debug!("ALLOC {} {}", ix, uid);
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
if hdr.as_ref().unwrap().try_lock(uid) {
e = Ok(());
self.used.fetch_add(1, Ordering::Relaxed);
}
};
e
}
pub fn free(&self, ix: u64, uid: Uid) {
assert!(ix < self.capacity(), "bad index size");
assert!(UID_UNLOCKED != uid, "free: bad uid");
let ix = (ix * self.cell_size) as usize;
//debug!("FREE {} {}", ix, uid);
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
//debug!("FREE uid: {}", hdr.as_ref().unwrap().uid());
let previous_uid = hdr.as_ref().unwrap().unlock();
assert_eq!(
previous_uid, uid,
"free: unlocked a header with a differet uid: {}",
previous_uid
);
self.used.fetch_sub(1, Ordering::Relaxed);
}
}
pub fn get<T: Sized>(&self, ix: u64) -> &T {
assert!(ix < self.capacity(), "bad index size");
let start = (ix * self.cell_size) as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>();
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *const T;
&*item
}
}
pub fn get_empty_cell_slice<T: Sized>(&self) -> &[T] {
let len = 0;
let item_slice: &[u8] = &self.mmap[0..0];
unsafe {
let item = item_slice.as_ptr() as *const T;
std::slice::from_raw_parts(item, len as usize)
}
}
pub fn get_cell_slice<T: Sized>(&self, ix: u64, len: u64) -> &[T] {
assert!(ix < self.capacity(), "bad index size");
let ix = self.cell_size * ix;
let start = ix as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>() * len as usize;
//debug!("GET slice {} {}", start, end);
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *const T;
std::slice::from_raw_parts(item, len as usize)
}
}
#[allow(clippy::mut_from_ref)]
pub fn get_mut<T: Sized>(&self, ix: u64) -> &mut T {
assert!(ix < self.capacity(), "bad index size");
let start = (ix * self.cell_size) as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>();
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *mut T;
&mut *item
}
}
#[allow(clippy::mut_from_ref)]
pub fn get_mut_cell_slice<T: Sized>(&self, ix: u64, len: u64) -> &mut [T] {
assert!(ix < self.capacity(), "bad index size");
let ix = self.cell_size * ix;
let start = ix as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>() * len as usize;
//debug!("GET mut slice {} {}", start, end);
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *mut T;
std::slice::from_raw_parts_mut(item, len as usize)
}
}
fn new_map(
drives: &[PathBuf],
cell_size: usize,
capacity_pow2: u8,
stats: &BucketStats,
) -> (MmapMut, PathBuf) {
let mut measure_new_file = Measure::start("measure_new_file");
let capacity = 1u64 << capacity_pow2;
let r = thread_rng().gen_range(0, drives.len());
let drive = &drives[r];
let pos = format!("{}", thread_rng().gen_range(0, u128::MAX),);
let file = drive.join(pos);
let mut data = OpenOptions::new()
.read(true)
.write(true)
.create(true)
.open(file.clone())
.map_err(|e| {
panic!(
"Unable to create data file {} in current dir({:?}): {:?}",
file.display(),
std::env::current_dir(),
e
);
})
.unwrap();
// Theoretical performance optimization: write a zero to the end of
// the file so that we won't have to resize it later, which may be
// expensive.
//debug!("GROWING file {}", capacity * cell_size as u64);
data.seek(SeekFrom::Start(capacity * cell_size as u64 - 1))
.unwrap();
data.write_all(&[0]).unwrap();
data.seek(SeekFrom::Start(0)).unwrap();
measure_new_file.stop();
let mut measure_flush = Measure::start("measure_flush");
data.flush().unwrap(); // can we skip this?
measure_flush.stop();
let mut measure_mmap = Measure::start("measure_mmap");
let res = (unsafe { MmapMut::map_mut(&data).unwrap() }, file);
measure_mmap.stop();
stats
.new_file_us
.fetch_add(measure_new_file.as_us(), Ordering::Relaxed);
stats
.flush_file_us
.fetch_add(measure_flush.as_us(), Ordering::Relaxed);
stats
.mmap_us
.fetch_add(measure_mmap.as_us(), Ordering::Relaxed);
res
}
/// copy contents from 'old_bucket' to 'self'
fn copy_contents(&mut self, old_bucket: &Self) {
let mut m = Measure::start("grow");
let old_cap = old_bucket.capacity();
let old_map = &old_bucket.mmap;
let increment = self.capacity_pow2 - old_bucket.capacity_pow2;
let index_grow = 1 << increment;
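// every old cell i lands at i * index_grow, spreading the occupied
// cells out; IndexEntry::data_loc applies the matching shift on reads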
(0..old_cap as usize).into_iter().for_each(|i| {
let old_ix = i * old_bucket.cell_size as usize;
let new_ix = old_ix * index_grow;
let dst_slice: &[u8] = &self.mmap[new_ix..new_ix + old_bucket.cell_size as usize];
let src_slice: &[u8] = &old_map[old_ix..old_ix + old_bucket.cell_size as usize];
unsafe {
let dst = dst_slice.as_ptr() as *mut u8;
let src = src_slice.as_ptr() as *const u8;
std::ptr::copy_nonoverlapping(src, dst, old_bucket.cell_size as usize);
};
});
m.stop();
self.stats.resizes.fetch_add(1, Ordering::Relaxed);
self.stats.resize_us.fetch_add(m.as_us(), Ordering::Relaxed);
}
/// allocate a new bucket, copying data from 'bucket'
pub fn new_resized(
drives: &Arc<Vec<PathBuf>>,
max_search: MaxSearch,
bucket: Option<&Self>,
capacity_pow_2: u8,
num_elems: u64,
elem_size: u64,
stats: &Arc<BucketStats>,
) -> Self {
let mut new_bucket = Self::new_with_capacity(
Arc::clone(drives),
num_elems,
elem_size,
capacity_pow_2,
max_search,
Arc::clone(stats),
);
if let Some(bucket) = bucket {
new_bucket.copy_contents(bucket);
}
let sz = new_bucket.capacity();
{
let mut max = new_bucket.stats.max_size.lock().unwrap();
*max = std::cmp::max(*max, sz);
}
new_bucket
}
/// Return the number of cells currently allocated
pub fn capacity(&self) -> u64 {
1 << self.capacity_pow2
}
}
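
Each cell in the mmap above is a fixed-size record: an 8-byte `Header` lock word followed by `num_elems` elements, which is what the offset arithmetic in `get`, `get_cell_slice`, and friends computes. A small standalone sketch of that arithmetic (illustrative only, with the element count fixed at 4 `u64`s):

```rust
use std::mem::size_of;

// mirrors BucketStorage's layout math for a data bucket whose cells
// hold up to 4 u64 slot entries
fn cell_offsets(ix: u64) -> (usize, usize) {
    // 8-byte lock word at the start of every cell, as in the source
    struct Header {
        _lock: u64,
    }
    let cell_size = size_of::<u64>() as u64 * 4 + size_of::<Header>() as u64;
    let start = (ix * cell_size) as usize + size_of::<Header>(); // skip the lock
    let end = start + size_of::<u64>() * 4; // a full 4-element slice
    (start, end)
}

fn main() {
    assert_eq!(cell_offsets(0), (8, 40));
    assert_eq!(cell_offsets(1), (48, 80));
}
```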


@@ -1,67 +0,0 @@
use {
crate::{
bucket::Bucket,
bucket_storage::{BucketStorage, Uid},
RefCount,
},
solana_sdk::{clock::Slot, pubkey::Pubkey},
std::{
collections::hash_map::DefaultHasher,
fmt::Debug,
hash::{Hash, Hasher},
},
};
#[repr(C)]
#[derive(Debug, Copy, Clone, PartialEq)]
// one instance of this per item in the index
// stored in the index bucket
pub struct IndexEntry {
pub key: Pubkey, // can this be smaller if we have reduced the keys into buckets already?
pub ref_count: RefCount, // can this be smaller? Do we ever need more than 4B refcounts?
pub storage_offset: u64, // smaller? since these are variably sized, this could get tricky. well, actually accountinfo is not variable sized...
// if the bucket doubled, the index can be recomputed using create_bucket_capacity_pow2
pub storage_capacity_when_created_pow2: u8, // see data_location
pub num_slots: Slot, // can this be smaller? epoch size should ~ be the max len. this is the num elements in the slot list
}
impl IndexEntry {
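// map a slot list length to the data bucket sized for it:
// e.g. num_slots of 3 or 4 -> bucket 2, whose cells hold 4 elements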
pub fn data_bucket_from_num_slots(num_slots: Slot) -> u64 {
(num_slots as f64).log2().ceil() as u64 // use int log here?
}
pub fn data_bucket_ix(&self) -> u64 {
Self::data_bucket_from_num_slots(self.num_slots)
}
pub fn ref_count(&self) -> RefCount {
self.ref_count
}
// This function maps the original data location into an index in the current bucket storage.
// This is coupled with how we resize bucket storages.
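// If the storage has doubled k times since this entry was created, the
// original cell index is shifted left by k (see BucketStorage::copy_contents).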
pub fn data_loc(&self, storage: &BucketStorage) -> u64 {
self.storage_offset << (storage.capacity_pow2 - self.storage_capacity_when_created_pow2)
}
pub fn read_value<'a, T>(&self, bucket: &'a Bucket<T>) -> Option<(&'a [T], RefCount)> {
let data_bucket_ix = self.data_bucket_ix();
let data_bucket = &bucket.data[data_bucket_ix as usize];
let slice = if self.num_slots > 0 {
let loc = self.data_loc(data_bucket);
let uid = Self::key_uid(&self.key);
assert_eq!(uid, bucket.data[data_bucket_ix as usize].uid(loc));
bucket.data[data_bucket_ix as usize].get_cell_slice(loc, self.num_slots)
} else {
// num_slots is 0. This means we don't have an actual allocation.
// can we trust that the data_bucket is even safe?
bucket.data[data_bucket_ix as usize].get_empty_cell_slice()
};
Some((slice, self.ref_count))
}
pub fn key_uid(key: &Pubkey) -> Uid {
let mut s = DefaultHasher::new();
key.hash(&mut s);
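// clamp up to at least 1: a uid of 0 is reserved for UID_UNLOCKED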
s.finish().max(1u64)
}
}


@@ -1,11 +0,0 @@
#![allow(clippy::integer_arithmetic)]
mod bucket;
pub mod bucket_api;
mod bucket_item;
pub mod bucket_map;
mod bucket_stats;
mod bucket_storage;
mod index_entry;
pub type MaxSearch = u8;
pub type RefCount = u64;


@@ -1,48 +0,0 @@
use {
rayon::prelude::*,
solana_bucket_map::bucket_map::{BucketMap, BucketMapConfig},
solana_measure::measure::Measure,
solana_sdk::pubkey::Pubkey,
std::path::PathBuf,
};
#[test]
#[ignore]
fn bucket_map_test_mt() {
let threads = 4096;
let items = 4096;
let tmpdir1 = std::env::temp_dir().join("bucket_map_test_mt");
let tmpdir2 = PathBuf::from("/mnt/data/0").join("bucket_map_test_mt");
let paths: Vec<PathBuf> = [tmpdir1, tmpdir2]
.iter()
.filter(|x| std::fs::create_dir_all(x).is_ok())
.cloned()
.collect();
assert!(!paths.is_empty());
let index = BucketMap::new(BucketMapConfig {
max_buckets: 1 << 12,
drives: Some(paths.clone()),
..BucketMapConfig::default()
});
(0..threads).into_iter().into_par_iter().for_each(|_| {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![0u64], 0)));
});
let mut timer = Measure::start("bucket_map_test_mt");
(0..threads).into_iter().into_par_iter().for_each(|_| {
for _ in 0..items {
let key = Pubkey::new_unique();
let ix: u64 = index.bucket_ix(&key) as u64;
index.update(&key, |_| Some((vec![ix], 0)));
assert_eq!(index.read_value(&key), Some((vec![ix], 0)));
}
});
timer.stop();
println!("time: {}ns per item", timer.as_ns() / (threads * items));
let mut total = 0;
for tmpdir in paths.iter() {
let folder_size = fs_extra::dir::get_size(tmpdir).unwrap();
total += folder_size;
std::fs::remove_dir_all(tmpdir).unwrap();
}
println!("overhead: {}bytes per item", total / (threads * items));
}


@@ -137,7 +137,7 @@ all_test_steps() {
^ci/test-coverage.sh \
^scripts/coverage.sh \
; then
command_step coverage ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_nightly_docker_image ci/test-coverage.sh" 40
command_step coverage ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_nightly_docker_image ci/test-coverage.sh" 30
wait_step
else
annotate --style info --context test-coverage \
@@ -226,19 +226,6 @@ EOF
annotate --style info \
"downstream-projects skipped as no relevant files were modified"
fi
# Wasm support
if affects \
^ci/test-wasm.sh \
^ci/test-stable.sh \
^sdk/ \
; then
command_step wasm ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-wasm.sh" 20
else
annotate --style info \
"wasm skipped as no relevant files were modified"
fi
# Benches...
if affects \
.rs$ \
@@ -256,7 +243,7 @@ EOF
command_step "local-cluster" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster.sh" \
50
45
}
pull_or_push_steps() {
@@ -275,7 +262,7 @@ pull_or_push_steps() {
all_test_steps
fi
# web3.js, explorer and docs changes run on Travis or Github actions...
# web3.js, explorer and docs changes run on Travis...
}


@@ -7,6 +7,11 @@ steps:
- "queue=release-build"
timeout_in_minutes: 60
name: "publish tarball"
- command: "ci/publish-bpf-sdk.sh"
agents:
- "queue=release-build"
timeout_in_minutes: 5
name: "publish bpf sdk"
- wait
- command: "sdk/docker-solana/build.sh"
agents:


@@ -23,7 +23,7 @@ echo --- "(FAILING) Backpropagating dependabot-triggered Cargo.lock updates"
name="dependabot-buildkite"
api_base="https://api.github.com/repos/solana-labs/solana/pulls"
pr_num=$(echo "$BUILDKITE_BRANCH" | grep -Eo '[0-9]+')
branch=$(curl -s "$api_base/$pr_num" | python3 -c 'import json,sys;print(json.load(sys.stdin)["head"]["ref"])')
branch=$(curl -s "$api_base/$pr_num" | python -c 'import json,sys;print json.load(sys.stdin)["head"]["ref"]')
git add :**/Cargo.lock
EMAIL="dependabot-buildkite@noreply.solana.com" \


@@ -46,11 +46,11 @@ cargo_audit_ignores=(
# https://github.com/paritytech/jsonrpc/issues/605
--ignore RUSTSEC-2021-0079
# chrono: Potential segfault in `localtime_r` invocations
# tar: Links in archive can create arbitrary directories
#
# Blocked due to no safe upgrade
# https://github.com/chronotope/chrono/issues/499
--ignore RUSTSEC-2020-0159
# Blocked on `tar` releasing safe upgrade
# https://github.com/alexcrichton/tar-rs/issues/238
--ignore RUSTSEC-2021-0080
)
scripts/cargo-for-all-lock-files.sh stable audit "${cargo_audit_ignores[@]}"


@@ -1,4 +1,4 @@
FROM solanalabs/rust:1.57.0
FROM solanalabs/rust:1.51.0
ARG date
RUN set -x \


@@ -1,6 +1,6 @@
# Note: when the rust version is changed also modify
# ci/rust-version.sh to pick up the new image tag
FROM rust:1.57.0
FROM rust:1.51.0
# Add Google Protocol Buffers for Libra's metrics library.
ENV PROTOC_VERSION 3.8.0
@@ -11,34 +11,28 @@ RUN set -x \
&& apt-get install apt-transport-https \
&& echo deb https://apt.buildkite.com/buildkite-agent stable main > /etc/apt/sources.list.d/buildkite-agent.list \
&& apt-key adv --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 32A37959C2FA5C3C99EFBC32A79206696452D198 \
&& curl -fsSL https://deb.nodesource.com/setup_current.x | bash - \
&& apt update \
&& apt install -y \
buildkite-agent \
clang \
clang-7 \
cmake \
lcov \
libudev-dev \
libclang-common-7-dev \
mscgen \
nodejs \
net-tools \
rsync \
sudo \
golang \
unzip \
\
&& apt remove -y libcurl4-openssl-dev \
&& rm -rf /var/lib/apt/lists/* \
&& node --version \
&& npm --version \
&& rustup component add rustfmt \
&& rustup component add clippy \
&& rustup target add wasm32-unknown-unknown \
&& cargo install cargo-audit \
&& cargo install svgbob_cli \
&& cargo install mdbook \
&& cargo install mdbook-linkcheck \
&& cargo install svgbob_cli \
&& cargo install wasm-pack \
&& rustc --version \
&& cargo --version \
&& curl -OL https://github.com/google/protobuf/releases/download/v$PROTOC_VERSION/$PROTOC_ZIP \

ci/publish-bpf-sdk.sh (new executable file)

@@ -0,0 +1,27 @@
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")/.."
eval "$(ci/channel-info.sh)"
if [[ -n "$CI_TAG" ]]; then
CHANNEL_OR_TAG=$CI_TAG
else
CHANNEL_OR_TAG=$CHANNEL
fi
(
set -x
sdk/bpf/scripts/package.sh
[[ -f bpf-sdk.tar.bz2 ]]
)
source ci/upload-ci-artifact.sh
echo --- AWS S3 Store
if [[ -z $CHANNEL_OR_TAG ]]; then
echo Skipped
else
upload-s3-artifact "/solana/bpf-sdk.tar.bz2" "s3://solana-sdk/$CHANNEL_OR_TAG/bpf-sdk.tar.bz2"
fi
exit 0


@@ -7,7 +7,7 @@ source multinode-demo/common.sh
rm -rf config/run/init-completed config/ledger config/snapshot-ledger
SOLANA_RUN_SH_VALIDATOR_ARGS="--snapshot-interval-slots 200" timeout 120 ./scripts/run.sh &
timeout 120 ./run.sh &
pid=$!
attempts=20
@@ -16,8 +16,6 @@ while [[ ! -f config/run/init-completed ]]; do
if ((--attempts == 0)); then
echo "Error: validator failed to boot"
exit 1
else
echo "Checking init"
fi
done
@@ -26,7 +24,6 @@ snapshot_slot=1
# wait a bit longer than snapshot_slot
while [[ $($solana_cli --url http://localhost:8899 slot --commitment processed) -le $((snapshot_slot + 1)) ]]; do
sleep 1
echo "Checking slot"
done
$solana_validator --ledger config/ledger exit --force || true


@@ -18,13 +18,13 @@
if [[ -n $RUST_STABLE_VERSION ]]; then
stable_version="$RUST_STABLE_VERSION"
else
stable_version=1.57.0
stable_version=1.51.0
fi
if [[ -n $RUST_NIGHTLY_VERSION ]]; then
nightly_version="$RUST_NIGHTLY_VERSION"
else
nightly_version=2021-12-03
nightly_version=2021-04-18
fi


@@ -76,7 +76,7 @@ RestartForceExitStatus=SIGPIPE
TimeoutStartSec=10
TimeoutStopSec=0
KillMode=process
LimitNOFILE=1000000
LimitNOFILE=700000
[Install]
WantedBy=multi-user.target


@@ -8,5 +8,5 @@ source "$HERE"/utils.sh
ensure_env || exit 1
# Allow more files to be opened by a user
echo "* - nofile 1000000" > /etc/security/limits.d/90-solana-nofiles.conf
echo "* - nofile 700000" > /etc/security/limits.d/90-solana-nofiles.conf

Some files were not shown because too many files have changed in this diff.