Compare commits

..

71 Commits

Author SHA1 Message Date
Jon Cinque
f2561ea547
v1.10-ci: Fix downstream build (#24404) 2022-04-16 12:45:01 +02:00
mergify[bot]
b98e133f2d
Add Ident case (#24390) (#24402)
(cherry picked from commit a0e3e3c193a7a7b0478cf7648b279e05dd839df8)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-15 22:48:36 -06:00
Trent Nelson
25274e8a33 rpc-pubsub: reduce metrics/log spam
(cherry picked from commit 50fba0184241e6bcc0c621c97bb10cca711d40b8)
2022-04-15 22:21:44 -06:00
mergify[bot]
3bf00e4af5
cli: sort option for validators by version (#24237)
(cherry picked from commit 91993d89b0e68f5400bec49e724e1a53bcb928f2)

# Conflicts:
#	Cargo.lock
#	cli-output/Cargo.toml
#	programs/bpf/Cargo.lock

Co-authored-by: Trent Nelson <trent@solana.com>
2022-04-15 09:19:12 +00:00
mergify[bot]
c289cd2a4b
Do not require default keypair to exist for bench-tps (#24356) (#24365)
(cherry picked from commit 5e8c12ebdf2969389b32f8e85845ef1af6b304bf)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-15 03:09:34 +00:00
mergify[bot]
84ac4ff57f
add metrics around rewards (#24160) (#24168)
(cherry picked from commit 48d1af01c89f490fa635c7790ac4c32b4657d7ce)

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2022-04-14 23:04:56 +00:00
mergify[bot]
58ef6b31f9
Quic client stats (#24195) (#24305)
* Add metrics to connection-cache to measure cache hits and misses

* Add congestion stats

* Add more client stats

* Review comments

Co-authored-by: Ryan Leung <ryan.leung@solana.com>

Co-authored-by: sakridge <sakridge@gmail.com>
Co-authored-by: Ryan Leung <ryan.leung@solana.com>
2022-04-14 16:42:20 -04:00
mergify[bot]
304cd65ecb
Support quic in bench-tps (#24295) (#24317)
* Update comment

* Use connection_cache in tpu_client

* Add --tpu-use-quic to bench-tps

* Use connection_cache async send

(cherry picked from commit 26899359d196e33992f34d83138872c3a9154ab9)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-13 22:56:29 -06:00
mergify[bot]
3bee921088
Add a stringified credential option for LedgerStorage (#24314) (#24324)
* add a stringified credential option for LedgerStorage

* fix clippy::useless-format warning

* change CredentialOption to enum CredentialType

* rename credential_option to credential_type

* restore LedgerStorage new fn signature

* fmt

Co-authored-by: Tyera Eulberg <tyera@solana.com>

Co-authored-by: Rachael Pai <komimi.p@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-13 22:17:56 -06:00
mergify[bot]
35d4f390ad
Add RpcClient support to bench-tps (#24297) (#24320)
* Impl BenchTpsClient for RpcClient

* Support RpcClient in bench-tps

(cherry picked from commit 96e3555e93b826a77724454b8c73e83e61516743)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-13 19:33:11 -06:00
mergify[bot]
785481ace4
Async send for send transaction service (backport #24265) (#24323)
* Async send for send transaction service (#24265)

* async send

(cherry picked from commit 474080608a17b5c2914cd01d32b56897d9c457ac)

# Conflicts:
#	client/Cargo.toml

* Fix conflicts

Co-authored-by: anatoly yakovenko <anatoly@solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-13 19:29:03 -06:00
mergify[bot]
0e6ba29859
Use LRU in connection-cache (#24109) (#24322)
Switch to using LRU for connection-cache

Co-authored-by: ryleung-solana <91908731+ryleung-solana@users.noreply.github.com>
2022-04-13 21:21:19 -04:00
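The commit above switches the connection cache's eviction policy to LRU. As a rough illustration of that policy — a minimal std-only sketch, not the actual `connection-cache` code, which uses the `lru` crate:

```rust
use std::collections::{HashMap, VecDeque};

/// Minimal LRU map: inserting at capacity evicts the least-recently-used
/// entry; a successful `get` refreshes an entry's recency.
struct LruCache<K: std::hash::Hash + Eq + Clone, V> {
    map: HashMap<K, V>,
    order: VecDeque<K>, // front = least recently used
    capacity: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> LruCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { map: HashMap::new(), order: VecDeque::new(), capacity }
    }

    /// Move `key` to the most-recently-used position.
    fn touch(&mut self, key: &K) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            self.order.remove(pos);
        }
        self.order.push_back(key.clone());
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        if self.map.contains_key(key) {
            self.touch(key);
            self.map.get(key)
        } else {
            None
        }
    }

    fn put(&mut self, key: K, value: V) {
        if !self.map.contains_key(&key) && self.map.len() == self.capacity {
            if let Some(lru) = self.order.pop_front() {
                self.map.remove(&lru); // evict coldest entry
            }
        }
        self.touch(&key);
        self.map.insert(key, value);
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.put("validator-a", 1);
    cache.put("validator-b", 2);
    cache.get(&"validator-a"); // refresh a; b is now coldest
    cache.put("validator-c", 3); // evicts b
    assert!(cache.get(&"validator-b").is_none());
    assert!(cache.get(&"validator-a").is_some());
    println!("evicted least-recently-used entry");
}
```

For a TPU connection cache this bounds memory while keeping connections to the leaders contacted most recently, which are the most likely to be reused.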
mergify[bot]
ec1d06240c
Add ability to interact with a Bigtable with a custom instance id (backport #23779) (#24325)
* Refactor validator bigtable config

(cherry picked from commit 63ee00e6472346397e3cc653b447de5fe9da3cf2)

* bigtable: add a config ctor for `LedgerStorage`

(cherry picked from commit f5131954686e9e038e88aaa49a33d8f14893458f)

* bigtable: allow custom instance names

(cherry picked from commit 9b32b72990ca5f5af957e7456b71516545dfe49c)

# Conflicts:
#	validator/Cargo.toml

* Add ability to query bigtable via solana-test-validator, with hidden params

(cherry picked from commit 9c60991cd3e32bad9d1d9c448469e9c7be76c995)

* Fix conflicts

Co-authored-by: Tyera Eulberg <tyera@solana.com>
Co-authored-by: Trent Nelson <trent@solana.com>
2022-04-13 19:00:47 -06:00
mergify[bot]
32dea4427b
Bump bpf-tools to v1.25 (#24290)
- Tweak linker script
  Ensure that all read only sections end up in one segment, and
  everything else in other segments. Discard .eh_frame, .hash and
  .gnu.hash since they are unused.
- Don't create invalid string slices in stdout/stderr on Solana
- Report exceeded stack size as a warning if dynamic frames are off
- Native support for signed division in SBF
  Adds BPF_SDIV, which is enabled only for the SBF subtarget.
- Introduce dynamic stack frames and the SBFv2 flag
  Dynamic stack frames are currently opt-in and enabled by setting
  cpu=sbfv2. When sbfv2 is used, ELF files are flagged with
  e_flags=EF_SBF_V2 so the runtime can detect it and react
  accordingly.

(cherry picked from commit 6b611e1c52f6f7fa0ea0b18820058a4ab671aafe)

Co-authored-by: Dmitri Makarov <dmakarov@alumni.stanford.edu>
2022-04-13 20:26:13 +00:00
mergify[bot]
9aa95870fa
Make tpu_use_quic a flag only without argument (#24018) (#24027)
(cherry picked from commit 98525ddea99c6ebdcfe0dcbda9de49c93035050f)

Co-authored-by: Lijun Wang <83639177+lijunwangs@users.noreply.github.com>
2022-04-13 13:00:49 -06:00
Dmitri Makarov
d48e9b3a7b Bump sbf-tools version to v1.24
(cherry picked from commit 689064a4f4d0a7122c9a24bef061712548663600)
2022-04-13 09:15:12 -07:00
Dmitri Makarov
95a279f310 Double the chunk size for sending the program binary data in tx
(cherry picked from commit 03ed334ebbf31f343d5be595b922ae97dc7c211f)

# Conflicts:
#	programs/bpf/tests/programs.rs
2022-04-13 09:15:12 -07:00
mergify[bot]
1700820583
Add TpuClient support to bench-tps (backport #24227) (#24284)
* Add TpuClient support to bench-tps (#24227)

* Add fallible send methods, and rpc_client helper

* Add helper to return RpcClient url

* Implement BenchTpsClient for TpuClient

* Add cli rpc and identity handling

* Handle different kinds of clients in main, use TpuClient

* Add tpu_client integration test

(cherry picked from commit 8487030ea64abf39df388f0b395e899e6fbf6a82)

# Conflicts:
#	bench-tps/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-12 13:12:33 -06:00
mergify[bot]
0745738eb1
Add resolver = 2 to fix Windows build error on Travis CI (#24196) (#24259)
(cherry picked from commit a5e740431ad8f86ac3f4cfa8d6149601f2fb406b)

Co-authored-by: Will Hickey <will.hickey@solana.com>
2022-04-12 14:05:18 -05:00
mergify[bot]
d02bf12976
Add BenchTpsClient trait (backport #24208) (#24256)
* Add BenchTpsClient trait (#24208)

* Add BenchTpsClient

* Impl BenchTpsClient for used clients

* Use BenchTpsClient in do_bench

* Update integration test to use faucet via rpc

* Support keypairs from file that are not prefunded

* Remove old perf-utils

(cherry picked from commit 3871c85fd73c3dd3e34c63b3ec81cedb63eb4e4a)

# Conflicts:
#	bench-tps/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-12 07:45:46 +00:00
mergify[bot]
587d45769d
Thin client quic (#23973) (#24266)
Change thin-client to use connection-cache

(cherry picked from commit 8b72200afb3f7b6eb8db2d5735cdb48246ab1b37)

Co-authored-by: ryleung-solana <91908731+ryleung-solana@users.noreply.github.com>
2022-04-11 23:54:39 -06:00
mergify[bot]
f495024591
Move helpers to solana-cli-config (#24246) (#24250)
* Add solana-cli-utils crate

* Use cli-utils in cli

* Move println fn to cli-output

* Use cli-config instead

(cherry picked from commit 8a73badf3d08cd4ce26608559d2f163920f9a6cb)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-11 22:17:01 -06:00
mergify[bot]
28a681a7bf
Remove duplicate increment (#24219) (#24226)
(cherry picked from commit ff3b6d2b8b971f42cf8bec6a54c8d18c81cab9ff)

Co-authored-by: carllin <carl@solana.com>
2022-04-11 17:19:48 -04:00
Tyera Eulberg
15acdcc19a
Bump version to v1.10.9 (#24253) 2022-04-11 13:30:11 -06:00
mergify[bot]
623ac6567b
AcctIdx: fix infinite loop (#23806) (#23816)
(cherry picked from commit 965ab9186dfb5eaec947fd3c27dec36e237ee254)

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2022-04-11 09:54:11 -05:00
Trent Nelson
a628034eb5 Bump version to v1.10.8 2022-04-09 00:06:32 -06:00
Christian Kamm
8bce2dd446 Address review comments
(cherry picked from commit a058f348a22a44a3ce8ffbb8a7dde51f8d7d2655)
2022-04-08 19:22:35 -05:00
Christian Kamm
60020632c1 Unittest for cost tracker after process_and_record_transactions
(cherry picked from commit 2ed29771f223e52931450e07e1c2610b689debcc)
2022-04-08 19:22:35 -05:00
Christian Kamm
864253a85b Adjustments to cost_tracker updates
- don't store pending tx signatures and costs in CostTracker
- apply tx costs to global state immediately again
- go from commit_or_cancel to update_or_remove, where the cost tracker
  is either updated with the true costs for a successful tx, or the costs
  of a retryable tx are removed
- move the function into qos_service and hold the cost tracker lock for
  the whole loop

(cherry picked from commit 924b8ea1eb4a9a59de64414d24e85fe0345685fe)
2022-04-08 19:22:35 -05:00
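The update-or-remove flow described in the commit above can be sketched as follows. This is a hypothetical, heavily simplified illustration, not the actual `qos_service`/`cost_tracker` code: estimated costs are reserved up front, then corrected with the true cost for a successful transaction or backed out entirely for a retryable one.

```rust
/// Hypothetical, simplified cost tracker illustrating the
/// update-or-remove flow from the commit above.
#[derive(Default)]
struct CostTracker {
    block_cost: u64,
}

impl CostTracker {
    /// Reserve an estimated cost when a transaction is picked up.
    fn try_add(&mut self, estimated: u64) {
        self.block_cost += estimated;
    }

    /// Successful tx: replace the estimate with the actual cost.
    /// Retryable tx (actual = None): remove the estimate entirely.
    fn update_or_remove(&mut self, estimated: u64, actual: Option<u64>) {
        match actual {
            Some(actual) => self.block_cost = self.block_cost - estimated + actual,
            None => self.block_cost -= estimated,
        }
    }
}

fn main() {
    let mut tracker = CostTracker::default();
    tracker.try_add(100); // estimated cost of tx A
    tracker.try_add(50);  // estimated cost of tx B
    tracker.update_or_remove(100, Some(80)); // A executed, true cost 80
    tracker.update_or_remove(50, None);      // B retryable, estimate backed out
    assert_eq!(tracker.block_cost, 80);
    println!("block cost: {}", tracker.block_cost);
}
```

Holding the tracker lock across the whole loop (as the commit notes) keeps the reserve/correct pair atomic with respect to other banking threads.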
Tao Zhu
637ac7933b - Only commit successfully executed transactions' cost to cost_tracker;
- In-flight transactions remain pending in cost_tracker until they are
  committed or cancelled;

(cherry picked from commit 9e07272af805ac562f93276c53b165a67d9ca1c1)
2022-04-08 19:22:35 -05:00
Tyera Eulberg
5a29e95f71
v1.10: Bump tonic, tonic-build, prost, and etcd-client (#24157)
* Bump tonic, prost, and etcd-client

* Restore doc ignores
2022-04-08 10:21:53 -06:00
mergify[bot]
ba72f347e4
Move duplicate-block proposal (#24167) (#24181)
(cherry picked from commit fbe5e51a161caa7028f1cefe5701a0b4816815ef)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-07 23:57:44 +00:00
mergify[bot]
720ad85632
providing clarity on airdrop amount constraints (#24115) (#24178)
* providing clarity on airdrop amount constraints

This change is in response to a review of a PR in the `solana-program-library` found here: https://github.com/solana-labs/solana-program-library/pull/3062

* replaced static limits with info on how to find them

* removed trailing whitespace

(cherry picked from commit 781094edb2a8a9a905be128353eba5437ff2368d)

Co-authored-by: T.J. Kyner <78994885+tjkyner@users.noreply.github.com>
2022-04-07 23:02:18 +00:00
mergify[bot]
e5623d288e
removes legacy weighted_shuffle and weighted_best methods (#24125) (#24139)
Older weighted_shuffle is based on a heuristic which results in biased
samples as shown in:
https://github.com/solana-labs/solana/pull/18343
and can be replaced with WeightedShuffle.

Also, as described in:
https://github.com/solana-labs/solana/pull/13919
weighted_best can be replaced with rand::distributions::WeightedIndex,
or WeightedShuffle::first.

(cherry picked from commit db23295e1ceae090bd0d3c6f9bcf130f726fcc91)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-07 00:52:40 +00:00
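The unbiased replacement mentioned above relies on standard weighted sampling (what `rand::distributions::WeightedIndex` implements). As a minimal std-only illustration of the underlying cumulative-sum technique — not the gossip code itself, and with the random draw supplied by the caller for determinism:

```rust
/// Pick an index with probability proportional to its weight, given a
/// uniformly random value `r` in [0, total_weight). This is the cumulative
/// sum technique behind rand's WeightedIndex.
fn weighted_pick(weights: &[u64], r: u64) -> usize {
    let mut cumulative = 0u64;
    for (i, &w) in weights.iter().enumerate() {
        cumulative += w;
        if r < cumulative {
            return i;
        }
    }
    weights.len() - 1 // only reached if r >= total weight
}

fn main() {
    let stakes = [10u64, 30, 60]; // total = 100
    assert_eq!(weighted_pick(&stakes, 5), 0);  // r in [0, 10)
    assert_eq!(weighted_pick(&stakes, 25), 1); // r in [10, 40)
    assert_eq!(weighted_pick(&stakes, 99), 2); // r in [40, 100)
    println!("weighted pick works");
}
```

Unlike the retired heuristic, each index is selected with probability exactly weight/total, which is why the old biased `weighted_best` could be dropped.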
Tyera Eulberg
ad530d73ce
Bump lru crate (#24151) 2022-04-06 16:45:27 -06:00
mergify[bot]
36122a27af
reduces gossip crds stats (#24132) (#24144)
(cherry picked from commit cd09390367d2ac66e2269a39cd40c4b3097c6732)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-06 17:24:16 +00:00
mergify[bot]
a6ded6a5ed
removes turbine legacy code and already activated features (backport #24080) (#24117) 2022-04-06 01:12:18 +00:00
mergify[bot]
c5541efdc2
Set drop callback on first root bank (#23999) (#24129)
(cherry picked from commit 4ea59d8cb402b9f45cf03498579d2c0c366ad49b)

Co-authored-by: carllin <carl@solana.com>
2022-04-05 20:32:17 +00:00
mergify[bot]
3f3e1b30d6
removes outdated and flaky test_skip_repair from retransmit-stage (#24121) (#24126)
test_skip_repair in retransmit-stage is no longer relevant because,
following https://github.com/solana-labs/solana/pull/19233,
repair packets are filtered out earlier in window-service, so the
retransmit stage does not know whether a shred was repaired.
Also, following the turbine peer shuffle changes in
https://github.com/solana-labs/solana/pull/24080,
the test has become flaky since it does not take into account how peers
are shuffled for each shred.

(cherry picked from commit 228257149307fc295678ebe42813fdefe3368102)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-05 19:08:18 +00:00
mergify[bot]
5365b939bf
Implement get_account_with_config (#23997). (#24095) (#24113)
(cherry picked from commit 41f2fd7fca08f2a5f238a8fc96e51bd4148266c1)

Co-authored-by: hana <81144685+2501babe@users.noreply.github.com>
2022-04-05 00:47:44 +00:00
Will Hickey
1b6de0f08d
Bump version to v1.10.7 (#24105) 2022-04-04 11:20:53 -05:00
mergify[bot]
8d5c7b7d89
hides implementation details of vote-accounts from public interface (#24087) (#24102)
(cherry picked from commit ef3e3dce7ab613b903cc92494eb87737a533ada6)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-04 15:08:21 +00:00
mergify[bot]
ca1a282a60
demotes WeightedShuffle failures to error metrics (#24079) (#24088)
Since call-sites are calling unwrap anyways, panicking seems too punitive
for our use cases.

(cherry picked from commit 7cb3b6cbe225181ee8aab2f2a5d5e423a8990129)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-03 18:10:00 +00:00
mergify[bot]
3f661f25fb
improves Stakes::activate_epoch performance (#24068) (#24081)
Tested with mainnet stakes obtained from the ledger at 5 recent epoch
boundaries, this code is ~30% faster than current master.

Current code:
  epoch: 289, elapsed: 82901us
  epoch: 290, elapsed: 80525us
  epoch: 291, elapsed: 79122us
  epoch: 292, elapsed: 79961us
  epoch: 293, elapsed: 78965us

This commit:
  epoch: 289, elapsed: 61710us
  epoch: 290, elapsed: 55721us
  epoch: 291, elapsed: 55886us
  epoch: 292, elapsed: 55399us
  epoch: 293, elapsed: 56803us

(cherry picked from commit fa7eb7f30cf6167fb4449f8e35f1e716d3a975af)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-03 13:44:19 +00:00
mergify[bot]
b157a9111f
Note this is a modified backport that does not SAVE the new fields, but does load them. (#24074)
Original:
Start saving/loading prior_roots(_with_hash) to snapshot (#23844)

    * Start saving/loading prior_roots(_with_hash) to snapshot

    * Update runtime/src/accounts_index.rs

    Co-authored-by: Michael Vines <mvines@gmail.com>

    * Update runtime/src/accounts_index.rs

    Co-authored-by: Michael Vines <mvines@gmail.com>

    * update comment

    Co-authored-by: Michael Vines <mvines@gmail.com>
    (cherry picked from commit 396b49a7c10edec5d82b8c15648b4fa42370653c)

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2022-04-02 17:22:33 +00:00
mergify[bot]
f2f20af768
Fix typo in documentation (#24076) (#24077)
(cherry picked from commit 4968e7d38c7481bf0d8840aa3af6a295a6438cc7)

Co-authored-by: blake <572337+bartenbach@users.noreply.github.com>
2022-04-02 13:35:39 +00:00
mergify[bot]
a8855386c1
zk-token-sdk: handle edge cases for transfer with fee (#23804) (#23818)
* zk-token-sdk: handle edge cases for transfer with fee

* zk-token-sdk: clippy

* zk-token-sdk: clippy

* zk-token-sdk: cargo fmt

(cherry picked from commit 10eeafd3d6eca190d0a1e9be035637cae167fb12)

Co-authored-by: samkim-crypto <skim13@cs.stanford.edu>
2022-04-01 20:02:10 -04:00
mergify[bot]
6048b71640
Revert voting service to use UDP instead of QUIC (backport #24032) (#24052)
* Revert voting service to use UDP instead of QUIC (#24032)

(cherry picked from commit df4d92f9cfc4d3bb3a4ad6915d888af1431983fe)

# Conflicts:
#	core/src/voting_service.rs

* resolve merge conflicts

Co-authored-by: Pankaj Garg <pankaj@solana.com>
2022-04-01 18:52:27 +00:00
mergify[bot]
4a4a1db836
expands lifetime of SlotStats (#23872) (#24002)
Current slot stats are removed when the slot is full or every 30 seconds
if the slot is before root:
https://github.com/solana-labs/solana/blob/493a8e234/ledger/src/blockstore.rs#L2017-L2027

In order to track if the slot is ultimately marked as dead or rooted and
emit more metrics, this commit expands lifetime of SlotStats while
bounding total size of cache using an LRU eviction policy.

(cherry picked from commit 1f9c89c1e8506346f7583a20c9203ab7360f1fd4)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-01 14:50:12 +00:00
mergify[bot]
c7889f8def
uses first_coding_index for erasure meta obtained from coding shreds (#23974) (#24001)
Now that nodes correctly populate position field in coding shreds, and
first_coding_index in erasure meta, the old code to maintain backward
compatibility can be removed.
The commit is working towards changing erasure coding schema to 32:64.

(cherry picked from commit cda3d66b21367bd8fda16e6265fa61e7fb4ba6c9)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-04-01 14:49:39 +00:00
Michael Vines
832f524687 Update Version CrdsData on node identity changes
(cherry picked from commit 7ef18f220a3ed5193a535f8b60ee96b7efb7c27e)
2022-03-28 19:57:48 -07:00
Will Hickey
a639282c0f
Bump version to 1.10.6 (#23969) 2022-03-28 10:56:01 -05:00
mergify[bot]
5eb085fcaf
Implement forwarding via TpuConnection (#23817) (#23936)
(cherry picked from commit 6b85c2104cb95881234546131a1086594844c0b1)

Co-authored-by: ryleung-solana <91908731+ryleung-solana@users.noreply.github.com>
2022-03-28 16:38:44 +02:00
mergify[bot]
c66d086db1
fix: thread enforce_ulimit_nofile config down when opening blockstore (#23925) (#23958)
(cherry picked from commit f44c8f296fae884239ef0993dca33550a0ccd302)

Co-authored-by: Steven Luscher <steveluscher@users.noreply.github.com>
2022-03-26 20:09:49 +00:00
mergify[bot]
0c740ebba6
Specify if archive size datapoint is for full or incremental snapshots (#23941) (#23957)
(cherry picked from commit 31b707b6258a5266a40a986e05244e7bef5c170a)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-03-26 19:25:39 +00:00
Will Hickey
fd49ed1959
Bump version to 1.10.5 (#23955) 2022-03-26 11:34:12 -05:00
Michael Vines
df9f4193af improve arg documentation 2022-03-25 21:37:35 -07:00
Trent Nelson
be99d1d55d cli: allow skipping fee-checks when writing program buffers (hidden) 2022-03-25 18:17:15 -06:00
mergify[bot]
22af384700
Add get_confirmed_blocks_with_data method (backport #23618) (#23940)
* add get_confirmed_blocks_with_data and get_protobuf_or_bincode_cells

(cherry picked from commit f3219fb695054c610a9d3fad5fa1a0f5bf55b6b6)

* appease clippy

(cherry picked from commit 5533e9393c20820312607fe1d471d5ca19573b29)

* use &[T] instead of Vec<T> where appropriate

clippy

(cherry picked from commit fbcf6a08022222b0ac5fde4f7422b3138353eabe)

* modify get_protobuf_or_bincode_cells to accept and return an iterator

(cherry picked from commit f717fda9a33aa47ccc205145b1785ec402a0d19e)

* make get_protobuf_or_bincode_cells accept IntoIter on row_keys, make get_confirmed_blocks_with_data return an Iterator

(cherry picked from commit d8be0d943010ca27e65ed547dfe0dd319c60396e)

Co-authored-by: Edgar Xi <edgarxi97@gmail.com>
2022-03-25 22:45:01 +00:00
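One of the cherry-picks above applies the common clippy guidance of accepting `&[T]` instead of `Vec<T>` in parameters. A small illustrative sketch — the function name and logic are hypothetical, not the bigtable API — showing why the slice form is more flexible for callers:

```rust
// Accepting a slice lets callers pass a Vec, an array, or a sub-range
// without moving or cloning the underlying storage.
fn first_missing_slot(slots: &[u64]) -> Option<u64> {
    // Return the first gap in an ascending slot list, if any.
    slots.windows(2).find(|w| w[1] != w[0] + 1).map(|w| w[0] + 1)
}

fn main() {
    let confirmed: Vec<u64> = vec![10, 11, 12, 14, 15];
    assert_eq!(first_missing_slot(&confirmed), Some(13)); // Vec coerces to &[u64]
    assert_eq!(first_missing_slot(&[1, 2, 3]), None);     // arrays work too
    assert_eq!(first_missing_slot(&confirmed[..2]), None); // and sub-slices
    println!("slice-based API accepts multiple callers");
}
```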
mergify[bot]
30059510cc
Add solana-faucet to the list of dependencies referenced by downstream projects (#23935) (#23938)
(cherry picked from commit c6dda3b32401f5b96e852e815f5d312ff5624bf6)

Co-authored-by: Will Hickey <will.hickey@solana.com>
2022-03-25 13:36:29 -05:00
mergify[bot]
c0d3cd145e
Optimize TpuConnection and its implementations and refactor connection-cache to not use dyn in order to enable those changes (#23877) (#23909)
Co-authored-by: ryleung-solana <91908731+ryleung-solana@users.noreply.github.com>
2022-03-25 19:09:26 +01:00
mergify[bot]
af79a86a72
ci: don't allow mergify to add automerge label to merged PRs (#23931)
(cherry picked from commit e34c52934c6ef5aa0cf2c21ee77057daf77b1e64)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-03-25 16:20:09 +00:00
Michael Vines
86acc8c59b vote-authorize-voter now accepts either the vote or withdraw authority
(cherry picked from commit c8c3c4359f5bbe698398d4eea36a24ba986aeee5)
2022-03-25 08:37:44 -07:00
mergify[bot]
8222f3a675
Update TpuConnection interface to be compatible with versioned txs (#23760) (#23913)
* Update TpuConnection interface to be compatible with versioned txs

* Add convenience method for sending txs

* use parallel iterator to serialize transactions

(cherry picked from commit 016d3c450a47386702a89935c434b82a7dbc7a94)

Co-authored-by: Justin Starry <justin@solana.com>
2022-03-24 22:09:34 +00:00
mergify[bot]
d135e3b839
Use QUIC client in voting service (#23713) (#23813)
* Use QUIC client in voting service

* guard quic-client usage with a flag

* add measure to time the quic client

* move time measure outside if block

* remove quic vs UDP flag from voting service

(cherry picked from commit 5d03b188c8c362dd33840ae91feed951905ebfda)

Co-authored-by: Pankaj Garg <pankaj@solana.com>
2022-03-24 22:03:33 +00:00
Lijun Wang
821261a2d1
Use connection cache in send transaction (#23712) (#23900)
Use connection cache in send transaction (#23712)
2022-03-24 13:24:57 -07:00
mergify[bot]
f0c5962817
disable 'check_hash' on accounts hash calc (#23873) (#23902)
(cherry picked from commit 5a892af2fe309cc775d453fac5f90eee96508d0f)

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2022-03-24 18:33:50 +00:00
mergify[bot]
1b930a1485
Make find_program_address client example runnable (#23492) (#23901)
(cherry picked from commit 6428602cd98d4d73397b92c044449df5b4d34624)

Co-authored-by: Brian Anderson <andersrb@gmail.com>
2022-03-24 15:00:32 +00:00
mergify[bot]
2ed9655958
Set accounts_data_len on feature activation (#23730) (#23810)
(cherry picked from commit cb061263887fb20017d7b813ee9ef178a2e304f4)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-03-21 21:50:31 +00:00
mergify[bot]
c63782f833
Made connection cache configurable. (#23783) (#23812)
Added the command-line argument --tpu-use-quic.
Changed the connection cache to return different connections based on the config.

(cherry picked from commit ae76fe2bd74ab0b7ef579486f8034380ccdc7df2)

Co-authored-by: Lijun Wang <83639177+lijunwangs@users.noreply.github.com>
2022-03-21 21:42:53 +00:00
mergify[bot]
258f752e5d
Add ability to get the latest incremental snapshot via RPC (#23788) (#23809)
(cherry picked from commit 739e43ba58dbc54ab4e651d7adc571d953eb45fc)

Co-authored-by: DimAn <diman@diman.io>
2022-03-21 20:19:30 +00:00
419 changed files with 20949 additions and 38656 deletions

.github/ISSUE_TEMPLATE.md (vendored, new file, +6)

@@ -0,0 +1,6 @@
+#### Problem
+#### Proposed Solution

@@ -1,12 +0,0 @@
----
-name: General Issue
-about: Create a report describing a problem and a proposed solution
-title: ''
-assignees: ''
----
-#### Problem
-#### Proposed Solution

@@ -1,70 +0,0 @@
-name: Feature Gate Tracker
-description: Track the development and status of an on-chain feature
-title: "Feature Gate: "
-labels: ["feature-gate"]
-body:
-  - type: markdown
-    attributes:
-      value: >
-        Steps to add a new feature are outlined below. Note that these steps only cover
-        the process of getting a feature into the core Solana code.
-        - For features that are unambiguously good (ie bug fixes), these steps are sufficient.
-        - For features that should go up for community vote (ie fee structure changes), more
-          information on the additional steps to follow can be found at:
-          <https://spl.solana.com/feature-proposal#feature-proposal-life-cycle>
-        1. Generate a new keypair with `solana-keygen new --outfile feature.json --no-passphrase`
-          - Keypairs should be held by core contributors only. If you're a non-core contirbutor going
-            through these steps, the PR process will facilitate a keypair holder being picked. That
-            person will generate the keypair, provide pubkey for PR, and ultimately enable the feature.
-        2. Add a public module for the feature, specifying keypair pubkey as the id with
-          `solana_sdk::declare_id!()` within the module. Additionally, add an entry to `FEATURE_NAMES` map.
-        3. Add desired logic to check for and switch on feature availability.
-  - type: textarea
-    id: description
-    attributes:
-      label: Description
-      placeholder: Describe why the new feature gate is needed and any necessary conditions for its activation
-    validations:
-      required: true
-  - type: input
-    id: id
-    attributes:
-      label: Feature ID
-      description: The public key of the feature account
-    validations:
-      required: true
-  - type: dropdown
-    id: activation-method
-    attributes:
-      label: Activation Method
-      options:
-        - Single Core Contributor
-        - Staked Validator Vote
-    validations:
-      required: true
-  - type: input
-    id: testnet
-    attributes:
-      label: Testnet Activation Epoch
-      placeholder: Edit this response when feature is activated on this cluster
-    validations:
-      required: false
-  - type: input
-    id: devnet
-    attributes:
-      label: Devnet Activation Epoch
-      placeholder: Edit this response when feature is activated on this cluster
-    validations:
-      required: false
-  - type: input
-    id: mainnet-beta
-    attributes:
-      label: Mainnet-Beta Activation Epoch
-      placeholder: Edit this response when feature is activated on this cluster
-    validations:
-      required: false

@@ -1,9 +1,9 @@
 #### Problem
 #### Summary of Changes
 Fixes #
-<!-- OPTIONAL: Feature Gate Issue: # -->
-<!-- Don't forget to add the "feature-gate" label -->

@@ -1,37 +0,0 @@
-name: 'Autolock RitBot for for PR'
-on:
-  schedule:
-    - cron: '0 0 * * *'
-  workflow_dispatch:
-permissions:
-  issues: write
-  pull-requests: write
-concurrency:
-  group: lock
-jobs:
-  action:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: dessant/lock-threads@v3
-        with:
-          github-token: ${{ github.token }}
-          pr-inactive-days: '14'
-          exclude-pr-created-before: ''
-          exclude-pr-created-after: ''
-          exclude-pr-created-between: ''
-          exclude-pr-closed-before: ''
-          exclude-pr-closed-after: ''
-          exclude-pr-closed-between: ''
-          include-any-pr-labels: 'automerge'
-          include-all-pr-labels: ''
-          exclude-any-pr-labels: ''
-          add-pr-labels: 'locked PR'
-          remove-pr-labels: ''
-          pr-comment: 'This PR has been automatically locked since there has not been any activity in past 14 days after it was merged.'
-          pr-lock-reason: 'resolved'
-          log-output: true

@@ -1,38 +0,0 @@
-name: 'Autolock NaviBot for closed issue'
-on:
-  schedule:
-    - cron: '0 0 * * *'
-  workflow_dispatch:
-permissions:
-  issues: write
-  pull-requests: write
-concurrency:
-  group: lock
-jobs:
-  action:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: dessant/lock-threads@v3
-        with:
-          github-token: ${{ github.token }}
-          issue-inactive-days: '7'
-          exclude-issue-created-before: ''
-          exclude-issue-created-after: ''
-          exclude-issue-created-between: ''
-          exclude-issue-closed-before: ''
-          exclude-issue-closed-after: ''
-          exclude-issue-closed-between: ''
-          include-any-issue-labels: ''
-          include-all-issue-labels: ''
-          exclude-any-issue-labels: ''
-          add-issue-labels: 'locked issue'
-          remove-issue-labels: ''
-          issue-comment: 'This issue has been automatically locked since there has not been any activity in past 7 days after it was closed. Please open a new issue for related bugs.'
-          issue-lock-reason: 'resolved'
-          process-only: 'issues'
-          log-output: true

@@ -24,7 +24,6 @@ pull_request_rules:
       - "#approved-reviews-by=0"
       - "#commented-reviews-by=0"
       - "#changes-requested-reviews-by=0"
-      - "#review-requested=0"
     actions:
       request_reviews:
         teams:
@@ -35,7 +34,6 @@ pull_request_rules:
       - status-success=buildkite/solana
      - status-success=ci-gate
      - label=automerge
-      - label!=no-automerge
      - author≠@dont-squash-my-commits
      - or:
          # only require travis success if docs files changed
@@ -62,7 +60,6 @@ pull_request_rules:
      - status-success=Travis CI - Pull Request
      - status-success=ci-gate
      - label=automerge
-      - label!=no-automerge
      - author=@dont-squash-my-commits
      - or:
          # only require explorer checks if explorer files changed
@@ -97,51 +94,26 @@ pull_request_rules:
      - head~=^mergify/bp/
      - "#status-failure=0"
      - "-merged"
-      - label!=no-automerge
    actions:
      label:
        add:
          - automerge
-  - name: v1.9 feature-gate backport
+  - name: v1.8 backport
    conditions:
-      - label=v1.9
-      - label=feature-gate
+      - label=v1.8
    actions:
      backport:
        ignore_conflicts: true
-        labels:
-          - feature-gate
        branches:
-          - v1.9
+          - v1.8
-  - name: v1.9 non-feature-gate backport
+  - name: v1.9 backport
    conditions:
      - label=v1.9
-      - label!=feature-gate
    actions:
      backport:
        ignore_conflicts: true
        branches:
          - v1.9
-  - name: v1.10 feature-gate backport
-    conditions:
-      - label=v1.10
-      - label=feature-gate
-    actions:
-      backport:
-        ignore_conflicts: true
-        labels:
-          - feature-gate
-        branches:
-          - v1.10
-  - name: v1.10 non-feature-gate backport
-    conditions:
-      - label=v1.10
-      - label!=feature-gate
-    actions:
-      backport:
-        ignore_conflicts: true
-        branches:
-          - v1.10
 commands_restrictions:
   # The author of copied PRs is the Mergify user.

@@ -146,31 +146,6 @@ the subject lines of the git commits contained in the PR. It's especially
 generous (and not expected) to rebase or reword commits such that each change
 matches the logical flow in your PR description.
-### The PR / Issue Labels
-Labels make it easier to manage and track PRs / issues. Below some common labels
-that we use in Solana. For the complete list of labels, please refer to the
-[label page](https://github.com/solana-labs/solana/issues/labels):
-* "feature-gate": when you add a new feature gate or modify the behavior of
-  an existing feature gate, please add the "feature-gate" label to your PR.
-  New feature gates should also always have a corresponding tracking issue
-  (go to "New Issue" -> "Feature Gate Tracker [Get Started](https://github.com/solana-labs/solana/issues/new?assignees=&labels=feature-gate&template=1-feature-gate.yml&title=Feature+Gate%3A+)")
-  and should be updated each time the feature is activated on a cluster.
-* "automerge": When a PR is labelled with "automerge", the PR will be
-  automically merged once CI passes. In general, this label should only
-  be used for small hot-fix (fewer than 100 lines) or automatic generated
-  PRs. If you're uncertain, it's usually the case that the PR is not
-  qualified as "automerge".
-* "good first issue": If you happen to find an issue that is non-urgent and
-  self-contained with moderate scope, you might want to consider attaching
-  "good first issue" to it as it might be a good practice for newcomers.
-* "rust": this pull request updates Rust code.
-* "javascript": this pull request updates Javascript code.
 ### When will my PR be reviewed?
 PRs are typically reviewed and merged in under 7 days. If your PR has been open

Cargo.lock (generated, 518 lines changed)

File diff suppressed because it is too large.

@@ -59,6 +59,8 @@ members = [
     "rayon-threadlimit",
     "rbpf-cli",
     "remote-wallet",
+    "replica-lib",
+    "replica-node",
     "rpc",
     "rpc-test",
     "runtime",

@@ -35,7 +35,7 @@ On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc
 ```bash
 $ sudo apt-get update
-$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang cmake make
+$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang make
 ```
 ## **2. Download the source code.**

@@ -11,7 +11,7 @@
 email to security@solana.com and provide your github username so we can add you
 to a new draft security advisory for further discussion.
-Expect a response as fast as possible, typically within 72 hours.
+Expect a response as fast as possible, within one business day at the latest.
 <a name="bounty"></a>
 ## Security Bug Bounties

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-account-decoder"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana account decoder"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -19,9 +19,9 @@ lazy_static = "1.4.0"
 serde = "1.0.136"
 serde_derive = "1.0.103"
 serde_json = "1.0.79"
-solana-config-program = { path = "../programs/config", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
+solana-config-program = { path = "../programs/config", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-vote-program = { path = "../programs/vote", version = "=1.10.9" }
 spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
 thiserror = "1.0"
 zstd = "0.11.1"

View File

@@ -2,7 +2,7 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-accounts-bench"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -12,11 +12,11 @@ publish = false
 clap = "2.33.1"
 log = "0.4.14"
 rayon = "1.5.1"
-solana-logger = { path = "../logger", version = "=1.11.0" }
-solana-measure = { path = "../measure", version = "=1.11.0" }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -6,18 +6,12 @@ use {
     rayon::prelude::*,
     solana_measure::measure::Measure,
     solana_runtime::{
-        accounts::{
-            test_utils::{create_test_accounts, update_accounts_bench},
-            Accounts,
-        },
+        accounts::{create_test_accounts, update_accounts_bench, Accounts},
         accounts_db::AccountShrinkThreshold,
         accounts_index::AccountSecondaryIndexes,
         ancestors::Ancestors,
-        rent_collector::RentCollector,
-    },
-    solana_sdk::{
-        genesis_config::ClusterType, pubkey::Pubkey, sysvar::epoch_schedule::EpochSchedule,
     },
+    solana_sdk::{genesis_config::ClusterType, pubkey::Pubkey},
     std::{env, fs, path::PathBuf},
 };
@@ -120,12 +114,7 @@ fn main() {
     } else {
         let mut pubkeys: Vec<Pubkey> = vec![];
         let mut time = Measure::start("hash");
-        let results = accounts.accounts_db.update_accounts_hash(
-            0,
-            &ancestors,
-            &EpochSchedule::default(),
-            &RentCollector::default(),
-        );
+        let results = accounts.accounts_db.update_accounts_hash(0, &ancestors);
         time.stop();
         let mut time_store = Measure::start("hash using store");
         let results_store = accounts.accounts_db.update_accounts_hash_with_index_option(
@@ -135,8 +124,7 @@ fn main() {
             &ancestors,
             None,
             false,
-            &EpochSchedule::default(),
-            &RentCollector::default(),
+            None,
             false,
         );
         time_store.stop();

View File

@@ -2,7 +2,7 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-accounts-cluster-bench"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -13,25 +13,25 @@ clap = "2.33.1"
 log = "0.4.14"
 rand = "0.7.0"
 rayon = "1.5.1"
-solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
-solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
-solana-client = { path = "../client", version = "=1.11.0" }
-solana-faucet = { path = "../faucet", version = "=1.11.0" }
-solana-gossip = { path = "../gossip", version = "=1.11.0" }
-solana-logger = { path = "../logger", version = "=1.11.0" }
-solana-measure = { path = "../measure", version = "=1.11.0" }
-solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
-solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
+solana-account-decoder = { path = "../account-decoder", version = "=1.10.9" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.10.9" }
+solana-client = { path = "../client", version = "=1.10.9" }
+solana-faucet = { path = "../faucet", version = "=1.10.9" }
+solana-gossip = { path = "../gossip", version = "=1.10.9" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
+solana-net-utils = { path = "../net-utils", version = "=1.10.9" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
+solana-transaction-status = { path = "../transaction-status", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
 spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
 [dev-dependencies]
-solana-core = { path = "../core", version = "=1.11.0" }
-solana-local-cluster = { path = "../local-cluster", version = "=1.11.0" }
-solana-test-validator = { path = "../test-validator", version = "=1.11.0" }
+solana-core = { path = "../core", version = "=1.10.9" }
+solana-local-cluster = { path = "../local-cluster", version = "=1.10.9" }
+solana-test-validator = { path = "../test-validator", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -2,7 +2,7 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-banking-bench"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -14,17 +14,17 @@ crossbeam-channel = "0.5"
 log = "0.4.14"
 rand = "0.7.0"
 rayon = "1.5.1"
-solana-core = { path = "../core", version = "=1.11.0" }
-solana-gossip = { path = "../gossip", version = "=1.11.0" }
-solana-ledger = { path = "../ledger", version = "=1.11.0" }
-solana-logger = { path = "../logger", version = "=1.11.0" }
-solana-measure = { path = "../measure", version = "=1.11.0" }
-solana-perf = { path = "../perf", version = "=1.11.0" }
-solana-poh = { path = "../poh", version = "=1.11.0" }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
+solana-core = { path = "../core", version = "=1.10.9" }
+solana-gossip = { path = "../gossip", version = "=1.10.9" }
+solana-ledger = { path = "../ledger", version = "=1.10.9" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
+solana-perf = { path = "../perf", version = "=1.10.9" }
+solana-poh = { path = "../poh", version = "=1.10.9" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-banks-client"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana banks client"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -12,17 +12,17 @@ edition = "2021"
 [dependencies]
 borsh = "0.9.3"
 futures = "0.3"
-solana-banks-interface = { path = "../banks-interface", version = "=1.11.0" }
-solana-program = { path = "../sdk/program", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-banks-interface = { path = "../banks-interface", version = "=1.10.9" }
+solana-program = { path = "../sdk/program", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 tarpc = { version = "0.27.2", features = ["full"] }
 thiserror = "1.0"
 tokio = { version = "1", features = ["full"] }
 tokio-serde = { version = "0.8", features = ["bincode"] }
 [dev-dependencies]
-solana-banks-server = { path = "../banks-server", version = "=1.11.0" }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
+solana-banks-server = { path = "../banks-server", version = "=1.10.9" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
 [lib]
 crate-type = ["lib"]

View File

@@ -1,8 +1,5 @@
 use {
-    solana_sdk::{
-        transaction::TransactionError, transaction_context::TransactionReturnData,
-        transport::TransportError,
-    },
+    solana_sdk::{transaction::TransactionError, transport::TransportError},
     std::io,
     tarpc::client::RpcError,
     thiserror::Error,
@@ -28,7 +25,6 @@ pub enum BanksClientError {
         err: TransactionError,
         logs: Vec<String>,
         units_consumed: u64,
-        return_data: Option<TransactionReturnData>,
     },
 }

View File

@@ -247,7 +247,6 @@ impl BanksClient {
             err,
             logs: simulation_details.logs,
             units_consumed: simulation_details.units_consumed,
-            return_data: simulation_details.return_data,
         }),
         BanksTransactionResultWithSimulation {
             result: Some(result),

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-banks-interface"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana banks RPC interface"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -11,7 +11,7 @@ edition = "2021"
 [dependencies]
 serde = { version = "1.0.136", features = ["derive"] }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 tarpc = { version = "0.27.2", features = ["full"] }
 [lib]

View File

@@ -12,7 +12,6 @@ use {
         pubkey::Pubkey,
         signature::Signature,
         transaction::{self, Transaction, TransactionError},
-        transaction_context::TransactionReturnData,
     },
 };
@@ -36,7 +35,6 @@ pub struct TransactionStatus {
 pub struct TransactionSimulationDetails {
     pub logs: Vec<String>,
     pub units_consumed: u64,
-    pub return_data: Option<TransactionReturnData>,
 }
 #[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-banks-server"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana banks server"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -13,10 +13,10 @@ edition = "2021"
 bincode = "1.3.3"
 crossbeam-channel = "0.5"
 futures = "0.3"
-solana-banks-interface = { path = "../banks-interface", version = "=1.11.0" }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.11.0" }
+solana-banks-interface = { path = "../banks-interface", version = "=1.10.9" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.10.9" }
 tarpc = { version = "0.27.2", features = ["full"] }
 tokio = { version = "1", features = ["full"] }
 tokio-serde = { version = "0.8", features = ["bincode"] }

View File

@@ -266,7 +266,6 @@ impl Banks for BanksServer {
             logs,
             post_simulation_accounts: _,
             units_consumed,
-            return_data,
         } = self
             .bank(commitment)
             .simulate_transaction_unchecked(sanitized_transaction)
@@ -276,7 +275,6 @@ impl Banks for BanksServer {
         simulation_details: Some(TransactionSimulationDetails {
             logs,
             units_consumed,
-            return_data,
         }),
     };
 }

View File

@@ -2,18 +2,18 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-bench-streamer"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
 publish = false
 [dependencies]
+clap = "2.33.1"
 crossbeam-channel = "0.5"
-clap = { version = "3.1.5", features = ["cargo"] }
-solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
+solana-net-utils = { path = "../net-utils", version = "=1.10.9" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -1,6 +1,6 @@
 #![allow(clippy::integer_arithmetic)]
 use {
-    clap::{crate_description, crate_name, Arg, Command},
+    clap::{crate_description, crate_name, value_t, App, Arg},
     crossbeam_channel::unbounded,
     solana_streamer::{
         packet::{Packet, PacketBatch, PacketBatchRecycler, PACKET_DATA_SIZE},
@@ -57,18 +57,18 @@ fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketBatchReceiver) ->
 fn main() -> Result<()> {
     let mut num_sockets = 1usize;
-    let matches = Command::new(crate_name!())
+    let matches = App::new(crate_name!())
         .about(crate_description!())
         .version(solana_version::version!())
         .arg(
-            Arg::new("num-recv-sockets")
+            Arg::with_name("num-recv-sockets")
                 .long("num-recv-sockets")
                 .value_name("NUM")
                 .takes_value(true)
                 .help("Use NUM receive sockets"),
         )
         .arg(
-            Arg::new("num-producers")
+            Arg::with_name("num-producers")
                 .long("num-producers")
                 .value_name("NUM")
                 .takes_value(true)
@@ -80,7 +80,7 @@ fn main() -> Result<()> {
         num_sockets = max(num_sockets, n.to_string().parse().expect("integer"));
     }
-    let num_producers: u64 = matches.value_of_t("num_producers").unwrap_or(4);
+    let num_producers = value_t!(matches, "num_producers", u64).unwrap_or(4);
     let port = 0;
     let ip_addr = IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0));

View File

@@ -2,7 +2,7 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-bench-tps"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -15,28 +15,28 @@ log = "0.4.14"
 rayon = "1.5.1"
 serde_json = "1.0.79"
 serde_yaml = "0.8.23"
-solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
-solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
-solana-client = { path = "../client", version = "=1.11.0" }
-solana-core = { path = "../core", version = "=1.11.0" }
-solana-faucet = { path = "../faucet", version = "=1.11.0" }
-solana-genesis = { path = "../genesis", version = "=1.11.0" }
-solana-gossip = { path = "../gossip", version = "=1.11.0" }
-solana-logger = { path = "../logger", version = "=1.11.0" }
-solana-measure = { path = "../measure", version = "=1.11.0" }
-solana-metrics = { path = "../metrics", version = "=1.11.0" }
-solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
-solana-rpc = { path = "../rpc", version = "=1.11.0" }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.10.9" }
+solana-cli-config = { path = "../cli-config", version = "=1.10.9" }
+solana-client = { path = "../client", version = "=1.10.9" }
+solana-core = { path = "../core", version = "=1.10.9" }
+solana-faucet = { path = "../faucet", version = "=1.10.9" }
+solana-genesis = { path = "../genesis", version = "=1.10.9" }
+solana-gossip = { path = "../gossip", version = "=1.10.9" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
+solana-metrics = { path = "../metrics", version = "=1.10.9" }
+solana-net-utils = { path = "../net-utils", version = "=1.10.9" }
+solana-rpc = { path = "../rpc", version = "=1.10.9" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
 thiserror = "1.0"
 [dev-dependencies]
 serial_test = "0.6.0"
-solana-local-cluster = { path = "../local-cluster", version = "=1.11.0" }
-solana-test-validator = { path = "../test-validator", version = "=1.11.0" }
+solana-local-cluster = { path = "../local-cluster", version = "=1.10.9" }
+solana-test-validator = { path = "../test-validator", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -69,10 +69,10 @@ fn test_bench_tps_local_cluster(config: Config) {
     cluster.transfer(&cluster.funding_keypair, &faucet_pubkey, 100_000_000);
-    let client = Arc::new(create_client(
+    let client = Arc::new(create_client((
         cluster.entry_point_info.rpc,
         cluster.entry_point_info.tpu,
-    ));
+    )));
     let lamports_per_account = 100;

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-bloom"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana bloom filter"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -17,9 +17,9 @@ rand = "0.7.0"
 rayon = "1.5.1"
 serde = { version = "1.0.136", features = ["rc"] }
 serde_derive = "1.0.103"
-solana-frozen-abi = { path = "../frozen-abi", version = "=1.11.0" }
-solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-frozen-abi = { path = "../frozen-abi", version = "=1.10.9" }
+solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 [lib]
 crate-type = ["lib"]

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-bucket-map"
-version = "1.11.0"
+version = "1.10.9"
 description = "solana-bucket-map"
 homepage = "https://solana.com/"
 documentation = "https://docs.rs/solana-bucket-map"
@@ -15,14 +15,14 @@ log = { version = "0.4.11" }
 memmap2 = "0.5.3"
 modular-bitfield = "0.11.2"
 rand = "0.7.0"
-solana-measure = { path = "../measure", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 tempfile = "3.3.0"
 [dev-dependencies]
 fs_extra = "1.2.0"
 rayon = "1.5.0"
-solana-logger = { path = "../logger", version = "=1.11.0" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
 [lib]
 crate-type = ["lib"]

View File

@@ -1,5 +1,4 @@
 #![allow(dead_code)]
-
 use {
     crate::{
         bucket::Bucket,
@@ -58,18 +57,8 @@ impl IndexEntry {
             .expect("New storage offset must fit into 7 bytes!")
     }
-    /// return closest bucket index fit for the slot slice.
-    /// Since bucket size is 2^index, the return value is
-    /// min index, such that 2^index >= num_slots
-    /// index = ceiling(log2(num_slots))
-    /// special case, when slot slice empty, return 0th index.
     pub fn data_bucket_from_num_slots(num_slots: Slot) -> u64 {
-        // Compute the ceiling of log2 for integer
-        if num_slots == 0 {
-            0
-        } else {
-            (Slot::BITS - (num_slots - 1).leading_zeros()) as u64
-        }
+        (num_slots as f64).log2().ceil() as u64 // use int log here?
     }
     pub fn data_bucket_ix(&self) -> u64 {
@@ -164,23 +153,4 @@ mod tests {
         let mut index = IndexEntry::new(Pubkey::new_unique());
         index.set_storage_offset(too_big);
     }
-    #[test]
-    fn test_data_bucket_from_num_slots() {
-        for n in 0..512 {
-            assert_eq!(
-                IndexEntry::data_bucket_from_num_slots(n),
-                (n as f64).log2().ceil() as u64
-            );
-        }
-        assert_eq!(IndexEntry::data_bucket_from_num_slots(u32::MAX as u64), 32);
-        assert_eq!(
-            IndexEntry::data_bucket_from_num_slots(u32::MAX as u64 + 1),
-            32
-        );
-        assert_eq!(
-            IndexEntry::data_bucket_from_num_slots(u32::MAX as u64 + 2),
-            33
-        );
-    }
 }
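The hunk above swaps between two ways of computing the same bucket index: a floating-point `(num_slots as f64).log2().ceil()` and an integer `leading_zeros` bit trick. The two agree for small inputs, but only the bit trick is exact for all `u64` values (the `f64` form can lose precision near and above 2^53). A self-contained sketch of the equivalence (the helper name `ceil_log2` is ours, not from the codebase):

```rust
/// Ceiling of log2 for an integer: the smallest index such that
/// 2^index >= n, with the special case that n == 0 maps to 0.
/// Mirrors the `leading_zeros` variant shown in the diff above.
fn ceil_log2(n: u64) -> u64 {
    if n == 0 {
        0 // empty slot slice maps to the 0th bucket
    } else {
        (u64::BITS - (n - 1).leading_zeros()) as u64
    }
}

fn main() {
    // Matches the float-based variant for small inputs...
    for n in 1..512u64 {
        assert_eq!(ceil_log2(n), (n as f64).log2().ceil() as u64);
    }
    // ...and stays exact at integer boundaries like 2^32 and 2^32 + 1,
    // the same boundary cases exercised by the removed test.
    assert_eq!(ceil_log2(u32::MAX as u64 + 1), 32);
    assert_eq!(ceil_log2(u32::MAX as u64 + 2), 33);
    println!("ok");
}
```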

View File

@@ -303,7 +303,7 @@ EOF
   command_step "local-cluster-slow" \
     ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-slow.sh" \
-    40
+    30
 }
 pull_or_push_steps() {

View File

@@ -148,18 +148,6 @@ all_test_steps() {
   # Full test suite
   command_step stable ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-stable.sh" 70
-
-  # Docs tests
-  if affects \
-    .rs$ \
-    ^ci/rust-version.sh \
-    ^ci/test-docs.sh \
-  ; then
-    command_step doctest "ci/test-docs.sh" 15
-  else
-    annotate --style info --context test-docs \
-      "Docs skipped as no .rs files were modified"
-  fi
   wait_step
   # BPF test suite
@@ -307,7 +295,7 @@ EOF
   command_step "local-cluster-slow" \
     ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-slow.sh" \
-    40
+    30
 }
 pull_or_push_steps() {

View File

@@ -1,4 +1,4 @@
-FROM solanalabs/rust:1.60.0
+FROM solanalabs/rust:1.59.0
 ARG date
 RUN set -x \

View File

@@ -3,14 +3,8 @@ set -ex
 cd "$(dirname "$0")"
-platform=()
-if [[ $(uname -m) = arm64 ]]; then
-  # Ref: https://blog.jaimyn.dev/how-to-build-multi-architecture-docker-images-on-an-m1-mac/#tldr
-  platform+=(--platform linux/amd64)
-fi
 nightlyDate=${1:-$(date +%Y-%m-%d)}
-docker build "${platform[@]}" -t solanalabs/rust-nightly:"$nightlyDate" --build-arg date="$nightlyDate" .
+docker build -t solanalabs/rust-nightly:"$nightlyDate" --build-arg date="$nightlyDate" .
 maybeEcho=
 if [[ -z $CI ]]; then
View File

@@ -1,6 +1,6 @@
 # Note: when the rust version is changed also modify
 # ci/rust-version.sh to pick up the new image tag
-FROM rust:1.60.0
+FROM rust:1.59.0
 # Add Google Protocol Buffers for Libra's metrics library.
 ENV PROTOC_VERSION 3.8.0

View File

@@ -3,14 +3,7 @@ set -ex
 cd "$(dirname "$0")"
-platform=()
-if [[ $(uname -m) = arm64 ]]; then
-  # Ref: https://blog.jaimyn.dev/how-to-build-multi-architecture-docker-images-on-an-m1-mac/#tldr
-  platform+=(--platform linux/amd64)
-fi
-docker build "${platform[@]}" -t solanalabs/rust .
+docker build -t solanalabs/rust .
 read -r rustc version _ < <(docker run solanalabs/rust rustc --version)
 [[ $rustc = rustc ]]

View File

@@ -18,13 +18,13 @@
 if [[ -n $RUST_STABLE_VERSION ]]; then
   stable_version="$RUST_STABLE_VERSION"
 else
-  stable_version=1.60.0
+  stable_version=1.59.0
 fi
 if [[ -n $RUST_NIGHTLY_VERSION ]]; then
   nightly_version="$RUST_NIGHTLY_VERSION"
 else
-  nightly_version=2022-04-01
+  nightly_version=2022-02-24
 fi

View File

@@ -1 +0,0 @@
-test-stable.sh

View File

@@ -30,7 +30,7 @@ JOBS=$((JOBS>NPROC ? NPROC : JOBS))
 echo "Executing $testName"
 case $testName in
 test-stable)
-  _ "$cargo" stable test --jobs "$JOBS" --all --tests --exclude solana-local-cluster ${V:+--verbose} -- --nocapture
+  _ "$cargo" stable test --jobs "$JOBS" --all --exclude solana-local-cluster ${V:+--verbose} -- --nocapture
   ;;
 test-stable-bpf)
   # Clear the C dependency files, if dependency moves these files are not regenerated
@@ -130,10 +130,6 @@ test-wasm)
   done
   exit 0
   ;;
-test-docs)
-  _ "$cargo" stable test --jobs "$JOBS" --all --doc --exclude solana-local-cluster ${V:+--verbose} -- --nocapture
-  exit 0
-  ;;
 *)
   echo "Error: Unknown test: $testName"
   ;;

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-clap-utils"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana utilities for the clap"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -13,12 +13,12 @@ edition = "2021"
 chrono = "0.4"
 clap = "2.33.0"
 rpassword = "6.0"
-solana-perf = { path = "../perf", version = "=1.11.0" }
-solana-remote-wallet = { path = "../remote-wallet", version = "=1.11.0", default-features = false }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-perf = { path = "../perf", version = "=1.10.9" }
+solana-remote-wallet = { path = "../remote-wallet", version = "=1.10.9", default-features = false }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 thiserror = "1.0.30"
 tiny-bip39 = "0.8.2"
-uriparse = "0.6.4"
+uriparse = "0.6.3"
 url = "2.2.2"
 [dev-dependencies]


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-cli-config"
 description = "Blockchain, Rebuilt for Scale"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -15,8 +15,8 @@ lazy_static = "1.4.0"
 serde = "1.0.136"
 serde_derive = "1.0.103"
 serde_yaml = "0.8.23"
-solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 url = "2.2.2"
 [dev-dependencies]


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-cli-output"
 description = "Blockchain, Rebuilt for Scale"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -17,17 +17,16 @@ clap = "2.33.0"
 console = "0.15.0"
 humantime = "2.0.1"
 indicatif = "0.16.2"
-pretty-hex = "0.2.1"
-semver = "1.0.7"
+semver = "1.0.6"
 serde = "1.0.136"
 serde_json = "1.0.79"
-solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
-solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
-solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
-solana-client = { path = "../client", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
-solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
+solana-account-decoder = { path = "../account-decoder", version = "=1.10.9" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.10.9" }
+solana-client = { path = "../client", version = "=1.10.9" }
+solana-cli-config = { path = "../cli-config", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-transaction-status = { path = "../transaction-status", version = "=1.10.9" }
+solana-vote-program = { path = "../programs/vote", version = "=1.10.9" }
 spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
 [dev-dependencies]


@@ -15,7 +15,6 @@ use {
 signature::Signature,
 stake,
 transaction::{TransactionError, TransactionVersion, VersionedTransaction},
-transaction_context::TransactionReturnData,
 },
 solana_transaction_status::{Rewards, UiTransactionStatusMeta},
 spl_memo::{id as spl_memo_id, v1::id as spl_memo_v1_id},
@@ -263,7 +262,6 @@ fn write_transaction<W: io::Write>(
 write_fees(w, transaction_status.fee, prefix)?;
 write_balances(w, transaction_status, prefix)?;
 write_log_messages(w, transaction_status.log_messages.as_ref(), prefix)?;
-write_return_data(w, transaction_status.return_data.as_ref(), prefix)?;
 write_rewards(w, transaction_status.rewards.as_ref(), prefix)?;
 } else {
 writeln!(w, "{}Status: Unavailable", prefix)?;
@@ -594,25 +592,6 @@ fn write_balances<W: io::Write>(
 Ok(())
 }
-fn write_return_data<W: io::Write>(
-w: &mut W,
-return_data: Option<&TransactionReturnData>,
-prefix: &str,
-) -> io::Result<()> {
-if let Some(return_data) = return_data {
-if !return_data.data.is_empty() {
-use pretty_hex::*;
-writeln!(
-w,
-"{}Return Data from Program {}:",
-prefix, return_data.program_id
-)?;
-writeln!(w, "{} {:?}", prefix, return_data.data.hex_dump())?;
-}
-}
-Ok(())
-}
 fn write_log_messages<W: io::Write>(
 w: &mut W,
 log_messages: Option<&Vec<String>>,
@@ -787,10 +766,6 @@ mod test {
 commission: None,
 }]),
 loaded_addresses: LoadedAddresses::default(),
-return_data: Some(TransactionReturnData {
-program_id: Pubkey::new_from_array([2u8; 32]),
-data: vec![1, 2, 3],
-}),
 };
 let output = {
@@ -827,9 +802,6 @@ Status: Ok
 Account 1 balance: 0.00001 -> 0.0000099
 Log Messages:
 Test message
-Return Data from Program 8qbHbw2BbbTHBW1sbeqakYXVKRQM8Ne7pLK7m6CVfeR:
-Length: 3 (0x3) bytes
-0000: 01 02 03 ...
 Rewards:
 Address Type Amount New Balance \0
 4vJ9JU1bJJE96FWSJKvHsmmFADCg4gpZQff4P3bkLKi rent -0.000000100 0.000009900 \0
@@ -864,10 +836,6 @@ Rewards:
 commission: None,
 }]),
 loaded_addresses,
-return_data: Some(TransactionReturnData {
-program_id: Pubkey::new_from_array([2u8; 32]),
-data: vec![1, 2, 3],
-}),
 };
 let output = {
@@ -913,9 +881,6 @@ Status: Ok
 Account 3 balance: 0.00002
 Log Messages:
 Test message
-Return Data from Program 8qbHbw2BbbTHBW1sbeqakYXVKRQM8Ne7pLK7m6CVfeR:
-Length: 3 (0x3) bytes
-0000: 01 02 03 ...
 Rewards:
 Address Type Amount New Balance \0
 CktRuQ2mttgRGkXJtyksdKHjUdc2C4TgDzyB98oEzy8 rent -0.000000100 0.000014900 \0


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-cli"
 description = "Blockchain, Rebuilt for Scale"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -23,33 +23,33 @@ log = "0.4.14"
 num-traits = "0.2"
 pretty-hex = "0.2.1"
 reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
-semver = "1.0.7"
+semver = "1.0.6"
 serde = "1.0.136"
 serde_derive = "1.0.103"
 serde_json = "1.0.79"
-solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
-solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.11.0" }
-solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
-solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
-solana-cli-output = { path = "../cli-output", version = "=1.11.0" }
-solana-client = { path = "../client", version = "=1.11.0" }
-solana-config-program = { path = "../programs/config", version = "=1.11.0" }
-solana-faucet = { path = "../faucet", version = "=1.11.0" }
-solana-logger = { path = "../logger", version = "=1.11.0" }
-solana-program-runtime = { path = "../program-runtime", version = "=1.11.0" }
-solana-remote-wallet = { path = "../remote-wallet", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
-solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
-solana_rbpf = "=0.2.25"
+solana-account-decoder = { path = "../account-decoder", version = "=1.10.9" }
+solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.10.9" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.10.9" }
+solana-cli-config = { path = "../cli-config", version = "=1.10.9" }
+solana-cli-output = { path = "../cli-output", version = "=1.10.9" }
+solana-client = { path = "../client", version = "=1.10.9" }
+solana-config-program = { path = "../programs/config", version = "=1.10.9" }
+solana-faucet = { path = "../faucet", version = "=1.10.9" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
+solana-program-runtime = { path = "../program-runtime", version = "=1.10.9" }
+solana-remote-wallet = { path = "../remote-wallet", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-transaction-status = { path = "../transaction-status", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
+solana-vote-program = { path = "../programs/vote", version = "=1.10.9" }
+solana_rbpf = "=0.2.24"
 spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
 thiserror = "1.0.30"
 tiny-bip39 = "0.8.2"
 [dev-dependencies]
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
-solana-test-validator = { path = "../test-validator", version = "=1.11.0" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
+solana-test-validator = { path = "../test-validator", version = "=1.10.9" }
 tempfile = "3.3.0"
 [[bin]]


@@ -188,7 +188,6 @@ pub enum CliCommand {
 stake_account_pubkey: Pubkey,
 stake_authority: SignerIndex,
 sign_only: bool,
-deactivate_delinquent: bool,
 dump_transaction_message: bool,
 blockhash_query: BlockhashQuery,
 nonce_account: Option<Pubkey>,
@@ -1084,7 +1083,6 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
 stake_account_pubkey,
 stake_authority,
 sign_only,
-deactivate_delinquent,
 dump_transaction_message,
 blockhash_query,
 nonce_account,
@@ -1098,7 +1096,6 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
 stake_account_pubkey,
 *stake_authority,
 *sign_only,
-*deactivate_delinquent,
 *dump_transaction_message,
 blockhash_query,
 *nonce_account,
@@ -2095,7 +2092,6 @@ mod tests {
 stake_authority: 0,
 sign_only: false,
 dump_transaction_message: false,
-deactivate_delinquent: false,
 blockhash_query: BlockhashQuery::default(),
 nonce_account: None,
 nonce_authority: 0,


@@ -41,7 +41,6 @@ use {
 self,
 instruction::{self as stake_instruction, LockupArgs, StakeError},
 state::{Authorized, Lockup, Meta, StakeActivationStatus, StakeAuthorize, StakeState},
-tools::{acceptable_reference_epoch_credits, eligible_for_deactivate_delinquent},
 },
 stake_history::StakeHistory,
 system_instruction::SystemError,
@@ -380,13 +379,6 @@ impl StakeSubCommands for App<'_, '_> {
 .help("Seed for address generation; if specified, the resulting account \
 will be at a derived address of STAKE_ACCOUNT_ADDRESS")
 )
-.arg(
-Arg::with_name("delinquent")
-.long("delinquent")
-.takes_value(false)
-.conflicts_with(SIGN_ONLY_ARG.name)
-.help("Deactivate abandoned stake that is currently delegated to a delinquent vote account")
-)
 .arg(stake_authority_arg())
 .offline_args()
 .nonce_args(false)
@@ -1003,13 +995,11 @@ pub fn parse_stake_deactivate_stake(
 let stake_account_pubkey =
 pubkey_of_signer(matches, "stake_account_pubkey", wallet_manager)?.unwrap();
 let sign_only = matches.is_present(SIGN_ONLY_ARG.name);
-let deactivate_delinquent = matches.is_present("delinquent");
 let dump_transaction_message = matches.is_present(DUMP_TRANSACTION_MESSAGE.name);
 let blockhash_query = BlockhashQuery::new_from_matches(matches);
 let nonce_account = pubkey_of(matches, NONCE_ARG.name);
 let memo = matches.value_of(MEMO_ARG.name).map(String::from);
 let seed = value_t!(matches, "seed", String).ok();
 let (stake_authority, stake_authority_pubkey) =
 signer_of(matches, STAKE_AUTHORITY_ARG.name, wallet_manager)?;
 let (nonce_authority, nonce_authority_pubkey) =
@@ -1028,7 +1018,6 @@ pub fn parse_stake_deactivate_stake(
 stake_account_pubkey,
 stake_authority: signer_info.index_of(stake_authority_pubkey).unwrap(),
 sign_only,
-deactivate_delinquent,
 dump_transaction_message,
 blockhash_query,
 nonce_account,
@@ -1488,7 +1477,6 @@ pub fn process_deactivate_stake_account(
 stake_account_pubkey: &Pubkey,
 stake_authority: SignerIndex,
 sign_only: bool,
-deactivate_delinquent: bool,
 dump_transaction_message: bool,
 blockhash_query: &BlockhashQuery,
 nonce_account: Option<Pubkey>,
@@ -1498,6 +1486,7 @@ pub fn process_deactivate_stake_account(
 fee_payer: SignerIndex,
 ) -> ProcessResult {
 let recent_blockhash = blockhash_query.get_blockhash(rpc_client, config.commitment)?;
+let stake_authority = config.signers[stake_authority];
 let stake_account_address = if let Some(seed) = seed {
 Pubkey::create_with_seed(stake_account_pubkey, seed, &stake::program::id())?
@@ -1505,77 +1494,11 @@ pub fn process_deactivate_stake_account(
 *stake_account_pubkey
 };
-let ixs = vec![if deactivate_delinquent {
-let stake_account = rpc_client.get_account(&stake_account_address)?;
-if stake_account.owner != stake::program::id() {
-return Err(CliError::BadParameter(format!(
-"{} is not a stake account",
-stake_account_address,
-))
-.into());
-}
-let vote_account_address = match stake_account.state() {
-Ok(stake_state) => match stake_state {
-StakeState::Stake(_, stake) => stake.delegation.voter_pubkey,
-_ => {
-return Err(CliError::BadParameter(format!(
-"{} is not a delegated stake account",
-stake_account_address,
-))
-.into())
-}
-},
-Err(err) => {
-return Err(CliError::RpcRequestError(format!(
-"Account data could not be deserialized to stake state: {}",
-err
-))
-.into())
-}
-};
-let current_epoch = rpc_client.get_epoch_info()?.epoch;
-let (_, vote_state) = crate::vote::get_vote_account(
-rpc_client,
-&vote_account_address,
-rpc_client.commitment(),
-)?;
-if !eligible_for_deactivate_delinquent(&vote_state.epoch_credits, current_epoch) {
-return Err(CliError::BadParameter(format!(
-"Stake has not been delinquent for {} epochs",
-stake::MINIMUM_DELINQUENT_EPOCHS_FOR_DEACTIVATION,
-))
-.into());
-}
-// Search for a reference vote account
-let reference_vote_account_address = rpc_client
-.get_vote_accounts()?
-.current
-.into_iter()
-.find(|vote_account_info| {
-acceptable_reference_epoch_credits(&vote_account_info.epoch_credits, current_epoch)
-});
-let reference_vote_account_address = reference_vote_account_address
-.ok_or_else(|| {
-CliError::RpcRequestError("Unable to find a reference vote account".into())
-})?
-.vote_pubkey
-.parse()?;
-stake_instruction::deactivate_delinquent_stake(
-&stake_account_address,
-&vote_account_address,
-&reference_vote_account_address,
-)
-} else {
-let stake_authority = config.signers[stake_authority];
-stake_instruction::deactivate_stake(&stake_account_address, &stake_authority.pubkey())
-}]
+let ixs = vec![stake_instruction::deactivate_stake(
+&stake_account_address,
+&stake_authority.pubkey(),
+)]
 .with_memo(memo);
 let nonce_authority = config.signers[nonce_authority];
 let fee_payer = config.signers[fee_payer];
@@ -4251,34 +4174,6 @@ mod tests {
 stake_account_pubkey,
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
-dump_transaction_message: false,
-blockhash_query: BlockhashQuery::default(),
-nonce_account: None,
-nonce_authority: 0,
-memo: None,
-seed: None,
-fee_payer: 0,
-},
-signers: vec![read_keypair_file(&default_keypair_file).unwrap().into()],
-}
-);
-// Test DeactivateStake Subcommand with delinquent flag
-let test_deactivate_stake = test_commands.clone().get_matches_from(vec![
-"test",
-"deactivate-stake",
-&stake_account_string,
-"--delinquent",
-]);
-assert_eq!(
-parse_command(&test_deactivate_stake, &default_signer, &mut None).unwrap(),
-CliCommandInfo {
-command: CliCommand::DeactivateStake {
-stake_account_pubkey,
-stake_authority: 0,
-sign_only: false,
-deactivate_delinquent: true,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::default(),
 nonce_account: None,
@@ -4306,7 +4201,6 @@ mod tests {
 stake_account_pubkey,
 stake_authority: 1,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::default(),
 nonce_account: None,
@@ -4341,7 +4235,6 @@ mod tests {
 stake_account_pubkey,
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::FeeCalculator(
 blockhash_query::Source::Cluster,
@@ -4372,7 +4265,6 @@ mod tests {
 stake_account_pubkey,
 stake_authority: 0,
 sign_only: true,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::None(blockhash),
 nonce_account: None,
@@ -4407,7 +4299,6 @@ mod tests {
 stake_account_pubkey,
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::FeeCalculator(
 blockhash_query::Source::Cluster,
@@ -4454,7 +4345,6 @@ mod tests {
 stake_account_pubkey,
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::FeeCalculator(
 blockhash_query::Source::NonceAccount(nonce_account),
@@ -4489,7 +4379,6 @@ mod tests {
 stake_account_pubkey,
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
 nonce_account: None,


@@ -1140,7 +1140,7 @@ pub fn process_vote_update_commission(
 }
 }
-pub(crate) fn get_vote_account(
+fn get_vote_account(
 rpc_client: &RpcClient,
 vote_account_pubkey: &Pubkey,
 commitment_config: CommitmentConfig,


@@ -204,7 +204,6 @@ fn test_seed_stake_delegation_and_deactivation() {
 stake_account_pubkey: stake_address,
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::default(),
 nonce_account: None,
@@ -288,7 +287,6 @@ fn test_stake_delegation_and_deactivation() {
 stake_account_pubkey: stake_keypair.pubkey(),
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::default(),
 nonce_account: None,
@@ -414,7 +412,6 @@ fn test_offline_stake_delegation_and_deactivation() {
 stake_account_pubkey: stake_keypair.pubkey(),
 stake_authority: 0,
 sign_only: true,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::None(blockhash),
 nonce_account: None,
@@ -434,7 +431,6 @@ fn test_offline_stake_delegation_and_deactivation() {
 stake_account_pubkey: stake_keypair.pubkey(),
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::FeeCalculator(blockhash_query::Source::Cluster, blockhash),
 nonce_account: None,
@@ -550,7 +546,6 @@ fn test_nonced_stake_delegation_and_deactivation() {
 stake_account_pubkey: stake_keypair.pubkey(),
 stake_authority: 0,
 sign_only: false,
-deactivate_delinquent: false,
 dump_transaction_message: false,
 blockhash_query: BlockhashQuery::FeeCalculator(
 blockhash_query::Source::NonceAccount(nonce_account.pubkey()),


@@ -1,6 +1,6 @@
 [package]
 name = "solana-client-test"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana RPC Test"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -14,25 +14,25 @@ publish = false
 futures-util = "0.3.21"
 serde_json = "1.0.79"
 serial_test = "0.6.0"
-solana-client = { path = "../client", version = "=1.11.0" }
-solana-ledger = { path = "../ledger", version = "=1.11.0" }
-solana-measure = { path = "../measure", version = "=1.11.0" }
-solana-merkle-tree = { path = "../merkle-tree", version = "=1.11.0" }
-solana-metrics = { path = "../metrics", version = "=1.11.0" }
-solana-perf = { path = "../perf", version = "=1.11.0" }
-solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.11.0" }
-solana-rpc = { path = "../rpc", version = "=1.11.0" }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
-solana-test-validator = { path = "../test-validator", version = "=1.11.0" }
-solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
+solana-client = { path = "../client", version = "=1.10.9" }
+solana-ledger = { path = "../ledger", version = "=1.10.9" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
+solana-merkle-tree = { path = "../merkle-tree", version = "=1.10.9" }
+solana-metrics = { path = "../metrics", version = "=1.10.9" }
+solana-perf = { path = "../perf", version = "=1.10.9" }
+solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.10.9" }
+solana-rpc = { path = "../rpc", version = "=1.10.9" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
+solana-test-validator = { path = "../test-validator", version = "=1.10.9" }
+solana-transaction-status = { path = "../transaction-status", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
 systemstat = "0.1.10"
 tokio = { version = "1", features = ["full"] }
 [dev-dependencies]
-solana-logger = { path = "../logger", version = "=1.11.0" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]


@@ -1,6 +1,6 @@
 [package]
 name = "solana-client"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana Client"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -11,7 +11,7 @@ edition = "2021"
 [dependencies]
 async-mutex = "1.4.0"
-async-trait = "0.1.53"
+async-trait = "0.1.52"
 base64 = "0.13.0"
 bincode = "1.3.3"
 bs58 = "0.4.0"
@@ -33,21 +33,21 @@ rand_chacha = "0.2.2"
 rayon = "1.5.1"
 reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
 rustls = { version = "0.20.2", features = ["dangerous_configuration"] }
-semver = "1.0.7"
+semver = "1.0.6"
 serde = "1.0.136"
 serde_derive = "1.0.103"
 serde_json = "1.0.79"
-solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
-solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
-solana-faucet = { path = "../faucet", version = "=1.11.0" }
-solana-measure = { path = "../measure", version = "=1.11.0" }
-solana-metrics = { path = "../metrics", version = "=1.11.0" }
-solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
-solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
-solana-version = { path = "../version", version = "=1.11.0" }
-solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
+solana-account-decoder = { path = "../account-decoder", version = "=1.10.9" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.10.9" }
+solana-faucet = { path = "../faucet", version = "=1.10.9" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
+solana-metrics = { path = "../metrics", version = "=1.10.9" }
+solana-net-utils = { path = "../net-utils", version = "=1.10.9" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
+solana-transaction-status = { path = "../transaction-status", version = "=1.10.9" }
+solana-version = { path = "../version", version = "=1.10.9" }
+solana-vote-program = { path = "../programs/vote", version = "=1.10.9" }
 thiserror = "1.0"
 tokio = { version = "1", features = ["full"] }
 tokio-stream = "0.1.8"
@@ -56,10 +56,9 @@ tungstenite = { version = "0.17.2", features = ["rustls-tls-webpki-roots"] }
 url = "2.2.2"
 [dev-dependencies]
-anyhow = "1.0.45"
 assert_matches = "1.5.0"
 jsonrpc-http-server = "18.0.0"
-solana-logger = { path = "../logger", version = "=1.11.0" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]


@ -20,10 +20,10 @@ use {
}; };
// Should be non-zero // Should be non-zero
static MAX_CONNECTIONS: usize = 1024; static MAX_CONNECTIONS: usize = 64;
#[derive(Clone)] #[derive(Clone)]
pub enum Connection { enum Connection {
Udp(Arc<UdpTpuConnection>), Udp(Arc<UdpTpuConnection>),
Quic(Arc<QuicTpuConnection>), Quic(Arc<QuicTpuConnection>),
} }
@ -117,14 +117,14 @@ impl ConnectionCacheStats {
} }
} }
struct ConnectionMap { struct ConnMap {
map: LruCache<SocketAddr, Connection>, map: LruCache<SocketAddr, Connection>,
stats: Arc<ConnectionCacheStats>, stats: Arc<ConnectionCacheStats>,
last_stats: AtomicInterval, last_stats: AtomicInterval,
use_quic: bool, use_quic: bool,
} }
impl ConnectionMap { impl ConnMap {
pub fn new() -> Self { pub fn new() -> Self {
Self { Self {
map: LruCache::new(MAX_CONNECTIONS), map: LruCache::new(MAX_CONNECTIONS),
@ -140,7 +140,7 @@ impl ConnectionMap {
} }
lazy_static! { lazy_static! {
static ref CONNECTION_MAP: Mutex<ConnectionMap> = Mutex::new(ConnectionMap::new()); static ref CONNECTION_MAP: Mutex<ConnMap> = Mutex::new(ConnMap::new());
} }
pub fn set_use_quic(use_quic: bool) { pub fn set_use_quic(use_quic: bool) {
@ -256,25 +256,6 @@ pub fn send_wire_transaction_async(
r r
} }
pub fn send_wire_transaction_batch_async(
packets: Vec<Vec<u8>>,
addr: &SocketAddr,
) -> Result<(), TransportError> {
let (conn, stats) = get_connection(addr);
let client_stats = Arc::new(ClientStats::default());
let len = packets.len();
let r = match conn {
Connection::Udp(conn) => {
conn.send_wire_transaction_batch_async(packets, client_stats.clone())
}
Connection::Quic(conn) => {
conn.send_wire_transaction_batch_async(packets, client_stats.clone())
}
};
stats.add_client_stats(&client_stats, len, r.is_ok());
r
}
pub fn send_wire_transaction( pub fn send_wire_transaction(
wire_transaction: &[u8], wire_transaction: &[u8],
addr: &SocketAddr, addr: &SocketAddr,
@ -346,7 +327,6 @@ mod tests {
#[test] #[test]
fn test_connection_cache() { fn test_connection_cache() {
solana_logger::setup();
// Allow the test to run deterministically // Allow the test to run deterministically
// with the same pseudorandom sequence between runs // with the same pseudorandom sequence between runs
// and on different platforms - the cryptographic security // and on different platforms - the cryptographic security
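The connection-cache hunk above keeps one global map of per-address TPU connections behind a `Mutex`, evicting old entries once `MAX_CONNECTIONS` is reached. A minimal std-only sketch of that pattern follows; it is an approximation, not the real code — the actual implementation uses the `lru` crate's `LruCache` and `lazy_static`, and caches real `Udp`/`Quic` client handles rather than this toy `Connection` stand-in:

```rust
use std::collections::{HashMap, VecDeque};
use std::net::SocketAddr;
use std::sync::{Mutex, OnceLock};

// Toy stand-in for the real Connection enum (Udp/Quic client handles).
#[derive(Clone, Debug, PartialEq)]
struct Connection(SocketAddr);

// Capacity-bounded map evicting the oldest entry, approximating
// LruCache::new(MAX_CONNECTIONS).
struct ConnectionMap {
    cap: usize,
    order: VecDeque<SocketAddr>,
    map: HashMap<SocketAddr, Connection>,
}

impl ConnectionMap {
    fn new(cap: usize) -> Self {
        Self { cap, order: VecDeque::new(), map: HashMap::new() }
    }

    // Return a cached connection, or create, cache, and return a new one.
    fn get_or_connect(&mut self, addr: SocketAddr) -> Connection {
        if let Some(conn) = self.map.get(&addr) {
            return conn.clone(); // cache hit
        }
        if self.map.len() == self.cap {
            if let Some(old) = self.order.pop_front() {
                self.map.remove(&old); // evict oldest entry
            }
        }
        let conn = Connection(addr);
        self.map.insert(addr, conn.clone());
        self.order.push_back(addr);
        conn
    }
}

// Process-wide cache behind a Mutex, like the lazy_static CONNECTION_MAP.
static CONNECTION_MAP: OnceLock<Mutex<ConnectionMap>> = OnceLock::new();

fn get_connection(addr: SocketAddr) -> Connection {
    let map = CONNECTION_MAP.get_or_init(|| Mutex::new(ConnectionMap::new(64)));
    map.lock().unwrap().get_or_connect(addr)
}

fn main() {
    let addr: SocketAddr = "127.0.0.1:8001".parse().unwrap();
    let first = get_connection(addr);
    let second = get_connection(addr); // second lookup is served from cache
    assert_eq!(first, second);
    println!("connection cache: ok");
}
```

The global `Mutex` keeps the cache simple, at the cost of serializing lookups — one reason the later commits add hit/miss metrics around it.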


@ -2,9 +2,6 @@
#[macro_use] #[macro_use]
extern crate serde_derive; extern crate serde_derive;
#[macro_use]
extern crate solana_metrics;
pub mod blockhash_query; pub mod blockhash_query;
pub mod client_error; pub mod client_error;
pub mod connection_cache; pub mod connection_cache;
@ -30,6 +27,9 @@ pub mod tpu_connection;
pub mod transaction_executor; pub mod transaction_executor;
pub mod udp_client; pub mod udp_client;
#[macro_use]
extern crate solana_metrics;
pub mod mock_sender_for_cli { pub mod mock_sender_for_cli {
/// Magic `SIGNATURE` value used by `solana-cli` unit tests. /// Magic `SIGNATURE` value used by `solana-cli` unit tests.
/// Please don't use this constant. /// Please don't use this constant.


@ -229,7 +229,6 @@ impl RpcSender for MockSender {
post_token_balances: None, post_token_balances: None,
rewards: None, rewards: None,
loaded_addresses: None, loaded_addresses: None,
return_data: None,
}), }),
}, },
block_time: Some(1628633791), block_time: Some(1628633791),
@ -341,7 +340,6 @@ impl RpcSender for MockSender {
logs: None, logs: None,
accounts: None, accounts: None,
units_consumed: None, units_consumed: None,
return_data: None,
}, },
})?, })?,
"getMinimumBalanceForRentExemption" => json![20], "getMinimumBalanceForRentExemption" => json![20],


@ -1,5 +1,3 @@
//! Durable transaction nonce helpers.
use { use {
crate::rpc_client::RpcClient, crate::rpc_client::RpcClient,
solana_sdk::{ solana_sdk::{
@ -34,23 +32,10 @@ pub enum Error {
Client(String), Client(String),
} }
/// Get a nonce account from the network.
///
/// This is like [`RpcClient::get_account`] except:
///
/// - it returns this module's [`Error`] type,
/// - it returns an error if any of the checks from [`account_identity_ok`] fail.
pub fn get_account(rpc_client: &RpcClient, nonce_pubkey: &Pubkey) -> Result<Account, Error> { pub fn get_account(rpc_client: &RpcClient, nonce_pubkey: &Pubkey) -> Result<Account, Error> {
get_account_with_commitment(rpc_client, nonce_pubkey, CommitmentConfig::default()) get_account_with_commitment(rpc_client, nonce_pubkey, CommitmentConfig::default())
} }
/// Get a nonce account from the network.
///
/// This is like [`RpcClient::get_account_with_commitment`] except:
///
/// - it returns this module's [`Error`] type,
/// - it returns an error if the account does not exist,
/// - it returns an error if any of the checks from [`account_identity_ok`] fail.
pub fn get_account_with_commitment( pub fn get_account_with_commitment(
rpc_client: &RpcClient, rpc_client: &RpcClient,
nonce_pubkey: &Pubkey, nonce_pubkey: &Pubkey,
@ -67,13 +52,6 @@ pub fn get_account_with_commitment(
.and_then(|a| account_identity_ok(&a).map(|()| a)) .and_then(|a| account_identity_ok(&a).map(|()| a))
} }
/// Perform basic checks that an account has nonce-like properties.
///
/// # Errors
///
/// Returns [`Error::InvalidAccountOwner`] if the account is not owned by the
/// system program. Returns [`Error::UnexpectedDataSize`] if the account
/// contains no data.
pub fn account_identity_ok<T: ReadableAccount>(account: &T) -> Result<(), Error> { pub fn account_identity_ok<T: ReadableAccount>(account: &T) -> Result<(), Error> {
if account.owner() != &system_program::id() { if account.owner() != &system_program::id() {
Err(Error::InvalidAccountOwner) Err(Error::InvalidAccountOwner)
@ -84,47 +62,6 @@ pub fn account_identity_ok<T: ReadableAccount>(account: &T) -> Result<(), Error>
} }
} }
/// Deserialize the state of a durable transaction nonce account.
///
/// # Errors
///
/// Returns an error if the account is not owned by the system program or
/// contains no data.
///
/// # Examples
///
/// Determine if a nonce account is initialized:
///
/// ```no_run
/// use solana_client::{
/// rpc_client::RpcClient,
/// nonce_utils,
/// };
/// use solana_sdk::{
/// nonce::State,
/// pubkey::Pubkey,
/// };
/// use anyhow::Result;
///
/// fn is_nonce_initialized(
/// client: &RpcClient,
/// nonce_account_pubkey: &Pubkey,
/// ) -> Result<bool> {
///
/// // Sign the tx with nonce_account's `blockhash` instead of the
/// // network's latest blockhash.
/// let nonce_account = client.get_account(nonce_account_pubkey)?;
/// let nonce_state = nonce_utils::state_from_account(&nonce_account)?;
///
/// Ok(!matches!(nonce_state, State::Uninitialized))
/// }
/// #
/// # let client = RpcClient::new(String::new());
/// # let nonce_account_pubkey = Pubkey::new_unique();
/// # is_nonce_initialized(&client, &nonce_account_pubkey)?;
/// #
/// # Ok::<(), anyhow::Error>(())
/// ```
pub fn state_from_account<T: ReadableAccount + StateMut<Versions>>( pub fn state_from_account<T: ReadableAccount + StateMut<Versions>>(
account: &T, account: &T,
) -> Result<State, Error> { ) -> Result<State, Error> {
@ -134,93 +71,6 @@ pub fn state_from_account<T: ReadableAccount + StateMut<Versions>>(
.map(|v| v.convert_to_current()) .map(|v| v.convert_to_current())
} }
/// Deserialize the state data of a durable transaction nonce account.
///
/// # Errors
///
/// Returns an error if the account is not owned by the system program or
/// contains no data. Returns an error if the account state is uninitialized or
/// fails to deserialize.
///
/// # Examples
///
/// Create and sign a transaction with a durable nonce:
///
/// ```no_run
/// use solana_client::{
/// rpc_client::RpcClient,
/// nonce_utils,
/// };
/// use solana_sdk::{
/// message::Message,
/// pubkey::Pubkey,
/// signature::{Keypair, Signer},
/// system_instruction,
/// transaction::Transaction,
/// };
/// use std::path::Path;
/// use anyhow::Result;
/// # use anyhow::anyhow;
///
/// fn create_transfer_tx_with_nonce(
/// client: &RpcClient,
/// nonce_account_pubkey: &Pubkey,
/// payer: &Keypair,
/// receiver: &Pubkey,
/// amount: u64,
/// tx_path: &Path,
/// ) -> Result<()> {
///
/// let instr_transfer = system_instruction::transfer(
/// &payer.pubkey(),
/// receiver,
/// amount,
/// );
///
/// // In this example, `payer` is `nonce_account_pubkey`'s authority
/// let instr_advance_nonce_account = system_instruction::advance_nonce_account(
/// nonce_account_pubkey,
/// &payer.pubkey(),
/// );
///
/// // The `advance_nonce_account` instruction must be the first issued in
/// // the transaction.
/// let message = Message::new(
/// &[
/// instr_advance_nonce_account,
/// instr_transfer
/// ],
/// Some(&payer.pubkey()),
/// );
///
/// let mut tx = Transaction::new_unsigned(message);
///
/// // Sign the tx with nonce_account's `blockhash` instead of the
/// // network's latest blockhash.
/// let nonce_account = client.get_account(nonce_account_pubkey)?;
/// let nonce_data = nonce_utils::data_from_account(&nonce_account)?;
/// let blockhash = nonce_data.blockhash;
///
/// tx.try_sign(&[payer], blockhash)?;
///
/// // Save the signed transaction locally for later submission.
/// save_tx_to_file(&tx_path, &tx)?;
///
/// Ok(())
/// }
/// #
/// # fn save_tx_to_file(path: &Path, tx: &Transaction) -> Result<()> {
/// # Ok(())
/// # }
/// #
/// # let client = RpcClient::new(String::new());
/// # let nonce_account_pubkey = Pubkey::new_unique();
/// # let payer = Keypair::new();
/// # let receiver = Pubkey::new_unique();
/// # create_transfer_tx_with_nonce(&client, &nonce_account_pubkey, &payer, &receiver, 1024, Path::new("new_tx"))?;
/// #
/// # Ok::<(), anyhow::Error>(())
/// ```
pub fn data_from_account<T: ReadableAccount + StateMut<Versions>>( pub fn data_from_account<T: ReadableAccount + StateMut<Versions>>(
account: &T, account: &T,
) -> Result<Data, Error> { ) -> Result<Data, Error> {
@ -228,12 +78,6 @@ pub fn data_from_account<T: ReadableAccount + StateMut<Versions>>(
state_from_account(account).and_then(|ref s| data_from_state(s).map(|d| d.clone())) state_from_account(account).and_then(|ref s| data_from_state(s).map(|d| d.clone()))
} }
/// Get the nonce data from its [`State`] value.
///
/// # Errors
///
/// Returns [`Error::InvalidStateForOperation`] if `state` is
/// [`State::Uninitialized`].
pub fn data_from_state(state: &State) -> Result<&Data, Error> { pub fn data_from_state(state: &State) -> Result<&Data, Error> {
match state { match state {
State::Uninitialized => Err(Error::InvalidStateForOperation), State::Uninitialized => Err(Error::InvalidStateForOperation),


@ -11,20 +11,15 @@ use {
itertools::Itertools, itertools::Itertools,
lazy_static::lazy_static, lazy_static::lazy_static,
log::*, log::*,
quinn::{ quinn::{ClientConfig, Endpoint, EndpointConfig, NewConnection, WriteError},
ClientConfig, Endpoint, EndpointConfig, IdleTimeout, NewConnection, VarInt, WriteError,
},
quinn_proto::ConnectionStats, quinn_proto::ConnectionStats,
solana_sdk::{ solana_sdk::{
quic::{ quic::{QUIC_MAX_CONCURRENT_STREAMS, QUIC_PORT_OFFSET},
QUIC_KEEP_ALIVE_MS, QUIC_MAX_CONCURRENT_STREAMS, QUIC_MAX_TIMEOUT_MS, QUIC_PORT_OFFSET,
},
transport::Result as TransportResult, transport::Result as TransportResult,
}, },
std::{ std::{
net::{SocketAddr, UdpSocket}, net::{SocketAddr, UdpSocket},
sync::{atomic::Ordering, Arc}, sync::{atomic::Ordering, Arc},
time::Duration,
}, },
tokio::runtime::Runtime, tokio::runtime::Runtime,
}; };
@ -124,31 +119,16 @@ impl TpuConnection for QuicTpuConnection {
stats: Arc<ClientStats>, stats: Arc<ClientStats>,
) -> TransportResult<()> { ) -> TransportResult<()> {
let _guard = RUNTIME.enter(); let _guard = RUNTIME.enter();
let client = self.client.clone();
//drop and detach the task //drop and detach the task
let client = self.client.clone();
inc_new_counter_info!("send_wire_transaction_async", 1);
let _ = RUNTIME.spawn(async move { let _ = RUNTIME.spawn(async move {
let send_buffer = client.send_buffer(wire_transaction, &stats); let send_buffer = client.send_buffer(wire_transaction, &stats);
if let Err(e) = send_buffer.await { if let Err(e) = send_buffer.await {
inc_new_counter_warn!("send_wire_transaction_async_fail", 1);
warn!("Failed to send transaction async to {:?}", e); warn!("Failed to send transaction async to {:?}", e);
datapoint_warn!("send-wire-async", ("failure", 1, i64),); } else {
} inc_new_counter_info!("send_wire_transaction_async_pass", 1);
});
Ok(())
}
fn send_wire_transaction_batch_async(
&self,
buffers: Vec<Vec<u8>>,
stats: Arc<ClientStats>,
) -> TransportResult<()> {
let _guard = RUNTIME.enter();
let client = self.client.clone();
//drop and detach the task
let _ = RUNTIME.spawn(async move {
let send_batch = client.send_batch(&buffers, &stats);
if let Err(e) = send_batch.await {
warn!("Failed to send transaction batch async to {:?}", e);
datapoint_warn!("send-wire-batch-async", ("failure", 1, i64),);
} }
}); });
Ok(()) Ok(())
@ -168,13 +148,7 @@ impl QuicClient {
let mut endpoint = RUNTIME.block_on(create_endpoint); let mut endpoint = RUNTIME.block_on(create_endpoint);
let mut config = ClientConfig::new(Arc::new(crypto)); endpoint.set_default_client_config(ClientConfig::new(Arc::new(crypto)));
let transport_config = Arc::get_mut(&mut config.transport).unwrap();
let timeout = IdleTimeout::from(VarInt::from_u32(QUIC_MAX_TIMEOUT_MS));
transport_config.max_idle_timeout(Some(timeout));
transport_config.keep_alive_interval(Some(Duration::from_millis(QUIC_KEEP_ALIVE_MS)));
endpoint.set_default_client_config(config);
Self { Self {
endpoint, endpoint,
@ -295,16 +269,13 @@ impl QuicClient {
.iter() .iter()
.chunks(QUIC_MAX_CONCURRENT_STREAMS); .chunks(QUIC_MAX_CONCURRENT_STREAMS);
let futures: Vec<_> = chunks let futures = chunks.into_iter().map(|buffs| {
.into_iter()
.map(|buffs| {
join_all( join_all(
buffs buffs
.into_iter() .into_iter()
.map(|buf| Self::_send_buffer_using_conn(buf.as_ref(), connection_ref)), .map(|buf| Self::_send_buffer_using_conn(buf.as_ref(), connection_ref)),
) )
}) });
.collect();
for f in futures { for f in futures {
f.await.into_iter().try_for_each(|res| res)?; f.await.into_iter().try_for_each(|res| res)?;
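The batch-send hunk above splits the buffers into chunks of `QUIC_MAX_CONCURRENT_STREAMS`, sends each chunk concurrently, and awaits one chunk before starting the next, bounding how many streams are in flight. A hedged, std-only sketch of that bounded-concurrency shape follows — the real code is async over `quinn` streams with `join_all`; here OS threads stand in for futures, and `send_buffer` is a hypothetical stand-in for `_send_buffer_using_conn`:

```rust
use std::thread;

// Mirrors chunks(QUIC_MAX_CONCURRENT_STREAMS): the in-flight bound per chunk.
const MAX_CONCURRENT_STREAMS: usize = 4;

// Hypothetical stand-in for _send_buffer_using_conn; reports bytes "sent".
fn send_buffer(buf: &[u8]) -> Result<usize, String> {
    Ok(buf.len())
}

fn send_batch(buffers: &[Vec<u8>]) -> Result<usize, String> {
    let mut total = 0;
    // Sends within a chunk run concurrently; chunks run one after another,
    // so at most MAX_CONCURRENT_STREAMS sends are in flight at once.
    for chunk in buffers.chunks(MAX_CONCURRENT_STREAMS) {
        let handles: Vec<_> = chunk
            .iter()
            .map(|buf| {
                let buf = buf.clone();
                thread::spawn(move || send_buffer(&buf))
            })
            .collect();
        // Await (join) the whole chunk before moving on; any error aborts.
        for handle in handles {
            total += handle
                .join()
                .map_err(|_| "send thread panicked".to_string())??;
        }
    }
    Ok(total)
}

fn main() {
    let buffers: Vec<Vec<u8>> = (0..10).map(|i| vec![0u8; i + 1]).collect();
    let sent = send_batch(&buffers).unwrap();
    assert_eq!(sent, (1..=10).sum::<usize>());
    println!("sent {} bytes", sent);
}
```

Collecting the chunk's handles before joining (as the `.collect()` in the diff does for futures) is what makes the chunk concurrent rather than sequential.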


@ -7,7 +7,6 @@ use {
hash::Hash, hash::Hash,
inflation::Inflation, inflation::Inflation,
transaction::{Result, TransactionError}, transaction::{Result, TransactionError},
transaction_context::TransactionReturnData,
}, },
solana_transaction_status::{ solana_transaction_status::{
ConfirmedTransactionStatusWithSignature, TransactionConfirmationStatus, UiConfirmedBlock, ConfirmedTransactionStatusWithSignature, TransactionConfirmationStatus, UiConfirmedBlock,
@ -348,29 +347,6 @@ pub struct RpcSimulateTransactionResult {
pub logs: Option<Vec<String>>, pub logs: Option<Vec<String>>,
pub accounts: Option<Vec<Option<UiAccount>>>, pub accounts: Option<Vec<Option<UiAccount>>>,
pub units_consumed: Option<u64>, pub units_consumed: Option<u64>,
pub return_data: Option<RpcTransactionReturnData>,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct RpcTransactionReturnData {
pub program_id: String,
pub data: (String, ReturnDataEncoding),
}
impl From<TransactionReturnData> for RpcTransactionReturnData {
fn from(return_data: TransactionReturnData) -> Self {
Self {
program_id: return_data.program_id.to_string(),
data: (base64::encode(return_data.data), ReturnDataEncoding::Base64),
}
}
}
#[derive(Serialize, Deserialize, Clone, Copy, Debug, Eq, Hash, PartialEq)]
#[serde(rename_all = "camelCase")]
pub enum ReturnDataEncoding {
Base64,
} }
#[derive(Serialize, Deserialize, Clone, Debug)] #[derive(Serialize, Deserialize, Clone, Debug)]


@ -595,30 +595,55 @@ impl SyncClient for ThinClient {
} }
impl AsyncClient for ThinClient { impl AsyncClient for ThinClient {
fn async_send_versioned_transaction( fn async_send_transaction(&self, transaction: Transaction) -> TransportResult<Signature> {
&self, let transaction = VersionedTransaction::from(transaction);
transaction: VersionedTransaction,
) -> TransportResult<Signature> {
serialize_and_send_transaction(&transaction, self.tpu_addr())?; serialize_and_send_transaction(&transaction, self.tpu_addr())?;
Ok(transaction.signatures[0]) Ok(transaction.signatures[0])
} }
fn async_send_versioned_transaction_batch( fn async_send_batch(&self, transactions: Vec<Transaction>) -> TransportResult<()> {
&self, let batch: Vec<VersionedTransaction> = transactions.into_iter().map(Into::into).collect();
batch: Vec<VersionedTransaction>,
) -> TransportResult<()> {
par_serialize_and_send_transaction_batch(&batch[..], self.tpu_addr())?; par_serialize_and_send_transaction_batch(&batch[..], self.tpu_addr())?;
Ok(()) Ok(())
} }
fn async_send_message<T: Signers>(
&self,
keypairs: &T,
message: Message,
recent_blockhash: Hash,
) -> TransportResult<Signature> {
let transaction = Transaction::new(keypairs, message, recent_blockhash);
self.async_send_transaction(transaction)
}
fn async_send_instruction(
&self,
keypair: &Keypair,
instruction: Instruction,
recent_blockhash: Hash,
) -> TransportResult<Signature> {
let message = Message::new(&[instruction], Some(&keypair.pubkey()));
self.async_send_message(&[keypair], message, recent_blockhash)
}
fn async_transfer(
&self,
lamports: u64,
keypair: &Keypair,
pubkey: &Pubkey,
recent_blockhash: Hash,
) -> TransportResult<Signature> {
let transfer_instruction =
system_instruction::transfer(&keypair.pubkey(), pubkey, lamports);
self.async_send_instruction(keypair, transfer_instruction, recent_blockhash)
}
} }
pub fn create_client(rpc: SocketAddr, tpu: SocketAddr) -> ThinClient { pub fn create_client((rpc, tpu): (SocketAddr, SocketAddr)) -> ThinClient {
ThinClient::new(rpc, tpu) ThinClient::new(rpc, tpu)
} }
pub fn create_client_with_timeout( pub fn create_client_with_timeout(
rpc: SocketAddr, (rpc, tpu): (SocketAddr, SocketAddr),
tpu: SocketAddr,
timeout: Duration, timeout: Duration,
) -> ThinClient { ) -> ThinClient {
ThinClient::new_socket_with_timeout(rpc, tpu, timeout) ThinClient::new_socket_with_timeout(rpc, tpu, timeout)


@ -70,10 +70,4 @@ pub trait TpuConnection {
) -> TransportResult<()> ) -> TransportResult<()>
where where
T: AsRef<[u8]>; T: AsRef<[u8]>;
fn send_wire_transaction_batch_async(
&self,
buffers: Vec<Vec<u8>>,
stats: Arc<ClientStats>,
) -> TransportResult<()>;
} }


@ -62,13 +62,4 @@ impl TpuConnection for UdpTpuConnection {
batch_send(&self.socket, &pkts)?; batch_send(&self.socket, &pkts)?;
Ok(()) Ok(())
} }
fn send_wire_transaction_batch_async(
&self,
buffers: Vec<Vec<u8>>,
_stats: Arc<ClientStats>,
) -> TransportResult<()> {
let pkts: Vec<_> = buffers.into_iter().zip(repeat(self.tpu_addr())).collect();
batch_send(&self.socket, &pkts)?;
Ok(())
}
} }


@ -1,7 +1,7 @@
[package] [package]
name = "solana-core" name = "solana-core"
description = "Blockchain, Rebuilt for Scale" description = "Blockchain, Rebuilt for Scale"
version = "1.11.0" version = "1.10.9"
homepage = "https://solana.com/" homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-core" documentation = "https://docs.rs/solana-core"
readme = "../README.md" readme = "../README.md"
@ -33,29 +33,30 @@ rayon = "1.5.1"
retain_mut = "0.1.7" retain_mut = "0.1.7"
serde = "1.0.136" serde = "1.0.136"
serde_derive = "1.0.103" serde_derive = "1.0.103"
solana-address-lookup-table-program = { path = "../programs/address-lookup-table", version = "=1.11.0" } solana-address-lookup-table-program = { path = "../programs/address-lookup-table", version = "=1.10.9" }
solana-bloom = { path = "../bloom", version = "=1.11.0" } solana-bloom = { path = "../bloom", version = "=1.10.9" }
solana-client = { path = "../client", version = "=1.11.0" } solana-client = { path = "../client", version = "=1.10.9" }
solana-entry = { path = "../entry", version = "=1.11.0" } solana-entry = { path = "../entry", version = "=1.10.9" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.11.0" } solana-frozen-abi = { path = "../frozen-abi", version = "=1.10.9" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.11.0" } solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.10.9" }
solana-geyser-plugin-manager = { path = "../geyser-plugin-manager", version = "=1.11.0" } solana-geyser-plugin-manager = { path = "../geyser-plugin-manager", version = "=1.10.9" }
solana-gossip = { path = "../gossip", version = "=1.11.0" } solana-gossip = { path = "../gossip", version = "=1.10.9" }
solana-ledger = { path = "../ledger", version = "=1.11.0" } solana-ledger = { path = "../ledger", version = "=1.10.9" }
solana-measure = { path = "../measure", version = "=1.11.0" } solana-measure = { path = "../measure", version = "=1.10.9" }
solana-metrics = { path = "../metrics", version = "=1.11.0" } solana-metrics = { path = "../metrics", version = "=1.10.9" }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" } solana-net-utils = { path = "../net-utils", version = "=1.10.9" }
solana-perf = { path = "../perf", version = "=1.11.0" } solana-perf = { path = "../perf", version = "=1.10.9" }
solana-poh = { path = "../poh", version = "=1.11.0" } solana-poh = { path = "../poh", version = "=1.10.9" }
solana-program-runtime = { path = "../program-runtime", version = "=1.11.0" } solana-program-runtime = { path = "../program-runtime", version = "=1.10.9" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.11.0" } solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.10.9" }
solana-rpc = { path = "../rpc", version = "=1.11.0" } solana-replica-lib = { path = "../replica-lib", version = "=1.10.9" }
solana-runtime = { path = "../runtime", version = "=1.11.0" } solana-rpc = { path = "../rpc", version = "=1.10.9" }
solana-sdk = { path = "../sdk", version = "=1.11.0" } solana-runtime = { path = "../runtime", version = "=1.10.9" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.11.0" } solana-sdk = { path = "../sdk", version = "=1.10.9" }
solana-streamer = { path = "../streamer", version = "=1.11.0" } solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.10.9" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" } solana-streamer = { path = "../streamer", version = "=1.10.9" }
solana-vote-program = { path = "../programs/vote", version = "=1.11.0" } solana-transaction-status = { path = "../transaction-status", version = "=1.10.9" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.9" }
sys-info = "0.9.1" sys-info = "0.9.1"
tempfile = "3.3.0" tempfile = "3.3.0"
thiserror = "1.0" thiserror = "1.0"
@ -68,10 +69,10 @@ raptorq = "1.6.5"
reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] } reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
serde_json = "1.0.79" serde_json = "1.0.79"
serial_test = "0.6.0" serial_test = "0.6.0"
solana-logger = { path = "../logger", version = "=1.11.0" } solana-logger = { path = "../logger", version = "=1.10.9" }
solana-program-runtime = { path = "../program-runtime", version = "=1.11.0" } solana-program-runtime = { path = "../program-runtime", version = "=1.10.9" }
solana-stake-program = { path = "../programs/stake", version = "=1.11.0" } solana-stake-program = { path = "../programs/stake", version = "=1.10.9" }
solana-version = { path = "../version", version = "=1.11.0" } solana-version = { path = "../version", version = "=1.10.9" }
static_assertions = "1.1.0" static_assertions = "1.1.0"
systemstat = "0.1.10" systemstat = "0.1.10"


@ -1,28 +1,28 @@
// Service to verify accounts hashes with other known validator nodes. // Service to verify accounts hashes with other known validator nodes.
// //
// Each interval, publish the snapshot hash which is the full accounts state // Each interval, publish the snapshat hash which is the full accounts state
// hash on gossip. Monitor gossip for messages from validators in the `--known-validator`s // hash on gossip. Monitor gossip for messages from validators in the `--known-validator`s
// set and halt the node if a mismatch is detected. // set and halt the node if a mismatch is detected.
use { use {
crossbeam_channel::RecvTimeoutError,
rayon::ThreadPool,
solana_gossip::cluster_info::{ClusterInfo, MAX_SNAPSHOT_HASHES}, solana_gossip::cluster_info::{ClusterInfo, MAX_SNAPSHOT_HASHES},
solana_measure::measure::Measure, solana_measure::measure::Measure,
solana_runtime::{ solana_runtime::{
accounts_hash::{CalcAccountsHashConfig, HashStats}, accounts_db::{self, AccountsDb},
accounts_hash::HashStats,
snapshot_config::SnapshotConfig, snapshot_config::SnapshotConfig,
snapshot_package::{ snapshot_package::{
AccountsPackage, PendingAccountsPackage, PendingSnapshotPackage, SnapshotPackage, AccountsPackage, AccountsPackageReceiver, PendingSnapshotPackage, SnapshotPackage,
SnapshotType, SnapshotType,
}, },
sorted_storages::SortedStorages, sorted_storages::SortedStorages,
}, },
solana_sdk::{ solana_sdk::{clock::Slot, hash::Hash, pubkey::Pubkey},
clock::{Slot, SLOT_MS},
hash::Hash,
pubkey::Pubkey,
},
std::{ std::{
collections::{HashMap, HashSet}, collections::{HashMap, HashSet},
path::{Path, PathBuf},
sync::{ sync::{
atomic::{AtomicBool, Ordering}, atomic::{AtomicBool, Ordering},
Arc, Arc,
@ -38,7 +38,7 @@ pub struct AccountsHashVerifier {
impl AccountsHashVerifier { impl AccountsHashVerifier {
pub fn new( pub fn new(
pending_accounts_package: PendingAccountsPackage, accounts_package_receiver: AccountsPackageReceiver,
pending_snapshot_package: Option<PendingSnapshotPackage>, pending_snapshot_package: Option<PendingSnapshotPackage>,
exit: &Arc<AtomicBool>, exit: &Arc<AtomicBool>,
cluster_info: &Arc<ClusterInfo>, cluster_info: &Arc<ClusterInfo>,
@ -46,6 +46,7 @@ impl AccountsHashVerifier {
halt_on_known_validators_accounts_hash_mismatch: bool, halt_on_known_validators_accounts_hash_mismatch: bool,
fault_injection_rate_slots: u64, fault_injection_rate_slots: u64,
snapshot_config: Option<SnapshotConfig>, snapshot_config: Option<SnapshotConfig>,
ledger_path: PathBuf,
) -> Self { ) -> Self {
let exit = exit.clone(); let exit = exit.clone();
let cluster_info = cluster_info.clone(); let cluster_info = cluster_info.clone();
@ -53,17 +54,18 @@ impl AccountsHashVerifier {
.name("solana-hash-accounts".to_string()) .name("solana-hash-accounts".to_string())
.spawn(move || { .spawn(move || {
let mut hashes = vec![]; let mut hashes = vec![];
let mut thread_pool = None;
loop { loop {
if exit.load(Ordering::Relaxed) { if exit.load(Ordering::Relaxed) {
break; break;
} }
let accounts_package = pending_accounts_package.lock().unwrap().take(); match accounts_package_receiver.recv_timeout(Duration::from_secs(1)) {
if accounts_package.is_none() { Ok(accounts_package) => {
std::thread::sleep(Duration::from_millis(SLOT_MS)); if accounts_package.hash_for_testing.is_some() && thread_pool.is_none()
continue; {
thread_pool = Some(accounts_db::make_min_priority_thread_pool());
} }
let accounts_package = accounts_package.unwrap();
Self::process_accounts_package( Self::process_accounts_package(
accounts_package, accounts_package,
@ -75,8 +77,14 @@ impl AccountsHashVerifier {
&exit, &exit,
fault_injection_rate_slots, fault_injection_rate_slots,
snapshot_config.as_ref(), snapshot_config.as_ref(),
thread_pool.as_ref(),
&ledger_path,
); );
} }
Err(RecvTimeoutError::Disconnected) => break,
Err(RecvTimeoutError::Timeout) => (),
}
}
}) })
.unwrap(); .unwrap();
Self { Self {
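The hunk above replaces the old sleep-and-poll loop (take from a shared `Option`, sleep `SLOT_MS` when empty) with a blocking `recv_timeout` on a channel, distinguishing `Timeout` (loop again, so the exit flag stays responsive) from `Disconnected` (sender gone, shut down). A small sketch of that worker-loop shape, using `std::sync::mpsc` — which has the same `recv_timeout`/`RecvTimeoutError` semantics as the `crossbeam_channel` receiver in the diff — with slot numbers standing in for accounts packages:

```rust
use std::sync::mpsc::{channel, RecvTimeoutError};
use std::thread;
use std::time::Duration;

fn main() {
    let (sender, receiver) = channel::<u64>();

    let worker = thread::spawn(move || {
        let mut processed = Vec::new();
        loop {
            match receiver.recv_timeout(Duration::from_millis(100)) {
                // A package arrived: process it (process_accounts_package(...)).
                Ok(slot) => processed.push(slot),
                // Sender dropped: no more work will ever arrive, exit cleanly.
                Err(RecvTimeoutError::Disconnected) => break,
                // Nothing yet: fall through so an exit flag could be checked.
                Err(RecvTimeoutError::Timeout) => (),
            }
        }
        processed
    });

    for slot in [10, 20, 30] {
        sender.send(slot).unwrap();
    }
    drop(sender); // disconnect; worker drains buffered items, then exits

    assert_eq!(worker.join().unwrap(), vec![10, 20, 30]);
    println!("worker drained all packages");
}
```

Compared with the old polling loop, the worker wakes as soon as work arrives instead of sleeping a fixed interval, and channel disconnection doubles as a shutdown signal.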
@ -95,8 +103,10 @@ impl AccountsHashVerifier {
exit: &Arc<AtomicBool>, exit: &Arc<AtomicBool>,
fault_injection_rate_slots: u64, fault_injection_rate_slots: u64,
snapshot_config: Option<&SnapshotConfig>, snapshot_config: Option<&SnapshotConfig>,
thread_pool: Option<&ThreadPool>,
ledger_path: &Path,
) { ) {
let accounts_hash = Self::calculate_and_verify_accounts_hash(&accounts_package); Self::verify_accounts_package_hash(&accounts_package, thread_pool, ledger_path);
Self::push_accounts_hashes_to_cluster( Self::push_accounts_hashes_to_cluster(
&accounts_package, &accounts_package,
@ -106,63 +116,39 @@ impl AccountsHashVerifier {
hashes, hashes,
exit, exit,
fault_injection_rate_slots, fault_injection_rate_slots,
accounts_hash,
); );
Self::submit_for_packaging( Self::submit_for_packaging(accounts_package, pending_snapshot_package, snapshot_config);
accounts_package,
pending_snapshot_package,
snapshot_config,
accounts_hash,
);
} }
/// returns calculated accounts hash fn verify_accounts_package_hash(
fn calculate_and_verify_accounts_hash(accounts_package: &AccountsPackage) -> Hash { accounts_package: &AccountsPackage,
thread_pool: Option<&ThreadPool>,
ledger_path: &Path,
) {
let mut measure_hash = Measure::start("hash"); let mut measure_hash = Measure::start("hash");
let mut sort_time = Measure::start("sort_storages"); if let Some(expected_hash) = accounts_package.hash_for_testing {
let sorted_storages = SortedStorages::new(&accounts_package.snapshot_storages); let sorted_storages = SortedStorages::new(&accounts_package.snapshot_storages);
sort_time.stop(); let (hash, lamports) = AccountsDb::calculate_accounts_hash_without_index(
ledger_path,
let mut timings = HashStats {
storage_sort_us: sort_time.as_us(),
..HashStats::default()
};
timings.calc_storage_size_quartiles(&accounts_package.snapshot_storages);
let (accounts_hash, lamports) = accounts_package
.accounts
.accounts_db
.calculate_accounts_hash_without_index(
&CalcAccountsHashConfig {
use_bg_thread_pool: true,
check_hash: false,
ancestors: None,
use_write_cache: false,
epoch_schedule: &accounts_package.epoch_schedule,
rent_collector: &accounts_package.rent_collector,
},
&sorted_storages, &sorted_storages,
timings, thread_pool,
HashStats::default(),
false,
None,
None, // this will fail with filler accounts
None, // this code path is only for testing, so use default # passes here
) )
.unwrap(); .unwrap();
assert_eq!(accounts_package.expected_capitalization, lamports); assert_eq!(accounts_package.expected_capitalization, lamports);
if let Some(expected_hash) = accounts_package.accounts_hash_for_testing { assert_eq!(expected_hash, hash);
assert_eq!(expected_hash, accounts_hash);
}; };
measure_hash.stop(); measure_hash.stop();
solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
accounts_package.snapshot_links.path(),
accounts_package.slot,
&accounts_hash,
);
datapoint_info!( datapoint_info!(
"accounts_hash_verifier", "accounts_hash_verifier",
("calculate_hash", measure_hash.as_us(), i64), ("calculate_hash", measure_hash.as_us(), i64),
); );
accounts_hash
} }
fn push_accounts_hashes_to_cluster( fn push_accounts_hashes_to_cluster(
@ -173,8 +159,8 @@ impl AccountsHashVerifier {
hashes: &mut Vec<(Slot, Hash)>, hashes: &mut Vec<(Slot, Hash)>,
exit: &Arc<AtomicBool>, exit: &Arc<AtomicBool>,
fault_injection_rate_slots: u64, fault_injection_rate_slots: u64,
accounts_hash: Hash,
) { ) {
let hash = accounts_package.hash;
if fault_injection_rate_slots != 0 if fault_injection_rate_slots != 0
&& accounts_package.slot % fault_injection_rate_slots == 0 && accounts_package.slot % fault_injection_rate_slots == 0
{ {
@ -185,10 +171,10 @@ impl AccountsHashVerifier {
}; };
warn!("inserting fault at slot: {}", accounts_package.slot); warn!("inserting fault at slot: {}", accounts_package.slot);
let rand = thread_rng().gen_range(0, 10); let rand = thread_rng().gen_range(0, 10);
let hash = extend_and_hash(&accounts_hash, &[rand]); let hash = extend_and_hash(&hash, &[rand]);
hashes.push((accounts_package.slot, hash)); hashes.push((accounts_package.slot, hash));
} else { } else {
hashes.push((accounts_package.slot, accounts_hash)); hashes.push((accounts_package.slot, hash));
} }
while hashes.len() > MAX_SNAPSHOT_HASHES { while hashes.len() > MAX_SNAPSHOT_HASHES {
@ -212,7 +198,6 @@ impl AccountsHashVerifier {
accounts_package: AccountsPackage, accounts_package: AccountsPackage,
pending_snapshot_package: Option<&PendingSnapshotPackage>, pending_snapshot_package: Option<&PendingSnapshotPackage>,
snapshot_config: Option<&SnapshotConfig>, snapshot_config: Option<&SnapshotConfig>,
accounts_hash: Hash,
) { ) {
if accounts_package.snapshot_type.is_none() if accounts_package.snapshot_type.is_none()
|| pending_snapshot_package.is_none() || pending_snapshot_package.is_none()
@ -221,7 +206,7 @@ impl AccountsHashVerifier {
return; return;
}; };
let snapshot_package = SnapshotPackage::new(accounts_package, accounts_hash); let snapshot_package = SnapshotPackage::from(accounts_package);
let pending_snapshot_package = pending_snapshot_package.unwrap(); let pending_snapshot_package = pending_snapshot_package.unwrap();
let _snapshot_config = snapshot_config.unwrap(); let _snapshot_config = snapshot_config.unwrap();
@ -299,18 +284,13 @@ mod tests {
use { use {
super::*, super::*,
solana_gossip::{cluster_info::make_accounts_hashes_message, contact_info::ContactInfo}, solana_gossip::{cluster_info::make_accounts_hashes_message, contact_info::ContactInfo},
solana_runtime::{ solana_runtime::snapshot_utils::{ArchiveFormat, SnapshotVersion},
rent_collector::RentCollector,
snapshot_utils::{ArchiveFormat, SnapshotVersion},
},
solana_sdk::{ solana_sdk::{
genesis_config::ClusterType, genesis_config::ClusterType,
hash::hash, hash::hash,
signature::{Keypair, Signer}, signature::{Keypair, Signer},
sysvar::epoch_schedule::EpochSchedule,
}, },
solana_streamer::socket::SocketAddrSpace, solana_streamer::socket::SocketAddrSpace,
std::str::FromStr,
}; };
fn new_test_cluster_info(contact_info: ContactInfo) -> ClusterInfo { fn new_test_cluster_info(contact_info: ContactInfo) -> ClusterInfo {
@ -373,8 +353,6 @@ mod tests {
incremental_snapshot_archive_interval_slots: Slot::MAX, incremental_snapshot_archive_interval_slots: Slot::MAX,
..SnapshotConfig::default() ..SnapshotConfig::default()
}; };
let accounts = Arc::new(solana_runtime::accounts::Accounts::default_for_tests());
let expected_hash = Hash::from_str("GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn").unwrap();
for i in 0..MAX_SNAPSHOT_HASHES + 1 { for i in 0..MAX_SNAPSHOT_HASHES + 1 {
let accounts_package = AccountsPackage { let accounts_package = AccountsPackage {
slot: full_snapshot_archive_interval_slots + i as u64, slot: full_snapshot_archive_interval_slots + i as u64,
@ -382,18 +360,18 @@ mod tests {
slot_deltas: vec![], slot_deltas: vec![],
snapshot_links: TempDir::new().unwrap(), snapshot_links: TempDir::new().unwrap(),
snapshot_storages: vec![], snapshot_storages: vec![],
hash: hash(&[i as u8]),
archive_format: ArchiveFormat::TarBzip2, archive_format: ArchiveFormat::TarBzip2,
snapshot_version: SnapshotVersion::default(), snapshot_version: SnapshotVersion::default(),
snapshot_archives_dir: PathBuf::default(), snapshot_archives_dir: PathBuf::default(),
expected_capitalization: 0, expected_capitalization: 0,
accounts_hash_for_testing: None, hash_for_testing: None,
cluster_type: ClusterType::MainnetBeta, cluster_type: ClusterType::MainnetBeta,
snapshot_type: None, snapshot_type: None,
accounts: Arc::clone(&accounts),
epoch_schedule: EpochSchedule::default(),
rent_collector: RentCollector::default(),
}; };
let ledger_path = TempDir::new().unwrap();
AccountsHashVerifier::process_accounts_package( AccountsHashVerifier::process_accounts_package(
accounts_package, accounts_package,
&cluster_info, &cluster_info,
@ -404,6 +382,8 @@ mod tests {
&exit, &exit,
0, 0,
Some(&snapshot_config), Some(&snapshot_config),
None,
ledger_path.path(),
); );
// sleep for 1ms to create a newer timestmap for gossip entry // sleep for 1ms to create a newer timestmap for gossip entry
@ -419,13 +399,13 @@ mod tests {
assert_eq!(cluster_hashes.len(), MAX_SNAPSHOT_HASHES); assert_eq!(cluster_hashes.len(), MAX_SNAPSHOT_HASHES);
assert_eq!( assert_eq!(
cluster_hashes[0], cluster_hashes[0],
(full_snapshot_archive_interval_slots + 1, expected_hash) (full_snapshot_archive_interval_slots + 1, hash(&[1]))
); );
assert_eq!( assert_eq!(
cluster_hashes[MAX_SNAPSHOT_HASHES - 1], cluster_hashes[MAX_SNAPSHOT_HASHES - 1],
( (
full_snapshot_archive_interval_slots + MAX_SNAPSHOT_HASHES as u64, full_snapshot_archive_interval_slots + MAX_SNAPSHOT_HASHES as u64,
expected_hash hash(&[MAX_SNAPSHOT_HASHES as u8])
) )
); );
} }
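The hunks above exercise `AccountsHashVerifier`'s bounded hash list: each accounts package pushes a `(slot, hash)` pair, and the `while hashes.len() > MAX_SNAPSHOT_HASHES` loop evicts the oldest entries, which is what the final assertions on `cluster_hashes` check. A standalone sketch of that bound, using toy `u8` hashes and an illustrative constant rather than the real Solana types:

```rust
const MAX_SNAPSHOT_HASHES: usize = 25; // illustrative bound, stands in for the real constant

/// Push the newest (slot, hash) pair, then evict the oldest entries (FIFO)
/// so at most MAX_SNAPSHOT_HASHES remain.
fn push_hash(hashes: &mut Vec<(u64, u8)>, slot: u64, hash: u8) {
    hashes.push((slot, hash));
    while hashes.len() > MAX_SNAPSHOT_HASHES {
        hashes.remove(0);
    }
}

fn main() {
    let mut hashes = Vec::new();
    for slot in 0..MAX_SNAPSHOT_HASHES as u64 + 1 {
        push_hash(&mut hashes, slot, slot as u8);
    }
    assert_eq!(hashes.len(), MAX_SNAPSHOT_HASHES);
    assert_eq!(hashes[0].0, 1); // slot 0 was evicted first
}
```

This mirrors why the test above sees `full_snapshot_archive_interval_slots + 1` as the first surviving entry after `MAX_SNAPSHOT_HASHES + 1` packages.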


@ -1,5 +1,5 @@
//! The `banking_stage` processes Transaction messages. It is intended to be used //! The `banking_stage` processes Transaction messages. It is intended to be used
//! to construct a software pipeline. The stage uses all available CPU cores and //! to contruct a software pipeline. The stage uses all available CPU cores and
//! can do its processing in parallel with signature verification on the GPU. //! can do its processing in parallel with signature verification on the GPU.
use { use {
crate::{ crate::{
@ -23,7 +23,7 @@ use {
solana_perf::{ solana_perf::{
cuda_runtime::PinnedVec, cuda_runtime::PinnedVec,
data_budget::DataBudget, data_budget::DataBudget,
packet::{Packet, PacketBatch, PACKETS_PER_BATCH}, packet::{limited_deserialize, Packet, PacketBatch, PACKETS_PER_BATCH},
perf_libs, perf_libs,
}, },
solana_poh::poh_recorder::{BankStart, PohRecorder, PohRecorderError, TransactionRecorder}, solana_poh::poh_recorder::{BankStart, PohRecorder, PohRecorderError, TransactionRecorder},
@ -45,10 +45,13 @@ use {
MAX_TRANSACTION_FORWARDING_DELAY_GPU, MAX_TRANSACTION_FORWARDING_DELAY_GPU,
}, },
feature_set, feature_set,
message::Message,
pubkey::Pubkey, pubkey::Pubkey,
saturating_add_assign, saturating_add_assign,
timing::{duration_as_ms, timestamp, AtomicInterval}, timing::{duration_as_ms, timestamp, AtomicInterval},
transaction::{self, AddressLoader, SanitizedTransaction, TransactionError}, transaction::{
self, AddressLoader, SanitizedTransaction, TransactionError, VersionedTransaction,
},
transport::TransportError, transport::TransportError,
}, },
solana_transaction_status::token_balances::{ solana_transaction_status::token_balances::{
@ -192,7 +195,7 @@ impl BankingStageStats {
} }
fn report(&mut self, report_interval_ms: u64) { fn report(&mut self, report_interval_ms: u64) {
// skip reporting metrics if stats is empty // skip repoting metrics if stats is empty
if self.is_empty() { if self.is_empty() {
return; return;
} }
@ -555,6 +558,7 @@ impl BankingStage {
let mut reached_end_of_slot: Option<EndOfSlot> = None; let mut reached_end_of_slot: Option<EndOfSlot> = None;
RetainMut::retain_mut(buffered_packet_batches, |deserialized_packet_batch| { RetainMut::retain_mut(buffered_packet_batches, |deserialized_packet_batch| {
let packet_batch = &deserialized_packet_batch.packet_batch;
let original_unprocessed_indexes = deserialized_packet_batch let original_unprocessed_indexes = deserialized_packet_batch
.unprocessed_packets .unprocessed_packets
.keys() .keys()
@ -568,7 +572,8 @@ impl BankingStage {
let should_retain = if let Some(bank) = &end_of_slot.working_bank { let should_retain = if let Some(bank) = &end_of_slot.working_bank {
let new_unprocessed_indexes = Self::filter_unprocessed_packets_at_end_of_slot( let new_unprocessed_indexes = Self::filter_unprocessed_packets_at_end_of_slot(
bank, bank,
&deserialized_packet_batch.unprocessed_packets, packet_batch,
&original_unprocessed_indexes,
my_pubkey, my_pubkey,
end_of_slot.next_slot_leader, end_of_slot.next_slot_leader,
banking_stage_stats, banking_stage_stats,
@ -619,7 +624,8 @@ impl BankingStage {
&working_bank, &working_bank,
&bank_creation_time, &bank_creation_time,
recorder, recorder,
&deserialized_packet_batch.unprocessed_packets, packet_batch,
original_unprocessed_indexes.to_owned(),
transaction_status_sender.clone(), transaction_status_sender.clone(),
gossip_vote_sender, gossip_vote_sender,
banking_stage_stats, banking_stage_stats,
@ -711,11 +717,14 @@ impl BankingStage {
// `original_unprocessed_indexes` must have remaining packets to process // `original_unprocessed_indexes` must have remaining packets to process
// if not yet processed. // if not yet processed.
assert!(!original_unprocessed_indexes.is_empty()); assert!(Self::packet_has_more_unprocessed_transactions(
&original_unprocessed_indexes
));
true true
} }
} }
}); });
proc_start.stop(); proc_start.stop();
debug!( debug!(
@ -1185,7 +1194,6 @@ impl BankingStage {
MAX_PROCESSING_AGE, MAX_PROCESSING_AGE,
transaction_status_sender.is_some(), transaction_status_sender.is_some(),
transaction_status_sender.is_some(), transaction_status_sender.is_some(),
transaction_status_sender.is_some(),
&mut execute_and_commit_timings.execute_timings, &mut execute_and_commit_timings.execute_timings,
) )
}, },
@ -1679,27 +1687,32 @@ impl BankingStage {
// with their packet indexes. // with their packet indexes.
#[allow(clippy::needless_collect)] #[allow(clippy::needless_collect)]
fn transactions_from_packets( fn transactions_from_packets(
deserialized_packet_batch: &HashMap<usize, DeserializedPacket>, packet_batch: &PacketBatch,
transaction_indexes: &[usize],
feature_set: &Arc<feature_set::FeatureSet>, feature_set: &Arc<feature_set::FeatureSet>,
votes_only: bool, votes_only: bool,
address_loader: impl AddressLoader, address_loader: impl AddressLoader,
) -> (Vec<SanitizedTransaction>, Vec<usize>) { ) -> (Vec<SanitizedTransaction>, Vec<usize>) {
deserialized_packet_batch transaction_indexes
.iter() .iter()
.filter_map(|(&tx_index, deserialized_packet)| { .filter_map(|tx_index| {
if votes_only && !deserialized_packet.is_simple_vote { let p = &packet_batch.packets[*tx_index];
if votes_only && !p.meta.is_simple_vote_tx() {
return None; return None;
} }
let tx: VersionedTransaction = limited_deserialize(&p.data[0..p.meta.size]).ok()?;
let message_bytes = DeserializedPacketBatch::packet_message(p)?;
let message_hash = Message::hash_raw_message(message_bytes);
let tx = SanitizedTransaction::try_create( let tx = SanitizedTransaction::try_create(
deserialized_packet.versioned_transaction.clone(), tx,
deserialized_packet.message_hash, message_hash,
Some(deserialized_packet.is_simple_vote), Some(p.meta.is_simple_vote_tx()),
address_loader.clone(), address_loader.clone(),
) )
.ok()?; .ok()?;
tx.verify_precompiles(feature_set).ok()?; tx.verify_precompiles(feature_set).ok()?;
Some((tx, tx_index)) Some((tx, *tx_index))
}) })
.unzip() .unzip()
} }
@ -1748,7 +1761,8 @@ impl BankingStage {
bank: &Arc<Bank>, bank: &Arc<Bank>,
bank_creation_time: &Instant, bank_creation_time: &Instant,
poh: &TransactionRecorder, poh: &TransactionRecorder,
deserialized_packet_batch: &HashMap<usize, DeserializedPacket>, packet_batch: &PacketBatch,
packet_indexes: Vec<usize>,
transaction_status_sender: Option<TransactionStatusSender>, transaction_status_sender: Option<TransactionStatusSender>,
gossip_vote_sender: &ReplayVoteSender, gossip_vote_sender: &ReplayVoteSender,
banking_stage_stats: &BankingStageStats, banking_stage_stats: &BankingStageStats,
@ -1759,7 +1773,8 @@ impl BankingStage {
let ((transactions, transaction_to_packet_indexes), packet_conversion_time) = Measure::this( let ((transactions, transaction_to_packet_indexes), packet_conversion_time) = Measure::this(
|_| { |_| {
Self::transactions_from_packets( Self::transactions_from_packets(
deserialized_packet_batch, packet_batch,
&packet_indexes,
&bank.feature_set, &bank.feature_set,
bank.vote_only_bank(), bank.vote_only_bank(),
bank.as_ref(), bank.as_ref(),
@ -1847,7 +1862,8 @@ impl BankingStage {
fn filter_unprocessed_packets_at_end_of_slot( fn filter_unprocessed_packets_at_end_of_slot(
bank: &Arc<Bank>, bank: &Arc<Bank>,
deserialized_packet_batch: &HashMap<usize, DeserializedPacket>, packet_batch: &PacketBatch,
transaction_indexes: &[usize],
my_pubkey: &Pubkey, my_pubkey: &Pubkey,
next_leader: Option<Pubkey>, next_leader: Option<Pubkey>,
banking_stage_stats: &BankingStageStats, banking_stage_stats: &BankingStageStats,
@ -1857,17 +1873,15 @@ impl BankingStage {
// Filtering helps if we were going to forward the packets to some other node // Filtering helps if we were going to forward the packets to some other node
if let Some(leader) = next_leader { if let Some(leader) = next_leader {
if leader == *my_pubkey { if leader == *my_pubkey {
return deserialized_packet_batch return transaction_indexes.to_vec();
.keys()
.cloned()
.collect::<Vec<usize>>();
} }
} }
let mut unprocessed_packet_conversion_time = let mut unprocessed_packet_conversion_time =
Measure::start("unprocessed_packet_conversion"); Measure::start("unprocessed_packet_conversion");
let (transactions, transaction_to_packet_indexes) = Self::transactions_from_packets( let (transactions, transaction_to_packet_indexes) = Self::transactions_from_packets(
deserialized_packet_batch, packet_batch,
transaction_indexes,
&bank.feature_set, &bank.feature_set,
bank.vote_only_bank(), bank.vote_only_bank(),
bank.as_ref(), bank.as_ref(),
@ -2012,7 +2026,7 @@ impl BankingStage {
banking_stage_stats: &mut BankingStageStats, banking_stage_stats: &mut BankingStageStats,
slot_metrics_tracker: &mut LeaderSlotMetricsTracker, slot_metrics_tracker: &mut LeaderSlotMetricsTracker,
) { ) {
if !packet_indexes.is_empty() { if Self::packet_has_more_unprocessed_transactions(&packet_indexes) {
if unprocessed_packet_batches.len() >= batch_limit { if unprocessed_packet_batches.len() >= batch_limit {
*dropped_packet_batches_count += 1; *dropped_packet_batches_count += 1;
if let Some(dropped_batch) = unprocessed_packet_batches.pop_front() { if let Some(dropped_batch) = unprocessed_packet_batches.pop_front() {
@ -2038,6 +2052,10 @@ impl BankingStage {
} }
} }
fn packet_has_more_unprocessed_transactions(packet_indexes: &[usize]) -> bool {
!packet_indexes.is_empty()
}
pub fn join(self) -> thread::Result<()> { pub fn join(self) -> thread::Result<()> {
for bank_thread_hdl in self.bank_thread_hdls { for bank_thread_hdl in self.bank_thread_hdls {
bank_thread_hdl.join()?; bank_thread_hdl.join()?;
@ -2101,7 +2119,7 @@ mod tests {
get_tmp_ledger_path_auto_delete, get_tmp_ledger_path_auto_delete,
leader_schedule_cache::LeaderScheduleCache, leader_schedule_cache::LeaderScheduleCache,
}, },
solana_perf::packet::{limited_deserialize, to_packet_batches, PacketFlags}, solana_perf::packet::{to_packet_batches, PacketFlags},
solana_poh::{ solana_poh::{
poh_recorder::{create_test_recorder, Record, WorkingBankEntry}, poh_recorder::{create_test_recorder, Record, WorkingBankEntry},
poh_service::PohService, poh_service::PohService,
@ -2121,10 +2139,7 @@ mod tests {
signature::{Keypair, Signer}, signature::{Keypair, Signer},
system_instruction::SystemError, system_instruction::SystemError,
system_transaction, system_transaction,
transaction::{ transaction::{MessageHash, SimpleAddressLoader, Transaction, TransactionError},
MessageHash, SimpleAddressLoader, Transaction, TransactionError,
VersionedTransaction,
},
}, },
solana_streamer::{recvmmsg::recv_mmsg, socket::SocketAddrSpace}, solana_streamer::{recvmmsg::recv_mmsg, socket::SocketAddrSpace},
solana_transaction_status::{TransactionStatusMeta, VersionedTransactionWithStatusMeta}, solana_transaction_status::{TransactionStatusMeta, VersionedTransactionWithStatusMeta},
@ -2152,7 +2167,6 @@ mod tests {
log_messages: None, log_messages: None,
inner_instructions: None, inner_instructions: None,
durable_nonce_fee: None, durable_nonce_fee: None,
return_data: None,
}) })
} }
@ -4231,7 +4245,7 @@ mod tests {
fn make_test_packets( fn make_test_packets(
transactions: Vec<Transaction>, transactions: Vec<Transaction>,
vote_indexes: Vec<usize>, vote_indexes: Vec<usize>,
) -> DeserializedPacketBatch { ) -> (PacketBatch, Vec<usize>) {
let capacity = transactions.len(); let capacity = transactions.len();
let mut packet_batch = PacketBatch::with_capacity(capacity); let mut packet_batch = PacketBatch::with_capacity(capacity);
let mut packet_indexes = Vec::with_capacity(capacity); let mut packet_indexes = Vec::with_capacity(capacity);
@ -4243,7 +4257,7 @@ mod tests {
for index in vote_indexes.iter() { for index in vote_indexes.iter() {
packet_batch.packets[*index].meta.flags |= PacketFlags::SIMPLE_VOTE_TX; packet_batch.packets[*index].meta.flags |= PacketFlags::SIMPLE_VOTE_TX;
} }
DeserializedPacketBatch::new(packet_batch, packet_indexes, false) (packet_batch, packet_indexes)
} }
#[test] #[test]
@ -4261,30 +4275,28 @@ mod tests {
&keypair, &keypair,
None, None,
); );
let sorted = |mut v: Vec<usize>| {
v.sort_unstable();
v
};
// packets with no votes // packets with no votes
{ {
let vote_indexes = vec![]; let vote_indexes = vec![];
let deserialized_packet_batch = let (packet_batch, packet_indexes) =
make_test_packets(vec![transfer_tx.clone(), transfer_tx.clone()], vote_indexes); make_test_packets(vec![transfer_tx.clone(), transfer_tx.clone()], vote_indexes);
let mut votes_only = false; let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&deserialized_packet_batch.unprocessed_packets, &packet_batch,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(2, txs.len()); assert_eq!(2, txs.len());
assert_eq!(vec![0, 1], sorted(tx_packet_index)); assert_eq!(vec![0, 1], tx_packet_index);
votes_only = true; votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&deserialized_packet_batch.unprocessed_packets, &packet_batch,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
@ -4296,59 +4308,63 @@ mod tests {
// packets with some votes // packets with some votes
{ {
let vote_indexes = vec![0, 2]; let vote_indexes = vec![0, 2];
let deserialized_packet_batch = make_test_packets( let (packet_batch, packet_indexes) = make_test_packets(
vec![vote_tx.clone(), transfer_tx, vote_tx.clone()], vec![vote_tx.clone(), transfer_tx, vote_tx.clone()],
vote_indexes, vote_indexes,
); );
let mut votes_only = false; let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&deserialized_packet_batch.unprocessed_packets, &packet_batch,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(3, txs.len()); assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], sorted(tx_packet_index)); assert_eq!(vec![0, 1, 2], tx_packet_index);
votes_only = true; votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&deserialized_packet_batch.unprocessed_packets, &packet_batch,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(2, txs.len()); assert_eq!(2, txs.len());
assert_eq!(vec![0, 2], sorted(tx_packet_index)); assert_eq!(vec![0, 2], tx_packet_index);
} }
// packets with all votes // packets with all votes
{ {
let vote_indexes = vec![0, 1, 2]; let vote_indexes = vec![0, 1, 2];
let deserialized_packet_batch = make_test_packets( let (packet_batch, packet_indexes) = make_test_packets(
vec![vote_tx.clone(), vote_tx.clone(), vote_tx], vec![vote_tx.clone(), vote_tx.clone(), vote_tx],
vote_indexes, vote_indexes,
); );
let mut votes_only = false; let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&deserialized_packet_batch.unprocessed_packets, &packet_batch,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(3, txs.len()); assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], sorted(tx_packet_index)); assert_eq!(vec![0, 1, 2], tx_packet_index);
votes_only = true; votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&deserialized_packet_batch.unprocessed_packets, &packet_batch,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(3, txs.len()); assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], sorted(tx_packet_index)); assert_eq!(vec![0, 1, 2], tx_packet_index);
} }
} }
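The reworked `transactions_from_packets` above walks the supplied packet indexes, skips non-vote packets when `votes_only` is set, deserializes and sanitizes what remains, and unzips the survivors into parallel transaction/index vectors — exactly what the three test blocks assert. A self-contained sketch of that filter-and-unzip shape, with a toy `Packet` and `u8` payload standing in for the real Solana types:

```rust
/// Toy stand-in for a network packet; the real code carries serialized
/// transaction bytes plus metadata flags.
struct Packet {
    is_simple_vote: bool,
    payload: u8,
}

/// Keep packets at the given indexes, dropping non-votes when `votes_only`
/// is set, and return parallel (payload, index) vectors.
fn transactions_from_packets(
    packets: &[Packet],
    indexes: &[usize],
    votes_only: bool,
) -> (Vec<u8>, Vec<usize>) {
    indexes
        .iter()
        .filter_map(|&i| {
            let p = &packets[i];
            if votes_only && !p.is_simple_vote {
                return None; // vote-only mode: discard ordinary transactions
            }
            // The real implementation deserializes and sanitizes here,
            // returning None on any failure so bad packets are filtered too.
            Some((p.payload, i))
        })
        .unzip()
}

fn main() {
    let packets = vec![
        Packet { is_simple_vote: true, payload: 10 },
        Packet { is_simple_vote: false, payload: 20 },
        Packet { is_simple_vote: true, payload: 30 },
    ];
    let (txs, kept) = transactions_from_packets(&packets, &[0, 1, 2], true);
    assert_eq!(txs, vec![10, 30]);
    assert_eq!(kept, vec![0, 2]); // matches the "packets with some votes" case
}
```

Because the index slice drives iteration, the output order follows the input order, which is why the tests can compare `tx_packet_index` against literal vectors.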


@ -10,7 +10,7 @@ use {
solana_ledger::{ancestor_iterator::AncestorIterator, blockstore::Blockstore, blockstore_db}, solana_ledger::{ancestor_iterator::AncestorIterator, blockstore::Blockstore, blockstore_db},
solana_runtime::{ solana_runtime::{
bank::Bank, bank_forks::BankForks, commitment::VOTE_THRESHOLD_SIZE, bank::Bank, bank_forks::BankForks, commitment::VOTE_THRESHOLD_SIZE,
vote_account::VoteAccountsHashMap, vote_account::VoteAccount,
}, },
solana_sdk::{ solana_sdk::{
clock::{Slot, UnixTimestamp}, clock::{Slot, UnixTimestamp},
@ -253,7 +253,7 @@ impl Tower {
pub(crate) fn collect_vote_lockouts( pub(crate) fn collect_vote_lockouts(
vote_account_pubkey: &Pubkey, vote_account_pubkey: &Pubkey,
bank_slot: Slot, bank_slot: Slot,
vote_accounts: &VoteAccountsHashMap, vote_accounts: &HashMap<Pubkey, (/*stake:*/ u64, VoteAccount)>,
ancestors: &HashMap<Slot, HashSet<Slot>>, ancestors: &HashMap<Slot, HashSet<Slot>>,
get_frozen_hash: impl Fn(Slot) -> Option<Hash>, get_frozen_hash: impl Fn(Slot) -> Option<Hash>,
latest_validator_votes_for_frozen_banks: &mut LatestValidatorVotesForFrozenBanks, latest_validator_votes_for_frozen_banks: &mut LatestValidatorVotesForFrozenBanks,
@ -636,7 +636,7 @@ impl Tower {
descendants: &HashMap<Slot, HashSet<u64>>, descendants: &HashMap<Slot, HashSet<u64>>,
progress: &ProgressMap, progress: &ProgressMap,
total_stake: u64, total_stake: u64,
epoch_vote_accounts: &VoteAccountsHashMap, epoch_vote_accounts: &HashMap<Pubkey, (u64, VoteAccount)>,
latest_validator_votes_for_frozen_banks: &LatestValidatorVotesForFrozenBanks, latest_validator_votes_for_frozen_banks: &LatestValidatorVotesForFrozenBanks,
heaviest_subtree_fork_choice: &HeaviestSubtreeForkChoice, heaviest_subtree_fork_choice: &HeaviestSubtreeForkChoice,
) -> SwitchForkDecision { ) -> SwitchForkDecision {
@ -929,7 +929,7 @@ impl Tower {
descendants: &HashMap<Slot, HashSet<u64>>, descendants: &HashMap<Slot, HashSet<u64>>,
progress: &ProgressMap, progress: &ProgressMap,
total_stake: u64, total_stake: u64,
epoch_vote_accounts: &VoteAccountsHashMap, epoch_vote_accounts: &HashMap<Pubkey, (u64, VoteAccount)>,
latest_validator_votes_for_frozen_banks: &LatestValidatorVotesForFrozenBanks, latest_validator_votes_for_frozen_banks: &LatestValidatorVotesForFrozenBanks,
heaviest_subtree_fork_choice: &HeaviestSubtreeForkChoice, heaviest_subtree_fork_choice: &HeaviestSubtreeForkChoice,
) -> SwitchForkDecision { ) -> SwitchForkDecision {
@ -1377,7 +1377,7 @@ pub mod test {
}, },
itertools::Itertools, itertools::Itertools,
solana_ledger::{blockstore::make_slot_entries, get_tmp_ledger_path}, solana_ledger::{blockstore::make_slot_entries, get_tmp_ledger_path},
solana_runtime::{bank::Bank, vote_account::VoteAccount}, solana_runtime::bank::Bank,
solana_sdk::{ solana_sdk::{
account::{Account, AccountSharedData, ReadableAccount, WritableAccount}, account::{Account, AccountSharedData, ReadableAccount, WritableAccount},
clock::Slot, clock::Slot,
@ -1398,7 +1398,7 @@ pub mod test {
trees::tr, trees::tr,
}; };
fn gen_stakes(stake_votes: &[(u64, &[u64])]) -> VoteAccountsHashMap { fn gen_stakes(stake_votes: &[(u64, &[u64])]) -> HashMap<Pubkey, (u64, VoteAccount)> {
stake_votes stake_votes
.iter() .iter()
.map(|(lamports, votes)| { .map(|(lamports, votes)| {


@ -1,8 +1,4 @@
//! The `ledger_cleanup_service` drops older ledger data to limit disk space usage. //! The `ledger_cleanup_service` drops older ledger data to limit disk space usage
//! The service works by counting the number of live data shreds in the ledger; this
//! can be done quickly and should have a fairly stable correlation to actual bytes.
//! Once the shred count (and thus roughly the byte count) reaches a threshold,
//! the services begins removing data in FIFO order.
use { use {
crossbeam_channel::{Receiver, RecvTimeoutError}, crossbeam_channel::{Receiver, RecvTimeoutError},


@ -13,9 +13,7 @@ use {
}, },
}; };
// Determines how often we report blockstore metrics under // Determines how often we report blockstore metrics.
// LedgerMetricReportService. Note that there're other blockstore
// metrics that are reported outside LedgerMetricReportService.
const BLOCKSTORE_METRICS_REPORT_PERIOD_MILLIS: u64 = 10000; const BLOCKSTORE_METRICS_REPORT_PERIOD_MILLIS: u64 = 10000;
pub struct LedgerMetricReportService { pub struct LedgerMetricReportService {


@ -37,8 +37,6 @@ pub mod optimistic_confirmation_verifier;
pub mod outstanding_requests; pub mod outstanding_requests;
pub mod packet_hasher; pub mod packet_hasher;
pub mod packet_threshold; pub mod packet_threshold;
pub mod poh_timing_report_service;
pub mod poh_timing_reporter;
pub mod progress_map; pub mod progress_map;
pub mod qos_service; pub mod qos_service;
pub mod repair_generic_traversal; pub mod repair_generic_traversal;
@ -59,7 +57,6 @@ pub mod sigverify;
pub mod sigverify_shreds; pub mod sigverify_shreds;
pub mod sigverify_stage; pub mod sigverify_stage;
pub mod snapshot_packager_service; pub mod snapshot_packager_service;
pub mod staked_nodes_updater_service;
pub mod stats_reporter_service; pub mod stats_reporter_service;
pub mod system_monitor_service; pub mod system_monitor_service;
mod tower1_7_14; mod tower1_7_14;
@ -74,7 +71,6 @@ pub mod verified_vote_packets;
pub mod vote_simulator; pub mod vote_simulator;
pub mod vote_stake_tracker; pub mod vote_stake_tracker;
pub mod voting_service; pub mod voting_service;
pub mod warm_quic_cache_service;
pub mod window_service; pub mod window_service;
#[macro_use] #[macro_use]


@ -1,87 +0,0 @@
//! PohTimingReportService module
use {
crate::poh_timing_reporter::PohTimingReporter,
solana_metrics::poh_timing_point::{PohTimingReceiver, SlotPohTimingInfo},
std::{
string::ToString,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread::{self, Builder, JoinHandle},
time::Duration,
},
};
/// Timeout to wait on the poh timing points from the channel
const POH_TIMING_RECEIVER_TIMEOUT_MILLISECONDS: u64 = 1000;
/// The `poh_timing_report_service` receives signals of relevant timing points
/// during the processing of a slot, (i.e. from blockstore and poh), aggregate and
/// report the result as datapoints.
pub struct PohTimingReportService {
t_poh_timing: JoinHandle<()>,
}
impl PohTimingReportService {
pub fn new(receiver: PohTimingReceiver, exit: Arc<AtomicBool>) -> Self {
let exit_signal = exit;
let mut poh_timing_reporter = PohTimingReporter::default();
let t_poh_timing = Builder::new()
.name("poh_timing_report".to_string())
.spawn(move || loop {
if exit_signal.load(Ordering::Relaxed) {
break;
}
if let Ok(SlotPohTimingInfo {
slot,
root_slot,
timing_point,
}) = receiver.recv_timeout(Duration::from_millis(
POH_TIMING_RECEIVER_TIMEOUT_MILLISECONDS,
)) {
poh_timing_reporter.process(slot, root_slot, timing_point);
}
})
.unwrap();
Self { t_poh_timing }
}
pub fn join(self) -> thread::Result<()> {
self.t_poh_timing.join()
}
}
#[cfg(test)]
mod test {
use {
super::*, crossbeam_channel::unbounded, solana_metrics::poh_timing_point::SlotPohTimingInfo,
};
#[test]
/// Test the life cycle of the PohTimingReportService
fn test_poh_timing_report_service() {
let (poh_timing_point_sender, poh_timing_point_receiver) = unbounded();
let exit = Arc::new(AtomicBool::new(false));
// Create the service
let poh_timing_report_service =
PohTimingReportService::new(poh_timing_point_receiver, exit.clone());
// Send SlotPohTimingPoint
let _ = poh_timing_point_sender.send(SlotPohTimingInfo::new_slot_start_poh_time_point(
42, None, 100,
));
let _ = poh_timing_point_sender.send(SlotPohTimingInfo::new_slot_end_poh_time_point(
42, None, 200,
));
let _ = poh_timing_point_sender.send(SlotPohTimingInfo::new_slot_full_poh_time_point(
42, None, 150,
));
// Shutdown the service
exit.store(true, Ordering::Relaxed);
poh_timing_report_service
.join()
.expect("poh_timing_report_service completed");
}
}
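The deleted service follows a standard shutdown-aware worker pattern: the thread loops on `recv_timeout` so an `AtomicBool` exit flag is re-checked at least once per timeout interval instead of blocking forever. A generic std-only sketch of that pattern (plain `std::sync::mpsc` in place of the crossbeam channel, toy `u64` messages):

```rust
use std::{
    sync::{
        atomic::{AtomicBool, Ordering},
        mpsc::{channel, Receiver, RecvTimeoutError},
        Arc,
    },
    thread::{self, JoinHandle},
    time::Duration,
};

/// Spawn a worker that sums received values until the exit flag is set or
/// all senders are dropped; recv_timeout bounds how long the flag can go
/// unchecked while the channel is idle.
fn spawn_worker(receiver: Receiver<u64>, exit: Arc<AtomicBool>) -> JoinHandle<u64> {
    thread::Builder::new()
        .name("worker".to_string())
        .spawn(move || {
            let mut total = 0u64;
            loop {
                if exit.load(Ordering::Relaxed) {
                    break; // shutdown requested
                }
                match receiver.recv_timeout(Duration::from_millis(100)) {
                    Ok(v) => total += v,
                    Err(RecvTimeoutError::Disconnected) => break, // all senders gone
                    Err(RecvTimeoutError::Timeout) => {} // idle: loop to re-check exit
                }
            }
            total
        })
        .unwrap()
}

fn main() {
    let (sender, receiver) = channel();
    let exit = Arc::new(AtomicBool::new(false));
    let worker = spawn_worker(receiver, Arc::clone(&exit));
    for v in [1u64, 2, 3] {
        sender.send(v).unwrap();
    }
    drop(sender); // closing the channel also stops the worker, without the flag
    assert_eq!(worker.join().unwrap(), 6);
}
```

The real service only uses the exit flag, but draining until `Disconnected` keeps this sketch deterministic for the assertion.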


@ -1,239 +0,0 @@
//! A poh_timing_reporter module implement poh timing point and timing reporter
//! structs.
use {
solana_metrics::{datapoint_info, poh_timing_point::PohTimingPoint},
solana_sdk::clock::Slot,
std::{collections::HashMap, fmt},
};
/// A SlotPohTimestamp records timing of the events during the processing of a
/// slot by the validator
#[derive(Debug, Clone, Copy, Default)]
pub struct SlotPohTimestamp {
/// Slot start time from poh
pub start_time: u64,
/// Slot end time from poh
pub end_time: u64,
/// Last shred received time from block producer
pub full_time: u64,
}
/// Display trait
impl fmt::Display for SlotPohTimestamp {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"SlotPohTimestamp: start={} end={} full={}",
self.start_time, self.end_time, self.full_time
)
}
}
impl SlotPohTimestamp {
/// Return true if the timing points of all events are received.
pub fn is_complete(&self) -> bool {
self.start_time != 0 && self.end_time != 0 && self.full_time != 0
}
/// Update with timing point
pub fn update(&mut self, timing_point: PohTimingPoint) {
match timing_point {
PohTimingPoint::PohSlotStart(ts) => self.start_time = ts,
PohTimingPoint::PohSlotEnd(ts) => self.end_time = ts,
PohTimingPoint::FullSlotReceived(ts) => self.full_time = ts,
}
}
/// Return the time difference from slot start to slot full
fn slot_start_to_full_time(&self) -> i64 {
(self.full_time as i64).saturating_sub(self.start_time as i64)
}
/// Return the time difference from slot full to slot end
fn slot_full_to_end_time(&self) -> i64 {
(self.end_time as i64).saturating_sub(self.full_time as i64)
}
/// Report PohTiming for a slot
pub fn report(&self, slot: Slot) {
datapoint_info!(
"poh_slot_timing",
("slot", slot as i64, i64),
("start_time", self.start_time as i64, i64),
("end_time", self.end_time as i64, i64),
("full_time", self.full_time as i64, i64),
(
"start_to_full_time_diff",
self.slot_start_to_full_time(),
i64
),
("full_to_end_time_diff", self.slot_full_to_end_time(), i64),
);
}
}
/// A PohTimingReporter manages and reports the timing of events for incoming
/// slots
#[derive(Default)]
pub struct PohTimingReporter {
/// Storage map of SlotPohTimestamp per slot
slot_timestamps: HashMap<Slot, SlotPohTimestamp>,
last_root_slot: Slot,
}
impl PohTimingReporter {
/// Return true if PohTiming is complete for the slot
pub fn is_complete(&self, slot: Slot) -> bool {
if let Some(slot_timestamp) = self.slot_timestamps.get(&slot) {
slot_timestamp.is_complete()
} else {
false
}
}
/// Process incoming PohTimingPoint from the channel
pub fn process(&mut self, slot: Slot, root_slot: Option<Slot>, t: PohTimingPoint) -> bool {
let slot_timestamp = self
.slot_timestamps
.entry(slot)
.or_insert_with(SlotPohTimestamp::default);
slot_timestamp.update(t);
let is_completed = slot_timestamp.is_complete();
if is_completed {
slot_timestamp.report(slot);
}
// delete slots that are older than the root_slot
if let Some(root_slot) = root_slot {
if root_slot > self.last_root_slot {
self.slot_timestamps.retain(|&k, _| k >= root_slot);
self.last_root_slot = root_slot;
}
}
is_completed
}
/// Return the count of slot_timestamps in tracking
pub fn slot_count(&self) -> usize {
self.slot_timestamps.len()
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
/// Test poh_timing_reporter
fn test_poh_timing_reporter() {
// create a reporter
let mut reporter = PohTimingReporter::default();
// process all relevant PohTimingPoints for slot 42
let complete = reporter.process(42, None, PohTimingPoint::PohSlotStart(100));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::PohSlotEnd(200));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::FullSlotReceived(150));
// assert that the PohTiming is complete
assert!(complete);
// Move root to slot 43
let root = Some(43);
// process all relevant PohTimingPoints for slot 45
let complete = reporter.process(45, None, PohTimingPoint::PohSlotStart(100));
assert!(!complete);
let complete = reporter.process(45, None, PohTimingPoint::PohSlotEnd(200));
assert!(!complete);
let complete = reporter.process(45, root, PohTimingPoint::FullSlotReceived(150));
// assert that the PohTiming is complete
assert!(complete);
// assert that only one timestamp remains in track
assert_eq!(reporter.slot_count(), 1)
}
#[test]
/// Test poh_timing_reporter
fn test_poh_timing_reporter_out_of_order() {
// create a reporter
let mut reporter = PohTimingReporter::default();
// process all relevant PohTimingPoints for slot 42/43 out of order
let mut c = 0;
// slot_start 42
c += reporter.process(42, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot_full 42
c += reporter.process(42, None, PohTimingPoint::FullSlotReceived(120)) as i32;
// slot_full 43
c += reporter.process(43, None, PohTimingPoint::FullSlotReceived(140)) as i32;
// slot_end 42
c += reporter.process(42, None, PohTimingPoint::PohSlotEnd(200)) as i32;
// slot start 43
c += reporter.process(43, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot end 43
c += reporter.process(43, None, PohTimingPoint::PohSlotEnd(200)) as i32;
// assert that both timing points are complete
assert_eq!(c, 2);
// assert that both slot timestamps are still being tracked
assert_eq!(reporter.slot_count(), 2)
}
#[test]
/// Test poh_timing_reporter when a slot never completes
fn test_poh_timing_reporter_never_complete() {
// create a reporter
let mut reporter = PohTimingReporter::default();
let mut c = 0;
// process all relevant PohTimingPoints for slot 42/43 out of order
// slot_start 42
c += reporter.process(42, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot_full 42
c += reporter.process(42, None, PohTimingPoint::FullSlotReceived(120)) as i32;
// slot_full 43
c += reporter.process(43, None, PohTimingPoint::FullSlotReceived(140)) as i32;
// skip slot 42's slot_end; jump to slot 43
// slot start 43
c += reporter.process(43, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot end 43
c += reporter.process(43, None, PohTimingPoint::PohSlotEnd(200)) as i32;
// assert that only one timing point is complete
assert_eq!(c, 1);
// assert that both slot timestamps are still being tracked
assert_eq!(reporter.slot_count(), 2)
}
#[test]
fn test_poh_timing_reporter_overflow() {
// create a reporter
let mut reporter = PohTimingReporter::default();
// process all relevant PohTimingPoints for a slot
let complete = reporter.process(42, None, PohTimingPoint::PohSlotStart(1647624609896));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::PohSlotEnd(1647624610286));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::FullSlotReceived(1647624610281));
// assert that the PohTiming is complete
assert!(complete);
}
#[test]
fn test_slot_poh_timestamp_fmt() {
let t = SlotPohTimestamp::default();
assert_eq!(format!("{}", t), "SlotPohTimestamp: start=0 end=0 full=0");
}
}
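The tracking logic exercised above (a slot completes once its start, end, and full-slot points have all arrived; entries older than an advancing root are pruned) can be sketched with plain std types. This is a minimal stand-in, not the actual `SlotPohTimestamp`/`PohTimingReporter` implementation; `SlotTimes`, `Reporter`, and `Point` are hypothetical names.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for SlotPohTimestamp: a slot is "complete"
// once all three timing points have been recorded.
#[derive(Default, Clone, Copy)]
struct SlotTimes {
    start: Option<u64>,
    end: Option<u64>,
    full: Option<u64>,
}

impl SlotTimes {
    fn is_complete(&self) -> bool {
        self.start.is_some() && self.end.is_some() && self.full.is_some()
    }
}

enum Point {
    Start(u64),
    End(u64),
    Full(u64),
}

#[derive(Default)]
struct Reporter {
    slots: BTreeMap<u64, SlotTimes>,
    last_root: u64,
}

impl Reporter {
    /// Record one timing point; returns true when the slot just completed.
    fn process(&mut self, slot: u64, root: Option<u64>, point: Point) -> bool {
        let t = self.slots.entry(slot).or_default();
        match point {
            Point::Start(ts) => t.start = Some(ts),
            Point::End(ts) => t.end = Some(ts),
            Point::Full(ts) => t.full = Some(ts),
        }
        let done = t.is_complete();
        // prune everything older than the advancing root, as in the original
        if let Some(root) = root {
            if root > self.last_root {
                self.slots.retain(|&k, _| k >= root);
                self.last_root = root;
            }
        }
        done
    }
}
```

As in the tests, points may arrive in any order; completion is a property of the per-slot entry, not of arrival order.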

View File

@ -7,7 +7,7 @@ use {
}, },
solana_ledger::blockstore_processor::{ConfirmationProgress, ConfirmationTiming}, solana_ledger::blockstore_processor::{ConfirmationProgress, ConfirmationTiming},
solana_program_runtime::timings::ExecuteTimingType, solana_program_runtime::timings::ExecuteTimingType,
solana_runtime::{bank::Bank, bank_forks::BankForks, vote_account::VoteAccountsHashMap}, solana_runtime::{bank::Bank, bank_forks::BankForks, vote_account::VoteAccount},
solana_sdk::{clock::Slot, hash::Hash, pubkey::Pubkey}, solana_sdk::{clock::Slot, hash::Hash, pubkey::Pubkey},
std::{ std::{
collections::{BTreeMap, HashMap, HashSet}, collections::{BTreeMap, HashMap, HashSet},
@ -516,7 +516,7 @@ impl PropagatedStats {
&mut self, &mut self,
node_pubkey: &Pubkey, node_pubkey: &Pubkey,
vote_account_pubkeys: &[Pubkey], vote_account_pubkeys: &[Pubkey],
epoch_vote_accounts: &VoteAccountsHashMap, epoch_vote_accounts: &HashMap<Pubkey, (u64, VoteAccount)>,
) { ) {
self.propagated_node_ids.insert(*node_pubkey); self.propagated_node_ids.insert(*node_pubkey);
for vote_account_pubkey in vote_account_pubkeys.iter() { for vote_account_pubkey in vote_account_pubkeys.iter() {
@ -695,7 +695,7 @@ impl ProgressMap {
#[cfg(test)] #[cfg(test)]
mod test { mod test {
use {super::*, solana_runtime::vote_account::VoteAccount}; use super::*;
#[test] #[test]
fn test_add_vote_pubkey() { fn test_add_vote_pubkey() {

View File

@ -1682,9 +1682,6 @@ impl ReplayStage {
blockstore blockstore
.set_dead_slot(slot) .set_dead_slot(slot)
.expect("Failed to mark slot as dead in blockstore"); .expect("Failed to mark slot as dead in blockstore");
blockstore.slots_stats.mark_dead(slot);
rpc_subscriptions.notify_slot_update(SlotUpdate::Dead { rpc_subscriptions.notify_slot_update(SlotUpdate::Dead {
slot, slot,
err: format!("error: {:?}", err), err: format!("error: {:?}", err),
@ -1791,9 +1788,6 @@ impl ReplayStage {
epoch_slots_frozen_slots, epoch_slots_frozen_slots,
drop_bank_sender, drop_bank_sender,
); );
blockstore.slots_stats.mark_rooted(new_root);
rpc_subscriptions.notify_roots(rooted_slots); rpc_subscriptions.notify_roots(rooted_slots);
if let Some(sender) = bank_notification_sender { if let Some(sender) = bank_notification_sender {
sender sender
@ -2307,7 +2301,7 @@ impl ReplayStage {
} }
} }
// send accumulated execute-timings to cost_update_service // send accumulated excute-timings to cost_update_service
if !execute_timings.details.per_program_timings.is_empty() { if !execute_timings.details.per_program_timings.is_empty() {
cost_update_sender cost_update_sender
.send(CostUpdate::ExecuteTiming { .send(CostUpdate::ExecuteTiming {
@ -2595,7 +2589,7 @@ impl ReplayStage {
*/ */
// Imagine 90% of validators voted on slot 4, but only 9% landed. If everybody that fails // Imagine 90% of validators voted on slot 4, but only 9% landed. If everybody that fails
// the switch threshold abandons slot 4 to build on slot 8 (because it's *currently* heavier), // the switch theshold abandons slot 4 to build on slot 8 (because it's *currently* heavier),
// then there will be no blocks to include the votes for slot 4, and the network halts // then there will be no blocks to include the votes for slot 4, and the network halts
// because 90% of validators can't vote // because 90% of validators can't vote
info!( info!(
@ -2937,7 +2931,6 @@ impl ReplayStage {
accounts_background_request_sender, accounts_background_request_sender,
highest_confirmed_root, highest_confirmed_root,
); );
drop_bank_sender drop_bank_sender
.send(removed_banks) .send(removed_banks)
.unwrap_or_else(|err| warn!("bank drop failed: {:?}", err)); .unwrap_or_else(|err| warn!("bank drop failed: {:?}", err));

View File

@ -240,7 +240,7 @@ fn retransmit(
epoch_fetch.stop(); epoch_fetch.stop();
stats.epoch_fetch += epoch_fetch.as_us(); stats.epoch_fetch += epoch_fetch.as_us();
let mut epoch_cache_update = Measure::start("retransmit_epoch_cache_update"); let mut epoch_cache_update = Measure::start("retransmit_epoch_cach_update");
maybe_reset_shreds_received_cache(shreds_received, hasher_reset_ts); maybe_reset_shreds_received_cache(shreds_received, hasher_reset_ts);
epoch_cache_update.stop(); epoch_cache_update.stop();
stats.epoch_cache_update += epoch_cache_update.as_us(); stats.epoch_cache_update += epoch_cache_update.as_us();

View File

@ -349,7 +349,6 @@ mod tests {
snapshots_dir, snapshots_dir,
accounts_dir, accounts_dir,
archive_format, archive_format,
snapshot_utils::VerifyBank::Deterministic,
); );
} }
} }

View File

@ -1,76 +0,0 @@
use {
solana_gossip::cluster_info::ClusterInfo,
solana_runtime::bank_forks::BankForks,
std::{
collections::HashMap,
net::IpAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
},
thread::{self, sleep, Builder, JoinHandle},
time::{Duration, Instant},
},
};
const IP_TO_STAKE_REFRESH_DURATION: Duration = Duration::from_secs(5);
pub struct StakedNodesUpdaterService {
thread_hdl: JoinHandle<()>,
}
impl StakedNodesUpdaterService {
pub fn new(
exit: Arc<AtomicBool>,
cluster_info: Arc<ClusterInfo>,
bank_forks: Arc<RwLock<BankForks>>,
shared_staked_nodes: Arc<RwLock<HashMap<IpAddr, u64>>>,
) -> Self {
let thread_hdl = Builder::new()
.name("sol-sn-updater".to_string())
.spawn(move || {
let mut last_stakes = Instant::now();
while !exit.load(Ordering::Relaxed) {
let mut new_ip_to_stake = HashMap::new();
Self::try_refresh_ip_to_stake(
&mut last_stakes,
&mut new_ip_to_stake,
&bank_forks,
&cluster_info,
);
let mut shared = shared_staked_nodes.write().unwrap();
*shared = new_ip_to_stake;
}
})
.unwrap();
Self { thread_hdl }
}
fn try_refresh_ip_to_stake(
last_stakes: &mut Instant,
ip_to_stake: &mut HashMap<IpAddr, u64>,
bank_forks: &RwLock<BankForks>,
cluster_info: &ClusterInfo,
) {
if last_stakes.elapsed() > IP_TO_STAKE_REFRESH_DURATION {
let root_bank = bank_forks.read().unwrap().root_bank();
let staked_nodes = root_bank.staked_nodes();
*ip_to_stake = cluster_info
.tvu_peers()
.into_iter()
.filter_map(|node| {
let stake = staked_nodes.get(&node.id)?;
Some((node.tvu.ip(), *stake))
})
.collect();
*last_stakes = Instant::now();
} else {
sleep(Duration::from_millis(1));
}
}
pub fn join(self) -> thread::Result<()> {
self.thread_hdl.join()
}
}
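The refresh loop in `StakedNodesUpdaterService` rebuilds the IP-to-stake map from scratch and then swaps it into the shared `RwLock` in one assignment, so readers never observe a half-built map. A minimal std-only sketch of that swap pattern (the `refresh` function and its `peers` input are hypothetical, standing in for the `tvu_peers()`/`staked_nodes()` lookup):

```rust
use std::{
    collections::HashMap,
    net::{IpAddr, Ipv4Addr},
    sync::RwLock,
};

// Hypothetical refresh step: rebuild the map off-lock, then swap it in
// under the write lock so readers always see a consistent snapshot.
fn refresh(shared: &RwLock<HashMap<IpAddr, u64>>, peers: &[(IpAddr, u64)]) {
    let new_map: HashMap<IpAddr, u64> = peers.iter().cloned().collect();
    *shared.write().unwrap() = new_map;
}
```

Holding the write lock only for the final assignment keeps contention with readers (e.g. the QUIC server checking a sender's stake) to a minimum.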

View File

@ -11,6 +11,7 @@ use {
fs::{self, File}, fs::{self, File},
io::{self, BufReader}, io::{self, BufReader},
path::PathBuf, path::PathBuf,
sync::RwLock,
}, },
}; };
@ -213,7 +214,7 @@ impl TowerStorage for FileTowerStorage {
} }
pub struct EtcdTowerStorage { pub struct EtcdTowerStorage {
client: tokio::sync::Mutex<etcd_client::Client>, client: RwLock<etcd_client::Client>,
instance_id: [u8; 8], instance_id: [u8; 8],
runtime: tokio::runtime::Runtime, runtime: tokio::runtime::Runtime,
} }
@ -259,7 +260,7 @@ impl EtcdTowerStorage {
.map_err(Self::etdc_to_tower_error)?; .map_err(Self::etdc_to_tower_error)?;
Ok(Self { Ok(Self {
client: tokio::sync::Mutex::new(client), client: RwLock::new(client),
instance_id: solana_sdk::timing::timestamp().to_le_bytes(), instance_id: solana_sdk::timing::timestamp().to_le_bytes(),
runtime, runtime,
}) })
@ -279,6 +280,7 @@ impl EtcdTowerStorage {
impl TowerStorage for EtcdTowerStorage { impl TowerStorage for EtcdTowerStorage {
fn load(&self, node_pubkey: &Pubkey) -> Result<Tower> { fn load(&self, node_pubkey: &Pubkey) -> Result<Tower> {
let (instance_key, tower_key) = Self::get_keys(node_pubkey); let (instance_key, tower_key) = Self::get_keys(node_pubkey);
let mut client = self.client.write().unwrap();
let txn = etcd_client::Txn::new().and_then(vec![etcd_client::TxnOp::put( let txn = etcd_client::Txn::new().and_then(vec![etcd_client::TxnOp::put(
instance_key.clone(), instance_key.clone(),
@ -286,7 +288,7 @@ impl TowerStorage for EtcdTowerStorage {
None, None,
)]); )]);
self.runtime self.runtime
.block_on(async { self.client.lock().await.txn(txn).await }) .block_on(async { client.txn(txn).await })
.map_err(|err| { .map_err(|err| {
error!("Failed to acquire etcd instance lock: {}", err); error!("Failed to acquire etcd instance lock: {}", err);
Self::etdc_to_tower_error(err) Self::etdc_to_tower_error(err)
@ -302,7 +304,7 @@ impl TowerStorage for EtcdTowerStorage {
let response = self let response = self
.runtime .runtime
.block_on(async { self.client.lock().await.txn(txn).await }) .block_on(async { client.txn(txn).await })
.map_err(|err| { .map_err(|err| {
error!("Failed to read etcd saved tower: {}", err); error!("Failed to read etcd saved tower: {}", err);
Self::etdc_to_tower_error(err) Self::etdc_to_tower_error(err)
@ -334,6 +336,7 @@ impl TowerStorage for EtcdTowerStorage {
fn store(&self, saved_tower: &SavedTowerVersions) -> Result<()> { fn store(&self, saved_tower: &SavedTowerVersions) -> Result<()> {
let (instance_key, tower_key) = Self::get_keys(&saved_tower.pubkey()); let (instance_key, tower_key) = Self::get_keys(&saved_tower.pubkey());
let mut client = self.client.write().unwrap();
let txn = etcd_client::Txn::new() let txn = etcd_client::Txn::new()
.when(vec![etcd_client::Compare::value( .when(vec![etcd_client::Compare::value(
@ -349,7 +352,7 @@ impl TowerStorage for EtcdTowerStorage {
let response = self let response = self
.runtime .runtime
.block_on(async { self.client.lock().await.txn(txn).await }) .block_on(async { client.txn(txn).await })
.map_err(|err| { .map_err(|err| {
error!("Failed to write etcd saved tower: {}", err); error!("Failed to write etcd saved tower: {}", err);
err err

View File

@ -13,9 +13,8 @@ use {
find_packet_sender_stake_stage::FindPacketSenderStakeStage, find_packet_sender_stake_stage::FindPacketSenderStakeStage,
sigverify::TransactionSigVerifier, sigverify::TransactionSigVerifier,
sigverify_stage::SigVerifyStage, sigverify_stage::SigVerifyStage,
staked_nodes_updater_service::StakedNodesUpdaterService,
}, },
crossbeam_channel::{bounded, unbounded, Receiver, RecvTimeoutError}, crossbeam_channel::{unbounded, Receiver},
solana_gossip::cluster_info::ClusterInfo, solana_gossip::cluster_info::ClusterInfo,
solana_ledger::{blockstore::Blockstore, blockstore_processor::TransactionStatusSender}, solana_ledger::{blockstore::Blockstore, blockstore_processor::TransactionStatusSender},
solana_poh::poh_recorder::{PohRecorder, WorkingBankEntry}, solana_poh::poh_recorder::{PohRecorder, WorkingBankEntry},
@ -29,21 +28,15 @@ use {
vote_sender_types::{ReplayVoteReceiver, ReplayVoteSender}, vote_sender_types::{ReplayVoteReceiver, ReplayVoteSender},
}, },
solana_sdk::signature::Keypair, solana_sdk::signature::Keypair,
solana_streamer::quic::{spawn_server, MAX_STAKED_CONNECTIONS, MAX_UNSTAKED_CONNECTIONS},
std::{ std::{
collections::HashMap,
net::UdpSocket, net::UdpSocket,
sync::{atomic::AtomicBool, Arc, Mutex, RwLock}, sync::{atomic::AtomicBool, Arc, Mutex, RwLock},
thread, thread,
time::Duration,
}, },
}; };
pub const DEFAULT_TPU_COALESCE_MS: u64 = 5; pub const DEFAULT_TPU_COALESCE_MS: u64 = 5;
/// Timeout interval when joining threads during TPU close
const TPU_THREADS_JOIN_TIMEOUT_SECONDS: u64 = 10;
// allow multiple connections for NAT and any open/close overlap // allow multiple connections for NAT and any open/close overlap
pub const MAX_QUIC_CONNECTIONS_PER_IP: usize = 8; pub const MAX_QUIC_CONNECTIONS_PER_IP: usize = 8;
@ -65,7 +58,6 @@ pub struct Tpu {
tpu_quic_t: thread::JoinHandle<()>, tpu_quic_t: thread::JoinHandle<()>,
find_packet_sender_stake_stage: FindPacketSenderStakeStage, find_packet_sender_stake_stage: FindPacketSenderStakeStage,
vote_find_packet_sender_stake_stage: FindPacketSenderStakeStage, vote_find_packet_sender_stake_stage: FindPacketSenderStakeStage,
staked_nodes_updater_service: StakedNodesUpdaterService,
} }
impl Tpu { impl Tpu {
@ -136,23 +128,13 @@ impl Tpu {
let (verified_sender, verified_receiver) = unbounded(); let (verified_sender, verified_receiver) = unbounded();
let staked_nodes = Arc::new(RwLock::new(HashMap::new())); let tpu_quic_t = solana_streamer::quic::spawn_server(
let staked_nodes_updater_service = StakedNodesUpdaterService::new(
exit.clone(),
cluster_info.clone(),
bank_forks.clone(),
staked_nodes.clone(),
);
let tpu_quic_t = spawn_server(
transactions_quic_sockets, transactions_quic_sockets,
keypair, keypair,
cluster_info.my_contact_info().tpu.ip(), cluster_info.my_contact_info().tpu.ip(),
packet_sender, packet_sender,
exit.clone(), exit.clone(),
MAX_QUIC_CONNECTIONS_PER_IP, MAX_QUIC_CONNECTIONS_PER_IP,
staked_nodes,
MAX_STAKED_CONNECTIONS,
MAX_UNSTAKED_CONNECTIONS,
) )
.unwrap(); .unwrap();
@ -222,27 +204,10 @@ impl Tpu {
tpu_quic_t, tpu_quic_t,
find_packet_sender_stake_stage, find_packet_sender_stake_stage,
vote_find_packet_sender_stake_stage, vote_find_packet_sender_stake_stage,
staked_nodes_updater_service,
} }
} }
pub fn join(self) -> thread::Result<()> { pub fn join(self) -> thread::Result<()> {
// spawn a new thread to wait for tpu close
let (sender, receiver) = bounded(0);
let _ = thread::spawn(move || {
let _ = self.do_join();
sender.send(()).unwrap();
});
// exit can deadlock. put an upper-bound on how long we wait for it
let timeout = Duration::from_secs(TPU_THREADS_JOIN_TIMEOUT_SECONDS);
if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
error!("timeout for closing tvu");
}
Ok(())
}
fn do_join(self) -> thread::Result<()> {
let results = vec![ let results = vec![
self.fetch_stage.join(), self.fetch_stage.join(),
self.sigverify_stage.join(), self.sigverify_stage.join(),
@ -251,7 +216,6 @@ impl Tpu {
self.banking_stage.join(), self.banking_stage.join(),
self.find_packet_sender_stake_stage.join(), self.find_packet_sender_stake_stage.join(),
self.vote_find_packet_sender_stake_stage.join(), self.vote_find_packet_sender_stake_stage.join(),
self.staked_nodes_updater_service.join(),
]; ];
self.tpu_quic_t.join()?; self.tpu_quic_t.join()?;
let broadcast_result = self.broadcast_stage.join(); let broadcast_result = self.broadcast_stage.join();
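The `Tpu::join` variant on one side of this diff bounds how long shutdown can block: the real join runs on a helper thread, and the caller waits on a zero-capacity channel with a deadline. That shape can be sketched with std's `mpsc` (the original uses crossbeam's `bounded(0)`; `join_with_timeout` is a hypothetical name):

```rust
use std::{sync::mpsc, thread, time::Duration};

// Sketch of a bounded-time join: run the (possibly deadlocking) work on a
// helper thread and give up after `timeout` instead of blocking forever.
// Returns true if the work finished in time.
fn join_with_timeout<F: FnOnce() + Send + 'static>(work: F, timeout: Duration) -> bool {
    let (sender, receiver) = mpsc::sync_channel(0);
    thread::spawn(move || {
        work();
        // The receiver may already be gone if we timed out; ignore the error.
        let _ = sender.send(());
    });
    receiver.recv_timeout(timeout).is_ok()
}
```

On timeout the helper thread is simply abandoned, which is why the original logs an error rather than returning one: the process is shutting down anyway, and a hung join must not wedge it.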

View File

@ -25,9 +25,8 @@ use {
sigverify_stage::SigVerifyStage, sigverify_stage::SigVerifyStage,
tower_storage::TowerStorage, tower_storage::TowerStorage,
voting_service::VotingService, voting_service::VotingService,
warm_quic_cache_service::WarmQuicCacheService,
}, },
crossbeam_channel::{bounded, unbounded, Receiver, RecvTimeoutError}, crossbeam_channel::{unbounded, Receiver},
solana_geyser_plugin_manager::block_metadata_notifier_interface::BlockMetadataNotifierLock, solana_geyser_plugin_manager::block_metadata_notifier_interface::BlockMetadataNotifierLock,
solana_gossip::cluster_info::ClusterInfo, solana_gossip::cluster_info::ClusterInfo,
solana_ledger::{ solana_ledger::{
@ -49,7 +48,9 @@ use {
commitment::BlockCommitmentCache, commitment::BlockCommitmentCache,
cost_model::CostModel, cost_model::CostModel,
snapshot_config::SnapshotConfig, snapshot_config::SnapshotConfig,
snapshot_package::{PendingAccountsPackage, PendingSnapshotPackage}, snapshot_package::{
AccountsPackageReceiver, AccountsPackageSender, PendingSnapshotPackage,
},
transaction_cost_metrics_sender::{ transaction_cost_metrics_sender::{
TransactionCostMetricsSender, TransactionCostMetricsService, TransactionCostMetricsSender, TransactionCostMetricsService,
}, },
@ -61,13 +62,9 @@ use {
net::UdpSocket, net::UdpSocket,
sync::{atomic::AtomicBool, Arc, Mutex, RwLock}, sync::{atomic::AtomicBool, Arc, Mutex, RwLock},
thread, thread,
time::Duration,
}, },
}; };
/// Timeout interval when joining threads during TVU close
const TVU_THREADS_JOIN_TIMEOUT_SECONDS: u64 = 10;
pub struct Tvu { pub struct Tvu {
fetch_stage: ShredFetchStage, fetch_stage: ShredFetchStage,
sigverify_stage: SigVerifyStage, sigverify_stage: SigVerifyStage,
@ -79,7 +76,6 @@ pub struct Tvu {
accounts_hash_verifier: AccountsHashVerifier, accounts_hash_verifier: AccountsHashVerifier,
cost_update_service: CostUpdateService, cost_update_service: CostUpdateService,
voting_service: VotingService, voting_service: VotingService,
warm_quic_cache_service: WarmQuicCacheService,
drop_bank_service: DropBankService, drop_bank_service: DropBankService,
transaction_cost_metrics_service: TransactionCostMetricsService, transaction_cost_metrics_service: TransactionCostMetricsService,
} }
@ -102,6 +98,7 @@ pub struct TvuConfig {
pub accounts_hash_fault_injection_slots: u64, pub accounts_hash_fault_injection_slots: u64,
pub accounts_db_caching_enabled: bool, pub accounts_db_caching_enabled: bool,
pub test_hash_calculation: bool, pub test_hash_calculation: bool,
pub use_index_hash_calculation: bool,
pub rocksdb_compaction_interval: Option<u64>, pub rocksdb_compaction_interval: Option<u64>,
pub rocksdb_max_compaction_jitter: Option<u64>, pub rocksdb_max_compaction_jitter: Option<u64>,
pub wait_for_vote_to_start_leader: bool, pub wait_for_vote_to_start_leader: bool,
@ -147,7 +144,7 @@ impl Tvu {
tvu_config: TvuConfig, tvu_config: TvuConfig,
max_slots: &Arc<MaxSlots>, max_slots: &Arc<MaxSlots>,
cost_model: &Arc<RwLock<CostModel>>, cost_model: &Arc<RwLock<CostModel>>,
pending_accounts_package: PendingAccountsPackage, accounts_package_channel: (AccountsPackageSender, AccountsPackageReceiver),
last_full_snapshot_slot: Option<Slot>, last_full_snapshot_slot: Option<Slot>,
block_metadata_notifier: Option<BlockMetadataNotifierLock>, block_metadata_notifier: Option<BlockMetadataNotifierLock>,
wait_to_vote_slot: Option<Slot>, wait_to_vote_slot: Option<Slot>,
@ -224,8 +221,9 @@ impl Tvu {
(Some(snapshot_config), Some(pending_snapshot_package)) (Some(snapshot_config), Some(pending_snapshot_package))
}) })
.unwrap_or((None, None)); .unwrap_or((None, None));
let (accounts_package_sender, accounts_package_receiver) = accounts_package_channel;
let accounts_hash_verifier = AccountsHashVerifier::new( let accounts_hash_verifier = AccountsHashVerifier::new(
Arc::clone(&pending_accounts_package), accounts_package_receiver,
pending_snapshot_package, pending_snapshot_package,
exit, exit,
cluster_info, cluster_info,
@ -233,6 +231,7 @@ impl Tvu {
tvu_config.halt_on_known_validators_accounts_hash_mismatch, tvu_config.halt_on_known_validators_accounts_hash_mismatch,
tvu_config.accounts_hash_fault_injection_slots, tvu_config.accounts_hash_fault_injection_slots,
snapshot_config.clone(), snapshot_config.clone(),
blockstore.ledger_path().to_path_buf(),
); );
let (snapshot_request_sender, snapshot_request_handler) = match snapshot_config { let (snapshot_request_sender, snapshot_request_handler) = match snapshot_config {
@ -244,7 +243,7 @@ impl Tvu {
Some(SnapshotRequestHandler { Some(SnapshotRequestHandler {
snapshot_config, snapshot_config,
snapshot_request_receiver, snapshot_request_receiver,
pending_accounts_package, accounts_package_sender,
}), }),
) )
} }
@ -285,9 +284,6 @@ impl Tvu {
bank_forks.clone(), bank_forks.clone(),
); );
let warm_quic_cache_service =
WarmQuicCacheService::new(cluster_info.clone(), poh_recorder.clone(), exit.clone());
let (cost_update_sender, cost_update_receiver) = unbounded(); let (cost_update_sender, cost_update_receiver) = unbounded();
let cost_update_service = let cost_update_service =
CostUpdateService::new(blockstore.clone(), cost_model.clone(), cost_update_receiver); CostUpdateService::new(blockstore.clone(), cost_model.clone(), cost_update_receiver);
@ -347,6 +343,7 @@ impl Tvu {
accounts_background_request_handler, accounts_background_request_handler,
tvu_config.accounts_db_caching_enabled, tvu_config.accounts_db_caching_enabled,
tvu_config.test_hash_calculation, tvu_config.test_hash_calculation,
tvu_config.use_index_hash_calculation,
last_full_snapshot_slot, last_full_snapshot_slot,
); );
@ -361,29 +358,12 @@ impl Tvu {
accounts_hash_verifier, accounts_hash_verifier,
cost_update_service, cost_update_service,
voting_service, voting_service,
warm_quic_cache_service,
drop_bank_service, drop_bank_service,
transaction_cost_metrics_service, transaction_cost_metrics_service,
} }
} }
pub fn join(self) -> thread::Result<()> { pub fn join(self) -> thread::Result<()> {
// spawn a new thread to wait for tvu close
let (sender, receiver) = bounded(0);
let _ = thread::spawn(move || {
let _ = self.do_join();
sender.send(()).unwrap();
});
// exit can deadlock. put an upper-bound on how long we wait for it
let timeout = Duration::from_secs(TVU_THREADS_JOIN_TIMEOUT_SECONDS);
if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
error!("timeout for closing tvu");
}
Ok(())
}
fn do_join(self) -> thread::Result<()> {
self.retransmit_stage.join()?; self.retransmit_stage.join()?;
self.fetch_stage.join()?; self.fetch_stage.join()?;
self.sigverify_stage.join()?; self.sigverify_stage.join()?;
@ -396,7 +376,6 @@ impl Tvu {
self.accounts_hash_verifier.join()?; self.accounts_hash_verifier.join()?;
self.cost_update_service.join()?; self.cost_update_service.join()?;
self.voting_service.join()?; self.voting_service.join()?;
self.warm_quic_cache_service.join()?;
self.drop_bank_service.join()?; self.drop_bank_service.join()?;
self.transaction_cost_metrics_service.join()?; self.transaction_cost_metrics_service.join()?;
Ok(()) Ok(())
@ -468,6 +447,7 @@ pub mod tests {
let (_, gossip_confirmed_slots_receiver) = unbounded(); let (_, gossip_confirmed_slots_receiver) = unbounded();
let bank_forks = Arc::new(RwLock::new(bank_forks)); let bank_forks = Arc::new(RwLock::new(bank_forks));
let tower = Tower::default(); let tower = Tower::default();
let accounts_package_channel = unbounded();
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default()); let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
let (_pruned_banks_sender, pruned_banks_receiver) = unbounded(); let (_pruned_banks_sender, pruned_banks_receiver) = unbounded();
let tvu = Tvu::new( let tvu = Tvu::new(
@ -515,7 +495,7 @@ pub mod tests {
TvuConfig::default(), TvuConfig::default(),
&Arc::new(MaxSlots::default()), &Arc::new(MaxSlots::default()),
&Arc::new(RwLock::new(CostModel::default())), &Arc::new(RwLock::new(CostModel::default())),
PendingAccountsPackage::default(), accounts_package_channel,
None, None,
None, None,
None, None,

View File

@ -15,9 +15,14 @@ use {
/// SanitizedTransaction /// SanitizedTransaction
#[derive(Debug, Default)] #[derive(Debug, Default)]
pub struct DeserializedPacket { pub struct DeserializedPacket {
pub versioned_transaction: VersionedTransaction, #[allow(dead_code)]
pub message_hash: Hash, versioned_transaction: VersionedTransaction,
pub is_simple_vote: bool,
#[allow(dead_code)]
message_hash: Hash,
#[allow(dead_code)]
is_simple_vote: bool,
} }
/// Defines the type of entry in `UnprocessedPacketBatches`, it holds original packet_batch /// Defines the type of entry in `UnprocessedPacketBatches`, it holds original packet_batch

View File

@ -8,7 +8,6 @@ use {
cluster_info_vote_listener::VoteTracker, cluster_info_vote_listener::VoteTracker,
completed_data_sets_service::CompletedDataSetsService, completed_data_sets_service::CompletedDataSetsService,
consensus::{reconcile_blockstore_roots_with_tower, Tower}, consensus::{reconcile_blockstore_roots_with_tower, Tower},
poh_timing_report_service::PohTimingReportService,
rewards_recorder_service::{RewardsRecorderSender, RewardsRecorderService}, rewards_recorder_service::{RewardsRecorderSender, RewardsRecorderService},
sample_performance_service::SamplePerformanceService, sample_performance_service::SamplePerformanceService,
serve_repair::ServeRepair, serve_repair::ServeRepair,
@ -45,11 +44,15 @@ use {
leader_schedule_cache::LeaderScheduleCache, leader_schedule_cache::LeaderScheduleCache,
}, },
solana_measure::measure::Measure, solana_measure::measure::Measure,
solana_metrics::{datapoint_info, poh_timing_point::PohTimingSender}, solana_metrics::datapoint_info,
solana_poh::{ solana_poh::{
poh_recorder::{PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS}, poh_recorder::{PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS},
poh_service::{self, PohService}, poh_service::{self, PohService},
}, },
solana_replica_lib::{
accountsdb_repl_server::{AccountsDbReplService, AccountsDbReplServiceConfig},
accountsdb_repl_server_factory,
},
solana_rpc::{ solana_rpc::{
max_slots::MaxSlots, max_slots::MaxSlots,
optimistically_confirmed_bank_tracker::{ optimistically_confirmed_bank_tracker::{
@ -73,11 +76,10 @@ use {
commitment::BlockCommitmentCache, commitment::BlockCommitmentCache,
cost_model::CostModel, cost_model::CostModel,
hardened_unpack::{open_genesis_config, MAX_GENESIS_ARCHIVE_UNPACKED_SIZE}, hardened_unpack::{open_genesis_config, MAX_GENESIS_ARCHIVE_UNPACKED_SIZE},
runtime_config::RuntimeConfig,
snapshot_archive_info::SnapshotArchiveInfoGetter, snapshot_archive_info::SnapshotArchiveInfoGetter,
snapshot_config::SnapshotConfig, snapshot_config::SnapshotConfig,
snapshot_hash::StartingSnapshotHashes, snapshot_hash::StartingSnapshotHashes,
snapshot_package::{PendingAccountsPackage, PendingSnapshotPackage}, snapshot_package::{AccountsPackageSender, PendingSnapshotPackage},
snapshot_utils, snapshot_utils,
}, },
solana_sdk::{ solana_sdk::{
@ -111,6 +113,7 @@ const MAX_COMPLETED_DATA_SETS_IN_CHANNEL: usize = 100_000;
const WAIT_FOR_SUPERMAJORITY_THRESHOLD_PERCENT: u64 = 80; const WAIT_FOR_SUPERMAJORITY_THRESHOLD_PERCENT: u64 = 80;
pub struct ValidatorConfig { pub struct ValidatorConfig {
pub dev_halt_at_slot: Option<Slot>,
pub expected_genesis_hash: Option<Hash>, pub expected_genesis_hash: Option<Hash>,
pub expected_bank_hash: Option<Hash>, pub expected_bank_hash: Option<Hash>,
pub expected_shred_version: Option<u16>, pub expected_shred_version: Option<u16>,
@ -118,6 +121,7 @@ pub struct ValidatorConfig {
pub account_paths: Vec<PathBuf>, pub account_paths: Vec<PathBuf>,
pub account_shrink_paths: Option<Vec<PathBuf>>, pub account_shrink_paths: Option<Vec<PathBuf>>,
pub rpc_config: JsonRpcConfig, pub rpc_config: JsonRpcConfig,
pub accountsdb_repl_service_config: Option<AccountsDbReplServiceConfig>,
pub geyser_plugin_config_files: Option<Vec<PathBuf>>, pub geyser_plugin_config_files: Option<Vec<PathBuf>>,
pub rpc_addrs: Option<(SocketAddr, SocketAddr)>, // (JsonRpc, JsonRpcPubSub) pub rpc_addrs: Option<(SocketAddr, SocketAddr)>, // (JsonRpc, JsonRpcPubSub)
pub pubsub_config: PubSubConfig, pub pubsub_config: PubSubConfig,
@ -146,6 +150,7 @@ pub struct ValidatorConfig {
pub debug_keys: Option<Arc<HashSet<Pubkey>>>, pub debug_keys: Option<Arc<HashSet<Pubkey>>>,
pub contact_debug_interval: u64, pub contact_debug_interval: u64,
pub contact_save_interval: u64, pub contact_save_interval: u64,
pub bpf_jit: bool,
pub send_transaction_service_config: send_transaction_service::Config, pub send_transaction_service_config: send_transaction_service::Config,
pub no_poh_speed_test: bool, pub no_poh_speed_test: bool,
pub no_os_memory_stats_reporting: bool, pub no_os_memory_stats_reporting: bool,
@ -158,18 +163,19 @@ pub struct ValidatorConfig {
pub warp_slot: Option<Slot>, pub warp_slot: Option<Slot>,
pub accounts_db_test_hash_calculation: bool, pub accounts_db_test_hash_calculation: bool,
pub accounts_db_skip_shrink: bool, pub accounts_db_skip_shrink: bool,
pub accounts_db_use_index_hash_calculation: bool,
pub tpu_coalesce_ms: u64, pub tpu_coalesce_ms: u64,
pub validator_exit: Arc<RwLock<Exit>>, pub validator_exit: Arc<RwLock<Exit>>,
pub no_wait_for_vote_to_start_leader: bool, pub no_wait_for_vote_to_start_leader: bool,
pub accounts_shrink_ratio: AccountShrinkThreshold, pub accounts_shrink_ratio: AccountShrinkThreshold,
pub wait_to_vote_slot: Option<Slot>, pub wait_to_vote_slot: Option<Slot>,
pub ledger_column_options: LedgerColumnOptions, pub ledger_column_options: LedgerColumnOptions,
pub runtime_config: RuntimeConfig,
} }
impl Default for ValidatorConfig { impl Default for ValidatorConfig {
fn default() -> Self { fn default() -> Self {
Self { Self {
dev_halt_at_slot: None,
expected_genesis_hash: None, expected_genesis_hash: None,
expected_bank_hash: None, expected_bank_hash: None,
expected_shred_version: None, expected_shred_version: None,
@ -178,6 +184,7 @@ impl Default for ValidatorConfig {
account_paths: Vec::new(), account_paths: Vec::new(),
account_shrink_paths: None, account_shrink_paths: None,
rpc_config: JsonRpcConfig::default(), rpc_config: JsonRpcConfig::default(),
accountsdb_repl_service_config: None,
             geyser_plugin_config_files: None,
             rpc_addrs: None,
             pubsub_config: PubSubConfig::default(),
@@ -205,6 +212,7 @@ impl Default for ValidatorConfig {
             debug_keys: None,
             contact_debug_interval: DEFAULT_CONTACT_DEBUG_INTERVAL_MILLIS,
             contact_save_interval: DEFAULT_CONTACT_SAVE_INTERVAL_MILLIS,
+            bpf_jit: false,
             send_transaction_service_config: send_transaction_service::Config::default(),
             no_poh_speed_test: true,
             no_os_memory_stats_reporting: true,
@@ -216,6 +224,7 @@ impl Default for ValidatorConfig {
             warp_slot: None,
             accounts_db_test_hash_calculation: false,
             accounts_db_skip_shrink: false,
+            accounts_db_use_index_hash_calculation: true,
             tpu_coalesce_ms: DEFAULT_TPU_COALESCE_MS,
             validator_exit: Arc::new(RwLock::new(Exit::default())),
             no_wait_for_vote_to_start_leader: true,
@@ -223,7 +232,6 @@ impl Default for ValidatorConfig {
             accounts_db_config: None,
             wait_to_vote_slot: None,
             ledger_column_options: LedgerColumnOptions::default(),
-            runtime_config: RuntimeConfig::default(),
         }
     }
 }
@@ -231,7 +239,6 @@ impl Default for ValidatorConfig {
 impl ValidatorConfig {
     pub fn default_for_test() -> Self {
         Self {
-            enforce_ulimit_nofile: false,
             rpc_config: JsonRpcConfig::default_for_test(),
             ..Self::default()
         }
@@ -319,7 +326,6 @@ pub struct Validator {
     cache_block_meta_service: Option<CacheBlockMetaService>,
     system_monitor_service: Option<SystemMonitorService>,
     sample_performance_service: Option<SamplePerformanceService>,
-    poh_timing_report_service: PohTimingReportService,
     stats_reporter_service: StatsReporterService,
     gossip_service: GossipService,
     serve_repair_service: ServeRepairService,
@@ -332,7 +338,7 @@ pub struct Validator {
     ip_echo_server: Option<solana_net_utils::IpEchoServer>,
     pub cluster_info: Arc<ClusterInfo>,
     pub bank_forks: Arc<RwLock<BankForks>>,
-    pub blockstore: Arc<Blockstore>,
+    accountsdb_repl_service: Option<AccountsDbReplService>,
     geyser_plugin_service: Option<GeyserPluginService>,
 }
@@ -453,6 +459,8 @@
                 .register_exit(Box::new(move || exit.store(true, Ordering::Relaxed)));
         }
+        let accounts_package_channel = unbounded();
         let accounts_update_notifier = geyser_plugin_service
             .as_ref()
             .and_then(|geyser_plugin_service| geyser_plugin_service.get_accounts_update_notifier());
@@ -477,10 +485,6 @@
             !config.no_os_network_stats_reporting,
         ));
-        let (poh_timing_point_sender, poh_timing_point_receiver) = unbounded();
-        let poh_timing_report_service =
-            PohTimingReportService::new(poh_timing_point_receiver, exit.clone());
         let (
             genesis_config,
             mut bank_forks,
@@ -508,10 +512,8 @@
             &start_progress,
             accounts_update_notifier,
             transaction_notifier,
-            Some(poh_timing_point_sender.clone()),
         );
-        let pending_accounts_package = PendingAccountsPackage::default();
         let last_full_snapshot_slot = process_blockstore(
             &blockstore,
             &mut bank_forks,
@@ -520,7 +522,7 @@
             transaction_status_sender.as_ref(),
             cache_block_meta_sender.as_ref(),
             config.snapshot_config.as_ref(),
-            Arc::clone(&pending_accounts_package),
+            accounts_package_channel.0.clone(),
             blockstore_root_scan,
             pruned_banks_receiver.clone(),
         );
@@ -528,6 +530,7 @@
             last_full_snapshot_slot.or_else(|| starting_snapshot_hashes.map(|x| x.full.hash.0));
         maybe_warp_slot(config, ledger_path, &mut bank_forks, &leader_schedule_cache);
         let tower = {
             let restored_tower = Tower::restore(config.tower_storage.as_ref(), &id);
             if let Ok(tower) = &restored_tower {
@@ -652,10 +655,9 @@
             bank.ticks_per_slot(),
             &id,
             &blockstore,
-            blockstore.get_new_shred_signal(0),
+            blockstore.new_shreds_signals.first().cloned(),
             &leader_schedule_cache,
             &poh_config,
-            Some(poh_timing_point_sender),
             exit.clone(),
         );
         let poh_recorder = Arc::new(Mutex::new(poh_recorder));
@@ -666,6 +668,7 @@
             pubsub_service,
             optimistically_confirmed_bank_tracker,
             bank_notification_sender,
+            accountsdb_repl_service,
         ) = if let Some((rpc_addr, rpc_pubsub_addr)) = config.rpc_addrs {
             if ContactInfo::is_valid_address(&node.info.rpc, &socket_addr_space) {
                 assert!(ContactInfo::is_valid_address(
@@ -679,6 +682,13 @@
                 ));
             }
+            let accountsdb_repl_service = config.accountsdb_repl_service_config.as_ref().map(|accountsdb_repl_service_config| {
+                let (bank_notification_sender, bank_notification_receiver) = unbounded();
+                bank_notification_senders.push(bank_notification_sender);
+                accountsdb_repl_server_factory::AccountsDbReplServerFactory::build_accountsdb_repl_server(
+                    accountsdb_repl_service_config.clone(), bank_notification_receiver, bank_forks.clone())
+            });
+
             let (bank_notification_sender, bank_notification_receiver) = unbounded();
             let confirmed_bank_subscribers = if !bank_notification_senders.is_empty() {
                 Some(Arc::new(RwLock::new(bank_notification_senders)))
@@ -731,12 +741,13 @@
                     confirmed_bank_subscribers,
                 )),
                 Some(bank_notification_sender),
+                accountsdb_repl_service,
             )
         } else {
-            (None, None, None, None)
+            (None, None, None, None, None)
         };
-        if config.runtime_config.dev_halt_at_slot.is_some() {
+        if config.dev_halt_at_slot.is_some() {
             // Simulate a confirmed root to avoid RPC errors with CommitmentConfig::finalized() and
             // to ensure RPC endpoints like getConfirmedBlock, which require a confirmed root, work
             block_commitment_cache
@@ -798,7 +809,8 @@
         let enable_gossip_push = config
             .accounts_db_config
             .as_ref()
-            .map(|config| config.filler_accounts_config.count == 0)
+            .and_then(|config| config.filler_account_count)
+            .map(|count| count == 0)
             .unwrap_or(true);
         let snapshot_packager_service = SnapshotPackagerService::new(
@@ -842,7 +854,7 @@
             record_receiver,
         );
         assert_eq!(
-            blockstore.get_new_shred_signals_len(),
+            blockstore.new_shreds_signals.len(),
             1,
             "New shred signal for the TVU should be the same as the clear bank signal."
         );
@@ -858,11 +870,8 @@
         let (gossip_verified_vote_hash_sender, gossip_verified_vote_hash_receiver) = unbounded();
         let (cluster_confirmed_slot_sender, cluster_confirmed_slot_receiver) = unbounded();
-        let rpc_completed_slots_service = RpcCompletedSlotsService::spawn(
-            completed_slots_receiver,
-            rpc_subscriptions.clone(),
-            exit.clone(),
-        );
+        let rpc_completed_slots_service =
+            RpcCompletedSlotsService::spawn(completed_slots_receiver, rpc_subscriptions.clone());
         let (replay_vote_sender, replay_vote_receiver) = unbounded();
         let tvu = Tvu::new(
@@ -909,6 +918,7 @@
                 accounts_hash_fault_injection_slots: config.accounts_hash_fault_injection_slots,
                 accounts_db_caching_enabled: config.accounts_db_caching_enabled,
                 test_hash_calculation: config.accounts_db_test_hash_calculation,
+                use_index_hash_calculation: config.accounts_db_use_index_hash_calculation,
                 rocksdb_compaction_interval: config.rocksdb_compaction_interval,
                 rocksdb_max_compaction_jitter: config.rocksdb_compaction_interval,
                 wait_for_vote_to_start_leader,
@@ -916,7 +926,7 @@
             },
             &max_slots,
             &cost_model,
-            pending_accounts_package,
+            accounts_package_channel,
             last_full_snapshot_slot,
             block_metadata_notifier,
             config.wait_to_vote_slot,
@@ -970,7 +980,6 @@
             cache_block_meta_service,
             system_monitor_service,
             sample_performance_service,
-            poh_timing_report_service,
             snapshot_packager_service,
             completed_data_sets_service,
             tpu,
@@ -981,7 +990,7 @@
             validator_exit: config.validator_exit.clone(),
             cluster_info,
             bank_forks,
-            blockstore: blockstore.clone(),
+            accountsdb_repl_service,
             geyser_plugin_service,
         }
     }
@@ -989,9 +998,6 @@
     // Used for notifying many nodes in parallel to exit
     pub fn exit(&mut self) {
         self.validator_exit.write().unwrap().exit();
-        // drop all signals in blockstore
-        self.blockstore.drop_signal();
     }

     pub fn close(mut self) {
@@ -1101,13 +1107,15 @@
             ip_echo_server.shutdown_background();
         }
+        if let Some(accountsdb_repl_service) = self.accountsdb_repl_service {
+            accountsdb_repl_service
+                .join()
+                .expect("accountsdb_repl_service");
+        }
         if let Some(geyser_plugin_service) = self.geyser_plugin_service {
             geyser_plugin_service.join().expect("geyser_plugin_service");
         }
-        self.poh_timing_report_service
-            .join()
-            .expect("poh_timing_report_service");
     }
 }
@@ -1246,7 +1254,6 @@ fn load_blockstore(
     start_progress: &Arc<RwLock<ValidatorStartProgress>>,
     accounts_update_notifier: Option<AccountsUpdateNotifier>,
     transaction_notifier: Option<TransactionNotifierLock>,
-    poh_timing_point_sender: Option<PohTimingSender>,
 ) -> (
     GenesisConfig,
     BankForks,
@@ -1302,13 +1309,14 @@
     )
     .expect("Failed to open ledger database");
     blockstore.set_no_compaction(config.no_rocksdb_compaction);
-    blockstore.shred_timing_point_sender = poh_timing_point_sender;
     let blockstore = Arc::new(blockstore);
     let blockstore_root_scan = BlockstoreRootScan::new(config, &blockstore, exit);
     let process_options = blockstore_processor::ProcessOptions {
+        bpf_jit: config.bpf_jit,
         poh_verify: config.poh_verify,
+        dev_halt_at_slot: config.dev_halt_at_slot,
         new_hard_forks: config.new_hard_forks.clone(),
         debug_keys: config.debug_keys.clone(),
         account_indexes: config.account_indexes.clone(),
@@ -1317,7 +1325,6 @@
         shrink_ratio: config.accounts_shrink_ratio,
         accounts_db_test_hash_calculation: config.accounts_db_test_hash_calculation,
         accounts_db_skip_shrink: config.accounts_db_skip_shrink,
-        runtime_config: config.runtime_config.clone(),
         ..blockstore_processor::ProcessOptions::default()
     };
@@ -1330,7 +1337,7 @@
             blockstore.clone(),
             exit,
             enable_rpc_transaction_history,
-            config.rpc_config.enable_extended_tx_metadata_storage,
+            config.rpc_config.enable_cpi_and_log_storage,
             transaction_notifier,
         )
     } else {
@@ -1388,7 +1395,7 @@ fn process_blockstore(
     transaction_status_sender: Option<&TransactionStatusSender>,
     cache_block_meta_sender: Option<&CacheBlockMetaSender>,
     snapshot_config: Option<&SnapshotConfig>,
-    pending_accounts_package: PendingAccountsPackage,
+    accounts_package_sender: AccountsPackageSender,
     blockstore_root_scan: BlockstoreRootScan,
     pruned_banks_receiver: DroppedSlotsReceiver,
 ) -> Option<Slot> {
@@ -1400,7 +1407,7 @@
         transaction_status_sender,
         cache_block_meta_sender,
         snapshot_config,
-        pending_accounts_package,
+        accounts_package_sender,
         pruned_banks_receiver,
     )
     .unwrap_or_else(|err| {
@@ -1545,7 +1552,7 @@ fn initialize_rpc_transaction_history_services(
     blockstore: Arc<Blockstore>,
     exit: &Arc<AtomicBool>,
     enable_rpc_transaction_history: bool,
-    enable_extended_tx_metadata_storage: bool,
+    enable_cpi_and_log_storage: bool,
     transaction_notifier: Option<TransactionNotifierLock>,
 ) -> TransactionHistoryServices {
     let max_complete_transaction_status_slot = Arc::new(AtomicU64::new(blockstore.max_root()));
@@ -1559,7 +1566,7 @@
         enable_rpc_transaction_history,
         transaction_notifier.clone(),
         blockstore.clone(),
-        enable_extended_tx_metadata_storage,
+        enable_cpi_and_log_storage,
         exit,
     ));
@@ -1801,10 +1808,9 @@ pub fn is_snapshot_config_valid(
 mod tests {
     use {
         super::*,
-        crossbeam_channel::{bounded, RecvTimeoutError},
         solana_ledger::{create_new_tmp_ledger, genesis_utils::create_genesis_config_with_leader},
         solana_sdk::{genesis_config::create_genesis_config, poh_config::PohConfig},
-        std::{fs::remove_dir_all, thread, time::Duration},
+        std::fs::remove_dir_all,
     };
     #[test]
@@ -1842,20 +1848,7 @@
             *start_progress.read().unwrap(),
             ValidatorStartProgress::Running
         );
-        // spawn a new thread to wait for validator close
-        let (sender, receiver) = bounded(0);
-        let _ = thread::spawn(move || {
-            validator.close();
-            sender.send(()).unwrap();
-        });
-
-        // exit can deadlock. put an upper-bound on how long we wait for it
-        let timeout = Duration::from_secs(30);
-        if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
-            panic!("timeout for closing validator");
-        }
+        validator.close();
         remove_dir_all(validator_ledger_path).unwrap();
     }
@@ -1937,21 +1930,11 @@
         // Each validator can exit in parallel to speed many sequential calls to join`
         validators.iter_mut().for_each(|v| v.exit());
-        // spawn a new thread to wait for the join of the validator
-        let (sender, receiver) = bounded(0);
-        let _ = thread::spawn(move || {
-            validators.into_iter().for_each(|validator| {
-                validator.join();
-            });
-            sender.send(()).unwrap();
-        });
-
-        // timeout of 30s for shutting down the validators
-        let timeout = Duration::from_secs(30);
-        if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
-            panic!("timeout for shutting down validators",);
-        }
+        // While join is called sequentially, the above exit call notified all the
+        // validators to exit from all their threads
+        validators.into_iter().for_each(|validator| {
+            validator.join();
+        });
         for path in ledger_paths {
             remove_dir_all(path).unwrap();
@@ -1,70 +0,0 @@
-// Connect to future leaders with some jitter so the quic connection is warm
-// by the time we need it.
-use {
-    rand::{thread_rng, Rng},
-    solana_client::connection_cache::send_wire_transaction,
-    solana_gossip::cluster_info::ClusterInfo,
-    solana_poh::poh_recorder::PohRecorder,
-    std::{
-        sync::{
-            atomic::{AtomicBool, Ordering},
-            Arc, Mutex,
-        },
-        thread::{self, sleep, Builder, JoinHandle},
-        time::Duration,
-    },
-};
-
-pub struct WarmQuicCacheService {
-    thread_hdl: JoinHandle<()>,
-}
-
-// ~50 seconds
-const CACHE_OFFSET_SLOT: i64 = 100;
-const CACHE_JITTER_SLOT: i64 = 20;
-
-impl WarmQuicCacheService {
-    pub fn new(
-        cluster_info: Arc<ClusterInfo>,
-        poh_recorder: Arc<Mutex<PohRecorder>>,
-        exit: Arc<AtomicBool>,
-    ) -> Self {
-        let thread_hdl = Builder::new()
-            .name("sol-warm-quic-service".to_string())
-            .spawn(move || {
-                let slot_jitter = thread_rng().gen_range(-CACHE_JITTER_SLOT, CACHE_JITTER_SLOT);
-                let mut maybe_last_leader = None;
-                while !exit.load(Ordering::Relaxed) {
-                    if let Some(leader_pubkey) = poh_recorder
-                        .lock()
-                        .unwrap()
-                        .leader_after_n_slots((CACHE_OFFSET_SLOT + slot_jitter) as u64)
-                    {
-                        if maybe_last_leader
-                            .map_or(true, |last_leader| last_leader != leader_pubkey)
-                        {
-                            maybe_last_leader = Some(leader_pubkey);
-                            if let Some(addr) = cluster_info
-                                .lookup_contact_info(&leader_pubkey, |leader| leader.tpu)
-                            {
-                                if let Err(err) = send_wire_transaction(&[0u8], &addr) {
-                                    warn!(
-                                        "Failed to warmup QUIC connection to the leader {:?}, Error {:?}",
-                                        leader_pubkey, err
-                                    );
-                                }
-                            }
-                        }
-                    }
-                    sleep(Duration::from_millis(200));
-                }
-            })
-            .unwrap();
-        Self { thread_hdl }
-    }
-
-    pub fn join(self) -> thread::Result<()> {
-        self.thread_hdl.join()
-    }
-}
@@ -604,7 +604,7 @@ impl WindowService {
             }
             if last_print.elapsed().as_secs() > 2 {
-                metrics.report_metrics("blockstore-insert-shreds");
+                metrics.report_metrics("recv-window-insert-shreds");
                 metrics = BlockstoreInsertionMetrics::default();
                 ws_metrics.report_metrics("recv-window-insert-shreds");
                 ws_metrics = WindowServiceMetrics::default();
@@ -358,7 +358,6 @@ mod tests {
                     ..BlockstoreRocksFifoOptions::default()
                 },
             ),
-            ..LedgerColumnOptions::default()
         },
         ..BlockstoreOptions::default()
     }
@@ -69,8 +69,7 @@ mod tests {
         snapshot_archive_info::FullSnapshotArchiveInfo,
         snapshot_config::SnapshotConfig,
         snapshot_package::{
-            AccountsPackage, PendingAccountsPackage, PendingSnapshotPackage, SnapshotPackage,
-            SnapshotType,
+            AccountsPackage, PendingSnapshotPackage, SnapshotPackage, SnapshotType,
         },
         snapshot_utils::{self, ArchiveFormat, SnapshotVersion},
         status_cache::MAX_CACHE_ENTRIES,
@@ -166,10 +165,10 @@
         old_genesis_config: &GenesisConfig,
         account_paths: &[PathBuf],
     ) {
-        let snapshot_archives_dir = old_bank_forks
+        let (snapshot_path, snapshot_archives_dir) = old_bank_forks
             .snapshot_config
             .as_ref()
-            .map(|c| &c.snapshot_archives_dir)
+            .map(|c| (&c.bank_snapshots_dir, &c.snapshot_archives_dir))
             .unwrap();
         let old_last_bank = old_bank_forks.get(old_last_slot).unwrap();
@@ -213,6 +212,12 @@
             .unwrap()
             .clone();
         assert_eq!(*bank, deserialized_bank);
+
+        let bank_snapshots = snapshot_utils::get_bank_snapshots(&snapshot_path);
+        for p in bank_snapshots {
+            snapshot_utils::remove_bank_snapshot(p.slot, &snapshot_path).unwrap();
+        }
     }
     // creates banks up to "last_slot" and runs the input function `f` on each bank created
@@ -242,11 +247,12 @@
         let mint_keypair = &snapshot_test_config.genesis_config_info.mint_keypair;
         let (s, snapshot_request_receiver) = unbounded();
+        let (accounts_package_sender, _r) = unbounded();
         let request_sender = AbsRequestSender::new(Some(s));
         let snapshot_request_handler = SnapshotRequestHandler {
             snapshot_config: snapshot_test_config.snapshot_config.clone(),
             snapshot_request_receiver,
-            pending_accounts_package: PendingAccountsPackage::default(),
+            accounts_package_sender,
         };
         for slot in 1..=last_slot {
             let mut bank = Bank::new_from_parent(&bank_forks[slot - 1], &Pubkey::default(), slot);
@@ -259,7 +265,8 @@
                 // set_root should send a snapshot request
                 bank_forks.set_root(bank.slot(), &request_sender, None);
                 bank.update_accounts_hash();
-                snapshot_request_handler.handle_snapshot_requests(false, false, 0, &mut None);
+                snapshot_request_handler
+                    .handle_snapshot_requests(false, false, false, 0, &mut None);
             }
         }
@@ -268,7 +275,7 @@
         let snapshot_config = &snapshot_test_config.snapshot_config;
         let bank_snapshots_dir = &snapshot_config.bank_snapshots_dir;
         let last_bank_snapshot_info =
-            snapshot_utils::get_highest_bank_snapshot_pre(bank_snapshots_dir)
+            snapshot_utils::get_highest_bank_snapshot_info(bank_snapshots_dir)
                 .expect("no bank snapshots found in path");
         let accounts_package = AccountsPackage::new(
             last_bank,
@@ -283,13 +290,7 @@
             Some(SnapshotType::FullSnapshot),
         )
         .unwrap();
-        solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
-            accounts_package.snapshot_links.path(),
-            accounts_package.slot,
-            &last_bank.get_accounts_hash(),
-        );
-        let snapshot_package =
-            SnapshotPackage::new(accounts_package, last_bank.get_accounts_hash());
+        let snapshot_package = SnapshotPackage::from(accounts_package);
         snapshot_utils::archive_snapshot_package(
             &snapshot_package,
             snapshot_config.maximum_full_snapshot_archives_to_retain,
@@ -366,8 +367,8 @@
             .unwrap();
         // Set up snapshotting channels
-        let real_pending_accounts_package = PendingAccountsPackage::default();
-        let fake_pending_accounts_package = PendingAccountsPackage::default();
+        let (sender, receiver) = unbounded();
+        let (fake_sender, _fake_receiver) = unbounded();
         // Create next MAX_CACHE_ENTRIES + 2 banks and snapshots. Every bank will get snapshotted
         // and the snapshot purging logic will run on every snapshot taken. This means the three
@@ -392,22 +393,23 @@
             let tx = system_transaction::transfer(mint_keypair, &key1, 1, genesis_config.hash());
             assert_eq!(bank.process_transaction(&tx), Ok(()));
             bank.squash();
-            let pending_accounts_package = {
+            let accounts_hash = bank.update_accounts_hash();
+            let package_sender = {
                 if slot == saved_slot as u64 {
-                    // Only send one package on the real pending_accounts_package so that the
-                    // packaging service doesn't take forever to run the packaging logic on all
-                    // MAX_CACHE_ENTRIES later
-                    &real_pending_accounts_package
+                    // Only send one package on the real sender so that the packaging service
+                    // doesn't take forever to run the packaging logic on all MAX_CACHE_ENTRIES
+                    // later
+                    &sender
                 } else {
-                    &fake_pending_accounts_package
+                    &fake_sender
                 }
             };
             snapshot_utils::snapshot_bank(
                 &bank,
                 vec![],
-                pending_accounts_package,
+                package_sender,
                 bank_snapshots_dir,
                 snapshot_archives_dir,
                 snapshot_config.snapshot_version,
@@ -461,9 +463,7 @@
                 saved_archive_path = Some(snapshot_utils::build_full_snapshot_archive_path(
                     snapshot_archives_dir,
                     slot,
-                    // this needs to match the hash value that we reserialize with later. It is complicated, so just use default.
-                    // This hash value is just used to build the file name. Since this is mocked up test code, it is sufficient to pass default here.
-                    &Hash::default(),
+                    &accounts_hash,
                     ArchiveFormat::TarBzip2,
                 ));
             }
@@ -473,7 +473,7 @@
             // currently sitting in the channel
             snapshot_utils::purge_old_bank_snapshots(bank_snapshots_dir);
-            let mut bank_snapshots = snapshot_utils::get_bank_snapshots_pre(&bank_snapshots_dir);
+            let mut bank_snapshots = snapshot_utils::get_bank_snapshots(&bank_snapshots_dir);
             bank_snapshots.sort_unstable();
             assert!(bank_snapshots
                 .into_iter()
@@ -507,21 +507,15 @@
         let _package_receiver = std::thread::Builder::new()
             .name("package-receiver".to_string())
             .spawn(move || {
-                let accounts_package = real_pending_accounts_package
-                    .lock()
-                    .unwrap()
-                    .take()
-                    .unwrap();
-                solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
-                    accounts_package.snapshot_links.path(),
-                    accounts_package.slot,
-                    &Hash::default(),
-                );
-                let snapshot_package = SnapshotPackage::new(accounts_package, Hash::default());
-                pending_snapshot_package
-                    .lock()
-                    .unwrap()
-                    .replace(snapshot_package);
+                while let Ok(mut accounts_package) = receiver.recv() {
+                    // Only package the latest
+                    while let Ok(new_accounts_package) = receiver.try_recv() {
+                        accounts_package = new_accounts_package;
+                    }
+
+                    let snapshot_package = SnapshotPackage::from(accounts_package);
+                    *pending_snapshot_package.lock().unwrap() = Some(snapshot_package);
+                }
                 // Wait until the package is consumed by SnapshotPackagerService
                 while pending_snapshot_package.lock().unwrap().is_some() {
@@ -533,6 +527,10 @@
             })
             .unwrap();
+        // Close the channel so that the package receiver will exit after reading all the
+        // packages off the channel
+        drop(sender);
+
         // Wait for service to finish
         snapshot_packager_service
             .join()
@@ -554,19 +552,11 @@
         )
         .unwrap();
-        // files were saved off before we reserialized the bank in the hacked up accounts_hash_verifier stand-in.
-        solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
-            saved_snapshots_dir.path(),
-            saved_slot,
-            &Hash::default(),
-        );
         snapshot_utils::verify_snapshot_archive(
             saved_archive_path.unwrap(),
             saved_snapshots_dir.path(),
             saved_accounts_dir.path(),
             ArchiveFormat::TarBzip2,
-            snapshot_utils::VerifyBank::NonDeterministic(saved_slot),
         );
     }
@@ -680,11 +670,12 @@
         let mint_keypair = &snapshot_test_config.genesis_config_info.mint_keypair;
         let (snapshot_request_sender, snapshot_request_receiver) = unbounded();
+        let (accounts_package_sender, _accounts_package_receiver) = unbounded();
         let request_sender = AbsRequestSender::new(Some(snapshot_request_sender));
         let snapshot_request_handler = SnapshotRequestHandler {
             snapshot_config: snapshot_test_config.snapshot_config.clone(),
             snapshot_request_receiver,
-            pending_accounts_package: PendingAccountsPackage::default(),
+            accounts_package_sender,
         };
         let mut last_full_snapshot_slot = None;
@@ -716,6 +707,7 @@
                 bank_forks.set_root(bank.slot(), &request_sender, None);
                 bank.update_accounts_hash();
                 snapshot_request_handler.handle_snapshot_requests(
+                    false,
                     false,
                     false,
                     0,
@@ -765,7 +757,7 @@
             let slot = bank.slot();
             info!("Making full snapshot archive from bank at slot: {}", slot);
             let bank_snapshot_info =
-                snapshot_utils::get_bank_snapshots_pre(&snapshot_config.bank_snapshots_dir)
+                snapshot_utils::get_bank_snapshots(&snapshot_config.bank_snapshots_dir)
                     .into_iter()
                     .find(|elem| elem.slot == slot)
                     .ok_or_else(|| {
@@ -800,7 +792,7 @@
                 slot, incremental_snapshot_base_slot,
             );
             let bank_snapshot_info =
-                snapshot_utils::get_bank_snapshots_pre(&snapshot_config.bank_snapshots_dir)
+                snapshot_utils::get_bank_snapshots(&snapshot_config.bank_snapshots_dir)
                    .into_iter()
                    .find(|elem| elem.slot == slot)
                    .ok_or_else(|| {
@@ -902,7 +894,7 @@
         let (pruned_banks_sender, pruned_banks_receiver) = unbounded();
         let (snapshot_request_sender, snapshot_request_receiver) = unbounded();
-        let pending_accounts_package = PendingAccountsPackage::default();
+        let (accounts_package_sender, accounts_package_receiver) = unbounded();
         let pending_snapshot_package = PendingSnapshotPackage::default();
         let bank_forks = Arc::new(RwLock::new(snapshot_test_config.bank_forks));
@@ -922,7 +914,7 @@
         let snapshot_request_handler = Some(SnapshotRequestHandler {
             snapshot_config: snapshot_test_config.snapshot_config.clone(),
             snapshot_request_receiver,
-            pending_accounts_package: Arc::clone(&pending_accounts_package),
+            accounts_package_sender,
         });
         let abs_request_handler = AbsRequestHandler {
             snapshot_request_handler,
@@ -939,8 +931,9 @@
             true,
         );
+        let tmpdir = TempDir::new().unwrap();
         let accounts_hash_verifier = AccountsHashVerifier::new(
-            pending_accounts_package,
+            accounts_package_receiver,
             Some(pending_snapshot_package),
             &exit,
             &cluster_info,
@@ -948,6 +941,7 @@
             false,
             0,
             Some(snapshot_test_config.snapshot_config.clone()),
+            tmpdir.path().to_path_buf(),
         );
         let accounts_background_service = AccountsBackgroundService::new(
@@ -955,6 +949,7 @@
             &exit,
             abs_request_handler,
             false,
+            false,
             true,
             None,
         );


@@ -3059,7 +3059,7 @@ curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d '
 Result:
 ```json
-{ "jsonrpc": "2.0", "result": { "solana-core": "1.11.0" }, "id": 1 }
+{ "jsonrpc": "2.0", "result": { "solana-core": "1.10.9" }, "id": 1 }
 ```
 ### getVoteAccounts
@@ -3384,7 +3384,7 @@ The result will be an RpcResponse JSON object with `value` set to a JSON object
 - `err: <object | string | null>` - Error if transaction failed, null if transaction succeeded. [TransactionError definitions](https://github.com/solana-labs/solana/blob/c0c60386544ec9a9ec7119229f37386d9f070523/sdk/src/transaction/error.rs#L13)
 - `logs: <array | null>` - Array of log messages the transaction instructions output during execution, null if simulation failed before the transaction was able to execute (for example due to an invalid blockhash or signature verification failure)
-- `accounts: <array | null>` - array of accounts with the same length as the `accounts.addresses` array in the request
+- `accounts: <array> | null>` - array of accounts with the same length as the `accounts.addresses` array in the request
 - `<null>` - if the account doesn't exist or if `err` is not null
 - `<object>` - otherwise, a JSON object containing:
 - `lamports: <u64>`, number of lamports assigned to this account, as a u64
@@ -3393,9 +3393,6 @@ The result will be an RpcResponse JSON object with `value` set to a JSON object
 - `executable: <bool>`, boolean indicating if the account contains a program \(and is strictly read-only\)
 - `rentEpoch: <u64>`, the epoch at which this account will next owe rent, as u64
 - `unitsConsumed: <u64 | undefined>`, The number of compute budget units consumed during the processing of this transaction
-- `returnData: <object | null>` - the most-recent return data generated by an instruction in the transaction, with the following fields:
-- `programId: <string>`, the program that generated the return data, as base-58 encoded Pubkey
-- `data: <[string, encoding]>`, the return data itself, as base-64 encoded binary data
 #### Example:
@@ -3406,10 +3403,7 @@ curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d '
 "id": 1,
 "method": "simulateTransaction",
 "params": [
-"AQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEDArczbMia1tLmq7zz4DinMNN0pJ1JtLdqIJPUw3YrGCzYAMHBsgN27lcgB6H2WQvFgyZuJYHa46puOQo9yQ8CVQbd9uHXZaGT2cvhRs7reawctIXtX1s3kTqM9YV+/wCp20C7Wj2aiuk5TReAXo+VTVg8QTHjs0UjNMMKCvpzZ+ABAgEBARU=",
-{
-"encoding":"base64",
-}
+"4hXTCkRzt9WyecNzV1XPgCDfGAZzQKNxLXgynz5QDuWWPSAZBZSHptvWRL3BjCvzUXRdKvHL2b7yGrRQcWyaqsaBCncVG7BFggS8w9snUts67BSh3EqKpXLUm5UMHfD7ZBe9GhARjbNQMLJ1QD3Spr6oMTBU6EhdB4RD8CP2xUxr2u3d6fos36PD98XS6oX8TQjLpsMwncs5DAMiD4nNnR8NBfyghGCWvCVifVwvA8B8TJxE1aiyiv2L429BCWfyzAme5sZW8rDb14NeCQHhZbtNqfXhcp2tAnaAT"
 ]
 }
 '
@@ -3428,19 +3422,8 @@ Result:
 "err": null,
 "accounts": null,
 "logs": [
-"Program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri invoke [1]",
-"Program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri consumed 2366 of 1400000 compute units",
-"Program return: 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri KgAAAAAAAAA=",
-"Program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri success"
-],
-"returnData": {
-"data": [
-"Kg==",
-"base64"
-],
-"programId": "83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri"
-},
-"unitsConsumed": 2366
+"BPF program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri success"
+]
 }
 },
 "id": 1


@@ -2,7 +2,7 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-dos"
-version = "1.11.0"
+version = "1.10.9"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -15,18 +15,18 @@ clap = {version = "3.1.5", features = ["derive", "cargo"]}
 log = "0.4.14"
 rand = "0.7.0"
 serde = "1.0.136"
-solana-client = { path = "../client", version = "=1.11.0" }
+solana-client = { path = "../client", version = "=1.10.9" }
-solana-core = { path = "../core", version = "=1.11.0" }
+solana-core = { path = "../core", version = "=1.10.9" }
-solana-gossip = { path = "../gossip", version = "=1.11.0" }
+solana-gossip = { path = "../gossip", version = "=1.10.9" }
-solana-logger = { path = "../logger", version = "=1.11.0" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
-solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
+solana-net-utils = { path = "../net-utils", version = "=1.10.9" }
-solana-perf = { path = "../perf", version = "=1.11.0" }
+solana-perf = { path = "../perf", version = "=1.10.9" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
-solana-streamer = { path = "../streamer", version = "=1.11.0" }
+solana-streamer = { path = "../streamer", version = "=1.10.9" }
-solana-version = { path = "../version", version = "=1.11.0" }
+solana-version = { path = "../version", version = "=1.10.9" }
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]
 [dev-dependencies]
-solana-local-cluster = { path = "../local-cluster", version = "=1.11.0" }
+solana-local-cluster = { path = "../local-cluster", version = "=1.10.9" }


@@ -542,7 +542,6 @@ pub mod test {
 }
 #[test]
-#[ignore]
 fn test_dos_local_cluster_transactions() {
 let num_nodes = 1;
 let cluster =


@@ -1,6 +1,6 @@
 [package]
 name = "solana-download-utils"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana Download Utils"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -14,8 +14,8 @@ console = "0.15.0"
 indicatif = "0.16.2"
 log = "0.4.14"
 reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
-solana-runtime = { path = "../runtime", version = "=1.11.0" }
+solana-runtime = { path = "../runtime", version = "=1.10.9" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 [lib]
 crate-type = ["lib"]


@@ -1,6 +1,6 @@
 [package]
 name = "solana-entry"
-version = "1.11.0"
+version = "1.10.9"
 description = "Solana Entry"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -18,16 +18,16 @@ log = "0.4.11"
 rand = "0.7.0"
 rayon = "1.5.1"
 serde = "1.0.136"
-solana-measure = { path = "../measure", version = "=1.11.0" }
+solana-measure = { path = "../measure", version = "=1.10.9" }
-solana-merkle-tree = { path = "../merkle-tree", version = "=1.11.0" }
+solana-merkle-tree = { path = "../merkle-tree", version = "=1.10.9" }
-solana-metrics = { path = "../metrics", version = "=1.11.0" }
+solana-metrics = { path = "../metrics", version = "=1.10.9" }
-solana-perf = { path = "../perf", version = "=1.11.0" }
+solana-perf = { path = "../perf", version = "=1.10.9" }
-solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.11.0" }
+solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.10.9" }
-solana-sdk = { path = "../sdk", version = "=1.11.0" }
+solana-sdk = { path = "../sdk", version = "=1.10.9" }
 [dev-dependencies]
 matches = "0.1.9"
-solana-logger = { path = "../logger", version = "=1.11.0" }
+solana-logger = { path = "../logger", version = "=1.10.9" }
 [lib]
 crate-type = ["lib"]

File diff suppressed because it is too large


@@ -5,11 +5,10 @@
 "dependencies": {
 "@blockworks-foundation/mango-client": "^3.2.16",
 "@bonfida/bot": "^0.5.3",
-"@bonfida/spl-name-service": "^0.1.30",
+"@bonfida/spl-name-service": "^0.1.22",
 "@cloudflare/stream-react": "^1.2.0",
 "@metamask/jazzicon": "^2.0.0",
 "@metaplex/js": "4.12.0",
-"@project-serum/anchor": "0.23.0",
 "@project-serum/serum": "^0.13.61",
 "@react-hook/debounce": "^4.0.0",
 "@sentry/react": "^6.16.1",


@@ -1,20 +1,18 @@
-import React from "react";
 import { Message, ParsedMessage } from "@solana/web3.js";
 import { Cluster } from "providers/cluster";
 import { TableCardBody } from "components/common/TableCardBody";
+import { programLabel } from "utils/tx";
 import { InstructionLogs } from "utils/program-logs";
-import { ProgramName } from "utils/anchor";
+import React from "react";
 export function ProgramLogsCardBody({
 message,
 logs,
 cluster,
-url,
 }: {
 message: Message | ParsedMessage;
 logs: InstructionLogs[];
 cluster: Cluster;
-url: string;
 }) {
 return (
 <TableCardBody>
@@ -30,6 +28,9 @@ export function ProgramLogsCardBody({
 } else {
 programId = ix.programId;
 }
+const programName =
+programLabel(programId.toBase58(), cluster) || "Unknown Program";
 const programLogs: InstructionLogs | undefined = logs[index];
 let badgeColor = "white";
@@ -44,12 +45,7 @@
 <span className={`badge bg-${badgeColor}-soft me-2`}>
 #{index + 1}
 </span>
-<ProgramName
-programId={programId}
-cluster={cluster}
-url={url}
-/>{" "}
-Instruction
+{programName} Instruction
 </div>
 {programLogs && (
 <div className="d-flex align-items-start flex-column font-monospace p-2 font-size-sm">


@@ -13,30 +13,14 @@ import {
 import { Cluster, useCluster } from "providers/cluster";
 import { useTokenRegistry } from "providers/mints/token-registry";
 import { TokenInfoMap } from "@solana/spl-token-registry";
-import { Connection } from "@solana/web3.js";
-import { getDomainInfo, hasDomainSyntax } from "utils/name-service";
-interface SearchOptions {
-label: string;
-options: {
-label: string;
-value: string[];
-pathname: string;
-}[];
-}
 export function SearchBar() {
 const [search, setSearch] = React.useState("");
-const searchRef = React.useRef("");
-const [searchOptions, setSearchOptions] = React.useState<SearchOptions[]>([]);
-const [loadingSearch, setLoadingSearch] = React.useState<boolean>(false);
-const [loadingSearchMessage, setLoadingSearchMessage] =
-React.useState<string>("loading...");
 const selectRef = React.useRef<StateManager<any> | null>(null);
 const history = useHistory();
 const location = useLocation();
 const { tokenRegistry } = useTokenRegistry();
-const { url, cluster, clusterInfo } = useCluster();
+const { cluster, clusterInfo } = useCluster();
 const onChange = (
 { pathname }: ValueType<any, false>,
@@ -49,54 +33,7 @@
 };
 const onInputChange = (value: string, { action }: InputActionMeta) => {
-if (action === "input-change") {
-setSearch(value);
-}
-};
-React.useEffect(() => {
-searchRef.current = search;
-setLoadingSearchMessage("Loading...");
-setLoadingSearch(true);
-// builds and sets local search output
-const options = buildOptions(
-search,
-cluster,
-tokenRegistry,
-clusterInfo?.epochInfo.epoch
-);
-setSearchOptions(options);
-// checking for non local search output
-if (hasDomainSyntax(search)) {
-// if search input is a potential domain we continue the loading state
-domainSearch(options);
-} else {
-// if search input is not a potential domain we can conclude the search has finished
-setLoadingSearch(false);
-}
-// eslint-disable-next-line react-hooks/exhaustive-deps
-}, [search]);
-// appends domain lookup results to the local search state
-const domainSearch = async (options: SearchOptions[]) => {
-setLoadingSearchMessage("Looking up domain...");
-const connection = new Connection(url);
-const searchTerm = search;
-const updatedOptions = await buildDomainOptions(
-connection,
-search,
-options
-);
-if (searchRef.current === searchTerm) {
-setSearchOptions(updatedOptions);
-// after attempting to fetch the domain name we can conclude the loading state
-setLoadingSearch(false);
-setLoadingSearchMessage("Loading...");
-}
+if (action === "input-change") setSearch(value);
 };
 const resetValue = "" as any;
@@ -105,11 +42,14 @@
 <div className="row align-items-center">
 <div className="col">
 <Select
+autoFocus
 ref={(ref) => (selectRef.current = ref)}
-options={searchOptions}
+options={buildOptions(
+search,
+cluster,
+tokenRegistry,
+clusterInfo?.epochInfo.epoch
+)}
 noOptionsMessage={() => "No Results"}
-loadingMessage={() => loadingSearchMessage}
 placeholder="Search for blocks, accounts, transactions, programs, and tokens"
 value={resetValue}
 inputValue={search}
@@ -124,7 +64,6 @@
 onInputChange={onInputChange}
 components={{ DropdownIndicator }}
 classNamePrefix="search-bar"
-isLoading={loadingSearch}
 />
 </div>
 </div>
@@ -247,7 +186,7 @@ function buildTokenOptions(
 if (matchedTokens.length > 0) {
 return {
 label: "Tokens",
-options: matchedTokens.slice(0, 10).map(([id, details]) => ({
+options: matchedTokens.map(([id, details]) => ({
 label: details.name,
 value: [details.name, details.symbol, id],
 pathname: "/address/" + id,
@@ -256,39 +195,6 @@
 }
 }
-async function buildDomainOptions(
-connection: Connection,
-search: string,
-options: SearchOptions[]
-) {
-const domainInfo = await getDomainInfo(search, connection);
-const updatedOptions: SearchOptions[] = [...options];
-if (domainInfo && domainInfo.owner && domainInfo.address) {
-updatedOptions.push({
-label: "Domain Owner",
-options: [
-{
-label: domainInfo.owner,
-value: [search],
-pathname: "/address/" + domainInfo.owner,
-},
-],
-});
-updatedOptions.push({
-label: "Name Service Account",
-options: [
-{
-label: search,
-value: [search],
-pathname: "/address/" + domainInfo.address,
-},
-],
-});
-}
-return updatedOptions;
-}
-// builds local search options
 function buildOptions(
 rawSearch: string,
 cluster: Cluster,
@@ -380,7 +286,6 @@
 });
 }
 } catch (err) {}
 return options;
 }


@@ -1,7 +1,6 @@
 import React from "react";
 import classNames from "classnames";
 import {
-PingInfo,
 PingRollupInfo,
 PingStatus,
 useSolanaPingInfo,
@@ -108,10 +107,12 @@ const CUSTOM_TOOLTIP = function (this: any, tooltipModel: ChartTooltipModel) {
 // Set Text
 if (tooltipModel.body) {
-const { label } = tooltipModel.dataPoints[0];
+const { label, value } = tooltipModel.dataPoints[0];
 const tooltipContent = tooltipEl.querySelector("div");
 if (tooltipContent) {
-tooltipContent.innerHTML = `${label}`;
+let innerHtml = `<div class="value">${value} ms</div>`;
+innerHtml += `<div class="label">${label}</div>`;
+tooltipContent.innerHTML = innerHtml;
 }
 }
@@ -172,30 +173,11 @@
 function PingBarChart({ pingInfo }: { pingInfo: PingRollupInfo }) {
 const [series, setSeries] = React.useState<Series>("short");
 const seriesData = pingInfo[series] || [];
-const maxMean = seriesData.reduce((a, b) => {
-return Math.max(a, b.mean);
-}, 0);
 const seriesLength = seriesData.length;
-const backgroundColor = (val: PingInfo) => {
-if (val.submitted === 0) {
-return "#08a274";
-}
-return val.loss > 0.5 ? "#f00" : "#00D192";
-};
 const chartData: Chart.ChartData = {
 labels: seriesData.map((val, i) => {
-if (val.submitted === 0) {
 return `
-<div class="label">
-<p class="mb-0">Ping statistics unavailable</p>
-${SERIES_INFO[series].label(seriesLength - i)}min ago
-</div>
-`;
-}
-return `
-<div class="value">${val.mean} ms</div>
-<div class="label">
 <p class="mb-0">${val.confirmed} of ${val.submitted} confirmed</p>
 ${
 val.loss
@@ -206,22 +188,18 @@
 : ""
 }
 ${SERIES_INFO[series].label(seriesLength - i)}min ago
-</div>
 `;
 }),
 datasets: [
 {
-minBarLength: 2,
-backgroundColor: seriesData.map(backgroundColor),
-hoverBackgroundColor: seriesData.map(backgroundColor),
+backgroundColor: seriesData.map((val) =>
+val.loss > 0.5 ? "#f00" : "#00D192"
+),
+hoverBackgroundColor: seriesData.map((val) =>
+val.loss > 0.5 ? "#f00" : "#00D192"
+),
 borderWidth: 0,
-data: seriesData.map((val) => {
-if (val.submitted === 0) {
-return maxMean * 0.5;
-}
-return val.mean || 0;
-}),
+data: seriesData.map((val) => val.mean || 0),
 },
 ],
 };


@@ -1,85 +0,0 @@
import React, { useMemo } from "react";
import { Account } from "providers/accounts";
import { useCluster } from "providers/cluster";
import { BorshAccountsCoder } from "@project-serum/anchor";
import { IdlTypeDef } from "@project-serum/anchor/dist/cjs/idl";
import { getProgramName, mapAccountToRows } from "utils/anchor";
import { ErrorCard } from "components/common/ErrorCard";
import { useAnchorProgram } from "providers/anchor";
export function AnchorAccountCard({ account }: { account: Account }) {
const { lamports } = account;
const { url } = useCluster();
const anchorProgram = useAnchorProgram(
account.details?.owner.toString() || "",
url
);
const rawData = account?.details?.rawData;
const programName = getProgramName(anchorProgram) || "Unknown Program";
const { decodedAccountData, accountDef } = useMemo(() => {
let decodedAccountData: any | null = null;
let accountDef: IdlTypeDef | undefined = undefined;
if (anchorProgram && rawData) {
const coder = new BorshAccountsCoder(anchorProgram.idl);
const accountDefTmp = anchorProgram.idl.accounts?.find(
(accountType: any) =>
(rawData as Buffer)
.slice(0, 8)
.equals(BorshAccountsCoder.accountDiscriminator(accountType.name))
);
if (accountDefTmp) {
accountDef = accountDefTmp;
decodedAccountData = coder.decode(accountDef.name, rawData);
}
}
return {
decodedAccountData,
accountDef,
};
}, [anchorProgram, rawData]);
if (lamports === undefined) return null;
if (!anchorProgram) return <ErrorCard text="No Anchor IDL found" />;
if (!decodedAccountData || !accountDef) {
return (
<ErrorCard text="Failed to decode account data according to the public Anchor interface" />
);
}
return (
<div>
<div className="card">
<div className="card-header">
<div className="row align-items-center">
<div className="col">
<h3 className="card-header-title">
{programName}: {accountDef.name}
</h3>
</div>
</div>
</div>
<div className="table-responsive mb-0">
<table className="table table-sm table-nowrap card-table">
<thead>
<tr>
<th className="w-1">Field</th>
<th className="w-1">Type</th>
<th className="w-1">Value</th>
</tr>
</thead>
<tbody>
{mapAccountToRows(
decodedAccountData,
accountDef as IdlTypeDef,
anchorProgram.idl
)}
</tbody>
</table>
</div>
</div>
</div>
);
}


@@ -1,36 +0,0 @@
import { PublicKey } from "@solana/web3.js";
import { useAnchorProgram } from "providers/anchor";
import { useCluster } from "providers/cluster";
import ReactJson from "react-json-view";
export function AnchorProgramCard({ programId }: { programId: PublicKey }) {
const { url } = useCluster();
const program = useAnchorProgram(programId.toString(), url);
if (!program) {
return null;
}
return (
<>
<div className="card">
<div className="card-header">
<div className="row align-items-center">
<div className="col">
<h3 className="card-header-title">Anchor IDL</h3>
</div>
</div>
</div>
<div className="card metadata-json-viewer m-4">
<ReactJson
src={program.idl}
theme={"solarized"}
style={{ padding: 25 }}
collapsed={1}
/>
</div>
</div>
</>
);
}


@@ -21,14 +21,15 @@ export function DomainsCard({ pubkey }: { pubkey: PublicKey }) {
 return (
 <div className="card">
 <div className="card-header align-items-center">
-<h3 className="card-header-title">Owned Domain Names</h3>
+<h3 className="card-header-title">Domain Names Owned</h3>
 </div>
 <div className="table-responsive mb-0">
 <table className="table table-sm table-nowrap card-table">
 <thead>
 <tr>
-<th className="text-muted">Domain Name</th>
+<th className="text-muted">Domain name</th>
-<th className="text-muted">Name Service Account</th>
+<th className="text-muted">Domain Address</th>
+<th className="text-muted">Domain Class Address</th>
 </tr>
 </thead>
 <tbody className="list">
@@ -52,6 +53,9 @@ function RenderDomainRow({ domainInfo }: { domainInfo: DomainInfo }) {
 <td>
 <Address pubkey={domainInfo.address} link />
 </td>
+<td>
+<Address pubkey={domainInfo.class} link />
+</td>
 </tr>
 );
 }


@@ -1,271 +0,0 @@
import { ErrorCard } from "components/common/ErrorCard";
import { TableCardBody } from "components/common/TableCardBody";
import { UpgradeableLoaderAccountData } from "providers/accounts";
import { fromProgramData, SecurityTXT } from "utils/security-txt";
export function SecurityCard({ data }: { data: UpgradeableLoaderAccountData }) {
if (!data.programData) {
return <ErrorCard text="Account has no data" />;
}
const { securityTXT, error } = fromProgramData(data.programData);
if (!securityTXT) {
return <ErrorCard text={error!} />;
}
return (
<div className="card security-txt">
<div className="card-header">
<h3 className="card-header-title mb-0 d-flex align-items-center">
Security.txt
</h3>
<small>
Note that this is self-reported by the author of the program and might
not be accurate.
</small>
</div>
<TableCardBody>
{ROWS.filter((x) => x.key in securityTXT).map((x, idx) => {
return (
<tr key={idx}>
<td className="w-100">{x.display}</td>
<RenderEntry value={securityTXT[x.key]} type={x.type} />
</tr>
);
})}
</TableCardBody>
</div>
);
}
enum DisplayType {
String,
URL,
Date,
Contacts,
PGP,
Auditors,
}
type TableRow = {
display: string;
key: keyof SecurityTXT;
type: DisplayType;
};
const ROWS: TableRow[] = [
{
display: "Name",
key: "name",
type: DisplayType.String,
},
{
display: "Project URL",
key: "project_url",
type: DisplayType.URL,
},
{
display: "Contacts",
key: "contacts",
type: DisplayType.Contacts,
},
{
display: "Policy",
key: "policy",
type: DisplayType.URL,
},
{
display: "Preferred Languages",
key: "preferred_languages",
type: DisplayType.String,
},
{
display: "Source Code URL",
key: "source_code",
type: DisplayType.URL,
},
{
display: "Secure Contact Encryption",
key: "encryption",
type: DisplayType.PGP,
},
{
display: "Auditors",
key: "auditors",
type: DisplayType.Auditors,
},
{
display: "Acknowledgements",
key: "acknowledgements",
type: DisplayType.URL,
},
{
display: "Expiry",
key: "expiry",
type: DisplayType.Date,
},
];
function RenderEntry({
value,
type,
}: {
value: SecurityTXT[keyof SecurityTXT];
type: DisplayType;
}) {
if (!value) {
return <></>;
}
switch (type) {
case DisplayType.String:
return <td className="text-lg-end font-monospace">{value}</td>;
case DisplayType.Contacts:
return (
<td className="text-lg-end font-monospace">
<ul>
{value?.split(",").map((c, i) => {
const idx = c.indexOf(":");
if (idx < 0) {
//invalid contact
return <li key={i}>{c}</li>;
}
const [type, information] = [c.slice(0, idx), c.slice(idx + 1)];
return (
<li key={i}>
<Contact type={type} information={information} />
</li>
);
})}
</ul>
</td>
);
case DisplayType.URL:
if (isValidLink(value)) {
return (
<td className="text-lg-end">
<span className="font-monospace">
<a rel="noopener noreferrer" target="_blank" href={value}>
{value}
<span className="fe fe-external-link ms-2"></span>
</a>
</span>
</td>
);
}
return (
<td className="text-lg-end">
<pre>{value.trim()}</pre>
</td>
);
case DisplayType.Date:
return <td className="text-lg-end font-monospace">{value}</td>;
case DisplayType.PGP:
if (isValidLink(value)) {
return (
<td className="text-lg-end">
<span className="font-monospace">
<a rel="noopener noreferrer" target="_blank" href={value}>
{value}
<span className="fe fe-external-link ms-2"></span>
</a>
</span>
</td>
);
}
return (
<td>
<code>{value.trim()}</code>
</td>
);
case DisplayType.Auditors:
if (isValidLink(value)) {
return (
<td className="text-lg-end">
<span className="font-monospace">
<a rel="noopener noreferrer" target="_blank" href={value}>
{value}
<span className="fe fe-external-link ms-2"></span>
</a>
</span>
</td>
);
}
return (
<td>
<ul>
{value?.split(",").map((c, idx) => {
return <li key={idx}>{c}</li>;
})}
</ul>
</td>
);
default:
break;
}
return <></>;
}
function isValidLink(value: string) {
try {
const url = new URL(value);
return ["http:", "https:"].includes(url.protocol);
} catch (err) {
return false;
}
}
function Contact({ type, information }: { type: string; information: string }) {
switch (type) {
case "discord":
return <>Discord: {information}</>;
case "email":
return (
<a
rel="noopener noreferrer"
target="_blank"
href={`mailto:${information}`}
>
{information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
case "telegram":
return (
<a
rel="noopener noreferrer"
target="_blank"
href={`https://t.me/${information}`}
>
Telegram: {information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
case "twitter":
return (
<a
rel="noopener noreferrer"
target="_blank"
href={`https://twitter.com/${information}`}
>
Twitter {information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
case "link":
if (isValidLink(information)) {
return (
<a rel="noopener noreferrer" target="_blank" href={`${information}`}>
{information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
}
return <>{information}</>;
case "other":
default:
return (
<>
{type}: {information}
</>
);
}
}


@@ -296,11 +296,11 @@ function isFullyInactivated(
 return false;
 }
-const delegatedStake = stake.delegation.stake;
+const delegatedStake = stake.delegation.stake.toNumber();
-const inactiveStake = new BN(activation.inactive);
+const inactiveStake = activation.inactive;
 return (
 !stake.delegation.deactivationEpoch.eq(MAX_EPOCH) &&
-delegatedStake.eq(inactiveStake)
+delegatedStake === inactiveStake
 );
 }

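The hunk above unwraps both sides with `.toNumber()` so a plain `===` can replace the BN-to-BN `.eq()` call. A toy stand-in for bn.js's BN (an assumption for illustration — the real code uses bn.js) shows why the wrapper needed `.eq()` and why primitives do not:

```typescript
// Toy BN-like wrapper: two distinct objects are never ===, even with equal
// values, so wrapped big numbers must be compared with an .eq() method.
class ToyBN {
  value: number;
  constructor(value: number) {
    this.value = value;
  }
  eq(other: ToyBN): boolean {
    return this.value === other.value;
  }
  toNumber(): number {
    return this.value;
  }
}

const delegatedStake = new ToyBN(100);
const inactiveStake = new ToyBN(100);

// Old style: value comparison through the wrapper.
console.log(delegatedStake.eq(inactiveStake)); // true
// New style: unwrap first, then compare primitives directly.
console.log(delegatedStake.toNumber() === inactiveStake.toNumber()); // true
```

One caveat with the real library: bn.js's `toNumber()` throws once the value exceeds 53 bits, so this rewrite implicitly assumes the stake amount fits in a JavaScript double.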

@@ -343,7 +343,7 @@ function NonFungibleTokenMintAccountCard({
           </td>
         </tr>
       )}
-      {!!nftData?.metadata.collection?.verified && (
+      {nftData?.metadata.collection?.verified && (
         <tr>
           <td>Verified Collection Address</td>
           <td className="text-lg-end">


@@ -18,7 +18,6 @@ import { Downloadable } from "components/common/Downloadable";
 import { CheckingBadge, VerifiedBadge } from "components/common/VerifiedBadge";
 import { InfoTooltip } from "components/common/InfoTooltip";
 import { useVerifiableBuilds } from "utils/program-verification";
-import { SecurityTXTBadge } from "components/common/SecurityTXTBadge";

 export function UpgradeableLoaderAccountSection({
   account,
@@ -147,17 +146,6 @@ export function UpgradeableProgramSection(
           )}
         </td>
       </tr>
-      <tr>
-        <td>
-          <SecurityLabel />
-        </td>
-        <td className="text-lg-end">
-          <SecurityTXTBadge
-            programData={programData}
-            pubkey={account.pubkey}
-          />
-        </td>
-      </tr>
       <tr>
         <td>Last Deployed Slot</td>
         <td className="text-lg-end">
@@ -177,21 +165,6 @@
   );
 }

-function SecurityLabel() {
-  return (
-    <InfoTooltip text="Security.txt helps security researchers to contact developers if they find security bugs.">
-      <a
-        rel="noopener noreferrer"
-        target="_blank"
-        href="https://github.com/neodyme-labs/solana-security-txt"
-      >
-        <span className="security-txt-link-color-hack-reee">Security.txt</span>
-        <span className="fe fe-external-link ms-2"></span>
-      </a>
-    </InfoTooltip>
-  );
-}
-
 function LastVerifiedBuildLabel() {
   return (
     <InfoTooltip text="Indicates whether the program currently deployed on-chain is verified to match the associated published source code, when it is available.">

Some files were not shown because too many files have changed in this diff.