Compare commits

152 Commits

Author SHA1 Message Date
arusso-certik
41a9eb1bf2
Update doc for sol_memset (#24101) 2022-04-16 22:11:19 +08:00
kirill lykov
5c7060eaeb
fix test compilation error (#24413) 2022-04-16 14:41:33 +02:00
sakridge
d71986cecf
Separate staked and un-staked on quic tpu port (#24339) 2022-04-16 10:54:22 +02:00
Dennis Antela Martinez
5de8061bed
look up domain owner on .sol search (explorer) (#24300)
* lookup domain owner on .sol search

* add detected domain names to search options

* add loading state and only append domain search results if search state has not changed

* rm url and rename fn

* useRef to check if domain lookup is still valid
2022-04-16 15:28:44 +08:00
Elliott W
6e03e0e987
feat: support overriding fetch function in Connection (#24367)
Co-authored-by: elliott-home-pc <elliott.wagener@mude.com.au>
2022-04-16 14:49:31 +08:00
Dennis Antela Martinez
8274959c50
limit token options appended to search options object (#24410) 2022-04-16 13:58:50 +08:00
dependabot[bot]
412c080b04
chore: bump async-trait from 0.1.52 to 0.1.53 (#24407)
* chore: bump async-trait from 0.1.52 to 0.1.53

Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.52 to 0.1.53.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.52...0.1.53)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-04-15 22:50:26 -06:00
dependabot[bot]
2dad5df523
chore: bump wasm-bindgen from 0.2.78 to 0.2.80 (#24395)
* chore: bump wasm-bindgen from 0.2.78 to 0.2.80

Bumps [wasm-bindgen](https://github.com/rustwasm/wasm-bindgen) from 0.2.78 to 0.2.80.
- [Release notes](https://github.com/rustwasm/wasm-bindgen/releases)
- [Changelog](https://github.com/rustwasm/wasm-bindgen/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rustwasm/wasm-bindgen/compare/0.2.78...0.2.80)

---
updated-dependencies:
- dependency-name: wasm-bindgen
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-04-15 20:28:23 -06:00
Tyera Eulberg
65d33b3715
Split out rust doc tests in CI (#24397) 2022-04-15 19:40:27 -06:00
Jon Cinque
4fda3a3ceb
ci: Fixup downstream build for token-cli (#24403) 2022-04-16 02:46:07 +02:00
Justin Starry
7c45d94ccc
feat: remove flow type generation (#24380) 2022-04-16 08:12:29 +08:00
Tyera Eulberg
a0e3e3c193
Add Ident case (#24390) 2022-04-15 16:27:25 -06:00
dependabot[bot]
75108d8e56
chore: bump crossbeam-channel from 0.5.3 to 0.5.4 (#24345)
* chore: bump crossbeam-channel from 0.5.3 to 0.5.4

Bumps [crossbeam-channel](https://github.com/crossbeam-rs/crossbeam) from 0.5.3 to 0.5.4.
- [Release notes](https://github.com/crossbeam-rs/crossbeam/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam/compare/crossbeam-channel-0.5.3...crossbeam-channel-0.5.4)

---
updated-dependencies:
- dependency-name: crossbeam-channel
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-04-15 14:08:40 -06:00
sakridge
1b7d1f78de
Implement QUIC connection warmup service for future leaders (#24054)
* Increase connection timeouts

* Bump quic connection cache to 1024

* Use constant for quic connection timeout and add warm cache service

* Fixes to QUIC warmup service

* fix check failure

* fixes after rebase

* fix timeout test

Co-authored-by: Pankaj Garg <pankaj@solana.com>
2022-04-15 12:09:24 -07:00
dependabot[bot]
43d3f049e9
chore: bump semver from 1.0.6 to 1.0.7 (#24358)
* chore: bump semver from 1.0.6 to 1.0.7

Bumps [semver](https://github.com/dtolnay/semver) from 1.0.6 to 1.0.7.
- [Release notes](https://github.com/dtolnay/semver/releases)
- [Commits](https://github.com/dtolnay/semver/compare/1.0.6...1.0.7)

---
updated-dependencies:
- dependency-name: semver
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-04-15 12:54:54 -06:00
Brooks Prumo
f33ad34531
Add feature_set_override parameter to mock_process_instruction() (#24386) 2022-04-15 13:43:04 -05:00
Brooks Prumo
34418cb848
Stake tests use get_minimum_delegation() (#24382) 2022-04-15 18:30:45 +00:00
Jeff Washington (jwash)
b4fd9124bf
log secondary index contents on startup (#24348) 2022-04-15 13:30:03 -05:00
Jeff Washington (jwash)
8c9430359e
add early exit in get_corrected_rent_epoch_on_load (#24331) 2022-04-15 13:28:16 -05:00
Jeff Washington (jwash)
ba7a2efa66
SlotInfoInEpoch (#24332) 2022-04-15 13:27:41 -05:00
Justin Starry
4ed647d8ec
Test that tick slot hashes update the recent blockhash queue (#24242) 2022-04-16 00:30:20 +08:00
Jon Cinque
d54ec406df
sdk: Add --jobs parameter in build/test bpf (#24359) 2022-04-15 13:49:43 +02:00
Brooks Prumo
7cf80a3f62
Fix test to use correct/updated account in transaction (#24363) 2022-04-15 05:15:02 -05:00
Jon Cinque
3d0d7dc8fc
ci: Limit downstream spl projects (#24328)
* ci: Limit downstream spl projects

* Build governance mock addin program before governance
2022-04-15 11:11:12 +02:00
dependabot[bot]
052c64b01a
chore: bump async from 2.6.3 to 2.6.4 in /web3.js (#24379)
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)

---
updated-dependencies:
- dependency-name: async
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-15 08:41:12 +00:00
dependabot[bot]
3cd1aa4a1d
chore(deps): bump async from 2.6.3 to 2.6.4 in /explorer (#24378)
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)

---
updated-dependencies:
- dependency-name: async
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-15 08:39:11 +00:00
Dennis Antela Martinez
47e1c9107d
Explorer: Fix domain table and bump @bonfida/spl-name-service to 0.1.30 (#24312)
* migrate performReverseLookup from bonfida

* Fix display name fetching

Co-authored-by: Justin Starry <justin@solana.com>
2022-04-15 16:32:10 +08:00
Trent Nelson
50fba01842 rpc-pubsub: reduce metrics/log spam 2022-04-15 01:16:58 -06:00
Christian Kamm
97f2eb8e65 Banking stage: Deserialize packets only once
Benchmarks show roughly a 6% improvement. The impact could be more
significant when transactions need to be retried a lot.

after patch:
{'name': 'banking_bench_total', 'median': '72767.43'}
{'name': 'banking_bench_tx_total', 'median': '80240.38'}
{'name': 'banking_bench_success_tx_total', 'median': '72767.43'}
test bench_banking_stage_multi_accounts
... bench:   6,137,264 ns/iter (+/- 1,364,111)
test bench_banking_stage_multi_programs
... bench:  10,086,435 ns/iter (+/- 2,921,440)

before patch:
{'name': 'banking_bench_total', 'median': '68572.26'}
{'name': 'banking_bench_tx_total', 'median': '75704.75'}
{'name': 'banking_bench_success_tx_total', 'median': '68572.26'}
test bench_banking_stage_multi_accounts
... bench:   6,521,007 ns/iter (+/- 1,926,741)
test bench_banking_stage_multi_programs
... bench:  10,526,433 ns/iter (+/- 2,736,530)
2022-04-15 00:57:11 -06:00
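For reference, the quoted ~6% follows directly from the medians above: (72767.43 - 68572.26) / 68572.26 ≈ 6.1% for banking_bench_total, and (80240.38 - 75704.75) / 75704.75 ≈ 6.0% for banking_bench_tx_total.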
Tyera Eulberg
f7d557d5ae
Update simulateTransaction rpc handling of return_data, and update docs (#24355)
* Stringify return_data program_id; also camel-case all fields

* Update simulateTransaction json-rpc docs

* Base64-encode return data in simulation RPC responses
2022-04-14 23:42:08 -06:00
HaoranYi
e7e7e87c93
Fast log2 ceiling (#24301)
* typo

* a fast way to compute log2 ceiling

* rename and assert

* clippy

* fix test return 0 for empty slice

* add test for empty slice
2022-04-14 22:22:08 -05:00
ryleung-solana
8cfc010b84
Send async quic batch of txs (#24298)
Add an interface send_wire_transaction_batch_async to TpuConnection to allow for sending batches without waiting for completion

Co-authored-by: Anatoly Yakovenko <anatoly@solana.com>
2022-04-14 22:20:34 -04:00
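The new interface is a fire-and-forget counterpart to the existing blocking send. A minimal sketch of the shape of such an API, assuming illustrative definitions for everything except the send_wire_transaction_batch_async name itself:

```rust
// Sketch only; `WireTransaction` and the error type are assumptions,
// not the actual solana-client definitions.
use std::io;

/// A fully signed, serialized transaction ready to go on the wire.
type WireTransaction = Vec<u8>;

trait TpuConnection {
    /// Blocking send of one serialized transaction.
    fn send_wire_transaction(&self, wire_transaction: &[u8]) -> io::Result<()>;

    /// Hand off a whole batch and return immediately; the connection
    /// finishes the sends in the background without reporting completion.
    fn send_wire_transaction_batch_async(&self, buffers: Vec<WireTransaction>) -> io::Result<()>;
}
```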
Tyera Eulberg
5e8c12ebdf
Do not require default keypair to exist for bench-tps (#24356) 2022-04-14 19:05:08 -06:00
dependabot[bot]
77182fcdda
chore: bump fd-lock from 3.0.4 to 3.0.5 (#24344)
Bumps [fd-lock](https://github.com/yoshuawuyts/fd-lock) from 3.0.4 to 3.0.5.
- [Release notes](https://github.com/yoshuawuyts/fd-lock/releases)
- [Commits](https://github.com/yoshuawuyts/fd-lock/compare/v3.0.4...v3.0.5)

---
updated-dependencies:
- dependency-name: fd-lock
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-14 16:13:25 -06:00
Pankaj Garg
865e17f3cf
Fix merge Yaml file (#24354) 2022-04-14 14:15:39 -07:00
Jon Cinque
a8c695ba52
security: Set expectation on when to get a response (#24346)
* security: Set expectation on when to get a response

* Update SECURITY.md

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-04-14 21:05:57 +02:00
Jeff Washington (jwash)
0e7b0597db
check for rewrites skipped in closure (#24330) 2022-04-14 13:46:18 -05:00
Brooks Prumo
2456a7be35
Deprecate MINIMUM_STAKE_DELEGATION (#24329) 2022-04-14 11:59:18 -05:00
Jeff Washington (jwash)
d20b4c9958
rename function param to avoid conflict (#24342) 2022-04-14 11:56:32 -05:00
Jeff Washington (jwash)
a91b0c8ea3
dashmap -> rwlock<hashmap> for rewrites (#24327) 2022-04-14 11:55:58 -05:00
dependabot[bot]
e5fc0d3f76
chore: bump indexmap from 1.8.0 to 1.8.1 (#24334)
* chore: bump indexmap from 1.8.0 to 1.8.1

Bumps [indexmap](https://github.com/bluss/indexmap) from 1.8.0 to 1.8.1.
- [Release notes](https://github.com/bluss/indexmap/releases)
- [Changelog](https://github.com/bluss/indexmap/blob/master/RELEASES.md)
- [Commits](https://github.com/bluss/indexmap/compare/1.8.0...1.8.1)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-04-14 10:42:38 -06:00
Justin Starry
e570039ee0
Fix automerge automation for backport PRs (#24338) 2022-04-15 00:29:40 +08:00
HaoranYi
e3ef0741be
simplify bank drop calls (#24142)
* simplify bank drop calls

* clippy: import

* Update ledger/src/blockstore_processor.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* Update runtime/src/accounts_background_service.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* Update runtime/src/bank.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* cleanup

* format

* use msb of bank_id to indicate that we are dropping

* clippy

* restore bank id

* clippy

* revert is-serialized_with_abs flag

* assert

* clippy

* whitespace

* fix bank drop callback check

* more fix

* remove msb dropping implementation

* fix

Co-authored-by: Brooks Prumo <brooks@prumo.org>
2022-04-14 08:43:54 -05:00
Michael Vines
57ff7371b4 Add StakeInstruction::DeactivateDelinquent 2022-04-14 01:49:22 -04:00
dependabot[bot]
b9caa8cdfb
chore: bump uriparse from 0.6.3 to 0.6.4 (#23799)
* chore: bump uriparse from 0.6.3 to 0.6.4

Bumps [uriparse](https://github.com/sgodwincs/uriparse-rs) from 0.6.3 to 0.6.4.
- [Release notes](https://github.com/sgodwincs/uriparse-rs/releases)
- [Changelog](https://github.com/sgodwincs/uriparse-rs/blob/master/RELEASES.md)
- [Commits](https://github.com/sgodwincs/uriparse-rs/commits)

---
updated-dependencies:
- dependency-name: uriparse
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-04-13 19:33:42 -06:00
Jeff Washington (jwash)
255ef66a27
move feature activation log to correct fn for hash (#24326) 2022-04-13 17:43:33 -05:00
Justin Starry
e13efa0883 fix: do not modify transaction during simulation 2022-04-13 15:22:35 -07:00
Rachael Pai
aea17c35ae
Add a stringified credential option for LedgerStorage (#24314)
* add a stringified credential option for LedgerStorage

* fix clippy::useless-format warning

* change CredentialOption to enum CredentialType

* rename credential_option to credential_type

* restore LedgerStorage new fn signature

* fmt

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-04-13 14:35:06 -06:00
Brooks Prumo
e146e860e2
Add convenience function for programs to get minimum delegation (#24175) 2022-04-13 14:41:52 -05:00
man0s
b4b26894cd
Iterate on IDL account/instruction decoding (#24239)
* Switch to more integrated Anchor data decoding

* Revert anchor account data tab and better error handling
2022-04-13 15:38:59 -04:00
Tyera Eulberg
96e3555e93
Add RpcClient support to bench-tps (#24297)
* Impl BenchTpsClient for RpcClient

* Support RpcClient in bench-tps
2022-04-13 13:11:34 -06:00
Tyera Eulberg
26899359d1
Support quic in bench-tps (#24295)
* Update comment

* Use connection_cache in tpu_client

* Add --tpu-use-quic to bench-tps

* Use connection_cache async send
2022-04-13 12:17:10 -06:00
Jack May
2e0bc89ec4
featurize rejection of callx r10 instructions (#24309) 2022-04-13 11:09:33 -07:00
Jeff Washington (jwash)
6a474f29cd
default to disk index (#24251) 2022-04-13 09:24:50 -05:00
Jeff Washington (jwash)
b6b8783323
add maybe_update_rent_epoch_on_load (#24294) 2022-04-13 08:55:24 -05:00
Alexander Meißner
096febd593
Remove KeyedAccount in builtin program "address lookup table" (#24283)
* Makes sure that there is only one KeyedAccount at a time.

* KeyedAccount by BorrowedAccount in address_lookup_table.

* Cleanup unused code.
2022-04-13 12:17:07 +02:00
Alexander Meißner
b8ca1bcb68
Remove NativeLoader from program runtime (#24296)
* Deletes native_loader.rs

* Deletes the feature: remove_native_loader
2022-04-13 12:15:28 +02:00
Justin Starry
47b938e617
Don't request reviews for community pr's that have been reviewed (#24307) 2022-04-13 12:47:25 +08:00
Justin Starry
c29fca000b
Fix community PR review requests (#24306) 2022-04-13 11:24:30 +08:00
sakridge
e7fcda1424
Quic client stats (#24195)
* Add metrics to connection-cache to measure cache hits and misses

* Add congestion stats

* Add more client stats

* Review comments

Co-authored-by: Ryan Leung <ryan.leung@solana.com>
2022-04-13 05:04:40 +02:00
David Mohl
d8c45a69c3
fix: don't override a transaction's recentBlockhash when calling simulate if it's already set (#24280)
* Update simulate to add blockhash if not exist

Simulate has been overriding the recentBlockhash of the passed
Transaction which can be considered destructive and with side effects.

Since the purpose of this function is to purely simulate, it should not
override recentBlockhash if it has already been set

Refs https://github.com/solana-labs/solana/issues/24279

* Apply prettier
2022-04-13 10:15:50 +08:00
Jack May
a43ff3bbcb
cleanup (#24299) 2022-04-12 17:52:47 -07:00
Jack May
138f04a49f
featurize bpf function hash collision fix (#24262) 2022-04-12 17:52:32 -07:00
HaoranYi
929753a11f
typo (#24291) 2022-04-12 16:46:59 -05:00
Jack May
4ac730944e
Use explicit configuration of RBPF (#24286) 2022-04-12 13:54:39 -07:00
sakridge
7a4a6597c0
Don't enforce ulimit for validator test config (#24272) 2022-04-12 22:06:37 +02:00
Dmitri Makarov
6b611e1c52 Bump bpf-tools to v1.25
- Tweak linker script
  Ensure that all read-only sections end up in one segment, and
  everything else in other segments. Discard .eh_frame, .hash and
  .gnu.hash since they are unused.
- Don't create invalid string slices in stdout/stderr on Solana
- Report exceeded stack size as a warning if dynamic frames are off
- Native support for signed division in SBF
  Adds BPF_SDIV, which is enabled only for the SBF subtarget.
- Introduce dynamic stack frames and the SBFv2 flag
  Dynamic stack frames are currently opt-in and enabled by setting
  cpu=sbfv2. When sbfv2 is used, ELF files are flagged with
  e_flags=EF_SBF_V2 so the runtime can detect it and react
  accordingly.
2022-04-12 10:51:15 -07:00
Jeff Washington (jwash)
69e9ad5571
update comment (#24288) 2022-04-12 12:37:46 -05:00
Jack May
b035991c35
migrate members from deprecated structs (#24263) 2022-04-12 09:49:42 -07:00
Jeff Washington (jwash)
2d4d639635
add expected_rent_collection (#24028)
* add expected_rent_collection

* update some comments for clarity and resolve a todo

* add test for 'LeaveAloneNoRent'
2022-04-12 11:32:23 -05:00
Tyera Eulberg
8487030ea6
Add TpuClient support to bench-tps (#24227)
* Add fallible send methods, and rpc_client helper

* Add helper to return RpcClient url

* Implement BenchTpsClient for TpuClient

* Add cli rpc and identity handling

* Handle different kinds of clients in main, use TpuClient

* Add tpu_client integration test
2022-04-12 09:43:29 -06:00
Jeff Washington (jwash)
bdbca3362e
increase test timeout (#24277) 2022-04-12 09:54:57 -05:00
Jeff Washington (jwash)
1bc49d219d
IndexLimitMb option adds 'Unspecified' state (#24249) 2022-04-12 09:38:09 -05:00
HaoranYi
605036c117
move test fn into its own mod (#24212)
* move test fn into its own mod

* pub
2022-04-12 09:36:05 -05:00
anatoly yakovenko
474080608a
Async send for send transaction service (#24265)
* async send
2022-04-12 07:15:59 -07:00
HaoranYi
f3aa80d3f8
typo (#24257) 2022-04-12 09:08:35 -05:00
Justin Starry
9488a73f52
Don't request reviews from community-pr-subscribers if reviewer assigned (#24270) 2022-04-12 16:35:05 +08:00
Justin Starry
b6903dab6e
Explorer: Fix verified collection row rendering (#24269) 2022-04-12 08:22:42 +00:00
yung soosh
865a8307e2
Enable the explorer to render content from data URIs (#24235)
* Enable explorer to render images from data URIs

* Add regex to check for image mime type
2022-04-12 08:17:26 +00:00
Yueh-Hsuan Chiang
077bc4f407
(LedgerStore) Change the default RocksDB perf sample rate to 1 / 1000. (#24234) 2022-04-12 04:12:47 +00:00
Yueh-Hsuan Chiang
5a48ef72fd
(LedgerStore) Skip sampling check when ROCKSDB_PERF_CONTEXT_SAMPLES_IN_1K_DEFAULT = 0 (#24221)
#### Problem
Currently, even if SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K == 0, we are still doing
the sampling check for every RocksDB read.

```
thread_rng().gen_range(0, METRIC_SAMPLES_1K) > *ROCKSDB_PERF_CONTEXT_SAMPLES_IN_1K
```

#### Summary of Changes
This PR skips the sampling check when SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K
is set to 0.
2022-04-11 20:39:46 -07:00
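A minimal sketch of the short-circuit this PR describes, with assumed names around the gen_range expression quoted above (and assuming the two-argument gen_range of the rand version used at the time):

```rust
use rand::{thread_rng, Rng};

const METRIC_SAMPLES_1K: u32 = 1000;

// Returns true for roughly `samples_in_1k` out of every 1000 calls.
// When sampling is disabled (rate of 0), skip the per-read RNG draw entirely
// instead of rolling the dice (and always losing) on every RocksDB read.
fn should_sample_perf(samples_in_1k: u32) -> bool {
    if samples_in_1k == 0 {
        return false;
    }
    thread_rng().gen_range(0, METRIC_SAMPLES_1K) < samples_in_1k
}
```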
Jon Cinque
9b8850f99e
test-validator: Add --max-compute-units flag (#24130)
* test-validator: Add `--max-compute-units` flag

* Add `RuntimeConfig` for tweaking runtime behavior

* Actually add the file

* Move RuntimeConfig to runtime
2022-04-12 02:28:10 +02:00
Brooks Prumo
4fd184c131
Use Release/Acquire instead of SeqCst for is_bank_drop_callback_enabled (#24134) 2022-04-11 19:19:17 -05:00
Jeff Washington (jwash)
6fbe2b936c
fix comment (#24254) 2022-04-11 18:53:45 -05:00
Brooks Prumo
77f9f7cd60
Update docs for measure!() (#24255) 2022-04-11 18:15:16 -05:00
Jack May
8a754d45b3
Singular syscall context (#24204) 2022-04-11 16:05:09 -07:00
Michael Vines
c1687b0604 Switch to await-aware tokio::sync::Mutex 2022-04-11 18:15:03 -04:00
Michael Vines
a2be810dbc Resolve new clippy complaints 2022-04-11 18:15:03 -04:00
Michael Vines
552d684bdc Upgrade to Rust 1.60.0 2022-04-11 18:15:03 -04:00
Michael Vines
8f9554b5b9 Build rust docker images for linux/amd64 2022-04-11 18:15:03 -04:00
Will Hickey
a5e740431a
Add resolver = 2 to fix Windows build error on Travis CI (#24196) 2022-04-11 16:39:14 -05:00
Brooks Prumo
f7b00ada1b
GetMinimumDelegation does not require a stake account (#24192) 2022-04-11 16:26:36 -05:00
Brian Anderson
b38833923d
Use atomics instead of mutable statics in slot_hashes (#24091) 2022-04-11 15:12:50 -06:00
Tyera Eulberg
3871c85fd7
Add BenchTpsClient trait (#24208)
* Add BenchTpsClient

* Impl BenchTpsClient for used clients

* Use BenchTpsClient in do_bench

* Update integration test to use faucet via rpc

* Support keypairs from file that are not prefunded

* Remove old perf-utils
2022-04-11 13:45:40 -06:00
Jeff Washington (jwash)
c0019edf00
document WaitableCondvar (#24252) 2022-04-11 14:45:23 -05:00
Tyera Eulberg
8a73badf3d
Move helpers to solana-cli-config (#24246)
* Add solana-cli-utils crate

* Use cli-utils in cli

* Move println fn to cli-output

* Use cli-config instead
2022-04-11 12:56:51 -06:00
Jeff Washington (jwash)
9ac2245970
remove clone (#24244) 2022-04-11 13:15:00 -05:00
Giorgio Gambino
60b2155bd3
Add accounts-filler-size command line option (#23896) 2022-04-11 13:10:09 -05:00
Kwan Sohn
eb478d72d1
Add a measure! macro (#23084) (#24137)
Co-authored-by: Kwanwoo Sohn <kwan@Kwanwoos-MacBook-Air-2.local>
2022-04-11 12:50:52 -05:00
Jack May
85e5b1e902
Bump solana-rbpf to v0.2.25 (#24213) 2022-04-11 10:38:47 -07:00
samkim-crypto
b22abbce7d
Additional tests for proof verification when ElGamal pubkey is zeroed (#24243)
* zk-token-sdk: add edge case tests for withdraw withheld proof

* zk-token-sdk: add test cases for proof verification when pubkeys are invalid
2022-04-11 17:53:31 +01:00
HaoranYi
e14933c54d
move bank test fn to its test_utils mod (#24171) 2022-04-11 10:42:24 -05:00
sakridge
f8628d39e0
Check tpu quic socket (#24122) 2022-04-11 16:48:36 +02:00
Ikko Ashimine
ecfa1964ff
sdk: fix typo in lib.rs (#24240)
recieved -> received
2022-04-11 22:36:08 +08:00
Justin Starry
8eef3d9713
Add tests to the blockhash queue (#24238) 2022-04-11 19:36:24 +08:00
Trent Nelson
91993d89b0 cli: sort option for validators by version 2022-04-11 00:47:47 -06:00
Alexander Meißner
bf13fb4c4b
Remove KeyedAccount in builtin program "stake" (#24210)
* Inline keyed_account_at_index() in all instructions of stake
which have more than one KeyedAccount parameter,
because these could cause a borrow collision.

* Uses transaction_context.get_key_of_account_at_index() in stake.

* Refactors stake::config::from to use BorrowedAccount instead of ReadableAccount.

* Replaces KeyedAccount by BorrowedAccount in stake.
2022-04-10 09:55:37 +02:00
steveluscher
1882434c69 test: add test for signature notifications 2022-04-09 19:43:15 -07:00
steveluscher
21a64db140 test: refactor notification tests on the basis of promises rather than polling 2022-04-09 19:43:15 -07:00
steveluscher
db50893fa1 test: reenable account change subscription test 2022-04-09 19:43:15 -07:00
steveluscher
35ee38b0f1 test: reenable log subscription test 2022-04-09 19:43:15 -07:00
carllin
ff3b6d2b8b
Remove duplicate increment (#24219) 2022-04-09 15:21:39 -05:00
samkim-crypto
b2d502b461
zk-token-sdk: add support for scalar - ciphertext/commitment multiplication (#24120) 2022-04-09 14:19:29 +01:00
dependabot[bot]
e98575743e
chore(deps): bump moment from 2.29.1 to 2.29.2 in /explorer (#24222)
Bumps [moment](https://github.com/moment/moment) from 2.29.1 to 2.29.2.
- [Release notes](https://github.com/moment/moment/releases)
- [Changelog](https://github.com/moment/moment/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/moment/moment/compare/2.29.1...2.29.2)

---
updated-dependencies:
- dependency-name: moment
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-09 13:03:00 +00:00
Bijie Zhu
330bdc6580 filter the list before checking --no-snapshot-fetch 2022-04-09 00:41:56 -06:00
Jeff Washington (jwash)
64abd008ca
make ledger-tool arg help consistent (#24203) 2022-04-08 15:45:09 -05:00
Christian Kamm
a058f348a2 Address review comments 2022-04-08 14:37:55 -05:00
Christian Kamm
2ed29771f2 Unittest for cost tracker after process_and_record_transactions 2022-04-08 14:37:55 -05:00
Christian Kamm
924b8ea1eb Adjustments to cost_tracker updates
- don't store pending tx signatures and costs in CostTracker
- apply tx costs to global state immediately again
- go from commit_or_cancel to update_or_remove, where the cost tracker
  is either updated with the true costs for successful tx, or the costs
  of a retryable tx are removed
- move the function into qos_service and hold the cost tracker lock for
  the whole loop
2022-04-08 14:37:55 -05:00
Tao Zhu
9e07272af8 - Only commit successfully executed transactions' cost to cost_tracker;
- In-flight transactions are held in cost_tracker until they are committed
  or cancelled;
2022-04-08 14:37:55 -05:00
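Taken together, these two changes replace a commit_or_cancel scheme with update_or_remove: an estimate is applied to the tracker up front, then either confirmed with the true cost of a committed transaction or rolled back for a retryable one. A minimal sketch of that flow, with assumed types rather than the actual cost_tracker code:

```rust
struct CostTracker {
    block_cost: u64,
}

impl CostTracker {
    /// Reserve the estimated cost immediately, before execution.
    fn add_estimated(&mut self, estimated: u64) {
        self.block_cost += estimated;
    }

    /// After execution: a committed transaction keeps its actual cost,
    /// while a retryable one has its reserved estimate removed.
    fn update_or_remove(&mut self, estimated: u64, committed: Option<u64>) {
        match committed {
            Some(actual_cost) => self.block_cost = self.block_cost - estimated + actual_cost,
            None => self.block_cost -= estimated,
        }
    }
}
```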
Alexander Meißner
2e5042d8bd
Remove KeyedAccount in builtin program "vote" (#24189)
* Uses transaction_context.get_key_of_account_at_index() in vote.

* Inline keyed_account_at_index() in all instructions of vote
which have more than one KeyedAccount parameter,
because these could cause a borrow collision.

* Replaces KeyedAccount by BorrowedAccount in vote.
2022-04-08 20:40:50 +02:00
Alexander Meißner
fad9bd0538
Removes KeyedAccount parameter from get_if_mergeable(). (#24190) 2022-04-08 20:40:09 +02:00
steviez
c090418f26
List cmake as a package to install in build instructions (#24199) 2022-04-08 12:45:09 -05:00
steviez
6ca84f8a40
Move PurgeType enum to blockstore_purge.rs (#24185) 2022-04-08 11:46:12 -05:00
Tyera Eulberg
d2702201ca
Bump tonic, tonic-build, prost, and etcd-client (#24147)
* Bump tonic, prost, and etcd-client

* Restore doc ignores
2022-04-08 10:21:45 -06:00
Dmitri Makarov
689064a4f4 Bump sbf-tools version to v1.24 2022-04-08 09:06:40 -07:00
Dmitri Makarov
03ed334ebb Double the chunk size for sending the program binary data in tx 2022-04-08 09:06:40 -07:00
Jeff Washington (jwash)
210f6a6fab
move hash calculation out of acct bg svc (#23689)
* move hash calculation out of acct bg svc

* pr feedback
2022-04-08 10:42:03 -05:00
Alexander Meißner
cb1507126f
Fixes check_number_of_instruction_accounts() in StakeInstruction::Authorize. (#24172) 2022-04-08 12:43:55 +02:00
Yueh-Hsuan Chiang
1f136de294
(LedgerStore) Report perf metrics for RocksDB deletes (#24138)
#### Summary of Changes
This PR enables perf metrics reporting for RocksDB deletes.
Samples are reported under "blockstore_rocksdb_write_perf" with op=delete
The sampling rate is still controlled by env arg SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K
and it defaults to 10 (meaning we report 10 in 1000 perf samples).
2022-04-08 00:18:05 -07:00
Yueh-Hsuan Chiang
b84521d47d
(LedgerStore) Report perf metrics for RocksDB write batch (#24061)
#### Summary of Changes
This PR enables perf metrics reporting for RocksDB write-batches.
Samples are reported under "blockstore_rocksdb_write_perf" with op=write_batch

Its cf_name tag is set to "write_batch" as well as each write-batch could include multiple column families.

The sampling rate is still controlled by env arg SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K
and it defaults to 10 (meaning we report 10 in 1000 perf samples).
2022-04-08 00:17:51 -07:00
steviez
1dd63631c0
Add high level overview comments on ledger_cleanup_service (#24184) 2022-04-08 00:49:21 -05:00
HaoranYi
e105547c14
tvu and tpu timeout on joining its microservices (#24111)
* panic when test timeout

* nonblocking send when dropping banks

* debug log

* timeout for tvu

* unused variable

* timeout for tpu

* Revert "debug log"

This reverts commit da780a3301a51d7c496141a85fcd35014fe6dff5.

* add timeout const

* fix typo

* Revert "nonblocking send when dropping banks".
I will create another pull request for this.

This reverts commit 088c98ec0facf825b5eca058fb860deba6d28888.

* Update core/src/tpu.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/tpu.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/tvu.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/tvu.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/validator.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-04-07 20:20:13 -05:00
Tyera Eulberg
fbe5e51a16
Move duplicate-block proposal (#24167) 2022-04-07 17:30:31 -06:00
steveluscher
4dd3987451 Reset onLogs subscriptions when websocket disconnects 2022-04-07 15:45:35 -07:00
T.J. Kyner
781094edb2
providing clarity on airdrop amount constraints (#24115)
* providing clarity on airdrop amount constraints

This change is in response to a review of a PR in the `solana-program-library` found here: https://github.com/solana-labs/solana-program-library/pull/3062

* replaced static limits with info on how to find them

* removed trailing whitespace
2022-04-07 16:35:13 -06:00
Jeff Washington (jwash)
c27150b1a3
reserialize_bank_fields_with_hash (#23916)
* reserialize_bank_with_new_accounts_hash

* Update runtime/src/serde_snapshot.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* Update runtime/src/serde_snapshot/tests.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* Update runtime/src/serde_snapshot/tests.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* pr feedback

Co-authored-by: Brooks Prumo <brooks@prumo.org>
2022-04-07 14:05:57 -05:00
ignassew
0c2d9194dd
Fix typo in solana-program lib.rs (#24170) 2022-04-07 11:23:54 -06:00
Brooks Prumo
a100b32b37
Add test for GetMinimumDelegation stake instruction (#24158) 2022-04-07 11:54:15 -05:00
Jeff Washington (jwash)
48d1af01c8
add metrics around rewards (#24160) 2022-04-07 11:44:26 -05:00
Jeff Washington (jwash)
f7b2951c79
move around some index metrics to reduce locks (#24161) 2022-04-07 09:43:19 -05:00
HaoranYi
42c094739d
add test cfg attribute (#24154) 2022-04-07 09:05:20 -05:00
Yueh-Hsuan Chiang
206c3dd402
(LedgerStore) Enable RocksDB Perf metrics reporting for get_bytes and put_bytes (#24066)
#### Summary of Changes
Enable RocksDB Perf metrics reporting for get_bytes and put_bytes.
2022-04-07 00:24:10 -07:00
Yueh-Hsuan Chiang
4f0e887702
(LedgerStore) Report RocksDB perf metrics for Protobuf Columns (#24065)
This PR enables the reporting of both RocksDB read and write perf metrics for ProtobufColumns,
including TransactionStatus and Rewards.
2022-04-07 00:15:00 -07:00
Jeff Washington (jwash)
550ca7bf92
compare contents of serialized banks instead of exact file format (#24141)
* compare contents of serialized banks instead of exact file format

* Update runtime/src/snapshot_utils.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* Update runtime/src/snapshot_utils.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

* pr feedback

* get rid of clone

* pr feedback

Co-authored-by: Brooks Prumo <brooks@prumo.org>
2022-04-06 21:55:44 -05:00
Jeff Washington (jwash)
fddd162645
reserialize bank in ahv by first writing to temp file in abs (#23947) 2022-04-06 21:39:26 -05:00
Tyera Eulberg
fb67ff14de
Remove replica-node crates (#24152) 2022-04-06 16:52:19 -06:00
Chao Xu
7ee1edddd1
add transaction update error to geyser plugin interface. (#24140) 2022-04-06 15:41:23 -07:00
Tyera Eulberg
afeb1d3cca
Bump lru crate (#24150) 2022-04-06 16:18:42 -06:00
Alexander Meißner
efb9cbd8e7
Refactor: Remove trait from stake keyed account (#24148)
Removes trait from StakeAccount.
2022-04-06 22:58:09 +02:00
Alexander Meißner
25304ce485
Inlines verify_rent_exemption() in vote processor. (#24146) 2022-04-06 22:13:06 +02:00
Yueh-Hsuan Chiang
2d1f27ed8e
(LedgerStore) Perf Metric for RocksDB Writes (#23951)
#### Summary of Changes
This PR implements the reporting of RocksDB write perf metrics to blockstore_rocksdb_write_perf
based on RocksDB's PerfContext.  The default sample rate is 10 in 1000, and the env arg SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K can control the sample rate.
2022-04-06 12:12:38 -07:00
235 changed files with 18228 additions and 12519 deletions

View File: .mergify.yml

@@ -24,6 +24,7 @@ pull_request_rules:
       - "#approved-reviews-by=0"
       - "#commented-reviews-by=0"
       - "#changes-requested-reviews-by=0"
+      - "#review-requested=0"
     actions:
       request_reviews:
         teams:
@@ -34,6 +35,7 @@ pull_request_rules:
       - status-success=buildkite/solana
      - status-success=ci-gate
       - label=automerge
+      - label!=no-automerge
       - author≠@dont-squash-my-commits
       - or:
         # only require travis success if docs files changed
@@ -60,6 +62,7 @@ pull_request_rules:
       - status-success=Travis CI - Pull Request
       - status-success=ci-gate
       - label=automerge
+      - label!=no-automerge
       - author=@dont-squash-my-commits
       - or:
         # only require explorer checks if explorer files changed
@@ -88,6 +91,17 @@ pull_request_rules:
     actions:
       dismiss_reviews:
         changes_requested: true
+  - name: set automerge label on mergify backport PRs
+    conditions:
+      - author=mergify[bot]
+      - head~=^mergify/bp/
+      - "#status-failure=0"
+      - "-merged"
+      - label!=no-automerge
+    actions:
+      label:
+        add:
+          - automerge
   - name: v1.9 feature-gate backport
     conditions:
       - label=v1.9
@@ -96,7 +110,6 @@ pull_request_rules:
       backport:
         ignore_conflicts: true
         labels:
-          - automerge
           - feature-gate
         branches:
           - v1.9
@@ -107,8 +120,6 @@
     actions:
       backport:
         ignore_conflicts: true
-        labels:
-          - automerge
         branches:
           - v1.9
   - name: v1.10 feature-gate backport
@@ -119,7 +130,6 @@
       backport:
         ignore_conflicts: true
         labels:
-          - automerge
           - feature-gate
         branches:
           - v1.10
@@ -130,8 +140,6 @@
     actions:
       backport:
         ignore_conflicts: true
-        labels:
-          - automerge
         branches:
           - v1.10

Cargo.lock (generated): 544 lines changed

File diff suppressed because it is too large

View File: Cargo.toml

@@ -59,8 +59,6 @@ members = [
     "rayon-threadlimit",
     "rbpf-cli",
     "remote-wallet",
-    "replica-lib",
-    "replica-node",
     "rpc",
     "rpc-test",
     "runtime",
@@ -88,3 +86,6 @@ members = [
 exclude = [
     "programs/bpf",
 ]
+
+# This prevents a Travis CI error when building for Windows.
+resolver = "2"

View File

@@ -35,7 +35,7 @@ On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc
 ```bash
 $ sudo apt-get update
-$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang make
+$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang cmake make
 ```

 ## **2. Download the source code.**
View File: SECURITY.md

@@ -11,7 +11,7 @@
 email to security@solana.com and provide your github username so we can add you
 to a new draft security advisory for further discussion.

-Expect a response as fast as possible, within one business day at the latest.
+Expect a response as fast as possible, typically within 72 hours.

 <a name="bounty"></a>
 ## Security Bug Bounties

View File

@@ -6,7 +6,10 @@ use {
     rayon::prelude::*,
     solana_measure::measure::Measure,
     solana_runtime::{
-        accounts::{create_test_accounts, update_accounts_bench, Accounts},
+        accounts::{
+            test_utils::{create_test_accounts, update_accounts_bench},
+            Accounts,
+        },
         accounts_db::AccountShrinkThreshold,
         accounts_index::AccountSecondaryIndexes,
         ancestors::Ancestors,

View File: bench-tps/Cargo.toml

@@ -15,6 +15,8 @@ log = "0.4.14"
 rayon = "1.5.1"
 serde_json = "1.0.79"
 serde_yaml = "0.8.23"
+solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
+solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
 solana-client = { path = "../client", version = "=1.11.0" }
 solana-core = { path = "../core", version = "=1.11.0" }
 solana-faucet = { path = "../faucet", version = "=1.11.0" }
@@ -24,14 +26,17 @@ solana-logger = { path = "../logger", version = "=1.11.0" }
 solana-measure = { path = "../measure", version = "=1.11.0" }
 solana-metrics = { path = "../metrics", version = "=1.11.0" }
 solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
+solana-rpc = { path = "../rpc", version = "=1.11.0" }
 solana-runtime = { path = "../runtime", version = "=1.11.0" }
 solana-sdk = { path = "../sdk", version = "=1.11.0" }
 solana-streamer = { path = "../streamer", version = "=1.11.0" }
 solana-version = { path = "../version", version = "=1.11.0" }
+thiserror = "1.0"

 [dev-dependencies]
 serial_test = "0.6.0"
 solana-local-cluster = { path = "../local-cluster", version = "=1.11.0" }
+solana-test-validator = { path = "../test-validator", version = "=1.11.0" }

 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]

View File: bench-tps/src/bench.rs

@@ -1,19 +1,21 @@
 use {
-    crate::cli::Config,
+    crate::{
+        bench_tps_client::*,
+        cli::Config,
+        perf_utils::{sample_txs, SampleStats},
+    },
     log::*,
     rayon::prelude::*,
-    solana_client::perf_utils::{sample_txs, SampleStats},
     solana_core::gen_keys::GenKeys,
-    solana_faucet::faucet::request_airdrop_transaction,
     solana_measure::measure::Measure,
     solana_metrics::{self, datapoint_info},
     solana_sdk::{
-        client::Client,
         clock::{DEFAULT_MS_PER_SLOT, DEFAULT_S_PER_SLOT, MAX_PROCESSING_AGE},
         commitment_config::CommitmentConfig,
         hash::Hash,
         instruction::{AccountMeta, Instruction},
         message::Message,
+        native_token::Sol,
         pubkey::Pubkey,
         signature::{Keypair, Signer},
         system_instruction, system_transaction,
@@ -22,7 +24,6 @@ use {
     },
     std::{
         collections::{HashSet, VecDeque},
-        net::SocketAddr,
         process::exit,
         sync::{
             atomic::{AtomicBool, AtomicIsize, AtomicUsize, Ordering},
@@ -38,16 +39,9 @@ const MAX_TX_QUEUE_AGE: u64 = (MAX_PROCESSING_AGE as f64 * DEFAULT_S_PER_SLOT) a
 pub const MAX_SPENDS_PER_TX: u64 = 4;

-#[derive(Debug)]
-pub enum BenchTpsError {
-    AirdropFailure,
-}
-
-pub type Result<T> = std::result::Result<T, BenchTpsError>;
-
 pub type SharedTransactions = Arc<RwLock<VecDeque<Vec<(Transaction, u64)>>>>;

-fn get_latest_blockhash<T: Client>(client: &T) -> Hash {
+fn get_latest_blockhash<T: BenchTpsClient>(client: &T) -> Hash {
     loop {
         match client.get_latest_blockhash_with_commitment(CommitmentConfig::processed()) {
             Ok((blockhash, _)) => return blockhash,
@@ -61,7 +55,7 @@ fn get_latest_blockhash<T: Client>(client: &T) -> Hash {
 fn wait_for_target_slots_per_epoch<T>(target_slots_per_epoch: u64, client: &Arc<T>)
 where
-    T: 'static + Client + Send + Sync,
+    T: 'static + BenchTpsClient + Send + Sync,
 {
     if target_slots_per_epoch != 0 {
         info!(
@@ -91,7 +85,7 @@ fn create_sampler_thread<T>(
     maxes: &Arc<RwLock<Vec<(String, SampleStats)>>>,
 ) -> JoinHandle<()>
 where
-    T: 'static + Client + Send + Sync,
+    T: 'static + BenchTpsClient + Send + Sync,
 {
     info!("Sampling TPS every {} second...", sample_period);
     let exit_signal = exit_signal.clone();
@@ -169,7 +163,7 @@ fn create_sender_threads<T>(
     shared_tx_active_thread_count: &Arc<AtomicIsize>,
 ) -> Vec<JoinHandle<()>>
 where
-    T: 'static + Client + Send + Sync,
+    T: 'static + BenchTpsClient + Send + Sync,
 {
     (0..threads)
         .map(|_| {
@@ -197,7 +191,7 @@ where
 pub fn do_bench_tps<T>(client: Arc<T>, config: Config, gen_keypairs: Vec<Keypair>) -> u64
 where
-    T: 'static + Client + Send + Sync,
+    T: 'static + BenchTpsClient + Send + Sync,
 {
     let Config {
         id,
@@ -391,7 +385,7 @@ fn generate_txs(
     }
 }

-fn get_new_latest_blockhash<T: Client>(client: &Arc<T>, blockhash: &Hash) -> Option<Hash> {
+fn get_new_latest_blockhash<T: BenchTpsClient>(client: &Arc<T>, blockhash: &Hash) -> Option<Hash> {
     let start = Instant::now();
     while start.elapsed().as_secs() < 5 {
         if let Ok(new_blockhash) = client.get_latest_blockhash() {
@@ -407,7 +401,7 @@ fn get_new_latest_blockhash<T: Client>(client: &Arc<T>, blockhash: &Hash) -> Opt
     None
 }

-fn poll_blockhash<T: Client>(
+fn poll_blockhash<T: BenchTpsClient>(
     exit_signal: &Arc<AtomicBool>,
     blockhash: &Arc<RwLock<Hash>>,
     client: &Arc<T>,
@@ -449,7 +443,7 @@ fn poll_blockhash<T: Client>(
     }
 }

-fn do_tx_transfers<T: Client>(
+fn do_tx_transfers<T: BenchTpsClient>(
     exit_signal: &Arc<AtomicBool>,
     shared_txs: &SharedTransactions,
     shared_tx_thread_count: &Arc<AtomicIsize>,
@@ -467,11 +461,7 @@ fn do_tx_transfers<T: Client>(
     };
     if let Some(txs0) = txs {
         shared_tx_thread_count.fetch_add(1, Ordering::Relaxed);
-        info!(
-            "Transferring 1 unit {} times... to {}",
-            txs0.len(),
-            client.as_ref().tpu_addr(),
-        );
+        info!("Transferring 1 unit {} times...", txs0.len());
         let tx_len = txs0.len();
         let transfer_start = Instant::now();
         let mut old_transactions = false;
@@ -487,7 +477,7 @@ fn do_tx_transfers<T: Client>(
             transactions.push(tx.0);
         }

-        if let Err(error) = client.async_send_batch(transactions) {
+        if let Err(error) = client.send_batch(transactions) {
             warn!("send_batch_sync in do_tx_transfers failed: {}", error);
         }
@@ -514,7 +504,11 @@
     }
 }

-fn verify_funding_transfer<T: Client>(client: &Arc<T>, tx: &Transaction, amount: u64) -> bool {
+fn verify_funding_transfer<T: BenchTpsClient>(
+    client: &Arc<T>,
+    tx: &Transaction,
+    amount: u64,
+) -> bool {
     for a in &tx.message().account_keys[1..] {
         match client.get_balance_with_commitment(a, CommitmentConfig::processed()) {
             Ok(balance) => return balance >= amount,
@@ -525,7 +519,7 @@ fn verify_funding_transfer<T: Client>(client: &Arc<T>, tx: &Transaction, amount:
 }

 trait FundingTransactions<'a> {
-    fn fund<T: 'static + Client + Send + Sync>(
+    fn fund<T: 'static + BenchTpsClient + Send + Sync>(
         &mut self,
         client: &Arc<T>,
         to_fund: &[(&'a Keypair, Vec<(Pubkey, u64)>)],
@@ -533,12 +527,16 @@
     );
     fn make(&mut self, to_fund: &[(&'a Keypair, Vec<(Pubkey, u64)>)]);
     fn sign(&mut self, blockhash: Hash);
-    fn send<T: Client>(&self, client: &Arc<T>);
-    fn verify<T: 'static + Client + Send + Sync>(&mut self, client: &Arc<T>, to_lamports: u64);
+    fn send<T: BenchTpsClient>(&self, client: &Arc<T>);
+    fn verify<T: 'static + BenchTpsClient + Send + Sync>(
+        &mut self,
+        client: &Arc<T>,
+        to_lamports: u64,
+    );
 }

 impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
-    fn fund<T: 'static + Client + Send + Sync>(
+    fn fund<T: 'static + BenchTpsClient + Send + Sync>(
         &mut self,
         client: &Arc<T>,
         to_fund: &[(&'a Keypair, Vec<(Pubkey, u64)>)],
@@ -607,16 +605,20 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
         debug!("sign {} txs: {}us", self.len(), sign_txs.as_us());
     }

-    fn send<T: Client>(&self, client: &Arc<T>) {
+    fn send<T: BenchTpsClient>(&self, client: &Arc<T>) {
         let mut send_txs = Measure::start("send_txs");
         self.iter().for_each(|(_, tx)| {
-            client.async_send_transaction(tx.clone()).expect("transfer");
+            client.send_transaction(tx.clone()).expect("transfer");
         });
         send_txs.stop();
         debug!("send {} txs: {}us", self.len(), send_txs.as_us());
     }

-    fn verify<T: 'static + Client + Send + Sync>(&mut self, client: &Arc<T>, to_lamports: u64) {
+    fn verify<T: 'static + BenchTpsClient + Send + Sync>(
+        &mut self,
+        client: &Arc<T>,
+        to_lamports: u64,
+    ) {
         let starting_txs = self.len();
         let verified_txs = Arc::new(AtomicUsize::new(0));
         let too_many_failures = Arc::new(AtomicBool::new(false));
@@ -691,7 +693,7 @@
 /// fund the dests keys by spending all of the source keys into MAX_SPENDS_PER_TX
 /// on every iteration. This allows us to replay the transfers because the source is either empty,
 /// or full
-pub fn fund_keys<T: 'static + Client + Send + Sync>(
+pub fn fund_keys<T: 'static + BenchTpsClient + Send + Sync>(
     client: Arc<T>,
     source: &Keypair,
     dests: &[Keypair],
@@ -733,75 +735,6 @@
     }
 }

-pub fn airdrop_lamports<T: Client>(
-    client: &T,
-    faucet_addr: &SocketAddr,
-    id: &Keypair,
-    desired_balance: u64,
-) -> Result<()> {
-    let starting_balance = client.get_balance(&id.pubkey()).unwrap_or(0);
-    metrics_submit_lamport_balance(starting_balance);
-    info!("starting balance {}", starting_balance);
-
-    if starting_balance < desired_balance {
-        let airdrop_amount = desired_balance - starting_balance;
-        info!(
-            "Airdropping {:?} lamports from {} for {}",
-            airdrop_amount,
-            faucet_addr,
-            id.pubkey(),
-        );
-
-        let blockhash = get_latest_blockhash(client);
-        match request_airdrop_transaction(faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
-            Ok(transaction) => {
-                let mut tries = 0;
-                loop {
-                    tries += 1;
-                    let signature = client.async_send_transaction(transaction.clone()).unwrap();
-                    let result = client.poll_for_signature_confirmation(&signature, 1);
-
-                    if result.is_ok() {
-                        break;
-                    }
-                    if tries >= 5 {
-                        panic!(
-                            "Error requesting airdrop: to addr: {:?} amount: {} {:?}",
-                            faucet_addr, airdrop_amount, result
-                        )
-                    }
-                }
-            }
-            Err(err) => {
-                panic!(
-                    "Error requesting airdrop: {:?} to addr: {:?} amount: {}",
-                    err, faucet_addr, airdrop_amount
-                );
-            }
-        };
-
-        let current_balance = client
-            .get_balance_with_commitment(&id.pubkey(), CommitmentConfig::processed())
-            .unwrap_or_else(|e| {
-                info!("airdrop error {}", e);
-                starting_balance
-            });
-        info!("current balance {}...", current_balance);
-        metrics_submit_lamport_balance(current_balance);
-
-        if current_balance - starting_balance != airdrop_amount {
-            info!(
-                "Airdrop failed! {} {} {}",
-                id.pubkey(),
-                current_balance,
-                starting_balance
-            );
-            return Err(BenchTpsError::AirdropFailure);
-        }
-    }
-    Ok(())
-}
-
 fn compute_and_report_stats(
     maxes: &Arc<RwLock<Vec<(String, SampleStats)>>>,
     sample_period: u64,
@@ -885,15 +818,33 @@ pub fn generate_keypairs(seed_keypair: &Keypair, count: u64) -> (Vec<Keypair>, u
     (rnd.gen_n_keypairs(total_keys), extra)
 }

-pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
+pub fn generate_and_fund_keypairs<T: 'static + BenchTpsClient + Send + Sync>(
     client: Arc<T>,
-    faucet_addr: Option<SocketAddr>,
     funding_key: &Keypair,
     keypair_count: usize,
     lamports_per_account: u64,
 ) -> Result<Vec<Keypair>> {
+    let rent = client.get_minimum_balance_for_rent_exemption(0)?;
+    let lamports_per_account = lamports_per_account + rent;
+
     info!("Creating {} keypairs...", keypair_count);
     let (mut keypairs, extra) = generate_keypairs(funding_key, keypair_count as u64);
+    fund_keypairs(client, funding_key, &keypairs, extra, lamports_per_account)?;
+
+    // 'generate_keypairs' generates extra keys to be able to have size-aligned funding batches for fund_keys.
+    keypairs.truncate(keypair_count);
+
+    Ok(keypairs)
+}
+
+pub fn fund_keypairs<T: 'static + BenchTpsClient + Send + Sync>(
+    client: Arc<T>,
+    funding_key: &Keypair,
+    keypairs: &[Keypair],
+    extra: u64,
+    lamports_per_account: u64,
+) -> Result<()> {
+    let rent = client.get_minimum_balance_for_rent_exemption(0)?;
     info!("Get lamports...");

     // Sample the first keypair, to prevent lamport loss on repeated solana-bench-tps executions
@@ -901,7 +852,7 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
     let first_keypair_balance = client.get_balance(&first_key).unwrap_or(0);

     // Sample the last keypair, to check if funding was already completed
-    let last_key = keypairs[keypair_count - 1].pubkey();
+    let last_key = keypairs[keypairs.len() - 1].pubkey();
     let last_keypair_balance = client.get_balance(&last_key).unwrap_or(0);

     // Repeated runs will eat up keypair balances from transaction fees. In order to quickly
@@ -930,24 +881,35 @@
             funding_key_balance, max_fee, lamports_per_account, extra, total
         );

-        if client.get_balance(&funding_key.pubkey()).unwrap_or(0) < total {
-            airdrop_lamports(client.as_ref(), &faucet_addr.unwrap(), funding_key, total)?;
+        if funding_key_balance < total + rent {
+            error!(
+                "funder has {}, needed {}",
+                Sol(funding_key_balance),
+                Sol(total)
+            );
+            let latest_blockhash = get_latest_blockhash(client.as_ref());
+            if client
+                .request_airdrop_with_blockhash(
+                    &funding_key.pubkey(),
+                    total + rent - funding_key_balance,
+                    &latest_blockhash,
+                )
+                .is_err()
+            {
+                return Err(BenchTpsError::AirdropFailure);
+            }
         }

         fund_keys(
             client,
             funding_key,
-            &keypairs,
+            keypairs,
             total,
             max_fee,
             lamports_per_account,
         );
     }
-
-    // 'generate_keypairs' generates extra keys to be able to have size-aligned funding batches for fund_keys.
-    keypairs.truncate(keypair_count);
-
-    Ok(keypairs)
+    Ok(())
 }

 #[cfg(test)]
@@ -956,14 +918,14 @@ mod tests {
         super::*,
         solana_runtime::{bank::Bank, bank_client::BankClient},
         solana_sdk::{
-            client::SyncClient, fee_calculator::FeeRateGovernor,
-            genesis_config::create_genesis_config,
+            fee_calculator::FeeRateGovernor, genesis_config::create_genesis_config,
+            native_token::sol_to_lamports,
         },
     };

     #[test]
     fn test_bench_tps_bank_client() {
-        let (genesis_config, id) = create_genesis_config(10_000);
+        let (genesis_config, id) = create_genesis_config(sol_to_lamports(10_000.0));
         let bank = Bank::new_for_tests(&genesis_config);
         let client = Arc::new(BankClient::new(bank));
@@ -976,48 +938,49 @@ mod tests {
         let keypair_count = config.tx_count * config.keypair_multiplier;
         let keypairs =
-            generate_and_fund_keypairs(client.clone(), None, &config.id, keypair_count, 20)
-                .unwrap();
+            generate_and_fund_keypairs(client.clone(), &config.id, keypair_count, 20).unwrap();

         do_bench_tps(client, config, keypairs);
     }

     #[test]
     fn test_bench_tps_fund_keys() {
-        let (genesis_config, id) = create_genesis_config(10_000);
+        let (genesis_config, id) = create_genesis_config(sol_to_lamports(10_000.0));
         let bank = Bank::new_for_tests(&genesis_config);
         let client = Arc::new(BankClient::new(bank));
         let keypair_count = 20;
         let lamports = 20;
+        let rent = client.get_minimum_balance_for_rent_exemption(0).unwrap();

         let keypairs =
-            generate_and_fund_keypairs(client.clone(), None, &id, keypair_count, lamports).unwrap();
+            generate_and_fund_keypairs(client.clone(), &id, keypair_count, lamports).unwrap();

         for kp in &keypairs {
             assert_eq!(
                 client
                     .get_balance_with_commitment(&kp.pubkey(), CommitmentConfig::processed())
                     .unwrap(),
-                lamports
+                lamports + rent
             );
         }
     }

     #[test]
     fn test_bench_tps_fund_keys_with_fees() {
-        let (mut genesis_config, id) = create_genesis_config(10_000);
+        let (mut genesis_config, id) = create_genesis_config(sol_to_lamports(10_000.0));
         let fee_rate_governor = FeeRateGovernor::new(11, 0);
         genesis_config.fee_rate_governor = fee_rate_governor;
         let bank = Bank::new_for_tests(&genesis_config);
         let client = Arc::new(BankClient::new(bank));
         let keypair_count = 20;
         let lamports = 20;
+        let rent = client.get_minimum_balance_for_rent_exemption(0).unwrap();

         let keypairs =
-            generate_and_fund_keypairs(client.clone(), None, &id, keypair_count, lamports).unwrap();
+            generate_and_fund_keypairs(client.clone(), &id, keypair_count, lamports).unwrap();

         for kp in &keypairs {
-            assert_eq!(client.get_balance(&kp.pubkey()).unwrap(), lamports);
+            assert_eq!(client.get_balance(&kp.pubkey()).unwrap(), lamports + rent);
         }
     }
 }

bench-tps/src/bench_tps_client.rs (new file)
@@ -0,0 +1,87 @@
use {
solana_client::{client_error::ClientError, tpu_client::TpuSenderError},
solana_sdk::{
commitment_config::CommitmentConfig, epoch_info::EpochInfo, hash::Hash, message::Message,
pubkey::Pubkey, signature::Signature, transaction::Transaction, transport::TransportError,
},
thiserror::Error,
};
#[derive(Error, Debug)]
pub enum BenchTpsError {
#[error("Airdrop failure")]
AirdropFailure,
#[error("IO error: {0:?}")]
IoError(#[from] std::io::Error),
#[error("Client error: {0:?}")]
ClientError(#[from] ClientError),
#[error("TpuClient error: {0:?}")]
TpuSenderError(#[from] TpuSenderError),
#[error("Transport error: {0:?}")]
TransportError(#[from] TransportError),
#[error("Custom error: {0}")]
Custom(String),
}
pub(crate) type Result<T> = std::result::Result<T, BenchTpsError>;
pub trait BenchTpsClient {
/// Send a signed transaction without confirmation
fn send_transaction(&self, transaction: Transaction) -> Result<Signature>;
/// Send a batch of signed transactions without confirmation.
fn send_batch(&self, transactions: Vec<Transaction>) -> Result<()>;
/// Get latest blockhash
fn get_latest_blockhash(&self) -> Result<Hash>;
/// Get latest blockhash and its last valid block height, using explicit commitment
fn get_latest_blockhash_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<(Hash, u64)>;
/// Get transaction count
fn get_transaction_count(&self) -> Result<u64>;
/// Get transaction count, using explicit commitment
fn get_transaction_count_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<u64>;
/// Get epoch info
fn get_epoch_info(&self) -> Result<EpochInfo>;
/// Get account balance
fn get_balance(&self, pubkey: &Pubkey) -> Result<u64>;
/// Get account balance, using explicit commitment
fn get_balance_with_commitment(
&self,
pubkey: &Pubkey,
commitment_config: CommitmentConfig,
) -> Result<u64>;
/// Calculate the fee for a `Message`
fn get_fee_for_message(&self, message: &Message) -> Result<u64>;
/// Get the rent-exempt minimum for an account
fn get_minimum_balance_for_rent_exemption(&self, data_len: usize) -> Result<u64>;
/// Return the address of client
fn addr(&self) -> String;
/// Request, submit, and confirm an airdrop transaction
fn request_airdrop_with_blockhash(
&self,
pubkey: &Pubkey,
lamports: u64,
recent_blockhash: &Hash,
) -> Result<Signature>;
}
mod bank_client;
mod rpc_client;
mod thin_client;
mod tpu_client;
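
Every backend below implements this one trait, so bench code that used to require solana_sdk::client::Client can be written once and run over RPC, thin-client, or TPU transports. A minimal sketch of such a generic caller, assuming it sits in this module so Pubkey and the Result alias are in scope (helper and polling interval invented for illustration):

    use std::{thread::sleep, time::Duration};

    // Hypothetical helper, generic over the trait so it works with any backend
    // (BankClient, RpcClient, ThinClient, TpuClient). Not part of the patch.
    fn wait_for_funding<T: BenchTpsClient>(client: &T, key: &Pubkey, min_lamports: u64) -> Result<()> {
        while client.get_balance(key)? < min_lamports {
            sleep(Duration::from_millis(500));
        }
        Ok(())
    }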

bench-tps/src/bench_tps_client/bank_client.rs (new file)
@@ -0,0 +1,85 @@
use {
crate::bench_tps_client::{BenchTpsClient, BenchTpsError, Result},
solana_runtime::bank_client::BankClient,
solana_sdk::{
client::{AsyncClient, SyncClient},
commitment_config::CommitmentConfig,
epoch_info::EpochInfo,
hash::Hash,
message::Message,
pubkey::Pubkey,
signature::Signature,
transaction::Transaction,
},
};
impl BenchTpsClient for BankClient {
fn send_transaction(&self, transaction: Transaction) -> Result<Signature> {
AsyncClient::async_send_transaction(self, transaction).map_err(|err| err.into())
}
fn send_batch(&self, transactions: Vec<Transaction>) -> Result<()> {
AsyncClient::async_send_batch(self, transactions).map_err(|err| err.into())
}
fn get_latest_blockhash(&self) -> Result<Hash> {
SyncClient::get_latest_blockhash(self).map_err(|err| err.into())
}
fn get_latest_blockhash_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<(Hash, u64)> {
SyncClient::get_latest_blockhash_with_commitment(self, commitment_config)
.map_err(|err| err.into())
}
fn get_transaction_count(&self) -> Result<u64> {
SyncClient::get_transaction_count(self).map_err(|err| err.into())
}
fn get_transaction_count_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<u64> {
SyncClient::get_transaction_count_with_commitment(self, commitment_config)
.map_err(|err| err.into())
}
fn get_epoch_info(&self) -> Result<EpochInfo> {
SyncClient::get_epoch_info(self).map_err(|err| err.into())
}
fn get_balance(&self, pubkey: &Pubkey) -> Result<u64> {
SyncClient::get_balance(self, pubkey).map_err(|err| err.into())
}
fn get_balance_with_commitment(
&self,
pubkey: &Pubkey,
commitment_config: CommitmentConfig,
) -> Result<u64> {
SyncClient::get_balance_with_commitment(self, pubkey, commitment_config)
.map_err(|err| err.into())
}
fn get_fee_for_message(&self, message: &Message) -> Result<u64> {
SyncClient::get_fee_for_message(self, message).map_err(|err| err.into())
}
fn get_minimum_balance_for_rent_exemption(&self, data_len: usize) -> Result<u64> {
SyncClient::get_minimum_balance_for_rent_exemption(self, data_len).map_err(|err| err.into())
}
fn addr(&self) -> String {
"Local BankClient".to_string()
}
fn request_airdrop_with_blockhash(
&self,
_pubkey: &Pubkey,
_lamports: u64,
_recent_blockhash: &Hash,
) -> Result<Signature> {
// BankClient doesn't support airdrops
Err(BenchTpsError::AirdropFailure)
}
}

bench-tps/src/bench_tps_client/rpc_client.rs (new file)
@@ -0,0 +1,83 @@
use {
crate::bench_tps_client::{BenchTpsClient, Result},
solana_client::rpc_client::RpcClient,
solana_sdk::{
commitment_config::CommitmentConfig, epoch_info::EpochInfo, hash::Hash, message::Message,
pubkey::Pubkey, signature::Signature, transaction::Transaction,
},
};
impl BenchTpsClient for RpcClient {
fn send_transaction(&self, transaction: Transaction) -> Result<Signature> {
RpcClient::send_transaction(self, &transaction).map_err(|err| err.into())
}
fn send_batch(&self, transactions: Vec<Transaction>) -> Result<()> {
for transaction in transactions {
BenchTpsClient::send_transaction(self, transaction)?;
}
Ok(())
}
fn get_latest_blockhash(&self) -> Result<Hash> {
RpcClient::get_latest_blockhash(self).map_err(|err| err.into())
}
fn get_latest_blockhash_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<(Hash, u64)> {
RpcClient::get_latest_blockhash_with_commitment(self, commitment_config)
.map_err(|err| err.into())
}
fn get_transaction_count(&self) -> Result<u64> {
RpcClient::get_transaction_count(self).map_err(|err| err.into())
}
fn get_transaction_count_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<u64> {
RpcClient::get_transaction_count_with_commitment(self, commitment_config)
.map_err(|err| err.into())
}
fn get_epoch_info(&self) -> Result<EpochInfo> {
RpcClient::get_epoch_info(self).map_err(|err| err.into())
}
fn get_balance(&self, pubkey: &Pubkey) -> Result<u64> {
RpcClient::get_balance(self, pubkey).map_err(|err| err.into())
}
fn get_balance_with_commitment(
&self,
pubkey: &Pubkey,
commitment_config: CommitmentConfig,
) -> Result<u64> {
RpcClient::get_balance_with_commitment(self, pubkey, commitment_config)
.map(|res| res.value)
.map_err(|err| err.into())
}
fn get_fee_for_message(&self, message: &Message) -> Result<u64> {
RpcClient::get_fee_for_message(self, message).map_err(|err| err.into())
}
fn get_minimum_balance_for_rent_exemption(&self, data_len: usize) -> Result<u64> {
RpcClient::get_minimum_balance_for_rent_exemption(self, data_len).map_err(|err| err.into())
}
fn addr(&self) -> String {
self.url()
}
fn request_airdrop_with_blockhash(
&self,
pubkey: &Pubkey,
lamports: u64,
recent_blockhash: &Hash,
) -> Result<Signature> {
RpcClient::request_airdrop_with_blockhash(self, pubkey, lamports, recent_blockhash)
.map_err(|err| err.into())
}
}

bench-tps/src/bench_tps_client/thin_client.rs (new file)
@@ -0,0 +1,86 @@
use {
crate::bench_tps_client::{BenchTpsClient, Result},
solana_client::thin_client::ThinClient,
solana_sdk::{
client::{AsyncClient, Client, SyncClient},
commitment_config::CommitmentConfig,
epoch_info::EpochInfo,
hash::Hash,
message::Message,
pubkey::Pubkey,
signature::Signature,
transaction::Transaction,
},
};
impl BenchTpsClient for ThinClient {
fn send_transaction(&self, transaction: Transaction) -> Result<Signature> {
AsyncClient::async_send_transaction(self, transaction).map_err(|err| err.into())
}
fn send_batch(&self, transactions: Vec<Transaction>) -> Result<()> {
AsyncClient::async_send_batch(self, transactions).map_err(|err| err.into())
}
fn get_latest_blockhash(&self) -> Result<Hash> {
SyncClient::get_latest_blockhash(self).map_err(|err| err.into())
}
fn get_latest_blockhash_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<(Hash, u64)> {
SyncClient::get_latest_blockhash_with_commitment(self, commitment_config)
.map_err(|err| err.into())
}
fn get_transaction_count(&self) -> Result<u64> {
SyncClient::get_transaction_count(self).map_err(|err| err.into())
}
fn get_transaction_count_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<u64> {
SyncClient::get_transaction_count_with_commitment(self, commitment_config)
.map_err(|err| err.into())
}
fn get_epoch_info(&self) -> Result<EpochInfo> {
SyncClient::get_epoch_info(self).map_err(|err| err.into())
}
fn get_balance(&self, pubkey: &Pubkey) -> Result<u64> {
SyncClient::get_balance(self, pubkey).map_err(|err| err.into())
}
fn get_balance_with_commitment(
&self,
pubkey: &Pubkey,
commitment_config: CommitmentConfig,
) -> Result<u64> {
SyncClient::get_balance_with_commitment(self, pubkey, commitment_config)
.map_err(|err| err.into())
}
fn get_fee_for_message(&self, message: &Message) -> Result<u64> {
SyncClient::get_fee_for_message(self, message).map_err(|err| err.into())
}
fn get_minimum_balance_for_rent_exemption(&self, data_len: usize) -> Result<u64> {
SyncClient::get_minimum_balance_for_rent_exemption(self, data_len).map_err(|err| err.into())
}
fn addr(&self) -> String {
Client::tpu_addr(self)
}
fn request_airdrop_with_blockhash(
&self,
pubkey: &Pubkey,
lamports: u64,
recent_blockhash: &Hash,
) -> Result<Signature> {
self.rpc_client()
.request_airdrop_with_blockhash(pubkey, lamports, recent_blockhash)
.map_err(|err| err.into())
}
}

bench-tps/src/bench_tps_client/tpu_client.rs (new file)
@@ -0,0 +1,99 @@
use {
crate::bench_tps_client::{BenchTpsClient, Result},
solana_client::tpu_client::TpuClient,
solana_sdk::{
commitment_config::CommitmentConfig, epoch_info::EpochInfo, hash::Hash, message::Message,
pubkey::Pubkey, signature::Signature, transaction::Transaction,
},
};
impl BenchTpsClient for TpuClient {
fn send_transaction(&self, transaction: Transaction) -> Result<Signature> {
let signature = transaction.signatures[0];
self.try_send_transaction(&transaction)?;
Ok(signature)
}
fn send_batch(&self, transactions: Vec<Transaction>) -> Result<()> {
for transaction in transactions {
BenchTpsClient::send_transaction(self, transaction)?;
}
Ok(())
}
fn get_latest_blockhash(&self) -> Result<Hash> {
self.rpc_client()
.get_latest_blockhash()
.map_err(|err| err.into())
}
fn get_latest_blockhash_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<(Hash, u64)> {
self.rpc_client()
.get_latest_blockhash_with_commitment(commitment_config)
.map_err(|err| err.into())
}
fn get_transaction_count(&self) -> Result<u64> {
self.rpc_client()
.get_transaction_count()
.map_err(|err| err.into())
}
fn get_transaction_count_with_commitment(
&self,
commitment_config: CommitmentConfig,
) -> Result<u64> {
self.rpc_client()
.get_transaction_count_with_commitment(commitment_config)
.map_err(|err| err.into())
}
fn get_epoch_info(&self) -> Result<EpochInfo> {
self.rpc_client().get_epoch_info().map_err(|err| err.into())
}
fn get_balance(&self, pubkey: &Pubkey) -> Result<u64> {
self.rpc_client()
.get_balance(pubkey)
.map_err(|err| err.into())
}
fn get_balance_with_commitment(
&self,
pubkey: &Pubkey,
commitment_config: CommitmentConfig,
) -> Result<u64> {
self.rpc_client()
.get_balance_with_commitment(pubkey, commitment_config)
.map(|res| res.value)
.map_err(|err| err.into())
}
fn get_fee_for_message(&self, message: &Message) -> Result<u64> {
self.rpc_client()
.get_fee_for_message(message)
.map_err(|err| err.into())
}
fn get_minimum_balance_for_rent_exemption(&self, data_len: usize) -> Result<u64> {
self.rpc_client()
.get_minimum_balance_for_rent_exemption(data_len)
.map_err(|err| err.into())
}
fn addr(&self) -> String {
self.rpc_client().url()
}
fn request_airdrop_with_blockhash(
&self,
pubkey: &Pubkey,
lamports: u64,
recent_blockhash: &Hash,
) -> Result<Signature> {
self.rpc_client()
.request_airdrop_with_blockhash(pubkey, lamports, recent_blockhash)
.map_err(|err| err.into())
}
}

bench-tps/src/cli.rs
@@ -1,6 +1,7 @@
 use {
     clap::{crate_description, crate_name, App, Arg, ArgMatches},
-    solana_faucet::faucet::FAUCET_PORT,
+    solana_clap_utils::input_validators::{is_url, is_url_or_moniker},
+    solana_cli_config::{ConfigInput, CONFIG_FILE},
     solana_sdk::{
         fee_calculator::FeeRateGovernor,
         pubkey::Pubkey,
@@ -11,10 +12,28 @@ use {
 const NUM_LAMPORTS_PER_ACCOUNT_DEFAULT: u64 = solana_sdk::native_token::LAMPORTS_PER_SOL;
+pub enum ExternalClientType {
+    // Submits transactions to an Rpc node using an RpcClient
+    RpcClient,
+    // Submits transactions directly to leaders using a ThinClient, broadcasting to multiple
+    // leaders when num_nodes > 1
+    ThinClient,
+    // Submits transactions directly to leaders using a TpuClient, broadcasting to upcoming leaders
+    // via TpuClient default configuration
+    TpuClient,
+}
+impl Default for ExternalClientType {
+    fn default() -> Self {
+        Self::ThinClient
+    }
+}
 /// Holds the configuration for a single run of the benchmark
 pub struct Config {
     pub entrypoint_addr: SocketAddr,
-    pub faucet_addr: SocketAddr,
+    pub json_rpc_url: String,
+    pub websocket_url: String,
     pub id: Keypair,
     pub threads: usize,
     pub num_nodes: usize,
@@ -31,13 +50,16 @@ pub struct Config {
     pub num_lamports_per_account: u64,
     pub target_slots_per_epoch: u64,
     pub target_node: Option<Pubkey>,
+    pub external_client_type: ExternalClientType,
+    pub use_quic: bool,
 }
 impl Default for Config {
     fn default() -> Config {
         Config {
             entrypoint_addr: SocketAddr::from(([127, 0, 0, 1], 8001)),
-            faucet_addr: SocketAddr::from(([127, 0, 0, 1], FAUCET_PORT)),
+            json_rpc_url: ConfigInput::default().json_rpc_url,
+            websocket_url: ConfigInput::default().websocket_url,
             id: Keypair::new(),
             threads: 4,
             num_nodes: 1,
@@ -54,6 +76,8 @@ impl Default for Config {
             num_lamports_per_account: NUM_LAMPORTS_PER_ACCOUNT_DEFAULT,
             target_slots_per_epoch: 0,
             target_node: None,
+            external_client_type: ExternalClientType::default(),
+            use_quic: false,
         }
     }
 }
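
Taken together, ExternalClientType and the two new URL fields replace the old faucet plumbing: the transaction backend is chosen once, up front, and everything downstream stays generic. A small sketch of how the two mutually exclusive flags map onto the enum (free function invented for illustration; the real wiring is in extract_args further down):

    fn client_type_from_flags(use_rpc_client: bool, use_tpu_client: bool) -> ExternalClientType {
        // clap already rejects both flags being set together (`conflicts_with`),
        // so a plain if/else chain is enough here.
        if use_tpu_client {
            ExternalClientType::TpuClient
        } else if use_rpc_client {
            ExternalClientType::RpcClient
        } else {
            ExternalClientType::default() // ThinClient
        }
    }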
@@ -62,6 +86,42 @@ impl Default for Config {
 pub fn build_args<'a, 'b>(version: &'b str) -> App<'a, 'b> {
     App::new(crate_name!()).about(crate_description!())
         .version(version)
+        .arg({
+            let arg = Arg::with_name("config_file")
+                .short("C")
+                .long("config")
+                .value_name("FILEPATH")
+                .takes_value(true)
+                .global(true)
+                .help("Configuration file to use");
+            if let Some(ref config_file) = *CONFIG_FILE {
+                arg.default_value(config_file)
+            } else {
+                arg
+            }
+        })
+        .arg(
+            Arg::with_name("json_rpc_url")
+                .short("u")
+                .long("url")
+                .value_name("URL_OR_MONIKER")
+                .takes_value(true)
+                .global(true)
+                .validator(is_url_or_moniker)
+                .help(
+                    "URL for Solana's JSON RPC or moniker (or their first letter): \
+                    [mainnet-beta, testnet, devnet, localhost]",
+                ),
+        )
+        .arg(
+            Arg::with_name("websocket_url")
+                .long("ws")
+                .value_name("URL")
+                .takes_value(true)
+                .global(true)
+                .validator(is_url)
+                .help("WebSocket URL for the solana cluster"),
+        )
         .arg(
             Arg::with_name("entrypoint")
                 .short("n")
@@ -76,7 +136,8 @@ pub fn build_args<'a, 'b>(version: &'b str) -> App<'a, 'b> {
                 .long("faucet")
                 .value_name("HOST:PORT")
                 .takes_value(true)
-                .help("Location of the faucet; defaults to entrypoint:FAUCET_PORT"),
+                .hidden(true)
+                .help("Deprecated. BenchTps no longer queries the faucet directly"),
         )
         .arg(
             Arg::with_name("identity")
@@ -191,6 +252,27 @@ pub fn build_args<'a, 'b>(version: &'b str) -> App<'a, 'b> {
                     "Wait until epochs are this many slots long.",
                 ),
         )
+        .arg(
+            Arg::with_name("rpc_client")
+                .long("use-rpc-client")
+                .conflicts_with("tpu_client")
+                .takes_value(false)
+                .help("Submit transactions with a RpcClient")
+        )
+        .arg(
+            Arg::with_name("tpu_client")
+                .long("use-tpu-client")
+                .conflicts_with("rpc_client")
+                .takes_value(false)
+                .help("Submit transactions with a TpuClient")
+        )
+        .arg(
+            Arg::with_name("tpu_use_quic")
+                .long("tpu-use-quic")
+                .takes_value(false)
+                .help("Submit transactions via QUIC; only affects ThinClient (default) \
+                    or TpuClient sends"),
+        )
 }
 /// Parses a clap `ArgMatches` structure into a `Config`
@@ -201,6 +283,45 @@ pub fn build_args<'a, 'b>(version: &'b str) -> App<'a, 'b> {
 pub fn extract_args(matches: &ArgMatches) -> Config {
     let mut args = Config::default();
+    let config = if let Some(config_file) = matches.value_of("config_file") {
+        solana_cli_config::Config::load(config_file).unwrap_or_default()
+    } else {
+        solana_cli_config::Config::default()
+    };
+    let (_, json_rpc_url) = ConfigInput::compute_json_rpc_url_setting(
+        matches.value_of("json_rpc_url").unwrap_or(""),
+        &config.json_rpc_url,
+    );
+    args.json_rpc_url = json_rpc_url;
+    let (_, websocket_url) = ConfigInput::compute_websocket_url_setting(
+        matches.value_of("websocket_url").unwrap_or(""),
+        &config.websocket_url,
+        matches.value_of("json_rpc_url").unwrap_or(""),
+        &config.json_rpc_url,
+    );
+    args.websocket_url = websocket_url;
+    let (_, id_path) = ConfigInput::compute_keypair_path_setting(
+        matches.value_of("identity").unwrap_or(""),
+        &config.keypair_path,
+    );
+    if let Ok(id) = read_keypair_file(id_path) {
+        args.id = id;
+    } else if matches.is_present("identity") {
+        panic!("could not parse identity path");
+    }
+    if matches.is_present("tpu_client") {
+        args.external_client_type = ExternalClientType::TpuClient;
+    } else if matches.is_present("rpc_client") {
+        args.external_client_type = ExternalClientType::RpcClient;
+    }
+    if matches.is_present("tpu_use_quic") {
+        args.use_quic = true;
+    }
     if let Some(addr) = matches.value_of("entrypoint") {
         args.entrypoint_addr = solana_net_utils::parse_host_port(addr).unwrap_or_else(|e| {
             eprintln!("failed to parse entrypoint address: {}", e);
@@ -208,18 +329,6 @@ pub fn extract_args(matches: &ArgMatches) -> Config {
         });
     }
-    if let Some(addr) = matches.value_of("faucet") {
-        args.faucet_addr = solana_net_utils::parse_host_port(addr).unwrap_or_else(|e| {
-            eprintln!("failed to parse faucet address: {}", e);
-            exit(1)
-        });
-    }
-    if matches.is_present("identity") {
-        args.id = read_keypair_file(matches.value_of("identity").unwrap())
-            .expect("can't read client identity");
-    }
     if let Some(t) = matches.value_of("threads") {
         args.threads = t.to_string().parse().expect("can't parse threads");
     }

bench-tps/src/keypairs.rs (new file)
@@ -0,0 +1,72 @@
use {
crate::{
bench::{fund_keypairs, generate_and_fund_keypairs},
bench_tps_client::BenchTpsClient,
},
log::*,
solana_genesis::Base64Account,
solana_sdk::signature::{Keypair, Signer},
std::{collections::HashMap, fs::File, path::Path, process::exit, sync::Arc},
};
pub fn get_keypairs<T>(
client: Arc<T>,
id: &Keypair,
keypair_count: usize,
num_lamports_per_account: u64,
client_ids_and_stake_file: &str,
read_from_client_file: bool,
) -> Vec<Keypair>
where
T: 'static + BenchTpsClient + Send + Sync,
{
if read_from_client_file {
let path = Path::new(client_ids_and_stake_file);
let file = File::open(path).unwrap();
info!("Reading {}", client_ids_and_stake_file);
let accounts: HashMap<String, Base64Account> = serde_yaml::from_reader(file).unwrap();
let mut keypairs = vec![];
let mut last_balance = 0;
accounts
.into_iter()
.for_each(|(keypair, primordial_account)| {
let bytes: Vec<u8> = serde_json::from_str(keypair.as_str()).unwrap();
keypairs.push(Keypair::from_bytes(&bytes).unwrap());
last_balance = primordial_account.balance;
});
if keypairs.len() < keypair_count {
eprintln!(
"Expected {} accounts in {}, only received {} (--tx_count mismatch?)",
keypair_count,
client_ids_and_stake_file,
keypairs.len(),
);
exit(1);
}
// Sort keypairs so that do_bench_tps() uses the same subset of accounts for each run.
// This prevents the amount of storage needed for bench-tps accounts from creeping up
// across multiple runs.
keypairs.sort_by_key(|x| x.pubkey().to_string());
fund_keypairs(
client,
id,
&keypairs,
keypairs.len().saturating_sub(keypair_count) as u64,
last_balance,
)
.unwrap_or_else(|e| {
eprintln!("Error could not fund keys: {:?}", e);
exit(1);
});
keypairs
} else {
generate_and_fund_keypairs(client, id, keypair_count, num_lamports_per_account)
.unwrap_or_else(|e| {
eprintln!("Error could not fund keys: {:?}", e);
exit(1);
})
}
}
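
The sort in get_keypairs is what makes reruns cheap: account files usually hold more keys than a run needs, and ordering by pubkey string before truncation guarantees every run funds and reuses the same subset. A standalone sketch of that selection rule (helper invented for illustration, using only solana_sdk):

    use solana_sdk::signature::{Keypair, Signer};

    fn pick_stable_subset(mut keypairs: Vec<Keypair>, wanted: usize) -> Vec<Keypair> {
        // Sorting by the base58 pubkey makes the chosen prefix independent of
        // the (arbitrary) order the accounts were read from the YAML file.
        keypairs.sort_by_key(|k| k.pubkey().to_string());
        keypairs.truncate(wanted);
        keypairs
    }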

bench-tps/src/lib.rs
@@ -1,3 +1,6 @@
 #![allow(clippy::integer_arithmetic)]
 pub mod bench;
+pub mod bench_tps_client;
 pub mod cli;
+pub mod keypairs;
+mod perf_utils;

bench-tps/src/main.rs
@@ -2,15 +2,19 @@
 use {
     log::*,
     solana_bench_tps::{
-        bench::{do_bench_tps, generate_and_fund_keypairs, generate_keypairs},
-        cli,
+        bench::{do_bench_tps, generate_keypairs},
+        cli::{self, ExternalClientType},
+        keypairs::get_keypairs,
+    },
+    solana_client::{
+        connection_cache,
+        rpc_client::RpcClient,
+        tpu_client::{TpuClient, TpuClientConfig},
     },
     solana_genesis::Base64Account,
     solana_gossip::gossip_service::{discover_cluster, get_client, get_multi_client},
     solana_sdk::{
-        fee_calculator::FeeRateGovernor,
-        signature::{Keypair, Signer},
-        system_program,
+        commitment_config::CommitmentConfig, fee_calculator::FeeRateGovernor, system_program,
     },
     solana_streamer::socket::SocketAddrSpace,
     std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc},
@@ -28,7 +32,8 @@ fn main() {
     let cli::Config {
         entrypoint_addr,
-        faucet_addr,
+        json_rpc_url,
+        websocket_url,
         id,
         num_nodes,
         tx_count,
@@ -40,6 +45,8 @@ fn main() {
         multi_client,
         num_lamports_per_account,
         target_node,
+        external_client_type,
+        use_quic,
         ..
     } = &cli_config;
@@ -75,83 +82,93 @@ fn main() {
     }
     info!("Connecting to the cluster");
-    let nodes = discover_cluster(entrypoint_addr, *num_nodes, SocketAddrSpace::Unspecified)
-        .unwrap_or_else(|err| {
-            eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err);
-            exit(1);
-        });
-    let client = if *multi_client {
-        let (client, num_clients) = get_multi_client(&nodes, &SocketAddrSpace::Unspecified);
-        if nodes.len() < num_clients {
-            eprintln!(
-                "Error: Insufficient nodes discovered. Expecting {} or more",
-                num_nodes
-            );
-            exit(1);
-        }
-        Arc::new(client)
-    } else if let Some(target_node) = target_node {
-        info!("Searching for target_node: {:?}", target_node);
-        let mut target_client = None;
-        for node in nodes {
-            if node.id == *target_node {
-                target_client = Some(Arc::new(get_client(&[node], &SocketAddrSpace::Unspecified)));
-                break;
-            }
-        }
-        target_client.unwrap_or_else(|| {
-            eprintln!("Target node {} not found", target_node);
-            exit(1);
-        })
-    } else {
-        Arc::new(get_client(&nodes, &SocketAddrSpace::Unspecified))
-    };
-    let keypairs = if *read_from_client_file {
-        let path = Path::new(&client_ids_and_stake_file);
-        let file = File::open(path).unwrap();
-        info!("Reading {}", client_ids_and_stake_file);
-        let accounts: HashMap<String, Base64Account> = serde_yaml::from_reader(file).unwrap();
-        let mut keypairs = vec![];
-        let mut last_balance = 0;
-        accounts
-            .into_iter()
-            .for_each(|(keypair, primordial_account)| {
-                let bytes: Vec<u8> = serde_json::from_str(keypair.as_str()).unwrap();
-                keypairs.push(Keypair::from_bytes(&bytes).unwrap());
-                last_balance = primordial_account.balance;
-            });
-        if keypairs.len() < keypair_count {
-            eprintln!(
-                "Expected {} accounts in {}, only received {} (--tx_count mismatch?)",
-                keypair_count,
-                client_ids_and_stake_file,
-                keypairs.len(),
-            );
-            exit(1);
-        }
-        // Sort keypairs so that do_bench_tps() uses the same subset of accounts for each run.
-        // This prevents the amount of storage needed for bench-tps accounts from creeping up
-        // across multiple runs.
-        keypairs.sort_by_key(|x| x.pubkey().to_string());
-        keypairs
-    } else {
-        generate_and_fund_keypairs(
-            client.clone(),
-            Some(*faucet_addr),
-            id,
-            keypair_count,
-            *num_lamports_per_account,
-        )
-        .unwrap_or_else(|e| {
-            eprintln!("Error could not fund keys: {:?}", e);
-            exit(1);
-        })
-    };
-    do_bench_tps(client, cli_config, keypairs);
+    match external_client_type {
+        ExternalClientType::RpcClient => {
+            let client = Arc::new(RpcClient::new_with_commitment(
+                json_rpc_url.to_string(),
+                CommitmentConfig::confirmed(),
+            ));
+            let keypairs = get_keypairs(
+                client.clone(),
+                id,
+                keypair_count,
+                *num_lamports_per_account,
+                client_ids_and_stake_file,
+                *read_from_client_file,
+            );
+            do_bench_tps(client, cli_config, keypairs);
+        }
+        ExternalClientType::ThinClient => {
+            let nodes = discover_cluster(entrypoint_addr, *num_nodes, SocketAddrSpace::Unspecified)
+                .unwrap_or_else(|err| {
+                    eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err);
+                    exit(1);
+                });
+            if *use_quic {
+                connection_cache::set_use_quic(true);
+            }
+            let client = if *multi_client {
+                let (client, num_clients) = get_multi_client(&nodes, &SocketAddrSpace::Unspecified);
+                if nodes.len() < num_clients {
+                    eprintln!(
+                        "Error: Insufficient nodes discovered. Expecting {} or more",
+                        num_nodes
+                    );
+                    exit(1);
+                }
+                Arc::new(client)
+            } else if let Some(target_node) = target_node {
+                info!("Searching for target_node: {:?}", target_node);
+                let mut target_client = None;
+                for node in nodes {
+                    if node.id == *target_node {
+                        target_client =
+                            Some(Arc::new(get_client(&[node], &SocketAddrSpace::Unspecified)));
+                        break;
+                    }
+                }
+                target_client.unwrap_or_else(|| {
+                    eprintln!("Target node {} not found", target_node);
+                    exit(1);
+                })
+            } else {
+                Arc::new(get_client(&nodes, &SocketAddrSpace::Unspecified))
+            };
+            let keypairs = get_keypairs(
+                client.clone(),
+                id,
+                keypair_count,
+                *num_lamports_per_account,
+                client_ids_and_stake_file,
+                *read_from_client_file,
+            );
+            do_bench_tps(client, cli_config, keypairs);
+        }
+        ExternalClientType::TpuClient => {
+            let rpc_client = Arc::new(RpcClient::new_with_commitment(
+                json_rpc_url.to_string(),
+                CommitmentConfig::confirmed(),
+            ));
+            if *use_quic {
+                connection_cache::set_use_quic(true);
+            }
+            let client = Arc::new(
+                TpuClient::new(rpc_client, websocket_url, TpuClientConfig::default())
+                    .unwrap_or_else(|err| {
+                        eprintln!("Could not create TpuClient {:?}", err);
+                        exit(1);
+                    }),
+            );
+            let keypairs = get_keypairs(
+                client.clone(),
+                id,
+                keypair_count,
+                *num_lamports_per_account,
+                client_ids_and_stake_file,
+                *read_from_client_file,
+            );
+            do_bench_tps(client, cli_config, keypairs);
+        }
+    }
 }

bench-tps/src/perf_utils.rs
@@ -1,6 +1,7 @@
 use {
+    crate::bench_tps_client::BenchTpsClient,
     log::*,
-    solana_sdk::{client::Client, commitment_config::CommitmentConfig, timing::duration_as_s},
+    solana_sdk::{commitment_config::CommitmentConfig, timing::duration_as_s},
     std::{
         sync::{
             atomic::{AtomicBool, Ordering},
@@ -27,7 +28,7 @@ pub fn sample_txs<T>(
     sample_period: u64,
     client: &Arc<T>,
 ) where
-    T: Client,
+    T: BenchTpsClient,
 {
     let mut max_tps = 0.0;
     let mut total_elapsed;
@@ -81,10 +82,7 @@ pub fn sample_txs<T>(
             elapsed: total_elapsed,
             txs: total_txs,
         };
-        sample_stats
-            .write()
-            .unwrap()
-            .push((client.tpu_addr(), stats));
+        sample_stats.write().unwrap().push((client.addr(), stats));
         return;
     }
     sleep(Duration::from_secs(sample_period));

bench-tps/tests/bench_tps.rs
@@ -6,15 +6,24 @@ use {
         bench::{do_bench_tps, generate_and_fund_keypairs},
         cli::Config,
     },
-    solana_client::thin_client::create_client,
+    solana_client::{
+        rpc_client::RpcClient,
+        thin_client::create_client,
+        tpu_client::{TpuClient, TpuClientConfig},
+    },
     solana_core::validator::ValidatorConfig,
-    solana_faucet::faucet::run_local_faucet_with_port,
+    solana_faucet::faucet::{run_local_faucet, run_local_faucet_with_port},
     solana_local_cluster::{
         local_cluster::{ClusterConfig, LocalCluster},
         validator_configs::make_identical_validator_configs,
     },
-    solana_sdk::signature::{Keypair, Signer},
+    solana_rpc::rpc::JsonRpcConfig,
+    solana_sdk::{
+        commitment_config::CommitmentConfig,
+        signature::{Keypair, Signer},
+    },
     solana_streamer::socket::SocketAddrSpace,
+    solana_test_validator::TestValidator,
     std::{sync::Arc, time::Duration},
 };
@@ -22,13 +31,34 @@ fn test_bench_tps_local_cluster(config: Config) {
     let native_instruction_processors = vec![];
     solana_logger::setup();
+    let faucet_keypair = Keypair::new();
+    let faucet_pubkey = faucet_keypair.pubkey();
+    let (addr_sender, addr_receiver) = unbounded();
+    run_local_faucet_with_port(faucet_keypair, addr_sender, None, 0);
+    let faucet_addr = addr_receiver
+        .recv_timeout(Duration::from_secs(2))
+        .expect("run_local_faucet")
+        .expect("faucet_addr");
     const NUM_NODES: usize = 1;
+    let mut validator_config = ValidatorConfig::default_for_test();
+    validator_config.rpc_config = JsonRpcConfig {
+        faucet_addr: Some(faucet_addr),
+        ..JsonRpcConfig::default_for_test()
+    };
     let cluster = LocalCluster::new(
         &mut ClusterConfig {
             node_stakes: vec![999_990; NUM_NODES],
             cluster_lamports: 200_000_000,
             validator_configs: make_identical_validator_configs(
-                &ValidatorConfig::default_for_test(),
+                &ValidatorConfig {
+                    rpc_config: JsonRpcConfig {
+                        faucet_addr: Some(faucet_addr),
+                        ..JsonRpcConfig::default_for_test()
+                    },
+                    ..ValidatorConfig::default_for_test()
+                },
                 NUM_NODES,
             ),
             native_instruction_processors,
@@ -37,31 +67,55 @@ fn test_bench_tps_local_cluster(config: Config) {
         SocketAddrSpace::Unspecified,
     );
-    let faucet_keypair = Keypair::new();
-    cluster.transfer(
-        &cluster.funding_keypair,
-        &faucet_keypair.pubkey(),
-        100_000_000,
-    );
+    cluster.transfer(&cluster.funding_keypair, &faucet_pubkey, 100_000_000);
     let client = Arc::new(create_client(
         cluster.entry_point_info.rpc,
         cluster.entry_point_info.tpu,
     ));
-    let (addr_sender, addr_receiver) = unbounded();
-    run_local_faucet_with_port(faucet_keypair, addr_sender, None, 0);
-    let faucet_addr = addr_receiver
-        .recv_timeout(Duration::from_secs(2))
-        .expect("run_local_faucet")
-        .expect("faucet_addr");
+    let lamports_per_account = 100;
+    let keypair_count = config.tx_count * config.keypair_multiplier;
+    let keypairs = generate_and_fund_keypairs(
+        client.clone(),
+        &config.id,
+        keypair_count,
+        lamports_per_account,
+    )
+    .unwrap();
+    let _total = do_bench_tps(client, config, keypairs);
+    #[cfg(not(debug_assertions))]
+    assert!(_total > 100);
+}
+fn test_bench_tps_test_validator(config: Config) {
+    solana_logger::setup();
+    let mint_keypair = Keypair::new();
+    let mint_pubkey = mint_keypair.pubkey();
+    let faucet_addr = run_local_faucet(mint_keypair, None);
+    let test_validator =
+        TestValidator::with_no_fees(mint_pubkey, Some(faucet_addr), SocketAddrSpace::Unspecified);
+    let rpc_client = Arc::new(RpcClient::new_with_commitment(
+        test_validator.rpc_url(),
+        CommitmentConfig::processed(),
+    ));
+    let websocket_url = test_validator.rpc_pubsub_url();
+    let client =
+        Arc::new(TpuClient::new(rpc_client, &websocket_url, TpuClientConfig::default()).unwrap());
     let lamports_per_account = 100;
     let keypair_count = config.tx_count * config.keypair_multiplier;
     let keypairs = generate_and_fund_keypairs(
         client.clone(),
-        Some(faucet_addr),
         &config.id,
         keypair_count,
         lamports_per_account,
@@ -83,3 +137,13 @@ fn test_bench_tps_local_cluster_solana() {
         ..Config::default()
     });
 }
+#[test]
+#[serial]
+fn test_bench_tps_tpu_client() {
+    test_bench_tps_test_validator(Config {
+        tx_count: 100,
+        duration: Duration::from_secs(10),
+        ..Config::default()
+    });
+}

bucket_map/src/index_entry.rs
@@ -1,4 +1,5 @@
 #![allow(dead_code)]
+
 use {
     crate::{
         bucket::Bucket,
@@ -57,8 +58,18 @@ impl IndexEntry {
             .expect("New storage offset must fit into 7 bytes!")
     }
+    /// return closest bucket index fit for the slot slice.
+    /// Since bucket size is 2^index, the return value is
+    ///    min index, such that 2^index >= num_slots
+    ///    index = ceiling(log2(num_slots))
+    /// special case, when slot slice empty, return 0th index.
     pub fn data_bucket_from_num_slots(num_slots: Slot) -> u64 {
-        (num_slots as f64).log2().ceil() as u64 // use int log here?
+        // Compute the ceiling of log2 for integer
+        if num_slots == 0 {
+            0
+        } else {
+            (Slot::BITS - (num_slots - 1).leading_zeros()) as u64
+        }
     }
     pub fn data_bucket_ix(&self) -> u64 {
@@ -153,4 +164,23 @@ mod tests {
         let mut index = IndexEntry::new(Pubkey::new_unique());
         index.set_storage_offset(too_big);
     }
+    #[test]
+    fn test_data_bucket_from_num_slots() {
+        for n in 0..512 {
+            assert_eq!(
+                IndexEntry::data_bucket_from_num_slots(n),
+                (n as f64).log2().ceil() as u64
+            );
+        }
+        assert_eq!(IndexEntry::data_bucket_from_num_slots(u32::MAX as u64), 32);
+        assert_eq!(
+            IndexEntry::data_bucket_from_num_slots(u32::MAX as u64 + 1),
+            32
+        );
+        assert_eq!(
+            IndexEntry::data_bucket_from_num_slots(u32::MAX as u64 + 2),
+            33
+        );
+    }
 }
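
The integer rewrite rests on a small identity: for num_slots >= 1, ceil(log2(num_slots)) equals the number of significant bits in num_slots - 1, which is BITS - leading_zeros(num_slots - 1); only num_slots == 0 needs a special case. A standalone sketch with spot checks (free function invented for illustration, same math as the method above):

    fn ceil_log2(num_slots: u64) -> u64 {
        if num_slots == 0 {
            0 // empty slot slice maps to the 0th (size 2^0) bucket
        } else {
            (u64::BITS - (num_slots - 1).leading_zeros()) as u64
        }
    }

    fn main() {
        // smallest index such that 2^index >= num_slots
        assert_eq!(ceil_log2(1), 0); // 2^0 = 1 >= 1
        assert_eq!(ceil_log2(8), 3); // 2^3 = 8 >= 8 (exact power of two)
        assert_eq!(ceil_log2(9), 4); // 2^4 = 16 >= 9
    }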

ci/buildkite-pipeline.sh
@@ -148,6 +148,18 @@ all_test_steps() {
   # Full test suite
   command_step stable ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-stable.sh" 70
+
+  # Docs tests
+  if affects \
+             .rs$ \
+             ^ci/rust-version.sh \
+             ^ci/test-docs.sh \
+      ; then
+    command_step doctest "ci/test-docs.sh" 15
+  else
+    annotate --style info --context test-docs \
+      "Docs skipped as no .rs files were modified"
+  fi
   wait_step
   # BPF test suite

ci/docker-rust-nightly/Dockerfile
@@ -1,4 +1,4 @@
-FROM solanalabs/rust:1.59.0
+FROM solanalabs/rust:1.60.0
 ARG date
 RUN set -x \

ci/docker-rust-nightly/build.sh
@@ -3,8 +3,14 @@ set -ex
 cd "$(dirname "$0")"
+platform=()
+if [[ $(uname -m) = arm64 ]]; then
+  # Ref: https://blog.jaimyn.dev/how-to-build-multi-architecture-docker-images-on-an-m1-mac/#tldr
+  platform+=(--platform linux/amd64)
+fi
 nightlyDate=${1:-$(date +%Y-%m-%d)}
-docker build -t solanalabs/rust-nightly:"$nightlyDate" --build-arg date="$nightlyDate" .
+docker build "${platform[@]}" -t solanalabs/rust-nightly:"$nightlyDate" --build-arg date="$nightlyDate" .
 maybeEcho=
 if [[ -z $CI ]]; then

ci/docker-rust/Dockerfile
@@ -1,6 +1,6 @@
 # Note: when the rust version is changed also modify
 # ci/rust-version.sh to pick up the new image tag
-FROM rust:1.59.0
+FROM rust:1.60.0
 # Add Google Protocol Buffers for Libra's metrics library.
 ENV PROTOC_VERSION 3.8.0

ci/docker-rust/build.sh
@@ -3,7 +3,14 @@ set -ex
 cd "$(dirname "$0")"
-docker build -t solanalabs/rust .
+platform=()
+if [[ $(uname -m) = arm64 ]]; then
+  # Ref: https://blog.jaimyn.dev/how-to-build-multi-architecture-docker-images-on-an-m1-mac/#tldr
+  platform+=(--platform linux/amd64)
+fi
+docker build "${platform[@]}" -t solanalabs/rust .
 read -r rustc version _ < <(docker run solanalabs/rust rustc --version)
 [[ $rustc = rustc ]]

ci/rust-version.sh
@@ -18,13 +18,13 @@
 if [[ -n $RUST_STABLE_VERSION ]]; then
   stable_version="$RUST_STABLE_VERSION"
 else
-  stable_version=1.59.0
+  stable_version=1.60.0
 fi
 if [[ -n $RUST_NIGHTLY_VERSION ]]; then
   nightly_version="$RUST_NIGHTLY_VERSION"
 else
-  nightly_version=2022-02-24
+  nightly_version=2022-04-01
 fi

ci/test-docs.sh (new symbolic link)
@@ -0,0 +1 @@
+test-stable.sh

ci/test-stable.sh
@@ -30,7 +30,7 @@ JOBS=$((JOBS>NPROC ? NPROC : JOBS))
 echo "Executing $testName"
 case $testName in
 test-stable)
-  _ "$cargo" stable test --jobs "$JOBS" --all --exclude solana-local-cluster ${V:+--verbose} -- --nocapture
+  _ "$cargo" stable test --jobs "$JOBS" --all --tests --exclude solana-local-cluster ${V:+--verbose} -- --nocapture
   ;;
 test-stable-bpf)
   # Clear the C dependency files, if dependency moves these files are not regenerated
@@ -130,6 +130,10 @@ test-wasm)
   done
   exit 0
   ;;
+test-docs)
+  _ "$cargo" stable test --jobs "$JOBS" --all --doc --exclude solana-local-cluster ${V:+--verbose} -- --nocapture
+  exit 0
+  ;;
 *)
   echo "Error: Unknown test: $testName"
   ;;

clap-utils/Cargo.toml
@@ -18,7 +18,7 @@ solana-remote-wallet = { path = "../remote-wallet", version = "=1.11.0", default
 solana-sdk = { path = "../sdk", version = "=1.11.0" }
 thiserror = "1.0.30"
 tiny-bip39 = "0.8.2"
-uriparse = "0.6.3"
+uriparse = "0.6.4"
 url = "2.2.2"
 [dev-dependencies]

cli-config/Cargo.toml
@@ -15,6 +15,8 @@ lazy_static = "1.4.0"
 serde = "1.0.136"
 serde_derive = "1.0.103"
 serde_yaml = "0.8.23"
+solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
+solana-sdk = { path = "../sdk", version = "=1.11.0" }
 url = "2.2.2"
 [dev-dependencies]

cli-config/src/config_input.rs (new file)
@@ -0,0 +1,126 @@
use {
crate::Config, solana_clap_utils::input_validators::normalize_to_url_if_moniker,
solana_sdk::commitment_config::CommitmentConfig, std::str::FromStr,
};
pub enum SettingType {
Explicit,
Computed,
SystemDefault,
}
pub struct ConfigInput {
pub json_rpc_url: String,
pub websocket_url: String,
pub keypair_path: String,
pub commitment: CommitmentConfig,
}
impl ConfigInput {
fn default_keypair_path() -> String {
Config::default().keypair_path
}
fn default_json_rpc_url() -> String {
Config::default().json_rpc_url
}
fn default_websocket_url() -> String {
Config::default().websocket_url
}
fn default_commitment() -> CommitmentConfig {
CommitmentConfig::confirmed()
}
fn first_nonempty_setting(
settings: std::vec::Vec<(SettingType, String)>,
) -> (SettingType, String) {
settings
.into_iter()
.find(|(_, value)| !value.is_empty())
.expect("no nonempty setting")
}
fn first_setting_is_some<T>(
settings: std::vec::Vec<(SettingType, Option<T>)>,
) -> (SettingType, T) {
let (setting_type, setting_option) = settings
.into_iter()
.find(|(_, value)| value.is_some())
.expect("all settings none");
(setting_type, setting_option.unwrap())
}
pub fn compute_websocket_url_setting(
websocket_cmd_url: &str,
websocket_cfg_url: &str,
json_rpc_cmd_url: &str,
json_rpc_cfg_url: &str,
) -> (SettingType, String) {
Self::first_nonempty_setting(vec![
(SettingType::Explicit, websocket_cmd_url.to_string()),
(SettingType::Explicit, websocket_cfg_url.to_string()),
(
SettingType::Computed,
Config::compute_websocket_url(&normalize_to_url_if_moniker(json_rpc_cmd_url)),
),
(
SettingType::Computed,
Config::compute_websocket_url(&normalize_to_url_if_moniker(json_rpc_cfg_url)),
),
(SettingType::SystemDefault, Self::default_websocket_url()),
])
}
pub fn compute_json_rpc_url_setting(
json_rpc_cmd_url: &str,
json_rpc_cfg_url: &str,
) -> (SettingType, String) {
let (setting_type, url_or_moniker) = Self::first_nonempty_setting(vec![
(SettingType::Explicit, json_rpc_cmd_url.to_string()),
(SettingType::Explicit, json_rpc_cfg_url.to_string()),
(SettingType::SystemDefault, Self::default_json_rpc_url()),
]);
(setting_type, normalize_to_url_if_moniker(&url_or_moniker))
}
pub fn compute_keypair_path_setting(
keypair_cmd_path: &str,
keypair_cfg_path: &str,
) -> (SettingType, String) {
Self::first_nonempty_setting(vec![
(SettingType::Explicit, keypair_cmd_path.to_string()),
(SettingType::Explicit, keypair_cfg_path.to_string()),
(SettingType::SystemDefault, Self::default_keypair_path()),
])
}
pub fn compute_commitment_config(
commitment_cmd: &str,
commitment_cfg: &str,
) -> (SettingType, CommitmentConfig) {
Self::first_setting_is_some(vec![
(
SettingType::Explicit,
CommitmentConfig::from_str(commitment_cmd).ok(),
),
(
SettingType::Explicit,
CommitmentConfig::from_str(commitment_cfg).ok(),
),
(SettingType::SystemDefault, Some(Self::default_commitment())),
])
}
}
impl Default for ConfigInput {
fn default() -> ConfigInput {
ConfigInput {
json_rpc_url: Self::default_json_rpc_url(),
websocket_url: Self::default_websocket_url(),
keypair_path: Self::default_keypair_path(),
commitment: CommitmentConfig::confirmed(),
}
}
}
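
ConfigInput codifies one precedence rule for every setting: an explicit command-line value beats the config file, computed values (for example a websocket URL derived from the RPC URL) beat the built-in default, and the default is the last resort. A quick illustration against compute_json_rpc_url_setting (URLs invented for the example):

    use solana_cli_config::{ConfigInput, SettingType};

    fn main() {
        // Command-line value present: it wins and is marked Explicit.
        let (kind, url) =
            ConfigInput::compute_json_rpc_url_setting("localhost", "https://example.com:8899");
        assert!(matches!(kind, SettingType::Explicit));
        assert_eq!(url, "http://localhost:8899"); // monikers are normalized to URLs

        // No command-line value: fall back to the config file entry.
        let (_, url) = ConfigInput::compute_json_rpc_url_setting("", "https://example.com:8899");
        assert_eq!(url, "https://example.com:8899");
    }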

cli-config/src/lib.rs
@@ -55,12 +55,16 @@
 extern crate lazy_static;
 mod config;
-pub use config::{Config, CONFIG_FILE};
+mod config_input;
 use std::{
     fs::{create_dir_all, File},
     io::{self, Write},
     path::Path,
 };
+pub use {
+    config::{Config, CONFIG_FILE},
+    config_input::{ConfigInput, SettingType},
+};
 /// Load a value from a file in YAML format.
 ///

cli-output/Cargo.toml
@@ -18,10 +18,12 @@ console = "0.15.0"
 humantime = "2.0.1"
 indicatif = "0.16.2"
 pretty-hex = "0.2.1"
+semver = "1.0.7"
 serde = "1.0.136"
 serde_json = "1.0.79"
 solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
 solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
+solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
 solana-client = { path = "../client", version = "=1.11.0" }
 solana-sdk = { path = "../sdk", version = "=1.11.0" }
 solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }

cli-output/src/cli_output.rs
@@ -356,6 +356,7 @@ pub enum CliValidatorsSortOrder {
     SkipRate,
     Stake,
     VoteAccount,
+    Version,
 }
 #[derive(Serialize, Deserialize)]
@@ -494,6 +495,22 @@ impl fmt::Display for CliValidators {
             CliValidatorsSortOrder::Stake => {
                 sorted_validators.sort_by_key(|a| a.activated_stake);
             }
+            CliValidatorsSortOrder::Version => {
+                sorted_validators.sort_by(|a, b| {
+                    use std::cmp::Ordering;
+                    let a_version = semver::Version::parse(a.version.as_str()).ok();
+                    let b_version = semver::Version::parse(b.version.as_str()).ok();
+                    match (a_version, b_version) {
+                        (None, None) => a.version.cmp(&b.version),
+                        (None, Some(_)) => Ordering::Less,
+                        (Some(_), None) => Ordering::Greater,
+                        (Some(va), Some(vb)) => match va.cmp(&vb) {
+                            Ordering::Equal => a.activated_stake.cmp(&b.activated_stake),
+                            ordering => ordering,
+                        },
+                    }
+                });
+            }
         }
         if self.validators_reverse_sort {
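
The Version arm orders validators by parsed semver, sorts entries whose version string does not parse (such as "unknown") before every valid version, and falls back to activated stake between equal versions. A standalone repro of the comparator core (free function invented for the example):

    use std::cmp::Ordering;

    fn cmp_versions(a: &str, b: &str) -> Ordering {
        match (semver::Version::parse(a).ok(), semver::Version::parse(b).ok()) {
            (None, None) => a.cmp(b),          // neither parses: plain string order
            (None, Some(_)) => Ordering::Less, // unparseable sorts lowest
            (Some(_), None) => Ordering::Greater,
            (Some(va), Some(vb)) => va.cmp(&vb), // semantic order, not lexicographic
        }
    }

    fn main() {
        assert_eq!(cmp_versions("1.9.2", "1.10.0"), Ordering::Less);
        assert_eq!(cmp_versions("unknown", "1.9.2"), Ordering::Less);
    }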

cli-output/src/display.rs
@@ -3,6 +3,7 @@ use {
     chrono::{DateTime, Local, NaiveDateTime, SecondsFormat, TimeZone, Utc},
     console::style,
     indicatif::{ProgressBar, ProgressStyle},
+    solana_cli_config::SettingType,
     solana_sdk::{
         clock::UnixTimestamp,
         hash::Hash,
@@ -104,6 +105,21 @@ pub fn writeln_name_value(f: &mut dyn fmt::Write, name: &str, value: &str) -> fm
     writeln!(f, "{} {}", style(name).bold(), styled_value)
 }
+pub fn println_name_value_or(name: &str, value: &str, setting_type: SettingType) {
+    let description = match setting_type {
+        SettingType::Explicit => "",
+        SettingType::Computed => "(computed)",
+        SettingType::SystemDefault => "(default)",
+    };
+    println!(
+        "{} {} {}",
+        style(name).bold(),
+        style(value),
+        style(description).italic(),
+    );
+}
 pub fn format_labeled_address(pubkey: &str, address_labels: &HashMap<String, String>) -> String {
     let label = address_labels.get(pubkey);
     match label {

cli/Cargo.toml
@@ -23,7 +23,7 @@ log = "0.4.14"
 num-traits = "0.2"
 pretty-hex = "0.2.1"
 reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
-semver = "1.0.6"
+semver = "1.0.7"
 serde = "1.0.136"
 serde_derive = "1.0.103"
 serde_json = "1.0.79"
@@ -42,7 +42,7 @@ solana-sdk = { path = "../sdk", version = "=1.11.0" }
 solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
 solana-version = { path = "../version", version = "=1.11.0" }
 solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
-solana_rbpf = "=0.2.24"
+solana_rbpf = "=0.2.25"
 spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
 thiserror = "1.0.30"
 tiny-bip39 = "0.8.2"

cli/src/cli.rs
@@ -7,7 +7,8 @@ use {
     log::*,
     num_traits::FromPrimitive,
     serde_json::{self, Value},
-    solana_clap_utils::{self, input_parsers::*, input_validators::*, keypair::*},
+    solana_clap_utils::{self, input_parsers::*, keypair::*},
+    solana_cli_config::ConfigInput,
     solana_cli_output::{
         display::println_name_value, CliSignature, CliValidatorsSortOrder, OutputFormat,
     },
@@ -187,6 +188,7 @@ pub enum CliCommand {
         stake_account_pubkey: Pubkey,
         stake_authority: SignerIndex,
         sign_only: bool,
+        deactivate_delinquent: bool,
         dump_transaction_message: bool,
         blockhash_query: BlockhashQuery,
         nonce_account: Option<Pubkey>,
@@ -456,129 +458,23 @@ impl From<nonce_utils::Error> for CliError {
     }
 }
-pub enum SettingType {
-    Explicit,
-    Computed,
-    SystemDefault,
-}
 pub struct CliConfig<'a> {
     pub command: CliCommand,
     pub json_rpc_url: String,
     pub websocket_url: String,
-    pub signers: Vec<&'a dyn Signer>,
     pub keypair_path: String,
+    pub commitment: CommitmentConfig,
+    pub signers: Vec<&'a dyn Signer>,
     pub rpc_client: Option<Arc<RpcClient>>,
     pub rpc_timeout: Duration,
     pub verbose: bool,
     pub output_format: OutputFormat,
-    pub commitment: CommitmentConfig,
     pub send_transaction_config: RpcSendTransactionConfig,
     pub confirm_transaction_initial_timeout: Duration,
     pub address_labels: HashMap<String, String>,
 }
 impl CliConfig<'_> {
-    fn default_keypair_path() -> String {
-        solana_cli_config::Config::default().keypair_path
-    }
-    fn default_json_rpc_url() -> String {
-        solana_cli_config::Config::default().json_rpc_url
-    }
-    fn default_websocket_url() -> String {
-        solana_cli_config::Config::default().websocket_url
-    }
-    fn default_commitment() -> CommitmentConfig {
-        CommitmentConfig::confirmed()
-    }
-    fn first_nonempty_setting(
-        settings: std::vec::Vec<(SettingType, String)>,
-    ) -> (SettingType, String) {
-        settings
-            .into_iter()
-            .find(|(_, value)| !value.is_empty())
-            .expect("no nonempty setting")
-    }
-    fn first_setting_is_some<T>(
-        settings: std::vec::Vec<(SettingType, Option<T>)>,
-    ) -> (SettingType, T) {
-        let (setting_type, setting_option) = settings
-            .into_iter()
-            .find(|(_, value)| value.is_some())
-            .expect("all settings none");
-        (setting_type, setting_option.unwrap())
-    }
-    pub fn compute_websocket_url_setting(
-        websocket_cmd_url: &str,
-        websocket_cfg_url: &str,
-        json_rpc_cmd_url: &str,
-        json_rpc_cfg_url: &str,
-    ) -> (SettingType, String) {
-        Self::first_nonempty_setting(vec![
-            (SettingType::Explicit, websocket_cmd_url.to_string()),
-            (SettingType::Explicit, websocket_cfg_url.to_string()),
-            (
-                SettingType::Computed,
-                solana_cli_config::Config::compute_websocket_url(&normalize_to_url_if_moniker(
-                    json_rpc_cmd_url,
-                )),
-            ),
-            (
-                SettingType::Computed,
-                solana_cli_config::Config::compute_websocket_url(&normalize_to_url_if_moniker(
-                    json_rpc_cfg_url,
-                )),
-            ),
-            (SettingType::SystemDefault, Self::default_websocket_url()),
-        ])
-    }
-    pub fn compute_json_rpc_url_setting(
-        json_rpc_cmd_url: &str,
-        json_rpc_cfg_url: &str,
-    ) -> (SettingType, String) {
-        let (setting_type, url_or_moniker) = Self::first_nonempty_setting(vec![
-            (SettingType::Explicit, json_rpc_cmd_url.to_string()),
-            (SettingType::Explicit, json_rpc_cfg_url.to_string()),
-            (SettingType::SystemDefault, Self::default_json_rpc_url()),
-        ]);
-        (setting_type, normalize_to_url_if_moniker(&url_or_moniker))
-    }
-    pub fn compute_keypair_path_setting(
-        keypair_cmd_path: &str,
-        keypair_cfg_path: &str,
-    ) -> (SettingType, String) {
-        Self::first_nonempty_setting(vec![
-            (SettingType::Explicit, keypair_cmd_path.to_string()),
-            (SettingType::Explicit, keypair_cfg_path.to_string()),
-            (SettingType::SystemDefault, Self::default_keypair_path()),
-        ])
-    }
-    pub fn compute_commitment_config(
-        commitment_cmd: &str,
-        commitment_cfg: &str,
-    ) -> (SettingType, CommitmentConfig) {
-        Self::first_setting_is_some(vec![
-            (
-                SettingType::Explicit,
-                CommitmentConfig::from_str(commitment_cmd).ok(),
-            ),
-            (
-                SettingType::Explicit,
-                CommitmentConfig::from_str(commitment_cfg).ok(),
-            ),
-            (SettingType::SystemDefault, Some(Self::default_commitment())),
-        ])
-    }
     pub(crate) fn pubkey(&self) -> Result<Pubkey, SignerError> {
         if !self.signers.is_empty() {
             self.signers[0].try_pubkey()
@@ -609,15 +505,15 @@ impl Default for CliConfig<'_> {
                 pubkey: Some(Pubkey::default()),
                 use_lamports_unit: false,
             },
-            json_rpc_url: Self::default_json_rpc_url(),
-            websocket_url: Self::default_websocket_url(),
+            json_rpc_url: ConfigInput::default().json_rpc_url,
+            websocket_url: ConfigInput::default().websocket_url,
+            keypair_path: ConfigInput::default().keypair_path,
+            commitment: ConfigInput::default().commitment,
             signers: Vec::new(),
-            keypair_path: Self::default_keypair_path(),
            rpc_client: None,
            rpc_timeout: Duration::from_secs(u64::from_str(DEFAULT_RPC_TIMEOUT_SECONDS).unwrap()),
            verbose: false,
            output_format: OutputFormat::Display,
-            commitment: CommitmentConfig::confirmed(),
            send_transaction_config: RpcSendTransactionConfig::default(),
            confirm_transaction_initial_timeout: Duration::from_secs(
                u64::from_str(DEFAULT_CONFIRM_TX_TIMEOUT_SECONDS).unwrap(),
@@ -1188,6 +1084,7 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
             stake_account_pubkey,
             stake_authority,
             sign_only,
+            deactivate_delinquent,
             dump_transaction_message,
             blockhash_query,
             nonce_account,
@@ -1201,6 +1098,7 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
             stake_account_pubkey,
             *stake_authority,
             *sign_only,
+            *deactivate_delinquent,
             *dump_transaction_message,
             blockhash_query,
             *nonce_account,
@@ -2197,6 +2095,7 @@ mod tests {
             stake_authority: 0,
             sign_only: false,
            dump_transaction_message: false,
+            deactivate_delinquent: false,
            blockhash_query: BlockhashQuery::default(),
            nonce_account: None,
            nonce_authority: 0,

View File

@ -379,6 +379,7 @@ impl ClusterQuerySubCommands for App<'_, '_> {
"root", "root",
"skip-rate", "skip-rate",
"stake", "stake",
"version",
"vote-account", "vote-account",
]) ])
.default_value("stake") .default_value("stake")
@@ -650,6 +651,7 @@ pub fn parse_show_validators(matches: &ArgMatches<'_>) -> Result<CliCommandInfo,
"skip-rate" => CliValidatorsSortOrder::SkipRate,
"stake" => CliValidatorsSortOrder::Stake,
"vote-account" => CliValidatorsSortOrder::VoteAccount,
"version" => CliValidatorsSortOrder::Version,
_ => unreachable!(),
};


@@ -8,30 +8,18 @@ use {
},
solana_cli::{
clap_app::get_clap_app,
cli::{parse_command, process_command, CliCommandInfo, CliConfig},
},
solana_cli_config::{Config, ConfigInput},
solana_cli_output::{
display::{println_name_value, println_name_value_or},
OutputFormat,
},
solana_client::rpc_config::RpcSendTransactionConfig,
solana_remote_wallet::remote_wallet::RemoteWalletManager,
std::{collections::HashMap, error, path::PathBuf, sync::Arc, time::Duration},
};
fn parse_settings(matches: &ArgMatches<'_>) -> Result<bool, Box<dyn error::Error>> {
let parse_args = match matches.subcommand() {
("config", Some(matches)) => {
@@ -50,17 +38,18 @@ fn parse_settings(matches: &ArgMatches<'_>) -> Result<bool, Box<dyn error::Error
match matches.subcommand() {
("get", Some(subcommand_matches)) => {
let (url_setting_type, json_rpc_url) =
ConfigInput::compute_json_rpc_url_setting("", &config.json_rpc_url);
let (ws_setting_type, websocket_url) =
ConfigInput::compute_websocket_url_setting(
"",
&config.websocket_url,
"",
&config.json_rpc_url,
);
let (keypair_setting_type, keypair_path) =
ConfigInput::compute_keypair_path_setting("", &config.keypair_path);
let (commitment_setting_type, commitment) =
ConfigInput::compute_commitment_config("", &config.commitment);
if let Some(field) = subcommand_matches.value_of("specific_setting") {
let (field_name, value, setting_type) = match field {
@@ -107,17 +96,18 @@ fn parse_settings(matches: &ArgMatches<'_>) -> Result<bool, Box<dyn error::Error
config.save(config_file)?;
let (url_setting_type, json_rpc_url) =
ConfigInput::compute_json_rpc_url_setting("", &config.json_rpc_url);
let (ws_setting_type, websocket_url) =
ConfigInput::compute_websocket_url_setting(
"",
&config.websocket_url,
"",
&config.json_rpc_url,
);
let (keypair_setting_type, keypair_path) =
ConfigInput::compute_keypair_path_setting("", &config.keypair_path);
let (commitment_setting_type, commitment) =
ConfigInput::compute_commitment_config("", &config.commitment);
println_name_value("Config File:", config_file);
println_name_value_or("RPC URL:", &json_rpc_url, url_setting_type);
@@ -158,7 +148,7 @@ pub fn parse_args<'a>(
} else {
Config::default()
};
let (_, json_rpc_url) = ConfigInput::compute_json_rpc_url_setting(
matches.value_of("json_rpc_url").unwrap_or(""),
&config.json_rpc_url,
);
@@ -171,14 +161,14 @@ pub fn parse_args<'a>(
let confirm_transaction_initial_timeout =
Duration::from_secs(confirm_transaction_initial_timeout);
let (_, websocket_url) = ConfigInput::compute_websocket_url_setting(
matches.value_of("websocket_url").unwrap_or(""),
&config.websocket_url,
matches.value_of("json_rpc_url").unwrap_or(""),
&config.json_rpc_url,
);
let default_signer_arg_name = "keypair".to_string();
let (_, default_signer_path) = ConfigInput::compute_keypair_path_setting(
matches.value_of(&default_signer_arg_name).unwrap_or(""),
&config.keypair_path,
);
@@ -201,7 +191,7 @@ pub fn parse_args<'a>(
let verbose = matches.is_present("verbose");
let output_format = OutputFormat::from_matches(matches, "output_format", verbose);
let (_, commitment) = ConfigInput::compute_commitment_config(
matches.value_of("commitment").unwrap_or(""),
&config.commitment,
);
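The net effect in `parse_args` is that every connection setting now funnels through `ConfigInput`, with an empty string standing in for an absent flag so lower-precedence sources win. An illustrative resolution, with hypothetical values:

    // An explicit CLI value should win and be reported as Explicit...
    let (setting_type, json_rpc_url) = ConfigInput::compute_json_rpc_url_setting(
        "https://api.devnet.solana.com",       // pretend --url was passed
        "https://api.mainnet-beta.solana.com", // value loaded from the config file
    );
    assert_eq!(json_rpc_url, "https://api.devnet.solana.com");

    // ...while an empty CLI value falls through to the config file.
    let (_, json_rpc_url) =
        ConfigInput::compute_json_rpc_url_setting("", "https://api.mainnet-beta.solana.com");
    assert_eq!(json_rpc_url, "https://api.mainnet-beta.solana.com");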


@@ -41,6 +41,7 @@ use {
self,
instruction::{self as stake_instruction, LockupArgs, StakeError},
state::{Authorized, Lockup, Meta, StakeActivationStatus, StakeAuthorize, StakeState},
tools::{acceptable_reference_epoch_credits, eligible_for_deactivate_delinquent},
},
stake_history::StakeHistory,
system_instruction::SystemError,
@@ -379,6 +380,13 @@ impl StakeSubCommands for App<'_, '_> {
.help("Seed for address generation; if specified, the resulting account \
will be at a derived address of STAKE_ACCOUNT_ADDRESS")
)
.arg(
Arg::with_name("delinquent")
.long("delinquent")
.takes_value(false)
.conflicts_with(SIGN_ONLY_ARG.name)
.help("Deactivate abandoned stake that is currently delegated to a delinquent vote account")
)
.arg(stake_authority_arg())
.offline_args()
.nonce_args(false)
@@ -995,11 +1003,13 @@ pub fn parse_stake_deactivate_stake(
let stake_account_pubkey =
pubkey_of_signer(matches, "stake_account_pubkey", wallet_manager)?.unwrap();
let sign_only = matches.is_present(SIGN_ONLY_ARG.name);
let deactivate_delinquent = matches.is_present("delinquent");
let dump_transaction_message = matches.is_present(DUMP_TRANSACTION_MESSAGE.name);
let blockhash_query = BlockhashQuery::new_from_matches(matches);
let nonce_account = pubkey_of(matches, NONCE_ARG.name);
let memo = matches.value_of(MEMO_ARG.name).map(String::from);
let seed = value_t!(matches, "seed", String).ok();
let (stake_authority, stake_authority_pubkey) =
signer_of(matches, STAKE_AUTHORITY_ARG.name, wallet_manager)?;
let (nonce_authority, nonce_authority_pubkey) =
@@ -1018,6 +1028,7 @@ pub fn parse_stake_deactivate_stake(
stake_account_pubkey,
stake_authority: signer_info.index_of(stake_authority_pubkey).unwrap(),
sign_only,
deactivate_delinquent,
dump_transaction_message,
blockhash_query,
nonce_account,
@@ -1477,6 +1488,7 @@ pub fn process_deactivate_stake_account(
stake_account_pubkey: &Pubkey,
stake_authority: SignerIndex,
sign_only: bool,
deactivate_delinquent: bool,
dump_transaction_message: bool,
blockhash_query: &BlockhashQuery,
nonce_account: Option<Pubkey>,
@@ -1486,7 +1498,6 @@
fee_payer: SignerIndex,
) -> ProcessResult {
let recent_blockhash = blockhash_query.get_blockhash(rpc_client, config.commitment)?;
let stake_account_address = if let Some(seed) = seed {
Pubkey::create_with_seed(stake_account_pubkey, seed, &stake::program::id())?
@@ -1494,11 +1505,77 @@
*stake_account_pubkey
};
let ixs = vec![if deactivate_delinquent {
let stake_account = rpc_client.get_account(&stake_account_address)?;
if stake_account.owner != stake::program::id() {
return Err(CliError::BadParameter(format!(
"{} is not a stake account",
stake_account_address,
))
.into());
}
let vote_account_address = match stake_account.state() {
Ok(stake_state) => match stake_state {
StakeState::Stake(_, stake) => stake.delegation.voter_pubkey,
_ => {
return Err(CliError::BadParameter(format!(
"{} is not a delegated stake account",
stake_account_address,
))
.into())
}
},
Err(err) => {
return Err(CliError::RpcRequestError(format!(
"Account data could not be deserialized to stake state: {}",
err
))
.into())
}
};
let current_epoch = rpc_client.get_epoch_info()?.epoch;
let (_, vote_state) = crate::vote::get_vote_account(
rpc_client,
&vote_account_address,
rpc_client.commitment(),
)?;
if !eligible_for_deactivate_delinquent(&vote_state.epoch_credits, current_epoch) {
return Err(CliError::BadParameter(format!(
"Stake has not been delinquent for {} epochs",
stake::MINIMUM_DELINQUENT_EPOCHS_FOR_DEACTIVATION,
))
.into());
}
// Search for a reference vote account
let reference_vote_account_address = rpc_client
.get_vote_accounts()?
.current
.into_iter()
.find(|vote_account_info| {
acceptable_reference_epoch_credits(&vote_account_info.epoch_credits, current_epoch)
});
let reference_vote_account_address = reference_vote_account_address
.ok_or_else(|| {
CliError::RpcRequestError("Unable to find a reference vote account".into())
})?
.vote_pubkey
.parse()?;
stake_instruction::deactivate_delinquent_stake(
&stake_account_address,
&vote_account_address,
&reference_vote_account_address,
)
} else {
let stake_authority = config.signers[stake_authority];
stake_instruction::deactivate_stake(&stake_account_address, &stake_authority.pubkey())
}]
.with_memo(memo);
let nonce_authority = config.signers[nonce_authority];
let fee_payer = config.signers[fee_payer];
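The two SDK checks used above gate the new path: `eligible_for_deactivate_delinquent` requires the delegated vote account to have earned no credits for at least `MINIMUM_DELINQUENT_EPOCHS_FOR_DEACTIVATION` (5) recent epochs, and `acceptable_reference_epoch_credits` requires the chosen reference vote account to have earned credits in each of those epochs. A paraphrased sketch of the first check (the SDK's exact source may differ; `epoch_credits` entries are `(epoch, credits, prev_credits)`):

    const MINIMUM_DELINQUENT_EPOCHS_FOR_DEACTIVATION: usize = 5;

    // Paraphrase: a vote account is delinquent enough if it has never voted,
    // or its newest credited epoch is at least 5 epochs behind the current one.
    fn eligible_for_deactivate_delinquent(
        epoch_credits: &[(u64, u64, u64)],
        current_epoch: u64,
    ) -> bool {
        match epoch_credits.last() {
            None => true,
            Some((epoch, _, _)) => current_epoch
                .checked_sub(MINIMUM_DELINQUENT_EPOCHS_FOR_DEACTIVATION as u64)
                .map(|minimum_epoch| *epoch <= minimum_epoch)
                .unwrap_or(false),
        }
    }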
@@ -4174,6 +4251,34 @@ mod tests {
stake_account_pubkey,
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::default(),
nonce_account: None,
nonce_authority: 0,
memo: None,
seed: None,
fee_payer: 0,
},
signers: vec![read_keypair_file(&default_keypair_file).unwrap().into()],
}
);
// Test DeactivateStake Subcommand with delinquent flag
let test_deactivate_stake = test_commands.clone().get_matches_from(vec![
"test",
"deactivate-stake",
&stake_account_string,
"--delinquent",
]);
assert_eq!(
parse_command(&test_deactivate_stake, &default_signer, &mut None).unwrap(),
CliCommandInfo {
command: CliCommand::DeactivateStake {
stake_account_pubkey,
stake_authority: 0,
sign_only: false,
deactivate_delinquent: true,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::default(),
nonce_account: None,
@@ -4201,6 +4306,7 @@ mod tests {
stake_account_pubkey,
stake_authority: 1,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::default(),
nonce_account: None,
@@ -4235,6 +4341,7 @@ mod tests {
stake_account_pubkey,
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(
blockhash_query::Source::Cluster,
@@ -4265,6 +4372,7 @@ mod tests {
stake_account_pubkey,
stake_authority: 0,
sign_only: true,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::None(blockhash),
nonce_account: None,
@@ -4299,6 +4407,7 @@ mod tests {
stake_account_pubkey,
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(
blockhash_query::Source::Cluster,
@@ -4345,6 +4454,7 @@ mod tests {
stake_account_pubkey,
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(
blockhash_query::Source::NonceAccount(nonce_account),
@@ -4379,6 +4489,7 @@ mod tests {
stake_account_pubkey,
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,


@@ -1140,7 +1140,7 @@ pub fn process_vote_update_commission(
}
}
pub(crate) fn get_vote_account(
rpc_client: &RpcClient,
vote_account_pubkey: &Pubkey,
commitment_config: CommitmentConfig,


@@ -204,6 +204,7 @@ fn test_seed_stake_delegation_and_deactivation() {
stake_account_pubkey: stake_address,
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::default(),
nonce_account: None,
@@ -287,6 +288,7 @@ fn test_stake_delegation_and_deactivation() {
stake_account_pubkey: stake_keypair.pubkey(),
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::default(),
nonce_account: None,
@@ -412,6 +414,7 @@ fn test_offline_stake_delegation_and_deactivation() {
stake_account_pubkey: stake_keypair.pubkey(),
stake_authority: 0,
sign_only: true,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::None(blockhash),
nonce_account: None,
@@ -431,6 +434,7 @@ fn test_offline_stake_delegation_and_deactivation() {
stake_account_pubkey: stake_keypair.pubkey(),
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(blockhash_query::Source::Cluster, blockhash),
nonce_account: None,
@@ -546,6 +550,7 @@ fn test_nonced_stake_delegation_and_deactivation() {
stake_account_pubkey: stake_keypair.pubkey(),
stake_authority: 0,
sign_only: false,
deactivate_delinquent: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(
blockhash_query::Source::NonceAccount(nonce_account.pubkey()),


@@ -11,7 +11,7 @@ edition = "2021"
[dependencies]
async-mutex = "1.4.0"
async-trait = "0.1.53"
base64 = "0.13.0"
bincode = "1.3.3"
bs58 = "0.4.0"
@@ -27,12 +27,13 @@ lazy_static = "1.4.0"
log = "0.4.14"
lru = "0.7.5"
quinn = "0.8.0"
quinn-proto = "0.8.0"
rand = "0.7.0" rand = "0.7.0"
rand_chacha = "0.2.2" rand_chacha = "0.2.2"
rayon = "1.5.1" rayon = "1.5.1"
reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] } reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
rustls = { version = "0.20.2", features = ["dangerous_configuration"] } rustls = { version = "0.20.2", features = ["dangerous_configuration"] }
semver = "1.0.6" semver = "1.0.7"
serde = "1.0.136" serde = "1.0.136"
serde_derive = "1.0.103" serde_derive = "1.0.103"
serde_json = "1.0.79" serde_json = "1.0.79"
@@ -40,6 +41,7 @@ solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
solana-faucet = { path = "../faucet", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-metrics = { path = "../metrics", version = "=1.11.0" }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }


@@ -1,35 +1,135 @@
use {
crate::{
quic_client::QuicTpuConnection,
tpu_connection::{ClientStats, TpuConnection},
udp_client::UdpTpuConnection,
},
lazy_static::lazy_static,
lru::LruCache,
solana_net_utils::VALIDATOR_PORT_RANGE,
solana_sdk::{
timing::AtomicInterval, transaction::VersionedTransaction, transport::TransportError,
},
std::{
net::{IpAddr, Ipv4Addr, SocketAddr},
sync::{
atomic::{AtomicU64, Ordering},
Arc, Mutex,
},
},
};
// Should be non-zero
static MAX_CONNECTIONS: usize = 1024;
#[derive(Clone)]
pub enum Connection {
Udp(Arc<UdpTpuConnection>),
Quic(Arc<QuicTpuConnection>),
}
#[derive(Default)]
struct ConnectionCacheStats {
cache_hits: AtomicU64,
cache_misses: AtomicU64,
sent_packets: AtomicU64,
total_batches: AtomicU64,
batch_success: AtomicU64,
batch_failure: AtomicU64,
// Need to track these separately per-connection
// because we need to track the base stat value from quinn
total_client_stats: ClientStats,
}
const CONNECTION_STAT_SUBMISSION_INTERVAL: u64 = 2000;
impl ConnectionCacheStats {
fn add_client_stats(&self, client_stats: &ClientStats, num_packets: usize, is_success: bool) {
self.total_client_stats.total_connections.fetch_add(
client_stats.total_connections.load(Ordering::Relaxed),
Ordering::Relaxed,
);
self.total_client_stats.connection_reuse.fetch_add(
client_stats.connection_reuse.load(Ordering::Relaxed),
Ordering::Relaxed,
);
self.sent_packets
.fetch_add(num_packets as u64, Ordering::Relaxed);
self.total_batches.fetch_add(1, Ordering::Relaxed);
if is_success {
self.batch_success.fetch_add(1, Ordering::Relaxed);
} else {
self.batch_failure.fetch_add(1, Ordering::Relaxed);
}
}
fn report(&self) {
datapoint_info!(
"quic-client-connection-stats",
(
"cache_hits",
self.cache_hits.swap(0, Ordering::Relaxed),
i64
),
(
"cache_misses",
self.cache_misses.swap(0, Ordering::Relaxed),
i64
),
(
"total_connections",
self.total_client_stats
.total_connections
.swap(0, Ordering::Relaxed),
i64
),
(
"connection_reuse",
self.total_client_stats
.connection_reuse
.swap(0, Ordering::Relaxed),
i64
),
(
"congestion_events",
self.total_client_stats.congestion_events.load_and_reset(),
i64
),
(
"tx_streams_blocked_uni",
self.total_client_stats
.tx_streams_blocked_uni
.load_and_reset(),
i64
),
(
"tx_data_blocked",
self.total_client_stats.tx_data_blocked.load_and_reset(),
i64
),
(
"tx_acks",
self.total_client_stats.tx_acks.load_and_reset(),
i64
),
);
}
}
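`congestion_events` and the other quinn-derived fields are kept as `MovingStat`s because quinn exposes cumulative per-connection totals; the cache must turn those into deltas it can aggregate across connections and drain at each report. A paraphrased sketch of the pattern, assuming `solana_metrics::MovingStat` is roughly an atomic delta accumulator (not the crate's literal source):

    use std::sync::atomic::{AtomicU64, Ordering};

    #[derive(Default)]
    pub struct MovingStat {
        value: AtomicU64,
    }

    impl MovingStat {
        // `base` remembers the last cumulative value seen for one connection,
        // so only the delta since then is added to this aggregate stat.
        pub fn update_stat(&self, base: &MovingStat, new_cumulative: u64) {
            let old = base.value.swap(new_cumulative, Ordering::AcqRel);
            self.value
                .fetch_add(new_cumulative.saturating_sub(old), Ordering::Relaxed);
        }

        // report() drains the accumulated delta into a datapoint.
        pub fn load_and_reset(&self) -> u64 {
            self.value.swap(0, Ordering::AcqRel)
        }
    }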
struct ConnectionMap {
map: LruCache<SocketAddr, Connection>,
stats: Arc<ConnectionCacheStats>,
last_stats: AtomicInterval,
use_quic: bool,
}
impl ConnectionMap {
pub fn new() -> Self {
Self {
map: LruCache::new(MAX_CONNECTIONS),
stats: Arc::new(ConnectionCacheStats::default()),
last_stats: AtomicInterval::default(),
use_quic: false,
}
}
@@ -40,7 +140,7 @@ impl ConnMap {
}
lazy_static! {
static ref CONNECTION_MAP: Mutex<ConnectionMap> = Mutex::new(ConnectionMap::new());
}
pub fn set_use_quic(use_quic: bool) {
@@ -50,11 +150,25 @@ pub fn set_use_quic(use_quic: bool) {
// TODO: see https://github.com/solana-labs/solana/issues/23661
// remove lazy_static and optimize and refactor this
fn get_connection(addr: &SocketAddr) -> (Connection, Arc<ConnectionCacheStats>) {
let mut map = (*CONNECTION_MAP).lock().unwrap();
if map
.last_stats
.should_update(CONNECTION_STAT_SUBMISSION_INTERVAL)
{
map.stats.report();
}
let (connection, hit, maybe_stats) = match map.map.get(addr) {
Some(connection) => {
let mut stats = None;
// update connection stats
if let Connection::Quic(conn) = connection {
stats = conn.stats().map(|s| (conn.base_stats(), s));
}
(connection.clone(), true, stats)
}
None => {
let (_, send_socket) = solana_net_utils::bind_in_range(
IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)),
@@ -68,9 +182,41 @@ fn get_connection(addr: &SocketAddr) -> Connection {
};
map.map.put(*addr, connection.clone());
(connection, false, None)
}
};
if let Some((connection_stats, new_stats)) = maybe_stats {
map.stats.total_client_stats.congestion_events.update_stat(
&connection_stats.congestion_events,
new_stats.path.congestion_events,
);
map.stats
.total_client_stats
.tx_streams_blocked_uni
.update_stat(
&connection_stats.tx_streams_blocked_uni,
new_stats.frame_tx.streams_blocked_uni,
);
map.stats.total_client_stats.tx_data_blocked.update_stat(
&connection_stats.tx_data_blocked,
new_stats.frame_tx.data_blocked,
);
map.stats
.total_client_stats
.tx_acks
.update_stat(&connection_stats.tx_acks, new_stats.frame_tx.acks);
} }
if hit {
map.stats.cache_hits.fetch_add(1, Ordering::Relaxed);
} else {
map.stats.cache_misses.fetch_add(1, Ordering::Relaxed);
}
(connection, map.stats.clone())
}
// TODO: see https://github.com/solana-labs/solana/issues/23851
@@ -86,44 +232,86 @@ pub fn send_wire_transaction_batch(
packets: &[&[u8]],
addr: &SocketAddr,
) -> Result<(), TransportError> {
let (conn, stats) = get_connection(addr);
let client_stats = ClientStats::default();
let r = match conn {
Connection::Udp(conn) => conn.send_wire_transaction_batch(packets, &client_stats),
Connection::Quic(conn) => conn.send_wire_transaction_batch(packets, &client_stats),
};
stats.add_client_stats(&client_stats, packets.len(), r.is_ok());
r
}
pub fn send_wire_transaction_async(
packets: Vec<u8>,
addr: &SocketAddr,
) -> Result<(), TransportError> {
let (conn, stats) = get_connection(addr);
let client_stats = Arc::new(ClientStats::default());
let r = match conn {
Connection::Udp(conn) => conn.send_wire_transaction_async(packets, client_stats.clone()),
Connection::Quic(conn) => conn.send_wire_transaction_async(packets, client_stats.clone()),
};
stats.add_client_stats(&client_stats, 1, r.is_ok());
r
}
pub fn send_wire_transaction_batch_async(
packets: Vec<Vec<u8>>,
addr: &SocketAddr,
) -> Result<(), TransportError> {
let (conn, stats) = get_connection(addr);
let client_stats = Arc::new(ClientStats::default());
let len = packets.len();
let r = match conn {
Connection::Udp(conn) => {
conn.send_wire_transaction_batch_async(packets, client_stats.clone())
}
Connection::Quic(conn) => {
conn.send_wire_transaction_batch_async(packets, client_stats.clone())
}
};
stats.add_client_stats(&client_stats, len, r.is_ok());
r
}
pub fn send_wire_transaction(
wire_transaction: &[u8],
addr: &SocketAddr,
) -> Result<(), TransportError> {
send_wire_transaction_batch(&[wire_transaction], addr)
}
pub fn serialize_and_send_transaction(
transaction: &VersionedTransaction,
addr: &SocketAddr,
) -> Result<(), TransportError> {
let (conn, stats) = get_connection(addr);
let client_stats = ClientStats::default();
let r = match conn {
Connection::Udp(conn) => conn.serialize_and_send_transaction(transaction, &client_stats),
Connection::Quic(conn) => conn.serialize_and_send_transaction(transaction, &client_stats),
};
stats.add_client_stats(&client_stats, 1, r.is_ok());
r
}
pub fn par_serialize_and_send_transaction_batch(
transactions: &[VersionedTransaction],
addr: &SocketAddr,
) -> Result<(), TransportError> {
let (conn, stats) = get_connection(addr);
let client_stats = ClientStats::default();
let r = match conn {
Connection::Udp(conn) => {
conn.par_serialize_and_send_transaction_batch(transactions, &client_stats)
}
Connection::Quic(conn) => {
conn.par_serialize_and_send_transaction_batch(transactions, &client_stats)
}
};
stats.add_client_stats(&client_stats, transactions.len(), r.is_ok());
r
}
#[cfg(test)]
@@ -158,6 +346,7 @@ mod tests {
#[test]
fn test_connection_cache() {
solana_logger::setup();
// Allow the test to run deterministically
// with the same pseudorandom sequence between runs
// and on different platforms - the cryptographic security
@@ -171,7 +360,7 @@ mod tests {
// be lazy and not connect until first use or handle connection errors somehow
// (without crashing, as would be required in a real practical validator)
let first_addr = get_addr(&mut rng);
assert!(ip(get_connection(&first_addr).0) == first_addr.ip());
let addrs = (0..MAX_CONNECTIONS)
.into_iter()
.map(|_| {


@@ -197,6 +197,10 @@ impl RpcSender for HttpSender {
return Ok(json["result"].take());
}
}
fn url(&self) -> String {
self.url.clone()
}
}
#[cfg(test)]


@@ -2,6 +2,9 @@
#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate solana_metrics;
pub mod blockhash_query;
pub mod client_error;
pub mod connection_cache;
@@ -9,7 +12,6 @@ pub(crate) mod http_sender;
pub(crate) mod mock_sender;
pub mod nonblocking;
pub mod nonce_utils;
pub mod pubsub_client;
pub mod quic_client;
pub mod rpc_cache;


@@ -470,4 +470,8 @@ impl RpcSender for MockSender {
};
Ok(val)
}
fn url(&self) -> String {
format!("MockSender: {}", self.url)
}
}


@@ -502,6 +502,11 @@ impl RpcClient {
Self::new_with_timeout(url, timeout)
}
/// Get the configured url of the client's sender
pub fn url(&self) -> String {
self.sender.url()
}
async fn get_node_version(&self) -> Result<semver::Version, RpcError> {
let r_node_version = self.node_version.read().await;
if let Some(version) = &*r_node_version {


@@ -2,18 +2,29 @@
//! an interface for sending transactions which is restricted by the server's flow control.
use {
crate::{
client_error::ClientErrorKind,
tpu_connection::{ClientStats, TpuConnection},
},
async_mutex::Mutex,
futures::future::join_all,
itertools::Itertools,
lazy_static::lazy_static,
log::*,
quinn::{
ClientConfig, Endpoint, EndpointConfig, IdleTimeout, NewConnection, VarInt, WriteError,
},
quinn_proto::ConnectionStats,
solana_sdk::{
quic::{
QUIC_KEEP_ALIVE_MS, QUIC_MAX_CONCURRENT_STREAMS, QUIC_MAX_TIMEOUT_MS, QUIC_PORT_OFFSET,
},
transport::Result as TransportResult,
},
std::{
net::{SocketAddr, UdpSocket},
sync::{atomic::Ordering, Arc},
time::Duration,
},
tokio::runtime::Runtime,
};
@@ -39,18 +50,34 @@ impl rustls::client::ServerCertVerifier for SkipServerVerification {
Ok(rustls::client::ServerCertVerified::assertion())
}
}
lazy_static! {
static ref RUNTIME: Runtime = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap();
}
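Hoisting the runtime into a process-wide `lazy_static` means every `QuicClient` now shares one Tokio thread pool instead of constructing its own multi-threaded runtime per connection; the `RUNTIME.enter()` guards below make that shared runtime current before `block_on` or `spawn` is used.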
struct QuicClient {
endpoint: Endpoint,
connection: Arc<Mutex<Option<Arc<NewConnection>>>>,
addr: SocketAddr,
stats: Arc<ClientStats>,
}
pub struct QuicTpuConnection {
client: Arc<QuicClient>,
}
impl QuicTpuConnection {
pub fn stats(&self) -> Option<ConnectionStats> {
self.client.stats()
}
pub fn base_stats(&self) -> Arc<ClientStats> {
self.client.stats.clone()
}
}
impl TpuConnection for QuicTpuConnection {
fn new(client_socket: UdpSocket, tpu_addr: SocketAddr) -> Self {
let tpu_addr = SocketAddr::new(tpu_addr.ip(), tpu_addr.port() + QUIC_PORT_OFFSET);
@@ -63,35 +90,74 @@ impl TpuConnection for QuicTpuConnection {
&self.client.addr
}
fn send_wire_transaction<T>(
&self,
wire_transaction: T,
stats: &ClientStats,
) -> TransportResult<()>
where
T: AsRef<[u8]>,
{
let _guard = RUNTIME.enter();
let send_buffer = self.client.send_buffer(wire_transaction, stats);
RUNTIME.block_on(send_buffer)?;
Ok(())
}
fn send_wire_transaction_batch<T>(
&self,
buffers: &[T],
stats: &ClientStats,
) -> TransportResult<()>
where
T: AsRef<[u8]>,
{
let _guard = RUNTIME.enter();
let send_batch = self.client.send_batch(buffers, stats);
RUNTIME.block_on(send_batch)?;
Ok(())
}
fn send_wire_transaction_async(
&self,
wire_transaction: Vec<u8>,
stats: Arc<ClientStats>,
) -> TransportResult<()> {
let _guard = RUNTIME.enter();
let client = self.client.clone();
//drop and detach the task
let _ = RUNTIME.spawn(async move {
let send_buffer = client.send_buffer(wire_transaction, &stats);
if let Err(e) = send_buffer.await {
warn!("Failed to send transaction async to {:?}", e);
datapoint_warn!("send-wire-async", ("failure", 1, i64),);
}
});
Ok(())
}
fn send_wire_transaction_batch_async(
&self,
buffers: Vec<Vec<u8>>,
stats: Arc<ClientStats>,
) -> TransportResult<()> {
let _guard = RUNTIME.enter();
let client = self.client.clone();
//drop and detach the task
let _ = RUNTIME.spawn(async move {
let send_batch = client.send_batch(&buffers, &stats);
if let Err(e) = send_batch.await {
warn!("Failed to send transaction batch async to {:?}", e);
datapoint_warn!("send-wire-batch-async", ("failure", 1, i64),);
}
});
Ok(())
}
}
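Note that both `_async` variants are fire-and-forget: the spawned task is detached, so a failed send surfaces only through the `warn!` log and the `send-wire-async`/`send-wire-batch-async` failure datapoints; the caller still receives `Ok(())`.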
impl QuicClient {
pub fn new(client_socket: UdpSocket, addr: SocketAddr) -> Self {
let _guard = RUNTIME.enter();
let crypto = rustls::ClientConfig::builder()
.with_safe_defaults()
@@ -100,18 +166,30 @@ impl QuicClient {
let create_endpoint = QuicClient::create_endpoint(EndpointConfig::default(), client_socket);
let mut endpoint = RUNTIME.block_on(create_endpoint);
let mut config = ClientConfig::new(Arc::new(crypto));
let transport_config = Arc::get_mut(&mut config.transport).unwrap();
let timeout = IdleTimeout::from(VarInt::from_u32(QUIC_MAX_TIMEOUT_MS));
transport_config.max_idle_timeout(Some(timeout));
transport_config.keep_alive_interval(Some(Duration::from_millis(QUIC_KEEP_ALIVE_MS)));
endpoint.set_default_client_config(config);
Self {
endpoint,
connection: Arc::new(Mutex::new(None)),
addr,
stats: Arc::new(ClientStats::default()),
}
}
pub fn stats(&self) -> Option<ConnectionStats> {
let conn_guard = self.connection.lock();
let x = RUNTIME.block_on(conn_guard);
x.as_ref().map(|c| c.connection.stats())
}
// If this function becomes public, it should be changed to
// not expose details of the specific Quic implementation we're using
async fn create_endpoint(config: EndpointConfig, client_socket: UdpSocket) -> Endpoint {
@@ -128,18 +206,35 @@ impl QuicClient {
Ok(())
}
async fn make_connection(&self, stats: &ClientStats) -> Result<Arc<NewConnection>, WriteError> {
let connecting = self.endpoint.connect(self.addr, "connect").unwrap();
stats.total_connections.fetch_add(1, Ordering::Relaxed);
let connecting_result = connecting.await;
if connecting_result.is_err() {
stats.connection_errors.fetch_add(1, Ordering::Relaxed);
}
let connection = connecting_result?;
Ok(Arc::new(connection))
}
// Attempts to send data, connecting/reconnecting as necessary
// On success, returns the connection used to successfully send the data
async fn _send_buffer(
&self,
data: &[u8],
stats: &ClientStats,
) -> Result<Arc<NewConnection>, WriteError> {
let connection = {
let mut conn_guard = self.connection.lock().await;
let maybe_conn = (*conn_guard).clone();
match maybe_conn {
Some(conn) => {
stats.connection_reuse.fetch_add(1, Ordering::Relaxed);
conn.clone()
}
None => {
let connection = self.make_connection(stats).await?;
*conn_guard = Some(connection.clone());
connection
}
@@ -149,8 +244,7 @@ impl QuicClient {
Ok(()) => Ok(connection),
_ => {
let connection = {
let connection = self.make_connection(stats).await?;
let mut conn_guard = self.connection.lock().await;
*conn_guard = Some(connection.clone());
connection
@@ -161,15 +255,19 @@ impl QuicClient {
}
}
pub async fn send_buffer<T>(&self, data: T, stats: &ClientStats) -> Result<(), ClientErrorKind>
where
T: AsRef<[u8]>,
{
self._send_buffer(data.as_ref(), stats).await?;
Ok(())
}
pub async fn send_batch<T>(
&self,
buffers: &[T],
stats: &ClientStats,
) -> Result<(), ClientErrorKind>
where
T: AsRef<[u8]>,
{
@@ -187,7 +285,7 @@ impl QuicClient {
if buffers.is_empty() {
return Ok(());
}
let connection = self._send_buffer(buffers[0].as_ref(), stats).await?;
// Used to avoid dereferencing the Arc multiple times below
// by just getting a reference to the NewConnection once
@@ -197,13 +295,16 @@ impl QuicClient {
.iter()
.chunks(QUIC_MAX_CONCURRENT_STREAMS);
let futures: Vec<_> = chunks
.into_iter()
.map(|buffs| {
join_all(
buffs
.into_iter()
.map(|buf| Self::_send_buffer_using_conn(buf.as_ref(), connection_ref)),
)
})
.collect();
for f in futures {
f.await.into_iter().try_for_each(|res| res)?;


@@ -535,6 +535,11 @@ impl RpcClient {
Self::new_with_timeout(url, timeout)
}
/// Get the configured url of the client's sender
pub fn url(&self) -> String {
self.rpc_client.url()
}
/// Get the configured default [commitment level][cl].
///
/// [cl]: https://docs.solana.com/developing/clients/jsonrpc-api#configuring-state-commitment


@@ -348,7 +348,29 @@ pub struct RpcSimulateTransactionResult {
pub logs: Option<Vec<String>>,
pub accounts: Option<Vec<Option<UiAccount>>>,
pub units_consumed: Option<u64>,
pub return_data: Option<RpcTransactionReturnData>,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct RpcTransactionReturnData {
pub program_id: String,
pub data: (String, ReturnDataEncoding),
}
impl From<TransactionReturnData> for RpcTransactionReturnData {
fn from(return_data: TransactionReturnData) -> Self {
Self {
program_id: return_data.program_id.to_string(),
data: (base64::encode(return_data.data), ReturnDataEncoding::Base64),
}
}
}
#[derive(Serialize, Deserialize, Clone, Copy, Debug, Eq, Hash, PartialEq)]
#[serde(rename_all = "camelCase")]
pub enum ReturnDataEncoding {
Base64,
} }
#[derive(Serialize, Deserialize, Clone, Debug)]
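For reference, the camelCase renames above mean a simulation's return data should serialize roughly as sketched below (self-contained illustration with hypothetical data; assumes `serde`, `serde_json`, and `base64 0.13` as dependencies):

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize)]
    #[serde(rename_all = "camelCase")]
    pub enum ReturnDataEncoding {
        Base64,
    }

    #[derive(Serialize, Deserialize)]
    #[serde(rename_all = "camelCase")]
    pub struct RpcTransactionReturnData {
        pub program_id: String,
        pub data: (String, ReturnDataEncoding),
    }

    fn main() {
        let return_data = RpcTransactionReturnData {
            program_id: "11111111111111111111111111111111".to_string(),
            data: (base64::encode(b"hello"), ReturnDataEncoding::Base64),
        };
        // Prints: {"programId":"11111111111111111111111111111111","data":["aGVsbG8=","base64"]}
        println!("{}", serde_json::to_string(&return_data).unwrap());
    }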


@@ -32,4 +32,5 @@ pub trait RpcSender {
params: serde_json::Value,
) -> Result<serde_json::Value>;
fn get_transport_stats(&self) -> RpcTransportStats;
fn url(&self) -> String;
}


@@ -171,7 +171,7 @@ impl ThinClient {
&self.tpu_addrs[self.optimizer.best()]
}
pub fn rpc_client(&self) -> &RpcClient {
&self.rpc_clients[self.optimizer.best()]
}


@@ -1,6 +1,7 @@
use {
crate::{
client_error::ClientError,
connection_cache::send_wire_transaction_async,
pubsub_client::{PubsubClient, PubsubClientError, PubsubClientSubscription},
rpc_client::RpcClient,
rpc_request::MAX_GET_SIGNATURE_STATUSES_QUERY_ITEMS,
@@ -17,6 +18,7 @@ use {
signature::SignerError,
signers::Signers,
transaction::{Transaction, TransactionError},
transport::{Result as TransportResult, TransportError},
},
std::{
collections::{HashMap, HashSet, VecDeque},
@@ -73,7 +75,7 @@ impl Default for TpuClientConfig {
/// Client which sends transactions directly to the current leader's TPU port over UDP.
/// The client uses RPC to determine the current leader and fetch node contact info
pub struct TpuClient {
_deprecated: UdpSocket, // TpuClient now uses the connection_cache to choose a send_socket
fanout_slots: u64,
leader_tpu_service: LeaderTpuService,
exit: Arc<AtomicBool>,
@@ -85,25 +87,48 @@ impl TpuClient {
/// size
pub fn send_transaction(&self, transaction: &Transaction) -> bool {
let wire_transaction = serialize(transaction).expect("serialization should succeed");
self.send_wire_transaction(wire_transaction)
}
/// Send a wire transaction to the current and upcoming leader TPUs according to fanout size
pub fn send_wire_transaction(&self, wire_transaction: Vec<u8>) -> bool {
self.try_send_wire_transaction(wire_transaction).is_ok()
}
/// Serialize and send transaction to the current and upcoming leader TPUs according to fanout
/// size
/// Returns the last error if all sends fail
pub fn try_send_transaction(&self, transaction: &Transaction) -> TransportResult<()> {
let wire_transaction = serialize(transaction).expect("serialization should succeed");
self.try_send_wire_transaction(wire_transaction)
}
/// Send a wire transaction to the current and upcoming leader TPUs according to fanout size
/// Returns the last error if all sends fail
fn try_send_wire_transaction(&self, wire_transaction: Vec<u8>) -> TransportResult<()> {
let mut last_error: Option<TransportError> = None;
let mut some_success = false;
for tpu_address in self
.leader_tpu_service
.leader_tpu_sockets(self.fanout_slots)
{
let result = send_wire_transaction_async(wire_transaction.clone(), &tpu_address);
if let Err(err) = result {
last_error = Some(err);
} else {
some_success = true;
}
}
if !some_success {
Err(if let Some(err) = last_error {
err
} else {
std::io::Error::new(std::io::ErrorKind::Other, "No sends attempted").into()
})
} else {
Ok(())
}
}
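A brief usage sketch of the new fallible path (hypothetical caller code):

    // Callers that care about why a send failed can use try_send_transaction
    // instead of the boolean send_transaction.
    match tpu_client.try_send_transaction(&transaction) {
        Ok(()) => println!("delivered to at least one leader TPU"),
        Err(err) => eprintln!("every fanout send failed: {}", err),
    }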
/// Create a new client that disconnects when dropped
@@ -117,7 +142,7 @@ impl TpuClient {
LeaderTpuService::new(rpc_client.clone(), websocket_url, exit.clone())?;
Ok(Self {
_deprecated: UdpSocket::bind("0.0.0.0:0").unwrap(),
fanout_slots: config.fanout_slots.min(MAX_FANOUT_SLOTS).max(1),
leader_tpu_service,
exit,
@@ -266,6 +291,10 @@ impl TpuClient {
}
Err(TpuSenderError::Custom("Max retries exceeded".into()))
}
pub fn rpc_client(&self) -> &RpcClient {
&self.rpc_client
}
}
impl Drop for TpuClient {


@@ -1,9 +1,26 @@
use {
rayon::iter::{IntoParallelIterator, ParallelIterator},
solana_metrics::MovingStat,
solana_sdk::{transaction::VersionedTransaction, transport::Result as TransportResult},
std::{
net::{SocketAddr, UdpSocket},
sync::{atomic::AtomicU64, Arc},
},
}; };
#[derive(Default)]
pub struct ClientStats {
pub total_connections: AtomicU64,
pub connection_reuse: AtomicU64,
pub connection_errors: AtomicU64,
// these will be the last values of these stats
pub congestion_events: MovingStat,
pub tx_streams_blocked_uni: MovingStat,
pub tx_data_blocked: MovingStat,
pub tx_acks: MovingStat,
}
pub trait TpuConnection {
fn new(client_socket: UdpSocket, tpu_addr: SocketAddr) -> Self;
@@ -12,29 +29,51 @@ pub trait TpuConnection {
fn serialize_and_send_transaction(
&self,
transaction: &VersionedTransaction,
stats: &ClientStats,
) -> TransportResult<()> {
let wire_transaction =
bincode::serialize(transaction).expect("serialize Transaction in send_batch");
self.send_wire_transaction(&wire_transaction, stats)
}
fn send_wire_transaction<T>(
&self,
wire_transaction: T,
stats: &ClientStats,
) -> TransportResult<()>
where
T: AsRef<[u8]>;
fn send_wire_transaction_async(
&self,
wire_transaction: Vec<u8>,
stats: Arc<ClientStats>,
) -> TransportResult<()>;
fn par_serialize_and_send_transaction_batch(
&self,
transactions: &[VersionedTransaction],
stats: &ClientStats,
) -> TransportResult<()> {
let buffers = transactions
.into_par_iter()
.map(|tx| bincode::serialize(&tx).expect("serialize Transaction in send_batch"))
.collect::<Vec<_>>();
self.send_wire_transaction_batch(&buffers, stats)
}
fn send_wire_transaction_batch<T>(
&self,
buffers: &[T],
stats: &ClientStats,
) -> TransportResult<()>
where
T: AsRef<[u8]>;
fn send_wire_transaction_batch_async(
&self,
buffers: Vec<Vec<u8>>,
stats: Arc<ClientStats>,
) -> TransportResult<()>;
}


@@ -2,11 +2,14 @@
//! an interface for sending transactions
use {
crate::tpu_connection::{ClientStats, TpuConnection},
core::iter::repeat,
solana_sdk::transport::Result as TransportResult,
solana_streamer::sendmmsg::batch_send,
std::{
net::{SocketAddr, UdpSocket},
sync::Arc,
},
};
pub struct UdpTpuConnection {
@@ -26,7 +29,11 @@ impl TpuConnection for UdpTpuConnection {
&self.addr
}
fn send_wire_transaction<T>(
&self,
wire_transaction: T,
_stats: &ClientStats,
) -> TransportResult<()>
where
T: AsRef<[u8]>,
{
@@ -34,7 +41,20 @@ impl TpuConnection for UdpTpuConnection {
Ok(())
}
fn send_wire_transaction_async(
&self,
wire_transaction: Vec<u8>,
_stats: Arc<ClientStats>,
) -> TransportResult<()> {
self.socket.send_to(wire_transaction.as_ref(), self.addr)?;
Ok(())
}
fn send_wire_transaction_batch<T>(
&self,
buffers: &[T],
_stats: &ClientStats,
) -> TransportResult<()>
where
T: AsRef<[u8]>,
{
@@ -42,4 +62,13 @@ impl TpuConnection for UdpTpuConnection {
batch_send(&self.socket, &pkts)?;
Ok(())
}
fn send_wire_transaction_batch_async(
&self,
buffers: Vec<Vec<u8>>,
_stats: Arc<ClientStats>,
) -> TransportResult<()> {
let pkts: Vec<_> = buffers.into_iter().zip(repeat(self.tpu_addr())).collect();
batch_send(&self.socket, &pkts)?;
Ok(())
}
}


@@ -21,12 +21,12 @@ bs58 = "0.4.0"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.5"
dashmap = { version = "4.0.2", features = ["rayon", "raw-api"] }
etcd-client = { version = "0.9.0", features = ["tls"] }
fs_extra = "1.2.0"
histogram = "0.6.9"
itertools = "0.10.3"
log = "0.4.14"
lru = "0.7.5"
rand = "0.7.0"
rand_chacha = "0.2.2"
rayon = "1.5.1"
@@ -49,7 +49,6 @@ solana-perf = { path = "../perf", version = "=1.11.0" }
solana-poh = { path = "../poh", version = "=1.11.0" }
solana-program-runtime = { path = "../program-runtime", version = "=1.11.0" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.11.0" }
solana-rpc = { path = "../rpc", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }


@ -96,7 +96,7 @@ impl AccountsHashVerifier {
fault_injection_rate_slots: u64, fault_injection_rate_slots: u64,
snapshot_config: Option<&SnapshotConfig>, snapshot_config: Option<&SnapshotConfig>,
) { ) {
Self::verify_accounts_package_hash(&accounts_package); let accounts_hash = Self::calculate_and_verify_accounts_hash(&accounts_package);
Self::push_accounts_hashes_to_cluster( Self::push_accounts_hashes_to_cluster(
&accounts_package, &accounts_package,
@ -106,49 +106,63 @@ impl AccountsHashVerifier {
hashes, hashes,
exit, exit,
fault_injection_rate_slots, fault_injection_rate_slots,
accounts_hash,
); );
Self::submit_for_packaging(accounts_package, pending_snapshot_package, snapshot_config); Self::submit_for_packaging(
accounts_package,
pending_snapshot_package,
snapshot_config,
accounts_hash,
);
} }
fn verify_accounts_package_hash(accounts_package: &AccountsPackage) { /// returns calculated accounts hash
fn calculate_and_verify_accounts_hash(accounts_package: &AccountsPackage) -> Hash {
let mut measure_hash = Measure::start("hash"); let mut measure_hash = Measure::start("hash");
if let Some(expected_hash) = accounts_package.accounts_hash_for_testing { let mut sort_time = Measure::start("sort_storages");
let mut sort_time = Measure::start("sort_storages"); let sorted_storages = SortedStorages::new(&accounts_package.snapshot_storages);
let sorted_storages = SortedStorages::new(&accounts_package.snapshot_storages); sort_time.stop();
sort_time.stop();
let mut timings = HashStats { let mut timings = HashStats {
storage_sort_us: sort_time.as_us(), storage_sort_us: sort_time.as_us(),
..HashStats::default() ..HashStats::default()
};
timings.calc_storage_size_quartiles(&accounts_package.snapshot_storages);
let (hash, lamports) = accounts_package
.accounts
.accounts_db
.calculate_accounts_hash_without_index(
&CalcAccountsHashConfig {
use_bg_thread_pool: true,
check_hash: false,
ancestors: None,
use_write_cache: false,
epoch_schedule: &accounts_package.epoch_schedule,
rent_collector: &accounts_package.rent_collector,
},
&sorted_storages,
timings,
)
.unwrap();
assert_eq!(accounts_package.expected_capitalization, lamports);
assert_eq!(expected_hash, hash);
}; };
timings.calc_storage_size_quartiles(&accounts_package.snapshot_storages);
let (accounts_hash, lamports) = accounts_package
.accounts
.accounts_db
.calculate_accounts_hash_without_index(
&CalcAccountsHashConfig {
use_bg_thread_pool: true,
check_hash: false,
ancestors: None,
use_write_cache: false,
epoch_schedule: &accounts_package.epoch_schedule,
rent_collector: &accounts_package.rent_collector,
},
&sorted_storages,
timings,
)
.unwrap();
assert_eq!(accounts_package.expected_capitalization, lamports);
if let Some(expected_hash) = accounts_package.accounts_hash_for_testing {
assert_eq!(expected_hash, accounts_hash);
};
measure_hash.stop(); measure_hash.stop();
solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
accounts_package.snapshot_links.path(),
accounts_package.slot,
&accounts_hash,
);
datapoint_info!( datapoint_info!(
"accounts_hash_verifier", "accounts_hash_verifier",
("calculate_hash", measure_hash.as_us(), i64), ("calculate_hash", measure_hash.as_us(), i64),
); );
accounts_hash
} }
fn push_accounts_hashes_to_cluster( fn push_accounts_hashes_to_cluster(
@ -159,8 +173,8 @@ impl AccountsHashVerifier {
hashes: &mut Vec<(Slot, Hash)>, hashes: &mut Vec<(Slot, Hash)>,
exit: &Arc<AtomicBool>, exit: &Arc<AtomicBool>,
fault_injection_rate_slots: u64, fault_injection_rate_slots: u64,
accounts_hash: Hash,
) { ) {
let hash = accounts_package.accounts_hash;
if fault_injection_rate_slots != 0 if fault_injection_rate_slots != 0
&& accounts_package.slot % fault_injection_rate_slots == 0 && accounts_package.slot % fault_injection_rate_slots == 0
{ {
@ -171,10 +185,10 @@ impl AccountsHashVerifier {
}; };
warn!("inserting fault at slot: {}", accounts_package.slot); warn!("inserting fault at slot: {}", accounts_package.slot);
let rand = thread_rng().gen_range(0, 10); let rand = thread_rng().gen_range(0, 10);
let hash = extend_and_hash(&hash, &[rand]); let hash = extend_and_hash(&accounts_hash, &[rand]);
hashes.push((accounts_package.slot, hash)); hashes.push((accounts_package.slot, hash));
} else { } else {
hashes.push((accounts_package.slot, hash)); hashes.push((accounts_package.slot, accounts_hash));
} }
while hashes.len() > MAX_SNAPSHOT_HASHES { while hashes.len() > MAX_SNAPSHOT_HASHES {
@ -198,6 +212,7 @@ impl AccountsHashVerifier {
accounts_package: AccountsPackage, accounts_package: AccountsPackage,
pending_snapshot_package: Option<&PendingSnapshotPackage>, pending_snapshot_package: Option<&PendingSnapshotPackage>,
snapshot_config: Option<&SnapshotConfig>, snapshot_config: Option<&SnapshotConfig>,
accounts_hash: Hash,
) { ) {
if accounts_package.snapshot_type.is_none() if accounts_package.snapshot_type.is_none()
|| pending_snapshot_package.is_none() || pending_snapshot_package.is_none()
@ -206,7 +221,7 @@ impl AccountsHashVerifier {
return; return;
}; };
let snapshot_package = SnapshotPackage::from(accounts_package); let snapshot_package = SnapshotPackage::new(accounts_package, accounts_hash);
let pending_snapshot_package = pending_snapshot_package.unwrap(); let pending_snapshot_package = pending_snapshot_package.unwrap();
let _snapshot_config = snapshot_config.unwrap(); let _snapshot_config = snapshot_config.unwrap();
@ -295,6 +310,7 @@ mod tests {
sysvar::epoch_schedule::EpochSchedule, sysvar::epoch_schedule::EpochSchedule,
}, },
solana_streamer::socket::SocketAddrSpace, solana_streamer::socket::SocketAddrSpace,
std::str::FromStr,
}; };
fn new_test_cluster_info(contact_info: ContactInfo) -> ClusterInfo { fn new_test_cluster_info(contact_info: ContactInfo) -> ClusterInfo {
@ -358,6 +374,7 @@ mod tests {
..SnapshotConfig::default() ..SnapshotConfig::default()
}; };
let accounts = Arc::new(solana_runtime::accounts::Accounts::default_for_tests()); let accounts = Arc::new(solana_runtime::accounts::Accounts::default_for_tests());
let expected_hash = Hash::from_str("GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn").unwrap();
for i in 0..MAX_SNAPSHOT_HASHES + 1 { for i in 0..MAX_SNAPSHOT_HASHES + 1 {
let accounts_package = AccountsPackage { let accounts_package = AccountsPackage {
slot: full_snapshot_archive_interval_slots + i as u64, slot: full_snapshot_archive_interval_slots + i as u64,
@ -365,7 +382,6 @@ mod tests {
slot_deltas: vec![], slot_deltas: vec![],
snapshot_links: TempDir::new().unwrap(), snapshot_links: TempDir::new().unwrap(),
snapshot_storages: vec![], snapshot_storages: vec![],
accounts_hash: hash(&[i as u8]),
archive_format: ArchiveFormat::TarBzip2, archive_format: ArchiveFormat::TarBzip2,
snapshot_version: SnapshotVersion::default(), snapshot_version: SnapshotVersion::default(),
snapshot_archives_dir: PathBuf::default(), snapshot_archives_dir: PathBuf::default(),
@ -403,13 +419,13 @@ mod tests {
assert_eq!(cluster_hashes.len(), MAX_SNAPSHOT_HASHES); assert_eq!(cluster_hashes.len(), MAX_SNAPSHOT_HASHES);
assert_eq!( assert_eq!(
cluster_hashes[0], cluster_hashes[0],
(full_snapshot_archive_interval_slots + 1, hash(&[1])) (full_snapshot_archive_interval_slots + 1, expected_hash)
); );
assert_eq!( assert_eq!(
cluster_hashes[MAX_SNAPSHOT_HASHES - 1], cluster_hashes[MAX_SNAPSHOT_HASHES - 1],
( (
full_snapshot_archive_interval_slots + MAX_SNAPSHOT_HASHES as u64, full_snapshot_archive_interval_slots + MAX_SNAPSHOT_HASHES as u64,
hash(&[MAX_SNAPSHOT_HASHES as u8]) expected_hash
) )
); );
} }


@ -23,7 +23,7 @@ use {
solana_perf::{ solana_perf::{
cuda_runtime::PinnedVec, cuda_runtime::PinnedVec,
data_budget::DataBudget, data_budget::DataBudget,
packet::{limited_deserialize, Packet, PacketBatch, PACKETS_PER_BATCH}, packet::{Packet, PacketBatch, PACKETS_PER_BATCH},
perf_libs, perf_libs,
}, },
solana_poh::poh_recorder::{BankStart, PohRecorder, PohRecorderError, TransactionRecorder}, solana_poh::poh_recorder::{BankStart, PohRecorder, PohRecorderError, TransactionRecorder},
@ -45,13 +45,10 @@ use {
MAX_TRANSACTION_FORWARDING_DELAY_GPU, MAX_TRANSACTION_FORWARDING_DELAY_GPU,
}, },
feature_set, feature_set,
message::Message,
pubkey::Pubkey, pubkey::Pubkey,
saturating_add_assign, saturating_add_assign,
timing::{duration_as_ms, timestamp, AtomicInterval}, timing::{duration_as_ms, timestamp, AtomicInterval},
transaction::{ transaction::{self, AddressLoader, SanitizedTransaction, TransactionError},
self, AddressLoader, SanitizedTransaction, TransactionError, VersionedTransaction,
},
transport::TransportError, transport::TransportError,
}, },
solana_transaction_status::token_balances::{ solana_transaction_status::token_balances::{
@ -558,7 +555,6 @@ impl BankingStage {
let mut reached_end_of_slot: Option<EndOfSlot> = None; let mut reached_end_of_slot: Option<EndOfSlot> = None;
RetainMut::retain_mut(buffered_packet_batches, |deserialized_packet_batch| { RetainMut::retain_mut(buffered_packet_batches, |deserialized_packet_batch| {
let packet_batch = &deserialized_packet_batch.packet_batch;
let original_unprocessed_indexes = deserialized_packet_batch let original_unprocessed_indexes = deserialized_packet_batch
.unprocessed_packets .unprocessed_packets
.keys() .keys()
@ -572,8 +568,7 @@ impl BankingStage {
let should_retain = if let Some(bank) = &end_of_slot.working_bank { let should_retain = if let Some(bank) = &end_of_slot.working_bank {
let new_unprocessed_indexes = Self::filter_unprocessed_packets_at_end_of_slot( let new_unprocessed_indexes = Self::filter_unprocessed_packets_at_end_of_slot(
bank, bank,
packet_batch, &deserialized_packet_batch.unprocessed_packets,
&original_unprocessed_indexes,
my_pubkey, my_pubkey,
end_of_slot.next_slot_leader, end_of_slot.next_slot_leader,
banking_stage_stats, banking_stage_stats,
@ -624,8 +619,7 @@ impl BankingStage {
&working_bank, &working_bank,
&bank_creation_time, &bank_creation_time,
recorder, recorder,
packet_batch, &deserialized_packet_batch.unprocessed_packets,
original_unprocessed_indexes.to_owned(),
transaction_status_sender.clone(), transaction_status_sender.clone(),
gossip_vote_sender, gossip_vote_sender,
banking_stage_stats, banking_stage_stats,
@ -1351,38 +1345,28 @@ impl BankingStage {
gossip_vote_sender: &ReplayVoteSender, gossip_vote_sender: &ReplayVoteSender,
qos_service: &QosService, qos_service: &QosService,
) -> ProcessTransactionBatchOutput { ) -> ProcessTransactionBatchOutput {
let ((transactions_qos_results, cost_model_throttled_transactions_count), cost_model_time) = let mut cost_model_time = Measure::start("cost_model");
Measure::this(
|_| {
let tx_costs = qos_service.compute_transaction_costs(txs.iter());
let (transactions_qos_results, num_included) = let transaction_costs = qos_service.compute_transaction_costs(txs.iter());
qos_service.select_transactions_per_cost(txs.iter(), tx_costs.iter(), bank);
let cost_model_throttled_transactions_count = let (transactions_qos_results, num_included) =
txs.len().saturating_sub(num_included); qos_service.select_transactions_per_cost(txs.iter(), transaction_costs.iter(), bank);
qos_service.accumulate_estimated_transaction_costs( let cost_model_throttled_transactions_count = txs.len().saturating_sub(num_included);
&Self::accumulate_batched_transaction_costs(
tx_costs.iter(), qos_service.accumulate_estimated_transaction_costs(
transactions_qos_results.iter(), &Self::accumulate_batched_transaction_costs(
), transaction_costs.iter(),
); transactions_qos_results.iter(),
( ),
transactions_qos_results, );
cost_model_throttled_transactions_count, cost_model_time.stop();
)
},
(),
"cost_model",
);
// Only lock accounts for those transactions that are selected for the block; // Only lock accounts for those transactions that are selected for the block;
// Once accounts are locked, other threads cannot encode transactions that will modify the // Once accounts are locked, other threads cannot encode transactions that will modify the
// same account state // same account state
let mut lock_time = Measure::start("lock_time"); let mut lock_time = Measure::start("lock_time");
let batch = let batch = bank.prepare_sanitized_batch_with_results(txs, transactions_qos_results.iter());
bank.prepare_sanitized_batch_with_results(txs, transactions_qos_results.into_iter());
lock_time.stop(); lock_time.stop();
// retryable_txs includes AccountInUse, WouldExceedMaxBlockCostLimit // retryable_txs includes AccountInUse, WouldExceedMaxBlockCostLimit
@ -1397,21 +1381,31 @@ impl BankingStage {
gossip_vote_sender, gossip_vote_sender,
); );
let mut unlock_time = Measure::start("unlock_time");
// Once the accounts are unlocked, new transactions can enter the pipeline to process them
drop(batch);
unlock_time.stop();
let ExecuteAndCommitTransactionsOutput { let ExecuteAndCommitTransactionsOutput {
ref mut retryable_transaction_indexes, ref mut retryable_transaction_indexes,
ref execute_and_commit_timings, ref execute_and_commit_timings,
.. ..
} = execute_and_commit_transactions_output; } = execute_and_commit_transactions_output;
// TODO: This does not revert the cost tracker changes from all unexecuted transactions
// yet: for example, txs that are too old will not be included in the block, but are not
// retryable.
QosService::update_or_remove_transaction_costs(
transaction_costs.iter(),
transactions_qos_results.iter(),
retryable_transaction_indexes,
bank,
);
retryable_transaction_indexes retryable_transaction_indexes
.iter_mut() .iter_mut()
.for_each(|x| *x += chunk_offset); .for_each(|x| *x += chunk_offset);
let mut unlock_time = Measure::start("unlock_time");
// Once the accounts are unlocked, new transactions can enter the pipeline to process them
drop(batch);
unlock_time.stop();
let (cu, us) = let (cu, us) =
Self::accumulate_execute_units_and_time(&execute_and_commit_timings.execute_timings); Self::accumulate_execute_units_and_time(&execute_and_commit_timings.execute_timings);
qos_service.accumulate_actual_execute_cu(cu); qos_service.accumulate_actual_execute_cu(cu);
@ -1685,32 +1679,27 @@ impl BankingStage {
// with their packet indexes. // with their packet indexes.
#[allow(clippy::needless_collect)] #[allow(clippy::needless_collect)]
fn transactions_from_packets( fn transactions_from_packets(
packet_batch: &PacketBatch, deserialized_packet_batch: &HashMap<usize, DeserializedPacket>,
transaction_indexes: &[usize],
feature_set: &Arc<feature_set::FeatureSet>, feature_set: &Arc<feature_set::FeatureSet>,
votes_only: bool, votes_only: bool,
address_loader: impl AddressLoader, address_loader: impl AddressLoader,
) -> (Vec<SanitizedTransaction>, Vec<usize>) { ) -> (Vec<SanitizedTransaction>, Vec<usize>) {
transaction_indexes deserialized_packet_batch
.iter() .iter()
.filter_map(|tx_index| { .filter_map(|(&tx_index, deserialized_packet)| {
let p = &packet_batch.packets[*tx_index]; if votes_only && !deserialized_packet.is_simple_vote {
if votes_only && !p.meta.is_simple_vote_tx() {
return None; return None;
} }
let tx: VersionedTransaction = limited_deserialize(&p.data[0..p.meta.size]).ok()?;
let message_bytes = DeserializedPacketBatch::packet_message(p)?;
let message_hash = Message::hash_raw_message(message_bytes);
let tx = SanitizedTransaction::try_create( let tx = SanitizedTransaction::try_create(
tx, deserialized_packet.versioned_transaction.clone(),
message_hash, deserialized_packet.message_hash,
Some(p.meta.is_simple_vote_tx()), Some(deserialized_packet.is_simple_vote),
address_loader.clone(), address_loader.clone(),
) )
.ok()?; .ok()?;
tx.verify_precompiles(feature_set).ok()?; tx.verify_precompiles(feature_set).ok()?;
Some((tx, *tx_index)) Some((tx, tx_index))
}) })
.unzip() .unzip()
} }
@ -1759,8 +1748,7 @@ impl BankingStage {
bank: &Arc<Bank>, bank: &Arc<Bank>,
bank_creation_time: &Instant, bank_creation_time: &Instant,
poh: &TransactionRecorder, poh: &TransactionRecorder,
packet_batch: &PacketBatch, deserialized_packet_batch: &HashMap<usize, DeserializedPacket>,
packet_indexes: Vec<usize>,
transaction_status_sender: Option<TransactionStatusSender>, transaction_status_sender: Option<TransactionStatusSender>,
gossip_vote_sender: &ReplayVoteSender, gossip_vote_sender: &ReplayVoteSender,
banking_stage_stats: &BankingStageStats, banking_stage_stats: &BankingStageStats,
@ -1771,8 +1759,7 @@ impl BankingStage {
let ((transactions, transaction_to_packet_indexes), packet_conversion_time) = Measure::this( let ((transactions, transaction_to_packet_indexes), packet_conversion_time) = Measure::this(
|_| { |_| {
Self::transactions_from_packets( Self::transactions_from_packets(
packet_batch, deserialized_packet_batch,
&packet_indexes,
&bank.feature_set, &bank.feature_set,
bank.vote_only_bank(), bank.vote_only_bank(),
bank.as_ref(), bank.as_ref(),
@ -1860,8 +1847,7 @@ impl BankingStage {
fn filter_unprocessed_packets_at_end_of_slot( fn filter_unprocessed_packets_at_end_of_slot(
bank: &Arc<Bank>, bank: &Arc<Bank>,
packet_batch: &PacketBatch, deserialized_packet_batch: &HashMap<usize, DeserializedPacket>,
transaction_indexes: &[usize],
my_pubkey: &Pubkey, my_pubkey: &Pubkey,
next_leader: Option<Pubkey>, next_leader: Option<Pubkey>,
banking_stage_stats: &BankingStageStats, banking_stage_stats: &BankingStageStats,
@ -1871,15 +1857,17 @@ impl BankingStage {
// Filtering helps if we were going to forward the packets to some other node // Filtering helps if we were going to forward the packets to some other node
if let Some(leader) = next_leader { if let Some(leader) = next_leader {
if leader == *my_pubkey { if leader == *my_pubkey {
return transaction_indexes.to_vec(); return deserialized_packet_batch
.keys()
.cloned()
.collect::<Vec<usize>>();
} }
} }
let mut unprocessed_packet_conversion_time = let mut unprocessed_packet_conversion_time =
Measure::start("unprocessed_packet_conversion"); Measure::start("unprocessed_packet_conversion");
let (transactions, transaction_to_packet_indexes) = Self::transactions_from_packets( let (transactions, transaction_to_packet_indexes) = Self::transactions_from_packets(
packet_batch, deserialized_packet_batch,
transaction_indexes,
&bank.feature_set, &bank.feature_set,
bank.vote_only_bank(), bank.vote_only_bank(),
bank.as_ref(), bank.as_ref(),
@ -2113,7 +2101,7 @@ mod tests {
get_tmp_ledger_path_auto_delete, get_tmp_ledger_path_auto_delete,
leader_schedule_cache::LeaderScheduleCache, leader_schedule_cache::LeaderScheduleCache,
}, },
solana_perf::packet::{to_packet_batches, PacketFlags}, solana_perf::packet::{limited_deserialize, to_packet_batches, PacketFlags},
solana_poh::{ solana_poh::{
poh_recorder::{create_test_recorder, Record, WorkingBankEntry}, poh_recorder::{create_test_recorder, Record, WorkingBankEntry},
poh_service::PohService, poh_service::PohService,
@ -2133,7 +2121,10 @@ mod tests {
signature::{Keypair, Signer}, signature::{Keypair, Signer},
system_instruction::SystemError, system_instruction::SystemError,
system_transaction, system_transaction,
transaction::{MessageHash, SimpleAddressLoader, Transaction, TransactionError}, transaction::{
MessageHash, SimpleAddressLoader, Transaction, TransactionError,
VersionedTransaction,
},
}, },
solana_streamer::{recvmmsg::recv_mmsg, socket::SocketAddrSpace}, solana_streamer::{recvmmsg::recv_mmsg, socket::SocketAddrSpace},
solana_transaction_status::{TransactionStatusMeta, VersionedTransactionWithStatusMeta}, solana_transaction_status::{TransactionStatusMeta, VersionedTransactionWithStatusMeta},
@ -2893,6 +2884,131 @@ mod tests {
Blockstore::destroy(ledger_path.path()).unwrap(); Blockstore::destroy(ledger_path.path()).unwrap();
} }
#[test]
fn test_bank_process_and_record_transactions_cost_tracker() {
solana_logger::setup();
let GenesisConfigInfo {
genesis_config,
mint_keypair,
..
} = create_slow_genesis_config(10_000);
let bank = Arc::new(Bank::new_no_wallclock_throttle_for_tests(&genesis_config));
let pubkey = solana_sdk::pubkey::new_rand();
let ledger_path = get_tmp_ledger_path_auto_delete!();
{
let blockstore = Blockstore::open(ledger_path.path())
.expect("Expected to be able to open database ledger");
let (poh_recorder, _entry_receiver, record_receiver) = PohRecorder::new(
bank.tick_height(),
bank.last_blockhash(),
bank.clone(),
Some((4, 4)),
bank.ticks_per_slot(),
&pubkey,
&Arc::new(blockstore),
&Arc::new(LeaderScheduleCache::new_from_bank(&bank)),
&Arc::new(PohConfig::default()),
Arc::new(AtomicBool::default()),
);
let recorder = poh_recorder.recorder();
let poh_recorder = Arc::new(Mutex::new(poh_recorder));
let poh_simulator = simulate_poh(record_receiver, &poh_recorder);
poh_recorder.lock().unwrap().set_bank(&bank);
let (gossip_vote_sender, _gossip_vote_receiver) = unbounded();
let qos_service = QosService::new(Arc::new(RwLock::new(CostModel::default())), 1);
let get_block_cost = || bank.read_cost_tracker().unwrap().block_cost();
let get_tx_count = || bank.read_cost_tracker().unwrap().transaction_count();
assert_eq!(get_block_cost(), 0);
assert_eq!(get_tx_count(), 0);
//
// TEST: cost tracker's block cost increases when successfully processing a tx
//
let transactions = sanitize_transactions(vec![system_transaction::transfer(
&mint_keypair,
&pubkey,
1,
genesis_config.hash(),
)]);
let process_transactions_batch_output = BankingStage::process_and_record_transactions(
&bank,
&transactions,
&recorder,
0,
None,
&gossip_vote_sender,
&qos_service,
);
let ExecuteAndCommitTransactionsOutput {
executed_with_successful_result_count,
commit_transactions_result,
..
} = process_transactions_batch_output.execute_and_commit_transactions_output;
assert_eq!(executed_with_successful_result_count, 1);
assert!(commit_transactions_result.is_ok());
let single_transfer_cost = get_block_cost();
assert_ne!(single_transfer_cost, 0);
assert_eq!(get_tx_count(), 1);
//
// TEST: When a tx in a batch can't be executed (here because of account
// locks), then its cost does not affect the cost tracker.
//
let allocate_keypair = Keypair::new();
let transactions = sanitize_transactions(vec![
system_transaction::transfer(&mint_keypair, &pubkey, 2, genesis_config.hash()),
// intentionally use a tx that has a different cost
system_transaction::allocate(
&mint_keypair,
&allocate_keypair,
genesis_config.hash(),
1,
),
]);
let process_transactions_batch_output = BankingStage::process_and_record_transactions(
&bank,
&transactions,
&recorder,
0,
None,
&gossip_vote_sender,
&qos_service,
);
let ExecuteAndCommitTransactionsOutput {
executed_with_successful_result_count,
commit_transactions_result,
retryable_transaction_indexes,
..
} = process_transactions_batch_output.execute_and_commit_transactions_output;
assert_eq!(executed_with_successful_result_count, 1);
assert!(commit_transactions_result.is_ok());
assert_eq!(retryable_transaction_indexes, vec![1]);
assert_eq!(get_block_cost(), 2 * single_transfer_cost);
assert_eq!(get_tx_count(), 2);
poh_recorder
.lock()
.unwrap()
.is_exited
.store(true, Ordering::Relaxed);
let _ = poh_simulator.join();
}
Blockstore::destroy(ledger_path.path()).unwrap();
}
fn simulate_poh( fn simulate_poh(
record_receiver: CrossbeamReceiver<Record>, record_receiver: CrossbeamReceiver<Record>,
poh_recorder: &Arc<Mutex<PohRecorder>>, poh_recorder: &Arc<Mutex<PohRecorder>>,
@ -4115,7 +4231,7 @@ mod tests {
fn make_test_packets( fn make_test_packets(
transactions: Vec<Transaction>, transactions: Vec<Transaction>,
vote_indexes: Vec<usize>, vote_indexes: Vec<usize>,
) -> (PacketBatch, Vec<usize>) { ) -> DeserializedPacketBatch {
let capacity = transactions.len(); let capacity = transactions.len();
let mut packet_batch = PacketBatch::with_capacity(capacity); let mut packet_batch = PacketBatch::with_capacity(capacity);
let mut packet_indexes = Vec::with_capacity(capacity); let mut packet_indexes = Vec::with_capacity(capacity);
@ -4127,7 +4243,7 @@ mod tests {
for index in vote_indexes.iter() { for index in vote_indexes.iter() {
packet_batch.packets[*index].meta.flags |= PacketFlags::SIMPLE_VOTE_TX; packet_batch.packets[*index].meta.flags |= PacketFlags::SIMPLE_VOTE_TX;
} }
(packet_batch, packet_indexes) DeserializedPacketBatch::new(packet_batch, packet_indexes, false)
} }
#[test] #[test]
@ -4145,28 +4261,30 @@ mod tests {
&keypair, &keypair,
None, None,
); );
let sorted = |mut v: Vec<usize>| {
v.sort_unstable();
v
};
// packets with no votes // packets with no votes
{ {
let vote_indexes = vec![]; let vote_indexes = vec![];
let (packet_batch, packet_indexes) = let deserialized_packet_batch =
make_test_packets(vec![transfer_tx.clone(), transfer_tx.clone()], vote_indexes); make_test_packets(vec![transfer_tx.clone(), transfer_tx.clone()], vote_indexes);
let mut votes_only = false; let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packet_batch, &deserialized_packet_batch.unprocessed_packets,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(2, txs.len()); assert_eq!(2, txs.len());
assert_eq!(vec![0, 1], tx_packet_index); assert_eq!(vec![0, 1], sorted(tx_packet_index));
votes_only = true; votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packet_batch, &deserialized_packet_batch.unprocessed_packets,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
@ -4178,63 +4296,59 @@ mod tests {
// packets with some votes // packets with some votes
{ {
let vote_indexes = vec![0, 2]; let vote_indexes = vec![0, 2];
let (packet_batch, packet_indexes) = make_test_packets( let deserialized_packet_batch = make_test_packets(
vec![vote_tx.clone(), transfer_tx, vote_tx.clone()], vec![vote_tx.clone(), transfer_tx, vote_tx.clone()],
vote_indexes, vote_indexes,
); );
let mut votes_only = false; let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packet_batch, &deserialized_packet_batch.unprocessed_packets,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(3, txs.len()); assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], tx_packet_index); assert_eq!(vec![0, 1, 2], sorted(tx_packet_index));
votes_only = true; votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packet_batch, &deserialized_packet_batch.unprocessed_packets,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(2, txs.len()); assert_eq!(2, txs.len());
assert_eq!(vec![0, 2], tx_packet_index); assert_eq!(vec![0, 2], sorted(tx_packet_index));
} }
// packets with all votes // packets with all votes
{ {
let vote_indexes = vec![0, 1, 2]; let vote_indexes = vec![0, 1, 2];
let (packet_batch, packet_indexes) = make_test_packets( let deserialized_packet_batch = make_test_packets(
vec![vote_tx.clone(), vote_tx.clone(), vote_tx], vec![vote_tx.clone(), vote_tx.clone(), vote_tx],
vote_indexes, vote_indexes,
); );
let mut votes_only = false; let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packet_batch, &deserialized_packet_batch.unprocessed_packets,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(3, txs.len()); assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], tx_packet_index); assert_eq!(vec![0, 1, 2], sorted(tx_packet_index));
votes_only = true; votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets( let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packet_batch, &deserialized_packet_batch.unprocessed_packets,
&packet_indexes,
&Arc::new(FeatureSet::default()), &Arc::new(FeatureSet::default()),
votes_only, votes_only,
SimpleAddressLoader::Disabled, SimpleAddressLoader::Disabled,
); );
assert_eq!(3, txs.len()); assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], tx_packet_index); assert_eq!(vec![0, 1, 2], sorted(tx_packet_index));
} }
} }


@ -24,7 +24,6 @@ impl LeaderExecuteAndCommitTimings {
saturating_add_assign!(self.record_us, other.record_us); saturating_add_assign!(self.record_us, other.record_us);
saturating_add_assign!(self.commit_us, other.commit_us); saturating_add_assign!(self.commit_us, other.commit_us);
saturating_add_assign!(self.find_and_send_votes_us, other.find_and_send_votes_us); saturating_add_assign!(self.find_and_send_votes_us, other.find_and_send_votes_us);
saturating_add_assign!(self.commit_us, other.commit_us);
self.record_transactions_timings self.record_transactions_timings
.accumulate(&other.record_transactions_timings); .accumulate(&other.record_transactions_timings);
self.execute_timings.accumulate(&other.execute_timings); self.execute_timings.accumulate(&other.execute_timings);


@ -1,4 +1,8 @@
//! The `ledger_cleanup_service` drops older ledger data to limit disk space usage //! The `ledger_cleanup_service` drops older ledger data to limit disk space usage.
//! The service works by counting the number of live data shreds in the ledger; this
//! can be done quickly and should have a fairly stable correlation to actual bytes.
//! Once the shred count (and thus roughly the byte count) reaches a threshold,
//! the service begins removing data in FIFO order.
use { use {
crossbeam_channel::{Receiver, RecvTimeoutError}, crossbeam_channel::{Receiver, RecvTimeoutError},
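The new doc comment describes the cleanup policy: count live data shreds (cheap, and roughly proportional to bytes on disk), then purge oldest-first once the count crosses a threshold. A hypothetical sketch of that policy in isolation; Ledger, enforce_limit, and the per-slot bookkeeping are illustrative names only:

use std::collections::VecDeque;

struct Ledger {
    shreds_per_slot: VecDeque<(u64, usize)>, // (slot, live shred count), oldest first
}

impl Ledger {
    fn live_shreds(&self) -> usize {
        self.shreds_per_slot.iter().map(|(_, n)| n).sum()
    }

    fn enforce_limit(&mut self, max_shreds: usize) {
        // Shred count stands in for bytes, so compare counts and drop the
        // oldest slots first (FIFO) until we are back under the threshold.
        while self.live_shreds() > max_shreds {
            self.shreds_per_slot.pop_front();
        }
    }
}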


@ -59,6 +59,7 @@ pub mod sigverify;
pub mod sigverify_shreds; pub mod sigverify_shreds;
pub mod sigverify_stage; pub mod sigverify_stage;
pub mod snapshot_packager_service; pub mod snapshot_packager_service;
pub mod staked_nodes_updater_service;
pub mod stats_reporter_service; pub mod stats_reporter_service;
pub mod system_monitor_service; pub mod system_monitor_service;
mod tower1_7_14; mod tower1_7_14;
@ -73,6 +74,7 @@ pub mod verified_vote_packets;
pub mod vote_simulator; pub mod vote_simulator;
pub mod vote_stake_tracker; pub mod vote_stake_tracker;
pub mod voting_service; pub mod voting_service;
pub mod warm_quic_cache_service;
pub mod window_service; pub mod window_service;
#[macro_use] #[macro_use]


@ -133,7 +133,7 @@ impl QosService {
let mut num_included = 0; let mut num_included = 0;
let select_results = transactions let select_results = transactions
.zip(transactions_costs) .zip(transactions_costs)
.map(|(tx, cost)| match cost_tracker.try_add(tx, cost) { .map(|(tx, cost)| match cost_tracker.try_add(cost) {
Ok(current_block_cost) => { Ok(current_block_cost) => {
debug!("slot {:?}, transaction {:?}, cost {:?}, fit into current block, current block cost {}", bank.slot(), tx, cost, current_block_cost); debug!("slot {:?}, transaction {:?}, cost {:?}, fit into current block, current block cost {}", bank.slot(), tx, cost, current_block_cost);
self.metrics.stats.selected_txs_count.fetch_add(1, Ordering::Relaxed); self.metrics.stats.selected_txs_count.fetch_add(1, Ordering::Relaxed);
@ -170,6 +170,35 @@ impl QosService {
(select_results, num_included) (select_results, num_included)
} }
/// Update the transaction cost in the cost_tracker with the real cost for
/// transactions that were executed successfully;
/// Otherwise remove the cost from the cost tracker, preventing the cost_tracker
/// from being inflated with unsuccessfully executed transactions.
pub fn update_or_remove_transaction_costs<'a>(
transaction_costs: impl Iterator<Item = &'a TransactionCost>,
transaction_qos_results: impl Iterator<Item = &'a transaction::Result<()>>,
retryable_transaction_indexes: &[usize],
bank: &Arc<Bank>,
) {
let mut cost_tracker = bank.write_cost_tracker().unwrap();
transaction_costs
.zip(transaction_qos_results)
.enumerate()
.for_each(|(index, (tx_cost, qos_inclusion_result))| {
// Only transactions that the qos service included have been added to the
// cost tracker.
if qos_inclusion_result.is_ok() && retryable_transaction_indexes.contains(&index) {
cost_tracker.remove(tx_cost);
} else {
// TODO: Update the cost tracker with the actual execution compute units.
// Will have to plumb it in next; For now, keep estimated costs.
//
// let actual_execution_cost = 0;
// cost_tracker.update_execution_cost(tx_cost, actual_execution_cost);
}
});
}
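The rule this new function implements: a cost was only ever added to the tracker for transactions the QoS selection admitted (Ok), and it should be taken back out only when that transaction ended up retryable instead of committed. A simplified mirror of that decision, with a hypothetical CostTracker rather than the runtime's real one:

// Hypothetical stand-in for the runtime's cost tracker.
struct CostTracker {
    block_cost: u64,
}

impl CostTracker {
    fn remove(&mut self, cost: u64) {
        self.block_cost -= cost;
    }
}

fn rollback_unexecuted(
    tracker: &mut CostTracker,
    costs: &[u64],                  // estimated cost per transaction
    qos_results: &[Result<(), ()>], // Ok(()) means the cost was added at selection time
    retryable: &[usize],            // indexes that were not committed to the block
) {
    for (index, (cost, qos)) in costs.iter().zip(qos_results).enumerate() {
        if qos.is_ok() && retryable.contains(&index) {
            tracker.remove(*cost); // un-count txs that never made it into the block
        }
        // else: committed txs keep their estimated cost for now (see the TODO above).
    }
}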
// metrics are reported by bank slot // metrics are reported by bank slot
pub fn report_metrics(&self, bank: Arc<Bank>) { pub fn report_metrics(&self, bank: Arc<Bank>) {
self.report_sender self.report_sender


@ -349,6 +349,7 @@ mod tests {
snapshots_dir, snapshots_dir,
accounts_dir, accounts_dir,
archive_format, archive_format,
snapshot_utils::VerifyBank::Deterministic,
); );
} }
} }


@ -0,0 +1,76 @@
use {
solana_gossip::cluster_info::ClusterInfo,
solana_runtime::bank_forks::BankForks,
std::{
collections::HashMap,
net::IpAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
},
thread::{self, sleep, Builder, JoinHandle},
time::{Duration, Instant},
},
};
const IP_TO_STAKE_REFRESH_DURATION: Duration = Duration::from_secs(5);
pub struct StakedNodesUpdaterService {
thread_hdl: JoinHandle<()>,
}
impl StakedNodesUpdaterService {
pub fn new(
exit: Arc<AtomicBool>,
cluster_info: Arc<ClusterInfo>,
bank_forks: Arc<RwLock<BankForks>>,
shared_staked_nodes: Arc<RwLock<HashMap<IpAddr, u64>>>,
) -> Self {
let thread_hdl = Builder::new()
.name("sol-sn-updater".to_string())
.spawn(move || {
let mut last_stakes = Instant::now();
while !exit.load(Ordering::Relaxed) {
let mut new_ip_to_stake = HashMap::new();
Self::try_refresh_ip_to_stake(
&mut last_stakes,
&mut new_ip_to_stake,
&bank_forks,
&cluster_info,
);
let mut shared = shared_staked_nodes.write().unwrap();
*shared = new_ip_to_stake;
}
})
.unwrap();
Self { thread_hdl }
}
fn try_refresh_ip_to_stake(
last_stakes: &mut Instant,
ip_to_stake: &mut HashMap<IpAddr, u64>,
bank_forks: &RwLock<BankForks>,
cluster_info: &ClusterInfo,
) {
if last_stakes.elapsed() > IP_TO_STAKE_REFRESH_DURATION {
let root_bank = bank_forks.read().unwrap().root_bank();
let staked_nodes = root_bank.staked_nodes();
*ip_to_stake = cluster_info
.tvu_peers()
.into_iter()
.filter_map(|node| {
let stake = staked_nodes.get(&node.id)?;
Some((node.tvu.ip(), *stake))
})
.collect();
*last_stakes = Instant::now();
} else {
sleep(Duration::from_millis(1));
}
}
pub fn join(self) -> thread::Result<()> {
self.thread_hdl.join()
}
}
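The service's entire contract with the rest of the TPU is the shared map: it periodically rebuilds a HashMap<IpAddr, u64> behind an RwLock that the QUIC server reads per connection. A minimal sketch of that handoff pattern in isolation (the writer stands in for the updater thread, the reader for the QUIC server):

use std::{
    collections::HashMap,
    net::IpAddr,
    sync::{Arc, RwLock},
};

fn main() {
    let staked_nodes: Arc<RwLock<HashMap<IpAddr, u64>>> = Arc::new(RwLock::new(HashMap::new()));

    // Writer side: swap in a freshly built snapshot, as the refresh loop does.
    let fresh: HashMap<IpAddr, u64> = HashMap::new(); // would be built from root-bank stakes
    *staked_nodes.write().unwrap() = fresh;

    // Reader side: look up a peer's stake by IP, defaulting to unstaked.
    let peer: IpAddr = "127.0.0.1".parse().unwrap();
    let stake = staked_nodes.read().unwrap().get(&peer).copied().unwrap_or(0);
    assert_eq!(stake, 0);
}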


@ -11,7 +11,6 @@ use {
fs::{self, File}, fs::{self, File},
io::{self, BufReader}, io::{self, BufReader},
path::PathBuf, path::PathBuf,
sync::RwLock,
}, },
}; };
@ -214,7 +213,7 @@ impl TowerStorage for FileTowerStorage {
} }
pub struct EtcdTowerStorage { pub struct EtcdTowerStorage {
client: RwLock<etcd_client::Client>, client: tokio::sync::Mutex<etcd_client::Client>,
instance_id: [u8; 8], instance_id: [u8; 8],
runtime: tokio::runtime::Runtime, runtime: tokio::runtime::Runtime,
} }
@ -260,7 +259,7 @@ impl EtcdTowerStorage {
.map_err(Self::etdc_to_tower_error)?; .map_err(Self::etdc_to_tower_error)?;
Ok(Self { Ok(Self {
client: RwLock::new(client), client: tokio::sync::Mutex::new(client),
instance_id: solana_sdk::timing::timestamp().to_le_bytes(), instance_id: solana_sdk::timing::timestamp().to_le_bytes(),
runtime, runtime,
}) })
@ -280,7 +279,6 @@ impl EtcdTowerStorage {
impl TowerStorage for EtcdTowerStorage { impl TowerStorage for EtcdTowerStorage {
fn load(&self, node_pubkey: &Pubkey) -> Result<Tower> { fn load(&self, node_pubkey: &Pubkey) -> Result<Tower> {
let (instance_key, tower_key) = Self::get_keys(node_pubkey); let (instance_key, tower_key) = Self::get_keys(node_pubkey);
let mut client = self.client.write().unwrap();
let txn = etcd_client::Txn::new().and_then(vec![etcd_client::TxnOp::put( let txn = etcd_client::Txn::new().and_then(vec![etcd_client::TxnOp::put(
instance_key.clone(), instance_key.clone(),
@ -288,7 +286,7 @@ impl TowerStorage for EtcdTowerStorage {
None, None,
)]); )]);
self.runtime self.runtime
.block_on(async { client.txn(txn).await }) .block_on(async { self.client.lock().await.txn(txn).await })
.map_err(|err| { .map_err(|err| {
error!("Failed to acquire etcd instance lock: {}", err); error!("Failed to acquire etcd instance lock: {}", err);
Self::etdc_to_tower_error(err) Self::etdc_to_tower_error(err)
@ -304,7 +302,7 @@ impl TowerStorage for EtcdTowerStorage {
let response = self let response = self
.runtime .runtime
.block_on(async { client.txn(txn).await }) .block_on(async { self.client.lock().await.txn(txn).await })
.map_err(|err| { .map_err(|err| {
error!("Failed to read etcd saved tower: {}", err); error!("Failed to read etcd saved tower: {}", err);
Self::etdc_to_tower_error(err) Self::etdc_to_tower_error(err)
@ -336,7 +334,6 @@ impl TowerStorage for EtcdTowerStorage {
fn store(&self, saved_tower: &SavedTowerVersions) -> Result<()> { fn store(&self, saved_tower: &SavedTowerVersions) -> Result<()> {
let (instance_key, tower_key) = Self::get_keys(&saved_tower.pubkey()); let (instance_key, tower_key) = Self::get_keys(&saved_tower.pubkey());
let mut client = self.client.write().unwrap();
let txn = etcd_client::Txn::new() let txn = etcd_client::Txn::new()
.when(vec![etcd_client::Compare::value( .when(vec![etcd_client::Compare::value(
@ -352,7 +349,7 @@ impl TowerStorage for EtcdTowerStorage {
let response = self let response = self
.runtime .runtime
.block_on(async { client.txn(txn).await }) .block_on(async { self.client.lock().await.txn(txn).await })
.map_err(|err| { .map_err(|err| {
error!("Failed to write etcd saved tower: {}", err); error!("Failed to write etcd saved tower: {}", err);
err err
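The switch from std::sync::RwLock to tokio::sync::Mutex matters because the lock guard is now held across an .await inside block_on; a std guard is not designed to live across await points, while tokio's lock is. A reduced sketch of the pattern (Client here is a stand-in for etcd_client::Client, and tokio with the sync and rt features is assumed):

use std::sync::Arc;
use tokio::sync::Mutex;

struct Client;

impl Client {
    async fn txn(&mut self) -> Result<(), ()> {
        Ok(())
    }
}

fn run(runtime: &tokio::runtime::Runtime, client: Arc<Mutex<Client>>) -> Result<(), ()> {
    // Same shape as the diff: lock inside the async block so the guard lives
    // only within the future that block_on drives to completion.
    runtime.block_on(async { client.lock().await.txn().await })
}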


@ -13,8 +13,9 @@ use {
find_packet_sender_stake_stage::FindPacketSenderStakeStage, find_packet_sender_stake_stage::FindPacketSenderStakeStage,
sigverify::TransactionSigVerifier, sigverify::TransactionSigVerifier,
sigverify_stage::SigVerifyStage, sigverify_stage::SigVerifyStage,
staked_nodes_updater_service::StakedNodesUpdaterService,
}, },
crossbeam_channel::{unbounded, Receiver}, crossbeam_channel::{bounded, unbounded, Receiver, RecvTimeoutError},
solana_gossip::cluster_info::ClusterInfo, solana_gossip::cluster_info::ClusterInfo,
solana_ledger::{blockstore::Blockstore, blockstore_processor::TransactionStatusSender}, solana_ledger::{blockstore::Blockstore, blockstore_processor::TransactionStatusSender},
solana_poh::poh_recorder::{PohRecorder, WorkingBankEntry}, solana_poh::poh_recorder::{PohRecorder, WorkingBankEntry},
@ -28,15 +29,21 @@ use {
vote_sender_types::{ReplayVoteReceiver, ReplayVoteSender}, vote_sender_types::{ReplayVoteReceiver, ReplayVoteSender},
}, },
solana_sdk::signature::Keypair, solana_sdk::signature::Keypair,
solana_streamer::quic::{spawn_server, MAX_STAKED_CONNECTIONS, MAX_UNSTAKED_CONNECTIONS},
std::{ std::{
collections::HashMap,
net::UdpSocket, net::UdpSocket,
sync::{atomic::AtomicBool, Arc, Mutex, RwLock}, sync::{atomic::AtomicBool, Arc, Mutex, RwLock},
thread, thread,
time::Duration,
}, },
}; };
pub const DEFAULT_TPU_COALESCE_MS: u64 = 5; pub const DEFAULT_TPU_COALESCE_MS: u64 = 5;
/// Timeout interval when joining threads during TPU close
const TPU_THREADS_JOIN_TIMEOUT_SECONDS: u64 = 10;
// allow multiple connections for NAT and any open/close overlap // allow multiple connections for NAT and any open/close overlap
pub const MAX_QUIC_CONNECTIONS_PER_IP: usize = 8; pub const MAX_QUIC_CONNECTIONS_PER_IP: usize = 8;
@ -58,6 +65,7 @@ pub struct Tpu {
tpu_quic_t: thread::JoinHandle<()>, tpu_quic_t: thread::JoinHandle<()>,
find_packet_sender_stake_stage: FindPacketSenderStakeStage, find_packet_sender_stake_stage: FindPacketSenderStakeStage,
vote_find_packet_sender_stake_stage: FindPacketSenderStakeStage, vote_find_packet_sender_stake_stage: FindPacketSenderStakeStage,
staked_nodes_updater_service: StakedNodesUpdaterService,
} }
impl Tpu { impl Tpu {
@ -128,13 +136,23 @@ impl Tpu {
let (verified_sender, verified_receiver) = unbounded(); let (verified_sender, verified_receiver) = unbounded();
let tpu_quic_t = solana_streamer::quic::spawn_server( let staked_nodes = Arc::new(RwLock::new(HashMap::new()));
let staked_nodes_updater_service = StakedNodesUpdaterService::new(
exit.clone(),
cluster_info.clone(),
bank_forks.clone(),
staked_nodes.clone(),
);
let tpu_quic_t = spawn_server(
transactions_quic_sockets, transactions_quic_sockets,
keypair, keypair,
cluster_info.my_contact_info().tpu.ip(), cluster_info.my_contact_info().tpu.ip(),
packet_sender, packet_sender,
exit.clone(), exit.clone(),
MAX_QUIC_CONNECTIONS_PER_IP, MAX_QUIC_CONNECTIONS_PER_IP,
staked_nodes,
MAX_STAKED_CONNECTIONS,
MAX_UNSTAKED_CONNECTIONS,
) )
.unwrap(); .unwrap();
@ -204,10 +222,27 @@ impl Tpu {
tpu_quic_t, tpu_quic_t,
find_packet_sender_stake_stage, find_packet_sender_stake_stage,
vote_find_packet_sender_stake_stage, vote_find_packet_sender_stake_stage,
staked_nodes_updater_service,
} }
} }
pub fn join(self) -> thread::Result<()> { pub fn join(self) -> thread::Result<()> {
// spawn a new thread to wait for tpu close
let (sender, receiver) = bounded(0);
let _ = thread::spawn(move || {
let _ = self.do_join();
sender.send(()).unwrap();
});
// exit can deadlock. put an upper-bound on how long we wait for it
let timeout = Duration::from_secs(TPU_THREADS_JOIN_TIMEOUT_SECONDS);
if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
error!("timeout for closing tpu");
}
Ok(())
}
fn do_join(self) -> thread::Result<()> {
let results = vec![ let results = vec![
self.fetch_stage.join(), self.fetch_stage.join(),
self.sigverify_stage.join(), self.sigverify_stage.join(),
@ -216,6 +251,7 @@ impl Tpu {
self.banking_stage.join(), self.banking_stage.join(),
self.find_packet_sender_stake_stage.join(), self.find_packet_sender_stake_stage.join(),
self.vote_find_packet_sender_stake_stage.join(), self.vote_find_packet_sender_stake_stage.join(),
self.staked_nodes_updater_service.join(),
]; ];
self.tpu_quic_t.join()?; self.tpu_quic_t.join()?;
let broadcast_result = self.broadcast_stage.join(); let broadcast_result = self.broadcast_stage.join();
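Tpu above and Tvu below gain the same guard: run the real joins on a helper thread and bound the wait with a rendezvous channel, so a wedged stage cannot hang shutdown forever. A standalone reduction of the pattern, assuming only the crossbeam-channel crate:

use crossbeam_channel::{bounded, RecvTimeoutError};
use std::{thread, time::Duration};

fn join_with_timeout(do_join: impl FnOnce() + Send + 'static, timeout: Duration) {
    let (sender, receiver) = bounded(0);
    let _ = thread::spawn(move || {
        do_join();
        let _ = sender.send(()); // ignore failure if the waiter already gave up
    });
    // Joining can deadlock on exit, so cap how long we block on it.
    if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
        eprintln!("timeout while joining threads");
    }
}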


@ -25,8 +25,9 @@ use {
sigverify_stage::SigVerifyStage, sigverify_stage::SigVerifyStage,
tower_storage::TowerStorage, tower_storage::TowerStorage,
voting_service::VotingService, voting_service::VotingService,
warm_quic_cache_service::WarmQuicCacheService,
}, },
crossbeam_channel::{unbounded, Receiver}, crossbeam_channel::{bounded, unbounded, Receiver, RecvTimeoutError},
solana_geyser_plugin_manager::block_metadata_notifier_interface::BlockMetadataNotifierLock, solana_geyser_plugin_manager::block_metadata_notifier_interface::BlockMetadataNotifierLock,
solana_gossip::cluster_info::ClusterInfo, solana_gossip::cluster_info::ClusterInfo,
solana_ledger::{ solana_ledger::{
@ -60,9 +61,13 @@ use {
net::UdpSocket, net::UdpSocket,
sync::{atomic::AtomicBool, Arc, Mutex, RwLock}, sync::{atomic::AtomicBool, Arc, Mutex, RwLock},
thread, thread,
time::Duration,
}, },
}; };
/// Timeout interval when joining threads during TVU close
const TVU_THREADS_JOIN_TIMEOUT_SECONDS: u64 = 10;
pub struct Tvu { pub struct Tvu {
fetch_stage: ShredFetchStage, fetch_stage: ShredFetchStage,
sigverify_stage: SigVerifyStage, sigverify_stage: SigVerifyStage,
@ -74,6 +79,7 @@ pub struct Tvu {
accounts_hash_verifier: AccountsHashVerifier, accounts_hash_verifier: AccountsHashVerifier,
cost_update_service: CostUpdateService, cost_update_service: CostUpdateService,
voting_service: VotingService, voting_service: VotingService,
warm_quic_cache_service: WarmQuicCacheService,
drop_bank_service: DropBankService, drop_bank_service: DropBankService,
transaction_cost_metrics_service: TransactionCostMetricsService, transaction_cost_metrics_service: TransactionCostMetricsService,
} }
@ -279,6 +285,9 @@ impl Tvu {
bank_forks.clone(), bank_forks.clone(),
); );
let warm_quic_cache_service =
WarmQuicCacheService::new(cluster_info.clone(), poh_recorder.clone(), exit.clone());
let (cost_update_sender, cost_update_receiver) = unbounded(); let (cost_update_sender, cost_update_receiver) = unbounded();
let cost_update_service = let cost_update_service =
CostUpdateService::new(blockstore.clone(), cost_model.clone(), cost_update_receiver); CostUpdateService::new(blockstore.clone(), cost_model.clone(), cost_update_receiver);
@ -352,12 +361,29 @@ impl Tvu {
accounts_hash_verifier, accounts_hash_verifier,
cost_update_service, cost_update_service,
voting_service, voting_service,
warm_quic_cache_service,
drop_bank_service, drop_bank_service,
transaction_cost_metrics_service, transaction_cost_metrics_service,
} }
} }
pub fn join(self) -> thread::Result<()> { pub fn join(self) -> thread::Result<()> {
// spawn a new thread to wait for tvu close
let (sender, receiver) = bounded(0);
let _ = thread::spawn(move || {
let _ = self.do_join();
sender.send(()).unwrap();
});
// exit can deadlock. put an upper-bound on how long we wait for it
let timeout = Duration::from_secs(TVU_THREADS_JOIN_TIMEOUT_SECONDS);
if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
error!("timeout for closing tvu");
}
Ok(())
}
fn do_join(self) -> thread::Result<()> {
self.retransmit_stage.join()?; self.retransmit_stage.join()?;
self.fetch_stage.join()?; self.fetch_stage.join()?;
self.sigverify_stage.join()?; self.sigverify_stage.join()?;
@ -370,6 +396,7 @@ impl Tvu {
self.accounts_hash_verifier.join()?; self.accounts_hash_verifier.join()?;
self.cost_update_service.join()?; self.cost_update_service.join()?;
self.voting_service.join()?; self.voting_service.join()?;
self.warm_quic_cache_service.join()?;
self.drop_bank_service.join()?; self.drop_bank_service.join()?;
self.transaction_cost_metrics_service.join()?; self.transaction_cost_metrics_service.join()?;
Ok(()) Ok(())


@ -15,14 +15,9 @@ use {
/// SanitizedTransaction /// SanitizedTransaction
#[derive(Debug, Default)] #[derive(Debug, Default)]
pub struct DeserializedPacket { pub struct DeserializedPacket {
#[allow(dead_code)] pub versioned_transaction: VersionedTransaction,
versioned_transaction: VersionedTransaction, pub message_hash: Hash,
pub is_simple_vote: bool,
#[allow(dead_code)]
message_hash: Hash,
#[allow(dead_code)]
is_simple_vote: bool,
} }
/// Defines the type of entry in `UnprocessedPacketBatches`; it holds the original packet_batch /// Defines the type of entry in `UnprocessedPacketBatches`; it holds the original packet_batch


@ -50,10 +50,6 @@ use {
poh_recorder::{PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS}, poh_recorder::{PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS},
poh_service::{self, PohService}, poh_service::{self, PohService},
}, },
solana_replica_lib::{
accountsdb_repl_server::{AccountsDbReplService, AccountsDbReplServiceConfig},
accountsdb_repl_server_factory,
},
solana_rpc::{ solana_rpc::{
max_slots::MaxSlots, max_slots::MaxSlots,
optimistically_confirmed_bank_tracker::{ optimistically_confirmed_bank_tracker::{
@ -77,6 +73,7 @@ use {
commitment::BlockCommitmentCache, commitment::BlockCommitmentCache,
cost_model::CostModel, cost_model::CostModel,
hardened_unpack::{open_genesis_config, MAX_GENESIS_ARCHIVE_UNPACKED_SIZE}, hardened_unpack::{open_genesis_config, MAX_GENESIS_ARCHIVE_UNPACKED_SIZE},
runtime_config::RuntimeConfig,
snapshot_archive_info::SnapshotArchiveInfoGetter, snapshot_archive_info::SnapshotArchiveInfoGetter,
snapshot_config::SnapshotConfig, snapshot_config::SnapshotConfig,
snapshot_hash::StartingSnapshotHashes, snapshot_hash::StartingSnapshotHashes,
@ -114,7 +111,6 @@ const MAX_COMPLETED_DATA_SETS_IN_CHANNEL: usize = 100_000;
const WAIT_FOR_SUPERMAJORITY_THRESHOLD_PERCENT: u64 = 80; const WAIT_FOR_SUPERMAJORITY_THRESHOLD_PERCENT: u64 = 80;
pub struct ValidatorConfig { pub struct ValidatorConfig {
pub dev_halt_at_slot: Option<Slot>,
pub expected_genesis_hash: Option<Hash>, pub expected_genesis_hash: Option<Hash>,
pub expected_bank_hash: Option<Hash>, pub expected_bank_hash: Option<Hash>,
pub expected_shred_version: Option<u16>, pub expected_shred_version: Option<u16>,
@ -122,7 +118,6 @@ pub struct ValidatorConfig {
pub account_paths: Vec<PathBuf>, pub account_paths: Vec<PathBuf>,
pub account_shrink_paths: Option<Vec<PathBuf>>, pub account_shrink_paths: Option<Vec<PathBuf>>,
pub rpc_config: JsonRpcConfig, pub rpc_config: JsonRpcConfig,
pub accountsdb_repl_service_config: Option<AccountsDbReplServiceConfig>,
pub geyser_plugin_config_files: Option<Vec<PathBuf>>, pub geyser_plugin_config_files: Option<Vec<PathBuf>>,
pub rpc_addrs: Option<(SocketAddr, SocketAddr)>, // (JsonRpc, JsonRpcPubSub) pub rpc_addrs: Option<(SocketAddr, SocketAddr)>, // (JsonRpc, JsonRpcPubSub)
pub pubsub_config: PubSubConfig, pub pubsub_config: PubSubConfig,
@ -151,7 +146,6 @@ pub struct ValidatorConfig {
pub debug_keys: Option<Arc<HashSet<Pubkey>>>, pub debug_keys: Option<Arc<HashSet<Pubkey>>>,
pub contact_debug_interval: u64, pub contact_debug_interval: u64,
pub contact_save_interval: u64, pub contact_save_interval: u64,
pub bpf_jit: bool,
pub send_transaction_service_config: send_transaction_service::Config, pub send_transaction_service_config: send_transaction_service::Config,
pub no_poh_speed_test: bool, pub no_poh_speed_test: bool,
pub no_os_memory_stats_reporting: bool, pub no_os_memory_stats_reporting: bool,
@ -170,12 +164,12 @@ pub struct ValidatorConfig {
pub accounts_shrink_ratio: AccountShrinkThreshold, pub accounts_shrink_ratio: AccountShrinkThreshold,
pub wait_to_vote_slot: Option<Slot>, pub wait_to_vote_slot: Option<Slot>,
pub ledger_column_options: LedgerColumnOptions, pub ledger_column_options: LedgerColumnOptions,
pub runtime_config: RuntimeConfig,
} }
impl Default for ValidatorConfig { impl Default for ValidatorConfig {
fn default() -> Self { fn default() -> Self {
Self { Self {
dev_halt_at_slot: None,
expected_genesis_hash: None, expected_genesis_hash: None,
expected_bank_hash: None, expected_bank_hash: None,
expected_shred_version: None, expected_shred_version: None,
@ -184,7 +178,6 @@ impl Default for ValidatorConfig {
account_paths: Vec::new(), account_paths: Vec::new(),
account_shrink_paths: None, account_shrink_paths: None,
rpc_config: JsonRpcConfig::default(), rpc_config: JsonRpcConfig::default(),
accountsdb_repl_service_config: None,
geyser_plugin_config_files: None, geyser_plugin_config_files: None,
rpc_addrs: None, rpc_addrs: None,
pubsub_config: PubSubConfig::default(), pubsub_config: PubSubConfig::default(),
@ -212,7 +205,6 @@ impl Default for ValidatorConfig {
debug_keys: None, debug_keys: None,
contact_debug_interval: DEFAULT_CONTACT_DEBUG_INTERVAL_MILLIS, contact_debug_interval: DEFAULT_CONTACT_DEBUG_INTERVAL_MILLIS,
contact_save_interval: DEFAULT_CONTACT_SAVE_INTERVAL_MILLIS, contact_save_interval: DEFAULT_CONTACT_SAVE_INTERVAL_MILLIS,
bpf_jit: false,
send_transaction_service_config: send_transaction_service::Config::default(), send_transaction_service_config: send_transaction_service::Config::default(),
no_poh_speed_test: true, no_poh_speed_test: true,
no_os_memory_stats_reporting: true, no_os_memory_stats_reporting: true,
@ -231,6 +223,7 @@ impl Default for ValidatorConfig {
accounts_db_config: None, accounts_db_config: None,
wait_to_vote_slot: None, wait_to_vote_slot: None,
ledger_column_options: LedgerColumnOptions::default(), ledger_column_options: LedgerColumnOptions::default(),
runtime_config: RuntimeConfig::default(),
} }
} }
} }
@ -238,6 +231,7 @@ impl Default for ValidatorConfig {
impl ValidatorConfig { impl ValidatorConfig {
pub fn default_for_test() -> Self { pub fn default_for_test() -> Self {
Self { Self {
enforce_ulimit_nofile: false,
rpc_config: JsonRpcConfig::default_for_test(), rpc_config: JsonRpcConfig::default_for_test(),
..Self::default() ..Self::default()
} }
@ -339,7 +333,6 @@ pub struct Validator {
pub cluster_info: Arc<ClusterInfo>, pub cluster_info: Arc<ClusterInfo>,
pub bank_forks: Arc<RwLock<BankForks>>, pub bank_forks: Arc<RwLock<BankForks>>,
pub blockstore: Arc<Blockstore>, pub blockstore: Arc<Blockstore>,
accountsdb_repl_service: Option<AccountsDbReplService>,
geyser_plugin_service: Option<GeyserPluginService>, geyser_plugin_service: Option<GeyserPluginService>,
} }
@ -673,7 +666,6 @@ impl Validator {
pubsub_service, pubsub_service,
optimistically_confirmed_bank_tracker, optimistically_confirmed_bank_tracker,
bank_notification_sender, bank_notification_sender,
accountsdb_repl_service,
) = if let Some((rpc_addr, rpc_pubsub_addr)) = config.rpc_addrs { ) = if let Some((rpc_addr, rpc_pubsub_addr)) = config.rpc_addrs {
if ContactInfo::is_valid_address(&node.info.rpc, &socket_addr_space) { if ContactInfo::is_valid_address(&node.info.rpc, &socket_addr_space) {
assert!(ContactInfo::is_valid_address( assert!(ContactInfo::is_valid_address(
@@ -687,13 +679,6 @@ impl Validator {
));
}
-let accountsdb_repl_service = config.accountsdb_repl_service_config.as_ref().map(|accountsdb_repl_service_config| {
-    let (bank_notification_sender, bank_notification_receiver) = unbounded();
-    bank_notification_senders.push(bank_notification_sender);
-    accountsdb_repl_server_factory::AccountsDbReplServerFactory::build_accountsdb_repl_server(
-        accountsdb_repl_service_config.clone(), bank_notification_receiver, bank_forks.clone())
-});
let (bank_notification_sender, bank_notification_receiver) = unbounded();
let confirmed_bank_subscribers = if !bank_notification_senders.is_empty() {
Some(Arc::new(RwLock::new(bank_notification_senders)))
@@ -746,13 +731,12 @@ impl Validator {
confirmed_bank_subscribers,
)),
Some(bank_notification_sender),
-accountsdb_repl_service,
)
} else {
-(None, None, None, None, None)
+(None, None, None, None)
};
-if config.dev_halt_at_slot.is_some() {
+if config.runtime_config.dev_halt_at_slot.is_some() {
// Simulate a confirmed root to avoid RPC errors with CommitmentConfig::finalized() and
// to ensure RPC endpoints like getConfirmedBlock, which require a confirmed root, work
block_commitment_cache
@@ -814,8 +798,7 @@ impl Validator {
let enable_gossip_push = config
.accounts_db_config
.as_ref()
-.and_then(|config| config.filler_account_count)
-.map(|count| count == 0)
+.map(|config| config.filler_accounts_config.count == 0)
.unwrap_or(true);
let snapshot_packager_service = SnapshotPackagerService::new(
@@ -999,7 +982,6 @@ impl Validator {
cluster_info,
bank_forks,
blockstore: blockstore.clone(),
-accountsdb_repl_service,
geyser_plugin_service,
}
}
@@ -1119,12 +1101,6 @@ impl Validator {
ip_echo_server.shutdown_background();
}
-if let Some(accountsdb_repl_service) = self.accountsdb_repl_service {
-    accountsdb_repl_service
-        .join()
-        .expect("accountsdb_repl_service");
-}
if let Some(geyser_plugin_service) = self.geyser_plugin_service {
geyser_plugin_service.join().expect("geyser_plugin_service");
}
@@ -1332,9 +1308,7 @@ fn load_blockstore(
let blockstore_root_scan = BlockstoreRootScan::new(config, &blockstore, exit);
let process_options = blockstore_processor::ProcessOptions {
-bpf_jit: config.bpf_jit,
poh_verify: config.poh_verify,
-dev_halt_at_slot: config.dev_halt_at_slot,
new_hard_forks: config.new_hard_forks.clone(),
debug_keys: config.debug_keys.clone(),
account_indexes: config.account_indexes.clone(),
@@ -1343,6 +1317,7 @@ fn load_blockstore(
shrink_ratio: config.accounts_shrink_ratio,
accounts_db_test_hash_calculation: config.accounts_db_test_hash_calculation,
accounts_db_skip_shrink: config.accounts_db_skip_shrink,
+runtime_config: config.runtime_config.clone(),
..blockstore_processor::ProcessOptions::default()
};
@@ -1867,7 +1842,20 @@ mod tests {
*start_progress.read().unwrap(),
ValidatorStartProgress::Running
);
-validator.close();
+// spawn a new thread to wait for validator close
+let (sender, receiver) = bounded(0);
+let _ = thread::spawn(move || {
+    validator.close();
+    sender.send(()).unwrap();
+});
+// exit can deadlock. put an upper-bound on how long we wait for it
+let timeout = Duration::from_secs(30);
+if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
+    panic!("timeout for closing validator");
+}
remove_dir_all(validator_ledger_path).unwrap();
}

View File

@@ -0,0 +1,70 @@
// Connect to future leaders with some jitter so the quic connection is warm
// by the time we need it.
use {
rand::{thread_rng, Rng},
solana_client::connection_cache::send_wire_transaction,
solana_gossip::cluster_info::ClusterInfo,
solana_poh::poh_recorder::PohRecorder,
std::{
sync::{
atomic::{AtomicBool, Ordering},
Arc, Mutex,
},
thread::{self, sleep, Builder, JoinHandle},
time::Duration,
},
};
pub struct WarmQuicCacheService {
thread_hdl: JoinHandle<()>,
}
// ~50 seconds
const CACHE_OFFSET_SLOT: i64 = 100;
const CACHE_JITTER_SLOT: i64 = 20;
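// Together these aim the warmup at the leader roughly 100 slots ahead (tens of
// seconds at 400-500 ms per slot), while the per-validator jitter spreads the
// probes so they do not all reach that leader at the same moment.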
impl WarmQuicCacheService {
pub fn new(
cluster_info: Arc<ClusterInfo>,
poh_recorder: Arc<Mutex<PohRecorder>>,
exit: Arc<AtomicBool>,
) -> Self {
let thread_hdl = Builder::new()
.name("sol-warm-quic-service".to_string())
.spawn(move || {
let slot_jitter = thread_rng().gen_range(-CACHE_JITTER_SLOT, CACHE_JITTER_SLOT);
let mut maybe_last_leader = None;
while !exit.load(Ordering::Relaxed) {
if let Some(leader_pubkey) = poh_recorder
.lock()
.unwrap()
.leader_after_n_slots((CACHE_OFFSET_SLOT + slot_jitter) as u64)
{
if maybe_last_leader
.map_or(true, |last_leader| last_leader != leader_pubkey)
{
maybe_last_leader = Some(leader_pubkey);
if let Some(addr) = cluster_info
.lookup_contact_info(&leader_pubkey, |leader| leader.tpu)
{
if let Err(err) = send_wire_transaction(&[0u8], &addr) {
warn!(
"Failed to warmup QUIC connection to the leader {:?}, Error {:?}",
leader_pubkey, err
);
}
}
}
}
sleep(Duration::from_millis(200));
}
})
.unwrap();
Self { thread_hdl }
}
pub fn join(self) -> thread::Result<()> {
self.thread_hdl.join()
}
}

View File

@@ -166,10 +166,10 @@ mod tests {
old_genesis_config: &GenesisConfig,
account_paths: &[PathBuf],
) {
-let (snapshot_path, snapshot_archives_dir) = old_bank_forks
+let snapshot_archives_dir = old_bank_forks
.snapshot_config
.as_ref()
-.map(|c| (&c.bank_snapshots_dir, &c.snapshot_archives_dir))
+.map(|c| &c.snapshot_archives_dir)
.unwrap();
let old_last_bank = old_bank_forks.get(old_last_slot).unwrap();
@@ -213,12 +213,6 @@ mod tests {
.unwrap()
.clone();
assert_eq!(*bank, deserialized_bank);
-let bank_snapshots = snapshot_utils::get_bank_snapshots(&snapshot_path);
-for p in bank_snapshots {
-    snapshot_utils::remove_bank_snapshot(p.slot, &snapshot_path).unwrap();
-}
}
// creates banks up to "last_slot" and runs the input function `f` on each bank created
@@ -274,7 +268,7 @@ mod tests {
let snapshot_config = &snapshot_test_config.snapshot_config;
let bank_snapshots_dir = &snapshot_config.bank_snapshots_dir;
let last_bank_snapshot_info =
-    snapshot_utils::get_highest_bank_snapshot_info(bank_snapshots_dir)
+    snapshot_utils::get_highest_bank_snapshot_pre(bank_snapshots_dir)
.expect("no bank snapshots found in path");
let accounts_package = AccountsPackage::new(
last_bank,
@@ -289,7 +283,13 @@ mod tests {
Some(SnapshotType::FullSnapshot),
)
.unwrap();
-let snapshot_package = SnapshotPackage::from(accounts_package);
+solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
+    accounts_package.snapshot_links.path(),
+    accounts_package.slot,
+    &last_bank.get_accounts_hash(),
+);
+let snapshot_package =
+    SnapshotPackage::new(accounts_package, last_bank.get_accounts_hash());
snapshot_utils::archive_snapshot_package(
&snapshot_package,
snapshot_config.maximum_full_snapshot_archives_to_retain,
@@ -392,7 +392,6 @@ mod tests {
let tx = system_transaction::transfer(mint_keypair, &key1, 1, genesis_config.hash());
assert_eq!(bank.process_transaction(&tx), Ok(()));
bank.squash();
-let accounts_hash = bank.update_accounts_hash();
let pending_accounts_package = {
if slot == saved_slot as u64 {
@@ -462,7 +461,9 @@ mod tests {
saved_archive_path = Some(snapshot_utils::build_full_snapshot_archive_path(
snapshot_archives_dir,
slot,
-&accounts_hash,
+// this needs to match the hash value that we reserialize with later. It is complicated, so just use default.
+// This hash value is just used to build the file name. Since this is mocked up test code, it is sufficient to pass default here.
+&Hash::default(),
ArchiveFormat::TarBzip2,
));
}
@@ -472,7 +473,7 @@ mod tests {
// currently sitting in the channel
snapshot_utils::purge_old_bank_snapshots(bank_snapshots_dir);
-let mut bank_snapshots = snapshot_utils::get_bank_snapshots(&bank_snapshots_dir);
+let mut bank_snapshots = snapshot_utils::get_bank_snapshots_pre(&bank_snapshots_dir);
bank_snapshots.sort_unstable();
assert!(bank_snapshots
.into_iter()
@@ -511,7 +512,12 @@ mod tests {
.unwrap()
.take()
.unwrap();
-let snapshot_package = SnapshotPackage::from(accounts_package);
+solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
+    accounts_package.snapshot_links.path(),
+    accounts_package.slot,
+    &Hash::default(),
+);
+let snapshot_package = SnapshotPackage::new(accounts_package, Hash::default());
pending_snapshot_package
.lock()
.unwrap()
@@ -548,11 +554,19 @@ mod tests {
)
.unwrap();
+// files were saved off before we reserialized the bank in the hacked up accounts_hash_verifier stand-in.
+solana_runtime::serde_snapshot::reserialize_bank_with_new_accounts_hash(
+    saved_snapshots_dir.path(),
+    saved_slot,
+    &Hash::default(),
+);
snapshot_utils::verify_snapshot_archive(
saved_archive_path.unwrap(),
saved_snapshots_dir.path(),
saved_accounts_dir.path(),
ArchiveFormat::TarBzip2,
+snapshot_utils::VerifyBank::NonDeterministic(saved_slot),
);
}
@@ -751,7 +765,7 @@ mod tests {
let slot = bank.slot();
info!("Making full snapshot archive from bank at slot: {}", slot);
let bank_snapshot_info =
-    snapshot_utils::get_bank_snapshots(&snapshot_config.bank_snapshots_dir)
+    snapshot_utils::get_bank_snapshots_pre(&snapshot_config.bank_snapshots_dir)
.into_iter()
.find(|elem| elem.slot == slot)
.ok_or_else(|| {
@@ -786,7 +800,7 @@ mod tests {
slot, incremental_snapshot_base_slot,
);
let bank_snapshot_info =
-    snapshot_utils::get_bank_snapshots(&snapshot_config.bank_snapshots_dir)
+    snapshot_utils::get_bank_snapshots_pre(&snapshot_config.bank_snapshots_dir)
.into_iter()
.find(|elem| elem.slot == slot)
.ok_or_else(|| {

View File

@@ -179,6 +179,7 @@ module.exports = {
"proposals/block-confirmation",
"proposals/cluster-test-framework",
"proposals/embedding-move",
+"proposals/handle-duplicate-block",
"proposals/interchain-transaction-verification",
"proposals/ledger-replication-to-implement",
"proposals/optimistic-confirmation-and-slashing",

View File

@@ -33,6 +33,14 @@ solana airdrop 1 <RECIPIENT_ACCOUNT_ADDRESS> --url https://api.devnet.solana.com
where you replace the text `<RECIPIENT_ACCOUNT_ADDRESS>` with your base58-encoded
public key/wallet address.

+A response with the signature of the transaction will be returned. If the balance
+of the address does not change by the expected amount, run the following command
+for more information on what potentially went wrong:
+
+```bash
+solana confirm -v <TRANSACTION_SIGNATURE>
+```
+
#### Check your balance

Confirm the airdrop was successful by checking the account's balance.
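For a scripted balance check, a minimal sketch with `@solana/web3.js` (the devnet endpoint matches the examples above; the generated keypair is only a stand-in for your own address):

```typescript
import { Connection, Keypair, LAMPORTS_PER_SOL } from "@solana/web3.js";

(async () => {
  const connection = new Connection("https://api.devnet.solana.com", "confirmed");
  // Stand-in for <RECIPIENT_ACCOUNT_ADDRESS>; substitute your own public key.
  const recipient = Keypair.generate().publicKey;
  const lamports = await connection.getBalance(recipient);
  console.log(`Balance: ${lamports / LAMPORTS_PER_SOL} SOL`);
})();
```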

View File

@@ -3384,7 +3384,7 @@ The result will be an RpcResponse JSON object with `value` set to a JSON object
- `err: <object | string | null>` - Error if transaction failed, null if transaction succeeded. [TransactionError definitions](https://github.com/solana-labs/solana/blob/c0c60386544ec9a9ec7119229f37386d9f070523/sdk/src/transaction/error.rs#L13)
- `logs: <array | null>` - Array of log messages the transaction instructions output during execution, null if simulation failed before the transaction was able to execute (for example due to an invalid blockhash or signature verification failure)
-- `accounts: <array> | null>` - array of accounts with the same length as the `accounts.addresses` array in the request
+- `accounts: <array | null>` - array of accounts with the same length as the `accounts.addresses` array in the request
  - `<null>` - if the account doesn't exist or if `err` is not null
  - `<object>` - otherwise, a JSON object containing:
    - `lamports: <u64>`, number of lamports assigned to this account, as a u64
@@ -3393,6 +3393,9 @@ The result will be an RpcResponse JSON object with `value` set to a JSON object
    - `executable: <bool>`, boolean indicating if the account contains a program \(and is strictly read-only\)
    - `rentEpoch: <u64>`, the epoch at which this account will next owe rent, as u64
- `unitsConsumed: <u64 | undefined>`, The number of compute budget units consumed during the processing of this transaction
+- `returnData: <object | null>` - the most-recent return data generated by an instruction in the transaction, with the following fields:
+  - `programId: <string>`, the program that generated the return data, as base-58 encoded Pubkey
+  - `data: <[string, encoding]>`, the return data itself, as base-64 encoded binary data
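To make the `data` tuple concrete, here is a small decoding sketch (it assumes, as in the example response below, that the program wrote its return value as a little-endian u64; `Buffer` is the Node.js built-in):

```typescript
// `data` is a [payload, encoding] tuple; the field description above specifies base-64.
const data: [string, string] = ["KgAAAAAAAAA=", "base64"];
const [payload, encoding] = data;
if (encoding !== "base64") throw new Error(`unexpected encoding: ${encoding}`);
const bytes = Buffer.from(payload, "base64");
console.log(bytes.readBigUInt64LE(0)); // 42n
```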
#### Example:

@@ -3403,7 +3406,10 @@ curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d '
"id": 1,
"method": "simulateTransaction",
"params": [
-"4hXTCkRzt9WyecNzV1XPgCDfGAZzQKNxLXgynz5QDuWWPSAZBZSHptvWRL3BjCvzUXRdKvHL2b7yGrRQcWyaqsaBCncVG7BFggS8w9snUts67BSh3EqKpXLUm5UMHfD7ZBe9GhARjbNQMLJ1QD3Spr6oMTBU6EhdB4RD8CP2xUxr2u3d6fos36PD98XS6oX8TQjLpsMwncs5DAMiD4nNnR8NBfyghGCWvCVifVwvA8B8TJxE1aiyiv2L429BCWfyzAme5sZW8rDb14NeCQHhZbtNqfXhcp2tAnaAT"
+"AQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEDArczbMia1tLmq7zz4DinMNN0pJ1JtLdqIJPUw3YrGCzYAMHBsgN27lcgB6H2WQvFgyZuJYHa46puOQo9yQ8CVQbd9uHXZaGT2cvhRs7reawctIXtX1s3kTqM9YV+/wCp20C7Wj2aiuk5TReAXo+VTVg8QTHjs0UjNMMKCvpzZ+ABAgEBARU=",
+{
+  "encoding": "base64"
+}
]
}
'
@@ -3422,8 +3428,19 @@ Result:
"err": null,
"accounts": null,
"logs": [
-"BPF program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri success"
-]
+"Program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri invoke [1]",
+"Program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri consumed 2366 of 1400000 compute units",
+"Program return: 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri KgAAAAAAAAA=",
+"Program 83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri success"
+],
+"returnData": {
+  "data": [
+    "Kg==",
+    "base64"
+  ],
+  "programId": "83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri"
+},
+"unitsConsumed": 2366
}
},
"id": 1

View File

@ -1,3 +1,7 @@
---
title: Handle Duplicate Block
---
# Leader Duplicate Block Slashing # Leader Duplicate Block Slashing
This design describes how the cluster slashes leaders that produce duplicate This design describes how the cluster slashes leaders that produce duplicate

File diff suppressed because it is too large

View File

@@ -5,7 +5,7 @@
"dependencies": {
"@blockworks-foundation/mango-client": "^3.2.16",
"@bonfida/bot": "^0.5.3",
-"@bonfida/spl-name-service": "^0.1.22",
+"@bonfida/spl-name-service": "^0.1.30",
"@cloudflare/stream-react": "^1.2.0",
"@metamask/jazzicon": "^2.0.0",
"@metaplex/js": "4.12.0",

View File

@@ -3,6 +3,7 @@ import { Cluster } from "providers/cluster";
import { TableCardBody } from "components/common/TableCardBody";
import { InstructionLogs } from "utils/program-logs";
import { ProgramName } from "utils/anchor";
+import React from "react";

export function ProgramLogsCardBody({
message,
View File

@@ -13,14 +13,30 @@ import {
import { Cluster, useCluster } from "providers/cluster";
import { useTokenRegistry } from "providers/mints/token-registry";
import { TokenInfoMap } from "@solana/spl-token-registry";
+import { Connection } from "@solana/web3.js";
+import { getDomainInfo, hasDomainSyntax } from "utils/name-service";
+
+interface SearchOptions {
+  label: string;
+  options: {
+    label: string;
+    value: string[];
+    pathname: string;
+  }[];
+}

export function SearchBar() {
const [search, setSearch] = React.useState("");
+const searchRef = React.useRef("");
+const [searchOptions, setSearchOptions] = React.useState<SearchOptions[]>([]);
+const [loadingSearch, setLoadingSearch] = React.useState<boolean>(false);
+const [loadingSearchMessage, setLoadingSearchMessage] =
+  React.useState<string>("loading...");
const selectRef = React.useRef<StateManager<any> | null>(null);
const history = useHistory();
const location = useLocation();
const { tokenRegistry } = useTokenRegistry();
-const { cluster, clusterInfo } = useCluster();
+const { url, cluster, clusterInfo } = useCluster();

const onChange = (
{ pathname }: ValueType<any, false>,
@@ -33,7 +49,54 @@ export function SearchBar() {
};
};

const onInputChange = (value: string, { action }: InputActionMeta) => {
-if (action === "input-change") setSearch(value);
+if (action === "input-change") {
+  setSearch(value);
+}
+};
+
+React.useEffect(() => {
+  searchRef.current = search;
+  setLoadingSearchMessage("Loading...");
+  setLoadingSearch(true);
+
+  // builds and sets local search output
+  const options = buildOptions(
+    search,
+    cluster,
+    tokenRegistry,
+    clusterInfo?.epochInfo.epoch
+  );
+
+  setSearchOptions(options);
+
+  // checking for non local search output
+  if (hasDomainSyntax(search)) {
+    // if search input is a potential domain we continue the loading state
+    domainSearch(options);
+  } else {
+    // if search input is not a potential domain we can conclude the search has finished
+    setLoadingSearch(false);
+  }
+
+  // eslint-disable-next-line react-hooks/exhaustive-deps
+}, [search]);
+
+// appends domain lookup results to the local search state
+const domainSearch = async (options: SearchOptions[]) => {
+  setLoadingSearchMessage("Looking up domain...");
+  const connection = new Connection(url);
+  const searchTerm = search;
+  const updatedOptions = await buildDomainOptions(
+    connection,
+    search,
+    options
+  );
+  if (searchRef.current === searchTerm) {
+    setSearchOptions(updatedOptions);
+    // after attempting to fetch the domain name we can conclude the loading state
+    setLoadingSearch(false);
+    setLoadingSearchMessage("Loading...");
+  }
};

const resetValue = "" as any;
@@ -44,13 +107,9 @@ export function SearchBar() {
<Select
autoFocus
ref={(ref) => (selectRef.current = ref)}
-options={buildOptions(
-  search,
-  cluster,
-  tokenRegistry,
-  clusterInfo?.epochInfo.epoch
-)}
+options={searchOptions}
noOptionsMessage={() => "No Results"}
+loadingMessage={() => loadingSearchMessage}
placeholder="Search for blocks, accounts, transactions, programs, and tokens"
value={resetValue}
inputValue={search}
@@ -65,6 +124,7 @@ export function SearchBar() {
onInputChange={onInputChange}
components={{ DropdownIndicator }}
classNamePrefix="search-bar"
+isLoading={loadingSearch}
/>
</div>
</div>
@@ -187,7 +247,7 @@ function buildTokenOptions(
if (matchedTokens.length > 0) {
return {
label: "Tokens",
-options: matchedTokens.map(([id, details]) => ({
+options: matchedTokens.slice(0, 10).map(([id, details]) => ({
label: details.name,
value: [details.name, details.symbol, id],
pathname: "/address/" + id,
@@ -196,6 +256,39 @@ function buildTokenOptions(
}
}

+async function buildDomainOptions(
+  connection: Connection,
+  search: string,
+  options: SearchOptions[]
+) {
+  const domainInfo = await getDomainInfo(search, connection);
+  const updatedOptions: SearchOptions[] = [...options];
+  if (domainInfo && domainInfo.owner && domainInfo.address) {
+    updatedOptions.push({
+      label: "Domain Owner",
+      options: [
+        {
+          label: domainInfo.owner,
+          value: [search],
+          pathname: "/address/" + domainInfo.owner,
+        },
+      ],
+    });
+    updatedOptions.push({
+      label: "Name Service Account",
+      options: [
+        {
+          label: search,
+          value: [search],
+          pathname: "/address/" + domainInfo.address,
+        },
+      ],
+    });
+  }
+  return updatedOptions;
+}
+
+// builds local search options
function buildOptions(
rawSearch: string,
cluster: Cluster,
@@ -287,6 +380,7 @@ function buildOptions(
});
}
} catch (err) {}
+
return options;
}

View File

@@ -1,63 +1,62 @@
import React, { useMemo } from "react";
import { Account } from "providers/accounts";
-import { Address } from "components/common/Address";
-import { BorshAccountsCoder } from "@project-serum/anchor";
-import { capitalizeFirstLetter } from "utils/anchor";
-import { ErrorCard } from "components/common/ErrorCard";
-import { PublicKey } from "@solana/web3.js";
-import BN from "bn.js";
-import ReactJson from "react-json-view";
import { useCluster } from "providers/cluster";
+import { BorshAccountsCoder } from "@project-serum/anchor";
+import { IdlTypeDef } from "@project-serum/anchor/dist/cjs/idl";
+import { getProgramName, mapAccountToRows } from "utils/anchor";
+import { ErrorCard } from "components/common/ErrorCard";
import { useAnchorProgram } from "providers/anchor";

export function AnchorAccountCard({ account }: { account: Account }) {
+const { lamports } = account;
const { url } = useCluster();
-const program = useAnchorProgram(
-  account.details?.owner.toString() ?? "",
+const anchorProgram = useAnchorProgram(
+  account.details?.owner.toString() || "",
url
);
+const rawData = account?.details?.rawData;
+const programName = getProgramName(anchorProgram) || "Unknown Program";

-const { foundAccountLayoutName, decodedAnchorAccountData } = useMemo(() => {
-  let foundAccountLayoutName: string | undefined;
-  let decodedAnchorAccountData: { [key: string]: any } | undefined;
-  if (program && account.details && account.details.rawData) {
-    const accountBuffer = account.details.rawData;
-    const discriminator = accountBuffer.slice(0, 8);
-
-    // Iterate all the structs, see if any of the name-hashes match
-    Object.keys(program.account).forEach((accountType) => {
-      const layoutName = capitalizeFirstLetter(accountType);
-      const discriminatorToCheck =
-        BorshAccountsCoder.accountDiscriminator(layoutName);
-
-      if (discriminatorToCheck.equals(discriminator)) {
-        foundAccountLayoutName = layoutName;
-        const accountDecoder = program.account[accountType];
-        decodedAnchorAccountData = accountDecoder.coder.accounts.decode(
-          layoutName,
-          accountBuffer
-        );
-      }
-    });
-  }
-  return { foundAccountLayoutName, decodedAnchorAccountData };
-}, [program, account.details]);
+const { decodedAccountData, accountDef } = useMemo(() => {
+  let decodedAccountData: any | null = null;
+  let accountDef: IdlTypeDef | undefined = undefined;
+  if (anchorProgram && rawData) {
+    const coder = new BorshAccountsCoder(anchorProgram.idl);
+    const accountDefTmp = anchorProgram.idl.accounts?.find(
+      (accountType: any) =>
+        (rawData as Buffer)
+          .slice(0, 8)
+          .equals(BorshAccountsCoder.accountDiscriminator(accountType.name))
+    );
+    if (accountDefTmp) {
+      accountDef = accountDefTmp;
+      decodedAccountData = coder.decode(accountDef.name, rawData);
+    }
+  }
+  return {
+    decodedAccountData,
+    accountDef,
+  };
+}, [anchorProgram, rawData]);

-if (!foundAccountLayoutName || !decodedAnchorAccountData) {
+if (lamports === undefined) return null;
+if (!anchorProgram) return <ErrorCard text="No Anchor IDL found" />;
+if (!decodedAccountData || !accountDef) {
return (
-  <ErrorCard text="Failed to decode account data according to its public anchor interface" />
+  <ErrorCard text="Failed to decode account data according to the public Anchor interface" />
);
}

return (
-<>
+<div>
<div className="card">
<div className="card-header">
<div className="row align-items-center">
<div className="col">
-<h3 className="card-header-title">{foundAccountLayoutName}</h3>
+<h3 className="card-header-title">
+  {programName}: {accountDef.name}
+</h3>
</div>
</div>
</div>
@@ -66,92 +65,21 @@ export function AnchorAccountCard({ account }: { account: Account }) {
<table className="table table-sm table-nowrap card-table">
<thead>
<tr>
-<th className="w-1 text-muted">Key</th>
-<th className="text-muted">Value</th>
+<th className="w-1">Field</th>
+<th className="w-1">Type</th>
+<th className="w-1">Value</th>
</tr>
</thead>
-<tbody className="list">
-  {decodedAnchorAccountData &&
-    Object.keys(decodedAnchorAccountData).map((key) => (
-      <AccountRow
-        key={key}
-        valueName={key}
-        value={decodedAnchorAccountData[key]}
-      />
-    ))}
+<tbody>
+  {mapAccountToRows(
+    decodedAccountData,
+    accountDef as IdlTypeDef,
+    anchorProgram.idl
+  )}
</tbody>
</table>
</div>
-<div className="card-footer">
-  <div className="text-muted text-center">
-    {decodedAnchorAccountData &&
-    Object.keys(decodedAnchorAccountData).length > 0
-      ? `Decoded ${Object.keys(decodedAnchorAccountData).length} Items`
-      : "No decoded data"}
-  </div>
-</div>
</div>
-</>
+</div>
);
}
-
-function AccountRow({ valueName, value }: { valueName: string; value: any }) {
-  let displayValue: JSX.Element;
-  if (value instanceof PublicKey) {
-    displayValue = <Address pubkey={value} link />;
-  } else if (value instanceof BN) {
-    displayValue = <>{value.toString()}</>;
-  } else if (!(value instanceof Object)) {
-    displayValue = <>{String(value)}</>;
-  } else if (value) {
-    const displayObject = stringifyPubkeyAndBigNums(value);
-    displayValue = (
-      <ReactJson
-        src={JSON.parse(JSON.stringify(displayObject))}
-        collapsed={1}
-        theme="solarized"
-      />
-    );
-  } else {
-    displayValue = <>null</>;
-  }
-  return (
-    <tr>
-      <td className="w-1 text-monospace">{camelToUnderscore(valueName)}</td>
-      <td className="text-monospace">{displayValue}</td>
-    </tr>
-  );
-}
-
-function camelToUnderscore(key: string) {
-  var result = key.replace(/([A-Z])/g, " $1");
-  return result.split(" ").join("_").toLowerCase();
-}
-
-function stringifyPubkeyAndBigNums(object: Object): Object {
-  if (!Array.isArray(object)) {
-    if (object instanceof PublicKey) {
-      return object.toString();
-    } else if (object instanceof BN) {
-      return object.toString();
-    } else if (!(object instanceof Object)) {
-      return object;
-    } else {
-      const parsedObject: { [key: string]: Object } = {};
-      Object.keys(object).map((key) => {
-        let value = (object as { [key: string]: any })[key];
-        if (value instanceof Object) {
-          value = stringifyPubkeyAndBigNums(value);
-        }
-        parsedObject[key] = value;
-        return null;
-      });
-      return parsedObject;
-    }
-  }
-  return object.map((innerObject) =>
-    innerObject instanceof Object
-      ? stringifyPubkeyAndBigNums(innerObject)
-      : innerObject
-  );
-}

View File

@@ -21,15 +21,14 @@ export function DomainsCard({ pubkey }: { pubkey: PublicKey }) {
return (
<div className="card">
<div className="card-header align-items-center">
-<h3 className="card-header-title">Domain Names Owned</h3>
+<h3 className="card-header-title">Owned Domain Names</h3>
</div>
<div className="table-responsive mb-0">
<table className="table table-sm table-nowrap card-table">
<thead>
<tr>
-<th className="text-muted">Domain name</th>
-<th className="text-muted">Domain Address</th>
-<th className="text-muted">Domain Class Address</th>
+<th className="text-muted">Domain Name</th>
+<th className="text-muted">Name Service Account</th>
</tr>
</thead>
<tbody className="list">
@@ -53,9 +52,6 @@ function RenderDomainRow({ domainInfo }: { domainInfo: DomainInfo }) {
<td>
<Address pubkey={domainInfo.address} link />
</td>
-<td>
-  <Address pubkey={domainInfo.class} link />
-</td>
</tr>
);
}

View File

@@ -343,7 +343,7 @@ function NonFungibleTokenMintAccountCard({
</td>
</tr>
)}
-{nftData?.metadata.collection?.verified && (
+{!!nftData?.metadata.collection?.verified && (
<tr>
<td>Verified Collection Address</td>
<td className="text-lg-end">

View File

@@ -18,6 +18,7 @@ type Props = {
truncateUnknown?: boolean;
truncateChars?: number;
useMetadata?: boolean;
+overrideText?: string;
};

export function Address({
@@ -29,6 +30,7 @@ export function Address({
truncateUnknown,
truncateChars,
useMetadata,
+overrideText,
}: Props) {
const address = pubkey.toBase58();
const { tokenRegistry } = useTokenRegistry();
@@ -52,6 +54,10 @@ export function Address({
addressLabel = addressLabel.slice(0, truncateChars) + "…";
}

+if (overrideText) {
+  addressLabel = overrideText;
+}
+
const content = (
<Copyable text={address} replaceText={!alignRight}>
<span className="font-monospace">

View File

@@ -1,15 +1,21 @@
import { SignatureResult, TransactionInstruction } from "@solana/web3.js";
import { InstructionCard } from "./InstructionCard";
-import { Idl, Program, BorshInstructionCoder } from "@project-serum/anchor";
+import {
+  Idl,
+  Program,
+  BorshInstructionCoder,
+  Instruction,
+} from "@project-serum/anchor";
import {
getAnchorNameForInstruction,
getProgramName,
-capitalizeFirstLetter,
getAnchorAccountsFromInstruction,
+mapIxArgsToRows,
} from "utils/anchor";
-import { HexData } from "components/common/HexData";
import { Address } from "components/common/Address";
-import ReactJson from "react-json-view";
+import { camelToTitleCase } from "utils";
+import { IdlInstruction } from "@project-serum/anchor/dist/cjs/idl";
+import { useMemo } from "react";

export default function AnchorDetailsCard(props: {
key: string;
@@ -26,46 +32,99 @@ export default function AnchorDetailsCard(props: {
const ixName =
getAnchorNameForInstruction(ix, anchorProgram) ?? "Unknown Instruction";
-const cardTitle = `${programName}: ${ixName}`;
+const cardTitle = `${camelToTitleCase(programName)}: ${camelToTitleCase(
+  ixName
+)}`;

return (
<InstructionCard title={cardTitle} {...props}>
-<RawAnchorDetails ix={ix} anchorProgram={anchorProgram} />
+<AnchorDetails ix={ix} anchorProgram={anchorProgram} />
</InstructionCard>
);
}

-function RawAnchorDetails({
+function AnchorDetails({
ix,
anchorProgram,
}: {
ix: TransactionInstruction;
anchorProgram: Program;
}) {
-let ixAccounts:
-  | {
-      name: string;
-      isMut: boolean;
-      isSigner: boolean;
-      pda?: Object;
-    }[]
-  | null = null;
-var decodedIxData = null;
-if (anchorProgram) {
-  const decoder = new BorshInstructionCoder(anchorProgram.idl);
-  decodedIxData = decoder.decode(ix.data);
-  ixAccounts = getAnchorAccountsFromInstruction(decodedIxData, anchorProgram);
-}
+const { ixAccounts, decodedIxData, ixDef } = useMemo(() => {
+  let ixAccounts:
+    | {
+        name: string;
+        isMut: boolean;
+        isSigner: boolean;
+        pda?: Object;
+      }[]
+    | null = null;
+  let decodedIxData: Instruction | null = null;
+  let ixDef: IdlInstruction | undefined;
+  if (anchorProgram) {
+    const coder = new BorshInstructionCoder(anchorProgram.idl);
+    decodedIxData = coder.decode(ix.data);
+    if (decodedIxData) {
+      ixDef = anchorProgram.idl.instructions.find(
+        (ixDef) => ixDef.name === decodedIxData?.name
+      );
+      if (ixDef) {
+        ixAccounts = getAnchorAccountsFromInstruction(
+          decodedIxData,
+          anchorProgram
+        );
+      }
+    }
+  }
+  return {
+    ixAccounts,
+    decodedIxData,
+    ixDef,
+  };
+}, [anchorProgram, ix.data]);
+
+if (!ixAccounts || !decodedIxData || !ixDef) {
+  return (
+    <tr>
+      <td colSpan={3} className="text-lg-center">
+        Failed to decode account data according to the public Anchor interface
+      </td>
+    </tr>
+  );
+}
+const programName = getProgramName(anchorProgram) ?? "Unknown Program";

return (
<>
+<tr>
+  <td>Program</td>
+  <td className="text-lg-end" colSpan={2}>
+    <Address
+      pubkey={ix.programId}
+      alignRight
+      link
+      raw
+      overrideText={programName}
+    />
+  </td>
+</tr>
+<tr className="table-sep">
+  <td>Account Name</td>
+  <td className="text-lg-end" colSpan={2}>
+    Address
+  </td>
+</tr>
{ix.keys.map(({ pubkey, isSigner, isWritable }, keyIndex) => {
return (
<tr key={keyIndex}>
<td>
<div className="me-2 d-md-inline">
-{ixAccounts && keyIndex < ixAccounts.length
-  ? `${capitalizeFirstLetter(ixAccounts[keyIndex].name)}`
+{ixAccounts
+  ? keyIndex < ixAccounts.length
+    ? `${camelToTitleCase(ixAccounts[keyIndex].name)}`
+    : `Remaining Account #${keyIndex + 1 - ixAccounts.length}`
  : `Account #${keyIndex + 1}`}
</div>
{isWritable && (
@@ -75,27 +134,23 @@ function RawAnchorDetails({
<span className="badge bg-info-soft me-1">Signer</span>
)}
</td>
-<td className="text-lg-end">
+<td className="text-lg-end" colSpan={2}>
<Address pubkey={pubkey} alignRight link />
</td>
</tr>
);
})}
-<tr>
-  <td>
-    Instruction Data <span className="text-muted">(Hex)</span>
-  </td>
-  {decodedIxData ? (
-    <td className="metadata-json-viewer m-4">
-      <ReactJson src={decodedIxData} theme="solarized" />
-    </td>
-  ) : (
-    <td className="text-lg-end">
-      <HexData raw={ix.data} />
-    </td>
-  )}
-</tr>
+{decodedIxData && ixDef && ixDef.args.length > 0 && (
+  <>
+    <tr className="table-sep">
+      <td>Argument Name</td>
+      <td>Type</td>
+      <td className="text-lg-end">Value</td>
+    </tr>
+    {mapIxArgsToRows(decodedIxData.data, ixDef, anchorProgram.idl)}
+  </>
+)}
</>
);
}

View File

@@ -100,12 +100,16 @@ export function InstructionCard({
children
)}
{innerCards && innerCards.length > 0 && (
-<tr>
-  <td colSpan={2}>
-    Inner Instructions
-    <div className="inner-cards">{innerCards}</div>
-  </td>
-</tr>
+<>
+  <tr className="table-sep">
+    <td colSpan={3}>Inner Instructions</td>
+  </tr>
+  <tr>
+    <td colSpan={3}>
+      <div className="inner-cards">{innerCards}</div>
+    </td>
+  </tr>
+</>
)}
</tbody>
</table>

View File

@@ -271,7 +271,7 @@ function DetailsSections({
account. Please be cautious sending SOL to this account.
</div>
)}
-{<InfoSection account={account} />}
+<InfoSection account={account} />
<MoreSection
account={account}
tab={moreTab}
@@ -517,17 +517,17 @@ function getAnchorTabs(pubkey: PublicKey, account: Account) {
),
});

-const anchorAccountTab: Tab = {
+const accountDataTab: Tab = {
slug: "anchor-account",
-title: "Anchor Account",
+title: "Anchor Data",
path: "/anchor-account",
};
tabComponents.push({
-tab: anchorAccountTab,
+tab: accountDataTab,
component: (
-<React.Suspense key={anchorAccountTab.slug} fallback={<></>}>
-  <AnchorAccountLink
-    tab={anchorAccountTab}
+<React.Suspense key={accountDataTab.slug} fallback={<></>}>
+  <AccountDataLink
+    tab={accountDataTab}
address={pubkey.toString()}
programId={account.details?.owner}
/>
@@ -567,7 +567,7 @@ function AnchorProgramLink({
);
}

-function AnchorAccountLink({
+function AccountDataLink({
address,
tab,
programId,

View File

@@ -317,6 +317,8 @@ async function fetchAccountInfo(
});
}

+const IMAGE_MIME_TYPE_REGEX = /data:image\/(svg\+xml|png|jpeg|gif)/g;
+
const getMetaDataJSON = async (
id: string,
metadata: programs.metadata.MetadataData
@@ -331,9 +333,11 @@ const getMetaDataJSON = async (
}

if (extended?.image) {
-extended.image = extended.image.startsWith("http")
-  ? extended.image
-  : `${metadata.data.uri}/${extended.image}`;
+extended.image =
+  extended.image.startsWith("http") ||
+  IMAGE_MIME_TYPE_REGEX.test(extended.image)
+    ? extended.image
+    : `${metadata.data.uri}/${extended.image}`;
}

return extended;

View File

@@ -1,9 +1,9 @@
//
// tables.scss
// Extended from Bootstrap
//

//
// Bootstrap Overrides =====================================
//
@@ -25,6 +25,15 @@
border-bottom: 0;
}

+.table-sep {
+  background-color: $table-head-bg;
+  text-transform: uppercase;
+  font-size: $font-size-xs;
+  font-weight: $font-weight-bold;
+  letter-spacing: .08em;
+  color: $table-head-color;
+}
+
// Sizing
// Sizing // Sizing

View File

@@ -1,43 +1,33 @@
-import React from "react";
+import React, { Fragment, ReactNode, useState } from "react";
import { Cluster } from "providers/cluster";
import { PublicKey, TransactionInstruction } from "@solana/web3.js";
-import { BorshInstructionCoder, Program } from "@project-serum/anchor";
+import { BorshInstructionCoder, Program, Idl } from "@project-serum/anchor";
import { useAnchorProgram } from "providers/anchor";
import { programLabel } from "utils/tx";
-import { ErrorBoundary } from "@sentry/react";
-
-function snakeToPascal(string: string) {
-  return string
-    .split("/")
-    .map((snake) =>
-      snake
-        .split("_")
-        .map((substr) => substr.charAt(0).toUpperCase() + substr.slice(1))
-        .join("")
-    )
-    .join("/");
-}
+import { snakeToTitleCase, camelToTitleCase, numberWithSeparator } from "utils";
+import {
+  IdlInstruction,
+  IdlType,
+  IdlTypeDef,
+} from "@project-serum/anchor/dist/cjs/idl";
+import { Address } from "components/common/Address";
+import ReactJson from "react-json-view";

export function getProgramName(program: Program | null): string | undefined {
-return program ? snakeToPascal(program.idl.name) : undefined;
+return program ? snakeToTitleCase(program.idl.name) : undefined;
}

-export function capitalizeFirstLetter(input: string) {
-  return input.charAt(0).toUpperCase() + input.slice(1);
-}
-
-function AnchorProgramName({
+export function AnchorProgramName({
programId,
url,
+defaultName = "Unknown Program",
}: {
programId: PublicKey;
url: string;
+defaultName?: string;
}) {
const program = useAnchorProgram(programId.toString(), url);
-if (!program) {
-  throw new Error("No anchor program name found for given programId");
-}
-const programName = getProgramName(program);
+const programName = getProgramName(program) || defaultName;
return <>{programName}</>;
}
@@ -52,12 +42,13 @@ export function ProgramName({
}) {
const defaultProgramName =
programLabel(programId.toBase58(), cluster) || "Unknown Program";
return (
-<React.Suspense fallback={defaultProgramName}>
-  <ErrorBoundary fallback={<>{defaultProgramName}</>}>
-    <AnchorProgramName programId={programId} url={url} />
-  </ErrorBoundary>
+<React.Suspense fallback={<>{defaultProgramName}</>}>
+  <AnchorProgramName
+    programId={programId}
+    url={url}
+    defaultName={defaultProgramName}
+  />
</React.Suspense>
);
}
@@ -107,3 +98,387 @@ export function getAnchorAccountsFromInstruction(
}
return null;
}
export function mapIxArgsToRows(ixArgs: any, ixType: IdlInstruction, idl: Idl) {
return Object.entries(ixArgs).map(([key, value]) => {
try {
const fieldDef = ixType.args.find((ixDefArg) => ixDefArg.name === key);
if (!fieldDef) {
throw Error(
`Could not find expected ${key} field on account type definition for ${ixType.name}`
);
}
return mapField(key, value, fieldDef.type, idl);
} catch (error: any) {
console.log("Error while displaying IDL-based account data", error);
return (
<tr key={key}>
<td>{key}</td>
<td className="text-lg-end">
<td className="metadata-json-viewer m-4">
<ReactJson src={ixArgs} theme="solarized" />
</td>
</td>
</tr>
);
}
});
}
export function mapAccountToRows(
accountData: any,
accountType: IdlTypeDef,
idl: Idl
) {
return Object.entries(accountData).map(([key, value]) => {
try {
if (accountType.type.kind !== "struct") {
throw Error(
`Account ${accountType.name} is of type ${accountType.type.kind} (expected: 'struct')`
);
}
const fieldDef = accountType.type.fields.find(
(ixDefArg) => ixDefArg.name === key
);
if (!fieldDef) {
throw Error(
`Could not find expected ${key} field on account type definition for ${accountType.name}`
);
}
return mapField(key, value as any, fieldDef.type, idl);
} catch (error: any) {
console.log("Error while displaying IDL-based account data", error);
return (
<tr key={key}>
<td>{key}</td>
<td className="text-lg-end">
<td className="metadata-json-viewer m-4">
<ReactJson src={accountData} theme="solarized" />
</td>
</td>
</tr>
);
}
});
}
function mapField(
key: string,
value: any,
type: IdlType,
idl: Idl,
keySuffix?: any,
nestingLevel: number = 0
): ReactNode {
let itemKey = key;
if (/^-?\d+$/.test(keySuffix)) {
itemKey = `#${keySuffix}`;
}
itemKey = camelToTitleCase(itemKey);
if (value === undefined) {
return (
<SimpleRow
key={keySuffix ? `${key}-${keySuffix}` : key}
rawKey={key}
type={type}
keySuffix={keySuffix}
nestingLevel={nestingLevel}
>
<div>null</div>
</SimpleRow>
);
}
if (
type === "u8" ||
type === "i8" ||
type === "u16" ||
type === "i16" ||
type === "u32" ||
type === "i32" ||
type === "f32" ||
type === "u64" ||
type === "i64" ||
type === "f64" ||
type === "u128" ||
type === "i128"
) {
return (
<SimpleRow
key={keySuffix ? `${key}-${keySuffix}` : key}
rawKey={key}
type={type}
keySuffix={keySuffix}
nestingLevel={nestingLevel}
>
<div>{numberWithSeparator(value.toString())}</div>
</SimpleRow>
);
} else if (type === "bool" || type === "bytes" || type === "string") {
return (
<SimpleRow
key={keySuffix ? `${key}-${keySuffix}` : key}
rawKey={key}
type={type}
keySuffix={keySuffix}
nestingLevel={nestingLevel}
>
<div>{value.toString()}</div>
</SimpleRow>
);
} else if (type === "publicKey") {
return (
<SimpleRow
key={keySuffix ? `${key}-${keySuffix}` : key}
rawKey={key}
type={type}
keySuffix={keySuffix}
nestingLevel={nestingLevel}
>
<Address pubkey={value} link alignRight />
</SimpleRow>
);
} else if ("defined" in type) {
const fieldType = idl.types?.find((t) => t.name === type.defined);
if (!fieldType) {
throw Error(`Could not find type definition for ${type.defined} field in IDL`);
}
if (fieldType.type.kind === "struct") {
const structFields = fieldType.type.fields;
return (
<ExpandableRow
fieldName={itemKey}
fieldType={typeDisplayName(type)}
nestingLevel={nestingLevel}
key={keySuffix ? `${key}-${keySuffix}` : key}
>
<Fragment key={keySuffix ? `${key}-${keySuffix}` : key}>
{Object.entries(value).map(
([innerKey, innerValue]: [string, any]) => {
const innerFieldType = structFields.find(
(t) => t.name === innerKey
);
if (!innerFieldType) {
throw Error(
`Could not find type definition for ${innerKey} field in user-defined struct ${fieldType.name}`
);
}
return mapField(
innerKey,
innerValue,
innerFieldType?.type,
idl,
key,
nestingLevel + 1
);
}
)}
</Fragment>
</ExpandableRow>
);
} else {
const enumValue = Object.keys(value)[0];
return (
<SimpleRow
key={keySuffix ? `${key}-${keySuffix}` : key}
rawKey={key}
type={{ enum: type.defined }}
keySuffix={keySuffix}
nestingLevel={nestingLevel}
>
{camelToTitleCase(enumValue)}
</SimpleRow>
);
}
} else if ("option" in type) {
if (value === null) {
return (
<SimpleRow
key={keySuffix ? `${key}-${keySuffix}` : key}
rawKey={key}
type={type}
keySuffix={keySuffix}
nestingLevel={nestingLevel}
>
Not provided
</SimpleRow>
);
}
return mapField(key, value, type.option, idl, key, nestingLevel);
} else if ("vec" in type) {
const itemType = type.vec;
return (
<ExpandableRow
fieldName={itemKey}
fieldType={typeDisplayName(type)}
nestingLevel={nestingLevel}
key={keySuffix ? `${key}-${keySuffix}` : key}
>
<Fragment key={keySuffix ? `${key}-${keySuffix}` : key}>
{(value as any[]).map((item, i) =>
mapField(key, item, itemType, idl, i, nestingLevel + 1)
)}
</Fragment>
</ExpandableRow>
);
} else if ("array" in type) {
const [itemType] = type.array;
return (
<ExpandableRow
fieldName={itemKey}
fieldType={typeDisplayName(type)}
nestingLevel={nestingLevel}
key={keySuffix ? `${key}-${keySuffix}` : key}
>
<Fragment key={keySuffix ? `${key}-${keySuffix}` : key}>
{(value as any[]).map((item, i) =>
mapField(key, item, itemType, idl, i, nestingLevel + 1)
)}
</Fragment>
</ExpandableRow>
);
} else {
console.log("Impossible type:", type);
return (
<tr key={keySuffix ? `${key}-${keySuffix}` : key}>
<td>{camelToTitleCase(key)}</td>
<td></td>
<td className="text-lg-end">???</td>
</tr>
);
}
}
function SimpleRow({
rawKey,
type,
keySuffix,
nestingLevel = 0,
children,
}: {
rawKey: string;
type: IdlType | { enum: string };
keySuffix?: any;
nestingLevel: number;
children?: ReactNode;
}) {
let itemKey = rawKey;
if (/^-?\d+$/.test(keySuffix)) {
itemKey = `#${keySuffix}`;
}
itemKey = camelToTitleCase(itemKey);
return (
<tr
style={{
...(nestingLevel === 0 ? {} : { backgroundColor: "#141816" }),
}}
>
<td className="d-flex flex-row">
{nestingLevel > 0 && (
<span
className="text-info fe fe-corner-down-right me-2"
style={{
paddingLeft: `${15 * nestingLevel}px`,
}}
/>
)}
<div>{itemKey}</div>
</td>
<td>{typeDisplayName(type)}</td>
<td className="text-lg-end">{children}</td>
</tr>
);
}
export function ExpandableRow({
fieldName,
fieldType,
nestingLevel,
children,
}: {
fieldName: string;
fieldType: string;
nestingLevel: number;
children: React.ReactNode;
}) {
const [expanded, setExpanded] = useState(false);
return (
<>
<tr
style={{
...(nestingLevel === 0 ? {} : { backgroundColor: "#141816" }),
}}
>
<td className="d-flex flex-row">
{nestingLevel > 0 && (
<div
className="text-info fe fe-corner-down-right me-2"
style={{
paddingLeft: `${15 * nestingLevel}px`,
}}
/>
)}
<div>{fieldName}</div>
</td>
<td>{fieldType}</td>
<td
className="text-lg-end"
onClick={() => setExpanded((current) => !current)}
>
<div className="c-pointer">
{expanded ? (
<>
<span className="text-info me-2">Collapse</span>
<span className="fe fe-chevron-up" />
</>
) : (
<>
<span className="text-info me-2">Expand</span>
<span className="fe fe-chevron-down" />
</>
)}
</div>
</td>
</tr>
{expanded && <>{children}</>}
</>
);
}
function typeDisplayName(
type:
| IdlType
| {
enum: string;
}
): string {
switch (type) {
case "bool":
case "u8":
case "i8":
case "u16":
case "i16":
case "u32":
case "i32":
case "f32":
case "u64":
case "i64":
case "f64":
case "u128":
case "i128":
case "bytes":
case "string":
return type.toString();
case "publicKey":
return "PublicKey";
default:
if ("enum" in type) return `${type.enum} (enum)`;
if ("defined" in type) return type.defined;
if ("option" in type) return `${typeDisplayName(type.option)} (optional)`;
if ("vec" in type) return `${typeDisplayName(type.vec)}[]`;
if ("array" in type)
return `${typeDisplayName(type.array[0])}[${type.array[1]}]`;
return "unkonwn";
}
}

View File

@@ -56,6 +56,10 @@ export function lamportsToSolString(
return new Intl.NumberFormat("en-US", { maximumFractionDigits }).format(sol);
}

+export function numberWithSeparator(s: string) {
+  return s.replace(/\B(?=(\d{3})+(?!\d))/g, ",");
+}
+
export function SolBalance({
lamports,
maximumFractionDigits = 9,
@@ -126,6 +130,27 @@ export function camelToTitleCase(str: string): string {
return result.charAt(0).toUpperCase() + result.slice(1);
}

+export function snakeToTitleCase(str: string): string {
+  const result = str.replace(/([-_]\w)/g, (g) => ` ${g[1].toUpperCase()}`);
+  return result.charAt(0).toUpperCase() + result.slice(1);
+}
+
+export function snakeToPascal(string: string) {
+  return string
+    .split("/")
+    .map((snake) =>
+      snake
+        .split("_")
+        .map((substr) => substr.charAt(0).toUpperCase() + substr.slice(1))
+        .join("")
+    )
+    .join("/");
+}
+
+export function capitalizeFirstLetter(input: string) {
+  return input.charAt(0).toUpperCase() + input.slice(1);
+}
+
export function abbreviatedNumber(value: number, fixed = 1) {
if (value < 1e3) return value;
if (value >= 1e3 && value < 1e6) return +(value / 1e3).toFixed(fixed) + "K";

View File

@@ -1,25 +1,27 @@
import { PublicKey, Connection } from "@solana/web3.js";
import {
-getFilteredProgramAccounts,
getHashedName,
getNameAccountKey,
-NameRegistryState,
+getNameOwner,
+getFilteredProgramAccounts,
NAME_PROGRAM_ID,
+performReverseLookup,
} from "@bonfida/spl-name-service";
-import BN from "bn.js";
import { useState, useEffect } from "react";
import { Cluster, useCluster } from "providers/cluster";

-// Name auctionning Program ID
-export const PROGRAM_ID = new PublicKey(
-  "jCebN34bUfdeUYJT13J1yG16XWQpt5PDx6Mse9GUqhR"
-);
+// Address of the SOL TLD
+const SOL_TLD_AUTHORITY = new PublicKey(
+  "58PwtjSDuFHuUkYjH9BYnnQKHfwo9reZhC2zMJv9JPkx"
+);

export interface DomainInfo {
name: string;
address: PublicKey;
-class: PublicKey;
}

+export const hasDomainSyntax = (value: string) => {
+  return value.length > 4 && value.substring(value.length - 4) === ".sol";
+};
+
async function getDomainKey(
name: string,
@@ -35,15 +37,43 @@ async function getDomainKey(
return nameKey;
}

-export async function findOwnedNameAccountsForUser(
+// returns non empty wallet string if a given .sol domain is owned by a wallet
+export async function getDomainInfo(domain: string, connection: Connection) {
+  const domainKey = await getDomainKey(
+    domain.slice(0, -4), // remove .sol
+    undefined,
+    SOL_TLD_AUTHORITY
+  );
+  try {
+    const registry = await getNameOwner(connection, domainKey);
+    return registry && registry.registry.owner
+      ? {
+          owner: registry.registry.owner.toString(),
+          address: domainKey.toString(),
+        }
+      : null;
+  } catch {
+    return null;
+  }
+}
+
+async function getUserDomainAddresses(
connection: Connection,
-userAccount: PublicKey
+userAddress: PublicKey
): Promise<PublicKey[]> {
const filters = [
+// parent
+{
+  memcmp: {
+    offset: 0,
+    bytes: SOL_TLD_AUTHORITY.toBase58(),
+  },
+},
+// owner
{
memcmp: {
offset: 32,
-bytes: userAccount.toBase58(),
+bytes: userAddress.toBase58(),
},
},
];
@@ -55,41 +85,8 @@ export async function findOwnedNameAccountsForUser(
return accounts.map((a) => a.publicKey);
}

-export async function performReverseLookup(
-  connection: Connection,
-  nameAccounts: PublicKey[]
-): Promise<DomainInfo[]> {
-  let [centralState] = await PublicKey.findProgramAddress(
-    [PROGRAM_ID.toBuffer()],
-    PROGRAM_ID
-  );
-
-  const reverseLookupAccounts = await Promise.all(
-    nameAccounts.map((name) => getDomainKey(name.toBase58(), centralState))
-  );
-
-  let names = await NameRegistryState.retrieveBatch(
-    connection,
-    reverseLookupAccounts
-  );
-
-  return names
-    .map((name) => {
-      if (!name?.data) {
-        return undefined;
-      }
-      const nameLength = new BN(name!.data.slice(0, 4), "le").toNumber();
-      return {
-        name: name.data.slice(4, 4 + nameLength).toString() + ".sol",
-        address: name.address,
-        class: name.class,
-      };
-    })
-    .filter((e) => !!e) as DomainInfo[];
-}
-
export const useUserDomains = (
-address: PublicKey
+userAddress: PublicKey
): [DomainInfo[] | null, boolean] => {
const { url, cluster } = useCluster();
const [result, setResult] = useState<DomainInfo[] | null>(null);
@@ -102,12 +99,21 @@ export const useUserDomains = (
const connection = new Connection(url, "confirmed");
try {
setLoading(true);
-const domains = await findOwnedNameAccountsForUser(connection, address);
-let names = await performReverseLookup(connection, domains);
-names.sort((a, b) => {
-  return a.name.localeCompare(b.name);
-});
-setResult(names);
+const userDomainAddresses = await getUserDomainAddresses(
+  connection,
+  userAddress
+);
+const userDomains = await Promise.all(
+  userDomainAddresses.map(async (address) => {
+    const domainName = await performReverseLookup(connection, address);
+    return {
+      name: `${domainName}.sol`,
+      address,
+    };
+  })
+);
+userDomains.sort((a, b) => a.name.localeCompare(b.name));
+setResult(userDomains);
} catch (err) {
console.log(`Error fetching user domains ${err}`);
} finally {
@@ -115,7 +121,7 @@ export const useUserDomains = (
}
};
resolve();
-}, [address, url]); // eslint-disable-line react-hooks/exhaustive-deps
+}, [userAddress, url]); // eslint-disable-line react-hooks/exhaustive-deps

return [result, loading];
};

View File

@@ -107,6 +107,10 @@ pub enum GeyserPluginError {
/// Any custom error defined by the plugin.
#[error("Plugin-defined custom error. Error message: ({0})")]
Custom(Box<dyn error::Error + Send + Sync>),
+
+/// Error when updating the transaction.
+#[error("Error updating transaction. Error message: ({msg})")]
+TransactionUpdateError { msg: String },
}
/// The current status of a slot /// The current status of a slot

View File

@@ -18,7 +18,7 @@ flate2 = "1.0"
indexmap = { version = "1.8", features = ["rayon"] }
itertools = "0.10.3"
log = "0.4.14"
-lru = "0.7.3"
+lru = "0.7.5"
matches = "0.1.9"
num-traits = "0.2"
rand = "0.7.0"

View File

@@ -536,14 +536,7 @@ pub(crate) fn submit_gossip_stats(
         .pull
         .votes
         .into_iter()
-        .map(|(slot, num_votes)| (slot, num_votes))
-        .chain(
-            crds_stats
-                .push
-                .votes
-                .into_iter()
-                .map(|(slot, num_votes)| (slot, num_votes)),
-        )
+        .chain(crds_stats.push.votes.into_iter())
         .into_grouping_map()
         .aggregate(|acc, _slot, num_votes| Some(acc.unwrap_or_default() + num_votes));
     submit_vote_stats("cluster_info_crds_stats_votes", &votes);
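
The removed .map(|(slot, num_votes)| (slot, num_votes)) calls were identity mappings, so the two vote iterators can be chained directly before grouping. A standalone sketch of the same itertools pattern, with invented sample data:

    use itertools::Itertools;
    use std::collections::HashMap;

    fn main() {
        // Two sources of (slot, num_votes) pairs, chained and summed per slot.
        let pull_votes = vec![(1u64, 2u64), (2, 3)];
        let push_votes = vec![(1u64, 5u64)];
        let votes: HashMap<u64, u64> = pull_votes
            .into_iter()
            .chain(push_votes.into_iter())
            .into_grouping_map()
            .aggregate(|acc, _slot, num_votes| Some(acc.unwrap_or(0) + num_votes));
        assert_eq!(votes[&1], 7); // 2 from pull + 5 from push
        assert_eq!(votes[&2], 3);
    }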

@@ -23,7 +23,7 @@ indicatif = "0.16.2"
 lazy_static = "1.4.0"
 nix = "0.23.1"
 reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
-semver = "1.0.6"
+semver = "1.0.7"
 serde = { version = "1.0.136", features = ["derive"] }
 serde_yaml = "0.8.23"
 solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }

@@ -16,6 +16,7 @@ use {
     },
     solana_ledger::{blockstore::Blockstore, blockstore_db::AccessType},
     solana_sdk::{clock::Slot, pubkey::Pubkey, signature::Signature},
+    solana_storage_bigtable::CredentialType,
    solana_transaction_status::{
         BlockEncodingOptions, ConfirmedBlock, EncodeError, TransactionDetails,
         UiTransactionEncoding,
@@ -642,16 +643,18 @@ pub fn bigtable_process_command(ledger_path: &Path, matches: &ArgMatches<'_>) {
                 instance_name,
                 ..solana_storage_bigtable::LedgerStorageConfig::default()
             };
             let credential_path = Some(value_t_or_exit!(
                 arg_matches,
                 "reference_credential",
                 String
             ));
             let ref_instance_name =
                 value_t_or_exit!(arg_matches, "reference_instance_name", String);
             let ref_config = solana_storage_bigtable::LedgerStorageConfig {
                 read_only: false,
-                credential_path,
+                credential_type: CredentialType::Filepath(credential_path),
                 instance_name: ref_instance_name,
                 ..solana_storage_bigtable::LedgerStorageConfig::default()
             };
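
Replacing the bare credential_path field with an enum leaves room for credential sources other than a file on disk. A rough sketch of the shape this diff implies; the real definitions live in solana-storage-bigtable and may carry more variants and fields:

    // Shapes inferred from the diff above, not the crate's full definitions.
    enum CredentialType {
        Filepath(Option<String>),
    }

    struct LedgerStorageConfig {
        read_only: bool,
        credential_type: CredentialType,
        instance_name: String,
    }

    fn main() {
        let ref_config = LedgerStorageConfig {
            read_only: false,
            credential_type: CredentialType::Filepath(Some("credentials.json".to_string())),
            instance_name: "reference".to_string(),
        };
        match ref_config.credential_type {
            CredentialType::Filepath(path) => println!("loading credentials from {:?}", path),
        }
    }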

@@ -32,13 +32,14 @@ use {
     },
     solana_measure::measure::Measure,
     solana_runtime::{
-        accounts_db::AccountsDbConfig,
-        accounts_index::{AccountsIndexConfig, ScanConfig},
+        accounts_db::{AccountsDbConfig, FillerAccountsConfig},
+        accounts_index::{AccountsIndexConfig, IndexLimitMb, ScanConfig},
         bank::{Bank, RewardCalculationEvent},
         bank_forks::BankForks,
         cost_model::CostModel,
         cost_tracker::CostTracker,
         hardened_unpack::{open_genesis_config, MAX_GENESIS_ARCHIVE_UNPACKED_SIZE},
+        runtime_config::RuntimeConfig,
         snapshot_archive_info::SnapshotArchiveInfoGetter,
         snapshot_config::SnapshotConfig,
         snapshot_hash::StartingSnapshotHashes,
@@ -822,7 +823,7 @@ fn compute_slot_cost(blockstore: &Blockstore, slot: Slot) -> Result<(), String>
         num_programs += transaction.message().instructions().len();

         let tx_cost = cost_model.calculate_cost(&transaction);
-        let result = cost_tracker.try_add(&transaction, &tx_cost);
+        let result = cost_tracker.try_add(&tx_cost);
         if result.is_err() {
             println!(
                 "Slot: {}, CostModel rejected transaction {:?}, reason {:?}",
@@ -880,7 +881,7 @@ fn main() {
     let starting_slot_arg = Arg::with_name("starting_slot")
         .long("starting-slot")
-        .value_name("NUM")
+        .value_name("SLOT")
         .takes_value(true)
         .default_value("0")
         .help("Start at this slot");
@@ -928,7 +929,16 @@ fn main() {
         .value_name("COUNT")
         .validator(is_parsable::<usize>)
         .takes_value(true)
+        .default_value("0")
         .help("How many accounts to add to stress the system. Accounts are ignored in operations related to correctness.");
+    let accounts_filler_size = Arg::with_name("accounts_filler_size")
+        .long("accounts-filler-size")
+        .value_name("BYTES")
+        .validator(is_parsable::<usize>)
+        .takes_value(true)
+        .default_value("0")
+        .requires("accounts_filler_count")
+        .help("Size per filler account in bytes.");
     let account_paths_arg = Arg::with_name("account_paths")
         .long("accounts")
         .value_name("PATHS")
@@ -1119,7 +1129,7 @@ fn main() {
         .arg(
             Arg::with_name("target_db")
                 .long("target-db")
-                .value_name("PATH")
+                .value_name("DIR")
                 .takes_value(true)
                 .help("Target db"),
         )
@@ -1283,6 +1293,7 @@ fn main() {
         .arg(&disable_disk_index)
         .arg(&accountsdb_skip_shrink)
         .arg(&accounts_filler_count)
+        .arg(&accounts_filler_size)
         .arg(&verify_index_arg)
         .arg(&hard_forks_arg)
         .arg(&no_accounts_db_caching_arg)
@@ -1775,9 +1786,12 @@ fn main() {
         }
         ("shred-version", Some(arg_matches)) => {
             let process_options = ProcessOptions {
-                dev_halt_at_slot: Some(0),
                 new_hard_forks: hardforks_of(arg_matches, "hard_forks"),
                 poh_verify: false,
+                runtime_config: RuntimeConfig {
+                    dev_halt_at_slot: Some(0),
+                    ..RuntimeConfig::default()
+                },
                 ..ProcessOptions::default()
             };
             let genesis_config = open_genesis_config_by(&ledger_path, arg_matches);
@@ -1852,9 +1866,12 @@ fn main() {
         }
         ("bank-hash", Some(arg_matches)) => {
             let process_options = ProcessOptions {
-                dev_halt_at_slot: Some(0),
                 new_hard_forks: hardforks_of(arg_matches, "hard_forks"),
                 poh_verify: false,
+                runtime_config: RuntimeConfig {
+                    dev_halt_at_slot: Some(0),
+                    ..RuntimeConfig::default()
+                },
                 ..ProcessOptions::default()
             };
             let genesis_config = open_genesis_config_by(&ledger_path, arg_matches);
@@ -2042,13 +2059,15 @@ fn main() {
                 let system_monitor_service =
                     SystemMonitorService::new(Arc::clone(&exit_signal), true, false);

-                if let Some(limit) =
+                accounts_index_config.index_limit_mb = if let Some(limit) =
                     value_t!(arg_matches, "accounts_index_memory_limit_mb", usize).ok()
                 {
-                    accounts_index_config.index_limit_mb = Some(limit);
+                    IndexLimitMb::Limit(limit)
                 } else if arg_matches.is_present("disable_accounts_disk_index") {
-                    accounts_index_config.index_limit_mb = None;
-                }
+                    IndexLimitMb::InMemOnly
+                } else {
+                    IndexLimitMb::Unspecified
+                };

                 {
                     let mut accounts_index_paths: Vec<PathBuf> =
@@ -2066,21 +2085,21 @@ fn main() {
                     accounts_index_config.drives = Some(accounts_index_paths);
                 }

-                let filler_account_count =
-                    value_t!(arg_matches, "accounts_filler_count", usize).ok();
+                let filler_accounts_config = FillerAccountsConfig {
+                    count: value_t_or_exit!(arg_matches, "accounts_filler_count", usize),
+                    size: value_t_or_exit!(arg_matches, "accounts_filler_size", usize),
+                };

                 let accounts_db_config = Some(AccountsDbConfig {
                     index: Some(accounts_index_config),
                     accounts_hash_cache_path: Some(ledger_path.clone()),
-                    filler_account_count,
+                    filler_accounts_config,
                     ..AccountsDbConfig::default()
                 });

                 let process_options = ProcessOptions {
-                    dev_halt_at_slot: value_t!(arg_matches, "halt_at_slot", Slot).ok(),
                     new_hard_forks: hardforks_of(arg_matches, "hard_forks"),
                     poh_verify: !arg_matches.is_present("skip_poh_verify"),
-                    bpf_jit: !matches.is_present("no_bpf_jit"),
                     accounts_db_caching_enabled: !arg_matches.is_present("no_accounts_db_caching"),
                     limit_load_slot_count_from_snapshot: value_t!(
                         arg_matches,
@@ -2094,6 +2113,11 @@ fn main() {
                     accounts_db_test_hash_calculation: arg_matches
                         .is_present("accounts_db_test_hash_calculation"),
                     accounts_db_skip_shrink: arg_matches.is_present("accounts_db_skip_shrink"),
+                    runtime_config: RuntimeConfig {
+                        dev_halt_at_slot: value_t!(arg_matches, "halt_at_slot", Slot).ok(),
+                        bpf_jit: !matches.is_present("no_bpf_jit"),
+                        ..RuntimeConfig::default()
+                    },
                     ..ProcessOptions::default()
                 };
                 let print_accounts_stats = arg_matches.is_present("print_accounts_stats");
@@ -2130,9 +2154,12 @@ fn main() {
             let output_file = value_t_or_exit!(arg_matches, "graph_filename", String);
             let process_options = ProcessOptions {
-                dev_halt_at_slot: value_t!(arg_matches, "halt_at_slot", Slot).ok(),
                 new_hard_forks: hardforks_of(arg_matches, "hard_forks"),
                 poh_verify: false,
+                runtime_config: RuntimeConfig {
+                    dev_halt_at_slot: value_t!(arg_matches, "halt_at_slot", Slot).ok(),
+                    ..RuntimeConfig::default()
+                },
                 ..ProcessOptions::default()
             };
@@ -2256,9 +2283,12 @@ fn main() {
                     &genesis_config,
                     &blockstore,
                     ProcessOptions {
-                        dev_halt_at_slot: Some(snapshot_slot),
                         new_hard_forks,
                         poh_verify: false,
+                        runtime_config: RuntimeConfig {
+                            dev_halt_at_slot: Some(snapshot_slot),
+                            ..RuntimeConfig::default()
+                        },
                         ..ProcessOptions::default()
                     },
                     snapshot_archive_path,
@@ -2553,9 +2583,12 @@ fn main() {
         ("accounts", Some(arg_matches)) => {
             let dev_halt_at_slot = value_t!(arg_matches, "halt_at_slot", Slot).ok();
             let process_options = ProcessOptions {
-                dev_halt_at_slot,
                 new_hard_forks: hardforks_of(arg_matches, "hard_forks"),
                 poh_verify: false,
+                runtime_config: RuntimeConfig {
+                    dev_halt_at_slot,
+                    ..RuntimeConfig::default()
+                },
                 ..ProcessOptions::default()
             };
             let genesis_config = open_genesis_config_by(&ledger_path, arg_matches);
@@ -2616,9 +2649,12 @@ fn main() {
         ("capitalization", Some(arg_matches)) => {
             let dev_halt_at_slot = value_t!(arg_matches, "halt_at_slot", Slot).ok();
             let process_options = ProcessOptions {
-                dev_halt_at_slot,
                 new_hard_forks: hardforks_of(arg_matches, "hard_forks"),
                 poh_verify: false,
+                runtime_config: RuntimeConfig {
+                    dev_halt_at_slot,
+                    ..RuntimeConfig::default()
+                },
                 ..ProcessOptions::default()
             };
             let genesis_config = open_genesis_config_by(&ledger_path, arg_matches);
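
Every ledger-tool subcommand above repeats the same refactor: dev_halt_at_slot (and bpf_jit) move off ProcessOptions into the nested RuntimeConfig. A compilable sketch of the pattern with stand-in structs, trimmed to the fields this diff shows:

    #[derive(Default)]
    struct RuntimeConfig {
        dev_halt_at_slot: Option<u64>,
        bpf_jit: bool,
    }

    #[derive(Default)]
    struct ProcessOptions {
        poh_verify: bool,
        runtime_config: RuntimeConfig,
    }

    fn main() {
        // The halt slot now routes through RuntimeConfig rather than a
        // top-level ProcessOptions field.
        let opts = ProcessOptions {
            poh_verify: false,
            runtime_config: RuntimeConfig {
                dev_halt_at_slot: Some(0),
                ..RuntimeConfig::default()
            },
            ..ProcessOptions::default()
        };
        assert_eq!(opts.runtime_config.dev_halt_at_slot, Some(0));
        assert!(!opts.runtime_config.bpf_jit);
    }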

@@ -22,11 +22,11 @@ itertools = "0.10.3"
 lazy_static = "1.4.0"
 libc = "0.2.120"
 log = { version = "0.4.14" }
-lru = "0.7.3"
+lru = "0.7.5"
 num-derive = "0.3"
 num-traits = "0.2"
 num_cpus = "1.13.1"
-prost = "0.9.0"
+prost = "0.10.0"
 rand = "0.7.0"
 rand_chacha = "0.2.2"
 rayon = "1.5.1"

@@ -125,13 +125,12 @@ pub fn load_bank_forks(
             accounts_update_notifier,
         )
     } else {
-        if process_options
+        let maybe_filler_accounts = process_options
             .accounts_db_config
             .as_ref()
-            .and_then(|config| config.filler_account_count)
-            .unwrap_or_default()
-            > 0
-        {
+            .map(|config| config.filler_accounts_config.count > 0);
+
+        if let Some(true) = maybe_filler_accounts {
             panic!("filler accounts specified, but not loading from snapshot");
         }
@@ -212,14 +211,16 @@ fn bank_forks_from_snapshot(
         process::exit(1);
     }

-    let (deserialized_bank, full_snapshot_archive_info, incremental_snapshot_archive_info) =
+    let (mut deserialized_bank, full_snapshot_archive_info, incremental_snapshot_archive_info) =
         snapshot_utils::bank_from_latest_snapshot_archives(
             &snapshot_config.bank_snapshots_dir,
             &snapshot_config.snapshot_archives_dir,
             &account_paths,
             genesis_config,
             process_options.debug_keys.clone(),
-            Some(&crate::builtins::get(process_options.bpf_jit)),
+            Some(&crate::builtins::get(
+                process_options.runtime_config.bpf_jit,
+            )),
             process_options.account_indexes.clone(),
             process_options.accounts_db_caching_enabled,
             process_options.limit_load_slot_count_from_snapshot,
@@ -236,6 +237,8 @@ fn bank_forks_from_snapshot(
         deserialized_bank.set_shrink_paths(shrink_paths);
     }

+    deserialized_bank.set_compute_budget(process_options.runtime_config.compute_budget);
+
     let full_snapshot_hash = FullSnapshotHash {
         hash: (
             full_snapshot_archive_info.slot(),
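
The load_bank_forks change swaps a chained and_then(...).unwrap_or_default() > 0 test for a named Option<bool>, which reads more directly. A reduced sketch of the equivalent check, with the structs trimmed to the fields used here:

    struct FillerAccountsConfig { count: usize }
    struct AccountsDbConfig { filler_accounts_config: FillerAccountsConfig }

    fn filler_accounts_requested(config: Option<&AccountsDbConfig>) -> bool {
        // None (no config) and Some(false) (count == 0) both mean "not requested".
        let maybe_filler_accounts = config.map(|c| c.filler_accounts_config.count > 0);
        matches!(maybe_filler_accounts, Some(true))
    }

    fn main() {
        let cfg = AccountsDbConfig { filler_accounts_config: FillerAccountsConfig { count: 3 } };
        assert!(filler_accounts_requested(Some(&cfg)));
        assert!(!filler_accounts_requested(None));
    }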

Some files were not shown because too many files have changed in this diff.