Compare commits

..

209 Commits

Author SHA1 Message Date
dependabot[bot]
f6156f7ef0 chore: bump axios from 0.21.1 to 0.21.4 in /docs
Bumps [axios](https://github.com/axios/axios) from 0.21.1 to 0.21.4.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/master/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v0.21.1...v0.21.4)

---
updated-dependencies:
- dependency-name: axios
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-06 17:23:44 +00:00
Noah Gundotra
559ee5a843 Explorer: Add Anchor Decoding to Programs/Accounts/Transactions (#23972)
* Add program idl to the Program page
* Add instruction decoding to the Tx page
* Add account decoding to the Account page
2022-04-06 10:22:49 -07:00
behzad nouri
cd09390367 reduces gossip crds stats (#24132) 2022-04-06 15:35:25 +00:00
BG Zhu
22224127e0 Refactor thin_client::create_client (#24067)
Refactor the thin_client::create_client to take addresses separately instead of as a tuple

Co-authored-by: Bijie Zhu <bijiezhu@Bijies-MBP.cable.rcn.com>
2022-04-06 11:03:38 -04:00
ryleung-solana
a38bd4acc8 Use LRU in connection-cache (#24109)
Switch to using LRU for connection-cache
2022-04-06 10:58:32 -04:00
Brooks Prumo
c322842257 Replace channel with Mutex<Option> for AccountsPackage (#24013) 2022-04-06 05:47:19 -05:00
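The pattern named in the title above, a shared `Mutex<Option<T>>`, acts as a single-slot, latest-value-wins handoff, unlike a channel, which queues every package. A minimal sketch of the idea with placeholder types (not the actual snapshot code):

```rust
use std::sync::{Arc, Mutex};

// Placeholder stand-in for the real AccountsPackage.
struct AccountsPackage {
    slot: u64,
}

type PendingPackage = Arc<Mutex<Option<AccountsPackage>>>;

// Producer: overwrite whatever is pending; an older, unserviced package is dropped.
fn submit(pending: &PendingPackage, pkg: AccountsPackage) {
    *pending.lock().unwrap() = Some(pkg);
}

// Consumer: take the latest package, if any, leaving the slot empty.
fn take(pending: &PendingPackage) -> Option<AccountsPackage> {
    pending.lock().unwrap().take()
}

fn main() {
    let pending: PendingPackage = Arc::new(Mutex::new(None));
    submit(&pending, AccountsPackage { slot: 1 });
    submit(&pending, AccountsPackage { slot: 2 }); // slot 1 is discarded
    assert_eq!(take(&pending).map(|p| p.slot), Some(2));
    assert!(take(&pending).is_none());
}
```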
Alexander Meißner
07f4a9040a Removes KeyedAccount from tests in stake instruction. (Part 4) (#24124)
* Moves tests from stake state to stake instruction.

* Migrates test_merge.

* Migrates test_merge_self_fails.

* Migrates test_merge_incorrect_authorized_staker.

* Migrates test_merge_invalid_account_data.

* Migrates test_merge_fake_stake_source.

* Migrates test_merge_active_stake.
2022-04-06 12:04:35 +02:00
Yueh-Hsuan Chiang
24cc6c33de (LedgerStore)(Refactor) Move metric reporting functions to a dedicate mod (#24060)
Previously, the metric reporting functions were implemented under LedgerColumnMetric.
However, there are operations, such as write batch, that are issued by functions inside Rocks.

This PR moves the reporting functions into their own dedicated mod so that both LedgerColumn and
Rocks can report column perf metrics.
2022-04-05 15:06:17 -07:00
HaoranYi
302142bb25 fix typo (#24123) 2022-04-05 15:55:47 -05:00
behzad nouri
db23295e1c removes legacy weighted_shuffle and weighted_best methods (#24125)
Older weighted_shuffle is based on a heuristic which results in biased
samples as shown in:
https://github.com/solana-labs/solana/pull/18343
and can be replaced with WeightedShuffle.

Also, as described in:
https://github.com/solana-labs/solana/pull/13919
weighted_best can be replaced with rand::distributions::WeightedIndex,
or WeightedShuffle::first.
2022-04-05 19:19:22 +00:00
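For reference, the `rand::distributions::WeightedIndex` replacement mentioned above is used roughly like this; the peer weights are made up for illustration:

```rust
use rand::{distributions::WeightedIndex, prelude::*};

fn main() {
    // Stake-like weights for four hypothetical peers.
    let weights = [10u64, 1, 5, 25];

    // WeightedIndex samples an index with probability proportional to its weight,
    // covering the old `weighted_best` use case without the biased heuristic.
    let dist = WeightedIndex::new(&weights).expect("non-empty, non-negative weights");
    let mut rng = thread_rng();
    let picked = dist.sample(&mut rng);
    println!("picked peer index {picked} with weight {}", weights[picked]);
}
```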
carllin
4ea59d8cb4 Set drop callback on first root bank (#23999) 2022-04-05 13:02:33 -05:00
behzad nouri
2282571493 removes outdated and flaky test_skip_repair from retransmit-stage (#24121)
test_skip_repair in retransmit-stage is no longer relevant because,
following https://github.com/solana-labs/solana/pull/19233,
repair packets are filtered out earlier in window-service, so the
retransmit stage does not know whether a shred was repaired or not.
Also, following the turbine peer shuffle changes in
https://github.com/solana-labs/solana/pull/24080,
the test has become flaky since it does not take into account how peers
are shuffled for each shred.
2022-04-05 16:02:53 +00:00
HaoranYi
eb65ffb779 Optimize replay waking up (#24051)
* timeout for validator exits

* clippy

* print backtrace when panic

* add backtrace package

* increase time out to 30s

* debug logging

* make rpc complete service non blocking

* reduce log level

* remove logging

* recv_timeout

* remove backtrace

* remove sleep

* wip

* remove unused variable

* add comments

* Update core/src/validator.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/validator.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* whitespace

* more whitespace

* fix build

* clean up import

* add mutex for signal senders in blockstore

* remove mut

* refactor: extract add signal functions

* make blockstore signal private

* increase replay wake up channel bounds

* reduce replay wakeup signal bound to 1

* reduce log level

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-04-05 08:57:12 -05:00
Jeff Washington (jwash)
4a11fa072f hash_account_with_rent_epoch (#24104) 2022-04-05 08:10:31 -05:00
samkim-crypto
ba92ba0e06 Zk instructions check length (#24103)
* zk-token-sdk: add a length check before decoding proof instruction

* zk-token-sdk: fix minor spelling

* zk-token-sdk: one-liner for length check

* zk-token-sdk: one-liner fix
2022-04-05 08:40:45 -04:00
behzad nouri
2b718d00b0 removes legacy compatibility turbine peers shuffle code 2022-04-05 12:04:12 +00:00
behzad nouri
d0b850cdd9 removes turbine peers shuffle patch feature 2022-04-05 12:04:12 +00:00
behzad nouri
855801cc95 removes deterministic-shred-seed feature 2022-04-05 12:04:12 +00:00
Alexander Meißner
e051c7c162 Removes KeyedAccount from tests in stake instruction. (Part 3) (#24110)
* Moves test from stake state to stake instruction.

* Migrates test_split_source_uninitialized.

* Migrates test_split_split_not_uninitialized.

* Migrates test_split_more_than_staked.

* Migrates test_split_with_rent.

* Migrates test_split_to_account_with_rent_exempt_reserve.

* Migrates test_split_from_larger_sized_account.

* Migrates test_split_from_smaller_sized_account.

* Migrates test_split_100_percent_of_source.

* Migrates test_split_100_percent_of_source_to_account_with_lamports.

* Migrates test_split_rent_exemptness.
2022-04-05 12:36:01 +02:00
axleiro
6fb99891f2 stopped "autolock_bot_PR.yml" action file. 2022-04-05 11:41:23 +05:30
hana
41f2fd7fca Implement get_account_with_config (#23997). (#24095) 2022-04-04 22:58:58 +00:00
Jeff Biseda
ee6bb0d5d3 track fec set turbine stats (#23989) 2022-04-04 14:44:21 -07:00
Jeff Washington (jwash)
6a7f6585ce persist historical_roots (#24029) 2022-04-04 13:13:11 -05:00
Jeff Washington (jwash)
132f08486a remove basically duplicate function (#24107) 2022-04-04 12:55:05 -05:00
Tao Zhu
997db7637c do simple math without floats 2022-04-04 12:32:00 -05:00
HaoranYi
6ba4e870c4 Blockstore should drop signals before validator exit (#24025)
* timeout for validator exits

* clippy

* print backtrace when panic

* add backtrace package

* increase time out to 30s

* debug logging

* make rpc complete service non blocking

* reduce log level

* remove logging

* recv_timeout

* remove backtrace

* remove sleep

* wip

* remove unused variable

* add comments

* Update core/src/validator.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/validator.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* whitespace

* more whitespace

* fix build

* clean up import

* add mutex for signal senders in blockstore

* remove mut

* refactor: extract add signal functions

* make blockstore signal private

* let compiler infer mutex type

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-04-04 11:38:05 -05:00
Jeff Washington (jwash)
f8f3edac3c update comment (#24108) 2022-04-04 11:06:01 -05:00
Jeff Washington (jwash)
2820b64eb3 roots_original -> historical_roots (#24063) 2022-04-04 09:12:12 -05:00
behzad nouri
ef3e3dce7a hides implementation details of vote-accounts from public interface (#24087) 2022-04-04 13:20:26 +00:00
Bryon M
04158ee455 fix: stop logging to console when send tx fails (#23511)
There is no need to log the error to the console. Developers can simply catch the error and handle it themselves without it cluttering production logs.
2022-04-04 19:11:20 +08:00
Nico Gründel
4c058b48b6 Explorer: remove link from discord security contact (#24097) 2022-04-04 18:12:53 +08:00
axleiro
2fff8bbcc8 stopped autolock_bot_closed_issues.yml 2022-04-04 10:04:49 +05:30
Brooks Prumo
b14b8b1efa Un-deprecate MINIMUM_STAKE_DELEGATION (#24089) 2022-04-03 15:09:41 -05:00
behzad nouri
7cb3b6cbe2 demotes WeightedShuffle failures to error metrics (#24079)
Since call-sites are calling unwrap anyway, panicking seems too punitive
for our use cases.
2022-04-03 16:20:06 +00:00
behzad nouri
fa7eb7f30c improves Stakes::activate_epoch performance (#24068)
Tested with mainnet stakes obtained from the ledger at 5 recent epoch
boundaries, this code is ~30% faster than current master.

Current code:
  epoch: 289, elapsed: 82901us
  epoch: 290, elapsed: 80525us
  epoch: 291, elapsed: 79122us
  epoch: 292, elapsed: 79961us
  epoch: 293, elapsed: 78965us

This commit:
  epoch: 289, elapsed: 61710us
  epoch: 290, elapsed: 55721us
  epoch: 291, elapsed: 55886us
  epoch: 292, elapsed: 55399us
  epoch: 293, elapsed: 56803us
2022-04-02 22:48:51 +00:00
Jeff Washington (jwash)
0ca5a0ec68 prior_roots -> historical_roots (#24064) 2022-04-02 12:01:43 -05:00
Jeff Washington (jwash)
ec97d6d078 rename remove_old_roots (#24059) 2022-04-02 12:01:13 -05:00
Jeff Washington (jwash)
3ca4fffa78 root -> alive_root (#24062) 2022-04-02 12:00:52 -05:00
HaoranYi
ffa4cafe1c Revert sequential execution of validator_exit and validator_parallel_exit tests (#24048)
* handle channel disconnect

* revert sequential execution of validator_exit and parallel_validator_exit tests
2022-04-02 10:22:47 -05:00
blake
4968e7d38c Fix typo in documentation (#24076) 2022-04-02 08:09:41 -05:00
Yueh-Hsuan Chiang
2c6a3280e4 Include PR labels section and add "feature-gate" in CONTRIBUTING.md (#24056)
Added "PR / Issue Labels" section to CONTRIBUTING.md listing commonly used labels
and when to use them.
2022-04-01 22:30:37 -07:00
Brooks Prumo
2af6753808 Add GetMinimumDelegation stake program instruction (#24020) 2022-04-02 05:11:10 +00:00
Justin Starry
792bbf75ab Support sending versioned txs in AsyncClient (#23982) 2022-04-02 11:12:02 +08:00
Noah Gundotra
694292f7fa add candy machine v2 to known program names (#24072)
Co-authored-by: Noah Gundotra <noahgundotra@noahs-mbp.mynetworksettings.com>
2022-04-02 02:28:19 +00:00
samkim-crypto
f1f8f5458d Threads for discrete log (#23867)
* zk-token-sdk: add multi-thread for discrete log

* zk-token-sdk: some clean-up

* zk-token-sdk: change default discrete log thread to 1

* zk-token-sdk: allow discrete log thread nums to be chosen as param

* zk-token-sdk: join discrete log threads

* zk-token-sdk: join thread handles before returning

* zk-token-sdk: Apply suggestions from code review

Co-authored-by: Michael Vines <mvines@gmail.com>

* zk-token-sdk: update tests to use num_threads

* zk-token-sdk: simplify discrete log by removing mpsc and just using join

* zk-token-sdk: minor

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-04-01 20:01:24 -04:00
Alexander Meißner
8a18c48e47 Removes KeyedAccount from tests in stake instruction. (Part 2) (#24053)
* Migrates test_initialize_minimum_stake_delegation.

* Migrates test_delegate_minimum_stake_delegation.

* Migrates test_split_minimum_stake_delegation.

* Migrates test_split_full_amount_minimum_stake_delegation.

* Migrates test_split_destination_minimum_stake_delegation.

* Migrates test_withdraw_minimum_stake_delegation.

* Migrates test_behavior_withdrawal_then_redelegate_with_less_than_minimum_stake_delegation.
2022-04-02 01:08:55 +02:00
HaoranYi
0b7d0476c8 fix a typo (#24070) 2022-04-01 15:16:51 -07:00
Yueh-Hsuan Chiang
0b5ed87220 (LedgerStore) Enable performance sampling in column family get() (#23834)
#### Summary of Changes
This PR enables RocksDB read side performance metrics to report to blockstore_rocksdb_read_perf.
The sampling rate is controlled by an env arg `SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K`,
which specifies the number of perf samples for every 1000 operations. The default value is set to 10, meaning
we will report 10 out of 1000 (or 1/100) reads.

The metrics are based on the RocksDB [PerfContext](https://github.com/facebook/rocksdb/blob/main/include/rocksdb/perf_context.h).
It includes many useful metrics including block read time, cache hit rate, and time spent on decompressing the block.
2022-04-01 13:13:32 -07:00
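As a rough illustration of the sampling knob described above (a sketch, not the ledger-store implementation): with `SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K` set to N, each read is sampled with probability N/1000.

```rust
use rand::Rng;

/// Decide whether to collect a perf sample for one read, given a target of
/// `samples_in_1k` samples per 1000 operations (default 10, i.e. 1 in 100).
fn should_collect_perf_sample(samples_in_1k: u32) -> bool {
    rand::thread_rng().gen_range(0..1000) < samples_in_1k
}

fn main() {
    // Hypothetical wiring of the env var; the real validator reads it at startup.
    let samples_in_1k: u32 = std::env::var("SOLANA_METRICS_ROCKSDB_PERF_SAMPLES_IN_1K")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(10);

    let total: usize = 1_000_000;
    let sampled = (0..total).filter(|_| should_collect_perf_sample(samples_in_1k)).count();
    println!("sampled {sampled} of {total} simulated reads (~{}%)", 100 * sampled / total);
}
```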
HaoranYi
c9a476e24d handle channel disconnect (#24036) 2022-04-01 13:47:06 -05:00
Pankaj Garg
df4d92f9cf Revert voting service to use UDP instead of QUIC (#24032) 2022-04-01 09:34:18 -07:00
Justin Starry
97170a5d38 Bump bytemuck version in solana-program for consistency (#24043) 2022-04-01 22:25:53 +08:00
Alexander Meißner
1b45c509c3 Refactor: Use InstructionContext::get_instruction_data() (#24014)
* Adds transaction_context and instruction_context where invoke_context.get_keyed_accounts() is used.

* Use instruction_context.get_instruction_data() instead of an explicit parameter.

* Removes instruction_data parameter from Executor::execute().

* Removes instruction_data parameter from ProcessInstructionWithContext.
2022-04-01 15:48:05 +02:00
Justin Starry
cf59c000d9 Add issue template for feature gate tracking issues (#24040)
* Add issue template for feature gate tracking issues

* review feedback
2022-04-01 21:16:56 +08:00
Blaž Hrastnik
436048ca2b explorer: Add Chainlink programs to known addresses (#24037) 2022-04-01 07:54:54 +00:00
Justin Starry
0188e2601b Add feature gate prompt and backport label automation (#24023) 2022-04-01 14:41:55 +08:00
HaoranYi
51b37f0184 Modify rpc_completed_slot_service to be non-blocking (#24007)
* timeout for validator exits

* clippy

* print backtrace when panic

* add backtrace package

* increase time out to 30s

* debug logging

* make rpc complete service non blocking

* reduce log level

* remove logging

* recv_timeout

* remove backtrace

* remove sleep

* remove unused variable

* add comments

* Update core/src/validator.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/validator.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* whitespace

* more whitespace

* fix build

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-03-31 16:44:23 -05:00
ryleung-solana
8b72200afb Thin client quic (#23973)
Change thin-client to use connection-cache
2022-03-31 15:47:00 -04:00
Jeff Washington (jwash)
31997f8251 hash calc scanning takes config (#24016) 2022-03-31 14:26:37 -05:00
Lijun Wang
98525ddea9 Make tpu_use_quic a flag only without argument (#24018) 2022-03-31 10:04:24 -07:00
Jack May
ceb3b52ae4 Remove unnecessary asserts (#24017) 2022-03-31 09:23:45 -07:00
Jeff Washington (jwash)
9c8dad33c7 add epoch_schedule and rent_collector to hash calc (#24012) 2022-03-31 10:51:18 -05:00
Jeff Washington (jwash)
da001d54e5 calculate_accounts_hash_helper uses config (#24003) 2022-03-31 09:29:45 -05:00
Justin Starry
88326533ed Add SDK support for creating transactions with address table lookups (#23728)
* Add SDK support for creating transactions with address table lookups

* fix bpf compilation

* rename compile error variants to indicate overflow

* Add doc tests

* fix bpf compatibility

* use constant for overflow tests

* Use cfg_attr for dead code attribute

* resolve merge conflict
2022-03-31 17:44:20 +08:00
Felipe Custodio
9abebc2d64 feat: parse and display Security.txt in explorer (#23995)
* feat: parse and display Security.txt

* implement review suggestions

* rename Encryption to Secure Contact Encryption

* Update explorer/src/components/account/UpgradeableLoaderAccountSection.tsx

Co-authored-by: Justin Starry <justin.m.starry@gmail.com>

* address re-review

Co-authored-by: Justin Starry <justin.m.starry@gmail.com>
2022-03-31 17:23:32 +08:00
Justin Starry
cb5e67d327 Use Rent sysvar directly for stake split instruction (#24008)
* Use Rent sysvar directly for stake split ix

* Add feature to gate rent sysvar change

* fix tests

* cargo clippy
2022-03-31 16:46:35 +08:00
Brian Anderson
210d98bc06 Document APIs related to durable transaction nonces 2022-03-30 22:49:29 -06:00
Jack May
b741b86403 restore existing overlapping overflow (#24010) 2022-03-30 15:21:51 -07:00
Jeff Washington (jwash)
125f9634fd add hash calc config.use_write_cache (#24005) 2022-03-30 17:19:34 -05:00
Jeff Washington (jwash)
82c5230bc2 AccountsPackage::new less brittle (#23968) 2022-03-30 14:06:15 -05:00
Jack May
37497657c6 assert-type-assumptions (#23996) 2022-03-30 08:28:49 -07:00
HaoranYi
1fb82d7924 fix typo in comments (#24004) 2022-03-30 09:47:51 -05:00
Jeff Washington (jwash)
af9344fe22 SortedStorages refactoring (#23998) 2022-03-30 09:19:03 -05:00
axleiro
54aedb058c schedule Cron job for every night 2022-03-30 19:46:47 +05:30
HaoranYi
ba770832d0 Poh timing service (#23736)
* initial work for poh timing report service

* add poh_timing_report_service to validator

* fix comments

* clippy

* improve test coverage

* delete record when complete

* rename shred full to slot full.

* debug logging

* fix slot full

* remove debug comments

* adding fmt trait

* derive default

* default for poh timing reporter

* better comments

* remove commented code

* fix test

* more test fixes

* delete timestamps for slot that are older than root_slot

* debug log

* record poh start end in bank reset

* report full to start time instead

* fix poh slot offset

* report poh start for normal ticks

* fix typo

* refactor out poh point report fn

* rename

* optimize delete - delete only when last_root changed

* change log level to trace

* convert if to match

* remove redundant check

* fix SlotPohTiming comments

* review feedback on poh timing reporter

* review feedback on poh_recorder

* add test case for out-of-order arrival of timing points and incomplete timing points

* refactor poh_timing_points into its own mod

* remove option for poh_timing_report service

* move poh_timing_point_sender to constructor

* clippy

* better comments

* more clippy

* more clippy

* add slot poh timing point macro

* clippy

* assert in test

* comments and display fmt

* fix check

* assert format

* revise comments

* refactor

* extract send fn

* revert reporting_poh_timing_point

* align logging

* small refactor

* move type declaration to the top of the module

* replace macro with constructor

* clippy: remove redundant closure

* review comments

* simplify poh timing point creation

Co-authored-by: Haoran Yi <hyi@Haorans-MacBook-Air.local>
2022-03-30 09:04:49 -05:00
behzad nouri
cda3d66b21 uses first_coding_index for erasure meta obtained from coding shreds (#23974)
Now that nodes correctly populate position field in coding shreds, and
first_coding_index in erasure meta, the old code to maintain backward
compatibility can be removed.
The commit is working towards changing erasure coding schema to 32:64.
2022-03-30 13:55:11 +00:00
Jeff Washington (jwash)
5636570d6d add roots_original to roots tracker (#23849) 2022-03-30 08:52:45 -05:00
axleiro
7d281a8ddd schedule Cron job for every ten min 2022-03-30 18:47:01 +05:30
axleiro
f3f7578e4b added action yml "autolock_bot_PR.yml"
This GitHub action automatically locks a PR once there has been no activity for 14 days after it was merged.
2022-03-30 18:36:29 +05:30
joeaba
c8937fa244 schedule Cron job for every night 2022-03-30 15:06:53 +05:30
axleiro
2b75546190 added action yml "autolock_bot_closed_issue.yml"
This GitHub action automatically locks closed issues that have had no activity in the past 7 days.
2022-03-30 13:39:09 +05:30
Alexander Meißner
83ef3fc53e Refactor: Remove KeyedAccount in bpf_loader helper functions (#23986)
* Replaces KeyedAccount in create_executor().

* Refactors len to write_offset in write_program_data().

* Replaces KeyedAccount in write_program_data().

* Use transaction_context.get_key_of_account_at_index() in process_instruction_common().

* Renames next_first_instruction_account to program_account_index.

* Replaces program KeyedAccount by BorrowedAccount in process_instruction_common().

* Removes _program in process_instruction_common().

* Replaces first_account KeyedAccount by BorrowedAccount in process_instruction_common().

* Moves KeyedAccount lookup inside common_close_account().

* Replaces close_account, recipient_account and authority_account KeyedAccount by BorrowedAccount in common_close_account().
2022-03-30 09:17:55 +02:00
Jeff Washington (jwash)
da844d7be5 refactoring of SortedStorages tests to make other changes easier (#23990) 2022-03-29 22:06:48 -05:00
Jeff Washington (jwash)
5a613e9b6e use CalcAccountsHashConfig in calculate_accounts_hash (#23987) 2022-03-29 22:05:47 -05:00
Alexander Meißner
794645d092 Adds check_number_of_instruction_accounts() to all builtin programs except for the address-lookup-table. (#23984) 2022-03-29 19:06:50 +02:00
HaoranYi
ac8b662413 reduce metric write log level (#23966) 2022-03-29 12:00:42 -05:00
Michael Vines
7ef18f220a Update Version CrdsData on node identity changes 2022-03-28 15:57:16 -07:00
dependabot[bot]
2a5764ef79 chore: bump @rollup/plugin-commonjs from 21.0.2 to 21.0.3 in /web3.js (#23962)
Bumps [@rollup/plugin-commonjs](https://github.com/rollup/plugins/tree/HEAD/packages/commonjs) from 21.0.2 to 21.0.3.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Changelog](https://github.com/rollup/plugins/blob/master/packages/commonjs/CHANGELOG.md)
- [Commits](https://github.com/rollup/plugins/commits/commonjs-v21.0.3/packages/commonjs)

---
updated-dependencies:
- dependency-name: "@rollup/plugin-commonjs"
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-28 08:16:45 +00:00
stellaw1
c08cfafd6c feat: adds getBlockProduction RPC call 2022-03-26 18:31:40 -07:00
Brooks Prumo
31b707b625 Specify if archive size datapoint is for full or incremental snapshots (#23941) 2022-03-26 12:29:13 -05:00
steveluscher
5e08701189 feat: the search bar now auto-focuses when you first visit the site 2022-03-26 00:05:15 -07:00
Michael Vines
87e0aa1b74 improve arg documentation 2022-03-25 21:37:10 -07:00
Trent Nelson
bd27eedd15 cli: allow skipping fee-checks when writing program buffers (hidden) 2022-03-25 18:19:03 -06:00
Jeff Washington (jwash)
c24de17278 remove index hash calculation as an option (#23928) 2022-03-25 15:32:53 -05:00
Jeff Washington (jwash)
ec78702bc8 RollingBitField::get_all_less_than (#23919) 2022-03-25 15:20:22 -05:00
HaoranYi
01af40d6b6 Fix intermittent validator_exit test failure (#23594)
* run validator_exit_test sequentially

* limit validator exit run to its own serial run subset
add 10ms delay in the validator exit tests

* fix intermittent validator exit failure

* no sleep

* undo the code move
2022-03-25 14:38:19 -05:00
behzad nouri
1f9c89c1e8 expands lifetime of SlotStats (#23872)
Currently, slot stats are removed when the slot is full, or every 30 seconds
if the slot is before the root:
https://github.com/solana-labs/solana/blob/493a8e234/ledger/src/blockstore.rs#L2017-L2027

In order to track whether the slot is ultimately marked as dead or rooted, and to
emit more metrics, this commit expands the lifetime of SlotStats while
bounding the total size of the cache using an LRU eviction policy.
2022-03-25 19:32:22 +00:00
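The bounded cache described above, which keeps per-slot stats alive past "slot full" while capping memory with LRU eviction, can be sketched as follows; the struct fields, capacity, and names are illustrative, not the actual blockstore code:

```rust
use std::collections::{HashMap, VecDeque};

#[derive(Default)]
struct SlotStats {
    num_shreds: u64,
}

/// Tiny LRU-bounded map: when capacity is exceeded, the least recently
/// touched slot's stats are evicted (and could be reported at that point).
struct SlotStatsCache {
    capacity: usize,
    stats: HashMap<u64, SlotStats>,
    order: VecDeque<u64>, // front = least recently used
}

impl SlotStatsCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, stats: HashMap::new(), order: VecDeque::new() }
    }

    fn touch(&mut self, slot: u64) -> &mut SlotStats {
        self.order.retain(|&s| s != slot);
        self.order.push_back(slot);
        if self.order.len() > self.capacity {
            if let Some(evicted) = self.order.pop_front() {
                self.stats.remove(&evicted); // report-before-drop would go here
            }
        }
        self.stats.entry(slot).or_default()
    }
}

fn main() {
    let mut cache = SlotStatsCache::new(2);
    cache.touch(10).num_shreds += 1;
    cache.touch(11).num_shreds += 1;
    cache.touch(12).num_shreds += 1; // evicts slot 10
    assert!(cache.touch(11).num_shreds >= 1);
}
```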
Will Hickey
c6dda3b324 Add solana-faucet to the list of dependencies referenced by downstream projects (#23935) 2022-03-25 13:27:31 -05:00
Trent Nelson
e34c52934c ci: don't allow mergify to add automerge label to merged PRs 2022-03-25 16:19:11 +00:00
Jeff Washington (jwash)
acfd22712b RollingBitFIeld to its own file (#23917) 2022-03-25 10:37:00 -05:00
ryleung-solana
6b85c2104c Implement forwarding via TpuConnection (#23817) 2022-03-25 11:31:40 -04:00
Steven Luscher
f44c8f296f fix: thread enforce_ulimit_nofile config down when opening blockstore (#23925) 2022-03-25 03:13:33 -05:00
steveluscher
9cf7720922 fix: when there is no instruction index, default to the current instruction by supplying u16::MAX 2022-03-24 22:55:52 -07:00
steveluscher
c73cdfd6ce fix: add TypeScript buffer type to nonce-account.ts 2022-03-24 22:55:52 -07:00
steveluscher
477355df3b fix: add TypeScript buffer type to stake-program.ts 2022-03-24 22:55:52 -07:00
steveluscher
6686b7c534 fix: add TypeScript buffer type to message.ts 2022-03-24 22:55:52 -07:00
steveluscher
741c85ca7c fix: add TypeScript buffer type to loader.ts 2022-03-24 22:55:52 -07:00
steveluscher
6bb02cdcc1 fix: add TypeScript buffer type to secp256k1-program.ts 2022-03-24 22:55:52 -07:00
steveluscher
96361295aa fix: add TypeScript buffer type to ed25519-program.ts 2022-03-24 22:55:52 -07:00
steveluscher
3333f37e88 fix: add TypeScript buffer type to vote-account.ts 2022-03-24 22:55:52 -07:00
steveluscher
b2f2a68b86 fix: fix spelling of timestamp in BlockTimestamp type 2022-03-24 22:55:52 -07:00
steveluscher
c227b8ca4d fix: add TypeScript buffer type to vote-program.ts 2022-03-24 22:55:52 -07:00
steveluscher
607a5c05de fix: add TypeScript buffer type to system-program.ts 2022-03-24 22:55:52 -07:00
steveluscher
807f88e547 fix: add TypeScript types to the rustString buffer layout helper 2022-03-24 22:55:52 -07:00
steveluscher
d34fe3dba3 fix: add TypeScript buffer type to layout.ts 2022-03-24 22:55:52 -07:00
steveluscher
b516a25132 fix: add TypeScript buffer type to instruction.ts 2022-03-24 22:55:52 -07:00
steveluscher
023fc028bc chore: Upgrade buffer-layout to v4.0.0 2022-03-24 22:55:52 -07:00
Steven Luscher
412d9be445 fix: repair web3 connection tests by making fewer assumptions about the existence of particular blocks (#23921)
* fix: repair 'get confirmed signatures for address' test in web3.js

* fix: repair 'get signatures for address' test in web3.js

* fix: repair 'get parsed confirmed transactions' test in web3.js

* fix: repair 'get transaction' test in web3.js

* fix: repair 'get confirmed transaction' test in web3.js

* fix: repair 'get block' test in web3.js

* fix: repair 'get confirmed block' test in web3.js

* fix: repair 'get block signatures' test in web3.js

* fix: repair 'get block time' test in web3.js

Co-authored-by: steveluscher <github@steveluscher.com>
2022-03-24 22:21:14 -07:00
Michael Vines
c8c3c4359f vote-authorize-voter now accepts either the vote or withdraw authority 2022-03-24 16:46:41 -07:00
Jeff Washington (jwash)
51f5524e2f make verify_accounts_package_hash like other hash calc (#23906) 2022-03-24 17:49:48 -05:00
Brian Anderson
492c54a28f Fix example mock Signer API in solana-program (#23911) 2022-03-24 17:58:51 -04:00
Jeff Washington (jwash)
55d61023f7 document 'accounts' hash (#23907) 2022-03-24 15:58:52 -05:00
HaoranYi
fedf4e984f typo (#23910) 2022-03-24 15:21:59 -05:00
Josh
9dbb950a25 feat(explorer): show ping server metrics unavailable (#23914)
* feat: show ping server metrics unavailable

* fix: formatting
2022-03-24 13:54:51 -06:00
steviez
b61c0a4a21 Add accounts arg to genesis command to dump genesis account info (#23879) 2022-03-24 14:26:08 -05:00
Alexander Meißner
140c8dd01f Refactor: Replaces KeyedAccount in_get_sysvar_with_account_check (#23905)
* Replaces all use sites of get_sysvar_with_account_check by get_sysvar_with_account_check2.

* Removes get_sysvar_with_account_check.

* Renames get_sysvar_with_account_check2 to get_sysvar_with_account_check.
2022-03-24 19:30:42 +01:00
Jeff Washington (jwash)
37c36ce3fa pass stats separately from CalcAccountsHashConfig (#23892) 2022-03-24 12:48:47 -05:00
Jeff Washington (jwash)
82328fd9d8 move max_clean_root deeper in flush cache (#23869) 2022-03-24 12:45:49 -05:00
steviez
c31db81ac4 Use VoteAccountsHashMap type alias in all applicable spots (#23904) 2022-03-24 12:09:48 -05:00
Jeff Washington (jwash)
a22a2384bf fix ci test error (#23908) 2022-03-24 11:30:20 -05:00
ryleung-solana
82945ba973 Optimize TpuConnection and its implementations and refactor connection-cache to not use dyn in order to enable those changes (#23877) 2022-03-24 11:40:26 -04:00
Jeff Washington (jwash)
5b916961b5 HashCalc uses self.accounts_cache (#23890) 2022-03-24 10:34:28 -05:00
Jeff Washington (jwash)
f2aea3b7c7 flush_slot_cache takes [Slot] (#23865) 2022-03-24 10:24:36 -05:00
Jeff Washington (jwash)
9d3b17c635 HashCalc uses self.accounts_index (#23888) 2022-03-24 10:06:32 -05:00
Jeff Washington (jwash)
396b49a7c1 Start saving/loading prior_roots(_with_hash) to snapshot (#23844)
* Start saving/loading prior_roots(_with_hash) to snapshot

* Update runtime/src/accounts_index.rs

Co-authored-by: Michael Vines <mvines@gmail.com>

* Update runtime/src/accounts_index.rs

Co-authored-by: Michael Vines <mvines@gmail.com>

* update comment

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-03-24 10:06:24 -05:00
Jeff Washington (jwash)
b22165ad69 hash calc uses self.filler_account_suffix (#23887) 2022-03-24 09:58:06 -05:00
Jeff Washington (jwash)
9022931689 calc hash uses self.num_hash_scan_passes (#23883) 2022-03-24 09:44:42 -05:00
Jeff Washington (jwash)
e3eb002f66 Log storage size stats at hash calc (#23843) 2022-03-24 09:40:35 -05:00
Jeff Washington (jwash)
f1a411c897 add epoch_schedule and rent_collector to hash calc (#23857) 2022-03-24 09:39:22 -05:00
Jeff Washington (jwash)
db5d68f01f HashCalc uses self.accounts_hash_cache_path (#23882) 2022-03-24 09:31:55 -05:00
HaoranYi
90009f330b small refactor to shorten the lock on slot_under_contention hashset (#23891)
* small refactor to shorten the lock on slot_under_contention hashset

* adding comments

* comments
2022-03-24 08:20:56 -05:00
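The refactor above is the usual pattern of shrinking a mutex critical section: touch the shared set only while holding the lock, drop the guard, then do the slow work. A generic sketch with made-up names (not the actual accounts-db code):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

fn flush_slot(slots_under_contention: &Mutex<HashSet<u64>>, slot: u64) {
    // Hold the lock only long enough to claim the slot...
    let claimed = slots_under_contention.lock().unwrap().insert(slot);

    if claimed {
        // ...then do the expensive flush outside the critical section.
        expensive_flush(slot);

        // Re-acquire briefly to release the claim.
        slots_under_contention.lock().unwrap().remove(&slot);
    }
}

fn expensive_flush(_slot: u64) {
    // placeholder for the real work
}

fn main() {
    let contention = Mutex::new(HashSet::new());
    flush_slot(&contention, 42);
    assert!(!contention.lock().unwrap().contains(&42));
}
```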
Alexander Meißner
91c2729856 Replaces keyed_account get_signers() by InstructionContext::get_signers(). (#23863) 2022-03-24 12:57:51 +01:00
Yueh-Hsuan Chiang
c83c95b56b (LedgerStore) Create ColumnMetrics trait for CF metric reporting (#23763)
This PR refactors column-family-related metrics reporting.
As metric reporting is done on a per-column-family basis, the PR creates a
ColumnMetrics trait and moves the metric reporting logic into it.

This refactoring will make future column metric reporting (such as
read PerfContext) much cleaner.
2022-03-23 20:51:49 -07:00
Jeff Washington (jwash)
5a892af2fe disable 'check_hash' on accounts hash calc (#23873) 2022-03-23 21:03:31 -05:00
Jeff Washington (jwash)
3e22d4b286 calc hash uses self.thread_pool_clean (#23881) 2022-03-23 20:52:38 -05:00
Brian Anderson
6428602cd9 Make find_program_address client example runnable (#23492) 2022-03-23 19:37:12 -06:00
steveluscher
260fdf7ba3 Revert "chore: Upgrade buffer-layout package in web3.js (#23897)"
Fixing up the types is going to take me a bit longer than I anticipated, so I'll back this out for now.
2022-03-23 18:34:01 -07:00
Jack May
486f7b7673 use array access function (#23895) 2022-03-23 17:03:01 -07:00
Steven Luscher
0c0db9308b chore: Upgrade buffer-layout package in web3.js (#23897) 2022-03-23 14:56:13 -07:00
Trent Nelson
9dae5551a1 Revert transient dependency bumps from c4ecfa5 2022-03-23 21:08:26 +00:00
Josh
100fd03f3e feat(explorer): solana ping set minBarHeight (#23894) 2022-03-23 20:35:59 +00:00
dependabot[bot]
7af7c15802 chore:(deps): bump minimist from 1.2.5 to 1.2.6 in /explorer (#23886)
Bumps [minimist](https://github.com/substack/minimist) from 1.2.5 to 1.2.6.
- [Release notes](https://github.com/substack/minimist/releases)
- [Commits](https://github.com/substack/minimist/compare/1.2.5...1.2.6)

---
updated-dependencies:
- dependency-name: minimist
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-23 20:00:24 +00:00
dependabot[bot]
154b828287 chore:(deps): bump nanoid from 3.1.23 to 3.3.1 in /explorer (#23884)
Bumps [nanoid](https://github.com/ai/nanoid) from 3.1.23 to 3.3.1.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.1.23...3.3.1)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-23 19:59:16 +00:00
Andrey Frolov
59290c08aa fix: add type-check script to web3.js package (#23109) 2022-03-23 12:58:42 -07:00
microwavedcola1
1b7b261460 feat(explorer): render program name, ix name, and account names from on chain idl for specific anchor programs (#23499)
* show titles of ix, from idl

Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>

* remove unused

Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>

* remaining accounts

Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>

* fallback

Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>

* fix from code review: remove default for the non fallback case

Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>

* keep camelcase

Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>

* formatting

Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>
2022-03-23 12:14:26 -07:00
Jeff Washington (jwash)
dc3863ef14 flush_slot_cache_with_clean (#23868) 2022-03-23 14:09:56 -05:00
Jeff Washington (jwash)
260f899eda write cache: hashmap to set (#23866) 2022-03-23 14:05:45 -05:00
Jeff Washington (jwash)
9e61fe7583 add AccountsHashConfig to manage parameters (#23850) 2022-03-23 13:44:23 -05:00
HaoranYi
db49b826f0 separate blockstore metrics from window service metrics (#23871) 2022-03-23 13:38:17 -05:00
HaoranYi
7ff8ed869c typos (#23870) 2022-03-23 13:36:55 -05:00
Sammy
26da64184a feat(web3.js): expose rpcEndpoint in client for web3.js (#23719)
Adds a getter to the commitment class to expose the rpcEndpoint property.
2022-03-23 11:05:37 -07:00
Will Hickey
a573cfa39d Revert "Remove unneeded unit expression"
This reverts commit e8e0097046.
2022-03-23 10:22:18 -07:00
Jeff Washington (jwash)
b1280b670a calculate_accounts_hash_without_index takes &self (#23846)
* calculate_accounts_hash_without_index takes &self

* Update runtime/src/snapshot_package.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

Co-authored-by: Brooks Prumo <brooks@prumo.org>
2022-03-23 11:57:32 -05:00
Jeff Washington (jwash)
7b89222fde don't start extra threads for shrink/clean/hash (#23858) 2022-03-23 11:53:37 -05:00
Josh
911aa5bad3 fix(explorer): can't convert too large of stake to number (#23876) 2022-03-23 09:34:43 -07:00
Josh
5541a5873b fix(explorer): serum init open orders has optional openOrdersMarketAuthority (#23875) 2022-03-23 09:32:24 -07:00
Josh
6b76391ed2 fix(explorer): add sync native to token program decode (#23874) 2022-03-23 09:31:58 -07:00
Jack May
6962a667e5 add-u8-align-check (#23860) 2022-03-23 09:16:29 -07:00
Jack May
27b66db88d Use sat math for ptr calcs (#23861) 2022-03-23 09:16:03 -07:00
Jeff Washington (jwash)
493a8e2348 remove random flushing of write cache (#23845) 2022-03-23 08:45:44 -05:00
klykov
9859eb83b5 upd Cargo.lock for bpf 2022-03-23 09:25:36 +01:00
klykov
36807d5fa3 update clap to v3: poh-bench 2022-03-23 09:25:36 +01:00
klykov
22404ca1fc update clap to v3: bench-streamer 2022-03-23 09:25:36 +01:00
klykov
01317395e9 update Cargo.lock 2022-03-23 09:25:36 +01:00
klykov
3f2971692d update clap to v3: net-utils 2022-03-23 09:25:36 +01:00
klykov
300c50798f update clap to v3: log-analyzer 2022-03-23 09:25:36 +01:00
klykov
12e24a90a0 update clap to v3: net-sharper 2022-03-23 09:25:36 +01:00
Edgar Xi
d8be0d9430 make get_protobuf_or_bincode_cells accept IntoIter on row_keys, make get_confirmed_blocks_with_data return an Iterator 2022-03-22 22:47:25 -06:00
Edgar Xi
f717fda9a3 modify get_protobuf_or_bincode_cells to accept and return an iterator 2022-03-22 22:47:25 -06:00
Edgar Xi
fbcf6a0802 use &[T] instead of Vec<T> where appropriate
clippy
2022-03-22 22:47:25 -06:00
Edgar Xi
5533e9393c appease clippy 2022-03-22 22:47:25 -06:00
Edgar Xi
f3219fb695 add get_confirmed_blocks_with_data and get_protobuf_or_bincode_cells 2022-03-22 22:47:25 -06:00
Jeff Washington (jwash)
bc35e1c5f5 snapshot code needs all storages for hash calc (#23840) 2022-03-22 21:27:54 -05:00
Justin Starry
92462ae031 Manually serialize and use send_wire_transaction for votes (#23826)
* Revert "core: partial versioned transaction support for voting service"

This reverts commit eb3df4c20e.

* Manually serialize vote tx before sending to TPU
2022-03-23 09:47:55 +08:00
Alexander Meißner
9f0ca6d88a Refactor: Remove trait from nonce keyed account (#23811)
* Removes the trait `NonceKeyedAccount`.
2022-03-23 02:09:30 +01:00
Jack May
3d7c8442c7 add size check for from_raw_parts (#23781) 2022-03-22 15:20:39 -07:00
Jon Cinque
7af48465fa transaction-status: Add return data to meta (#23688)
* transaction-status: Add return data to meta

* Add return data to simulation results

* Use pretty-hex for printing return data

* Update arg name, make TransactionRecord struct

* Rename TransactionRecord -> ExecutionRecord
2022-03-22 23:17:05 +01:00
Kirill Lykov
359e2de090 ignore heavy tests in dos 2022-03-22 20:19:28 +01:00
Jeff Washington (jwash)
1089a38aaf AcctIdx: rework scan and write to disk (#23794) 2022-03-22 11:54:12 -05:00
Jeff Washington (jwash)
89ba3ff139 log fail to evict (#23815) 2022-03-22 09:19:38 -05:00
axleiro
16b73a998b Increasing timeout in local-cluster-slow by 10 min 2022-03-22 17:52:06 +05:30
axleiro
9347d57973 increasing timeout of local-cluster-slow test by 10 min 2022-03-22 17:51:13 +05:30
Yueh-Hsuan Chiang
ae75b1a25f (LedgerStore) Add compression type (#23578)
This PR adds `--rocksdb-ledger-compression` as a hidden argument to the validator
for specifying the compression algorithm for TransactionStatus.  Available compression
algorithms include `lz4`, `snappy`, `zlib`. The default value is `none`.

Experimental results show that with lz4 compression, we can achieve ~37% size-reduction
on the TransactionStatus column family, or ~8% size-reduction of the ledger store size.
2022-03-22 02:27:09 -07:00
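For orientation, mapping such a flag onto a RocksDB option with the `rocksdb` crate looks roughly like this; it is a sketch under that assumption, not the actual blockstore code:

```rust
use rocksdb::{DBCompressionType, Options};

// Map the flag value from the commit message onto a RocksDB compression type.
fn compression_from_flag(flag: &str) -> DBCompressionType {
    match flag {
        "lz4" => DBCompressionType::Lz4,
        "snappy" => DBCompressionType::Snappy,
        "zlib" => DBCompressionType::Zlib,
        _ => DBCompressionType::None,
    }
}

// Build options for the TransactionStatus column family with that compression.
fn transaction_status_cf_options(flag: &str) -> Options {
    let mut opts = Options::default();
    opts.set_compression_type(compression_from_flag(flag));
    opts
}

fn main() {
    // Example: options that would compress the TransactionStatus column with lz4.
    let _opts = transaction_status_cf_options("lz4");
}
```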
Lijun Wang
49228573f4 Use connection cache in send transaction (#23712)
Use connection cache in send transaction (#23712)
2022-03-21 23:24:21 -07:00
Trent Nelson
eb3df4c20e core: partial versioned transaction support for voting service 2022-03-21 22:59:05 -06:00
Justin Starry
016d3c450a Update TpuConnection interface to be compatible with versioned txs (#23760)
* Update TpuConnection interface to be compatible with versioned txs

* Add convenience method for sending txs

* use parallel iterator to serialize transactions
2022-03-22 09:45:22 +08:00
HaoranYi
45a7c6edfb Fix typos and a small refactor (#23805)
* fix typo

* remove packet_has_more_unprocessed_transactions function
2022-03-21 18:35:31 -05:00
Will Hickey
c4ecfa5716 Bump version to v1.11 (#23807)
* Revert crossbeam_epoch to stable. 0.9.8 only works with nightly
* Remove unneeded unit expression
2022-03-21 17:40:50 -05:00
Jeff Washington (jwash)
24f6855f86 AcctIdx: only remove a fixed number of items per write lock (#23795) 2022-03-21 16:55:04 -05:00
samkim-crypto
10eeafd3d6 zk-token-sdk: handle edge cases for transfer with fee (#23804)
* zk-token-sdk: handle edge cases for transfer with fee

* zk-token-sdk: clippy

* zk-token-sdk: clippy

* zk-token-sdk: cargo fmt
2022-03-21 16:10:33 -04:00
Brooks Prumo
cb06126388 Set accounts_data_len on feature activation (#23730) 2022-03-21 12:28:26 -05:00
Tyera Eulberg
9c60991cd3 Add ability to query bigtable via solana-test-validator, with hidden params 2022-03-21 11:26:49 -06:00
Trent Nelson
9b32b72990 bigtable: allow custom instance names 2022-03-21 11:26:49 -06:00
Trent Nelson
f513195468 bigtable: add a config ctor for LedgerStorage 2022-03-21 11:26:49 -06:00
Tyera Eulberg
63ee00e647 Refactor validator bigtable config 2022-03-21 11:26:49 -06:00
Michael Vines
99f1a43262 Add v1.10 backport label, remove v1.8 backport label 2022-03-21 09:50:55 -07:00
DimAn
739e43ba58 Add ability to get the latest incremental snapshot via RPC (#23788) 2022-03-21 11:48:49 -05:00
Lijun Wang
ae76fe2bd7 Made connection cache configurable. (#23783)
Added the command-line argument tpu-use-quic.
Changed the connection cache to return different connections based on the config.
2022-03-21 09:31:37 -07:00
Pankaj Garg
5d03b188c8 Use QUIC client in voting service (#23713)
* Use QUIC client in voting service

* guard quic-client usage with a flag

* add measure to time the quic client

* move time measure outside if block

* remove quic vs UDP flag from voting service
2022-03-21 09:10:16 -07:00
Jeff Washington (jwash)
965ab9186d AcctIdx: fix infinite loop (#23806) 2022-03-21 10:58:36 -05:00
350 changed files with 30014 additions and 16651 deletions


@@ -1,6 +0,0 @@
#### Problem
#### Proposed Solution

12
.github/ISSUE_TEMPLATE/0-general.md vendored Normal file

@@ -0,0 +1,12 @@
---
name: General Issue
about: Create a report describing a problem and a proposed solution
title: ''
assignees: ''
---
#### Problem
#### Proposed Solution


@@ -0,0 +1,70 @@
name: Feature Gate Tracker
description: Track the development and status of an on-chain feature
title: "Feature Gate: "
labels: ["feature-gate"]
body:
  - type: markdown
    attributes:
      value: >
        Steps to add a new feature are outlined below. Note that these steps only cover
        the process of getting a feature into the core Solana code.

        - For features that are unambiguously good (ie bug fixes), these steps are sufficient.

        - For features that should go up for community vote (ie fee structure changes), more
          information on the additional steps to follow can be found at:
          <https://spl.solana.com/feature-proposal#feature-proposal-life-cycle>

        1. Generate a new keypair with `solana-keygen new --outfile feature.json --no-passphrase`
           - Keypairs should be held by core contributors only. If you're a non-core contributor going
             through these steps, the PR process will facilitate a keypair holder being picked. That
             person will generate the keypair, provide pubkey for PR, and ultimately enable the feature.
        2. Add a public module for the feature, specifying keypair pubkey as the id with
           `solana_sdk::declare_id!()` within the module. Additionally, add an entry to `FEATURE_NAMES` map.
        3. Add desired logic to check for and switch on feature availability.
  - type: textarea
    id: description
    attributes:
      label: Description
      placeholder: Describe why the new feature gate is needed and any necessary conditions for its activation
    validations:
      required: true
  - type: input
    id: id
    attributes:
      label: Feature ID
      description: The public key of the feature account
    validations:
      required: true
  - type: dropdown
    id: activation-method
    attributes:
      label: Activation Method
      options:
        - Single Core Contributor
        - Staked Validator Vote
    validations:
      required: true
  - type: input
    id: testnet
    attributes:
      label: Testnet Activation Epoch
      placeholder: Edit this response when feature is activated on this cluster
    validations:
      required: false
  - type: input
    id: devnet
    attributes:
      label: Devnet Activation Epoch
      placeholder: Edit this response when feature is activated on this cluster
    validations:
      required: false
  - type: input
    id: mainnet-beta
    attributes:
      label: Mainnet-Beta Activation Epoch
      placeholder: Edit this response when feature is activated on this cluster
    validations:
      required: false
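Steps 2 and 3 of the template above amount to a small amount of Rust. A minimal sketch, assuming the `solana_sdk::feature_set` API of this release line; the module name is hypothetical and the id is the all-zeros placeholder rather than a real feature pubkey:

```rust
use solana_sdk::feature_set::FeatureSet;

// Step 2: a public module whose id is the feature keypair's pubkey.
// (Placeholder id shown; a real feature uses the pubkey generated in step 1,
// and also gets an entry in the FEATURE_NAMES map in feature_set.rs.)
pub mod my_new_feature {
    solana_sdk::declare_id!("11111111111111111111111111111111");
}

// Step 3: gate the new logic on the feature being active for the current bank.
pub fn use_new_behavior(feature_set: &FeatureSet) -> bool {
    feature_set.is_active(&my_new_feature::id())
}
```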


@@ -1,9 +1,9 @@
#### Problem
#### Summary of Changes
Fixes #
<!-- OPTIONAL: Feature Gate Issue: # -->
<!-- Don't forget to add the "feature-gate" label -->

37
.github/workflows/autolock_bot_PR.txt vendored Normal file

@@ -0,0 +1,37 @@
name: 'Autolock RitBot for PR'
on:
  schedule:
    - cron: '0 0 * * *'
  workflow_dispatch:
permissions:
  issues: write
  pull-requests: write
concurrency:
  group: lock
jobs:
  action:
    runs-on: ubuntu-latest
    steps:
      - uses: dessant/lock-threads@v3
        with:
          github-token: ${{ github.token }}
          pr-inactive-days: '14'
          exclude-pr-created-before: ''
          exclude-pr-created-after: ''
          exclude-pr-created-between: ''
          exclude-pr-closed-before: ''
          exclude-pr-closed-after: ''
          exclude-pr-closed-between: ''
          include-any-pr-labels: 'automerge'
          include-all-pr-labels: ''
          exclude-any-pr-labels: ''
          add-pr-labels: 'locked PR'
          remove-pr-labels: ''
          pr-comment: 'This PR has been automatically locked since there has not been any activity in past 14 days after it was merged.'
          pr-lock-reason: 'resolved'
          log-output: true


@@ -0,0 +1,38 @@
name: 'Autolock NaviBot for closed issue'
on:
  schedule:
    - cron: '0 0 * * *'
  workflow_dispatch:
permissions:
  issues: write
  pull-requests: write
concurrency:
  group: lock
jobs:
  action:
    runs-on: ubuntu-latest
    steps:
      - uses: dessant/lock-threads@v3
        with:
          github-token: ${{ github.token }}
          issue-inactive-days: '7'
          exclude-issue-created-before: ''
          exclude-issue-created-after: ''
          exclude-issue-created-between: ''
          exclude-issue-closed-before: ''
          exclude-issue-closed-after: ''
          exclude-issue-closed-between: ''
          include-any-issue-labels: ''
          include-all-issue-labels: ''
          exclude-any-issue-labels: ''
          add-issue-labels: 'locked issue'
          remove-issue-labels: ''
          issue-comment: 'This issue has been automatically locked since there has not been any activity in past 7 days after it was closed. Please open a new issue for related bugs.'
          issue-lock-reason: 'resolved'
          process-only: 'issues'
          log-output: true


@@ -88,32 +88,52 @@ pull_request_rules:
actions:
dismiss_reviews:
changes_requested: true
- name: set automerge label on mergify backport PRs
conditions:
- author=mergify[bot]
- head~=^mergify/bp/
- "#status-failure=0"
- "-merged"
actions:
label:
add:
- automerge
- name: v1.8 backport
conditions:
- label=v1.8
actions:
backport:
ignore_conflicts: true
branches:
- v1.8
- name: v1.9 backport
- name: v1.9 feature-gate backport
conditions:
- label=v1.9
- label=feature-gate
actions:
backport:
ignore_conflicts: true
labels:
- automerge
- feature-gate
branches:
- v1.9
- name: v1.9 non-feature-gate backport
conditions:
- label=v1.9
- label!=feature-gate
actions:
backport:
ignore_conflicts: true
labels:
- automerge
branches:
- v1.9
- name: v1.10 feature-gate backport
conditions:
- label=v1.10
- label=feature-gate
actions:
backport:
ignore_conflicts: true
labels:
- automerge
- feature-gate
branches:
- v1.10
- name: v1.10 non-feature-gate backport
conditions:
- label=v1.10
- label!=feature-gate
actions:
backport:
ignore_conflicts: true
labels:
- automerge
branches:
- v1.10
commands_restrictions:
# The author of copied PRs is the Mergify user.


@@ -146,6 +146,31 @@ the subject lines of the git commits contained in the PR. It's especially
generous (and not expected) to rebase or reword commits such that each change
matches the logical flow in your PR description.
### The PR / Issue Labels
Labels make it easier to manage and track PRs / issues. Below are some common labels
that we use in Solana. For the complete list of labels, please refer to the
[label page](https://github.com/solana-labs/solana/issues/labels):
* "feature-gate": when you add a new feature gate or modify the behavior of
  an existing feature gate, please add the "feature-gate" label to your PR.
  New feature gates should also always have a corresponding tracking issue
  (go to "New Issue" -> "Feature Gate Tracker [Get Started](https://github.com/solana-labs/solana/issues/new?assignees=&labels=feature-gate&template=1-feature-gate.yml&title=Feature+Gate%3A+)")
  and should be updated each time the feature is activated on a cluster.
* "automerge": When a PR is labelled with "automerge", the PR will be
  automatically merged once CI passes. In general, this label should only
  be used for small hot-fixes (fewer than 100 lines) or automatically generated
  PRs. If you're uncertain, it's usually the case that the PR does not
  qualify as "automerge".
* "good first issue": If you happen to find an issue that is non-urgent and
  self-contained with moderate scope, you might want to consider attaching
  "good first issue" to it, as it might be good practice for newcomers.
* "rust": this pull request updates Rust code.
* "javascript": this pull request updates JavaScript code.
### When will my PR be reviewed?
PRs are typically reviewed and merged in under 7 days. If your PR has been open

701
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
[package]
name = "solana-account-decoder"
version = "1.10.8"
version = "1.11.0"
description = "Solana account decoder"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -19,9 +19,9 @@ lazy_static = "1.4.0"
serde = "1.0.136"
serde_derive = "1.0.103"
serde_json = "1.0.79"
solana-config-program = { path = "../programs/config", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.8" }
solana-config-program = { path = "../programs/config", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
thiserror = "1.0"
zstd = "0.11.1"


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accounts-bench"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -12,11 +12,11 @@ publish = false
clap = "2.33.1"
log = "0.4.14"
rayon = "1.5.1"
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -10,8 +10,11 @@ use {
accounts_db::AccountShrinkThreshold,
accounts_index::AccountSecondaryIndexes,
ancestors::Ancestors,
rent_collector::RentCollector,
},
solana_sdk::{
genesis_config::ClusterType, pubkey::Pubkey, sysvar::epoch_schedule::EpochSchedule,
},
solana_sdk::{genesis_config::ClusterType, pubkey::Pubkey},
std::{env, fs, path::PathBuf},
};
@@ -114,7 +117,12 @@ fn main() {
} else {
let mut pubkeys: Vec<Pubkey> = vec![];
let mut time = Measure::start("hash");
let results = accounts.accounts_db.update_accounts_hash(0, &ancestors);
let results = accounts.accounts_db.update_accounts_hash(
0,
&ancestors,
&EpochSchedule::default(),
&RentCollector::default(),
);
time.stop();
let mut time_store = Measure::start("hash using store");
let results_store = accounts.accounts_db.update_accounts_hash_with_index_option(
@@ -124,7 +132,8 @@ fn main() {
&ancestors,
None,
false,
None,
&EpochSchedule::default(),
&RentCollector::default(),
false,
);
time_store.stop();


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accounts-cluster-bench"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -13,25 +13,25 @@ clap = "2.33.1"
log = "0.4.14"
rand = "0.7.0"
rayon = "1.5.1"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.8" }
solana-client = { path = "../client", version = "=1.10.8" }
solana-faucet = { path = "../faucet", version = "=1.10.8" }
solana-gossip = { path = "../gossip", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-net-utils = { path = "../net-utils", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
solana-client = { path = "../client", version = "=1.11.0" }
solana-faucet = { path = "../faucet", version = "=1.11.0" }
solana-gossip = { path = "../gossip", version = "=1.11.0" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
[dev-dependencies]
solana-core = { path = "../core", version = "=1.10.8" }
solana-local-cluster = { path = "../local-cluster", version = "=1.10.8" }
solana-test-validator = { path = "../test-validator", version = "=1.10.8" }
solana-core = { path = "../core", version = "=1.11.0" }
solana-local-cluster = { path = "../local-cluster", version = "=1.11.0" }
solana-test-validator = { path = "../test-validator", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-banking-bench"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -14,17 +14,17 @@ crossbeam-channel = "0.5"
log = "0.4.14"
rand = "0.7.0"
rayon = "1.5.1"
solana-core = { path = "../core", version = "=1.10.8" }
solana-gossip = { path = "../gossip", version = "=1.10.8" }
solana-ledger = { path = "../ledger", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-perf = { path = "../perf", version = "=1.10.8" }
solana-poh = { path = "../poh", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-core = { path = "../core", version = "=1.11.0" }
solana-gossip = { path = "../gossip", version = "=1.11.0" }
solana-ledger = { path = "../ledger", version = "=1.11.0" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-perf = { path = "../perf", version = "=1.11.0" }
solana-poh = { path = "../poh", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,6 +1,6 @@
[package]
name = "solana-banks-client"
version = "1.10.8"
version = "1.11.0"
description = "Solana banks client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,17 +12,17 @@ edition = "2021"
[dependencies]
borsh = "0.9.3"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.10.8" }
solana-program = { path = "../sdk/program", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-banks-interface = { path = "../banks-interface", version = "=1.11.0" }
solana-program = { path = "../sdk/program", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
tarpc = { version = "0.27.2", features = ["full"] }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
[dev-dependencies]
solana-banks-server = { path = "../banks-server", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-banks-server = { path = "../banks-server", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
[lib]
crate-type = ["lib"]


@@ -1,5 +1,8 @@
use {
solana_sdk::{transaction::TransactionError, transport::TransportError},
solana_sdk::{
transaction::TransactionError, transaction_context::TransactionReturnData,
transport::TransportError,
},
std::io,
tarpc::client::RpcError,
thiserror::Error,
@@ -25,6 +28,7 @@ pub enum BanksClientError {
err: TransactionError,
logs: Vec<String>,
units_consumed: u64,
return_data: Option<TransactionReturnData>,
},
}


@@ -247,6 +247,7 @@ impl BanksClient {
err,
logs: simulation_details.logs,
units_consumed: simulation_details.units_consumed,
return_data: simulation_details.return_data,
}),
BanksTransactionResultWithSimulation {
result: Some(result),

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-banks-interface"
version = "1.10.8"
version = "1.11.0"
description = "Solana banks RPC interface"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -11,7 +11,7 @@ edition = "2021"
[dependencies]
serde = { version = "1.0.136", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
tarpc = { version = "0.27.2", features = ["full"] }
[lib]

View File

@@ -12,6 +12,7 @@ use {
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction, TransactionError},
transaction_context::TransactionReturnData,
},
};
@@ -35,6 +36,7 @@ pub struct TransactionStatus {
pub struct TransactionSimulationDetails {
pub logs: Vec<String>,
pub units_consumed: u64,
pub return_data: Option<TransactionReturnData>,
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-banks-server"
version = "1.10.8"
version = "1.11.0"
description = "Solana banks server"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -13,10 +13,10 @@ edition = "2021"
bincode = "1.3.3"
crossbeam-channel = "0.5"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.10.8" }
solana-banks-interface = { path = "../banks-interface", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.11.0" }
tarpc = { version = "0.27.2", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }

View File

@@ -266,6 +266,7 @@ impl Banks for BanksServer {
logs,
post_simulation_accounts: _,
units_consumed,
return_data,
} = self
.bank(commitment)
.simulate_transaction_unchecked(sanitized_transaction)
@@ -275,6 +276,7 @@ impl Banks for BanksServer {
simulation_details: Some(TransactionSimulationDetails {
logs,
units_consumed,
return_data,
}),
};
}

View File

@@ -2,18 +2,18 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-bench-streamer"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
clap = "2.33.1"
crossbeam-channel = "0.5"
solana-net-utils = { path = "../net-utils", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
clap = { version = "3.1.5", features = ["cargo"] }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -1,6 +1,6 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, value_t, App, Arg},
clap::{crate_description, crate_name, Arg, Command},
crossbeam_channel::unbounded,
solana_streamer::{
packet::{Packet, PacketBatch, PacketBatchRecycler, PACKET_DATA_SIZE},
@@ -57,18 +57,18 @@ fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketBatchReceiver) ->
fn main() -> Result<()> {
let mut num_sockets = 1usize;
let matches = App::new(crate_name!())
let matches = Command::new(crate_name!())
.about(crate_description!())
.version(solana_version::version!())
.arg(
Arg::with_name("num-recv-sockets")
Arg::new("num-recv-sockets")
.long("num-recv-sockets")
.value_name("NUM")
.takes_value(true)
.help("Use NUM receive sockets"),
)
.arg(
Arg::with_name("num-producers")
Arg::new("num-producers")
.long("num-producers")
.value_name("NUM")
.takes_value(true)
@@ -80,7 +80,7 @@ fn main() -> Result<()> {
num_sockets = max(num_sockets, n.to_string().parse().expect("integer"));
}
let num_producers = value_t!(matches, "num_producers", u64).unwrap_or(4);
let num_producers: u64 = matches.value_of_t("num_producers").unwrap_or(4);
let port = 0;
let ip_addr = IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0));
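
A minimal, self-contained sketch of the clap 3 builder API used in the migration above (App becomes Command, Arg::with_name becomes Arg::new, and the value_t! macro is replaced by value_of_t). The binary name, argument, and default value below are illustrative, not taken from the repository.

use clap::{Arg, Command};

fn main() {
    let matches = Command::new("demo")
        .about("clap 3 migration sketch")
        .arg(
            Arg::new("num-producers")
                .long("num-producers")
                .value_name("NUM")
                .takes_value(true)
                .help("Use NUM producer threads"),
        )
        .get_matches();

    // value_of_t parses the flag into the requested type; the name passed here
    // must match the one given to Arg::new.
    let num_producers: u64 = matches.value_of_t("num-producers").unwrap_or(4);
    println!("num_producers = {}", num_producers);
}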

View File

@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-bench-tps"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -15,23 +15,23 @@ log = "0.4.14"
rayon = "1.5.1"
serde_json = "1.0.79"
serde_yaml = "0.8.23"
solana-client = { path = "../client", version = "=1.10.8" }
solana-core = { path = "../core", version = "=1.10.8" }
solana-faucet = { path = "../faucet", version = "=1.10.8" }
solana-genesis = { path = "../genesis", version = "=1.10.8" }
solana-gossip = { path = "../gossip", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-metrics = { path = "../metrics", version = "=1.10.8" }
solana-net-utils = { path = "../net-utils", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-client = { path = "../client", version = "=1.11.0" }
solana-core = { path = "../core", version = "=1.11.0" }
solana-faucet = { path = "../faucet", version = "=1.11.0" }
solana-genesis = { path = "../genesis", version = "=1.11.0" }
solana-gossip = { path = "../gossip", version = "=1.11.0" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-metrics = { path = "../metrics", version = "=1.11.0" }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
[dev-dependencies]
serial_test = "0.6.0"
solana-local-cluster = { path = "../local-cluster", version = "=1.10.8" }
solana-local-cluster = { path = "../local-cluster", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -9,7 +9,6 @@ use {
solana_client::thin_client::create_client,
solana_core::validator::ValidatorConfig,
solana_faucet::faucet::run_local_faucet_with_port,
solana_gossip::cluster_info::VALIDATOR_PORT_RANGE,
solana_local_cluster::{
local_cluster::{ClusterConfig, LocalCluster},
validator_configs::make_identical_validator_configs,
@@ -46,8 +45,8 @@ fn test_bench_tps_local_cluster(config: Config) {
);
let client = Arc::new(create_client(
(cluster.entry_point_info.rpc, cluster.entry_point_info.tpu),
VALIDATOR_PORT_RANGE,
cluster.entry_point_info.rpc,
cluster.entry_point_info.tpu,
));
let (addr_sender, addr_receiver) = unbounded();

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-bloom"
version = "1.10.8"
version = "1.11.0"
description = "Solana bloom filter"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -17,9 +17,9 @@ rand = "0.7.0"
rayon = "1.5.1"
serde = { version = "1.0.136", features = ["rc"] }
serde_derive = "1.0.103"
solana-frozen-abi = { path = "../frozen-abi", version = "=1.10.8" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.11.0" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
[lib]
crate-type = ["lib"]

View File

@@ -1,7 +1,3 @@
---
title: Handle Duplicate Block
---
# Leader Duplicate Block Slashing
This design describes how the cluster slashes leaders that produce duplicate

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-bucket-map"
version = "1.10.8"
version = "1.11.0"
description = "solana-bucket-map"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-bucket-map"
@@ -15,14 +15,14 @@ log = { version = "0.4.11" }
memmap2 = "0.5.3"
modular-bitfield = "0.11.2"
rand = "0.7.0"
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
tempfile = "3.3.0"
[dev-dependencies]
fs_extra = "1.2.0"
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.11.0" }
[lib]
crate-type = ["lib"]

View File

@@ -303,7 +303,7 @@ EOF
command_step "local-cluster-slow" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-slow.sh" \
30
40
}
pull_or_push_steps() {

View File

@@ -295,7 +295,7 @@ EOF
command_step "local-cluster-slow" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-slow.sh" \
30
40
}
pull_or_push_steps() {

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-clap-utils"
version = "1.10.8"
version = "1.11.0"
description = "Solana utilities for the clap"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -13,9 +13,9 @@ edition = "2021"
chrono = "0.4"
clap = "2.33.0"
rpassword = "6.0"
solana-perf = { path = "../perf", version = "=1.10.8" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.10.8", default-features = false }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-perf = { path = "../perf", version = "=1.11.0" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.11.0", default-features = false }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
thiserror = "1.0.30"
tiny-bip39 = "0.8.2"
uriparse = "0.6.3"

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli-config"
description = "Blockchain, Rebuilt for Scale"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli-output"
description = "Blockchain, Rebuilt for Scale"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -17,14 +17,15 @@ clap = "2.33.0"
console = "0.15.0"
humantime = "2.0.1"
indicatif = "0.16.2"
pretty-hex = "0.2.1"
serde = "1.0.136"
serde_json = "1.0.79"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.8" }
solana-client = { path = "../client", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.8" }
solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
solana-client = { path = "../client", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
[dev-dependencies]

View File

@@ -14,6 +14,7 @@ use {
signature::Signature,
stake,
transaction::{TransactionError, TransactionVersion, VersionedTransaction},
transaction_context::TransactionReturnData,
},
solana_transaction_status::{Rewards, UiTransactionStatusMeta},
spl_memo::{id as spl_memo_id, v1::id as spl_memo_v1_id},
@@ -246,6 +247,7 @@ fn write_transaction<W: io::Write>(
write_fees(w, transaction_status.fee, prefix)?;
write_balances(w, transaction_status, prefix)?;
write_log_messages(w, transaction_status.log_messages.as_ref(), prefix)?;
write_return_data(w, transaction_status.return_data.as_ref(), prefix)?;
write_rewards(w, transaction_status.rewards.as_ref(), prefix)?;
} else {
writeln!(w, "{}Status: Unavailable", prefix)?;
@@ -576,6 +578,25 @@ fn write_balances<W: io::Write>(
Ok(())
}
fn write_return_data<W: io::Write>(
w: &mut W,
return_data: Option<&TransactionReturnData>,
prefix: &str,
) -> io::Result<()> {
if let Some(return_data) = return_data {
if !return_data.data.is_empty() {
use pretty_hex::*;
writeln!(
w,
"{}Return Data from Program {}:",
prefix, return_data.program_id
)?;
writeln!(w, "{} {:?}", prefix, return_data.data.hex_dump())?;
}
}
Ok(())
}
fn write_log_messages<W: io::Write>(
w: &mut W,
log_messages: Option<&Vec<String>>,
@@ -750,6 +771,10 @@ mod test {
commission: None,
}]),
loaded_addresses: LoadedAddresses::default(),
return_data: Some(TransactionReturnData {
program_id: Pubkey::new_from_array([2u8; 32]),
data: vec![1, 2, 3],
}),
};
let output = {
@@ -786,6 +811,9 @@ Status: Ok
Account 1 balance: ◎0.00001 -> ◎0.0000099
Log Messages:
Test message
Return Data from Program 8qbHbw2BbbTHBW1sbeqakYXVKRQM8Ne7pLK7m6CVfeR:
Length: 3 (0x3) bytes
0000: 01 02 03 ...
Rewards:
Address Type Amount New Balance \0
4vJ9JU1bJJE96FWSJKvHsmmFADCg4gpZQff4P3bkLKi rent -◎0.000000100 ◎0.000009900 \0
@@ -820,6 +848,10 @@ Rewards:
commission: None,
}]),
loaded_addresses,
return_data: Some(TransactionReturnData {
program_id: Pubkey::new_from_array([2u8; 32]),
data: vec![1, 2, 3],
}),
};
let output = {
@@ -865,6 +897,9 @@ Status: Ok
Account 3 balance: ◎0.00002
Log Messages:
Test message
Return Data from Program 8qbHbw2BbbTHBW1sbeqakYXVKRQM8Ne7pLK7m6CVfeR:
Length: 3 (0x3) bytes
0000: 01 02 03 ...
Rewards:
Address Type Amount New Balance \0
CktRuQ2mttgRGkXJtyksdKHjUdc2C4TgDzyB98oEzy8 rent -◎0.000000100 ◎0.000014900 \0
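
For context on the rendering asserted above, a minimal sketch of the pretty-hex output that write_return_data produces; the helper name print_return_data is illustrative, and only the pretty_hex usage that appears in the diff (hex_dump formatted with {:?}) is assumed.

use pretty_hex::*; // brings the hex_dump() extension method into scope
use std::io::{self, Write};

// Prints nothing for empty return data, otherwise a header line followed by a
// hex dump, mirroring the write_return_data helper above.
fn print_return_data<W: Write>(w: &mut W, program_id: &str, data: &[u8]) -> io::Result<()> {
    if !data.is_empty() {
        writeln!(w, "Return Data from Program {}:", program_id)?;
        writeln!(w, "  {:?}", data.hex_dump())?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // With [1, 2, 3] this yields roughly the shape asserted in the tests above:
    // a "Length: 3 (0x3) bytes" line and a "0000:  01 02 03" row.
    print_return_data(
        &mut io::stdout(),
        "8qbHbw2BbbTHBW1sbeqakYXVKRQM8Ne7pLK7m6CVfeR",
        &[1, 2, 3],
    )
}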

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli"
description = "Blockchain, Rebuilt for Scale"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -27,29 +27,29 @@ semver = "1.0.6"
serde = "1.0.136"
serde_derive = "1.0.103"
serde_json = "1.0.79"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.8" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.10.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.8" }
solana-cli-config = { path = "../cli-config", version = "=1.10.8" }
solana-cli-output = { path = "../cli-output", version = "=1.10.8" }
solana-client = { path = "../client", version = "=1.10.8" }
solana-config-program = { path = "../programs/config", version = "=1.10.8" }
solana-faucet = { path = "../faucet", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-program-runtime = { path = "../program-runtime", version = "=1.10.8" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.8" }
solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.11.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
solana-cli-output = { path = "../cli-output", version = "=1.11.0" }
solana-client = { path = "../client", version = "=1.11.0" }
solana-config-program = { path = "../programs/config", version = "=1.11.0" }
solana-faucet = { path = "../faucet", version = "=1.11.0" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-program-runtime = { path = "../program-runtime", version = "=1.11.0" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
solana_rbpf = "=0.2.24"
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
thiserror = "1.0.30"
tiny-bip39 = "0.8.2"
[dev-dependencies]
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-test-validator = { path = "../test-validator", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-test-validator = { path = "../test-validator", version = "=1.11.0" }
tempfile = "3.3.0"
[[bin]]

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-client-test"
version = "1.10.8"
version = "1.11.0"
description = "Solana RPC Test"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -14,25 +14,25 @@ publish = false
futures-util = "0.3.21"
serde_json = "1.0.79"
serial_test = "0.6.0"
solana-client = { path = "../client", version = "=1.10.8" }
solana-ledger = { path = "../ledger", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.10.8" }
solana-metrics = { path = "../metrics", version = "=1.10.8" }
solana-perf = { path = "../perf", version = "=1.10.8" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.10.8" }
solana-rpc = { path = "../rpc", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-test-validator = { path = "../test-validator", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-client = { path = "../client", version = "=1.11.0" }
solana-ledger = { path = "../ledger", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.11.0" }
solana-metrics = { path = "../metrics", version = "=1.11.0" }
solana-perf = { path = "../perf", version = "=1.11.0" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.11.0" }
solana-rpc = { path = "../rpc", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-test-validator = { path = "../test-validator", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
systemstat = "0.1.10"
tokio = { version = "1", features = ["full"] }
[dev-dependencies]
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-client"
version = "1.10.8"
version = "1.11.0"
description = "Solana Client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -25,6 +25,7 @@ itertools = "0.10.2"
jsonrpc-core = "18.0.0"
lazy_static = "1.4.0"
log = "0.4.14"
lru = "0.7.5"
quinn = "0.8.0"
rand = "0.7.0"
rand_chacha = "0.2.2"
@@ -35,16 +36,16 @@ semver = "1.0.6"
serde = "1.0.136"
serde_derive = "1.0.103"
serde_json = "1.0.79"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.8" }
solana-faucet = { path = "../faucet", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-net-utils = { path = "../net-utils", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.8" }
solana-account-decoder = { path = "../account-decoder", version = "=1.11.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
solana-faucet = { path = "../faucet", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-stream = "0.1.8"
@@ -53,9 +54,10 @@ tungstenite = { version = "0.17.2", features = ["rustls-tls-webpki-roots"] }
url = "2.2.2"
[dev-dependencies]
anyhow = "1.0.45"
assert_matches = "1.5.0"
jsonrpc-http-server = "18.0.0"
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -3,10 +3,11 @@ use {
quic_client::QuicTpuConnection, tpu_connection::TpuConnection, udp_client::UdpTpuConnection,
},
lazy_static::lazy_static,
lru::LruCache,
solana_net_utils::VALIDATOR_PORT_RANGE,
solana_sdk::{transaction::VersionedTransaction, transport::TransportError},
std::{
collections::{hash_map::Entry, BTreeMap, HashMap},
net::{SocketAddr, UdpSocket},
net::{IpAddr, Ipv4Addr, SocketAddr},
sync::{Arc, Mutex},
},
};
@@ -21,26 +22,14 @@ enum Connection {
}
struct ConnMap {
// Keeps track of the connection associated with an addr and the last time it was used
map: HashMap<SocketAddr, (Connection, u64)>,
// Helps to find the least recently used connection. The search and inserts are O(log(n))
// but since we're bounding the size of the collections, this should be constant
// (and hopefully negligible) time. In theory, we can do this in constant time
// with a queue implemented as a doubly-linked list (and all the
// HashMap entries holding a "pointer" to the corresponding linked-list node),
// so we can push, pop and bump a used connection back to the end of the queue in O(1) time, but
// that seems non-"Rust-y" and low bang/buck. This is still pretty terrible though...
last_used_times: BTreeMap<u64, SocketAddr>,
ticks: u64,
map: LruCache<SocketAddr, Connection>,
use_quic: bool,
}
impl ConnMap {
pub fn new() -> Self {
Self {
map: HashMap::new(),
last_used_times: BTreeMap::new(),
ticks: 0,
map: LruCache::new(MAX_CONNECTIONS),
use_quic: false,
}
}
@@ -59,53 +48,29 @@ pub fn set_use_quic(use_quic: bool) {
map.set_use_quic(use_quic);
}
#[allow(dead_code)]
// TODO: see https://github.com/solana-labs/solana/issues/23661
// remove lazy_static and optimize and refactor this
fn get_connection(addr: &SocketAddr) -> Connection {
let mut map = (*CONNECTION_MAP).lock().unwrap();
let ticks = map.ticks;
let use_quic = map.use_quic;
let (conn, target_ticks) = match map.map.entry(*addr) {
Entry::Occupied(mut entry) => {
let mut pair = entry.get_mut();
let old_ticks = pair.1;
pair.1 = ticks;
(pair.0.clone(), old_ticks)
}
Entry::Vacant(entry) => {
let send_socket = UdpSocket::bind("0.0.0.0:0").unwrap();
// TODO: see https://github.com/solana-labs/solana/issues/23659
// make it configurable (e.g. via the command line) whether to use UDP or Quic
let conn = if use_quic {
match map.map.get(addr) {
Some(connection) => connection.clone(),
None => {
let (_, send_socket) = solana_net_utils::bind_in_range(
IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)),
VALIDATOR_PORT_RANGE,
)
.unwrap();
let connection = if map.use_quic {
Connection::Quic(Arc::new(QuicTpuConnection::new(send_socket, *addr)))
} else {
Connection::Udp(Arc::new(UdpTpuConnection::new(send_socket, *addr)))
};
entry.insert((conn.clone(), ticks));
(conn, ticks)
map.map.put(*addr, connection.clone());
connection
}
};
let num_connections = map.map.len();
if num_connections > MAX_CONNECTIONS {
let (old_ticks, target_addr) = {
let (old_ticks, target_addr) = map.last_used_times.iter().next().unwrap();
(*old_ticks, *target_addr)
};
map.map.remove(&target_addr);
map.last_used_times.remove(&old_ticks);
}
if target_ticks != ticks {
map.last_used_times.remove(&target_ticks);
}
map.last_used_times.insert(ticks, *addr);
map.ticks += 1;
conn
}
// TODO: see https://github.com/solana-labs/solana/issues/23851
@@ -218,11 +183,11 @@ mod tests {
{
let map = (*CONNECTION_MAP).lock().unwrap();
addrs.iter().for_each(|a| {
let conn = map.map.get(a).expect("Address not found");
assert!(a.ip() == ip(conn.0.clone()));
let conn = map.map.peek(a).expect("Address not found");
assert!(a.ip() == ip(conn.clone()));
});
assert!(map.map.get(&first_addr).is_none());
assert!(map.map.peek(&first_addr).is_none());
}
// Test that get_connection updates which connection is next up for eviction
@@ -235,7 +200,7 @@ mod tests {
get_connection(&get_addr(&mut rng));
let map = (*CONNECTION_MAP).lock().unwrap();
assert!(map.map.get(&addrs[0]).is_some());
assert!(map.map.get(&addrs[1]).is_none());
assert!(map.map.peek(&addrs[0]).is_some());
assert!(map.map.peek(&addrs[1]).is_none());
}
}
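
A minimal sketch of the LRU pattern adopted above, using the same lru crate calls that appear in this diff: get promotes an entry to most-recently-used, put evicts the least recently used entry once capacity is reached, and peek inspects without promoting (which is why the tests switch from get to peek). The ConnMapSketch type, the String payload standing in for the Connection enum, and the capacity of 4 are placeholders.

use lru::LruCache;
use std::net::SocketAddr;

const MAX_CONNECTIONS: usize = 4; // illustrative; the real constant is defined elsewhere

struct ConnMapSketch {
    map: LruCache<SocketAddr, String>,
}

impl ConnMapSketch {
    fn new() -> Self {
        Self { map: LruCache::new(MAX_CONNECTIONS) }
    }

    fn get_or_insert(&mut self, addr: SocketAddr) -> String {
        if let Some(conn) = self.map.get(&addr) {
            return conn.clone(); // hit: entry is bumped to most-recently-used
        }
        let conn = format!("conn-to-{}", addr);
        self.map.put(addr, conn.clone()); // miss: insert, possibly evicting the LRU entry
        conn
    }
}

fn main() {
    let mut cache = ConnMapSketch::new();
    let addr: SocketAddr = "127.0.0.1:8001".parse().unwrap();
    println!("{}", cache.get_or_insert(addr));
}

Bounding the cache this way removes the manual ticks/BTreeMap bookkeeping described in the old ConnMap comment, since eviction order is maintained by the cache itself.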

View File

@@ -229,6 +229,7 @@ impl RpcSender for MockSender {
post_token_balances: None,
rewards: None,
loaded_addresses: None,
return_data: None,
}),
},
block_time: Some(1628633791),
@@ -340,6 +341,7 @@ impl RpcSender for MockSender {
logs: None,
accounts: None,
units_consumed: None,
return_data: None,
},
})?,
"getMinimumBalanceForRentExemption" => json![20],

View File

@@ -1,3 +1,5 @@
//! Durable transaction nonce helpers.
use {
crate::rpc_client::RpcClient,
solana_sdk::{
@@ -32,10 +34,23 @@ pub enum Error {
Client(String),
}
/// Get a nonce account from the network.
///
/// This is like [`RpcClient::get_account`] except:
///
/// - it returns this module's [`Error`] type,
/// - it returns an error if any of the checks from [`account_identity_ok`] fail.
pub fn get_account(rpc_client: &RpcClient, nonce_pubkey: &Pubkey) -> Result<Account, Error> {
get_account_with_commitment(rpc_client, nonce_pubkey, CommitmentConfig::default())
}
/// Get a nonce account from the network.
///
/// This is like [`RpcClient::get_account_with_commitment`] except:
///
/// - it returns this module's [`Error`] type,
/// - it returns an error if the account does not exist,
/// - it returns an error if any of the checks from [`account_identity_ok`] fail.
pub fn get_account_with_commitment(
rpc_client: &RpcClient,
nonce_pubkey: &Pubkey,
@@ -52,6 +67,13 @@ pub fn get_account_with_commitment(
.and_then(|a| account_identity_ok(&a).map(|()| a))
}
/// Perform basic checks that an account has nonce-like properties.
///
/// # Errors
///
/// Returns [`Error::InvalidAccountOwner`] if the account is not owned by the
/// system program. Returns [`Error::UnexpectedDataSize`] if the account
/// contains no data.
pub fn account_identity_ok<T: ReadableAccount>(account: &T) -> Result<(), Error> {
if account.owner() != &system_program::id() {
Err(Error::InvalidAccountOwner)
@@ -62,6 +84,47 @@ pub fn account_identity_ok<T: ReadableAccount>(account: &T) -> Result<(), Error>
}
}
/// Deserialize the state of a durable transaction nonce account.
///
/// # Errors
///
/// Returns an error if the account is not owned by the system program or
/// contains no data.
///
/// # Examples
///
/// Determine if a nonce account is initialized:
///
/// ```no_run
/// use solana_client::{
/// rpc_client::RpcClient,
/// nonce_utils,
/// };
/// use solana_sdk::{
/// nonce::State,
/// pubkey::Pubkey,
/// };
/// use anyhow::Result;
///
/// fn is_nonce_initialized(
/// client: &RpcClient,
/// nonce_account_pubkey: &Pubkey,
/// ) -> Result<bool> {
///
/// // Sign the tx with nonce_account's `blockhash` instead of the
/// // network's latest blockhash.
/// let nonce_account = client.get_account(nonce_account_pubkey)?;
/// let nonce_state = nonce_utils::state_from_account(&nonce_account)?;
///
/// Ok(!matches!(nonce_state, State::Uninitialized))
/// }
/// #
/// # let client = RpcClient::new(String::new());
/// # let nonce_account_pubkey = Pubkey::new_unique();
/// # is_nonce_initialized(&client, &nonce_account_pubkey)?;
/// #
/// # Ok::<(), anyhow::Error>(())
/// ```
pub fn state_from_account<T: ReadableAccount + StateMut<Versions>>(
account: &T,
) -> Result<State, Error> {
@@ -71,6 +134,93 @@ pub fn state_from_account<T: ReadableAccount + StateMut<Versions>>(
.map(|v| v.convert_to_current())
}
/// Deserialize the state data of a durable transaction nonce account.
///
/// # Errors
///
/// Returns an error if the account is not owned by the system program or
/// contains no data. Returns an error if the account state is uninitialized or
/// fails to deserialize.
///
/// # Examples
///
/// Create and sign a transaction with a durable nonce:
///
/// ```no_run
/// use solana_client::{
/// rpc_client::RpcClient,
/// nonce_utils,
/// };
/// use solana_sdk::{
/// message::Message,
/// pubkey::Pubkey,
/// signature::{Keypair, Signer},
/// system_instruction,
/// transaction::Transaction,
/// };
/// use std::path::Path;
/// use anyhow::Result;
/// # use anyhow::anyhow;
///
/// fn create_transfer_tx_with_nonce(
/// client: &RpcClient,
/// nonce_account_pubkey: &Pubkey,
/// payer: &Keypair,
/// receiver: &Pubkey,
/// amount: u64,
/// tx_path: &Path,
/// ) -> Result<()> {
///
/// let instr_transfer = system_instruction::transfer(
/// &payer.pubkey(),
/// receiver,
/// amount,
/// );
///
/// // In this example, `payer` is `nonce_account_pubkey`'s authority
/// let instr_advance_nonce_account = system_instruction::advance_nonce_account(
/// nonce_account_pubkey,
/// &payer.pubkey(),
/// );
///
/// // The `advance_nonce_account` instruction must be the first issued in
/// // the transaction.
/// let message = Message::new(
/// &[
/// instr_advance_nonce_account,
/// instr_transfer
/// ],
/// Some(&payer.pubkey()),
/// );
///
/// let mut tx = Transaction::new_unsigned(message);
///
/// // Sign the tx with nonce_account's `blockhash` instead of the
/// // network's latest blockhash.
/// let nonce_account = client.get_account(nonce_account_pubkey)?;
/// let nonce_data = nonce_utils::data_from_account(&nonce_account)?;
/// let blockhash = nonce_data.blockhash;
///
/// tx.try_sign(&[payer], blockhash)?;
///
/// // Save the signed transaction locally for later submission.
/// save_tx_to_file(&tx_path, &tx)?;
///
/// Ok(())
/// }
/// #
/// # fn save_tx_to_file(path: &Path, tx: &Transaction) -> Result<()> {
/// # Ok(())
/// # }
/// #
/// # let client = RpcClient::new(String::new());
/// # let nonce_account_pubkey = Pubkey::new_unique();
/// # let payer = Keypair::new();
/// # let receiver = Pubkey::new_unique();
/// # create_transfer_tx_with_nonce(&client, &nonce_account_pubkey, &payer, &receiver, 1024, Path::new("new_tx"))?;
/// #
/// # Ok::<(), anyhow::Error>(())
/// ```
pub fn data_from_account<T: ReadableAccount + StateMut<Versions>>(
account: &T,
) -> Result<Data, Error> {
@@ -78,6 +228,12 @@ pub fn data_from_account<T: ReadableAccount + StateMut<Versions>>(
state_from_account(account).and_then(|ref s| data_from_state(s).map(|d| d.clone()))
}
/// Get the nonce data from its [`State`] value.
///
/// # Errors
///
/// Returns [`Error::InvalidStateForOperation`] if `state` is
/// [`State::Uninitialized`].
pub fn data_from_state(state: &State) -> Result<&Data, Error> {
match state {
State::Uninitialized => Err(Error::InvalidStateForOperation),

View File

@@ -7,6 +7,7 @@ use {
hash::Hash,
inflation::Inflation,
transaction::{Result, TransactionError},
transaction_context::TransactionReturnData,
},
solana_transaction_status::{
ConfirmedTransactionStatusWithSignature, TransactionConfirmationStatus, UiConfirmedBlock,
@@ -347,6 +348,7 @@ pub struct RpcSimulateTransactionResult {
pub logs: Option<Vec<String>>,
pub accounts: Option<Vec<Option<UiAccount>>>,
pub units_consumed: Option<u64>,
pub return_data: Option<TransactionReturnData>,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
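
A minimal sketch (not part of the diff) of consuming the new return_data field: callers that previously inspected only logs and units_consumed can now also recover the bytes a program returned during simulation. Only the TransactionReturnData fields shown in this diff (program_id, data) are assumed.

use solana_sdk::{pubkey::Pubkey, transaction_context::TransactionReturnData};

fn describe_return_data(return_data: Option<&TransactionReturnData>) -> String {
    match return_data {
        Some(rd) if !rd.data.is_empty() => {
            format!("program {} returned {} bytes", rd.program_id, rd.data.len())
        }
        _ => "no return data".to_string(),
    }
}

fn main() {
    let rd = TransactionReturnData {
        program_id: Pubkey::new_unique(),
        data: vec![1, 2, 3],
    };
    // A caller holding an RpcSimulateTransactionResult would pass
    // result.return_data.as_ref() here instead.
    println!("{}", describe_return_data(Some(&rd)));
    println!("{}", describe_return_data(None));
}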

View File

@@ -5,8 +5,13 @@
use {
crate::{
rpc_client::RpcClient, rpc_config::RpcProgramAccountsConfig, rpc_response::Response,
tpu_connection::TpuConnection, udp_client::UdpTpuConnection,
connection_cache::{
par_serialize_and_send_transaction_batch, send_wire_transaction,
serialize_and_send_transaction,
},
rpc_client::RpcClient,
rpc_config::RpcProgramAccountsConfig,
rpc_response::Response,
},
log::*,
solana_sdk::{
@@ -29,7 +34,7 @@ use {
},
std::{
io,
net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket},
net::SocketAddr,
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
RwLock,
@@ -118,64 +123,52 @@ impl ClientOptimizer {
}
/// An object for querying and sending transactions to the network.
pub struct ThinClient<C: 'static + TpuConnection> {
pub struct ThinClient {
rpc_clients: Vec<RpcClient>,
tpu_connections: Vec<C>,
tpu_addrs: Vec<SocketAddr>,
optimizer: ClientOptimizer,
}
impl<C: 'static + TpuConnection> ThinClient<C> {
impl ThinClient {
/// Create a new ThinClient that will interface with the Rpc at `rpc_addr` using TCP
/// and the Tpu at `tpu_addr` over `transactions_socket` using Quic or UDP
/// (currently hardcoded to UDP)
pub fn new(rpc_addr: SocketAddr, tpu_addr: SocketAddr, transactions_socket: UdpSocket) -> Self {
let tpu_connection = C::new(transactions_socket, tpu_addr);
Self::new_from_client(RpcClient::new_socket(rpc_addr), tpu_connection)
pub fn new(rpc_addr: SocketAddr, tpu_addr: SocketAddr) -> Self {
Self::new_from_client(RpcClient::new_socket(rpc_addr), tpu_addr)
}
pub fn new_socket_with_timeout(
rpc_addr: SocketAddr,
tpu_addr: SocketAddr,
transactions_socket: UdpSocket,
timeout: Duration,
) -> Self {
let rpc_client = RpcClient::new_socket_with_timeout(rpc_addr, timeout);
let tpu_connection = C::new(transactions_socket, tpu_addr);
Self::new_from_client(rpc_client, tpu_connection)
Self::new_from_client(rpc_client, tpu_addr)
}
fn new_from_client(rpc_client: RpcClient, tpu_connection: C) -> Self {
fn new_from_client(rpc_client: RpcClient, tpu_addr: SocketAddr) -> Self {
Self {
rpc_clients: vec![rpc_client],
tpu_connections: vec![tpu_connection],
tpu_addrs: vec![tpu_addr],
optimizer: ClientOptimizer::new(0),
}
}
pub fn new_from_addrs(
rpc_addrs: Vec<SocketAddr>,
tpu_addrs: Vec<SocketAddr>,
transactions_socket: UdpSocket,
) -> Self {
pub fn new_from_addrs(rpc_addrs: Vec<SocketAddr>, tpu_addrs: Vec<SocketAddr>) -> Self {
assert!(!rpc_addrs.is_empty());
assert_eq!(rpc_addrs.len(), tpu_addrs.len());
let rpc_clients: Vec<_> = rpc_addrs.into_iter().map(RpcClient::new_socket).collect();
let optimizer = ClientOptimizer::new(rpc_clients.len());
let tpu_connections: Vec<_> = tpu_addrs
.into_iter()
.map(|tpu_addr| C::new(transactions_socket.try_clone().unwrap(), tpu_addr))
.collect();
Self {
rpc_clients,
tpu_connections,
tpu_addrs,
optimizer,
}
}
fn tpu_connection(&self) -> &C {
&self.tpu_connections[self.optimizer.best()]
fn tpu_addr(&self) -> &SocketAddr {
&self.tpu_addrs[self.optimizer.best()]
}
fn rpc_client(&self) -> &RpcClient {
@@ -220,8 +213,7 @@ impl<C: 'static + TpuConnection> ThinClient<C> {
while now.elapsed().as_secs() < wait_time as u64 {
if num_confirmed == 0 {
// Send the transaction if there has been no confirmation (e.g. the first time)
self.tpu_connection()
.send_wire_transaction(&wire_transaction)?;
send_wire_transaction(&wire_transaction, self.tpu_addr())?;
}
if let Ok(confirmed_blocks) = self.poll_for_signature_confirmation(
@@ -316,13 +308,13 @@ impl<C: 'static + TpuConnection> ThinClient<C> {
}
}
impl<C: 'static + TpuConnection> Client for ThinClient<C> {
impl Client for ThinClient {
fn tpu_addr(&self) -> String {
self.tpu_connection().tpu_addr().to_string()
self.tpu_addr().to_string()
}
}
impl<C: 'static + TpuConnection> SyncClient for ThinClient<C> {
impl SyncClient for ThinClient {
fn send_and_confirm_message<T: Signers>(
&self,
keypairs: &T,
@@ -602,69 +594,34 @@ impl<C: 'static + TpuConnection> SyncClient for ThinClient<C> {
}
}
impl<C: 'static + TpuConnection> AsyncClient for ThinClient<C> {
fn async_send_transaction(&self, transaction: Transaction) -> TransportResult<Signature> {
let transaction = VersionedTransaction::from(transaction);
self.tpu_connection()
.serialize_and_send_transaction(&transaction)?;
impl AsyncClient for ThinClient {
fn async_send_versioned_transaction(
&self,
transaction: VersionedTransaction,
) -> TransportResult<Signature> {
serialize_and_send_transaction(&transaction, self.tpu_addr())?;
Ok(transaction.signatures[0])
}
fn async_send_batch(&self, transactions: Vec<Transaction>) -> TransportResult<()> {
let batch: Vec<VersionedTransaction> = transactions.into_iter().map(Into::into).collect();
self.tpu_connection()
.par_serialize_and_send_transaction_batch(&batch[..])?;
fn async_send_versioned_transaction_batch(
&self,
batch: Vec<VersionedTransaction>,
) -> TransportResult<()> {
par_serialize_and_send_transaction_batch(&batch[..], self.tpu_addr())?;
Ok(())
}
fn async_send_message<T: Signers>(
&self,
keypairs: &T,
message: Message,
recent_blockhash: Hash,
) -> TransportResult<Signature> {
let transaction = Transaction::new(keypairs, message, recent_blockhash);
self.async_send_transaction(transaction)
}
fn async_send_instruction(
&self,
keypair: &Keypair,
instruction: Instruction,
recent_blockhash: Hash,
) -> TransportResult<Signature> {
let message = Message::new(&[instruction], Some(&keypair.pubkey()));
self.async_send_message(&[keypair], message, recent_blockhash)
}
fn async_transfer(
&self,
lamports: u64,
keypair: &Keypair,
pubkey: &Pubkey,
recent_blockhash: Hash,
) -> TransportResult<Signature> {
let transfer_instruction =
system_instruction::transfer(&keypair.pubkey(), pubkey, lamports);
self.async_send_instruction(keypair, transfer_instruction, recent_blockhash)
}
}
pub fn create_client(
(rpc, tpu): (SocketAddr, SocketAddr),
range: (u16, u16),
) -> ThinClient<UdpTpuConnection> {
let (_, transactions_socket) =
solana_net_utils::bind_in_range(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), range).unwrap();
ThinClient::<UdpTpuConnection>::new(rpc, tpu, transactions_socket)
pub fn create_client(rpc: SocketAddr, tpu: SocketAddr) -> ThinClient {
ThinClient::new(rpc, tpu)
}
pub fn create_client_with_timeout(
(rpc, tpu): (SocketAddr, SocketAddr),
range: (u16, u16),
rpc: SocketAddr,
tpu: SocketAddr,
timeout: Duration,
) -> ThinClient<UdpTpuConnection> {
let (_, transactions_socket) =
solana_net_utils::bind_in_range(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), range).unwrap();
ThinClient::<UdpTpuConnection>::new_socket_with_timeout(rpc, tpu, transactions_socket, timeout)
) -> ThinClient {
ThinClient::new_socket_with_timeout(rpc, tpu, timeout)
}
#[cfg(test)]
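
A minimal usage sketch of the refactored constructor above: create_client now takes the RPC and TPU addresses as separate arguments, and socket creation moves into the shared connection cache instead of being supplied by the caller. The addresses below are placeholders.

use solana_client::thin_client::create_client;
use std::net::SocketAddr;

fn main() {
    let rpc: SocketAddr = "127.0.0.1:8899".parse().unwrap();
    let tpu: SocketAddr = "127.0.0.1:8003".parse().unwrap();

    // Previously: create_client((rpc, tpu), VALIDATOR_PORT_RANGE)
    let client = create_client(rpc, tpu);

    // The client still implements the Sync/Async client traits, e.g.
    // client.async_send_versioned_transaction(tx) as shown in the diff above.
    drop(client);
}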

View File

@@ -1,7 +1,7 @@
[package]
name = "solana-core"
description = "Blockchain, Rebuilt for Scale"
version = "1.10.8"
version = "1.11.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-core"
readme = "../README.md"
@@ -21,42 +21,42 @@ bs58 = "0.4.0"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.5"
dashmap = { version = "4.0.2", features = ["rayon", "raw-api"] }
etcd-client = { version = "0.9.0", features = ["tls"] }
etcd-client = { version = "0.8.4", features = ["tls"] }
fs_extra = "1.2.0"
histogram = "0.6.9"
itertools = "0.10.3"
log = "0.4.14"
lru = "0.7.5"
lru = "0.7.3"
rand = "0.7.0"
rand_chacha = "0.2.2"
rayon = "1.5.1"
retain_mut = "0.1.7"
serde = "1.0.136"
serde_derive = "1.0.103"
solana-address-lookup-table-program = { path = "../programs/address-lookup-table", version = "=1.10.8" }
solana-bloom = { path = "../bloom", version = "=1.10.8" }
solana-client = { path = "../client", version = "=1.10.8" }
solana-entry = { path = "../entry", version = "=1.10.8" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.10.8" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.10.8" }
solana-geyser-plugin-manager = { path = "../geyser-plugin-manager", version = "=1.10.8" }
solana-gossip = { path = "../gossip", version = "=1.10.8" }
solana-ledger = { path = "../ledger", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-metrics = { path = "../metrics", version = "=1.10.8" }
solana-net-utils = { path = "../net-utils", version = "=1.10.8" }
solana-perf = { path = "../perf", version = "=1.10.8" }
solana-poh = { path = "../poh", version = "=1.10.8" }
solana-program-runtime = { path = "../program-runtime", version = "=1.10.8" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.10.8" }
solana-replica-lib = { path = "../replica-lib", version = "=1.10.8" }
solana-rpc = { path = "../rpc", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.8" }
solana-address-lookup-table-program = { path = "../programs/address-lookup-table", version = "=1.11.0" }
solana-bloom = { path = "../bloom", version = "=1.11.0" }
solana-client = { path = "../client", version = "=1.11.0" }
solana-entry = { path = "../entry", version = "=1.11.0" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.11.0" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.11.0" }
solana-geyser-plugin-manager = { path = "../geyser-plugin-manager", version = "=1.11.0" }
solana-gossip = { path = "../gossip", version = "=1.11.0" }
solana-ledger = { path = "../ledger", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-metrics = { path = "../metrics", version = "=1.11.0" }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
solana-perf = { path = "../perf", version = "=1.11.0" }
solana-poh = { path = "../poh", version = "=1.11.0" }
solana-program-runtime = { path = "../program-runtime", version = "=1.11.0" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.11.0" }
solana-replica-lib = { path = "../replica-lib", version = "=1.11.0" }
solana-rpc = { path = "../rpc", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
sys-info = "0.9.1"
tempfile = "3.3.0"
thiserror = "1.0"
@@ -69,10 +69,10 @@ raptorq = "1.6.5"
reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
serde_json = "1.0.79"
serial_test = "0.6.0"
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-program-runtime = { path = "../program-runtime", version = "=1.10.8" }
solana-stake-program = { path = "../programs/stake", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-program-runtime = { path = "../program-runtime", version = "=1.11.0" }
solana-stake-program = { path = "../programs/stake", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
static_assertions = "1.1.0"
systemstat = "0.1.10"

View File

@@ -1,28 +1,28 @@
// Service to verify accounts hashes with other known validator nodes.
//
// Each interval, publish the snapshat hash which is the full accounts state
// Each interval, publish the snapshot hash which is the full accounts state
// hash on gossip. Monitor gossip for messages from validators in the `--known-validator`s
// set and halt the node if a mismatch is detected.
use {
crossbeam_channel::RecvTimeoutError,
rayon::ThreadPool,
solana_gossip::cluster_info::{ClusterInfo, MAX_SNAPSHOT_HASHES},
solana_measure::measure::Measure,
solana_runtime::{
accounts_db::{self, AccountsDb},
accounts_hash::HashStats,
accounts_hash::{CalcAccountsHashConfig, HashStats},
snapshot_config::SnapshotConfig,
snapshot_package::{
AccountsPackage, AccountsPackageReceiver, PendingSnapshotPackage, SnapshotPackage,
AccountsPackage, PendingAccountsPackage, PendingSnapshotPackage, SnapshotPackage,
SnapshotType,
},
sorted_storages::SortedStorages,
},
solana_sdk::{clock::Slot, hash::Hash, pubkey::Pubkey},
solana_sdk::{
clock::{Slot, SLOT_MS},
hash::Hash,
pubkey::Pubkey,
},
std::{
collections::{HashMap, HashSet},
path::{Path, PathBuf},
sync::{
atomic::{AtomicBool, Ordering},
Arc,
@@ -38,7 +38,7 @@ pub struct AccountsHashVerifier {
impl AccountsHashVerifier {
pub fn new(
accounts_package_receiver: AccountsPackageReceiver,
pending_accounts_package: PendingAccountsPackage,
pending_snapshot_package: Option<PendingSnapshotPackage>,
exit: &Arc<AtomicBool>,
cluster_info: &Arc<ClusterInfo>,
@@ -46,7 +46,6 @@ impl AccountsHashVerifier {
halt_on_known_validators_accounts_hash_mismatch: bool,
fault_injection_rate_slots: u64,
snapshot_config: Option<SnapshotConfig>,
ledger_path: PathBuf,
) -> Self {
let exit = exit.clone();
let cluster_info = cluster_info.clone();
@@ -54,36 +53,29 @@ impl AccountsHashVerifier {
.name("solana-hash-accounts".to_string())
.spawn(move || {
let mut hashes = vec![];
let mut thread_pool = None;
loop {
if exit.load(Ordering::Relaxed) {
break;
}
match accounts_package_receiver.recv_timeout(Duration::from_secs(1)) {
Ok(accounts_package) => {
if accounts_package.hash_for_testing.is_some() && thread_pool.is_none()
{
thread_pool = Some(accounts_db::make_min_priority_thread_pool());
}
Self::process_accounts_package(
accounts_package,
&cluster_info,
known_validators.as_ref(),
halt_on_known_validators_accounts_hash_mismatch,
pending_snapshot_package.as_ref(),
&mut hashes,
&exit,
fault_injection_rate_slots,
snapshot_config.as_ref(),
thread_pool.as_ref(),
&ledger_path,
);
}
Err(RecvTimeoutError::Disconnected) => break,
Err(RecvTimeoutError::Timeout) => (),
let accounts_package = pending_accounts_package.lock().unwrap().take();
if accounts_package.is_none() {
std::thread::sleep(Duration::from_millis(SLOT_MS));
continue;
}
let accounts_package = accounts_package.unwrap();
Self::process_accounts_package(
accounts_package,
&cluster_info,
known_validators.as_ref(),
halt_on_known_validators_accounts_hash_mismatch,
pending_snapshot_package.as_ref(),
&mut hashes,
&exit,
fault_injection_rate_slots,
snapshot_config.as_ref(),
);
}
})
.unwrap();
@@ -103,10 +95,8 @@ impl AccountsHashVerifier {
exit: &Arc<AtomicBool>,
fault_injection_rate_slots: u64,
snapshot_config: Option<&SnapshotConfig>,
thread_pool: Option<&ThreadPool>,
ledger_path: &Path,
) {
Self::verify_accounts_package_hash(&accounts_package, thread_pool, ledger_path);
Self::verify_accounts_package_hash(&accounts_package);
Self::push_accounts_hashes_to_cluster(
&accounts_package,
@@ -121,25 +111,35 @@ impl AccountsHashVerifier {
Self::submit_for_packaging(accounts_package, pending_snapshot_package, snapshot_config);
}
fn verify_accounts_package_hash(
accounts_package: &AccountsPackage,
thread_pool: Option<&ThreadPool>,
ledger_path: &Path,
) {
fn verify_accounts_package_hash(accounts_package: &AccountsPackage) {
let mut measure_hash = Measure::start("hash");
if let Some(expected_hash) = accounts_package.hash_for_testing {
if let Some(expected_hash) = accounts_package.accounts_hash_for_testing {
let mut sort_time = Measure::start("sort_storages");
let sorted_storages = SortedStorages::new(&accounts_package.snapshot_storages);
let (hash, lamports) = AccountsDb::calculate_accounts_hash_without_index(
ledger_path,
&sorted_storages,
thread_pool,
HashStats::default(),
false,
None,
None, // this will fail with filler accounts
None, // this code path is only for testing, so use default # passes here
)
.unwrap();
sort_time.stop();
let mut timings = HashStats {
storage_sort_us: sort_time.as_us(),
..HashStats::default()
};
timings.calc_storage_size_quartiles(&accounts_package.snapshot_storages);
let (hash, lamports) = accounts_package
.accounts
.accounts_db
.calculate_accounts_hash_without_index(
&CalcAccountsHashConfig {
use_bg_thread_pool: true,
check_hash: false,
ancestors: None,
use_write_cache: false,
epoch_schedule: &accounts_package.epoch_schedule,
rent_collector: &accounts_package.rent_collector,
},
&sorted_storages,
timings,
)
.unwrap();
assert_eq!(accounts_package.expected_capitalization, lamports);
assert_eq!(expected_hash, hash);
@@ -160,7 +160,7 @@ impl AccountsHashVerifier {
exit: &Arc<AtomicBool>,
fault_injection_rate_slots: u64,
) {
let hash = accounts_package.hash;
let hash = accounts_package.accounts_hash;
if fault_injection_rate_slots != 0
&& accounts_package.slot % fault_injection_rate_slots == 0
{
@@ -284,11 +284,15 @@ mod tests {
use {
super::*,
solana_gossip::{cluster_info::make_accounts_hashes_message, contact_info::ContactInfo},
solana_runtime::snapshot_utils::{ArchiveFormat, SnapshotVersion},
solana_runtime::{
rent_collector::RentCollector,
snapshot_utils::{ArchiveFormat, SnapshotVersion},
},
solana_sdk::{
genesis_config::ClusterType,
hash::hash,
signature::{Keypair, Signer},
sysvar::epoch_schedule::EpochSchedule,
},
solana_streamer::socket::SocketAddrSpace,
};
@@ -353,6 +357,7 @@ mod tests {
incremental_snapshot_archive_interval_slots: Slot::MAX,
..SnapshotConfig::default()
};
let accounts = Arc::new(solana_runtime::accounts::Accounts::default_for_tests());
for i in 0..MAX_SNAPSHOT_HASHES + 1 {
let accounts_package = AccountsPackage {
slot: full_snapshot_archive_interval_slots + i as u64,
@@ -360,18 +365,19 @@ mod tests {
slot_deltas: vec![],
snapshot_links: TempDir::new().unwrap(),
snapshot_storages: vec![],
hash: hash(&[i as u8]),
accounts_hash: hash(&[i as u8]),
archive_format: ArchiveFormat::TarBzip2,
snapshot_version: SnapshotVersion::default(),
snapshot_archives_dir: PathBuf::default(),
expected_capitalization: 0,
hash_for_testing: None,
accounts_hash_for_testing: None,
cluster_type: ClusterType::MainnetBeta,
snapshot_type: None,
accounts: Arc::clone(&accounts),
epoch_schedule: EpochSchedule::default(),
rent_collector: RentCollector::default(),
};
let ledger_path = TempDir::new().unwrap();
AccountsHashVerifier::process_accounts_package(
accounts_package,
&cluster_info,
@@ -382,8 +388,6 @@ mod tests {
&exit,
0,
Some(&snapshot_config),
None,
ledger_path.path(),
);
// sleep for 1ms to create a newer timestamp for gossip entry
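
A minimal sketch (illustrative types only) of the handoff pattern the loop above adopts: producer and consumer share a Mutex<Option<T>> instead of a channel, the consumer takes whatever is pending, and it sleeps briefly (SLOT_MS in the real code) when the slot is empty. The producer side is not shown in this diff, so the line that stores a package below is an assumption; u64 stands in for AccountsPackage.

use std::{
    sync::{Arc, Mutex},
    thread,
    time::Duration,
};

type PendingPackage = Arc<Mutex<Option<u64>>>; // u64 stands in for AccountsPackage

fn consume_one(pending: &PendingPackage) {
    // Mirrors the loop above: take whatever is pending, or sleep and retry later.
    match pending.lock().unwrap().take() {
        Some(slot) => println!("processing package for slot {}", slot),
        None => thread::sleep(Duration::from_millis(400)),
    }
}

fn main() {
    let pending: PendingPackage = Arc::new(Mutex::new(None));
    consume_one(&pending); // nothing pending yet, so this sleeps

    // Producer side (assumed, not shown in this hunk): hand off a package.
    *pending.lock().unwrap() = Some(42);
    consume_one(&pending); // picks it up and processes it
}

The real loop also checks an exit flag before each pass, as the code above shows.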

View File

@@ -1,5 +1,5 @@
//! The `banking_stage` processes Transaction messages. It is intended to be used
//! to contruct a software pipeline. The stage uses all available CPU cores and
//! to construct a software pipeline. The stage uses all available CPU cores and
//! can do its processing in parallel with signature verification on the GPU.
use {
crate::{
@@ -195,7 +195,7 @@ impl BankingStageStats {
}
fn report(&mut self, report_interval_ms: u64) {
// skip repoting metrics if stats is empty
// skip reporting metrics if stats is empty
if self.is_empty() {
return;
}
@@ -717,14 +717,11 @@ impl BankingStage {
// `original_unprocessed_indexes` must have remaining packets to process
// if not yet processed.
assert!(Self::packet_has_more_unprocessed_transactions(
&original_unprocessed_indexes
));
assert!(!original_unprocessed_indexes.is_empty());
true
}
}
});
proc_start.stop();
debug!(
@@ -1194,6 +1191,7 @@ impl BankingStage {
MAX_PROCESSING_AGE,
transaction_status_sender.is_some(),
transaction_status_sender.is_some(),
transaction_status_sender.is_some(),
&mut execute_and_commit_timings.execute_timings,
)
},
@@ -1353,28 +1351,38 @@ impl BankingStage {
gossip_vote_sender: &ReplayVoteSender,
qos_service: &QosService,
) -> ProcessTransactionBatchOutput {
let mut cost_model_time = Measure::start("cost_model");
let ((transactions_qos_results, cost_model_throttled_transactions_count), cost_model_time) =
Measure::this(
|_| {
let tx_costs = qos_service.compute_transaction_costs(txs.iter());
let transaction_costs = qos_service.compute_transaction_costs(txs.iter());
let (transactions_qos_results, num_included) =
qos_service.select_transactions_per_cost(txs.iter(), tx_costs.iter(), bank);
let (transactions_qos_results, num_included) =
qos_service.select_transactions_per_cost(txs.iter(), transaction_costs.iter(), bank);
let cost_model_throttled_transactions_count =
txs.len().saturating_sub(num_included);
let cost_model_throttled_transactions_count = txs.len().saturating_sub(num_included);
qos_service.accumulate_estimated_transaction_costs(
&Self::accumulate_batched_transaction_costs(
transaction_costs.iter(),
transactions_qos_results.iter(),
),
);
cost_model_time.stop();
qos_service.accumulate_estimated_transaction_costs(
&Self::accumulate_batched_transaction_costs(
tx_costs.iter(),
transactions_qos_results.iter(),
),
);
(
transactions_qos_results,
cost_model_throttled_transactions_count,
)
},
(),
"cost_model",
);
// Only lock accounts for those transactions are selected for the block;
// Once accounts are locked, other threads cannot encode transactions that will modify the
// same account state
let mut lock_time = Measure::start("lock_time");
let batch = bank.prepare_sanitized_batch_with_results(txs, transactions_qos_results.iter());
let batch =
bank.prepare_sanitized_batch_with_results(txs, transactions_qos_results.into_iter());
lock_time.stop();
// retryable_txs includes AccountInUse, WouldExceedMaxBlockCostLimit
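
A minimal sketch of the Measure::this timing pattern introduced in the hunk above, assuming only what the destructuring there implies: the helper runs the closure, stops the timer, and returns the closure's result paired with the finished Measure, replacing manual start()/stop() calls. The workload and label here are placeholders.

use solana_measure::measure::Measure;

fn main() {
    let (sum, timing) = Measure::this(
        |_| (0u64..1_000).sum::<u64>(), // the work being timed
        (),                             // arguments passed to the closure
        "demo_sum",                     // metric name, like "cost_model" above
    );
    println!("sum = {}, took {} us", sum, timing.as_us());
}
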
@@ -1389,31 +1397,21 @@ impl BankingStage {
gossip_vote_sender,
);
let mut unlock_time = Measure::start("unlock_time");
// Once the accounts are new transactions can enter the pipeline to process them
drop(batch);
unlock_time.stop();
let ExecuteAndCommitTransactionsOutput {
ref mut retryable_transaction_indexes,
ref execute_and_commit_timings,
..
} = execute_and_commit_transactions_output;
// TODO: This does not revert the cost tracker changes from all unexecuted transactions
// yet: For example tx that are too old will not be included in the block, but are not
// retryable.
QosService::update_or_remove_transaction_costs(
transaction_costs.iter(),
transactions_qos_results.iter(),
retryable_transaction_indexes,
bank,
);
retryable_transaction_indexes
.iter_mut()
.for_each(|x| *x += chunk_offset);
let mut unlock_time = Measure::start("unlock_time");
// Once the accounts are new transactions can enter the pipeline to process them
drop(batch);
unlock_time.stop();
let (cu, us) =
Self::accumulate_execute_units_and_time(&execute_and_commit_timings.execute_timings);
qos_service.accumulate_actual_execute_cu(cu);
@@ -2026,7 +2024,7 @@ impl BankingStage {
banking_stage_stats: &mut BankingStageStats,
slot_metrics_tracker: &mut LeaderSlotMetricsTracker,
) {
if Self::packet_has_more_unprocessed_transactions(&packet_indexes) {
if !packet_indexes.is_empty() {
if unprocessed_packet_batches.len() >= batch_limit {
*dropped_packet_batches_count += 1;
if let Some(dropped_batch) = unprocessed_packet_batches.pop_front() {
@@ -2052,10 +2050,6 @@ impl BankingStage {
}
}
fn packet_has_more_unprocessed_transactions(packet_indexes: &[usize]) -> bool {
!packet_indexes.is_empty()
}
pub fn join(self) -> thread::Result<()> {
for bank_thread_hdl in self.bank_thread_hdls {
bank_thread_hdl.join()?;
@@ -2167,6 +2161,7 @@ mod tests {
log_messages: None,
inner_instructions: None,
durable_nonce_fee: None,
return_data: None,
})
}
@@ -2898,131 +2893,6 @@ mod tests {
Blockstore::destroy(ledger_path.path()).unwrap();
}
#[test]
fn test_bank_process_and_record_transactions_cost_tracker() {
solana_logger::setup();
let GenesisConfigInfo {
genesis_config,
mint_keypair,
..
} = create_slow_genesis_config(10_000);
let bank = Arc::new(Bank::new_no_wallclock_throttle_for_tests(&genesis_config));
let pubkey = solana_sdk::pubkey::new_rand();
let ledger_path = get_tmp_ledger_path_auto_delete!();
{
let blockstore = Blockstore::open(ledger_path.path())
.expect("Expected to be able to open database ledger");
let (poh_recorder, _entry_receiver, record_receiver) = PohRecorder::new(
bank.tick_height(),
bank.last_blockhash(),
bank.clone(),
Some((4, 4)),
bank.ticks_per_slot(),
&pubkey,
&Arc::new(blockstore),
&Arc::new(LeaderScheduleCache::new_from_bank(&bank)),
&Arc::new(PohConfig::default()),
Arc::new(AtomicBool::default()),
);
let recorder = poh_recorder.recorder();
let poh_recorder = Arc::new(Mutex::new(poh_recorder));
let poh_simulator = simulate_poh(record_receiver, &poh_recorder);
poh_recorder.lock().unwrap().set_bank(&bank);
let (gossip_vote_sender, _gossip_vote_receiver) = unbounded();
let qos_service = QosService::new(Arc::new(RwLock::new(CostModel::default())), 1);
let get_block_cost = || bank.read_cost_tracker().unwrap().block_cost();
let get_tx_count = || bank.read_cost_tracker().unwrap().transaction_count();
assert_eq!(get_block_cost(), 0);
assert_eq!(get_tx_count(), 0);
//
// TEST: cost tracker's block cost increases when successfully processing a tx
//
let transactions = sanitize_transactions(vec![system_transaction::transfer(
&mint_keypair,
&pubkey,
1,
genesis_config.hash(),
)]);
let process_transactions_batch_output = BankingStage::process_and_record_transactions(
&bank,
&transactions,
&recorder,
0,
None,
&gossip_vote_sender,
&qos_service,
);
let ExecuteAndCommitTransactionsOutput {
executed_with_successful_result_count,
commit_transactions_result,
..
} = process_transactions_batch_output.execute_and_commit_transactions_output;
assert_eq!(executed_with_successful_result_count, 1);
assert!(commit_transactions_result.is_ok());
let single_transfer_cost = get_block_cost();
assert_ne!(single_transfer_cost, 0);
assert_eq!(get_tx_count(), 1);
//
// TEST: When a tx in a batch can't be executed (here because of account
// locks), then its cost does not affect the cost tracker.
//
let allocate_keypair = Keypair::new();
let transactions = sanitize_transactions(vec![
system_transaction::transfer(&mint_keypair, &pubkey, 2, genesis_config.hash()),
// intentionally use a tx that has a different cost
system_transaction::allocate(
&mint_keypair,
&allocate_keypair,
genesis_config.hash(),
1,
),
]);
let process_transactions_batch_output = BankingStage::process_and_record_transactions(
&bank,
&transactions,
&recorder,
0,
None,
&gossip_vote_sender,
&qos_service,
);
let ExecuteAndCommitTransactionsOutput {
executed_with_successful_result_count,
commit_transactions_result,
retryable_transaction_indexes,
..
} = process_transactions_batch_output.execute_and_commit_transactions_output;
assert_eq!(executed_with_successful_result_count, 1);
assert!(commit_transactions_result.is_ok());
assert_eq!(retryable_transaction_indexes, vec![1]);
assert_eq!(get_block_cost(), 2 * single_transfer_cost);
assert_eq!(get_tx_count(), 2);
poh_recorder
.lock()
.unwrap()
.is_exited
.store(true, Ordering::Relaxed);
let _ = poh_simulator.join();
}
Blockstore::destroy(ledger_path.path()).unwrap();
}
fn simulate_poh(
record_receiver: CrossbeamReceiver<Record>,
poh_recorder: &Arc<Mutex<PohRecorder>>,

View File

@@ -10,7 +10,7 @@ use {
solana_ledger::{ancestor_iterator::AncestorIterator, blockstore::Blockstore, blockstore_db},
solana_runtime::{
bank::Bank, bank_forks::BankForks, commitment::VOTE_THRESHOLD_SIZE,
vote_account::VoteAccount,
vote_account::VoteAccountsHashMap,
},
solana_sdk::{
clock::{Slot, UnixTimestamp},
@@ -253,7 +253,7 @@ impl Tower {
pub(crate) fn collect_vote_lockouts(
vote_account_pubkey: &Pubkey,
bank_slot: Slot,
vote_accounts: &HashMap<Pubkey, (/*stake:*/ u64, VoteAccount)>,
vote_accounts: &VoteAccountsHashMap,
ancestors: &HashMap<Slot, HashSet<Slot>>,
get_frozen_hash: impl Fn(Slot) -> Option<Hash>,
latest_validator_votes_for_frozen_banks: &mut LatestValidatorVotesForFrozenBanks,
@@ -636,7 +636,7 @@ impl Tower {
descendants: &HashMap<Slot, HashSet<u64>>,
progress: &ProgressMap,
total_stake: u64,
epoch_vote_accounts: &HashMap<Pubkey, (u64, VoteAccount)>,
epoch_vote_accounts: &VoteAccountsHashMap,
latest_validator_votes_for_frozen_banks: &LatestValidatorVotesForFrozenBanks,
heaviest_subtree_fork_choice: &HeaviestSubtreeForkChoice,
) -> SwitchForkDecision {
@@ -929,7 +929,7 @@ impl Tower {
descendants: &HashMap<Slot, HashSet<u64>>,
progress: &ProgressMap,
total_stake: u64,
epoch_vote_accounts: &HashMap<Pubkey, (u64, VoteAccount)>,
epoch_vote_accounts: &VoteAccountsHashMap,
latest_validator_votes_for_frozen_banks: &LatestValidatorVotesForFrozenBanks,
heaviest_subtree_fork_choice: &HeaviestSubtreeForkChoice,
) -> SwitchForkDecision {
@@ -1377,7 +1377,7 @@ pub mod test {
},
itertools::Itertools,
solana_ledger::{blockstore::make_slot_entries, get_tmp_ledger_path},
solana_runtime::bank::Bank,
solana_runtime::{bank::Bank, vote_account::VoteAccount},
solana_sdk::{
account::{Account, AccountSharedData, ReadableAccount, WritableAccount},
clock::Slot,
@@ -1398,7 +1398,7 @@ pub mod test {
trees::tr,
};
fn gen_stakes(stake_votes: &[(u64, &[u64])]) -> HashMap<Pubkey, (u64, VoteAccount)> {
fn gen_stakes(stake_votes: &[(u64, &[u64])]) -> VoteAccountsHashMap {
stake_votes
.iter()
.map(|(lamports, votes)| {

View File

@@ -13,7 +13,9 @@ use {
},
};
// Determines how often we report blockstore metrics.
// Determines how often we report blockstore metrics under
// LedgerMetricReportService. Note that there are other blockstore
// metrics that are reported outside LedgerMetricReportService.
const BLOCKSTORE_METRICS_REPORT_PERIOD_MILLIS: u64 = 10000;
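For orientation, a rough sketch of how a report period like this is typically enforced: the reporting loop compares the elapsed time since the last report against the period and sleeps in between. This is illustrative only and does not reproduce the actual LedgerMetricReportService internals; `run_report_loop` and its closures are hypothetical names.

```rust
use std::time::{Duration, Instant};

// Mirrors the constant above; 10 seconds between reports.
const BLOCKSTORE_METRICS_REPORT_PERIOD_MILLIS: u64 = 10000;

/// Illustrative reporting loop: wake up periodically and emit metrics only
/// once the configured report period has elapsed.
fn run_report_loop(mut report_metrics: impl FnMut(), should_exit: impl Fn() -> bool) {
    let period = Duration::from_millis(BLOCKSTORE_METRICS_REPORT_PERIOD_MILLIS);
    let mut last_report = Instant::now();
    while !should_exit() {
        if last_report.elapsed() >= period {
            report_metrics();
            last_report = Instant::now();
        }
        std::thread::sleep(Duration::from_millis(100));
    }
}

fn main() {
    let start = Instant::now();
    // Run the loop for about a second, then exit.
    run_report_loop(
        || println!("reporting blockstore metrics"),
        || start.elapsed() > Duration::from_secs(1),
    );
}
```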
pub struct LedgerMetricReportService {

View File

@@ -37,6 +37,8 @@ pub mod optimistic_confirmation_verifier;
pub mod outstanding_requests;
pub mod packet_hasher;
pub mod packet_threshold;
pub mod poh_timing_report_service;
pub mod poh_timing_reporter;
pub mod progress_map;
pub mod qos_service;
pub mod repair_generic_traversal;

View File

@@ -0,0 +1,87 @@
//! PohTimingReportService module
use {
crate::poh_timing_reporter::PohTimingReporter,
solana_metrics::poh_timing_point::{PohTimingReceiver, SlotPohTimingInfo},
std::{
string::ToString,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread::{self, Builder, JoinHandle},
time::Duration,
},
};
/// Timeout to wait on the poh timing points from the channel
const POH_TIMING_RECEIVER_TIMEOUT_MILLISECONDS: u64 = 1000;
/// The `poh_timing_report_service` receives signals of relevant timing points
/// during the processing of a slot (i.e. from blockstore and poh), aggregates
/// them, and reports the result as datapoints.
pub struct PohTimingReportService {
t_poh_timing: JoinHandle<()>,
}
impl PohTimingReportService {
pub fn new(receiver: PohTimingReceiver, exit: Arc<AtomicBool>) -> Self {
let exit_signal = exit;
let mut poh_timing_reporter = PohTimingReporter::default();
let t_poh_timing = Builder::new()
.name("poh_timing_report".to_string())
.spawn(move || loop {
if exit_signal.load(Ordering::Relaxed) {
break;
}
if let Ok(SlotPohTimingInfo {
slot,
root_slot,
timing_point,
}) = receiver.recv_timeout(Duration::from_millis(
POH_TIMING_RECEIVER_TIMEOUT_MILLISECONDS,
)) {
poh_timing_reporter.process(slot, root_slot, timing_point);
}
})
.unwrap();
Self { t_poh_timing }
}
pub fn join(self) -> thread::Result<()> {
self.t_poh_timing.join()
}
}
#[cfg(test)]
mod test {
use {
super::*, crossbeam_channel::unbounded, solana_metrics::poh_timing_point::SlotPohTimingInfo,
};
#[test]
/// Test the life cycle of the PohTimingReportService
fn test_poh_timing_report_service() {
let (poh_timing_point_sender, poh_timing_point_receiver) = unbounded();
let exit = Arc::new(AtomicBool::new(false));
// Create the service
let poh_timing_report_service =
PohTimingReportService::new(poh_timing_point_receiver, exit.clone());
// Send SlotPohTimingPoint
let _ = poh_timing_point_sender.send(SlotPohTimingInfo::new_slot_start_poh_time_point(
42, None, 100,
));
let _ = poh_timing_point_sender.send(SlotPohTimingInfo::new_slot_end_poh_time_point(
42, None, 200,
));
let _ = poh_timing_point_sender.send(SlotPohTimingInfo::new_slot_full_poh_time_point(
42, None, 150,
));
// Shutdown the service
exit.store(true, Ordering::Relaxed);
poh_timing_report_service
.join()
.expect("poh_timing_report_service completed");
}
}

View File

@@ -0,0 +1,239 @@
//! A poh_timing_reporter module implement poh timing point and timing reporter
//! structs.
use {
solana_metrics::{datapoint_info, poh_timing_point::PohTimingPoint},
solana_sdk::clock::Slot,
std::{collections::HashMap, fmt},
};
/// A SlotPohTimestamp records timing of the events during the processing of a
/// slot by the validator
#[derive(Debug, Clone, Copy, Default)]
pub struct SlotPohTimestamp {
/// Slot start time from poh
pub start_time: u64,
/// Slot end time from poh
pub end_time: u64,
/// Last shred received time from block producer
pub full_time: u64,
}
/// Display trait
impl fmt::Display for SlotPohTimestamp {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"SlotPohTimestamp: start={} end={} full={}",
self.start_time, self.end_time, self.full_time
)
}
}
impl SlotPohTimestamp {
/// Return true if the timing points of all events are received.
pub fn is_complete(&self) -> bool {
self.start_time != 0 && self.end_time != 0 && self.full_time != 0
}
/// Update with timing point
pub fn update(&mut self, timing_point: PohTimingPoint) {
match timing_point {
PohTimingPoint::PohSlotStart(ts) => self.start_time = ts,
PohTimingPoint::PohSlotEnd(ts) => self.end_time = ts,
PohTimingPoint::FullSlotReceived(ts) => self.full_time = ts,
}
}
/// Return the time difference from slot start to slot full
fn slot_start_to_full_time(&self) -> i64 {
(self.full_time as i64).saturating_sub(self.start_time as i64)
}
/// Return the time difference from slot full to slot end
fn slot_full_to_end_time(&self) -> i64 {
(self.end_time as i64).saturating_sub(self.full_time as i64)
}
/// Report PohTiming for a slot
pub fn report(&self, slot: Slot) {
datapoint_info!(
"poh_slot_timing",
("slot", slot as i64, i64),
("start_time", self.start_time as i64, i64),
("end_time", self.end_time as i64, i64),
("full_time", self.full_time as i64, i64),
(
"start_to_full_time_diff",
self.slot_start_to_full_time(),
i64
),
("full_to_end_time_diff", self.slot_full_to_end_time(), i64),
);
}
}
/// A PohTimingReporter manages and reports the timing of events for incoming
/// slots
#[derive(Default)]
pub struct PohTimingReporter {
/// Storage map of SlotPohTimestamp per slot
slot_timestamps: HashMap<Slot, SlotPohTimestamp>,
last_root_slot: Slot,
}
impl PohTimingReporter {
/// Return true if PohTiming is complete for the slot
pub fn is_complete(&self, slot: Slot) -> bool {
if let Some(slot_timestamp) = self.slot_timestamps.get(&slot) {
slot_timestamp.is_complete()
} else {
false
}
}
/// Process incoming PohTimingPoint from the channel
pub fn process(&mut self, slot: Slot, root_slot: Option<Slot>, t: PohTimingPoint) -> bool {
let slot_timestamp = self
.slot_timestamps
.entry(slot)
.or_insert_with(SlotPohTimestamp::default);
slot_timestamp.update(t);
let is_completed = slot_timestamp.is_complete();
if is_completed {
slot_timestamp.report(slot);
}
// delete slots that are older than the root_slot
if let Some(root_slot) = root_slot {
if root_slot > self.last_root_slot {
self.slot_timestamps.retain(|&k, _| k >= root_slot);
self.last_root_slot = root_slot;
}
}
is_completed
}
/// Return the count of slot_timestamps in tracking
pub fn slot_count(&self) -> usize {
self.slot_timestamps.len()
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
/// Test poh_timing_reporter
fn test_poh_timing_reporter() {
// create a reporter
let mut reporter = PohTimingReporter::default();
// process all relevant PohTimingPoints for slot 42
let complete = reporter.process(42, None, PohTimingPoint::PohSlotStart(100));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::PohSlotEnd(200));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::FullSlotReceived(150));
// assert that the PohTiming is complete
assert!(complete);
// Move root to slot 43
let root = Some(43);
// process all relevant PohTimingPoints for slot 45
let complete = reporter.process(45, None, PohTimingPoint::PohSlotStart(100));
assert!(!complete);
let complete = reporter.process(45, None, PohTimingPoint::PohSlotEnd(200));
assert!(!complete);
let complete = reporter.process(45, root, PohTimingPoint::FullSlotReceived(150));
// assert that the PohTiming is complete
assert!(complete);
// assert that only one timestamp remains in track
assert_eq!(reporter.slot_count(), 1)
}
#[test]
/// Test poh_timing_reporter
fn test_poh_timing_reporter_out_of_order() {
// create a reporter
let mut reporter = PohTimingReporter::default();
// process all relevant PohTimingPoints for slot 42/43 out of order
let mut c = 0;
// slot_start 42
c += reporter.process(42, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot_full 42
c += reporter.process(42, None, PohTimingPoint::FullSlotReceived(120)) as i32;
// slot_full 43
c += reporter.process(43, None, PohTimingPoint::FullSlotReceived(140)) as i32;
// slot_end 42
c += reporter.process(42, None, PohTimingPoint::PohSlotEnd(200)) as i32;
// slot start 43
c += reporter.process(43, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot end 43
c += reporter.process(43, None, PohTimingPoint::PohSlotEnd(200)) as i32;
// assert that both timing points are complete
assert_eq!(c, 2);
// assert that both timestamps remain in track
assert_eq!(reporter.slot_count(), 2)
}
#[test]
/// Test poh_timing_reporter
fn test_poh_timing_reporter_never_complete() {
// create a reporter
let mut reporter = PohTimingReporter::default();
let mut c = 0;
// process all relevant PohTimingPoints for slot 42/43 out of order
// slot_start 42
c += reporter.process(42, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot_full 42
c += reporter.process(42, None, PohTimingPoint::FullSlotReceived(120)) as i32;
// slot_full 43
c += reporter.process(43, None, PohTimingPoint::FullSlotReceived(140)) as i32;
// skip slot 42, jump to slot 43
// slot start 43
c += reporter.process(43, None, PohTimingPoint::PohSlotStart(100)) as i32;
// slot end 43
c += reporter.process(43, None, PohTimingPoint::PohSlotEnd(200)) as i32;
// assert that only one timing point is complete
assert_eq!(c, 1);
// assert that both timestamps remain in track
assert_eq!(reporter.slot_count(), 2)
}
#[test]
fn test_poh_timing_reporter_overflow() {
// create a reporter
let mut reporter = PohTimingReporter::default();
// process all relevant PohTimingPoints for a slot
let complete = reporter.process(42, None, PohTimingPoint::PohSlotStart(1647624609896));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::PohSlotEnd(1647624610286));
assert!(!complete);
let complete = reporter.process(42, None, PohTimingPoint::FullSlotReceived(1647624610281));
// assert that the PohTiming is complete
assert!(complete);
}
#[test]
fn test_slot_poh_timestamp_fmt() {
let t = SlotPohTimestamp::default();
assert_eq!(format!("{}", t), "SlotPohTimestamp: start=0 end=0 full=0");
}
}

View File

@@ -7,7 +7,7 @@ use {
},
solana_ledger::blockstore_processor::{ConfirmationProgress, ConfirmationTiming},
solana_program_runtime::timings::ExecuteTimingType,
solana_runtime::{bank::Bank, bank_forks::BankForks, vote_account::VoteAccount},
solana_runtime::{bank::Bank, bank_forks::BankForks, vote_account::VoteAccountsHashMap},
solana_sdk::{clock::Slot, hash::Hash, pubkey::Pubkey},
std::{
collections::{BTreeMap, HashMap, HashSet},
@@ -516,7 +516,7 @@ impl PropagatedStats {
&mut self,
node_pubkey: &Pubkey,
vote_account_pubkeys: &[Pubkey],
epoch_vote_accounts: &HashMap<Pubkey, (u64, VoteAccount)>,
epoch_vote_accounts: &VoteAccountsHashMap,
) {
self.propagated_node_ids.insert(*node_pubkey);
for vote_account_pubkey in vote_account_pubkeys.iter() {
@@ -695,7 +695,7 @@ impl ProgressMap {
#[cfg(test)]
mod test {
use super::*;
use {super::*, solana_runtime::vote_account::VoteAccount};
#[test]
fn test_add_vote_pubkey() {

View File

@@ -133,7 +133,7 @@ impl QosService {
let mut num_included = 0;
let select_results = transactions
.zip(transactions_costs)
.map(|(tx, cost)| match cost_tracker.try_add(cost) {
.map(|(tx, cost)| match cost_tracker.try_add(tx, cost) {
Ok(current_block_cost) => {
debug!("slot {:?}, transaction {:?}, cost {:?}, fit into current block, current block cost {}", bank.slot(), tx, cost, current_block_cost);
self.metrics.stats.selected_txs_count.fetch_add(1, Ordering::Relaxed);
@@ -170,35 +170,6 @@ impl QosService {
(select_results, num_included)
}
/// Update the transaction cost in the cost_tracker with the real cost for
/// transactions that were executed successfully;
/// Otherwise remove the cost from the cost tracker, thereby preventing the
/// cost_tracker from being inflated with unsuccessfully executed transactions.
pub fn update_or_remove_transaction_costs<'a>(
transaction_costs: impl Iterator<Item = &'a TransactionCost>,
transaction_qos_results: impl Iterator<Item = &'a transaction::Result<()>>,
retryable_transaction_indexes: &[usize],
bank: &Arc<Bank>,
) {
let mut cost_tracker = bank.write_cost_tracker().unwrap();
transaction_costs
.zip(transaction_qos_results)
.enumerate()
.for_each(|(index, (tx_cost, qos_inclusion_result))| {
// Only transactions that the qos service included have been added to the
// cost tracker.
if qos_inclusion_result.is_ok() && retryable_transaction_indexes.contains(&index) {
cost_tracker.remove(tx_cost);
} else {
// TODO: Update the cost tracker with the actual execution compute units.
// Will have to plumb it in next; For now, keep estimated costs.
//
// let actual_execution_cost = 0;
// cost_tracker.update_execution_cost(tx_cost, actual_execution_cost);
}
});
}
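To make the rollback behavior described in the doc comment above concrete, here is a minimal sketch, assuming a toy cost tracker with a single block-cost counter and hypothetical helper names; the real cost tracker also enforces per-account and per-program limits, which this does not model.

```rust
/// Minimal stand-in for a block cost tracker; the real one tracks
/// per-account and per-program limits as well.
#[derive(Default)]
struct CostTracker {
    block_cost: u64,
}

impl CostTracker {
    fn add(&mut self, cost: u64) {
        self.block_cost = self.block_cost.saturating_add(cost);
    }
    fn remove(&mut self, cost: u64) {
        self.block_cost = self.block_cost.saturating_sub(cost);
    }
}

/// Back out the estimated cost of transactions that were selected for the
/// block (qos result is Ok) but ended up retryable, so the tracker is not
/// inflated by work that never landed in the block.
fn update_or_remove_costs(
    tracker: &mut CostTracker,
    estimated_costs: &[u64],
    qos_results: &[Result<(), ()>],
    retryable_indexes: &[usize],
) {
    for (index, (cost, qos_result)) in estimated_costs.iter().zip(qos_results).enumerate() {
        if qos_result.is_ok() && retryable_indexes.contains(&index) {
            tracker.remove(*cost);
        }
        // Otherwise keep the estimated cost; replacing it with the actual
        // execution cost is left as a follow-up, as noted in the TODO above.
    }
}

fn main() {
    let mut tracker = CostTracker::default();
    let costs = [10, 25, 5];
    // All three were selected, so their estimated costs were added up front.
    costs.iter().for_each(|c| tracker.add(*c));
    // Transaction 1 hit an account lock conflict and will be retried.
    update_or_remove_costs(&mut tracker, &costs, &[Ok(()), Ok(()), Ok(())], &[1]);
    assert_eq!(tracker.block_cost, 15);
}
```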
// metrics are reported by bank slot
pub fn report_metrics(&self, bank: Arc<Bank>) {
self.report_sender

View File

@@ -1682,6 +1682,9 @@ impl ReplayStage {
blockstore
.set_dead_slot(slot)
.expect("Failed to mark slot as dead in blockstore");
blockstore.slots_stats.mark_dead(slot);
rpc_subscriptions.notify_slot_update(SlotUpdate::Dead {
slot,
err: format!("error: {:?}", err),
@@ -1788,6 +1791,9 @@ impl ReplayStage {
epoch_slots_frozen_slots,
drop_bank_sender,
);
blockstore.slots_stats.mark_rooted(new_root);
rpc_subscriptions.notify_roots(rooted_slots);
if let Some(sender) = bank_notification_sender {
sender
@@ -2301,7 +2307,7 @@ impl ReplayStage {
}
}
// send accumulated excute-timings to cost_update_service
// send accumulated execute-timings to cost_update_service
if !execute_timings.details.per_program_timings.is_empty() {
cost_update_sender
.send(CostUpdate::ExecuteTiming {
@@ -2589,7 +2595,7 @@ impl ReplayStage {
*/
// Imagine 90% of validators voted on slot 4, but only 9% landed. If everybody that fails
// the switch theshold abandons slot 4 to build on slot 8 (because it's *currently* heavier),
// the switch threshold abandons slot 4 to build on slot 8 (because it's *currently* heavier),
// then there will be no blocks to include the votes for slot 4, and the network halts
// because 90% of validators can't vote
info!(
@@ -2931,6 +2937,7 @@ impl ReplayStage {
accounts_background_request_sender,
highest_confirmed_root,
);
drop_bank_sender
.send(removed_banks)
.unwrap_or_else(|err| warn!("bank drop failed: {:?}", err));

View File

@@ -240,7 +240,7 @@ fn retransmit(
epoch_fetch.stop();
stats.epoch_fetch += epoch_fetch.as_us();
let mut epoch_cache_update = Measure::start("retransmit_epoch_cach_update");
let mut epoch_cache_update = Measure::start("retransmit_epoch_cache_update");
maybe_reset_shreds_received_cache(shreds_received, hasher_reset_ts);
epoch_cache_update.stop();
stats.epoch_cache_update += epoch_cache_update.as_us();

View File

@@ -48,9 +48,7 @@ use {
commitment::BlockCommitmentCache,
cost_model::CostModel,
snapshot_config::SnapshotConfig,
snapshot_package::{
AccountsPackageReceiver, AccountsPackageSender, PendingSnapshotPackage,
},
snapshot_package::{PendingAccountsPackage, PendingSnapshotPackage},
transaction_cost_metrics_sender::{
TransactionCostMetricsSender, TransactionCostMetricsService,
},
@@ -98,7 +96,6 @@ pub struct TvuConfig {
pub accounts_hash_fault_injection_slots: u64,
pub accounts_db_caching_enabled: bool,
pub test_hash_calculation: bool,
pub use_index_hash_calculation: bool,
pub rocksdb_compaction_interval: Option<u64>,
pub rocksdb_max_compaction_jitter: Option<u64>,
pub wait_for_vote_to_start_leader: bool,
@@ -144,7 +141,7 @@ impl Tvu {
tvu_config: TvuConfig,
max_slots: &Arc<MaxSlots>,
cost_model: &Arc<RwLock<CostModel>>,
accounts_package_channel: (AccountsPackageSender, AccountsPackageReceiver),
pending_accounts_package: PendingAccountsPackage,
last_full_snapshot_slot: Option<Slot>,
block_metadata_notifier: Option<BlockMetadataNotifierLock>,
wait_to_vote_slot: Option<Slot>,
@@ -221,9 +218,8 @@ impl Tvu {
(Some(snapshot_config), Some(pending_snapshot_package))
})
.unwrap_or((None, None));
let (accounts_package_sender, accounts_package_receiver) = accounts_package_channel;
let accounts_hash_verifier = AccountsHashVerifier::new(
accounts_package_receiver,
Arc::clone(&pending_accounts_package),
pending_snapshot_package,
exit,
cluster_info,
@@ -231,7 +227,6 @@ impl Tvu {
tvu_config.halt_on_known_validators_accounts_hash_mismatch,
tvu_config.accounts_hash_fault_injection_slots,
snapshot_config.clone(),
blockstore.ledger_path().to_path_buf(),
);
let (snapshot_request_sender, snapshot_request_handler) = match snapshot_config {
@@ -243,7 +238,7 @@ impl Tvu {
Some(SnapshotRequestHandler {
snapshot_config,
snapshot_request_receiver,
accounts_package_sender,
pending_accounts_package,
}),
)
}
@@ -343,7 +338,6 @@ impl Tvu {
accounts_background_request_handler,
tvu_config.accounts_db_caching_enabled,
tvu_config.test_hash_calculation,
tvu_config.use_index_hash_calculation,
last_full_snapshot_slot,
);
@@ -447,7 +441,6 @@ pub mod tests {
let (_, gossip_confirmed_slots_receiver) = unbounded();
let bank_forks = Arc::new(RwLock::new(bank_forks));
let tower = Tower::default();
let accounts_package_channel = unbounded();
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
let (_pruned_banks_sender, pruned_banks_receiver) = unbounded();
let tvu = Tvu::new(
@@ -495,7 +488,7 @@ pub mod tests {
TvuConfig::default(),
&Arc::new(MaxSlots::default()),
&Arc::new(RwLock::new(CostModel::default())),
accounts_package_channel,
PendingAccountsPackage::default(),
None,
None,
None,

View File

@@ -8,6 +8,7 @@ use {
cluster_info_vote_listener::VoteTracker,
completed_data_sets_service::CompletedDataSetsService,
consensus::{reconcile_blockstore_roots_with_tower, Tower},
poh_timing_report_service::PohTimingReportService,
rewards_recorder_service::{RewardsRecorderSender, RewardsRecorderService},
sample_performance_service::SamplePerformanceService,
serve_repair::ServeRepair,
@@ -44,7 +45,7 @@ use {
leader_schedule_cache::LeaderScheduleCache,
},
solana_measure::measure::Measure,
solana_metrics::datapoint_info,
solana_metrics::{datapoint_info, poh_timing_point::PohTimingSender},
solana_poh::{
poh_recorder::{PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS},
poh_service::{self, PohService},
@@ -79,7 +80,7 @@ use {
snapshot_archive_info::SnapshotArchiveInfoGetter,
snapshot_config::SnapshotConfig,
snapshot_hash::StartingSnapshotHashes,
snapshot_package::{AccountsPackageSender, PendingSnapshotPackage},
snapshot_package::{PendingAccountsPackage, PendingSnapshotPackage},
snapshot_utils,
},
solana_sdk::{
@@ -163,7 +164,6 @@ pub struct ValidatorConfig {
pub warp_slot: Option<Slot>,
pub accounts_db_test_hash_calculation: bool,
pub accounts_db_skip_shrink: bool,
pub accounts_db_use_index_hash_calculation: bool,
pub tpu_coalesce_ms: u64,
pub validator_exit: Arc<RwLock<Exit>>,
pub no_wait_for_vote_to_start_leader: bool,
@@ -224,7 +224,6 @@ impl Default for ValidatorConfig {
warp_slot: None,
accounts_db_test_hash_calculation: false,
accounts_db_skip_shrink: false,
accounts_db_use_index_hash_calculation: true,
tpu_coalesce_ms: DEFAULT_TPU_COALESCE_MS,
validator_exit: Arc::new(RwLock::new(Exit::default())),
no_wait_for_vote_to_start_leader: true,
@@ -326,6 +325,7 @@ pub struct Validator {
cache_block_meta_service: Option<CacheBlockMetaService>,
system_monitor_service: Option<SystemMonitorService>,
sample_performance_service: Option<SamplePerformanceService>,
poh_timing_report_service: PohTimingReportService,
stats_reporter_service: StatsReporterService,
gossip_service: GossipService,
serve_repair_service: ServeRepairService,
@@ -338,6 +338,7 @@ pub struct Validator {
ip_echo_server: Option<solana_net_utils::IpEchoServer>,
pub cluster_info: Arc<ClusterInfo>,
pub bank_forks: Arc<RwLock<BankForks>>,
pub blockstore: Arc<Blockstore>,
accountsdb_repl_service: Option<AccountsDbReplService>,
geyser_plugin_service: Option<GeyserPluginService>,
}
@@ -459,8 +460,6 @@ impl Validator {
.register_exit(Box::new(move || exit.store(true, Ordering::Relaxed)));
}
let accounts_package_channel = unbounded();
let accounts_update_notifier = geyser_plugin_service
.as_ref()
.and_then(|geyser_plugin_service| geyser_plugin_service.get_accounts_update_notifier());
@@ -485,6 +484,10 @@ impl Validator {
!config.no_os_network_stats_reporting,
));
let (poh_timing_point_sender, poh_timing_point_receiver) = unbounded();
let poh_timing_report_service =
PohTimingReportService::new(poh_timing_point_receiver, exit.clone());
let (
genesis_config,
mut bank_forks,
@@ -512,8 +515,10 @@ impl Validator {
&start_progress,
accounts_update_notifier,
transaction_notifier,
Some(poh_timing_point_sender.clone()),
);
let pending_accounts_package = PendingAccountsPackage::default();
let last_full_snapshot_slot = process_blockstore(
&blockstore,
&mut bank_forks,
@@ -522,7 +527,7 @@ impl Validator {
transaction_status_sender.as_ref(),
cache_block_meta_sender.as_ref(),
config.snapshot_config.as_ref(),
accounts_package_channel.0.clone(),
Arc::clone(&pending_accounts_package),
blockstore_root_scan,
pruned_banks_receiver.clone(),
);
@@ -530,7 +535,6 @@ impl Validator {
last_full_snapshot_slot.or_else(|| starting_snapshot_hashes.map(|x| x.full.hash.0));
maybe_warp_slot(config, ledger_path, &mut bank_forks, &leader_schedule_cache);
let tower = {
let restored_tower = Tower::restore(config.tower_storage.as_ref(), &id);
if let Ok(tower) = &restored_tower {
@@ -655,9 +659,10 @@ impl Validator {
bank.ticks_per_slot(),
&id,
&blockstore,
blockstore.new_shreds_signals.first().cloned(),
blockstore.get_new_shred_signal(0),
&leader_schedule_cache,
&poh_config,
Some(poh_timing_point_sender),
exit.clone(),
);
let poh_recorder = Arc::new(Mutex::new(poh_recorder));
@@ -854,7 +859,7 @@ impl Validator {
record_receiver,
);
assert_eq!(
blockstore.new_shreds_signals.len(),
blockstore.get_new_shred_signals_len(),
1,
"New shred signal for the TVU should be the same as the clear bank signal."
);
@@ -870,8 +875,11 @@ impl Validator {
let (gossip_verified_vote_hash_sender, gossip_verified_vote_hash_receiver) = unbounded();
let (cluster_confirmed_slot_sender, cluster_confirmed_slot_receiver) = unbounded();
let rpc_completed_slots_service =
RpcCompletedSlotsService::spawn(completed_slots_receiver, rpc_subscriptions.clone());
let rpc_completed_slots_service = RpcCompletedSlotsService::spawn(
completed_slots_receiver,
rpc_subscriptions.clone(),
exit.clone(),
);
let (replay_vote_sender, replay_vote_receiver) = unbounded();
let tvu = Tvu::new(
@@ -918,7 +926,6 @@ impl Validator {
accounts_hash_fault_injection_slots: config.accounts_hash_fault_injection_slots,
accounts_db_caching_enabled: config.accounts_db_caching_enabled,
test_hash_calculation: config.accounts_db_test_hash_calculation,
use_index_hash_calculation: config.accounts_db_use_index_hash_calculation,
rocksdb_compaction_interval: config.rocksdb_compaction_interval,
rocksdb_max_compaction_jitter: config.rocksdb_compaction_interval,
wait_for_vote_to_start_leader,
@@ -926,7 +933,7 @@ impl Validator {
},
&max_slots,
&cost_model,
accounts_package_channel,
pending_accounts_package,
last_full_snapshot_slot,
block_metadata_notifier,
config.wait_to_vote_slot,
@@ -980,6 +987,7 @@ impl Validator {
cache_block_meta_service,
system_monitor_service,
sample_performance_service,
poh_timing_report_service,
snapshot_packager_service,
completed_data_sets_service,
tpu,
@@ -990,6 +998,7 @@ impl Validator {
validator_exit: config.validator_exit.clone(),
cluster_info,
bank_forks,
blockstore: blockstore.clone(),
accountsdb_repl_service,
geyser_plugin_service,
}
@@ -998,6 +1007,9 @@ impl Validator {
// Used for notifying many nodes in parallel to exit
pub fn exit(&mut self) {
self.validator_exit.write().unwrap().exit();
// drop all signals in blockstore
self.blockstore.drop_signal();
}
pub fn close(mut self) {
@@ -1116,6 +1128,10 @@ impl Validator {
if let Some(geyser_plugin_service) = self.geyser_plugin_service {
geyser_plugin_service.join().expect("geyser_plugin_service");
}
self.poh_timing_report_service
.join()
.expect("poh_timing_report_service");
}
}
@@ -1254,6 +1270,7 @@ fn load_blockstore(
start_progress: &Arc<RwLock<ValidatorStartProgress>>,
accounts_update_notifier: Option<AccountsUpdateNotifier>,
transaction_notifier: Option<TransactionNotifierLock>,
poh_timing_point_sender: Option<PohTimingSender>,
) -> (
GenesisConfig,
BankForks,
@@ -1309,6 +1326,7 @@ fn load_blockstore(
)
.expect("Failed to open ledger database");
blockstore.set_no_compaction(config.no_rocksdb_compaction);
blockstore.shred_timing_point_sender = poh_timing_point_sender;
let blockstore = Arc::new(blockstore);
let blockstore_root_scan = BlockstoreRootScan::new(config, &blockstore, exit);
@@ -1337,7 +1355,7 @@ fn load_blockstore(
blockstore.clone(),
exit,
enable_rpc_transaction_history,
config.rpc_config.enable_cpi_and_log_storage,
config.rpc_config.enable_extended_tx_metadata_storage,
transaction_notifier,
)
} else {
@@ -1395,7 +1413,7 @@ fn process_blockstore(
transaction_status_sender: Option<&TransactionStatusSender>,
cache_block_meta_sender: Option<&CacheBlockMetaSender>,
snapshot_config: Option<&SnapshotConfig>,
accounts_package_sender: AccountsPackageSender,
pending_accounts_package: PendingAccountsPackage,
blockstore_root_scan: BlockstoreRootScan,
pruned_banks_receiver: DroppedSlotsReceiver,
) -> Option<Slot> {
@@ -1407,7 +1425,7 @@ fn process_blockstore(
transaction_status_sender,
cache_block_meta_sender,
snapshot_config,
accounts_package_sender,
pending_accounts_package,
pruned_banks_receiver,
)
.unwrap_or_else(|err| {
@@ -1552,7 +1570,7 @@ fn initialize_rpc_transaction_history_services(
blockstore: Arc<Blockstore>,
exit: &Arc<AtomicBool>,
enable_rpc_transaction_history: bool,
enable_cpi_and_log_storage: bool,
enable_extended_tx_metadata_storage: bool,
transaction_notifier: Option<TransactionNotifierLock>,
) -> TransactionHistoryServices {
let max_complete_transaction_status_slot = Arc::new(AtomicU64::new(blockstore.max_root()));
@@ -1566,7 +1584,7 @@ fn initialize_rpc_transaction_history_services(
enable_rpc_transaction_history,
transaction_notifier.clone(),
blockstore.clone(),
enable_cpi_and_log_storage,
enable_extended_tx_metadata_storage,
exit,
));
@@ -1808,9 +1826,10 @@ pub fn is_snapshot_config_valid(
mod tests {
use {
super::*,
crossbeam_channel::{bounded, RecvTimeoutError},
solana_ledger::{create_new_tmp_ledger, genesis_utils::create_genesis_config_with_leader},
solana_sdk::{genesis_config::create_genesis_config, poh_config::PohConfig},
std::fs::remove_dir_all,
std::{fs::remove_dir_all, thread, time::Duration},
};
#[test]
@@ -1930,12 +1949,22 @@ mod tests {
// Each validator can exit in parallel to speed many sequential calls to `join`
validators.iter_mut().for_each(|v| v.exit());
// While join is called sequentially, the above exit call notified all the
// validators to exit from all their threads
validators.into_iter().for_each(|validator| {
validator.join();
// spawn a new thread to wait for the join of the validators
let (sender, receiver) = bounded(0);
let _ = thread::spawn(move || {
validators.into_iter().for_each(|validator| {
validator.join();
});
sender.send(()).unwrap();
});
// timeout of 30s for shutting down the validators
let timeout = Duration::from_secs(30);
if let Err(RecvTimeoutError::Timeout) = receiver.recv_timeout(timeout) {
panic!("timeout for shutting down validators",);
}
for path in ledger_paths {
remove_dir_all(path).unwrap();
}

View File

@@ -604,7 +604,7 @@ impl WindowService {
}
if last_print.elapsed().as_secs() > 2 {
metrics.report_metrics("recv-window-insert-shreds");
metrics.report_metrics("blockstore-insert-shreds");
metrics = BlockstoreInsertionMetrics::default();
ws_metrics.report_metrics("recv-window-insert-shreds");
ws_metrics = WindowServiceMetrics::default();

View File

@@ -358,6 +358,7 @@ mod tests {
..BlockstoreRocksFifoOptions::default()
},
),
..LedgerColumnOptions::default()
},
..BlockstoreOptions::default()
}

View File

@@ -69,7 +69,8 @@ mod tests {
snapshot_archive_info::FullSnapshotArchiveInfo,
snapshot_config::SnapshotConfig,
snapshot_package::{
AccountsPackage, PendingSnapshotPackage, SnapshotPackage, SnapshotType,
AccountsPackage, PendingAccountsPackage, PendingSnapshotPackage, SnapshotPackage,
SnapshotType,
},
snapshot_utils::{self, ArchiveFormat, SnapshotVersion},
status_cache::MAX_CACHE_ENTRIES,
@@ -247,12 +248,11 @@ mod tests {
let mint_keypair = &snapshot_test_config.genesis_config_info.mint_keypair;
let (s, snapshot_request_receiver) = unbounded();
let (accounts_package_sender, _r) = unbounded();
let request_sender = AbsRequestSender::new(Some(s));
let snapshot_request_handler = SnapshotRequestHandler {
snapshot_config: snapshot_test_config.snapshot_config.clone(),
snapshot_request_receiver,
accounts_package_sender,
pending_accounts_package: PendingAccountsPackage::default(),
};
for slot in 1..=last_slot {
let mut bank = Bank::new_from_parent(&bank_forks[slot - 1], &Pubkey::default(), slot);
@@ -265,8 +265,7 @@ mod tests {
// set_root should send a snapshot request
bank_forks.set_root(bank.slot(), &request_sender, None);
bank.update_accounts_hash();
snapshot_request_handler
.handle_snapshot_requests(false, false, false, 0, &mut None);
snapshot_request_handler.handle_snapshot_requests(false, false, 0, &mut None);
}
}
@@ -367,8 +366,8 @@ mod tests {
.unwrap();
// Set up snapshotting channels
let (sender, receiver) = unbounded();
let (fake_sender, _fake_receiver) = unbounded();
let real_pending_accounts_package = PendingAccountsPackage::default();
let fake_pending_accounts_package = PendingAccountsPackage::default();
// Create next MAX_CACHE_ENTRIES + 2 banks and snapshots. Every bank will get snapshotted
// and the snapshot purging logic will run on every snapshot taken. This means the three
@@ -395,21 +394,21 @@ mod tests {
bank.squash();
let accounts_hash = bank.update_accounts_hash();
let package_sender = {
let pending_accounts_package = {
if slot == saved_slot as u64 {
// Only send one package on the real sender so that the packaging service
// doesn't take forever to run the packaging logic on all MAX_CACHE_ENTRIES
// later
&sender
// Only send one package on the real pending_accounts_package so that the
// packaging service doesn't take forever to run the packaging logic on all
// MAX_CACHE_ENTRIES later
&real_pending_accounts_package
} else {
&fake_sender
&fake_pending_accounts_package
}
};
snapshot_utils::snapshot_bank(
&bank,
vec![],
package_sender,
pending_accounts_package,
bank_snapshots_dir,
snapshot_archives_dir,
snapshot_config.snapshot_version,
@@ -507,15 +506,16 @@ mod tests {
let _package_receiver = std::thread::Builder::new()
.name("package-receiver".to_string())
.spawn(move || {
while let Ok(mut accounts_package) = receiver.recv() {
// Only package the latest
while let Ok(new_accounts_package) = receiver.try_recv() {
accounts_package = new_accounts_package;
}
let snapshot_package = SnapshotPackage::from(accounts_package);
*pending_snapshot_package.lock().unwrap() = Some(snapshot_package);
}
let accounts_package = real_pending_accounts_package
.lock()
.unwrap()
.take()
.unwrap();
let snapshot_package = SnapshotPackage::from(accounts_package);
pending_snapshot_package
.lock()
.unwrap()
.replace(snapshot_package);
// Wait until the package is consumed by SnapshotPackagerService
while pending_snapshot_package.lock().unwrap().is_some() {
@@ -527,10 +527,6 @@ mod tests {
})
.unwrap();
// Close the channel so that the package receiver will exit after reading all the
// packages off the channel
drop(sender);
// Wait for service to finish
snapshot_packager_service
.join()
@@ -670,12 +666,11 @@ mod tests {
let mint_keypair = &snapshot_test_config.genesis_config_info.mint_keypair;
let (snapshot_request_sender, snapshot_request_receiver) = unbounded();
let (accounts_package_sender, _accounts_package_receiver) = unbounded();
let request_sender = AbsRequestSender::new(Some(snapshot_request_sender));
let snapshot_request_handler = SnapshotRequestHandler {
snapshot_config: snapshot_test_config.snapshot_config.clone(),
snapshot_request_receiver,
accounts_package_sender,
pending_accounts_package: PendingAccountsPackage::default(),
};
let mut last_full_snapshot_slot = None;
@@ -707,7 +702,6 @@ mod tests {
bank_forks.set_root(bank.slot(), &request_sender, None);
bank.update_accounts_hash();
snapshot_request_handler.handle_snapshot_requests(
false,
false,
false,
0,
@@ -894,7 +888,7 @@ mod tests {
let (pruned_banks_sender, pruned_banks_receiver) = unbounded();
let (snapshot_request_sender, snapshot_request_receiver) = unbounded();
let (accounts_package_sender, accounts_package_receiver) = unbounded();
let pending_accounts_package = PendingAccountsPackage::default();
let pending_snapshot_package = PendingSnapshotPackage::default();
let bank_forks = Arc::new(RwLock::new(snapshot_test_config.bank_forks));
@@ -914,7 +908,7 @@ mod tests {
let snapshot_request_handler = Some(SnapshotRequestHandler {
snapshot_config: snapshot_test_config.snapshot_config.clone(),
snapshot_request_receiver,
accounts_package_sender,
pending_accounts_package: Arc::clone(&pending_accounts_package),
});
let abs_request_handler = AbsRequestHandler {
snapshot_request_handler,
@@ -931,9 +925,8 @@ mod tests {
true,
);
let tmpdir = TempDir::new().unwrap();
let accounts_hash_verifier = AccountsHashVerifier::new(
accounts_package_receiver,
pending_accounts_package,
Some(pending_snapshot_package),
&exit,
&cluster_info,
@@ -941,7 +934,6 @@ mod tests {
false,
0,
Some(snapshot_test_config.snapshot_config.clone()),
tmpdir.path().to_path_buf(),
);
let accounts_background_service = AccountsBackgroundService::new(
@@ -949,7 +941,6 @@ mod tests {
&exit,
abs_request_handler,
false,
false,
true,
None,
);
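Several of the hunks in this section replace an accounts-package channel with a `PendingAccountsPackage` that producers overwrite and the consumer drains with `lock().unwrap().take()`. A minimal sketch of that latest-wins pattern, using a generic payload instead of the actual `AccountsPackage` type (the names `PendingPackage`, `submit`, and `consume` are illustrative), might look like:

```rust
use std::sync::{Arc, Mutex};

/// A shared single-slot "mailbox": producers replace whatever is pending,
/// the consumer takes the most recent value (or None if nothing is pending).
type PendingPackage<T> = Arc<Mutex<Option<T>>>;

fn submit<T>(pending: &PendingPackage<T>, package: T) {
    // Overwrites any package that was not yet consumed; only the latest matters.
    *pending.lock().unwrap() = Some(package);
}

fn consume<T>(pending: &PendingPackage<T>) -> Option<T> {
    pending.lock().unwrap().take()
}

fn main() {
    let pending: PendingPackage<u64> = Arc::new(Mutex::new(None));
    submit(&pending, 1); // superseded before anyone consumed it
    submit(&pending, 2);
    assert_eq!(consume(&pending), Some(2)); // only the latest package is seen
    assert_eq!(consume(&pending), None);
}
```

Unlike a channel, the consumer never works through a backlog of stale packages; it only ever sees the newest one, which matches the behavior exercised by the updated tests above.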

docs/package-lock.json (generated)
View File

@@ -3338,11 +3338,11 @@
}
},
"node_modules/axios": {
"version": "0.21.1",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.21.1.tgz",
"integrity": "sha512-dKQiRHxGD9PPRIUNIWvZhPTPpl1rf/OxTYKsqKUDjBwYylTvV7SjSHJb9ratfyzM6wCdLCOYLzs73qpg5c4iGA==",
"version": "0.21.4",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.21.4.tgz",
"integrity": "sha512-ut5vewkiu8jjGBdqpM44XxjuCjq9LAKeHVmoVfHVzy8eHgxxq8SbAVQNovDA8mVi05kP0Ea/n/UzcSHcTJQfNg==",
"dependencies": {
"follow-redirects": "^1.10.0"
"follow-redirects": "^1.14.0"
}
},
"node_modules/babel-eslint": {
@@ -18280,11 +18280,11 @@
}
},
"axios": {
"version": "0.21.1",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.21.1.tgz",
"integrity": "sha512-dKQiRHxGD9PPRIUNIWvZhPTPpl1rf/OxTYKsqKUDjBwYylTvV7SjSHJb9ratfyzM6wCdLCOYLzs73qpg5c4iGA==",
"version": "0.21.4",
"resolved": "https://registry.npmjs.org/axios/-/axios-0.21.4.tgz",
"integrity": "sha512-ut5vewkiu8jjGBdqpM44XxjuCjq9LAKeHVmoVfHVzy8eHgxxq8SbAVQNovDA8mVi05kP0Ea/n/UzcSHcTJQfNg==",
"requires": {
"follow-redirects": "^1.10.0"
"follow-redirects": "^1.14.0"
}
},
"babel-eslint": {

View File

@@ -179,7 +179,6 @@ module.exports = {
"proposals/block-confirmation",
"proposals/cluster-test-framework",
"proposals/embedding-move",
"proposals/handle-duplicate-block",
"proposals/interchain-transaction-verification",
"proposals/ledger-replication-to-implement",
"proposals/optimistic-confirmation-and-slashing",

View File

@@ -33,14 +33,6 @@ solana airdrop 1 <RECIPIENT_ACCOUNT_ADDRESS> --url https://api.devnet.solana.com
where you replace the text `<RECIPIENT_ACCOUNT_ADDRESS>` with your base58-encoded
public key/wallet address.
A response with the signature of the transaction will be returned. If the balance
of the address does not change by the expected amount, run the following command
for more information on what potentially went wrong:
```bash
solana confirm -v <TRANSACTION_SIGNATURE>
```
#### Check your balance
Confirm the airdrop was successful by checking the account's balance.

View File

@@ -3059,7 +3059,7 @@ curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d '
Result:
```json
{ "jsonrpc": "2.0", "result": { "solana-core": "1.10.8" }, "id": 1 }
{ "jsonrpc": "2.0", "result": { "solana-core": "1.11.0" }, "id": 1 }
```
### getVoteAccounts

View File

@@ -2296,11 +2296,11 @@ autoprefixer@^10.0.2, autoprefixer@^10.2.5:
postcss-value-parser "^4.1.0"
axios@^0.21.1:
version "0.21.1"
resolved "https://registry.yarnpkg.com/axios/-/axios-0.21.1.tgz#22563481962f4d6bde9a76d516ef0e5d3c09b2b8"
integrity sha512-dKQiRHxGD9PPRIUNIWvZhPTPpl1rf/OxTYKsqKUDjBwYylTvV7SjSHJb9ratfyzM6wCdLCOYLzs73qpg5c4iGA==
version "0.21.4"
resolved "https://registry.yarnpkg.com/axios/-/axios-0.21.4.tgz#c67b90dc0568e5c1cf2b0b858c43ba28e2eda575"
integrity sha512-ut5vewkiu8jjGBdqpM44XxjuCjq9LAKeHVmoVfHVzy8eHgxxq8SbAVQNovDA8mVi05kP0Ea/n/UzcSHcTJQfNg==
dependencies:
follow-redirects "^1.10.0"
follow-redirects "^1.14.0"
babel-eslint@^10.1.0:
version "10.1.0"
@@ -4225,10 +4225,10 @@ flux@^4.0.1:
fbemitter "^3.0.0"
fbjs "^3.0.0"
follow-redirects@^1.0.0, follow-redirects@^1.10.0:
version "1.14.1"
resolved "https://registry.yarnpkg.com/follow-redirects/-/follow-redirects-1.14.1.tgz#d9114ded0a1cfdd334e164e6662ad02bfd91ff43"
integrity sha512-HWqDgT7ZEkqRzBvc2s64vSZ/hfOceEol3ac/7tKwzuvEyWx3/4UegXh5oBOIotkGsObyk3xznnSRVADBgWSQVg==
follow-redirects@^1.0.0, follow-redirects@^1.14.0:
version "1.14.9"
resolved "https://registry.yarnpkg.com/follow-redirects/-/follow-redirects-1.14.9.tgz#dd4ea157de7bfaf9ea9b3fbd85aa16951f78d8d7"
integrity sha512-MQDfihBQYMcyy5dhRDJUHcw7lb2Pv/TuE6xP1vyraLukNDHKbDxDNaOE3NbCAdKQApno+GPRyo1YAp89yCjK4w==
for-in@^1.0.2:
version "1.0.2"

View File

@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-dos"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -15,18 +15,18 @@ clap = {version = "3.1.5", features = ["derive", "cargo"]}
log = "0.4.14"
rand = "0.7.0"
serde = "1.0.136"
solana-client = { path = "../client", version = "=1.10.8" }
solana-core = { path = "../core", version = "=1.10.8" }
solana-gossip = { path = "../gossip", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-net-utils = { path = "../net-utils", version = "=1.10.8" }
solana-perf = { path = "../perf", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-streamer = { path = "../streamer", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-client = { path = "../client", version = "=1.11.0" }
solana-core = { path = "../core", version = "=1.11.0" }
solana-gossip = { path = "../gossip", version = "=1.11.0" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-net-utils = { path = "../net-utils", version = "=1.11.0" }
solana-perf = { path = "../perf", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-streamer = { path = "../streamer", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "=1.10.8" }
solana-local-cluster = { path = "../local-cluster", version = "=1.11.0" }

View File

@@ -542,6 +542,7 @@ pub mod test {
}
#[test]
#[ignore]
fn test_dos_local_cluster_transactions() {
let num_nodes = 1;
let cluster =

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-download-utils"
version = "1.10.8"
version = "1.11.0"
description = "Solana Download Utils"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -14,8 +14,8 @@ console = "0.15.0"
indicatif = "0.16.2"
log = "0.4.14"
reqwest = { version = "0.11.10", default-features = false, features = ["blocking", "rustls-tls", "json"] }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
[lib]
crate-type = ["lib"]

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-entry"
version = "1.10.8"
version = "1.11.0"
description = "Solana Entry"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -18,16 +18,16 @@ log = "0.4.11"
rand = "0.7.0"
rayon = "1.5.1"
serde = "1.0.136"
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.10.8" }
solana-metrics = { path = "../metrics", version = "=1.10.8" }
solana-perf = { path = "../perf", version = "=1.10.8" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.11.0" }
solana-metrics = { path = "../metrics", version = "=1.11.0" }
solana-perf = { path = "../perf", version = "=1.11.0" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
[dev-dependencies]
matches = "0.1.9"
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.11.0" }
[lib]
crate-type = ["lib"]

View File

@@ -14,6 +14,7 @@
"@cloudflare/stream-react": "^1.2.0",
"@metamask/jazzicon": "^2.0.0",
"@metaplex/js": "4.12.0",
"@project-serum/anchor": "0.23.0",
"@project-serum/serum": "^0.13.61",
"@react-hook/debounce": "^4.0.0",
"@sentry/react": "^6.16.1",
@@ -4489,17 +4490,18 @@
}
},
"node_modules/@project-serum/anchor": {
"version": "0.11.1",
"resolved": "https://registry.npmjs.org/@project-serum/anchor/-/anchor-0.11.1.tgz",
"integrity": "sha512-oIdm4vTJkUy6GmE6JgqDAuQPKI7XM4TPJkjtoIzp69RZe0iAD9JP2XHx7lV1jLdYXeYHqDXfBt3zcq7W91K6PA==",
"version": "0.23.0",
"resolved": "https://registry.npmjs.org/@project-serum/anchor/-/anchor-0.23.0.tgz",
"integrity": "sha512-LV2/ifZOJVFTZ4GbEloXln3iVfCvO1YM8i7BBCrUm4tehP7irMx4nr4/IabHWOzrQcQElsxSP/lb1tBp+2ff8A==",
"dependencies": {
"@project-serum/borsh": "^0.2.2",
"@solana/web3.js": "^1.17.0",
"@project-serum/borsh": "^0.2.5",
"@solana/web3.js": "^1.36.0",
"base64-js": "^1.5.1",
"bn.js": "^5.1.2",
"bs58": "^4.0.1",
"buffer-layout": "^1.2.0",
"buffer-layout": "^1.2.2",
"camelcase": "^5.3.1",
"cross-fetch": "^3.1.5",
"crypto-hash": "^1.3.0",
"eventemitter3": "^4.0.7",
"find": "^0.3.0",
@@ -4512,11 +4514,95 @@
"node": ">=11"
}
},
"node_modules/@project-serum/anchor/node_modules/@babel/runtime": {
"version": "7.17.8",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.17.8.tgz",
"integrity": "sha512-dQpEpK0O9o6lj6oPu0gRDbbnk+4LeHlNcBpspf6Olzt3GIX4P1lWF1gS+pHLDFlaJvbR6q7jCfQ08zA4QJBnmA==",
"dependencies": {
"regenerator-runtime": "^0.13.4"
},
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@project-serum/anchor/node_modules/@solana/buffer-layout": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/@solana/buffer-layout/-/buffer-layout-4.0.0.tgz",
"integrity": "sha512-lR0EMP2HC3+Mxwd4YcnZb0smnaDw7Bl2IQWZiTevRH5ZZBZn6VRWn3/92E3qdU4SSImJkA6IDHawOHAnx/qUvQ==",
"dependencies": {
"buffer": "~6.0.3"
},
"engines": {
"node": ">=5.10"
}
},
"node_modules/@project-serum/anchor/node_modules/@solana/buffer-layout/node_modules/buffer": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/buffer/-/buffer-6.0.3.tgz",
"integrity": "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==",
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
],
"dependencies": {
"base64-js": "^1.3.1",
"ieee754": "^1.2.1"
}
},
"node_modules/@project-serum/anchor/node_modules/@solana/web3.js": {
"version": "1.37.0",
"resolved": "https://registry.npmjs.org/@solana/web3.js/-/web3.js-1.37.0.tgz",
"integrity": "sha512-O2iCcgkGdi2FXwVLztPIZHcBuZXdhbVLavMsG+RdEyFGzFD0tQN1rOJ+Xb5eaexjqtgcqRN+Fyg3wAhLcHJbiA==",
"dependencies": {
"@babel/runtime": "^7.12.5",
"@ethersproject/sha2": "^5.5.0",
"@solana/buffer-layout": "^4.0.0",
"bn.js": "^5.0.0",
"borsh": "^0.7.0",
"bs58": "^4.0.1",
"buffer": "6.0.1",
"cross-fetch": "^3.1.4",
"jayson": "^3.4.4",
"js-sha3": "^0.8.0",
"rpc-websockets": "^7.4.2",
"secp256k1": "^4.0.2",
"superstruct": "^0.14.2",
"tweetnacl": "^1.0.0"
},
"engines": {
"node": ">=12.20.0"
}
},
"node_modules/@project-serum/anchor/node_modules/borsh": {
"version": "0.7.0",
"resolved": "https://registry.npmjs.org/borsh/-/borsh-0.7.0.tgz",
"integrity": "sha512-CLCsZGIBCFnPtkNnieW/a8wmreDmfUtjU2m9yHrzPXIlNbqVs0AQrSatSG6vdNYUqdc83tkQi2eHfF98ubzQLA==",
"dependencies": {
"bn.js": "^5.2.0",
"bs58": "^4.0.0",
"text-encoding-utf-8": "^1.0.2"
}
},
"node_modules/@project-serum/anchor/node_modules/pako": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/pako/-/pako-2.0.3.tgz",
"integrity": "sha512-WjR1hOeg+kki3ZIOjaf4b5WVcay1jaliKSYiEaB1XzwhMQZJxRdQRv0V31EKBYlxb4T7SK3hjfc/jxyU64BoSw=="
},
"node_modules/@project-serum/anchor/node_modules/superstruct": {
"version": "0.14.2",
"resolved": "https://registry.npmjs.org/superstruct/-/superstruct-0.14.2.tgz",
"integrity": "sha512-nPewA6m9mR3d6k7WkZ8N8zpTWfenFH3q9pA2PkuiZxINr9DKB2+40wEQf0ixn8VaGuJ78AB6iWOtStI+/4FKZQ=="
},
"node_modules/@project-serum/borsh": {
"version": "0.2.5",
"resolved": "https://registry.npmjs.org/@project-serum/borsh/-/borsh-0.2.5.tgz",
@@ -4547,6 +4633,30 @@
"node": ">=10"
}
},
"node_modules/@project-serum/serum/node_modules/@project-serum/anchor": {
"version": "0.11.1",
"resolved": "https://registry.npmjs.org/@project-serum/anchor/-/anchor-0.11.1.tgz",
"integrity": "sha512-oIdm4vTJkUy6GmE6JgqDAuQPKI7XM4TPJkjtoIzp69RZe0iAD9JP2XHx7lV1jLdYXeYHqDXfBt3zcq7W91K6PA==",
"dependencies": {
"@project-serum/borsh": "^0.2.2",
"@solana/web3.js": "^1.17.0",
"base64-js": "^1.5.1",
"bn.js": "^5.1.2",
"bs58": "^4.0.1",
"buffer-layout": "^1.2.0",
"camelcase": "^5.3.1",
"crypto-hash": "^1.3.0",
"eventemitter3": "^4.0.7",
"find": "^0.3.0",
"js-sha256": "^0.9.0",
"pako": "^2.0.3",
"snake-case": "^3.0.4",
"toml": "^3.0.0"
},
"engines": {
"node": ">=11"
}
},
"node_modules/@project-serum/serum/node_modules/@solana/spl-token": {
"version": "0.1.6",
"resolved": "https://registry.npmjs.org/@solana/spl-token/-/spl-token-0.1.6.tgz",
@@ -4594,6 +4704,11 @@
"node": ">=10"
}
},
"node_modules/@project-serum/serum/node_modules/pako": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/pako/-/pako-2.0.4.tgz",
"integrity": "sha512-v8tweI900AUkZN6heMU/4Uy4cXRc2AYNRggVmTR+dEncawDJgCdLMximOVA2p4qO57WMynangsfGRb5WD6L1Bg=="
},
"node_modules/@project-serum/sol-wallet-adapter": {
"version": "0.1.8",
"resolved": "https://registry.npmjs.org/@project-serum/sol-wallet-adapter/-/sol-wallet-adapter-0.1.8.tgz",
@@ -18534,9 +18649,9 @@
}
},
"node_modules/minimist": {
"version": "1.2.5",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
"integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
"version": "1.2.6",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.6.tgz",
"integrity": "sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q=="
},
"node_modules/minipass": {
"version": "3.1.3",
@@ -18707,9 +18822,9 @@
"integrity": "sha512-M2ufzIiINKCuDfBSAUr1vWQ+vuVcA9kqx8JJUsbQi6yf1uGRyb7HfpdfUr5qLXf3B/t8dPvcjhKMmlfnP47EzQ=="
},
"node_modules/nanoid": {
"version": "3.1.23",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.1.23.tgz",
"integrity": "sha512-FiB0kzdP0FFVGDKlRLEQ1BgDzU87dy5NnzjeW9YZNt+/c3+q82EQDUwniSAUxp/F0gFNI1ZhKU1FqYsMuqZVnw==",
"version": "3.3.1",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.1.tgz",
"integrity": "sha512-n6Vs/3KGyxPQd6uO0eH4Bv0ojGSUvuLlIHtC3Y0kEO23YRge8H9x1GCzLn28YX0H66pMkxuaeESFq4tKISKwdw==",
"bin": {
"nanoid": "bin/nanoid.cjs"
},
@@ -30606,17 +30721,18 @@
"peer": true
},
"@project-serum/anchor": {
"version": "0.11.1",
"resolved": "https://registry.npmjs.org/@project-serum/anchor/-/anchor-0.11.1.tgz",
"integrity": "sha512-oIdm4vTJkUy6GmE6JgqDAuQPKI7XM4TPJkjtoIzp69RZe0iAD9JP2XHx7lV1jLdYXeYHqDXfBt3zcq7W91K6PA==",
"version": "0.23.0",
"resolved": "https://registry.npmjs.org/@project-serum/anchor/-/anchor-0.23.0.tgz",
"integrity": "sha512-LV2/ifZOJVFTZ4GbEloXln3iVfCvO1YM8i7BBCrUm4tehP7irMx4nr4/IabHWOzrQcQElsxSP/lb1tBp+2ff8A==",
"requires": {
"@project-serum/borsh": "^0.2.2",
"@solana/web3.js": "^1.17.0",
"@project-serum/borsh": "^0.2.5",
"@solana/web3.js": "^1.36.0",
"base64-js": "^1.5.1",
"bn.js": "^5.1.2",
"bs58": "^4.0.1",
"buffer-layout": "^1.2.0",
"buffer-layout": "^1.2.2",
"camelcase": "^5.3.1",
"cross-fetch": "^3.1.5",
"crypto-hash": "^1.3.0",
"eventemitter3": "^4.0.7",
"find": "^0.3.0",
@@ -30626,10 +30742,73 @@
"toml": "^3.0.0"
},
"dependencies": {
"@babel/runtime": {
"version": "7.17.8",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.17.8.tgz",
"integrity": "sha512-dQpEpK0O9o6lj6oPu0gRDbbnk+4LeHlNcBpspf6Olzt3GIX4P1lWF1gS+pHLDFlaJvbR6q7jCfQ08zA4QJBnmA==",
"requires": {
"regenerator-runtime": "^0.13.4"
}
},
"@solana/buffer-layout": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/@solana/buffer-layout/-/buffer-layout-4.0.0.tgz",
"integrity": "sha512-lR0EMP2HC3+Mxwd4YcnZb0smnaDw7Bl2IQWZiTevRH5ZZBZn6VRWn3/92E3qdU4SSImJkA6IDHawOHAnx/qUvQ==",
"requires": {
"buffer": "~6.0.3"
},
"dependencies": {
"buffer": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/buffer/-/buffer-6.0.3.tgz",
"integrity": "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==",
"requires": {
"base64-js": "^1.3.1",
"ieee754": "^1.2.1"
}
}
}
},
"@solana/web3.js": {
"version": "1.37.0",
"resolved": "https://registry.npmjs.org/@solana/web3.js/-/web3.js-1.37.0.tgz",
"integrity": "sha512-O2iCcgkGdi2FXwVLztPIZHcBuZXdhbVLavMsG+RdEyFGzFD0tQN1rOJ+Xb5eaexjqtgcqRN+Fyg3wAhLcHJbiA==",
"requires": {
"@babel/runtime": "^7.12.5",
"@ethersproject/sha2": "^5.5.0",
"@solana/buffer-layout": "^4.0.0",
"bn.js": "^5.0.0",
"borsh": "^0.7.0",
"bs58": "^4.0.1",
"buffer": "6.0.1",
"cross-fetch": "^3.1.4",
"jayson": "^3.4.4",
"js-sha3": "^0.8.0",
"rpc-websockets": "^7.4.2",
"secp256k1": "^4.0.2",
"superstruct": "^0.14.2",
"tweetnacl": "^1.0.0"
}
},
"borsh": {
"version": "0.7.0",
"resolved": "https://registry.npmjs.org/borsh/-/borsh-0.7.0.tgz",
"integrity": "sha512-CLCsZGIBCFnPtkNnieW/a8wmreDmfUtjU2m9yHrzPXIlNbqVs0AQrSatSG6vdNYUqdc83tkQi2eHfF98ubzQLA==",
"requires": {
"bn.js": "^5.2.0",
"bs58": "^4.0.0",
"text-encoding-utf-8": "^1.0.2"
}
},
"pako": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/pako/-/pako-2.0.3.tgz",
"integrity": "sha512-WjR1hOeg+kki3ZIOjaf4b5WVcay1jaliKSYiEaB1XzwhMQZJxRdQRv0V31EKBYlxb4T7SK3hjfc/jxyU64BoSw=="
},
"superstruct": {
"version": "0.14.2",
"resolved": "https://registry.npmjs.org/superstruct/-/superstruct-0.14.2.tgz",
"integrity": "sha512-nPewA6m9mR3d6k7WkZ8N8zpTWfenFH3q9pA2PkuiZxINr9DKB2+40wEQf0ixn8VaGuJ78AB6iWOtStI+/4FKZQ=="
}
}
},
@@ -30654,6 +30833,27 @@
"buffer-layout": "^1.2.0"
},
"dependencies": {
"@project-serum/anchor": {
"version": "0.11.1",
"resolved": "https://registry.npmjs.org/@project-serum/anchor/-/anchor-0.11.1.tgz",
"integrity": "sha512-oIdm4vTJkUy6GmE6JgqDAuQPKI7XM4TPJkjtoIzp69RZe0iAD9JP2XHx7lV1jLdYXeYHqDXfBt3zcq7W91K6PA==",
"requires": {
"@project-serum/borsh": "^0.2.2",
"@solana/web3.js": "^1.17.0",
"base64-js": "^1.5.1",
"bn.js": "^5.1.2",
"bs58": "^4.0.1",
"buffer-layout": "^1.2.0",
"camelcase": "^5.3.1",
"crypto-hash": "^1.3.0",
"eventemitter3": "^4.0.7",
"find": "^0.3.0",
"js-sha256": "^0.9.0",
"pako": "^2.0.3",
"snake-case": "^3.0.4",
"toml": "^3.0.0"
}
},
"@solana/spl-token": {
"version": "0.1.6",
"resolved": "https://registry.npmjs.org/@solana/spl-token/-/spl-token-0.1.6.tgz",
@@ -30680,6 +30880,11 @@
"version": "10.0.0",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-10.0.0.tgz",
"integrity": "sha512-rlBi9d8jpv9Sf1klPjNfFAuWDjKLwTIJJ/VxtoTwIR6hnZxcEOQCZg2oIL3MWBYw5GpUDKOEnND7LXTbIpQ03Q=="
},
"pako": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/pako/-/pako-2.0.4.tgz",
"integrity": "sha512-v8tweI900AUkZN6heMU/4Uy4cXRc2AYNRggVmTR+dEncawDJgCdLMximOVA2p4qO57WMynangsfGRb5WD6L1Bg=="
}
}
},
@@ -41480,9 +41685,9 @@
}
},
"minimist": {
"version": "1.2.5",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
"integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
"version": "1.2.6",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.6.tgz",
"integrity": "sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q=="
},
"minipass": {
"version": "3.1.3",
@@ -41622,9 +41827,9 @@
"integrity": "sha512-M2ufzIiINKCuDfBSAUr1vWQ+vuVcA9kqx8JJUsbQi6yf1uGRyb7HfpdfUr5qLXf3B/t8dPvcjhKMmlfnP47EzQ=="
},
"nanoid": {
"version": "3.1.23",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.1.23.tgz",
"integrity": "sha512-FiB0kzdP0FFVGDKlRLEQ1BgDzU87dy5NnzjeW9YZNt+/c3+q82EQDUwniSAUxp/F0gFNI1ZhKU1FqYsMuqZVnw=="
"version": "3.3.1",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.1.tgz",
"integrity": "sha512-n6Vs/3KGyxPQd6uO0eH4Bv0ojGSUvuLlIHtC3Y0kEO23YRge8H9x1GCzLn28YX0H66pMkxuaeESFq4tKISKwdw=="
},
"nanomatch": {
"version": "1.2.13",


@@ -9,6 +9,7 @@
"@cloudflare/stream-react": "^1.2.0",
"@metamask/jazzicon": "^2.0.0",
"@metaplex/js": "4.12.0",
"@project-serum/anchor": "0.23.0",
"@project-serum/serum": "^0.13.61",
"@react-hook/debounce": "^4.0.0",
"@sentry/react": "^6.16.1",


@@ -1,18 +1,19 @@
import React from "react";
import { Message, ParsedMessage } from "@solana/web3.js";
import { Cluster } from "providers/cluster";
import { TableCardBody } from "components/common/TableCardBody";
import { programLabel } from "utils/tx";
import { InstructionLogs } from "utils/program-logs";
import { ProgramName } from "utils/anchor";
export function ProgramLogsCardBody({
message,
logs,
cluster,
url,
}: {
message: Message | ParsedMessage;
logs: InstructionLogs[];
cluster: Cluster;
url: string;
}) {
return (
<TableCardBody>
@@ -28,9 +29,6 @@ export function ProgramLogsCardBody({
} else {
programId = ix.programId;
}
const programName =
programLabel(programId.toBase58(), cluster) || "Unknown Program";
const programLogs: InstructionLogs | undefined = logs[index];
let badgeColor = "white";
@@ -45,7 +43,12 @@ export function ProgramLogsCardBody({
<span className={`badge bg-${badgeColor}-soft me-2`}>
#{index + 1}
</span>
{programName} Instruction
<ProgramName
programId={programId}
cluster={cluster}
url={url}
/>{" "}
Instruction
</div>
{programLogs && (
<div className="d-flex align-items-start flex-column font-monospace p-2 font-size-sm">


@@ -42,6 +42,7 @@ export function SearchBar() {
<div className="row align-items-center">
<div className="col">
<Select
autoFocus
ref={(ref) => (selectRef.current = ref)}
options={buildOptions(
search,


@@ -1,6 +1,7 @@
import React from "react";
import classNames from "classnames";
import {
PingInfo,
PingRollupInfo,
PingStatus,
useSolanaPingInfo,
@@ -107,12 +108,10 @@ const CUSTOM_TOOLTIP = function (this: any, tooltipModel: ChartTooltipModel) {
// Set Text
if (tooltipModel.body) {
const { label, value } = tooltipModel.dataPoints[0];
const { label } = tooltipModel.dataPoints[0];
const tooltipContent = tooltipEl.querySelector("div");
if (tooltipContent) {
let innerHtml = `<div class="value">${value} ms</div>`;
innerHtml += `<div class="label">${label}</div>`;
tooltipContent.innerHTML = innerHtml;
tooltipContent.innerHTML = `${label}`;
}
}
@@ -173,33 +172,56 @@ const CHART_OPTION: ChartOptions = {
function PingBarChart({ pingInfo }: { pingInfo: PingRollupInfo }) {
const [series, setSeries] = React.useState<Series>("short");
const seriesData = pingInfo[series] || [];
const maxMean = seriesData.reduce((a, b) => {
return Math.max(a, b.mean);
}, 0);
const seriesLength = seriesData.length;
const backgroundColor = (val: PingInfo) => {
if (val.submitted === 0) {
return "#08a274";
}
return val.loss > 0.5 ? "#f00" : "#00D192";
};
const chartData: Chart.ChartData = {
labels: seriesData.map((val, i) => {
return `
<p class="mb-0">${val.confirmed} of ${val.submitted} confirmed</p>
${
val.loss
? `<p class="mb-0">${val.loss.toLocaleString(undefined, {
style: "percent",
minimumFractionDigits: 2,
})} loss</p>`
: ""
if (val.submitted === 0) {
return `
<div class="label">
<p class="mb-0">Ping statistics unavailable</p>
${SERIES_INFO[series].label(seriesLength - i)}min ago
</div>
`;
}
${SERIES_INFO[series].label(seriesLength - i)}min ago
return `
<div class="value">${val.mean} ms</div>
<div class="label">
<p class="mb-0">${val.confirmed} of ${val.submitted} confirmed</p>
${
val.loss
? `<p class="mb-0">${val.loss.toLocaleString(undefined, {
style: "percent",
minimumFractionDigits: 2,
})} loss</p>`
: ""
}
${SERIES_INFO[series].label(seriesLength - i)}min ago
</div>
`;
}),
datasets: [
{
backgroundColor: seriesData.map((val) =>
val.loss > 0.5 ? "#f00" : "#00D192"
),
hoverBackgroundColor: seriesData.map((val) =>
val.loss > 0.5 ? "#f00" : "#00D192"
),
minBarLength: 2,
backgroundColor: seriesData.map(backgroundColor),
hoverBackgroundColor: seriesData.map(backgroundColor),
borderWidth: 0,
data: seriesData.map((val) => val.mean || 0),
data: seriesData.map((val) => {
if (val.submitted === 0) {
return maxMean * 0.5;
}
return val.mean || 0;
}),
},
],
};


@@ -0,0 +1,157 @@
import React, { useMemo } from "react";
import { Account } from "providers/accounts";
import { Address } from "components/common/Address";
import { BorshAccountsCoder } from "@project-serum/anchor";
import { capitalizeFirstLetter } from "utils/anchor";
import { ErrorCard } from "components/common/ErrorCard";
import { PublicKey } from "@solana/web3.js";
import BN from "bn.js";
import ReactJson from "react-json-view";
import { useCluster } from "providers/cluster";
import { useAnchorProgram } from "providers/anchor";
export function AnchorAccountCard({ account }: { account: Account }) {
const { url } = useCluster();
const program = useAnchorProgram(
account.details?.owner.toString() ?? "",
url
);
const { foundAccountLayoutName, decodedAnchorAccountData } = useMemo(() => {
let foundAccountLayoutName: string | undefined;
let decodedAnchorAccountData: { [key: string]: any } | undefined;
if (program && account.details && account.details.rawData) {
const accountBuffer = account.details.rawData;
const discriminator = accountBuffer.slice(0, 8);
// Iterate all the structs, see if any of the name-hashes match
Object.keys(program.account).forEach((accountType) => {
const layoutName = capitalizeFirstLetter(accountType);
const discriminatorToCheck =
BorshAccountsCoder.accountDiscriminator(layoutName);
if (discriminatorToCheck.equals(discriminator)) {
foundAccountLayoutName = layoutName;
const accountDecoder = program.account[accountType];
decodedAnchorAccountData = accountDecoder.coder.accounts.decode(
layoutName,
accountBuffer
);
}
});
}
return { foundAccountLayoutName, decodedAnchorAccountData };
}, [program, account.details]);
if (!foundAccountLayoutName || !decodedAnchorAccountData) {
return (
<ErrorCard text="Failed to decode account data according to its public anchor interface" />
);
}
return (
<>
<div className="card">
<div className="card-header">
<div className="row align-items-center">
<div className="col">
<h3 className="card-header-title">{foundAccountLayoutName}</h3>
</div>
</div>
</div>
<div className="table-responsive mb-0">
<table className="table table-sm table-nowrap card-table">
<thead>
<tr>
<th className="w-1 text-muted">Key</th>
<th className="text-muted">Value</th>
</tr>
</thead>
<tbody className="list">
{decodedAnchorAccountData &&
Object.keys(decodedAnchorAccountData).map((key) => (
<AccountRow
key={key}
valueName={key}
value={decodedAnchorAccountData[key]}
/>
))}
</tbody>
</table>
</div>
<div className="card-footer">
<div className="text-muted text-center">
{decodedAnchorAccountData &&
Object.keys(decodedAnchorAccountData).length > 0
? `Decoded ${Object.keys(decodedAnchorAccountData).length} Items`
: "No decoded data"}
</div>
</div>
</div>
</>
);
}
function AccountRow({ valueName, value }: { valueName: string; value: any }) {
let displayValue: JSX.Element;
if (value instanceof PublicKey) {
displayValue = <Address pubkey={value} link />;
} else if (value instanceof BN) {
displayValue = <>{value.toString()}</>;
} else if (!(value instanceof Object)) {
displayValue = <>{String(value)}</>;
} else if (value) {
const displayObject = stringifyPubkeyAndBigNums(value);
displayValue = (
<ReactJson
src={JSON.parse(JSON.stringify(displayObject))}
collapsed={1}
theme="solarized"
/>
);
} else {
displayValue = <>null</>;
}
return (
<tr>
<td className="w-1 text-monospace">{camelToUnderscore(valueName)}</td>
<td className="text-monospace">{displayValue}</td>
</tr>
);
}
function camelToUnderscore(key: string) {
var result = key.replace(/([A-Z])/g, " $1");
return result.split(" ").join("_").toLowerCase();
}
function stringifyPubkeyAndBigNums(object: Object): Object {
if (!Array.isArray(object)) {
if (object instanceof PublicKey) {
return object.toString();
} else if (object instanceof BN) {
return object.toString();
} else if (!(object instanceof Object)) {
return object;
} else {
const parsedObject: { [key: string]: Object } = {};
Object.keys(object).map((key) => {
let value = (object as { [key: string]: any })[key];
if (value instanceof Object) {
value = stringifyPubkeyAndBigNums(value);
}
parsedObject[key] = value;
return null;
});
return parsedObject;
}
}
return object.map((innerObject) =>
innerObject instanceof Object
? stringifyPubkeyAndBigNums(innerObject)
: innerObject
);
}
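
For context on the discriminator check in AnchorAccountCard above: Anchor prefixes every account it serializes with an 8-byte discriminator derived from the account layout's name, and BorshAccountsCoder.accountDiscriminator reproduces that hash, which is why comparing it against the first 8 bytes of the raw account data identifies the layout. A minimal sketch of the same lookup outside React (the helper function and its inputs are hypothetical, not part of this change):

import { BorshAccountsCoder } from "@project-serum/anchor";

// Sketch: given raw account bytes and the layout names from an IDL, return the
// layout whose Anchor discriminator matches the account's 8-byte prefix.
function matchAnchorLayout(
  rawData: Buffer,
  layoutNames: string[]
): string | undefined {
  const discriminator = rawData.slice(0, 8);
  return layoutNames.find((name) =>
    BorshAccountsCoder.accountDiscriminator(name).equals(discriminator)
  );
}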


@@ -0,0 +1,36 @@
import { PublicKey } from "@solana/web3.js";
import { useAnchorProgram } from "providers/anchor";
import { useCluster } from "providers/cluster";
import ReactJson from "react-json-view";
export function AnchorProgramCard({ programId }: { programId: PublicKey }) {
const { url } = useCluster();
const program = useAnchorProgram(programId.toString(), url);
if (!program) {
return null;
}
return (
<>
<div className="card">
<div className="card-header">
<div className="row align-items-center">
<div className="col">
<h3 className="card-header-title">Anchor IDL</h3>
</div>
</div>
</div>
<div className="card metadata-json-viewer m-4">
<ReactJson
src={program.idl}
theme={"solarized"}
style={{ padding: 25 }}
collapsed={1}
/>
</div>
</div>
</>
);
}


@@ -0,0 +1,271 @@
import { ErrorCard } from "components/common/ErrorCard";
import { TableCardBody } from "components/common/TableCardBody";
import { UpgradeableLoaderAccountData } from "providers/accounts";
import { fromProgramData, SecurityTXT } from "utils/security-txt";
export function SecurityCard({ data }: { data: UpgradeableLoaderAccountData }) {
if (!data.programData) {
return <ErrorCard text="Account has no data" />;
}
const { securityTXT, error } = fromProgramData(data.programData);
if (!securityTXT) {
return <ErrorCard text={error!} />;
}
return (
<div className="card security-txt">
<div className="card-header">
<h3 className="card-header-title mb-0 d-flex align-items-center">
Security.txt
</h3>
<small>
Note that this is self-reported by the author of the program and might
not be accurate.
</small>
</div>
<TableCardBody>
{ROWS.filter((x) => x.key in securityTXT).map((x, idx) => {
return (
<tr key={idx}>
<td className="w-100">{x.display}</td>
<RenderEntry value={securityTXT[x.key]} type={x.type} />
</tr>
);
})}
</TableCardBody>
</div>
);
}
enum DisplayType {
String,
URL,
Date,
Contacts,
PGP,
Auditors,
}
type TableRow = {
display: string;
key: keyof SecurityTXT;
type: DisplayType;
};
const ROWS: TableRow[] = [
{
display: "Name",
key: "name",
type: DisplayType.String,
},
{
display: "Project URL",
key: "project_url",
type: DisplayType.URL,
},
{
display: "Contacts",
key: "contacts",
type: DisplayType.Contacts,
},
{
display: "Policy",
key: "policy",
type: DisplayType.URL,
},
{
display: "Preferred Languages",
key: "preferred_languages",
type: DisplayType.String,
},
{
display: "Source Code URL",
key: "source_code",
type: DisplayType.URL,
},
{
display: "Secure Contact Encryption",
key: "encryption",
type: DisplayType.PGP,
},
{
display: "Auditors",
key: "auditors",
type: DisplayType.Auditors,
},
{
display: "Acknowledgements",
key: "acknowledgements",
type: DisplayType.URL,
},
{
display: "Expiry",
key: "expiry",
type: DisplayType.Date,
},
];
function RenderEntry({
value,
type,
}: {
value: SecurityTXT[keyof SecurityTXT];
type: DisplayType;
}) {
if (!value) {
return <></>;
}
switch (type) {
case DisplayType.String:
return <td className="text-lg-end font-monospace">{value}</td>;
case DisplayType.Contacts:
return (
<td className="text-lg-end font-monospace">
<ul>
{value?.split(",").map((c, i) => {
const idx = c.indexOf(":");
if (idx < 0) {
//invalid contact
return <li key={i}>{c}</li>;
}
const [type, information] = [c.slice(0, idx), c.slice(idx + 1)];
return (
<li key={i}>
<Contact type={type} information={information} />
</li>
);
})}
</ul>
</td>
);
case DisplayType.URL:
if (isValidLink(value)) {
return (
<td className="text-lg-end">
<span className="font-monospace">
<a rel="noopener noreferrer" target="_blank" href={value}>
{value}
<span className="fe fe-external-link ms-2"></span>
</a>
</span>
</td>
);
}
return (
<td className="text-lg-end">
<pre>{value.trim()}</pre>
</td>
);
case DisplayType.Date:
return <td className="text-lg-end font-monospace">{value}</td>;
case DisplayType.PGP:
if (isValidLink(value)) {
return (
<td className="text-lg-end">
<span className="font-monospace">
<a rel="noopener noreferrer" target="_blank" href={value}>
{value}
<span className="fe fe-external-link ms-2"></span>
</a>
</span>
</td>
);
}
return (
<td>
<code>{value.trim()}</code>
</td>
);
case DisplayType.Auditors:
if (isValidLink(value)) {
return (
<td className="text-lg-end">
<span className="font-monospace">
<a rel="noopener noreferrer" target="_blank" href={value}>
{value}
<span className="fe fe-external-link ms-2"></span>
</a>
</span>
</td>
);
}
return (
<td>
<ul>
{value?.split(",").map((c, idx) => {
return <li key={idx}>{c}</li>;
})}
</ul>
</td>
);
default:
break;
}
return <></>;
}
function isValidLink(value: string) {
try {
const url = new URL(value);
return ["http:", "https:"].includes(url.protocol);
} catch (err) {
return false;
}
}
function Contact({ type, information }: { type: string; information: string }) {
switch (type) {
case "discord":
return <>Discord: {information}</>;
case "email":
return (
<a
rel="noopener noreferrer"
target="_blank"
href={`mailto:${information}`}
>
{information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
case "telegram":
return (
<a
rel="noopener noreferrer"
target="_blank"
href={`https://t.me/${information}`}
>
Telegram: {information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
case "twitter":
return (
<a
rel="noopener noreferrer"
target="_blank"
href={`https://twitter.com/${information}`}
>
Twitter: {information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
case "link":
if (isValidLink(information)) {
return (
<a rel="noopener noreferrer" target="_blank" href={`${information}`}>
{information}
<span className="fe fe-external-link ms-2"></span>
</a>
);
}
return <>{information}</>;
case "other":
default:
return (
<>
{type}: {information}
</>
);
}
}


@@ -296,11 +296,11 @@ function isFullyInactivated(
return false;
}
const delegatedStake = stake.delegation.stake.toNumber();
const inactiveStake = activation.inactive;
const delegatedStake = stake.delegation.stake;
const inactiveStake = new BN(activation.inactive);
return (
!stake.delegation.deactivationEpoch.eq(MAX_EPOCH) &&
delegatedStake === inactiveStake
delegatedStake.eq(inactiveStake)
);
}


@@ -18,6 +18,7 @@ import { Downloadable } from "components/common/Downloadable";
import { CheckingBadge, VerifiedBadge } from "components/common/VerifiedBadge";
import { InfoTooltip } from "components/common/InfoTooltip";
import { useVerifiableBuilds } from "utils/program-verification";
import { SecurityTXTBadge } from "components/common/SecurityTXTBadge";
export function UpgradeableLoaderAccountSection({
account,
@@ -146,6 +147,17 @@ export function UpgradeableProgramSection({
)}
</td>
</tr>
<tr>
<td>
<SecurityLabel />
</td>
<td className="text-lg-end">
<SecurityTXTBadge
programData={programData}
pubkey={account.pubkey}
/>
</td>
</tr>
<tr>
<td>Last Deployed Slot</td>
<td className="text-lg-end">
@@ -165,6 +177,21 @@ export function UpgradeableProgramSection({
);
}
function SecurityLabel() {
return (
<InfoTooltip text="Security.txt helps security researchers to contact developers if they find security bugs.">
<a
rel="noopener noreferrer"
target="_blank"
href="https://github.com/neodyme-labs/solana-security-txt"
>
<span className="security-txt-link-color-hack-reee">Security.txt</span>
<span className="fe fe-external-link ms-2"></span>
</a>
</InfoTooltip>
);
}
function LastVerifiedBuildLabel() {
return (
<InfoTooltip text="Indicates whether the program currently deployed on-chain is verified to match the associated published source code, when it is available.">


@@ -0,0 +1,33 @@
import { PublicKey } from "@solana/web3.js";
import { Link } from "react-router-dom";
import { fromProgramData } from "utils/security-txt";
import { clusterPath } from "utils/url";
import { ProgramDataAccountInfo } from "validators/accounts/upgradeable-program";
export function SecurityTXTBadge({
programData,
pubkey,
}: {
programData: ProgramDataAccountInfo;
pubkey: PublicKey;
}) {
const { securityTXT, error } = fromProgramData(programData);
if (securityTXT) {
return (
<h3 className="mb-0">
<Link
className="c-pointer badge bg-success-soft rank"
to={clusterPath(`/address/${pubkey.toBase58()}/security`)}
>
Included
</Link>
</h3>
);
} else {
return (
<h3 className="mb-0">
<span className="badge bg-warning-soft rank">{error}</span>
</h3>
);
}
}


@@ -0,0 +1,101 @@
import { SignatureResult, TransactionInstruction } from "@solana/web3.js";
import { InstructionCard } from "./InstructionCard";
import { Idl, Program, BorshInstructionCoder } from "@project-serum/anchor";
import {
getAnchorNameForInstruction,
getProgramName,
capitalizeFirstLetter,
getAnchorAccountsFromInstruction,
} from "utils/anchor";
import { HexData } from "components/common/HexData";
import { Address } from "components/common/Address";
import ReactJson from "react-json-view";
export default function AnchorDetailsCard(props: {
key: string;
ix: TransactionInstruction;
index: number;
result: SignatureResult;
signature: string;
innerCards?: JSX.Element[];
childIndex?: number;
anchorProgram: Program<Idl>;
}) {
const { ix, anchorProgram } = props;
const programName = getProgramName(anchorProgram) ?? "Unknown Program";
const ixName =
getAnchorNameForInstruction(ix, anchorProgram) ?? "Unknown Instruction";
const cardTitle = `${programName}: ${ixName}`;
return (
<InstructionCard title={cardTitle} {...props}>
<RawAnchorDetails ix={ix} anchorProgram={anchorProgram} />
</InstructionCard>
);
}
function RawAnchorDetails({
ix,
anchorProgram,
}: {
ix: TransactionInstruction;
anchorProgram: Program;
}) {
let ixAccounts:
| {
name: string;
isMut: boolean;
isSigner: boolean;
pda?: Object;
}[]
| null = null;
var decodedIxData = null;
if (anchorProgram) {
const decoder = new BorshInstructionCoder(anchorProgram.idl);
decodedIxData = decoder.decode(ix.data);
ixAccounts = getAnchorAccountsFromInstruction(decodedIxData, anchorProgram);
}
return (
<>
{ix.keys.map(({ pubkey, isSigner, isWritable }, keyIndex) => {
return (
<tr key={keyIndex}>
<td>
<div className="me-2 d-md-inline">
{ixAccounts && keyIndex < ixAccounts.length
? `${capitalizeFirstLetter(ixAccounts[keyIndex].name)}`
: `Account #${keyIndex + 1}`}
</div>
{isWritable && (
<span className="badge bg-info-soft me-1">Writable</span>
)}
{isSigner && (
<span className="badge bg-info-soft me-1">Signer</span>
)}
</td>
<td className="text-lg-end">
<Address pubkey={pubkey} alignRight link />
</td>
</tr>
);
})}
<tr>
<td>
Instruction Data <span className="text-muted">(Hex)</span>
</td>
{decodedIxData ? (
<td className="metadata-json-viewer m-4">
<ReactJson src={decodedIxData} theme="solarized" />
</td>
) : (
<td className="text-lg-end">
<HexData raw={ix.data} />
</td>
)}
</tr>
</>
);
}


@@ -500,7 +500,7 @@ export function decodeInitOpenOrders(
openOrders: ix.keys[0].pubkey,
openOrdersOwner: ix.keys[1].pubkey,
market: ix.keys[2].pubkey,
openOrdersMarketAuthority: ix.keys[4].pubkey,
openOrdersMarketAuthority: ix.keys[4]?.pubkey,
},
};
}


@@ -166,6 +166,10 @@ const BurnChecked = type({
tokenAmount: TokenAmountUi,
});
const SyncNative = type({
account: PublicKeyFromString,
});
export type TokenInstructionType = Infer<typeof TokenInstructionType>;
export const TokenInstructionType = enums([
"initializeMint",
@@ -188,6 +192,7 @@ export const TokenInstructionType = enums([
"approveChecked",
"mintToChecked",
"burnChecked",
"syncNative",
]);
export const IX_STRUCTS = {
@@ -211,6 +216,7 @@ export const IX_STRUCTS = {
approveChecked: ApproveChecked,
mintToChecked: MintToChecked,
burnChecked: BurnChecked,
syncNative: SyncNative,
};
export const IX_TITLES = {
@@ -234,4 +240,5 @@ export const IX_TITLES = {
approveChecked: "Approve (Checked)",
mintToChecked: "Mint To (Checked)",
burnChecked: "Burn (Checked)",
syncNative: "Sync Native",
};


@@ -5,7 +5,6 @@ import {
ParsedInstruction,
ParsedTransaction,
PartiallyDecodedInstruction,
PublicKey,
SignatureResult,
TransactionSignature,
} from "@solana/web3.js";
@@ -21,8 +20,8 @@ import { WormholeDetailsCard } from "components/instruction/WormholeDetailsCard"
import { UnknownDetailsCard } from "components/instruction/UnknownDetailsCard";
import { BonfidaBotDetailsCard } from "components/instruction/BonfidaBotDetails";
import {
SignatureProps,
INNER_INSTRUCTIONS_START_SLOT,
SignatureProps,
} from "pages/TransactionDetailsPage";
import { intoTransactionInstruction } from "utils/tx";
import { isSerumInstruction } from "components/instruction/serum/types";
@@ -39,10 +38,14 @@ import { BpfUpgradeableLoaderDetailsCard } from "components/instruction/bpf-upgr
import { VoteDetailsCard } from "components/instruction/vote/VoteDetailsCard";
import { isWormholeInstruction } from "components/instruction/wormhole/types";
import { AssociatedTokenDetailsCard } from "components/instruction/AssociatedTokenDetailsCard";
import { isMangoInstruction } from "components/instruction/mango/types";
import { MangoDetailsCard } from "components/instruction/MangoDetails";
import { isPythInstruction } from "components/instruction/pyth/types";
import { PythDetailsCard } from "components/instruction/pyth/PythDetailsCard";
import AnchorDetailsCard from "../instruction/AnchorDetailsCard";
import { isMangoInstruction } from "../instruction/mango/types";
import { useAnchorProgram } from "providers/anchor";
import { LoadingCard } from "components/common/LoadingCard";
import { ErrorBoundary } from "@sentry/react";
export type InstructionDetailsProps = {
tx: ParsedTransaction;
@@ -56,14 +59,16 @@ export type InstructionDetailsProps = {
export function InstructionsSection({ signature }: SignatureProps) {
const status = useTransactionStatus(signature);
const details = useTransactionDetails(signature);
const { cluster } = useCluster();
const { cluster, url } = useCluster();
const fetchDetails = useFetchTransactionDetails();
const refreshDetails = () => fetchDetails(signature);
if (!status?.data?.info || !details?.data?.transaction) return null;
const { transaction } = details.data.transaction;
const result = status?.data?.info?.result;
if (!result || !details?.data?.transaction) {
return <ErrorCard retry={refreshDetails} text="No instructions found" />;
}
const { meta } = details.data.transaction;
const { transaction } = details.data?.transaction;
if (transaction.message.instructions.length === 0) {
return <ErrorCard retry={refreshDetails} text="No instructions found" />;
@@ -89,58 +94,60 @@ export function InstructionsSection({ signature }: SignatureProps) {
});
}
const result = status.data.info.result;
const instructionDetails = transaction.message.instructions.map(
(instruction, index) => {
let innerCards: JSX.Element[] = [];
if (index in innerInstructions) {
innerInstructions[index].forEach((ix, childIndex) => {
if (typeof ix.programId === "string") {
ix.programId = new PublicKey(ix.programId);
}
let res = renderInstructionCard({
index,
ix,
result,
signature,
tx: transaction,
childIndex,
});
innerCards.push(res);
});
}
return renderInstructionCard({
index,
ix: instruction,
result,
signature,
tx: transaction,
innerCards,
});
}
);
return (
<>
<div className="container">
<div className="header">
<div className="header-body">
<h3 className="mb-0">
{instructionDetails.length > 1 ? "Instructions" : "Instruction"}
{transaction.message.instructions.length > 1
? "Instructions"
: "Instruction"}
</h3>
</div>
</div>
</div>
{instructionDetails}
<React.Suspense fallback={<LoadingCard message="Loading Instructions" />}>
{transaction.message.instructions.map((instruction, index) => {
let innerCards: JSX.Element[] = [];
if (index in innerInstructions) {
innerInstructions[index].forEach((ix, childIndex) => {
let res = (
<InstructionCard
key={`${index}-${childIndex}`}
index={index}
ix={ix}
result={result}
signature={signature}
tx={transaction}
childIndex={childIndex}
url={url}
/>
);
innerCards.push(res);
});
}
return (
<InstructionCard
key={`${index}`}
index={index}
ix={instruction}
result={result}
signature={signature}
tx={transaction}
innerCards={innerCards}
url={url}
/>
);
})}
</React.Suspense>
</>
);
}
function renderInstructionCard({
function InstructionCard({
ix,
tx,
result,
@@ -148,6 +155,7 @@ function renderInstructionCard({
signature,
innerCards,
childIndex,
url,
}: {
ix: ParsedInstruction | PartiallyDecodedInstruction;
tx: ParsedTransaction;
@@ -156,8 +164,10 @@ function renderInstructionCard({
signature: TransactionSignature;
innerCards?: JSX.Element[];
childIndex?: number;
url: string;
}) {
const key = `${index}-${childIndex}`;
const anchorProgram = useAnchorProgram(ix.programId.toString(), url);
if ("parsed" in ix) {
const props = {
@@ -226,6 +236,12 @@ function renderInstructionCard({
return <WormholeDetailsCard key={key} {...props} />;
} else if (isPythInstruction(transactionIx)) {
return <PythDetailsCard key={key} {...props} />;
} else if (anchorProgram) {
return (
<ErrorBoundary fallback={<UnknownDetailsCard {...props} />}>
<AnchorDetailsCard key={key} anchorProgram={anchorProgram} {...props} />
</ErrorBoundary>
);
} else {
return <UnknownDetailsCard key={key} {...props} />;
}


@@ -6,7 +6,7 @@ import { prettyProgramLogs } from "utils/program-logs";
import { useCluster } from "providers/cluster";
export function ProgramLogSection({ signature }: SignatureProps) {
const { cluster } = useCluster();
const { cluster, url } = useCluster();
const details = useTransactionDetails(signature);
const transaction = details?.data?.transaction;
@@ -32,6 +32,7 @@ export function ProgramLogSection({ signature }: SignatureProps) {
message={message}
logs={prettyLogs}
cluster={cluster}
url={url}
/>
) : (
<div className="card-body">


@@ -5,7 +5,6 @@ import {
useFetchAccountInfo,
useAccountInfo,
Account,
ProgramData,
TokenProgramData,
useMintAccountInfo,
} from "providers/accounts";
@@ -40,6 +39,10 @@ import { MetaplexMetadataCard } from "components/account/MetaplexMetadataCard";
import { NFTHeader } from "components/account/MetaplexNFTHeader";
import { DomainsCard } from "components/account/DomainsCard";
import isMetaplexNFT from "providers/accounts/utils/isMetaplexNFT";
import { SecurityCard } from "components/account/SecurityCard";
import { AnchorAccountCard } from "components/account/AnchorAccountCard";
import { AnchorProgramCard } from "components/account/AnchorProgramCard";
import { useAnchorProgram } from "providers/anchor";
const IDENTICON_WIDTH = 64;
@@ -108,6 +111,13 @@ const TABS_LOOKUP: { [id: string]: Tab[] } = {
path: "/stake-history",
},
],
"bpf-upgradeable-loader": [
{
slug: "security",
title: "Security",
path: "/security",
},
],
};
const TOKEN_TABS_HIDDEN = [
@@ -238,11 +248,16 @@ function DetailsSections({
}
const account = info.data;
const data = account?.details?.data;
const tabs = getTabs(data);
const tabComponents = getTabs(pubkey, account).concat(
getAnchorTabs(pubkey, account)
);
let moreTab: MoreTabs = "history";
if (tab && tabs.filter(({ slug }) => slug === tab).length === 0) {
if (
tab &&
tabComponents.filter((tabComponent) => tabComponent.tab.slug === tab)
.length === 0
) {
return <Redirect to={{ ...location, pathname: `/address/${address}` }} />;
} else if (tab) {
moreTab = tab as MoreTabs;
@@ -257,7 +272,11 @@ function DetailsSections({
</div>
)}
{<InfoSection account={account} />}
{<MoreSection account={account} tab={moreTab} tabs={tabs} />}
<MoreSection
account={account}
tab={moreTab}
tabs={tabComponents.map(({ component }) => component)}
/>
</>
);
}
@@ -307,6 +326,11 @@ type Tab = {
path: string;
};
type TabComponent = {
tab: Tab;
component: JSX.Element | null;
};
export type MoreTabs =
| "history"
| "tokens"
@@ -319,7 +343,10 @@ export type MoreTabs =
| "instructions"
| "rewards"
| "metadata"
| "domains";
| "domains"
| "security"
| "anchor-program"
| "anchor-account";
function MoreSection({
account,
@@ -328,29 +355,17 @@ function MoreSection({
}: {
account: Account;
tab: MoreTabs;
tabs: Tab[];
tabs: (JSX.Element | null)[];
}) {
const pubkey = account.pubkey;
const address = account.pubkey.toBase58();
const data = account?.details?.data;
return (
<>
<div className="container">
<div className="header">
<div className="header-body pt-0">
<ul className="nav nav-tabs nav-overflow header-tabs">
{tabs.map(({ title, slug, path }) => (
<li key={slug} className="nav-item">
<NavLink
className="nav-link"
to={clusterPath(`/address/${address}${path}`)}
exact
>
{title}
</NavLink>
</li>
))}
</ul>
<ul className="nav nav-tabs nav-overflow header-tabs">{tabs}</ul>
</div>
</div>
</div>
@@ -389,11 +404,32 @@ function MoreSection({
/>
)}
{tab === "domains" && <DomainsCard pubkey={pubkey} />}
{tab === "security" && data?.program === "bpf-upgradeable-loader" && (
<SecurityCard data={data} />
)}
{tab === "anchor-program" && (
<React.Suspense
fallback={<LoadingCard message="Loading anchor program IDL" />}
>
<AnchorProgramCard programId={pubkey} />
</React.Suspense>
)}
{tab === "anchor-account" && (
<React.Suspense
fallback={
<LoadingCard message="Decoding account data using anchor interface" />
}
>
<AnchorAccountCard account={account} />
</React.Suspense>
)}
</>
);
}
function getTabs(data?: ProgramData): Tab[] {
function getTabs(pubkey: PublicKey, account: Account): TabComponent[] {
const address = pubkey.toBase58();
const data = account.details?.data;
const tabs: Tab[] = [
{
slug: "history",
@@ -443,5 +479,122 @@ function getTabs(data?: ProgramData): Tab[] {
});
}
return tabs;
return tabs.map((tab) => {
return {
tab,
component: (
<li key={tab.slug} className="nav-item">
<NavLink
className="nav-link"
to={clusterPath(`/address/${address}${tab.path}`)}
exact
>
{tab.title}
</NavLink>
</li>
),
};
});
}
function getAnchorTabs(pubkey: PublicKey, account: Account) {
const tabComponents = [];
const anchorProgramTab: Tab = {
slug: "anchor-program",
title: "Anchor Program IDL",
path: "/anchor-program",
};
tabComponents.push({
tab: anchorProgramTab,
component: (
<React.Suspense key={anchorProgramTab.slug} fallback={<></>}>
<AnchorProgramLink
tab={anchorProgramTab}
address={pubkey.toString()}
pubkey={pubkey}
/>
</React.Suspense>
),
});
const anchorAccountTab: Tab = {
slug: "anchor-account",
title: "Anchor Account",
path: "/anchor-account",
};
tabComponents.push({
tab: anchorAccountTab,
component: (
<React.Suspense key={anchorAccountTab.slug} fallback={<></>}>
<AnchorAccountLink
tab={anchorAccountTab}
address={pubkey.toString()}
programId={account.details?.owner}
/>
</React.Suspense>
),
});
return tabComponents;
}
function AnchorProgramLink({
tab,
address,
pubkey,
}: {
tab: Tab;
address: string;
pubkey: PublicKey;
}) {
const { url } = useCluster();
const anchorProgram = useAnchorProgram(pubkey.toString() ?? "", url);
if (!anchorProgram) {
return null;
}
return (
<li key={tab.slug} className="nav-item">
<NavLink
className="nav-link"
to={clusterPath(`/address/${address}${tab.path}`)}
exact
>
{tab.title}
</NavLink>
</li>
);
}
function AnchorAccountLink({
address,
tab,
programId,
}: {
address: string;
tab: Tab;
programId: PublicKey | undefined;
}) {
const { url } = useCluster();
const accountAnchorProgram = useAnchorProgram(
programId?.toString() ?? "",
url
);
if (!accountAnchorProgram) {
return null;
}
return (
<li key={tab.slug} className="nav-item">
<NavLink
className="nav-link"
to={clusterPath(`/address/${address}${tab.path}`)}
exact
>
{tab.title}
</NavLink>
</li>
);
}


@@ -105,7 +105,11 @@ export function TransactionDetailsPage({ signature: raw }: SignatureProps) {
) : (
<SignatureContext.Provider value={signature}>
<StatusCard signature={signature} autoRefresh={autoRefresh} />
<DetailsSection signature={signature} />
<React.Suspense
fallback={<LoadingCard message="Loading transaction details" />}
>
<DetailsSection signature={signature} />
</React.Suspense>
</SignatureContext.Provider>
)}
</div>


@@ -8,7 +8,7 @@ import { ProgramLogsCardBody } from "components/ProgramLogsCardBody";
const DEFAULT_SIGNATURE = bs58.encode(Buffer.alloc(64).fill(0));
export function SimulatorCard({ message }: { message: Message }) {
const { cluster } = useCluster();
const { cluster, url } = useCluster();
const {
simulate,
simulating,
@@ -67,7 +67,12 @@ export function SimulatorCard({ message }: { message: Message }) {
Retry
</button>
</div>
<ProgramLogsCardBody message={message} logs={logs} cluster={cluster} />
<ProgramLogsCardBody
message={message}
logs={logs}
cluster={cluster}
url={url}
/>
</div>
);
}


@@ -90,6 +90,7 @@ export interface Details {
owner: PublicKey;
space: number;
data?: ProgramData;
rawData?: Buffer;
}
export interface Account {
@@ -284,11 +285,19 @@ async function fetchAccountInfo(
}
}
// If we cannot parse account layout as native spl account
// then keep raw data for other components to decode
let rawData: Buffer | undefined;
if (!data && !("parsed" in result.data)) {
rawData = result.data;
}
details = {
space,
executable: result.executable,
owner: result.owner,
data,
rawData,
};
}
data = { pubkey, lamports, details };


@@ -0,0 +1,47 @@
import { Idl, Program, Provider } from "@project-serum/anchor";
import { Connection, Keypair } from "@solana/web3.js";
import { NodeWallet } from "@metaplex/js";
const cachedAnchorProgramPromises: Record<
string,
| void
| { __type: "promise"; promise: Promise<void> }
| { __type: "result"; result: Program<Idl> | null }
> = {};
export function useAnchorProgram(
programAddress: string,
url: string
): Program | null {
const key = `${programAddress}-${url}`;
const cacheEntry = cachedAnchorProgramPromises[key];
if (cacheEntry === undefined) {
const promise = Program.at(
programAddress,
new Provider(new Connection(url), new NodeWallet(Keypair.generate()), {})
)
.then((program) => {
cachedAnchorProgramPromises[key] = {
__type: "result",
result: program,
};
})
.catch((_) => {
cachedAnchorProgramPromises[key] = { __type: "result", result: null };
});
cachedAnchorProgramPromises[key] = {
__type: "promise",
promise,
};
throw promise;
} else if (cacheEntry.__type === "promise") {
throw cacheEntry.promise;
}
return cacheEntry.result;
}
export type AnchorAccount = {
layout: string;
account: Object;
};
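
The hook above follows the React Suspense convention of throwing the in-flight promise: the first call for a given program/url pair starts Program.at, caches and throws the promise, and later renders return the cached Program (or null if the IDL could not be fetched). A hedged usage sketch, with ExampleProgramName as a made-up consumer component rather than anything added by this change:

import React from "react";
import { useAnchorProgram } from "providers/anchor";

// Hypothetical consumer: suspends until the IDL fetch settles, then renders the
// program name from the IDL or a fallback when no IDL was found.
function ExampleProgramName({ address, url }: { address: string; url: string }) {
  const program = useAnchorProgram(address, url); // throws while the fetch is in flight
  return <>{program ? program.idl.name : "No IDL found"}</>;
}

// A Suspense boundary turns the thrown promise into a loading state instead of an error.
export function ExampleProgramNameWithFallback(props: { address: string; url: string }) {
  return (
    <React.Suspense fallback={<>Loading IDL</>}>
      <ExampleProgramName {...props} />
    </React.Suspense>
  );
}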


@@ -445,3 +445,33 @@ p.updated-time {
text-overflow: ellipsis;
white-space: nowrap;
}
// security-txt css hacks
.security-txt ul {
list-style: none;
text-align: right;
margin: 0;
padding-inline-start: 0;
}
.security-txt p, pre {
text-align: left !important;
margin: 0;
}
.security-txt a {
white-space: nowrap;
}
.security-txt td {
white-space: unset;
}
.security-txt code {
white-space: pre-wrap;
display: block;
}
.security-txt-link-color-hack-reee {
color: white;
}


@@ -0,0 +1,109 @@
import React from "react";
import { Cluster } from "providers/cluster";
import { PublicKey, TransactionInstruction } from "@solana/web3.js";
import { BorshInstructionCoder, Program } from "@project-serum/anchor";
import { useAnchorProgram } from "providers/anchor";
import { programLabel } from "utils/tx";
import { ErrorBoundary } from "@sentry/react";
function snakeToPascal(string: string) {
return string
.split("/")
.map((snake) =>
snake
.split("_")
.map((substr) => substr.charAt(0).toUpperCase() + substr.slice(1))
.join("")
)
.join("/");
}
export function getProgramName(program: Program | null): string | undefined {
return program ? snakeToPascal(program.idl.name) : undefined;
}
export function capitalizeFirstLetter(input: string) {
return input.charAt(0).toUpperCase() + input.slice(1);
}
function AnchorProgramName({
programId,
url,
}: {
programId: PublicKey;
url: string;
}) {
const program = useAnchorProgram(programId.toString(), url);
if (!program) {
throw new Error("No anchor program name found for given programId");
}
const programName = getProgramName(program);
return <>{programName}</>;
}
export function ProgramName({
programId,
cluster,
url,
}: {
programId: PublicKey;
cluster: Cluster;
url: string;
}) {
const defaultProgramName =
programLabel(programId.toBase58(), cluster) || "Unknown Program";
return (
<React.Suspense fallback={defaultProgramName}>
<ErrorBoundary fallback={<>{defaultProgramName}</>}>
<AnchorProgramName programId={programId} url={url} />
</ErrorBoundary>
</React.Suspense>
);
}
export function getAnchorNameForInstruction(
ix: TransactionInstruction,
program: Program
): string | null {
const coder = new BorshInstructionCoder(program.idl);
const decodedIx = coder.decode(ix.data);
if (!decodedIx) {
return null;
}
var _ixTitle = decodedIx.name;
return _ixTitle.charAt(0).toUpperCase() + _ixTitle.slice(1);
}
export function getAnchorAccountsFromInstruction(
decodedIx: Object | null,
program: Program
):
| {
name: string;
isMut: boolean;
isSigner: boolean;
pda?: Object;
}[]
| null {
if (decodedIx) {
// get ix accounts
const idlInstructions = program.idl.instructions.filter(
// @ts-ignore
(ix) => ix.name === decodedIx.name
);
if (idlInstructions.length === 0) {
return null;
}
return idlInstructions[0].accounts as {
// type coercing since anchor doesn't export the underlying type
name: string;
isMut: boolean;
isSigner: boolean;
pda?: Object;
}[];
}
return null;
}


@@ -0,0 +1,100 @@
import { ProgramDataAccountInfo } from "validators/accounts/upgradeable-program";
export type SecurityTXT = {
name: string;
project_url: string;
contacts: string;
policy: string;
preferred_languages?: string;
source_code?: string;
encryption?: string;
auditors?: string;
acknowledgements?: string;
expiry?: string;
};
const REQUIRED_KEYS: (keyof SecurityTXT)[] = [
"name",
"project_url",
"contacts",
"policy",
];
const VALID_KEYS: (keyof SecurityTXT)[] = [
"name",
"project_url",
"contacts",
"policy",
"preferred_languages",
"source_code",
"encryption",
"auditors",
"acknowledgements",
"expiry",
];
const HEADER = "=======BEGIN SECURITY.TXT V1=======\0";
const FOOTER = "=======END SECURITY.TXT V1=======\0";
export const fromProgramData = (
programData: ProgramDataAccountInfo
): { securityTXT?: SecurityTXT; error?: string } => {
const [data, encoding] = programData.data;
if (!(data && encoding === "base64"))
return { securityTXT: undefined, error: "Failed to decode program data" };
const decoded = Buffer.from(data, encoding);
const headerIdx = decoded.indexOf(HEADER);
const footerIdx = decoded.indexOf(FOOTER);
if (headerIdx < 0 || footerIdx < 0) {
return { securityTXT: undefined, error: "Program has no security.txt" };
}
/*
the expected structure of content should be a list
of ASCII-encoded key/value pairs separated by null characters.
e.g. key1\0value1\0key2\0value2\0
*/
const content = decoded.subarray(headerIdx + HEADER.length, footerIdx);
const map = content
.reduce<number[][]>(
(prev, current) => {
if (current === 0) {
prev.push([]);
} else {
prev[prev.length - 1].push(current);
}
return prev;
},
[[]]
)
.map((c) => String.fromCharCode(...c))
.reduce<{ map: { [key: string]: string }; key: string | undefined }>(
(prev, current) => {
const key = prev.key;
if (!key) {
return {
map: prev.map,
key: current,
};
} else {
return {
map: {
...(VALID_KEYS.some((x) => x === key) ? { [key]: current } : {}),
...prev.map,
},
key: undefined,
};
}
},
{ map: {}, key: undefined }
).map;
if (!REQUIRED_KEYS.every((k) => k in map)) {
return {
securityTXT: undefined,
error: `some required fields (${REQUIRED_KEYS}) are missing`,
};
}
return { securityTXT: map as SecurityTXT, error: undefined };
};
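
To make the parsing above concrete: between the header and footer markers the program embeds alternating null-terminated keys and values, and the two reducers pair them back up into a map. A simplified standalone sketch of the same decoding, using a hypothetical payload and assuming no empty values:

// Hypothetical security.txt payload: key1\0value1\0key2\0value2\0...
const content = Buffer.from(
  "name\0Example Program\0project_url\0https://example.com\0",
  "ascii"
);

// Split on NUL bytes, drop the trailing empty piece, then pair up [key, value, key, value, ...].
const parts = content.toString("ascii").split("\0").filter((part) => part.length > 0);
const parsed: { [key: string]: string } = {};
for (let i = 0; i + 1 < parts.length; i += 2) {
  parsed[parts[i]] = parts[i + 1];
}
// parsed => { name: "Example Program", project_url: "https://example.com" }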


@@ -50,6 +50,8 @@ export enum PROGRAM_NAMES {
ACUMEN = "Acumen Program",
BONFIDA_POOL = "Bonfida Pool Program",
BREAK_SOLANA = "Break Solana Program",
CHAINLINK_ORACLE = "Chainlink OCR2 Oracle Program",
CHAINLINK_STORE = "Chainlink Store Program",
MANGO_GOVERNANCE = "Mango Governance Program",
MANGO_ICO = "Mango ICO Program",
MANGO_1 = "Mango Program v1",
@@ -60,6 +62,7 @@ export enum PROGRAM_NAMES {
METAPLEX = "Metaplex Program",
NFT_AUCTION = "NFT Auction Program",
NFT_CANDY_MACHINE = "NFT Candy Machine Program",
NFT_CANDY_MACHINE_V2 = "NFT Candy Machine Program V2",
ORCA_SWAP_1 = "Orca Swap Program v1",
ORCA_SWAP_2 = "Orca Swap Program v2",
ORCA_AQUAFARM = "Orca Aquafarm Program",
@@ -197,6 +200,14 @@ export const PROGRAM_INFO_BY_ID: { [address: string]: ProgramInfo } = {
name: PROGRAM_NAMES.BREAK_SOLANA,
deployments: LIVE_CLUSTERS,
},
cjg3oHmg9uuPsP8D6g29NWvhySJkdYdAo9D25PRbKXJ: {
name: PROGRAM_NAMES.CHAINLINK_ORACLE,
deployments: [Cluster.Devnet, Cluster.MainnetBeta],
},
HEvSKofvBgfaexv23kMabbYqxasxU3mQ4ibBMEmJWHny: {
name: PROGRAM_NAMES.CHAINLINK_STORE,
deployments: [Cluster.Devnet, Cluster.MainnetBeta],
},
GqTPL6qRf5aUuqscLh8Rg2HTxPUXfhhAXDptTLhp1t2J: {
name: PROGRAM_NAMES.MANGO_GOVERNANCE,
deployments: [Cluster.MainnetBeta],
@@ -237,6 +248,10 @@ export const PROGRAM_INFO_BY_ID: { [address: string]: ProgramInfo } = {
name: PROGRAM_NAMES.NFT_CANDY_MACHINE,
deployments: LIVE_CLUSTERS,
},
cndy3Z4yapfJBmL3ShUp5exZKqR3z33thTzeNMm2gRZ: {
name: PROGRAM_NAMES.NFT_CANDY_MACHINE_V2,
deployments: LIVE_CLUSTERS,
},
DjVE6JNiYqPL2QXyCUUh8rNjHrbz9hXHNYt99MQ59qw1: {
name: PROGRAM_NAMES.ORCA_SWAP_1,
deployments: [Cluster.MainnetBeta],


@@ -1,6 +1,6 @@
[package]
name = "solana-faucet"
version = "1.10.8"
version = "1.11.0"
description = "Solana Faucet"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -17,12 +17,12 @@ crossbeam-channel = "0.5"
log = "0.4.14"
serde = "1.0.136"
serde_derive = "1.0.103"
solana-clap-utils = { path = "../clap-utils", version = "=1.10.8" }
solana-cli-config = { path = "../cli-config", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-metrics = { path = "../metrics", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-metrics = { path = "../metrics", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }


@@ -1,6 +1,6 @@
[package]
name = "solana-frozen-abi"
version = "1.10.8"
version = "1.11.0"
description = "Solana Frozen ABI"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -18,7 +18,7 @@ serde = "1.0.136"
serde_derive = "1.0.103"
serde_bytes = "0.11"
sha2 = "0.10.2"
solana-frozen-abi-macro = { path = "macro", version = "=1.10.8" }
solana-frozen-abi-macro = { path = "macro", version = "=1.11.0" }
thiserror = "1.0"
[target.'cfg(not(target_arch = "bpf"))'.dependencies]
@@ -27,7 +27,7 @@ im = { version = "15.0.0", features = ["rayon", "serde"] }
memmap2 = "0.5.3"
[target.'cfg(not(target_arch = "bpf"))'.dev-dependencies]
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.11.0" }
[build-dependencies]
rustc_version = "0.4"


@@ -1,6 +1,6 @@
[package]
name = "solana-frozen-abi-macro"
version = "1.10.8"
version = "1.11.0"
description = "Solana Frozen ABI Macro"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"


@@ -1,6 +1,6 @@
[package]
name = "solana-genesis-utils"
version = "1.10.8"
version = "1.11.0"
description = "Solana Genesis Utils"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -10,9 +10,9 @@ documentation = "https://docs.rs/solana-download-utils"
edition = "2021"
[dependencies]
solana-download-utils = { path = "../download-utils", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-download-utils = { path = "../download-utils", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
[lib]
crate-type = ["lib"]


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-genesis"
description = "Blockchain, Rebuilt for Scale"
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -15,16 +15,16 @@ clap = "2.33.1"
serde = "1.0.136"
serde_json = "1.0.79"
serde_yaml = "0.8.23"
solana-clap-utils = { path = "../clap-utils", version = "=1.10.8" }
solana-cli-config = { path = "../cli-config", version = "=1.10.8" }
solana-entry = { path = "../entry", version = "=1.10.8" }
solana-ledger = { path = "../ledger", version = "=1.10.8" }
solana-logger = { path = "../logger", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-stake-program = { path = "../programs/stake", version = "=1.10.8" }
solana-version = { path = "../version", version = "=1.10.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.11.0" }
solana-cli-config = { path = "../cli-config", version = "=1.11.0" }
solana-entry = { path = "../entry", version = "=1.11.0" }
solana-ledger = { path = "../ledger", version = "=1.11.0" }
solana-logger = { path = "../logger", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-stake-program = { path = "../programs/stake", version = "=1.11.0" }
solana-version = { path = "../version", version = "=1.11.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.11.0" }
tempfile = "3.3.0"
[[bin]]


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-geyser-plugin-interface"
description = "The Solana Geyser plugin interface."
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -11,8 +11,8 @@ documentation = "https://docs.rs/solana-geyser-plugin-interface"
[dependencies]
log = "0.4.11"
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
thiserror = "1.0.30"
[package.metadata.docs.rs]


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-geyser-plugin-manager"
description = "The Solana Geyser plugin manager."
version = "1.10.8"
version = "1.11.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -16,13 +16,13 @@ json5 = "0.4.1"
libloading = "0.7.3"
log = "0.4.11"
serde_json = "1.0.79"
solana-geyser-plugin-interface = { path = "../geyser-plugin-interface", version = "=1.10.8" }
solana-measure = { path = "../measure", version = "=1.10.8" }
solana-metrics = { path = "../metrics", version = "=1.10.8" }
solana-rpc = { path = "../rpc", version = "=1.10.8" }
solana-runtime = { path = "../runtime", version = "=1.10.8" }
solana-sdk = { path = "../sdk", version = "=1.10.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.8" }
solana-geyser-plugin-interface = { path = "../geyser-plugin-interface", version = "=1.11.0" }
solana-measure = { path = "../measure", version = "=1.11.0" }
solana-metrics = { path = "../metrics", version = "=1.11.0" }
solana-rpc = { path = "../rpc", version = "=1.11.0" }
solana-runtime = { path = "../runtime", version = "=1.11.0" }
solana-sdk = { path = "../sdk", version = "=1.11.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.11.0" }
thiserror = "1.0.30"
[package.metadata.docs.rs]

Some files were not shown because too many files have changed in this diff.