Compare commits

...

710 Commits

Author SHA1 Message Date
ffb9efa89d chore: bump shelljs from 0.8.4 to 0.8.5 in /docs
Bumps [shelljs](https://github.com/shelljs/shelljs) from 0.8.4 to 0.8.5.
- [Release notes](https://github.com/shelljs/shelljs/releases)
- [Changelog](https://github.com/shelljs/shelljs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/shelljs/shelljs/compare/v0.8.4...v0.8.5)

---
updated-dependencies:
- dependency-name: shelljs
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-27 07:25:38 +00:00
aa49802119 updated explorer_preview.yml to get vercel preview deployment link on slack 2022-01-27 12:54:32 +05:30
bee586c6fd fix: Adjust mem op base cost to avoid breaking Raydium AMM Program (#22762) 2022-01-27 02:01:57 +00:00
85e8bece2e Add solana_client::nonblocking::RpcClient 2022-01-26 17:58:00 -08:00
db481e1799 chore: bump serde_json from 1.0.75 to 1.0.78 (#22748)
* chore: bump serde_json from 1.0.75 to 1.0.78

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.75 to 1.0.78.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.75...v1.0.78)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-26 18:01:02 -07:00
115b488807 Improve poh recorder metrics (#22730)
* Improve poh recorder metrics

* Add metric for poh service send record

* feedback

* clean up
2022-01-27 08:44:41 +08:00
3993cd765c Remove rewrite_rent_exempt_reserve() (#22741) 2022-01-26 18:42:44 -05:00
d9c259a231 Set the correct root in block commitment cache initialization (#22750)
* Set the correct root in block commitment cache initialization

* clean up test

* bump
2022-01-27 00:48:00 +08:00
071e97053f Perf: Reduce write locks on blockhash queue (#22729)
* Perf: Reduce write locks on blockhash queue

* Add comment about thread safety

* Add comment about write starvation
2022-01-26 16:24:27 +08:00
aea8f0df16 chore: bump anyhow from 1.0.52 to 1.0.53 (#22743)
* chore: bump anyhow from 1.0.52 to 1.0.53

Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.52 to 1.0.53.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.52...1.0.53)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-25 21:10:34 -07:00
66b44b48a4 chore: remove time dep (#22665)
* chore: bump time from 0.3.5 to 0.3.6

Bumps [time](https://github.com/time-rs/time) from 0.3.5 to 0.3.6.
- [Release notes](https://github.com/time-rs/time/releases)
- [Changelog](https://github.com/time-rs/time/blob/main/CHANGELOG.md)
- [Commits](https://github.com/time-rs/time/compare/v0.3.5...v0.3.6)

---
updated-dependencies:
- dependency-name: time
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Remove separate time dependency

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-01-25 16:30:21 -07:00
8b1cde83c1 Update vote-signing.md to remove references to anachronistic behavior 2022-01-25 15:27:58 -08:00
1cf6c97779 Add checks to vote state updates to handle updates outside of SlotHash history (#22358) 2022-01-25 13:47:31 -05:00
1192e760a4 sdk: Maybe fix build for the future with import (#22731) 2022-01-25 19:29:42 +01:00
8b90084ebc fix comment typos (#22723) 2022-01-25 11:10:42 -06:00
f366e0f890 Export BanksClientError (#22715) 2022-01-25 09:33:46 -07:00
52045c761c chore: remove remaining unused Xargo.toml files 2022-01-24 15:35:54 -08:00
fc21af4e6e spl-associated-token-account: Add feature for new program (#22648)
* spl-associated-token-account: Add feature for new program

* Address feedback
2022-01-24 22:54:41 +01:00
0562426661 add 'ticks-per-slot' to 'solana-test-validator' (#22701)
* add 'ticks-per-slot' to 'solana-test-validator'

* add input parser validator for "ticks-per-slot" argument

* fix fmt
2022-01-24 20:56:37 +00:00
1c10677f82 Bump thread_local (#22711) 2022-01-24 11:08:30 -07:00
2e56c59bcb Handle already discarded packets in gpu sigverify path (#22680) 2022-01-24 14:35:47 +01:00
7569f282c6 Move discard check before generate offsets (#22684) 2022-01-24 12:59:47 +00:00
9666f4a8be Update expected instruction count in test (#22702) 2022-01-24 12:12:29 +01:00
fd0f5e4d12 Fix typos
Fix typos
2022-01-23 18:39:46 -08:00
714a344937 fix: flag was incorrect in doc 2022-01-23 12:59:15 -08:00
2b111cd631 fix typo in docs (#22690) 2022-01-23 08:46:19 -07:00
1240217a73 Refactor: Rename variables and helper method to PohRecorder (#22676)
* Refactor: Rename leader_first_tick_height field

* Refactor: add `PohRecorder::slot_for_tick_height` helper

* Refactor: Add type for poh leader status
2022-01-23 10:28:50 +08:00
31ed4c18f9 Accountsdb: support config in Json5 (#22605)
* accountsdb: support config in json5

* update docs

* remove not required dependencies

Co-authored-by: Lijun Wang <83639177+lijunwangs@users.noreply.github.com>
2022-01-22 18:00:06 -08:00
92e43df266 restore explorer's rpc endpoint 2022-01-22 16:18:41 -05:00
dec21524e1 replace mainnet-beta endpoint 2022-01-22 16:10:23 -05:00
8cb0b1ce9a reverse mainnet-beta endpoint changes 2022-01-22 15:56:45 -05:00
b0486f0aaa reverse rpc-endpoint changes 2022-01-22 15:55:31 -05:00
ba5faa2582 update explorer's mainnet-beta endpoint 2022-01-22 15:38:14 -05:00
e42316ba5a change mainnet-beta endpoint 2022-01-22 15:36:00 -05:00
a300e2d2dc docs: fix broken link for "transaction-id" (#22682) 2022-01-22 19:34:28 +00:00
1ee6707bdd undoing the changes for the traceability (#22642) 2022-01-22 23:27:13 +11:00
091929da12 Update to Rust 1.58.1 2022-01-21 21:38:06 -08:00
90689585ef Update ping to transfer to self, with rotating amount (#22657)
* Update ping to transfer to self, with rotating amount

* Remove balance check
2022-01-21 22:10:50 -07:00
477cb539d0 Explorer: Sort block program ids in filter dropdown (#22674) 2022-01-22 05:09:58 +00:00
88bf3c7063 Add timestamp zone toggle to Explorer (#22616)
* Add timestamp zone toggle to Explorer

* style: code formatting
2022-01-22 12:41:41 +08:00
d6011ba14d Dedup bloom filter is too slow (#22607)
* Faster dedup

* use ahash

* fixup

* single threaded

* use duration type

* remove the count

* fixup
2022-01-21 20:23:48 -07:00
6d5bbca630 Pacify clippy 2022-01-21 19:12:57 -08:00
ce4f7601af Avoid unstable_name_collisions warning 2022-01-21 19:12:57 -08:00
d8cbb2a952 Elgamal pass (#22632)
* zk-token-sdk: change G and H to static and optimize pedersen arithmetic

* zk-token-sdk: remove unnecessary copy in elgamal arithmetic

* zk-token-sdk: fix elgamal tests for new syntax

* zk-token-sdk: use lazy-static for pedersen base

* zk-token-sdk: add dlog test for elgamal decryption

* zk-token-sdk: reflect changes in elgamal in the rest of the sdk

* zk-token-sdk: rustfmt and clippy

* zk-token-sdk: some documentation for elgamal and pedersen

* zk-token-sdk: minor remove whitespace

* zk-token-sdk: update lock files

* zk-token-sdk: change random() to new_rand()

* zk-token-sdk: add explanation for suppressing clippy::op_ref
2022-01-21 20:56:27 -05:00
8dd62854fa Document transaction module (#22440)
* Document transaction module

* example_mocks is only for feature = full
2022-01-21 18:30:12 -07:00
a7f2fff219 Remove active features stake program v3 and v4 (#22612)
Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2022-01-21 19:27:53 -06:00
0cf886302d chore: bump serde from 1.0.133 to 1.0.134 (#22650)
* chore: bump serde from 1.0.133 to 1.0.134

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.133 to 1.0.134.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.133...v1.0.134)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-21 17:14:02 -07:00
fc8130955a Impl get_/set_return_data syscalls for ProgramTest (#22647)
* Remove &mut self from set_return_data

* Impl get_/set_return_data for program-test SyscallStubs

* Add return_data program-test
2022-01-21 17:05:48 -07:00
4d00995d2d shellcheck 2022-01-21 14:06:53 -08:00
ae95540387 chore: add test timeouts 2022-01-21 13:16:42 -08:00
0e587488b5 Use web3.js Travis scripts more to avoid Github Actions/Travis mismatches 2022-01-21 13:16:42 -08:00
aca1d577a4 chore: differentiate between Travis and Github Actions before install 2022-01-21 13:16:42 -08:00
7803e1be3d chore: remove reference to nonexistent script 2022-01-21 12:12:00 -08:00
d137acf74d chore: bump libloading from 0.7.2 to 0.7.3 (#22540)
* chore: bump libloading from 0.7.2 to 0.7.3

Bumps [libloading](https://github.com/nagisa/rust_libloading) from 0.7.2 to 0.7.3.
- [Release notes](https://github.com/nagisa/rust_libloading/releases)
- [Commits](https://github.com/nagisa/rust_libloading/commits/0.7.3)

---
updated-dependencies:
- dependency-name: libloading
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-21 20:00:16 +00:00
7be533a770 Add zeroed default for ElGamalCiphertext (#22639) 2022-01-21 19:52:36 +00:00
95bbb70c91 chore: bump serde_json from 1.0.74 to 1.0.75 (#22541)
* chore: bump serde_json from 1.0.74 to 1.0.75

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.74 to 1.0.75.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.74...v1.0.75)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-21 12:26:51 -07:00
252b0c42d5 undoing the previous commit
It was unnecessary.
2022-01-22 00:22:27 +05:30
e3915e4b7a add comment (#22636) 2022-01-21 10:51:58 -06:00
4abf7a7f88 add ActiveStats to track when long running, expensive bg processes are running (#22608) 2022-01-21 10:51:23 -06:00
38b02bbcc0 Handle already discarded packets in discard_excess_packets (#22594) 2022-01-21 17:22:50 +01:00
9977396d8f Remove unused fields from Bank (#22491) 2022-01-21 06:03:41 -06:00
7d34a7acac Perf: Only check executors cache for executable bpf program ids (#22624)
* Only check executors cache for executable bpf program ids

* switch to native loader check

* clean up tests

* fix tests

* clippy
2022-01-21 18:32:30 +08:00
373f200ab8 Update introduction.md (#22623)
A few fixes for grammatical and spelling issues.
2022-01-20 23:31:52 -07:00
0eb488580d minor cleanup (#22610) 2022-01-20 18:11:00 -06:00
41fb98c771 zk-token_sdk: define defaults for pod ElGamal/AES ciphertexts (#22532) 2022-01-20 16:27:31 -05:00
a2d251ce1e Speed up packet dedup and fix benches (#22592)
* Speed up packet dedup and fix benches

* fix tests

* allow int arithmetic in bench
2022-01-20 13:59:16 -07:00
e7777281d6 regularly report network limits (#22563) 2022-01-20 12:38:42 -08:00
cca3dbc76d system-monitor-service: support percentages from bigger numbers 2022-01-20 09:51:23 +00:00
7ba57e7a7c Refactor: move instructions sysvar serialization out of Message (#22544) 2022-01-20 17:33:49 +08:00
f8db314134 (Ledger Store Benchmark) Display storage size of all data shreds (#22445)
* (Ledger Store) APIs for obtaining physical size of all data and coding shreds

* (Ledger Store Benchmark) Display total data shred storage size.
2022-01-19 19:33:08 -08:00
fe7543c31a (Ledger Store) APIs for obtaining physical size of all data and coding shreds (#22443) 2022-01-19 19:31:19 -08:00
7f20c6149e Refactor: move simple vote parsing to runtime (#22537) 2022-01-20 10:39:21 +08:00
d343713f61 Optimize packet dedup (#22571)
* Use bloom filter to dedup packets

* dedup first

* Update bloom/src/bloom.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/sigverify_stage.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/sigverify_stage.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/sigverify_stage.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* fixup

* fixup

* fixup

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-01-19 13:58:20 -08:00
b448472037 Refactor: Move InstructionRecorder into TransactionContext (#22578)
* Moves InstructionRecorder into TransactionContext.

* Adds assertions for number_of_instructions_at_transaction_level.
2022-01-19 22:40:09 +01:00
60850d71ce Removed solana-accountsdb-plugin-postgres from the monorepo as it has its own (#22567)
Removed solana-accountsdb-plugin-postgres from the monorepo as it has its own standalone repo now
2022-01-19 10:46:19 -08:00
dcf44d2523 improves sigverify discard_excess_packets performance (#22577)
As shown by the added benchmark, the current code does worse if there is a
spam address plus a lot of unique addresses.

on current master:
test bench_packet_discard_many_senders  ... bench:   1,997,960 ns/iter (+/- 103,715)
test bench_packet_discard_mixed_senders ... bench:  14,256,116 ns/iter (+/- 534,865)
test bench_packet_discard_single_sender ... bench:   1,306,809 ns/iter (+/- 61,992)

with this commit:
test bench_packet_discard_many_senders  ... bench:   1,644,025 ns/iter (+/- 83,715)
test bench_packet_discard_mixed_senders ... bench:   1,089,789 ns/iter (+/- 86,324)
test bench_packet_discard_single_sender ... bench:     955,234 ns/iter (+/- 55,953)
2022-01-19 18:10:02 +00:00
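For context on the commit above, a hedged sketch of what a per-sender discard pass can look like (the real discard_excess_packets in core/src/sigverify_stage.rs differs in detail; the Packet struct and max_per_sender cap here are illustrative assumptions): keep at most a fixed number of packets per sender so one spamming address cannot crowd out everyone else.

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// Illustrative packet type: only the fields this sketch needs.
struct Packet {
    sender: IpAddr,
    discard: bool,
}

fn discard_excess_packets(packets: &mut [Packet], max_per_sender: usize) {
    let mut kept: HashMap<IpAddr, usize> = HashMap::new();
    for p in packets.iter_mut() {
        if p.discard {
            continue; // already discarded upstream, don't count it
        }
        let n = kept.entry(p.sender).or_insert(0);
        if *n >= max_per_sender {
            p.discard = true; // over this sender's budget
        } else {
            *n += 1;
        }
    }
}
```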
650882217c Add PacketBatch packet_indexes stat (#22564)
* collect stats on packet batch indices

* cleanup

* cleanup

* cleanup

* change name
2022-01-19 08:13:07 +00:00
e616a7ebfc Track discard time of excess packets in sigverify (#22554)
* discard time histogram

* closer to the if

* update
2022-01-18 15:09:39 -07:00
8dc6f9f589 Remove unused mut 2022-01-18 12:10:31 -08:00
49443406fd Use VecDeque instead of Vec in sigverify stage (#22538)
avoid bad performance of remove(0) for a single sender
2022-01-17 18:37:05 +01:00
2d94e6e5d3 metrics for generate new bank forks (#22492)
* metrics for generate new bank forks

* fixed

* Apply suggestions from code review

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* --fixup

* fixup!

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-01-17 09:59:47 -07:00
cc76a73c49 Refactor: move compute budget runtime logic into solana-program-runtime (#22543) 2022-01-17 20:48:00 +08:00
5e5cdf397c fixing CI 2022-01-17 16:12:24 +05:30
2c38a9213f Revert "Refactor: move compute budget runtime logic into solana-program-runtime (#22533)" (#22542)
This reverts commit b27976626a.
2022-01-17 17:43:17 +08:00
901b2881fb Add more details about vote account key rotation 2022-01-17 00:51:13 -08:00
b27976626a Refactor: move compute budget runtime logic into solana-program-runtime (#22533) 2022-01-17 15:52:33 +08:00
c17e54e3f6 Add download bytecode button to explorer (#22459) 2022-01-16 12:46:02 +00:00
65f1e0fcc2 vote account withdraw authority may change the authorized voter 2022-01-15 21:47:08 -08:00
3bd5a89d6f add scripts for traceability 2022-01-15 23:28:15 +05:30
1a855aa6f7 add script to check the validity of file content 2022-01-15 22:47:15 +05:30
eb461a5442 add scripts for traceability 2022-01-15 22:36:13 +05:30
6edeed888d Remove unnecessary & from AsRef params (#22523) 2022-01-15 04:51:05 +00:00
f34ade7610 chore: bump assert_cmd from 2.0.2 to 2.0.4 (#22490)
Bumps [assert_cmd](https://github.com/assert-rs/assert_cmd) from 2.0.2 to 2.0.4.
- [Release notes](https://github.com/assert-rs/assert_cmd/releases)
- [Changelog](https://github.com/assert-rs/assert_cmd/blob/master/CHANGELOG.md)
- [Commits](https://github.com/assert-rs/assert_cmd/compare/v2.0.2...v2.0.4)

---
updated-dependencies:
- dependency-name: assert_cmd
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-14 21:44:51 -07:00
7171b3a3ac Bugfix/block subscribe (#22516)
* use correct operation name

* require enable_rpc_transaction_history flag when enabling block_subscription

Co-authored-by: Zano <segfaultdoctor@protonmail.com>
2022-01-14 19:53:34 -07:00
70633b5c2f Add Bank::accounts_data_len to SetRootMetrics (#22509) 2022-01-14 20:00:07 -06:00
a724fa2347 Add hidden cli option to allow the validator to report replayed transaction cost metrics (#22369)
* add hidden cli option to allow the validator to report replayed transaction cost detail metrics

* Update validator/src/main.rs

Co-authored-by: Michael Vines <mvines@gmail.com>

* - rebase master, using unbounded instead of channel; downgrade to datapoint_trace

* removed cli arg, prefer log at trace

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-15 00:31:21 +00:00
2aa113fd8c Update syscall base costs 2022-01-14 16:15:14 -08:00
1309a9cea0 Add estimated and actual block cost units metrics (#22326)
* - report cost details for transactions selected to be packed into block;
- report estimated execution units packed into block, and actual units and time after execution

* revert reporting per-transaction details

* rollup transaction cost details (e.g. signature cost, write lock, data cost and execution costs) into block stats

* change naming from units to cu, use struct to replace tuple
2022-01-14 23:44:18 +00:00
e14ae33e86 Improve nonce-related error messages 2022-01-14 12:59:01 -08:00
871fd291f3 update vote bench after #22502 (#22510) 2022-01-14 18:58:10 +00:00
3b9654771d change metrics to internal-metrics 2022-01-15 00:08:45 +05:30
7d7228c356 cleanup assert (#22495) 2022-01-14 11:53:24 -06:00
56ac26f90e Fix build: s/vote_instruction/vote_processor/ (#22511) 2022-01-14 11:35:14 -06:00
9c9f2dd5bd port counting vote CUs to block cost (#22477) 2022-01-14 10:50:29 -06:00
cddab635ff Cleanup activated rent_for_sysvars feature (#22454) 2022-01-14 20:34:09 +08:00
ae6c511f13 Refactor: Split vote_instruction.rs into multiple files (#22502) 2022-01-14 17:25:15 +08:00
93a7b94507 Add benchmark for vote processing (#22486) 2022-01-14 17:10:17 +08:00
f12a8fcd73 docs: fix get fee for message docs (#22501) 2022-01-14 01:34:05 -07:00
f804ccdece Store address table lookups in blockstore and bigtable (#22402) 2022-01-14 15:24:41 +08:00
4c577d7f8c Bank::get_fee_for_message is now nonce aware 2022-01-13 17:27:38 -08:00
4ab7d6c23e Filter out outdated slots (#22450)
* Filter out outdated slots

* Fixup error
2022-01-13 19:51:00 -05:00
eca8d21249 log internals (#22493) 2022-01-13 19:50:46 -05:00
d3b95b9226 Cleanup AccountStorage struct (#22463)
* Revert "chore: bump dashmap from 4.0.2 to 5.0.0 (#21824)"

This reverts commit 8aa3d690b5.

* Cleanup AccountStorage struct
2022-01-13 17:51:53 -06:00
7711cd74c3 partition_from_pubkey (#22430)
* Revert "chore: bump dashmap from 4.0.2 to 5.0.0 (#21824)"

This reverts commit 8aa3d690b5.

* partition_from_pubkey
2022-01-13 17:02:42 -06:00
66a97bdde0 Apply suggestions from code review
Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2022-01-13 14:12:54 -07:00
b635073829 Add hidapi feature in remote-wallet 2022-01-13 14:12:54 -07:00
e291342c4a Revert "chore: bump dashmap from 4.0.2 to 5.0.0 (#21824)" (#22488)
This reverts commit 8aa3d690b5.
2022-01-13 13:06:39 -06:00
b82d71d22a Inline DEFAULT_SNAPSHOT_VERSION constant string (#22487) 2022-01-13 18:19:15 +00:00
1632ee03da nit: Traceable balance checks (#22462) 2022-01-13 09:09:22 -08:00
2756abce39 More serde snapshot cleanup (#22449) 2022-01-13 09:20:20 -06:00
9c3144e286 Refactor serde snapshot's "future" to "newer" (#22431) 2022-01-13 07:12:09 -06:00
ba046bd4dd chore:(deps): bump @types/jest from 27.0.3 to 27.4.0 in /explorer (#22483)
Bumps [@types/jest](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/jest) from 27.0.3 to 27.4.0.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/jest)

---
updated-dependencies:
- dependency-name: "@types/jest"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-13 09:06:32 +00:00
2370e61431 Perf: Store deserialized sysvars in the sysvars cache (#22455)
* Perf: Store deserialized sysvars in sysvars cache

* add bench
2022-01-13 05:36:21 +00:00
f2908ed475 ci: Patch solana-account-decoder for downstream builds (#22472) 2022-01-13 02:02:36 +01:00
6614727be8 downgrade individual per-program-timing to trace to reduce writes to influx 2022-01-12 18:52:13 -06:00
b211f839cb Fetch sysvars from invoke context for vote program (#22444) 2022-01-13 08:41:48 +08:00
eaae2f3538 Use modular_bitfield to bitpack IndexEntry (#22447) 2022-01-12 14:37:34 -06:00
7171c95bdd Refactor: move sysvar cache to new module 2022-01-12 12:35:28 -07:00
b27333e52d Update docs vis-a-vis prohibition of RentPaying accounts (#22438)
* Rent-exempt docs for exchange integrations

* Remove discussion of rent-paying accounts from developing docs

* Improve verbiage
2022-01-12 19:32:19 +00:00
4854fadc44 chore:(deps): bump sass from 1.45.0 to 1.47.0 in /explorer (#22453)
Bumps [sass](https://github.com/sass/dart-sass) from 1.45.0 to 1.47.0.
- [Release notes](https://github.com/sass/dart-sass/releases)
- [Changelog](https://github.com/sass/dart-sass/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sass/dart-sass/compare/1.45.0...1.47.0)

---
updated-dependencies:
- dependency-name: sass
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-12 09:21:29 +00:00
3e8ee27d9d chore:(deps): bump @metaplex/js from 4.9.0 to 4.11.3 in /explorer (#22452)
Bumps [@metaplex/js](https://github.com/metaplex/js) from 4.9.0 to 4.11.3.
- [Release notes](https://github.com/metaplex/js/releases)
- [Changelog](https://github.com/metaplex-foundation/js/blob/main/CHANGELOG.md)
- [Commits](https://github.com/metaplex/js/compare/v4.9.0...v4.11.3)

---
updated-dependencies:
- dependency-name: "@metaplex/js"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-12 09:14:32 +00:00
8161cee70f Remove unnecessary var in banking_stage bench (#22408) 2022-01-11 22:25:21 -06:00
de2a7fdc6c update influx host 2022-01-11 23:23:15 -05:00
3ca16de851 Only examine explicit tx accounts for rent state (#22441)
* Add failing test

* Fix: only examine accounts explicitly included in a tx
2022-01-11 20:23:51 -07:00
c45dde6164 Handle accounts data size changes due to rent-collected accounts (#22412) 2022-01-11 17:20:28 -06:00
35a5dd9c45 Refactor: consolidate memo extraction for each message version (#22422) 2022-01-12 06:32:44 +08:00
157f165a3d Bump memmap2 from 0.5.1 to 0.5.2 (#22414)
* Bump memmap2 from 0.5.1 to 0.5.2

Bumps [memmap2](https://github.com/RazrFalcon/memmap2-rs) from 0.5.1 to 0.5.2.
- [Release notes](https://github.com/RazrFalcon/memmap2-rs/releases)
- [Changelog](https://github.com/RazrFalcon/memmap2-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/RazrFalcon/memmap2-rs/compare/v0.5.1...v0.5.2)

---
updated-dependencies:
- dependency-name: memmap2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-11 15:08:13 -07:00
637e366b18 Prevent rent-paying account creation (#22292)
* Fixup typo

* Add new feature

* Add new TransactionError

* Add framework for checking account state before and after transaction processing

* Fail transactions that leave new rent-paying accounts

* Only check rent-state of writable tx accounts

* Review comments: combine process_result success behavior; log and metrics before feature activation

* Fix tests that assume rent-exempt accounts are okay

* Remove test no longer relevant

* Remove native/sysvar special case

* Move metrics submission to report legacy->legacy rent paying transitions as well
2022-01-11 11:32:25 -07:00
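The bullet list above describes checking each writable account's rent state before and after transaction processing and failing transactions that leave new rent-paying accounts. A minimal sketch of that check, with illustrative types rather than the runtime's exact API:

```rust
// Rent state of an account, derived from its balance (sketch).
enum RentState {
    Uninitialized, // zero lamports
    RentPaying,    // non-zero balance below the rent-exempt minimum
    RentExempt,
}

fn rent_state(lamports: u64, rent_exempt_minimum: u64) -> RentState {
    if lamports == 0 {
        RentState::Uninitialized
    } else if lamports < rent_exempt_minimum {
        RentState::RentPaying
    } else {
        RentState::RentExempt
    }
}

fn transition_allowed(pre: &RentState, post: &RentState) -> bool {
    match post {
        // A writable account may not *become* rent-paying in a transaction;
        // one that was already rent-paying may remain so.
        RentState::RentPaying => matches!(pre, RentState::RentPaying),
        _ => true,
    }
}
```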
a49ef49f87 optimizes ReadOnlyAccountsCache LRU eviction implementation (#22403)
ReadOnlyAccountsCache is using a background thread, table scan and sort
to implement LRU eviction policy:
https://github.com/solana-labs/solana/blob/eaa52bc93/runtime/src/read_only_accounts_cache.rs#L66-L73
https://github.com/solana-labs/solana/blob/eaa52bc93/runtime/src/read_only_accounts_cache.rs#L186-L191
https://github.com/solana-labs/solana/blob/eaa52bc93/runtime/src/read_only_accounts_cache.rs#L222

DashMap internally locks each shard when accessed; so a table scan in
the background thread can create a lot of lock contention.

This commit adds an index-list queue containing cached keys in the order
that they are accessed. Each hash-map entry also includes its index into
this queue.
When an item is first entered into the cache, it is added to the end of
the queue. Also each time an entry is looked up from the cache it is
moved to the end of the queue. As a result, items in the queue are always
sorted in the order that they have last been accessed. When doing LRU
eviction, cache entries are evicted from the front of the queue.
Using index-list, all queue operations above are O(1) with low overhead
and so the above achieves an efficient implementation of LRU cache eviction
policy.
2022-01-11 17:25:28 +00:00
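A simplified sketch of the access-ordered eviction described above. The commit uses the index-list crate for O(1) queue updates; this sketch substitutes a monotonic counter plus a BTreeMap (O(log n) per operation) to keep it short, but the eviction order is the same: touched entries move to the back, eviction pops the front.

```rust
use std::collections::{BTreeMap, HashMap};
use std::hash::Hash;

struct LruCache<K: Clone + Eq + Hash, V> {
    map: HashMap<K, (V, u64)>, // value plus its position in the access queue
    queue: BTreeMap<u64, K>,   // access order: smallest key = least recently used
    counter: u64,
    capacity: usize,
}

impl<K: Clone + Eq + Hash, V> LruCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { map: HashMap::new(), queue: BTreeMap::new(), counter: 0, capacity }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        // Every lookup moves the entry to the back of the queue.
        if let Some((_, pos)) = self.map.get_mut(key) {
            self.queue.remove(&*pos);
            self.counter += 1;
            *pos = self.counter;
            self.queue.insert(self.counter, key.clone());
        }
        self.map.get(key).map(|(v, _)| v)
    }

    fn put(&mut self, key: K, value: V) {
        self.counter += 1;
        if let Some((_, old_pos)) = self.map.insert(key.clone(), (value, self.counter)) {
            self.queue.remove(&old_pos);
        }
        self.queue.insert(self.counter, key);
        // Evict from the front of the queue (least recently used) when full.
        while self.map.len() > self.capacity {
            let oldest = match self.queue.keys().next().copied() {
                Some(k) => k,
                None => break,
            };
            if let Some(evicted_key) = self.queue.remove(&oldest) {
                self.map.remove(&evicted_key);
            }
        }
    }
}
```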
8b66625c95 convert std::sync::mpsc to crossbeam_channel (#22264) 2022-01-11 02:44:46 -08:00
3c44d405c7 feat: add Connection.getFeeForMessage (#22128)
* web3.js: add Connection.getFeeForMessage

* throw if value is null

* fix null value

* fix types
2022-01-11 17:49:28 +08:00
d064c40617 update metrics url 2022-01-10 21:36:44 -05:00
4214c69694 update metrics url 2022-01-10 21:35:37 -05:00
ec364cc737 ci: Add Anchor and Anchor projects to the downstream build (#22098)
* ci: Add Anchor and Anchor projects to the downstream build

* Separate downstream anchor projects into separate step

* Decrease anchor project build time
2022-01-11 00:21:53 +01:00
49da347d84 limits gossip vote stats to the top most voted slots (#22416) 2022-01-10 21:23:41 +00:00
2cc6f863bf Bump etcd-client from 0.8.2 to 0.8.3 (#22415)
Bumps [etcd-client](https://github.com/etcdv3/etcd-client) from 0.8.2 to 0.8.3.
- [Release notes](https://github.com/etcdv3/etcd-client/releases)
- [Commits](https://github.com/etcdv3/etcd-client/compare/0.8.2...0.8.3)

---
updated-dependencies:
- dependency-name: etcd-client
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-10 13:25:15 -07:00
0869f644fc Bump merlin from 2.0.1 to 3.0.0 (#22331)
* Bump merlin from 2.0.1 to 3.0.0

Bumps [merlin](https://github.com/zkcrypto/merlin) from 2.0.1 to 3.0.0.
- [Release notes](https://github.com/zkcrypto/merlin/releases)
- [Changelog](https://github.com/zkcrypto/merlin/blob/main/CHANGELOG.md)
- [Commits](https://github.com/zkcrypto/merlin/compare/2.0.1...3.0.0)

---
updated-dependencies:
- dependency-name: merlin
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-10 11:18:15 -07:00
04a2c19c87 Bump memmap2 from 0.5.0 to 0.5.1 (#22409)
* Bump memmap2 from 0.5.0 to 0.5.1

Bumps [memmap2](https://github.com/RazrFalcon/memmap2-rs) from 0.5.0 to 0.5.1.
- [Release notes](https://github.com/RazrFalcon/memmap2-rs/releases)
- [Changelog](https://github.com/RazrFalcon/memmap2-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/RazrFalcon/memmap2-rs/compare/v0.5.0...v0.5.1)

---
updated-dependencies:
- dependency-name: memmap2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-10 11:14:56 -07:00
5f1f4dcbdd Use struct to pass all Tpu sockets as one argument to Tpu::new() (#21965)
Tpu::new() now matches Tvu::new() in having a struct to reduce the argument
list. Additionally, Rust supports partial moves, so there is no need to
clone the Tvu sockets out of Node object.
2022-01-10 11:29:48 -06:00
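A minimal illustration of the partial-move point made above; NodeSockets and its fields are hypothetical stand-ins for the validator's Node struct.

```rust
use std::net::UdpSocket;

struct NodeSockets {
    tpu: Vec<UdpSocket>,
    tvu: Vec<UdpSocket>,
}

fn example(node: NodeSockets) {
    // Move each field out of `node` independently (a "partial move"); neither
    // bundle of sockets needs to be cloned to hand it to a different consumer.
    let tpu_sockets = node.tpu;
    let tvu_sockets = node.tvu;
    // `node` as a whole can no longer be used, but that is fine: each stage
    // received exactly the sockets it owns.
    drop((tpu_sockets, tvu_sockets));
}
```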
aadf4b9b63 Moves InvokeContext::return_data to TransactionContext. (#22411) 2022-01-10 18:26:51 +01:00
9bc2592da1 Use lazy_rent_collection directly (#22410) 2022-01-10 10:47:08 -06:00
eeec1ce2ad Add local cluster test to repro slot hash expiry bug (#21873) 2022-01-10 00:58:21 -05:00
39327ec753 Bump crossbeam-channel from 0.5.1 to 0.5.2 (#22405)
* Bump crossbeam-channel from 0.5.1 to 0.5.2

Bumps [crossbeam-channel](https://github.com/crossbeam-rs/crossbeam) from 0.5.1 to 0.5.2.
- [Release notes](https://github.com/crossbeam-rs/crossbeam/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam/compare/crossbeam-channel-0.5.1...crossbeam-channel-0.5.2)

---
updated-dependencies:
- dependency-name: crossbeam-channel
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-09 22:32:29 -07:00
28275a33d6 Bump tempfile from 3.2.0 to 3.3.0 (#22401)
* Bump tempfile from 3.2.0 to 3.3.0

Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.2.0 to 3.3.0.
- [Release notes](https://github.com/Stebalien/tempfile/releases)
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/NEWS)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.2.0...v3.3.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-09 12:26:08 -07:00
dd6b4ac221 Bump indexmap from 1.7.0 to 1.8.0 (#22389)
* Bump indexmap from 1.7.0 to 1.8.0

Bumps [indexmap](https://github.com/bluss/indexmap) from 1.7.0 to 1.8.0.
- [Release notes](https://github.com/bluss/indexmap/releases)
- [Changelog](https://github.com/bluss/indexmap/blob/master/RELEASES.rst)
- [Commits](https://github.com/bluss/indexmap/compare/1.7.0...1.8.0)

---
updated-dependencies:
- dependency-name: indexmap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-09 16:09:36 +00:00
428575f9ae wrap create executor timings datapoint in a module 2022-01-09 05:01:17 +00:00
3b4aad9df1 bank: fix executor cache metrics 2022-01-09 04:22:39 +00:00
eaa52bc935 Bump zstd from 0.9.1+zstd.1.5.1 to 0.9.2+zstd.1.5.1 (#22388)
* Bump zstd from 0.9.1+zstd.1.5.1 to 0.9.2+zstd.1.5.1

Bumps [zstd](https://github.com/gyscos/zstd-rs) from 0.9.1+zstd.1.5.1 to 0.9.2+zstd.1.5.1.
- [Release notes](https://github.com/gyscos/zstd-rs/releases)
- [Commits](https://github.com/gyscos/zstd-rs/compare/0.9.1...v0.9.2)

---
updated-dependencies:
- dependency-name: zstd
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-08 22:21:56 +00:00
d8ec8ca2e6 Bump blake3 from 1.2.0 to 1.3.0 (#22373)
* Bump blake3 from 1.2.0 to 1.3.0

Bumps [blake3](https://github.com/BLAKE3-team/BLAKE3) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/BLAKE3-team/BLAKE3/releases)
- [Commits](https://github.com/BLAKE3-team/BLAKE3/compare/1.2.0...1.3.0)

---
updated-dependencies:
- dependency-name: blake3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-08 20:28:04 +00:00
0f94e1d3a2 Clarify docs of minimum_balance (#22385) 2022-01-08 11:27:32 -07:00
d90d5ee9b6 Refactor Rent::due() with RentDue enum (#22346) 2022-01-08 09:03:46 -06:00
2f29ff1a3f add executor creation trace timings 2022-01-08 11:18:37 +00:00
4a9f4e2505 improve multi executor cache addition 2022-01-08 03:47:23 -07:00
ad3cb0bc93 --amend 2022-01-08 08:19:27 +00:00
4ce48307bb bank: prime new executor cache entry use-counts 2022-01-08 08:19:27 +00:00
2759cb860b Bump sha2 from 0.10.0 to 0.10.1 (#22359)
* Bump sha2 from 0.10.0 to 0.10.1

Bumps [sha2](https://github.com/RustCrypto/hashes) from 0.10.0 to 0.10.1.
- [Release notes](https://github.com/RustCrypto/hashes/releases)
- [Commits](https://github.com/RustCrypto/hashes/compare/sha2-v0.10.0...sha2-v0.10.1)

---
updated-dependencies:
- dependency-name: sha2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-08 00:17:32 -07:00
813006b33b remove per program timings from blockstore processor ledger replay (#22370) 2022-01-08 01:52:08 -05:00
81a10e649f fix: non-deterministic writeable account order (#21724) 2022-01-08 13:38:58 +08:00
6d76db1de5 bank: Add executors cache metrics 2022-01-07 16:53:27 -07:00
5771c36d3f Rename open_with_access_type() to open_with_options() (#22123) 2022-01-07 12:11:43 -08:00
b11d3b5abf Shrink disk buckets index record (#21973)
* Shrink index record

* Update bucket_map/src/index_entry.rs

Co-authored-by: Brooks Prumo <brooks@prumo.org>

Co-authored-by: Brooks Prumo <brooks@prumo.org>
2022-01-07 12:12:36 -06:00
c2389fc209 removes CowCachedExecutors (#22343)
Copy-on-write semantics for cached executors can be implemented by a
simple Arc<CachedExecutors> as opposed to CowCachedExecutors:
https://github.com/solana-labs/solana/blob/f1e2598ba/runtime/src/bank.rs#L244-L247

This will also avoid the need for double locking as in:
https://github.com/solana-labs/solana/blob/f1e2598ba/runtime/src/bank.rs#L3490-L3491
https://github.com/solana-labs/solana/blob/f1e2598ba/runtime/src/bank.rs#L3525-L3526
2022-01-07 14:06:29 +00:00
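A small sketch of the Arc-based copy-on-write pattern the commit describes (Cache, snapshot, and update are illustrative names, not the bank's API): readers clone the Arc cheaply, and Arc::make_mut copies the underlying data only when older snapshots are still alive.

```rust
use std::sync::{Arc, RwLock};

#[derive(Clone, Default)]
struct CachedExecutors {
    entries: Vec<String>, // placeholder for the real executor map
}

struct Cache {
    inner: RwLock<Arc<CachedExecutors>>,
}

impl Cache {
    fn snapshot(&self) -> Arc<CachedExecutors> {
        // Readers take a single Arc clone; no deep copy, no double locking.
        let guard = self.inner.read().unwrap();
        Arc::clone(&*guard)
    }

    fn update(&self, mutate: impl FnOnce(&mut CachedExecutors)) {
        let mut guard = self.inner.write().unwrap();
        // Arc::make_mut clones the underlying data only if other snapshots
        // still hold the previous Arc; otherwise it mutates in place.
        mutate(Arc::make_mut(&mut *guard));
    }
}

fn example() {
    let cache = Cache { inner: RwLock::new(Arc::new(CachedExecutors::default())) };
    let before = cache.snapshot();
    cache.update(|c| c.entries.push("new_executor".to_string()));
    // `before` still sees the old data; new snapshots see the update.
    assert_eq!(before.entries.len(), 0);
    assert_eq!(cache.snapshot().entries.len(), 1);
}
```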
e2aa932e97 Add aarch64-apple-darwin publish tarball step 2022-01-06 23:59:18 -08:00
0eb0d62e95 Increase timeout of local-cluster-slow CI step 2022-01-07 15:30:42 +08:00
9f1f64e384 Cleanup ledger-tool analyze-storage command (#22310)
* Make ledger-tool analyze-storage use Blockstore::open()

Opening a large ledger may require setting a larger open file descriptor
limit. Blockstore::open() does this whereas the underlying Database
object that analyze-storage was opening does not.

* Move key_size call lookup to take advantage of traits

* Fix typo where analyze worked on wrong column

* Make analyze-storage analyze all columns
2022-01-06 23:40:02 -06:00
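The first bullet above notes that Blockstore::open() raises the open file descriptor limit before opening many column files. A hedged sketch of that kind of bump, assuming a Unix target and the libc crate (the real code's limit value and error handling differ):

```rust
// Raise RLIMIT_NOFILE toward `desired`, capped at the hard limit.
fn raise_open_file_limit(desired: libc::rlim_t) -> std::io::Result<()> {
    unsafe {
        let mut lim = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
        if libc::getrlimit(libc::RLIMIT_NOFILE, &mut lim) != 0 {
            return Err(std::io::Error::last_os_error());
        }
        lim.rlim_cur = desired.min(lim.rlim_max);
        if libc::setrlimit(libc::RLIMIT_NOFILE, &lim) != 0 {
            return Err(std::io::Error::last_os_error());
        }
    }
    Ok(())
}
```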
207825d30b Minimize boilerplate code around Rocks column families (#22345) 2022-01-06 23:39:09 -06:00
52d12cc802 Add runtime support for address table lookups (#22223)
* Add support for address table lookups in runtime

* feedback

* feedback
2022-01-07 11:59:09 +08:00
08e64c88ed explorer: fix missing logs error for old transactions (#22350) 2022-01-07 00:48:20 +00:00
b2687b7e70 Added TooManyAccountLocks to the PostgreSQL enum types (#22348)
Checked without waiting for the CI result, as CI is currently not validating Postgres changes. Manually verified it fixed the issue.
2022-01-06 16:48:12 -08:00
9cb27613c3 Don't accidentally commit farf (#22349) 2022-01-06 17:24:33 -07:00
5f611723a5 disk buckets: helpers and test (#22339) 2022-01-06 15:45:47 -06:00
0459f0a4c0 cargo-build-bpf: don't set -C linker on windows (#22314)
* cargo-build-bpf: don't set -C linker on windows

Since we're now linking using rust-lld
(87ba5c61a5)
which is guaranteed to always be in the sysroot, hardcoding the linker path
shouldn't be needed anymore.

* Update Cargo.lock
2022-01-07 07:41:13 +11:00
f1e2598baa Retain executor cache counts (#22322) 2022-01-06 08:56:00 -08:00
e0c091a9f4 factor out code in do_shrink_slot_stores (#22306) 2022-01-06 10:49:24 -06:00
100293c4b5 refactor get_store_for_shrink (#22307) 2022-01-06 10:49:01 -06:00
705084a25b zk-token-sdk: rustfmt 2022-01-06 11:18:06 -05:00
f81f926a0c zk-token-sdk: fix transfer verification / set up for fee proof (#22337) 2022-01-06 11:01:27 -05:00
bc654bf865 feat: add error types for each sigma protocol (#22336) 2022-01-06 08:10:37 -05:00
390ef0fbcd Consolidate process instruction execution timings to own struct 2022-01-06 03:56:46 -07:00
72fc6096a0 Use saturating_add_assign macro 2022-01-06 03:56:46 -07:00
deb9344e49 Add helper macro for AddAssigning with saturating arithmetic 2022-01-06 03:56:46 -07:00
848b6dfbdd Add metrics for executor creation 2022-01-06 03:56:46 -07:00
b25e4a200b Add execute metrics 2022-01-06 03:56:46 -07:00
7d32909e17 move ExecuteTimings from runtime::bank to program_runtime::timings 2022-01-06 03:56:46 -07:00
47b74e28ec Add CLEANUP_SERVICE flag to ledger cleanup benchmark (#22108) 2022-01-05 23:46:02 -08:00
d9220652ad [ledger-tool]compare_blocks (#22229)
* 1. Made load_credentials accept credential path as a parameter. 2. Partially implemented the bigtable comparison function

* finding missing blocks in bigtables in a specified range

* refactor compare-blocks,add unit test for missing_blocks and fmt

* compare-block fix last block bug

* refactor compare-block and improve wording

* Update ledger-tool/src/bigtable.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* update compare-block command-line description

* style:improve wording/naming/code style

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2022-01-06 14:36:03 +08:00
0673dc5375 Remove erroneous dbg!() (#22324) 2022-01-06 06:09:15 +00:00
12e160269e cache executors on failed transactions (#22308) 2022-01-05 22:09:03 -08:00
37ebd9bd9e Update default --dynamic-port-range values to include some room for additional ports that may be added in the future 2022-01-05 16:50:15 -08:00
005ca7759e Remove stray printlns 2022-01-05 16:49:47 -08:00
cd24ec2ef6 --dynamic-port-range now requires at least 12 ports 2022-01-05 16:12:06 -08:00
cd54b90ae6 Update Cargo.lock 2022-01-05 16:04:32 -08:00
ab13e39518 Only sum accounts data len from non-zero lamport accounts (#22309) 2022-01-05 23:23:37 +00:00
f2ed6f09ee Skip updating already cached executors if unmodified 2022-01-05 16:08:10 -07:00
959ea26816 Re-enable LTO (#22287)
LTO seems to work fine now. It was possibly fixed by either the LLVM13 upgrade
or by b2ed47a925,
which fixed an LTO issue with tests.
2022-01-06 09:16:50 +11:00
bb3a1b6b31 Add zk_token_sdk_enabled feature to gate Zk Token proof program and sol_zk_token_elgamal_op syscalls 2022-01-05 11:57:37 -08:00
98e7fada15 Bump generic-array from 0.14.4 to 0.14.5 (#22297)
* Bump generic-array from 0.14.4 to 0.14.5

Bumps [generic-array](https://github.com/fizyk20/generic-array) from 0.14.4 to 0.14.5.
- [Release notes](https://github.com/fizyk20/generic-array/releases)
- [Changelog](https://github.com/fizyk20/generic-array/blob/master/CHANGELOG.md)
- [Commits](https://github.com/fizyk20/generic-array/commits)

---
updated-dependencies:
- dependency-name: generic-array
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-05 12:01:15 -07:00
69e632a337 Adapt zk-token-{sdk,proof-program} for use in the monorepo 2022-01-05 08:51:18 -08:00
e1848ecbcc Cargo.lock 2022-01-05 08:51:18 -08:00
a9b0309c1f Update to Rust 2021 edition 2022-01-05 08:51:18 -08:00
a08e3760d8 Update to Solana 1.9.1 2022-01-05 08:51:18 -08:00
f2e7f0c32b Adapt to changes to native program entrypoint 2022-01-05 08:51:18 -08:00
9c88f8205b Update to Solana 1.9.0 2022-01-05 08:51:18 -08:00
d5196046dd remove UpdateAccountPk instruction 2022-01-05 08:51:18 -08:00
39e0c19b4b Update to Solana 1.7.15 2022-01-05 08:51:18 -08:00
11cac460aa Consume compute units since proof verification is an expensive operation 2022-01-05 08:51:18 -08:00
62b6eafd7c Prevent proof program from being invoked as an inner instruction 2022-01-05 08:51:18 -08:00
af80203522 Merge transfer instructions 2022-01-05 08:51:18 -08:00
6cdb94ccc5 Rename crypto crate to sdk 2022-01-05 08:51:18 -08:00
c38309791b Update to Solana 1.7.13 2022-01-05 08:51:18 -08:00
f1345c7bb3 Merge sdk/ back into crypto/ 2022-01-05 08:51:18 -08:00
d5472faa96 Add demo cli 2022-01-05 08:51:18 -08:00
b902ee6ad5 Adjust crate names 2022-01-05 08:51:18 -08:00
b1be9a7907 Move solana-specific parts of crypto/ into sdk/ 2022-01-05 08:51:18 -08:00
b96e07d161 Drop 'With' from TransferRangeProof/TransferValidityProof 2022-01-05 08:51:18 -08:00
0e8aa331ad Add transfer proof-program instructions 2022-01-05 08:51:18 -08:00
dc52817408 Rework bytemuck dependencies for monorepo compatibility 2022-01-05 08:51:18 -08:00
6ba3f4dc6d Add WithdrawData proof program plumbing 2022-01-05 08:51:18 -08:00
8b15d7bc28 Add CloseAccountData proof program plumbing 2022-01-05 08:51:18 -08:00
00fa5a93b8 Rename some files 2022-01-05 08:51:18 -08:00
1a254ec098 feat: use proper constructor syntax for inner product 2022-01-05 08:51:18 -08:00
5b41d62f8a feat: fix clippy for new error types 2022-01-05 08:51:18 -08:00
0944abc0e2 feat: update error types for sdk 2022-01-05 08:51:18 -08:00
1cbcda71cb feat: add separate error types for sigma proofs 2022-01-05 08:51:18 -08:00
7439d2424b feat: add a separate TranscriptError 2022-01-05 08:51:18 -08:00
a211fe1cf4 feat: add errors for range proof module 2022-01-05 08:51:18 -08:00
e1d3883893 feat: clean up range proof constructor 2022-01-05 08:51:18 -08:00
826c3bee4a feat: add verification for fee proof 2022-01-05 08:51:18 -08:00
e561fbc25a feat: add test for fee proof 2022-01-05 08:51:18 -08:00
bc7ac42f2a feat: proof generation for max and equality proof 2022-01-05 08:51:18 -08:00
601247d958 feat: add zk-proof certifying that a ciphertext encrypts specified max fee value 2022-01-05 08:51:18 -08:00
beb95c4884 Allow publish 2022-01-05 08:51:18 -08:00
08ef612361 refactor: add mod.rs for sigma_proofs 2022-01-05 08:51:18 -08:00
584c63bcc4 refactor: CloseAccount now uses zero-balance-proof 2022-01-05 08:51:18 -08:00
c26fa1d0e9 refactor: create pod struct for ZeroBalanceProof 2022-01-05 08:51:18 -08:00
208621e3cf refactor: create a separate zero-balance-proof for CloseAccount instruction 2022-01-05 08:51:18 -08:00
c6cd0a5591 refactor: group equality and validity proofs in sigma_proofs module 2022-01-05 08:51:18 -08:00
e011502875 Update to Rust 2021 edition 2022-01-05 08:51:18 -08:00
8ee07cd5c6 Update to Solana 1.9.1 2022-01-05 08:51:18 -08:00
31737406da Adapt to changes to native program entrypoint 2022-01-05 08:51:18 -08:00
93860e88d2 Update to Solana 1.9.0 2022-01-05 08:51:18 -08:00
9a43fbe3b2 clean up authenticated encryption implementation and also rename aes to auth_encryption 2022-01-05 08:51:18 -08:00
7a568482de cargo fmt and fix clippy 2022-01-05 08:51:18 -08:00
30871784e4 incorporate validity proof into transfer proof 2022-01-05 08:51:18 -08:00
c7bf9958e7 add validity proof serialization and deserialization 2022-01-05 08:51:18 -08:00
725781eaa7 add validity proof generation and verification 2022-01-05 08:51:18 -08:00
dcc961ae00 fix clippy for the updated transfer 2022-01-05 08:51:18 -08:00
ccdbe65c87 cleaning up transfer proof 2022-01-05 08:51:18 -08:00
30e12aef9a Update withdraw instruction to use equality proof 2022-01-05 08:51:18 -08:00
6c329e2431 add equality proof struct 2022-01-05 08:51:18 -08:00
f0db6020eb updating close account zk proof 2022-01-05 08:51:18 -08:00
aba8c2f4af reformat imports 2022-01-05 08:51:18 -08:00
c61775664e Add decrypt helper function 2022-01-05 08:51:18 -08:00
69fab16e83 ElGamalKeypair::new() now generates valid keypairs 2022-01-05 08:51:18 -08:00
88ce934bd7 Derive thiserror::Error for ProofError 2022-01-05 08:51:18 -08:00
2c51288afd Add Copy to Role 2022-01-05 08:51:18 -08:00
8d731f1a70 set ciphertext_lo and ciphertext_hi methods to private 2022-01-05 08:51:18 -08:00
c59e8f7c8d resolve conflict 2022-01-05 08:51:18 -08:00
973287ad66 add decryption functionality to transfer data 2022-01-05 08:51:18 -08:00
15aea0fe47 Avoid runtime discrete log table precomputation 2022-01-05 08:51:18 -08:00
c1db2b4866 Wrap a struct around the discrete log precompute hashmap 2022-01-05 08:51:18 -08:00
425a4a4082 cargo fmt 2022-01-05 08:51:18 -08:00
fdb658fff4 Various program refinements 2022-01-05 08:51:18 -08:00
c155519ae1 Generate AesKey/ElGamalSecretKey from an ed25519 signature instead of secret key 2022-01-05 08:51:18 -08:00
221f499041 derive ElGamal keypair from the secret component of keypair 2022-01-05 08:51:18 -08:00
89ddae29ef derive ElGamal keypair from Ed25519 keypair instead of just the signing key 2022-01-05 08:51:18 -08:00
defdf8da72 change AESCiphertext to AesCiphertext 2022-01-05 08:51:18 -08:00
3721eda23e serialization for aes 2022-01-05 08:51:18 -08:00
c7fc430adb use randomized authenticated encryption for aes 2022-01-05 08:51:18 -08:00
77e79221a0 remove UpdateAccountPk instruction 2022-01-05 08:51:18 -08:00
b0e492bc06 Update sdk/src/encryption/aes.rs
Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-05 08:51:18 -08:00
173d88d514 remove OptionAESCiphertext 2022-01-05 08:51:18 -08:00
22114c523f update demo program and bpf test for aes ciphertext removal 2022-01-05 08:51:18 -08:00
88f952075d remove aes ciphertext from the proof program 2022-01-05 08:51:18 -08:00
c51a51d0ad quick syntactical fixes from pr review
merge
2022-01-05 08:51:18 -08:00
2359150b9c incorporate aes ciphertext for zk-proof instructions 2022-01-05 08:51:18 -08:00
6749c45c63 merge 2022-01-05 08:51:18 -08:00
57103c515b update applying pending balance for aes ciphertext 2022-01-05 08:51:18 -08:00
2d225de48c pod for AESCiphertext 2022-01-05 08:51:18 -08:00
beba0eac55 Some clippy 2022-01-05 08:51:18 -08:00
e0c168ef3f add aes encryption 2022-01-05 08:51:18 -08:00
72ade5473a Add blueprint for aes encryption 2022-01-05 08:51:18 -08:00
abe6b27b34 clippy 2022-01-05 08:51:18 -08:00
0ac6427abc cargo fmt 2022-01-05 08:51:18 -08:00
17f5dd734c Fix BPF build 2022-01-05 08:51:18 -08:00
a707e85c10 add key pair derivation from ed25519 signing key 2022-01-05 08:51:18 -08:00
ecbdb6ba68 update cargo to include ed25519_dalek 2022-01-05 08:51:18 -08:00
2eb326b0da add keypair derivation 2022-01-05 08:51:18 -08:00
f350fa7147 add key pair derivation from ed25519 signing key
merge
2022-01-05 08:51:18 -08:00
0cc717340c update cargo to include ed25519_dalek 2022-01-05 08:51:18 -08:00
a368adcd30 add keypair derivation
merge
2022-01-05 08:51:18 -08:00
500423626d merge 2022-01-05 08:51:18 -08:00
aea95e8ff3 update cargo to include ed25519_dalek 2022-01-05 08:51:18 -08:00
0bd28f9620 merge 2022-01-05 08:51:18 -08:00
65cf599786 merge 2022-01-05 08:51:18 -08:00
9fdadb503d merge 2022-01-05 08:51:18 -08:00
ee6a13ef6f update cargo to include ed25519_dalek 2022-01-05 08:51:18 -08:00
30702dcdee add keypair derivation 2022-01-05 08:51:18 -08:00
43e368faf6 add ElGamal key derivation from Ed25519 signing key 2022-01-05 08:51:18 -08:00
7aef523a41 sdk/ now builds for wasm32-unknown-unknown 2022-01-05 08:51:18 -08:00
4b61e27d12 divide out elgamal algorithms with keypair 2022-01-05 08:51:18 -08:00
a8ab615c89 Add inner instruction utility functions 2022-01-05 08:51:18 -08:00
93eb49a3e3 Rename ElGamalKeypair fields 2022-01-05 08:51:18 -08:00
c33e24de57 Rename ElGamal to ElGamalKeypair 2022-01-05 08:51:18 -08:00
f272c025bd Rename ElGamal::new() to ElGamal::default() 2022-01-05 08:51:18 -08:00
6b59beda7b Add fn to save/load ElGamal 2022-01-05 08:51:18 -08:00
1daf676b37 Update to Solana 1.7.15 2022-01-05 08:51:18 -08:00
2c1aa715b0 Adjust ElGamal::new() signature 2022-01-05 08:51:18 -08:00
2d62e4e6bd update program processor for the single transfer instruction 2022-01-05 08:51:18 -08:00
09b8baa4b1 merge 2022-01-05 08:51:18 -08:00
db69128825 Simplify range proof verification syntax for merged transfer 2022-01-05 08:51:18 -08:00
a5d1efc207 Rust fmt and clippy 2022-01-05 08:51:18 -08:00
25216705b3 Add UpdateAccountPk tests for edge cases 2022-01-05 08:51:18 -08:00
1af1106b87 Add CloseAccount tests for edge cases 2022-01-05 08:51:18 -08:00
73c06d9e33 Rename ElGamalPubkey::gen_decrypt_handle method to ElGamalPubkey::decrypt_handle 2022-01-05 08:51:18 -08:00
20c6001836 derive Debug for pods for BPF target as well 2022-01-05 08:51:18 -08:00
c150b4b197 Replace to_elgamal_ciphertext with From trait for ElGamalCiphertext 2022-01-05 08:51:18 -08:00
a40e7fc59b Rename Pedersen related structs and methods for consistency 2022-01-05 08:51:18 -08:00
17cda46531 Merge transfer instructions 2022-01-05 08:51:18 -08:00
42f7c0c7f6 Update tests 2022-01-05 08:51:18 -08:00
20bce10204 add clippy 2022-01-05 08:51:18 -08:00
9b73e351aa minor name change 2022-01-05 08:51:18 -08:00
d6a808f41a simplify get_ciphertext methods 2022-01-05 08:51:18 -08:00
93f2323e52 add ciphertext extraction methods for TransferData 2022-01-05 08:51:18 -08:00
75896958b6 rename to_elgamal_ctxt to to_elgamal_ciphertext 2022-01-05 08:51:18 -08:00
a622ee4b8d Rename ElGamal::keygen to ElGamal::new 2022-01-05 08:51:18 -08:00
94a96670e8 Update lib.rs 2022-01-05 08:51:18 -08:00
8bb6f0dc6f Rename ElGamalSK to ElGamalSecretKey 2022-01-05 08:51:18 -08:00
5445e13828 Rename dlog.rs to discrete_log.rs 2022-01-05 08:51:18 -08:00
23d3b540a1 Avoid explicit curve25519_dalek dependency in demo/ 2022-01-05 08:51:18 -08:00
1ef3a621a8 add decryption in demo 2022-01-05 08:51:18 -08:00
d20d03cd7f clean up ElGamal decryption 2022-01-05 08:51:18 -08:00
409b55ad81 add some comments 2022-01-05 08:51:18 -08:00
667e72144e rename encode.rs to dlog.rs 2022-01-05 08:51:18 -08:00
2f138ecb96 Fix tests 2022-01-05 08:51:18 -08:00
48047b55ba clippy 2022-01-05 08:51:18 -08:00
f227504ea7 Add sol_zk_token_elgamal syscall declarations 2022-01-05 08:51:18 -08:00
78799640ea Rename ElGamalCT to ElGamalCiphertext, ElGamalPK to ElGamalPubkey 2022-01-05 08:51:18 -08:00
f3e7e62813 Refactor sdk/src/pod.rs 2022-01-05 08:51:18 -08:00
d01d425e4b Rename crypto crate to sdk 2022-01-05 08:51:18 -08:00
7da620f0b4 Merge sdk/ back into crypto/ 2022-01-05 08:51:18 -08:00
88b71c0732 Add demo cli 2022-01-05 08:51:18 -08:00
df521bbfc8 Adjust crate names 2022-01-05 08:51:18 -08:00
03a3a501f3 Groom Cargo.tomls 2022-01-05 08:51:18 -08:00
ae5d254e73 Move solana-specific parts of crypto/ into sdk/ 2022-01-05 08:51:18 -08:00
0e1afcbb26 Split up local cluster tests into separate CI steps (#22295)
* Split up local cluster tests into separate CI steps

* Update buildkite-pipeline.sh
2022-01-05 14:44:15 +00:00
44d61465f1 (Ledger store benchmark - 3/N) Add comments about the benchmark and its arguments (#22160)
* Avoid shred generation in ledger_cleanup.rs

* Update comment for test_ledger_cleanup_compaction to include benchmark information.
2022-01-04 23:35:55 -10:00
9f63493789 Refactor: Remove KeyedAccounts (2) (#22274)
* Adds InstructionContext::get_signers().
Improves error messages when modifying borrowed accounts.

* Removes keyed_accounts from InvokeContext tests.

* Removes keyed_accounts from message_processor.rs

* Removes keyed_accounts from bank.rs

* Removes keyed_accounts from bpf serialization.
2022-01-05 09:39:37 +01:00
c1995c647b fix(rpc): recreate dead and uncleaned subscriptions (#22281) 2022-01-05 00:15:21 -07:00
5bb376f304 Fix CONTRIBUTING wording (#22291)
Co-authored-by: Seth Girvan <seth@ahoy.fund>
2022-01-05 06:27:10 +00:00
45458e7139 Refactor: Improve type safety and readability of transaction execution (#22215)
* Refactor Bank::load_and_execute_transactions

* Refactor: improve type safety of TransactionExecutionResult

* Add enum for extra type safety in execution results

* feedback
2022-01-05 10:15:15 +08:00
e201b41341 Avoid shred generation in ledger_cleanup.rs (#22091) 2022-01-04 15:29:43 -10:00
af7a2e3daa Update metrics subdomain 2022-01-04 20:14:41 -05:00
9725f2e319 docs: Fix typo in proposal (#22282) 2022-01-04 22:11:51 +00:00
212e6ea4a4 Remove obsolete M1 setup information 2022-01-04 21:14:13 +00:00
379feecae5 patches bug in recv_mmsg when npkts != nrecv
If recv_mmsg receives 2 packets where the first one is filtered out,
then it returns npkts == 1:
https://github.com/solana-labs/solana/blob/01a096adc/streamer/src/recvmmsg.rs#L104-L115

But then streamer::packet::recv_from will erroneously keep the 1st
packet and drop the 2nd one:
https://github.com/solana-labs/solana/blob/01a096adc/streamer/src/packet.rs#L34-L49

To avoid this bug, this commit updates recv_mmsg to always return total
number of received packets. If socket address cannot be correctly
obtained, it is left as the default value which is UNSPECIFIED:
https://github.com/solana-labs/solana/blob/01a096adc/sdk/src/packet.rs#L145
2022-01-04 21:06:59 +00:00
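An illustrative sketch of the fix described above, with hypothetical types and function: every received packet is kept so the returned count always matches what the kernel delivered, and a packet whose source address cannot be obtained simply keeps the default UNSPECIFIED address.

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

struct Packet {
    addr: SocketAddr,
    len: usize,
}

// `nrecv` is the message count the kernel reported; `parsed_addrs[i]` is None
// when the source address of message i could not be obtained.
fn finalize_batch(nrecv: usize, parsed_addrs: &[Option<SocketAddr>], lens: &[usize]) -> Vec<Packet> {
    let unspecified = SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), 0);
    (0..nrecv)
        .map(|i| Packet {
            // Keep every received packet; fall back to UNSPECIFIED on parse failure.
            addr: parsed_addrs[i].unwrap_or(unspecified),
            len: lens[i],
        })
        .collect() // length == nrecv, so the reported packet count is always accurate
}
```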
4b24499916 removes total-size from return value of recv_mmsg 2022-01-04 21:06:59 +00:00
6c65734330 Bump hidapi from 1.3.0 to 1.3.2 (#22265)
* Bump hidapi from 1.3.0 to 1.3.2

Bumps [hidapi](https://github.com/ruabmbua/hidapi-rs) from 1.3.0 to 1.3.2.
- [Release notes](https://github.com/ruabmbua/hidapi-rs/releases)
- [Commits](https://github.com/ruabmbua/hidapi-rs/commits)

---
updated-dependencies:
- dependency-name: hidapi
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-04 10:47:58 -07:00
1460f00e0f Consume from AccountsDataMeter (#21994) 2022-01-04 10:00:21 -06:00
01a096adc8 adds bitflags to Packet.Meta
Instead of a separate bool type for each flag, all the flags can be
encoded in a type-safe bitflags encoded in a single u8:
https://github.com/solana-labs/solana/blob/d6ec103be/sdk/src/packet.rs#L19-L31
2022-01-04 13:53:40 +00:00
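A minimal sketch of the pattern from the commit above, assuming the `bitflags` crate as a dependency; the flag names below are illustrative only, the real set lives in the linked `sdk/src/packet.rs`.

```rust
use bitflags::bitflags;

// Illustrative flag names; the actual flags are defined in sdk/src/packet.rs.
bitflags! {
    struct PacketFlags: u8 {
        const DISCARD   = 0b0000_0001;
        const FORWARDED = 0b0000_0010;
        const REPAIR    = 0b0000_0100;
    }
}

fn main() {
    let mut flags = PacketFlags::empty();
    flags.insert(PacketFlags::FORWARDED);
    assert!(flags.contains(PacketFlags::FORWARDED));
    assert!(!flags.contains(PacketFlags::DISCARD));
    // All flags live in one type-safe u8 instead of a separate bool per flag.
    assert_eq!(flags.bits(), 0b0000_0010);
}
```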
73a7741c49 uses std::net::IpAddr type for Packet.Meta.addr 2022-01-04 13:53:40 +00:00
aa9f7ed7e8 removes seed and slot fields from Packet.Meta
507367e6ac
updated window-service to send shreds (as opposed to packets) to
retransmit-stage and so seed and slot fields in Packet.Meta are unused:
https://github.com/solana-labs/solana/blob/d6ec103be/sdk/src/packet.rs#L27-L28
2022-01-04 13:53:40 +00:00
2486e21ffe Lower vote-only-mode to 400 (#22210) 2022-01-04 12:49:14 +01:00
ca5591bfa0 Updates to the address lookup table proposal (#22269) 2022-01-04 11:35:20 +00:00
9665da9d0b Documentation typos (#22262)
* Fix typo markdown link

* Add missing punctuation full stop
2022-01-04 18:49:14 +08:00
ca8fef5855 retransmit consecutive leader blocks (#22157) 2022-01-04 00:24:16 -08:00
2b5e00d36d Limit number of accounts that a transaction can lock (#22201) 2022-01-04 14:25:23 +08:00
8b6310b179 fix: add owner to token balance type 2022-01-03 20:31:42 -08:00
e8b7f96a89 Add struct BlockstoreOptions (#22121) 2022-01-03 18:30:45 -10:00
33ad74fbcd chore: add encoding param to getMultipleAccounts 2022-01-03 19:26:03 -08:00
6895eb7ef6 Correctly set CI_COMMIT when Buildkite provides HEAD instead of a real commit 2022-01-03 17:39:20 -08:00
25cb859ed0 Switch from arm64-apple-darwin to aarch64-apple-darwin to align with Rust's target names 2022-01-03 16:54:46 -08:00
5b6027bef0 Fixed issue #22124 -- missing historical data if slot updated later. (#22193)
* Fixed issue #22124 -- missing historical data if slot updated later.

* Fixed a couple of comments
2022-01-03 16:10:44 -08:00
ed0b47c6f8 Use experimental docker virtualization framework for arm64 2022-01-03 15:57:06 -08:00
20b61e28b6 Flip iter operations to keep associated address/header/packets together (#22245)
Flip iter operations to keep associated address/header/packets together

Before this change, if cast_socket_addr() returned a None for any
address/header pair, the subsequent zip() would misalign the
address/header pair and packet. So, this change zips all three together,
then does filter_map() to keep things aligned.

Additionally, compute total_size inline to avoid running through packets
a second time.
2022-01-03 17:15:50 -06:00
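A minimal, self-contained sketch of the alignment hazard with plain iterators and toy data (not the actual streamer types): filtering one sequence before `zip()` shifts later elements against their partners, while zipping first and then applying `filter_map()` keeps each pair together.

```rust
fn main() {
    // Toy stand-ins for the address/header results and their packets.
    let addrs = vec![Some("1.1.1.1"), None, Some("3.3.3.3")];
    let packets = vec!["pkt-a", "pkt-b", "pkt-c"];

    // Buggy shape: filter the addresses first, then zip.
    // "pkt-b" ends up paired with "3.3.3.3" and "pkt-c" is dropped.
    let misaligned: Vec<_> = addrs
        .iter()
        .filter_map(|a| *a)
        .zip(packets.iter().copied())
        .collect();
    assert_eq!(misaligned, vec![("1.1.1.1", "pkt-a"), ("3.3.3.3", "pkt-b")]);

    // Fixed shape: zip everything first, then filter_map the pairs together.
    let aligned: Vec<_> = addrs
        .iter()
        .zip(packets.iter().copied())
        .filter_map(|(addr, pkt)| addr.map(|a| (a, pkt)))
        .collect();
    assert_eq!(aligned, vec![("1.1.1.1", "pkt-a"), ("3.3.3.3", "pkt-c")]);
}
```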
73e6038986 Refactor: Remove KeyedAccount from program runtime (#22226)
* Makes error handling in BorrowedAccount optional.
Adds BorrowedAccount::get_rent_epoch().
Exposes InstructionContext::get_index_in_transaction().
Turns accounts and account_keys into pinned boxed slices.

* Introduces "unsafe" to InvokeContext::push().

* Turns &TransactionContext into &mut TransactionContext in InvokeContext.

* Push and pop InstructionContext in InvokeContext.
Makes test_process_cross_program and test_native_invoke symmetric.
Removes the borrow check from test_invoke_context_verify.

* Removes keyed_accounts from prepare_instruction()

* Removes usage of invoke_stack.

* Removes keyed_accounts from program-test.

* Removes caller_write_privileges.

* Removes keyed_accounts from BPF parameter (de-)serialization.
2022-01-03 23:30:56 +01:00
672fed04cb Bump serde from 1.0.132 to 1.0.133 (#22233)
* Bump serde from 1.0.132 to 1.0.133

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.132 to 1.0.133.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.132...v1.0.133)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-03 22:19:39 +00:00
005592998d Fix bug, add error specific timings (#22225) 2022-01-03 16:33:54 -05:00
69d71f8f86 removes epoch_authorized_voters from VoteTracker (#22192)
https://github.com/solana-labs/solana/pull/22169
verifies authorized-voter early on in vote-listener pipeline; and so
VoteTracker no longer needs to maintain and check for epoch authorized
voters.
2022-01-03 21:07:47 +00:00
b18462737e Correctly set CI_OS_NAME for macOs buildkite agents 2022-01-03 12:54:05 -08:00
53777f2fbf Add support for arm64-apple-darwin release/channel artifacts 2022-01-03 12:42:57 -08:00
bbe5b66324 Prevent lookup tables from being closed during deactivation slot (#22221) 2022-01-04 04:42:29 +08:00
0e4ede46d1 work around rust 39364 for stats_reporter_sender (#22227) 2022-01-03 11:46:02 -08:00
9029b46570 Fix token-balance owner type in docs (#22240) 2022-01-03 18:00:13 +00:00
ecbfc70bfa Bump serde_json from 1.0.73 to 1.0.74 (#22231)
* Bump serde_json from 1.0.73 to 1.0.74

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.73 to 1.0.74.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.73...v1.0.74)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2022-01-03 10:36:59 -07:00
56fd32bda2 Remove Xargo.toml reference 2022-01-03 17:12:45 +00:00
04b76eb066 Use AtomicUsize for next_bucket_to_flush (#22095) 2022-01-03 10:35:35 -06:00
fb62407232 AcctIdx: appendvecid: u32 (#21842) 2022-01-03 10:35:17 -06:00
2a00382d71 Refactor: cleanup solana_transaction_status crate (#22230) 2022-01-03 15:45:18 +00:00
86acd8f6f9 Update .gitignore 2022-01-03 19:30:09 +05:30
64b153fae0 connected github with vercel for preview deploy 2022-01-03 19:20:36 +05:30
ce6c76e45f Remove unused serialization attributes from transaction status types (#22228) 2022-01-03 16:20:57 +08:00
d6ec103bee Bump rbpf to v0.2.21 (#22216) 2022-01-01 20:22:48 +00:00
a2a7e91ad6 set secp256k1 cost similar to sigverify 2021-12-31 17:45:50 -06:00
d743c2917c re-calibrate limit based on mainnet data, see issue #21917 2021-12-31 17:45:50 -06:00
0b1b36f088 Exit early on BigTable error (#22200) 2021-12-31 13:36:57 -07:00
8a43e2d889 Bump solana_rbpf to version v0.2.20 (#22164) 2021-12-31 19:50:45 +01:00
e529d03c11 Cleanup #22182 (#22205)
* Turns compute_units_consumed of ProcessInstructionResult into a &mut parameter.

* Removes second nesting level from test_process_instruction_compute_budget().

* Makes test_process_cross_program and test_native_invoke symmetric.

* Unifies test_process_cross_program(), test_native_invoke() and test_process_instruction_compute_budget() into test_process_instruction().
2021-12-31 17:55:27 +01:00
557d35ec79 Remove duplicate code in ledger_cleanup_compaction_test (#22204) 2021-12-31 11:19:33 -05:00
94a9b712b6 Fix check failure on (#22202) 2021-12-30 23:48:26 -10:00
f479ab7af2 ledger_cleanup test improvement (1/N) -- make the test lockless and simplify the logic (#22090) 2021-12-30 20:18:47 -10:00
d06e6c7425 Count compute units even when transaction errors (#22182) 2021-12-30 21:21:42 -05:00
95dfcc546a bypass retransmission for slots without propagated stats (#22176) 2021-12-30 16:07:34 -08:00
4e4577afbe chore: update transaction error links in docs (#22189) 2021-12-31 06:05:29 +08:00
736f974082 chore: fix typo in AccountInfo docs (#22196) 2021-12-31 06:03:42 +08:00
d421ccb330 simplifies parse [sanitized] vote transaction (#22127) 2021-12-30 16:03:03 +00:00
c0c6038654 checks for authorized voter early on in the vote-listener pipeline (#22169)
Before votes are verified to be signed by the authorized voter, they
might be dropped in the verified-vote-packets code. If there are
enough spam votes from unauthorized voters, this may drop valid votes
but keep the false ones.
https://github.com/solana-labs/solana/blob/57986f982/core/src/verified_vote_packets.rs#L165-L168
2021-12-30 15:03:14 +00:00
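A minimal sketch of the ordering argument with toy data and no real vote types: when a capacity-limited stage sits before signature verification, spam can consume the capacity and evict valid items; verifying first spends the capacity only on items that pass.

```rust
fn main() {
    // Two spam votes from unauthorized voters arrive ahead of one valid vote,
    // and the pipeline stage only has capacity for two packets.
    let incoming = [(false, "spam-1"), (false, "spam-2"), (true, "valid-vote")];
    let cap = 2;

    // Buffer first, verify later: spam fills the capacity and the valid vote is gone.
    let kept_late: Vec<&str> = incoming
        .iter()
        .take(cap)
        .filter(|(authorized, _)| *authorized)
        .map(|(_, name)| *name)
        .collect();
    assert!(kept_late.is_empty());

    // Verify first, buffer later: capacity is only spent on authorized votes.
    let kept_early: Vec<&str> = incoming
        .iter()
        .filter(|(authorized, _)| *authorized)
        .take(cap)
        .map(|(_, name)| *name)
        .collect();
    assert_eq!(kept_early, vec!["valid-vote"]);
}
```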
edb20d6909 Splits index of InstructionAccount into index_in_transaction and index_in_caller. (#22165) 2021-12-30 15:46:36 +01:00
3f88994e0f explorer: Fix setting custom RPC URL (#22187) 2021-12-30 19:50:08 +08:00
29edb130cc Update install/src/command.rs
Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
2021-12-30 00:47:46 -08:00
3c1416091e Add connect timeout and change overall timeout to None 2021-12-30 00:47:46 -08:00
a1912f8400 fix: Installer increase download req timeout from 30 seconds to 6 minutes 2021-12-30 00:47:46 -08:00
70f6aff07f explorer: Bump @solana/spl-token-registry to 0.2.1143 (#22183) 2021-12-30 08:24:15 +00:00
33d0b5e011 Revert "Count compute units even when transaction errors (#22059)" (#22174)
This reverts commit eaa8c67bde.
2021-12-30 02:42:32 -05:00
135af08b8b Add docs for notifying transactions via plugin (#22097)
* Added documentations for streaming transactions via plugin

* Updated comments for transaction info

* Updated doc on transaction format

* Removed a white space

* Apply suggestions from code review from Tyera

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-12-29 16:22:13 -08:00
972730924b fix: add Sysvar PubKeys
* web3.js: Add Sysvar PubKeys

* remove fees sysvar
2021-12-29 16:30:34 -07:00
f14928a970 Stream additional block metadata via plugin (#22023)
* Stream additional block metadata through plugin
blockhash, block_height, block_time, rewards are streamed
2021-12-29 15:12:01 -08:00
c9c78622a8 discards serialized gossip crds votes if cannot parse tx (#22129) 2021-12-29 19:31:26 +00:00
b1d9a2e60e Don't forward packets received from TPU forwards port (#22078)
* Don't forward packets received from TPU forwards port

* Add banking stage test
2021-12-29 19:34:31 +01:00
d20a3774db Bump lru from 0.7.1 to 0.7.2 (#22161)
Bumps [lru](https://github.com/jeromefroe/lru-rs) from 0.7.1 to 0.7.2.
- [Release notes](https://github.com/jeromefroe/lru-rs/releases)
- [Changelog](https://github.com/jeromefroe/lru-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jeromefroe/lru-rs/compare/0.7.1...0.7.2)

---
updated-dependencies:
- dependency-name: lru
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-29 10:47:37 -07:00
bac6821e19 get_signatures_for_address does not correctly account for result sets that span local and Bigtable sources (#22115)
* get_signatures_for_address does not correctly account for result sets that span Blockstore and Bigtable.

This causes Bigtable to return `RowNotFound` until the new tx is uploaded.

Check that `before` exists in Bigtable, and if not, set it to `None` to return the full data set.

References #21442
Closes #22110

* Differentiate between before sig not found and no newer signatures

* Dedupe bigtable results to account for potential upload race

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-12-29 10:25:10 -07:00
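A minimal sketch of the dedupe step, using a hypothetical `merge_dedupe` helper over plain signature strings rather than the actual RPC types: results from the local store and the archival store are concatenated and deduplicated while preserving first-seen order, so an upload race cannot produce duplicate entries.

```rust
use std::collections::HashSet;

// Hypothetical helper: merge two ordered result sets, keeping the first
// occurrence of each signature.
fn merge_dedupe(local: Vec<String>, archival: Vec<String>) -> Vec<String> {
    let mut seen = HashSet::new();
    local
        .into_iter()
        .chain(archival)
        .filter(|sig| seen.insert(sig.clone()))
        .collect()
}

fn main() {
    let local = vec!["sig3".to_string(), "sig2".to_string()];
    let archival = vec!["sig2".to_string(), "sig1".to_string()]; // overlap from an upload race
    assert_eq!(merge_dedupe(local, archival), vec!["sig3", "sig2", "sig1"]);
}
```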
57986f982a cargo-build-bpf: Add Windows support (#20276)
* cargo-build-bpf: Add Windows support

* Update error message
2021-12-28 18:37:35 -05:00
eaa8c67bde Count compute units even when transaction errors (#22059) 2021-12-28 17:05:11 -05:00
f061059e45 Prevent log spam (#22148) 2021-12-28 17:04:48 -05:00
f8c97a3d1f fix bank-benching test 2021-12-28 15:21:24 -06:00
3d6ab96587 push live packets straight to buffer, leader only process packets from buffer 2021-12-28 15:21:24 -06:00
c7b0917e1a Fix program log filtering (#22133) 2021-12-28 12:13:03 -08:00
422a095647 Add (preflight) simulation to BanksClient (#22084)
* Add more-legitimate conversion from legacy Transaction to SanitizedTransaction

* Add Banks method with preflight checks

* Expose BanksClient method with preflight checks

* Unwrap simulation err

* Add Bank simulation method that works on unfrozen Banks

* Add simpler api

* Better name: BanksTransactionResultWithSimulation
2021-12-28 19:25:46 +00:00
e61a736d44 Revert host
Revert host
2021-12-28 18:22:20 +03:00
f2c51653e4 Update init
Since the hostname was changed
2021-12-28 17:35:45 +03:00
e3b20c443a Use load_accounts_data_len() instead of Atomic .load() (#22144) 2021-12-28 13:38:20 +00:00
800472ddf5 Add AccountsDataMeter to InvokeContext (#21813) 2021-12-28 05:14:48 -06:00
e1a0660948 stopped explorer_preview.yml
After connecting the repo to Vercel we have to specify the root directory in the Vercel settings
2021-12-28 11:38:54 +05:30
a06646631c Feature: TransactionContext, InstructionContext and BorrowedAccount (#21706)
* Adds TransactionContext, InstructionContext and BorrowedAccount.

* Redirects the usage of accounts in InvokeContext through TransactionContext.
Also use the types declared in transaction_context.rs everywhere.

* Adjusts all affected tests.
2021-12-27 18:49:32 +01:00
bb97c8fdcd Bump zstd from 0.9.0+zstd.1.5.0 to 0.9.1+zstd.1.5.1 (#22105)
* Bump zstd from 0.9.0+zstd.1.5.0 to 0.9.1+zstd.1.5.1

Bumps [zstd](https://github.com/gyscos/zstd-rs) from 0.9.0+zstd.1.5.0 to 0.9.1+zstd.1.5.1.
- [Release notes](https://github.com/gyscos/zstd-rs/releases)
- [Commits](https://github.com/gyscos/zstd-rs/compare/0.9.0...0.9.1)

---
updated-dependencies:
- dependency-name: zstd
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-25 21:26:55 -07:00
f643a8b425 docs: fix typo (#22116) 2021-12-25 21:12:06 -07:00
0a0fc85282 Add PubsubClient::vote_subscribe 2021-12-25 13:21:05 -08:00
77b26b62e0 Bump anyhow from 1.0.51 to 1.0.52 (#22106)
* Bump anyhow from 1.0.51 to 1.0.52

Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.51 to 1.0.52.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.51...1.0.52)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-25 12:22:57 -07:00
cc947cad03 Refactor: CPI Instruction Recording (#22111)
* Unifies all InstructionRecorders of a transaction into one.

* Stops explicitly compiling CPI instructions for recording,
uses the indices gathered from instruction_accounts instead.
2021-12-25 13:35:43 +01:00
60ddd93d09 Cleanup: invoke_context.rs (#22107)
* Removes message::Message from invoke_context.rs completely.

* Simplifies sol_invoke_signed() of program-test

* Start search for non program accounts at front (instead of the back).
Program and programdata accounts use rposition(), everything else uses position().
2021-12-25 10:00:40 +01:00
b89cd8cd1a Avoid cloning Vec<Entry> when calling entries_to_test_shreds() (#22093) 2021-12-24 12:32:43 -08:00
2ab4f34c02 Refactor: Remove Message and CompiledInstruction from InvokeContext interfaces (#22102)
* Introduces InstructionAccount which is like AccountMeta but uses an index instead of a Pubkey

* Renames InvokeContext::create_message() to InvokeContext::prepare_instruction()

* Removes Message and CompiledInstruction from InvokeContext interfaces.

* Resolves TODOs of sol_invoke_signed() in program-test.

* Moves CompiledInstruction::visit_each_account() into invoke_context.rs
2021-12-24 16:17:55 +01:00
214b561a28 banks-client: Update return type to BanksClientError (#22058) 2021-12-24 09:44:19 -05:00
ec7536faf6 Add test to enforce that program id account info for CPI is optional (#22069)
* Update tests to demonstrate that program id account info for CPI is optional

* Clean up comments that say that program id account info is required
2021-12-24 00:43:15 +01:00
b93ab5d295 validator: add contact-info query to admin port 2021-12-23 20:50:21 +00:00
d06c04d02c Update checks.rs 2021-12-23 05:14:37 -08:00
52c1eb0160 Remove msg spam from deploying 2021-12-23 05:14:37 -08:00
93c776ce19 Refactor packet deduplication and harden bench test (#22080) 2021-12-22 23:05:10 -06:00
f10407dbc3 Update release documentation (#22086) 2021-12-22 22:51:32 -06:00
298c2d0f62 Display bpf-tools version in cargo-build-bpf version string (#22061)
* Display bpf-tools version in cargo-build-bpf version string

* Print cargo-build-bpf version in CI for reference in stable-bpf jobs
2021-12-22 23:10:25 +00:00
67c8034fe5 Update jsonrpc-api.md to document 'owner' property (#22074)
* Update jsonrpc-api.md to document 'owner' property

Documents 'owner' property on the token balances struct.

* Update docs/src/developing/clients/jsonrpc-api.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-12-22 21:48:27 +00:00
dd80a525ef Leader QoS service metrics (#21708)
* - qos_service metrics tagged with leader thread ids to separate gossip/tpu votes and transactions;
- qos_service metrics is reported with bank slot;
- replaced timer-based reporting with signal via channel; removed async report test as qos_service now lives within a thread

* - add tpu live packets (eg, not buffered packets) states to qos metrics reporting
2021-12-22 21:39:59 +00:00
2be139ca61 Disk Buckets: cleanup api (is_free) (#22068) 2021-12-22 15:12:34 -06:00
37f6777ceb Increment execution timings on errors as well (#22053) 2021-12-22 15:07:07 -05:00
f67ecd5c18 removes unused Cargo dependencies (#22022)
Also moving some to [dev-dependencies] so that they are not propagated
to other packages which depend on the package.
2021-12-22 17:01:33 +00:00
61cc7b10a9 AcctIdx: respect disk idx mem size param (#22050) 2021-12-22 09:54:05 -06:00
4d62f03297 uses enum instead of trait for VoteTransaction (#22019)
Box<dyn Trait> involves runtime dispatch, has significant overhead and
is slow. It also requires hacky boilerplate code for implementing Clone
or other basic traits:
https://github.com/solana-labs/solana/blob/e92a81b74/programs/vote/src/vote_state/mod.rs#L70-L102

Only a limited set of known types can be VoteTransaction, and they are all defined
in the same crate. So using a trait here only adds overhead.
https://github.com/solana-labs/solana/blob/e92a81b74/programs/vote/src/vote_state/mod.rs#L125-L165
https://github.com/solana-labs/solana/blob/e92a81b74/programs/vote/src/vote_state/mod.rs#L221-L264
2021-12-22 14:25:46 +00:00
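A minimal sketch of the enum-over-trait-object shape; the variant fields and the `last_voted_slot` accessor below are simplified assumptions, not the real `VoteTransaction` definition.

```rust
#[derive(Clone, Debug)]
struct Vote {
    slots: Vec<u64>, // assumed fields for illustration
}

#[derive(Clone, Debug)]
struct VoteStateUpdate {
    root: Option<u64>, // assumed fields for illustration
}

// Enum instead of Box<dyn Trait>: Clone/Debug derive for free, and every call is
// a static match rather than a vtable dispatch.
#[derive(Clone, Debug)]
enum VoteTransaction {
    Vote(Vote),
    VoteStateUpdate(VoteStateUpdate),
}

impl VoteTransaction {
    fn last_voted_slot(&self) -> Option<u64> {
        match self {
            VoteTransaction::Vote(vote) => vote.slots.last().copied(),
            VoteTransaction::VoteStateUpdate(update) => update.root,
        }
    }
}

fn main() {
    let tx = VoteTransaction::Vote(Vote { slots: vec![1, 2, 3] });
    let copy = tx.clone(); // derived, no hand-written boilerplate
    assert_eq!(copy.last_voted_slot(), Some(3));
}
```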
d6de4a2f4e Fix transaction pk violation (#22057)
* Handle PK violation issue for transaction notification. The transaction might be replayed due to
validator restart.
2021-12-22 00:58:51 -08:00
bf8fbf8383 Add code comment for handle_chaining in blockstore.rs (#21876) 2021-12-21 22:36:24 -08:00
3155a04189 Add comment block for insert_shreds_handle_duplicate in blockstore.rs (#21877) 2021-12-21 22:36:13 -08:00
ddde356b33 bump tarpc from 0.26.2 to 0.27.2 and add BanksClientError (#21739)
* chore: bump tarpc from 0.26.2 to 0.27.2

Bumps [tarpc](https://github.com/google/tarpc) from 0.26.2 to 0.27.2.
- [Release notes](https://github.com/google/tarpc/releases)
- [Changelog](https://github.com/google/tarpc/blob/master/RELEASES.md)
- [Commits](https://github.com/google/tarpc/commits)

---
updated-dependencies:
- dependency-name: tarpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

* Accommodate breaking changes

* Reword incorrect error message

* Add error module

* Revert client Error type to io::Error; easy transition to BanksClientError

* Bump tracing crates in programs

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-12-22 02:31:02 +00:00
7cc6262b5a Bump bpf-tools to v1.21 2021-12-21 16:40:42 -08:00
bdae2993e0 AcctIdx: hold ranges in memory uses multiple threads (#22031) 2021-12-21 17:31:48 -06:00
5b464a32f5 Disk Buckets: unlock verifies expected (#22052) 2021-12-21 16:35:59 -06:00
9c5d82557a skip reporting all-zero stats 2021-12-21 16:20:36 -06:00
1c0cd2cbb4 DiskBuckets: remove unnecessary atomic on uid (#22039) 2021-12-21 15:50:39 -06:00
b36f7151fc Disk buckets: abstract UID_UNLOCKED (#22051) 2021-12-21 14:51:38 -06:00
711856cad3 disk buckets: clone_from_slice -> copy_from_slice (#22038) 2021-12-21 13:52:03 -06:00
84eaaae062 disk_buckets: factor out unsafe code (#22028) 2021-12-21 13:50:04 -06:00
d896ff74ec Remove Apple M1 resolver workaround 2021-12-21 08:30:36 -08:00
ba8e15848e Fix #21986 (#22035)
* Partial revert "Updates documentation around what needs to be passed in CPI. (#21633)"

* Enforces the program_id being passed explicitly by removing it from get_instruction_keyed_accounts().

* instruction_accounts => instructions_account
2021-12-21 12:53:22 +01:00
41ec7c8be9 chore: bump num_cpus from 1.13.0 to 1.13.1 (#22044)
* chore: bump num_cpus from 1.13.0 to 1.13.1

Bumps [num_cpus](https://github.com/seanmonstar/num_cpus) from 1.13.0 to 1.13.1.
- [Release notes](https://github.com/seanmonstar/num_cpus/releases)
- [Changelog](https://github.com/seanmonstar/num_cpus/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/num_cpus/compare/v1.13.0...v1.13.1)

---
updated-dependencies:
- dependency-name: num_cpus
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-21 00:06:14 -07:00
252b0554ca docs: corrected the grammar (#22046) 2021-12-21 06:50:36 +00:00
69d0b08dd8 chore: bump lru from 0.7.0 to 0.7.1 (#22018)
Bumps [lru](https://github.com/jeromefroe/lru-rs) from 0.7.0 to 0.7.1.
- [Release notes](https://github.com/jeromefroe/lru-rs/releases)
- [Changelog](https://github.com/jeromefroe/lru-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jeromefroe/lru-rs/compare/0.7.0...0.7.1)

---
updated-dependencies:
- dependency-name: lru
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-20 21:56:56 -07:00
3356c6afb2 chore: bump signal-hook from 0.3.12 to 0.3.13 (#22037)
Bumps [signal-hook](https://github.com/vorner/signal-hook) from 0.3.12 to 0.3.13.
- [Release notes](https://github.com/vorner/signal-hook/releases)
- [Changelog](https://github.com/vorner/signal-hook/blob/master/CHANGELOG.md)
- [Commits](https://github.com/vorner/signal-hook/compare/v0.3.12...v0.3.13)

---
updated-dependencies:
- dependency-name: signal-hook
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-20 21:40:16 -07:00
2347f65133 The sidebar for the plugin doc was showing the item as "Overview"; corrected the styles (#22033) 2021-12-20 17:26:43 -08:00
755e816521 chore: bump bytemuck from 1.7.2 to 1.7.3 (#22032)
* chore: bump bytemuck from 1.7.2 to 1.7.3

Bumps [bytemuck](https://github.com/Lokathor/bytemuck) from 1.7.2 to 1.7.3.
- [Release notes](https://github.com/Lokathor/bytemuck/releases)
- [Changelog](https://github.com/Lokathor/bytemuck/blob/main/changelog.md)
- [Commits](https://github.com/Lokathor/bytemuck/compare/v1.7.2...v1.7.3)

---
updated-dependencies:
- dependency-name: bytemuck
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-20 16:34:55 -07:00
f5d1115468 Add deactivation cooldown before address lookup tables can be closed (#22011) 2021-12-20 17:33:46 -06:00
c0c3d7c1f2 fix: add publickey toJSON (#22004) 2021-12-20 15:16:32 -06:00
e810400716 chore: bump futures from 0.3.18 to 0.3.19 (#22020)
* chore: bump futures from 0.3.18 to 0.3.19

Bumps [futures](https://github.com/rust-lang/futures-rs) from 0.3.18 to 0.3.19.
- [Release notes](https://github.com/rust-lang/futures-rs/releases)
- [Changelog](https://github.com/rust-lang/futures-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/futures-rs/compare/0.3.18...0.3.19)

---
updated-dependencies:
- dependency-name: futures
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-20 20:45:19 +00:00
116517fb6d Fix weird formatting of bullets (#22013) 2021-12-20 13:16:13 -07:00
b8eff3456c Update program close docs (#22026) 2021-12-20 10:30:06 -08:00
eeb063b957 chore: bump futures-util from 0.3.18 to 0.3.19 (#22017)
* chore: bump futures-util from 0.3.18 to 0.3.19

Bumps [futures-util](https://github.com/rust-lang/futures-rs) from 0.3.18 to 0.3.19.
- [Release notes](https://github.com/rust-lang/futures-rs/releases)
- [Changelog](https://github.com/rust-lang/futures-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/futures-rs/compare/0.3.18...0.3.19)

---
updated-dependencies:
- dependency-name: futures-util
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-20 09:35:34 -07:00
e92a81b741 typo: lanaguage -> language (#22009) 2021-12-19 22:33:21 -07:00
65d59f4ef0 tracks erasure coding shreds' indices explicitly (#21822)
The indices for erasure coding shreds are tied to data shreds:
https://github.com/solana-labs/solana/blob/90f41fd9b/ledger/src/shred.rs#L921

However with the upcoming changes to erasure schema, there will be more
erasure coding shreds than data shreds and we can no longer infer coding
shreds indices from data shreds.

The commit adds constructs to track coding shreds indices explicitly.
2021-12-19 22:37:55 +00:00
df6a4930b9 chore: add blockSubscribe api docs (#22002)
Co-authored-by: Zano <segfaultdoctor@protonmail.com>
2021-12-19 09:23:28 -07:00
301d585d47 chore: bump nix from 0.23.0 to 0.23.1 (#21998)
* chore: bump nix from 0.23.0 to 0.23.1

Bumps [nix](https://github.com/nix-rust/nix) from 0.23.0 to 0.23.1.
- [Release notes](https://github.com/nix-rust/nix/releases)
- [Changelog](https://github.com/nix-rust/nix/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nix-rust/nix/compare/v0.23.0...v0.23.1)

---
updated-dependencies:
- dependency-name: nix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-18 11:04:45 -07:00
7476dfeec0 removes Select in favor of recv_timeout/try_iter (#21981)
crossbeam_channel::Select::ready_timeout might return with success spuriously.
2021-12-18 17:39:07 +00:00
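A minimal sketch of the recv_timeout/try_iter pattern, shown here with `std::sync::mpsc` (whose receivers expose the same two methods as `crossbeam_channel`): block briefly for the first item, then drain whatever backlog is already queued without a `Select`.

```rust
use std::sync::mpsc::{channel, RecvTimeoutError};
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        for i in 0..5 {
            tx.send(i).unwrap();
        }
    });

    let mut batch = Vec::new();
    // Block for the first item (bounded by a timeout), then drain non-blockingly.
    match rx.recv_timeout(Duration::from_millis(200)) {
        Ok(first) => {
            batch.push(first);
            batch.extend(rx.try_iter()); // drains only what is already queued
        }
        Err(RecvTimeoutError::Timeout) | Err(RecvTimeoutError::Disconnected) => {}
    }
    println!("drained {} items", batch.len());
}
```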
3fe942ab30 new net-stats require a new table (#21996) 2021-12-18 00:13:16 -08:00
8f547a6c98 chore: bump serde from 1.0.131 to 1.0.132 (#21989)
* chore: bump serde from 1.0.131 to 1.0.132

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.131 to 1.0.132.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.131...v1.0.132)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-18 07:45:20 +00:00
7f6fb6937a Ensure AncestorHashesService selects an open port (#21919) 2021-12-18 00:44:01 -05:00
97a1fa10a6 streamer send destination metrics for repair, gossip (#21564) 2021-12-17 15:21:05 -08:00
76098dd42a RPC Block Subscription (#21787)
* add stuff

* compiling

* add notify block

* wip

* feat: add blockSubscribe pubsub method

* address PR comments

Co-authored-by: Lucas B <buffalu@jito.network>
Co-authored-by: Zano <segfaultdoctor@protonmail.com>
2021-12-17 16:03:09 -07:00
5f054cd51b Update to reed-solomon-erasure 5.0.1, to get simd-accel on M1 macs 2021-12-17 14:19:39 -08:00
68a2570ebd chore: bump digest from 0.10.0 to 0.10.1 (#21977)
* chore: bump digest from 0.10.0 to 0.10.1

Bumps [digest](https://github.com/RustCrypto/traits) from 0.10.0 to 0.10.1.
- [Release notes](https://github.com/RustCrypto/traits/releases)
- [Commits](https://github.com/RustCrypto/traits/compare/digest-v0.10.0...digest-v0.10.1)

---
updated-dependencies:
- dependency-name: digest
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-17 14:30:42 -07:00
94aa9e568a Mango instruction decoding: use generic helper from mango-client (which is often auto updated by dependabot) instead of relying on a manual instruction lookup table (#21985)
Signed-off-by: microwavedcola1 <microwavedcola@gmail.com>
2021-12-17 19:46:34 +00:00
0f6e8d3385 Check file size of snapshot_version when unarchiving snapshot (#21925) 2021-12-17 12:27:54 -06:00
70f96bda25 disk buckets: refactor (#21972) 2021-12-17 10:16:34 -06:00
8ed7ad5fa7 AcctIdx: hold range recognizes already held ranges (#21937) 2021-12-17 10:04:41 -06:00
056f2e9e67 removed explorer .travis.yml from main .travis.yml 2021-12-17 21:30:39 +05:30
729698e815 AcctIdx: items() uses held ranges (#21954) 2021-12-17 09:59:29 -06:00
af53d2f692 simplify api on reconstruct_single_storage (#21970) 2021-12-17 09:06:23 -06:00
c04737ae69 added explorer .travis.yml for PR related issues 2021-12-17 20:32:05 +05:30
89d66c3210 removes next_shred_index from return value of entries to shreds api (#21961)
next-shred-index is already readily available from returned data shreds.
The commit simplifies the api for upcoming changes to erasure coding
schema which will require explicit tracking of indices for coding shreds
as well as data shreds.
2021-12-17 15:01:55 +00:00
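A minimal sketch with an assumed toy `Shred` type (not the real one): the next shred index can be derived from the last returned data shred, so the API does not need to return it separately.

```rust
struct Shred {
    index: u32, // assumed minimal shape for illustration
}

fn next_shred_index(data_shreds: &[Shred]) -> u32 {
    data_shreds.last().map(|s| s.index + 1).unwrap_or(0)
}

fn main() {
    let shreds = vec![Shred { index: 7 }, Shred { index: 8 }];
    assert_eq!(next_shred_index(&shreds), 9);
}
```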
7ec39f5a1e time based retransmit in replay_stage (#21498) 2021-12-17 05:44:40 -08:00
66fa8f9667 Refactor: Removes Rc from Refcell<AccountSharedData> in the program-runtime (#21927)
* Removes Rc from Rc<RefCell<AccountSharedData>> in the program-runtime.

* Adjusts tests in bpf_loader, system_instruction_processor, config_processor, vote_instruction and stake_instruction
2021-12-17 14:01:12 +01:00
56ec5245cc chore: bump tokio from 1.14.0 to 1.15.0 (#21966)
* chore: bump tokio from 1.14.0 to 1.15.0

Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.14.0 to 1.15.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.14.0...tokio-1.15.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-17 00:29:49 -07:00
c4389a6675 AcctIdx: held ranges search in lifo order (#21964) 2021-12-16 23:25:22 -06:00
5d40da5688 log "next_id" to track append vec ids (#21971) 2021-12-16 22:41:34 -06:00
6374995522 AcctIdx: share bucket map size for perf (#21935) 2021-12-16 21:25:54 -06:00
ba777f4f56 AcctIdx: remove Option from held ranges (#21958) 2021-12-16 21:22:04 -06:00
385efae4b3 Remove need to send bank in retransmit request from ReplayStage (#21943)
* Remove need to send bank in retransmitter
2021-12-16 21:11:01 -05:00
e11a1911ad load_accounts_index_for_shrink ignores cached entries (#21951) 2021-12-16 16:37:08 -06:00
6ff0be6a82 Clean up demote program write lock feature (#21949)
* Clean up demote program write lock feature

* fix test
2021-12-16 17:27:22 -05:00
a5769c029f chore: bump tar from 0.4.37 to 0.4.38 (#21921)
* chore: bump tar from 0.4.37 to 0.4.38

Bumps [tar](https://github.com/alexcrichton/tar-rs) from 0.4.37 to 0.4.38.
- [Release notes](https://github.com/alexcrichton/tar-rs/releases)
- [Commits](https://github.com/alexcrichton/tar-rs/compare/0.4.37...0.4.38)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-16 15:06:56 -07:00
347323cbb2 use bg thread pool for shrink (#21950) 2021-12-16 15:54:38 -06:00
3398f5a2f5 Update getSignaturesForAddress and getConfirmedSignaturesForAddress2 RPC call description (#21955)
* Update jsonrpc-api.md

* Update docs/src/developing/clients/jsonrpc-api.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Wrap 80chars

* Update docs/src/developing/clients/jsonrpc-api.md

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-12-16 19:19:41 +00:00
efd64a3862 simplifies ShredIndex api (#21932) 2021-12-16 19:17:32 +00:00
e83ca4bb28 Clean up test_ledger_cleanup_compaction prints (#21875)
- Use info!()/warn!() over println!()/eprintln!()
- Make status prints consistent
- Add default RUST_LOG filter to see test printouts
- Adjust reported data to show shreds and rates we care about
2021-12-16 11:24:29 -06:00
e97da0ea15 AcctIdx: define type for serialized AppendVecId (#21938) 2021-12-16 08:41:01 -06:00
18417e410e AcctIdx: remove troublesome assert (#21947) 2021-12-16 08:37:05 -06:00
8183f28636 adds ErasureSetId identifying erasure coding sets of shreds (#21928) 2021-12-16 14:18:55 +00:00
49cb161203 Fixes the calculation of the "compute_meter_consumption" across process_instruction() and process_message(). (#21944) 2021-12-16 15:15:58 +01:00
82672b40fd AcctIdx: streamline metric update (#21936) 2021-12-15 19:52:23 -06:00
1e0d3f13e6 AcctIdx: fix metrics bug (#21934) 2021-12-15 17:05:38 -06:00
635337d2ff Bank gets accounts data len delta from MessageProcessor::process_message() 2021-12-15 16:41:38 -06:00
46e5350d8c AcctInfo: store offset in AccountInfo as u32 (#21895) 2021-12-15 15:41:11 -06:00
e5be96d8bf AcctIdx: consolidate next_id calls (#21929) 2021-12-15 15:39:54 -06:00
882f886450 Add comment block for commit_slot_meta_working_set in blockstore.rs (#21852) 2021-12-15 13:12:50 -08:00
e374fb1d60 Add code comment for get_slot_meta_entry in blockstore.rs (#21844) 2021-12-15 13:12:38 -08:00
5fb7da12f2 add caching_enabled option to test-validator 2021-12-15 11:45:31 -08:00
ed924e3bc4 Update argument name 2021-12-15 11:05:02 -08:00
9b06d64eb8 Add option to load accounts from file
This introduces the `--clone-from-file` option for
solana-test-validator. It allows specifying any number of files
(without extension) containing account info and data, which will be
loaded at genesis. This is similar to `--bpf-program` for programs
loading.

The files will be searched for in the CWD or in `tests/fixtures`.

Example: `solana-test-validator --clone-from-file SRM_token USD_token`
2021-12-15 11:05:02 -08:00
0e9e67b65d Add complete account dump to file
This commit introduces the ability to dump the complete content of an
account to a JSON file (compact or not depending on the provided format
option).

Example:

```sh
solana account -u m \
  --output json-compact \
  --output-file SRM_token.json \
  SRMuApVNdxXokk5GT7XD5cUUgXMBCoAz2LHeuAoKWRt
```

Note: Behavior remains untouched if format option `--output` is not
provided (only account data gets written to file).
2021-12-15 11:05:02 -08:00
9797af6f85 chore: bump backoff from 0.3.0 to 0.4.0 (#21922)
Bumps [backoff](https://github.com/ihrwein/backoff) from 0.3.0 to 0.4.0.
- [Release notes](https://github.com/ihrwein/backoff/releases)
- [Commits](https://github.com/ihrwein/backoff/compare/v0.3.0...v0.4.0)

---
updated-dependencies:
- dependency-name: backoff
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-15 11:20:04 -07:00
41dd31e5f4 AcctIdx: move cached store id to bit (#21892)
* AcctIdx: move cached store id to bit

* add comments/rename
2021-12-15 11:49:24 -06:00
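A minimal sketch of the idea in the commit above, with a hypothetical layout (a u32 store id whose top bit marks "cached"); the real encoding lives in the accounts-index code.

```rust
// Hypothetical layout: steal the top bit of a u32 store id to mark "cached",
// instead of carrying a separate bool alongside the id.
const CACHED_BIT: u32 = 1 << 31;

fn encode(store_id: u32, cached: bool) -> u32 {
    assert!(store_id < CACHED_BIT, "id must fit in 31 bits");
    if cached { store_id | CACHED_BIT } else { store_id }
}

fn is_cached(encoded: u32) -> bool {
    encoded & CACHED_BIT != 0
}

fn store_id(encoded: u32) -> u32 {
    encoded & !CACHED_BIT
}

fn main() {
    let e = encode(42, true);
    assert!(is_cached(e));
    assert_eq!(store_id(e), 42);
}
```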
02fa135815 AcctIdx: create test fn get_test() to isolate changes to AcctIdx::get() (#21909) 2021-12-15 09:09:56 -06:00
71b12b1f56 Add comment for clear_unconfirmed_slot() in blockstore.rs (#21837) 2021-12-15 00:37:09 -08:00
e476e17abf Add code comment for check_insert_data_shred in blockstore.rs (#21845) 2021-12-15 00:36:11 -08:00
e124659aca Restore solana_validator::test_validator export 2021-12-15 00:22:27 -08:00
c2a94a8fb0 add accountsdb-plugin-config to test-validator 2021-12-14 23:42:55 -08:00
8d22ca5076 Add helper crate to generate syscalls.txt 2021-12-14 21:20:13 -08:00
dcd2854829 Add json support for feature sets; also print output after feature list (#21905)
* Add json support for feature sets; also print output after feature list

* Move stringifying into Display implementation
2021-12-15 05:11:08 +00:00
7ba27e5cae Update openssl-src package to resolve cargo audit complaint 2021-12-14 19:04:59 -08:00
2a6dcb2ffd Futures 0.3.18 has been yanked, back off to .17 2021-12-14 14:14:43 -08:00
dcb5849484 Document solana_program::instruction (#21817)
* Document solana_program::instruction

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-12-14 21:35:22 +00:00
e4f7af0b48 Eagerly receive records in sleepy tick producer (#21832) 2021-12-14 16:25:28 -05:00
9da826421a Don't publish rbpf-cli to crates.io 2021-12-14 12:12:47 -08:00
003ba1f092 Don't publish poh-bench to crates.io 2021-12-14 12:09:39 -08:00
f4308bdb64 AcctIdx: StoredSize is u32 (#21894) 2021-12-14 13:50:19 -06:00
cb395abff7 Fix subtraction overflow (#21871) 2021-12-14 14:24:22 -05:00
e694acaf5f AcctIdx: better types for AccountInfo (#21893) 2021-12-14 13:08:49 -06:00
8d980f07ba uses Option<Slot> for SlotMeta.parent_slot (#21808)
SlotMeta.parent_slot for the head of a detached chain of slots is
unknown and that is indicated by u64::MAX which lacks type-safety:
https://github.com/solana-labs/solana/blob/6c108c8fc/ledger/src/blockstore_meta.rs#L203-L205

The commit changes the type to Option<Slot>. Backward compatibility is
maintained by customizing serde serialize/deserialize implementations.
2021-12-14 18:57:11 +00:00
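A minimal sketch of one way to keep the serialized format while switching the field to `Option<Slot>`, assuming `serde` with the derive feature as a dependency; the sentinel value and the adapter module below are illustrative, not the actual blockstore implementation.

```rust
use serde::{Deserialize, Deserializer, Serialize, Serializer};

type Slot = u64;

// Sentinel-aware serde adapter: None is written as the legacy u64::MAX sentinel,
// and the sentinel is read back as None, so old and new readers stay compatible.
mod parent_slot_compat {
    use super::*;

    pub fn serialize<S: Serializer>(v: &Option<Slot>, s: S) -> Result<S::Ok, S::Error> {
        v.unwrap_or(Slot::MAX).serialize(s)
    }

    pub fn deserialize<'de, D: Deserializer<'de>>(d: D) -> Result<Option<Slot>, D::Error> {
        let raw = Slot::deserialize(d)?;
        Ok(if raw == Slot::MAX { None } else { Some(raw) })
    }
}

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct SlotMetaCompat {
    #[serde(with = "parent_slot_compat")]
    parent_slot: Option<Slot>,
}
```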
d13a5056f1 Typo (#21898)
Sentence grammatically incomplete. "Typo"
2021-12-14 18:04:38 +00:00
4ceb2689f5 adds ShredId uniquely identifying each shred (#21820) 2021-12-14 17:34:02 +00:00
426f2507d5 chore: bump serde_yaml from 0.8.21 to 0.8.23 (#21867)
* chore: bump serde_yaml from 0.8.21 to 0.8.23

Bumps [serde_yaml](https://github.com/dtolnay/serde-yaml) from 0.8.21 to 0.8.23.
- [Release notes](https://github.com/dtolnay/serde-yaml/releases)
- [Commits](https://github.com/dtolnay/serde-yaml/compare/0.8.21...0.8.23)

---
updated-dependencies:
- dependency-name: serde_yaml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-14 10:23:26 -07:00
ec583bd12d AcctIdex: use StorageLocation (#21853) 2021-12-14 10:27:17 -06:00
b610e5503e Fixed a typo in the SQL statement (#21872)
* Fixed a typo in the SQL statement

* Fixed two additional errors in the postgres database objects
2021-12-14 08:26:59 -08:00
e19c7923c3 chore: fix typo in shred.rs (#21890)
begining -> beginning
2021-12-14 11:19:27 -05:00
509bcd2e74 Bump rbpf to v0.2.19 (#21880)
* Bump rbpf to v0.2.19

Co-authored-by: Alexander Meißner <AlexanderMeissner@gmx.net>
2021-12-14 16:51:23 +01:00
a86fe899ac AcctIdx: move zero lamport out of accounts index (#21526) 2021-12-14 09:31:42 -06:00
4adc8b133f Refactor: Remove Rc from PreAccount and InvokeContext::get_account() (#21882)
* Removes Rc and RefCell from PreAccount

* Splits get_account() into find_index_of_account() and get_account_at_index()
in order to remove Rc from return type.
2021-12-14 15:44:31 +01:00
c92c09a8be Remove activated feature for removing inactive delegations from stakes cache (#21732)
* Remove activated feature for removing inactive delegations from stakes cache

* Fix builtin purging
2021-12-14 09:23:36 -05:00
e5476913fe Remove activated feature that checks tx signature len (#21747) 2021-12-14 09:23:05 -05:00
746869fdac Add missing word "that" (#21878) 2021-12-14 09:20:31 -05:00
033106ed81 Add solana-cli-config link to rust-api.md (#21840) 2021-12-14 00:33:10 -07:00
8a63812c4e chore: bump libc from 0.2.109 to 0.2.112 (#21870)
* chore: bump libc from 0.2.109 to 0.2.112

Bumps [libc](https://github.com/rust-lang/libc) from 0.2.109 to 0.2.112.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.109...0.2.112)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-14 00:27:57 -07:00
4a9d7318d1 Rework test parameters to be shreds instead of entries (#21780)
The number of shreds that result from a given number of entries is
variable and in our test case, somewhat unintuitive to think about when
trying to determine how much data we're pushing into the blockstore. So,
this change converts the unit of test parameters from entries to shreds.

This change also cleans up some variable naming and prints for clarity.
2021-12-13 23:34:43 -06:00
018b54dbd7 chore: bump serde_json from 1.0.72 to 1.0.73 (#21856)
* chore: bump serde_json from 1.0.72 to 1.0.73

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.72 to 1.0.73.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.72...v1.0.73)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-14 04:08:22 +00:00
3202cc7eef chore:(deps): bump typescript from 4.5.3 to 4.5.4 in /explorer (#21868)
Bumps [typescript](https://github.com/Microsoft/TypeScript) from 4.5.3 to 4.5.4.
- [Release notes](https://github.com/Microsoft/TypeScript/releases)
- [Commits](https://github.com/Microsoft/TypeScript/compare/v4.5.3...v4.5.4)

---
updated-dependencies:
- dependency-name: typescript
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-14 03:36:58 +00:00
6fc6673ead AcctInfo: store_id private and add accessor (#21839) 2021-12-13 21:35:30 -06:00
f402cbe64e chore:(deps): bump @solana/spl-token-registry in /explorer (#21865)
Bumps [@solana/spl-token-registry](https://github.com/solana-labs/token-list) from 0.2.804 to 0.2.810.
- [Release notes](https://github.com/solana-labs/token-list/releases)
- [Changelog](https://github.com/solana-labs/token-list/blob/main/CHANGELOG.md)
- [Commits](https://github.com/solana-labs/token-list/compare/v0.2.804...v0.2.810)

---
updated-dependencies:
- dependency-name: "@solana/spl-token-registry"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-14 03:27:10 +00:00
98e5ea9dce AcctIdx: simplify AccountIndexGetResult (#21857) 2021-12-13 21:16:17 -06:00
17cd14ad88 chore: bump fd-lock from 3.0.1 to 3.0.2 (#21848)
Bumps [fd-lock](https://github.com/yoshuawuyts/fd-lock) from 3.0.1 to 3.0.2.
- [Release notes](https://github.com/yoshuawuyts/fd-lock/releases)
- [Commits](https://github.com/yoshuawuyts/fd-lock/commits)

---
updated-dependencies:
- dependency-name: fd-lock
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-14 02:58:19 +00:00
ed4897d715 chore:(deps): bump sass from 1.44.0 to 1.45.0 in /explorer (#21862)
Bumps [sass](https://github.com/sass/dart-sass) from 1.44.0 to 1.45.0.
- [Release notes](https://github.com/sass/dart-sass/releases)
- [Changelog](https://github.com/sass/dart-sass/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sass/dart-sass/compare/1.44.0...1.45.0)

---
updated-dependencies:
- dependency-name: sass
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-14 02:45:20 +00:00
fd212fd2a4 Add notes about new v1.9 rpc apis 2021-12-13 19:02:21 -07:00
eebaf89874 Remove old notes referring to EOL versions 2021-12-13 19:02:21 -07:00
bed1b143a5 Restore ALL behavior; add enum variant, comments, and help text to make behavior clearer (#21854) 2021-12-13 19:00:29 -07:00
a8ba979360 chore:(deps): bump @blockworks-foundation/mango-client in /explorer (#21860)
Bumps [@blockworks-foundation/mango-client](https://github.com/blockworks-foundation/mango-client-v3) from 3.2.15 to 3.2.16.
- [Release notes](https://github.com/blockworks-foundation/mango-client-v3/releases)
- [Commits](https://github.com/blockworks-foundation/mango-client-v3/commits)

---
updated-dependencies:
- dependency-name: "@blockworks-foundation/mango-client"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-14 01:41:18 +00:00
3bc3af2332 chore:(deps): bump @solana/spl-token-registry in /explorer (#21855)
Bumps [@solana/spl-token-registry](https://github.com/solana-labs/token-list) from 0.2.801 to 0.2.804.
- [Release notes](https://github.com/solana-labs/token-list/releases)
- [Changelog](https://github.com/solana-labs/token-list/blob/main/CHANGELOG.md)
- [Commits](https://github.com/solana-labs/token-list/compare/v0.2.801...v0.2.804)

---
updated-dependencies:
- dependency-name: "@solana/spl-token-registry"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-14 00:07:14 +00:00
50f26ea9c0 AcctInfo: create AcctInfo with cache explicitly (#21847) 2021-12-13 17:35:07 -06:00
b81124deec chore: bump etcd-client from 0.8.1 to 0.8.2 (#21825)
Bumps [etcd-client](https://github.com/etcdv3/etcd-client) from 0.8.1 to 0.8.2.
- [Release notes](https://github.com/etcdv3/etcd-client/releases)
- [Commits](https://github.com/etcdv3/etcd-client/compare/0.8.1...0.8.2)

---
updated-dependencies:
- dependency-name: etcd-client
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-13 16:20:22 -07:00
5a28f61f49 chore:(deps): bump @solana/spl-token-registry in /explorer (#21846)
Bumps [@solana/spl-token-registry](https://github.com/solana-labs/token-list) from 0.2.720 to 0.2.801.
- [Release notes](https://github.com/solana-labs/token-list/releases)
- [Changelog](https://github.com/solana-labs/token-list/blob/main/CHANGELOG.md)
- [Commits](https://github.com/solana-labs/token-list/compare/v0.2.720...v0.2.801)

---
updated-dependencies:
- dependency-name: "@solana/spl-token-registry"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-13 20:44:59 +00:00
6155ef6377 AcctInfo: make offset private, add accessor (#21838) 2021-12-13 14:43:26 -06:00
8aa3d690b5 chore: bump dashmap from 4.0.2 to 5.0.0 (#21824)
* chore: bump dashmap from 4.0.2 to 5.0.0

Bumps [dashmap](https://github.com/xacrimon/dashmap) from 4.0.2 to 5.0.0.
- [Release notes](https://github.com/xacrimon/dashmap/releases)
- [Commits](https://github.com/xacrimon/dashmap/commits/v5.0.0)

---
updated-dependencies:
- dependency-name: dashmap
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-13 13:23:58 -07:00
d5879bafe9 chore:(deps): bump @sentry/react from 6.15.0 to 6.16.1 in /explorer (#21835)
Bumps [@sentry/react](https://github.com/getsentry/sentry-javascript) from 6.15.0 to 6.16.1.
- [Release notes](https://github.com/getsentry/sentry-javascript/releases)
- [Changelog](https://github.com/getsentry/sentry-javascript/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-javascript/compare/6.15.0...6.16.1)

---
updated-dependencies:
- dependency-name: "@sentry/react"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-13 17:36:20 +00:00
9e9a1a4876 AcctIdx: AccountInfo.lamports private (#21819) 2021-12-13 10:59:33 -06:00
c9e0bde407 AcctIdx: AccountInfo::stored_size private (#21821) 2021-12-13 10:10:06 -06:00
4bc5bfb2df Addressing leftover comments from #21531 (#21782)
* Addressing leftover comments from #21531

* Add feature flag

* Feature gate new vote instruction

* add clock & slot hashes sysvar to test
2021-12-13 07:52:22 -08:00
1149c1880d cli: Order displayed feature list by status (#21810) 2021-12-13 07:42:57 -05:00
90f41fd9b7 use cost model to limit new account creation (#21369)
* use cost model to limit new account creation

* handle every system instruction

* remove &

* simplify match

* simplify match

* add datapoint for account data size

* add postgres error handling

* handle accounts:unlock_accounts
2021-12-12 14:57:18 -06:00
025a5a3b9c AccountInfo: construct with new() (#21788) 2021-12-12 14:36:05 -06:00
825f8bcea4 use AppendVecId instead of usize (#21792) 2021-12-11 21:38:13 -06:00
c5b6ea74ef replace some .lamports checks with trait (#21783) 2021-12-11 21:37:33 -06:00
4c0373b72a introduce AtomicAppendVecId (#21793) 2021-12-11 21:34:35 -06:00
a400b5e63d chore: bump async-trait from 0.1.51 to 0.1.52 (#21765)
* chore: bump async-trait from 0.1.51 to 0.1.52

Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.51 to 0.1.52.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.51...0.1.52)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-11 21:38:33 +00:00
ef7f49a21d move AccountInfo to its own file (#21784) 2021-12-11 11:47:05 -06:00
c5c699a918 Remove the 5 integer msg! form 2021-12-11 09:37:11 -08:00
eeb97fe7ce Add accounts_data_len to Bank (#21781) 2021-12-11 09:34:38 -06:00
e08139f949 uses Option<u64> for SlotMeta.last_index (#21775)
SlotMeta.last_index may be unknown and current code is using u64::MAX to
indicate that:
https://github.com/solana-labs/solana/blob/6c108c8fc/ledger/src/blockstore_meta.rs#L169-L174

This lacks type-safety and can introduce bugs if not always checked for.
Several instances of slot_meta.last_index + 1 are also subject to
overflow.

This commit updates the type to Option<u64>. Backward compatibility is
maintained by customizing serde serialize/deserialize implementations.
2021-12-11 14:47:20 +00:00
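A minimal sketch of why the sentinel is hazardous, using the assumed `u64::MAX` sentinel from the commit above: `last_index + 1` wraps (or panics in debug builds) exactly in the "unknown" case, whereas the `Option` form keeps "unknown" explicit.

```rust
type Slot = u64;

fn next_index_sentinel(last_index: Slot) -> Slot {
    // Wrapping used here to show the hazard; a plain `+ 1` would panic in debug builds.
    last_index.wrapping_add(1) // silently becomes 0 when last_index == u64::MAX
}

fn next_index_option(last_index: Option<Slot>) -> Option<Slot> {
    last_index.map(|i| i + 1) // "unknown" stays unknown instead of becoming 0
}

fn main() {
    assert_eq!(next_index_sentinel(Slot::MAX), 0); // looks like a real index
    assert_eq!(next_index_option(None), None);
    assert_eq!(next_index_option(Some(41)), Some(42));
}
```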
254ef3e7b6 Rename Packets to PacketBatch (#21794) 2021-12-11 09:44:15 -05:00
379e3ec848 Add Accountsdb plugin documentations (#21746)
Add the public facing documentation about the plugin framework: explaining the interface, how to load a plugin, and the example PostgreSQL plugin implementation.
Updated the rust documentation for the plugin interfaces for accounts and slot.
These changes are targeted for v1.8. Information about transactions will be updated later.
2021-12-10 15:54:40 -08:00
2bbe1d875a Deserialize accounts before acquiring stakes cache lock (#21733)
* Deserialize stored accounts before locking stakes cache

* fix test
2021-12-10 16:02:35 -05:00
9b41ddd9ba Add address lookup table program (#21616)
* Add address lookup table program

* feedback
2021-12-10 16:02:16 -05:00
a5a0dabe7b Bump solana_rbpf to version v0.2.18 (#21774) 2021-12-10 21:31:14 +01:00
49ba09b333 adds back ErasureMeta::first_coding_index field (#21623)
https://github.com/solana-labs/solana/pull/16646
removed first_coding_index since the field is currently redundant and
always equal to fec_set_index.
However, with upcoming changes to erasure coding schema, this will no
longer be the same as fec_set_index and so requires a separate field to
represent.
2021-12-10 20:08:04 +00:00
ec7e17787e Compute accounts data len during generate_index() (#21757) 2021-12-10 13:27:59 -06:00
15a9fa6f53 Update to Rust 1.57.0 2021-12-10 11:07:19 -08:00
80eae20610 Fix master fmt 2021-12-10 11:33:39 -07:00
350845c513 Move type alias and use it more broadly (#21763) 2021-12-10 10:46:21 -07:00
65194c7ae8 Add NUM_WRITERS to ledger_cleanup to enable multiple writers. (#21729)
Summary:
* Add NUM_WRITERS to ledger_cleanup to enable multiple writers.
  (Note that our insert_shreds() is still single threaded because
   it has a lock that limits only one writer at a time.)

* Make pre-generated slots more performant by directly inserting
  into the shared queue.  Otherwise, the main-thread which
  prepares the slots will be slower than the writers.

* Correct the shred insertion time -- before this diff it did not
  wait for joining all writer threads.
2021-12-10 09:42:51 -08:00
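A minimal std-only sketch of the multi-writer shape described above: NUM_WRITERS threads drain a pre-filled shared queue, and the elapsed time is taken only after every writer has joined. The names, the queue type, and the "work" done per item are illustrative, not the actual ledger_cleanup benchmark.

```rust
use std::{
    collections::VecDeque,
    sync::{Arc, Mutex},
    thread,
    time::Instant,
};

const NUM_WRITERS: usize = 4;

fn main() {
    // Pre-generate all work on the main thread and push it straight into the
    // shared queue, so the producer never lags behind the writer threads.
    let queue: Arc<Mutex<VecDeque<u64>>> = Arc::new(Mutex::new((0u64..100_000).collect()));

    let start = Instant::now();
    let writers: Vec<_> = (0..NUM_WRITERS)
        .map(|_| {
            let queue = Arc::clone(&queue);
            thread::spawn(move || {
                let mut written = 0u64;
                loop {
                    // Hold the lock only long enough to pop one item; a real
                    // insert_shreds() would still serialize internally.
                    let item = queue.lock().unwrap().pop_front();
                    match item {
                        Some(item) => written = written.wrapping_add(item % 7 + 1),
                        None => break,
                    }
                }
                written
            })
        })
        .collect();

    // Stop the clock only after *all* writers have joined, so the measured
    // insertion time covers every thread's work.
    let total: u64 = writers.into_iter().map(|w| w.join().unwrap()).sum();
    println!("wrote {} units in {:?}", total, start.elapsed());
}
```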
6c108c8fc3 Migrate from address maps to address lookup tables (#21634)
* Migrate from address maps to address lookup tables

* update sanitize error

* cargo fmt

* update abi
2021-12-10 11:04:04 -05:00
fd175c1ea9 Add StakesCache struct to abstract away locking (#21738) 2021-12-10 10:51:33 -05:00
622fd7c7ec testing for the latest changes 2021-12-10 19:44:22 +05:30
08940fff10 modifying event type 2021-12-10 18:10:10 +05:30
75385f657a chore: bump tonic-build from 0.6.0 to 0.6.2 (#21764)
* chore: bump tonic-build from 0.6.0 to 0.6.2

Bumps [tonic-build](https://github.com/hyperium/tonic) from 0.6.0 to 0.6.2.
- [Release notes](https://github.com/hyperium/tonic/releases)
- [Changelog](https://github.com/hyperium/tonic/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/tonic/compare/v0.6.0...v0.6.2)

---
updated-dependencies:
- dependency-name: tonic-build
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-10 09:31:15 +00:00
b07fd1fead chore:(deps): bump typescript from 4.5.2 to 4.5.3 in /explorer (#21768)
Bumps [typescript](https://github.com/Microsoft/TypeScript) from 4.5.2 to 4.5.3.
- [Release notes](https://github.com/Microsoft/TypeScript/releases)
- [Commits](https://github.com/Microsoft/TypeScript/commits)

---
updated-dependencies:
- dependency-name: typescript
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-10 09:16:48 +00:00
b3c5a299be chore:(deps): bump @solana/spl-token-registry in /explorer (#21766)
Bumps [@solana/spl-token-registry](https://github.com/solana-labs/token-list) from 0.2.698 to 0.2.720.
- [Release notes](https://github.com/solana-labs/token-list/releases)
- [Changelog](https://github.com/solana-labs/token-list/blob/main/CHANGELOG.md)
- [Commits](https://github.com/solana-labs/token-list/compare/v0.2.698...v0.2.720)

---
updated-dependencies:
- dependency-name: "@solana/spl-token-registry"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-10 09:08:45 +00:00
d870f566ef chore: bump serde from 1.0.130 to 1.0.131 (#21758)
* chore: bump serde from 1.0.130 to 1.0.131

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.130 to 1.0.131.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.130...v1.0.131)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-10 08:47:46 +00:00
f56eb7f882 chore: bump tonic from 0.6.1 to 0.6.2 (#21754)
Bumps [tonic](https://github.com/hyperium/tonic) from 0.6.1 to 0.6.2.
- [Release notes](https://github.com/hyperium/tonic/releases)
- [Changelog](https://github.com/hyperium/tonic/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/tonic/compare/v0.6.1...v0.6.2)

---
updated-dependencies:
- dependency-name: tonic
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-10 00:22:45 -07:00
c1386d66e6 Nits in message-processor (#21755)
* Fixup typo

* Simplify types slightly
2021-12-09 23:38:36 -07:00
805d53fc10 program-runtime: add LogCollector::new_ref_with_limit() (#21691)
program-runtime: add LogCollector::new_ref_with_limit()

LogCollector::new_ref_with_limit(limit: Option<usize>) can be used to
initialize a collector with the given byte limit. The limit can be None, in
which case logs are never truncated.

new_ref_with_limit(None) is used by cargo-run-bpf-tests so that the output of
cargo test --target=bpfel-unknown-unknown is not truncated.
2021-12-10 13:38:03 +11:00
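For orientation, a sketch of what a byte-limited collector with a new_ref_with_limit(Option<usize>) constructor could look like; this is illustrative only and is not the actual solana-program-runtime implementation.

```rust
use std::{cell::RefCell, rc::Rc};

#[derive(Default)]
struct LogCollectorSketch {
    messages: Vec<String>,
    bytes_written: usize,
    bytes_limit: Option<usize>, // None => never truncate
    limit_warning_emitted: bool,
}

impl LogCollectorSketch {
    /// Mirrors the shape of new_ref_with_limit(limit: Option<usize>).
    fn new_ref_with_limit(limit: Option<usize>) -> Rc<RefCell<Self>> {
        Rc::new(RefCell::new(Self {
            bytes_limit: limit,
            ..Self::default()
        }))
    }

    fn log(&mut self, message: &str) {
        if let Some(limit) = self.bytes_limit {
            if self.bytes_written + message.len() > limit {
                // Emit a single truncation marker, then drop further logs.
                if !self.limit_warning_emitted {
                    self.messages.push("Log truncated".to_string());
                    self.limit_warning_emitted = true;
                }
                return;
            }
        }
        self.bytes_written += message.len();
        self.messages.push(message.to_string());
    }
}

fn main() {
    // None => unlimited, as described for the cargo-run-bpf-tests use case.
    let collector = LogCollectorSketch::new_ref_with_limit(None);
    collector.borrow_mut().log("never truncated");
    assert_eq!(collector.borrow().messages.len(), 1);
}
```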
16a6dceb6b impl better debug for Ancestors (#21737) 2021-12-09 18:36:13 -06:00
f32216588d Remove libcurl to prevent wasm-pack segfault in libssl 2021-12-09 15:53:58 -08:00
f4babb7566 Cargo.lock 2021-12-09 15:53:58 -08:00
a35df1cb02 Add initial wasm bindings for Instruction, SystemProgram and Transaction 2021-12-09 15:53:58 -08:00
03a956e8d9 Add wasm bindings for Hash 2021-12-09 15:53:58 -08:00
488dc37fec Add wasm bindings for Pubkey and Keypair 2021-12-09 15:53:58 -08:00
6919c4863b Expand docs for Pubkey::create_program_address (#21750)
* Expand docs for Pubkey::create_program_address

* Update sdk/program/src/pubkey.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-12-09 23:45:23 +00:00
f865392eee chore: bump digest from 0.9.0 to 0.10.0 (#21742)
* chore: bump digest from 0.9.0 to 0.10.0

Bumps [digest](https://github.com/RustCrypto/traits) from 0.9.0 to 0.10.0.
- [Release notes](https://github.com/RustCrypto/traits/releases)
- [Commits](https://github.com/RustCrypto/traits/compare/digest-v0.9.0...digest-v0.10.0)

---
updated-dependencies:
- dependency-name: digest
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* tree; multiple versions of digest required

* Bump sha3 as well

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-12-09 23:32:21 +00:00
1528f85112 Add return types to generate_index() (#21735) 2021-12-09 17:15:13 -06:00
c965517a6b chore: bump signal-hook from 0.3.10 to 0.3.12 (#21721)
Bumps [signal-hook](https://github.com/vorner/signal-hook) from 0.3.10 to 0.3.12.
- [Release notes](https://github.com/vorner/signal-hook/releases)
- [Changelog](https://github.com/vorner/signal-hook/blob/master/CHANGELOG.md)
- [Commits](https://github.com/vorner/signal-hook/compare/v0.3.10...v0.3.12)

---
updated-dependencies:
- dependency-name: signal-hook
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-09 21:43:31 +00:00
bji
6d18b6bab5 Fixed minor issues with the cluster overview docs which had confused some new users. (#21744)
2021-12-09 13:11:52 -07:00
6fc329180b Add more reporting for invalid stake cache members and prune them (#21654)
* Add more reporting for invalid stake cache members

* feedback
2021-12-09 14:12:11 -05:00
a2df1eb502 AcctIdx: disk generate index and filler accounts use more threads (#21566) 2021-12-09 11:54:14 -06:00
8063273d09 adds more sanity checks to shreds (#21675) 2021-12-09 16:43:57 +00:00
58a7d3fc0e AcctIdx: disable direct to disk for filler account upsert (#21710) 2021-12-09 10:14:57 -06:00
0090916735 skip clean on startup, too (#21718) 2021-12-09 09:36:02 -06:00
7f48f67948 Update explorer_preview.yml 2021-12-09 17:39:18 +05:30
0a11be496a reverting back to previous successful build 2021-12-09 17:33:32 +05:30
7b722abb63 Update explorer_preview.yml 2021-12-09 17:25:33 +05:30
c665fc7a9b fixing indentation 2021-12-09 17:16:27 +05:30
84ef6462c2 handling last error of npm 2021-12-09 17:12:07 +05:30
b2ed9daada updating explorer_preview.yml
it deploys the preview on Vercel
2021-12-09 17:00:33 +05:30
b0be0881a7 chore:(deps): bump @blockworks-foundation/mango-client in /explorer (#21728)
Bumps [@blockworks-foundation/mango-client](https://github.com/blockworks-foundation/mango-client-v3) from 3.2.14 to 3.2.15.
- [Release notes](https://github.com/blockworks-foundation/mango-client-v3/releases)
- [Commits](https://github.com/blockworks-foundation/mango-client-v3/commits)

---
updated-dependencies:
- dependency-name: "@blockworks-foundation/mango-client"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-09 09:19:24 +00:00
91efd6e102 chore:(deps): bump @solana/spl-token-registry in /explorer (#21727)
Bumps [@solana/spl-token-registry](https://github.com/solana-labs/token-list) from 0.2.662 to 0.2.698.
- [Release notes](https://github.com/solana-labs/token-list/releases)
- [Changelog](https://github.com/solana-labs/token-list/blob/main/CHANGELOG.md)
- [Commits](https://github.com/solana-labs/token-list/compare/v0.2.662...v0.2.698)

---
updated-dependencies:
- dependency-name: "@solana/spl-token-registry"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-09 09:08:47 +00:00
ae024f666a reverting back to the original state 2021-12-09 13:16:05 +05:30
0fd8e3c5d7 using ssh for cloning 2021-12-09 11:49:19 +05:30
e020960f49 AcctIdx: generate index goes to mem not disk (#21709) 2021-12-08 19:48:12 -06:00
54862eba0d AcctIdx: env var to enable testing of disk buckets (#21494) 2021-12-08 19:47:25 -06:00
b61b7189a5 chore: bump hmac from 0.11.0 to 0.12.0 (#21681)
* chore: bump hmac from 0.11.0 to 0.12.0

Bumps [hmac](https://github.com/RustCrypto/MACs) from 0.11.0 to 0.12.0.
- [Release notes](https://github.com/RustCrypto/MACs/releases)
- [Commits](https://github.com/RustCrypto/MACs/compare/hmac-v0.11.0...hmac-v0.12.0)

---
updated-dependencies:
- dependency-name: hmac
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

* Update dependabot-pr.sh

* Bump sha2 and pbkdf2 as well

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
Co-authored-by: axleiro <83293196+axleiro@users.noreply.github.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-12-08 18:12:35 -07:00
181c0092d6 AcctIdx: resize in-mem after startup for disk index (#21676) 2021-12-08 16:52:22 -06:00
824994db69 simulateTransaction now returns the correct error code if accounts are provided as input 2021-12-08 14:03:21 -08:00
8d1e5ac294 refactor and test scan abort code (#21390) 2021-12-08 14:09:34 -06:00
923720f529 SDK: Add stdlib.h include to pull in abort() (#21700) 2021-12-08 17:00:16 +01:00
7c9abaff2c triggering the "system-performance-test" CI pipeline 2021-12-08 19:13:31 +05:30
40a04490ad Replacing the "ssh" link with the "https" link for cloning the GitHub repo
{
  using:
  git clone https://github.com/solana-labs/testnet-keypairs.git "${REPO_ROOT}"/net/keypairs
  replacing:
  #git clone git@github.com:solana-labs/testnet-keypairs.git "${REPO_ROOT}"/net/keypairs
}
2021-12-08 18:58:35 +05:30
99da25dc9d testing for dependabot trigger 2021-12-08 16:25:31 +05:30
7cd5c0cd20 chore: bump @types/node from 16.11.11 to 16.11.12 in /web3.js (#21689)
Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 16.11.11 to 16.11.12.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

---
updated-dependencies:
- dependency-name: "@types/node"
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-08 09:19:00 +00:00
ea1b59f684 chore: bump prettier from 2.5.0 to 2.5.1 in /web3.js (#21685)
Bumps [prettier](https://github.com/prettier/prettier) from 2.5.0 to 2.5.1.
- [Release notes](https://github.com/prettier/prettier/releases)
- [Changelog](https://github.com/prettier/prettier/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prettier/prettier/compare/2.5.0...2.5.1)

---
updated-dependencies:
- dependency-name: prettier
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-08 09:10:15 +00:00
f98e198dcf resolving buildkite pull before push issue
Added:
git config pull.rebase false
git pull origin master --allow-unrelated-histories
2021-12-08 13:43:59 +05:30
483f5cce46 fixing "Updates were rejected because the remote contains work that you do"
adding git pull
2021-12-08 13:32:54 +05:30
0224a8b127 Update address map proposal to improve dev experience (#21576)
* Update address map proposal to improve dev experience

* another revision to match implementation
2021-12-07 22:58:18 -05:00
4da435f2a0 Avoid entropy sources when constructing a solana_program::message::Message.
The solana-program crate can be used in certain embedded environments (HSMs) where
the source of entropy, whether used for cryptographic purposes or not, is tightly
controlled. In these cases, using the default OS source of entropy is not always
acceptable. Thus, using the default Rust stdlib entropy source for seeding its
default hasher is prohibited. This means any HashMap/HashSet must be able
to be constructed and used with a custom hasher implementation.

This commit removes the use of Itertools::unique(), which uses a default-configured
HashMap under the hood, to dedupe Instructions that are being compiled into a new
Message. Instead, we use a BTreeSet, which does not invoke any entropy source to
seed a hash implementation.
2021-12-07 19:19:01 -08:00
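A small illustration of the technique: deduplicating while preserving first-seen order with a BTreeSet, which relies only on Ord and never touches a randomly seeded hasher. The function below is generic and is not the actual Message-compilation code.

```rust
use std::collections::BTreeSet;

/// Deduplicate items while preserving first-seen order, without touching
/// any randomly seeded hasher (BTreeSet needs only Ord, no RandomState).
fn dedup_preserving_order<T: Ord + Clone>(items: &[T]) -> Vec<T> {
    let mut seen = BTreeSet::new();
    items
        .iter()
        // insert() returns false for items already seen, filtering them out.
        .filter(|item| seen.insert((*item).clone()))
        .cloned()
        .collect()
}

fn main() {
    let keys = ["payer", "program", "payer", "sysvar", "program"];
    assert_eq!(
        dedup_preserving_order(&keys),
        vec!["payer", "program", "sysvar"]
    );
}
```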
f0acf7681e Add vote instructions that directly update on chain vote state (#21531)
* Add vote state instructions

UpdateVoteState and UpdateVoteStateSwitch

* cargo tree

* extract vote state version conversion to common fn
2021-12-07 16:47:26 -08:00
1df88837c8 - Implicitly fixes invoke_context.return_data not being reset between instructions in process_message. (#21671)
- Lets InvokeContext::process_cross_program_instruction() handle the first invocation depth too.
- Marks InvokeContext::verify(), InvokeContext::verify_and_update() and InvokeContext::process_executable_chain() private.
- Renames InvokeContext::process_cross_program_instruction() to InvokeContext::process_instruction().
- Removes InvokeContext::new_mock_with_sysvars().
2021-12-07 23:00:04 +01:00
94b1cf47ca Plumb verbose_level through ledger-tool slot subcommand (#21670)
* Add commas for readability and fix plurality in a printout
* Pass verbose_level into slot subcommand instead of defaulting to max

This is useful when a full printout isn't necessary, such as when just
trying to determine the slot blockhash. The equivalent behavior before
this change would have been using "-vv" with the command.
2021-12-07 15:14:39 -06:00
31b8fd3109 Bumps solana_rbpf to v0.2.17 (#21672) 2021-12-07 22:12:09 +01:00
45e56c599d Ensure we have keys to activate these features (#21669) 2021-12-07 18:58:59 +00:00
a2477c1f32 Docs: Solflare web/app updates (#21540)
* Update Solflare description

* Add Solflare to mobile wallets

* Sort mobile wallets alphabetically

* Sort web wallets alphabetically

* Update docs/src/wallet-guide/apps.md

* Update docs/src/wallet-guide/apps.md

* Update docs/src/wallet-guide/web-wallets.md

* Update docs/src/wallet-guide/web-wallets.md

* Update docs/src/wallet-guide/apps.md

Co-authored-by: Justin Starry <justin.m.starry@gmail.com>
2021-12-07 11:04:46 -05:00
b57097ef18 docs: Fix SOL staked formula (#21615)
Fix the formula on the proposal page: https://docs.solana.com/implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards
2021-12-07 10:36:36 -05:00
388e34f8e9 chore:(deps): bump @project-serum/serum in /explorer (#21665)
Bumps [@project-serum/serum](https://github.com/project-serum/serum-ts) from 0.13.60 to 0.13.61.
- [Release notes](https://github.com/project-serum/serum-ts/releases)
- [Commits](https://github.com/project-serum/serum-ts/commits)

---
updated-dependencies:
- dependency-name: "@project-serum/serum"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-07 08:48:06 -05:00
293629bdb5 chore:(deps): bump @testing-library/jest-dom in /explorer (#21666)
Bumps [@testing-library/jest-dom](https://github.com/testing-library/jest-dom) from 5.16.0 to 5.16.1.
- [Release notes](https://github.com/testing-library/jest-dom/releases)
- [Changelog](https://github.com/testing-library/jest-dom/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testing-library/jest-dom/compare/v5.16.0...v5.16.1)

---
updated-dependencies:
- dependency-name: "@testing-library/jest-dom"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-07 09:15:44 +00:00
c5811e5a14 chore:(deps): bump @solana/spl-token-registry in /explorer (#21664)
Bumps [@solana/spl-token-registry](https://github.com/solana-labs/token-list) from 0.2.646 to 0.2.662.
- [Release notes](https://github.com/solana-labs/token-list/releases)
- [Changelog](https://github.com/solana-labs/token-list/blob/main/CHANGELOG.md)
- [Commits](https://github.com/solana-labs/token-list/compare/v0.2.646...v0.2.662)

---
updated-dependencies:
- dependency-name: "@solana/spl-token-registry"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-07 09:09:29 +00:00
cf5a1448c5 chore:(deps): bump @metaplex/js from 4.7.0 to 4.9.0 in /explorer (#21662)
Bumps [@metaplex/js](https://github.com/metaplex/js) from 4.7.0 to 4.9.0.
- [Release notes](https://github.com/metaplex/js/releases)
- [Changelog](https://github.com/metaplex/js/blob/main/CHANGELOG.md)
- [Commits](https://github.com/metaplex/js/compare/v4.7.0...v4.9.0)

---
updated-dependencies:
- dependency-name: "@metaplex/js"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-07 09:09:16 +00:00
aff8e82705 chore:(deps): bump @types/node from 16.11.11 to 16.11.12 in /explorer (#21661)
Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 16.11.11 to 16.11.12.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

---
updated-dependencies:
- dependency-name: "@types/node"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-07 09:07:15 +00:00
205fd95722 Add option to reclaim accounts-cluster-bench accounts/lamports (#21656)
* Add option to reclaim accounts-cluster-bench accounts/lamports

* lint
2021-12-06 23:52:57 -07:00
f6801a4af4 chore: bump itertools from 0.10.1 to 0.10.3 (#21643)
* chore: bump itertools from 0.10.1 to 0.10.3

Bumps [itertools](https://github.com/rust-itertools/itertools) from 0.10.1 to 0.10.3.
- [Release notes](https://github.com/rust-itertools/itertools/releases)
- [Changelog](https://github.com/rust-itertools/itertools/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-itertools/itertools/compare/v0.10.1...v0.10.3)

---
updated-dependencies:
- dependency-name: itertools
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-06 23:23:26 -07:00
92a1fc076c chore:(deps): bump @testing-library/jest-dom in /explorer (#21630)
Bumps [@testing-library/jest-dom](https://github.com/testing-library/jest-dom) from 5.15.1 to 5.16.0.
- [Release notes](https://github.com/testing-library/jest-dom/releases)
- [Changelog](https://github.com/testing-library/jest-dom/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testing-library/jest-dom/compare/v5.15.1...v5.16.0)

---
updated-dependencies:
- dependency-name: "@testing-library/jest-dom"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-06 22:00:08 -05:00
da4015a959 Ensure that StakeDelegations and StakeHistory serde (#21640)
Add tests to StakeDelegations and StakeHistory to ensure that the outer
types serialize and deserialize correctly to/from the inner types.
2021-12-06 17:34:10 -06:00
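A minimal sketch of that kind of round-trip check, using a hypothetical transparent wrapper and bincode rather than the actual StakeDelegations/StakeHistory definitions.

```rust
use std::collections::HashMap;

use serde::{Deserialize, Serialize};

/// Hypothetical outer wrapper over an inner collection, standing in for the
/// real wrapper types; #[serde(transparent)] keeps it wire-compatible.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[serde(transparent)]
struct Wrapper(HashMap<String, u64>);

#[test]
fn wrapper_serdes_like_inner() {
    let inner: HashMap<String, u64> = [("stake1".to_string(), 42)].into_iter().collect();
    let outer = Wrapper(inner.clone());

    // Outer and inner must produce identical bytes, proving the wrapper is
    // wire-compatible with the inner type it abstracts.
    let outer_bytes = bincode::serialize(&outer).unwrap();
    let inner_bytes = bincode::serialize(&inner).unwrap();
    assert_eq!(outer_bytes, inner_bytes);

    // And bytes written as the inner type must deserialize as the outer type.
    let roundtripped: Wrapper = bincode::deserialize(&inner_bytes).unwrap();
    assert_eq!(roundtripped, outer);
}
```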
a1adcb23b6 Remove activated feature for filtering invalid stakes from rewards (#21641) 2021-12-06 17:56:05 -05:00
873fe81bc0 Add offline and fee-payer utilities to CLI vote module (#21579)
* create-vote-account: add offline, nonce, fee_payer capabilities

* vote-authorize: add offline, nonce, fee-payer

* vote-update-things: add offline, nonce, fee-payer

* withdraw-vote: add offline, nonce, fee-payer

* close-vote-acct: add fee-payer

* Allow WithdrawVoteAccount to empty the account, since offline operations cannot perform account state queries as CloseVoteAccount does

* Fix lint

* Update offline-signing docs

* Add some parse unit tests

* Add offline integration test
2021-12-06 15:54:50 -07:00
f493a88258 Fixup flaky tests (#21617)
* Fixup flaky tests

* Fixup listeners
2021-12-06 17:14:38 -05:00
e123883b26 Reject vote withdraws that create non-rent-exempt accounts (#21639)
* Reject vote withdraws that create non-rent-exempt accounts

* fix mocked instruction test
2021-12-06 17:01:20 -05:00
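The rule being enforced above is: a withdraw must either close the account completely or leave it rent exempt. A sketch of that check with illustrative names (the real program uses its own instruction errors and the rent sysvar):

```rust
/// Illustrative error type; the real program has its own error variants.
#[derive(Debug, PartialEq)]
enum WithdrawError {
    InsufficientFunds,
    BelowRentExemption,
}

/// A withdraw must either empty the account entirely (closing it) or leave it
/// at or above the rent-exempt minimum; anything in between is rejected.
fn check_withdraw(
    balance: u64,
    withdraw_amount: u64,
    rent_exempt_minimum: u64,
) -> Result<(), WithdrawError> {
    let remaining = balance
        .checked_sub(withdraw_amount)
        .ok_or(WithdrawError::InsufficientFunds)?;
    if remaining != 0 && remaining < rent_exempt_minimum {
        return Err(WithdrawError::BelowRentExemption);
    }
    Ok(())
}

fn main() {
    assert_eq!(check_withdraw(100, 100, 10), Ok(())); // full close: allowed
    assert_eq!(check_withdraw(100, 50, 10), Ok(()));  // stays rent exempt: allowed
    assert_eq!(
        check_withdraw(100, 95, 10),
        Err(WithdrawError::BelowRentExemption) // would dip below the minimum
    );
}
```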
008655e3a4 chore: bump rustversion from 1.0.5 to 1.0.6 (#21636)
* chore: bump rustversion from 1.0.5 to 1.0.6

Bumps [rustversion](https://github.com/dtolnay/rustversion) from 1.0.5 to 1.0.6.
- [Release notes](https://github.com/dtolnay/rustversion/releases)
- [Commits](https://github.com/dtolnay/rustversion/compare/1.0.5...1.0.6)

---
updated-dependencies:
- dependency-name: rustversion
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
2021-12-06 14:47:26 -07:00
6f3f6eddb2 Updates documentation around what needs to be passed in CPI. (#21633) 2021-12-06 21:20:16 +01:00
3dab1e711d Move transaction error code into new module (#21635) 2021-12-06 12:45:33 -05:00
df2b448993 Fix incorrect nonoverlapping test in sol_memcpy (#21007)
Thanks!
2021-12-06 09:26:46 -08:00
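For two same-length regions, non-overlap means one region ends at or before the other begins; a standalone sketch of that check (illustrative, not the exact syscall code):

```rust
/// Two byte ranges [a, a+n) and [b, b+n) are non-overlapping exactly when the
/// distance between their start offsets is at least n.
fn is_nonoverlapping(a: usize, b: usize, n: usize) -> bool {
    if a > b {
        a - b >= n
    } else {
        b - a >= n
    }
}

fn main() {
    assert!(is_nonoverlapping(0, 16, 16));  // adjacent regions: no overlap
    assert!(!is_nonoverlapping(0, 8, 16));  // second starts inside first: overlap
    assert!(is_nonoverlapping(32, 0, 16));  // argument order doesn't matter
}
```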
d1c101cde2 Rework docs for Pubkey::find_program_address and friends (#21528)
* Rework docs for Pubkey::find_program_address and friends

* Remove circular dependency

* Minor tweaks

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Sort solana-program dev-dependencies

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-12-06 17:00:50 +00:00
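A small usage sketch of the two APIs those docs cover, assuming the solana-program crate; the seeds and program id here are arbitrary examples:

```rust
use solana_program::pubkey::Pubkey;

fn main() {
    let program_id = Pubkey::new_unique();
    let seeds: &[&[u8]] = &[b"vault", b"user-42"];

    // find_program_address searches for a bump seed that pushes the derived
    // address off the ed25519 curve, returning the address and the bump.
    let (pda, bump) = Pubkey::find_program_address(seeds, &program_id);

    // create_program_address reproduces the same address deterministically
    // once the bump is known (e.g. on-chain, to verify a passed-in PDA).
    let recreated =
        Pubkey::create_program_address(&[b"vault", b"user-42", &[bump]], &program_id).unwrap();
    assert_eq!(pda, recreated);
}
```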
b5353e2130 chore: bump libc from 0.2.108 to 0.2.109 (#21627)
* chore: bump libc from 0.2.108 to 0.2.109

Bumps [libc](https://github.com/rust-lang/libc) from 0.2.108 to 0.2.109.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.108...0.2.109)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <you@example.com>
2021-12-06 09:37:26 -07:00
cd9dda4701 chore:(deps): bump @blockworks-foundation/mango-client in /explorer (#21632)
Bumps [@blockworks-foundation/mango-client](https://github.com/blockworks-foundation/mango-client-v3) from 3.2.10 to 3.2.14.
- [Release notes](https://github.com/blockworks-foundation/mango-client-v3/releases)
- [Commits](https://github.com/blockworks-foundation/mango-client-v3/commits)

---
updated-dependencies:
- dependency-name: "@blockworks-foundation/mango-client"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-06 09:28:42 +00:00
b8b721b7c2 chore:(deps): bump react-content-loader from 6.0.3 to 6.1.0 in /explorer (#21631)
Bumps [react-content-loader](https://github.com/danilowoz/react-content-loader) from 6.0.3 to 6.1.0.
- [Release notes](https://github.com/danilowoz/react-content-loader/releases)
- [Commits](https://github.com/danilowoz/react-content-loader/compare/v6.0.3...v6.1.0)

---
updated-dependencies:
- dependency-name: react-content-loader
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-06 09:18:29 +00:00
a75db69105 chore:(deps): bump prettier from 2.5.0 to 2.5.1 in /explorer (#21628)
Bumps [prettier](https://github.com/prettier/prettier) from 2.5.0 to 2.5.1.
- [Release notes](https://github.com/prettier/prettier/releases)
- [Changelog](https://github.com/prettier/prettier/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prettier/prettier/compare/2.5.0...2.5.1)

---
updated-dependencies:
- dependency-name: prettier
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-06 09:09:46 +00:00
f4e92f9b54 chore:(deps): bump @solana/spl-token-registry in /explorer (#21629)
Bumps [@solana/spl-token-registry](https://github.com/solana-labs/token-list) from 0.2.588 to 0.2.646.
- [Release notes](https://github.com/solana-labs/token-list/releases)
- [Changelog](https://github.com/solana-labs/token-list/blob/main/CHANGELOG.md)
- [Commits](https://github.com/solana-labs/token-list/compare/v0.2.588...v0.2.646)

---
updated-dependencies:
- dependency-name: "@solana/spl-token-registry"
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-06 09:09:32 +00:00
72669cad8c chore:(deps): bump @metaplex/js from 4.6.0 to 4.7.0 in /explorer (#21626)
Bumps [@metaplex/js](https://github.com/metaplex/js) from 4.6.0 to 4.7.0.
- [Release notes](https://github.com/metaplex/js/releases)
- [Changelog](https://github.com/metaplex/js/blob/main/CHANGELOG.md)
- [Commits](https://github.com/metaplex/js/compare/v4.6.0...v4.7.0)

---
updated-dependencies:
- dependency-name: "@metaplex/js"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-06 09:07:24 +00:00
f3c2803af9 Fix spelling of 'Borsh' 2021-12-05 21:06:13 -08:00
cd17f63d81 adds back position field to coding-shred-header (#21600)
https://github.com/solana-labs/solana/pull/17004
removed the position field from the coding-shred header because, as it stands, the
field is redundant and unused.
However, with the upcoming changes to the erasure coding schema this field
will no longer be redundant and needs to be populated.
2021-12-05 14:42:09 +00:00
3e5a5a834f Bump RpcClient node versions (#21612)
* Bump blockhash/fee api check versions

* Bump snapshot api check version
2021-12-04 21:51:43 +00:00
f4ca87205f If configured so, panic if there is an error saving transactions in the plugin (#21602) 2021-12-04 12:47:28 -08:00
d6f22433d0 Bump version to v1.10.0 2021-12-04 20:17:54 +00:00
649 changed files with 51436 additions and 25266 deletions

View File

@ -10,6 +10,8 @@ jobs:
if: ${{ github.event.workflow_run.conclusion == 'success' }}
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: amondnet/vercel-action@v20
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }} # Required
@ -18,3 +20,32 @@ jobs:
vercel-project-id: ${{ secrets.PROJECT_ID}} #Required
working-directory: ./explorer
scope: ${{ secrets.TEAM_ID }}
- name: vercel url
run : |
touch vercelfile.txt
vercel --token ${{secrets.VERCEL_TOKEN}} ls solana > vercelfile.txt
echo "first line"
touch vercelfile1.txt
head -n 2 vercelfile.txt > vercelfile1.txt
touch vercelfile2.txt
tail -n 1 vercelfile1.txt > vercelfile2.txt
filtered_url=$(cut -f8 -d" " vercelfile2.txt)
touch .env.preview1
echo "$filtered_url" > .env.preview1
echo "filtered_url is: $filtered_url"
- name: Run tests
uses: mathiasvr/command-output@v1
id: tests2
with:
run: |
echo "$(cat .env.preview1)"
- name: Slack Notification1
uses: rtCamp/action-slack-notify@master
env:
SLACK_MESSAGE: ${{ steps.tests2.outputs.stdout }}
SLACK_TITLE: Vercel "Explorer" Preview Deployment Link
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

View File

@ -65,20 +65,7 @@ jobs:
node-version: ${{ matrix.node }}
cache: 'npm'
cache-dependency-path: web3.js/package-lock.json
- run: npm i -g npm@7
- run: npm ci
- run: npm run lint
- run: |
npm run build
ls -l lib
test -r lib/index.iife.js
test -r lib/index.cjs.js
test -r lib/index.esm.js
- run: npm run doc
- run: npm run codecov
- run: |
sh -c "$(curl -sSfL https://release.solana.com/edge/install)"
echo "$HOME/.local/share/solana/install/active_release/bin" >> $GITHUB_PATH
PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"
solana --version
- run: npm run test:live-with-test-validator
source .travis/before_install.sh
npm install
source .travis/script.sh

1
.gitignore vendored
View File

@ -4,6 +4,7 @@
/solana-metrics/
/solana-metrics.tar.bz2
/target/
/test-ledger/
**/*.rs.bk
.cargo

View File

@ -78,6 +78,7 @@ jobs:
# - sudo apt-get install libssl-dev libudev-dev
# docs pull request
- name: "docs"
if: type IN (push, pull_request) OR tag IS present
language: node_js

View File

@ -74,7 +74,7 @@ minutes to execute. Use that time to write a detailed problem description. Once
the description is written and CI succeeds, click the "Ready to Review" button
and add reviewers. Adding reviewers before CI succeeds is a fast path to losing
reviewer engagement. Not only will they be notified and see the PR is not yet
ready for them, they will also be bombarded them with additional notifications
ready for them, they will also be bombarded with additional notifications
each time you push a commit to get past CI or until they "mute" the PR. Once
muted, you'll need to reach out over some other medium, such as Discord, to
request they have another look. When you use draft PRs, no notifications are

1312
Cargo.lock generated

File diff suppressed because it is too large

View File

@ -2,7 +2,6 @@
members = [
"accountsdb-plugin-interface",
"accountsdb-plugin-manager",
"accountsdb-plugin-postgres",
"accounts-cluster-bench",
"bench-streamer",
"bench-tps",
@ -12,6 +11,7 @@ members = [
"banks-interface",
"banks-server",
"bucket_map",
"bloom",
"clap-utils",
"cli-config",
"cli-output",
@ -46,7 +46,10 @@ members = [
"poh",
"poh-bench",
"program-test",
"programs/address-lookup-table",
"programs/address-lookup-table-tests",
"programs/bpf_loader",
"programs/bpf_loader/gen-syscall-list",
"programs/compute-budget",
"programs/config",
"programs/stake",
@ -77,13 +80,10 @@ members = [
"test-validator",
"rpc-test",
"client-test",
"zk-token-sdk",
"programs/zk-token-proof",
]
exclude = [
"programs/bpf",
]
# TODO: Remove once the "simd-accel" feature from the reed-solomon-erasure
# dependency is supported on Apple M1. v2 of the feature resolver is needed to
# specify arch-specific features.
resolver = "2"

View File

@ -38,12 +38,6 @@ $ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang make
```
On Mac M1s, make sure you set up your terminal & homebrew [to use](https://5balloons.info/correct-way-to-install-and-use-homebrew-on-m1-macs/) Rosetta. You can install it with:
```bash
$ softwareupdate --install-rosetta
```
## **2. Download the source code.**
```bash
@ -74,7 +68,7 @@ devnet.solana.com. Runs 24/7. Learn more about the [public clusters](https://doc
# Benchmarking
First install the nightly build of rustc. `cargo bench` requires use of the
First, install the nightly build of rustc. `cargo bench` requires the use of the
unstable features only available in the nightly build.
```bash
@ -121,12 +115,12 @@ the reader to check and validate their accuracy and truthfulness.
Furthermore, nothing in this project constitutes a solicitation for
investment.
Any content produced by SF or developer resources that SF provides, are
Any content produced by SF or developer resources that SF provides are
for educational and inspirational purposes only. SF does not encourage,
induce or sanction the deployment, integration or use of any such
applications (including the code comprising the Solana blockchain
protocol) in violation of applicable laws or regulations and hereby
prohibits any such deployment, integration or use. This includes use of
prohibits any such deployment, integration or use. This includes the use of
any such applications by the reader (a) in violation of export control
or sanctions laws of the United States or any other applicable
jurisdiction, (b) if the reader is located in or ordinarily resident in
@ -139,7 +133,7 @@ prohibitions.
The reader should be aware that U.S. export control and sanctions laws
prohibit U.S. persons (and other persons that are subject to such laws)
from transacting with persons in certain countries and territories or
that are on the SDN list. As a project based primarily on open-source
that are on the SDN list. As a project-based primarily on open-source
software, it is possible that such sanctioned persons may nevertheless
bypass prohibitions, obtain the code comprising the Solana blockchain
protocol (or other project code or applications) and deploy, integrate,

View File

@ -94,7 +94,7 @@ Alternatively use the Github UI.
```
1. Confirm that your freshly cut release branch is shown as `BETA_CHANNEL` and the previous release branch as `STABLE_CHANNEL`:
```
ci/channel_info.sh
ci/channel-info.sh
```
## Steps to Create a Release
@ -152,5 +152,5 @@ appearing. To check for progress:
[Crates.io](https://crates.io/crates/solana) should have an updated Solana version. This can take 2-3 hours, and sometimes fails in the `solana-secondary` job.
If this happens and the error is non-fatal, click "Retry" on the "publish crate" job
### Update software on devnet.solana.com/testnet.solana.com/mainnet-beta.solana.com
See the documentation at https://github.com/solana-labs/cluster-ops/
### Update software on testnet.solana.com
See the documentation at https://github.com/solana-labs/cluster-ops/. devnet.solana.com and mainnet-beta.solana.com run stable releases that have been tested on testnet. Do not update devnet or mainnet-beta with a beta release.

View File

@ -1,6 +1,6 @@
[package]
name = "solana-account-decoder"
version = "1.9.0"
version = "1.10.0"
description = "Solana account decoder"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@ -16,15 +16,15 @@ bs58 = "0.4.0"
bv = "0.11.1"
Inflector = "0.11.4"
lazy_static = "1.4.0"
serde = "1.0.130"
serde = "1.0.134"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-config-program = { path = "../programs/config", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.0" }
serde_json = "1.0.78"
solana-config-program = { path = "../programs/config", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.0" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
thiserror = "1.0"
zstd = "0.9.0"
zstd = "0.9.2"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@ -136,16 +136,13 @@ impl UiAccount {
UiAccountData::Binary(blob, encoding) => match encoding {
UiAccountEncoding::Base58 => bs58::decode(blob).into_vec().ok(),
UiAccountEncoding::Base64 => base64::decode(blob).ok(),
UiAccountEncoding::Base64Zstd => base64::decode(blob)
.ok()
.map(|zstd_data| {
let mut data = vec![];
zstd::stream::read::Decoder::new(zstd_data.as_slice())
.and_then(|mut reader| reader.read_to_end(&mut data))
.map(|_| data)
.ok()
})
.flatten(),
UiAccountEncoding::Base64Zstd => base64::decode(blob).ok().and_then(|zstd_data| {
let mut data = vec![];
zstd::stream::read::Decoder::new(zstd_data.as_slice())
.and_then(|mut reader| reader.read_to_end(&mut data))
.map(|_| data)
.ok()
}),
UiAccountEncoding::Binary | UiAccountEncoding::JsonParsed => None,
},
}?;
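The hunk above folds an Option::map(...).flatten() chain into and_then(...). A tiny standalone illustration of why the two forms are interchangeable:

```rust
fn main() {
    let blob = Some("12");

    // map returns Option<Option<_>>, so a flatten() is needed afterwards...
    let via_map_flatten = blob.map(|s| s.parse::<u32>().ok()).flatten();
    // ...while and_then folds the two steps into one.
    let via_and_then = blob.and_then(|s| s.parse::<u32>().ok());

    assert_eq!(via_map_flatten, via_and_then);
    assert_eq!(via_and_then, Some(12));
}
```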

View File

@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accounts-bench"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -11,11 +11,11 @@ publish = false
[dependencies]
log = "0.4.14"
rayon = "1.5.1"
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
clap = "2.33.1"
[package.metadata.docs.rs]

View File

@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accounts-cluster-bench"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -13,24 +13,25 @@ clap = "2.33.1"
log = "0.4.14"
rand = "0.7.0"
rayon = "1.5.1"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.0" }
solana-client = { path = "../client", version = "=1.9.0" }
solana-core = { path = "../core", version = "=1.9.0" }
solana-faucet = { path = "../faucet", version = "=1.9.0" }
solana-gossip = { path = "../gossip", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-net-utils = { path = "../net-utils", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-streamer = { path = "../streamer", version = "=1.9.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
solana-account-decoder = { path = "../account-decoder", version = "=1.10.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.0" }
solana-client = { path = "../client", version = "=1.10.0" }
solana-faucet = { path = "../faucet", version = "=1.10.0" }
solana-gossip = { path = "../gossip", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-net-utils = { path = "../net-utils", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "=1.9.0" }
solana-core = { path = "../core", version = "=1.10.0" }
solana-local-cluster = { path = "../local-cluster", version = "=1.10.0" }
solana-test-validator = { path = "../test-validator", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@ -23,6 +23,7 @@ use {
solana_streamer::socket::SocketAddrSpace,
solana_transaction_status::parse_token::spl_token_instruction,
std::{
cmp::min,
net::SocketAddr,
process::exit,
sync::{
@ -115,7 +116,7 @@ fn make_create_message(
let instructions: Vec<_> = (0..num_instructions)
.into_iter()
.map(|_| {
.flat_map(|_| {
let program_id = if mint.is_some() {
inline_spl_token::id()
} else {
@ -147,7 +148,6 @@ fn make_create_message(
instructions
})
.flatten()
.collect();
Message::new(&instructions, Some(&keypair.pubkey()))
@ -156,24 +156,30 @@ fn make_create_message(
fn make_close_message(
keypair: &Keypair,
base_keypair: &Keypair,
max_closed_seed: Arc<AtomicU64>,
max_created: Arc<AtomicU64>,
max_closed: Arc<AtomicU64>,
num_instructions: usize,
balance: u64,
spl_token: bool,
) -> Message {
let instructions: Vec<_> = (0..num_instructions)
.into_iter()
.map(|_| {
.filter_map(|_| {
let program_id = if spl_token {
inline_spl_token::id()
} else {
system_program::id()
};
let seed = max_closed_seed.fetch_add(1, Ordering::Relaxed).to_string();
let max_created_seed = max_created.load(Ordering::Relaxed);
let max_closed_seed = max_closed.load(Ordering::Relaxed);
if max_closed_seed >= max_created_seed {
return None;
}
let seed = max_closed.fetch_add(1, Ordering::Relaxed).to_string();
let address =
Pubkey::create_with_seed(&base_keypair.pubkey(), &seed, &program_id).unwrap();
if spl_token {
spl_token_instruction(
Some(spl_token_instruction(
spl_token::instruction::close_account(
&spl_token::id(),
&spl_token_pubkey(&address),
@ -182,16 +188,16 @@ fn make_close_message(
&[],
)
.unwrap(),
)
))
} else {
system_instruction::transfer_with_seed(
Some(system_instruction::transfer_with_seed(
&address,
&base_keypair.pubkey(),
seed,
&program_id,
&keypair.pubkey(),
balance,
)
))
}
})
.collect();
@ -211,6 +217,7 @@ fn run_accounts_bench(
maybe_lamports: Option<u64>,
num_instructions: usize,
mint: Option<Pubkey>,
reclaim_accounts: bool,
) {
assert!(num_instructions > 0);
let client =
@ -350,6 +357,7 @@ fn run_accounts_bench(
let message = make_close_message(
payer_keypairs[0],
&base_keypair,
seed_tracker.max_created.clone(),
seed_tracker.max_closed.clone(),
1,
min_balance,
@ -372,7 +380,7 @@ fn run_accounts_bench(
}
count += 1;
if last_log.elapsed().as_millis() > 3000 {
if last_log.elapsed().as_millis() > 3000 || count >= iterations {
info!(
"total_accounts_created: {} total_accounts_closed: {} tx_sent_count: {} loop_count: {} balance(s): {:?}",
total_accounts_created, total_accounts_closed, tx_sent_count, count, balances
@ -387,6 +395,83 @@ fn run_accounts_bench(
}
}
executor.close();
if reclaim_accounts {
let executor = TransactionExecutor::new(entrypoint_addr);
loop {
let max_closed_seed = seed_tracker.max_closed.load(Ordering::Relaxed);
let max_created_seed = seed_tracker.max_created.load(Ordering::Relaxed);
if latest_blockhash.elapsed().as_millis() > 10_000 {
blockhash = client.get_latest_blockhash().expect("blockhash");
latest_blockhash = Instant::now();
}
message.recent_blockhash = blockhash;
let fee = client
.get_fee_for_message(&message)
.expect("get_fee_for_message");
let sigs_len = executor.num_outstanding();
if sigs_len < batch_size && max_closed_seed < max_created_seed {
let num_to_close = min(
batch_size - sigs_len,
(max_created_seed - max_closed_seed) as usize,
);
if num_to_close >= payer_keypairs.len() {
info!("closing {} accounts", num_to_close);
let chunk_size = num_to_close / payer_keypairs.len();
info!("{:?} chunk_size", chunk_size);
if chunk_size > 0 {
for (i, keypair) in payer_keypairs.iter().enumerate() {
let txs: Vec<_> = (0..chunk_size)
.into_par_iter()
.filter_map(|_| {
let message = make_close_message(
keypair,
&base_keypair,
seed_tracker.max_created.clone(),
seed_tracker.max_closed.clone(),
num_instructions,
min_balance,
mint.is_some(),
);
if message.instructions.is_empty() {
return None;
}
let signers: Vec<&Keypair> = vec![keypair, &base_keypair];
Some(Transaction::new(&signers, message, blockhash))
})
.collect();
balances[i] = balances[i].saturating_sub(fee * txs.len() as u64);
info!("close txs: {}", txs.len());
let new_ids = executor.push_transactions(txs);
info!("close ids: {}", new_ids.len());
tx_sent_count += new_ids.len();
total_accounts_closed += (num_instructions * new_ids.len()) as u64;
}
}
}
} else {
let _ = executor.drain_cleared();
}
count += 1;
if last_log.elapsed().as_millis() > 3000 || max_closed_seed >= max_created_seed {
info!(
"total_accounts_closed: {} tx_sent_count: {} loop_count: {} balance(s): {:?}",
total_accounts_closed, tx_sent_count, count, balances
);
last_log = Instant::now();
}
if max_closed_seed >= max_created_seed {
break;
}
if executor.num_outstanding() >= batch_size {
sleep(Duration::from_millis(500));
}
}
executor.close();
}
}
fn main() {
@ -462,7 +547,7 @@ fn main() {
.long("iterations")
.takes_value(true)
.value_name("NUM")
.help("Number of iterations to make"),
.help("Number of iterations to make. 0 = unlimited iterations."),
)
.arg(
Arg::with_name("check_gossip")
@ -475,6 +560,12 @@ fn main() {
.takes_value(true)
.help("Mint address to initialize account"),
)
.arg(
Arg::with_name("reclaim_accounts")
.long("reclaim-accounts")
.takes_value(false)
.help("Reclaim accounts after session ends; incompatible with --iterations 0"),
)
.get_matches();
let skip_gossip = !matches.is_present("check_gossip");
@ -556,6 +647,7 @@ fn main() {
lamports,
num_instructions,
mint,
matches.is_present("reclaim_accounts"),
);
}
@ -564,12 +656,18 @@ pub mod test {
use {
super::*,
solana_core::validator::ValidatorConfig,
solana_faucet::faucet::run_local_faucet,
solana_local_cluster::{
local_cluster::{ClusterConfig, LocalCluster},
validator_configs::make_identical_validator_configs,
},
solana_measure::measure::Measure,
solana_sdk::poh_config::PohConfig,
solana_sdk::{native_token::sol_to_lamports, poh_config::PohConfig},
solana_test_validator::TestValidator,
spl_token::{
solana_program::program_pack::Pack,
state::{Account, Mint},
},
};
#[test]
@ -605,6 +703,108 @@ pub mod test {
maybe_lamports,
num_instructions,
None,
false,
);
start.stop();
info!("{}", start);
}
#[test]
fn test_create_then_reclaim_spl_token_accounts() {
solana_logger::setup();
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
let test_validator = TestValidator::with_custom_fees(
mint_pubkey,
1,
Some(faucet_addr),
SocketAddrSpace::Unspecified,
);
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
// Created funder
let funder = Keypair::new();
let latest_blockhash = rpc_client.get_latest_blockhash().unwrap();
let signature = rpc_client
.request_airdrop_with_blockhash(
&funder.pubkey(),
sol_to_lamports(1.0),
&latest_blockhash,
)
.unwrap();
rpc_client
.confirm_transaction_with_spinner(
&signature,
&latest_blockhash,
CommitmentConfig::confirmed(),
)
.unwrap();
// Create Mint
let spl_mint_keypair = Keypair::new();
let spl_mint_len = Mint::get_packed_len();
let spl_mint_rent = rpc_client
.get_minimum_balance_for_rent_exemption(spl_mint_len)
.unwrap();
let transaction = Transaction::new_signed_with_payer(
&[
system_instruction::create_account(
&funder.pubkey(),
&spl_mint_keypair.pubkey(),
spl_mint_rent,
spl_mint_len as u64,
&inline_spl_token::id(),
),
spl_token_instruction(
spl_token::instruction::initialize_mint(
&spl_token::id(),
&spl_token_pubkey(&spl_mint_keypair.pubkey()),
&spl_token_pubkey(&spl_mint_keypair.pubkey()),
None,
2,
)
.unwrap(),
),
],
Some(&funder.pubkey()),
&[&funder, &spl_mint_keypair],
latest_blockhash,
);
let _sig = rpc_client
.send_and_confirm_transaction(&transaction)
.unwrap();
let account_len = Account::get_packed_len();
let minimum_balance = rpc_client
.get_minimum_balance_for_rent_exemption(account_len)
.unwrap();
let iterations = 5;
let batch_size = 100;
let close_nth_batch = 0;
let num_instructions = 4;
let mut start = Measure::start("total accounts run");
let keypair0 = Keypair::new();
let keypair1 = Keypair::new();
let keypair2 = Keypair::new();
run_accounts_bench(
test_validator
.rpc_url()
.replace("http://", "")
.parse()
.unwrap(),
faucet_addr,
&[&keypair0, &keypair1, &keypair2],
iterations,
Some(account_len as u64),
batch_size,
close_nth_batch,
Some(minimum_balance),
num_instructions,
Some(spl_mint_keypair.pubkey()),
true,
);
start.stop();
info!("{}", start);

View File

@ -3,17 +3,17 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-interface"
description = "The Solana AccountsDb plugin interface."
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
documentation = "https://docs.rs/solana-accountsdb-plugin-interface"
[dependencies]
log = "0.4.11"
thiserror = "1.0.30"
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@ -3,8 +3,8 @@
/// In addition, the dynamic library must export a "C" function _create_plugin which
/// creates the implementation of the plugin.
use {
solana_sdk::{signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::TransactionStatusMeta,
solana_sdk::{clock::UnixTimestamp, signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::{Reward, TransactionStatusMeta},
std::{any::Any, error, io},
thiserror::Error,
};
@ -12,54 +12,117 @@ use {
impl Eq for ReplicaAccountInfo<'_> {}
#[derive(Clone, PartialEq, Debug)]
/// Information about an account being updated
pub struct ReplicaAccountInfo<'a> {
/// The Pubkey for the account
pub pubkey: &'a [u8],
/// The lamports for the account
pub lamports: u64,
/// The Pubkey of the owner program account
pub owner: &'a [u8],
/// This account's data contains a loaded program (and is now read-only)
pub executable: bool,
/// The epoch at which this account will next owe rent
pub rent_epoch: u64,
/// The data held in this account.
pub data: &'a [u8],
/// A global monotonically increasing atomic number, which can be used
/// to tell the order of the account update. For example, when an
/// account is updated in the same slot multiple times, the update
/// with higher write_version should supersede the one with lower
/// write_version.
pub write_version: u64,
}
/// A wrapper to future-proof ReplicaAccountInfo handling.
/// If there were a change to the structure of ReplicaAccountInfo,
/// there would be new enum entry for the newer version, forcing
/// plugin implementations to handle the change.
pub enum ReplicaAccountInfoVersions<'a> {
V0_0_1(&'a ReplicaAccountInfo<'a>),
}
/// Information about a transaction
#[derive(Clone, Debug)]
pub struct ReplicaTransactionInfo<'a> {
/// The first signature of the transaction, used for identifying the transaction.
pub signature: &'a Signature,
/// Indicates if the transaction is a simple vote transaction.
pub is_vote: bool,
/// The sanitized transaction.
pub transaction: &'a SanitizedTransaction,
/// Metadata of the transaction status.
pub transaction_status_meta: &'a TransactionStatusMeta,
}
/// A wrapper to future-proof ReplicaTransactionInfo handling.
/// If there were a change to the structure of ReplicaTransactionInfo,
/// there would be new enum entry for the newer version, forcing
/// plugin implementations to handle the change.
pub enum ReplicaTransactionInfoVersions<'a> {
V0_0_1(&'a ReplicaTransactionInfo<'a>),
}
#[derive(Clone, Debug)]
pub struct ReplicaBlockInfo<'a> {
pub slot: u64,
pub blockhash: &'a str,
pub rewards: &'a [Reward],
pub block_time: Option<UnixTimestamp>,
pub block_height: Option<u64>,
}
pub enum ReplicaBlockInfoVersions<'a> {
V0_0_1(&'a ReplicaBlockInfo<'a>),
}
/// Errors returned by plugin calls
#[derive(Error, Debug)]
pub enum AccountsDbPluginError {
/// Error opening the configuration file; for example, when the file
/// is not found or when the validator process has no permission to read it.
#[error("Error opening config file. Error detail: ({0}).")]
ConfigFileOpenError(#[from] io::Error),
/// Error in reading the content of the config file or the content
/// is not in the expected format.
#[error("Error reading config file. Error message: ({msg})")]
ConfigFileReadError { msg: String },
/// Error when updating the account.
#[error("Error updating account. Error message: ({msg})")]
AccountsUpdateError { msg: String },
/// Error when updating the slot status
#[error("Error updating slot status. Error message: ({msg})")]
SlotStatusUpdateError { msg: String },
/// Any custom error defined by the plugin.
#[error("Plugin-defined custom error. Error message: ({0})")]
Custom(Box<dyn error::Error + Send + Sync>),
}
/// The current status of a slot
#[derive(Debug, Clone)]
pub enum SlotStatus {
/// The highest slot of the heaviest fork processed by the node. Ledger state at this slot is
/// not derived from a confirmed or finalized block, but if multiple forks are present, is from
/// the fork the validator believes is most likely to finalize.
Processed,
/// The highest slot having reached max vote lockout.
Rooted,
/// The highest slot that has been voted on by supermajority of the cluster, ie. is confirmed.
Confirmed,
}
@ -75,6 +138,9 @@ impl SlotStatus {
pub type Result<T> = std::result::Result<T, AccountsDbPluginError>;
/// Defines an AccountsDb plugin, to stream data from the runtime.
/// AccountsDb plugins must describe desired behavior for load and unload,
/// as well as how they will handle streamed data.
pub trait AccountsDbPlugin: Any + Send + Sync + std::fmt::Debug {
fn name(&self) -> &'static str;
@ -93,6 +159,9 @@ pub trait AccountsDbPlugin: Any + Send + Sync + std::fmt::Debug {
fn on_unload(&mut self) {}
/// Called when an account is updated at a slot.
/// When `is_startup` is true, it indicates the account is loaded from
/// snapshots when the validator starts up. When `is_startup` is false,
/// the account is updated during transaction processing.
#[allow(unused_variables)]
fn update_account(
&mut self,
@ -129,6 +198,12 @@ pub trait AccountsDbPlugin: Any + Send + Sync + std::fmt::Debug {
Ok(())
}
/// Called when block's metadata is updated.
#[allow(unused_variables)]
fn notify_block_metadata(&mut self, blockinfo: ReplicaBlockInfoVersions) -> Result<()> {
Ok(())
}
/// Check if the plugin is interested in account data
/// Default is true -- if the plugin is not interested in
/// account data, please return false.

View File

@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-manager"
description = "The Solana AccountsDb plugin manager."
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -12,19 +12,17 @@ documentation = "https://docs.rs/solana-validator"
[dependencies]
bs58 = "0.4.0"
crossbeam-channel = "0.5"
libloading = "0.7.2"
json5 = "0.4.1"
libloading = "0.7.3"
log = "0.4.11"
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-metrics = { path = "../metrics", version = "=1.9.0" }
solana-rpc = { path = "../rpc", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.0" }
serde_json = "1.0.78"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-metrics = { path = "../metrics", version = "=1.10.0" }
solana-rpc = { path = "../rpc", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
thiserror = "1.0.30"
[package.metadata.docs.rs]

View File

@ -2,12 +2,13 @@ use {
crate::{
accounts_update_notifier::AccountsUpdateNotifierImpl,
accountsdb_plugin_manager::AccountsDbPluginManager,
block_metadata_notifier::BlockMetadataNotifierImpl,
block_metadata_notifier_interface::BlockMetadataNotifierLock,
slot_status_notifier::SlotStatusNotifierImpl, slot_status_observer::SlotStatusObserver,
transaction_notifier::TransactionNotifierImpl,
},
crossbeam_channel::Receiver,
log::*,
serde_json,
solana_rpc::{
optimistically_confirmed_bank_tracker::BankNotification,
transaction_notifier_interface::TransactionNotifierLock,
@ -50,6 +51,7 @@ pub struct AccountsDbPluginService {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
accounts_update_notifier: Option<AccountsUpdateNotifier>,
transaction_notifier: Option<TransactionNotifierLock>,
block_metadata_notifier: Option<BlockMetadataNotifierLock>,
}
impl AccountsDbPluginService {
@ -102,17 +104,24 @@ impl AccountsDbPluginService {
None
};
let slot_status_observer =
if account_data_notifications_enabled || transaction_notifications_enabled {
let slot_status_notifier = SlotStatusNotifierImpl::new(plugin_manager.clone());
let slot_status_notifier = Arc::new(RwLock::new(slot_status_notifier));
let (slot_status_observer, block_metadata_notifier): (
Option<SlotStatusObserver>,
Option<BlockMetadataNotifierLock>,
) = if account_data_notifications_enabled || transaction_notifications_enabled {
let slot_status_notifier = SlotStatusNotifierImpl::new(plugin_manager.clone());
let slot_status_notifier = Arc::new(RwLock::new(slot_status_notifier));
(
Some(SlotStatusObserver::new(
confirmed_bank_receiver,
slot_status_notifier,
))
} else {
None
};
)),
Some(Arc::new(RwLock::new(BlockMetadataNotifierImpl::new(
plugin_manager.clone(),
)))),
)
} else {
(None, None)
};
info!("Started AccountsDbPluginService");
Ok(AccountsDbPluginService {
@ -120,6 +129,7 @@ impl AccountsDbPluginService {
plugin_manager,
accounts_update_notifier,
transaction_notifier,
block_metadata_notifier,
})
}
@ -145,12 +155,12 @@ impl AccountsDbPluginService {
)));
}
let result: serde_json::Value = match serde_json::from_str(&contents) {
let result: serde_json::Value = match json5::from_str(&contents) {
Ok(value) => value,
Err(err) => {
return Err(AccountsdbPluginServiceError::InvalidConfigFileFormat(
format!(
"The config file {:?} is not in a valid Json format, error: {:?}",
"The config file {:?} is not in a valid Json5 format, error: {:?}",
accountsdb_plugin_config_file, err
),
));
@ -186,6 +196,10 @@ impl AccountsDbPluginService {
self.transaction_notifier.clone()
}
pub fn get_block_metadata_notifier(&self) -> Option<BlockMetadataNotifierLock> {
self.block_metadata_notifier.clone()
}
pub fn join(self) -> thread::Result<()> {
if let Some(mut slot_status_observer) = self.slot_status_observer {
slot_status_observer.join()?;
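Since the service now parses the config through json5 (see the hunk above), the config file may contain comments and trailing commas. A minimal sketch of that parsing step, using a made-up config literal:
// A sketch of json5-based config parsing; the config content below is a made-up example.
let contents = r#"{
    // JSON5 allows comments and trailing commas in the plugin config file.
    "libpath": "/path/to/plugin.so",
}"#;
let result: serde_json::Value = json5::from_str(contents).expect("invalid JSON5 config");
assert_eq!(result["libpath"], "/path/to/plugin.so");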

View File

@ -0,0 +1,105 @@
use {
crate::{
accountsdb_plugin_manager::AccountsDbPluginManager,
block_metadata_notifier_interface::BlockMetadataNotifier,
},
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
ReplicaBlockInfo, ReplicaBlockInfoVersions,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_runtime::bank::RewardInfo,
solana_sdk::{clock::UnixTimestamp, pubkey::Pubkey},
solana_transaction_status::{Reward, Rewards},
std::sync::{Arc, RwLock},
};
pub(crate) struct BlockMetadataNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl BlockMetadataNotifier for BlockMetadataNotifierImpl {
/// Notify the block metadata
fn notify_block_metadata(
&self,
slot: u64,
blockhash: &str,
rewards: &RwLock<Vec<(Pubkey, RewardInfo)>>,
block_time: Option<UnixTimestamp>,
block_height: Option<u64>,
) {
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
let rewards = Self::build_rewards(rewards);
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-update-block-metadata");
let block_info =
Self::build_replica_block_info(slot, blockhash, &rewards, block_time, block_height);
let block_info = ReplicaBlockInfoVersions::V0_0_1(&block_info);
match plugin.notify_block_metadata(block_info) {
Err(err) => {
error!(
"Failed to update block metadata at slot {}, error: {} to plugin {}",
slot,
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully updated block metadata at slot {} to plugin {}",
slot,
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-update-block-metadata-us",
measure.as_us() as usize,
1000,
1000
);
}
}
}
impl BlockMetadataNotifierImpl {
fn build_rewards(rewards: &RwLock<Vec<(Pubkey, RewardInfo)>>) -> Rewards {
let rewards = rewards.read().unwrap();
rewards
.iter()
.map(|(pubkey, reward)| Reward {
pubkey: pubkey.to_string(),
lamports: reward.lamports,
post_balance: reward.post_balance,
reward_type: Some(reward.reward_type),
commission: reward.commission,
})
.collect()
}
fn build_replica_block_info<'a>(
slot: u64,
blockhash: &'a str,
rewards: &'a [Reward],
block_time: Option<UnixTimestamp>,
block_height: Option<u64>,
) -> ReplicaBlockInfo<'a> {
ReplicaBlockInfo {
slot,
blockhash,
rewards,
block_time,
block_height,
}
}
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
Self { plugin_manager }
}
}

View File

@ -0,0 +1,20 @@
use {
solana_runtime::bank::RewardInfo,
solana_sdk::{clock::UnixTimestamp, pubkey::Pubkey},
std::sync::{Arc, RwLock},
};
/// Interface for notifying block metadata changes
pub trait BlockMetadataNotifier {
/// Notify the block metadata
fn notify_block_metadata(
&self,
slot: u64,
blockhash: &str,
rewards: &RwLock<Vec<(Pubkey, RewardInfo)>>,
block_time: Option<UnixTimestamp>,
block_height: Option<u64>,
);
}
pub type BlockMetadataNotifierLock = Arc<RwLock<dyn BlockMetadataNotifier + Sync + Send>>;
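A hedged sketch of how a holder of a BlockMetadataNotifierLock would invoke the notifier; the variable names are illustrative and the values would come from the bank at block freeze time:
// Illustrative call site; `notifier` is assumed to be a BlockMetadataNotifierLock obtained
// from AccountsDbPluginService::get_block_metadata_notifier().
let notifier = notifier.read().unwrap();
notifier.notify_block_metadata(slot, &blockhash, &rewards, block_time, block_height);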

View File

@ -1,6 +1,8 @@
pub mod accounts_update_notifier;
pub mod accountsdb_plugin_manager;
pub mod accountsdb_plugin_service;
pub mod block_metadata_notifier;
pub mod block_metadata_notifier_interface;
pub mod slot_status_notifier;
pub mod slot_status_observer;
pub mod transaction_notifier;

View File

@ -8,7 +8,6 @@ use {
solana_measure::measure::Measure,
solana_metrics::*,
solana_rpc::transaction_notifier_interface::TransactionNotifier,
solana_runtime::bank,
solana_sdk::{clock::Slot, signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::TransactionStatusMeta,
std::sync::{Arc, RwLock},
@ -85,7 +84,7 @@ impl TransactionNotifierImpl {
) -> ReplicaTransactionInfo<'a> {
ReplicaTransactionInfo {
signature,
is_vote: bank::is_simple_vote_transaction(transaction),
is_vote: transaction.is_simple_vote_transaction(),
transaction,
transaction_status_meta,
}

View File

@ -1,39 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-postgres"
description = "The Solana AccountsDb plugin for PostgreSQL database."
version = "1.9.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[lib]
crate-type = ["cdylib", "rlib"]
[dependencies]
bs58 = "0.4.0"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.5"
log = "0.4.14"
postgres = { version = "0.19.2", features = ["with-chrono-0_4"] }
postgres-types = { version = "0.2.2", features = ["derive"] }
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-metrics = { path = "../metrics", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.0" }
thiserror = "1.0.30"
tokio-postgres = "0.7.4"
[dev-dependencies]
solana-account-decoder = { path = "../account-decoder", version = "=1.9.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@ -1,5 +0,0 @@
This is an example implementation of the AccountsDb plugin for a PostgreSQL database.
Please see `src/accountsdb_plugin_postgres.rs` for the format of the plugin's configuration file.
To create the schema objects for the database, please use `scripts/create_schema.sql`.
`scripts/drop_schema.sql` can be used to tear down the schema objects.

View File

@ -1,183 +0,0 @@
/**
* This plugin implementation for PostgreSQL requires the following tables
*/
-- The table storing accounts
CREATE TABLE account (
pubkey BYTEA PRIMARY KEY,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- The table storing slot information
CREATE TABLE slot (
slot BIGINT PRIMARY KEY,
parent BIGINT,
status VARCHAR(16) NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- Types for Transactions
Create TYPE "TransactionErrorCode" AS ENUM (
'AccountInUse',
'AccountLoadedTwice',
'AccountNotFound',
'ProgramAccountNotFound',
'InsufficientFundsForFee',
'InvalidAccountForFee',
'AlreadyProcessed',
'BlockhashNotFound',
'InstructionError',
'CallChainTooDeep',
'MissingSignatureForFee',
'InvalidAccountIndex',
'SignatureFailure',
'InvalidProgramForExecution',
'SanitizeFailure',
'ClusterMaintenance',
'AccountBorrowOutstanding',
'WouldExceedMaxAccountCostLimit',
'WouldExceedMaxBlockCostLimit',
'UnsupportedVersion',
'InvalidWritableAccount'
);
CREATE TYPE "TransactionError" AS (
error_code "TransactionErrorCode",
error_detail VARCHAR(256)
);
CREATE TYPE "CompiledInstruction" AS (
program_id_index SMALLINT,
accounts SMALLINT[],
data BYTEA
);
CREATE TYPE "InnerInstructions" AS (
index SMALLINT,
instructions "CompiledInstruction"[]
);
CREATE TYPE "TransactionTokenBalance" AS (
account_index SMALLINT,
mint VARCHAR(44),
ui_token_amount DOUBLE PRECISION,
owner VARCHAR(44)
);
Create TYPE "RewardType" AS ENUM (
'Fee',
'Rent',
'Staking',
'Voting'
);
CREATE TYPE "Reward" AS (
pubkey VARCHAR(44),
lamports BIGINT,
post_balance BIGINT,
reward_type "RewardType",
commission SMALLINT
);
CREATE TYPE "TransactionStatusMeta" AS (
error "TransactionError",
fee BIGINT,
pre_balances BIGINT[],
post_balances BIGINT[],
inner_instructions "InnerInstructions"[],
log_messages TEXT[],
pre_token_balances "TransactionTokenBalance"[],
post_token_balances "TransactionTokenBalance"[],
rewards "Reward"[]
);
CREATE TYPE "TransactionMessageHeader" AS (
num_required_signatures SMALLINT,
num_readonly_signed_accounts SMALLINT,
num_readonly_unsigned_accounts SMALLINT
);
CREATE TYPE "TransactionMessage" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[]
);
CREATE TYPE "AddressMapIndexes" AS (
writable SMALLINT[],
readonly SMALLINT[]
);
CREATE TYPE "TransactionMessageV0" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[],
address_map_indexes "AddressMapIndexes"[]
);
CREATE TYPE "MappedAddresses" AS (
writable BYTEA[],
readonly BYTEA[]
);
CREATE TYPE "MappedMessage" AS (
message "TransactionMessageV0",
mapped_addresses "MappedAddresses"
);
-- The table storing transactions
CREATE TABLE transaction (
slot BIGINT NOT NULL,
signature BYTEA NOT NULL,
is_vote BOOL NOT NULL,
message_type SMALLINT, -- 0: legacy, 1: v0 message
legacy_message "TransactionMessage",
v0_mapped_message "MappedMessage",
signatures BYTEA[],
message_hash BYTEA,
meta "TransactionStatusMeta",
updated_on TIMESTAMP NOT NULL,
CONSTRAINT transaction_pk PRIMARY KEY (slot, signature)
);
/**
* The following is for keeping historical data for accounts and is not required for the plugin to work.
*/
-- The table storing historical data for accounts
CREATE TABLE account_audit (
pubkey BYTEA,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
CREATE INDEX account_audit_account_key ON account_audit (pubkey, write_version);
CREATE FUNCTION audit_account_update() RETURNS trigger AS $audit_account_update$
BEGIN
INSERT INTO account_audit (pubkey, owner, lamports, slot, executable, rent_epoch, data, write_version, updated_on)
VALUES (OLD.pubkey, OLD.owner, OLD.lamports, OLD.slot,
OLD.executable, OLD.rent_epoch, OLD.data, OLD.write_version, OLD.updated_on);
RETURN NEW;
END;
$audit_account_update$ LANGUAGE plpgsql;
CREATE TRIGGER account_update_trigger AFTER UPDATE OR DELETE ON account
FOR EACH ROW EXECUTE PROCEDURE audit_account_update();
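As an illustration of how these tables would be consumed, the following Rust sketch (using the postgres crate, not part of the plugin) looks up the latest lamport balance of an account; the connection string and function name are placeholders:
use postgres::{Client, NoTls};

// Hypothetical reader against the `account` table defined above, for illustration only.
fn fetch_lamports(pubkey: &[u8]) -> Result<Option<i64>, postgres::Error> {
    let mut client = Client::connect("host=localhost user=solana port=5433", NoTls)?;
    let row = client.query_opt(
        "SELECT lamports FROM account WHERE pubkey = $1",
        &[&pubkey],
    )?;
    Ok(row.map(|r| r.get::<_, i64>(0)))
}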

View File

@ -1,25 +0,0 @@
/**
* Script for cleaning up the schema for PostgreSQL used for the AccountsDb plugin.
*/
DROP TRIGGER account_update_trigger ON account;
DROP FUNCTION audit_account_update;
DROP TABLE account_audit;
DROP TABLE account;
DROP TABLE slot;
DROP TABLE transaction;
DROP TYPE "TransactionError" CASCADE;
DROP TYPE "TransactionErrorCode" CASCADE;
DROP TYPE "MappedMessage" CASCADE;
DROP TYPE "MappedAddresses" CASCADE;
DROP TYPE "TransactionMessageV0" CASCADE;
DROP TYPE "AddressMapIndexes" CASCADE;
DROP TYPE "TransactionMessage" CASCADE;
DROP TYPE "TransactionMessageHeader" CASCADE;
DROP TYPE "TransactionStatusMeta" CASCADE;
DROP TYPE "RewardType" CASCADE;
DROP TYPE "Reward" CASCADE;
DROP TYPE "TransactionTokenBalance" CASCADE;
DROP TYPE "InnerInstructions" CASCADE;
DROP TYPE "CompiledInstruction" CASCADE;

View File

@ -1,802 +0,0 @@
# This is a reference configuration file for PostgreSQL version 14.
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: B = bytes Time units: us = microseconds
# kB = kilobytes ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
data_directory = '/var/lib/postgresql/14/main' # use data in another directory
# (change requires restart)
hba_file = '/etc/postgresql/14/main/pg_hba.conf' # host-based authentication file
# (change requires restart)
ident_file = '/etc/postgresql/14/main/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/14-main.pid' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
listen_addresses = '*'
port = 5433 # (change requires restart)
max_connections = 200 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP settings -
# see "man tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;
# 0 selects the system default
#client_connection_check_interval = 0 # time between checks for client
# disconnection while running queries;
# 0 for never
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = scram-sha-256 # scram-sha-256 or md5
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'
#krb_caseins_users = off
# - SSL -
ssl = on
#ssl_ca_file = ''
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
#ssl_crl_file = ''
#ssl_crl_dir = ''
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1.2'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 1GB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#huge_page_size = 0 # zero for system default
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#hash_mem_multiplier = 1.0 # 1-1000.0 multiplier on hash table work_mem
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB # min 64kB
#max_stack_depth = 2MB # min 100kB
#shared_memory_type = mmap # the default is the first option
# supported by the operating system:
# mmap
# sysv
# windows
# (change requires restart)
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# (change requires restart)
#min_dynamic_shared_memory = 0MB # (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kilobytes, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 64
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 2 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#backend_flush_after = 0 # measured in pages, 0 disables
effective_io_concurrency = 1000 # 1-1000; 0 disables prefetching
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel operations
#parallel_leader_participation = on
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = minimal # minimal, replica, or logical
# (change requires restart)
fsync = off # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
synchronous_commit = off # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux and FreeBSD)
# fsync
# fsync_writethrough
# open_sync
full_page_writes = off # recover from partial page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_compression = off # enable compression of full-page writes
#wal_init_zero = on # zero-fill new WAL files
#wal_recycle = on # recycle WAL files
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#wal_skip_threshold = 2MB
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
max_wal_size = 1GB
min_wal_size = 80MB
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = '' # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
# %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
#archive_cleanup_command = '' # command to execute at every restartpoint
#recovery_end_command = '' # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = '' # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = '' # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = '' # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = '' # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the primary and on any standby that will send replication data.
max_wal_senders = 0 # max number of walsender processes
# (change requires restart)
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#wal_keep_size = 0 # in megabytes; 0 disables
#max_slot_wal_keep_size = -1 # in megabytes; -1 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Primary Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a primary server.
#primary_conninfo = '' # connection string to sending server
#primary_slot_name = '' # replication slot on sending server
#promote_trigger_file = '' # file name whose presence ends recovery
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_create_temp_slot = off # create temp slot if primary_slot_name
# is not set
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from primary
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0 # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_async_append = on
#enable_bitmapscan = on
#enable_gathermerge = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_incremental_sort = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_memoize = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_parallel_hash = on
#enable_partition_pruning = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#jit = on # allow JIT compilation
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#plan_cache_mode = auto # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (Windows):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
#log_min_duration_sample = -1 # -1 is disabled, 0 logs a sample of statements
# and their durations, > 0 logs only a sample of
# statements running at least this number
# of milliseconds;
# sample fraction is determined by log_statement_sample_rate
#log_statement_sample_rate = 1.0 # fraction of logged statements exceeding
# log_min_duration_sample to be logged;
# 1.0 logs all such statements, 0.0 never logs
#log_transaction_sample_rate = 0.0 # fraction of transactions whose statements
# are logged regardless of their duration; 1.0 logs all
# statements from all transactions, 0.0 never logs
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_autovacuum_min_duration = -1 # log autovacuum activity;
# -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%m [%p] %q%u@%d ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %b = backend type
# %p = process ID
# %P = process ID of parallel group leader
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %Q = query ID (0 if none or not computed)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_recovery_conflict_waits = off # log standby recovery conflict waits
# >= deadlock_timeout
#log_parameter_max_length = -1 # when logging statements, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_parameter_max_length_on_error = 0 # when logging an error, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Etc/UTC'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
cluster_name = '14/main' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_activity_query_size = 1024 # (change requires restart)
#track_counts = on
#track_io_timing = off
#track_wal_io_timing = off
#track_functions = none # none, pl, all
stats_temp_directory = '/var/run/postgresql/14-main.pg_stat_tmp'
# - Monitoring -
#compute_query_id = auto
#log_statement_stats = off
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
# before vacuum; -1 disables insert
# vacuums
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
# size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_table_access_method = 'heap'
#default_tablespace = '' # a tablespace name, '' uses the default
#default_toast_compression = 'pglz' # 'pglz' or 'lz4'
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#idle_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_table_age = 150000000
#vacuum_freeze_min_age = 50000000
#vacuum_failsafe_age = 1600000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_failsafe_age = 1600000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Etc/UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1 # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C.UTF-8' # locale for system error message
# strings
lc_monetary = 'C.UTF-8' # locale for monetary formatting
lc_numeric = 'C.UTF-8' # locale for number formatting
lc_time = 'C.UTF-8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#local_preload_libraries = ''
#session_preload_libraries = ''
#shared_preload_libraries = '' # (change requires restart)
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#extension_destdir = '' # prepend path when loading extensions
# and shared objects (added by Debian)
#gin_fuzzy_search_limit = 0
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#recovery_init_sync_method = fsync # fsync, syncfs (Linux 5.8+)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
include_dir = 'conf.d' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

View File

@ -1,74 +0,0 @@
use {log::*, std::collections::HashSet};
#[derive(Debug)]
pub(crate) struct AccountsSelector {
pub accounts: HashSet<Vec<u8>>,
pub owners: HashSet<Vec<u8>>,
pub select_all_accounts: bool,
}
impl AccountsSelector {
pub fn default() -> Self {
AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts: true,
}
}
pub fn new(accounts: &[String], owners: &[String]) -> Self {
info!(
"Creating AccountsSelector from accounts: {:?}, owners: {:?}",
accounts, owners
);
let select_all_accounts = accounts.iter().any(|key| key == "*");
if select_all_accounts {
return AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts,
};
}
let accounts = accounts
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
let owners = owners
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
AccountsSelector {
accounts,
owners,
select_all_accounts,
}
}
pub fn is_account_selected(&self, account: &[u8], owner: &[u8]) -> bool {
self.select_all_accounts || self.accounts.contains(account) || self.owners.contains(owner)
}
/// Check if any account is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_accounts || !self.accounts.is_empty() || !self.owners.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_create_accounts_selector() {
AccountsSelector::new(
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
&[],
);
AccountsSelector::new(
&[],
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
);
}
}
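A small usage sketch of AccountsSelector, mirroring the test above; the key is the same example pubkey used in the tests and the owner bytes are arbitrary:
// Selecting by account pubkey: the selector holds bs58-decoded keys, so lookups are done
// on raw bytes.
let selector = AccountsSelector::new(
    &["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
    &[],
);
let pubkey = bs58::decode("9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin")
    .into_vec()
    .unwrap();
assert!(selector.is_account_selected(&pubkey, b"arbitrary-owner"));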

View File

@ -1,437 +0,0 @@
use solana_measure::measure::Measure;
/// Main entry for the PostgreSQL plugin
use {
crate::{
accounts_selector::AccountsSelector,
postgres_client::{ParallelPostgresClient, PostgresClientBuilder},
transaction_selector::TransactionSelector,
},
bs58,
log::*,
serde_derive::{Deserialize, Serialize},
serde_json,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPlugin, AccountsDbPluginError, ReplicaAccountInfoVersions,
ReplicaTransactionInfoVersions, Result, SlotStatus,
},
solana_metrics::*,
std::{fs::File, io::Read},
thiserror::Error,
};
#[derive(Default)]
pub struct AccountsDbPluginPostgres {
client: Option<ParallelPostgresClient>,
accounts_selector: Option<AccountsSelector>,
transaction_selector: Option<TransactionSelector>,
}
impl std::fmt::Debug for AccountsDbPluginPostgres {
fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct AccountsDbPluginPostgresConfig {
pub host: Option<String>,
pub user: Option<String>,
pub port: Option<u16>,
pub connection_str: Option<String>,
pub threads: Option<usize>,
pub batch_size: Option<usize>,
pub panic_on_db_errors: Option<bool>,
}
#[derive(Error, Debug)]
pub enum AccountsDbPluginPostgresError {
#[error("Error connecting to the backend data store. Error message: ({msg})")]
DataStoreConnectionError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
DataSchemaError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
ConfigurationError { msg: String },
}
impl AccountsDbPlugin for AccountsDbPluginPostgres {
fn name(&self) -> &'static str {
"AccountsDbPluginPostgres"
}
/// Do initialization for the PostgreSQL plugin.
///
/// # Format of the config file:
/// * The `accounts_selector` section allows the user to control account selection.
/// "accounts_selector" : {
/// "accounts" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// or:
/// "accounts_selector" = {
/// "owners" : \["pubkey-1", "pubkey-2", ..., "pubkey-m"\]
/// }
/// Accounts satisfying either the accounts condition or the owners condition will be selected.
/// When only owners is specified,
/// all accounts belonging to the owners will be streamed.
/// The accounts field supports a wildcard to select all accounts:
/// "accounts_selector" : {
/// "accounts" : \["*"\],
/// }
/// * "host", optional, specifies the PostgreSQL server.
/// * "user", optional, specifies the PostgreSQL user.
/// * "port", optional, specifies the PostgreSQL server's port.
/// * "connection_str", optional, the custom PostgreSQL connection string.
/// Please refer to https://docs.rs/postgres/0.19.2/postgres/config/struct.Config.html for the connection configuration.
/// When `connection_str` is set, the values in "host", "user" and "port" are ignored. If `connection_str` is not given,
/// `host` and `user` must be given.
/// * "threads" optional, specifies the number of worker threads for the plugin. A thread
/// maintains a PostgreSQL connection to the server. The default is '10'.
/// * "batch_size" optional, specifies the batch size of bulk insert when the AccountsDb is created
/// from restoring a snapshot. The default is '10'.
/// * "panic_on_db_errors", optional, contols if to panic when there are errors replicating data to the
/// PostgreSQL database. The default is 'false'.
/// * "transaction_selector", optional, controls if and what transaction to store. If this field is missing
/// None of the transction is stored.
/// "transaction_selector" : {
/// "mentions" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// The `mentions` field supports wildcards to select all transactions or all 'vote' transactions:
/// For example, to select all transactions:
/// "transaction_selector" : {
/// "mentions" : \["*"\],
/// }
/// To select all vote transactions:
/// "transaction_selector" : {
/// "mentions" : \["all_votes"\],
/// }
/// # Examples
///
/// {
/// "libpath": "/home/solana/target/release/libsolana_accountsdb_plugin_postgres.so",
/// "host": "host_foo",
/// "user": "solana",
/// "threads": 10,
/// "accounts_selector" : {
/// "owners" : ["9oT9R5ZyRovSVnt37QvVoBttGpNqR3J7unkb567NP8k3"]
/// }
/// }
fn on_load(&mut self, config_file: &str) -> Result<()> {
solana_logger::setup_with_default("info");
info!(
"Loading plugin {:?} from config_file {:?}",
self.name(),
config_file
);
let mut file = File::open(config_file)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let result: serde_json::Value = serde_json::from_str(&contents).unwrap();
self.accounts_selector = Some(Self::create_accounts_selector_from_config(&result));
self.transaction_selector = Some(Self::create_transaction_selector_from_config(&result));
let result: serde_json::Result<AccountsDbPluginPostgresConfig> =
serde_json::from_str(&contents);
match result {
Err(err) => {
return Err(AccountsDbPluginError::ConfigFileReadError {
msg: format!(
"The config file is not in the JSON format expected: {:?}",
err
),
})
}
Ok(config) => {
let client = PostgresClientBuilder::build_pararallel_postgres_client(&config)?;
self.client = Some(client);
}
}
Ok(())
}
fn on_unload(&mut self) {
info!("Unloading plugin: {:?}", self.name());
match &mut self.client {
None => {}
Some(client) => {
client.join().unwrap();
}
}
}
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()> {
let mut measure_all = Measure::start("accountsdb-plugin-postgres-update-account-main");
match account {
ReplicaAccountInfoVersions::V0_0_1(account) => {
let mut measure_select =
Measure::start("accountsdb-plugin-postgres-update-account-select");
if let Some(accounts_selector) = &self.accounts_selector {
if !accounts_selector.is_account_selected(account.pubkey, account.owner) {
return Ok(());
}
} else {
return Ok(());
}
measure_select.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-select-us",
measure_select.as_us() as usize,
100000,
100000
);
debug!(
"Updating account {:?} with owner {:?} at slot {:?} using account selector {:?}",
bs58::encode(account.pubkey).into_string(),
bs58::encode(account.owner).into_string(),
slot,
self.accounts_selector.as_ref().unwrap()
);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database."
.to_string(),
},
)));
}
Some(client) => {
let mut measure_update =
Measure::start("accountsdb-plugin-postgres-update-account-client");
let result = { client.update_account(account, slot, is_startup) };
measure_update.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-client-us",
measure_update.as_us() as usize,
100000,
100000
);
if let Err(err) = result {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!("Failed to persist the update of account to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
}
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-main-us",
measure_all.as_us() as usize,
100000,
100000
);
Ok(())
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()> {
info!("Updating slot {:?} at with status {:?}", slot, status);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.update_slot_status(slot, parent, status);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the update of slot to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<()> {
info!("Notifying the end of startup for accounts notifications");
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.notify_end_of_startup();
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to notify the end of startup for accounts notifications. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_transaction(
&mut self,
transaction_info: ReplicaTransactionInfoVersions,
slot: u64,
) -> Result<()> {
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => match transaction_info {
ReplicaTransactionInfoVersions::V0_0_1(transaction_info) => {
if let Some(transaction_selector) = &self.transaction_selector {
if !transaction_selector.is_transaction_selected(
transaction_info.is_vote,
transaction_info.transaction.message().account_keys_iter(),
) {
return Ok(());
}
} else {
return Ok(());
}
let result = client.log_transaction_info(transaction_info, slot);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the transaction info to the PostgreSQL database. Error: {:?}", err)
});
}
}
},
}
Ok(())
}
/// Check if the plugin is interested in account data.
/// The default is true -- if the plugin is not interested in
/// account data, please return false.
fn account_data_notifications_enabled(&self) -> bool {
self.accounts_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
/// Check if the plugin is interested in transaction data
fn transaction_notifications_enabled(&self) -> bool {
self.transaction_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
}
impl AccountsDbPluginPostgres {
fn create_accounts_selector_from_config(config: &serde_json::Value) -> AccountsSelector {
let accounts_selector = &config["accounts_selector"];
if accounts_selector.is_null() {
AccountsSelector::default()
} else {
let accounts = &accounts_selector["accounts"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
let owners = &accounts_selector["owners"];
let owners: Vec<String> = if owners.is_array() {
owners
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
AccountsSelector::new(&accounts, &owners)
}
}
fn create_transaction_selector_from_config(config: &serde_json::Value) -> TransactionSelector {
let transaction_selector = &config["transaction_selector"];
if transaction_selector.is_null() {
TransactionSelector::default()
} else {
let accounts = &transaction_selector["mentions"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
TransactionSelector::new(&accounts)
}
}
pub fn new() -> Self {
Self::default()
}
}
#[no_mangle]
#[allow(improper_ctypes_definitions)]
/// # Safety
///
/// This function returns the AccountsDbPluginPostgres pointer as trait AccountsDbPlugin.
pub unsafe extern "C" fn _create_plugin() -> *mut dyn AccountsDbPlugin {
let plugin = AccountsDbPluginPostgres::new();
let plugin: Box<dyn AccountsDbPlugin> = Box::new(plugin);
Box::into_raw(plugin)
}
#[cfg(test)]
pub(crate) mod tests {
use {super::*, serde_json};
#[test]
fn test_accounts_selector_from_config() {
let config = "{\"accounts_selector\" : { \
\"owners\" : [\"9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin\"] \
}}";
let config: serde_json::Value = serde_json::from_str(config).unwrap();
AccountsDbPluginPostgres::create_accounts_selector_from_config(&config);
}
}
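For context, the validator side (the plugin manager, which depends on libloading) would load _create_plugin roughly as in the sketch below; error handling is simplified and `load_plugin` is a hypothetical helper, not the actual manager code:
use libloading::{Library, Symbol};
use solana_accountsdb_plugin_interface::accountsdb_plugin_interface::AccountsDbPlugin;

// A hedged sketch of dynamically loading a plugin shared object.
unsafe fn load_plugin(libpath: &str) -> (Library, Box<dyn AccountsDbPlugin>) {
    let lib = Library::new(libpath).expect("failed to load the plugin library");
    let constructor: Symbol<unsafe extern "C" fn() -> *mut dyn AccountsDbPlugin> =
        lib.get(b"_create_plugin").expect("_create_plugin symbol not found");
    let plugin = Box::from_raw(constructor());
    // Return the Library alongside the plugin so the shared object stays mapped while in use.
    (lib, plugin)
}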

View File

@ -1,4 +0,0 @@
pub mod accounts_selector;
pub mod accountsdb_plugin_postgres;
pub mod postgres_client;
pub mod transaction_selector;

View File

@ -1,905 +0,0 @@
#![allow(clippy::integer_arithmetic)]
mod postgres_client_transaction;
/// A concurrent implementation for writing accounts into PostgreSQL in parallel.
use {
crate::accountsdb_plugin_postgres::{
AccountsDbPluginPostgresConfig, AccountsDbPluginPostgresError,
},
chrono::Utc,
crossbeam_channel::{bounded, Receiver, RecvTimeoutError, Sender},
log::*,
postgres::{Client, NoTls, Statement},
postgres_client_transaction::LogTransactionRequest,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPluginError, ReplicaAccountInfo, SlotStatus,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_sdk::timing::AtomicInterval,
std::{
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
Arc, Mutex,
},
thread::{self, sleep, Builder, JoinHandle},
time::Duration,
},
tokio_postgres::types,
};
/// The maximum number of asynchronous requests allowed in the channel, to avoid excessive
/// memory usage. The downside: calls after this threshold is reached can get blocked.
const MAX_ASYNC_REQUESTS: usize = 40960;
const DEFAULT_POSTGRES_PORT: u16 = 5432;
const DEFAULT_THREADS_COUNT: usize = 100;
const DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE: usize = 10;
const ACCOUNT_COLUMN_COUNT: usize = 9;
const DEFAULT_PANIC_ON_DB_ERROR: bool = false;
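// Illustrative sketch (not from the original source): the bounded crossbeam
// channel used below is what provides the back-pressure described for
// MAX_ASYNC_REQUESTS; once the buffer is full, send() blocks until a worker
// drains an item.
//
//     use crossbeam_channel::bounded;
//     let (sender, receiver) = bounded::<u64>(2);
//     sender.send(1).unwrap();
//     sender.send(2).unwrap();
//     // A third send() would block here until receiver.recv() frees a slot.
//     assert_eq!(receiver.recv().unwrap(), 1);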
struct PostgresSqlClientWrapper {
client: Client,
update_account_stmt: Statement,
bulk_account_insert_stmt: Statement,
update_slot_with_parent_stmt: Statement,
update_slot_without_parent_stmt: Statement,
update_transaction_log_stmt: Statement,
}
pub struct SimplePostgresClient {
batch_size: usize,
pending_account_updates: Vec<DbAccountInfo>,
client: Mutex<PostgresSqlClientWrapper>,
}
struct PostgresClientWorker {
client: SimplePostgresClient,
/// Indicates whether the accounts notification during startup is done.
is_startup_done: bool,
}
impl Eq for DbAccountInfo {}
#[derive(Clone, PartialEq, Debug)]
pub struct DbAccountInfo {
pub pubkey: Vec<u8>,
pub lamports: i64,
pub owner: Vec<u8>,
pub executable: bool,
pub rent_epoch: i64,
pub data: Vec<u8>,
pub slot: i64,
pub write_version: i64,
}
pub(crate) fn abort() -> ! {
#[cfg(not(test))]
{
// standard error is usually redirected to a log file, cry for help on standard output as
// well
eprintln!("Validator process aborted. The validator log may contain further details");
std::process::exit(1);
}
#[cfg(test)]
panic!("process::exit(1) is intercepted for friendly test failure...");
}
impl DbAccountInfo {
fn new<T: ReadableAccountInfo>(account: &T, slot: u64) -> DbAccountInfo {
let data = account.data().to_vec();
Self {
pubkey: account.pubkey().to_vec(),
lamports: account.lamports() as i64,
owner: account.owner().to_vec(),
executable: account.executable(),
rent_epoch: account.rent_epoch() as i64,
data,
slot: slot as i64,
write_version: account.write_version(),
}
}
}
pub trait ReadableAccountInfo: Sized {
fn pubkey(&self) -> &[u8];
fn owner(&self) -> &[u8];
fn lamports(&self) -> i64;
fn executable(&self) -> bool;
fn rent_epoch(&self) -> i64;
fn data(&self) -> &[u8];
fn write_version(&self) -> i64;
}
impl ReadableAccountInfo for DbAccountInfo {
fn pubkey(&self) -> &[u8] {
&self.pubkey
}
fn owner(&self) -> &[u8] {
&self.owner
}
fn lamports(&self) -> i64 {
self.lamports
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch
}
fn data(&self) -> &[u8] {
&self.data
}
fn write_version(&self) -> i64 {
self.write_version
}
}
impl<'a> ReadableAccountInfo for ReplicaAccountInfo<'a> {
fn pubkey(&self) -> &[u8] {
self.pubkey
}
fn owner(&self) -> &[u8] {
self.owner
}
fn lamports(&self) -> i64 {
self.lamports as i64
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch as i64
}
fn data(&self) -> &[u8] {
self.data
}
fn write_version(&self) -> i64 {
self.write_version as i64
}
}
pub trait PostgresClient {
fn join(&mut self) -> thread::Result<()> {
Ok(())
}
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError>;
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError>;
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError>;
fn log_transaction(
&mut self,
transaction_log_info: LogTransactionRequest,
) -> Result<(), AccountsDbPluginError>;
}
impl SimplePostgresClient {
fn connect_to_db(
config: &AccountsDbPluginPostgresConfig,
) -> Result<Client, AccountsDbPluginError> {
let port = config.port.unwrap_or(DEFAULT_POSTGRES_PORT);
let connection_str = if let Some(connection_str) = &config.connection_str {
connection_str.clone()
} else {
if config.host.is_none() || config.user.is_none() {
let msg = format!(
"\"connection_str\": {:?}, or \"host\": {:?} \"user\": {:?} must be specified",
config.connection_str, config.host, config.user
);
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::ConfigurationError { msg },
)));
}
format!(
"host={} user={} port={}",
config.host.as_ref().unwrap(),
config.user.as_ref().unwrap(),
port
)
};
match Client::connect(&connection_str, NoTls) {
Err(err) => {
let msg = format!(
"Error in connecting to the PostgreSQL database: {:?} connection_str: {:?}",
err, connection_str
);
error!("{}", msg);
Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError { msg },
)))
}
Ok(client) => Ok(client),
}
}
fn build_bulk_account_insert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
let mut stmt = String::from("INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) VALUES");
for j in 0..batch_size {
let row = j * ACCOUNT_COLUMN_COUNT;
let val_str = format!(
"(${}, ${}, ${}, ${}, ${}, ${}, ${}, ${}, ${})",
row + 1,
row + 2,
row + 3,
row + 4,
row + 5,
row + 6,
row + 7,
row + 8,
row + 9,
);
if j == 0 {
stmt = format!("{} {}", &stmt, val_str);
} else {
stmt = format!("{}, {}", &stmt, val_str);
}
}
let handle_conflict = "ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
stmt = format!("{} {}", stmt, handle_conflict);
info!("{}", stmt);
let bulk_stmt = client.prepare(&stmt);
match bulk_stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
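    // Illustrative expansion (derived from the statement builder above, not
    // emitted verbatim anywhere): with a batch_size of 2 the prepared statement
    // becomes
    //
    //     INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable,
    //         rent_epoch, data, write_version, updated_on) VALUES
    //     ($1, $2, $3, $4, $5, $6, $7, $8, $9),
    //     ($10, $11, $12, $13, $14, $15, $16, $17, $18)
    //     ON CONFLICT (pubkey) DO UPDATE SET ... updated_on=excluded.updated_on
    //         WHERE acct.slot < excluded.slot
    //         OR (acct.slot = excluded.slot AND acct.write_version < excluded.write_version)
    //
    // so a row is only overwritten by data from a newer slot, or from the same
    // slot with a higher write_version.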
fn build_single_account_upsert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) \
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9) \
ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
fn build_slot_upsert_statement_with_parent(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO slot (slot, parent, status, updated_on) \
VALUES ($1, $2, $3, $4) \
ON CONFLICT (slot) DO UPDATE SET parent=excluded.parent, status=excluded.status, updated_on=excluded.updated_on";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the slot update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(stmt) => Ok(stmt),
}
}
fn build_slot_upsert_statement_without_parent(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO slot (slot, status, updated_on) \
VALUES ($1, $2, $3) \
ON CONFLICT (slot) DO UPDATE SET status=excluded.status, updated_on=excluded.updated_on";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the slot update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(stmt) => Ok(stmt),
}
}
/// Internal function for updating or inserting a single account
fn upsert_account_internal(
account: &DbAccountInfo,
statement: &Statement,
client: &mut Client,
) -> Result<(), AccountsDbPluginError> {
let lamports = account.lamports() as i64;
let rent_epoch = account.rent_epoch() as i64;
let updated_on = Utc::now().naive_utc();
let result = client.query(
statement,
&[
&account.pubkey(),
&account.slot,
&account.owner(),
&lamports,
&account.executable(),
&rent_epoch,
&account.data(),
&account.write_version(),
&updated_on,
],
);
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
Ok(())
}
/// Update or insert a single account
fn upsert_account(&mut self, account: &DbAccountInfo) -> Result<(), AccountsDbPluginError> {
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
Self::upsert_account_internal(account, statement, client)
}
/// Insert accounts in batches to reduce network overhead
fn insert_accounts_in_batch(
&mut self,
account: DbAccountInfo,
) -> Result<(), AccountsDbPluginError> {
self.pending_account_updates.push(account);
if self.pending_account_updates.len() == self.batch_size {
let mut measure = Measure::start("accountsdb-plugin-postgres-prepare-values");
let mut values: Vec<&(dyn types::ToSql + Sync)> =
Vec::with_capacity(self.batch_size * ACCOUNT_COLUMN_COUNT);
let updated_on = Utc::now().naive_utc();
for j in 0..self.batch_size {
let account = &self.pending_account_updates[j];
values.push(&account.pubkey);
values.push(&account.slot);
values.push(&account.owner);
values.push(&account.lamports);
values.push(&account.executable);
values.push(&account.rent_epoch);
values.push(&account.data);
values.push(&account.write_version);
values.push(&updated_on);
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-prepare-values-us",
measure.as_us() as usize,
10000,
10000
);
let mut measure = Measure::start("accountsdb-plugin-postgres-update-account");
let client = self.client.get_mut().unwrap();
let result = client
.client
.query(&client.bulk_account_insert_stmt, &values);
self.pending_account_updates.clear();
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-us",
measure.as_us() as usize,
10000,
10000
);
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-count",
self.batch_size,
10000,
10000
);
}
Ok(())
}
/// Flush any leftover accounts in the batch that were not processed in the last batch
fn flush_buffered_writes(&mut self) -> Result<(), AccountsDbPluginError> {
if self.pending_account_updates.is_empty() {
return Ok(());
}
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
for account in self.pending_account_updates.drain(..) {
Self::upsert_account_internal(&account, statement, client)?;
}
Ok(())
}
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating SimplePostgresClient...");
let mut client = Self::connect_to_db(config)?;
let bulk_account_insert_stmt =
Self::build_bulk_account_insert_statement(&mut client, config)?;
let update_account_stmt = Self::build_single_account_upsert_statement(&mut client, config)?;
let update_slot_with_parent_stmt =
Self::build_slot_upsert_statement_with_parent(&mut client, config)?;
let update_slot_without_parent_stmt =
Self::build_slot_upsert_statement_without_parent(&mut client, config)?;
let update_transaction_log_stmt =
Self::build_transaction_info_upsert_statement(&mut client, config)?;
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
info!("Created SimplePostgresClient.");
Ok(Self {
batch_size,
pending_account_updates: Vec::with_capacity(batch_size),
client: Mutex::new(PostgresSqlClientWrapper {
client,
update_account_stmt,
bulk_account_insert_stmt,
update_slot_with_parent_stmt,
update_slot_without_parent_stmt,
update_transaction_log_stmt,
}),
})
}
}
impl PostgresClient for SimplePostgresClient {
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
trace!(
"Updating account {} with owner {} at slot {}",
bs58::encode(account.pubkey()).into_string(),
bs58::encode(account.owner()).into_string(),
account.slot,
);
if !is_startup {
return self.upsert_account(&account);
}
self.insert_accounts_in_batch(account)
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
info!("Updating slot {:?} at with status {:?}", slot, status);
let slot = slot as i64; // postgres only supports i64
let parent = parent.map(|parent| parent as i64);
let updated_on = Utc::now().naive_utc();
let status_str = status.as_str();
let client = self.client.get_mut().unwrap();
let result = match parent {
Some(parent) => client.client.execute(
&client.update_slot_with_parent_stmt,
&[&slot, &parent, &status_str, &updated_on],
),
None => client.client.execute(
&client.update_slot_without_parent_stmt,
&[&slot, &status_str, &updated_on],
),
};
match result {
Err(err) => {
let msg = format!(
"Failed to persist the update of slot to the PostgreSQL database. Error: {:?}",
err
);
error!("{:?}", msg);
return Err(AccountsDbPluginError::SlotStatusUpdateError { msg });
}
Ok(rows) => {
assert_eq!(1, rows, "Expected one row to be updated at a time");
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
self.flush_buffered_writes()
}
fn log_transaction(
&mut self,
transaction_log_info: LogTransactionRequest,
) -> Result<(), AccountsDbPluginError> {
self.log_transaction_impl(transaction_log_info)
}
}
struct UpdateAccountRequest {
account: DbAccountInfo,
is_startup: bool,
}
struct UpdateSlotRequest {
slot: u64,
parent: Option<u64>,
slot_status: SlotStatus,
}
#[warn(clippy::large_enum_variant)]
enum DbWorkItem {
UpdateAccount(Box<UpdateAccountRequest>),
UpdateSlot(Box<UpdateSlotRequest>),
LogTransaction(Box<LogTransactionRequest>),
}
impl PostgresClientWorker {
fn new(config: AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
let result = SimplePostgresClient::new(&config);
match result {
Ok(client) => Ok(PostgresClientWorker {
client,
is_startup_done: false,
}),
Err(err) => {
error!("Error in creating SimplePostgresClient: {}", err);
Err(err)
}
}
}
fn do_work(
&mut self,
receiver: Receiver<DbWorkItem>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
panic_on_db_errors: bool,
) -> Result<(), AccountsDbPluginError> {
while !exit_worker.load(Ordering::Relaxed) {
let mut measure = Measure::start("accountsdb-plugin-postgres-worker-recv");
let work = receiver.recv_timeout(Duration::from_millis(500));
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-worker-recv-us",
measure.as_us() as usize,
100000,
100000
);
match work {
Ok(work) => match work {
DbWorkItem::UpdateAccount(request) => {
if let Err(err) = self
.client
.update_account(request.account, request.is_startup)
{
error!("Failed to update account: ({})", err);
if panic_on_db_errors {
abort();
}
}
}
DbWorkItem::UpdateSlot(request) => {
if let Err(err) = self.client.update_slot_status(
request.slot,
request.parent,
request.slot_status,
) {
error!("Failed to update slot: ({})", err);
if panic_on_db_errors {
abort();
}
}
}
DbWorkItem::LogTransaction(transaction_log_info) => {
self.client.log_transaction(*transaction_log_info)?;
}
},
Err(err) => match err {
RecvTimeoutError::Timeout => {
if !self.is_startup_done && is_startup_done.load(Ordering::Relaxed) {
if let Err(err) = self.client.notify_end_of_startup() {
error!("Error in notifying end of startup: ({})", err);
if panic_on_db_errors {
abort();
}
}
self.is_startup_done = true;
startup_done_count.fetch_add(1, Ordering::Relaxed);
}
continue;
}
_ => {
error!("Error in receiving the item {:?}", err);
if panic_on_db_errors {
abort();
}
break;
}
},
}
}
Ok(())
}
}
pub struct ParallelPostgresClient {
workers: Vec<JoinHandle<Result<(), AccountsDbPluginError>>>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
initialized_worker_count: Arc<AtomicUsize>,
sender: Sender<DbWorkItem>,
last_report: AtomicInterval,
}
impl ParallelPostgresClient {
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating ParallelPostgresClient...");
let (sender, receiver) = bounded(MAX_ASYNC_REQUESTS);
let exit_worker = Arc::new(AtomicBool::new(false));
let mut workers = Vec::default();
let is_startup_done = Arc::new(AtomicBool::new(false));
let startup_done_count = Arc::new(AtomicUsize::new(0));
let worker_count = config.threads.unwrap_or(DEFAULT_THREADS_COUNT);
let initialized_worker_count = Arc::new(AtomicUsize::new(0));
for i in 0..worker_count {
let cloned_receiver = receiver.clone();
let exit_clone = exit_worker.clone();
let is_startup_done_clone = is_startup_done.clone();
let startup_done_count_clone = startup_done_count.clone();
let initialized_worker_count_clone = initialized_worker_count.clone();
let config = config.clone();
let worker = Builder::new()
.name(format!("worker-{}", i))
.spawn(move || -> Result<(), AccountsDbPluginError> {
let panic_on_db_errors = *config
.panic_on_db_errors
.as_ref()
.unwrap_or(&DEFAULT_PANIC_ON_DB_ERROR);
let result = PostgresClientWorker::new(config);
match result {
Ok(mut worker) => {
initialized_worker_count_clone.fetch_add(1, Ordering::Relaxed);
worker.do_work(
cloned_receiver,
exit_clone,
is_startup_done_clone,
startup_done_count_clone,
panic_on_db_errors,
)?;
Ok(())
}
Err(err) => {
error!("Error when making connection to database: ({})", err);
if panic_on_db_errors {
abort();
}
Err(err)
}
}
})
.unwrap();
workers.push(worker);
}
info!("Created ParallelPostgresClient.");
Ok(Self {
last_report: AtomicInterval::default(),
workers,
exit_worker,
is_startup_done,
startup_done_count,
initialized_worker_count,
sender,
})
}
pub fn join(&mut self) -> thread::Result<()> {
self.exit_worker.store(true, Ordering::Relaxed);
while !self.workers.is_empty() {
let worker = self.workers.pop();
if worker.is_none() {
break;
}
let worker = worker.unwrap();
let result = worker.join().unwrap();
if result.is_err() {
error!("The worker thread has failed: {:?}", result);
}
}
Ok(())
}
pub fn update_account(
&mut self,
account: &ReplicaAccountInfo,
slot: u64,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
if self.last_report.should_update(30000) {
datapoint_debug!(
"postgres-plugin-stats",
("message-queue-length", self.sender.len() as i64, i64),
);
}
let mut measure = Measure::start("accountsdb-plugin-posgres-create-work-item");
let wrk_item = DbWorkItem::UpdateAccount(Box::new(UpdateAccountRequest {
account: DbAccountInfo::new(account, slot),
is_startup,
}));
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-create-work-item-us",
measure.as_us() as usize,
100000,
100000
);
let mut measure = Measure::start("accountsdb-plugin-posgres-send-msg");
if let Err(err) = self.sender.send(wrk_item) {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!(
"Failed to update the account {:?}, error: {:?}",
bs58::encode(account.pubkey()).into_string(),
err
),
});
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-send-msg-us",
measure.as_us() as usize,
100000,
100000
);
Ok(())
}
pub fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
if let Err(err) = self
.sender
.send(DbWorkItem::UpdateSlot(Box::new(UpdateSlotRequest {
slot,
parent,
slot_status: status,
})))
{
return Err(AccountsDbPluginError::SlotStatusUpdateError {
msg: format!("Failed to update the slot {:?}, error: {:?}", slot, err),
});
}
Ok(())
}
pub fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
info!("Notifying the end of startup");
// Ensure all items in the queue have been received by the workers
while !self.sender.is_empty() {
sleep(Duration::from_millis(100));
}
self.is_startup_done.store(true, Ordering::Relaxed);
// Wait for all worker threads to be done with flushing
while self.startup_done_count.load(Ordering::Relaxed)
!= self.initialized_worker_count.load(Ordering::Relaxed)
{
info!(
"Startup done count: {}, good worker thread count: {}",
self.startup_done_count.load(Ordering::Relaxed),
self.initialized_worker_count.load(Ordering::Relaxed)
);
sleep(Duration::from_millis(100));
}
info!("Done with notifying the end of startup");
Ok(())
}
}
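// Minimal usage sketch (assumed caller flow, not part of the original file):
// during startup the accounts are streamed with is_startup = true so the
// workers batch them, and notify_end_of_startup() then flushes the remaining
// batches and waits for every initialized worker to finish.
// `config` and `startup_accounts` are placeholders here.
//
//     let mut client = ParallelPostgresClient::new(&config)?;
//     for (account, slot) in startup_accounts {
//         client.update_account(&account, slot, /*is_startup=*/ true)?;
//     }
//     client.notify_end_of_startup()?;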
pub struct PostgresClientBuilder {}
impl PostgresClientBuilder {
pub fn build_pararallel_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<ParallelPostgresClient, AccountsDbPluginError> {
ParallelPostgresClient::new(config)
}
pub fn build_simple_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<SimplePostgresClient, AccountsDbPluginError> {
SimplePostgresClient::new(config)
}
}

View File

@ -1,194 +0,0 @@
/// The transaction selector is responsible for filtering transactions
/// in the plugin framework.
use {log::*, solana_sdk::pubkey::Pubkey, std::collections::HashSet};
pub(crate) struct TransactionSelector {
pub mentioned_addresses: HashSet<Vec<u8>>,
pub select_all_transactions: bool,
pub select_all_vote_transactions: bool,
}
#[allow(dead_code)]
impl TransactionSelector {
pub fn default() -> Self {
Self {
mentioned_addresses: HashSet::default(),
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Create a selector based on the mentioned addresses
/// To select all transactions use ["*"] or ["all"]
/// To select all vote transactions, use ["all_votes"]
/// To select transactions mentioning specific addresses use ["<pubkey1>", "<pubkey2>", ...]
pub fn new(mentioned_addresses: &[String]) -> Self {
info!(
"Creating TransactionSelector from addresses: {:?}",
mentioned_addresses
);
let select_all_transactions = mentioned_addresses
.iter()
.any(|key| key == "*" || key == "all");
if select_all_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let select_all_vote_transactions = mentioned_addresses.iter().any(|key| key == "all_votes");
if select_all_vote_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let mentioned_addresses = mentioned_addresses
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
Self {
mentioned_addresses,
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Check if a transaction is of interest.
pub fn is_transaction_selected(
&self,
is_vote: bool,
mentioned_addresses: Box<dyn Iterator<Item = &Pubkey> + '_>,
) -> bool {
if !self.is_enabled() {
return false;
}
if self.select_all_transactions || (self.select_all_vote_transactions && is_vote) {
return true;
}
for address in mentioned_addresses {
if self.mentioned_addresses.contains(address.as_ref()) {
return true;
}
}
false
}
/// Check if any transaction is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_transactions
|| self.select_all_vote_transactions
|| !self.mentioned_addresses.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_select_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[pubkey1.to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_using_wildcard() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["*".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_all() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_vote_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all_votes".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
#[test]
fn test_select_no_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[]);
assert!(!selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
}

View File

@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-banking-bench"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -14,17 +14,17 @@ crossbeam-channel = "0.5"
log = "0.4.14"
rand = "0.7.0"
rayon = "1.5.1"
solana-core = { path = "../core", version = "=1.9.0" }
solana-gossip = { path = "../gossip", version = "=1.9.0" }
solana-ledger = { path = "../ledger", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-perf = { path = "../perf", version = "=1.9.0" }
solana-poh = { path = "../poh", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-streamer = { path = "../streamer", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
solana-core = { path = "../core", version = "=1.10.0" }
solana-gossip = { path = "../gossip", version = "=1.10.0" }
solana-ledger = { path = "../ledger", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-perf = { path = "../perf", version = "=1.10.0" }
solana-poh = { path = "../poh", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@ -1,7 +1,7 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, value_t, App, Arg},
crossbeam_channel::unbounded,
crossbeam_channel::{unbounded, Receiver},
log::*,
rand::{thread_rng, Rng},
rayon::prelude::*,
@ -13,7 +13,7 @@ use {
get_tmp_ledger_path,
},
solana_measure::measure::Measure,
solana_perf::packet::to_packets_chunked,
solana_perf::packet::to_packet_batches,
solana_poh::poh_recorder::{create_test_recorder, PohRecorder, WorkingBankEntry},
solana_runtime::{
accounts_background_service::AbsRequestSender, bank::Bank, bank_forks::BankForks,
@ -28,7 +28,7 @@ use {
},
solana_streamer::socket::SocketAddrSpace,
std::{
sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex, RwLock},
sync::{atomic::Ordering, Arc, Mutex, RwLock},
thread::sleep,
time::{Duration, Instant},
},
@ -212,7 +212,7 @@ fn main() {
bank.clear_signatures();
}
let mut verified: Vec<_> = to_packets_chunked(&transactions, packets_per_chunk);
let mut verified: Vec<_> = to_packet_batches(&transactions, packets_per_chunk);
let ledger_path = get_tmp_ledger_path!();
{
let blockstore = Arc::new(
@ -364,7 +364,7 @@ fn main() {
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen::<u8>()).collect();
tx.signatures[0] = Signature::new(&sig[0..64]);
}
verified = to_packets_chunked(&transactions.clone(), packets_per_chunk);
verified = to_packet_batches(&transactions.clone(), packets_per_chunk);
}
start += chunk_len;

View File

@ -1,6 +1,6 @@
[package]
name = "solana-banks-client"
version = "1.9.0"
version = "1.10.0"
description = "Solana banks client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@ -12,16 +12,17 @@ edition = "2021"
[dependencies]
borsh = "0.9.1"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.9.0" }
solana-program = { path = "../sdk/program", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
tarpc = { version = "0.26.2", features = ["full"] }
solana-banks-interface = { path = "../banks-interface", version = "=1.10.0" }
solana-program = { path = "../sdk/program", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
tarpc = { version = "0.27.2", features = ["full"] }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
[dev-dependencies]
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-banks-server = { path = "../banks-server", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-banks-server = { path = "../banks-server", version = "=1.10.0" }
[lib]
crate-type = ["lib"]

banks-client/src/error.rs Normal file
View File

@ -0,0 +1,73 @@
use {
solana_sdk::{transaction::TransactionError, transport::TransportError},
std::io,
tarpc::client::RpcError,
thiserror::Error,
};
/// Errors from BanksClient
#[derive(Error, Debug)]
pub enum BanksClientError {
#[error("client error: {0}")]
ClientError(&'static str),
#[error(transparent)]
Io(#[from] io::Error),
#[error(transparent)]
RpcError(#[from] RpcError),
#[error("transport transaction error: {0}")]
TransactionError(#[from] TransactionError),
#[error("simulation error: {err:?}, logs: {logs:?}, units_consumed: {units_consumed:?}")]
SimulationError {
err: TransactionError,
logs: Vec<String>,
units_consumed: u64,
},
}
impl BanksClientError {
pub fn unwrap(&self) -> TransactionError {
match self {
BanksClientError::TransactionError(err)
| BanksClientError::SimulationError { err, .. } => err.clone(),
_ => panic!("unexpected transport error"),
}
}
}
impl From<BanksClientError> for io::Error {
fn from(err: BanksClientError) -> Self {
match err {
BanksClientError::ClientError(err) => Self::new(io::ErrorKind::Other, err.to_string()),
BanksClientError::Io(err) => err,
BanksClientError::RpcError(err) => Self::new(io::ErrorKind::Other, err.to_string()),
BanksClientError::TransactionError(err) => {
Self::new(io::ErrorKind::Other, err.to_string())
}
BanksClientError::SimulationError { err, .. } => {
Self::new(io::ErrorKind::Other, err.to_string())
}
}
}
}
impl From<BanksClientError> for TransportError {
fn from(err: BanksClientError) -> Self {
match err {
BanksClientError::ClientError(err) => {
Self::IoError(io::Error::new(io::ErrorKind::Other, err.to_string()))
}
BanksClientError::Io(err) => {
Self::IoError(io::Error::new(io::ErrorKind::Other, err.to_string()))
}
BanksClientError::RpcError(err) => {
Self::IoError(io::Error::new(io::ErrorKind::Other, err.to_string()))
}
BanksClientError::TransactionError(err) => Self::TransactionError(err),
BanksClientError::SimulationError { err, .. } => Self::TransactionError(err),
}
}
}
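// Illustrative caller sketch (an assumption, not part of this file): the new
// SimulationError variant lets callers surface simulation logs when a
// preflighted transaction fails. `banks_client` and `tx` are placeholders.
//
//     match banks_client.process_transaction_with_preflight(tx).await {
//         Ok(()) => println!("transaction committed"),
//         Err(BanksClientError::SimulationError { err, logs, units_consumed }) => {
//             eprintln!("simulation failed: {:?} ({} compute units)", err, units_consumed);
//             for line in logs {
//                 eprintln!("{}", line);
//             }
//         }
//         Err(other) => eprintln!("client error: {:?}", other),
//     }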

View File

@ -5,11 +5,12 @@
//! but they are undocumented, may change over time, and are generally more
//! cumbersome to use.
pub use crate::error::BanksClientError;
pub use solana_banks_interface::{BanksClient as TarpcClient, TransactionStatus};
use {
borsh::BorshDeserialize,
futures::{future::join_all, Future, FutureExt},
solana_banks_interface::{BanksRequest, BanksResponse},
futures::{future::join_all, Future, FutureExt, TryFutureExt},
solana_banks_interface::{BanksRequest, BanksResponse, BanksTransactionResultWithSimulation},
solana_program::{
clock::Slot, fee_calculator::FeeCalculator, hash::Hash, program_pack::Pack, pubkey::Pubkey,
rent::Rent, sysvar::Sysvar,
@ -20,9 +21,7 @@ use {
message::Message,
signature::Signature,
transaction::{self, Transaction},
transport,
},
std::io::{self, Error, ErrorKind},
tarpc::{
client::{self, NewClient, RequestDispatch},
context::{self, Context},
@ -33,6 +32,8 @@ use {
tokio_serde::formats::Bincode,
};
mod error;
// This exists only for backward compatibility
pub trait BanksClientExt {}
@ -57,8 +58,10 @@ impl BanksClient {
&mut self,
ctx: Context,
transaction: Transaction,
) -> impl Future<Output = io::Result<()>> + '_ {
self.inner.send_transaction_with_context(ctx, transaction)
) -> impl Future<Output = Result<(), BanksClientError>> + '_ {
self.inner
.send_transaction_with_context(ctx, transaction)
.map_err(Into::into)
}
#[deprecated(
@ -69,35 +72,41 @@ impl BanksClient {
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, u64)>> + '_ {
) -> impl Future<Output = Result<(FeeCalculator, Hash, u64), BanksClientError>> + '_ {
#[allow(deprecated)]
self.inner
.get_fees_with_commitment_and_context(ctx, commitment)
.map_err(Into::into)
}
pub fn get_transaction_status_with_context(
&mut self,
ctx: Context,
signature: Signature,
) -> impl Future<Output = io::Result<Option<TransactionStatus>>> + '_ {
) -> impl Future<Output = Result<Option<TransactionStatus>, BanksClientError>> + '_ {
self.inner
.get_transaction_status_with_context(ctx, signature)
.map_err(Into::into)
}
pub fn get_slot_with_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Slot>> + '_ {
self.inner.get_slot_with_context(ctx, commitment)
) -> impl Future<Output = Result<Slot, BanksClientError>> + '_ {
self.inner
.get_slot_with_context(ctx, commitment)
.map_err(Into::into)
}
pub fn get_block_height_with_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Slot>> + '_ {
self.inner.get_block_height_with_context(ctx, commitment)
) -> impl Future<Output = Result<Slot, BanksClientError>> + '_ {
self.inner
.get_block_height_with_context(ctx, commitment)
.map_err(Into::into)
}
pub fn process_transaction_with_commitment_and_context(
@ -105,9 +114,26 @@ impl BanksClient {
ctx: Context,
transaction: Transaction,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<transaction::Result<()>>>> + '_ {
) -> impl Future<Output = Result<Option<transaction::Result<()>>, BanksClientError>> + '_ {
self.inner
.process_transaction_with_commitment_and_context(ctx, transaction, commitment)
.map_err(Into::into)
}
pub fn process_transaction_with_preflight_and_commitment_and_context(
&mut self,
ctx: Context,
transaction: Transaction,
commitment: CommitmentLevel,
) -> impl Future<Output = Result<BanksTransactionResultWithSimulation, BanksClientError>> + '_
{
self.inner
.process_transaction_with_preflight_and_commitment_and_context(
ctx,
transaction,
commitment,
)
.map_err(Into::into)
}
pub fn get_account_with_commitment_and_context(
@ -115,9 +141,10 @@ impl BanksClient {
ctx: Context,
address: Pubkey,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<Account>>> + '_ {
) -> impl Future<Output = Result<Option<Account>, BanksClientError>> + '_ {
self.inner
.get_account_with_commitment_and_context(ctx, address, commitment)
.map_err(Into::into)
}
/// Send a transaction and return immediately. The server will resend the
@ -126,7 +153,7 @@ impl BanksClient {
pub fn send_transaction(
&mut self,
transaction: Transaction,
) -> impl Future<Output = io::Result<()>> + '_ {
) -> impl Future<Output = Result<(), BanksClientError>> + '_ {
self.send_transaction_with_context(context::current(), transaction)
}
@ -139,23 +166,25 @@ impl BanksClient {
)]
pub fn get_fees(
&mut self,
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, u64)>> + '_ {
) -> impl Future<Output = Result<(FeeCalculator, Hash, u64), BanksClientError>> + '_ {
#[allow(deprecated)]
self.get_fees_with_commitment_and_context(context::current(), CommitmentLevel::default())
}
/// Return the cluster Sysvar
pub fn get_sysvar<T: Sysvar>(&mut self) -> impl Future<Output = io::Result<T>> + '_ {
pub fn get_sysvar<T: Sysvar>(
&mut self,
) -> impl Future<Output = Result<T, BanksClientError>> + '_ {
self.get_account(T::id()).map(|result| {
let sysvar = result?
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Sysvar not present"))?;
from_account::<T, _>(&sysvar)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Failed to deserialize sysvar"))
let sysvar = result?.ok_or(BanksClientError::ClientError("Sysvar not present"))?;
from_account::<T, _>(&sysvar).ok_or(BanksClientError::ClientError(
"Failed to deserialize sysvar",
))
})
}
/// Return the cluster rent
pub fn get_rent(&mut self) -> impl Future<Output = io::Result<Rent>> + '_ {
pub fn get_rent(&mut self) -> impl Future<Output = Result<Rent, BanksClientError>> + '_ {
self.get_sysvar::<Rent>()
}
@ -163,8 +192,11 @@ impl BanksClient {
/// transactions with a blockhash that has not yet expired. Use the `get_fees`
/// method to get both a blockhash and the blockhash's last valid slot.
#[deprecated(since = "1.9.0", note = "Please use `get_latest_blockhash` instead")]
pub fn get_recent_blockhash(&mut self) -> impl Future<Output = io::Result<Hash>> + '_ {
self.get_latest_blockhash()
pub fn get_recent_blockhash(
&mut self,
) -> impl Future<Output = Result<Hash, BanksClientError>> + '_ {
#[allow(deprecated)]
self.get_fees().map(|result| Ok(result?.1))
}
/// Send a transaction and return after the transaction has been rejected or
@ -173,23 +205,71 @@ impl BanksClient {
&mut self,
transaction: Transaction,
commitment: CommitmentLevel,
) -> impl Future<Output = transport::Result<()>> + '_ {
) -> impl Future<Output = Result<(), BanksClientError>> + '_ {
let mut ctx = context::current();
ctx.deadline += Duration::from_secs(50);
self.process_transaction_with_commitment_and_context(ctx, transaction, commitment)
.map(|result| match result? {
None => {
Err(Error::new(ErrorKind::TimedOut, "invalid blockhash or fee-payer").into())
}
None => Err(BanksClientError::ClientError(
"invalid blockhash or fee-payer",
)),
Some(transaction_result) => Ok(transaction_result?),
})
}
/// Send a transaction and return any preflight (sanitization or simulation) errors, or return
/// after the transaction has been rejected or reached the given level of commitment.
pub fn process_transaction_with_preflight_and_commitment(
&mut self,
transaction: Transaction,
commitment: CommitmentLevel,
) -> impl Future<Output = Result<(), BanksClientError>> + '_ {
let mut ctx = context::current();
ctx.deadline += Duration::from_secs(50);
self.process_transaction_with_preflight_and_commitment_and_context(
ctx,
transaction,
commitment,
)
.map(|result| match result? {
BanksTransactionResultWithSimulation {
result: None,
simulation_details: _,
} => Err(BanksClientError::ClientError(
"invalid blockhash or fee-payer",
)),
BanksTransactionResultWithSimulation {
result: Some(Err(err)),
simulation_details: Some(simulation_details),
} => Err(BanksClientError::SimulationError {
err,
logs: simulation_details.logs,
units_consumed: simulation_details.units_consumed,
}),
BanksTransactionResultWithSimulation {
result: Some(result),
simulation_details: _,
} => result.map_err(Into::into),
})
}
/// Send a transaction and return any preflight (sanitization or simulation) errors, or return
/// after the transaction has been finalized or rejected.
pub fn process_transaction_with_preflight(
&mut self,
transaction: Transaction,
) -> impl Future<Output = Result<(), BanksClientError>> + '_ {
self.process_transaction_with_preflight_and_commitment(
transaction,
CommitmentLevel::default(),
)
}
/// Send a transaction and return after the transaction has been finalized or rejected.
pub fn process_transaction(
&mut self,
transaction: Transaction,
) -> impl Future<Output = transport::Result<()>> + '_ {
) -> impl Future<Output = Result<(), BanksClientError>> + '_ {
self.process_transaction_with_commitment(transaction, CommitmentLevel::default())
}
@ -197,7 +277,7 @@ impl BanksClient {
&mut self,
transactions: Vec<Transaction>,
commitment: CommitmentLevel,
) -> transport::Result<()> {
) -> Result<(), BanksClientError> {
let mut clients: Vec<_> = transactions.iter().map(|_| self.clone()).collect();
let futures = clients
.iter_mut()
@ -213,19 +293,21 @@ impl BanksClient {
pub fn process_transactions(
&mut self,
transactions: Vec<Transaction>,
) -> impl Future<Output = transport::Result<()>> + '_ {
) -> impl Future<Output = Result<(), BanksClientError>> + '_ {
self.process_transactions_with_commitment(transactions, CommitmentLevel::default())
}
/// Return the most recent rooted slot. All transactions at or below this slot
/// are said to be finalized. The cluster will not fork to a higher slot.
pub fn get_root_slot(&mut self) -> impl Future<Output = io::Result<Slot>> + '_ {
pub fn get_root_slot(&mut self) -> impl Future<Output = Result<Slot, BanksClientError>> + '_ {
self.get_slot_with_context(context::current(), CommitmentLevel::default())
}
/// Return the most recent rooted block height. All transactions at or below this height
/// are said to be finalized. The cluster will not fork to a higher block height.
pub fn get_root_block_height(&mut self) -> impl Future<Output = io::Result<Slot>> + '_ {
pub fn get_root_block_height(
&mut self,
) -> impl Future<Output = Result<Slot, BanksClientError>> + '_ {
self.get_block_height_with_context(context::current(), CommitmentLevel::default())
}
@ -235,7 +317,7 @@ impl BanksClient {
&mut self,
address: Pubkey,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<Account>>> + '_ {
) -> impl Future<Output = Result<Option<Account>, BanksClientError>> + '_ {
self.get_account_with_commitment_and_context(context::current(), address, commitment)
}
@ -244,7 +326,7 @@ impl BanksClient {
pub fn get_account(
&mut self,
address: Pubkey,
) -> impl Future<Output = io::Result<Option<Account>>> + '_ {
) -> impl Future<Output = Result<Option<Account>, BanksClientError>> + '_ {
self.get_account_with_commitment(address, CommitmentLevel::default())
}
@ -253,12 +335,11 @@ impl BanksClient {
pub fn get_packed_account_data<T: Pack>(
&mut self,
address: Pubkey,
) -> impl Future<Output = io::Result<T>> + '_ {
) -> impl Future<Output = Result<T, BanksClientError>> + '_ {
self.get_account(address).map(|result| {
let account =
result?.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Account not found"))?;
let account = result?.ok_or(BanksClientError::ClientError("Account not found"))?;
T::unpack_from_slice(&account.data)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "Failed to deserialize account"))
.map_err(|_| BanksClientError::ClientError("Failed to deserialize account"))
})
}
@ -267,11 +348,10 @@ impl BanksClient {
pub fn get_account_data_with_borsh<T: BorshDeserialize>(
&mut self,
address: Pubkey,
) -> impl Future<Output = io::Result<T>> + '_ {
) -> impl Future<Output = Result<T, BanksClientError>> + '_ {
self.get_account(address).map(|result| {
let account =
result?.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "account not found"))?;
T::try_from_slice(&account.data)
let account = result?.ok_or(BanksClientError::ClientError("Account not found"))?;
T::try_from_slice(&account.data).map_err(Into::into)
})
}
@ -281,14 +361,17 @@ impl BanksClient {
&mut self,
address: Pubkey,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<u64>> + '_ {
) -> impl Future<Output = Result<u64, BanksClientError>> + '_ {
self.get_account_with_commitment_and_context(context::current(), address, commitment)
.map(|result| Ok(result?.map(|x| x.lamports).unwrap_or(0)))
}
/// Return the balance in lamports of an account at the given address at the time
/// of the most recent root slot.
pub fn get_balance(&mut self, address: Pubkey) -> impl Future<Output = io::Result<u64>> + '_ {
pub fn get_balance(
&mut self,
address: Pubkey,
) -> impl Future<Output = Result<u64, BanksClientError>> + '_ {
self.get_balance_with_commitment(address, CommitmentLevel::default())
}
@ -300,7 +383,7 @@ impl BanksClient {
pub fn get_transaction_status(
&mut self,
signature: Signature,
) -> impl Future<Output = io::Result<Option<TransactionStatus>>> + '_ {
) -> impl Future<Output = Result<Option<TransactionStatus>, BanksClientError>> + '_ {
self.get_transaction_status_with_context(context::current(), signature)
}
@ -308,7 +391,7 @@ impl BanksClient {
pub async fn get_transaction_statuses(
&mut self,
signatures: Vec<Signature>,
) -> io::Result<Vec<Option<TransactionStatus>>> {
) -> Result<Vec<Option<TransactionStatus>>, BanksClientError> {
// tarpc futures oddly hold a mutable reference back to the client so clone the client upfront
let mut clients_and_signatures: Vec<_> = signatures
.into_iter()
@ -325,19 +408,22 @@ impl BanksClient {
statuses.into_iter().collect()
}
pub fn get_latest_blockhash(&mut self) -> impl Future<Output = io::Result<Hash>> + '_ {
pub fn get_latest_blockhash(
&mut self,
) -> impl Future<Output = Result<Hash, BanksClientError>> + '_ {
self.get_latest_blockhash_with_commitment(CommitmentLevel::default())
.map(|result| {
result?
.map(|x| x.0)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "account not found"))
.ok_or(BanksClientError::ClientError("valid blockhash not found"))
.map_err(Into::into)
})
}
pub fn get_latest_blockhash_with_commitment(
&mut self,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<(Hash, u64)>>> + '_ {
) -> impl Future<Output = Result<Option<(Hash, u64)>, BanksClientError>> + '_ {
self.get_latest_blockhash_with_commitment_and_context(context::current(), commitment)
}
@ -345,9 +431,10 @@ impl BanksClient {
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<(Hash, u64)>>> + '_ {
) -> impl Future<Output = Result<Option<(Hash, u64)>, BanksClientError>> + '_ {
self.inner
.get_latest_blockhash_with_commitment_and_context(ctx, commitment)
.map_err(Into::into)
}
pub fn get_fee_for_message_with_commitment_and_context(
@ -355,13 +442,14 @@ impl BanksClient {
ctx: Context,
commitment: CommitmentLevel,
message: Message,
) -> impl Future<Output = io::Result<Option<u64>>> + '_ {
) -> impl Future<Output = Result<Option<u64>, BanksClientError>> + '_ {
self.inner
.get_fee_for_message_with_commitment_and_context(ctx, commitment, message)
.map_err(Into::into)
}
}
pub async fn start_client<C>(transport: C) -> io::Result<BanksClient>
pub async fn start_client<C>(transport: C) -> Result<BanksClient, BanksClientError>
where
C: Transport<ClientMessage<BanksRequest>, Response<BanksResponse>> + Send + 'static,
{
@ -370,7 +458,7 @@ where
})
}
pub async fn start_tcp_client<T: ToSocketAddrs>(addr: T) -> io::Result<BanksClient> {
pub async fn start_tcp_client<T: ToSocketAddrs>(addr: T) -> Result<BanksClient, BanksClientError> {
let transport = tcp::connect(addr, Bincode::default).await?;
Ok(BanksClient {
inner: TarpcClient::new(client::Config::default(), transport).spawn(),
@ -399,7 +487,7 @@ mod tests {
}
#[test]
fn test_banks_server_transfer_via_server() -> io::Result<()> {
fn test_banks_server_transfer_via_server() -> Result<(), BanksClientError> {
// This test shows the preferred way to interact with BanksServer.
// It creates a runtime explicitly (no globals via tokio macros) and calls
// `runtime.block_on()` just once, to run all the async code.
@ -432,7 +520,7 @@ mod tests {
}
#[test]
fn test_banks_server_transfer_via_client() -> io::Result<()> {
fn test_banks_server_transfer_via_client() -> Result<(), BanksClientError> {
// The caller may not want to hold the connection open until the transaction
// is processed (or blockhash expires). In this test, we verify the
// server-side functionality is available to the client.

View File

@ -1,6 +1,6 @@
[package]
name = "solana-banks-interface"
version = "1.9.0"
version = "1.10.0"
description = "Solana banks RPC interface"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@ -10,9 +10,9 @@ documentation = "https://docs.rs/solana-banks-interface"
edition = "2021"
[dependencies]
serde = { version = "1.0.130", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
tarpc = { version = "0.26.2", features = ["full"] }
serde = { version = "1.0.134", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
tarpc = { version = "0.27.2", features = ["full"] }
[lib]
crate-type = ["lib"]

View File

@ -30,6 +30,19 @@ pub struct TransactionStatus {
pub confirmation_status: Option<TransactionConfirmationStatus>,
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct TransactionSimulationDetails {
pub logs: Vec<String>,
pub units_consumed: u64,
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct BanksTransactionResultWithSimulation {
pub result: Option<transaction::Result<()>>,
pub simulation_details: Option<TransactionSimulationDetails>,
}
#[tarpc::service]
pub trait Banks {
async fn send_transaction_with_context(transaction: Transaction);
@ -44,6 +57,10 @@ pub trait Banks {
-> Option<TransactionStatus>;
async fn get_slot_with_context(commitment: CommitmentLevel) -> Slot;
async fn get_block_height_with_context(commitment: CommitmentLevel) -> u64;
async fn process_transaction_with_preflight_and_commitment_and_context(
transaction: Transaction,
commitment: CommitmentLevel,
) -> BanksTransactionResultWithSimulation;
async fn process_transaction_with_commitment_and_context(
transaction: Transaction,
commitment: CommitmentLevel,

View File

@ -1,6 +1,6 @@
[package]
name = "solana-banks-server"
version = "1.9.0"
version = "1.10.0"
description = "Solana banks server"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@ -11,12 +11,13 @@ edition = "2021"
[dependencies]
bincode = "1.3.3"
crossbeam-channel = "0.5"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.9.0" }
tarpc = { version = "0.26.2", features = ["full"] }
solana-banks-interface = { path = "../banks-interface", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.10.0" }
tarpc = { version = "0.27.2", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
tokio-stream = "0.1"

View File

@ -1,10 +1,16 @@
use {
bincode::{deserialize, serialize},
crossbeam_channel::{unbounded, Receiver, Sender},
futures::{future, prelude::stream::StreamExt},
solana_banks_interface::{
Banks, BanksRequest, BanksResponse, TransactionConfirmationStatus, TransactionStatus,
Banks, BanksRequest, BanksResponse, BanksTransactionResultWithSimulation,
TransactionConfirmationStatus, TransactionSimulationDetails, TransactionStatus,
},
solana_runtime::{
bank::{Bank, TransactionSimulationResult},
bank_forks::BankForks,
commitment::BlockCommitmentCache,
},
solana_runtime::{bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache},
solana_sdk::{
account::Account,
clock::Slot,
@ -15,7 +21,7 @@ use {
message::{Message, SanitizedMessage},
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction},
transaction::{self, SanitizedTransaction, Transaction},
},
solana_send_transaction_service::{
send_transaction_service::{SendTransactionService, TransactionInfo},
@ -25,17 +31,14 @@ use {
convert::TryFrom,
io,
net::{Ipv4Addr, SocketAddr},
sync::{
mpsc::{channel, Receiver, Sender},
Arc, RwLock,
},
sync::{Arc, RwLock},
thread::Builder,
time::Duration,
},
tarpc::{
context::Context,
serde_transport::tcp,
server::{self, Channel, Incoming},
server::{self, incoming::Incoming, Channel},
transport::{self, channel::UnboundedChannel},
ClientMessage, Response,
},
@ -91,7 +94,7 @@ impl BanksServer {
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
poll_signature_status_sleep_duration: Duration,
) -> Self {
let (transaction_sender, transaction_receiver) = channel();
let (transaction_sender, transaction_receiver) = unbounded();
let bank = bank_forks.read().unwrap().working_bank();
let slot = bank.slot();
{
@ -242,6 +245,47 @@ impl Banks for BanksServer {
self.bank(commitment).block_height()
}
async fn process_transaction_with_preflight_and_commitment_and_context(
self,
ctx: Context,
transaction: Transaction,
commitment: CommitmentLevel,
) -> BanksTransactionResultWithSimulation {
let sanitized_transaction =
match SanitizedTransaction::try_from_legacy_transaction(transaction.clone()) {
Err(err) => {
return BanksTransactionResultWithSimulation {
result: Some(Err(err)),
simulation_details: None,
};
}
Ok(tx) => tx,
};
if let TransactionSimulationResult {
result: Err(err),
logs,
post_simulation_accounts: _,
units_consumed,
} = self
.bank(commitment)
.simulate_transaction_unchecked(sanitized_transaction)
{
return BanksTransactionResultWithSimulation {
result: Some(Err(err)),
simulation_details: Some(TransactionSimulationDetails {
logs,
units_consumed,
}),
};
}
BanksTransactionResultWithSimulation {
result: self
.process_transaction_with_commitment_and_context(ctx, transaction, commitment)
.await,
simulation_details: None,
}
}
async fn process_transaction_with_commitment_and_context(
self,
_: Context,
@ -346,7 +390,7 @@ pub async fn start_tcp_server(
// serve is generated by the service attribute. It takes as input any type implementing
// the generated Banks trait.
.map(move |chan| {
let (sender, receiver) = channel();
let (sender, receiver) = unbounded();
SendTransactionService::new::<NullTpuInfo>(
tpu_addr,

View File

@ -2,19 +2,18 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-bench-streamer"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
crossbeam-channel = "0.5"
clap = "2.33.1"
solana-clap-utils = { path = "../clap-utils", version = "=1.9.0" }
solana-streamer = { path = "../streamer", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-net-utils = { path = "../net-utils", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-net-utils = { path = "../net-utils", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@ -1,16 +1,16 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, App, Arg},
crossbeam_channel::unbounded,
solana_streamer::{
packet::{Packet, Packets, PacketsRecycler, PACKET_DATA_SIZE},
streamer::{receiver, PacketReceiver},
packet::{Packet, PacketBatch, PacketBatchRecycler, PACKET_DATA_SIZE},
streamer::{receiver, PacketBatchReceiver},
},
std::{
cmp::max,
net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket},
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
mpsc::channel,
Arc,
},
thread::{sleep, spawn, JoinHandle, Result},
@ -20,19 +20,19 @@ use {
fn producer(addr: &SocketAddr, exit: Arc<AtomicBool>) -> JoinHandle<()> {
let send = UdpSocket::bind("0.0.0.0:0").unwrap();
let mut msgs = Packets::default();
msgs.packets.resize(10, Packet::default());
for w in msgs.packets.iter_mut() {
let mut packet_batch = PacketBatch::default();
packet_batch.packets.resize(10, Packet::default());
for w in packet_batch.packets.iter_mut() {
w.meta.size = PACKET_DATA_SIZE;
w.meta.set_addr(addr);
}
let msgs = Arc::new(msgs);
let packet_batch = Arc::new(packet_batch);
spawn(move || loop {
if exit.load(Ordering::Relaxed) {
return;
}
let mut num = 0;
for p in &msgs.packets {
for p in &packet_batch.packets {
let a = p.meta.addr();
assert!(p.meta.size <= PACKET_DATA_SIZE);
send.send_to(&p.data[..p.meta.size], &a).unwrap();
@ -42,14 +42,14 @@ fn producer(addr: &SocketAddr, exit: Arc<AtomicBool>) -> JoinHandle<()> {
})
}
fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketReceiver) -> JoinHandle<()> {
fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketBatchReceiver) -> JoinHandle<()> {
spawn(move || loop {
if exit.load(Ordering::Relaxed) {
return;
}
let timer = Duration::new(1, 0);
if let Ok(msgs) = r.recv_timeout(timer) {
rvs.fetch_add(msgs.packets.len(), Ordering::Relaxed);
if let Ok(packet_batch) = r.recv_timeout(timer) {
rvs.fetch_add(packet_batch.packets.len(), Ordering::Relaxed);
}
})
}
@ -81,7 +81,7 @@ fn main() -> Result<()> {
let mut read_channels = Vec::new();
let mut read_threads = Vec::new();
let recycler = PacketsRecycler::default();
let recycler = PacketBatchRecycler::default();
for _ in 0..num_sockets {
let read = solana_net_utils::bind_to(ip_addr, port, false).unwrap();
read.set_read_timeout(Some(Duration::new(1, 0))).unwrap();
@ -89,7 +89,7 @@ fn main() -> Result<()> {
addr = read.local_addr().unwrap();
port = addr.port();
let (s_reader, r_reader) = channel();
let (s_reader, r_reader) = unbounded();
read_channels.push(r_reader);
read_threads.push(receiver(
Arc::new(read),
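A change repeated throughout this range is replacing `std::sync::mpsc::channel()` with `crossbeam_channel::unbounded()`; the crossbeam `Sender` and `Receiver` are both `Clone`, so one channel can feed several reader or writer threads. A small self-contained sketch of the swap (the `PacketBatch` plumbing itself is untouched by it):

// Minimal sketch: crossbeam_channel::unbounded() as a drop-in replacement for
// std::sync::mpsc::channel(). Unlike the std Receiver, the crossbeam Receiver
// can be cloned and shared across threads.
use crossbeam_channel::unbounded;

fn main() {
    let (sender, receiver) = unbounded::<u64>();
    let receiver_clone = receiver.clone(); // not possible with std::sync::mpsc
    sender.send(42).unwrap();
    assert_eq!(receiver_clone.recv().unwrap(), 42);
}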


@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-bench-tps"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -10,27 +10,28 @@ publish = false
[dependencies]
clap = "2.33.1"
crossbeam-channel = "0.5"
log = "0.4.14"
rayon = "1.5.1"
serde_json = "1.0.72"
serde_yaml = "0.8.21"
solana-core = { path = "../core", version = "=1.9.0" }
solana-genesis = { path = "../genesis", version = "=1.9.0" }
solana-client = { path = "../client", version = "=1.9.0" }
solana-faucet = { path = "../faucet", version = "=1.9.0" }
solana-gossip = { path = "../gossip", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-metrics = { path = "../metrics", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-net-utils = { path = "../net-utils", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-streamer = { path = "../streamer", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
serde_json = "1.0.78"
serde_yaml = "0.8.23"
solana-core = { path = "../core", version = "=1.10.0" }
solana-genesis = { path = "../genesis", version = "=1.10.0" }
solana-client = { path = "../client", version = "=1.10.0" }
solana-faucet = { path = "../faucet", version = "=1.10.0" }
solana-gossip = { path = "../gossip", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-metrics = { path = "../metrics", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-net-utils = { path = "../net-utils", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
[dev-dependencies]
serial_test = "0.5.1"
solana-local-cluster = { path = "../local-cluster", version = "=1.9.0" }
solana-local-cluster = { path = "../local-cluster", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@ -1,5 +1,6 @@
#![allow(clippy::integer_arithmetic)]
use {
crossbeam_channel::unbounded,
serial_test::serial,
solana_bench_tps::{
bench::{do_bench_tps, generate_and_fund_keypairs},
@ -15,10 +16,7 @@ use {
},
solana_sdk::signature::{Keypair, Signer},
solana_streamer::socket::SocketAddrSpace,
std::{
sync::{mpsc::channel, Arc},
time::Duration,
},
std::{sync::Arc, time::Duration},
};
fn test_bench_tps_local_cluster(config: Config) {
@ -52,7 +50,7 @@ fn test_bench_tps_local_cluster(config: Config) {
VALIDATOR_PORT_RANGE,
));
let (addr_sender, addr_receiver) = channel();
let (addr_sender, addr_receiver) = unbounded();
run_local_faucet_with_port(faucet_keypair, addr_sender, None, 0);
let faucet_addr = addr_receiver
.recv_timeout(Duration::from_secs(2))

bloom/Cargo.toml Normal file

@ -0,0 +1,32 @@
[package]
name = "solana-bloom"
version = "1.10.0"
description = "Solana bloom filter"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-bloom"
edition = "2021"
[dependencies]
bv = { version = "0.11.1", features = ["serde"] }
fnv = "1.0.7"
rand = "0.7.0"
serde = { version = "1.0.134", features = ["rc"] }
rayon = "1.5.1"
serde_derive = "1.0.103"
solana-frozen-abi = { path = "../frozen-abi", version = "=1.10.0" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
log = "0.4.14"
[lib]
crate-type = ["lib"]
name = "solana_bloom"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]
[build-dependencies]
rustc_version = "0.4"


@ -5,7 +5,7 @@ use {
bv::BitVec,
fnv::FnvHasher,
rand::Rng,
solana_runtime::bloom::{AtomicBloom, Bloom, BloomHashIndex},
solana_bloom::bloom::{AtomicBloom, Bloom, BloomHashIndex},
solana_sdk::{
hash::{hash, Hash},
signature::Signature,

bloom/build.rs Symbolic link

@ -0,0 +1 @@
../frozen-abi/build.rs


@ -101,7 +101,7 @@ impl<T: BloomHashIndex> Bloom<T> {
}
}
fn pos(&self, key: &T, k: u64) -> u64 {
key.hash_at_index(k) % self.bits.len()
key.hash_at_index(k).wrapping_rem(self.bits.len())
}
pub fn clear(&mut self) {
self.bits = BitVec::new_fill(false, self.bits.len());
@ -111,7 +111,7 @@ impl<T: BloomHashIndex> Bloom<T> {
for k in &self.keys {
let pos = self.pos(key, *k);
if !self.bits.get(pos) {
self.num_bits_set += 1;
self.num_bits_set = self.num_bits_set.saturating_add(1);
self.bits.set(pos, true);
}
}
@ -164,21 +164,26 @@ impl<T: BloomHashIndex> From<Bloom<T>> for AtomicBloom<T> {
impl<T: BloomHashIndex> AtomicBloom<T> {
fn pos(&self, key: &T, hash_index: u64) -> (usize, u64) {
let pos = key.hash_at_index(hash_index) % self.num_bits;
let pos = key.hash_at_index(hash_index).wrapping_rem(self.num_bits);
// Divide by 64 to figure out which of the
// AtomicU64 bit chunks we need to modify.
let index = pos >> 6;
let index = pos.wrapping_shr(6);
// (pos & 63) is equivalent to mod 64 so that we can find
// the index of the bit within the AtomicU64 to modify.
let mask = 1u64 << (pos & 63);
let mask = 1u64.wrapping_shl(u32::try_from(pos & 63).unwrap());
(index as usize, mask)
}
pub fn add(&self, key: &T) {
/// Adds an item to the bloom filter and returns true if the item
/// was not in the filter before.
pub fn add(&self, key: &T) -> bool {
let mut added = false;
for k in &self.keys {
let (index, mask) = self.pos(key, *k);
self.bits[index].fetch_or(mask, Ordering::Relaxed);
let prev_val = self.bits[index].fetch_or(mask, Ordering::Relaxed);
added = added || prev_val & mask == 0u64;
}
added
}
pub fn contains(&self, key: &T) -> bool {
@ -189,6 +194,12 @@ impl<T: BloomHashIndex> AtomicBloom<T> {
})
}
pub fn clear_for_tests(&mut self) {
self.bits.iter().for_each(|bit| {
bit.store(0u64, Ordering::Relaxed);
});
}
// Only for tests and simulations.
pub fn mock_clone(&self) -> Self {
Self {
@ -320,7 +331,9 @@ mod test {
assert_eq!(bloom.keys.len(), 3);
assert_eq!(bloom.num_bits, 6168);
assert_eq!(bloom.bits.len(), 97);
hash_values.par_iter().for_each(|v| bloom.add(v));
hash_values.par_iter().for_each(|v| {
bloom.add(v);
});
let bloom: Bloom<Hash> = bloom.into();
assert_eq!(bloom.keys.len(), 3);
assert_eq!(bloom.bits.len(), 6168);
@ -362,7 +375,9 @@ mod test {
}
// Round trip, re-inserting the same hash values.
let bloom: AtomicBloom<_> = bloom.into();
hash_values.par_iter().for_each(|v| bloom.add(v));
hash_values.par_iter().for_each(|v| {
bloom.add(v);
});
for hash_value in &hash_values {
assert!(bloom.contains(hash_value));
}
@ -380,7 +395,9 @@ mod test {
let bloom: AtomicBloom<_> = bloom.into();
assert_eq!(bloom.num_bits, 9731);
assert_eq!(bloom.bits.len(), (9731 + 63) / 64);
more_hash_values.par_iter().for_each(|v| bloom.add(v));
more_hash_values.par_iter().for_each(|v| {
bloom.add(v);
});
for hash_value in &hash_values {
assert!(bloom.contains(hash_value));
}
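`AtomicBloom::add`, now exported from the new `solana-bloom` crate, reports whether any bit was newly set, so callers can use the filter for best-effort deduplication without a separate `contains` call. A rough usage sketch, assuming the `Bloom::random(num_items, false_rate, max_bits)` constructor and the `From<Bloom<T>>` conversion shown in this file:

// Rough sketch against the solana_bloom 1.10 API as it appears in this diff.
use solana_bloom::bloom::{AtomicBloom, Bloom};
use solana_sdk::hash::{hash, Hash};

fn main() {
    // Sized for ~100 items at a 1% false-positive rate, capped at 1024 bits.
    let bloom: AtomicBloom<Hash> = Bloom::random(100, 0.01, 1024).into();
    let item = hash(b"some item");
    assert!(bloom.add(&item));  // true: at least one bit was newly set
    assert!(!bloom.add(&item)); // false: every bit for this key is already set
}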

bloom/src/lib.rs Normal file

@ -0,0 +1,5 @@
#![cfg_attr(RUSTC_WITH_SPECIALIZATION, feature(min_specialization))]
pub mod bloom;
#[macro_use]
extern crate solana_frozen_abi_macro;


@ -1,6 +1,6 @@
[package]
name = "solana-bucket-map"
version = "1.9.0"
version = "1.10.0"
description = "solana-bucket-map"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-bucket-map"
@ -11,15 +11,18 @@ license = "Apache-2.0"
edition = "2021"
[dependencies]
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
memmap2 = "0.5.0"
solana-sdk = { path = "../sdk", version = "=1.10.0" }
memmap2 = "0.5.2"
log = { version = "0.4.11" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
rand = "0.7.0"
tempfile = "3.3.0"
modular-bitfield = "0.11.2"
[dev-dependencies]
fs_extra = "1.2.0"
tempfile = "3.2.0"
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.10.0" }
[lib]
crate-type = ["lib"]


@ -3,7 +3,7 @@ use {
bucket_item::BucketItem,
bucket_map::BucketMapError,
bucket_stats::BucketMapStats,
bucket_storage::{BucketStorage, Uid, DEFAULT_CAPACITY_POW2, UID_UNLOCKED},
bucket_storage::{BucketStorage, Uid, DEFAULT_CAPACITY_POW2},
index_entry::IndexEntry,
MaxSearch, RefCount,
},
@ -17,7 +17,7 @@ use {
ops::RangeBounds,
path::PathBuf,
sync::{
atomic::{AtomicUsize, Ordering},
atomic::{AtomicU64, AtomicUsize, Ordering},
Arc, Mutex,
},
},
@ -81,6 +81,7 @@ impl<T: Clone + Copy> Bucket<T> {
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
stats: Arc<BucketMapStats>,
count: Arc<AtomicU64>,
) -> Self {
let index = BucketStorage::new(
Arc::clone(&drives),
@ -88,6 +89,7 @@ impl<T: Clone + Copy> Bucket<T> {
std::mem::size_of::<IndexEntry>() as u64,
max_search,
Arc::clone(&stats.index),
count,
);
Self {
random: thread_rng().gen(),
@ -100,14 +102,10 @@ impl<T: Clone + Copy> Bucket<T> {
}
}
pub fn bucket_len(&self) -> u64 {
self.index.used.load(Ordering::Relaxed)
}
pub fn keys(&self) -> Vec<Pubkey> {
let mut rv = vec![];
for i in 0..self.index.capacity() {
if self.index.uid(i) == UID_UNLOCKED {
if self.index.is_free(i) {
continue;
}
let ix: &IndexEntry = self.index.get(i);
@ -120,10 +118,10 @@ impl<T: Clone + Copy> Bucket<T> {
where
R: RangeBounds<Pubkey>,
{
let mut result = Vec::with_capacity(self.index.used.load(Ordering::Relaxed) as usize);
let mut result = Vec::with_capacity(self.index.count.load(Ordering::Relaxed) as usize);
for i in 0..self.index.capacity() {
let ii = i % self.index.capacity();
if self.index.uid(ii) == UID_UNLOCKED {
if self.index.is_free(ii) {
continue;
}
let ix: &IndexEntry = self.index.get(ii);
@ -156,7 +154,7 @@ impl<T: Clone + Copy> Bucket<T> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i % index.capacity();
if index.uid(ii) == UID_UNLOCKED {
if index.is_free(ii) {
continue;
}
let elem: &mut IndexEntry = index.get_mut(ii);
@ -175,7 +173,7 @@ impl<T: Clone + Copy> Bucket<T> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i % index.capacity();
if index.uid(ii) == UID_UNLOCKED {
if index.is_free(ii) {
continue;
}
let elem: &IndexEntry = index.get(ii);
@ -187,26 +185,23 @@ impl<T: Clone + Copy> Bucket<T> {
}
fn bucket_create_key(
index: &BucketStorage,
index: &mut BucketStorage,
key: &Pubkey,
elem_uid: Uid,
random: u64,
is_resizing: bool,
) -> Result<u64, BucketMapError> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i as u64 % index.capacity();
if index.uid(ii) != UID_UNLOCKED {
if !index.is_free(ii) {
continue;
}
index.allocate(ii, elem_uid).unwrap();
let mut elem: &mut IndexEntry = index.get_mut(ii);
elem.key = *key;
// These will be overwritten after allocation by callers.
index.allocate(ii, elem_uid, is_resizing).unwrap();
let elem: &mut IndexEntry = index.get_mut(ii);
// These fields will be overwritten after allocation by callers.
// Since this part of the mmapped file could have previously been used by someone else, there can be garbage here.
elem.ref_count = 0;
elem.storage_offset = 0;
elem.storage_capacity_when_created_pow2 = 0;
elem.num_slots = 0;
elem.init(key);
//debug!( "INDEX ALLOC {:?} {} {} {}", key, ii, index.capacity, elem_uid );
return Ok(ii);
}
@ -225,8 +220,14 @@ impl<T: Clone + Copy> Bucket<T> {
Some(elem.ref_count)
}
fn create_key(&self, key: &Pubkey) -> Result<u64, BucketMapError> {
Self::bucket_create_key(&self.index, key, IndexEntry::key_uid(key), self.random)
fn create_key(&mut self, key: &Pubkey) -> Result<u64, BucketMapError> {
Self::bucket_create_key(
&mut self.index,
key,
IndexEntry::key_uid(key),
self.random,
false,
)
}
pub fn read_value(&self, key: &Pubkey) -> Option<(&[T], RefCount)> {
@ -256,16 +257,17 @@ impl<T: Clone + Copy> Bucket<T> {
Some(res) => res,
};
elem.ref_count = ref_count;
let elem_uid = self.index.uid(elem_ix);
let elem_uid = self.index.uid_unchecked(elem_ix);
let bucket_ix = elem.data_bucket_ix();
let current_bucket = &self.data[bucket_ix as usize];
let num_slots = data.len() as u64;
if best_fit_bucket == bucket_ix && elem.num_slots > 0 {
// in place update
let elem_loc = elem.data_loc(current_bucket);
let slice: &mut [T] = current_bucket.get_mut_cell_slice(elem_loc, data.len() as u64);
assert!(current_bucket.uid(elem_loc) == elem_uid);
elem.num_slots = data.len() as u64;
slice.clone_from_slice(data);
assert_eq!(current_bucket.uid(elem_loc), Some(elem_uid));
elem.num_slots = num_slots;
slice.copy_from_slice(data);
Ok(())
} else {
// need to move the allocation to a best fit spot
@ -275,18 +277,21 @@ impl<T: Clone + Copy> Bucket<T> {
let pos = thread_rng().gen_range(0, cap);
for i in pos..pos + self.index.max_search() {
let ix = i % cap;
if best_bucket.uid(ix) == UID_UNLOCKED {
if best_bucket.is_free(ix) {
let elem_loc = elem.data_loc(current_bucket);
if elem.num_slots > 0 {
let old_slots = elem.num_slots;
elem.set_storage_offset(ix);
elem.set_storage_capacity_when_created_pow2(best_bucket.capacity_pow2);
elem.num_slots = num_slots;
if old_slots > 0 {
let current_bucket = &mut self.data[bucket_ix as usize];
current_bucket.free(elem_loc, elem_uid);
}
elem.storage_offset = ix;
elem.storage_capacity_when_created_pow2 = best_bucket.capacity_pow2;
elem.num_slots = data.len() as u64;
//debug!( "DATA ALLOC {:?} {} {} {}", key, elem.data_location, best_bucket.capacity, elem_uid );
if elem.num_slots > 0 {
best_bucket.allocate(ix, elem_uid).unwrap();
let slice = best_bucket.get_mut_cell_slice(ix, data.len() as u64);
if num_slots > 0 {
let best_bucket = &mut self.data[best_fit_bucket as usize];
best_bucket.allocate(ix, elem_uid, false).unwrap();
let slice = best_bucket.get_mut_cell_slice(ix, num_slots);
slice.copy_from_slice(data);
}
return Ok(());
@ -298,10 +303,12 @@ impl<T: Clone + Copy> Bucket<T> {
pub fn delete_key(&mut self, key: &Pubkey) {
if let Some((elem, elem_ix)) = self.find_entry(key) {
let elem_uid = self.index.uid(elem_ix);
let elem_uid = self.index.uid_unchecked(elem_ix);
if elem.num_slots > 0 {
let data_bucket = &self.data[elem.data_bucket_ix() as usize];
let ix = elem.data_bucket_ix() as usize;
let data_bucket = &self.data[ix];
let loc = elem.data_loc(data_bucket);
let data_bucket = &mut self.data[ix];
//debug!( "DATA FREE {:?} {} {} {}", key, elem.data_location, data_bucket.capacity, elem_uid );
data_bucket.free(loc, elem_uid);
}
@ -319,7 +326,7 @@ impl<T: Clone + Copy> Bucket<T> {
//increasing the capacity by ^4 reduces the
//likelihood of a re-index collision of 2^(max_search)^2
//1 in 2^32
let index = BucketStorage::new_with_capacity(
let mut index = BucketStorage::new_with_capacity(
Arc::clone(&self.drives),
1,
std::mem::size_of::<IndexEntry>() as u64,
@ -327,14 +334,16 @@ impl<T: Clone + Copy> Bucket<T> {
self.index.capacity_pow2 + i, // * 2,
self.index.max_search,
Arc::clone(&self.stats.index),
Arc::clone(&self.index.count),
);
let random = thread_rng().gen();
let mut valid = true;
for ix in 0..self.index.capacity() {
let uid = self.index.uid(ix);
if UID_UNLOCKED != uid {
if let Some(uid) = uid {
let elem: &IndexEntry = self.index.get(ix);
let new_ix = Self::bucket_create_key(&index, &elem.key, uid, random);
let new_ix =
Self::bucket_create_key(&mut index, &elem.key, uid, random, true);
if new_ix.is_err() {
valid = false;
break;
@ -391,6 +400,7 @@ impl<T: Clone + Copy> Bucket<T> {
Self::elem_size(),
self.index.max_search,
Arc::clone(&self.stats.data),
Arc::default(),
))
}
self.data.push(bucket);


@ -30,14 +30,13 @@ impl<T: Clone + Copy> BucketApi<T> {
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
stats: Arc<BucketMapStats>,
count: Arc<AtomicU64>,
) -> Self {
Self {
drives,
max_search,
stats,
bucket: RwLock::default(),
count,
count: Arc::default(),
}
}
@ -73,12 +72,7 @@ impl<T: Clone + Copy> BucketApi<T> {
}
pub fn bucket_len(&self) -> u64 {
self.bucket
.read()
.unwrap()
.as_ref()
.map(|bucket| bucket.bucket_len())
.unwrap_or_default()
self.count.load(Ordering::Relaxed)
}
pub fn delete_key(&self, key: &Pubkey) {
@ -95,11 +89,11 @@ impl<T: Clone + Copy> BucketApi<T> {
Arc::clone(&self.drives),
self.max_search,
Arc::clone(&self.stats),
Arc::clone(&self.count),
));
} else {
let write = bucket.as_mut().unwrap();
write.handle_delayed_grows();
self.count.store(write.bucket_len(), Ordering::Relaxed);
}
bucket
}
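`bucket_len` now answers from a shared `Arc<AtomicU64>` that `BucketStorage` keeps current on every allocate/free, rather than taking the bucket's `RwLock` just to read a length. The underlying pattern is an ordinary shared atomic counter; a minimal sketch:

// Minimal sketch of the shared-counter pattern: the storage side bumps the
// counter on allocate, and readers answer length queries with a relaxed load
// instead of acquiring the bucket lock.
use std::sync::{
    atomic::{AtomicU64, Ordering},
    Arc,
};

fn main() {
    let count = Arc::new(AtomicU64::new(0));
    let storage_side = Arc::clone(&count);
    storage_side.fetch_add(1, Ordering::Relaxed); // allocate()
    assert_eq!(count.load(Ordering::Relaxed), 1); // bucket_len()
}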


@ -79,21 +79,14 @@ impl<T: Clone + Copy + Debug> BucketMap<T> {
});
let drives = Arc::new(drives);
let mut per_bucket_count = Vec::with_capacity(config.max_buckets);
per_bucket_count.resize_with(config.max_buckets, Arc::default);
let stats = Arc::new(BucketMapStats {
per_bucket_count,
..BucketMapStats::default()
});
let buckets = stats
.per_bucket_count
.iter()
.map(|per_bucket_count| {
let stats = Arc::default();
let buckets = (0..config.max_buckets)
.into_iter()
.map(|_| {
Arc::new(BucketApi::new(
Arc::clone(&drives),
max_search,
Arc::clone(&stats),
Arc::clone(per_bucket_count),
))
})
.collect();


@ -14,5 +14,4 @@ pub struct BucketStats {
pub struct BucketMapStats {
pub index: Arc<BucketStats>,
pub data: Arc<BucketStats>,
pub per_bucket_count: Vec<Arc<AtomicU64>>,
}


@ -35,27 +35,42 @@ use {
pub const DEFAULT_CAPACITY_POW2: u8 = 5;
/// A Header UID of 0 indicates that the header is unlocked
pub(crate) const UID_UNLOCKED: Uid = 0;
const UID_UNLOCKED: Uid = 0;
pub(crate) type Uid = u64;
#[repr(C)]
struct Header {
lock: AtomicU64,
lock: u64,
}
impl Header {
fn try_lock(&self, uid: Uid) -> bool {
Ok(UID_UNLOCKED)
== self
.lock
.compare_exchange(UID_UNLOCKED, uid, Ordering::AcqRel, Ordering::Relaxed)
/// try to lock this entry with 'uid'
/// return true if it could be locked
fn try_lock(&mut self, uid: Uid) -> bool {
if self.lock == UID_UNLOCKED {
self.lock = uid;
true
} else {
false
}
}
fn unlock(&self) -> Uid {
self.lock.swap(UID_UNLOCKED, Ordering::Release)
/// mark this entry as unlocked
fn unlock(&mut self, expected: Uid) {
assert_eq!(expected, self.lock);
self.lock = UID_UNLOCKED;
}
fn uid(&self) -> Uid {
self.lock.load(Ordering::Acquire)
/// uid that has locked this entry or None if unlocked
fn uid(&self) -> Option<Uid> {
if self.lock == UID_UNLOCKED {
None
} else {
Some(self.lock)
}
}
/// true if this entry is unlocked
fn is_unlocked(&self) -> bool {
self.lock == UID_UNLOCKED
}
}
@ -64,7 +79,7 @@ pub struct BucketStorage {
mmap: MmapMut,
pub cell_size: u64,
pub capacity_pow2: u8,
pub used: AtomicU64,
pub count: Arc<AtomicU64>,
pub stats: Arc<BucketStats>,
pub max_search: MaxSearch,
}
@ -88,6 +103,7 @@ impl BucketStorage {
capacity_pow2: u8,
max_search: MaxSearch,
stats: Arc<BucketStats>,
count: Arc<AtomicU64>,
) -> Self {
let cell_size = elem_size * num_elems + std::mem::size_of::<Header>() as u64;
let (mmap, path) = Self::new_map(&drives, cell_size as usize, capacity_pow2, &stats);
@ -95,7 +111,7 @@ impl BucketStorage {
path,
mmap,
cell_size,
used: AtomicU64::new(0),
count,
capacity_pow2,
stats,
max_search,
@ -112,6 +128,7 @@ impl BucketStorage {
elem_size: u64,
max_search: MaxSearch,
stats: Arc<BucketStats>,
count: Arc<AtomicU64>,
) -> Self {
Self::new_with_capacity(
drives,
@ -120,53 +137,74 @@ impl BucketStorage {
DEFAULT_CAPACITY_POW2,
max_search,
stats,
count,
)
}
pub fn uid(&self, ix: u64) -> Uid {
assert!(ix < self.capacity(), "bad index size");
/// return ref to header of item 'ix' in mmapped file
fn header_ptr(&self, ix: u64) -> &Header {
let ix = (ix * self.cell_size) as usize;
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
return hdr.as_ref().unwrap().uid();
hdr.as_ref().unwrap()
}
}
pub fn allocate(&self, ix: u64, uid: Uid) -> Result<(), BucketStorageError> {
/// return mutable ref to header of item 'ix' in mmapped file
fn header_mut_ptr(&mut self, ix: u64) -> &mut Header {
let ix = (ix * self.cell_size) as usize;
let hdr_slice: &mut [u8] = &mut self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_mut_ptr() as *mut Header;
hdr.as_mut().unwrap()
}
}
/// return uid allocated at index 'ix' or None if vacant
pub fn uid(&self, ix: u64) -> Option<Uid> {
assert!(ix < self.capacity(), "bad index size");
self.header_ptr(ix).uid()
}
/// true if the entry at index 'ix' is free (as opposed to being allocated)
pub fn is_free(&self, ix: u64) -> bool {
// note that the terminology in the implementation is locked or unlocked.
// but our api is allocate/free
self.header_ptr(ix).is_unlocked()
}
/// caller knows id is not empty
pub fn uid_unchecked(&self, ix: u64) -> Uid {
self.uid(ix).unwrap()
}
/// 'is_resizing' true if caller is resizing the index (so don't increment count)
/// 'is_resizing' false if caller is adding an item to the index (so increment count)
pub fn allocate(
&mut self,
ix: u64,
uid: Uid,
is_resizing: bool,
) -> Result<(), BucketStorageError> {
assert!(ix < self.capacity(), "allocate: bad index size");
assert!(UID_UNLOCKED != uid, "allocate: bad uid");
let mut e = Err(BucketStorageError::AlreadyAllocated);
let ix = (ix * self.cell_size) as usize;
//debug!("ALLOC {} {}", ix, uid);
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
if hdr.as_ref().unwrap().try_lock(uid) {
e = Ok(());
self.used.fetch_add(1, Ordering::Relaxed);
if self.header_mut_ptr(ix).try_lock(uid) {
e = Ok(());
if !is_resizing {
self.count.fetch_add(1, Ordering::Relaxed);
}
};
}
e
}
pub fn free(&self, ix: u64, uid: Uid) {
pub fn free(&mut self, ix: u64, uid: Uid) {
assert!(ix < self.capacity(), "bad index size");
assert!(UID_UNLOCKED != uid, "free: bad uid");
let ix = (ix * self.cell_size) as usize;
//debug!("FREE {} {}", ix, uid);
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
//debug!("FREE uid: {}", hdr.as_ref().unwrap().uid());
let previous_uid = hdr.as_ref().unwrap().unlock();
assert_eq!(
previous_uid, uid,
"free: unlocked a header with a differet uid: {}",
previous_uid
);
self.used.fetch_sub(1, Ordering::Relaxed);
}
self.header_mut_ptr(ix).unlock(uid);
self.count.fetch_sub(1, Ordering::Relaxed);
}
pub fn get<T: Sized>(&self, ix: u64) -> &T {
@ -324,6 +362,9 @@ impl BucketStorage {
capacity_pow_2,
max_search,
Arc::clone(stats),
bucket
.map(|bucket| Arc::clone(&bucket.count))
.unwrap_or_default(),
);
if let Some(bucket) = bucket {
new_bucket.copy_contents(bucket);
@ -341,3 +382,43 @@ impl BucketStorage {
1 << self.capacity_pow2
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_bucket_storage() {
let tmpdir1 = std::env::temp_dir().join("bucket_map_test_mt");
let paths: Vec<PathBuf> = [tmpdir1]
.iter()
.filter(|x| std::fs::create_dir_all(x).is_ok())
.cloned()
.collect();
assert!(!paths.is_empty());
let mut storage =
BucketStorage::new(Arc::new(paths), 1, 1, 1, Arc::default(), Arc::default());
let ix = 0;
let uid = Uid::MAX;
assert!(storage.is_free(ix));
assert!(storage.allocate(ix, uid, false).is_ok());
assert!(storage.allocate(ix, uid, false).is_err());
assert!(!storage.is_free(ix));
assert_eq!(storage.uid(ix), Some(uid));
assert_eq!(storage.uid_unchecked(ix), uid);
storage.free(ix, uid);
assert!(storage.is_free(ix));
assert_eq!(storage.uid(ix), None);
let uid = 1;
assert!(storage.is_free(ix));
assert!(storage.allocate(ix, uid, false).is_ok());
assert!(storage.allocate(ix, uid, false).is_err());
assert!(!storage.is_free(ix));
assert_eq!(storage.uid(ix), Some(uid));
assert_eq!(storage.uid_unchecked(ix), uid);
storage.free(ix, uid);
assert!(storage.is_free(ix));
assert_eq!(storage.uid(ix), None);
}
}
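The `is_resizing` flag exists because `grow_index` re-creates every live entry in a fresh, larger index; those re-insertions must not bump the shared `count`, or the reported length would inflate on every resize. A hedged continuation of the test above illustrating the difference:

// Sketch, reusing the storage/ix/uid from test_bucket_storage above: an
// allocation made while resizing leaves the shared count untouched.
let before = storage.count.load(Ordering::Relaxed);
storage.allocate(ix, uid, /*is_resizing=*/ true).unwrap();
assert_eq!(storage.count.load(Ordering::Relaxed), before); // not incremented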


@ -1,9 +1,11 @@
#![allow(dead_code)]
use {
crate::{
bucket::Bucket,
bucket_storage::{BucketStorage, Uid},
RefCount,
},
modular_bitfield::prelude::*,
solana_sdk::{clock::Slot, pubkey::Pubkey},
std::{
collections::hash_map::DefaultHasher,
@ -19,13 +21,42 @@ use {
pub struct IndexEntry {
pub key: Pubkey, // can this be smaller if we have reduced the keys into buckets already?
pub ref_count: RefCount, // can this be smaller? Do we ever need more than 4B refcounts?
pub storage_offset: u64, // smaller? since these are variably sized, this could get tricky. well, actually accountinfo is not variable sized...
storage_cap_and_offset: PackedStorage,
// if the bucket doubled, the index can be recomputed using create_bucket_capacity_pow2
pub storage_capacity_when_created_pow2: u8, // see data_location
pub num_slots: Slot, // can this be smaller? epoch size should ~ be the max len. this is the num elements in the slot list
}
/// Pack the storage offset and capacity-when-created-pow2 fields into a single u64
#[bitfield(bits = 64)]
#[repr(C)]
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq)]
struct PackedStorage {
capacity_when_created_pow2: B8,
offset: B56,
}
impl IndexEntry {
pub fn init(&mut self, pubkey: &Pubkey) {
self.key = *pubkey;
self.ref_count = 0;
self.storage_cap_and_offset = PackedStorage::default();
self.num_slots = 0;
}
pub fn set_storage_capacity_when_created_pow2(
&mut self,
storage_capacity_when_created_pow2: u8,
) {
self.storage_cap_and_offset
.set_capacity_when_created_pow2(storage_capacity_when_created_pow2)
}
pub fn set_storage_offset(&mut self, storage_offset: u64) {
self.storage_cap_and_offset
.set_offset_checked(storage_offset)
.expect("New storage offset must fit into 7 bytes!")
}
pub fn data_bucket_from_num_slots(num_slots: Slot) -> u64 {
(num_slots as f64).log2().ceil() as u64 // use int log here?
}
@ -38,10 +69,18 @@ impl IndexEntry {
self.ref_count
}
fn storage_capacity_when_created_pow2(&self) -> u8 {
self.storage_cap_and_offset.capacity_when_created_pow2()
}
fn storage_offset(&self) -> u64 {
self.storage_cap_and_offset.offset()
}
// This function maps the original data location into an index in the current bucket storage.
// This is coupled with how we resize bucket storages.
pub fn data_loc(&self, storage: &BucketStorage) -> u64 {
self.storage_offset << (storage.capacity_pow2 - self.storage_capacity_when_created_pow2)
self.storage_offset() << (storage.capacity_pow2 - self.storage_capacity_when_created_pow2())
}
pub fn read_value<'a, T>(&self, bucket: &'a Bucket<T>) -> Option<(&'a [T], RefCount)> {
@ -50,7 +89,7 @@ impl IndexEntry {
let slice = if self.num_slots > 0 {
let loc = self.data_loc(data_bucket);
let uid = Self::key_uid(&self.key);
assert_eq!(uid, bucket.data[data_bucket_ix as usize].uid(loc));
assert_eq!(Some(uid), bucket.data[data_bucket_ix as usize].uid(loc));
bucket.data[data_bucket_ix as usize].get_cell_slice(loc, self.num_slots)
} else {
// num_slots is 0. This means we don't have an actual allocation.
@ -59,9 +98,59 @@ impl IndexEntry {
};
Some((slice, self.ref_count))
}
pub fn key_uid(key: &Pubkey) -> Uid {
let mut s = DefaultHasher::new();
key.hash(&mut s);
s.finish().max(1u64)
}
}
#[cfg(test)]
mod tests {
use super::*;
impl IndexEntry {
pub fn new(key: Pubkey) -> Self {
IndexEntry {
key,
ref_count: 0,
storage_cap_and_offset: PackedStorage::default(),
num_slots: 0,
}
}
}
/// verify that accessors for storage_offset and capacity_when_created are
/// correct and independent
#[test]
fn test_api() {
for offset in [0, 1, u32::MAX as u64] {
let mut index = IndexEntry::new(solana_sdk::pubkey::new_rand());
if offset != 0 {
index.set_storage_offset(offset);
}
assert_eq!(index.storage_offset(), offset);
assert_eq!(index.storage_capacity_when_created_pow2(), 0);
for pow in [1, 255, 0] {
index.set_storage_capacity_when_created_pow2(pow);
assert_eq!(index.storage_offset(), offset);
assert_eq!(index.storage_capacity_when_created_pow2(), pow);
}
}
}
#[test]
fn test_size() {
assert_eq!(std::mem::size_of::<PackedStorage>(), 1 + 7);
assert_eq!(std::mem::size_of::<IndexEntry>(), 32 + 8 + 8 + 8);
}
#[test]
#[should_panic(expected = "New storage offset must fit into 7 bytes!")]
fn test_set_storage_offset_value_too_large() {
let too_big = 1 << 56;
let mut index = IndexEntry::new(Pubkey::new_unique());
index.set_storage_offset(too_big);
}
}
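`PackedStorage` folds the old `storage_offset: u64` and `storage_capacity_when_created_pow2: u8` fields into one 64-bit value (8 bits of capacity, 56 bits of offset), giving the 56-byte `IndexEntry` that `test_size` asserts. The stored offset is relative to the data bucket's capacity at creation time, and `data_loc` rescales it when the bucket has since grown; a worked example of that shift, with hypothetical numbers:

// Worked example (hypothetical numbers): an entry written when the data bucket
// had capacity 2^6, at offset 5. After the bucket grows to 2^8, each original
// cell corresponds to a block of 2^(8 - 6) = 4 cells, so data_loc becomes:
let storage_offset: u64 = 5;
let capacity_when_created_pow2: u32 = 6;
let current_capacity_pow2: u32 = 8;
let data_loc = storage_offset << (current_capacity_pow2 - capacity_when_created_pow2);
assert_eq!(data_loc, 20); // 5 * 4: the same relative position in the larger bucket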


@ -9,5 +9,8 @@ for a in "$@"; do
fi
done
set -x
set -ex
if [[ ! -f sdk/bpf/syscalls.txt ]]; then
"$here"/cargo build --manifest-path "$here"/programs/bpf_loader/gen-syscall-list/Cargo.toml
fi
exec "$here"/cargo run --manifest-path "$here"/sdk/cargo-build-bpf/Cargo.toml -- $maybe_bpf_sdk "$@"


@ -226,6 +226,44 @@ EOF
annotate --style info \
"downstream-projects skipped as no relevant files were modified"
fi
# Downstream Anchor projects backwards compatibility
if affects \
.rs$ \
Cargo.lock$ \
Cargo.toml$ \
^ci/rust-version.sh \
^ci/test-stable-perf.sh \
^ci/test-stable.sh \
^ci/test-local-cluster.sh \
^core/build.rs \
^fetch-perf-libs.sh \
^programs/ \
^sdk/ \
^scripts/build-downstream-anchor-projects.sh \
; then
cat >> "$output_file" <<"EOF"
- command: "scripts/build-downstream-anchor-projects.sh"
name: "downstream-anchor-projects"
timeout_in_minutes: 10
EOF
else
annotate --style info \
"downstream-anchor-projects skipped as no relevant files were modified"
fi
# Wasm support
if affects \
^ci/test-wasm.sh \
^ci/test-stable.sh \
^sdk/ \
; then
command_step wasm ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-wasm.sh" 20
else
annotate --style info \
"wasm skipped as no relevant files were modified"
fi
# Benches...
if affects \
.rs$ \
@ -243,7 +281,15 @@ EOF
command_step "local-cluster" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster.sh" \
50
40
command_step "local-cluster-flakey" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-flakey.sh" \
10
command_step "local-cluster-slow" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-slow.sh" \
30
}
pull_or_push_steps() {


@ -19,3 +19,8 @@ steps:
timeout_in_minutes: 240
name: "publish crate"
branches: "!master"
- command: "ci/publish-tarball.sh"
agents:
- "queue=release-build-aarch64-apple-darwin"
timeout_in_minutes: 60
name: "publish tarball (aarch64-apple-darwin)"


@ -1,4 +1,4 @@
FROM solanalabs/rust:1.56.1
FROM solanalabs/rust:1.58.1
ARG date
RUN set -x \


@ -1,6 +1,6 @@
# Note: when the rust version is changed also modify
# ci/rust-version.sh to pick up the new image tag
FROM rust:1.56.1
FROM rust:1.58.1
# Add Google Protocol Buffers for Libra's metrics library.
ENV PROTOC_VERSION 3.8.0
@ -11,6 +11,7 @@ RUN set -x \
&& apt-get install apt-transport-https \
&& echo deb https://apt.buildkite.com/buildkite-agent stable main > /etc/apt/sources.list.d/buildkite-agent.list \
&& apt-key adv --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 32A37959C2FA5C3C99EFBC32A79206696452D198 \
&& curl -fsSL https://deb.nodesource.com/setup_current.x | bash - \
&& apt update \
&& apt install -y \
buildkite-agent \
@ -19,15 +20,20 @@ RUN set -x \
lcov \
libudev-dev \
mscgen \
nodejs \
net-tools \
rsync \
sudo \
golang \
unzip \
\
&& apt remove -y libcurl4-openssl-dev \
&& rm -rf /var/lib/apt/lists/* \
&& node --version \
&& npm --version \
&& rustup component add rustfmt \
&& rustup component add clippy \
&& rustup target add wasm32-unknown-unknown \
&& cargo install cargo-audit \
&& cargo install mdbook \
&& cargo install mdbook-linkcheck \


@ -23,6 +23,9 @@ if [[ -n $CI ]]; then
elif [[ -n $BUILDKITE ]]; then
export CI_BRANCH=$BUILDKITE_BRANCH
export CI_BUILD_ID=$BUILDKITE_BUILD_ID
if [[ $BUILDKITE_COMMIT = HEAD ]]; then
BUILDKITE_COMMIT="$(git rev-parse HEAD)"
fi
export CI_COMMIT=$BUILDKITE_COMMIT
export CI_JOB_ID=$BUILDKITE_JOB_ID
# The standard BUILDKITE_PULL_REQUEST environment variable is always "false" due
@ -35,7 +38,18 @@ if [[ -n $CI ]]; then
export CI_BASE_BRANCH=$BUILDKITE_BRANCH
export CI_PULL_REQUEST=
fi
export CI_OS_NAME=linux
case "$(uname -s)" in
Linux)
export CI_OS_NAME=linux
;;
Darwin)
export CI_OS_NAME=osx
;;
*)
;;
esac
if [[ -n $BUILDKITE_TRIGGERED_FROM_BUILD_PIPELINE_SLUG ]]; then
# The solana-secondary pipeline should use the slug of the pipeline that
# triggered it


@ -39,7 +39,11 @@ fi
case "$CI_OS_NAME" in
osx)
TARGET=x86_64-apple-darwin
_cputype="$(uname -m)"
if [[ $_cputype = arm64 ]]; then
_cputype=aarch64
fi
TARGET=${_cputype}-apple-darwin
;;
linux)
TARGET=x86_64-unknown-linux-gnu


@ -27,6 +27,8 @@ steps+=(test-stable-perf)
steps+=(test-downstream-builds)
steps+=(test-bench)
steps+=(test-local-cluster)
steps+=(test-local-cluster-flakey)
steps+=(test-local-cluster-slow)
step_index=0
if [[ -n "$1" ]]; then


@ -18,13 +18,13 @@
if [[ -n $RUST_STABLE_VERSION ]]; then
stable_version="$RUST_STABLE_VERSION"
else
stable_version=1.56.1
stable_version=1.58.1
fi
if [[ -n $RUST_NIGHTLY_VERSION ]]; then
nightly_version="$RUST_NIGHTLY_VERSION"
else
nightly_version=2021-11-30
nightly_version=2022-01-21
fi


@ -0,0 +1 @@
test-stable.sh


@ -0,0 +1 @@
test-stable.sh


@ -65,6 +65,9 @@ test-stable-bpf)
fi
done
# bpf-tools version
"$cargo_build_bpf" -V
# BPF program instruction count assertion
bpf_target_path=programs/bpf/target
_ "$cargo" stable test \
@ -100,7 +103,30 @@ test-stable-perf)
;;
test-local-cluster)
_ "$cargo" stable build --release --bins ${V:+--verbose}
_ "$cargo" stable test --release --package solana-local-cluster ${V:+--verbose} -- --nocapture --test-threads=1
_ "$cargo" stable test --release --package solana-local-cluster --test local_cluster ${V:+--verbose} -- --nocapture --test-threads=1
exit 0
;;
test-local-cluster-flakey)
_ "$cargo" stable build --release --bins ${V:+--verbose}
_ "$cargo" stable test --release --package solana-local-cluster --test local_cluster_flakey ${V:+--verbose} -- --nocapture --test-threads=1
exit 0
;;
test-local-cluster-slow)
_ "$cargo" stable build --release --bins ${V:+--verbose}
_ "$cargo" stable test --release --package solana-local-cluster --test local_cluster_slow ${V:+--verbose} -- --nocapture --test-threads=1
exit 0
;;
test-wasm)
_ node --version
_ npm --version
for dir in sdk/{program,}; do
if [[ -r "$dir"/package.json ]]; then
pushd "$dir"
_ npm install
_ npm test
popd
fi
done
exit 0
;;
*)

ci/test-wasm.sh Symbolic link

@ -0,0 +1 @@
test-stable.sh


@ -19,13 +19,24 @@ upload-ci-artifact() {
upload-s3-artifact() {
echo "--- artifact: $1 to $2"
(
set -x
docker run \
--rm \
--env AWS_ACCESS_KEY_ID \
--env AWS_SECRET_ACCESS_KEY \
--volume "$PWD:/solana" \
eremite/aws-cli:2018.12.18 \
args=(
--rm
--env AWS_ACCESS_KEY_ID
--env AWS_SECRET_ACCESS_KEY
--volume "$PWD:/solana"
)
if [[ $(uname -m) = arm64 ]]; then
# Ref: https://blog.jaimyn.dev/how-to-build-multi-architecture-docker-images-on-an-m1-mac/#tldr
args+=(
--platform linux/amd64
)
fi
args+=(
eremite/aws-cli:2018.12.18
/usr/bin/s3cmd --acl-public put "$1" "$2"
)
set -x
docker run "${args[@]}"
)
}


@ -1,6 +1,6 @@
[package]
name = "solana-clap-utils"
version = "1.9.0"
version = "1.10.0"
description = "Solana utilities for the clap"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@ -12,9 +12,9 @@ edition = "2021"
[dependencies]
clap = "2.33.0"
rpassword = "5.0"
solana-perf = { path = "../perf", version = "=1.9.0" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-perf = { path = "../perf", version = "=1.10.0" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.10.0", default-features = false}
solana-sdk = { path = "../sdk", version = "=1.10.0" }
thiserror = "1.0.30"
tiny-bip39 = "0.8.2"
uriparse = "0.6.3"
@ -22,7 +22,7 @@ url = "2.2.2"
chrono = "0.4"
[dev-dependencies]
tempfile = "3.2.0"
tempfile = "3.3.0"
[lib]
name = "solana_clap_utils"


@ -328,7 +328,7 @@ pub fn is_derivation<T>(value: T) -> Result<(), String>
where
T: AsRef<str> + Display,
{
let value = value.as_ref().replace("'", "");
let value = value.as_ref().replace('\'', "");
let mut parts = value.split('/');
let account = parts.next().unwrap();
account


@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli-config"
description = "Blockchain, Rebuilt for Scale"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -12,13 +12,13 @@ documentation = "https://docs.rs/solana-cli-config"
[dependencies]
dirs-next = "2.0.0"
lazy_static = "1.4.0"
serde = "1.0.130"
serde = "1.0.134"
serde_derive = "1.0.103"
serde_yaml = "0.8.21"
serde_yaml = "0.8.23"
url = "2.2.2"
[dev-dependencies]
anyhow = "1.0.51"
anyhow = "1.0.53"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli-output"
description = "Blockchain, Rebuilt for Scale"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -17,14 +17,14 @@ console = "0.15.0"
humantime = "2.0.1"
Inflector = "0.11.4"
indicatif = "0.16.2"
serde = "1.0.130"
serde_json = "1.0.72"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.0" }
solana-client = { path = "../client", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.0" }
serde = "1.0.134"
serde_json = "1.0.78"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.0" }
solana-client = { path = "../client", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.0" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
[package.metadata.docs.rs]


@ -99,7 +99,7 @@ impl OutputFormat {
pub struct CliAccount {
#[serde(flatten)]
pub keyed_account: RpcKeyedAccount,
#[serde(skip_serializing)]
#[serde(skip_serializing, skip_deserializing)]
pub use_lamports_unit: bool,
}
@ -2285,6 +2285,7 @@ impl fmt::Display for CliBlock {
let sign = if reward.lamports < 0 { "-" } else { "" };
total_rewards += reward.lamports;
#[allow(clippy::format_in_format_args)]
writeln!(
f,
" {:<44} {:^15} {:>15} {} {}",


@ -139,7 +139,7 @@ fn format_account_mode(message: &Message, index: usize) -> String {
} else {
"-"
},
if message.is_writable(index, /*demote_program_write_locks=*/ true) {
if message.is_writable(index) {
"w" // comment for consistent rust fmt (no joking; lol)
} else {
"-"


@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli"
description = "Blockchain, Rebuilt for Scale"
version = "1.9.0"
version = "1.10.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -17,39 +17,40 @@ criterion-stats = "0.3.0"
ctrlc = { version = "3.2.1", features = ["termination"] }
console = "0.15.0"
const_format = "0.2.22"
crossbeam-channel = "0.5"
log = "0.4.14"
humantime = "2.0.1"
num-traits = "0.2"
pretty-hex = "0.2.1"
reqwest = { version = "0.11.6", default-features = false, features = ["blocking", "rustls-tls", "json"] }
semver = "1.0.4"
serde = "1.0.130"
serde = "1.0.134"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.0" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.9.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.0" }
solana-cli-config = { path = "../cli-config", version = "=1.9.0" }
solana-cli-output = { path = "../cli-output", version = "=1.9.0" }
solana-client = { path = "../client", version = "=1.9.0" }
solana-config-program = { path = "../programs/config", version = "=1.9.0" }
solana-faucet = { path = "../faucet", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-program-runtime = { path = "../program-runtime", version = "=1.9.0" }
solana_rbpf = "=0.2.16"
solana-remote-wallet = { path = "../remote-wallet", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.0" }
serde_json = "1.0.78"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.0" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.10.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.0" }
solana-cli-config = { path = "../cli-config", version = "=1.10.0" }
solana-cli-output = { path = "../cli-output", version = "=1.10.0" }
solana-client = { path = "../client", version = "=1.10.0" }
solana-config-program = { path = "../programs/config", version = "=1.10.0" }
solana-faucet = { path = "../faucet", version = "=1.10.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
solana-program-runtime = { path = "../program-runtime", version = "=1.10.0" }
solana_rbpf = "=0.2.21"
solana-remote-wallet = { path = "../remote-wallet", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.0" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
thiserror = "1.0.30"
tiny-bip39 = "0.8.2"
[dev-dependencies]
solana-streamer = { path = "../streamer", version = "=1.9.0" }
solana-test-validator = { path = "../test-validator", version = "=1.9.0" }
tempfile = "3.2.0"
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-test-validator = { path = "../test-validator", version = "=1.10.0" }
tempfile = "3.3.0"
[[bin]]
name = "solana"


@ -98,10 +98,7 @@ pub fn get_fee_for_messages(
) -> Result<u64, CliError> {
Ok(messages
.iter()
.map(|message| {
println!("msg {:?}", message.recent_blockhash);
rpc_client.get_fee_for_message(message)
})
.map(|message| rpc_client.get_fee_for_message(message))
.collect::<Result<Vec<_>, _>>()?
.iter()
.sum())


@ -83,7 +83,6 @@ pub enum CliCommand {
filter: RpcTransactionLogsFilter,
},
Ping {
lamports: u64,
interval: Duration,
count: Option<u64>,
timeout: Duration,
@ -298,7 +297,13 @@ pub enum CliCommand {
authorized_voter: Option<Pubkey>,
authorized_withdrawer: Pubkey,
commission: u8,
sign_only: bool,
dump_transaction_message: bool,
blockhash_query: BlockhashQuery,
nonce_account: Option<Pubkey>,
nonce_authority: SignerIndex,
memo: Option<String>,
fee_payer: SignerIndex,
},
ShowVoteAccount {
pubkey: Pubkey,
@ -310,19 +315,32 @@ pub enum CliCommand {
destination_account_pubkey: Pubkey,
withdraw_authority: SignerIndex,
withdraw_amount: SpendAmount,
sign_only: bool,
dump_transaction_message: bool,
blockhash_query: BlockhashQuery,
nonce_account: Option<Pubkey>,
nonce_authority: SignerIndex,
memo: Option<String>,
fee_payer: SignerIndex,
},
CloseVoteAccount {
vote_account_pubkey: Pubkey,
destination_account_pubkey: Pubkey,
withdraw_authority: SignerIndex,
memo: Option<String>,
fee_payer: SignerIndex,
},
VoteAuthorize {
vote_account_pubkey: Pubkey,
new_authorized_pubkey: Pubkey,
vote_authorize: VoteAuthorize,
sign_only: bool,
dump_transaction_message: bool,
blockhash_query: BlockhashQuery,
nonce_account: Option<Pubkey>,
nonce_authority: SignerIndex,
memo: Option<String>,
fee_payer: SignerIndex,
authorized: SignerIndex,
new_authorized: Option<SignerIndex>,
},
@ -330,13 +348,25 @@ pub enum CliCommand {
vote_account_pubkey: Pubkey,
new_identity_account: SignerIndex,
withdraw_authority: SignerIndex,
sign_only: bool,
dump_transaction_message: bool,
blockhash_query: BlockhashQuery,
nonce_account: Option<Pubkey>,
nonce_authority: SignerIndex,
memo: Option<String>,
fee_payer: SignerIndex,
},
VoteUpdateCommission {
vote_account_pubkey: Pubkey,
commission: u8,
withdraw_authority: SignerIndex,
sign_only: bool,
dump_transaction_message: bool,
blockhash_query: BlockhashQuery,
nonce_account: Option<Pubkey>,
nonce_authority: SignerIndex,
memo: Option<String>,
fee_payer: SignerIndex,
},
// Wallet Commands
Address,
@ -942,7 +972,6 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
CliCommand::LiveSlots => process_live_slots(config),
CliCommand::Logs { filter } => process_logs(config, filter),
CliCommand::Ping {
lamports,
interval,
count,
timeout,
@ -951,7 +980,6 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
} => process_ping(
&rpc_client,
config,
*lamports,
interval,
count,
timeout,
@ -1384,7 +1412,13 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
authorized_voter,
authorized_withdrawer,
commission,
sign_only,
dump_transaction_message,
blockhash_query,
ref nonce_account,
nonce_authority,
memo,
fee_payer,
} => process_create_vote_account(
&rpc_client,
config,
@ -1394,7 +1428,13 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
authorized_voter,
*authorized_withdrawer,
*commission,
*sign_only,
*dump_transaction_message,
blockhash_query,
nonce_account.as_ref(),
*nonce_authority,
memo.as_ref(),
*fee_payer,
),
CliCommand::ShowVoteAccount {
pubkey: vote_account_pubkey,
@ -1412,7 +1452,13 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
withdraw_authority,
withdraw_amount,
destination_account_pubkey,
sign_only,
dump_transaction_message,
blockhash_query,
ref nonce_account,
nonce_authority,
memo,
fee_payer,
} => process_withdraw_from_vote_account(
&rpc_client,
config,
@ -1420,13 +1466,20 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
*withdraw_authority,
*withdraw_amount,
destination_account_pubkey,
*sign_only,
*dump_transaction_message,
blockhash_query,
nonce_account.as_ref(),
*nonce_authority,
memo.as_ref(),
*fee_payer,
),
CliCommand::CloseVoteAccount {
vote_account_pubkey,
withdraw_authority,
destination_account_pubkey,
memo,
fee_payer,
} => process_close_vote_account(
&rpc_client,
config,
@ -1434,12 +1487,19 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
*withdraw_authority,
destination_account_pubkey,
memo.as_ref(),
*fee_payer,
),
CliCommand::VoteAuthorize {
vote_account_pubkey,
new_authorized_pubkey,
vote_authorize,
sign_only,
dump_transaction_message,
blockhash_query,
nonce_account,
nonce_authority,
memo,
fee_payer,
authorized,
new_authorized,
} => process_vote_authorize(
@ -1450,33 +1510,63 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
*vote_authorize,
*authorized,
*new_authorized,
*sign_only,
*dump_transaction_message,
blockhash_query,
*nonce_account,
*nonce_authority,
memo.as_ref(),
*fee_payer,
),
CliCommand::VoteUpdateValidator {
vote_account_pubkey,
new_identity_account,
withdraw_authority,
sign_only,
dump_transaction_message,
blockhash_query,
nonce_account,
nonce_authority,
memo,
fee_payer,
} => process_vote_update_validator(
&rpc_client,
config,
vote_account_pubkey,
*new_identity_account,
*withdraw_authority,
*sign_only,
*dump_transaction_message,
blockhash_query,
*nonce_account,
*nonce_authority,
memo.as_ref(),
*fee_payer,
),
CliCommand::VoteUpdateCommission {
vote_account_pubkey,
commission,
withdraw_authority,
sign_only,
dump_transaction_message,
blockhash_query,
nonce_account,
nonce_authority,
memo,
fee_payer,
} => process_vote_update_commission(
&rpc_client,
config,
vote_account_pubkey,
*commission,
*withdraw_authority,
*sign_only,
*dump_transaction_message,
blockhash_query,
*nonce_account,
*nonce_authority,
memo.as_ref(),
*fee_payer,
),
// Wallet Commands
@ -1616,7 +1706,7 @@ mod tests {
serde_json::{json, Value},
solana_client::{
blockhash_query,
mock_sender::SIGNATURE,
mock_sender_for_cli::SIGNATURE,
rpc_request::RpcRequest,
rpc_response::{Response, RpcResponseContext},
},
@ -1975,7 +2065,13 @@ mod tests {
authorized_voter: Some(bob_pubkey),
authorized_withdrawer: bob_pubkey,
commission: 0,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
config.signers = vec![&keypair, &bob_keypair, &identity_keypair];
let result = process_command(&config);
@ -2006,7 +2102,13 @@ mod tests {
vote_account_pubkey: bob_pubkey,
new_authorized_pubkey,
vote_authorize: VoteAuthorize::Withdrawer,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
authorized: 0,
new_authorized: None,
};
@ -2019,7 +2121,13 @@ mod tests {
vote_account_pubkey: bob_pubkey,
new_identity_account: 2,
withdraw_authority: 1,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
let result = process_command(&config);
assert!(result.is_ok());
@ -2195,7 +2303,13 @@ mod tests {
authorized_voter: Some(bob_pubkey),
authorized_withdrawer: bob_pubkey,
commission: 0,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
config.signers = vec![&keypair, &bob_keypair, &identity_keypair];
assert!(process_command(&config).is_err());
@ -2204,7 +2318,13 @@ mod tests {
vote_account_pubkey: bob_pubkey,
new_authorized_pubkey: bob_pubkey,
vote_authorize: VoteAuthorize::Voter,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
authorized: 0,
new_authorized: None,
};
@ -2214,7 +2334,13 @@ mod tests {
vote_account_pubkey: bob_pubkey,
new_identity_account: 1,
withdraw_authority: 1,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
assert!(process_command(&config).is_err());
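The vote subcommands gain the offline-signing plumbing already used elsewhere in the CLI: `sign_only`, `dump_transaction_message`, a `blockhash_query`, optional durable-nonce account and authority, and explicit `memo`/`fee_payer` signer indices. Building one of the expanded variants follows the tests above; a condensed sketch with placeholder pubkeys and signer indices:

// Condensed sketch mirroring the test setup above; the pubkeys and signer
// indices are placeholders.
let command = CliCommand::VoteAuthorize {
    vote_account_pubkey,
    new_authorized_pubkey,
    vote_authorize: VoteAuthorize::Withdrawer,
    sign_only: false,              // true signs offline and prints signers instead of sending
    dump_transaction_message: false,
    blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
    nonce_account: None,           // Some(pubkey) to use a durable nonce
    nonce_authority: 0,
    memo: None,
    fee_payer: 0,
    authorized: 0,
    new_authorized: None,
};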


@ -5,6 +5,7 @@ use {
},
clap::{value_t, value_t_or_exit, App, AppSettings, Arg, ArgMatches, SubCommand},
console::{style, Emoji},
crossbeam_channel::unbounded,
serde::{Deserialize, Serialize},
solana_clap_utils::{
input_parsers::*,
@ -43,13 +44,13 @@ use {
message::Message,
native_token::lamports_to_sol,
nonce::State as NonceState,
pubkey::{self, Pubkey},
pubkey::Pubkey,
rent::Rent,
rpc_port::DEFAULT_RPC_PORT_STR,
signature::Signature,
slot_history,
stake::{self, state::StakeState},
system_instruction, system_program,
system_instruction,
sysvar::{
self,
slot_history::SlotHistory,
@ -262,15 +263,6 @@ impl ClusterQuerySubCommands for App<'_, '_> {
.takes_value(false)
.help("Print timestamp (unix time + microseconds as in gettimeofday) before each line"),
)
.arg(
Arg::with_name("lamports")
.long("lamports")
.value_name("NUMBER")
.takes_value(true)
.default_value("1")
.validator(is_amount)
.help("Number of lamports to transfer for each transaction"),
)
.arg(
Arg::with_name("timeout")
.short("t")
@ -515,7 +507,6 @@ pub fn parse_cluster_ping(
default_signer: &DefaultSigner,
wallet_manager: &mut Option<Arc<RemoteWalletManager>>,
) -> Result<CliCommandInfo, CliError> {
let lamports = value_t_or_exit!(matches, "lamports", u64);
let interval = Duration::from_secs(value_t_or_exit!(matches, "interval", u64));
let count = if matches.is_present("count") {
Some(value_t_or_exit!(matches, "count", u64))
@ -527,7 +518,6 @@ pub fn parse_cluster_ping(
let print_timestamp = matches.is_present("print_timestamp");
Ok(CliCommandInfo {
command: CliCommand::Ping {
lamports,
interval,
count,
timeout,
@ -1358,7 +1348,6 @@ pub fn process_get_transaction_count(rpc_client: &RpcClient, _config: &CliConfig
pub fn process_ping(
rpc_client: &RpcClient,
config: &CliConfig,
lamports: u64,
interval: &Duration,
count: &Option<u64>,
timeout: &Duration,
@ -1368,7 +1357,7 @@ pub fn process_ping(
println_name_value("Source Account:", &config.signers[0].pubkey().to_string());
println!();
let (signal_sender, signal_receiver) = std::sync::mpsc::channel();
let (signal_sender, signal_receiver) = unbounded();
ctrlc::set_handler(move || {
let _ = signal_sender.send(());
})
@ -1379,7 +1368,7 @@ pub fn process_ping(
let mut confirmation_time: VecDeque<u64> = VecDeque::with_capacity(1024);
let mut blockhash = rpc_client.get_latest_blockhash()?;
let mut blockhash_transaction_count = 0;
let mut lamports = 0;
let mut blockhash_acquired = Instant::now();
if let Some(fixed_blockhash) = fixed_blockhash {
let blockhash_origin = if *fixed_blockhash != Hash::default() {
@ -1399,15 +1388,12 @@ pub fn process_ping(
// Fetch a new blockhash every minute
let new_blockhash = rpc_client.get_new_latest_blockhash(&blockhash)?;
blockhash = new_blockhash;
blockhash_transaction_count = 0;
lamports = 0;
blockhash_acquired = Instant::now();
}
let seed =
&format!("{}{}", blockhash_transaction_count, blockhash)[0..pubkey::MAX_SEED_LEN];
let to = Pubkey::create_with_seed(&config.signers[0].pubkey(), seed, &system_program::id())
.unwrap();
blockhash_transaction_count += 1;
let to = config.signers[0].pubkey();
lamports += 1;
let build_message = |lamports| {
let ix = system_instruction::transfer(&config.signers[0].pubkey(), &to, lamports);
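`process_ping` no longer takes a `--lamports` argument or derives a new seed address per transaction: it transfers to the sender's own pubkey and bumps the lamport amount by one for each transaction built against the current blockhash (resetting when the blockhash is refreshed), so consecutive pings still serialize to distinct transactions rather than colliding as duplicates. A toy illustration of why the amount has to vary, with assumed values:

// Toy illustration (assumed values): with identical from/to, amount, and
// recent blockhash, two transfers would produce the same message and the same
// signature, and the second would be rejected as already processed. Varying
// the amount (1, 2, 3, ...) keeps every ping unique under one blockhash.
let blockhash = "blockhash-A"; // refreshed roughly once a minute by the loop above
let pings = [(blockhash, 1u64), (blockhash, 2), (blockhash, 3)];
assert!(pings.windows(2).all(|pair| pair[0] != pair[1]));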
@ -2128,7 +2114,7 @@ pub fn process_calculate_rent(
timing::years_as_slots(1.0, &seconds_per_tick, clock::DEFAULT_TICKS_PER_SLOT);
let slots_per_epoch = epoch_schedule.slots_per_epoch as f64;
let years_per_epoch = slots_per_epoch / slots_per_year;
let (lamports_per_epoch, _) = rent.due(0, data_length, years_per_epoch);
let lamports_per_epoch = rent.due(0, data_length, years_per_epoch).lamports();
let cli_rent_calculation = CliRentCalculation {
lamports_per_byte_year: rent.lamports_per_byte_year,
lamports_per_epoch,
@ -2304,7 +2290,6 @@ mod tests {
parse_command(&test_ping, &default_signer, &mut None).unwrap(),
CliCommandInfo {
command: CliCommand::Ping {
lamports: 1,
interval: Duration::from_secs(1),
count: Some(2),
timeout: Duration::from_secs(3),


@ -5,7 +5,7 @@ use {
},
clap::{App, AppSettings, Arg, ArgMatches, SubCommand},
console::style,
serde::{Deserialize, Serialize},
serde::{Deserialize, Deserializer, Serialize, Serializer},
solana_clap_utils::{input_parsers::*, input_validators::*, keypair::*},
solana_cli_output::{QuietDisplay, VerboseDisplay},
solana_client::{client_error::ClientError, rpc_client::RpcClient},
@ -23,6 +23,7 @@ use {
cmp::Ordering,
collections::{HashMap, HashSet},
fmt,
str::FromStr,
sync::Arc,
},
};
@ -45,7 +46,7 @@ pub enum FeatureCliCommand {
},
}
#[derive(Serialize, Deserialize)]
#[derive(Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase", tag = "status", content = "sinceSlot")]
pub enum CliFeatureStatus {
Inactive,
@ -53,7 +54,29 @@ pub enum CliFeatureStatus {
Active(Slot),
}
#[derive(Serialize, Deserialize)]
impl PartialOrd for CliFeatureStatus {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for CliFeatureStatus {
fn cmp(&self, other: &Self) -> Ordering {
match (self, other) {
(Self::Inactive, Self::Inactive) => Ordering::Equal,
(Self::Inactive, _) => Ordering::Greater,
(_, Self::Inactive) => Ordering::Less,
(Self::Pending, Self::Pending) => Ordering::Equal,
(Self::Pending, _) => Ordering::Greater,
(_, Self::Pending) => Ordering::Less,
(Self::Active(self_active_slot), Self::Active(other_active_slot)) => {
self_active_slot.cmp(other_active_slot)
}
}
}
}
#[derive(Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
pub struct CliFeature {
pub id: String,
@ -62,11 +85,28 @@ pub struct CliFeature {
pub status: CliFeatureStatus,
}
impl PartialOrd for CliFeature {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for CliFeature {
fn cmp(&self, other: &Self) -> Ordering {
match self.status.cmp(&other.status) {
Ordering::Equal => self.id.cmp(&other.id),
ordering => ordering,
}
}
}
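Taken together, these `Ord` impls make the later `features.sort_unstable()` call in `process_status` list active features first (ordered by activation slot), then pending, then inactive, with ties broken by the feature id string. A quick illustration using only the types shown above:

let mut statuses = vec![
    CliFeatureStatus::Inactive,
    CliFeatureStatus::Active(42),
    CliFeatureStatus::Pending,
    CliFeatureStatus::Active(7),
];
statuses.sort();
// Resulting order: Active(7), Active(42), Pending, Inactive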
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CliFeatures {
pub features: Vec<CliFeature>,
pub feature_activation_allowed: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub cluster_feature_sets: Option<CliClusterFeatureSets>,
#[serde(skip)]
pub inactive: bool,
}
@ -93,11 +133,16 @@ impl fmt::Display for CliFeatures {
CliFeatureStatus::Inactive => style("inactive".to_string()).red(),
CliFeatureStatus::Pending => style("activation pending".to_string()).yellow(),
CliFeatureStatus::Active(activation_slot) =>
style(format!("active since slot {}", activation_slot)).green(),
style(format!("active since slot {:>9}", activation_slot)).green(),
},
feature.description,
)?;
}
if let Some(feature_sets) = &self.cluster_feature_sets {
write!(f, "{}", feature_sets)?;
}
if self.inactive && !self.feature_activation_allowed {
writeln!(
f,
@ -114,6 +159,191 @@ impl fmt::Display for CliFeatures {
impl QuietDisplay for CliFeatures {}
impl VerboseDisplay for CliFeatures {}
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CliClusterFeatureSets {
pub tool_feature_set: u32,
pub feature_sets: Vec<CliFeatureSet>,
#[serde(skip)]
pub stake_allowed: bool,
#[serde(skip)]
pub rpc_allowed: bool,
}
impl fmt::Display for CliClusterFeatureSets {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let mut tool_feature_set_matches_cluster = false;
let software_versions_title = "Software Version";
let feature_set_title = "Feature Set";
let stake_percent_title = "Stake";
let rpc_percent_title = "RPC";
let mut max_software_versions_len = software_versions_title.len();
let mut max_feature_set_len = feature_set_title.len();
let mut max_stake_percent_len = stake_percent_title.len();
let mut max_rpc_percent_len = rpc_percent_title.len();
let feature_sets: Vec<_> = self
.feature_sets
.iter()
.map(|feature_set_info| {
let me = if self.tool_feature_set == feature_set_info.feature_set {
tool_feature_set_matches_cluster = true;
true
} else {
false
};
let software_versions: Vec<_> = feature_set_info
.software_versions
.iter()
.map(ToString::to_string)
.collect();
let software_versions = software_versions.join(", ");
let feature_set = if feature_set_info.feature_set == 0 {
"unknown".to_string()
} else {
feature_set_info.feature_set.to_string()
};
let stake_percent = format!("{:.2}%", feature_set_info.stake_percent);
let rpc_percent = format!("{:.2}%", feature_set_info.rpc_percent);
max_software_versions_len = max_software_versions_len.max(software_versions.len());
max_feature_set_len = max_feature_set_len.max(feature_set.len());
max_stake_percent_len = max_stake_percent_len.max(stake_percent.len());
max_rpc_percent_len = max_rpc_percent_len.max(rpc_percent.len());
(
software_versions,
feature_set,
stake_percent,
rpc_percent,
me,
)
})
.collect();
if !tool_feature_set_matches_cluster {
writeln!(
f,
"\n{}",
style("To activate features the tool and cluster feature sets must match, select a tool version that matches the cluster")
.bold())?;
} else {
if !self.stake_allowed {
write!(
f,
"\n{}",
style("To activate features the stake must be >= 95%")
.bold()
.red()
)?;
}
if !self.rpc_allowed {
write!(
f,
"\n{}",
style("To activate features the RPC nodes must be >= 95%")
.bold()
.red()
)?;
}
}
writeln!(
f,
"\n\n{}",
style(format!("Tool Feature Set: {}", self.tool_feature_set)).bold()
)?;
writeln!(
f,
"{}",
style(format!(
"{1:<0$} {3:<2$} {5:<4$} {7:<6$}",
max_software_versions_len,
software_versions_title,
max_feature_set_len,
feature_set_title,
max_stake_percent_len,
stake_percent_title,
max_rpc_percent_len,
rpc_percent_title,
))
.bold(),
)?;
for (software_versions, feature_set, stake_percent, rpc_percent, me) in feature_sets {
writeln!(
f,
"{1:<0$} {3:>2$} {5:>4$} {7:>6$} {8}",
max_software_versions_len,
software_versions,
max_feature_set_len,
feature_set,
max_stake_percent_len,
stake_percent,
max_rpc_percent_len,
rpc_percent,
if me { "<-- me" } else { "" },
)?;
}
writeln!(f)
}
}
impl QuietDisplay for CliClusterFeatureSets {}
impl VerboseDisplay for CliClusterFeatureSets {}
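The table rows above lean on Rust's positional width specifiers: in `{1:<0$}`, argument 1 is left-aligned and padded to the width supplied by argument 0, and `>` right-aligns the same way. A tiny standalone illustration of the idiom:

fn main() {
    let name_width: usize = 16;
    let pct_width: usize = 6;
    // Argument 1 left-aligned to argument 0's width; argument 3 right-aligned to argument 2's width.
    println!("{1:<0$} {3:>2$}", name_width, "1.10.0, 1.9.5", pct_width, "97.50%");
}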
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CliFeatureSet {
software_versions: Vec<CliVersion>,
feature_set: u32,
stake_percent: f64,
rpc_percent: f32,
}
#[derive(Eq, PartialEq, Ord, PartialOrd)]
struct CliVersion(Option<semver::Version>);
impl fmt::Display for CliVersion {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let s = match &self.0 {
None => "unknown".to_string(),
Some(version) => version.to_string(),
};
write!(f, "{}", s)
}
}
impl FromStr for CliVersion {
type Err = semver::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let version_option = if s == "unknown" {
None
} else {
Some(semver::Version::from_str(s)?)
};
Ok(CliVersion(version_option))
}
}
impl Serialize for CliVersion {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(&self.to_string())
}
}
impl<'de> Deserialize<'de> for CliVersion {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
let s: &str = Deserialize::deserialize(deserializer)?;
CliVersion::from_str(s).map_err(serde::de::Error::custom)
}
}
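Because `CliVersion` serializes through `Display` and deserializes through `FromStr`, the JSON output carries either a semver string or the literal "unknown" (mapped to `None`). A round-trip sketch, which would have to live inside this module since the struct is private, and assuming `serde_json` is available:

let known: CliVersion = serde_json::from_str("\"1.9.5\"").unwrap();
assert_eq!(known.to_string(), "1.9.5");

let unknown: CliVersion = serde_json::from_str("\"unknown\"").unwrap();
assert_eq!(serde_json::to_string(&unknown).unwrap(), "\"unknown\"");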
pub trait FeatureSubCommands {
fn feature_subcommands(self) -> Self;
}
@ -330,7 +560,10 @@ fn feature_set_stats(rpc_client: &RpcClient) -> Result<FeatureSetStats, ClientEr
}
// Feature activation is only allowed when 95% of the active stake is on the current feature set
fn feature_activation_allowed(rpc_client: &RpcClient, quiet: bool) -> Result<bool, ClientError> {
fn feature_activation_allowed(
rpc_client: &RpcClient,
quiet: bool,
) -> Result<(bool, Option<CliClusterFeatureSets>), ClientError> {
let my_feature_set = solana_version::Version::default().feature_set;
let feature_set_stats = feature_set_stats(rpc_client)?;
@ -346,54 +579,43 @@ fn feature_activation_allowed(rpc_client: &RpcClient, quiet: bool) -> Result<boo
)
.unwrap_or((false, false));
if !quiet {
if feature_set_stats.get(&my_feature_set).is_none() {
println!(
"{}",
style("To activate features the tool and cluster feature sets must match, select a tool version that matches the cluster")
.bold());
} else {
if !stake_allowed {
print!(
"\n{}",
style("To activate features the stake must be >= 95%")
.bold()
.red()
);
}
if !rpc_allowed {
print!(
"\n{}",
style("To activate features the RPC nodes must be >= 95%")
.bold()
.red()
);
}
}
println!(
"\n\n{}",
style(format!("Tool Feature Set: {}", my_feature_set)).bold()
);
let mut feature_set_stats = feature_set_stats.into_iter().collect::<Vec<_>>();
feature_set_stats.sort_by(|l, r| {
match l.1.software_versions[0]
.cmp(&r.1.software_versions[0])
let cluster_feature_sets = if quiet {
None
} else {
let mut feature_sets = feature_set_stats
.into_iter()
.map(
|(
feature_set,
FeatureSetStatsEntry {
stake_percent,
rpc_nodes_percent: rpc_percent,
software_versions,
},
)| {
CliFeatureSet {
software_versions: software_versions.into_iter().map(CliVersion).collect(),
feature_set,
stake_percent,
rpc_percent,
}
},
)
.collect::<Vec<_>>();
feature_sets.sort_by(|l, r| {
match l.software_versions[0]
.cmp(&r.software_versions[0])
.reverse()
{
Ordering::Equal => {
match l
.1
.stake_percent
.partial_cmp(&r.1.stake_percent)
.partial_cmp(&r.stake_percent)
.unwrap()
.reverse()
{
Ordering::Equal => {
l.1.rpc_nodes_percent
.partial_cmp(&r.1.rpc_nodes_percent)
.unwrap()
.reverse()
l.rpc_percent.partial_cmp(&r.rpc_percent).unwrap().reverse()
}
o => o,
}
@ -401,96 +623,15 @@ fn feature_activation_allowed(rpc_client: &RpcClient, quiet: bool) -> Result<boo
o => o,
}
});
Some(CliClusterFeatureSets {
tool_feature_set: my_feature_set,
feature_sets,
stake_allowed,
rpc_allowed,
})
};
let software_versions_title = "Software Version";
let feature_set_title = "Feature Set";
let stake_percent_title = "Stake";
let rpc_percent_title = "RPC";
let mut stats_output = Vec::new();
let mut max_software_versions_len = software_versions_title.len();
let mut max_feature_set_len = feature_set_title.len();
let mut max_stake_percent_len = stake_percent_title.len();
let mut max_rpc_percent_len = rpc_percent_title.len();
for (
feature_set,
FeatureSetStatsEntry {
stake_percent,
rpc_nodes_percent,
software_versions,
},
) in feature_set_stats.into_iter()
{
let me = feature_set == my_feature_set;
let feature_set = if feature_set == 0 {
"unknown".to_string()
} else {
feature_set.to_string()
};
let stake_percent = format!("{:.2}%", stake_percent);
let rpc_percent = format!("{:.2}%", rpc_nodes_percent);
let mut has_unknown = false;
let mut software_versions = software_versions
.iter()
.filter_map(|v| {
if v.is_none() {
has_unknown = true;
}
v.as_ref()
})
.map(ToString::to_string)
.collect::<Vec<_>>();
if has_unknown {
software_versions.push("unknown".to_string());
}
let software_versions = software_versions.join(", ");
max_software_versions_len = max_software_versions_len.max(software_versions.len());
max_feature_set_len = max_feature_set_len.max(feature_set.len());
max_stake_percent_len = max_stake_percent_len.max(stake_percent.len());
max_rpc_percent_len = max_rpc_percent_len.max(rpc_percent.len());
stats_output.push((
software_versions,
feature_set,
stake_percent,
rpc_percent,
me,
));
}
println!(
"{}",
style(format!(
"{1:<0$} {3:<2$} {5:<4$} {7:<6$}",
max_software_versions_len,
software_versions_title,
max_feature_set_len,
feature_set_title,
max_stake_percent_len,
stake_percent_title,
max_rpc_percent_len,
rpc_percent_title,
))
.bold(),
);
for (software_versions, feature_set, stake_percent, rpc_percent, me) in stats_output {
println!(
"{1:<0$} {3:>2$} {5:>4$} {7:>6$} {8}",
max_software_versions_len,
software_versions,
max_feature_set_len,
feature_set,
max_stake_percent_len,
stake_percent,
max_rpc_percent_len,
rpc_percent,
if me { "<-- me" } else { "" },
);
}
println!();
}
Ok(stake_allowed && rpc_allowed)
Ok((stake_allowed && rpc_allowed, cluster_feature_sets))
}
fn status_from_account(account: Account) -> Option<CliFeatureStatus> {
@ -550,10 +691,14 @@ fn process_status(
});
}
let feature_activation_allowed = feature_activation_allowed(rpc_client, features.len() <= 1)?;
features.sort_unstable();
let (feature_activation_allowed, cluster_feature_sets) =
feature_activation_allowed(rpc_client, features.len() <= 1)?;
let feature_set = CliFeatures {
features,
feature_activation_allowed,
cluster_feature_sets,
inactive,
};
Ok(config.output_format.formatted_string(&feature_set))
@ -577,7 +722,7 @@ fn process_activate(
}
}
if !feature_activation_allowed(rpc_client, false)? {
if !feature_activation_allowed(rpc_client, false)?.0 {
match force {
ForceActivation::Almost =>
return Err("Add force argument once more to override the sanity check to force feature activation ".into()),


@ -334,9 +334,17 @@ pub fn check_nonce_account(
match state_from_account(nonce_account)? {
State::Initialized(ref data) => {
if &data.blockhash != nonce_hash {
Err(Error::InvalidHash.into())
Err(Error::InvalidHash {
provided: *nonce_hash,
expected: data.blockhash,
}
.into())
} else if nonce_authority != &data.authority {
Err(Error::InvalidAuthority.into())
Err(Error::InvalidAuthority {
provided: *nonce_authority,
expected: data.authority,
}
.into())
} else {
Ok(())
}
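The nonce error variants now carry both sides of the mismatch instead of being bare unit variants. The enum itself is defined elsewhere, so the following is only a plausible thiserror-style shape reconstructed from the constructors above; the derive list and message text are guesses:

use {
    solana_sdk::{hash::Hash, pubkey::Pubkey},
    thiserror::Error,
};

#[derive(Debug, Error, PartialEq)]
pub enum Error {
    // Assumed shape: the diff only shows these two variants being constructed.
    #[error("invalid nonce hash: provided {provided}, expected {expected}")]
    InvalidHash { provided: Hash, expected: Hash },
    #[error("invalid nonce authority: provided {provided}, expected {expected}")]
    InvalidAuthority { provided: Pubkey, expected: Pubkey },
}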
@ -946,15 +954,22 @@ mod tests {
hash(b"invalid"),
0,
)));
let invalid_hash = Account::new_data(1, &data, &system_program::ID);
let invalid_hash = Account::new_data(1, &data, &system_program::ID).unwrap();
if let CliError::InvalidNonce(err) =
check_nonce_account(&invalid_hash.unwrap(), &nonce_pubkey, &blockhash).unwrap_err()
check_nonce_account(&invalid_hash, &nonce_pubkey, &blockhash).unwrap_err()
{
assert_eq!(err, Error::InvalidHash,);
assert_eq!(
err,
Error::InvalidHash {
provided: blockhash,
expected: hash(b"invalid"),
}
);
}
let new_nonce_authority = solana_sdk::pubkey::new_rand();
let data = Versions::new_current(State::Initialized(nonce::state::Data::new(
solana_sdk::pubkey::new_rand(),
new_nonce_authority,
blockhash,
0,
)));
@ -962,7 +977,13 @@ mod tests {
if let CliError::InvalidNonce(err) =
check_nonce_account(&invalid_authority.unwrap(), &nonce_pubkey, &blockhash).unwrap_err()
{
assert_eq!(err, Error::InvalidAuthority,);
assert_eq!(
err,
Error::InvalidAuthority {
provided: nonce_pubkey,
expected: new_nonce_authority,
}
);
}
let data = Versions::new_current(State::Uninitialized);


@ -42,6 +42,7 @@ use {
system_instruction::{self, SystemError},
system_program,
transaction::{Transaction, TransactionError},
transaction_context::TransactionContext,
},
std::{
fs::File,
@ -1990,7 +1991,8 @@ fn read_and_verify_elf(program_location: &str) -> Result<Vec<u8>, Box<dyn std::e
let mut program_data = Vec::new();
file.read_to_end(&mut program_data)
.map_err(|err| format!("Unable to read program file: {}", err))?;
let mut invoke_context = InvokeContext::new_mock(&[], &[]);
let mut transaction_context = TransactionContext::new(Vec::new(), 1, 1);
let mut invoke_context = InvokeContext::new_mock(&mut transaction_context, &[]);
// Verify the program
Executable::<BpfError, ThisInstructionMeter>::from_elf(

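The mock invoke context no longer owns its accounts: callers build a `TransactionContext` first and lend it out mutably. A minimal sketch of the new setup, mirroring the two lines above (the import paths and the meaning of the two trailing arguments, assumed here to be capacity limits, are my assumptions):

use {
    solana_program_runtime::invoke_context::InvokeContext,
    solana_sdk::transaction_context::TransactionContext,
};

// No accounts are needed just to verify an ELF, so the context starts empty;
// the subsequent verification step borrows `invoke_context` mutably.
let mut transaction_context = TransactionContext::new(Vec::new(), 1, 1);
let mut invoke_context = InvokeContext::new_mock(&mut transaction_context, &[]);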

@ -16,6 +16,7 @@ use {
pub enum SpendAmount {
All,
Some(u64),
RentExempt,
}
impl Default for SpendAmount {
@ -90,6 +91,7 @@ where
0,
from_pubkey,
fee_pubkey,
0,
build_message,
)?;
Ok((message, spend))
@ -97,6 +99,12 @@ where
let from_balance = rpc_client
.get_balance_with_commitment(from_pubkey, commitment)?
.value;
let from_rent_exempt_minimum = if amount == SpendAmount::RentExempt {
let data = rpc_client.get_account_data(from_pubkey)?;
rpc_client.get_minimum_balance_for_rent_exemption(data.len())?
} else {
0
};
let (message, SpendAndFee { spend, fee }) = resolve_spend_message(
rpc_client,
amount,
@ -104,6 +112,7 @@ where
from_balance,
from_pubkey,
fee_pubkey,
from_rent_exempt_minimum,
build_message,
)?;
if from_pubkey == fee_pubkey {
@ -140,6 +149,7 @@ fn resolve_spend_message<F>(
from_balance: u64,
from_pubkey: &Pubkey,
fee_pubkey: &Pubkey,
from_rent_exempt_minimum: u64,
build_message: F,
) -> Result<(Message, SpendAndFee), CliError>
where
@ -176,5 +186,20 @@ where
},
))
}
SpendAmount::RentExempt => {
let mut lamports = if from_pubkey == fee_pubkey {
from_balance.saturating_sub(fee)
} else {
from_balance
};
lamports = lamports.saturating_sub(from_rent_exempt_minimum);
Ok((
build_message(lamports),
SpendAndFee {
spend: lamports,
fee,
},
))
}
}
}
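In plain numbers, `SpendAmount::RentExempt` spends everything above the account's rent-exempt floor, and above the fee as well when the sender is also the fee payer. A worked sketch with made-up figures:

// Hypothetical lamport amounts, for illustration only.
let from_balance: u64 = 10_000_000;
let fee: u64 = 5_000;                     // subtracted only when sender == fee payer
let rent_exempt_minimum: u64 = 2_039_280; // from get_minimum_balance_for_rent_exemption

let spend = from_balance
    .saturating_sub(fee)
    .saturating_sub(rent_exempt_minimum);
assert_eq!(spend, 7_955_720);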


@ -1,23 +1,29 @@
use {
solana_client::rpc_client::RpcClient,
solana_sdk::{clock::DEFAULT_MS_PER_SLOT, commitment_config::CommitmentConfig, pubkey::Pubkey},
solana_sdk::{clock::DEFAULT_MS_PER_SLOT, commitment_config::CommitmentConfig},
std::{thread::sleep, time::Duration},
};
pub fn check_recent_balance(expected_balance: u64, client: &RpcClient, pubkey: &Pubkey) {
(0..5).for_each(|tries| {
let balance = client
.get_balance_with_commitment(pubkey, CommitmentConfig::processed())
.unwrap()
.value;
if balance == expected_balance {
return;
}
if tries == 4 {
assert_eq!(balance, expected_balance);
}
sleep(Duration::from_millis(500));
});
#[macro_export]
macro_rules! check_balance {
($expected_balance:expr, $client:expr, $pubkey:expr) => {
(0..5).for_each(|tries| {
let balance = $client
.get_balance_with_commitment($pubkey, CommitmentConfig::processed())
.unwrap()
.value;
if balance == $expected_balance {
return;
}
if tries == 4 {
assert_eq!(balance, $expected_balance);
}
std::thread::sleep(std::time::Duration::from_millis(500));
});
};
($expected_balance:expr, $client:expr, $pubkey:expr,) => {
check_balance!($expected_balance, $client, $pubkey)
};
}
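Turning the helper into a macro keeps the same retry loop (five attempts, 500 ms apart) but drops the `Pubkey` import and lets callers pass any expressions for the expected balance, client, and address; note that `CommitmentConfig` must be in scope wherever the macro is expanded, since the body names it unqualified. Typical use, matching the integration tests that import it as `solana_cli::check_balance`:

// After submitting a transfer in a test:
check_balance!(sol_to_lamports(1.0), &rpc_client, &recipient_pubkey);
// The trailing-comma form is accepted too:
check_balance!(sol_to_lamports(1.0), &rpc_client, &recipient_pubkey,);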
pub fn check_ready(rpc_client: &RpcClient) {

File diff suppressed because it is too large


@ -39,7 +39,7 @@ use {
system_program,
transaction::Transaction,
},
solana_transaction_status::{EncodedTransaction, UiTransactionEncoding},
solana_transaction_status::{Encodable, EncodedTransaction, UiTransactionEncoding},
std::{fmt::Write as FmtWrite, fs::File, io::Write, sync::Arc},
};
@ -462,18 +462,27 @@ pub fn process_show_account(
let mut account_string = config.output_format.formatted_string(&cli_account);
if config.output_format == OutputFormat::Display
|| config.output_format == OutputFormat::DisplayVerbose
{
if let Some(output_file) = output_file {
let mut f = File::create(output_file)?;
f.write_all(&data)?;
writeln!(&mut account_string)?;
writeln!(&mut account_string, "Wrote account data to {}", output_file)?;
} else if !data.is_empty() {
use pretty_hex::*;
writeln!(&mut account_string, "{:?}", data.hex_dump())?;
match config.output_format {
OutputFormat::Json | OutputFormat::JsonCompact => {
if let Some(output_file) = output_file {
let mut f = File::create(output_file)?;
f.write_all(account_string.as_bytes())?;
writeln!(&mut account_string)?;
writeln!(&mut account_string, "Wrote account to {}", output_file)?;
}
}
OutputFormat::Display | OutputFormat::DisplayVerbose => {
if let Some(output_file) = output_file {
let mut f = File::create(output_file)?;
f.write_all(&data)?;
writeln!(&mut account_string)?;
writeln!(&mut account_string, "Wrote account data to {}", output_file)?;
} else if !data.is_empty() {
use pretty_hex::*;
writeln!(&mut account_string, "{:?}", data.hex_dump())?;
}
}
OutputFormat::DisplayQuiet => (),
}
Ok(account_string)
@ -555,10 +564,8 @@ pub fn process_confirm(
.transaction
.decode()
.expect("Successful decode");
let json_transaction = EncodedTransaction::encode(
decoded_transaction.clone(),
UiTransactionEncoding::Json,
);
let json_transaction =
decoded_transaction.encode(UiTransactionEncoding::Json);
transaction = Some(CliTransaction {
transaction: json_transaction,
@ -600,7 +607,7 @@ pub fn process_decode_transaction(config: &CliConfig, transaction: &Transaction)
let sigverify_status = CliSignatureVerificationStatus::verify_transaction(transaction);
let decode_transaction = CliTransaction {
decoded_transaction: transaction.clone(),
transaction: EncodedTransaction::encode(transaction.clone(), UiTransactionEncoding::Json),
transaction: transaction.encode(UiTransactionEncoding::Json),
meta: None,
block_time: None,
slot: None,

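Encoding now goes through the `Encodable` trait added to the import above, so a transaction is encoded by reference instead of being cloned into `EncodedTransaction::encode`. Roughly, under that assumption about the trait's signature:

use solana_transaction_status::{Encodable, UiTransactionEncoding};

// `tx` is a solana_sdk::transaction::Transaction; no clone is needed just to encode it.
let ui_tx = tx.encode(UiTransactionEncoding::Json);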

@ -1,8 +1,10 @@
#![allow(clippy::integer_arithmetic)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
test_utils::{check_ready, check_recent_balance},
test_utils::check_ready,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
@ -14,6 +16,7 @@ use {
solana_sdk::{
commitment_config::CommitmentConfig,
hash::Hash,
native_token::sol_to_lamports,
pubkey::Pubkey,
signature::{keypair_from_seed, Keypair, Signer},
system_program,
@ -73,10 +76,14 @@ fn full_battery_tests(
&rpc_client,
&config_payer,
&config_payer.signers[0].pubkey(),
2000,
sol_to_lamports(2000.0),
)
.unwrap();
check_recent_balance(2000, &rpc_client, &config_payer.signers[0].pubkey());
check_balance!(
sol_to_lamports(2000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
let mut config_nonce = CliConfig::recent_for_tests();
config_nonce.json_rpc_url = json_rpc_url;
@ -108,12 +115,16 @@ fn full_battery_tests(
seed,
nonce_authority: optional_authority,
memo: None,
amount: SpendAmount::Some(1000),
amount: SpendAmount::Some(sol_to_lamports(1000.0)),
};
process_command(&config_payer).unwrap();
check_recent_balance(1000, &rpc_client, &config_payer.signers[0].pubkey());
check_recent_balance(1000, &rpc_client, &nonce_account);
check_balance!(
sol_to_lamports(1000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
check_balance!(sol_to_lamports(1000.0), &rpc_client, &nonce_account);
// Get nonce
config_payer.signers.pop();
@ -161,12 +172,16 @@ fn full_battery_tests(
nonce_authority: index,
memo: None,
destination_account_pubkey: payee_pubkey,
lamports: 100,
lamports: sol_to_lamports(100.0),
};
process_command(&config_payer).unwrap();
check_recent_balance(1000, &rpc_client, &config_payer.signers[0].pubkey());
check_recent_balance(900, &rpc_client, &nonce_account);
check_recent_balance(100, &rpc_client, &payee_pubkey);
check_balance!(
sol_to_lamports(1000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
check_balance!(sol_to_lamports(900.0), &rpc_client, &nonce_account);
check_balance!(sol_to_lamports(100.0), &rpc_client, &payee_pubkey);
// Show nonce account
config_payer.command = CliCommand::ShowNonceAccount {
@ -208,12 +223,16 @@ fn full_battery_tests(
nonce_authority: 1,
memo: None,
destination_account_pubkey: payee_pubkey,
lamports: 100,
lamports: sol_to_lamports(100.0),
};
process_command(&config_payer).unwrap();
check_recent_balance(1000, &rpc_client, &config_payer.signers[0].pubkey());
check_recent_balance(800, &rpc_client, &nonce_account);
check_recent_balance(200, &rpc_client, &payee_pubkey);
check_balance!(
sol_to_lamports(1000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
check_balance!(sol_to_lamports(800.0), &rpc_client, &nonce_account);
check_balance!(sol_to_lamports(200.0), &rpc_client, &payee_pubkey);
}
#[test]
@ -241,19 +260,27 @@ fn test_create_account_with_seed() {
&rpc_client,
&CliConfig::recent_for_tests(),
&offline_nonce_authority_signer.pubkey(),
42,
sol_to_lamports(42.0),
)
.unwrap();
request_and_confirm_airdrop(
&rpc_client,
&CliConfig::recent_for_tests(),
&online_nonce_creator_signer.pubkey(),
4242,
sol_to_lamports(4242.0),
)
.unwrap();
check_recent_balance(42, &rpc_client, &offline_nonce_authority_signer.pubkey());
check_recent_balance(4242, &rpc_client, &online_nonce_creator_signer.pubkey());
check_recent_balance(0, &rpc_client, &to_address);
check_balance!(
sol_to_lamports(42.0),
&rpc_client,
&offline_nonce_authority_signer.pubkey(),
);
check_balance!(
sol_to_lamports(4242.0),
&rpc_client,
&online_nonce_creator_signer.pubkey(),
);
check_balance!(0, &rpc_client, &to_address);
check_ready(&rpc_client);
@ -263,7 +290,7 @@ fn test_create_account_with_seed() {
let seed = authority_pubkey.to_string()[0..32].to_string();
let nonce_address =
Pubkey::create_with_seed(&creator_pubkey, &seed, &system_program::id()).unwrap();
check_recent_balance(0, &rpc_client, &nonce_address);
check_balance!(0, &rpc_client, &nonce_address);
let mut creator_config = CliConfig::recent_for_tests();
creator_config.json_rpc_url = test_validator.rpc_url();
@ -273,13 +300,21 @@ fn test_create_account_with_seed() {
seed: Some(seed),
nonce_authority: Some(authority_pubkey),
memo: None,
amount: SpendAmount::Some(241),
amount: SpendAmount::Some(sol_to_lamports(241.0)),
};
process_command(&creator_config).unwrap();
check_recent_balance(241, &rpc_client, &nonce_address);
check_recent_balance(42, &rpc_client, &offline_nonce_authority_signer.pubkey());
check_recent_balance(4000, &rpc_client, &online_nonce_creator_signer.pubkey());
check_recent_balance(0, &rpc_client, &to_address);
check_balance!(sol_to_lamports(241.0), &rpc_client, &nonce_address);
check_balance!(
sol_to_lamports(42.0),
&rpc_client,
&offline_nonce_authority_signer.pubkey(),
);
check_balance!(
sol_to_lamports(4000.999999999),
&rpc_client,
&online_nonce_creator_signer.pubkey(),
);
check_balance!(0, &rpc_client, &to_address);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@ -299,7 +334,7 @@ fn test_create_account_with_seed() {
authority_config.command = CliCommand::ClusterVersion;
process_command(&authority_config).unwrap_err();
authority_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(10.0)),
to: to_address,
from: 0,
sign_only: true,
@ -325,7 +360,7 @@ fn test_create_account_with_seed() {
submit_config.json_rpc_url = test_validator.rpc_url();
submit_config.signers = vec![&authority_presigner];
submit_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(10.0)),
to: to_address,
from: 0,
sign_only: false,
@ -344,8 +379,16 @@ fn test_create_account_with_seed() {
derived_address_program_id: None,
};
process_command(&submit_config).unwrap();
check_recent_balance(241, &rpc_client, &nonce_address);
check_recent_balance(31, &rpc_client, &offline_nonce_authority_signer.pubkey());
check_recent_balance(4000, &rpc_client, &online_nonce_creator_signer.pubkey());
check_recent_balance(10, &rpc_client, &to_address);
check_balance!(sol_to_lamports(241.0), &rpc_client, &nonce_address);
check_balance!(
sol_to_lamports(31.999999999),
&rpc_client,
&offline_nonce_authority_signer.pubkey(),
);
check_balance!(
sol_to_lamports(4000.999999999),
&rpc_client,
&online_nonce_creator_signer.pubkey(),
);
check_balance!(sol_to_lamports(10.0), &rpc_client, &to_address);
}
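The test amounts above switch from small raw lamport counts to whole SOL via `sol_to_lamports`, which scales by `LAMPORTS_PER_SOL` (10^9); balances like `sol_to_lamports(4000.999999999)` are just 4242 − 241 SOL less a single lamport, presumably the transaction fee charged in this test setup. The conversion itself:

use solana_sdk::native_token::{sol_to_lamports, LAMPORTS_PER_SOL};

assert_eq!(sol_to_lamports(1.0), LAMPORTS_PER_SOL); // 1_000_000_000 lamports
assert_eq!(sol_to_lamports(0.5), 500_000_000);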

View File

@ -1,3 +1,4 @@
#![allow(clippy::integer_arithmetic)]
use {
serde_json::Value,
solana_cli::{
@ -77,7 +78,7 @@ fn test_cli_program_deploy_non_upgradeable() {
assert_eq!(account0.lamports, minimum_balance_for_rent_exemption);
assert_eq!(account0.owner, bpf_loader::id());
assert!(account0.executable);
let mut file = File::open(noop_path.to_str().unwrap().to_string()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut elf = Vec::new();
file.read_to_end(&mut elf).unwrap();
assert_eq!(account0.data, elf);


@ -1,9 +1,11 @@
#![allow(clippy::integer_arithmetic)]
use {
solana_cli::cli::{process_command, CliCommand, CliConfig},
solana_client::rpc_client::RpcClient,
solana_faucet::faucet::run_local_faucet,
solana_sdk::{
commitment_config::CommitmentConfig,
native_token::sol_to_lamports,
signature::{Keypair, Signer},
},
solana_streamer::socket::SocketAddrSpace,
@ -22,7 +24,7 @@ fn test_cli_request_airdrop() {
bob_config.json_rpc_url = test_validator.rpc_url();
bob_config.command = CliCommand::Airdrop {
pubkey: None,
lamports: 50,
lamports: sol_to_lamports(50.0),
};
let keypair = Keypair::new();
bob_config.signers = vec![&keypair];
@ -36,5 +38,5 @@ fn test_cli_request_airdrop() {
let balance = rpc_client
.get_balance(&bob_config.signers[0].pubkey())
.unwrap();
assert_eq!(balance, 50);
assert_eq!(balance, sol_to_lamports(50.0));
}


@ -1,10 +1,12 @@
#![allow(clippy::integer_arithmetic)]
#![allow(clippy::redundant_closure)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
stake::StakeAuthorizationIndexed,
test_utils::{check_ready, check_recent_balance},
test_utils::check_ready,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
@ -59,7 +61,13 @@ fn test_stake_delegation_force() {
authorized_voter: None,
authorized_withdrawer,
commission: 0,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config).unwrap();
@ -144,7 +152,7 @@ fn test_seed_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_validator.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_validator.signers[0].pubkey());
let stake_address = Pubkey::create_with_seed(
&config_validator.signers[0].pubkey(),
@ -233,7 +241,7 @@ fn test_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_validator.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_validator.signers[0].pubkey());
// Create stake account
config_validator.signers.push(&stake_keypair);
@ -327,7 +335,7 @@ fn test_offline_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_validator.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_validator.signers[0].pubkey());
request_and_confirm_airdrop(
&rpc_client,
@ -336,7 +344,7 @@ fn test_offline_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_offline.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_offline.signers[0].pubkey());
// Create stake account
config_validator.signers.push(&stake_keypair);
@ -905,13 +913,13 @@ fn test_stake_authorize_with_fee_payer() {
process_command(&config_offline).unwrap_err();
request_and_confirm_airdrop(&rpc_client, &config, &default_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &config.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_payer, &payer_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &payer_pubkey);
check_balance!(100_000, &rpc_client, &payer_pubkey);
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
check_balance!(100_000, &rpc_client, &offline_pubkey);
check_ready(&rpc_client);
@ -938,7 +946,7 @@ fn test_stake_authorize_with_fee_payer() {
};
process_command(&config).unwrap();
// `config` balance should be 50,000 - 1 stake account sig - 1 fee sig
check_recent_balance(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
check_balance!(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
// Assign authority with separate fee payer
config.signers = vec![&default_signer, &payer_keypair];
@ -962,10 +970,10 @@ fn test_stake_authorize_with_fee_payer() {
};
process_command(&config).unwrap();
// `config` balance has not changed, despite submitting the TX
check_recent_balance(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
check_balance!(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
// `config_payer` however has paid `config`'s authority sig
// and `config_payer`'s fee sig
check_recent_balance(100_000 - SIG_FEE - SIG_FEE, &rpc_client, &payer_pubkey);
check_balance!(100_000 - SIG_FEE - SIG_FEE, &rpc_client, &payer_pubkey);
// Assign authority with offline fee payer
let blockhash = rpc_client.get_latest_blockhash().unwrap();
@ -1013,10 +1021,10 @@ fn test_stake_authorize_with_fee_payer() {
};
process_command(&config).unwrap();
// `config`'s balance again has not changed
check_recent_balance(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
check_balance!(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
// `config_offline` however has paid 1 sig due to being both authority
// and fee payer
check_recent_balance(100_000 - SIG_FEE, &rpc_client, &offline_pubkey);
check_balance!(100_000 - SIG_FEE, &rpc_client, &offline_pubkey);
}
#[test]
@ -1052,10 +1060,10 @@ fn test_stake_split() {
request_and_confirm_airdrop(&rpc_client, &config, &config.signers[0].pubkey(), 500_000)
.unwrap();
check_recent_balance(500_000, &rpc_client, &config.signers[0].pubkey());
check_balance!(500_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
check_balance!(100_000, &rpc_client, &offline_pubkey);
// Create stake account, identity is authority
let minimum_stake_balance = rpc_client
@ -1082,7 +1090,7 @@ fn test_stake_split() {
from: 0,
};
process_command(&config).unwrap();
check_recent_balance(
check_balance!(
10 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
@ -1102,7 +1110,7 @@ fn test_stake_split() {
amount: SpendAmount::Some(minimum_nonce_balance),
};
process_command(&config).unwrap();
check_recent_balance(minimum_nonce_balance, &rpc_client, &nonce_account.pubkey());
check_balance!(minimum_nonce_balance, &rpc_client, &nonce_account.pubkey());
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@ -1116,7 +1124,7 @@ fn test_stake_split() {
// Nonced offline split
let split_account = keypair_from_seed(&[2u8; 32]).unwrap();
check_recent_balance(0, &rpc_client, &split_account.pubkey());
check_balance!(0, &rpc_client, &split_account.pubkey());
config_offline.signers.push(&split_account);
config_offline.command = CliCommand::SplitStake {
stake_account_pubkey,
@ -1156,12 +1164,12 @@ fn test_stake_split() {
fee_payer: 0,
};
process_command(&config).unwrap();
check_recent_balance(
check_balance!(
8 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
);
check_recent_balance(
check_balance!(
2 * minimum_stake_balance,
&rpc_client,
&split_account.pubkey(),
@ -1201,10 +1209,10 @@ fn test_stake_set_lockup() {
request_and_confirm_airdrop(&rpc_client, &config, &config.signers[0].pubkey(), 500_000)
.unwrap();
check_recent_balance(500_000, &rpc_client, &config.signers[0].pubkey());
check_balance!(500_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
check_balance!(100_000, &rpc_client, &offline_pubkey);
// Create stake account, identity is authority
let minimum_stake_balance = rpc_client
@ -1238,7 +1246,12 @@ fn test_stake_set_lockup() {
from: 0,
};
process_command(&config).unwrap();
check_recent_balance(
check_balance!(
10 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
);
check_balance!(
10 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
@ -1371,7 +1384,7 @@ fn test_stake_set_lockup() {
amount: SpendAmount::Some(minimum_nonce_balance),
};
process_command(&config).unwrap();
check_recent_balance(minimum_nonce_balance, &rpc_client, &nonce_account_pubkey);
check_balance!(minimum_nonce_balance, &rpc_client, &nonce_account_pubkey);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@ -1467,10 +1480,10 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
request_and_confirm_airdrop(&rpc_client, &config, &config.signers[0].pubkey(), 200_000)
.unwrap();
check_recent_balance(200_000, &rpc_client, &config.signers[0].pubkey());
check_balance!(200_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
check_balance!(100_000, &rpc_client, &offline_pubkey);
// Create nonce account
let minimum_nonce_balance = rpc_client
@ -1547,7 +1560,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
from: 0,
};
process_command(&config).unwrap();
check_recent_balance(50_000, &rpc_client, &stake_pubkey);
check_balance!(50_000, &rpc_client, &stake_pubkey);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@ -1566,7 +1579,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
config_offline.command = CliCommand::WithdrawStake {
stake_account_pubkey: stake_pubkey,
destination_account_pubkey: recipient_pubkey,
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(50_000),
withdraw_authority: 0,
custodian: None,
sign_only: true,
@ -1585,7 +1598,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
config.command = CliCommand::WithdrawStake {
stake_account_pubkey: stake_pubkey,
destination_account_pubkey: recipient_pubkey,
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(50_000),
withdraw_authority: 0,
custodian: None,
sign_only: false,
@ -1601,7 +1614,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
fee_payer: 0,
};
process_command(&config).unwrap();
check_recent_balance(42, &rpc_client, &recipient_pubkey);
check_balance!(50_000, &rpc_client, &recipient_pubkey);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@ -1661,7 +1674,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
process_command(&config).unwrap();
let seed_address =
Pubkey::create_with_seed(&stake_pubkey, seed, &stake::program::id()).unwrap();
check_recent_balance(50_000, &rpc_client, &seed_address);
check_balance!(50_000, &rpc_client, &seed_address);
}
#[test]


@ -1,9 +1,11 @@
#![allow(clippy::integer_arithmetic)]
#![allow(clippy::redundant_closure)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
test_utils::{check_ready, check_recent_balance},
test_utils::check_ready,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
@ -14,6 +16,7 @@ use {
solana_faucet::faucet::run_local_faucet,
solana_sdk::{
commitment_config::CommitmentConfig,
native_token::sol_to_lamports,
nonce::State as NonceState,
pubkey::Pubkey,
signature::{keypair_from_seed, Keypair, NullSigner, Signer},
@ -49,15 +52,16 @@ fn test_transfer() {
let sender_pubkey = config.signers[0].pubkey();
let recipient_pubkey = Pubkey::new(&[1u8; 32]);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 50_000).unwrap();
check_recent_balance(50_000, &rpc_client, &sender_pubkey);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, sol_to_lamports(5.0))
.unwrap();
check_balance!(sol_to_lamports(5.0), &rpc_client, &sender_pubkey);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
// Plain ole transfer
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(1.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@ -73,12 +77,12 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(49_989, &rpc_client, &sender_pubkey);
check_recent_balance(10, &rpc_client, &recipient_pubkey);
check_balance!(sol_to_lamports(4.0) - 1, &rpc_client, &sender_pubkey);
check_balance!(sol_to_lamports(1.0), &rpc_client, &recipient_pubkey);
// Plain ole transfer, failure due to InsufficientFundsForSpendAndFee
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(49_989),
amount: SpendAmount::Some(sol_to_lamports(4.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@ -94,8 +98,8 @@ fn test_transfer() {
derived_address_program_id: None,
};
assert!(process_command(&config).is_err());
check_recent_balance(49_989, &rpc_client, &sender_pubkey);
check_recent_balance(10, &rpc_client, &recipient_pubkey);
check_balance!(sol_to_lamports(4.0) - 1, &rpc_client, &sender_pubkey);
check_balance!(sol_to_lamports(1.0), &rpc_client, &recipient_pubkey);
let mut offline = CliConfig::recent_for_tests();
offline.json_rpc_url = String::default();
@ -105,13 +109,14 @@ fn test_transfer() {
process_command(&offline).unwrap_err();
let offline_pubkey = offline.signers[0].pubkey();
request_and_confirm_airdrop(&rpc_client, &offline, &offline_pubkey, 50).unwrap();
check_recent_balance(50, &rpc_client, &offline_pubkey);
request_and_confirm_airdrop(&rpc_client, &offline, &offline_pubkey, sol_to_lamports(1.0))
.unwrap();
check_balance!(sol_to_lamports(1.0), &rpc_client, &offline_pubkey);
// Offline transfer
let blockhash = rpc_client.get_latest_blockhash().unwrap();
offline.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.5)),
to: recipient_pubkey,
from: 0,
sign_only: true,
@ -133,7 +138,7 @@ fn test_transfer() {
let offline_presigner = sign_only.presigner_of(&offline_pubkey).unwrap();
config.signers = vec![&offline_presigner];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.5)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@ -149,8 +154,8 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(39, &rpc_client, &offline_pubkey);
check_recent_balance(20, &rpc_client, &recipient_pubkey);
check_balance!(sol_to_lamports(0.5) - 1, &rpc_client, &offline_pubkey);
check_balance!(sol_to_lamports(1.5), &rpc_client, &recipient_pubkey);
// Create nonce account
let nonce_account = keypair_from_seed(&[3u8; 32]).unwrap();
@ -166,7 +171,11 @@ fn test_transfer() {
amount: SpendAmount::Some(minimum_nonce_balance),
};
process_command(&config).unwrap();
check_recent_balance(49_987 - minimum_nonce_balance, &rpc_client, &sender_pubkey);
check_balance!(
sol_to_lamports(4.0) - 3 - minimum_nonce_balance,
&rpc_client,
&sender_pubkey,
);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@ -181,7 +190,7 @@ fn test_transfer() {
// Nonced transfer
config.signers = vec![&default_signer];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(1.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@ -200,8 +209,12 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(49_976 - minimum_nonce_balance, &rpc_client, &sender_pubkey);
check_recent_balance(30, &rpc_client, &recipient_pubkey);
check_balance!(
sol_to_lamports(3.0) - 4 - minimum_nonce_balance,
&rpc_client,
&sender_pubkey,
);
check_balance!(sol_to_lamports(2.5), &rpc_client, &recipient_pubkey);
let new_nonce_hash = nonce_utils::get_account_with_commitment(
&rpc_client,
&nonce_account.pubkey(),
@ -221,7 +234,11 @@ fn test_transfer() {
new_authority: offline_pubkey,
};
process_command(&config).unwrap();
check_recent_balance(49_975 - minimum_nonce_balance, &rpc_client, &sender_pubkey);
check_balance!(
sol_to_lamports(3.0) - 5 - minimum_nonce_balance,
&rpc_client,
&sender_pubkey,
);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@ -236,7 +253,7 @@ fn test_transfer() {
// Offline, nonced transfer
offline.signers = vec![&default_offline_signer];
offline.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.4)),
to: recipient_pubkey,
from: 0,
sign_only: true,
@ -257,7 +274,7 @@ fn test_transfer() {
let offline_presigner = sign_only.presigner_of(&offline_pubkey).unwrap();
config.signers = vec![&offline_presigner];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.4)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@ -276,8 +293,8 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(28, &rpc_client, &offline_pubkey);
check_recent_balance(40, &rpc_client, &recipient_pubkey);
check_balance!(sol_to_lamports(0.1) - 2, &rpc_client, &offline_pubkey);
check_balance!(sol_to_lamports(2.9), &rpc_client, &recipient_pubkey);
}
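The expected sender balances above are consistent with a one-lamport-per-signature fee in this test validator (an inference on my part, since the validator setup is outside this hunk): each submitted transaction costs one lamport per required signature on top of whatever was transferred. For instance:

// Illustrative arithmetic only, assuming a 1-lamport signature fee.
let expected = sol_to_lamports(5.0) // initial airdrop
    - sol_to_lamports(1.0)          // amount transferred to the recipient
    - 1;                            // presumed signature fee
assert_eq!(expected, sol_to_lamports(4.0) - 1);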
#[test]
@ -305,19 +322,27 @@ fn test_transfer_multisession_signing() {
&rpc_client,
&CliConfig::recent_for_tests(),
&offline_from_signer.pubkey(),
43,
sol_to_lamports(43.0),
)
.unwrap();
request_and_confirm_airdrop(
&rpc_client,
&CliConfig::recent_for_tests(),
&offline_fee_payer_signer.pubkey(),
3,
sol_to_lamports(1.0) + 3,
)
.unwrap();
check_recent_balance(43, &rpc_client, &offline_from_signer.pubkey());
check_recent_balance(3, &rpc_client, &offline_fee_payer_signer.pubkey());
check_recent_balance(0, &rpc_client, &to_pubkey);
check_balance!(
sol_to_lamports(43.0),
&rpc_client,
&offline_from_signer.pubkey(),
);
check_balance!(
sol_to_lamports(1.0) + 3,
&rpc_client,
&offline_fee_payer_signer.pubkey(),
);
check_balance!(0, &rpc_client, &to_pubkey);
check_ready(&rpc_client);
@ -331,7 +356,7 @@ fn test_transfer_multisession_signing() {
fee_payer_config.command = CliCommand::ClusterVersion;
process_command(&fee_payer_config).unwrap_err();
fee_payer_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(sol_to_lamports(42.0)),
to: to_pubkey,
from: 1,
sign_only: true,
@ -362,7 +387,7 @@ fn test_transfer_multisession_signing() {
from_config.command = CliCommand::ClusterVersion;
process_command(&from_config).unwrap_err();
from_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(sol_to_lamports(42.0)),
to: to_pubkey,
from: 1,
sign_only: true,
@ -390,7 +415,7 @@ fn test_transfer_multisession_signing() {
config.json_rpc_url = test_validator.rpc_url();
config.signers = vec![&fee_payer_presigner, &from_presigner];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(sol_to_lamports(42.0)),
to: to_pubkey,
from: 1,
sign_only: false,
@ -407,9 +432,17 @@ fn test_transfer_multisession_signing() {
};
process_command(&config).unwrap();
check_recent_balance(1, &rpc_client, &offline_from_signer.pubkey());
check_recent_balance(1, &rpc_client, &offline_fee_payer_signer.pubkey());
check_recent_balance(42, &rpc_client, &to_pubkey);
check_balance!(
sol_to_lamports(1.0),
&rpc_client,
&offline_from_signer.pubkey(),
);
check_balance!(
sol_to_lamports(1.0) + 1,
&rpc_client,
&offline_fee_payer_signer.pubkey(),
);
check_balance!(sol_to_lamports(42.0), &rpc_client, &to_pubkey);
}
#[test]
@ -438,8 +471,8 @@ fn test_transfer_all() {
let recipient_pubkey = Pubkey::new(&[1u8; 32]);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 50_000).unwrap();
check_recent_balance(50_000, &rpc_client, &sender_pubkey);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
check_balance!(50_000, &rpc_client, &sender_pubkey);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
@ -461,8 +494,8 @@ fn test_transfer_all() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(0, &rpc_client, &sender_pubkey);
check_recent_balance(49_999, &rpc_client, &recipient_pubkey);
check_balance!(0, &rpc_client, &sender_pubkey);
check_balance!(49_999, &rpc_client, &recipient_pubkey);
}
#[test]
@ -491,8 +524,8 @@ fn test_transfer_unfunded_recipient() {
let recipient_pubkey = Pubkey::new(&[1u8; 32]);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 50_000).unwrap();
check_recent_balance(50_000, &rpc_client, &sender_pubkey);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
check_balance!(50_000, &rpc_client, &sender_pubkey);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
@ -551,17 +584,19 @@ fn test_transfer_with_seed() {
)
.unwrap();
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 1).unwrap();
request_and_confirm_airdrop(&rpc_client, &config, &derived_address, 50_000).unwrap();
check_recent_balance(1, &rpc_client, &sender_pubkey);
check_recent_balance(50_000, &rpc_client, &derived_address);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, sol_to_lamports(1.0))
.unwrap();
request_and_confirm_airdrop(&rpc_client, &config, &derived_address, sol_to_lamports(5.0))
.unwrap();
check_balance!(sol_to_lamports(1.0), &rpc_client, &sender_pubkey);
check_balance!(sol_to_lamports(5.0), &rpc_client, &derived_address);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
// Transfer with seed
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(50_000),
amount: SpendAmount::Some(sol_to_lamports(5.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@ -577,7 +612,7 @@ fn test_transfer_with_seed() {
derived_address_program_id: Some(derived_address_program_id),
};
process_command(&config).unwrap();
check_recent_balance(0, &rpc_client, &sender_pubkey);
check_recent_balance(50_000, &rpc_client, &recipient_pubkey);
check_recent_balance(0, &rpc_client, &derived_address);
check_balance!(sol_to_lamports(1.0) - 1, &rpc_client, &sender_pubkey);
check_balance!(sol_to_lamports(5.0), &rpc_client, &recipient_pubkey);
check_balance!(0, &rpc_client, &derived_address);
}


@ -1,9 +1,11 @@
#![allow(clippy::integer_arithmetic)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
test_utils::check_recent_balance,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
blockhash_query::{self, BlockhashQuery},
rpc_client::RpcClient,
@ -12,7 +14,7 @@ use {
solana_sdk::{
account_utils::StateMut,
commitment_config::CommitmentConfig,
signature::{Keypair, Signer},
signature::{Keypair, NullSigner, Signer},
},
solana_streamer::socket::SocketAddrSpace,
solana_test_validator::TestValidator,
@ -49,7 +51,13 @@ fn test_vote_authorize_and_withdraw() {
authorized_voter: None,
authorized_withdrawer: config.signers[0].pubkey(),
commission: 0,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config).unwrap();
let vote_account = rpc_client
@ -62,12 +70,12 @@ fn test_vote_authorize_and_withdraw() {
.get_minimum_balance_for_rent_exemption(VoteState::size_of())
.unwrap()
.max(1);
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Transfer in some more SOL
config.signers = vec![&default_signer];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(1_000),
amount: SpendAmount::Some(10_000),
to: vote_account_pubkey,
from: 0,
sign_only: false,
@ -83,8 +91,8 @@ fn test_vote_authorize_and_withdraw() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
let expected_balance = expected_balance + 1_000;
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
let expected_balance = expected_balance + 10_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Authorize vote account withdrawal to another signer
let first_withdraw_authority = Keypair::new();
@ -93,7 +101,13 @@ fn test_vote_authorize_and_withdraw() {
vote_account_pubkey,
new_authorized_pubkey: first_withdraw_authority.pubkey(),
vote_authorize: VoteAuthorize::Withdrawer,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
authorized: 0,
new_authorized: None,
};
@ -112,7 +126,13 @@ fn test_vote_authorize_and_withdraw() {
vote_account_pubkey,
new_authorized_pubkey: withdraw_authority.pubkey(),
vote_authorize: VoteAuthorize::Withdrawer,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
authorized: 1,
new_authorized: Some(1),
};
@ -126,7 +146,13 @@ fn test_vote_authorize_and_withdraw() {
vote_account_pubkey,
new_authorized_pubkey: withdraw_authority.pubkey(),
vote_authorize: VoteAuthorize::Withdrawer,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
authorized: 1,
new_authorized: Some(2),
};
@ -144,14 +170,20 @@ fn test_vote_authorize_and_withdraw() {
config.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(100),
withdraw_amount: SpendAmount::Some(1_000),
destination_account_pubkey: destination_account,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config).unwrap();
let expected_balance = expected_balance - 100;
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
check_recent_balance(100, &rpc_client, &destination_account);
let expected_balance = expected_balance - 1_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
check_balance!(1_000, &rpc_client, &destination_account);
// Re-assign validator identity
let new_identity_keypair = Keypair::new();
@ -160,7 +192,13 @@ fn test_vote_authorize_and_withdraw() {
vote_account_pubkey,
new_identity_account: 2,
withdraw_authority: 1,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config).unwrap();
@ -172,8 +210,281 @@ fn test_vote_authorize_and_withdraw() {
withdraw_authority: 1,
destination_account_pubkey: destination_account,
memo: None,
fee_payer: 0,
};
process_command(&config).unwrap();
check_recent_balance(0, &rpc_client, &vote_account_pubkey);
check_recent_balance(expected_balance, &rpc_client, &destination_account);
check_balance!(0, &rpc_client, &vote_account_pubkey);
check_balance!(expected_balance, &rpc_client, &destination_account);
}
#[test]
fn test_offline_vote_authorize_and_withdraw() {
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
let test_validator =
TestValidator::with_no_fees(mint_pubkey, Some(faucet_addr), SocketAddrSpace::Unspecified);
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let default_signer = Keypair::new();
let mut config_payer = CliConfig::recent_for_tests();
config_payer.json_rpc_url = test_validator.rpc_url();
config_payer.signers = vec![&default_signer];
let mut config_offline = CliConfig::recent_for_tests();
config_offline.json_rpc_url = String::default();
config_offline.command = CliCommand::ClusterVersion;
let offline_keypair = Keypair::new();
config_offline.signers = vec![&offline_keypair];
// Verify that we cannot reach the cluster
process_command(&config_offline).unwrap_err();
request_and_confirm_airdrop(
&rpc_client,
&config_payer,
&config_payer.signers[0].pubkey(),
100_000,
)
.unwrap();
check_balance!(100_000, &rpc_client, &config_payer.signers[0].pubkey());
request_and_confirm_airdrop(
&rpc_client,
&config_offline,
&config_offline.signers[0].pubkey(),
100_000,
)
.unwrap();
check_balance!(100_000, &rpc_client, &config_offline.signers[0].pubkey());
// Create vote account with specific withdrawer
let vote_account_keypair = Keypair::new();
let vote_account_pubkey = vote_account_keypair.pubkey();
config_payer.signers = vec![&default_signer, &vote_account_keypair];
config_payer.command = CliCommand::CreateVoteAccount {
vote_account: 1,
seed: None,
identity_account: 0,
authorized_voter: None,
authorized_withdrawer: offline_keypair.pubkey(),
commission: 0,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config_payer).unwrap();
let vote_account = rpc_client
.get_account(&vote_account_keypair.pubkey())
.unwrap();
let vote_state: VoteStateVersions = vote_account.state().unwrap();
let authorized_withdrawer = vote_state.convert_to_current().authorized_withdrawer;
assert_eq!(authorized_withdrawer, offline_keypair.pubkey());
let expected_balance = rpc_client
.get_minimum_balance_for_rent_exemption(VoteState::size_of())
.unwrap()
.max(1);
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Transfer in some more SOL
config_payer.signers = vec![&default_signer];
config_payer.command = CliCommand::Transfer {
amount: SpendAmount::Some(10_000),
to: vote_account_pubkey,
from: 0,
sign_only: false,
dump_transaction_message: false,
allow_unfunded_recipient: true,
no_wait: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
derived_address_seed: None,
derived_address_program_id: None,
};
process_command(&config_payer).unwrap();
let expected_balance = expected_balance + 10_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Authorize vote account withdrawal to another signer, offline
let withdraw_authority = Keypair::new();
let blockhash = rpc_client.get_latest_blockhash().unwrap();
config_offline.command = CliCommand::VoteAuthorize {
vote_account_pubkey,
new_authorized_pubkey: withdraw_authority.pubkey(),
vote_authorize: VoteAuthorize::Withdrawer,
sign_only: true,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::None(blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
authorized: 0,
new_authorized: None,
};
config_offline.output_format = OutputFormat::JsonCompact;
let sig_response = process_command(&config_offline).unwrap();
let sign_only = parse_sign_only_reply_string(&sig_response);
assert!(sign_only.has_all_signers());
let offline_presigner = sign_only
.presigner_of(&config_offline.signers[0].pubkey())
.unwrap();
config_payer.signers = vec![&offline_presigner];
config_payer.command = CliCommand::VoteAuthorize {
vote_account_pubkey,
new_authorized_pubkey: withdraw_authority.pubkey(),
vote_authorize: VoteAuthorize::Withdrawer,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(blockhash_query::Source::Cluster, blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
authorized: 0,
new_authorized: None,
};
process_command(&config_payer).unwrap();
let vote_account = rpc_client
.get_account(&vote_account_keypair.pubkey())
.unwrap();
let vote_state: VoteStateVersions = vote_account.state().unwrap();
let authorized_withdrawer = vote_state.convert_to_current().authorized_withdrawer;
assert_eq!(authorized_withdrawer, withdraw_authority.pubkey());
// Withdraw from vote account offline
let destination_account = solana_sdk::pubkey::new_rand(); // Send withdrawal to new account to make balance check easy
let blockhash = rpc_client.get_latest_blockhash().unwrap();
let fee_payer_null_signer = NullSigner::new(&default_signer.pubkey());
config_offline.signers = vec![&fee_payer_null_signer, &withdraw_authority];
config_offline.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(1_000),
destination_account_pubkey: destination_account,
sign_only: true,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::None(blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
config_offline.output_format = OutputFormat::JsonCompact;
let sig_response = process_command(&config_offline).unwrap();
let sign_only = parse_sign_only_reply_string(&sig_response);
let offline_presigner = sign_only
.presigner_of(&config_offline.signers[1].pubkey())
.unwrap();
config_payer.signers = vec![&default_signer, &offline_presigner];
config_payer.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(1_000),
destination_account_pubkey: destination_account,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(blockhash_query::Source::Cluster, blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config_payer).unwrap();
let expected_balance = expected_balance - 1_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
check_balance!(1_000, &rpc_client, &destination_account);
// Re-assign validator identity offline
let blockhash = rpc_client.get_latest_blockhash().unwrap();
let new_identity_keypair = Keypair::new();
let new_identity_null_signer = NullSigner::new(&new_identity_keypair.pubkey());
config_offline.signers = vec![
&fee_payer_null_signer,
&withdraw_authority,
&new_identity_null_signer,
];
config_offline.command = CliCommand::VoteUpdateValidator {
vote_account_pubkey,
new_identity_account: 2,
withdraw_authority: 1,
sign_only: true,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::None(blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config_offline).unwrap();
config_offline.output_format = OutputFormat::JsonCompact;
let sig_response = process_command(&config_offline).unwrap();
let sign_only = parse_sign_only_reply_string(&sig_response);
let offline_presigner = sign_only
.presigner_of(&config_offline.signers[1].pubkey())
.unwrap();
config_payer.signers = vec![&default_signer, &offline_presigner, &new_identity_keypair];
config_payer.command = CliCommand::VoteUpdateValidator {
vote_account_pubkey,
new_identity_account: 2,
withdraw_authority: 1,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(blockhash_query::Source::Cluster, blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config_payer).unwrap();
// Close vote account offline. Must use WithdrawFromVoteAccount and specify amount, since
// CloseVoteAccount requires RpcClient
let destination_account = solana_sdk::pubkey::new_rand(); // Send withdrawal to new account to make balance check easy
config_offline.signers = vec![&fee_payer_null_signer, &withdraw_authority];
config_offline.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(expected_balance),
destination_account_pubkey: destination_account,
sign_only: true,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::None(blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config_offline).unwrap();
config_offline.output_format = OutputFormat::JsonCompact;
let sig_response = process_command(&config_offline).unwrap();
let sign_only = parse_sign_only_reply_string(&sig_response);
let offline_presigner = sign_only
.presigner_of(&config_offline.signers[1].pubkey())
.unwrap();
config_payer.signers = vec![&default_signer, &offline_presigner];
config_payer.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(expected_balance),
destination_account_pubkey: destination_account,
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::FeeCalculator(blockhash_query::Source::Cluster, blockhash),
nonce_account: None,
nonce_authority: 0,
memo: None,
fee_payer: 0,
};
process_command(&config_payer).unwrap();
check_balance!(0, &rpc_client, &vote_account_pubkey);
check_balance!(expected_balance, &rpc_client, &destination_account);
}


@@ -1,6 +1,6 @@
[package]
name = "solana-client-test"
version = "1.9.0"
version = "1.10.0"
description = "Solana RPC Test"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -10,24 +10,26 @@ documentation = "https://docs.rs/solana-client-test"
edition = "2021"
[dependencies]
serde_json = "1.0.72"
serde_json = "1.0.78"
serial_test = "0.5.1"
solana-client = { path = "../client", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.9.0" }
solana-metrics = { path = "../metrics", version = "=1.9.0" }
solana-perf = { path = "../perf", version = "=1.9.0" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.9.0" }
solana-rpc = { path = "../rpc", version = "=1.9.0" }
solana-runtime = { path = "../runtime", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-streamer = { path = "../streamer", version = "=1.9.0" }
solana-test-validator = { path = "../test-validator", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
solana-client = { path = "../client", version = "=1.10.0" }
solana-ledger = { path = "../ledger", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.10.0" }
solana-metrics = { path = "../metrics", version = "=1.10.0" }
solana-perf = { path = "../perf", version = "=1.10.0" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.10.0" }
solana-rpc = { path = "../rpc", version = "=1.10.0" }
solana-runtime = { path = "../runtime", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-streamer = { path = "../streamer", version = "=1.10.0" }
solana-test-validator = { path = "../test-validator", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
systemstat = "0.1.10"
[dev-dependencies]
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -4,11 +4,16 @@ use {
solana_client::{
pubsub_client::PubsubClient,
rpc_client::RpcClient,
rpc_config::{RpcAccountInfoConfig, RpcProgramAccountsConfig},
rpc_response::SlotInfo,
rpc_config::{
RpcAccountInfoConfig, RpcBlockSubscribeConfig, RpcBlockSubscribeFilter,
RpcProgramAccountsConfig,
},
rpc_response::{RpcBlockUpdate, SlotInfo},
},
solana_ledger::{blockstore::Blockstore, get_tmp_ledger_path},
solana_rpc::{
optimistically_confirmed_bank_tracker::OptimisticallyConfirmedBank,
rpc::create_test_transactions_and_populate_blockstore,
rpc_pubsub_service::{PubSubConfig, PubSubService},
rpc_subscriptions::RpcSubscriptions,
},
@@ -20,7 +25,7 @@ use {
},
solana_sdk::{
clock::Slot,
commitment_config::CommitmentConfig,
commitment_config::{CommitmentConfig, CommitmentLevel},
native_token::sol_to_lamports,
pubkey::Pubkey,
rpc_port,
@@ -29,11 +34,12 @@ use {
},
solana_streamer::socket::SocketAddrSpace,
solana_test_validator::TestValidator,
solana_transaction_status::{TransactionDetails, UiTransactionEncoding},
std::{
collections::HashSet,
net::{IpAddr, SocketAddr},
sync::{
atomic::{AtomicBool, Ordering},
atomic::{AtomicBool, AtomicU64, Ordering},
Arc, RwLock,
},
thread::sleep,
@@ -119,9 +125,10 @@ fn test_account_subscription() {
let bank1 = Bank::new_from_parent(&bank0, &Pubkey::default(), 1);
bank_forks.write().unwrap().insert(bank1);
let bob = Keypair::new();
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
let subscriptions = Arc::new(RpcSubscriptions::new_for_tests(
&exit,
max_complete_transaction_status_slot,
bank_forks.clone(),
Arc::new(RwLock::new(BlockCommitmentCache::default())),
OptimisticallyConfirmedBank::locked_from_bank_forks_root(&bank_forks),
@@ -194,6 +201,113 @@ fn test_account_subscription() {
assert_eq!(errors, [].to_vec());
}
#[test]
#[serial]
fn test_block_subscription() {
// setup BankForks
let exit = Arc::new(AtomicBool::new(false));
let GenesisConfigInfo {
genesis_config,
mint_keypair: alice,
..
} = create_genesis_config(10_000);
let bank = Bank::new_for_tests(&genesis_config);
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
// setup Blockstore
let ledger_path = get_tmp_ledger_path!();
let blockstore = Blockstore::open(&ledger_path).unwrap();
let blockstore = Arc::new(blockstore);
// populate ledger with test txs
let bank = bank_forks.read().unwrap().working_bank();
let keypair1 = Keypair::new();
let keypair2 = Keypair::new();
let keypair3 = Keypair::new();
let max_complete_transaction_status_slot = Arc::new(AtomicU64::new(blockstore.max_root()));
let _confirmed_block_signatures = create_test_transactions_and_populate_blockstore(
vec![&alice, &keypair1, &keypair2, &keypair3],
0,
bank,
blockstore.clone(),
max_complete_transaction_status_slot,
);
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
// setup RpcSubscriptions && PubSubService
let subscriptions = Arc::new(RpcSubscriptions::new_for_tests_with_blockstore(
&exit,
max_complete_transaction_status_slot,
blockstore.clone(),
bank_forks.clone(),
Arc::new(RwLock::new(BlockCommitmentCache::default())),
OptimisticallyConfirmedBank::locked_from_bank_forks_root(&bank_forks),
));
let pubsub_addr = SocketAddr::new(
IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)),
rpc_port::DEFAULT_RPC_PUBSUB_PORT,
);
let pub_cfg = PubSubConfig {
enable_block_subscription: true,
..PubSubConfig::default()
};
let (trigger, pubsub_service) = PubSubService::new(pub_cfg, &subscriptions, pubsub_addr);
std::thread::sleep(Duration::from_millis(400));
// setup PubsubClient
let (mut client, receiver) = PubsubClient::block_subscribe(
&format!("ws://0.0.0.0:{}/", pubsub_addr.port()),
RpcBlockSubscribeFilter::All,
Some(RpcBlockSubscribeConfig {
commitment: Some(CommitmentConfig {
commitment: CommitmentLevel::Confirmed,
}),
encoding: Some(UiTransactionEncoding::Json),
transaction_details: Some(TransactionDetails::Signatures),
show_rewards: None,
}),
)
.unwrap();
// trigger Gossip notification
let slot = bank_forks.read().unwrap().highest_slot();
subscriptions.notify_gossip_subscribers(slot);
let maybe_actual = receiver.recv_timeout(Duration::from_millis(400));
match maybe_actual {
Ok(actual) => {
let versioned_block = blockstore.get_complete_block(slot, false).unwrap();
let legacy_block = versioned_block.into_legacy_block().unwrap();
let block = legacy_block.clone().configure(
UiTransactionEncoding::Json,
TransactionDetails::Signatures,
false,
);
let expected = RpcBlockUpdate {
slot,
block: Some(block),
err: None,
};
let block = legacy_block.configure(
UiTransactionEncoding::Json,
TransactionDetails::Signatures,
false,
);
assert_eq!(actual.value.slot, expected.slot);
assert!(block.eq(&actual.value.block.unwrap()));
}
Err(e) => {
eprintln!("unexpected websocket receive timeout");
assert_eq!(Some(e), None);
}
}
// cleanup
exit.store(true, Ordering::Relaxed);
trigger.cancel();
client.shutdown().unwrap();
pubsub_service.close().unwrap();
}
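The test above doubles as a recipe for consuming the new block subscription stream outside the test harness. Below is a minimal standalone sketch, not the test itself, under stated assumptions: the node must have block subscriptions enabled (the test does this via `PubSubConfig { enable_block_subscription: true, .. }`), and the websocket URL is a placeholder for whatever pubsub endpoint the node exposes.
use {
    solana_client::{
        pubsub_client::PubsubClient,
        rpc_config::{RpcBlockSubscribeConfig, RpcBlockSubscribeFilter},
    },
    solana_sdk::commitment_config::{CommitmentConfig, CommitmentLevel},
    solana_transaction_status::{TransactionDetails, UiTransactionEncoding},
    std::time::Duration,
};
fn main() {
    // Placeholder pubsub endpoint; the default pubsub port is 8900.
    let (mut client, receiver) = PubsubClient::block_subscribe(
        "ws://127.0.0.1:8900/",
        RpcBlockSubscribeFilter::All,
        Some(RpcBlockSubscribeConfig {
            commitment: Some(CommitmentConfig {
                commitment: CommitmentLevel::Confirmed,
            }),
            encoding: Some(UiTransactionEncoding::Json),
            transaction_details: Some(TransactionDetails::Signatures),
            show_rewards: None,
        }),
    )
    .expect("block_subscribe");
    // Drain a handful of notifications, printing only the slot of each update.
    for _ in 0..10 {
        match receiver.recv_timeout(Duration::from_secs(10)) {
            Ok(update) => println!("block update at slot {}", update.value.slot),
            Err(_) => break, // timed out or subscription closed
        }
    }
    client.shutdown().unwrap();
}
Only the slot is inspected here, so the sketch stays independent of the transaction encoding and detail options, which can be adjusted through the same config struct used in the test.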
#[test]
#[serial]
fn test_program_subscription() {
@@ -215,9 +329,10 @@ fn test_program_subscription() {
let bank1 = Bank::new_from_parent(&bank0, &Pubkey::default(), 1);
bank_forks.write().unwrap().insert(bank1);
let bob = Keypair::new();
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
let subscriptions = Arc::new(RpcSubscriptions::new_for_tests(
&exit,
max_complete_transaction_status_slot,
bank_forks.clone(),
Arc::new(RwLock::new(BlockCommitmentCache::default())),
OptimisticallyConfirmedBank::locked_from_bank_forks_root(&bank_forks),
@@ -300,9 +415,10 @@ fn test_root_subscription() {
let bank0 = bank_forks.read().unwrap().get(0).unwrap().clone();
let bank1 = Bank::new_from_parent(&bank0, &Pubkey::default(), 1);
bank_forks.write().unwrap().insert(bank1);
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
let subscriptions = Arc::new(RpcSubscriptions::new_for_tests(
&exit,
max_complete_transaction_status_slot,
bank_forks.clone(),
Arc::new(RwLock::new(BlockCommitmentCache::default())),
OptimisticallyConfirmedBank::locked_from_bank_forks_root(&bank_forks),
@@ -350,8 +466,10 @@ fn test_slot_subscription() {
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
let optimistically_confirmed_bank =
OptimisticallyConfirmedBank::locked_from_bank_forks_root(&bank_forks);
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
let subscriptions = Arc::new(RpcSubscriptions::new_for_tests(
&exit,
max_complete_transaction_status_slot,
bank_forks,
Arc::new(RwLock::new(BlockCommitmentCache::default())),
optimistically_confirmed_bank,


@@ -1,6 +1,6 @@
[package]
name = "solana-client"
version = "1.9.0"
version = "1.10.0"
description = "Solana Client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -10,28 +10,30 @@ license = "Apache-2.0"
edition = "2021"
[dependencies]
async-trait = "0.1.52"
base64 = "0.13.0"
bincode = "1.3.3"
bs58 = "0.4.0"
clap = "2.33.0"
crossbeam-channel = "0.5"
indicatif = "0.16.2"
jsonrpc-core = "18.0.0"
log = "0.4.14"
rayon = "1.5.1"
reqwest = { version = "0.11.6", default-features = false, features = ["blocking", "rustls-tls", "json"] }
semver = "1.0.4"
serde = "1.0.130"
serde = "1.0.134"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.0" }
solana-faucet = { path = "../faucet", version = "=1.9.0" }
solana-net-utils = { path = "../net-utils", version = "=1.9.0" }
solana-measure = { path = "../measure", version = "=1.9.0" }
solana-sdk = { path = "../sdk", version = "=1.9.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.0" }
solana-version = { path = "../version", version = "=1.9.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.0" }
serde_json = "1.0.78"
solana-account-decoder = { path = "../account-decoder", version = "=1.10.0" }
solana-clap-utils = { path = "../clap-utils", version = "=1.10.0" }
solana-faucet = { path = "../faucet", version = "=1.10.0" }
solana-net-utils = { path = "../net-utils", version = "=1.10.0" }
solana-measure = { path = "../measure", version = "=1.10.0" }
solana-sdk = { path = "../sdk", version = "=1.10.0" }
solana-transaction-status = { path = "../transaction-status", version = "=1.10.0" }
solana-version = { path = "../version", version = "=1.10.0" }
solana-vote-program = { path = "../programs/vote", version = "=1.10.0" }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tungstenite = { version = "0.16.0", features = ["rustls-tls-webpki-roots"] }
@@ -40,7 +42,7 @@ url = "2.2.2"
[dev-dependencies]
assert_matches = "1.5.0"
jsonrpc-http-server = "18.0.0"
solana-logger = { path = "../logger", version = "=1.9.0" }
solana-logger = { path = "../logger", version = "=1.10.0" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,4 +1,4 @@
//! The standard [`RpcSender`] over HTTP.
//! Nonblocking [`RpcSender`] over HTTP.
use {
crate::{
@@ -8,6 +8,7 @@ use {
rpc_response::RpcSimulateTransactionResult,
rpc_sender::*,
},
async_trait::async_trait,
log::*,
reqwest::{
self,
@@ -25,13 +26,13 @@
};
pub struct HttpSender {
client: Arc<reqwest::blocking::Client>,
client: Arc<reqwest::Client>,
url: String,
request_id: AtomicU64,
stats: RwLock<RpcTransportStats>,
}
/// The standard [`RpcSender`] over HTTP.
/// Nonblocking [`RpcSender`] over HTTP.
impl HttpSender {
/// Create an HTTP RPC sender.
///
@@ -45,15 +46,11 @@ impl HttpSender {
///
/// The URL is an HTTP URL, usually for port 8899.
pub fn new_with_timeout(url: String, timeout: Duration) -> Self {
// `reqwest::blocking::Client` panics if run in a tokio async context. Shuttle the
// request to a different tokio thread to avoid this
let client = Arc::new(
tokio::task::block_in_place(move || {
reqwest::blocking::Client::builder()
.timeout(timeout)
.build()
})
.expect("build rpc client"),
reqwest::Client::builder()
.timeout(timeout)
.build()
.expect("build rpc client"),
);
Self {
@@ -100,12 +97,17 @@ impl<'a> Drop for StatsUpdater<'a> {
}
}
#[async_trait]
impl RpcSender for HttpSender {
fn get_transport_stats(&self) -> RpcTransportStats {
self.stats.read().unwrap().clone()
}
fn send(&self, request: RpcRequest, params: serde_json::Value) -> Result<serde_json::Value> {
async fn send(
&self,
request: RpcRequest,
params: serde_json::Value,
) -> Result<serde_json::Value> {
let mut stats_updater = StatsUpdater::new(&self.stats);
let request_id = self.request_id.fetch_add(1, Ordering::Relaxed);
@@ -113,18 +115,15 @@ impl RpcSender for HttpSender {
let mut too_many_requests_retries = 5;
loop {
// `reqwest::blocking::Client` panics if run in a tokio async context. Shuttle the
// request to a different tokio thread to avoid this
let response = {
let client = self.client.clone();
let request_json = request_json.clone();
tokio::task::block_in_place(move || {
client
.post(&self.url)
.header(CONTENT_TYPE, "application/json")
.body(request_json)
.send()
})
client
.post(&self.url)
.header(CONTENT_TYPE, "application/json")
.body(request_json)
.send()
.await
}?;
if !response.status().is_success() {
@@ -155,8 +154,7 @@ impl RpcSender for HttpSender {
return Err(response.error_for_status().unwrap_err().into());
}
let mut json =
tokio::task::block_in_place(move || response.json::<serde_json::Value>())?;
let mut json = response.json::<serde_json::Value>().await?;
if json["error"].is_object() {
return match serde_json::from_value::<RpcErrorObject>(json["error"].clone()) {
Ok(rpc_error_object) => {
@@ -208,14 +206,16 @@ mod tests {
#[tokio::test(flavor = "multi_thread")]
async fn http_sender_on_tokio_multi_thread() {
let http_sender = HttpSender::new("http://localhost:1234".to_string());
let _ = http_sender.send(RpcRequest::GetVersion, serde_json::Value::Null);
let _ = http_sender
.send(RpcRequest::GetVersion, serde_json::Value::Null)
.await;
}
#[tokio::test(flavor = "current_thread")]
#[should_panic(expected = "can call blocking only when running on the multi-threaded runtime")]
async fn http_sender_ontokio_current_thread_should_panic() {
// RpcClient::new() will panic in the tokio current-thread runtime due to `tokio::task::block_in_place()` usage, and there
// doesn't seem to be a way to detect whether the tokio runtime is multi_thread or current_thread...
let _ = HttpSender::new("http://localhost:1234".to_string());
async fn http_sender_on_tokio_current_thread() {
let http_sender = HttpSender::new("http://localhost:1234".to_string());
let _ = http_sender
.send(RpcRequest::GetVersion, serde_json::Value::Null)
.await;
}
}


@@ -4,8 +4,9 @@ extern crate serde_derive;
pub mod blockhash_query;
pub mod client_error;
pub mod http_sender;
pub mod mock_sender;
pub(crate) mod http_sender;
pub(crate) mod mock_sender;
pub mod nonblocking;
pub mod nonce_utils;
pub mod perf_utils;
pub mod pubsub_client;
@@ -17,8 +18,15 @@ pub mod rpc_deprecated_config;
pub mod rpc_filter;
pub mod rpc_request;
pub mod rpc_response;
pub mod rpc_sender;
pub(crate) mod rpc_sender;
pub mod spinner;
pub mod thin_client;
pub mod tpu_client;
pub mod transaction_executor;
pub mod mock_sender_for_cli {
/// Magic `SIGNATURE` value used by `solana-cli` unit tests.
/// Please don't use this constant.
pub const SIGNATURE: &str =
"43yNSFC6fYTuPgTNFFhF4axw7AfWxB2BPdurme8yrsWEYwm8299xh8n6TAHjGymiSub1XtyxTNyd9GBfY2hxoBw8";
}
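With the sender internals now crate-private and the `nonblocking` module public, downstream code is expected to reach async RPC through that module rather than through `HttpSender` directly. A minimal sketch, assuming `nonblocking` exposes an async `RpcClient` that mirrors the blocking client's constructors and methods (the endpoint URL is a placeholder):
use solana_client::nonblocking::rpc_client::RpcClient;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder endpoint; any JSON-RPC URL works here.
    let rpc_client = RpcClient::new("http://localhost:8899".to_string());
    // The async HttpSender no longer uses `tokio::task::block_in_place`, so
    // awaiting here is safe on both multi-thread and current-thread runtimes.
    let slot = rpc_client.get_slot().await?;
    println!("current slot: {}", slot);
    Ok(())
}
That runtime-flavor point is exactly what the reworked `http_sender_on_tokio_current_thread` test above exercises: the old version had to `#[should_panic]`, while the new one simply awaits the request.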

Some files were not shown because too many files have changed in this diff.