Compare commits

..

135 Commits

Author SHA1 Message Date
mergify[bot]
535ee281e8 Filter old CrdsValues received via Pull Responses in Gossip (#8150) (#8277)
automerge
2020-02-14 10:21:26 -08:00
mergify[bot]
da843a7ace Fix larger than necessary allocations in streamer (#8187) (#8191)
automerge
2020-02-10 12:52:16 -08:00
Michael Vines
772cf8288c Bump version to 0.22.8 2020-02-03 21:08:54 -07:00
Michael Vines
e81a40ba55 Lock snapshot version to 0.22.6 2020-02-03 17:06:29 -07:00
Michael Vines
a52359a6be Cargo.lock 2020-02-03 17:05:35 -07:00
sakridge
2fe0853fba Fix consensus threshold when new root is created (#8093)
When a new root is created, the oldest slot is popped off
but when the logic checks for identical slots, it assumes
that any difference means a slot was popped off the front.
2020-02-03 16:47:02 -07:00
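
A minimal sketch of the comparison this fix implies; the lockout tuple shape and function name are illustrative assumptions, not the actual patch:

```rust
/// Compare two lockout vectors while tolerating the oldest slot being
/// popped off the front when a new root is created. Each entry is assumed
/// to be (slot, confirmation_count).
fn lockouts_increased(before: &[(u64, u32)], after: &[(u64, u32)]) -> bool {
    before.iter().all(|&(slot, conf_before)| {
        match after.iter().find(|&&(s, _)| s == slot) {
            // Slot still present: its confirmation count must not shrink.
            Some(&(_, conf_after)) => conf_after >= conf_before,
            // Slot absent: acceptable only if it was rooted off the front,
            // i.e. it is older than every slot that remains.
            None => after.first().map_or(true, |&(first, _)| slot < first),
        }
    })
}
```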
Sagar Dhawan
de3630f76c Filter repairman peers based on shred_version (#8069)
(cherry picked from commit b9988b62e4)
2020-02-01 08:58:26 -07:00
carllin
ff9e388843 Fix stale gossip entrypoint (#8053)
(cherry picked from commit fd207b6907)
2020-01-31 00:34:29 -07:00
Michael Vines
67a7995f04 Bump version to 0.22.7 2020-01-30 15:47:22 -07:00
Michael Vines
f9d793023c Only error if --expected-shred-version was not provided 2020-01-30 13:25:25 -07:00
Michael Vines
99b2504b38 Rename rpc_peers() to all_rpc_peers() for consistency 2020-01-30 13:21:04 -07:00
Michael Vines
3f3aec29d1 Add different shred test to test_tvu_peers_and_stakes
(cherry picked from commit 0c55b37976)
2020-01-30 11:28:18 -07:00
Justin Starry
7be8124b9e Ignore slow archiver tests (#8032)
automerge

(cherry picked from commit 400412d76c)
2020-01-30 09:38:53 -07:00
Sagar Dhawan
81259daa3f Add shred version filters to Crds Accessors (#8027)
* Add shred version filters to Crds Accessors

* Adopt entrypoint shred_version if one isn't provided

(cherry picked from commit 64c42e28dc)
2020-01-30 08:59:00 -07:00
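
An illustrative sketch of the kind of filter these accessors gained; the ContactInfo shape and function name here are assumptions, not the real Crds API:

```rust
struct ContactInfo {
    shred_version: u16,
    // ... other gossip fields elided
}

/// Keep only peers whose shred version matches ours. A shred_version of 0 is
/// treated as "unknown" here (e.g. a node that has not yet adopted the
/// entrypoint's shred version) and is allowed through.
fn filter_by_shred_version(peers: &[ContactInfo], mine: u16) -> Vec<&ContactInfo> {
    peers
        .iter()
        .filter(|ci| ci.shred_version == 0 || ci.shred_version == mine)
        .collect()
}
```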
Michael Vines
136fa5b561 Add leader-schedule subcommand 2020-01-29 20:08:32 -07:00
Michael Vines
63ca6118fa Add --expected-shred-version option 2020-01-29 20:08:32 -07:00
Michael Vines
850d729739 Wait for supermajority by default, add --no-wait-for-supermajority flag to override 2020-01-29 20:08:32 -07:00
Michael Vines
62f9183d17 getClusterNodes now excludes validators with a different shred version 2020-01-29 20:08:32 -07:00
Justin Starry
cfe3481ba4 Log solana-validator args on startup to aid debugging
(cherry picked from commit effe6e3ff3)
2020-01-29 09:40:18 -07:00
Michael Vines
788e9f321c Bump version to v0.22.6 2020-01-28 08:44:44 -07:00
Michael Vines
265e88e734 Fix compute_shred_version() 2020-01-27 19:05:17 -07:00
Michael Vines
e80c74d955 Drop v prefix 2020-01-27 19:05:17 -07:00
Michael Vines
d3efe2317b Remove stray key 2020-01-26 14:36:00 -07:00
Michael Vines
05a661dd88 Bump version to v0.22.5 2020-01-24 21:52:01 -07:00
Michael Vines
84090df770 Bump perf libs to v0.18.0 for CUDA 10.2 support 2020-01-24 21:38:51 -07:00
Stephen Akridge
3f7fe04124 Consensus fix, don't consider threshold check if
lockouts are not increased
2020-01-24 21:34:16 -07:00
mergify[bot]
ac4e3c2426 Add ability to hard fork at any slot (#7801) (#7970)
automerge
2020-01-24 18:57:08 -08:00
Jack May
13af049988 Install move-loader binaries (#7768)
(cherry picked from commit 5cb23c814d)
2020-01-24 18:13:03 -07:00
Michael Vines
bd07f9bdcb Move testnet.solana.com and TdS to their own GCP projects 2020-01-24 16:28:04 -07:00
mergify[bot]
82927fee20 Increase --wait-for-supermajority to wait for 75% online stake (#7957)
automerge
2020-01-23 23:03:13 -08:00
Michael Vines
57d5534bab Add create-snapshot command 2020-01-23 22:21:36 -07:00
Michael Vines
d2c15b596f Add BlockstoreProcessorResult 2020-01-23 21:03:57 -07:00
Michael Vines
5d8dc78718 Move snapshot archive generation out of the SnapshotPackagerService 2020-01-23 15:58:59 -07:00
Michael Vines
c945e80618 Type grooming 2020-01-23 15:58:59 -07:00
Michael Vines
0802793d37 Unify ledger_path arg handling with validator/ 2020-01-23 15:58:59 -07:00
Michael Vines
a5c3750a58 Pass bank_forks by reference 2020-01-23 15:58:59 -07:00
Michael Vines
dc1c5f8b1e --halt-at-slot 1 now halts at slot 1 2020-01-23 15:58:59 -07:00
Michael Vines
653bec01f0 Set BankRc slot correctly when restoring a bank snapshot 2020-01-23 15:58:59 -07:00
Michael Vines
49c94fad60 add_snapshot now returns SlotSnapshotPaths 2020-01-23 15:58:59 -07:00
Michael Vines
98fd1b3fcb Remove superfluous accounts arg 2020-01-23 15:58:59 -07:00
mergify[bot]
93301d1c81 Make run.sh not overwrite genesis if existing (#7837) (#7939)
automerge
2020-01-22 23:38:41 -08:00
mergify[bot]
5aa8ee8ede Uninteresting cleanup (#7938)
automerge
2020-01-22 21:16:25 -08:00
mergify[bot]
28f81bd0a3 Avoid unsorted recent_blockhashes for determinism (#7918) (#7936)
automerge
2020-01-22 18:52:39 -08:00
Michael Vines
1f4ae4318b Reject CI on failed mergify.io backports (#7927)
automerge

(cherry picked from commit 9bd6be779f)
2020-01-22 16:11:07 -07:00
mergify[bot]
bec1cf3145 CLI: Cleanup authority arg usage inconsistencies (#7922) (#7924)
automerge
2020-01-22 14:09:26 -08:00
Michael Vines
5b4b086ebf Add mechanism to load v0.22.3 snapshots on newer Solana versions 2020-01-22 13:19:07 -07:00
Michael Vines
0ef33b6462 don't put accounts in a weird location, use the defaults (#7921)
automerge

(cherry picked from commit f9323c5273)
2020-01-22 12:58:06 -07:00
mergify[bot]
e401bc6997 CLI: Support offline authorities (#7905) (#7920)
automerge
2020-01-22 10:57:16 -08:00
mergify[bot]
8ffd2c12a3 Add and use minimumLedgerSlot RPC API in block-production command (bp #7901) (#7903)
automerge
2020-01-21 14:07:32 -08:00
mergify[bot]
ec4134f26d Revert "Generate MAX_DATA_SHREDS_PER_FEC_BLOCK coding shreds for each FEC block (#7474)" (#7898) (#7899)
automerge
2020-01-21 12:40:42 -08:00
mergify[bot]
35e7b2f975 Remove redundant threadpools in sigverify (bp #7888) (#7890)
automerge
2020-01-20 21:31:56 -08:00
Michael Vines
3509f1158f Assume 1 or more validators 2020-01-20 19:19:29 -07:00
mergify[bot]
1ca33d1967 --limit-ledger-size now accepts an optional slot count value (#7885)
automerge
2020-01-20 14:22:37 -08:00
mergify[bot]
19474ecaae Create ledger directory if it doesn't already exist (#7878)
automerge
2020-01-20 10:41:40 -08:00
Michael Vines
e317940ebc Try running testnet.solana.com with only two validators 2020-01-20 10:23:43 -07:00
mergify[bot]
fbbfa93524 Spy just for RPC to avoid premature supermajority (#7856) (#7875)
automerge
2020-01-19 18:51:13 -08:00
mergify[bot]
c759a04fbc If a bad RPC node is selected try another one instead of aborting (#7871)
automerge
2020-01-18 10:52:15 -08:00
Michael Vines
d1d37db717 Abort if a snapshot download fails for any reason other than 404
(cherry picked from commit e28508ad56)
2020-01-18 09:35:43 -07:00
mergify[bot]
4904b6a532 CLI: Support offline and nonced stake subcommands (#7831) (#7861)
automerge
2020-01-17 13:10:38 -08:00
mergify[bot]
f80a657764 Nonce: Rename instructions with VerbNoun scheme (#7775) (#7778)
automerge
2020-01-17 10:48:33 -08:00
mergify[bot]
344c528b63 Reduce grace ticks, and ignore grace ticks for missing leaders (#7764) (#7779)
automerge
2020-01-16 19:57:41 -08:00
mergify[bot]
ee1300a671 Improve bench-tps keypair generation (#7723) (#7853)
automerge
2020-01-16 19:30:00 -08:00
mergify[bot]
6c2534a8be Add logging surrounding failure in get_slot_entries_with_shred_info() (#7846) (#7851)
automerge
2020-01-16 17:27:52 -08:00
Michael Vines
28a979c7d3 Cargo.lock 2020-01-16 16:34:33 -07:00
mergify[bot]
d071674b03 ignore prost is part of move (#7848) (#7850)
automerge
2020-01-16 15:24:05 -08:00
Michael Vines
8c5f676df0 Bump version to 0.22.4 2020-01-15 18:55:50 -07:00
Tyera Eulberg
6f098e0145 Fix Rpc inconsistencies (#7826)
* Update rpc account format: remove byte arrays

* Base58-encode pubkeys in getStoragePubkeysForSlot

* Update docs

(cherry picked from commit da165d6943)
2020-01-15 16:56:14 -07:00
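
A sketch of the response shape this change implies; the struct and field names are assumptions, with base58 standing in for the encoding the RPC layer applies:

```rust
use serde::Serialize;

/// Hypothetical RPC account view after the change: the owner pubkey is
/// serialized as a base58 string rather than a raw byte array.
#[derive(Serialize)]
struct UiAccount {
    lamports: u64,
    owner: String, // e.g. bs58::encode(owner_bytes).into_string()
    data: String,
    executable: bool,
    rent_epoch: u64,
}
```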
mergify[bot]
f90bc20a8b CLI: Plumb stake authorities throughout (#7822) (#7830)
automerge
2020-01-15 15:29:47 -08:00
mergify[bot]
60074c9d36 Remove tuple from programNotification (#7819) (#7821)
automerge
2020-01-15 12:13:12 -08:00
Dan Albert
5d9354fca7 Remove word pair from address generator seed string (#7802) (#7823)
* Remove word pair from address generator seed string
2020-01-15 14:48:21 -05:00
mergify[bot]
0ea09d75ed Add new genesis validators (#7814) (#7817)
automerge
2020-01-15 10:22:54 -08:00
mergify[bot]
f475a46df6 Prefer CUDA_HOME environment variable (#7813)
automerge
2020-01-15 08:51:35 -08:00
mergify[bot]
5681a24896 Remove tuples from JSON RPC responses (#7806) (#7811)
automerge
2020-01-15 00:32:03 -08:00
mergify[bot]
214aba6d2f Set bootstrap leader and net/ validator vote account commission to 100% (#7810)
automerge
2020-01-15 00:25:10 -08:00
mergify[bot]
fa551e5fc1 Fix cluster collapse due to no proper shifted read (#7797) (#7807)
automerge
2020-01-14 19:48:36 -08:00
sakridge
d9a5a86d10 Add hash stats information to check hashes between validators (#7780)
automerge
2020-01-14 17:55:46 -07:00
Ryo Onodera
83ad921ad6 Rename slot_hash => bank_hash in AcoountsDB (#7579)
* Rename slot_hash => bank_hash in AcoountsDB
2020-01-14 17:55:46 -07:00
mergify[bot]
5753c719bd Include shred version in gossip (#7800)
automerge
2020-01-14 14:30:10 -08:00
mergify[bot]
322e2e0c6a Improve KeypairFileNotFound error message (#7792) (#7794)
automerge
2020-01-14 13:05:20 -08:00
mergify[bot]
371fdc6495 Book: Drop since-fixed nonce known issue (#7789) (#7790)
automerge
2020-01-14 10:18:20 -08:00
mergify[bot]
d23f2b5754 Unignore advisories as affected ver. is corrected (#7730) (#7783)
automerge
2020-01-13 19:01:23 -08:00
mergify[bot]
a50a015542 Rename blocktree to blockstore (bp #7757) (#7771)
automerge
2020-01-13 16:15:22 -08:00
mergify[bot]
353cfb1980 Update getConfirmedBlock examples (#7772) (#7773)
automerge
2020-01-13 14:35:31 -08:00
mergify[bot]
79d737e760 Book: Update durable nonce proposal entry (#7694) (#7770)
automerge
2020-01-13 13:39:38 -08:00
mergify[bot]
8745034cec getConfirmedBlock: add encoding optional parameter (#7756) (#7767)
automerge
2020-01-12 22:27:09 -08:00
mergify[bot]
db979b30c4 Pick an RPC node at random to avoid getting stuck on a bad RPC node (#7763)
automerge
2020-01-12 20:24:03 -08:00
mergify[bot]
a92855c995 Manage durable nonce stored value in runtime (#7684) (#7760)
automerge
2020-01-10 17:11:47 -08:00
mergify[bot]
5b006eba57 Handle errors on replaying ledger properly (bp #7741) (#7755)
automerge
2020-01-10 15:17:54 -08:00
mergify[bot]
32a728d585 Clarify account creation error messages in CLI (bp #7719) (#7745)
automerge
2020-01-10 07:02:11 -08:00
mergify[bot]
1b3be91e3c Update http crate in bpf program to fix security vulnerability (#7735) (#7743)
automerge
2020-01-09 20:53:56 -08:00
mergify[bot]
2509002fe4 Print bank hash and hash inputs. (#7733) (#7734)
automerge
2020-01-09 17:13:31 -08:00
mergify[bot]
9c9a690d0d Correctly integrate buildkite with codecov (#7718) (#7727)
automerge
2020-01-09 13:45:27 -08:00
mergify[bot]
216cc34224 Update http crate to fix security vulnerability (bp #7725) (#7729)
automerge
2020-01-09 12:51:20 -08:00
mergify[bot]
71f1459ef9 Remove vote account from genesis validators (#7717)
automerge
2020-01-08 22:40:36 -08:00
mergify[bot]
f84bdb7d81 Fix rooted slot iterator (#7695) (#7714)
automerge
2020-01-08 13:23:55 -08:00
mergify[bot]
ed59c58a72 Account for stake held by the current node while waiting for the supermajority to join gossip (#7708)
automerge
2020-01-07 22:13:44 -08:00
mergify[bot]
de941f4074 validator: Add --wait-for-super-majority to facilitate asynchronous cluster restarts (bp #7701) (#7704)
automerge
2020-01-07 15:48:11 -08:00
mergify[bot]
b7fb050d09 Use commas to make a log message more readable (#7696)
automerge
2020-01-06 22:12:03 -08:00
Michael Vines
9ee2e768d6 Bump version to 0.22.3 2020-01-06 08:17:56 -07:00
mergify[bot]
d6d3a3c3d8 getBlockTime: Fix RootedSlotIterator lowest root (#7681) (#7687)
automerge
2020-01-05 23:24:34 -08:00
mergify[bot]
3e229b248f Update getBlockTime rpc docs (#7688) (#7689)
automerge
2020-01-05 23:16:04 -08:00
Tyera Eulberg
0470072436 Cli: fund validator-info accounts with rent-exempt lamports
(cherry picked from commit 580ca36a62)
2020-01-04 23:20:38 -07:00
Michael Vines
f74fa60c8b Revert "Add a stand-alone gossip node on the blocksteamer instance"
This reverts commit a217920561.

This commit is causing trouble when the TdS cluster is reset and
validators running an older genesis config are still present.
Occasionally an RPC URL from an older validator will be selected,
causing a new node to fail to boot.
2020-01-04 16:44:28 -07:00
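
The "Pick an RPC node at random" and "If a bad RPC node is selected try another one" commits above address this failure mode; a minimal sketch under assumed types (none of this is the reverted code):

```rust
use rand::seq::SliceRandom;

#[derive(Clone)]
struct RpcNode {
    url: String,
    genesis_hash: [u8; 32], // assumed: fetched from the candidate node
}

/// Try RPC candidates in random order and skip any whose genesis hash does
/// not match ours; a mismatch indicates a validator left over from an older
/// cluster incarnation, which must not be used for booting.
fn pick_usable_rpc(candidates: &[RpcNode], expected: &[u8; 32]) -> Option<RpcNode> {
    let mut shuffled = candidates.to_vec();
    shuffled.shuffle(&mut rand::thread_rng());
    shuffled.into_iter().find(|node| &node.genesis_hash == expected)
}
```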
Michael Vines
c189767090 Bump version to 0.22.2 2020-01-04 14:17:42 -07:00
mergify[bot]
c82c18353d Don't panic if peer_addr() fails (#7678) (#7679)
automerge
2020-01-04 10:39:22 -08:00
mergify[bot]
da58a272dd Set default vote account commission to 100% (#7677)
automerge
2020-01-04 09:52:33 -08:00
mergify[bot]
001f5fbb6b bank: Prune older epoch stakes (bp #7668) (#7676)
automerge
2020-01-04 09:32:16 -08:00
Michael Vines
63cd452ab5 Minor book fixes 2020-01-04 08:53:51 -07:00
mergify[bot]
6ee77e9754 Make validator timestamping more coincident, and increase timestamp sample range (#7673) (#7674)
automerge
2020-01-03 23:30:12 -08:00
mergify[bot]
cee22262fc Move nonce into system program (bp #7645) (#7671)
automerge
2020-01-03 18:33:40 -08:00
mergify[bot]
0d13352916 CLI: Fix default nonce authority resolution (#7657) (#7672)
automerge
2020-01-03 17:18:43 -08:00
mergify[bot]
78a9832f13 Measure heap usage while processing the ledger at validator startup (bp #7667) (#7670)
automerge
2020-01-03 15:43:11 -08:00
Michael Vines
795cf14650 Publish bpf-sdk only in Linux build
(cherry picked from commit 078e7246ac)
2020-01-02 23:22:29 -07:00
mergify[bot]
8c112e8bc4 Publish bpf-sdk releases (#7655) (#7662)
automerge
2020-01-02 21:25:59 -08:00
Michael Vines
8e6d213459 Revert "Remov dead code from TdS testnet manager config (#7414)"
This reverts commit 8920ac02f6.
2020-01-02 21:07:23 -07:00
mergify[bot]
b33df42640 net: Add a stand-alone gossip node on the blocksteamer instance (bp #7654) (#7659)
automerge
2020-01-02 17:26:40 -08:00
mergify[bot]
e0462e6933 Book - Document nonceable CLI subcommands (#7656) (#7660)
automerge
2020-01-02 17:14:08 -08:00
mergify[bot]
1f5e30a366 Add input validation for --creation-time/--lockup-date args (#7646) (#7647)
automerge
2019-12-30 22:39:51 -08:00
mergify[bot]
633eeb1586 Book: Document CLI durable nonce account management (#7595) (#7640)
automerge
2019-12-30 10:17:23 -08:00
Dan Albert
c1148a6da3 Use lamports in genesis (#7631) (#7634)
automerge
2019-12-29 10:22:28 -08:00
mergify[bot]
713e86670d Use lamports in genesis (#7631) (#7633)
automerge
2019-12-29 10:17:16 -08:00
mergify[bot]
c004c726e7 Support nonced transactions in the CLI (#7624) (#7630)
automerge
2019-12-27 13:22:06 -08:00
mergify[bot]
5ffb8631e0 Account for rent (#7626) (#7627)
automerge
2019-12-24 18:41:22 -08:00
Michael Vines
fd32a0280e Cargo.lock 2019-12-24 09:12:11 -07:00
Michael Vines
e76f202eb3 Update gitbook-cage first 2019-12-23 18:17:43 -07:00
Dan Albert
ba4558cb92 Update cargo files to 0.22.1 (#7620) 2019-12-23 19:42:33 -05:00
Dan Albert
74e5577dd4 Move cleanup to a script so it doesn't kill itself (#7603) (#7619)
automerge
2019-12-23 15:23:47 -08:00
Jack May
b878002cf5 Specify version for solana-sdk-macro to enable crate.io publishing (#7616) 2019-12-23 12:38:21 -08:00
mergify[bot]
f111250e3b Groom log messages (#7610) (#7614)
automerge
2019-12-23 10:29:15 -08:00
mergify[bot]
3d91f650db Fix key in genesis (#7585) (#7608)
automerge
2019-12-22 22:41:01 -08:00
mergify[bot]
91a88cda6a show-block-production: Rename "missed" to "skipped" as not all skipped slots are missed slots (#7599) (#7607)
(cherry picked from commit 419da18405)

Co-authored-by: Michael Vines <mvines@gmail.com>
2019-12-22 23:21:24 -07:00
mergify[bot]
2128c17ed0 Extend Stable CI job timeout to 60 minutes (#7604) (#7606)
automerge
2019-12-22 19:57:43 -08:00
Michael Vines
7b819c9b74 MISSED -> SKIPPED 2019-12-22 10:19:12 -07:00
Michael Vines
eec5c661af Remove stray SOLANA_CUDA=1 2019-12-22 10:09:26 -07:00
mergify[bot]
0398f6b87a ledger-tool: Add --all option to bounds, to display all non-empty slots (#7592) (#7598)
automerge
2019-12-20 21:30:47 -08:00
920 changed files with 51253 additions and 96363 deletions

.appveyor.yml (new file, 42 lines)

@@ -0,0 +1,42 @@
version: '{build}'
branches:
only:
- master
- /^v[0-9.]+\.[0-9.]+/
cache:
- '%USERPROFILE%\.cargo'
- '%APPVEYOR_BUILD_FOLDER%\target'
clone_folder: d:\projects\solana
build_script:
- bash ci/publish-tarball.sh
notifications:
- provider: Slack
incoming_webhook:
secure: GJsBey+F5apAtUm86MHVJ68Uqa6WN1SImcuIc4TsTZrDhA8K1QWUNw9FFQPybUWDyOcS5dly3kubnUqlGt9ux6Ad2efsfRIQYWv0tOVXKeY=
channel: ci-status
on_build_success: false
on_build_failure: true
on_build_status_changed: true
deploy:
- provider: S3
access_key_id:
secure: fTbJl6JpFebR40J7cOWZ2mXBa3kIvEiXgzxAj6L3N7A=
secret_access_key:
secure: vItsBXb2rEFLvkWtVn/Rcxu5a5+2EwC+b7GsA0waJy9hXh6XuBAD0lnHd9re3g/4
bucket: release.solana.com
region: us-west-1
set_public: true
- provider: GitHub
auth_token:
secure: 81fEmPZ0cV1wLtNuUrcmtgxKF6ROQF1+/ft5m+fHX21z6PoeCbaNo8cTyLioWBj7
draft: false
prerelease: false
on:
appveyor_repo_tag: true

@@ -7,6 +7,9 @@
   "GITHUB_TOKEN": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:Vq2dkGTOzfEpRht0BAGHFp/hDogMvXJe:tFXHg1epVt2mq9hkuc5sRHe+KAnVREi/p8S+IZu67XRyzdiA/nGak1k860FXYuuzuaE0QWekaEc=]",
   "INFLUX_DATABASE": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:5KI9WBkXx3R/W4m256mU5MJOE7N8aAT9:Cb8QFELZ9I60t5zhJ9h55Kcs]",
   "INFLUX_PASSWORD": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:hQRMpLCrav+OYkNphkeM4hagdVoZv5Iw:AUO76rr6+gF1OLJA8ZLSG8wHKXgYCPNk6gRCV8rBhZBJ4KwDaxpvOhMl7bxxXG6jol7v4aRa/Lk=]",
-  "INFLUX_USERNAME": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:R7BNmQjfeqoGDAFTJu9bYTGHol2NgnYN:Q2tOT/EBcFvhFk+DKLKmVU7tLCpVC3Ui]"
+  "INFLUX_USERNAME": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:R7BNmQjfeqoGDAFTJu9bYTGHol2NgnYN:Q2tOT/EBcFvhFk+DKLKmVU7tLCpVC3Ui]",
+  "SOLANA_INSTALL_UPDATE_MANIFEST_KEYPAIR_x86_64_unknown_linux_gnu": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:Egc2dMrHDU0NcZ71LwGv/V66shUhwYUE:04VoIb8CKy7KYhQ5W4cEW9SDKZltxWBL5Hob106lMBbUOD/yUvKYcG3Ep8JfTMwO3K8zowW5HpU/IdGoilX0XWLiJJ6t+p05WWK0TA16nOEtwrEG+UK8wm3sN+xCO20i4jDhpNpgg3FYFHT5rKTHW8+zaBTNUX/SFxkN67Lm+92IM28CXYE43SU1WV6H99hGFFVpTK5JVM3JuYU1ex/dHRE+xCzTr4MYUB/F+nGoNFW8HUDV/y0e1jxT9to3x0SmnytEEuk+5RUzFuEt9cKNFeNml3fOCi4qL+sfj/Y5pjH9xDiUxsvH/8NL35jbLP244aFHgWcp]",
+  "SOLANA_INSTALL_UPDATE_MANIFEST_KEYPAIR_x86_64_apple_darwin": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:NeOxSoWCvXB9AL4H6OK26l/7bmsKd/oz:Ijfoxtvk2CHlN1ZXHup3Gg/914kbbAkEGWJfvozA8UIe+aUzUObMyTrKkVOeNAH8Q8YH9tNzk7RRnrTcpnzeCCBLlWcVEeruMxHox3mPRzmSeDLxtbzCl9VePlRO3T7jg90K5hW+ZAkd5J/WJNzpAcmr93ts/of3MbvGHSujId/efCTzJEcP6JInnBb8Vrj7TlgKbzUlnqpq1+NjYPSXN3maKa9pKeo2JWxZlGBMoy6QWUUY5GbYEylw9smwh1LJcHZjlaZNMuOl4gNKtaSr38IXQkAXaRUJDPAmPras00YObKzXU8RkTrP4EoP/jx5LPR7f]",
+  "SOLANA_INSTALL_UPDATE_MANIFEST_KEYPAIR_x86_64_pc_windows_msvc": "EJ[1:yGpTmjdbyjW2kjgIHkFoJv7Ue7EbUvUbqHyw6anGgWg=:7t+56twjW+jR7fpFNNeRFLPd7E4lbmyN:JuviDpkQrfVcNUGRGsa2e/UhvH6tTYyk1s4cHHE5xZH1NByL7Kpqx36VG/+o1AUGEeSQdsBnKgzYdMoFYbO8o50DoRPc86QIEVXCupD6J9avxLFtQgOWgJp+/mCdUVXlqXiFs/vQgS/L4psrcKdF6WHd77BeUr6ll8DjH+9m5FC9Rcai2pXno6VbPpunHQ0oUdYzhFR64+LiRacBaefQ9igZ+nSEWDLqbaZSyfm9viWkijoVFTq8gAgdXXEh7g0QdxVE5T6bPristJhT6jWBhWunPUCDNFFErWIsbRGctepl4pbCWqh2hNTw9btSgVfeY6uGCOsdy9E=]"
   }
 }

@@ -1,18 +1,5 @@
-root: ./docs/src
+root: ./book/src

 structure:
   readme: introduction.md
   summary: SUMMARY.md
-
-redirects:
-  wallet: ./wallet-guide/README.md
-  wallet/app-wallets: ./wallet-guide/apps.md
-  wallet/app-wallets/trust-wallet: ./wallet-guide/trust-wallet.md
-  wallet/app-wallets/ledger-live: ./wallet-guide/ledger-live.md
-  wallet/cli-wallets: ./wallet-guide/cli.md
-  wallet/cli-wallets/paper-wallet: ./paper-wallet/README.md
-  wallet/cli-wallets/paper-wallet/paper-wallet-usage: ./paper-wallet/paper-wallet-usage.md
-  wallet/cli-wallets/remote-wallet: ./hardware-wallets/README.md
-  wallet/cli-wallets/remote-wallet/ledger: ./hardware-wallets/ledger.md
-  wallet/cli-wallets/file-system-wallet: ./file-system-wallet/README.md
-  wallet/support: ./wallet-guide/support.md

.gitignore (vendored, 7 changed lines)

@@ -1,7 +1,6 @@
-/docs/html/
-/docs/src/tests.ok
-/docs/src/cli/usage.md
-/docs/src/.gitbook/assets/*.svg
+/book/html/
+/book/src/tests.ok
+/book/src/.gitbook/assets/*.svg
 /farf/
 /solana-release/
 /solana-release.tar.bz2

@@ -19,27 +19,27 @@ pull_request_rules:
     label:
       add:
         - automerge
-  - name: v1.0 backport
+  - name: v0.21 backport
     conditions:
-      - label=v1.0
+      - base=master
+      - label=v0.21
     actions:
       backport:
-        ignore_conflicts: true
         branches:
-          - v1.0
-  - name: v1.1 backport
+          - v0.21
+  - name: v0.22 backport
     conditions:
-      - label=v1.1
+      - base=master
+      - label=v0.22
     actions:
       backport:
-        ignore_conflicts: true
         branches:
-          - v1.1
-  - name: v1.2 backport
+          - v0.22
+  - name: v0.23 backport
     conditions:
-      - label=v1.2
+      - base=master
+      - label=v0.23
     actions:
       backport:
-        ignore_conflicts: true
         branches:
-          - v1.2
+          - v0.23

@@ -1,6 +1,5 @@
 os:
   - osx
-  - windows
 language: rust
 rust:

@@ -224,20 +224,21 @@ Inventing new terms is allowed, but should only be done when the term is widely
 used and understood. Avoid introducing new 3-letter terms, which can be
 confused with 3-letter acronyms.

-[Terms currently in use](docs/src/terminology.md)
+[Terms currently in use](book/src/terminology.md)

 ## Design Proposals

-Solana's architecture is described by docs generated from markdown files in
-the `docs/src/` directory, maintained by an *editor* (currently @garious). To
-add a design proposal, you'll need to include it in the
-[Accepted Design Proposals](https://docs.solana.com/proposals)
-section of the Solana docs. Here's the full process:
+Solana's architecture is described by a book generated from markdown files in
+the `book/src/` directory, maintained by an *editor* (currently @garious). To
+add a design proposal, you'll need to at least propose a change the content
+under the [Accepted Design
+Proposals](https://docs.solana.com/book/v/master/proposals) chapter. Here's
+the full process:

 1. Propose a design by creating a PR that adds a markdown document to the
-   `docs/src/proposals` directory and references it from the [table of
-   contents](docs/src/SUMMARY.md). Add any relevant *maintainers* to the PR
+   directory `book/src/` and references it from the [table of
+   contents](book/src/SUMMARY.md). Add any relevant *maintainers* to the PR
    review.
 2. The PR being merged indicates your proposed change was accepted and that the
    maintainers support your plan of attack.

Cargo.lock (generated, 6920 changed lines; diff suppressed because it is too large)

@@ -3,13 +3,10 @@ members = [
     "bench-exchange",
     "bench-streamer",
     "bench-tps",
-    "accounts-bench",
     "banking-bench",
-    "cli-config",
+    "chacha-sys",
     "client",
     "core",
-    "dos",
-    "download-utils",
     "faucet",
     "perf",
     "validator",
@@ -24,12 +21,9 @@ members = [
     "logger",
     "log-analyzer",
     "merkle-tree",
-    "stake-o-matic",
-    "streamer",
     "measure",
     "metrics",
     "net-shaper",
-    "notifier",
     "programs/bpf_loader",
     "programs/budget",
     "programs/btc_spv",
@@ -40,21 +34,18 @@ members = [
     "programs/noop",
     "programs/ownable",
     "programs/stake",
+    "programs/storage",
     "programs/vest",
     "programs/vote",
-    "remote-wallet",
-    "ramp-tps",
+    "archiver",
     "runtime",
     "sdk",
+    "sdk-c",
     "scripts",
-    "stake-accounts",
-    "stake-monitor",
     "sys-tuner",
-    "tokens",
-    "transaction-status",
     "upload-perf",
     "net-utils",
-    "version",
+    "fixed-buf",
     "vote-signer",
     "cli",
     "rayon-threadlimit",

README.md (187 changed lines)

@@ -1,17 +1,76 @@
-<p align="center">
-  <a href="https://solana.com">
-    <img alt="Solana" src="https://i.imgur.com/OMnvVEz.png" width="250" />
-  </a>
-</p>
 [![Solana crate](https://img.shields.io/crates/v/solana-core.svg)](https://crates.io/crates/solana-core)
 [![Solana documentation](https://docs.rs/solana-core/badge.svg)](https://docs.rs/solana-core)
 [![Build status](https://badge.buildkite.com/8cc350de251d61483db98bdfc895b9ea0ac8ffa4a32ee850ed.svg?branch=master)](https://buildkite.com/solana-labs/solana/builds?branch=master)
 [![codecov](https://codecov.io/gh/solana-labs/solana/branch/master/graph/badge.svg)](https://codecov.io/gh/solana-labs/solana)
-# Building
-## **1. Install rustc, cargo and rustfmt.**
+Blockchain Rebuilt for Scale
+===
+Solana&trade; is a new blockchain architecture built from the ground up for scale. The architecture supports
+up to 710 thousand transactions per second on a gigabit network.
+Disclaimer
+===
+All claims, content, designs, algorithms, estimates, roadmaps, specifications, and performance measurements described in this project are done with the author's best effort. It is up to the reader to check and validate their accuracy and truthfulness. Furthermore nothing in this project constitutes a solicitation for investment.
+Introduction
+===
+It's possible for a centralized database to process 710,000 transactions per second on a standard gigabit network if the transactions are, on average, no more than 176 bytes. A centralized database can also replicate itself and maintain high availability without significantly compromising that transaction rate using the distributed system technique known as Optimistic Concurrency Control [\[H.T.Kung, J.T.Robinson (1981)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.4735). At Solana, we're demonstrating that these same theoretical limits apply just as well to blockchain on an adversarial network. The key ingredient? Finding a way to share time when nodes can't trust one-another. Once nodes can trust time, suddenly ~40 years of distributed systems research becomes applicable to blockchain!
+> Perhaps the most striking difference between algorithms obtained by our method and ones based upon timeout is that using timeout produces a traditional distributed algorithm in which the processes operate asynchronously, while our method produces a globally synchronous one in which every process does the same thing at (approximately) the same time. Our method seems to contradict the whole purpose of distributed processing, which is to permit different processes to operate independently and perform different functions. However, if a distributed system is really a single system, then the processes must be synchronized in some way. Conceptually, the easiest way to synchronize processes is to get them all to do the same thing at the same time. Therefore, our method is used to implement a kernel that performs the necessary synchronization--for example, making sure that two different processes do not try to modify a file at the same time. Processes might spend only a small fraction of their time executing the synchronizing kernel; the rest of the time, they can operate independently--e.g., accessing different files. This is an approach we have advocated even when fault-tolerance is not required. The method's basic simplicity makes it easier to understand the precise properties of a system, which is crucial if one is to know just how fault-tolerant the system is. [\[L.Lamport (1984)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.1078)
+Furthermore, and much to our surprise, it can be implemented using a mechanism that has existed in Bitcoin since day one. The Bitcoin feature is called nLocktime and it can be used to postdate transactions using block height instead of a timestamp. As a Bitcoin client, you'd use block height instead of a timestamp if you don't trust the network. Block height turns out to be an instance of what's being called a Verifiable Delay Function in cryptography circles. It's a cryptographically secure way to say time has passed. In Solana, we use a far more granular verifiable delay function, a SHA 256 hash chain, to checkpoint the ledger and coordinate consensus. With it, we implement Optimistic Concurrency Control and are now well en route towards that theoretical limit of 710,000 transactions per second.
+Architecture
+===
+Before you jump into the code, review the online book [Solana: Blockchain Rebuilt for Scale](https://docs.solana.com/book/).
+(The _latest_ development version of the online book is also [available here](https://docs.solana.com/book/v/master/).)
+Release Binaries
+===
+Official release binaries are available at [Github Releases](https://github.com/solana-labs/solana/releases).
+Additionally we provide pre-release binaries for the latest code on the edge and
+beta channels. Note that these pre-release binaries may be less stable than an
+official release.
+### Edge channel
+#### Linux (x86_64-unknown-linux-gnu)
+* [solana.tar.bz2](http://release.solana.com/edge/solana-release-x86_64-unknown-linux-gnu.tar.bz2)
+* [solana-install-init](http://release.solana.com/edge/solana-install-init-x86_64-unknown-linux-gnu) as a stand-alone executable
+#### mac OS (x86_64-apple-darwin)
+* [solana.tar.bz2](http://release.solana.com/edge/solana-release-x86_64-apple-darwin.tar.bz2)
+* [solana-install-init](http://release.solana.com/edge/solana-install-init-x86_64-apple-darwin) as a stand-alone executable
+#### Windows (x86_64-pc-windows-msvc)
+* [solana.tar.bz2](http://release.solana.com/edge/solana-release-x86_64-pc-windows-msvc.tar.bz2)
+* [solana-install-init.exe](http://release.solana.com/edge/solana-install-init-x86_64-pc-windows-msvc.exe) as a stand-alone executable
+#### All platforms
+* [solana-metrics.tar.bz2](http://release.solana.com.s3.amazonaws.com/edge/solana-metrics.tar.bz2)
+### Beta channel
+#### Linux (x86_64-unknown-linux-gnu)
+* [solana.tar.bz2](http://release.solana.com/beta/solana-release-x86_64-unknown-linux-gnu.tar.bz2)
+* [solana-install-init](http://release.solana.com/beta/solana-install-init-x86_64-unknown-linux-gnu) as a stand-alone executable
+#### mac OS (x86_64-apple-darwin)
+* [solana.tar.bz2](http://release.solana.com/beta/solana-release-x86_64-apple-darwin.tar.bz2)
+* [solana-install-init](http://release.solana.com/beta/solana-install-init-x86_64-apple-darwin) as a stand-alone executable
+#### Windows (x86_64-pc-windows-msvc)
+* [solana.tar.bz2](http://release.solana.com/beta/solana-release-x86_64-pc-windows-msvc.tar.bz2)
+* [solana-install-init.exe](http://release.solana.com/beta/solana-install-init-x86_64-pc-windows-msvc.exe) as a stand-alone executable
+#### All platforms
+* [solana-metrics.tar.bz2](http://release.solana.com.s3.amazonaws.com/beta/solana-metrics.tar.bz2)
+Developing
+===
+Building
+---
+Install rustc, cargo and rustfmt:
 ```bash
 $ curl https://sh.rustup.rs -sSf | sh
@@ -28,43 +87,118 @@ $ rustup update
 On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc. On Ubuntu:
 ```bash
-$ sudo apt-get update
-$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang
+$ sudo apt-get install libssl-dev pkg-config zlib1g-dev llvm clang
 ```
-## **2. Download the source code.**
+Download the source code:
 ```bash
 $ git clone https://github.com/solana-labs/solana.git
 $ cd solana
 ```
-## **3. Build.**
+Build
 ```bash
 $ cargo build
 ```
-## **4. Run a minimal local cluster.**
+Then to run a minimal local cluster
 ```bash
 $ ./run.sh
 ```
-# Testing
-**Run the test suite:**
+Testing
+---
+Run the test suite:
 ```bash
 $ cargo test
 ```
-### Starting a local testnet
-Start your own testnet locally, instructions are in the [online docs](https://docs.solana.com/bench-tps).
-### Accessing the remote testnet
-* `testnet` - public stable testnet accessible via devnet.solana.com. Runs 24/7
-# Benchmarking
+Local Testnet
+---
+Start your own testnet locally, instructions are in the book [Solana: Blockchain Rebuild for Scale: Getting Started](https://docs.solana.com/book/getting-started).
+Remote Testnets
+---
+We maintain several testnets:
+* `testnet` - public stable testnet accessible via testnet.solana.com. Runs 24/7
+* `testnet-beta` - public beta channel testnet accessible via beta.testnet.solana.com. Runs 24/7
+* `testnet-edge` - public edge channel testnet accessible via edge.testnet.solana.com. Runs 24/7
+## Deploy process
+They are deployed with the `ci/testnet-manager.sh` script through a list of [scheduled
+buildkite jobs](https://buildkite.com/solana-labs/testnet-management/settings/schedules).
+Each testnet can be manually manipulated from buildkite as well.
+## How do I reset the testnet?
+Manually trigger the [testnet-management](https://buildkite.com/solana-labs/testnet-management) pipeline
+and when prompted select the desired testnet
+## How can I scale the tx generation rate?
+Increase the TX rate by increasing the number of cores on the client machine which is running
+`bench-tps` or run multiple clients. Decrease by lowering cores or using the rayon env
+variable `RAYON_NUM_THREADS=<xx>`
+## How can I test a change on the testnet?
+Currently, a merged PR is the only way to test a change on the testnet. But you
+can run your own testnet using the scripts in the `net/` directory.
+## Adjusting the number of clients or validators on the testnet
+Edit `ci/testnet-manager.sh`
+## Metrics Server Maintenance
+Sometimes the dashboard becomes unresponsive. This happens due to glitch in the metrics server.
+The current solution is to reset the metrics server. Use the following steps.
+1. The server is hosted in a GCP VM instance. Check if the VM instance is down by trying to SSH
+   into it from the GCP console. The name of the VM is ```metrics-solana-com```.
+2. If the VM is inaccessible, reset it from the GCP console.
+3. Once VM is up (or, was already up), the metrics services can be restarted from build automation.
+   1. Navigate to https://buildkite.com/solana-labs/metrics-dot-solana-dot-com in your web browser
+   2. Click on ```New Build```
+   3. This will show a pop up dialog. Click on ```options``` drop down.
+   4. Type in ```FORCE_START=true``` in ```Environment Variables``` text box.
+   5. Click ```Create Build```
+   6. This will restart the metrics services, and the dashboards should be accessible afterwards.
+## Debugging Testnet
+Testnet may exhibit different symptoms of failures. Primary statistics to check are
+1. Rise in Confirmation Time
+2. Nodes are not voting
+3. Panics, and OOM notifications
+Check the following if there are any signs of failure.
+1. Did testnet deployment fail?
+   1. View buildkite logs for the last deployment: https://buildkite.com/solana-labs/testnet-management
+   2. Use the relevant branch
+   3. If the deployment failed, look at the build logs. The build artifacts for each remote node is uploaded.
+      It's a good first step to triage from these logs.
+2. You may have to log into remote node if the deployment succeeded, but something failed during runtime.
+   1. Get the private key for the testnet deployment from ```metrics-solana-com``` GCP instance.
+   2. SSH into ```metrics-solana-com``` using GCP console and do the following.
+   ```bash
+   sudo bash
+   cd ~buildkite-agent/.ssh
+   ls
+   ```
+   3. Copy the relevant private key to your local machine
+   4. Find the public IP address of the AWS instance for the remote node using AWS console
+   5. ```ssh -i <private key file> ubuntu@<ip address of remote node>```
+   6. The logs are in ```~solana\solana``` folder
+Benchmarking
+---
 First install the nightly build of rustc. `cargo bench` requires use of the
 unstable features only available in the nightly build.
@@ -79,11 +213,13 @@ Run the benchmarks:
 $ cargo +nightly bench
 ```
-# Release Process
+Release Process
+---
 The release process for this project is described [here](RELEASE.md).
-# Code coverage
+Code coverage
+---
 To generate code coverage statistics:
@@ -92,6 +228,7 @@ $ scripts/coverage.sh
 $ open target/cov/lcov-local/index.html
 ```
 Why coverage? While most see coverage as a code quality metric, we see it primarily as a developer
 productivity metric. When a developer makes a change to the codebase, presumably it's a *solution* to
 some problem. Our unit-test suite is how we encode the set of *problems* the codebase solves. Running
@@ -103,7 +240,3 @@ problem is solved by this code?" On the other hand, if a test does fail and you
 better way to solve the same problem, a Pull Request with your solution would most certainly be
 welcome! Likewise, if rewriting a test can better communicate what code it's protecting, please
 send us that patch!
-# Disclaimer
-All claims, content, designs, algorithms, estimates, roadmaps, specifications, and performance measurements described in this project are done with the author's best effort. It is up to the reader to check and validate their accuracy and truthfulness. Furthermore nothing in this project constitutes a solicitation for investment.

@@ -138,11 +138,30 @@ There are three release channels that map to branches as follows:
 ### Update documentation

 TODO: Documentation update procedure is WIP as we move to gitbook

-Document the new recommended version by updating `docs/src/running-archiver.md` and `docs/src/validator-testnet.md` on the release (beta) branch to point at the `solana-install` for the upcoming release version.
+Document the new recommended version by updating `book/src/running-archiver.md` and `book/src/validator-testnet.md` on the release (beta) branch to point at the `solana-install` for the upcoming release version.

-### Update software on devnet.solana.com
-The testnet running on devnet.solana.com is set to use a fixed release tag
+#### Publish updated Book
+We maintain three copies of the "book" as official documentation:
+1) "Book" is the documentation for the latest official release. This should get manually updated whenever a new release is made. It is published here:
+https://solana-labs.github.io/book/
+2) "Book-edge" tracks the tip of the master branch and updates automatically.
+https://solana-labs.github.io/book-edge/
+3) "Book-beta" tracks the tip of the beta branch and updates automatically.
+https://solana-labs.github.io/book-beta/
+To manually trigger an update of the "Book", create a new job of the manual-update-book pipeline.
+Set the tag of the latest release as the PUBLISH_BOOK_TAG environment variable.
+```bash
+PUBLISH_BOOK_TAG=v0.16.6
+```
+https://buildkite.com/solana-labs/manual-update-book
+
+### Update software on testnet.solana.com
+The testnet running on testnet.solana.com is set to use a fixed release tag
 which is set in the Buildkite testnet-management pipeline.
 This tag needs to be updated and the testnet restarted after a new release
 tag is created.
@@ -182,4 +201,4 @@ TESTNET_OP=create-and-start
 ### Alert the community

 Notify Discord users on #validator-support that a new release for
-devnet.solana.com is available
+testnet.solana.com is available

@@ -1,22 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-accounts-bench"
version = "1.2.1"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
log = "0.4.6"
rayon = "1.3.0"
solana-logger = { path = "../logger", version = "1.2.1" }
solana-runtime = { path = "../runtime", version = "1.2.1" }
solana-measure = { path = "../measure", version = "1.2.1" }
solana-sdk = { path = "../sdk", version = "1.2.1" }
rand = "0.7.0"
clap = "2.33.1"
crossbeam-channel = "0.4"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

@@ -1,105 +0,0 @@
use clap::{value_t, App, Arg};
use rayon::prelude::*;
use solana_measure::measure::Measure;
use solana_runtime::{
accounts::{create_test_accounts, update_accounts, Accounts},
accounts_index::Ancestors,
};
use solana_sdk::pubkey::Pubkey;
use std::fs;
use std::path::PathBuf;
fn main() {
solana_logger::setup();
let matches = App::new("crate")
.about("about")
.version("version")
.arg(
Arg::with_name("num_slots")
.long("num_slots")
.takes_value(true)
.value_name("SLOTS")
.help("Number of slots to store to."),
)
.arg(
Arg::with_name("num_accounts")
.long("num_accounts")
.takes_value(true)
.value_name("NUM_ACCOUNTS")
.help("Total number of accounts"),
)
.arg(
Arg::with_name("iterations")
.long("iterations")
.takes_value(true)
.value_name("ITERATIONS")
.help("Number of bench iterations"),
)
.arg(
Arg::with_name("clean")
.long("clean")
.takes_value(false)
.help("Run clean"),
)
.get_matches();
let num_slots = value_t!(matches, "num_slots", usize).unwrap_or(4);
let num_accounts = value_t!(matches, "num_accounts", usize).unwrap_or(10_000);
let iterations = value_t!(matches, "iterations", usize).unwrap_or(20);
let clean = matches.is_present("clean");
println!("clean: {:?}", clean);
let path = PathBuf::from("farf/accounts-bench");
if fs::remove_dir_all(path.clone()).is_err() {
println!("Warning: Couldn't remove {:?}", path);
}
let accounts = Accounts::new(vec![path]);
println!("Creating {} accounts", num_accounts);
let mut create_time = Measure::start("create accounts");
let pubkeys: Vec<_> = (0..num_slots)
.into_par_iter()
.map(|slot| {
let mut pubkeys: Vec<Pubkey> = vec![];
create_test_accounts(
&accounts,
&mut pubkeys,
num_accounts / num_slots,
slot as u64,
);
pubkeys
})
.collect();
let pubkeys: Vec<_> = pubkeys.into_iter().flatten().collect();
create_time.stop();
println!(
"created {} accounts in {} slots {}",
(num_accounts / num_slots) * num_slots,
num_slots,
create_time
);
let mut ancestors: Ancestors = vec![(0, 0)].into_iter().collect();
for i in 1..num_slots {
ancestors.insert(i as u64, i - 1);
accounts.add_root(i as u64);
}
for x in 0..iterations {
if clean {
let mut time = Measure::start("clean");
accounts.accounts_db.clean_accounts();
time.stop();
println!("{}", time);
for slot in 0..num_slots {
update_accounts(&accounts, &pubkeys, ((x + 1) * num_slots + slot) as u64);
accounts.add_root((x * num_slots + slot) as u64);
}
} else {
let mut pubkeys: Vec<Pubkey> = vec![];
let mut time = Measure::start("hash");
let hash = accounts.accounts_db.update_accounts_hash(0, &ancestors);
time.stop();
println!("hash: {} {}", hash, time);
create_test_accounts(&accounts, &mut pubkeys, 1, 0);
}
}
}

archiver/Cargo.toml (new file, 19 lines)

@@ -0,0 +1,19 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-archiver"
version = "0.22.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
clap = "2.33.0"
console = "0.9.1"
solana-clap-utils = { path = "../clap-utils", version = "0.22.8" }
solana-core = { path = "../core", version = "0.22.8" }
solana-logger = { path = "../logger", version = "0.22.8" }
solana-metrics = { path = "../metrics", version = "0.22.8" }
solana-net-utils = { path = "../net-utils", version = "0.22.8" }
solana-sdk = { path = "../sdk", version = "0.22.8" }

archiver/src/main.rs (new file, 147 lines)

@@ -0,0 +1,147 @@
use clap::{crate_description, crate_name, App, Arg};
use console::style;
use solana_clap_utils::{
input_validators::is_keypair,
keypair::{
self, keypair_input, KeypairWithSource, ASK_SEED_PHRASE_ARG,
SKIP_SEED_PHRASE_VALIDATION_ARG,
},
};
use solana_core::{
archiver::Archiver,
cluster_info::{Node, VALIDATOR_PORT_RANGE},
contact_info::ContactInfo,
};
use solana_sdk::{commitment_config::CommitmentConfig, signature::KeypairUtil};
use std::{net::SocketAddr, path::PathBuf, process::exit, sync::Arc};
fn main() {
solana_logger::setup();
let matches = App::new(crate_name!())
.about(crate_description!())
.version(solana_clap_utils::version!())
.arg(
Arg::with_name("identity_keypair")
.short("i")
.long("identity-keypair")
.value_name("PATH")
.takes_value(true)
.validator(is_keypair)
.help("File containing an identity (keypair)"),
)
.arg(
Arg::with_name("entrypoint")
.short("n")
.long("entrypoint")
.value_name("HOST:PORT")
.takes_value(true)
.required(true)
.validator(solana_net_utils::is_host_port)
.help("Rendezvous with the cluster at this entry point"),
)
.arg(
Arg::with_name("ledger")
.short("l")
.long("ledger")
.value_name("DIR")
.takes_value(true)
.required(true)
.help("use DIR as persistent ledger location"),
)
.arg(
Arg::with_name("storage_keypair")
.short("s")
.long("storage-keypair")
.value_name("PATH")
.takes_value(true)
.validator(is_keypair)
.help("File containing the storage account keypair"),
)
.arg(
Arg::with_name(ASK_SEED_PHRASE_ARG.name)
.long(ASK_SEED_PHRASE_ARG.long)
.value_name("KEYPAIR NAME")
.multiple(true)
.takes_value(true)
.possible_values(&["identity-keypair", "storage-keypair"])
.help(ASK_SEED_PHRASE_ARG.help),
)
.arg(
Arg::with_name(SKIP_SEED_PHRASE_VALIDATION_ARG.name)
.long(SKIP_SEED_PHRASE_VALIDATION_ARG.long)
.requires(ASK_SEED_PHRASE_ARG.name)
.help(SKIP_SEED_PHRASE_VALIDATION_ARG.help),
)
.get_matches();
let ledger_path = PathBuf::from(matches.value_of("ledger").unwrap());
let identity_keypair = keypair_input(&matches, "identity_keypair")
.unwrap_or_else(|err| {
eprintln!("Identity keypair input failed: {}", err);
exit(1);
})
.keypair;
let KeypairWithSource {
keypair: storage_keypair,
source: storage_keypair_source,
} = keypair_input(&matches, "storage_keypair").unwrap_or_else(|err| {
eprintln!("Storage keypair input failed: {}", err);
exit(1);
});
if storage_keypair_source == keypair::Source::Generated {
clap::Error::with_description(
"The `storage-keypair` argument was not found",
clap::ErrorKind::ArgumentNotFound,
)
.exit();
}
let entrypoint_addr = matches
.value_of("entrypoint")
.map(|entrypoint| {
solana_net_utils::parse_host_port(entrypoint)
.expect("failed to parse entrypoint address")
})
.unwrap();
let gossip_addr = {
let ip = solana_net_utils::get_public_ip_addr(&entrypoint_addr).unwrap();
let mut addr = SocketAddr::new(ip, 0);
addr.set_ip(solana_net_utils::get_public_ip_addr(&entrypoint_addr).unwrap());
addr
};
let node = Node::new_archiver_with_external_ip(
&identity_keypair.pubkey(),
&gossip_addr,
VALIDATOR_PORT_RANGE,
);
println!(
"{} version {} (branch={}, commit={})",
style(crate_name!()).bold(),
solana_clap_utils::version!(),
option_env!("CI_BRANCH").unwrap_or("unknown"),
option_env!("CI_COMMIT").unwrap_or("unknown")
);
solana_metrics::set_host_id(identity_keypair.pubkey().to_string());
println!(
"replicating the data with identity_keypair={:?} gossip_addr={:?}",
identity_keypair.pubkey(),
gossip_addr
);
let entrypoint_info = ContactInfo::new_gossip_entry_point(&entrypoint_addr);
let archiver = Archiver::new(
&ledger_path,
node,
entrypoint_info,
Arc::new(identity_keypair),
Arc::new(storage_keypair),
CommitmentConfig::recent(),
)
.unwrap();
archiver.join();
}

@@ -2,27 +2,19 @@
 authors = ["Solana Maintainers <maintainers@solana.com>"]
 edition = "2018"
 name = "solana-banking-bench"
-version = "1.2.1"
+version = "0.22.8"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"

 [dependencies]
-clap = "2.33.1"
-crossbeam-channel = "0.4"
 log = "0.4.6"
-rand = "0.7.0"
-rayon = "1.3.0"
-solana-core = { path = "../core", version = "1.2.1" }
-solana-clap-utils = { path = "../clap-utils", version = "1.2.1" }
-solana-streamer = { path = "../streamer", version = "1.2.1" }
-solana-perf = { path = "../perf", version = "1.2.1" }
-solana-ledger = { path = "../ledger", version = "1.2.1" }
-solana-logger = { path = "../logger", version = "1.2.1" }
-solana-runtime = { path = "../runtime", version = "1.2.1" }
-solana-measure = { path = "../measure", version = "1.2.1" }
-solana-sdk = { path = "../sdk", version = "1.2.1" }
-solana-version = { path = "../version", version = "1.2.1" }
-
-[package.metadata.docs.rs]
-targets = ["x86_64-unknown-linux-gnu"]
+rayon = "1.2.0"
+solana-core = { path = "../core", version = "0.22.8" }
+solana-ledger = { path = "../ledger", version = "0.22.8" }
+solana-logger = { path = "../logger", version = "0.22.8" }
+solana-runtime = { path = "../runtime", version = "0.22.8" }
+solana-measure = { path = "../measure", version = "0.22.8" }
+solana-sdk = { path = "../sdk", version = "0.22.8" }
+rand = "0.6.5"
+crossbeam-channel = "0.3"

@@ -1,38 +1,30 @@
-use clap::{crate_description, crate_name, value_t, App, Arg};
 use crossbeam_channel::unbounded;
 use log::*;
 use rand::{thread_rng, Rng};
 use rayon::prelude::*;
-use solana_core::{
-    banking_stage::{create_test_recorder, BankingStage},
-    cluster_info::ClusterInfo,
-    cluster_info::Node,
-    poh_recorder::PohRecorder,
-    poh_recorder::WorkingBankEntry,
-};
-use solana_ledger::{
-    bank_forks::BankForks,
-    blockstore::Blockstore,
-    genesis_utils::{create_genesis_config, GenesisConfigInfo},
-    get_tmp_ledger_path,
-};
+use solana_core::banking_stage::{create_test_recorder, BankingStage};
+use solana_core::cluster_info::ClusterInfo;
+use solana_core::cluster_info::Node;
+use solana_core::genesis_utils::{create_genesis_config, GenesisConfigInfo};
+use solana_core::packet::to_packets_chunked;
+use solana_core::poh_recorder::PohRecorder;
+use solana_core::poh_recorder::WorkingBankEntry;
+use solana_ledger::bank_forks::BankForks;
+use solana_ledger::{blockstore::Blockstore, get_tmp_ledger_path};
 use solana_measure::measure::Measure;
-use solana_perf::packet::to_packets_chunked;
 use solana_runtime::bank::Bank;
-use solana_sdk::{
-    hash::Hash,
-    pubkey::Pubkey,
-    signature::Keypair,
-    signature::Signature,
-    system_transaction,
-    timing::{duration_as_us, timestamp},
-    transaction::Transaction,
-};
-use std::{
-    sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex},
-    thread::sleep,
-    time::{Duration, Instant},
-};
+use solana_sdk::hash::Hash;
+use solana_sdk::pubkey::Pubkey;
+use solana_sdk::signature::Keypair;
+use solana_sdk::signature::Signature;
+use solana_sdk::system_transaction;
+use solana_sdk::timing::{duration_as_us, timestamp};
+use solana_sdk::transaction::Transaction;
+use std::sync::atomic::Ordering;
+use std::sync::mpsc::Receiver;
+use std::sync::{Arc, Mutex, RwLock};
+use std::thread::sleep;
+use std::time::{Duration, Instant};

 fn check_txs(
     receiver: &Arc<Receiver<WorkingBankEntry>>,
@@ -65,22 +57,15 @@ fn check_txs(
     no_bank
 }

-fn make_accounts_txs(
-    total_num_transactions: usize,
-    hash: Hash,
-    same_payer: bool,
-) -> Vec<Transaction> {
+fn make_accounts_txs(txes: usize, mint_keypair: &Keypair, hash: Hash) -> Vec<Transaction> {
     let to_pubkey = Pubkey::new_rand();
-    let payer_key = Keypair::new();
-    let dummy = system_transaction::transfer(&payer_key, &to_pubkey, 1, hash);
-    (0..total_num_transactions)
+    let dummy = system_transaction::transfer(mint_keypair, &to_pubkey, 1, hash);
+    (0..txes)
         .into_par_iter()
         .map(|_| {
             let mut new = dummy.clone();
             let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
-            if !same_payer {
-                new.message.account_keys[0] = Pubkey::new_rand();
-            }
+            new.message.account_keys[0] = Pubkey::new_rand();
             new.message.account_keys[1] = Pubkey::new_rand();
             new.signatures = vec![Signature::new(&sig[0..64])];
             new
@@ -104,61 +89,13 @@ fn bytes_as_usize(bytes: &[u8]) -> usize {
     bytes[0] as usize | (bytes[1] as usize) << 8
 }

-#[allow(clippy::cognitive_complexity)]
 fn main() {
     solana_logger::setup();
-    let matches = App::new(crate_name!())
-        .about(crate_description!())
-        .version(solana_version::version!())
-        .arg(
-            Arg::with_name("num_chunks")
-                .long("num-chunks")
-                .takes_value(true)
-                .value_name("SIZE")
-                .help("Number of transaction chunks."),
-        )
-        .arg(
-            Arg::with_name("packets_per_chunk")
-                .long("packets-per-chunk")
-                .takes_value(true)
-                .value_name("SIZE")
-                .help("Packets per chunk"),
-        )
-        .arg(
-            Arg::with_name("skip_sanity")
-                .long("skip-sanity")
-                .takes_value(false)
-                .help("Skip transaction sanity execution"),
-        )
-        .arg(
-            Arg::with_name("same_payer")
-                .long("same-payer")
-                .takes_value(false)
-                .help("Use the same payer for transfers"),
-        )
-        .arg(
-            Arg::with_name("iterations")
-                .long("iterations")
-                .takes_value(true)
-                .help("Number of iterations"),
-        )
-        .arg(
-            Arg::with_name("num_threads")
-                .long("num-threads")
-                .takes_value(true)
-                .help("Number of iterations"),
-        )
-        .get_matches();
-    let num_threads =
-        value_t!(matches, "num_threads", usize).unwrap_or(BankingStage::num_threads() as usize);
+    let num_threads = BankingStage::num_threads() as usize;
     // a multiple of packet chunk duplicates to avoid races
-    let num_chunks = value_t!(matches, "num_chunks", usize).unwrap_or(16);
-    let packets_per_chunk = value_t!(matches, "packets_per_chunk", usize).unwrap_or(192);
-    let iterations = value_t!(matches, "iterations", usize).unwrap_or(1000);
-    let total_num_transactions = num_chunks * num_threads * packets_per_chunk;
+    const CHUNKS: usize = 8 * 2;
+    const PACKETS_PER_BATCH: usize = 192;
+    let txes = PACKETS_PER_BATCH * num_threads * CHUNKS;
     let mint_total = 1_000_000_000_000;
     let GenesisConfigInfo {
         genesis_config,
@@ -172,44 +109,34 @@ fn main() {
     let mut bank_forks = BankForks::new(0, bank0);
     let mut bank = bank_forks.working_bank();
-    info!("threads: {} txs: {}", num_threads, total_num_transactions);
+    info!("threads: {} txs: {}", num_threads, txes);

-    let same_payer = matches.is_present("same_payer");
-    let mut transactions =
-        make_accounts_txs(total_num_transactions, genesis_config.hash(), same_payer);
+    let mut transactions = make_accounts_txs(txes, &mint_keypair, genesis_config.hash());

     // fund all the accounts
     transactions.iter().for_each(|tx| {
-        let mut fund = system_transaction::transfer(
+        let fund = system_transaction::transfer(
             &mint_keypair,
             &tx.message.account_keys[0],
-            mint_total / total_num_transactions as u64,
+            mint_total / txes as u64,
             genesis_config.hash(),
         );
-        // Ignore any pesky duplicate signature errors in the case we are using single-payer
-        let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
-        fund.signatures = vec![Signature::new(&sig[0..64])];
         let x = bank.process_transaction(&fund);
         x.unwrap();
     });
-    let skip_sanity = matches.is_present("skip_sanity");
-    if !skip_sanity {
-        //sanity check, make sure all the transactions can execute sequentially
-        transactions.iter().for_each(|tx| {
-            let res = bank.process_transaction(&tx);
-            assert!(res.is_ok(), "sanity test transactions error: {:?}", res);
-        });
-        bank.clear_signatures();
-        //sanity check, make sure all the transactions can execute in parallel
-        let res = bank.process_transactions(&transactions);
-        for r in res {
-            assert!(r.is_ok(), "sanity parallel execution error: {:?}", r);
-        }
-        bank.clear_signatures();
-    }
+    //sanity check, make sure all the transactions can execute sequentially
+    transactions.iter().for_each(|tx| {
+        let res = bank.process_transaction(&tx);
+        assert!(res.is_ok(), "sanity test transactions");
+    });
+    bank.clear_signatures();
+    //sanity check, make sure all the transactions can execute in parallel
+    let res = bank.process_transactions(&transactions);
+    for r in res {
+        assert!(r.is_ok(), "sanity parallel execution");
+    }
+    bank.clear_signatures();

-    let mut verified: Vec<_> = to_packets_chunked(&transactions.clone(), packets_per_chunk);
+    let mut verified: Vec<_> = to_packets_chunked(&transactions.clone(), PACKETS_PER_BATCH);
     let ledger_path = get_tmp_ledger_path!();
     {
         let blockstore = Arc::new(
@@ -218,7 +145,7 @@ fn main() {
         let (exit, poh_recorder, poh_service, signal_receiver) =
             create_test_recorder(&bank, &blockstore, None);
         let cluster_info = ClusterInfo::new_with_invalid_keypair(Node::new_localhost().info);
-        let cluster_info = Arc::new(cluster_info);
+        let cluster_info = Arc::new(RwLock::new(cluster_info));
         let banking_stage = BankingStage::new(
             &cluster_info,
             &poh_recorder,
@@ -228,26 +155,25 @@ fn main() {
         );
         poh_recorder.lock().unwrap().set_bank(&bank);

-        let chunk_len = verified.len() / num_chunks;
+        let chunk_len = verified.len() / CHUNKS;
         let mut start = 0;

         // This is so that the signal_receiver does not go out of scope after the closure.
         // If it is dropped before poh_service, then poh_service will error when
         // calling send() on the channel.
let signal_receiver = Arc::new(signal_receiver); let signal_receiver = Arc::new(signal_receiver);
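The comment above relies on standard `std::sync::mpsc` semantics; a minimal sketch (not part of this diff) of why the receiver must outlive the sender's last `send()`:

```rust
use std::sync::mpsc::channel;

fn main() {
    let (sender, receiver) = channel::<u32>();
    drop(receiver); // if the receiver is dropped first...
    assert!(sender.send(1).is_err()); // ...send() errors, as poh_service would
}
```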
let mut total_us = 0; let mut total = 0;
let mut tx_total_us = 0; let mut tx_total = 0;
let base_tx_count = bank.transaction_count();
let mut txs_processed = 0; let mut txs_processed = 0;
let mut root = 1; let mut root = 1;
let collector = Pubkey::new_rand(); let collector = Pubkey::new_rand();
const ITERS: usize = 1_000;
let config = Config { let config = Config {
packets_per_batch: packets_per_chunk, packets_per_batch: PACKETS_PER_BATCH,
chunk_len, chunk_len,
num_threads, num_threads,
}; };
let mut total_sent = 0; for _ in 0..ITERS {
for _ in 0..iterations {
let now = Instant::now(); let now = Instant::now();
let mut sent = 0; let mut sent = 0;
@@ -288,11 +214,7 @@ fn main() {
sleep(Duration::from_millis(5)); sleep(Duration::from_millis(5));
} }
} }
if check_txs( if check_txs(&signal_receiver, txes / CHUNKS, &poh_recorder) {
&signal_receiver,
total_num_transactions / num_chunks,
&poh_recorder,
) {
debug!( debug!(
"resetting bank {} tx count: {} txs_proc: {}", "resetting bank {} tx count: {} txs_proc: {}",
bank.slot(), bank.slot(),
@@ -301,7 +223,7 @@ fn main() {
); );
assert!(txs_processed < bank.transaction_count()); assert!(txs_processed < bank.transaction_count());
txs_processed = bank.transaction_count(); txs_processed = bank.transaction_count();
tx_total_us += duration_as_us(&now.elapsed()); tx_total += duration_as_us(&now.elapsed());
let mut poh_time = Measure::start("poh_time"); let mut poh_time = Measure::start("poh_time");
poh_recorder.lock().unwrap().reset( poh_recorder.lock().unwrap().reset(
@@ -323,7 +245,7 @@ fn main() {
poh_recorder.lock().unwrap().set_bank(&bank); poh_recorder.lock().unwrap().set_bank(&bank);
assert!(poh_recorder.lock().unwrap().bank().is_some()); assert!(poh_recorder.lock().unwrap().bank().is_some());
if bank.slot() > 32 { if bank.slot() > 32 {
bank_forks.set_root(root, &None, None); bank_forks.set_root(root, &None);
root += 1; root += 1;
} }
debug!( debug!(
@@ -333,21 +255,20 @@ fn main() {
poh_time.as_us(), poh_time.as_us(),
); );
} else { } else {
tx_total_us += duration_as_us(&now.elapsed()); tx_total += duration_as_us(&now.elapsed());
} }
// This signature clear may not actually clear the signatures // This signature clear may not actually clear the signatures
// in this chunk, but since we rotate between CHUNKS then // in this chunk, but since we rotate between CHUNKS then
// we should clear them by the time we come around again to re-use that chunk. // we should clear them by the time we come around again to re-use that chunk.
bank.clear_signatures(); bank.clear_signatures();
total_us += duration_as_us(&now.elapsed()); total += duration_as_us(&now.elapsed());
debug!( debug!(
"time: {} us checked: {} sent: {}", "time: {} us checked: {} sent: {}",
duration_as_us(&now.elapsed()), duration_as_us(&now.elapsed()),
total_num_transactions / num_chunks, txes / CHUNKS,
sent, sent,
); );
total_sent += sent;
if bank.slot() > 0 && bank.slot() % 16 == 0 { if bank.slot() > 0 && bank.slot() % 16 == 0 {
for tx in transactions.iter_mut() { for tx in transactions.iter_mut() {
@@ -355,25 +276,19 @@ fn main() {
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect(); let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
tx.signatures[0] = Signature::new(&sig[0..64]); tx.signatures[0] = Signature::new(&sig[0..64]);
} }
verified = to_packets_chunked(&transactions.clone(), packets_per_chunk); verified = to_packets_chunked(&transactions.clone(), PACKETS_PER_BATCH);
} }
start += chunk_len; start += chunk_len;
start %= verified.len(); start %= verified.len();
} }
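The `start` cursor above walks the packet batches like a ring buffer, wrapping with a modulus so chunks are reused across iterations; a tiny sketch with a hypothetical helper:

```rust
// Hypothetical helper mirroring `start += chunk_len; start %= verified.len();`
fn next_chunk_start(start: usize, chunk_len: usize, total: usize) -> usize {
    (start + chunk_len) % total
}

fn main() {
    let (total, chunk_len) = (16, 4);
    let mut start = 0;
    for _ in 0..5 {
        start = next_chunk_start(start, chunk_len, total);
    }
    assert_eq!(start, 4); // wrapped past the end and back around
}
```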
let txs_processed = bank_forks.working_bank().transaction_count();
debug!("processed: {} base: {}", txs_processed, base_tx_count);
eprintln!( eprintln!(
"{{'name': 'banking_bench_total', 'median': '{:.2}'}}", "{{'name': 'banking_bench_total', 'median': '{}'}}",
(1000.0 * 1000.0 * total_sent as f64) / (total_us as f64), total / ITERS as u64,
); );
eprintln!( eprintln!(
"{{'name': 'banking_bench_tx_total', 'median': '{:.2}'}}", "{{'name': 'banking_bench_tx_total', 'median': '{}'}}",
(1000.0 * 1000.0 * total_sent as f64) / (tx_total_us as f64), tx_total / ITERS as u64,
);
eprintln!(
"{{'name': 'banking_bench_success_tx_total', 'median': '{:.2}'}}",
(1000.0 * 1000.0 * (txs_processed - base_tx_count) as f64) / (total_us as f64),
); );
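The benchmark output above converts a transaction count and an elapsed time in microseconds into transactions per second; a minimal sketch of that arithmetic:

```rust
// total_us is in microseconds, so scale by 1_000_000 to get a per-second rate.
fn tps(total_sent: u64, total_us: u64) -> f64 {
    (1_000_000.0 * total_sent as f64) / (total_us as f64)
}

fn main() {
    assert_eq!(tps(50_000, 2_000_000), 25_000.0); // 50k txs in 2s
}
```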
drop(verified_sender); drop(verified_sender);

View File

@@ -2,37 +2,40 @@
authors = ["Solana Maintainers <maintainers@solana.com>"] authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018" edition = "2018"
name = "solana-bench-exchange" name = "solana-bench-exchange"
version = "1.2.1" version = "0.22.8"
repository = "https://github.com/solana-labs/solana" repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0" license = "Apache-2.0"
homepage = "https://solana.com/" homepage = "https://solana.com/"
publish = false publish = false
[dependencies] [dependencies]
clap = "2.33.1" bincode = "1.2.1"
itertools = "0.9.0" bs58 = "0.3.0"
clap = "2.32.0"
env_logger = "0.7.1"
itertools = "0.8.2"
log = "0.4.8" log = "0.4.8"
num-derive = "0.3" num-derive = "0.3"
num-traits = "0.2" num-traits = "0.2"
rand = "0.7.0" rand = "0.6.5"
rayon = "1.3.0" rayon = "1.2.0"
serde_json = "1.0.53" serde = "1.0.104"
serde_yaml = "0.8.12" serde_derive = "1.0.103"
solana-clap-utils = { path = "../clap-utils", version = "1.2.1" } serde_json = "1.0.44"
solana-core = { path = "../core", version = "1.2.1" } serde_yaml = "0.8.11"
solana-genesis = { path = "../genesis", version = "1.2.1" } solana-clap-utils = { path = "../clap-utils", version = "0.22.8" }
solana-client = { path = "../client", version = "1.2.1" } solana-core = { path = "../core", version = "0.22.8" }
solana-faucet = { path = "../faucet", version = "1.2.1" } solana-genesis = { path = "../genesis", version = "0.22.8" }
solana-exchange-program = { path = "../programs/exchange", version = "1.2.1" } solana-client = { path = "../client", version = "0.22.8" }
solana-logger = { path = "../logger", version = "1.2.1" } solana-faucet = { path = "../faucet", version = "0.22.8" }
solana-metrics = { path = "../metrics", version = "1.2.1" } solana-exchange-program = { path = "../programs/exchange", version = "0.22.8" }
solana-net-utils = { path = "../net-utils", version = "1.2.1" } solana-logger = { path = "../logger", version = "0.22.8" }
solana-runtime = { path = "../runtime", version = "1.2.1" } solana-metrics = { path = "../metrics", version = "0.22.8" }
solana-sdk = { path = "../sdk", version = "1.2.1" } solana-net-utils = { path = "../net-utils", version = "0.22.8" }
solana-version = { path = "../version", version = "1.2.1" } solana-runtime = { path = "../runtime", version = "0.22.8" }
solana-sdk = { path = "../sdk", version = "0.22.8" }
untrusted = "0.7.0"
ws = "0.9.1"
[dev-dependencies] [dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "1.2.1" } solana-local-cluster = { path = "../local-cluster", version = "0.22.8" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -15,7 +15,7 @@ use solana_sdk::{
client::{Client, SyncClient}, client::{Client, SyncClient},
commitment_config::CommitmentConfig, commitment_config::CommitmentConfig,
pubkey::Pubkey, pubkey::Pubkey,
signature::{Keypair, Signer}, signature::{Keypair, KeypairUtil},
timing::{duration_as_ms, duration_as_s}, timing::{duration_as_ms, duration_as_s},
transaction::Transaction, transaction::Transaction,
{system_instruction, system_program}, {system_instruction, system_program},
@@ -449,7 +449,7 @@ fn swapper<T>(
} }
account_group = (account_group + 1) % account_groups as usize; account_group = (account_group + 1) % account_groups as usize;
let (blockhash, _fee_calculator, _last_valid_slot) = client let (blockhash, _fee_calculator) = client
.get_recent_blockhash_with_commitment(CommitmentConfig::recent()) .get_recent_blockhash_with_commitment(CommitmentConfig::recent())
.expect("Failed to get blockhash"); .expect("Failed to get blockhash");
let to_swap_txs: Vec<_> = to_swap let to_swap_txs: Vec<_> = to_swap
@@ -459,7 +459,7 @@ fn swapper<T>(
let owner = &signer.pubkey(); let owner = &signer.pubkey();
Transaction::new_signed_instructions( Transaction::new_signed_instructions(
&[s], &[s],
&[exchange_instruction::swap_request( vec![exchange_instruction::swap_request(
owner, owner,
&swap.0.pubkey, &swap.0.pubkey,
&swap.1.pubkey, &swap.1.pubkey,
@@ -577,7 +577,7 @@ fn trader<T>(
} }
account_group = (account_group + 1) % account_groups as usize; account_group = (account_group + 1) % account_groups as usize;
let (blockhash, _fee_calculator, _last_valid_slot) = client let (blockhash, _fee_calculator) = client
.get_recent_blockhash_with_commitment(CommitmentConfig::recent()) .get_recent_blockhash_with_commitment(CommitmentConfig::recent())
.expect("Failed to get blockhash"); .expect("Failed to get blockhash");
@@ -590,7 +590,7 @@ fn trader<T>(
let space = mem::size_of::<ExchangeState>() as u64; let space = mem::size_of::<ExchangeState>() as u64;
Transaction::new_signed_instructions( Transaction::new_signed_instructions(
&[owner.as_ref(), trade], &[owner.as_ref(), trade],
&[ vec![
system_instruction::create_account( system_instruction::create_account(
owner_pubkey, owner_pubkey,
trade_pubkey, trade_pubkey,
@@ -701,7 +701,7 @@ fn verify_funding_transfer<T: SyncClient + ?Sized>(
false false
} }
pub fn fund_keys<T: Client>(client: &T, source: &Keypair, dests: &[Arc<Keypair>], lamports: u64) { pub fn fund_keys(client: &dyn Client, source: &Keypair, dests: &[Arc<Keypair>], lamports: u64) {
let total = lamports * (dests.len() as u64 + 1); let total = lamports * (dests.len() as u64 + 1);
let mut funded: Vec<(&Keypair, u64)> = vec![(source, total)]; let mut funded: Vec<(&Keypair, u64)> = vec![(source, total)];
let mut notfunded: Vec<&Arc<Keypair>> = dests.iter().collect(); let mut notfunded: Vec<&Arc<Keypair>> = dests.iter().collect();
@@ -749,7 +749,7 @@ pub fn fund_keys<T: Client>(client: &T, source: &Keypair, dests: &[Arc<Keypair>]
.map(|(k, m)| { .map(|(k, m)| {
( (
k.clone(), k.clone(),
Transaction::new_unsigned_instructions(&system_instruction::transfer_many( Transaction::new_unsigned_instructions(system_instruction::transfer_many(
&k.pubkey(), &k.pubkey(),
&m, &m,
)), )),
@@ -760,10 +760,9 @@ pub fn fund_keys<T: Client>(client: &T, source: &Keypair, dests: &[Arc<Keypair>]
let mut retries = 0; let mut retries = 0;
let amount = chunk[0].1[0].1; let amount = chunk[0].1[0].1;
while !to_fund_txs.is_empty() { while !to_fund_txs.is_empty() {
let receivers: usize = to_fund_txs let receivers = to_fund_txs
.iter() .iter()
.map(|(_, tx)| tx.message().instructions.len()) .fold(0, |len, (_, tx)| len + tx.message().instructions.len());
.sum();
debug!( debug!(
" {} to {} in {} txs", " {} to {} in {} txs",
@@ -776,7 +775,7 @@ pub fn fund_keys<T: Client>(client: &T, source: &Keypair, dests: &[Arc<Keypair>]
to_fund_txs.len(), to_fund_txs.len(),
); );
let (blockhash, _fee_calculator, _last_valid_slot) = client let (blockhash, _fee_calculator) = client
.get_recent_blockhash_with_commitment(CommitmentConfig::recent()) .get_recent_blockhash_with_commitment(CommitmentConfig::recent())
.expect("blockhash"); .expect("blockhash");
to_fund_txs.par_iter_mut().for_each(|(k, tx)| { to_fund_txs.par_iter_mut().for_each(|(k, tx)| {
@@ -825,11 +824,7 @@ pub fn fund_keys<T: Client>(client: &T, source: &Keypair, dests: &[Arc<Keypair>]
} }
} }
pub fn create_token_accounts<T: Client>( pub fn create_token_accounts(client: &dyn Client, signers: &[Arc<Keypair>], accounts: &[Keypair]) {
client: &T,
signers: &[Arc<Keypair>],
accounts: &[Keypair],
) {
let mut notfunded: Vec<(&Arc<Keypair>, &Keypair)> = signers.iter().zip(accounts).collect(); let mut notfunded: Vec<(&Arc<Keypair>, &Keypair)> = signers.iter().zip(accounts).collect();
while !notfunded.is_empty() { while !notfunded.is_empty() {
@@ -850,15 +845,14 @@ pub fn create_token_accounts<T: Client>(
exchange_instruction::account_request(owner_pubkey, &new_keypair.pubkey()); exchange_instruction::account_request(owner_pubkey, &new_keypair.pubkey());
( (
(from_keypair, new_keypair), (from_keypair, new_keypair),
Transaction::new_unsigned_instructions(&[create_ix, request_ix]), Transaction::new_unsigned_instructions(vec![create_ix, request_ix]),
) )
}) })
.collect(); .collect();
let accounts: usize = to_create_txs let accounts = to_create_txs
.iter() .iter()
.map(|(_, tx)| tx.message().instructions.len() / 2) .fold(0, |len, (_, tx)| len + tx.message().instructions.len() / 2);
.sum();
debug!( debug!(
" Creating {} accounts in {} txs", " Creating {} accounts in {} txs",
@@ -868,7 +862,7 @@ pub fn create_token_accounts<T: Client>(
let mut retries = 0; let mut retries = 0;
while !to_create_txs.is_empty() { while !to_create_txs.is_empty() {
let (blockhash, _fee_calculator, _last_valid_slot) = client let (blockhash, _fee_calculator) = client
.get_recent_blockhash_with_commitment(CommitmentConfig::recent()) .get_recent_blockhash_with_commitment(CommitmentConfig::recent())
.expect("Failed to get blockhash"); .expect("Failed to get blockhash");
to_create_txs to_create_txs
@@ -974,12 +968,7 @@ fn generate_keypairs(num: u64) -> Vec<Keypair> {
rnd.gen_n_keypairs(num) rnd.gen_n_keypairs(num)
} }
pub fn airdrop_lamports<T: Client>( pub fn airdrop_lamports(client: &dyn Client, faucet_addr: &SocketAddr, id: &Keypair, amount: u64) {
client: &T,
faucet_addr: &SocketAddr,
id: &Keypair,
amount: u64,
) {
let balance = client.get_balance_with_commitment(&id.pubkey(), CommitmentConfig::recent()); let balance = client.get_balance_with_commitment(&id.pubkey(), CommitmentConfig::recent());
let balance = balance.unwrap_or(0); let balance = balance.unwrap_or(0);
if balance >= amount { if balance >= amount {
@@ -997,7 +986,7 @@ pub fn airdrop_lamports<T: Client>(
let mut tries = 0; let mut tries = 0;
loop { loop {
let (blockhash, _fee_calculator, _last_valid_slot) = client let (blockhash, _fee_calculator) = client
.get_recent_blockhash_with_commitment(CommitmentConfig::recent()) .get_recent_blockhash_with_commitment(CommitmentConfig::recent())
.expect("Failed to get blockhash"); .expect("Failed to get blockhash");
match request_airdrop_transaction(&faucet_addr, &id.pubkey(), amount_to_drop, blockhash) { match request_airdrop_transaction(&faucet_addr, &id.pubkey(), amount_to_drop, blockhash) {

View File

@@ -1,7 +1,7 @@
use clap::{crate_description, crate_name, value_t, App, Arg, ArgMatches}; use clap::{crate_description, crate_name, value_t, App, Arg, ArgMatches};
use solana_core::gen_keys::GenKeys; use solana_core::gen_keys::GenKeys;
use solana_faucet::faucet::FAUCET_PORT; use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::signature::{read_keypair_file, Keypair}; use solana_sdk::signature::{read_keypair_file, Keypair, KeypairUtil};
use std::net::SocketAddr; use std::net::SocketAddr;
use std::process::exit; use std::process::exit;
use std::time::Duration; use std::time::Duration;

View File

@@ -5,13 +5,13 @@ pub mod order_book;
use crate::bench::{airdrop_lamports, create_client_accounts_file, do_bench_exchange, Config}; use crate::bench::{airdrop_lamports, create_client_accounts_file, do_bench_exchange, Config};
use log::*; use log::*;
use solana_core::gossip_service::{discover_cluster, get_multi_client}; use solana_core::gossip_service::{discover_cluster, get_multi_client};
use solana_sdk::signature::Signer; use solana_sdk::signature::KeypairUtil;
fn main() { fn main() {
solana_logger::setup(); solana_logger::setup();
solana_metrics::set_panic_hook("bench-exchange"); solana_metrics::set_panic_hook("bench-exchange");
let matches = cli::build_args(solana_version::version!()).get_matches(); let matches = cli::build_args(solana_clap_utils::version!()).get_matches();
let cli_config = cli::extract_args(&matches); let cli_config = cli::extract_args(&matches);
let cli::Config { let cli::Config {
@@ -54,9 +54,10 @@ fn main() {
); );
} else { } else {
info!("Connecting to the cluster"); info!("Connecting to the cluster");
let nodes = discover_cluster(&entrypoint_addr, num_nodes).unwrap_or_else(|_| { let (nodes, _archivers) =
panic!("Failed to discover nodes"); discover_cluster(&entrypoint_addr, num_nodes).unwrap_or_else(|_| {
}); panic!("Failed to discover nodes");
});
let (client, num_clients) = get_multi_client(&nodes); let (client, num_clients) = get_multi_client(&nodes);

View File

@@ -10,13 +10,12 @@ use solana_local_cluster::local_cluster::{ClusterConfig, LocalCluster};
use solana_runtime::bank::Bank; use solana_runtime::bank::Bank;
use solana_runtime::bank_client::BankClient; use solana_runtime::bank_client::BankClient;
use solana_sdk::genesis_config::create_genesis_config; use solana_sdk::genesis_config::create_genesis_config;
use solana_sdk::signature::{Keypair, Signer}; use solana_sdk::signature::{Keypair, KeypairUtil};
use std::process::exit; use std::process::exit;
use std::sync::mpsc::channel; use std::sync::mpsc::channel;
use std::time::Duration; use std::time::Duration;
#[test] #[test]
#[ignore]
fn test_exchange_local_cluster() { fn test_exchange_local_cluster() {
solana_logger::setup(); solana_logger::setup();
@@ -59,7 +58,7 @@ fn test_exchange_local_cluster() {
let faucet_addr = addr_receiver.recv_timeout(Duration::from_secs(2)).unwrap(); let faucet_addr = addr_receiver.recv_timeout(Duration::from_secs(2)).unwrap();
info!("Connecting to the cluster"); info!("Connecting to the cluster");
let nodes = let (nodes, _) =
discover_cluster(&cluster.entry_point_info.gossip, NUM_NODES).unwrap_or_else(|err| { discover_cluster(&cluster.entry_point_info.gossip, NUM_NODES).unwrap_or_else(|err| {
error!("Failed to discover {} nodes: {:?}", NUM_NODES, err); error!("Failed to discover {} nodes: {:?}", NUM_NODES, err);
exit(1); exit(1);
@@ -86,7 +85,7 @@ fn test_exchange_bank_client() {
solana_logger::setup(); solana_logger::setup();
let (genesis_config, identity) = create_genesis_config(100_000_000_000_000); let (genesis_config, identity) = create_genesis_config(100_000_000_000_000);
let mut bank = Bank::new(&genesis_config); let mut bank = Bank::new(&genesis_config);
bank.add_builtin_program("exchange_program", id(), process_instruction); bank.add_instruction_processor(id(), process_instruction);
let clients = vec![BankClient::new(bank)]; let clients = vec![BankClient::new(bank)];
let mut config = Config::default(); let mut config = Config::default();

View File

@@ -2,18 +2,14 @@
authors = ["Solana Maintainers <maintainers@solana.com>"] authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018" edition = "2018"
name = "solana-bench-streamer" name = "solana-bench-streamer"
version = "1.2.1" version = "0.22.8"
repository = "https://github.com/solana-labs/solana" repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0" license = "Apache-2.0"
homepage = "https://solana.com/" homepage = "https://solana.com/"
[dependencies] [dependencies]
clap = "2.33.1" clap = "2.33.0"
solana-clap-utils = { path = "../clap-utils", version = "1.2.1" } solana-clap-utils = { path = "../clap-utils", version = "0.22.8" }
solana-streamer = { path = "../streamer", version = "1.2.1" } solana-core = { path = "../core", version = "0.22.8" }
solana-logger = { path = "../logger", version = "1.2.1" } solana-logger = { path = "../logger", version = "0.22.8" }
solana-net-utils = { path = "../net-utils", version = "1.2.1" } solana-net-utils = { path = "../net-utils", version = "0.22.8" }
solana-version = { path = "../version", version = "1.2.1" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -1,13 +1,14 @@
use clap::{crate_description, crate_name, App, Arg}; use clap::{crate_description, crate_name, App, Arg};
use solana_streamer::packet::{Packet, Packets, PacketsRecycler, PACKET_DATA_SIZE}; use solana_core::packet::{Packet, Packets, PacketsRecycler, PACKET_DATA_SIZE};
use solana_streamer::streamer::{receiver, PacketReceiver}; use solana_core::result::Result;
use solana_core::streamer::{receiver, PacketReceiver};
use std::cmp::max; use std::cmp::max;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket}; use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering}; use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::mpsc::channel; use std::sync::mpsc::channel;
use std::sync::Arc; use std::sync::Arc;
use std::thread::sleep; use std::thread::sleep;
use std::thread::{spawn, JoinHandle, Result}; use std::thread::{spawn, JoinHandle};
use std::time::Duration; use std::time::Duration;
use std::time::SystemTime; use std::time::SystemTime;
@@ -52,7 +53,7 @@ fn main() -> Result<()> {
let matches = App::new(crate_name!()) let matches = App::new(crate_name!())
.about(crate_description!()) .about(crate_description!())
.version(solana_version::version!()) .version(solana_clap_utils::version!())
.arg( .arg(
Arg::with_name("num-recv-sockets") Arg::with_name("num-recv-sockets")
.long("num-recv-sockets") .long("num-recv-sockets")
@@ -67,8 +68,7 @@ fn main() -> Result<()> {
} }
let mut port = 0; let mut port = 0;
let ip_addr = IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)); let mut addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), 0);
let mut addr = SocketAddr::new(ip_addr, 0);
let exit = Arc::new(AtomicBool::new(false)); let exit = Arc::new(AtomicBool::new(false));
@@ -76,7 +76,7 @@ fn main() -> Result<()> {
let mut read_threads = Vec::new(); let mut read_threads = Vec::new();
let recycler = PacketsRecycler::default(); let recycler = PacketsRecycler::default();
for _ in 0..num_sockets { for _ in 0..num_sockets {
let read = solana_net_utils::bind_to(ip_addr, port, false).unwrap(); let read = solana_net_utils::bind_to(port, false).unwrap();
read.set_read_timeout(Some(Duration::new(1, 0))).unwrap(); read.set_read_timeout(Some(Duration::new(1, 0))).unwrap();
addr = read.local_addr().unwrap(); addr = read.local_addr().unwrap();

View File

@@ -2,40 +2,38 @@
authors = ["Solana Maintainers <maintainers@solana.com>"] authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018" edition = "2018"
name = "solana-bench-tps" name = "solana-bench-tps"
version = "1.2.1" version = "0.22.8"
repository = "https://github.com/solana-labs/solana" repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0" license = "Apache-2.0"
homepage = "https://solana.com/" homepage = "https://solana.com/"
[dependencies] [dependencies]
bincode = "1.2.1" bincode = "1.2.1"
clap = "2.33.1" clap = "2.33.0"
log = "0.4.8" log = "0.4.8"
rayon = "1.3.0" rayon = "1.2.0"
serde_json = "1.0.53" serde = "1.0.104"
serde_yaml = "0.8.12" serde_derive = "1.0.103"
solana-clap-utils = { path = "../clap-utils", version = "1.2.1" } serde_json = "1.0.44"
solana-core = { path = "../core", version = "1.2.1" } serde_yaml = "0.8.11"
solana-genesis = { path = "../genesis", version = "1.2.1" } solana-clap-utils = { path = "../clap-utils", version = "0.22.8" }
solana-client = { path = "../client", version = "1.2.1" } solana-core = { path = "../core", version = "0.22.8" }
solana-faucet = { path = "../faucet", version = "1.2.1" } solana-genesis = { path = "../genesis", version = "0.22.8" }
solana-librapay = { path = "../programs/librapay", version = "1.2.1", optional = true } solana-client = { path = "../client", version = "0.22.8" }
solana-logger = { path = "../logger", version = "1.2.1" } solana-faucet = { path = "../faucet", version = "0.22.8" }
solana-metrics = { path = "../metrics", version = "1.2.1" } solana-librapay = { path = "../programs/librapay", version = "0.22.8", optional = true }
solana-measure = { path = "../measure", version = "1.2.1" } solana-logger = { path = "../logger", version = "0.22.8" }
solana-net-utils = { path = "../net-utils", version = "1.2.1" } solana-metrics = { path = "../metrics", version = "0.22.8" }
solana-runtime = { path = "../runtime", version = "1.2.1" } solana-measure = { path = "../measure", version = "0.22.8" }
solana-sdk = { path = "../sdk", version = "1.2.1" } solana-net-utils = { path = "../net-utils", version = "0.22.8" }
solana-move-loader-program = { path = "../programs/move_loader", version = "1.2.1", optional = true } solana-runtime = { path = "../runtime", version = "0.22.8" }
solana-version = { path = "../version", version = "1.2.1" } solana-sdk = { path = "../sdk", version = "0.22.8" }
solana-move-loader-program = { path = "../programs/move_loader", version = "0.22.8", optional = true }
[dev-dependencies] [dev-dependencies]
serial_test = "0.4.0" serial_test = "0.3.2"
serial_test_derive = "0.4.0" serial_test_derive = "0.3.1"
solana-local-cluster = { path = "../local-cluster", version = "1.2.1" } solana-local-cluster = { path = "../local-cluster", version = "0.22.8" }
[features] [features]
move = ["solana-librapay", "solana-move-loader-program"] move = ["solana-librapay", "solana-move-loader-program"]
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -7,7 +7,7 @@ use solana_faucet::faucet::request_airdrop_transaction;
#[cfg(feature = "move")] #[cfg(feature = "move")]
use solana_librapay::{create_genesis, upload_mint_script, upload_payment_script}; use solana_librapay::{create_genesis, upload_mint_script, upload_payment_script};
use solana_measure::measure::Measure; use solana_measure::measure::Measure;
use solana_metrics::{self, datapoint_info}; use solana_metrics::{self, datapoint_debug};
use solana_sdk::{ use solana_sdk::{
client::Client, client::Client,
clock::{DEFAULT_TICKS_PER_SECOND, DEFAULT_TICKS_PER_SLOT, MAX_PROCESSING_AGE}, clock::{DEFAULT_TICKS_PER_SECOND, DEFAULT_TICKS_PER_SLOT, MAX_PROCESSING_AGE},
@@ -15,7 +15,7 @@ use solana_sdk::{
fee_calculator::FeeCalculator, fee_calculator::FeeCalculator,
hash::Hash, hash::Hash,
pubkey::Pubkey, pubkey::Pubkey,
signature::{Keypair, Signer}, signature::{Keypair, KeypairUtil},
system_instruction, system_transaction, system_instruction, system_transaction,
timing::{duration_as_ms, duration_as_s, duration_as_us, timestamp}, timing::{duration_as_ms, duration_as_s, duration_as_us, timestamp},
transaction::Transaction, transaction::Transaction,
@@ -26,9 +26,9 @@ use std::{
process::exit, process::exit,
sync::{ sync::{
atomic::{AtomicBool, AtomicIsize, AtomicUsize, Ordering}, atomic::{AtomicBool, AtomicIsize, AtomicUsize, Ordering},
Arc, Mutex, RwLock, Arc, RwLock,
}, },
thread::{sleep, Builder, JoinHandle}, thread::{sleep, Builder},
time::{Duration, Instant}, time::{Duration, Instant},
}; };
@@ -55,9 +55,7 @@ type LibraKeys = (Keypair, Pubkey, Pubkey, Vec<Keypair>);
fn get_recent_blockhash<T: Client>(client: &T) -> (Hash, FeeCalculator) { fn get_recent_blockhash<T: Client>(client: &T) -> (Hash, FeeCalculator) {
loop { loop {
match client.get_recent_blockhash_with_commitment(CommitmentConfig::recent()) { match client.get_recent_blockhash_with_commitment(CommitmentConfig::recent()) {
Ok((blockhash, fee_calculator, _last_valid_slot)) => { Ok((blockhash, fee_calculator)) => return (blockhash, fee_calculator),
return (blockhash, fee_calculator)
}
Err(err) => { Err(err) => {
info!("Couldn't get recent blockhash: {:?}", err); info!("Couldn't get recent blockhash: {:?}", err);
sleep(Duration::from_secs(1)); sleep(Duration::from_secs(1));
@@ -66,144 +64,6 @@ fn get_recent_blockhash<T: Client>(client: &T) -> (Hash, FeeCalculator) {
} }
} }
fn wait_for_target_slots_per_epoch<T>(target_slots_per_epoch: u64, client: &Arc<T>)
where
T: 'static + Client + Send + Sync,
{
if target_slots_per_epoch != 0 {
info!(
"Waiting until epochs are {} slots long..",
target_slots_per_epoch
);
loop {
if let Ok(epoch_info) = client.get_epoch_info() {
if epoch_info.slots_in_epoch >= target_slots_per_epoch {
info!("Done epoch_info: {:?}", epoch_info);
break;
}
info!(
"Waiting for epoch: {} now: {}",
target_slots_per_epoch, epoch_info.slots_in_epoch
);
}
sleep(Duration::from_secs(3));
}
}
}
fn create_sampler_thread<T>(
client: &Arc<T>,
exit_signal: &Arc<AtomicBool>,
sample_period: u64,
maxes: &Arc<RwLock<Vec<(String, SampleStats)>>>,
) -> JoinHandle<()>
where
T: 'static + Client + Send + Sync,
{
info!("Sampling TPS every {} second...", sample_period);
let exit_signal = exit_signal.clone();
let maxes = maxes.clone();
let client = client.clone();
Builder::new()
.name("solana-client-sample".to_string())
.spawn(move || {
sample_txs(&exit_signal, &maxes, sample_period, &client);
})
.unwrap()
}
fn generate_chunked_transfers(
recent_blockhash: Arc<RwLock<Hash>>,
shared_txs: &SharedTransactions,
shared_tx_active_thread_count: Arc<AtomicIsize>,
source_keypair_chunks: Vec<Vec<&Keypair>>,
dest_keypair_chunks: &mut Vec<VecDeque<&Keypair>>,
threads: usize,
duration: Duration,
sustained: bool,
libra_args: Option<LibraKeys>,
) {
// generate and send transactions for the specified duration
let start = Instant::now();
let keypair_chunks = source_keypair_chunks.len();
let mut reclaim_lamports_back_to_source_account = false;
let mut chunk_index = 0;
while start.elapsed() < duration {
generate_txs(
shared_txs,
&recent_blockhash,
&source_keypair_chunks[chunk_index],
&dest_keypair_chunks[chunk_index],
threads,
reclaim_lamports_back_to_source_account,
&libra_args,
);
// In sustained mode, overlap the transfers with generation. This has higher average
// performance but lower peak performance in tested environments.
if sustained {
// Ensure that we don't generate more transactions than we can handle.
while shared_txs.read().unwrap().len() > 2 * threads {
sleep(Duration::from_millis(1));
}
} else {
while !shared_txs.read().unwrap().is_empty()
|| shared_tx_active_thread_count.load(Ordering::Relaxed) > 0
{
sleep(Duration::from_millis(1));
}
}
// Rotate destination keypairs so that the next round of transactions will have different
// transaction signatures even when blockhash is reused.
dest_keypair_chunks[chunk_index].rotate_left(1);
// Move on to next chunk
chunk_index = (chunk_index + 1) % keypair_chunks;
// Switch directions after transferring for each "chunk"
if chunk_index == 0 {
reclaim_lamports_back_to_source_account = !reclaim_lamports_back_to_source_account;
}
}
}
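The sustained-mode branch above applies simple backpressure: generation stalls while the shared queue holds more than twice as many batches as there are sender threads. A minimal sketch under those assumptions (the `Batch` type is hypothetical):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, RwLock};
use std::thread::sleep;
use std::time::Duration;

type Batch = Vec<u8>; // hypothetical stand-in for a batch of transactions

// Block the producer until sender threads drain the queue below 2x threads.
fn throttle_generation(shared_txs: &Arc<RwLock<VecDeque<Batch>>>, threads: usize) {
    while shared_txs.read().unwrap().len() > 2 * threads {
        sleep(Duration::from_millis(1));
    }
}

fn main() {
    let q: Arc<RwLock<VecDeque<Batch>>> = Arc::new(RwLock::new(VecDeque::new()));
    throttle_generation(&q, 4); // empty queue: returns immediately
}
```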
fn create_sender_threads<T>(
client: &Arc<T>,
shared_txs: &SharedTransactions,
thread_batch_sleep_ms: usize,
total_tx_sent_count: &Arc<AtomicUsize>,
threads: usize,
exit_signal: &Arc<AtomicBool>,
shared_tx_active_thread_count: &Arc<AtomicIsize>,
) -> Vec<JoinHandle<()>>
where
T: 'static + Client + Send + Sync,
{
(0..threads)
.map(|_| {
let exit_signal = exit_signal.clone();
let shared_txs = shared_txs.clone();
let shared_tx_active_thread_count = shared_tx_active_thread_count.clone();
let total_tx_sent_count = total_tx_sent_count.clone();
let client = client.clone();
Builder::new()
.name("solana-client-sender".to_string())
.spawn(move || {
do_tx_transfers(
&exit_signal,
&shared_txs,
&shared_tx_active_thread_count,
&total_tx_sent_count,
thread_batch_sleep_ms,
&client,
);
})
.unwrap()
})
.collect()
}
pub fn do_bench_tps<T>( pub fn do_bench_tps<T>(
client: Arc<T>, client: Arc<T>,
config: Config, config: Config,
@@ -220,7 +80,6 @@ where
duration, duration,
tx_count, tx_count,
sustained, sustained,
target_slots_per_epoch,
.. ..
} = config; } = config;
@@ -249,7 +108,18 @@ where
// collect the max transaction rate and total tx count seen // collect the max transaction rate and total tx count seen
let maxes = Arc::new(RwLock::new(Vec::new())); let maxes = Arc::new(RwLock::new(Vec::new()));
let sample_period = 1; // in seconds let sample_period = 1; // in seconds
let sample_thread = create_sampler_thread(&client, &exit_signal, sample_period, &maxes); info!("Sampling TPS every {} second...", sample_period);
let sample_thread = {
let exit_signal = exit_signal.clone();
let maxes = maxes.clone();
let client = client.clone();
Builder::new()
.name("solana-client-sample".to_string())
.spawn(move || {
sample_txs(&exit_signal, &maxes, sample_period, &client);
})
.unwrap()
};
let shared_txs: SharedTransactions = Arc::new(RwLock::new(VecDeque::new())); let shared_txs: SharedTransactions = Arc::new(RwLock::new(VecDeque::new()));
@@ -270,31 +140,70 @@ where
.unwrap() .unwrap()
}; };
let s_threads = create_sender_threads( let s_threads: Vec<_> = (0..threads)
&client, .map(|_| {
&shared_txs, let exit_signal = exit_signal.clone();
thread_batch_sleep_ms, let shared_txs = shared_txs.clone();
&total_tx_sent_count, let shared_tx_active_thread_count = shared_tx_active_thread_count.clone();
threads, let total_tx_sent_count = total_tx_sent_count.clone();
&exit_signal, let client = client.clone();
&shared_tx_active_thread_count, Builder::new()
); .name("solana-client-sender".to_string())
.spawn(move || {
wait_for_target_slots_per_epoch(target_slots_per_epoch, &client); do_tx_transfers(
&exit_signal,
&shared_txs,
&shared_tx_active_thread_count,
&total_tx_sent_count,
thread_batch_sleep_ms,
&client,
);
})
.unwrap()
})
.collect();
// generate and send transactions for the specified duration
let start = Instant::now(); let start = Instant::now();
let keypair_chunks = source_keypair_chunks.len();
let mut reclaim_lamports_back_to_source_account = false;
let mut chunk_index = 0;
while start.elapsed() < duration {
generate_txs(
&shared_txs,
&recent_blockhash,
&source_keypair_chunks[chunk_index],
&dest_keypair_chunks[chunk_index],
threads,
reclaim_lamports_back_to_source_account,
&libra_args,
);
generate_chunked_transfers( // In sustained mode, overlap the transfers with generation. This has higher average
recent_blockhash, // performance but lower peak performance in tested environments.
&shared_txs, if sustained {
shared_tx_active_thread_count, // Ensure that we don't generate more transactions than we can handle.
source_keypair_chunks, while shared_txs.read().unwrap().len() > 2 * threads {
&mut dest_keypair_chunks, sleep(Duration::from_millis(1));
threads, }
duration, } else {
sustained, while shared_tx_active_thread_count.load(Ordering::Relaxed) > 0 {
libra_args, sleep(Duration::from_millis(1));
); }
}
// Rotate destination keypairs so that the next round of transactions will have different
// transaction signatures even when blockhash is reused.
dest_keypair_chunks[chunk_index].rotate_left(1);
// Move on to next chunk
chunk_index = (chunk_index + 1) % keypair_chunks;
// Switch directions after transferring for each "chunk"
if chunk_index == 0 {
reclaim_lamports_back_to_source_account = !reclaim_lamports_back_to_source_account;
}
}
// Stop the sampling threads so it will collect the stats // Stop the sampling threads so it will collect the stats
exit_signal.store(true, Ordering::Relaxed); exit_signal.store(true, Ordering::Relaxed);
@@ -333,7 +242,7 @@ where
fn metrics_submit_lamport_balance(lamport_balance: u64) { fn metrics_submit_lamport_balance(lamport_balance: u64) {
info!("Token balance: {}", lamport_balance); info!("Token balance: {}", lamport_balance);
datapoint_info!( datapoint_debug!(
"bench-tps-lamport_balance", "bench-tps-lamport_balance",
("balance", lamport_balance, i64) ("balance", lamport_balance, i64)
); );
@@ -464,7 +373,7 @@ fn generate_txs(
duration_as_ms(&duration), duration_as_ms(&duration),
blockhash, blockhash,
); );
datapoint_info!( datapoint_debug!(
"bench-tps-generate_txs", "bench-tps-generate_txs",
("duration", duration_as_us(&duration), i64) ("duration", duration_as_us(&duration), i64)
); );
@@ -570,7 +479,7 @@ fn do_tx_transfers<T: Client>(
duration_as_ms(&transfer_start.elapsed()), duration_as_ms(&transfer_start.elapsed()),
tx_len as f32 / duration_as_s(&transfer_start.elapsed()), tx_len as f32 / duration_as_s(&transfer_start.elapsed()),
); );
datapoint_info!( datapoint_debug!(
"bench-tps-do_tx_transfers", "bench-tps-do_tx_transfers",
("duration", duration_as_us(&transfer_start.elapsed()), i64), ("duration", duration_as_us(&transfer_start.elapsed()), i64),
("count", tx_len, i64) ("count", tx_len, i64)
@@ -652,9 +561,10 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
let to_fund_txs: Vec<(&Keypair, Transaction)> = to_fund let to_fund_txs: Vec<(&Keypair, Transaction)> = to_fund
.par_iter() .par_iter()
.map(|(k, t)| { .map(|(k, t)| {
let tx = Transaction::new_unsigned_instructions( let tx = Transaction::new_unsigned_instructions(system_instruction::transfer_many(
&system_instruction::transfer_many(&k.pubkey(), &t), &k.pubkey(),
); &t,
));
(*k, tx) (*k, tx)
}) })
.collect(); .collect();
@@ -691,9 +601,7 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
let too_many_failures = Arc::new(AtomicBool::new(false)); let too_many_failures = Arc::new(AtomicBool::new(false));
let loops = if starting_txs < 1000 { 3 } else { 1 }; let loops = if starting_txs < 1000 { 3 } else { 1 };
// Only loop multiple times for small (quick) transaction batches // Only loop multiple times for small (quick) transaction batches
let time = Arc::new(Mutex::new(Instant::now()));
for _ in 0..loops { for _ in 0..loops {
let time = time.clone();
let failed_verify = Arc::new(AtomicUsize::new(0)); let failed_verify = Arc::new(AtomicUsize::new(0));
let client = client.clone(); let client = client.clone();
let verified_txs = &verified_txs; let verified_txs = &verified_txs;
@@ -724,15 +632,11 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
remaining_count, verified_txs, failed_verify remaining_count, verified_txs, failed_verify
); );
} }
if remaining_count > 0 { if remaining_count % 100 == 0 {
let mut time_l = time.lock().unwrap(); info!(
if time_l.elapsed().as_secs() > 2 { "Verifying transfers... {} remaining, {} verified, {} failures",
info!( remaining_count, verified_txs, failed_verify
"Verifying transfers... {} remaining, {} verified, {} failures", );
remaining_count, verified_txs, failed_verify
);
*time_l = Instant::now();
}
} }
verified verified
@@ -1025,7 +929,7 @@ fn fund_move_keys<T: Client>(
.collect(); .collect();
let tx = Transaction::new_signed_instructions( let tx = Transaction::new_signed_instructions(
&[funding_key], &[funding_key],
&system_instruction::transfer_many(&funding_key.pubkey(), &pubkey_amounts), system_instruction::transfer_many(&funding_key.pubkey(), &pubkey_amounts),
blockhash, blockhash,
); );
client.send_message(&[funding_key], tx.message).unwrap(); client.send_message(&[funding_key], tx.message).unwrap();
@@ -1153,8 +1057,8 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
// pay for the transaction fees in a new run. // pay for the transaction fees in a new run.
let enough_lamports = 8 * lamports_per_account / 10; let enough_lamports = 8 * lamports_per_account / 10;
if first_keypair_balance < enough_lamports || last_keypair_balance < enough_lamports { if first_keypair_balance < enough_lamports || last_keypair_balance < enough_lamports {
let fee_rate_governor = client.get_fee_rate_governor().unwrap(); let (_blockhash, fee_calculator) = get_recent_blockhash(client.as_ref());
let max_fee = fee_rate_governor.max_lamports_per_signature; let max_fee = fee_calculator.max_lamports_per_signature;
let extra_fees = extra * max_fee; let extra_fees = extra * max_fee;
let total_keypairs = keypairs.len() as u64 + 1; // Add one for funding keypair let total_keypairs = keypairs.len() as u64 + 1; // Add one for funding keypair
let mut total = lamports_per_account * total_keypairs + extra_fees; let mut total = lamports_per_account * total_keypairs + extra_fees;
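The funding math in the hunk above budgets a per-account amount for every keypair plus the funding keypair, with fee headroom on top; a small sketch (names local to this example):

```rust
// total = lamports_per_account * (keypairs + 1 funder) + extra * max_fee
fn funding_total(lamports_per_account: u64, keypairs: u64, extra: u64, max_fee: u64) -> u64 {
    let extra_fees = extra * max_fee;
    let total_keypairs = keypairs + 1; // add one for the funding keypair
    lamports_per_account * total_keypairs + extra_fees
}

fn main() {
    assert_eq!(funding_total(1_000, 19, 5, 10), 20_050);
}
```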
@@ -1228,7 +1132,7 @@ mod tests {
use solana_runtime::bank::Bank; use solana_runtime::bank::Bank;
use solana_runtime::bank_client::BankClient; use solana_runtime::bank_client::BankClient;
use solana_sdk::client::SyncClient; use solana_sdk::client::SyncClient;
use solana_sdk::fee_calculator::FeeRateGovernor; use solana_sdk::fee_calculator::FeeCalculator;
use solana_sdk::genesis_config::create_genesis_config; use solana_sdk::genesis_config::create_genesis_config;
#[test] #[test]
@@ -1275,8 +1179,8 @@ mod tests {
#[test] #[test]
fn test_bench_tps_fund_keys_with_fees() { fn test_bench_tps_fund_keys_with_fees() {
let (mut genesis_config, id) = create_genesis_config(10_000); let (mut genesis_config, id) = create_genesis_config(10_000);
let fee_rate_governor = FeeRateGovernor::new(11, 0); let fee_calculator = FeeCalculator::new(11, 0);
genesis_config.fee_rate_governor = fee_rate_governor; genesis_config.fee_calculator = fee_calculator;
let bank = Bank::new(&genesis_config); let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank)); let client = Arc::new(BankClient::new(bank));
let keypair_count = 20; let keypair_count = 20;

View File

@@ -1,10 +1,10 @@
use clap::{crate_description, crate_name, App, Arg, ArgMatches}; use clap::{crate_description, crate_name, App, Arg, ArgMatches};
use solana_faucet::faucet::FAUCET_PORT; use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::fee_calculator::FeeRateGovernor; use solana_sdk::fee_calculator::FeeCalculator;
use solana_sdk::signature::{read_keypair_file, Keypair}; use solana_sdk::signature::{read_keypair_file, Keypair, KeypairUtil};
use std::{net::SocketAddr, process::exit, time::Duration}; use std::{net::SocketAddr, process::exit, time::Duration};
const NUM_LAMPORTS_PER_ACCOUNT_DEFAULT: u64 = solana_sdk::native_token::LAMPORTS_PER_SOL; const NUM_LAMPORTS_PER_ACCOUNT_DEFAULT: u64 = solana_sdk::native_token::SOL_LAMPORTS;
/// Holds the configuration for a single run of the benchmark /// Holds the configuration for a single run of the benchmark
pub struct Config { pub struct Config {
@@ -25,7 +25,6 @@ pub struct Config {
pub multi_client: bool, pub multi_client: bool,
pub use_move: bool, pub use_move: bool,
pub num_lamports_per_account: u64, pub num_lamports_per_account: u64,
pub target_slots_per_epoch: u64,
} }
impl Default for Config { impl Default for Config {
@@ -44,11 +43,10 @@ impl Default for Config {
client_ids_and_stake_file: String::new(), client_ids_and_stake_file: String::new(),
write_to_client_file: false, write_to_client_file: false,
read_from_client_file: false, read_from_client_file: false,
target_lamports_per_signature: FeeRateGovernor::default().target_lamports_per_signature, target_lamports_per_signature: FeeCalculator::default().target_lamports_per_signature,
multi_client: true, multi_client: true,
use_move: false, use_move: false,
num_lamports_per_account: NUM_LAMPORTS_PER_ACCOUNT_DEFAULT, num_lamports_per_account: NUM_LAMPORTS_PER_ACCOUNT_DEFAULT,
target_slots_per_epoch: 0,
} }
} }
} }
@@ -174,15 +172,6 @@ pub fn build_args<'a, 'b>(version: &'b str) -> App<'a, 'b> {
"Number of lamports per account.", "Number of lamports per account.",
), ),
) )
.arg(
Arg::with_name("target_slots_per_epoch")
.long("target-slots-per-epoch")
.value_name("SLOTS")
.takes_value(true)
.help(
"Wait until epochs are this many slots long.",
),
)
} }
/// Parses a clap `ArgMatches` structure into a `Config` /// Parses a clap `ArgMatches` structure into a `Config`
@@ -270,12 +259,5 @@ pub fn extract_args<'a>(matches: &ArgMatches<'a>) -> Config {
args.num_lamports_per_account = v.to_string().parse().expect("can't parse lamports"); args.num_lamports_per_account = v.to_string().parse().expect("can't parse lamports");
} }
if let Some(t) = matches.value_of("target_slots_per_epoch") {
args.target_slots_per_epoch = t
.to_string()
.parse()
.expect("can't parse target slots per epoch");
}
args args
} }

View File

@@ -3,8 +3,8 @@ use solana_bench_tps::bench::{do_bench_tps, generate_and_fund_keypairs, generate
use solana_bench_tps::cli; use solana_bench_tps::cli;
use solana_core::gossip_service::{discover_cluster, get_client, get_multi_client}; use solana_core::gossip_service::{discover_cluster, get_client, get_multi_client};
use solana_genesis::Base64Account; use solana_genesis::Base64Account;
use solana_sdk::fee_calculator::FeeRateGovernor; use solana_sdk::fee_calculator::FeeCalculator;
use solana_sdk::signature::{Keypair, Signer}; use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_program; use solana_sdk::system_program;
use std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc}; use std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc};
@@ -12,10 +12,10 @@ use std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::e
pub const NUM_SIGNATURES_FOR_TXS: u64 = 100_000 * 60 * 60 * 24 * 7; pub const NUM_SIGNATURES_FOR_TXS: u64 = 100_000 * 60 * 60 * 24 * 7;
fn main() { fn main() {
solana_logger::setup_with_default("solana=info"); solana_logger::setup_with_filter("solana=info");
solana_metrics::set_panic_hook("bench-tps"); solana_metrics::set_panic_hook("bench-tps");
let matches = cli::build_args(solana_version::version!()).get_matches(); let matches = cli::build_args(solana_clap_utils::version!()).get_matches();
let cli_config = cli::extract_args(&matches); let cli_config = cli::extract_args(&matches);
let cli::Config { let cli::Config {
@@ -41,7 +41,7 @@ fn main() {
let (keypairs, _) = generate_keypairs(&id, keypair_count as u64); let (keypairs, _) = generate_keypairs(&id, keypair_count as u64);
let num_accounts = keypairs.len() as u64; let num_accounts = keypairs.len() as u64;
let max_fee = let max_fee =
FeeRateGovernor::new(*target_lamports_per_signature, 0).max_lamports_per_signature; FeeCalculator::new(*target_lamports_per_signature, 0).max_lamports_per_signature;
let num_lamports_per_account = (num_accounts - 1 + NUM_SIGNATURES_FOR_TXS * max_fee) let num_lamports_per_account = (num_accounts - 1 + NUM_SIGNATURES_FOR_TXS * max_fee)
/ num_accounts / num_accounts
+ num_lamports_per_account; + num_lamports_per_account;
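The per-account fee allotment above uses the integer ceiling-division idiom: `(n - 1 + x) / n` equals `ceil(x / n)`, so every account receives enough to cover its share of the signature fees. A quick check:

```rust
// (n - 1 + x) / n rounds x / n up in integer arithmetic.
fn ceil_div(x: u64, n: u64) -> u64 {
    (n - 1 + x) / n
}

fn main() {
    assert_eq!(ceil_div(10, 4), 3); // 10 fee-lamports across 4 accounts
    assert_eq!(ceil_div(8, 4), 2); // exact division stays exact
}
```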
@@ -67,10 +67,11 @@ fn main() {
} }
info!("Connecting to the cluster"); info!("Connecting to the cluster");
let nodes = discover_cluster(&entrypoint_addr, *num_nodes).unwrap_or_else(|err| { let (nodes, _archivers) =
eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err); discover_cluster(&entrypoint_addr, *num_nodes).unwrap_or_else(|err| {
exit(1); eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err);
}); exit(1);
});
let client = if *multi_client { let client = if *multi_client {
let (client, num_clients) = get_multi_client(&nodes); let (client, num_clients) = get_multi_client(&nodes);

View File

@@ -8,7 +8,7 @@ use solana_faucet::faucet::run_local_faucet;
use solana_local_cluster::local_cluster::{ClusterConfig, LocalCluster}; use solana_local_cluster::local_cluster::{ClusterConfig, LocalCluster};
#[cfg(feature = "move")] #[cfg(feature = "move")]
use solana_sdk::move_loader::solana_move_loader_program; use solana_sdk::move_loader::solana_move_loader_program;
use solana_sdk::signature::{Keypair, Signer}; use solana_sdk::signature::{Keypair, KeypairUtil};
use std::sync::{mpsc::channel, Arc}; use std::sync::{mpsc::channel, Arc};
use std::time::Duration; use std::time::Duration;

book/README.md (Normal file, 26 lines)
View File

@@ -0,0 +1,26 @@
Building the Solana book
---
Install the book's dependencies, build, and test the book:
```bash
$ ./build.sh
```
Run any Rust tests in the markdown:
```bash
$ make test
```
Render markdown as HTML:
```bash
$ make build
```
Render and view the book:
```bash
$ make open
```

View File

@@ -24,7 +24,7 @@ msc {
... ; ... ;
Validator abox Validator [label="\nmax\nlockout\n"]; Validator abox Validator [label="\nmax\nlockout\n"];
|||; |||;
Cluster box Cluster [label="credits redeemed (at epoch)"]; StakerX => Cluster [label="StakeState::RedeemCredits()"];
StakerY => Cluster [label="StakeState::RedeemCredits()"] ;
} }

book/art/sdk-tools.bob (Normal file, 20 lines)
View File

@@ -0,0 +1,20 @@
.----------------------------------------.
| Solana Runtime |
| |
| .------------. .------------. |
| | | | | |
.-------->| Verifier +-->| Accounts | |
| | | | | | |
.----------. | | `------------` `------------` |
| +--------` | ^ |
| Client | | LoadAccounts | |
| +--------. | .----------------` |
`----------` | | | |
| | .------+-----. .-------------. |
| | | | | | |
`-------->| Loader +-->| Interpreter | |
| | | | | |
| `------------` `-------------` |
| |
`----------------------------------------`

View File

@@ -0,0 +1,18 @@
+------------+
| Bank-Merkle|
+------------+
^ ^
/ \
+-----------------+ +-------------+
| Bank-Diff-Merkle| | Block-Merkle|
+-----------------+ +-------------+
^ ^
/ \
+------+ +--------------------------+
| Hash | | Previous Bank-Diff-Merkle|
+------+ +--------------------------+
^ ^
/ \
+---------------+ +---------------+
| Hash(Account1)| | Hash(Account2)|
+---------------+ +---------------+

book/art/tvu.bob (Normal file, 22 lines)
View File

@@ -0,0 +1,22 @@
.--------.
| Leader |
`--------`
^
|
.------------------------------------|--------------------.
| TVU | |
| | |
| .-------. .------------. .----+---. .---------. |
.------------. | | Shred | | Retransmit | | Replay | | Storage | |
| Upstream +----->| Fetch +-->| Stage +-->| Stage +-->| Stage | |
| Validators | | | Stage | | | | | | | |
`------------` | `-------` `----+-------` `----+---` `---------` |
| ^ | | |
| | | | |
`--------|----------|----------------|--------------------`
| | |
| V v
.+-----------. .------.
| Gossip | | Bank |
| Service | `------`
`------------`

View File

@@ -8,5 +8,3 @@ create-missing = false
[output.html] [output.html]
theme = "theme" theme = "theme"
[output.linkcheck]

View File

@@ -3,13 +3,11 @@ set -e
cd "$(dirname "$0")" cd "$(dirname "$0")"
: "${rust_stable:=}" # Pacify shellcheck usage=$(cargo -q run -p solana-cli -- -C ~/.foo --help | sed 's|'"$HOME"'|~|g')
usage=$(cargo +"$rust_stable" -q run -p solana-cli -- -C ~/.foo --help | sed -e 's|'"$HOME"'|~|g' -e 's/[[:space:]]\+$//') out=${1:-src/api-reference/cli.md}
out=${1:-src/cli/usage.md} cat src/api-reference/.cli.md > "$out"
cat src/cli/.usage.md.header > "$out"
section() { section() {
declare mark=${2:-"###"} declare mark=${2:-"###"}
@@ -27,12 +25,10 @@ section() {
section "$usage" >> "$out" section "$usage" >> "$out"
usage=$(sed -e '/^ \{5,\}/d' <<<"$usage")
in_subcommands=0 in_subcommands=0
while read -r subcommand rest; do while read -r subcommand rest; do
[[ $subcommand == "SUBCOMMANDS:" ]] && in_subcommands=1 && continue [[ $subcommand == "SUBCOMMANDS:" ]] && in_subcommands=1 && continue
if ((in_subcommands)); then if ((in_subcommands)); then
section "$(cargo +"$rust_stable" -q run -p solana-cli -- help "$subcommand" | sed -e 's|'"$HOME"'|~|g' -e 's/[[:space:]]\+$//')" "####" >> "$out" section "$(cargo -q run -p solana-cli -- help "$subcommand" | sed 's|'"$HOME"'|~|g')" "####" >> "$out"
fi fi
done <<<"$usage">>"$out" done <<<"$usage">>"$out"

book/build.sh (Executable file, 6 lines)
View File

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
make -j"$(nproc)" test

View File

@@ -1,6 +1,6 @@
BOB_SRCS=$(wildcard art/*.bob) BOB_SRCS=$(wildcard art/*.bob)
MSC_SRCS=$(wildcard art/*.msc) MSC_SRCS=$(wildcard art/*.msc)
MD_SRCS=$(wildcard src/*.md src/*/*.md) src/cli/usage.md MD_SRCS=$(wildcard src/*.md)
SVG_IMGS=$(BOB_SRCS:art/%.bob=src/.gitbook/assets/%.svg) $(MSC_SRCS:art/%.msc=src/.gitbook/assets/%.svg) SVG_IMGS=$(BOB_SRCS:art/%.bob=src/.gitbook/assets/%.svg) $(MSC_SRCS:art/%.msc=src/.gitbook/assets/%.svg)
@@ -15,7 +15,6 @@ test: $(TEST_STAMP)
open: $(TEST_STAMP) open: $(TEST_STAMP)
mdbook build --open mdbook build --open
./set-solana-release-tag.sh
watch: $(SVG_IMGS) watch: $(SVG_IMGS)
mdbook watch mdbook watch
@@ -28,12 +27,6 @@ src/.gitbook/assets/%.svg: art/%.msc
@mkdir -p $(@D) @mkdir -p $(@D)
mscgen -T svg -i $< -o $@ mscgen -T svg -i $< -o $@
../target/debug/solana:
cd ../cli && cargo build
src/cli/usage.md: build-cli-usage.sh ../target/debug/solana
./$<
src/%.md: %.md src/%.md: %.md
@mkdir -p $(@D) @mkdir -p $(@D)
@cp $< $@ @cp $< $@
@@ -44,7 +37,6 @@ $(TEST_STAMP): $(TARGET)
$(TARGET): $(SVG_IMGS) $(MD_SRCS) $(TARGET): $(SVG_IMGS) $(MD_SRCS)
mdbook build mdbook build
./set-solana-release-tag.sh
clean: clean:
rm -f $(SVG_IMGS) src/tests.ok rm -f $(SVG_IMGS) src/tests.ok

View File

Before

Width:  |  Height:  |  Size: 542 KiB

After

Width:  |  Height:  |  Size: 542 KiB

View File

Before

Width:  |  Height:  |  Size: 256 KiB

After

Width:  |  Height:  |  Size: 256 KiB

View File

Before

Width:  |  Height:  |  Size: 269 KiB

After

Width:  |  Height:  |  Size: 269 KiB

book/src/SUMMARY.md (Normal file, 90 lines)
View File

@@ -0,0 +1,90 @@
# Table of contents
* [Introduction](introduction.md)
* [Terminology](terminology.md)
* [Getting Started](getting-started/README.md)
* [Testnet Participation](getting-started/testnet-participation.md)
* [Example Client: Web Wallet](getting-started/webwallet.md)
* [Programming Model](programs/README.md)
* [Example: Tic-Tac-Toe](programs/tictactoe.md)
* [Drones](programs/drones.md)
* [A Solana Cluster](cluster/README.md)
* [Synchronization](cluster/synchronization.md)
* [Leader Rotation](cluster/leader-rotation.md)
* [Fork Generation](cluster/fork-generation.md)
* [Managing Forks](cluster/managing-forks.md)
* [Turbine Block Propagation](cluster/turbine-block-propagation.md)
* [Ledger Replication](cluster/ledger-replication.md)
* [Secure Vote Signing](cluster/vote-signing.md)
* [Stake Delegation and Rewards](cluster/stake-delegation-and-rewards.md)
* [Performance Metrics](cluster/performance-metrics.md)
* [Anatomy of a Validator](validator/README.md)
* [TPU](validator/tpu.md)
* [TVU](validator/tvu/README.md)
* [Blockstore](validator/tvu/blockstore.md)
* [Gossip Service](validator/gossip.md)
* [The Runtime](validator/runtime.md)
* [Anatomy of a Transaction](transaction.md)
* [Running a Validator](running-validator/README.md)
* [Validator Requirements](running-validator/validator-reqs.md)
* [Choosing a Testnet](running-validator/validator-testnet.md)
* [Installing the Validator Software](running-validator/validator-software.md)
* [Starting a Validator](running-validator/validator-start.md)
* [Staking](running-validator/validator-stake.md)
* [Monitoring a Validator](running-validator/validator-monitor.md)
* [Publishing Validator Info](running-validator/validator-info.md)
* [Troubleshooting](running-validator/validator-troubleshoot.md)
* [Running an Archiver](running-archiver.md)
* [Paper Wallet](paper-wallet/README.md)
* [Installation](paper-wallet/installation.md)
* [Paper Wallet Usage](paper-wallet/usage.md)
* [Offline Signing](offline-signing/README.md)
* [Durable Transaction Nonces](offline-signing/durable-nonce.md)
* [API Reference](api-reference/README.md)
* [Transaction](api-reference/transaction-api.md)
* [Instruction](api-reference/instruction-api.md)
* [Blockstreamer](api-reference/blockstreamer.md)
* [JSON RPC API](api-reference/jsonrpc-api.md)
* [JavaScript API](api-reference/javascript-api.md)
* [solana CLI](api-reference/cli.md)
* [Accepted Design Proposals](proposals/README.md)
* [Ledger Replication](proposals/ledger-replication-to-implement.md)
* [Secure Vote Signing](proposals/vote-signing-to-implement.md)
* [Cluster Test Framework](proposals/cluster-test-framework.md)
* [Validator](proposals/validator-proposal.md)
* [Simple Payment and State Verification](proposals/simple-payment-and-state-verification.md)
* [Cross-Program Invocation](proposals/cross-program-invocation.md)
* [Inter-chain Transaction Verification](proposals/interchain-transaction-verification.md)
* [Snapshot Verification](proposals/snapshot-verification.md)
* [Bankless Leader](proposals/bankless-leader.md)
* [Slashing](proposals/slashing.md)
* [Implemented Design Proposals](implemented-proposals/README.md)
* [Blockstore](implemented-proposals/blockstore.md)
* [Cluster Software Installation and Updates](implemented-proposals/installer.md)
* [Cluster Economics](implemented-proposals/ed_overview/README.md)
* [Validation-client Economics](implemented-proposals/ed_overview/ed_validation_client_economics/README.md)
* [State-validation Protocol-based Rewards](implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards.md)
* [State-validation Transaction Fees](implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_state_validation_transaction_fees.md)
* [Replication-validation Transaction Fees](implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md)
* [Validation Stake Delegation](implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_validation_stake_delegation.md)
* [Replication-client Economics](implemented-proposals/ed_overview/ed_replication_client_economics/README.md)
* [Storage-replication Rewards](implemented-proposals/ed_overview/ed_replication_client_economics/ed_rce_storage_replication_rewards.md)
* [Replication-client Reward Auto-delegation](implemented-proposals/ed_overview/ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md)
* [Economic Sustainability](implemented-proposals/ed_overview/ed_economic_sustainability.md)
* [Attack Vectors](implemented-proposals/ed_overview/ed_attack_vectors.md)
* [Economic Design MVP](implemented-proposals/ed_overview/ed_mvp.md)
* [References](implemented-proposals/ed_overview/ed_references.md)
* [Deterministic Transaction Fees](implemented-proposals/transaction-fees.md)
* [Tower BFT](implemented-proposals/tower-bft.md)
* [Leader-to-Leader Transition](implemented-proposals/leader-leader-transition.md)
* [Leader-to-Validator Transition](implemented-proposals/leader-validator-transition.md)
* [Persistent Account Storage](implemented-proposals/persistent-account-storage.md)
* [Reliable Vote Transmission](implemented-proposals/reliable-vote-transmission.md)
* [Repair Service](implemented-proposals/repair-service.md)
* [Testing Programs](implemented-proposals/testing-programs.md)
* [Credit-only Accounts](implemented-proposals/readonly-accounts.md)
* [Embedding the Move Language](implemented-proposals/embedding-move.md)
* [Staking Rewards](implemented-proposals/staking-rewards.md)
* [Rent](implemented-proposals/rent.md)
* [Durable Transaction Nonces](implemented-proposals/durable-tx-nonces.md)
* [Validator Timestamp Oracle](implemented-proposals/validator-timestamp-oracle.md)

View File

@@ -8,7 +8,7 @@ The [solana-cli crate](https://crates.io/crates/solana-cli) provides a command-l
```bash
// Command
-$ solana-keygen pubkey
+$ solana address
// Return
<PUBKEY>
@@ -22,6 +22,12 @@ $ solana airdrop 2
// Return
"2.00000000 SOL"
+// Command
+$ solana airdrop 123 --lamports
+// Return
+"123 lamports"
```
### Get Balance

View File

@@ -0,0 +1,4 @@
# API Reference
The following sections contain API reference material you may find useful when developing applications that utilize a Solana cluster.

View File

@@ -0,0 +1,28 @@
# Blockstreamer
Solana supports a node type called a _blockstreamer_. This validator variation is intended for applications that need to observe the data plane without participating in transaction validation or ledger replication.
A blockstreamer runs without a vote signer, and can optionally stream ledger entries out to a Unix domain socket as they are processed. The JSON-RPC service still functions as on any other node.
To run a blockstreamer, include the `--no-signer` argument and \(optionally\) a `--blockstream` socket location:
```bash
$ ./multinode-demo/validator-x.sh --no-signer --blockstream <SOCKET>
```
The stream will output a series of JSON objects:
* An Entry event JSON object is sent when each ledger entry is processed, with the following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "entry"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `entry`, the entry, as JSON object
* A Block event JSON object is sent when a block is complete, with the following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "block"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `l`, the slot leader id, as base-58 encoded string
* `hash`, the [blockhash](terminology.md#blockhash), as base-58 encoded string
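As a rough sketch of a consumer, the snippet below tails the stream and prints each event; the socket path is a hypothetical example, and the JSON in the comment simply restates the fields listed above:
```rust
use std::io::{BufRead, BufReader};
use std::os::unix::net::UnixStream;

fn main() -> std::io::Result<()> {
    // Connect to whatever path was passed via --blockstream (example path).
    let stream = UnixStream::connect("/tmp/solana-blockstream.sock")?;
    for line in BufReader::new(stream).lines() {
        // Each line is one JSON event, e.g.
        // {"dt":"2020-01-30T19:08:32Z","t":"entry","s":1234,"h":19876,"entry":{...}}
        println!("{}", line?);
    }
    Ok(())
}
```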

File diff suppressed because it is too large

View File

@@ -0,0 +1,38 @@
# Instruction
For the purposes of building a [Transaction](../transaction.md), a more verbose instruction format is used:
* **Instruction:**
* **program\_id:** The pubkey of the on-chain program that executes the
instruction
* **accounts:** An ordered list of accounts that should be passed to
the program processing the instruction, including metadata detailing
if an account is a signer of the transaction and if it is a credit
only account.
* **data:** A byte array that is passed to the program executing the
instruction
A more compact form is actually included in a `Transaction`:
* **CompiledInstruction:**
* **program\_id\_index:** The index of the `program_id` in the
`account_keys` list
* **accounts:** An ordered list of indices into `account_keys`
specifying the accounts that should be passed to the program
processing the instruction.
* **data:** A byte array that is passed to the program executing the
instruction
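For illustration only, the two formats could be modeled in Rust roughly as below; the type and field names mirror this page's prose rather than the exact solana-sdk definitions:
```rust
pub struct Pubkey(pub [u8; 32]);

pub struct AccountMeta {
    pub pubkey: Pubkey,
    pub is_signer: bool,      // account must sign the transaction
    pub is_credit_only: bool, // balance may only be increased; data is read-only
}

// The verbose form used when building a transaction.
pub struct Instruction {
    pub program_id: Pubkey,         // on-chain program that executes the instruction
    pub accounts: Vec<AccountMeta>, // ordered accounts with signer/credit-only metadata
    pub data: Vec<u8>,              // opaque bytes handed to the program
}

// The compact form actually embedded in a Transaction.
pub struct CompiledInstruction {
    pub program_id_index: u8, // index of the program_id in account_keys
    pub accounts: Vec<u8>,    // indices into account_keys
    pub data: Vec<u8>,
}
```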

File diff suppressed because it is too large

View File

@@ -0,0 +1,62 @@
# Transaction
## Components of a `Transaction`
* **Transaction:**
* **message:** Defines the transaction
* **header:** Details the account types of and signatures required by
the transaction
* **num\_required\_signatures:** The total number of signatures
required to make the transaction valid.
* **num\_credit\_only\_signed\_accounts:** The last
`num_readonly_signed_accounts` signatures refer to signing
credit only accounts. Credit only accounts can be used concurrently
by multiple parallel transactions, but their balance may only be
increased, and their account data is read-only.
* **num\_credit\_only\_unsigned\_accounts:** The last
`num_readonly_unsigned_accounts` public keys in `account_keys` refer
to non-signing credit only accounts
* **account\_keys:** List of public keys used by the transaction, including
by the instructions and for signatures. The first
`num_required_signatures` public keys must sign the transaction.
* **recent\_blockhash:** The ID of a recent ledger entry. Validators will
reject transactions with a `recent_blockhash` that is too old.
* **instructions:** A list of [instructions](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/instruction.md) that are
run sequentially and committed in one atomic transaction if all
succeed.
* **signatures:** A list of signatures applied to the transaction. The
list is always of length `num_required_signatures`, and the signature
at index `i` corresponds to the public key at index `i` in `account_keys`.
The list is initialized with empty signatures \(i.e. zeros\), and
populated as signatures are added.
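Putting the pieces together, a hedged Rust sketch of the containers described above (field names follow the prose, not the exact solana-sdk types):
```rust
pub struct CompiledInstruction {
    pub program_id_index: u8,
    pub accounts: Vec<u8>,
    pub data: Vec<u8>,
}

pub struct MessageHeader {
    pub num_required_signatures: u8,
    pub num_credit_only_signed_accounts: u8,
    pub num_credit_only_unsigned_accounts: u8,
}

pub struct Message {
    pub header: MessageHeader,
    pub account_keys: Vec<[u8; 32]>, // first num_required_signatures keys must sign
    pub recent_blockhash: [u8; 32],
    pub instructions: Vec<CompiledInstruction>, // run sequentially, committed atomically
}

pub struct Transaction {
    pub signatures: Vec<[u8; 64]>, // signatures[i] signs for account_keys[i]
    pub message: Message,          // the bytes that actually get signed
}
```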
## Transaction Signing
A `Transaction` is signed by using an ed25519 keypair to sign the serialization of the `message`. The resulting signature is placed at the index of `signatures` matching the index of the keypair's public key in `account_keys`.
## Transaction Serialization
`Transaction`s \(and their `message`s\) are serialized and deserialized using the [bincode](https://crates.io/crates/bincode) crate with a non-standard vector serialization that uses only one byte for the length if it can be encoded in 7 bits, 2 bytes if it fits in 14 bits, or 3 bytes if it requires 15 or 16 bits. The vector serialization is defined by Solana's [short-vec](https://github.com/solana-labs/solana/blob/master/sdk/src/short_vec.rs).
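A hedged sketch of that length encoding (illustrative, not the short-vec source itself): each byte carries 7 bits of the length, and the high bit signals that another byte follows.
```rust
fn encode_len(mut len: u16) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let mut byte = (len & 0x7f) as u8; // low 7 bits
        len >>= 7;
        if len == 0 {
            out.push(byte);
            return out;
        }
        byte |= 0x80; // continuation bit: more length bytes follow
        out.push(byte);
    }
}

fn main() {
    assert_eq!(encode_len(0x05), vec![0x05]);               // fits in 7 bits: 1 byte
    assert_eq!(encode_len(0x80), vec![0x80, 0x01]);         // needs 8 bits: 2 bytes
    assert_eq!(encode_len(0x4000), vec![0x80, 0x80, 0x01]); // needs 15 bits: 3 bytes
}
```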

View File

@@ -4,7 +4,7 @@ A validator votes on a PoH hash for two purposes. First, the vote indicates it
believes the ledger is valid up until that point in time. Second, since many
valid forks may exist at a given height, the vote also indicates exclusive
support for the fork. This document describes only the former. The latter is
-described in [Tower BFT](../implemented-proposals/tower-bft.md).
+described in [Tower BFT](tower-bft.md).
## Current Design
@@ -50,7 +50,7 @@ log the time since the NewBlock transaction was submitted.
### Finality and Payouts
-[Tower BFT](../implemented-proposals/tower-bft.md) is the proposed fork selection algorithm. It proposes
+[Tower BFT](tower-bft.md) is the proposed fork selection algorithm. It proposes
that payment to miners be postponed until the *stack* of validator votes reaches
a certain depth, at which point rollback is not economically feasible. The vote
program may therefore implement Tower BFT. Vote instructions would need to

View File

@@ -1,20 +1,20 @@
# A Solana Cluster
-A Solana cluster is a set of validators working together to serve client transactions and maintain the integrity of the ledger. Many clusters may coexist. When two clusters share a common genesis block, they attempt to converge. Otherwise, they simply ignore the existence of the other. Transactions sent to the wrong one are quietly rejected. In this section, we'll discuss how a cluster is created, how nodes join the cluster, how they share the ledger, how they ensure the ledger is replicated, and how they cope with buggy and malicious nodes.
+A Solana cluster is a set of validators working together to serve client transactions and maintain the integrity of the ledger. Many clusters may coexist. When two clusters share a common genesis block, they attempt to converge. Otherwise, they simply ignore the existence of the other. Transactions sent to the wrong one are quietly rejected. In this chapter, we'll discuss how a cluster is created, how nodes join the cluster, how they share the ledger, how they ensure the ledger is replicated, and how they cope with buggy and malicious nodes.
## Creating a Cluster
-Before starting any validators, one first needs to create a _genesis config_. The config references two public keys, a _mint_ and a _bootstrap validator_. The validator holding the bootstrap validator's private key is responsible for appending the first entries to the ledger. It initializes its internal state with the mint's account. That account will hold the number of native tokens defined by the genesis config. The second validator then contacts the bootstrap validator to register as a _validator_. Additional validators then register with any registered member of the cluster.
+Before starting any validators, one first needs to create a _genesis config_. The config references two public keys, a _mint_ and a _bootstrap leader_. The validator holding the bootstrap leader's private key is responsible for appending the first entries to the ledger. It initializes its internal state with the mint's account. That account will hold the number of native tokens defined by the genesis config. The second validator then contacts the bootstrap leader to register as a _validator_ or _archiver_. Additional validators then register with any registered member of the cluster.
-A validator receives all entries from the leader and submits votes confirming those entries are valid. After voting, the validator is expected to store those entries. Once the validator observes a sufficient number of copies exist, it deletes its copy.
+A validator receives all entries from the leader and submits votes confirming those entries are valid. After voting, the validator is expected to store those entries until archiver nodes submit proofs that they have stored copies of it. Once the validator observes a sufficient number of copies exist, it deletes its copy.
## Joining a Cluster
-Validators enter the cluster via registration messages sent to its _control plane_. The control plane is implemented using a _gossip_ protocol, meaning that a node may register with any existing node, and expect its registration to propagate to all nodes in the cluster. The time it takes for all nodes to synchronize is proportional to the square of the number of nodes participating in the cluster. Algorithmically, that's considered very slow, but in exchange for that time, a node is assured that it eventually has all the same information as every other node, and that that information cannot be censored by any one node.
+Validators and archivers enter the cluster via registration messages sent to its _control plane_. The control plane is implemented using a _gossip_ protocol, meaning that a node may register with any existing node, and expect its registration to propagate to all nodes in the cluster. The time it takes for all nodes to synchronize is proportional to the square of the number of nodes participating in the cluster. Algorithmically, that's considered very slow, but in exchange for that time, a node is assured that it eventually has all the same information as every other node, and that that information cannot be censored by any one node.
## Sending Transactions to a Cluster
-Clients send transactions to any validator's Transaction Processing Unit \(TPU\) port. If the node is in the validator role, it forwards the transaction to the designated leader. If in the leader role, the node bundles incoming transactions, timestamps them creating an _entry_, and pushes them onto the cluster's _data plane_. Once on the data plane, the transactions are validated by validator nodes, effectively appending them to the ledger.
+Clients send transactions to any validator's Transaction Processing Unit \(TPU\) port. If the node is in the validator role, it forwards the transaction to the designated leader. If in the leader role, the node bundles incoming transactions, timestamps them creating an _entry_, and pushes them onto the cluster's _data plane_. Once on the data plane, the transactions are validated by validator nodes and replicated by archiver nodes, effectively appending them to the ledger.
## Confirming Transactions
@@ -37,4 +37,4 @@ Solana rotates leaders at fixed intervals, called _slots_. Each leader may only
Next, transactions are broken into batches so that a node can send transactions to multiple parties without making multiple copies. If, for example, the leader needed to send 60 transactions to 6 nodes, it would break that collection of 60 into batches of 10 transactions and send one to each node. This allows the leader to put 60 transactions on the wire, not 60 transactions for each node. Each node then shares its batch with its peers. Once the node has collected all 6 batches, it reconstructs the original set of 60 transactions.
-A batch of transactions can only be split so many times before it is so small that header information becomes the primary consumer of network bandwidth. At the time of this writing, the approach is scaling well up to about 150 validators. To scale up to hundreds of thousands of validators, each node can apply the same technique as the leader node to another set of nodes of equal size. We call the technique [_Turbine Block Propagation_](turbine-block-propagation.md).
+A batch of transactions can only be split so many times before it is so small that header information becomes the primary consumer of network bandwidth. At the time of this writing, the approach is scaling well up to about 150 validators. To scale up to hundreds of thousands of validators, each node can apply the same technique as the leader node to another set of nodes of equal size. We call the technique _data plane fanout_; learn more in the [data plane fanout](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/data-plane-fanout.md) section.

View File

@@ -1,6 +1,6 @@
# Fork Generation
-This section describes how forks naturally occur as a consequence of [leader rotation](leader-rotation.md).
+This chapter describes how forks naturally occur as a consequence of [leader rotation](leader-rotation.md).
## Overview

View File

@@ -1,15 +1,5 @@
# Ledger Replication
Note: this ledger replication solution was partially implemented, but not
completed. The partial implementation was removed by
https://github.com/solana-labs/solana/pull/9992 in order to prevent the security
risk of unused code. The first part of this design document reflects the
once-implemented parts of ledger replication. The
[second part of this document](#ledger-replication-not-implemented) describes the
parts of the solution never implemented.
## Proof of Replication
At full capacity on a 1gbps network Solana will generate 4 petabytes of data per year. To prevent the network from centralizing around validators that have to store the full data set, this protocol proposes a way for mining nodes to provide storage capacity for pieces of the data.
The basic idea of Proof of Replication is encrypting a dataset with a public symmetric key using CBC encryption, then hashing the encrypted dataset. The main problem with the naive approach is that a dishonest storage node can stream the encryption and delete the data as it's hashed. The simple solution is to periodically regenerate the hash based on a signed PoH value. This ensures that all the data is present during the generation of the proof and it also requires validators to have the entirety of the encrypted data present for verification of every proof of every identity. So the space required to validate is `number_of_proofs * data_size`.
@@ -277,139 +267,3 @@ Some percentage of fake proofs are also necessary to receive a reward from stora
are storing that slice of the ledger.
# Ledger Replication Not Implemented
Replication behavior yet to be implemented.
## Storage epoch
The storage epoch should be the number of slots which results in around 100GB-1TB of ledger to be generated for archivers to store. Archivers will start storing ledger when a given fork has a high probability of not being rolled back.
## Validator behavior
1. Every NUM\_KEY\_ROTATION\_TICKS it also validates samples received from
archivers. It signs the PoH hash at that point and uses the following
algorithm with the signature as the input:
* The low 5 bits of the first byte of the signature creates an index into
another starting byte of the signature.
* The validator then looks at the set of storage proofs where the byte of
the proof's sha state vector starting from the low byte matches exactly
with the chosen byte\(s\) of the signature.
* If the set of proofs is larger than the validator can handle, then it
increases to matching 2 bytes in the signature.
* Validator continues to increase the number of matching bytes until a
workable set is found.
* It then creates a mask of valid proofs and fake proofs and sends it to
the leader. This is a storage proof confirmation transaction.
2. After a lockout period of NUM\_SECONDS\_STORAGE\_LOCKOUT seconds, the
validator then submits a storage proof claim transaction which then causes the
distribution of the storage reward if no challenges were seen for the proof to
the validators and archivers party to the proofs.
## Archiver behavior
1. The archiver then generates another set of offsets for which it submits a fake
proof with an incorrect sha state. It can be proven to be fake by providing the
seed for the hash result.
* A fake proof should consist of an archiver hash of a signature of a PoH
value. That way when the archiver reveals the fake proof, it can be
verified on chain.
2. The archiver monitors the ledger, if it sees a fake proof integrated, it
creates a challenge transaction and submits it to the current leader. The
transaction proves the validator incorrectly validated a fake storage proof.
The archiver is rewarded and the validator's staking balance is slashed or
frozen.
## Storage proof contract logic
Each archiver and validator will have their own storage account. The validator's account would be separate from their gossip id, similar to their vote account. These should be implemented as two programs: one which handles the validator as the keysigner and one for the archiver. In that way when the programs reference other accounts, they can check the program id to ensure it is a validator or archiver account they are referencing.
### SubmitMiningProof
```text
SubmitMiningProof {
slot: u64,
sha_state: Hash,
signature: Signature,
};
keys = [archiver_keypair]
```
Archivers create these after mining their stored ledger data for a certain hash value. The slot is the end slot of the segment of ledger they are storing, and the sha\_state is the result of the archiver using the hash function to sample their encrypted ledger segment. The signature is the signature that was created when they signed a PoH value for the current storage epoch. The list of proofs from the current storage epoch should be saved in the account state, and then transferred to a list of proofs for the previous epoch when the epoch passes. In a given storage epoch a given archiver should only submit proofs for one segment.
The program should have a list of slots which are valid storage mining slots. This list should be maintained by keeping track of rooted slots that a significant portion of the network has voted on with a high lockout value, maybe 32-votes old. Every SLOTS\_PER\_SEGMENT number of slots would be added to this set. The program should check that the slot is in this set. The set can be maintained by receiving an AdvertiseStorageRecentBlockHash and checking with its bank/Tower BFT state.
The program should do a signature verify check on the signature, public key from the transaction submitter and the message of the previous storage epoch PoH value.
### ProofValidation
```text
ProofValidation {
proof_mask: Vec<ProofStatus>,
}
keys = [validator_keypair, archiver_keypair(s) (unsigned)]
```
A validator will submit this transaction to indicate that a set of proofs for a given segment are valid/not-valid or skipped where the validator did not look at it. The keypairs for the archivers that it looked at should be referenced in the keys so the program logic can go to those accounts and see that the proofs are generated in the previous epoch. The sampling of the storage proofs should be verified ensuring that the correct proofs are skipped by the validator according to the logic outlined in the validator behavior of sampling.
The included archiver keys will indicate the storage samples which are being referenced; the length of the proof\_mask should be verified against the set of storage proofs in the referenced archiver account\(s\), and should match with the number of proofs submitted in the previous storage epoch in the state of said archiver account.
### ClaimStorageReward
```text
ClaimStorageReward {
}
keys = [validator_keypair or archiver_keypair, validator/archiver_keypairs (unsigned)]
```
Archivers and validators will use this transaction to get paid tokens from a program state where SubmitStorageProof, ProofValidation and ChallengeProofValidations are in a state where proofs have been submitted and validated and there are no ChallengeProofValidations referencing those proofs. For a validator, it should reference the archiver keypairs to which it has validated proofs in the relevant epoch. And for an archiver it should reference validator keypairs for which it has validated and wants to be rewarded.
### ChallengeProofValidation
```text
ChallengeProofValidation {
proof_index: u64,
hash_seed_value: Vec<u8>,
}
keys = [archiver_keypair, validator_keypair]
```
This transaction is for catching lazy validators who are not doing the work to validate proofs. An archiver will submit this transaction when it sees a validator has approved a fake SubmitMiningProof transaction. Since the archiver is a light client not looking at the full chain, it will have to ask a validator or some set of validators for this information, maybe via RPC call, to obtain all ProofValidations for a certain segment in the previous storage epoch. The program will look in the validator account state, see that a ProofValidation is submitted in the previous storage epoch, and hash the hash\_seed\_value and see that the hash matches the SubmitMiningProof transaction and that the validator marked it as valid. If so, then it will save the challenge to the list of challenges that it has in its state.
### AdvertiseStorageRecentBlockhash
```text
AdvertiseStorageRecentBlockhash {
hash: Hash,
slot: u64,
}
```
Validators and archivers will submit this to indicate that a new storage epoch has passed and that the storage proofs which are current proofs should now be for the previous epoch. Other transactions should check to see that the epoch that they are referencing is accurate according to current chain state.

View File

@@ -1,6 +1,6 @@
# Stake Delegation and Rewards
-Stakers are rewarded for helping to validate the ledger. They do this by delegating their stake to validator nodes. Those validators do the legwork of replaying the ledger and send votes to a per-node vote account to which stakers can delegate their stakes. The rest of the cluster uses those stake-weighted votes to select a block when forks arise. Both the validator and staker need some economic incentive to play their part. The validator needs to be compensated for its hardware and the staker needs to be compensated for the risk of getting its stake slashed. The economics are covered in [staking rewards](../implemented-proposals/staking-rewards.md). This section, on the other hand, describes the underlying mechanics of its implementation.
+Stakers are rewarded for helping to validate the ledger. They do this by delegating their stake to validator nodes. Those validators do the legwork of replaying the ledger and send votes to a per-node vote account to which stakers can delegate their stakes. The rest of the cluster uses those stake-weighted votes to select a block when forks arise. Both the validator and staker need some economic incentive to play their part. The validator needs to be compensated for its hardware and the staker needs to be compensated for the risk of getting its stake slashed. The economics are covered in [staking rewards](../proposals/staking-rewards.md). This chapter, on the other hand, describes the underlying mechanics of its implementation.
## Basic Design
@@ -29,7 +29,11 @@ VoteState is the current state of all the votes the validator has submitted to t
* Account::lamports - The accumulated lamports from the commission. These do not count as stakes.
* `authorized_voter` - Only this identity is authorized to submit votes. This field can only be modified by this identity.
* `node_pubkey` - The Solana node that votes in this account.
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address and the authorized vote signer
### VoteInstruction::Initialize\(VoteInit\)
@@ -44,11 +48,13 @@ VoteState is the current state of all the votes the validator has submitted to t
Updates the account with a new authorized voter or withdrawer, according to the VoteAuthorize parameter \(`Voter` or `Withdrawer`\). The transaction must be signed by the Vote account's current `authorized_voter` or `authorized_withdrawer`.
* `account[0]` - RW - The VoteState
`VoteState::authorized_voter` or `authorized_withdrawer` is set to `Pubkey`.
### VoteInstruction::Vote\(Vote\)
* `account[0]` - RW - The VoteState
`VoteState::lockouts` and `VoteState::credits` are updated according to voting lockout rules; see [Tower BFT](../implemented-proposals/tower-bft.md)
* `account[1]` - RO - `sysvar::slot_hashes` A list of some N most recent slots and their hashes for the vote to be verified against.
@@ -67,10 +73,18 @@ StakeState::Stake is the current delegation preference of the **staker** and con
* `voter_pubkey` - The pubkey of the VoteState instance the lamports are delegated to.
* `credits_observed` - The total credits claimed over the lifetime of the program.
* `activated` - the epoch at which this stake was activated/delegated. The full stake will be counted after warm up.
* `deactivated` - the epoch at which this stake was de-activated, some cool down epochs are required before the account is fully deactivated, and the stake available for withdrawal
* `authorized_staker` - the pubkey of the entity that must sign delegation, activation, and deactivation transactions
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address, and the authorized staker
### StakeState::RewardsPool
@@ -80,22 +94,42 @@ The Stakes and the RewardsPool are accounts that are owned by the same `Stake` p
### StakeInstruction::DelegateStake
-The Stake account is moved from Initialized to StakeState::Stake form, or from a deactivated (i.e. fully cooled-down) StakeState::Stake to activated StakeState::Stake. This is how stakers choose the vote account and validator node to which their stake account lamports are delegated. The transaction must be signed by the stake's `authorized_staker`.
+The Stake account is moved from Initialized to StakeState::Stake form. This is how stakers choose their initial delegate validator node and activate their stake account lamports. The transaction must be signed by the stake's `authorized_staker`. If the stake account is already StakeState::Stake \(i.e. already activated\), the stake is re-delegated. Stakes may be re-delegated at any time, and updated stakes are reflected immediately, but only one re-delegation is permitted per epoch.
* `account[0]` - RW - The StakeState::Stake instance. `StakeState::Stake::credits_observed` is initialized to `VoteState::credits`, `StakeState::Stake::voter_pubkey` is initialized to `account[1]`. If this is the initial delegation of stake, `StakeState::Stake::stake` is initialized to the account's balance in lamports, `StakeState::Stake::activated` is initialized to the current Bank epoch, and `StakeState::Stake::deactivated` is initialized to std::u64::MAX
* `account[1]` - R - The VoteState instance.
* `account[2]` - R - sysvar::clock account, carries information about current Bank epoch
-* `account[3]` - R - sysvar::stakehistory account, carries information about stake history
+* `account[3]` - R - stake::Config account, carries warmup, cooldown, and slashing configuration
-* `account[4]` - R - stake::Config account, carries warmup, cooldown, and slashing configuration
### StakeInstruction::Authorize\(Pubkey, StakeAuthorize\)
-Updates the account with a new authorized staker or withdrawer, according to the StakeAuthorize parameter \(`Staker` or `Withdrawer`\). The transaction must be signed by the Stake account's current `authorized_staker` or `authorized_withdrawer`. Any stake lock-up must have expired, or the lock-up custodian must also sign the transaction.
+Updates the account with a new authorized staker or withdrawer, according to the StakeAuthorize parameter \(`Staker` or `Withdrawer`\). The transaction must be signed by the Stake account's current `authorized_staker` or `authorized_withdrawer`.
* `account[0]` - RW - The StakeState
`StakeState::authorized_staker` or `authorized_withdrawer` is set to `Pubkey`.
### StakeInstruction::RedeemVoteCredits
The staker or the owner of the Stake account sends a transaction with this instruction to claim rewards.
The Vote account and the Stake account pair maintain a lifetime counter of total rewards generated and claimed. Rewards are paid according to a point value supplied by the Bank from inflation. A `point` is one credit \* one staked lamport; rewards paid are proportional to the number of lamports staked.
* `account[0]` - RW - The StakeState::Stake instance that is redeeming rewards.
* `account[1]` - R - The VoteState instance, must be the same as `StakeState::voter_pubkey`
* `account[2]` - RW - The StakeState::RewardsPool instance that will fulfill the request \(picked at random\).
* `account[3]` - R - sysvar::rewards account from the Bank that carries point value.
* `account[4]` - R - sysvar::stake\_history account from the Bank that carries stake warmup/cooldown history
Reward is paid out for the difference between `VoteState::credits` and `StakeState::Stake::credits_observed`, multiplied by `sysvar::rewards::Rewards::validator_point_value`. `StakeState::Stake::credits_observed` is updated to `VoteState::credits`. The commission is deposited into the Vote account token balance, the reward is deposited to the Stake account token balance, and the stake account's `stake` is increased by the same amount \(re-invested\).
```text
let credits_to_claim = vote_state.credits - stake_state.credits_observed;
stake_state.credits_observed = vote_state.credits;
```
`credits_to_claim` is used to compute the reward and commission, and `StakeState::Stake::credits_observed` is updated to the latest `VoteState::credits` value.
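Extending that pseudocode, the payout arithmetic might look roughly as follows; the function shape and the commission handling are illustrative, not the program's exact implementation:
```rust
fn redeem_vote_credits(
    vote_credits: u64,     // VoteState::credits
    credits_observed: u64, // StakeState::Stake::credits_observed
    staked_lamports: u64,
    point_value: f64,      // sysvar::rewards validator point value
    commission: f64,       // validator commission, e.g. 0.10
) -> (u64, u64) /* (to vote account, to stake account) */ {
    let credits_to_claim = vote_credits - credits_observed;
    // one point = one credit * one staked lamport
    let points = credits_to_claim as u128 * staked_lamports as u128;
    let total = (points as f64 * point_value) as u64;
    let to_vote = (total as f64 * commission) as u64; // commission
    (to_vote, total - to_vote)                        // remainder is re-invested
}
```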
### StakeInstruction::Deactivate
A staker may wish to withdraw from the network. To do so he must first deactivate his stake, and wait for cool down.
@@ -128,11 +162,11 @@ Lamports build up over time in a Stake account and any excess over activated sta
## Staking Rewards
-The specific mechanics and rules of the validator rewards regime are outlined here. Rewards are earned by delegating stake to a validator that is voting correctly. Voting incorrectly exposes that validator's stakes to [slashing](../proposals/slashing.md).
+The specific mechanics and rules of the validator rewards regime are outlined here. Rewards are earned by delegating stake to a validator that is voting correctly. Voting incorrectly exposes that validator's stakes to [slashing](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/staking-and-rewards.md).
### Basics
-The network pays rewards from a portion of network [inflation](../terminology.md#inflation). The number of lamports available to pay rewards for an epoch is fixed and must be evenly divided among all staked nodes according to their relative stake weight and participation. The weighting unit is called a [point](../terminology.md#point).
+The network pays rewards from a portion of network [inflation](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/inflation.md). The number of lamports available to pay rewards for an epoch is fixed and must be evenly divided among all staked nodes according to their relative stake weight and participation. The weighting unit is called a [point](../terminology.md#point).
Rewards for an epoch are not available until the end of that epoch.
@@ -154,7 +188,7 @@ Stakers who have delegated to that validator earn points in proportion to their
Stakes, once delegated, do not become effective immediately. They must first pass through a warm up period. During this period some portion of the stake is considered "effective", the rest is considered "activating". Changes occur on epoch boundaries.
-The stake program limits the rate of change to total network stake, reflected in the stake program's `config::warmup_rate` \(set to 25% per epoch in the current implementation\).
+The stake program limits the rate of change to total network stake, reflected in the stake program's `config::warmup_rate` \(typically 25% per epoch\).
The amount of stake that can be warmed up each epoch is a function of the previous epoch's total effective stake, total activating stake, and the stake program's configured warmup rate, as in the sketch below.
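A hedged sketch of that rule, assuming a simple pro-rata split among activating accounts (the actual program's bookkeeping differs in detail):
```rust
fn newly_effective(
    prior_effective: u64,  // cluster-wide effective stake last epoch
    prior_activating: u64, // cluster-wide activating stake last epoch
    my_activating: u64,    // this account's still-activating stake
    warmup_rate: f64,      // config::warmup_rate, e.g. 0.25
) -> u64 {
    if prior_activating == 0 {
        return 0;
    }
    // the cluster may warm up at most warmup_rate of last epoch's effective stake
    let budget = (prior_effective as f64 * warmup_rate) as u64;
    let allowed = budget.min(prior_activating);
    // each account receives its pro-rata share of the budget
    (allowed as u128 * my_activating as u128 / prior_activating as u128) as u64
}
```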
@@ -194,4 +228,4 @@ Only lamports in excess of effective+activating stake may be withdrawn at any ti
### Lock-up
-Stake accounts support the notion of lock-up, wherein the stake account balance is unavailable for withdrawal until a specified time. Lock-up is specified as an epoch height, i.e. the minimum epoch height that must be reached by the network before the stake account balance is available for withdrawal, unless the transaction is also signed by a specified custodian. This information is gathered when the stake account is created, and stored in the Lockup field of the stake account's state. Changing the authorized staker or withdrawer is also subject to lock-up, as such an operation is effectively a transfer.
+Stake accounts support the notion of lock-up, wherein the stake account balance is unavailable for withdrawal until a specified time. Lock-up is specified as an epoch height, i.e. the minimum epoch height that must be reached by the network before the stake account balance is available for withdrawal, unless the transaction is also signed by a specified custodian. This information is gathered when the stake account is created, and stored in the Lockup field of the stake account's state.

View File

@@ -18,7 +18,7 @@ Another difference between PoH and VDFs is that a VDF is used only for tracking
## Relationship to Consensus Mechanisms
-Proof of History is not a consensus mechanism, but it is used to improve the performance of Solana's Proof of Stake consensus. It is also used to improve the performance of the data plane protocols.
+Proof of History is not a consensus mechanism, but it is used to improve the performance of Solana's Proof of Stake consensus. It is also used to improve the performance of the data plane and replication protocols.
## More on Proof of History

View File

@@ -8,7 +8,7 @@ During its slot, the leader node distributes shreds between the validator nodes
In order for data plane fanout to work, the entire cluster must agree on how the cluster is divided into neighborhoods. To achieve this, all the recognized validator nodes \(the TVU peers\) are sorted by stake and stored in a list. This list is then indexed in different ways to figure out neighborhood boundaries and retransmit peers. For example, the leader will simply select the first nodes to make up layer 0. These will automatically be the highest stake holders, allowing the heaviest votes to come back to the leader first. Layer-0 and lower-layer nodes use the same logic to find their neighbors and next layer peers.
-To reduce the possibility of attack vectors, each shred is transmitted over a random tree of neighborhoods. Each node uses the same set of nodes representing the cluster. A random tree is generated from the set for each shred using a seed derived from the leader id, slot and shred index.
+To reduce the possibility of attack vectors, each shred is transmitted over a random tree of neighborhoods. Each node uses the same set of nodes representing the cluster. A random tree is generated from the set for each shred using randomness derived from the shred itself. Since the random seed is not known in advance, attacks that try to eclipse neighborhoods from certain leaders or blocks become very difficult, and should require almost complete control of the stake in the cluster.
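A minimal sketch of such a deterministic shuffle, assuming the `rand` and `rand_chacha` crates and an illustrative seed layout built from the leader id, slot and shred index \(per the first wording above\):
```rust
use rand::seq::SliceRandom;
use rand::SeedableRng;
use rand_chacha::ChaChaRng;

fn shuffle_peers(
    mut peers: Vec<[u8; 32]>, // the agreed, stake-sorted node list
    leader_id: &[u8; 32],
    slot: u64,
    shred_index: u32,
) -> Vec<[u8; 32]> {
    // Fold (slot, shred index, leader id) into a 32-byte seed; layout is illustrative.
    let mut seed = [0u8; 32];
    seed[..8].copy_from_slice(&slot.to_le_bytes());
    seed[8..12].copy_from_slice(&shred_index.to_le_bytes());
    for (s, l) in seed[12..].iter_mut().zip(leader_id.iter()) {
        *s ^= l;
    }
    // Every node derives the same RNG, hence the same tree, for this shred.
    let mut rng = ChaChaRng::from_seed(seed);
    peers.shuffle(&mut rng);
    peers
}
```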
## Layer and Neighborhood Structure

View File

@@ -0,0 +1,18 @@
## Storage Rent Economics
Each transaction that is submitted to the Solana ledger imposes costs. Transaction fees paid by the submitter, and collected by a validator, in theory, account for the acute, transactional costs of validating and adding that data to the ledger. At the same time, our compensation design for archivers (see [Replication-client Economics](ed_replication_client_economics.md)), in theory, accounts for the long-term storage of the historical ledger. Unaccounted for in this process is the mid-term storage of active ledger state, necessarily maintained by the rotating validator set. This type of storage imposes costs not only on validators but also on the broader network, as active state grows so do data transmission and validation overhead. To account for these costs, we describe here our preliminary design and implementation of storage rent.
Storage rent can be paid via one of two methods:
Method 1: Set it and forget it
With this approach, accounts with two years' worth of rent deposits secured are exempt from network rent charges. By maintaining this minimum balance, the broader network benefits from reduced liquidity and the account holder can trust that their `Account::data` will be retained for continual access/usage.
Method 2: Pay per byte
If an account has less than two years' worth of deposited rent, the network charges rent on a per-epoch basis, in credit for the next epoch (but in arrears when necessary). This rent is deducted at a rate specified in genesis, in lamports per kilobyte-year.
For information on the technical implementation details of this design, see the [Rent](rent.md) section.
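As a hedged sketch of the two methods, with illustrative rates standing in for the genesis values:
```rust
// Illustrative parameters; the real values are set in the genesis config.
const LAMPORTS_PER_KILOBYTE_YEAR: f64 = 1_000_000.0;
const EXEMPTION_THRESHOLD_YEARS: f64 = 2.0;

// Method 1: minimum balance at which an account is rent-exempt.
fn rent_exempt_minimum(data_len_bytes: u64) -> u64 {
    (data_len_bytes as f64 / 1024.0 * LAMPORTS_PER_KILOBYTE_YEAR * EXEMPTION_THRESHOLD_YEARS)
        as u64
}

// Method 2: rent due for one epoch, charged per byte.
fn rent_due_for_epoch(data_len_bytes: u64, epochs_per_year: f64) -> u64 {
    (data_len_bytes as f64 / 1024.0 * LAMPORTS_PER_KILOBYTE_YEAR / epochs_per_year) as u64
}

fn main() {
    // A 1 KiB account would need 2,000,000 lamports on deposit to be exempt here.
    assert_eq!(rent_exempt_minimum(1024), 2_000_000);
}
```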

View File

@@ -1,4 +1,4 @@
-# Benchmark a Cluster
+# Getting Started
The Solana git repository contains all the scripts you might need to spin up your own local testnet. Depending on what you're looking to achieve, you may want to run a different variation, as the full-fledged, performance-enhanced multinode testnet is considerably more complex to set up than a Rust-only, singlenode testnode. If you are looking to develop high-level features, such as experimenting with smart contracts, save yourself some setup headaches and stick to the Rust-only singlenode demo. If you're doing performance optimization of the transaction pipeline, consider the enhanced singlenode demo. If you're doing consensus work, you'll need at least a Rust-only multinode demo. If you want to reproduce our TPS metrics, run the enhanced multinode demo.
@@ -52,12 +52,12 @@ $ NDEBUG=1 ./multinode-demo/faucet.sh
### Singlenode Testnet
-Before you start a validator, make sure you know the IP address of the machine you want to be the bootstrap validator for the demo, and make sure that udp ports 8000-10000 are open on all the machines you want to test with.
+Before you start a validator, make sure you know the IP address of the machine you want to be the bootstrap leader for the demo, and make sure that udp ports 8000-10000 are open on all the machines you want to test with.
-Now start the bootstrap validator in a separate shell:
+Now start the bootstrap leader in a separate shell:
```bash
-$ NDEBUG=1 ./multinode-demo/bootstrap-validator.sh
+$ NDEBUG=1 ./multinode-demo/bootstrap-leader.sh
```
Wait a few seconds for the server to initialize. It will print "leader ready..." when it's ready to receive transactions. The leader will request some tokens from the faucet if it doesn't have any. The faucet does not need to be running for subsequent leader starts.
@@ -74,7 +74,7 @@ To run a performance-enhanced validator on Linux, [CUDA 10.0](https://developer.
```bash
$ ./fetch-perf-libs.sh
-$ NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/bootstrap-validator.sh
+$ NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/bootstrap-leader.sh
$ NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/validator.sh
```
@@ -121,12 +121,12 @@ thread apply all bt
This will dump all the threads' stack traces into gdb.txt
-## Developer Testnet
+## Public Testnet
In this example the client connects to our public testnet. To run validators on the testnet you would need to open udp ports `8000-10000`.
```bash
-$ NDEBUG=1 ./multinode-demo/bench-tps.sh --entrypoint devnet.solana.com:8001 --faucet devnet.solana.com:9900 --duration 60 --tx_count 50
+$ NDEBUG=1 ./multinode-demo/bench-tps.sh --entrypoint testnet.solana.com:8001 --faucet testnet.solana.com:9900 --duration 60 --tx_count 50
```
-You can observe the effects of your client's transactions on our [metrics dashboard](https://metrics.solana.com:3000/d/monitor/cluster-telemetry?var-testnet=devnet)
+You can observe the effects of your client's transactions on our [dashboard](https://metrics.solana.com:3000/d/testnet/testnet-hud?orgId=2&from=now-30m&to=now&refresh=5s&var-testnet=testnet)

View File

@@ -0,0 +1,7 @@
# Testnet Participation
Participate in our testnet:
* [Running a Validator](../running-validator/)
* [Running an Archiver](../running-archiver.md)

View File

@@ -85,3 +85,6 @@ Replay stage uses Blockstore APIs to find the longest chain of entries it can ha
## Pruning Blockstore
Once Blockstore entries are old enough, representing all the possible forks becomes less useful, perhaps even problematic for replay upon restart. Once a validator's votes have reached max lockout, however, any Blockstore contents that are not on the PoH chain for that vote can be pruned, expunged.
Archiver nodes will be responsible for storing really old ledger contents, and validators need only persist their bank periodically.

View File

@@ -28,7 +28,7 @@ lockout on a bank `b`.
This computation is performed on a votable candidate bank `b` as follows.
-```text
+```
let output: HashMap<b, StakeLockout> = HashMap::new();
for vote_account in b.vote_accounts {
for v in vote_account.vote_stack {
@@ -62,7 +62,7 @@ votes > v as the number of confirmations will be lower).
Now more specifically, we augment the above computation to:
-```text
+```
let output: HashMap<b, StakeLockout> = HashMap::new();
let fork_commitment_cache = ForkCommitmentCache::default();
for vote_account in b.vote_accounts {
@@ -76,7 +76,7 @@ Now more specifically, we augment the above computation to:
```
where `f'` is defined as:
-```text
+```
fn f`(
stake_lockout: &mut StakeLockout,
some_ancestor: &mut BlockCommitment,

View File

@@ -0,0 +1,16 @@
# Cluster Economics
**Subject to change.**
Solana's crypto-economic system is designed to promote a healthy, long term self-sustaining economy with participant incentives aligned to the security and decentralization of the network. The main participants in this economy are validation-clients and replication-clients. Their contributions to the network, state validation and data storage respectively, and their requisite incentive mechanisms are discussed below.
The main channels of participant remittances are referred to as protocol-based rewards and transaction fees. Protocol-based rewards are issuances from a global, protocol-defined inflation rate. These rewards will constitute the total reward delivered to replication and validation clients, with the remainder sourced from transaction fees. In the early days of the network, it is likely that protocol-based rewards, deployed based on a predefined issuance schedule, will drive the majority of participant incentives to participate in the network.
These protocol-based rewards, to be distributed to participating validation and replication clients, are to be a result of a global supply inflation rate, calculated per Solana epoch and distributed amongst the active validator set. As discussed further below, the per annum inflation rate is based on a pre-determined disinflationary schedule. This provides the network with monetary supply predictability which supports long term economic stability and security.
Transaction fees are market-based participant-to-participant transfers, attached to network interactions as a necessary motivation and compensation for the inclusion and execution of a proposed transaction \(be it a state execution or proof-of-replication verification\). A mechanism for long-term economic stability and forking protection through partial burning of each transaction fee is also discussed below.
A high-level schematic of Solanas crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics/), [State-validation Protocol-based Rewards](ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_validation_client_economics/ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md). Also, the chapter titled [Validation Stake Delegation](ed_validation_client_economics/ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunties and marketplace. Additionally, in [Storage Rent Economics](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_storage_rent_economics.md), we describe an implementation of storage rent to account for the externality costs of maintaining the active state of the ledger. The [Replication-client Economics](ed_replication_client_economics/) chapter will review the Solana network design for global ledger storage/redundancy and archiver-client economics \([Storage-replication rewards](ed_replication_client_economics/ed_rce_storage_replication_rewards.md)\) along with an archiver-to-validator delegation mechanism designed to aide participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md). An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section. Finally, in chapter [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
**Figure 1**: Schematic overview of Solana economic incentive design.

View File

@@ -0,0 +1,14 @@
# Attack Vectors
**Subject to change.**
## Colluding validation and replication clients
A colluding validation-client may take the strategy of marking PoReps from non-colluding archiver nodes as invalid in an attempt to maximize the rewards for the colluding archiver nodes. In this case, it isn't feasible for the offended-against archiver nodes to petition the network for resolution as this would result in a network-wide vote on each offending PoRep and create too much overhead for the network to progress adequately. Also, this mitigation attempt would still be vulnerable to a >= 51% staked colluder.
Alternatively, transaction fees from submitted PoReps are pooled and distributed across validation-clients in proportion to the number of valid PoReps discounted by the number of invalid PoReps as voted by each validator-client. Thus invalid votes are directly dis-incentivized through this reward channel. Invalid votes that are revealed by archiver nodes as fishing PoReps will not be discounted from the payout PoRep count.
Another collusion attack involves a validator-client who may take the strategy to ignore invalid PoReps from colluding archivers and vote them as valid. In this case, colluding archiver-clients would not have to store the data while still receiving rewards for validated PoReps. Additionally, colluding validator nodes would also receive rewards for validating these PoReps. To mitigate this attack, validators must randomly sample PoReps corresponding to the ledger block they are validating and because of this, there will be multiple validators that will receive the colluding archivers' invalid submissions. These non-colluding validators will be incentivized to mark these PoReps as invalid as they have no way to determine whether the proposed invalid PoRep is actually a fishing PoRep, for which a confirmation vote would result in the validator's stake being slashed.
In this case, the proportion of time a colluding pair will be successful has an upper limit determined by the % of stake of the network claimed by the colluding validator. This also sets bounds to the value of such an attack. For example, if a colluding validator controls 10% of the total validator stake, transaction fees will be lost \(likely sent to mining pool\) by the colluding archiver 90% of the time and so the attack vector is only profitable if the per-PoRep reward is at least 90% higher than the average PoRep transaction fee. While, probabilistically, some colluding archiver-client PoReps will find their way to colluding validation-clients, the network can also monitor rates of paired \(validator + archiver\) discrepancies in voting patterns and censor identified colluders in these cases.

View File

@@ -0,0 +1,14 @@
# Economic Sustainability
**Subject to change.**
Long-term economic sustainability is one of the guiding principles of Solana's economic design. While it is impossible to predict how decentralized economies will develop over time, especially economies with flexible decentralized governances, we can arrange economic components such that, under certain conditions, a sustainable economy may take shape in the long term. In the case of Solana's network, these components take the form of token issuance \(via inflation\) and token burning.
The dominant remittances from the Solana mining pool are validator and archiver rewards. The disinflationary mechanism is a flat, protocol-specified and adjusted percentage of each transaction fee.
Archiver rewards are to be delivered to archivers as a portion of the network inflation after successful PoRep validation. The per-PoRep reward amount is determined as a function of the total network storage redundancy at the time of the PoRep validation and the network goal redundancy. This function is likely to take the form of a discount from a base reward to be delivered when the network has achieved and maintained its goal redundancy. An example of such a reward function is shown in **Figure 3**.
**Figure 3**: Example PoRep reward design as a function of global network storage redundancy.
In the example shown in **Figure 3**, multiple per-PoRep base rewards are explored \(as a % of Tx Fee\) to be delivered when the global ledger replication redundancy meets 10X. When the global ledger replication redundancy is less than 10X, the base reward is discounted as a function of the square of the ratio of the actual ledger replication redundancy to the goal redundancy \(i.e. 10X\).
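For illustration, here is a hedged Rust sketch of such a discount curve; the function name, the use of the Tx-fee percentage as the base unit, and the 10x goal are assumptions drawn from the example above, not protocol constants.
```rust
/// Example PoRep reward: the base reward, discounted by the square of the
/// ratio of actual to goal ledger replication redundancy (capped at 1.0).
fn porep_reward(base_reward: f64, actual_redundancy: f64, goal_redundancy: f64) -> f64 {
    let ratio = (actual_redundancy / goal_redundancy).min(1.0);
    base_reward * ratio * ratio
}

fn main() {
    // At 5x redundancy against a 10x goal, (5/10)^2 = 25% of base is paid.
    println!("{}", porep_reward(100.0, 5.0, 10.0)); // 25
    // At or above the goal redundancy, the full base reward is paid.
    println!("{}", porep_reward(100.0, 12.0, 10.0)); // 100
}
```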

View File

@@ -0,0 +1,16 @@
# Economic Design MVP
**Subject to change.**
The preceding sections, outlined in the [Economic Design Overview](./), describe a long-term vision of a sustainable Solana economy. Of course, we don't expect the final implementation to perfectly match what has been described above. We intend to fully engage with network stakeholders throughout the implementation phases \(i.e. pre-testnet, testnet, mainnet\) to ensure the system supports, and is representative of, the various network participants' interests. The first step toward this goal, however, is outlining some desired MVP economic features to be available for early pre-testnet and testnet participants. Below is a rough sketch outlining basic economic functionality from which a more complete and functional system can be developed.
## MVP Economic Features
* Faucet to deliver testnet SOLs to validators for staking and application development.
* Mechanism by which validators are rewarded via network inflation.
* Ability to delegate tokens to validator nodes.
* Validator set commission fees on interest from delegated tokens.
* Archivers to receive fixed, arbitrary reward for submitting validated PoReps. Reward size mechanism \(i.e. PoRep reward as a function of total ledger redundancy\) to come later.
* Pooling of archiver PoRep transaction fees and weighted distribution to validators based on PoRep verification \(see [Replication-validation Transaction Fees](ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md)\). It will be useful to test this protection against attacks on testnet.
* Nice-to-have: auto-delegation of archiver rewards to validator.

View File

@@ -0,0 +1,6 @@
# Replication-client Economics
**Subject to change.**
Replication-clients should be rewarded for providing the network with storage space. Incentivization of the set of archivers provides data security through redundancy of the historical ledger. Replication nodes are rewarded in proportion to the amount of ledger data storage provided, as proved by successfully submitting Proofs-of-Replication to the cluster. These rewards are captured by generating and entering Proofs of Replication \(PoReps\) into the PoH stream which can be validated by validation nodes as described in the [Replication-validation Transaction Fees](../ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md) chapter.

View File

@@ -0,0 +1,8 @@
# Replication-client Reward Auto-delegation
**Subject to change.**
The ability for Solana network participants to earn rewards by providing storage service is a unique on-boarding path that requires little hardware overhead and minimal upfront capital. It offers an avenue for individuals with extra storage space on their home laptops or PCs to contribute to the security of the network and become integrated into the Solana economy.
To enhance this on-boarding ramp and facilitate further participation and investment in the Solana economy, replication-clients have the opportunity to auto-delegate their rewards to validation-clients of their choice. Much like the automatic reinvestment of stock dividends, in this scenario, an archiver-client can earn Solana tokens by providing some storage capacity to the network \(i.e. via submitting valid PoReps\), have the protocol-based rewards automatically assigned as delegation to a staked validator node of the archiver's choice and earn interest, less a fee, from the validation-client's network participation.

View File

@@ -0,0 +1,8 @@
# Storage-replication Rewards
**Subject to change.**
Archiver-clients download, encrypt and submit PoReps for ledger block sections. PoReps submitted to the PoH stream, and subsequently validated, function as evidence that the submitting archiver client is indeed storing the assigned ledger block sections on local hard drive space as a service to the network. Therefore, archiver clients should earn protocol rewards proportional to the amount of storage, and the number of successfully validated PoReps, that they are verifiably providing to the network.
Additionally, archiver clients have the opportunity to capture a portion of slashed bounties \[TBD\] of dishonest validator clients. This can be accomplished by an archiver client submitting a verifiably false PoRep which a dishonest validator client receives and signs as valid. This reward incentive is to prevent lazy validators and minimize validator-archiver collusion attacks; more on this below.

View File

@@ -0,0 +1,8 @@
# Validation-client Economics
**Subject to change.**
Validator-clients are eligible to receive protocol-based \(i.e. inflation-based\) rewards issued via stake-based annual interest rates \(calculated per epoch\) by providing compute \(CPU+GPU\) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic disinflationary schedule as a function of total amount of circulating tokens. The network is expected to launch with an annual inflation rate around 15%, set to decrease by 15% per year until a long-term stable rate of 1-2% is reached. These issuances are to be split and distributed to participating validators and archivers, with around 90% of the issued tokens allocated for validator rewards. Because the network will be distributing a fixed amount of inflation rewards across the stake-weighted validator set, any individual validator's interest rate will be a function of the amount of staked SOL in relation to the circulating SOL.
Additionally, validator clients may earn revenue through fees via state-validation transactions and Proof-of-Replication \(PoRep\) transactions. For clarity, we separately describe the design and motivation of these revenue distributions for validation-clients below: state-validation protocol-based rewards, state-validation transaction fees and rent, and PoRep-validation transaction fees.

View File

@@ -0,0 +1,12 @@
# Replication-validation Transaction Fees
**Subject to change.**
As previously mentioned, validator-clients will also be responsible for validating PoReps submitted into the PoH stream by archiver-clients. In this case, validators are providing compute \(CPU/GPU\) and light storage resources to confirm that these replication proofs could only be generated by a client that is storing the referenced PoH ledger block.
While replication-clients are incentivized and rewarded through a protocol-based rewards schedule \(see [Replication-client Economics](../ed_replication_client_economics/)\), validator-clients will be incentivized to include and validate PoReps in PoH through collection of transaction fees associated with the submitted PoReps and distribution of protocol rewards proportional to the validated PoReps. As will be described in detail in Section 3.1, replication-client rewards are protocol-based and designed to reward based on a global data redundancy factor. I.e. the protocol will incentivize replication-client participation through rewards based on a target ledger redundancy \(e.g. 10x data redundancy\).
The validation of PoReps by validation-clients is computationally more expensive than state-validation \(detailed in the [Economic Sustainability](../ed_economic_sustainability.md) chapter\), thus the transaction fees are expected to be proportionally higher.
There are various attack vectors available for colluding validation and replication clients, also described in detail below in [Economic Sustainability](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_economic_sustainability/README.md). To protect against various collusion attack vectors, for a given epoch, validator rewards are distributed across participating validation-clients in proportion to the number of validated PoReps in the epoch less the number of PoReps that mismatch the archiver's challenge. The PoRep challenge game is described in [Ledger Replication](https://github.com/solana-labs/solana/blob/master/book/src/ledger-replication.md#the-porep-game). This design rewards validators proportional to the number of PoReps they process and validate, while providing negative pressure for validation-clients to submit lazy or malicious invalid votes on submitted PoReps \(note that it is computationally prohibitive to determine whether a validator-client has marked a valid PoRep as invalid\).
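As a rough sketch of that distribution rule, the following Rust snippet divides an epoch's pooled PoRep rewards by validated-less-mismatched counts; all names here are hypothetical, and integer division is used purely for simplicity.
```rust
/// Per-validator PoRep tallies for one epoch (hypothetical shape).
struct PorepStats {
    validated: u64,
    mismatched: u64,
}

/// Distribute `pool` across validators in proportion to
/// (validated PoReps - PoReps that mismatch the archiver's challenge).
fn distribute_porep_rewards(pool: u64, stats: &[PorepStats]) -> Vec<u64> {
    let weights: Vec<u64> = stats
        .iter()
        .map(|s| s.validated.saturating_sub(s.mismatched))
        .collect();
    let total: u64 = weights.iter().sum();
    if total == 0 {
        return vec![0; stats.len()];
    }
    weights.iter().map(|w| pool * w / total).collect()
}
```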

View File

@@ -0,0 +1,30 @@
# State-validation Protocol-based Rewards
**Subject to change.**
Validator-clients have two functional roles in the Solana network:
* Validate \(vote on\) the current global state of the PoH along with any Proofs-of-Replication \(see [Replication Client Economics](../ed_replication_client_economics/)\) that they are eligible to validate.
* Be elected as leader on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and Proofs-of-Replication and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. As previously discussed, compensation for validator-clients is provided via a protocol-based annual inflation rate dispersed in proportion to the stake-weight of each validator \(see below\) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each transaction fee, less a protocol-specified amount that is destroyed \(see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)\). PoRep transaction fees are also collected by the leader client and validator PoRep rewards are distributed in proportion to the number of validated PoReps less the number of PoReps that mismatch an archiver's challenge. \(see [Replication-client Transaction Fees](ed_vce_replication_validation_transaction_fees.md)\)
The effective protocol-based annual interest rate \(%\) per epoch received by validation-clients is to be a function of:
* the current global inflation rate, derived from the pre-determined disinflationary issuance schedule \(see [Validation-client Economics](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_validartion_client_economics.md)\)
* the fraction of staked SOLs out of the current total circulating supply,
* the up-time/participation \[% of available slots that a validator had the opportunity to vote on\] of a given validator over the previous epoch.
The first factor is a function of protocol parameters only \(i.e. independent of validator behavior in a given epoch\) and results in a global validation reward schedule designed to incentivize early participation, provide clear monetary stability and provide optimal security in the network.
At any given point in time, a specific validator's interest rate can be determined based on the proportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens with an additional 250MM vesting over 3 years. Additionally, an inflation rate of 7.5% is specified at network launch, with a disinflationary schedule of 20% decrease in inflation rate per year \(the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch\). With these broad assumptions, the 10-year inflation rate \(adjusted daily for this example\) is shown in **Figure 2**, while the total circulating token supply is illustrated in **Figure 3**. Neglected in this toy model is the inflation suppression due to the portion of each transaction fee that is to be destroyed.
![drawing](../../../.gitbook/assets/p_ex_schedule.png) **Figure 2:** In this example schedule, the annual inflation rate \[%\] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.
![drawing](../../../.gitbook/assets/p_ex_supply.png) **Figure 3:** The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in **Figure 2**.
Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabilize near 1-2%, which also results in a fixed, long-term interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients as transaction fees for state-validation and ledger storage replication \(PoReps\) are not accounted for here. Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators and archiver nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assumed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of % circulating token supply that is staked is shown in **Figure 4**.
![drawing](../../../.gitbook/assets/p_ex_interest.png)
**Figure 4:** Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.
This epoch-specific protocol-defined interest rate sets an upper limit on the _protocol-generated_ annual interest rate \(not the absolute total interest rate\) that can be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch.
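The toy model above is simple enough to state in code. This Rust sketch uses only the example's parameters \(7.5% initial inflation, 20% annual decay, 1.5% floor, an 80% validator share\) and an assumed interest formula of inflation times validator share, divided by the staked fraction and scaled by uptime; none of these are final protocol values.
```rust
/// Example disinflationary schedule: 7.5% initial annual inflation,
/// decaying 20% per year down to a 1.5% long-term floor.
fn inflation_rate(year: u32) -> f64 {
    (0.075 * 0.8_f64.powi(year as i32)).max(0.015)
}

/// Assumed validator interest model: the validator share (80% in the
/// example) of inflation, divided by the fraction of supply staked,
/// discounted by the validator's uptime in the previous epoch.
fn validator_interest(year: u32, staked_fraction: f64, uptime: f64) -> f64 {
    inflation_rate(year) * 0.8 / staked_fraction * uptime
}

fn main() {
    // Year 0, 50% of supply staked, 100% uptime: 7.5% * 0.8 / 0.5 = 12%.
    println!("{:.3}", validator_interest(0, 0.5, 1.0));
}
```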

View File

@@ -9,10 +9,11 @@ Each transaction sent through the network, to be processed by the current leader
* open avenues for a transaction market to incentivize validation-clients to collect and process submitted transactions in their function as leader,
* and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
Many current blockchain economies \(e.g. Bitcoin, Ethereum\) rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is destroyed, with the remaining fee going to the current leader processing the transaction. A scheduled global inflation rate provides a source for rewards distributed to validation-clients, through the process described above, and replication-clients, as discussed below.
Transaction fees are set by the network cluster based on recent historical throughput; see [Congestion Driven Fees](../../transaction-fees.md#congestion-driven-fees). This minimum portion of each transaction fee can be dynamically adjusted depending on historical gas usage. In this way, the protocol can use the minimum fee to target a desired hardware utilization. By monitoring protocol-specified gas usage with respect to a desired, target usage amount, the minimum fee can be raised/lowered, which should, in turn, lower/raise the actual gas usage per block until it reaches the target amount. This adjustment process can be thought of as similar to the difficulty adjustment algorithm in the Bitcoin protocol, however in this case it is adjusting the minimum transaction fee to guide the transaction processing hardware usage to a desired level.
As mentioned, a fixed proportion of each transaction fee is to be destroyed. The intent of this design is to retain leader incentive to include as many transactions as possible within the leader-slot time, while providing an inflation-limiting mechanism that protects against "tax evasion" attacks \(i.e. side-channel fee payments\)[1](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_referenced.md).
Additionally, the burnt fees can be a consideration in fork selection. In the case of a PoH fork with a malicious, censoring leader, we would expect the total fees destroyed to be less than a comparable honest fork, due to the fees lost from censoring. If the censoring leader is to compensate for these lost protocol fees, they would have to replace the burnt fees on their fork themselves, thus potentially reducing the incentive to censor in the first place.
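The raise/lower dynamic described above behaves like a simple feedback controller. The sketch below, with an assumed gain constant and hypothetical names, shows one way the minimum fee could track a target per-block gas usage; it is illustrative, not the protocol's actual algorithm.
```rust
/// Hypothetical controller nudging the minimum fee toward a target usage.
struct FeeController {
    min_fee: f64,
    target_gas_per_block: f64,
}

impl FeeController {
    /// Raise the fee when observed usage exceeds the target (suppressing
    /// demand) and lower it when usage falls short, in proportion to the
    /// relative deviation.
    fn adjust(&mut self, observed_gas_per_block: f64) {
        const GAIN: f64 = 0.05; // assumed damping factor
        let error = observed_gas_per_block / self.target_gas_per_block - 1.0;
        self.min_fee *= 1.0 + GAIN * error;
    }
}

fn main() {
    let mut fc = FeeController { min_fee: 1.0, target_gas_per_block: 1_000.0 };
    fc.adjust(1_200.0); // usage 20% over target -> fee rises 1%
    println!("{:.3}", fc.min_fee); // 1.010
}
```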

View File

@@ -21,7 +21,12 @@ Running a Solana validation-client required relatively modest upfront hardware c
**Table 2** example high-end hardware setup for running a Solana client.
Despite the low barrier to entry as a validation-client, from a capital investment perspective, as in any developing economy, there will be much opportunity and need for trusted validation services as evidenced by node reliability, UX/UI, APIs and other software accessibility tools. Additionally, although Solana's validator node startup costs are nominal when compared to similar networks, they may still be somewhat restrictive for some potential participants. In the spirit of developing a true decentralized, permissionless network, these interested parties still have two options to become involved in the Solana network/economy:
1. Delegation of previously acquired tokens with a reliable validation node to earn a portion of interest generated
2. Provide local storage space as a replication-client and receive rewards by submitting Proof-of-Replication \(see [Replication-client Economics](../ed_replication_client_economics/)\).
a. This participant has the additional option to directly delegate their earned storage rewards \([Replication-client Reward Auto-delegation](../ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md)\)
Delegation of tokens to validation-clients, via option 1, provides a way for passive Solana token holders to become part of the active Solana economy and earn interest rates proportional to the interest rate generated by the delegated validation-client. Additionally, this feature intends to create a healthy validation-client market, with potential validation-client nodes competing to build reliable, transparent and profitable delegation services.

View File

@@ -11,7 +11,7 @@ This document proposes an easy to use software install and updater that can be u
The easiest install method for supported platforms:
```bash
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.18.0/install/solana-install-init.sh | sh
```
This script will check github for the latest tagged release and download and run the `solana-install-init` binary from there.
@@ -20,7 +20,7 @@ If additional arguments need to be specified during the installation, the follow
```bash
$ init_args=.... # arguments for `solana-install-init ...`
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.18.0/install/solana-install-init.sh | sh -s - ${init_args}
```
### Fetch and run a pre-built installer from a Github release
@@ -28,7 +28,7 @@ $ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v1.0.0/install/
With a well-known release URL, a pre-built binary can be obtained for supported platforms:
```bash
$ curl -o solana-install-init https://github.com/solana-labs/solana/releases/download/v0.18.0/solana-install-init-x86_64-apple-darwin
$ chmod +x ./solana-install-init
$ ./solana-install-init --help
```
@@ -77,7 +77,7 @@ pub struct UpdateManifest {
pub download_sha256: String, // SHA256 digest of the release tar.bz2 file
}
/// Userdata of an Update Manifest program Account.
#[derive(Serialize, Deserialize, Default, Debug, PartialEq)]
pub struct SignedUpdateManifest {
pub manifest: UpdateManifest,
@@ -154,7 +154,7 @@ FLAGS:
OPTIONS:
-d, --data_dir <PATH> Directory to store install data [default: .../Library/Application Support/solana]
-u, --url <URL> JSON RPC URL for the solana cluster [default: http://testnet.solana.com:8899]
-p, --pubkey <PUBKEY> Public key of the update manifest [default: 9XX329sPuskWhH4DQh6k16c87dHKhXLBZTL3Gxmve8Gp]
```

Some files were not shown because too many files have changed in this diff.