Compare commits


2621 Commits

Author SHA1 Message Date
fe1676bc3a Review comments 2019-03-11 16:58:43 -06:00
1a9ef37251 Update programs using simple error mapping to use CustomError 2019-03-11 16:58:43 -06:00
db5370c5df Add helper macro to implement bincode serialization of program-specific errors 2019-03-11 16:58:43 -06:00
804378e8f7 Add ProgramError::CustomError and truncate value to 32 bytes 2019-03-11 16:58:43 -06:00
56b0ba2601 KvStore - A data-store to support BlockTree (#2897)
* Mostly implement key-value store and add integration points

Essential key-value store functionality is implemented; it needs more work to be integrated, tested, and activated.

Behind the `kvstore` feature.
2019-03-11 17:53:14 -05:00
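The KvStore entry above describes a data store to back BlockTree. As a rough illustration only (the actual KvStore API behind the `kvstore` feature is not shown in this log, and all names below are hypothetical), such a store typically exposes a byte-oriented put/get interface:

```rust
use std::collections::HashMap;

// Minimal in-memory sketch of a key-value store interface.
// Illustrative stand-in, not the real KvStore implementation.
#[derive(Default)]
struct KvStore {
    map: HashMap<Vec<u8>, Vec<u8>>,
}

impl KvStore {
    fn put(&mut self, key: &[u8], value: &[u8]) {
        self.map.insert(key.to_vec(), value.to_vec());
    }
    fn get(&self, key: &[u8]) -> Option<&[u8]> {
        self.map.get(key).map(|v| v.as_slice())
    }
}

fn main() {
    let mut store = KvStore::default();
    store.put(b"slot", b"42");
    assert_eq!(store.get(b"slot"), Some(&b"42"[..]));
    assert_eq!(store.get(b"missing"), None);
}
```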
3073ebb20d reduce pub 2019-03-11 17:09:21 -05:00
f8e07ef5a3 banking_stage_entryfication fails when run as cargo test
Add some retry for getting entries from the channel.
2019-03-11 14:13:32 -07:00
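The retry fix above bounds how long the test waits for entries from the channel instead of racing the producer. A sketch of that pattern with a std mpsc channel (names illustrative, not the actual test code):

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError};
use std::thread;
use std::time::Duration;

// Poll the channel a bounded number of times before giving up, so a slow
// producer (as under `cargo test`) doesn't cause a spurious failure.
fn recv_with_retries<T>(rx: &Receiver<T>, retries: usize) -> Option<T> {
    for _ in 0..retries {
        match rx.recv_timeout(Duration::from_millis(10)) {
            Ok(item) => return Some(item),
            Err(RecvTimeoutError::Timeout) => continue, // retry
            Err(RecvTimeoutError::Disconnected) => return None,
        }
    }
    None
}

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(25)); // slow producer
        tx.send(42).unwrap();
    });
    assert_eq!(recv_with_retries(&rx, 10), Some(42));
}
```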
a4b6d181a2 rename forwarder ports to tpu_via_blobs 2019-03-11 14:07:17 -07:00
0b8c5d807d code cleanup 2019-03-11 14:07:17 -07:00
e201136eee more review comments 2019-03-11 14:07:17 -07:00
55f660d5f9 address review comments 2019-03-11 14:07:17 -07:00
a4acc631ee Refactor packing packets into blobs into separate packets_to_blob() function in packets.rs 2019-03-11 14:07:17 -07:00
3ddf4b6c24 PR fixes 2019-03-11 14:07:17 -07:00
ccd1173a83 Add local cluster test for forwarding 2019-03-11 14:07:17 -07:00
cd1a9faacd Batch packet forwarding in banking stage 2019-03-11 14:07:17 -07:00
b60b8ec5ae Add logic for deserializing packets embedded in blobs 2019-03-11 14:07:17 -07:00
536c8accf8 Add separate sockets for tpu forwarder and run different protocol for those sockets 2019-03-11 14:07:17 -07:00
7beefb3f81 Add forwarder sockets and address to contact info and sockets structs 2019-03-11 14:07:17 -07:00
fe1f67ea9a clippy errors 2019-03-11 14:07:17 -07:00
069ce71256 fix clippy 2019-03-11 14:07:17 -07:00
e3cacb9296 Buffer unprocessed packets if next leader is the current node 2019-03-11 14:07:17 -07:00
0c592c52f6 Wake up replay stage when the poh bank is cleared. (#3211)
* wake up replay stage when the poh bank is cleared

* bump ticks per second

* Increase ticks per slot to match faster tick rate

* Remove check that working bank must be the bank for the greatest slot

* Make start_leader() skip starting TPU for slots we've already been leader for
2019-03-11 13:58:23 -07:00
78bb96ee51 Reduce bootstrap leader stake (#3218) 2019-03-11 13:29:44 -07:00
86e2f35ac4 Only need the TPU and a light client implement Transact 2019-03-10 23:20:10 -06:00
7696a64891 Add design doc for testing programs 2019-03-10 23:20:10 -06:00
799ed24113 Integrate bank-forks proposal into the book 2019-03-10 20:13:36 -06:00
63477dabcd Attempt to clarify bank forks 2019-03-10 20:13:36 -06:00
cd0bc1dea5 updates to reflect new_from_parent() (#3076)
* design draft

* update

* section on updating root forks

* updates to reflect new_from_parent()

* fixup

* Grammar check
2019-03-10 13:59:16 -07:00
195a880576 pass Pubkeys as refs, copy only where values needed (#3213)
* pass Pubkeys as refs, copy only where values needed

* Pubkey is pervasive

* fixup
2019-03-09 19:28:43 -08:00
ac226c3e14 Remove superfluous set_leader() usage 2019-03-08 19:59:54 -08:00
4d5b832775 Remove commented out and clearly broken test 2019-03-08 19:59:54 -08:00
79b2542ca4 Remove CrdsValue::LeaderId 2019-03-08 19:41:51 -08:00
17921c9fae Delete NodeInfo type 2019-03-08 18:37:36 -08:00
5de38852d2 Add cluster test framework doc. (#3189) 2019-03-08 19:29:41 -07:00
0acdbc0d03 plumb staking_account and voting_keypair from multinode-demo to Vote (#3199)
* plumb staking_account and voting_keypair from bash to Vote
2019-03-08 19:29:08 -07:00
c8c85ff93b Fix propagation of incorrectly signed messages in Gossip (#3201) 2019-03-08 18:08:24 -08:00
31cbb52654 Rename new_entry_point as new_gossip_entry_point to clarify usage 2019-03-08 17:42:25 -08:00
cd88f81817 bench-tps no longer uses an invalid ContactInfo for RPC 2019-03-08 17:42:25 -08:00
6de24ff0be s/account/program in info msgs 2019-03-08 16:30:29 -07:00
de4d14ddc0 set_leader() now remains local and doesn't emit a LeaderId gossip message 2019-03-08 15:10:19 -08:00
5b386ec30a Delete cluster_info::get_gossip_top_leader() 2019-03-08 12:10:34 -08:00
8f0aa956a3 bench-tps no longer cares who the leader is 2019-03-08 11:43:07 -08:00
e04148ff44 Reduce leader_id visibility 2019-03-08 11:42:06 -08:00
d5d853838c RPC now sends transactions at the local TPU
The local TPU will forward the transactions as needed if it's not
currently the leader
2019-03-08 11:42:06 -08:00
e18673953c Remove poll_gossip_for_leader() 2019-03-08 11:14:47 -08:00
12f3fd75e8 StorageStage now sends transactions at the local TPU 2019-03-08 11:03:49 -08:00
7bd0929157 Remove process_block() 2019-03-08 09:36:30 -08:00
19488ba42a Speling 2019-03-08 09:36:30 -08:00
f0dc10c67b Hide close(), the user is supposed to drop instead 2019-03-08 09:36:30 -08:00
f55103498f Remove commented test code 2019-03-07 19:18:53 -07:00
639cb49356 Fix wallet integration tests 2019-03-07 19:18:53 -07:00
c5e9c6fdb6 Get chacha off Budget 2019-03-07 19:18:53 -07:00
7a4ccc8719 Fix Budget's payment_with_fee test
Fee is now independent of the contract.
2019-03-07 19:18:53 -07:00
125a345c90 Fix pubsub test 2019-03-07 19:18:53 -07:00
3dc22e7323 Simulate auto-creation of system accounts 2019-03-07 19:18:53 -07:00
17dcd1f62a Resurrect the tests 2019-03-07 19:18:53 -07:00
a277f3e816 Migrate to TransactionBuilder
This code wasn't updated after we started batching instructions.
The current code does allocations instead of using CreateAccount.
The runtime shouldn't allow that, so getting this code out of the
way before we lock down the runtime.
2019-03-07 19:18:53 -07:00
10b16753af Remove 'new' constructor 2019-03-07 19:18:53 -07:00
4625aed3a5 Make hyphen/underscore consistent 2019-03-07 16:51:25 -08:00
259c820f15 Review comments 2019-03-07 17:21:32 -07:00
e888c90ecf Add program notifications to JSON RPC documentation 2019-03-07 17:21:32 -07:00
b053bc2790 Load accounts by program owner for program subscriptions 2019-03-07 17:21:32 -07:00
6a81f9e443 Add program subscriptions to rpc 2019-03-07 17:21:32 -07:00
0ef1fa7c76 Replace RemoteVoteSigner with a user-supplied keypair
Vote program currently offers no path to vote with the
authorized voter. There should be a
VoteInstruction::new_authorized_vote() that accepts the
keypair of the authorized voter and the pubkey of the vote
account. The only option in current code is
VoteInstruction::new_vote() that accepts the voter's keypair
and assumes that pubkey is the vote account.
2019-03-07 17:15:36 -07:00
02eb234399 Fix TVU and PoH Recorder going out of sync (#3164)
* Fix broadcast_stage error

* Account for very fast ticks in tick verification
2019-03-07 15:49:07 -08:00
8d80da6b46 Fix picking account store paths
Store the set of accounts paths in AccountsDB and choose with an rng
when we need to create a new one. Remove path from AccountStorageEntry object.
2019-03-07 14:58:52 -08:00
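The entry above moves the set of account storage paths into AccountsDB and picks one at random when a new storage entry is needed. A minimal sketch of that selection (hypothetical types; the real code uses an rng, here a timestamp modulo stands in to keep the example dependency-free):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Keep all storage paths in one place and pick one pseudo-randomly
// when creating a new storage entry. Illustrative stand-in types.
struct AccountsDb {
    paths: Vec<String>,
}

impl AccountsDb {
    fn pick_path(&self) -> &str {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .subsec_nanos() as usize;
        &self.paths[nanos % self.paths.len()]
    }
}

fn main() {
    let db = AccountsDb {
        paths: vec!["/tmp/a".into(), "/tmp/b".into()],
    };
    // Whichever path is chosen, it is always one of the configured set.
    let p = db.pick_path();
    assert!(db.paths.iter().any(|x| x == p));
}
```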
22855def27 Fix race condition in store.
Multiple threads can enter the read lock and
all store the new empty set to account_maps.
Check again after taking write lock to make sure
only one thread actually inserts the new entry.
2019-03-07 14:58:52 -08:00
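The race described above is the classic double-checked locking pattern: many threads can observe a missing entry under the read lock, so the insert must be re-checked under the write lock. A self-contained sketch (illustrative types, not the actual AccountsDB code):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Fast path checks under the shared read lock; only a thread that still
// sees the entry missing after taking the write lock performs the insert.
fn get_or_insert(map: &RwLock<HashMap<u64, Vec<u64>>>, key: u64) {
    {
        let r = map.read().unwrap();
        if r.contains_key(&key) {
            return; // entry already exists
        }
    } // read lock dropped here; another thread may insert now
    let mut w = map.write().unwrap();
    // Re-check after taking the write lock so only one thread inserts.
    w.entry(key).or_insert_with(Vec::new);
}

fn main() {
    let map = Arc::new(RwLock::new(HashMap::new()));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let m = Arc::clone(&map);
            thread::spawn(move || get_or_insert(&m, 42))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Exactly one entry exists regardless of thread interleaving.
    assert_eq!(map.read().unwrap().len(), 1);
}
```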
0be59cad4e Remove dead code 2019-03-07 13:05:42 -08:00
5edbd6a7fb gossip_service::discover() now reports the leader 2019-03-07 13:05:42 -08:00
54ff9b3ac2 Shutdown gossip on failure 2019-03-07 13:05:42 -08:00
5463226184 Give spy nodes a proper keypair 2019-03-07 13:05:42 -08:00
b96bccd71f Use Self 2019-03-07 13:05:42 -08:00
07a948a0d0 Replicator now uses its keypair for gossip 2019-03-07 13:05:42 -08:00
8f034280dc Increase polling frequency to report convergence quicker 2019-03-07 13:05:42 -08:00
83f551d9b9 Use poll_gossip_for_leader() 2019-03-07 13:05:42 -08:00
f83a64d17f poll_gossip_for_leader: simplify timeout arg 2019-03-07 13:05:42 -08:00
8bc7d5a172 Remove spy_node duplication 2019-03-07 13:05:42 -08:00
96c0222b30 Employ gossip_service::discover() 2019-03-07 13:05:42 -08:00
679a718cbf poll_gossip_for_leader() code cleanup 2019-03-07 13:05:42 -08:00
b083e4db48 Resolve TODO 2019-03-07 13:05:42 -08:00
a3cab470d3 Rename ClusterInfo::new_with_keypair() to ClusterInfo::new() 2019-03-07 13:05:42 -08:00
bb93504965 Rename ClusterInfo::new() to ClusterInfo::new_with_invalid_keypair() 2019-03-07 13:05:42 -08:00
4d58bf4b28 Don't use solana_entrypoint in static libraries 2019-03-07 12:42:13 -07:00
505f77b108 Move a more generic process_transaction to runtime.rs 2019-03-07 12:42:13 -07:00
5b672f8921 Generalize Budget tests to work on multi-ix txs 2019-03-07 12:42:13 -07:00
9e9c0785e7 groom broadcast (#3170) 2019-03-07 09:43:42 -08:00
94882418ab Simplify TransactionBuilder
A stepping stone to replacing all Transaction constructors with
TransactionBuilders.
2019-03-07 08:11:03 -07:00
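The builder pattern that TransactionBuilder moves toward can be sketched as follows; every name here is a hypothetical stand-in, not the actual solana API, and the point is only the shape (accumulate instructions, then build once):

```rust
// Chainable builder: each call consumes and returns the builder,
// and build() produces the final transaction in one step.
#[derive(Debug, Default)]
struct TransactionBuilder {
    instructions: Vec<String>,
}

impl TransactionBuilder {
    fn push(mut self, ix: &str) -> Self {
        self.instructions.push(ix.to_string());
        self
    }
    fn build(self) -> Vec<String> {
        self.instructions
    }
}

fn main() {
    let tx = TransactionBuilder::default()
        .push("create_account")
        .push("assign")
        .build();
    assert_eq!(tx.len(), 2);
}
```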
c6cb3bb0bc Bump env_logger from 0.6.0 to 0.6.1
Bumps [env_logger](https://github.com/sebasmagri/env_logger) from 0.6.0 to 0.6.1.
- [Release notes](https://github.com/sebasmagri/env_logger/releases)
- [Commits](https://github.com/sebasmagri/env_logger/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-03-06 22:29:44 -07:00
9fedc9513b Use generics for add/remove subscriptions 2019-03-06 20:50:48 -08:00
0badc90058 Wallet new tests 2019-03-06 20:46:18 -08:00
61fbea3ee4 Cleanup AccountStorage apis
Remove duplicate code
2019-03-06 18:30:36 -08:00
a4a3995a84 Add staking commands to wallet 2019-03-06 17:50:15 -08:00
01fb76f4bd add epoch warmup (#3166)
add epoch warmup
2019-03-06 16:32:23 -08:00
d09639f7d2 Move the design out of the proposals section 2019-03-06 17:24:17 -07:00
946ee8a354 Add description of vote and rewards programs 2019-03-06 17:24:17 -07:00
e63b899ca5 Boot staker setup from fullnode 2019-03-06 16:50:27 -07:00
63a4ed74a4 consolidate logic for epoch and slot_index into Bank (#3144) 2019-03-06 14:44:21 -08:00
a3782d699d Bump bytes from 0.4.11 to 0.4.12
Bumps [bytes](https://github.com/carllerche/bytes) from 0.4.11 to 0.4.12.
- [Release notes](https://github.com/carllerche/bytes/releases)
- [Changelog](https://github.com/carllerche/bytes/blob/v0.4.x/CHANGELOG.md)
- [Commits](https://github.com/carllerche/bytes/compare/v0.4.11...v0.4.12)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-03-06 15:05:01 -07:00
97f2c96a7e Add a transaction and instruction 2019-03-06 15:04:15 -07:00
5979627258 Add authorized voter 2019-03-06 15:04:15 -07:00
9d580e363a Fix hostname part of queries in dashboard 2019-03-06 13:26:15 -08:00
9163e5b004 Fix sorting order of stakes in confirmation time calculations 2019-03-06 13:11:04 -08:00
0252bf2f46 fix fmt 2019-03-06 12:25:28 -08:00
283bb84134 Create UDP socket once per process_loop for forwarding transactions 2019-03-06 12:25:28 -08:00
0a4f909566 requestAirdrop RPC API is now optional 2019-03-06 10:23:57 -08:00
516aa44aad Don't fetch the working_bank twice 2019-03-06 10:23:57 -08:00
b1763f9187 Remove dead code 2019-03-06 10:23:57 -08:00
b03fd782de Make room for more fields in JsonRpcConfig 2019-03-06 10:23:57 -08:00
b850f3c1dd Remove unnecessary cleanup_paths
drop handles it
2019-03-06 11:17:37 -07:00
789a9df9f6 s/id/hash in block events 2019-03-06 08:51:10 -08:00
bd39ab9365 Clean up exit signal handling 2019-03-05 19:20:29 -08:00
1c0cfb17a3 Start leader based on Poh tick height. (#3084)
* Start leader based on poh and test

* Equalize validator and leader stakes in LocalCluster

* Clear WorkingBank on poh_recorder reset
2019-03-05 17:56:51 -08:00
9491999a95 Remove remaining erc20 references 2019-03-05 17:56:44 -08:00
e2d30db7e1 Rename tokens to lamports 2019-03-05 17:56:44 -08:00
3129e299e4 Rename tokens to lamports in programs/ 2019-03-05 17:56:44 -08:00
0604bbb473 Rename tokens to lamports in wallet/ 2019-03-05 17:56:44 -08:00
545feab6db Misc token to lamport renaming 2019-03-05 17:56:44 -08:00
3794048c91 Rename tokens to lamports in book/ 2019-03-05 17:56:44 -08:00
beb45f44ac solana-genesis: rename tokens to lamports 2019-03-05 17:28:06 -08:00
f1d1852691 Rename tokens to lamports in core/ 2019-03-05 17:28:06 -08:00
53f09c44f3 Rename tokens to lamports in sdk/ 2019-03-05 17:28:06 -08:00
bd237a2d6f Add transaction to test harness to set the delegate for validator vote accounts 2019-03-05 16:51:47 -07:00
76a7038335 Update test harness to set a delegate on validator vote accounts 2019-03-05 16:51:47 -07:00
c24d95c885 Remove bench-tps, upload-perf, and bench-streamer from code coverage report 2019-03-05 15:35:31 -08:00
cb0560df92 remove dead code 2019-03-05 15:35:24 -08:00
ec034a5cb9 Fix invalid Barrier transactions (#3139) 2019-03-05 15:16:36 -08:00
ca99ebaaf4 Add way to create account with delegate in 1 tx 2019-03-05 16:14:57 -07:00
b9e878ee80 slot_height considered harmful (#3135)
* slot_height considered harmful
* fix test_tick_slot_epoch_indexes
2019-03-05 14:18:29 -08:00
33c4c7e511 Split up long test 2019-03-05 15:16:51 -07:00
b67ac22336 Replace superfluous integration tests with needed one 2019-03-05 15:16:51 -07:00
6ff2572ebe Refactor system entrypoint to use helper fns; add unit tests 2019-03-05 15:16:51 -07:00
a539c9ad67 Restore print ban, and widen the net 2019-03-05 14:09:40 -08:00
1997640094 Remove prints 2019-03-05 14:09:40 -08:00
e7eafbd24e Adapt to recent programs/ shuffle 2019-03-05 13:14:07 -08:00
378a0f511e Stop looking for solana-fullnode-config 2019-03-05 12:44:27 -08:00
9349f90a59 Inherit transaction count from parent (#3134) 2019-03-05 12:34:21 -08:00
0f1d6c6271 Check for no entries left in blocktree in a given slot
There may not be ENTRIES_PER_SEGMENT entries in a slot; if so, we will hang waiting for more.
2019-03-05 11:53:40 -08:00
8e70f5bf84 Same fix, different location
What's this doing way up here?
2019-03-05 12:46:18 -07:00
52fc974cdf The funder is not a staker 2019-03-05 12:46:18 -07:00
6e9d803091 Remove usage of unsafe for Accounts 2019-03-05 10:13:03 -08:00
fc8489a04d Stop using LocalVoteSigner 2019-03-05 09:34:54 -07:00
e248efce06 Add programs/system explicitly to CI test suite 2019-03-05 09:33:27 -07:00
b4084c6298 Fix random comment typo 2019-03-05 09:33:27 -07:00
2fdfa98d55 Fix process_pay SystemTransaction type 2019-03-05 09:33:27 -07:00
f506b0a224 Fix test: Prevent SystemInstruction CreateAccount from overwriting accounts in use 2019-03-05 09:33:27 -07:00
202adb1bf1 Create failing test 2019-03-05 09:33:27 -07:00
885eeec3ed Boot storage program from the SDK 2019-03-05 07:16:33 -07:00
5e9f802d7d Boot token_program from the SDK 2019-03-05 07:16:33 -07:00
e4be57c3b6 Bump libc from 0.2.49 to 0.2.50
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.49 to 0.2.50.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.49...0.2.50)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-03-05 07:14:51 -07:00
6ab6e6cb9b Clean up exit flag handling across TVU 2019-03-04 21:26:50 -08:00
2a849ae268 Inline LeaderServices 2019-03-04 21:26:50 -08:00
4808f6a9f8 Clean up exit flag handling in TPU 2019-03-04 21:26:50 -08:00
96bfe92334 Clean up fullnode/tpu/tvu/fetch_stage exit signal 2019-03-04 21:26:50 -08:00
e7cde846cb Clean up gossip service exit flag handling 2019-03-04 21:26:50 -08:00
eb90d8d463 Clean up Rpc exit signal 2019-03-04 21:26:50 -08:00
6a8a97f644 Remove dead code 2019-03-04 20:05:14 -08:00
3fc846d789 Try to use the RPC exit API to cleanly exit nodes 2019-03-04 19:58:37 -08:00
0f77531f09 Simplify pass-through arg handling 2019-03-04 19:58:37 -08:00
20b831264e Properly plumb exit flag to PubSubService 2019-03-04 19:58:37 -08:00
43bab23651 remove duplicate child creation (#3100)
* remove duplicate child creation
* resurrect test for partial slot
* simplify blocktree_processor some more (no tick_height, yay!)
* ensure frozen
2019-03-04 19:22:23 -08:00
906df5e20e Exit signal cleanup: pass in references, make the receiver clone as needed 2019-03-04 18:43:21 -08:00
794e961328 use Bank's notion of leader_id where possible (#3119) 2019-03-04 18:40:47 -08:00
a481822321 Fix signatureUnsubscribe documentation (#3118) 2019-03-04 18:07:16 -08:00
dc42c12f2b Revert to more consistent naming (#3114) 2019-03-04 17:50:19 -08:00
6d82123125 rename bank_id to bank_slot 2019-03-04 17:10:27 -08:00
4f6d7702c5 Add a way to build unsigned transactions 2019-03-04 17:47:46 -07:00
97274030b9 Add test with transaction with no signatures
Add checks for no signature
2019-03-04 16:42:52 -08:00
9ce2bc94bf Add flag to enable the JSON RPC fullnodeExit API 2019-03-04 15:49:02 -08:00
51502537b1 Remove extra reference 2019-03-04 15:49:02 -08:00
7b49c9f09c Delete fullnode-config/ 2019-03-04 15:49:02 -08:00
4714dc3a5c De-pub 2019-03-04 15:49:02 -08:00
44013855d8 Book nits (#3096)
* Book nits

* nits
2019-03-04 15:44:54 -07:00
846fdd3b2d Bump reqwest from 0.9.10 to 0.9.11
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.9.10 to 0.9.11.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/compare/v0.9.10...v0.9.11)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-03-04 13:47:37 -07:00
03d6c9a552 Defeature bpf_loader; bpf_{c,rust} features now confined to programs/bpf 2019-03-04 11:02:37 -08:00
d0be16b49a Remove duplicated code 2019-03-04 11:02:37 -08:00
3a4018cd03 review comments; rename Unsafe to TestOnlyAllowRpcFullnodeExit 2019-03-04 10:18:17 -08:00
5aaaa7f45c fixup! 2019-03-04 10:18:17 -08:00
c299dd390e Fullnode rpc to exit with unsafe config 2019-03-04 10:18:17 -08:00
a3016aebaf Put accounts test data files in target directory
And gitignore it so those aren't added accidentally.
2019-03-04 10:17:02 -08:00
fb55d1c3d4 Design for leader to leader transition between slots (#2715) 2019-03-04 10:10:52 -08:00
9c44c173df Remove ipv6 feature 2019-03-04 09:56:58 -08:00
d708982f27 Remove unstable and test feature flags 2019-03-04 09:30:00 -08:00
bb774173bb Add PohRecorder reset tests (#3083)
* tests for reset

* fixup!
2019-03-04 08:08:22 -07:00
3906b1af6a deadcode (#3081) 2019-03-03 21:16:59 -08:00
de1d7ce312 Cleanup staking utils to divide functionality between delegate and normal node utilities. Also replaces vote_states() with more generalized vote_accounts() in Bank. (#3070) 2019-03-03 18:04:13 -08:00
1654199b23 Use PohRecorder to synchronize instead of rotate. (#3080) 2019-03-03 16:44:06 -08:00
2ec9bc9f05 Revive payments via Budget 2019-03-03 17:29:13 -07:00
e8ae603a01 Add failing test for a Budget payment 2019-03-03 17:29:13 -07:00
e4dba03e12 accounts shedding (#3078)
* accounts shedding

* fixup
2019-03-03 16:04:04 -08:00
8ec10d4de9 Simplify Budget's serialize 2019-03-03 14:24:53 -08:00
baca3e6b6b Cleanup Budget
* BudgetProgram -> BudgetState
* Instruction -> BudgetInstruction
* Move BudgetState into its own module
* BudgetInstruction::NewBudget -> BudgetInstruction::InitializeAccount
* BudgetInstruction::new_budget -> BudgetInstruction::new_initialize_account
2019-03-03 14:49:35 -07:00
fc5fcd6cd4 Move native_loader into solana_runtime 2019-03-03 10:59:08 -07:00
33496ffea2 Adjust paths 2019-03-02 22:11:48 -08:00
b8b7de5522 Script can now be run from any directory 2019-03-02 22:11:48 -08:00
109101c2dc Cleanup features and fix build errors 2019-03-02 22:11:48 -08:00
534619f72f Update manifest-path 2019-03-02 22:11:48 -08:00
44322124c8 Update paths 2019-03-02 22:11:48 -08:00
9923c543e8 Fix ci scripts 2019-03-02 22:11:48 -08:00
41b5899856 Move programs/Cargo.toml into bpf/ 2019-03-02 22:11:48 -08:00
b830449f23 Move top-level native program tests to their respective crates 2019-03-02 22:11:48 -08:00
037fcf6b3d Bump all native programs up a level
Don't categorize programs by a single backend.
2019-03-02 22:11:48 -08:00
e1a1296b9b Fix cleanup_paths
Add back the removal of the parent in Accounts::drop, but
remove it in the cleanup_paths helper for the account tests
that do not use make_default_dir.
2019-03-02 20:24:57 -08:00
3f4ff3f7b5 Delete duplicate file 2019-03-02 18:57:11 -07:00
cd4bccfd12 Remove snap support 2019-03-02 17:41:09 -08:00
9c3e7e40cf Less pub 2019-03-02 17:36:51 -08:00
a9a7fc56eb Purge MAX_RECENT_TICK_HASHES 2019-03-02 17:04:42 -08:00
398b78dd97 Delete duplicate file 2019-03-02 16:44:36 -08:00
1edf6c361e Move Vote program out of the SDK 2019-03-02 16:44:36 -08:00
b99e3eafdd Fix stakes not being setup correctly 2019-03-02 16:44:36 -08:00
e6486b2824 Move Budget out of the SDK 2019-03-02 16:44:36 -08:00
d22a13257e Refactor bank get vote accounts (#3052) 2019-03-02 16:44:36 -08:00
f4c5b9ccb0 remove remove_dir_all() of paths' parents (which we didn't make to begin with) 2019-03-02 12:36:41 -08:00
a94880574b block_hash => blockhash 2019-03-02 12:13:30 -07:00
0f1582c196 cargo fmt 2019-03-02 12:13:30 -07:00
85159a0eb4 Rename JSON RPC getLastId to getRecentBlockHash 2019-03-02 12:13:30 -07:00
258cf21416 Purge remaining last_id (now called block_hash) 2019-03-02 12:13:30 -07:00
2bfad87a5f Rename Bank.last_id() to Bank.last_block_hash() 2019-03-02 12:13:30 -07:00
95cbb8a5c0 Switch to recent_block_hash 2019-03-02 12:13:30 -07:00
ce1b72809a Rename get_last_id() to get_recent_block_hash() 2019-03-02 12:13:30 -07:00
4f3e149a98 Remove stale/wrong comments 2019-03-02 12:13:30 -07:00
642d3d903f Rename get_storage_mining_entry_height to get_storage_entry_height for consistency 2019-03-02 12:13:30 -07:00
81cd461591 Rename storage_last_id to storage_block_hash 2019-03-02 12:13:30 -07:00
ea110efabd Rename AdvertiseStorageLastId to AdvertiseStorageRecentBlockHash 2019-03-02 12:13:30 -07:00
0743f54dfe Rename LastIdNotFound to BlockHashNotFound 2019-03-02 12:13:30 -07:00
176d5e0d37 Rename Transaction last_id field to recent_block_hash 2019-03-02 12:13:30 -07:00
16b71a6be0 Cleanup fork id generation
Accounts could end up with an id collision depending on how
banks are created; this shouldn't happen.
2019-03-02 10:34:41 -08:00
13ee8efd42 Move build.rs into core/ 2019-03-02 09:52:18 -08:00
5f5d779ee1 Move src/ into core/src. Top-level crate is now called solana-workspace 2019-03-02 09:52:18 -08:00
7b849b042c Split rewards_program.rs 2019-03-02 10:11:37 -07:00
d32f5b6cca Use process_blocktree to verify the ledger 2019-03-02 08:47:31 -08:00
fcbcf000c4 Use a valid last_id 2019-03-02 08:47:31 -08:00
2bc939f535 Adapt to slower moving last_ids 2019-03-02 08:47:31 -08:00
d5de5bec4f Register a new last_id once per slot 2019-03-02 08:47:31 -08:00
61beb42797 Decouple tick counting from hash queue 2019-03-02 08:47:31 -08:00
e5be3e1dca HashQueue no longer hard codes max_entries 2019-03-02 08:47:31 -08:00
986c54de58 Comment out test that's not actually testing anything
@sakridge, fyi
2019-03-02 07:50:32 -07:00
49b7e67585 Return program error from process_transaction()
Our unit-test helper `process_transaction()` wasn't returning
program errors, which made testing programs tedious and
counter-intuitive.
2019-03-02 07:50:32 -07:00
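The entry above changes the test helper to surface program errors. A sketch of why that matters: a Result-returning helper lets a unit test assert on the exact error instead of only observing a failed transaction. All names below are illustrative stand-ins, not the real runtime API:

```rust
// A toy "process_transaction" that returns the program's error directly,
// so the failing case is as easy to assert on as the passing case.
#[derive(Debug, PartialEq)]
enum ProgramError {
    InsufficientFunds,
}

fn process_transaction(balance: u64, amount: u64) -> Result<u64, ProgramError> {
    balance
        .checked_sub(amount)
        .ok_or(ProgramError::InsufficientFunds)
}

fn main() {
    assert_eq!(process_transaction(10, 3), Ok(7));
    // The error is surfaced to the caller instead of being swallowed.
    assert_eq!(process_transaction(1, 3), Err(ProgramError::InsufficientFunds));
}
```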
db825b6e26 Fix vote program bugs
Also:

* Add an assertion to the transaction builder if not enough
keypairs were provided for all keys that require signatures.
* Expose bugs in the runtime.
2019-03-02 07:50:32 -07:00
8e273caf7d Brush up data-plane-fanout to read less like a proposal 2019-03-01 22:50:42 -07:00
b1a648113f simple replay stage 2019-03-01 20:56:29 -08:00
2782922f7a Rename BroadcastService back to BroadcastStage 2019-03-01 21:10:53 -07:00
041a06b432 kill multinode (#3038) 2019-03-01 20:09:13 -08:00
269a82f796 Bump serde_derive from 1.0.88 to 1.0.89
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.88 to 1.0.89.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.88...v1.0.89)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-03-01 20:15:49 -07:00
6b83ce4937 address review comments 2019-03-01 17:58:05 -08:00
ae557104a5 Create vote account and fund it in local cluster test harness 2019-03-01 17:58:05 -08:00
6a34b11dd0 Sum up all stakes for a delegate when calculating stake (#3045) 2019-03-01 17:31:59 -08:00
54417acfba changed vote_states to vote_accounts, more useable (#3047) 2019-03-01 17:22:49 -08:00
29d12d9ff1 remove new_bank_from_parent_with_id() (#3039) 2019-03-01 16:39:23 -08:00
4ee857ab7d More vote account fixes
vote_index was not being maintained correctly during a squash.
The tokens==0 shielding accounts were being inserted with
owner=default Pubkey, so they didn't know they were vote accounts
and should update the vote accounts set.
2019-03-01 16:25:14 -08:00
771a88665c Bump serde from 1.0.88 to 1.0.89
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.88 to 1.0.89.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.88...v1.0.89)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-03-01 15:51:11 -07:00
a7c18cc0b4 Fnbool_to_FnOptionT 2019-03-01 14:12:50 -08:00
e30e4cc603 Remove get_confirmation_timestamp() from HashQueue 2019-03-01 13:38:17 -08:00
fdc31e99df Clean up type casts 2019-03-01 13:38:17 -08:00
a72325dbc2 entry_id -> entry 2019-03-01 13:38:17 -08:00
67b6be66c8 Rename MAX_ENTRY_IDS 2019-03-01 13:38:17 -08:00
8ec13d557f Generalize tick_height to hash_height 2019-03-01 13:38:17 -08:00
31f570a9f4 Remove unused functions 2019-03-01 13:38:17 -08:00
46b7b795bf Fix Typo in Fullnode Diagram (#3036) 2019-03-01 11:58:09 -08:00
38273427ad have banks save vote_state by epoch to support stable leader schedules (#3019)
have banks save vote_state by epoch to support stable leader schedules
2019-03-01 11:54:28 -08:00
46fb0b1b94 Rename last_id to last_hash within HashQueue 2019-03-01 11:48:09 -08:00
224b705f8d Rename genesis_block.last_id() to genesis_block.hash() 2019-03-01 11:48:09 -08:00
028f41eb51 Move secure vote signing out of proposals 2019-03-01 12:16:28 -07:00
c27726e065 Add a black box local cluster harness (#3028)
Integration test harness for the network.
2019-03-01 10:36:52 -08:00
a57fb00584 Rename last_id_queue.rs to hash_queue.rs 2019-03-01 09:50:51 -08:00
360055ad70 Rename LastIdQueue to HashQueue 2019-03-01 09:50:51 -08:00
558f10c862 Rename PohEntry.id to PohEntry.hash 2019-03-01 09:50:51 -08:00
c53c351759 Rename erc20 to token-program
Everything it uses already had that name; only the crate was never
renamed.
2019-03-01 10:47:38 -07:00
7c4473e0aa Rename Entry.id to Entry.hash 2019-03-01 09:31:49 -08:00
7e7b79ef34 Rename prev_id to prev_hash 2019-03-01 09:31:49 -08:00
e993d511e3 Rename last_entry_id variables to last_entry_hash 2019-03-01 09:01:59 -08:00
251b0957f1 Ignore flaky test_dropped_handoff_recovery 2019-03-01 09:01:28 -08:00
b9524217fe Update rust example to use BPF enabled infrastructure (#2974) 2019-02-28 22:05:11 -08:00
6b228df3df Remove last_entry_id/next_blob_index from TvuRotationInfo 2019-02-28 21:57:17 -08:00
6cf6a1ccc3 process_blocktree() now halts forks at the first partial slot 2019-02-28 21:57:17 -08:00
d889e77fba Add reset_slot_consumed() 2019-02-28 21:57:17 -08:00
93d65aa9cc Use your words 2019-02-28 21:02:29 -08:00
f216a7179a Ignore test_full_leader_validator_network 2019-02-28 21:01:10 -08:00
434b8a8970 Fix another PR race 2019-02-28 20:11:50 -08:00
cc9191f1b0 Update blocktree API's (#3025) 2019-02-28 19:49:22 -08:00
567bbecca0 use bank.id() where we want 'slot'; bank.slot_height() is not slot (#3014) 2019-02-28 19:07:47 -08:00
07e4f9a611 Fix PR race 2019-02-28 18:44:07 -08:00
b41286919d Rename bank.id to bank.slot (#3018) 2019-02-28 18:02:45 -08:00
564057c812 Bump rust-bpf-sysroot to pull in liballoc 2019-02-28 17:25:28 -08:00
20e4edec61 Refactor Vote Program Account setup (#2992) 2019-02-28 17:08:45 -08:00
d5f0e49535 Refactor fullnode rotation test (#3015) 2019-02-28 15:53:09 -08:00
30bccc0c68 Fix slot index used while calculating leader schedule
- slot_leader_at() was using absolute slot number instead of index in the epoch
2019-02-28 15:41:01 -08:00
1c44b738fe Fix vote_accounts test 2019-02-28 15:22:47 -08:00
217f30f9c3 Add get_supermajority_slot() function (#2976)
* Moved supermajority functions into new module, staking_utils

* Move staking functions out of bank, and into staking_utils, change get_supermajority_slot to only use state from epoch boundary

* Move bank slot height in staked_nodes_at_slot() to be bank id
2019-02-28 13:15:25 -08:00
fec867539d More SlotMeta docs (#3011) 2019-02-28 12:18:11 -07:00
d123d86d84 remove forks.working_bank() where possible (#3010) 2019-02-28 10:57:58 -08:00
485ccd20e4 Use TransactionBuilder in the Rewards transaction 2019-02-28 10:53:26 -08:00
8d004ee947 Clarify is_full 2019-02-28 11:06:06 -07:00
4704aa1f80 Rename SlotMeta::is_trunk to SlotMeta::is_rooted 2019-02-28 10:39:56 -07:00
271115a6be Switch blockstream_service to create_new_tmp_ledger! 2019-02-28 07:59:17 -08:00
a79caf7795 Test transaction with a fee 2019-02-28 08:56:55 -07:00
404aa63147 Add TransactionBuilder 2019-02-28 08:56:55 -07:00
4610706d9f Generalize instruction
For serialization: Instruction<u8, u8>
For users:         Instruction<Pubkey, (Pubkey, bool)>
For programs:      Instruction<Pubkey, (Pubkey, bool, Account)>
2019-02-28 08:56:55 -07:00
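The generic Instruction described above reuses one struct with different type parameters per consumer. A minimal sketch of that shape (field names and the Pubkey stand-in are illustrative, not the actual SDK types):

```rust
// One generic struct, instantiated differently per consumer:
//   serialization: Instruction<u8, u8>                 (indexes into key tables)
//   users:         Instruction<Pubkey, (Pubkey, bool)> (key + is_signer flag)
#[derive(Debug, Clone)]
struct Instruction<P, K> {
    program_id: P,
    accounts: Vec<K>,
    data: Vec<u8>,
}

type Pubkey = [u8; 32]; // stand-in for the real Pubkey type

fn main() {
    // User-facing form: program id plus (pubkey, is_signer) pairs.
    let user_ix: Instruction<Pubkey, (Pubkey, bool)> = Instruction {
        program_id: [1u8; 32],
        accounts: vec![([2u8; 32], true)],
        data: vec![0, 1, 2],
    };
    // Wire form: compact u8 indexes replacing the full keys.
    let wire_ix: Instruction<u8, u8> = Instruction {
        program_id: 0,
        accounts: vec![1],
        data: user_ix.data.clone(),
    };
    assert_eq!(wire_ix.accounts.len(), user_ix.accounts.len());
    assert_eq!(wire_ix.data, user_ix.data);
}
```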
8e4cd6fcc3 Delete leader scheduler artifact 2019-02-28 07:47:37 -08:00
6eb09a6901 Trigger blockstream on full-slot notification (clean up superfluous stuff) 2019-02-28 07:20:16 -07:00
e04d2379df Remove bank dependency from forward_entries 2019-02-28 07:20:16 -07:00
5b72a984a3 Bump serde_json from 1.0.38 to 1.0.39
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.38 to 1.0.39.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.38...v1.0.39)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-28 06:57:17 -07:00
cf545e64b8 xargo requires sysroot as source to build dependent crates 2019-02-28 00:49:06 -08:00
ac1e266588 Bump rust-bpf to pull in built-in target bpfel-unknown-unknown (#3001) 2019-02-28 00:26:50 -08:00
0f2226901d Fix transaction count after squash 2019-02-27 23:21:49 -08:00
dad1511484 test_bank_squash: validate transaction_count() before/after squashing 2019-02-27 23:21:49 -08:00
05646d72b8 Remove unnecessary fetching of a new last_id 2019-02-27 22:58:59 -08:00
7ccd601100 Remove incorrect file description 2019-02-27 22:36:18 -08:00
d23f8a3e99 increase accounts coverage (#2993) 2019-02-27 21:42:14 -08:00
0dc5af62ff Standardize on 'use log::*' for easy access to all log level macros 2019-02-27 21:16:23 -08:00
855f1823a4 Include solana-logger for use by tests 2019-02-27 21:16:23 -08:00
7fd40f1eb9 add failing test for #2994 (#2995) 2019-02-27 20:46:26 -08:00
95f2f05f45 Refactor account serialize in appendvec
Remove dupe code and see how this compares to bincode.
Add benchmarks to justify custom serialize and also experiment with
safe solutions.
2019-02-27 19:57:50 -08:00
cd976a8082 s/tx/transaction/ for function names 2019-02-27 17:00:10 -08:00
163ed40efb Send program write transactions concurrently 2019-02-27 17:00:10 -08:00
32aaa5fd06 Derive retry timeout from slot duration 2019-02-27 17:00:10 -08:00
163874d4da remove purge parameter to accounts (#2990) 2019-02-27 16:06:06 -08:00
873007bae1 Fix tests and move bank dependency slightly 2019-02-27 15:31:23 -08:00
a67a88c8ef Hoist EntrySender in ReplayStage 2019-02-27 15:31:23 -08:00
6d1b43f1b1 Make leader_schedule a utility module named leader_schedule_utils (#2988) 2019-02-27 14:41:46 -08:00
3a20a20807 Reintroduce leader_id to blobs (#2986) 2019-02-27 13:37:08 -08:00
e45559a1a7 Add slot 3 back to ASCII art (#2979)
* Add slot 3 back to ASCII art

* New slot-oriented diagrams

When 1-block-per-slot, slots are drawn vertically. That's the ideal
case. Abandoning a block should look like something forking
off to the side.
2019-02-27 14:27:58 -07:00
140954a53c Remove Tpu::is_leader(), fullnode doesn't need it anymore 2019-02-27 11:55:21 -08:00
b5d7ac3ce3 Set delay based on ticks_per_slot to ensure the test makes it to a new block 2019-02-27 11:13:29 -08:00
b5d714eec7 Derive retry timeout from slot duration 2019-02-27 11:13:29 -08:00
36cdaffe25 Fix indent 2019-02-27 11:11:24 -08:00
16e2443f61 Remove unnecessary if 2019-02-27 11:06:38 -08:00
9adbc1dd60 nit: always pass &Arc<Bank>, clone() only where consumed 2019-02-27 10:55:43 -08:00
b6ccb475f1 Clarify FIXME source 2019-02-27 10:37:48 -08:00
ca0f16ccc0 Fix test failure 2019-02-27 08:22:52 -08:00
c241a56fb0 Remove extraneous print. 2019-02-27 08:22:52 -08:00
4149f7fd1c Fix review comments 2019-02-27 08:22:52 -08:00
cc68ecdacf Use default if previous values do not exist 2019-02-27 08:22:52 -08:00
96b349dcbb Performance optimizations 2019-02-27 08:22:52 -08:00
5216952691 Change benchmark path to target/ or OUT_DIR
Also reduce some code duplication with cleanup_dirs fn.
2019-02-27 08:22:52 -08:00
c46b2541fe - Fix lock/unlock of accounts
- Fix format check warnings
2019-02-27 08:22:52 -08:00
2158ba5863 tx count per fork 2019-02-27 08:22:52 -08:00
180d297df8 Rebase and panic with no accounts
Add Accounts::has_accounts function for hash_internal_state calculation.
2019-02-27 08:22:52 -08:00
c276375a0e Persistent account storage across directories 2019-02-27 08:22:52 -08:00
130563cd4c AppendVec 2019-02-27 08:22:52 -08:00
9e2a7921c8 Recover from rebase 2019-02-26 22:08:17 -08:00
9539154a4a Remove test_name arg 2019-02-26 22:08:17 -08:00
84bd9296cd Centralize unwrap() within create_new_tmp_ledger! 2019-02-26 22:08:17 -08:00
88ecce12a2 No longer need to give new_fullnode() a random string 2019-02-26 22:08:17 -08:00
5a7b99ecc2 Add/employ create_new_tmp_ledger!() 2019-02-26 22:08:17 -08:00
55a76ed4b0 Populate test ledgers with full slots to reduce test boilerplate 2019-02-26 22:08:17 -08:00
033a04129a Add lockouts to vote program (#2944)
* Add lockouts to vote program

* Rename MAX_VOTE_HISTORY TO MAX_LOCKOUT_HISTORY, change process_vote() to only pop votes after MAX_LOCKOUT_HISTORY + 1 votes have arrived

* Correctly calculate serialized size of an Option, rename root_block to root_slot
2019-02-26 22:19:31 -07:00
789fff2ae2 Replace LeaderScheduler with LeaderScheduler1 (#2962)
* Migrate to LeaderScheduler1 (and added some missing methods)
* Delete LeaderScheduler
* Rename LeaderScheduler1 to LeaderScheduler
2019-02-26 22:16:18 -07:00
9750488200 Update rust-bpf-sysroot to pull in latest core,stdsimd (#2972) 2019-02-26 19:55:28 -08:00
46ec5cf765 Bump dirs from 1.0.4 to 1.0.5
Bumps [dirs](https://github.com/soc/dirs-rs) from 1.0.4 to 1.0.5.
- [Release notes](https://github.com/soc/dirs-rs/releases)
- [Commits](https://github.com/soc/dirs-rs/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-26 20:04:36 -07:00
ee16cc77a3 Move last_ids to a simple Hash, unwrap from Arc<RwLock>> 2019-02-26 18:19:26 -08:00
a669241cb1 Add/use get_tmp_ledger_path!() and tmp_copy_blocktree!() 2019-02-26 17:50:43 -08:00
0174945853 Program tests now check signature status (#2965) 2019-02-26 17:09:57 -08:00
ea0837973e blocktree_processor to use slots as bank ids, and squash 2019-02-26 17:35:22 -07:00
85819983d7 Bump lazy_static from 1.2.0 to 1.3.0
Bumps [lazy_static](https://github.com/rust-lang-nursery/lazy-static.rs) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/rust-lang-nursery/lazy-static.rs/releases)
- [Commits](https://github.com/rust-lang-nursery/lazy-static.rs/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-26 17:31:19 -07:00
78841532f7 Add Rust helpers (#2959) 2019-02-26 15:17:38 -08:00
72214b2b68 Squash test to test parent bank after squash 2019-02-26 15:15:34 -08:00
ee83a2ac29 Make stake sorting more deterministic for data plane 2019-02-26 14:11:08 -08:00
82c759b6cb Add whitespace, comment cleanup 2019-02-26 14:07:39 -08:00
6de5354b8e Update the RPC bank on fullnode rotation 2019-02-26 14:07:39 -08:00
87281f6ed5 ensure at Accounts level that tokens == 0 means None (#2960) 2019-02-26 13:51:39 -08:00
a8cd66ffa2 Pull Rust enabled LLVM (#2957) 2019-02-26 13:03:57 -08:00
d1e1258f97 Revert "Ignore flaky test_active_set_refresh_with_bank"
This reverts commit 10ad536e09.
2019-02-26 12:04:58 -08:00
4d73bbe48f Fix flaky gossip weighted tests 2019-02-26 11:58:03 -08:00
10ad536e09 Ignore flaky test_active_set_refresh_with_bank 2019-02-26 11:56:47 -08:00
bc2d4c7681 Clean up test_boot_validator_from_file() 2019-02-26 11:12:05 -08:00
a7f200847f Clean up test_leader_restart_validator_start_from_old_ledger 2019-02-26 11:12:05 -08:00
411f154827 Reduce log spam 2019-02-26 11:12:05 -08:00
6dcb97af9e Move PohService and PohRecorder out of banking_stage and into fullnode (#2852)
* Move PohService out of banking_stage and into fullnode.

* 10 second slots
2019-02-26 10:48:18 -08:00
9420ba52e9 Squash the new working bank to ensure zero-balance accounts get purged 2019-02-26 10:09:31 -08:00
ec35c1fc79 Fix leader scheduling in replay stage 2019-02-26 09:51:12 -07:00
b752511f41 Attempt to pull the completed replication work into the book 2019-02-26 09:23:12 -07:00
af206111e2 Hoist new leader scheduler up to protocol level
Attempt to feel similar to LeaderScheduler to ease migration.
2019-02-26 08:23:01 -08:00
ba50e1ac81 Move data plane fanout chapter out of proposals 2019-02-26 09:20:09 -07:00
f9f493ee7a Tighten up storage_stage changes 2019-02-26 09:05:00 -07:00
137233b4a1 Add EntryMeta wrapper 2019-02-26 09:05:00 -07:00
3897b66270 Let the bank creator decide where to send transaction fees 2019-02-26 08:06:08 -07:00
feefdca969 Minor cleanup to Bank and LastIdQueue 2019-02-26 06:46:38 -08:00
25690ff078 merge_parents() => squash() (#2943) 2019-02-25 20:34:05 -08:00
897279eddb Encapsulate log::Level so counter macro users don't need to use it 2019-02-25 20:01:30 -08:00
5f5725a4ea Re-add leader scheduler 2019-02-25 19:28:24 -08:00
6a61f25735 Only install rust-bpf if rust-bpf version changes (#2939) 2019-02-25 19:09:16 -08:00
454c66f988 fixup! 2019-02-25 18:17:36 -08:00
3e893ffddc Remove max_tick_height, leader_scheduler from broadcast_service 2019-02-25 18:17:36 -08:00
58eebd7f6c Remove tick counting from broadcast service 2019-02-25 18:17:36 -08:00
ba5077701d Avoid possible simplified lowering of passed struct (#2938) 2019-02-25 17:05:59 -08:00
2f44555437 Fix fullnode test 2019-02-25 16:55:22 -08:00
299b642803 Cleanup fullnode rotate integration test, and unignore two tests 2019-02-25 16:55:22 -08:00
a2bf59cbba Ignore rust toolchain and sysroot 2019-02-25 16:40:35 -08:00
329382f016 Pull BPF enabled rustc and sysroot into SDK (#2936) 2019-02-25 15:35:45 -08:00
67c9bbc6b2 * drop parents once merged (#2930)
* add bank.id() which can be used by BankForks, blocktree_processor
* add bank.hash(), make hash_internal_state() private
* add bank.freeze()/is_frozen(), also useful for blocktree_processor, eventual freeze()ing in replay
2019-02-25 14:05:02 -08:00
6088b3bfc8 Replace DEFAULT_SLOT_HEIGHT with 0 2019-02-25 13:09:13 -08:00
2be7896157 Pull in latest rBPF that includes Rust dependent changes (#2929) 2019-02-25 12:42:48 -08:00
0b37f530ae Start replay stage from the slot-relative blob index, not the global entry height 2019-02-25 11:38:46 -08:00
c13ae10d31 Fix replay_stage to 1) skip leader slots, 2) create/set working banks properly 2019-02-25 11:38:46 -08:00
1e15e6375a Check for entry height in the unchanging bank_forks_info instead of a racy check to blocktree 2019-02-25 11:38:46 -08:00
ed684c5ec6 Build docker image with rust 1.32 2019-02-25 09:16:11 -08:00
2fbdec59cb Generalize access to staked nodes 2019-02-25 08:49:43 -08:00
710f88edda Handle edge cases earlier
We have lots of tests that work off the genesis block.  Also, one
might want to generate a future leader schedule under the assumption
the stakers stay the same.
2019-02-25 08:49:43 -08:00
db899a2813 Inline LeaderSchedule::new_from_bank()
Breaks circular dependency and offers more flexibility in bank's
usage.
2019-02-25 08:49:43 -08:00
aad0d90fdd Use epoch_height to generate schedule instead of last_id
I had suggested the last_id, but that puts an unnecessary dependency
on LastIdsQueue. Using epoch height is pretty interesting in that
given the same set of stakers, you simply increment the seed once
per epoch.

Also, tighten up the LeaderSchedule code.
2019-02-25 08:49:43 -08:00
72b4834446 Add Bank::prev_slot_leader() and Bank::next_slot_leader() 2019-02-25 08:49:43 -08:00
ec48c58df1 Award tx fees to validators in new leader schedule
Also, generalize the leader_schedule functions a bit to allow for
prev_slot_leader and next_slot_leader, should they be needed.
2019-02-25 08:49:43 -08:00
0947ec59c9 Expose the new leader schedule functionality from the bank. 2019-02-25 08:49:43 -08:00
d67211305c Ignore slow benchmarks 2019-02-24 23:15:05 -07:00
c65046e1a2 Use PohRecorder as the Poh synchronization point. (#2926)
Cleanup poh_recorder and poh_service.

* ticks are sent only if poh.tick_height > WorkingBank::min_tick_height and <= WorkingBank::max_tick_height
* entries are recorded only if poh.tick_height >= WorkingBank::min_tick_height and < WorkingBank::max_tick_height
2019-02-24 08:59:49 -08:00
ba7d121724 Switch to Bank::staked_nodes(); want node_id, not staker_id
Also, update LeaderScheduler's code to use node_id as well.
Unfortunately, no unit tests for this, because there's currently
only one way to set staker_id/node_id, and they are both set
to the same value.
2019-02-24 07:52:44 -07:00
a1070e9572 Split ActiveStakers over Bank and LeaderScheduler 2019-02-24 07:52:44 -07:00
f89e83ae49 Delete redundant code 2019-02-23 16:09:00 -08:00
264f502ed7 Query the bank for the current slot leader 2019-02-23 15:51:37 -07:00
c5876ddca9 Make LeaderScheduler::new_with_window_len private
It's useful for unit-testing, but generally isn't a variable
validators should be modifying. Blockstream and BlockstreamService
were the only ones using it. Switching them from a hard-coded 10
to the default didn't cause any test failures, so running with it.
2019-02-23 14:48:27 -07:00
fdf6cae6fb Use bank for leader scheduler's config
This ensures GenesisBlock is always configured with the same
ticks_per_slot as LeaderScheduler. This will make it easier
to migrate to bank-generated schedules.
2019-02-23 14:48:27 -07:00
d26f836212 tmp_copy_ledger -> tmp_copy_blocktree 2019-02-23 08:32:05 -07:00
da98982732 Deprecate tmp_copy_ledger
This should allow us to get rid of all the manual routing of
ticks_per_slot in the test suite.
2019-02-23 07:57:45 -07:00
cc10e84ab7 sample_ledger -> sample_blocktree 2019-02-23 07:08:11 -07:00
6cd91cd7ec Hold slots_per_epoch, not ticks_per_epoch
Same as bank and less invariants to check
2019-02-22 22:02:23 -07:00
e19dbdc527 Use Bank for ticks_per_slot 2019-02-22 22:02:23 -07:00
0b8809da6e Fix duplicated path to fullnode
Fixes flaky tests.
2019-02-22 16:35:40 -08:00
35aefdf1db Reduce test noise (#2907) 2019-02-22 16:27:19 -08:00
66891d9d4e Don't use global storage account
Other accounts would not be able to modify the system account's userdata.
2019-02-22 15:59:55 -08:00
6bca577d6d Bump libc from 0.2.48 to 0.2.49
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.48 to 0.2.49.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-22 16:45:14 -07:00
f5400ccefc Ignore storage test
@sakridge is working on a fix.
2019-02-22 16:18:10 -07:00
a56d717ea8 Add a check that shows why the storage program is failing 2019-02-22 16:18:10 -07:00
11c7aab023 Add some unit-tests 2019-02-22 16:18:10 -07:00
5541eedcc4 Reject modifications to userdata if not owned by the program 2019-02-22 16:18:10 -07:00
77ea4cd285 Reapply dependency Band-aid to make CI happy 2019-02-22 15:56:07 -07:00
8353b420d1 Move blocktree-oriented diagram out of proposals 2019-02-22 15:24:36 -07:00
71602fe04b Fix root package dependencies (#2899) 2019-02-22 14:08:25 -08:00
054c12ea0f Bump hex-literal from 0.1.2 to 0.1.3
Bumps [hex-literal](https://github.com/RustCrypto/utils) from 0.1.2 to 0.1.3.
- [Release notes](https://github.com/RustCrypto/utils/releases)
- [Commits](https://github.com/RustCrypto/utils/compare/hex-literal-v0.1.2...hex-literal-v0.1.3)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-22 13:47:55 -07:00
0003dbf3ba remove unnecessary imports 2019-02-22 12:13:05 -08:00
c07b6c30a1 Remove special casing of ProgramError in blocktree processor
- Also refactor bank.rs and add unit tests
2019-02-22 12:13:05 -08:00
bad48ce83c Split replicator doc into what is implemented and what is not 2019-02-22 13:12:49 -07:00
2d03ae2fae Migrate fullnode to create_tmp_sample_blocktree 2019-02-22 11:18:01 -07:00
3a7008949f Build all deps (#2896) 2019-02-22 09:49:25 -08:00
973ad7554e Remove superfluous GenesisBlock::load() 2019-02-22 08:41:59 -08:00
3be154490d Deprecate create_tmp_sample_ledger 2019-02-22 00:24:46 -07:00
3610768888 Run featurized tests on sub-packages (#2867) 2019-02-21 22:38:36 -08:00
4602d3bf46 Unit-tests can use ordinary keypairs 2019-02-21 22:01:20 -08:00
778583ad08 Inline BlockConfig::ticks_per_slot 2019-02-21 20:37:21 -08:00
fb904e7a29 Enable CUDA persistence mode to reduce surprises 2019-02-21 19:25:17 -08:00
b501090443 Route BankForks into the ReplayStage 2019-02-21 19:25:17 -08:00
f0f55af35b Add scheduler config to genesis
Anything that affects how the ledger is interpreted needs to be
in the genesis block or someplace on the ledger before later
parts of the ledger are interpreted. We currently don't have an
on-chain program for cluster parameters, so that leaves only
the genesis block option.
2019-02-21 17:29:55 -08:00
3e8d96a95b fix failing tests 2019-02-21 16:35:23 -08:00
9713a3ac02 fix clippy warnings 2019-02-21 16:35:23 -08:00
5c9777970d moved fee collection code to runtime 2019-02-21 16:35:23 -08:00
c142a82ae0 Charge transaction fee even in case of ProgramError 2019-02-21 16:35:23 -08:00
18d48f09f8 Plumb blockstreamer name through testnet scripts 2019-02-21 17:24:29 -07:00
deeabb862d Call it blockstreamer 2019-02-21 17:24:29 -07:00
d8f6865338 Rename EntryStream to Blockstream 2019-02-21 17:24:29 -07:00
4a0c759795 Fix misspellings stumbled on 2019-02-21 17:24:29 -07:00
a131c90260 Add doc for api node 2019-02-21 17:24:29 -07:00
fc48062867 Rename active_window_length to active_window_num_slots 2019-02-21 15:48:13 -08:00
f77788447c Debug for Account
Derive prints the full userdata vec which is questionably useful.
2019-02-21 14:57:32 -08:00
d25fc7a649 Stop passing blob_index unnecessarily into ReplayStage 2019-02-21 15:33:01 -07:00
bf3d2bd2ec Update Gossip entry in the book 2019-02-21 15:32:21 -07:00
60a6ff80ee Change votes and associated test/helper functions to vote based on slot height 2019-02-21 15:31:53 -07:00
9e1c5e1ab0 switch vote program to use slot height instead of tick height, change confirmation computation to use slots 2019-02-21 15:31:53 -07:00
20fffd8abf Delete BankForks::finalized_bank() 2019-02-21 13:21:08 -08:00
98ed785711 Cargo.lock 2019-02-21 13:00:19 -08:00
7cb695df12 RetransmitStage now gets a BankForks 2019-02-21 12:56:56 -08:00
c94bc2a0b6 Remove dead code 2019-02-21 12:38:43 -08:00
511085b747 Make trait pub 2019-02-21 13:32:25 -07:00
f76ac94d70 Remove leader_schedule_offset public method
Also,

* Rename the private variable to include units.
* Better doc
2019-02-21 12:28:11 -08:00
32caa55d67 Offer a way to get the leader_schedule from any Bank instance 2019-02-21 12:28:11 -08:00
b69475937f Program tests depend on native/noop (#2873) 2019-02-21 12:22:55 -08:00
f6ff33db8e * add merge_parents(), which means 'eat your parent' (#2851)
* add is_root(), which is false if the bank has a parent
* use is_root() for store_slow and store_accounts to decide whether to purge on zero balance
2019-02-21 12:08:50 -08:00
dcf1200d2a Make Fullnode do less work on rotation, ReplayStage can just pass along more details 2019-02-21 11:13:06 -08:00
40977fa99f More forward-looking test 2019-02-21 10:54:25 -07:00
f4df8ff5b3 Add slot_height() and epoch_height() methods to Bank 2019-02-21 10:54:25 -07:00
080db1c62d Plumb BankForks into GossipService 2019-02-20 22:19:51 -08:00
4d5e2c8a4d Plumb BankForks into RPC subsystem 2019-02-20 21:46:48 -08:00
13d018e3e1 Fix stake selection for the Data Plane (#2863)
* Update data-plane to use stakes instead of a bank directly

* Rename get_stakes to staked_nodes
2019-02-20 21:38:16 -08:00
59ee2b8892 Fullnode now holds a BankForks instead of a Bank 2019-02-20 21:13:04 -08:00
0dde79f42b Push BankForks into Fullnode::new() 2019-02-20 21:13:04 -08:00
a4411ef6a1 Generate a schedule from a bank 2019-02-20 20:33:33 -08:00
3c62e2332e Cleanup stakes for gossip (#2860) 2019-02-20 20:02:47 -08:00
1cd88968cf Remove get_leader_for_next_tick() 2019-02-20 19:33:03 -08:00
28a53959e0 Remove dead types 2019-02-20 18:39:32 -08:00
7c26a4d0a0 Add weighted sampling based on stakes (#2854)
* Add weighted sampling based on stakes
2019-02-20 18:21:08 -08:00
6ed2e4c187 process_blocktree now loads forks 2019-02-20 17:27:02 -08:00
a484c87354 Make gossip selection stake based (#2848) 2019-02-20 17:08:56 -08:00
33c7f92f56 Dial down CI timeouts 2019-02-20 16:43:13 -08:00
b8f6280fe5 Move hash_internal_state tests into runtime
This was intended as a Bank test, but only in blocktree_processor
because of its dependency on Entry, which solana_runtime doesn't
know about.
2019-02-20 16:13:26 -08:00
822bebea46 Allow multiple forks without regenerating the hash 2019-02-20 16:13:26 -08:00
582a7192ec Hold Bank's own parent hash instead of the parent's 2019-02-20 16:13:26 -08:00
5492aad61e Cache ticks until a working bank can pick them up 2019-02-20 14:14:38 -08:00
27f973c923 github review 2019-02-20 14:19:25 -07:00
3357cebcdb Added notes from discussion on discord 2019-02-20 14:19:25 -07:00
7ce9c0a2e9 cleanup runtime chapter 2019-02-20 14:18:43 -07:00
e9daf57d7f Absorb LeaderScheduler's rank_active_set()
Delete overly-complicated tests
2019-02-20 13:13:31 -07:00
1c2169aec7 Use rank_stakes() in LeaderScheduler 2019-02-20 13:13:31 -07:00
cf163a9dab Remove unutilized cuteness 2019-02-20 13:13:31 -07:00
dfcf3f94dc Absorb LeaderScheduler::get_active_set()
No functional changes
2019-02-20 13:13:31 -07:00
b13fb6097f Get rid of the HashSet special case
ActiveSet ranks on construction. get_active_set() is on its way out.
This is a stepping stone.
2019-02-20 13:13:31 -07:00
6e24a4aa50 Less copy pasta 2019-02-20 13:13:31 -07:00
fb1c6cf4da Drop a bunch of dependencies on VotingKeypair
And de-Arc
2019-02-20 13:13:31 -07:00
af1b8f8a26 Absorb vote utilities
But drop dependency on VotingKeypair. Only pass in VotingKeypair
in VotingKeypair tests or integration tests.
2019-02-20 13:13:31 -07:00
88d6db8537 Add ranking and simplify 2019-02-20 13:13:31 -07:00
6ce2c06fd6 Add primitive ActiveStakers and LeaderSchedule objects 2019-02-20 13:13:31 -07:00
136f7e4b3b Update test to validate entry height 2019-02-20 11:42:06 -07:00
0a73bb7efd Add tick-height field to entry event payload 2019-02-20 11:42:06 -07:00
2cf00021d9 Update golden hash to account for tick_height removal 2019-02-20 07:47:04 -08:00
8d38c2f800 Remove Entry::tick_height field 2019-02-20 07:47:04 -08:00
9848de6cda Remove special case in Bank::deposit()
And use it to process the genesis block.
2019-02-20 08:12:37 -07:00
19a3606315 Fix broken test, added some tests to calculate tx fee
Some code cleanup
2019-02-20 08:12:37 -07:00
cc2227d943 rename slot_num 2019-02-20 08:12:37 -07:00
a33921ed34 address review comments 2019-02-20 08:12:37 -07:00
2e75ff27ac Fix test 2019-02-20 08:12:37 -07:00
a27cdf55e7 Credit transaction fees to the slot leader 2019-02-20 08:12:37 -07:00
3d00992c95 Remove dependency on Entry::tick_height 2019-02-20 06:57:38 -08:00
77cb70dd80 Remove dependency on Entry::tick_height 2019-02-19 22:40:10 -08:00
8daba3e563 Add test demonstrating that process_blocktree()'s implementation is lacking 2019-02-19 20:37:06 -08:00
94f9ac0332 DRY up GenesisBlock 2019-02-19 20:34:58 -08:00
a17903a89f Tweak process_blocktree() signature to return a BankForks 2019-02-19 20:01:22 -08:00
dda0a1f39b Move storage tests out of Bank 2019-02-19 17:26:33 -07:00
0ef670a865 Move sender out of poh_recorder (#2837) 2019-02-19 16:22:33 -08:00
04f54655c2 Minor cleanup 2019-02-19 15:53:31 -08:00
dc5590f2bf unuse std (#2833) 2019-02-19 15:27:07 -08:00
bc52fce810 Fix the custom programs command in net.sh 2019-02-19 13:53:43 -07:00
b9bb92099e Go object-oriented
Easy to imagine a trait here that's implemented using a Bank or
a testnet.
2019-02-19 10:59:06 -07:00
64dcc31ac7 Migrate Rewards test from runtime to Bank 2019-02-19 10:59:06 -07:00
36546b4c4c Expose a Bank API for adding native programs
Also use it to tighten up the code to add the builtin programs.
2019-02-19 10:20:27 -07:00
dde886f058 Move Bank to its own crate
Also:
* counters.rs to solana_metrics
* genesis_block.rs to solana_sdk
2019-02-19 07:17:04 -07:00
781f7ef570 fix test_repair_empty_slot 2019-02-18 23:38:28 -08:00
3e8bb32ffd Add test for write_entries() 2019-02-18 23:38:28 -08:00
df310641fb Re-enable and add tests 2019-02-18 23:38:28 -08:00
21ef55f205 re-enable repair service tests 2019-02-18 23:38:28 -08:00
ade36566ea i 2019-02-18 21:56:23 -08:00
08d7a0d52d Upgrade to Rust 1.32.0
$ rustup update stable
2019-02-18 21:44:09 -07:00
1fd2885995 Add missing - 2019-02-18 20:09:18 -08:00
d357640fbf Centralize decentralized timing constants 2019-02-18 19:46:58 -08:00
ad9cd23202 Notify subscribers from ReplayStage 2019-02-18 20:04:30 -07:00
5916177dc8 Drop RpcPubSubService's dependency on the Bank
Pass in RpcSubscriptions instead, which lets you choose a
bank fork when it's time to send notifications.
2019-02-18 20:04:30 -07:00
905b1e2775 Add notify_subscribers() 2019-02-18 20:04:30 -07:00
377d45c9dd Pull RpcSubscriptions out of the Bank 2019-02-18 20:04:30 -07:00
a444cac2aa Switch to upstream AMIs for non-CUDA EC2 testnets 2019-02-18 18:59:56 -08:00
1e714eb6b2 Generate ec2 security group programmatically 2019-02-18 18:59:56 -08:00
3f14466965 Limit blockexplorer versions to 1.x.y
Per semver semantics when blockexplorer 2.0.0 is released it will be
incompatible in some way with 1.x.y and thus should be opt in.
2019-02-18 16:48:33 -08:00
e0b8f4202d Use slot height for BankForks ids 2019-02-18 17:27:20 -07:00
11b14bd3ab Bump reqwest from 0.9.9 to 0.9.10
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.9.9 to 0.9.10.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/compare/v0.9.9...v0.9.10)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-18 13:28:55 -07:00
90684483e2 Make Bank::hash_internal_state() work with checkpoints 2019-02-18 12:47:10 -07:00
760a82cb08 Add optional deploy of custom programs (#2817)
* Add optional deploy of custom programs

* Review comments
2019-02-18 11:43:36 -07:00
0317583489 Move avalanche logic to ClusterInfo
The simulator doesn't depend on RetransmitStage. It depends on
just one function, which is similar in spirit to many of the
methods in ClusterInfo.
2019-02-18 09:08:18 -08:00
1c3f2bba6d Move avalanche simulator to integration tests 2019-02-18 09:08:18 -08:00
7d62bf9a3d Move crds_gossip simulator to integration tests 2019-02-18 09:55:52 -07:00
7c248cd2ef Move expensive test to integration tests
This test passes consistently when the test suite is run with a
single thread. It fails consistently on MacOS when run as part
of the unit-test suite.

No idea why it passes in CI.
2019-02-18 09:27:23 -07:00
e4119268ca Delete expensive integration test in unit-test suite 2019-02-18 09:27:09 -07:00
fc2760e761 Remove bank dependency from poh_recorder (#2810)
* Remove bank dependency from poh_recorder

* clippy
2019-02-18 06:33:07 -08:00
c57084de36 Ignore test_two_fullnodes_rotate integration tests 2019-02-18 06:19:46 -08:00
907aff3b43 Cleanup Poh code 2019-02-17 21:12:55 -07:00
2793404116 Ensure blockexplorer comes back up when nodes are updated instead of restarted 2019-02-17 20:07:12 -08:00
d850f67979 Remove 'Compute' from name ComputeLeaderConfirmationService
struct names should be a noun
2019-02-17 19:44:09 -08:00
8080063024 nit 2019-02-17 19:30:45 -07:00
f33c6eb95f delete leader rotation signal from banking stage 2019-02-17 19:30:45 -07:00
4e3d71c2c9 Batch joins on entire tpumode struct instead of individual services 2019-02-17 19:30:23 -07:00
a074cb78cd Ensure leader services are closed before starting new ones 2019-02-17 19:30:23 -07:00
0dbc33f781 Finish removing getConfirmationTime 2019-02-17 16:27:50 -08:00
25bbc3bc2a wrong error 2019-02-17 15:43:13 -08:00
5f55a9be84 fmt 2019-02-17 15:43:13 -08:00
300e3d151d remove the signal sender since its superfelous to a recv error 2019-02-17 15:43:13 -08:00
2f7911b62a Boot BankError::MaxHeightReached 2019-02-17 16:30:01 -07:00
54dfe708c1 use ref for new_from_parent; test that transactions don't leak to parent 2019-02-17 15:02:08 -07:00
8166925f04 copy a new bank 2019-02-17 15:02:08 -07:00
64f1d93cc3 Use the accounts list from parents up to finalized bank for Account::load apis.
Borrow checker

query the previous parents accounts

cleanup!

s/tree/parents

Tests!  Last_ids need to be inherited as well otherwise nothing works.

new_from_parent
2019-02-17 15:02:08 -07:00
6d67568037 Delete useless wrappers 2019-02-17 14:10:34 -07:00
5003e97479 Inline private functions
Better code coverage in exchange for calling `create_session()`
2019-02-17 14:10:34 -07:00
858068cdc0 Drop sudo, it's now handled internally by the block explorer 2019-02-17 12:29:53 -08:00
65fb307d0f Avoid '' argument to fullnode.sh 2019-02-17 11:43:41 -08:00
2f1fe726f5 Expand imports
tokio is a heavy dependency. This gives us some visibility into
what we're using.
2019-02-17 12:20:05 -07:00
e9b0e3cb9d Move RpcSignatureStatus into its own module
And fixup some imports from previous commits.
2019-02-17 12:20:05 -07:00
34fceca7ff Fix compiler warnings 2019-02-17 12:20:05 -07:00
c646845cd3 Move RpcService into its own module 2019-02-17 12:20:05 -07:00
eb483bc053 Move RpcPubSubService into its own module 2019-02-17 12:20:05 -07:00
50d3fa7437 Move RpcSubscriptions into its own module 2019-02-17 12:20:05 -07:00
9f7fc5f054 Boot unused trait
Some ambitious unit-testing plans unimplemented?
2019-02-17 12:20:05 -07:00
a27e9cb3c2 Add -u option 2019-02-17 10:45:25 -08:00
10270dcbad Add an API node to non-perf testnets 2019-02-17 10:39:27 -08:00
4ff4fb6c38 Add support for an API node that hosts the block explorer 2019-02-17 10:39:27 -08:00
c8c794e340 Use the accounts and status cache from parents up to finalized bank for calls. (#2798)
* Use the accounts list from parents up to finalized bank for Account::load apis.

* Borrow checker

* query the previous parents accounts

* cleanup!

* s/tree/parents

* Tests!  Last_ids need to be inherited as well otherwise nothing works.
2019-02-17 08:01:31 -08:00
97a1e950ef write entries in blocktree now sets parent slot properly (#2800) 2019-02-17 04:36:49 -08:00
9fa8105ae8 Add a way to make a DAG of checkpointed Banks 2019-02-16 21:49:06 -07:00
d68b6ea7b1 Default entry stream socket to location used by the block explorer 2019-02-16 19:14:19 -08:00
58f4709362 Reduce log severity of entry stream errors 2019-02-16 19:10:00 -08:00
f71cd2c6f3 Status cache runs out of space in the bloom filter (#2796)
The cache is designed for 1m statuses, about 1 second worth of transactions at full capacity. Refresh the cache every 1 second worth of ticks.
2019-02-16 16:41:03 -08:00
8ec1f6ea2e Applied review feedback 2019-02-16 17:15:31 -07:00
d63c8ae1ae Add PR guidelines 2019-02-16 17:15:31 -07:00
e39094ac37 Hoist Slot Leader dependencies up to BankingStage 2019-02-16 15:36:31 -07:00
b539389741 Move all Validator dependencies from Bank to blocktree_processor 2019-02-16 15:01:26 -07:00
ac35fe9ed1 Flip the dependency; Create bank before scheduler 2019-02-16 14:16:48 -07:00
3d70afc578 Boot leader scheduler from the bank
Functional change: the leader scheduler is no longer implicitly
updated by PohRecorder via register_tick(). That's intended to
be a "feature" (crossing fingers).
2019-02-16 14:16:48 -07:00
b919b3e3b2 Bank no longer updates a leader scheduler by default 2019-02-16 14:16:48 -07:00
7a7349f2ff Don't update the leader scheduler in bank's default constructor 2019-02-16 14:16:48 -07:00
07b57735b1 Move leader scheduler test out of bank 2019-02-16 14:16:48 -07:00
e42c95a327 Bump bincode from 1.1.1 to 1.1.2
Bumps [bincode](https://github.com/TyOverby/bincode) from 1.1.1 to 1.1.2.
- [Release notes](https://github.com/TyOverby/bincode/releases)
- [Commits](https://github.com/TyOverby/bincode/compare/v1.1.1...v1.1.2)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-16 13:58:37 -07:00
473af78368 Support --entry-stream argument 2019-02-16 10:40:47 -08:00
ab6c7f6ca3 /it/ti/ 2019-02-16 10:40:47 -08:00
599516473a Add top-level run.sh for easy local cluster startup 2019-02-16 10:40:47 -08:00
83ac075b22 Use full app name for better cli help text 2019-02-16 10:40:47 -08:00
3548c6c43a Add support for locally built programs 2019-02-16 10:40:47 -08:00
3bfe2e75b5 Boot new_with_leader_scheduler_config
Only used in one place. Easy enough to use the one with the shared
leader scheduler.
2019-02-16 10:55:58 -07:00
97c93629a5 Don't use the Bank's LeaderScheduler 2019-02-16 10:55:58 -07:00
643384e1ec Add LeaderScheduler constructor to Bank
By passing a config and not a Arc'ed LeaderScheduler, callers
need to use `Bank::leader_scheduler` to access the scheduler.
By using the new constructor, there should be no incentive to
reach into the bank for that object.
2019-02-16 10:55:58 -07:00
1809277e05 Encapsulate Bank accounts
That way we don't need TODOs saying "don't forget to iterate
over checkpoints too". It should be assumed that when the bank
references its previous checkpoint all its methods would
acknowledge it.
2019-02-16 08:41:35 -07:00
7981865fd2 Boot unused confirmation-time from Bank
This broken metric is already submitted to influx. Why make it
available via RPC too? If so, why store it in the bank and not
in the RPC service?
2019-02-16 08:11:43 -07:00
4467d5eb4c Extract process_ledger from Bank
Fullnode was the only real consumer of process_ledger and it was
only there to process a Blocktree. Blocktree is a tree, and a
ledger is a sequence, so something's clearly not right here.
Drop all other dependencies on process_ledger (only one test) so
that it can be fixed up in isolation.
2019-02-16 08:07:26 -07:00
38aed0c886 Bump serde_derive from 1.0.87 to 1.0.88
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.87 to 1.0.88.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.87...v1.0.88)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-16 04:57:32 -08:00
02801b3e75 Bump serde from 1.0.87 to 1.0.88
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.87 to 1.0.88.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.87...v1.0.88)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-16 05:02:10 -07:00
b79d361e6c Add --entry-stream support 2019-02-15 22:52:27 -08:00
9eb8b67b5c Install blockexplorer dependencies 2019-02-15 20:17:46 -08:00
132c664e18 No longer modify external userdata 2019-02-15 18:36:55 -07:00
288645aeb7 Add rewards integration test 2019-02-15 18:36:55 -07:00
55f06f5bad Make vote_program available to reward_program tests
Depending on `solana_vote_program` is not an option because
then vote_program's entrypoint conflicts with reward_program's
entrypoint.

This unfortunately turns the SDK into a dumping ground for all
things shared between vote_program and other programs. Better
would be to create a solana-vote-api crate similar to the
solana-rewards-api crate.
2019-02-15 18:36:55 -07:00
a2cb18bfe9 Only require voting account to be signed 2019-02-15 18:36:55 -07:00
d35b3754a2 Reorg
Now clients can use all the libraries to create transactions
and dissect account data without needing to be constrained about
what can be compiled into a shared object or BPF.

Likewise, program development can move forward without being
concerned with bloating the shared object.
2019-02-15 18:36:55 -07:00
7f3aca15dd Add a library for creating Rewards transactions
And move out of the SDK
2019-02-15 18:36:55 -07:00
2c5cbaff25 Add unit-test for Rewards program 2019-02-15 18:36:55 -07:00
134cd7ab04 Add Rewards program 2019-02-15 18:36:55 -07:00
c74b8b6df3 Add a design for leader schedule rotation and genesis. (#2714)
Leader schedule rotation.
2019-02-15 16:34:34 -08:00
573116e259 Remove count_last_ids API 2019-02-15 11:05:41 -08:00
71ab030ea4 Fiddle with timeouts to make CI happy 2019-02-14 18:40:31 -08:00
c4125b80ec Reduce max_tick_height to speed up CI 2019-02-14 18:40:31 -08:00
626a381ddc Collect and re-forward packets received while TpuForwarder is shutting down 2019-02-14 18:40:31 -08:00
5333bda234 test_3_partitions is unstable, ignore 2019-02-14 17:30:42 -08:00
cceeb8e52d On leader rotation forward any unprocessed transaction packets to the new leader 2019-02-14 14:49:48 -08:00
94a0d10499 Avoid overrunning slot0 2019-02-14 14:49:48 -08:00
3f6aba23dd Add custom BlocktreeConfig for bad tests that break with the default 2019-02-14 14:49:48 -08:00
cd9dac4c7e Use a reasonable max_tick_height 2019-02-14 14:49:48 -08:00
f478894729 Revert "Set DEFAULT_TICKS_PER_SLOT = 32 to stabilize integration tests"
This reverts commit 2d2572d2cb.
2019-02-14 14:49:48 -08:00
97790480c9 Increase poll_for_signature retry timeout 2019-02-14 14:49:48 -08:00
9643c39bf6 Fix slot in block event 2019-02-14 14:25:54 -08:00
0a08d40237 fix repair service to support multinode tests that depend on repairs 2019-02-14 13:37:55 -08:00
d029997aef add parent slot to broadcast 2019-02-14 13:37:55 -08:00
ceb27b431e Add tree test to test multiple chaining children 2019-02-14 13:37:55 -08:00
d3761c2435 Change definitions in book to match current changes 2019-02-14 13:37:55 -08:00
b25d8ce764 Comment out repair service tests, to be fixed in another PR 2019-02-14 13:37:55 -08:00
34da362ee6 fix blocktree tests 2019-02-14 13:37:55 -08:00
de6109c599 replace num_blocks with parent block 2019-02-14 13:37:55 -08:00
736f08815e Add protocol request for requesting the highest blob in a slot (#2759) 2019-02-14 12:47:21 -08:00
106645d9bd add message terminator (newline) to socket writer output to ease client integration 2019-02-14 12:27:53 -08:00
c55ada2f26 Fix wallet test 2019-02-14 13:26:46 -07:00
4e4a1643c4 Boot SystemInstruction::Spawn 2019-02-14 13:26:46 -07:00
e1e84d4465 Don't reassign owner in Spawn 2019-02-14 13:26:46 -07:00
4a0009365e Use Account::owner as loader for executable accounts 2019-02-14 13:26:46 -07:00
3849b8ece4 Bump bincode from 1.0.1 to 1.1.1 (#2709)
* Bump bincode from 1.0.1 to 1.1.1

Bumps [bincode](https://github.com/TyOverby/bincode) from 1.0.1 to 1.1.1.
- [Release notes](https://github.com/TyOverby/bincode/releases)
- [Commits](https://github.com/TyOverby/bincode/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* update autocfg 0.1.1 => 0.1.2
2019-02-14 12:46:22 -06:00
f2ab8f17c8 Update staking section 2019-02-14 07:45:58 -07:00
48671a1728 Let native_loader own native executable accounts 2019-02-13 20:55:36 -08:00
72b6ec4aa8 Add native program account constructor 2019-02-13 20:55:36 -08:00
8790a92f07 Adjust create_counter to avoid imposing an AtomicUsize import on users 2019-02-13 20:24:04 -08:00
0f8ff07b51 tpu now hangs on to its cluster_info 2019-02-13 16:16:18 -08:00
dca73068c5 address review comments 2019-02-13 15:31:45 -08:00
4094e62ed3 propose architecture change for fullnode 2019-02-13 15:31:45 -08:00
7a0e897960 address review comments 2019-02-13 15:31:45 -08:00
e78fc74e03 Update fullnode diagram to reflect bank, voting and forks changes 2019-02-13 15:31:45 -08:00
5054e74f7f update to edge book 2019-02-13 14:08:19 -07:00
72e6a39172 Fix the link to proposals chapter in the CONTRIBUTING guidelines 2019-02-13 14:08:19 -07:00
be73db13e0 Improve EntryStream trait and struct names 2019-02-13 13:07:30 -08:00
cbaba5cbf3 Review comments 2019-02-13 13:07:30 -08:00
c1447b2695 Add block event logic to EntryStreamStage 2019-02-13 13:07:30 -08:00
e58f08b60f Refactor EntryStream
Co-authored-by: Sunny Gleason <sunny.gleason@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2019-02-13 13:07:30 -08:00
662d62f561 Always assert on the main test thread to abort quickly 2019-02-13 12:54:06 -08:00
cf4813a1ec Add tests to transact with a cluster rotating at 1 tick per slot 2019-02-13 12:54:06 -08:00
b03636dc33 Bolster test_fullnode_rotate() checks 2019-02-13 12:54:06 -08:00
6187779d10 Wait for monitor threads to exit before Blocktree destruction 2019-02-13 12:54:06 -08:00
ddc8bfed29 Fix bad window_send_test channel logic
Test could hang if the blobs are not sent in the right order.
2019-02-13 11:23:54 -08:00
f1221d724d Consolidate logic with entry helper function
Creates an entry and updates the hash.
Also cleanup blobs creation in test_replay
2019-02-13 11:23:54 -08:00
aec44e3761 Add design for the leader validator loop (#2650) 2019-02-13 12:00:43 -07:00
aed07f0f48 Bump jsonrpc-derive from 10.0.2 to 10.1.0 (#2748)
* Bump jsonrpc-derive from 10.0.2 to 10.1.0

Bumps [jsonrpc-derive](https://github.com/paritytech/jsonrpc) from 10.0.2 to 10.1.0.
- [Release notes](https://github.com/paritytech/jsonrpc/releases)
- [Commits](https://github.com/paritytech/jsonrpc/compare/v10.0.2...v10.1.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Bump version for all jsonrpc crates; remove pubsub dependency in vote-signer
2019-02-13 10:44:22 -07:00
c178fc7249 Rewrite get_votes()
Panic if deserialize fails.
2019-02-13 10:05:28 -07:00
41554f433b Fix VoteTransaction::get_votes() 2019-02-13 10:05:28 -07:00
863956d09c Add multinode test for two nodes rotating at 1 tick per slot 2019-02-12 21:17:06 -08:00
7118178e2c Correctly compute max_tick_height when starting up a node 2019-02-12 21:17:06 -08:00
1eabe66c85 setup_leader_validator: remove unnecessary ticks_per_slot parameter 2019-02-12 21:17:06 -08:00
2de0a9e453 Log on bogus blobs 2019-02-12 21:17:06 -08:00
0bb6940c1a Faster exit for storage_stage client
Shorten the timeout and check for exit on every iteration
of fetching a last id.
2019-02-12 20:45:22 -08:00
e341b33f21 Remove ticks_per_slot from Blocktree::write_entries(), it already knows 2019-02-12 15:52:27 -08:00
6abdd6401d clippy: passing BlocktreeConfig by ref is ok 2019-02-12 15:52:27 -08:00
6632c7026d Pass a BlocktreeConfig into all ledger helper functions 2019-02-12 15:52:27 -08:00
c474cf1eef Pass BlocktreeConfig around as a reference 2019-02-12 15:52:27 -08:00
e26cd2eb26 Make Genesis block handle extra tokens for the leader (#2743) 2019-02-12 15:49:23 -08:00
b33becabca rename flag 2019-02-12 15:06:52 -08:00
3c8a8640aa restructure test_broadcast_last_tick test to check for is_last_blob 2019-02-12 15:06:52 -08:00
a1b5ea9cb1 test for is_last_blob at end of broadcast 2019-02-12 15:06:52 -08:00
bc162637a6 Add is_last_blob flag to blob to signal the end of a slot 2019-02-12 15:06:52 -08:00
8f1b7c3fff Enable test_replay (#2741)
* Enable test_replay

* Refactor get_last_id

* Fix test ledger path
2019-02-12 15:03:11 -08:00
be71f49d80 Change write_entries() and create_tmp_ledger() to take ticks_per_slot (#2736)
* Change write_entries() and create_tmp_ledger() to take ticks_per_slot

* PR nits
2019-02-12 13:14:33 -08:00
8b39eb5e4e Replace Blob Ids with Forward property (#2734)
* Replace Blob Id with Blob forwarding

* Update simulation to properly propagate blobs
2019-02-12 10:56:48 -08:00
1173cf7ed4 review comments 2019-02-12 08:41:02 -08:00
b4fd141105 fix broken test 2019-02-12 08:41:02 -08:00
0002b5dd02 Write to ledger in BroadcastService
- Also disconnect the channel between TPU and TVU
2019-02-12 08:41:02 -08:00
709598541f Remove stale TODO comment 2019-02-11 22:13:07 -08:00
aa781811af Add multinode tests demonstrating leader rotation at 1 tick per slot 2019-02-11 19:50:33 -08:00
b595bf8f44 Set blob_index correctly when tick_height is at the last tick of a slot 2019-02-11 19:50:33 -08:00
f6979a090e leader_scheduler: reduce the amount of special case handling for tick_height 0 2019-02-11 19:05:14 -08:00
2e1dcd84f9 Add Avalanche Simulation (#2727)
- No packet drops yet
- Optimistic retransmits without leader-id
2019-02-11 16:20:31 -08:00
144d321193 Remove Box for RPC pubsub subscriptions 2019-02-11 15:47:29 -08:00
d41dec9395 Make EntryStreamStage optional 2019-02-11 14:07:24 -08:00
f977327c7b Move EntryStream into its own Tvu stage 2019-02-11 14:07:24 -08:00
aac1a58651 Try harder to keep LeaderSchedulerConfig and BlocktreeConfig in sync 2019-02-11 13:10:12 -08:00
095afdfe47 Merge leader_to_validator/validator_to_leader 2019-02-11 08:57:44 -08:00
4ae1783b97 Remove code duplication between leader_to_validator/validator_to_leader 2019-02-10 17:53:42 -08:00
cd92adb1c6 Stop sending metrics by default
`source scripts/configure-metrics.sh` can be used at any time to easily
activate metrics if desired for local development and test.
2019-02-10 17:24:45 -08:00
7dec40ff05 slot 0 now contains the same number of ticks as all subsequent slots 2019-02-10 16:34:10 -08:00
4b38ecd916 fix tpu tvu bank race (#2707)
* Fix tpu tvu bank race

* Test highlighting race between tvu and tpu banks during leader to leader transitions
2019-02-10 16:28:52 -08:00
02c0098d57 Less --verbose by default 2019-02-10 10:19:16 -08:00
1e58c585d3 Add retry_get_balance function
clients don't need to know about json
2019-02-10 09:08:16 -08:00
ed4e9febe0 Refactor wallet processing
Yuge functions
2019-02-10 09:08:16 -08:00
1c61415cee Remove stale TODO. #1899 was resolved a while ago 2019-02-09 16:57:46 -08:00
c02625f91a Ban Default::default() 2019-02-09 10:12:32 -08:00
da5b777ee7 Purge Default::default() 2019-02-09 10:12:32 -08:00
a6aaca814c Rename enum Config to enum PohServiceConfig 2019-02-09 10:12:32 -08:00
ab3dd2a1b3 Integrate the blocktree proposal into the book (#2704) 2019-02-08 20:27:35 -07:00
7b7a2fc52b Rename Appendix to API Reference
And move before the proposals, since all this stuff is already
implemented.
2019-02-08 18:08:00 -07:00
95b28d4d8c Move now to after super majority time is calculated
'now' could end up being earlier than the supermajority-calculated time,
leading to underflow errors and thread panic.
2019-02-08 15:53:23 -08:00
1278396bd5 Cleanup consecutive entries code from window_service (#2697)
* Remove returning entries from db_ledger on insert

* Fix tests to check for correctness

* Delete generate_repairs and max_repair_entry_height
2019-02-08 14:19:28 -08:00
0e29868e34 add ticks_left_in_block (#2694)
* add ticks_left_in_block

* de-combine tests
2019-02-08 10:30:14 -08:00
0115a1f834 Remove unused SocketAddr 2019-02-08 10:23:39 -08:00
cf103add54 Remove old Tpu leader rotation shutdown mechanism 2019-02-08 09:07:35 -08:00
766af58cd8 Prune unnecessary test imports 2019-02-08 08:43:11 -08:00
5200435bab Strip unused return type 2019-02-08 08:43:11 -08:00
56734dca3b Align Tpu::new() and Tpu::switch_to_leader() arguments 2019-02-07 21:33:49 -08:00
dbaf8e66ab Remove code duplication 2019-02-07 21:33:49 -08:00
6e7c5f205b Rename db_ledger to blocktree (#2698) 2019-02-07 20:52:39 -08:00
e7df3cfe22 thin_client grooming: remove dead code, improve var names and error reporting 2019-02-07 19:41:58 -08:00
0e8540417f Add get_next_last_id 2019-02-07 19:41:58 -08:00
c3ad0eebec Clean up get_last_id() 2019-02-07 19:41:58 -08:00
c82ffaabdc Rename, purge use of term delta
This would be a fine document to introduce the term delta, but
it looks like the content flows just fine without it.
2019-02-07 16:25:23 -07:00
4e6a9b029a finalized -> frozen 2019-02-07 16:25:23 -07:00
3e519faaa8 Move to 80-char lines 2019-02-07 16:25:23 -07:00
e2eb7c1ba7 Render ASCII art 2019-02-07 16:25:23 -07:00
87ba5b865d Fix markdown 2019-02-07 16:25:23 -07:00
992f2790e7 Cleanup 2019-02-07 16:25:23 -07:00
e1a099632e fork design book 2019-02-07 16:25:23 -07:00
fd7db7a954 Support multiple forks in the ledger (#2277)
* Modify db_ledger to support per_slot metadata, add signal for updates, and add chaining to slots in db_ledger

* Modify replay stage to ask db_ledger for updates based on slots

* Add repair send/receive metrics

* Add repair service, remove old repair code

* Fix tmp_copy_ledger and setup for tests to account for multiple slots and tick limits within slots
2019-02-07 15:10:54 -08:00
5bb4ac9873 Cleanup 2019-02-07 16:09:04 -07:00
31b0d14856 wip, initial explanation on vote signer validator and stake owner relationship 2019-02-07 16:09:04 -07:00
952ab2bde5 Runtime fix 2019-02-07 11:30:05 -08:00
3c6af52a71 Fix pay-to-self Accounts bug (#2682)
* Add failing tests

* Fix tests

* Plumb AccountLoadedTwice error

* Fixup budget cancel actions to not depend on duplicate accounts

* Use has_duplicates

* Update budget-based golden
2019-02-07 12:14:10 -07:00
6317bec7aa Avoid empty --features= arg to avoid unnecessary cargo building 2019-02-07 10:42:57 -08:00
eb3ba5ce2d tmi: disable --verbose by default. `export V=1` to request verbosity 2019-02-07 10:42:57 -08:00
1f0b3f954a leader_scheduler: replace older terminology with ticks, slots and epochs 2019-02-07 10:42:57 -08:00
cdb2a7bef3 Move runtime benchmark 2019-02-07 09:46:06 -08:00
f6515b2b6a Remove top-level dependencies on solana-runtime's dependencies 2019-02-07 09:46:06 -08:00
5128d7d6c3 Move runtime.rs into its own crate 2019-02-07 09:46:06 -08:00
731e5e1291 Boot lua loader
Good fun, but unnecessary and I haven't been updating the rlua
dependency. If someone wants this, it can be developed outside
the solana repo.
2019-02-07 10:25:11 -07:00
cedee73548 Temporarily bump DEFAULT_TICKS_PER_SLOT to 64
See solana-labs/solana#2675
2019-02-07 09:16:43 -08:00
8136d52c0b Whitelist the metrics-solana-com buildkite agent from docker container cleanup 2019-02-07 08:33:53 -08:00
d1945c29d7 Don't depend on solana_native_loader for IDs in the SDK 2019-02-07 08:23:44 -08:00
83b40e4f30 Inline assertions from overreaching helper
The assert_counters() helper creates unreadable tests and makes
us have to update every test any time a counter is added. Instead,
we can just assert the values of any particular counters the test
may have affected.
2019-02-07 08:43:52 -07:00
95ac6305bc Remove unnecessary dependencies on fullnode mod 2019-02-06 21:31:48 -08:00
ab4828aae7 Replace role_notifiers tuple with two explicit fields 2019-02-06 21:31:48 -08:00
c506423e70 Remove superfluous imports 2019-02-06 21:31:48 -08:00
f0843fc5f1 NodeServices: de-pub, remove dead code 2019-02-06 21:31:48 -08:00
c87e035302 Remove multinode test dependency on Fullnode internals 2019-02-06 20:38:22 -08:00
abb9a72b27 Reduce Fullnode public API surface 2019-02-06 20:04:51 -08:00
acc6bf1564 Don't over complicate the solution 2019-02-06 19:55:12 -08:00
db688207a5 Add abort signals to tvu/tpu receivers 2019-02-06 19:55:12 -08:00
9681c4d468 Fix resource hogging when waiting for role transition 2019-02-06 19:55:12 -08:00
d9e2b94d7a bank::new_with_leader_scheduler_config() - remove Option<> 2019-02-06 19:47:09 -08:00
f789038baa Consolidate fullnode ledger helpers 2019-02-06 19:47:09 -08:00
2e23b03f94 Remove dead code 2019-02-06 19:47:09 -08:00
5181a2a9b1 Guard against invalid tick heights 2019-02-06 14:23:10 -08:00
2d2572d2cb Set DEFAULT_TICKS_PER_SLOT = 32 to stabilize integration tests 2019-02-06 14:23:10 -08:00
fa553029d5 Temporarily disable test_validator_to_leader_transition 2019-02-06 14:23:10 -08:00
c986a20bcf Disable unstable test: test_multi_node_dynamic_network 2019-02-06 14:23:10 -08:00
c5a74ada05 leader_scheduler: remove bootstrap_height 2019-02-06 14:23:10 -08:00
73979d8f5a Remove sleep, fund the vote account faster 2019-02-06 14:23:10 -08:00
f90d96367d Add Fullnode::run() to optionally manage node role transitions automatically 2019-02-06 14:23:10 -08:00
5f565c92c9 cargo incremental builds breaks Rust BPF, locally disable it (#2674) 2019-02-06 13:59:10 -08:00
7452486c72 Kill running docker containers left over from a previous job 2019-02-06 13:57:11 -08:00
afdf0efd31 Disable bpf_rust temporarily 2019-02-06 13:31:35 -08:00
7fc271ef97 Bump stable timeout 2019-02-06 13:31:35 -08:00
582ba4f173 Move economics into the proposed changes
Once this is implemented, we'll move it into the "A Solana Cluster"
section.
2019-02-06 09:29:52 -07:00
0229c97071 Move economics images into img/
And flip the exe bit
2019-02-06 09:29:52 -07:00
c0b398c7c9 Fix markdown and typo 2019-02-06 09:29:52 -07:00
549f9676f1 Allow validators to accumulate credits for voting 2019-02-05 14:24:49 -07:00
6248624ee7 Bump jsonrpc-derive from 10.0.1 to 10.0.2
Bumps [jsonrpc-derive](https://github.com/paritytech/jsonrpc) from 10.0.1 to 10.0.2.
- [Release notes](https://github.com/paritytech/jsonrpc/releases)
- [Commits](https://github.com/paritytech/jsonrpc/compare/v10.0.1...v10.0.2)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-05 08:17:25 -07:00
0025d36880 Move solana proper back to paritytech/jsonrpc 2019-02-04 22:17:23 -07:00
4985b682c3 Move vote_signer back to paritytech/jsonrpc 2019-02-04 22:17:23 -07:00
85333c5d62 Bump serde_derive from 1.0.85 to 1.0.87
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.85 to 1.0.87.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.85...v1.0.87)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-04 17:07:01 -07:00
3feda8a315 ReplayStage asking ledger for updates (#2597)
* Modify replay stage to ask db_ledger for updates instead of reading from upstream channel

* Add signal for db_ledger to update listeners about updates

* fix flaky test
2019-02-04 15:33:43 -08:00
5375c420c1 headers style have been adjusted 2019-02-04 14:25:26 -07:00
ac9f6a77c9 Fix compilation errors due to missing "features" section in Cargo.toml
- e.g. compilation breaks during testnet deployment with Cuda enabled
2019-02-04 11:30:40 -08:00
58f4e0653a Updates to edge testnet dashboard
- Update leader/validator pipeline stage graph, as any node can be
  doing either of the roles
- Update network stats graphs to remove hostname based filtering
2019-02-04 11:08:39 -08:00
03e6a56b3c Add datetime to EntryStream message 2019-02-04 11:03:54 -08:00
32f19c5c19 Bump serde from 1.0.85 to 1.0.87
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.85 to 1.0.87.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.85...v1.0.87)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-04 09:08:09 -07:00
98e893c69b Avoid empty --features= arg to avoid unnecessary cargo building 2019-02-02 20:08:49 -08:00
fea480526b Add macOS tip 2019-02-02 20:08:49 -08:00
4aa6695a13 source ulimit-n.sh so it applies to the current shell 2019-02-02 20:08:49 -08:00
a7e5423ede Set ulimit -n 2019-02-02 20:08:49 -08:00
3ff8bbcf65 Cleanup economic design (#2649)
* Remove section numbers
* Fix all hyperlinks
* Add detail on protocol-designated minimum tx fee amount
2019-02-02 18:35:18 -08:00
9d34ded5f3 Update and fix test_broadcast_last_tick (#2644) 2019-02-01 17:13:15 -08:00
511d8275d6 Document current vote program semantics
And add a new 'staker_id' VoteState member variable to offer a path to
work our way out.  Update leader scheduler to use staker_id, but
continue setting it to 'from_id' for the moment.

No functional changes here.
2019-02-01 16:03:46 -08:00
0a9226ec8e Use voting helper 2019-02-01 16:03:46 -08:00
9c07a8c26a VoteProgram -> VoteState 2019-02-01 16:03:46 -08:00
6058bfb687 Simplify voting helpers 2019-02-01 16:03:46 -08:00
7a6d730db3 Skip retransmit when node is leader (#2625)
* Skip retransmit when node is leader

* Fix window test
2019-02-01 14:30:26 -08:00
2985988f0d Re-enable test_broadcast_last_tick (#2639) 2019-02-01 14:23:20 -08:00
d62c9ac309 Create program/ crate to avoid / crate dependency on bpfloader
The bpfloader crate was triggering cargo to perform excessive rebuilds
of in-workspace dependencies.  Unclear why exactly, but seems related to
the special dual crate-type employed by bpfloader.
2019-02-01 12:42:46 -08:00
85c8af08b3 Link dangling program cuda features to the src/ crate 2019-02-01 12:42:46 -08:00
21c09073a1 Add help script to easily run all integration tests 2019-02-01 12:42:46 -08:00
40acaee446 Remove unnecessary abstractions and helper functions 2019-02-01 12:42:46 -08:00
d9a22705ce Broadcast Service should handle SendError
- After TVU shuts down, the broadcast service will get a SendError
  when it tries to send blobs to it
2019-02-01 12:28:00 -08:00
dad0bfe447 Replace transaction traits with structs
Also:
* SystemTransaction::new -> new_account
* SystemTransaction::new_create -> new_program_account
2019-02-01 11:38:25 -08:00
1b3e7f734a Update solana-vote-signer to Rust 2018 2019-02-01 12:12:26 -07:00
0e58023794 Bump serde_json from 1.0.37 to 1.0.38
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.37 to 1.0.38.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.37...v1.0.38)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-02-01 10:06:21 -07:00
4fb9c8a547 Bump timeout 2019-02-01 07:11:17 -08:00
43cce3a8fc spelling 2019-02-01 07:11:17 -08:00
344427c1dc Update to rust nightly 2019-01-31 2019-02-01 07:11:17 -08:00
82a2080e45 Rename VoteSignerProxy to VotingKeypair
Works just like a normal Keypair, but will only sign voting
transactions.
2019-02-01 07:11:17 -07:00
9a4abe96c7 Reduce VoteSignerProxy to KeypairUtil 2019-02-01 07:11:17 -07:00
d87c2eb903 Fullnode::new() - provide FullnodeConfig as a ref 2019-01-31 21:12:36 -08:00
65708f234d Remove unused import 2019-01-31 21:12:36 -08:00
b6b179af97 Fix bad merge 2019-01-31 20:15:04 -08:00
37003da854 Fix potential of checking tvu bank for truth when it's behind (#2614)
* Fix race between tpu and tvu, where tvu bank is not caught up to tpu bank

* Add test

* Cleanup Fullnode tests
2019-01-31 19:21:02 -08:00
3f323aba1a Search and destroy loitering processes from previous CI runs 2019-01-31 16:17:44 -08:00
29889a90e5 ignore ledger-tool/target (#2624) 2019-01-31 16:09:56 -08:00
ed478675ba Push and query the ClusterInfo for votes. (#2622) 2019-02-01 05:21:29 +05:30
9767468b7f Remove unneeded Option 2019-01-31 13:53:59 -08:00
8ba1d5f426 treat genesis special (#2615)
* treat genesis special

* fix poh_recorder to understand new world order
2019-01-31 13:53:08 -08:00
84567d36cf Leader scheduler groundwork for Blocktree (#2599)
* Groundwork for entry tree, align constants with definitions in the book

* Fix edge case in test, node can keep generating ticks between handle_role_transition and exit() call
2019-01-31 13:44:24 -08:00
32162ef0f1 Connect TPU's broadcast service with TVU's blob fetch stage (#2587)
* Connect TPU's broadcast service with TVU's blob fetch stage

- This is needed since ledger is being written only in TVU now

* fix clippy warnings

* fix failing test

* fix broken tests

* fixed failing tests
2019-01-31 13:43:22 -08:00
2dd20c38b2 fix the test 2019-01-31 12:55:17 -08:00
aa1bd603e6 Fix recvmmsg test for timeout 2019-01-31 12:55:17 -08:00
e104941569 Add design proposal for reliable vote transmission (#2601)
* reliable vote transmission design proposal

* summary

* comments
2019-01-31 07:34:49 -08:00
2754ceec60 StatusDeque split into separate objects with their own root checkpoint strategy (#2613)
Split up StatusDeque into different modules

* LastIdQueue tracks last_ids
* StatusCache keeps track of signature statuses
* StatusCache stores success as a bit in a bloom filter
* Overhead for 1m Ok transactions is 4mb in memory
* Less concurrency between the objects, last_id and status_cache are read and written to at different points in the pipeline
* Each object has its own strategy for merging into the root checkpoint
2019-01-31 06:53:52 -08:00
609e915169 Fix clippy warning
Always pass Arcs by reference. Then you'll only need to clone()
to cross thread boundaries.
2019-01-30 21:59:05 -07:00
11f1c00ca7 Only send pubkey to ReplayStage 2019-01-30 21:59:05 -07:00
a74b24fdf0 Only store the fullnode's pubkey
Only vote_signer is used for signing
2019-01-30 21:59:05 -07:00
e25992a011 Always give Fullnode a vote signer
This will allow us to use the signer's pubkey as the node id.

Disable voting with a configuration option.
2019-01-30 21:59:05 -07:00
00bb5925e1 use a .gitignore'd file name for transactionCount (#2609) 2019-01-30 20:19:10 -08:00
1b50fbbc90 remove Result<> from Blob accessors, add parent (#2608)
* remove Result<> from Blob accessors, add parent
* update chacha's golden
* fixup benches
2019-01-30 20:18:28 -08:00
a746969995 Don't set socket as blocking in recvmmsg for non Linux targets (#2611)
* Don't set socket as blocking in recvmmsg for non Linux targets

- The user of the function is controlling this flag

* added a test
2019-01-30 19:47:53 -08:00
c536a0bf04 Remove mention of BCC 2019-01-30 18:00:04 -07:00
5b8e7bfcf2 s/voter/validator 2019-01-30 15:44:51 -07:00
3cbbceec78 rewarding 2019-01-30 15:44:51 -07:00
e684fafb68 fmt 2019-01-30 15:44:51 -07:00
651342b3db cleanup fork selection 2019-01-30 15:44:51 -07:00
c01290438f Move virtual genesis tick into the ledger proper as entry 0 2019-01-30 14:02:07 -08:00
9e9c82869a create_tmp_sample_ledger() need not return the genesis block 2019-01-30 14:02:07 -08:00
494b143453 Delete create_tmp_genesis 2019-01-30 14:02:07 -08:00
8cc1cde0fe create_tmp_sample_ledger() now returns entry_height and last_id 2019-01-30 14:02:07 -08:00
883fc39c80 Rename EntryTree to Blocktree 2019-01-30 13:29:34 -07:00
1c0758e3bd Accounts refactoring for forking.
* Move last_id and signature dup handling out of Accounts.
* Accounts that handle overlays.
2019-01-30 11:36:49 -08:00
668d353add Inline VoteSigner::new_vote_account
So that we can stop using the validator keypair to fund
the voting account.
2019-01-30 10:42:42 -07:00
06a1681fdc Remove redundant annotations 2019-01-30 10:42:42 -07:00
a16e41002e reduce gossip nodes in concurrent tests for CI 2019-01-30 10:26:28 -07:00
16e705dc75 Boil away unneeded Fullnode::new_* functions 2019-01-29 20:10:10 -08:00
b52228feb9 Remove assumption that the mint starts with 10_000 tokens 2019-01-29 20:10:10 -08:00
25f25d0f82 new_fullnode: don't return the genesis_block, nobody uses it 2019-01-29 17:51:07 -08:00
85e7046caf Use signer for signing transactions, not constructing them 2019-01-29 18:35:05 -07:00
c741a960b9 Generalize Transaction::new to accept anything that implements KeypairUtil 2019-01-29 18:35:05 -07:00
34c8b2cc2f Remove redundant Arc 2019-01-29 18:35:05 -07:00
278effad49 Implement KeypairUtil for VoteSignerProxy 2019-01-29 18:35:05 -07:00
a0bed5375d remove println!, add check to keep it out (#2585)
* remove debugging prints

* remove println!, add check to keep it out
2019-01-29 16:02:03 -08:00
9eecd549e4 Bump rand from 0.6.4 to 0.6.5 (#2564)
* Bump rand from 0.6.4 to 0.6.5

Bumps [rand](https://github.com/rust-random/rand) from 0.6.4 to 0.6.5.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.6.4...0.6.5)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Update rand_core, rand_jitter, rand_os

fixes compile errors due to type mismatch from differing versions
2019-01-29 17:44:34 -06:00
a2c3369713 storage_state field doesn't actually exist 2019-01-29 12:34:59 -08:00
1f9ab7f58f copy bank for TPU 2019-01-29 12:11:48 -08:00
3e1a926aa6 wip 2019-01-29 12:11:48 -08:00
57f82934f2 Bump hex-literal from 0.1.1 to 0.1.2 (#2565)
Bumps [hex-literal](https://github.com/RustCrypto/utils) from 0.1.1 to 0.1.2.
- [Release notes](https://github.com/RustCrypto/utils/releases)
- [Commits](https://github.com/RustCrypto/utils/compare/opaque-debug_v0.1.1...hex-literal-v0.1.2)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-29 13:15:49 -06:00
f3a8aec64d Bump tokio from 0.1.14 to 0.1.15 (#2557)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 0.1.14 to 0.1.15.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Changelog](https://github.com/tokio-rs/tokio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-0.1.14...tokio-0.1.15)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-29 13:12:50 -06:00
e2e5bc65a9 Bump assert_cmd from 0.10.2 to 0.11.0 (#2580)
* Bump assert_cmd from 0.10.2 to 0.11.0

Bumps [assert_cmd](https://github.com/assert-rs/assert_cmd) from 0.10.2 to 0.11.0.
- [Release notes](https://github.com/assert-rs/assert_cmd/releases)
- [Changelog](https://github.com/assert-rs/assert_cmd/blob/master/CHANGELOG.md)
- [Commits](https://github.com/assert-rs/assert_cmd/compare/v0.10.2...v0.11.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>, Mark Sinclair Jr <mark@solana.com>

* Replace use of removed `Command::main_binary`

assert_cmd 0.11.0 removed this; replaced with
`Command::cargo_bin(env!("CARGO_PKG_NAME"))`
2019-01-29 13:10:48 -06:00
df136578d4 Remove unnecessary FullnodeConfig::rpc_port option 2019-01-29 10:22:39 -08:00
ae7f169027 Add FullnodeConfig struct to Fullnode::new* functions
This avoids having to touch *every* Fullnode::new* call site when
a new fullnode option is added
2019-01-29 09:42:48 -08:00
6da7a784f2 Stream entries (#2582)
* Add entry streaming option

* Fix tests

* Remove obsolete comment

* Move entry stream functionality to struct w/ trait in order to test without i/o
2019-01-29 00:21:27 -08:00
12cddf725e Harmonize Fullnode::new* function arguments 2019-01-28 22:37:56 -08:00
d8861c2a5f Wait until the leader shows up on gossip 2019-01-28 22:37:56 -08:00
145fb3675d check for debugging lint in CI (#2578)
* check for debugging lint in CI
* nit
* add TODO
2019-01-28 18:32:30 -08:00
77e8cb2718 Update nominal() checks for json genesis block 2019-01-28 17:08:59 -08:00
a8ea6471e7 Add ledger-tool tests to CI 2019-01-28 17:08:59 -08:00
bfaf5634a1 .unwrap() in tests instead of assert!()ing .is_ok() for a better failure message 2019-01-28 16:10:32 -08:00
53afa64634 Remove storage_state from the bank
Construct in TVU and pass to RPC and StorageStage instead.
2019-01-28 15:41:41 -08:00
c9bf9ce094 eliminate re-use of a TX here, we're testing for empty account balance (#2576) 2019-01-28 15:21:08 -08:00
a2e29fa71f Alphabetize and make consistent fullnode arguments 2019-01-28 14:32:32 -08:00
637f58364a remove io from the tests 2019-01-28 13:52:13 -08:00
1bd04b26e5 Remove ignore flag from rpc_pubsub tests 2019-01-28 13:52:13 -08:00
29ef9370a6 Remove LeaderSchedulerConfig options 2019-01-28 13:51:01 -08:00
2262f279d5 Reduce boilerplate code with helper function to create
fullnode/bank/genesis
2019-01-28 13:48:58 -08:00
e4f477cf90 Retype num_ticks as u64 to reduce casting 2019-01-28 11:24:50 -08:00
33f921235d Improve message-signing ergonomics 2019-01-26 14:57:22 -07:00
1bae87d4b3 Add unit-test-friendly VoteSignerProxy constructor 2019-01-26 14:56:49 -07:00
1e43fb587e Rename the module that now contains only GenKeys 2019-01-26 06:57:24 -08:00
d65e7b9fcc Speedup rotation (#2468)
Speedup leader to validator transitions
2019-01-26 13:58:08 +05:30
4bb6549895 Genesis block is now a json file 2019-01-25 09:05:15 -08:00
06e3cd3d2a Bump serde_json from 1.0.36 to 1.0.37
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.36 to 1.0.37.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.36...v1.0.37)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-25 03:34:06 -08:00
e9e01557b7 fix leaked threads from unclosed fullnode 2019-01-25 03:02:49 -08:00
e0f046b7a5 Optimize Transaction/Instruction serialization with custom routine (#2515)
* Optimize transaction serialization with custom routine to reduce the serialized size.

* Update serialized_size to accept self as parameter

* Optimize serialize / deserialize operations
2019-01-24 21:14:15 -08:00
9845aec007 Rename data_replicator tests module
replicator name is associated with storage replicators, so
data_replicator sounds like that but it is actually a bunch of gossip
tests.
2019-01-24 15:49:55 -08:00
81c82b5af9 Add test for ignore ProgramErrors in process_entries (#2544) 2019-01-24 13:37:12 -08:00
a9b083e585 Set fetch stage socket non-blocking to false during recv (#2542)
* Set fetch stage socket non-blocking to false during recv

* remove ProgramError changes from this PR
2019-01-24 12:46:40 -08:00
9abc500269 Fix BPF C tests and run as part of CI (#2540) 2019-01-24 12:15:37 -08:00
b9eb7e14e6 Use clap arg conflicts check 2019-01-24 10:47:37 -08:00
b7be5b9a7a Add no-signer argument 2019-01-24 10:47:37 -08:00
ce41760fdd Update definitions of block and slot 2019-01-23 18:22:20 -08:00
a7503050c2 Bump libc from 0.2.47 to 0.2.48
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.47 to 0.2.48.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.47...0.2.48)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-23 18:22:05 -08:00
d4eb69ca14 Bump reqwest from 0.9.8 to 0.9.9
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.9.8 to 0.9.9.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/compare/v0.9.8...v0.9.9)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-23 17:24:48 -08:00
aba9df8457 Remove get_stake placeholder 2019-01-23 17:03:20 -08:00
6aa80e431d increase startup timeout for localnet sanity (#2534) 2019-01-23 15:06:08 -08:00
bae7612f36 Revert "Wait until the node successfully boots"
This reverts commit e84f1f6de7.
2019-01-23 11:27:08 -08:00
a0bc8b8af3 BPF programs can support up to 5 arguments (#2528) 2019-01-23 09:55:08 -08:00
73930b5eac Unfold log on errors 2019-01-23 07:48:59 -08:00
fbeba259b3 Reorg tests 2019-01-23 00:02:30 -08:00
d1bedeae13 Wait for nodes to finish booting before running sanity checks 2019-01-23 00:02:30 -08:00
e84f1f6de7 Wait until the node successfully boots 2019-01-23 00:02:30 -08:00
cc88f9bcd6 Add mechanism to determine when a node has finished booting 2019-01-23 00:02:30 -08:00
f630b50902 Check for new vote account signature explicitly for better error reporting on failures 2019-01-23 00:02:30 -08:00
9a7082d0d5 Report stuck last_id in error message 2019-01-23 00:02:30 -08:00
8dc9089611 Display confirmation time 2019-01-23 00:02:30 -08:00
222d2d7953 Verify transaction count as reported by the bootstrap-leader node is advancing 2019-01-23 00:02:30 -08:00
27c10d4468 cargo fmt 2019-01-22 21:56:04 -08:00
a17467aefd Lower level of message from storage_stage 2019-01-22 21:23:10 -08:00
73b10c196e Disable integration test that fails in CI 2019-01-22 19:24:44 -08:00
965dbbe835 stop enumeration if next entry is disjoint, band-aid (#2518)
* stop enumeration if next entry is disjoint, band-aid, fixes #2426
* clippy
2019-01-22 15:50:36 -08:00
e3ae10bacc User-initiated builds now select the correct channel 2019-01-22 14:23:46 -08:00
fcda94b673 Use beta channel for stable dashboard once a beta tag exists 2019-01-22 12:22:57 -08:00
b1109b813e Bump byteorder from 1.3.0 to 1.3.1
Bumps [byteorder](https://github.com/BurntSushi/byteorder) from 1.3.0 to 1.3.1.
- [Release notes](https://github.com/BurntSushi/byteorder/releases)
- [Changelog](https://github.com/BurntSushi/byteorder/blob/master/CHANGELOG.md)
- [Commits](https://github.com/BurntSushi/byteorder/compare/1.3.0...1.3.1)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-22 09:58:48 -08:00
122a5b2f69 dedup the 2019-01-22 09:47:43 -08:00
dea20248c4 Increase job timeout 2019-01-22 09:35:03 -08:00
ae90ac238c Use unique log file for each additional (-x/-X) fullnode 2019-01-22 08:27:36 -08:00
3b0ca9f478 Add rolling update test 2019-01-22 08:27:36 -08:00
61e79e6d02 Add -c to resume a previous run 2019-01-22 08:27:36 -08:00
1cdab81a3c Add -R option to restart the cluster incrementally 2019-01-22 08:27:36 -08:00
dca0ba6a5d Use -X for dynamic fullnodes, to ensure keypair remains constant during iterations 2019-01-22 08:27:36 -08:00
d666ebc558 Add tests for vote_program 2019-01-21 18:05:52 -07:00
c84b796e17 remove dead code (#2512) 2019-01-21 16:24:11 -08:00
7204bb40bf Don't fail process_entries with ProgramErrors (#2509) 2019-01-21 15:26:06 -08:00
637d5c6691 Fix rpc port argument name 2019-01-21 16:25:51 -07:00
3c86f41769 Run buildkite iterations in parallel 2019-01-21 14:04:19 -08:00
f37eb533f1 Replicator timeout (#2480)
* Add timeout to Replicator::new; used when polling for leader

* Add timeout functionality to replicator ledger download

Shares the same timeout as polling for leader

Defaults to 30 seconds

* Add docs for Replicator::new
2019-01-21 15:37:41 -06:00
6e8b69fc88 Cleanup leader_addr, it's really entrypoint_addr 2019-01-21 13:06:30 -08:00
cb23070dfe Remove sleeps on fullnode spin-up in integration tests 2019-01-21 13:27:31 -07:00
5d9d83d312 Less clones 2019-01-21 12:56:27 -07:00
823252dd41 Cleanup terminology 2019-01-21 12:56:27 -07:00
35764225ed Remove socket from rpc test and move integration test 2019-01-21 12:29:04 -07:00
c7e5006bcf Remove assert!()s that hide error codes, making failure debug harder 2019-01-21 10:36:56 -08:00
b0149a54d8 Bump serde_derive from 1.0.84 to 1.0.85
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.84 to 1.0.85.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.84...v1.0.85)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-21 11:25:42 -07:00
e6030d66eb split load+execute from commit in bank, insert record between them in TPU code (#2487)
* split load+execute from commit in bank, insert record between them in TPU code
* clippy
* remove clear_signatures() race with commit_transactions()
* add #[test] back
2019-01-21 10:17:04 -08:00
6611188edf Move subscriptions to rpc_pubsub (#2490)
* Move subscriptions to rpc_pubsub

- this helps avoid recreating pubsub_service on node's role change

* fixed tests and addressed review comments

* fix clippy errors

* address review comments
2019-01-21 09:59:09 -08:00
abbb037888 Implement storage contract logic 2019-01-21 08:36:49 -08:00
132d59ca6a new_bank_from_db_ledger need not be public 2019-01-21 08:11:13 -08:00
200d5e62c2 Bump byteorder from 1.2.7 to 1.3.0
Bumps [byteorder](https://github.com/BurntSushi/byteorder) from 1.2.7 to 1.3.0.
- [Release notes](https://github.com/BurntSushi/byteorder/releases)
- [Changelog](https://github.com/BurntSushi/byteorder/blob/master/CHANGELOG.md)
- [Commits](https://github.com/BurntSushi/byteorder/compare/1.2.7...1.3.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-21 09:07:17 -07:00
b748942d6a Bump serde from 1.0.84 to 1.0.85
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.84 to 1.0.85.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.84...v1.0.85)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-21 08:25:24 -07:00
648b6597bf configure ulimit 2019-01-20 10:54:12 -08:00
5b73a8eceb Rework fullnode boot sequence 2019-01-19 21:35:46 -08:00
514bf32b99 Enable ledger verification for non-perf testnets 2019-01-19 20:28:56 -08:00
2073188345 Fullnode no longer fails to process a ledger with ProgramErrors in it 2019-01-18 21:06:50 -08:00
c0b472292b Revert "Entries that result in a ProgramError are still valid entries"
This reverts commit ab23b41998.
2019-01-18 21:06:50 -08:00
1b15fd1da6 Fix build, add new parameter to new_with_bank 2019-01-18 15:07:46 -08:00
6883ea0944 Give the fullnode one million tokens as a #2355 workaround 2019-01-18 13:42:04 -08:00
303289777f rsync/airdrop only if ledger doesn't exist (eg, on first run after setup.sh) 2019-01-18 13:42:04 -08:00
a8bf00fe20 Timeout if a leader is not found within 30 seconds 2019-01-18 13:42:04 -08:00
6282c53fe5 Add iterations with leader rotation enabled and periodic restarts 2019-01-18 13:42:04 -08:00
dac28e0961 Temporarily ignore wallet sanity failures when leader rotation is enabled
This commit should be reverted once https://github.com/solana-labs/solana/issues/2474 is fixed
2019-01-18 13:42:04 -08:00
4f86563352 Entries that result in a ProgramError are still valid entries 2019-01-18 13:42:04 -08:00
818afc68c1 Report number of entries and last_id on successful verification 2019-01-18 13:42:04 -08:00
443d8ce7c4 Add option to restart the cluster during iterations 2019-01-18 13:42:04 -08:00
da5cb0b012 Verify ledger before starting up the fullnode 2019-01-18 13:42:04 -08:00
922ffdfc28 Remove unnecessary ledger/ subdirectory 2019-01-18 13:42:04 -08:00
2f1107ff4f Add randomness to broadcast 2019-01-18 13:05:35 -08:00
1fd7bd7ede Storage fixes
* replicators generate their sample values
* fixes to replicator block height logic
2019-01-18 13:05:35 -08:00
c0c38463c7 Remove hard coded ports 2019-01-17 23:34:21 -08:00
c1e142d1dc Revert "test_rpc_new fails locally, ignore for now"
This reverts commit 0c46f15f94.
2019-01-17 23:34:21 -08:00
6933f2bad1 Remove stale TODO 2019-01-17 23:22:07 -08:00
b03d1d8894 Enable integration test logging for better debug on CI failure 2019-01-17 23:14:18 -08:00
8e4a86e329 Recovery multinode tests 2019-01-17 23:14:18 -08:00
1f87d9ba4a add bloom benchmarking, perf improvement from Fnv ~= 8X (#2477)
* add bloom benchmarking, perf improvement from Fnv ~= 8X
* have a look at bits.set()
* ignore new benches to pacify CI (solana_upload_perf?)
2019-01-17 18:22:21 -08:00
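The ~8X speedup cited in the bloom benchmarking commit above comes from replacing Rust's default SipHash with the far cheaper FNV hash, which is just an XOR and a multiply per byte. A minimal, stdlib-only sketch of an FNV-keyed Bloom filter illustrating the idea (hypothetical code, not the repo's actual `bloom` module):

```rust
/// 64-bit FNV-1a, seeded so each Bloom "key" yields an independent hash.
fn fnv1a(bytes: &[u8], seed: u64) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325 ^ seed; // FNV offset basis
    for &b in bytes {
        h ^= u64::from(b);
        h = h.wrapping_mul(0x100_0000_01b3); // FNV prime
    }
    h
}

struct Bloom {
    bits: Vec<u64>,
    num_bits: u64,
    keys: Vec<u64>, // one seed per hash function
}

impl Bloom {
    fn new(num_bits: u64, keys: Vec<u64>) -> Self {
        let words = ((num_bits + 63) / 64) as usize;
        Bloom { bits: vec![0; words], num_bits, keys }
    }

    fn add(&mut self, item: &[u8]) {
        let positions: Vec<u64> = self
            .keys
            .iter()
            .map(|&k| fnv1a(item, k) % self.num_bits)
            .collect();
        for p in positions {
            self.bits[(p / 64) as usize] |= 1u64 << (p % 64);
        }
    }

    fn contains(&self, item: &[u8]) -> bool {
        self.keys.iter().all(|&k| {
            let p = fnv1a(item, k) % self.num_bits;
            (self.bits[(p / 64) as usize] & (1u64 << (p % 64))) != 0
        })
    }
}

fn main() {
    let mut bloom = Bloom::new(512, vec![1, 2, 3]);
    bloom.add(b"tx-signature-1");
    assert!(bloom.contains(b"tx-signature-1"));
    // An empty filter contains nothing.
    assert!(!Bloom::new(512, vec![1, 2, 3]).contains(b"tx-signature-1"));
    println!("bloom sketch ok");
}
```

Because FNV avoids SipHash's per-block mixing rounds, hashing short keys (pubkeys, signatures) dominates Bloom-filter cost and the swap shows up directly in benchmarks.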
14267e172d Add local drone integration test 2019-01-17 15:06:04 -08:00
95e83cfe3f Add tested process_drone_request method 2019-01-17 15:06:04 -08:00
e74574706e Record Transactions with program errors in the ledger, they paid the fee 2019-01-17 13:55:56 -08:00
b381d9e06d add pngs and formatting 2019-01-17 14:30:20 -07:00
a416b53d11 file permissions 2019-01-17 14:30:20 -07:00
6fd13e3af0 rename files 2019-01-17 14:30:20 -07:00
4b7dc8200c reference reformatting 2019-01-17 14:30:20 -07:00
b83279848a html table to md table 2019-01-17 14:30:20 -07:00
a48b278c10 add economic design sections to TOC 2019-01-17 14:30:20 -07:00
75e19f4f0f Add build script 2019-01-17 12:38:04 -08:00
1a5bf0c689 Remove sleep 2019-01-17 12:38:04 -08:00
3e245f16c0 Add wallet deploy integration test 2019-01-17 12:38:04 -08:00
2698b7614b Add wallet deploy unit tests, incl program test fixture 2019-01-17 12:38:04 -08:00
b296a9a0c7 Rename slice to segment to match book terminology 2019-01-17 10:19:45 -08:00
9c8e853567 Rename --rpc arg to --rpc-port to match wallet cli 2019-01-17 09:04:57 -08:00
825d8ef6c9 Add ability to use the RPC endpoint from a node other than the bootstrap leader 2019-01-17 09:04:57 -08:00
da1201c552 Add --rpc-port option to select a custom RPC port 2019-01-17 09:04:57 -08:00
e0c05bf437 Minor clean up 2019-01-17 09:04:57 -08:00
a84b6bc7e4 Overhaul wallet rpc/drone command-line arguments 2019-01-17 08:36:05 -08:00
00c4c30d72 Fix testnet bootup issue (#2465)
* Fix testnet bootup issue

* address review comments
2019-01-16 19:18:32 -08:00
72c7139d8c Allow chained BudgetExpr via indirection (#2461)
* Allow chained BudgetExpr via indirection

Change `And`, `Or`, and `After` expressions to contain
`Box<BudgetExpr>`s instead of directly holding payments

* run cargo fmt
2019-01-16 18:51:50 -06:00
e287ba1a7e Bump bv from 0.10.0 to 0.11.0
Bumps [bv](https://github.com/tov/bv-rs) from 0.10.0 to 0.11.0.
- [Release notes](https://github.com/tov/bv-rs/releases)
- [Changelog](https://github.com/tov/bv-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tov/bv-rs/compare/0.10.0...0.11.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-16 16:47:35 -08:00
8e67a18551 Add network timeouts and fix tests
test_rpc_send_tx could fail if it didn't succeed on the first try
* add some retries to consistently pass
2019-01-16 15:59:40 -08:00
590b88f718 Bump serde_json from 1.0.35 to 1.0.36
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.35 to 1.0.36.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.35...v1.0.36)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-16 15:57:01 -07:00
00ee8813f7 Add initial build step to getting started (#2455) 2019-01-16 14:46:57 -06:00
e76f2ea89c Add record to runtime pipeline 2019-01-16 13:39:30 -07:00
c9f57c2d96 Review feedback 2019-01-16 13:38:32 -07:00
438d36341d Add note about implications on previous blocks 2019-01-16 13:38:32 -07:00
3ab54b1591 Review feedback 2019-01-16 13:38:32 -07:00
77a2f186ee Add block confirmation proposal 2019-01-16 13:38:32 -07:00
526344c9ac Log signature status uniformly 2019-01-16 12:17:46 -08:00
f8bd19f5db Log the time it took to process the ledger for easier log inspection 2019-01-16 10:45:47 -08:00
e4c6e4bf26 Report full node info before starting/updating network 2019-01-16 10:24:00 -08:00
8783563176 Report full node info before running sanity 2019-01-16 10:24:00 -08:00
6015a0ff15 Add info command 2019-01-16 10:24:00 -08:00
63b76c32f9 80-char lines 2019-01-16 09:40:45 -08:00
c9264ee12c Cleanup fanout docs and ASCII art 2019-01-16 09:40:45 -08:00
0d7b1a84cb Remove unused timeout wallet arg and config field 2019-01-16 09:20:45 -08:00
81e17bad40 Failure to write a datapoint should not be fatal 2019-01-16 09:08:47 -08:00
97d90b99e2 Describe Data-Plane fanout and distribution (#2008) 2019-01-16 21:38:10 +05:30
03d4d1cb36 Store and resend votes if leader's TPU port is unknown (#2438)
* Store and resend votes if leader's TPU port is unknown

* fix build errors

* fix failing tests
2019-01-16 06:14:55 -08:00
3282cb85ae Refactor request_and_confirm_airdrop() to use send_and_confirm_tx() 2019-01-15 19:29:59 -08:00
9354e797b6 Cleanly handle balance underflows 2019-01-15 19:29:59 -08:00
3f9c2bc33b Resend transactions a couple times before giving up 2019-01-15 16:07:32 -08:00
4369c1a113 RPC port is no longer reset on leader-to-validator transition 2019-01-15 16:06:56 -08:00
b1e57e2a30 Retry rpc requests on connection failures
Applied a blanket default retry count of 5, which seems like enough but
not excessive retries.
2019-01-15 15:30:10 -08:00
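The blanket retry policy described above (retry on connection failure, default count 5) can be sketched as a small generic helper. This is a hypothetical illustration of the pattern, not the client's actual retry implementation:

```rust
use std::{thread, time::Duration};

/// Call `op` up to `retries + 1` times, sleeping between attempts.
/// Sketch of a blanket retry policy; a real RPC client would wire this
/// into its request path and only retry transient (connection) errors.
fn with_retries<T, E>(
    retries: usize,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= retries => return Err(e),
            Err(_) => {
                attempt += 1;
                thread::sleep(delay);
            }
        }
    }
}

fn main() {
    // Fails twice, then succeeds: with 5 retries this returns Ok.
    let mut calls = 0;
    let res: Result<u32, &str> = with_retries(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("connection refused") } else { Ok(42) }
    });
    assert_eq!(res, Ok(42));
    assert_eq!(calls, 3);
    println!("retry sketch ok");
}
```

A fixed count of 5 bounds worst-case latency while still riding out brief node restarts, which matches the "enough but not excessive" rationale in the commit message.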
4d9489aeb1 Use RPC endpoint of the provided network entrypoint rather than searching for the leader 2019-01-15 15:13:57 -08:00
45c247fa5b bloom for forking (#2431)
* bloom for forking
* clippy fixes
* remove bloom_hash_index
2019-01-15 13:56:54 -08:00
4e2663023b Bump nix from 0.12.0 to 0.13.0
Bumps [nix](https://github.com/nix-rust/nix) from 0.12.0 to 0.13.0.
- [Release notes](https://github.com/nix-rust/nix/releases)
- [Changelog](https://github.com/nix-rust/nix/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nix-rust/nix/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-15 14:35:22 -07:00
fa4608a95d Change leader rotation time to a multiple of ticks per block (#2414)
* Change leader rotation time to a multiple of ticks per block

* fix component dependencies

* review comments
2019-01-15 12:07:58 -08:00
fec47a09a9 Add test from drone business logic; remove flaky, mis-placed integration test 2019-01-15 12:53:09 -07:00
022a97da99 revert revert of kill window (#2427)
* remove window code from most places
* window used only for testing
* remove unnecessary clippy directives
2019-01-15 10:51:53 -08:00
e9116736cd Bump libc from 0.2.46 to 0.2.47
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.46 to 0.2.47.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.46...0.2.47)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-15 08:56:16 -07:00
2b549e3af6 Bump hashbrown from 0.1.7 to 0.1.8
Bumps [hashbrown](https://github.com/Amanieu/hashbrown) from 0.1.7 to 0.1.8.
- [Release notes](https://github.com/Amanieu/hashbrown/releases)
- [Changelog](https://github.com/Amanieu/hashbrown/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Amanieu/hashbrown/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-15 08:55:24 -07:00
b2afd1ea0b Bump rbpf to 0.1.9 (#2422) 2019-01-15 00:42:30 -08:00
ef8e5b40b6 Use dep files and restore tests 2019-01-14 23:41:07 -08:00
a6773ad442 Specify entrypoint when building rust programs 2019-01-14 20:13:01 -08:00
c2add08efb Move parameter to make flags variable 2019-01-14 20:12:06 -08:00
a33c76a456 Remove JsonRpcRequestProcessor dependency 2019-01-14 17:39:31 -08:00
11b1bd278a Remove unused exit field 2019-01-14 17:39:12 -08:00
e3a96ed3fc Minor new cleanup 2019-01-14 16:04:29 -08:00
447243f994 Revert "remove window code from most places" (#2417)
* Revert "Fix link to book in Local Testnet section (#2416)"

This reverts commit 710c0c9980.

* Revert "Add current leader information to dashboard (#2413)"

This reverts commit f0300c1711.

* Revert "remove window code from most places (#2389)"

This reverts commit e3c0bd5a3f.
2019-01-14 15:11:18 -08:00
710c0c9980 Fix link to book in Local Testnet section (#2416) 2019-01-14 14:57:12 -08:00
f0300c1711 Add current leader information to dashboard (#2413) 2019-01-14 14:20:05 -08:00
e3c0bd5a3f remove window code from most places (#2389)
* remove window code from most places
* window used only for testing
* remove unnecessary clippy directives
2019-01-14 12:11:55 -08:00
8af61f561b Improve Wallet coverage (#2385)
* Add trait for RpcRequestHandler trait for RpcClient and add MockRpcClient for unit tests

* Add request_airdrop integration test

* Add timestamp_tx, witness_tx, and cancel_tx to wallet integration tests; add wallet integration tests to test-stable

* Add test cases

* Ignore plentiful sleeps in unit tests
2019-01-14 00:10:03 -07:00
780360834d Iteration testing v0.1 2019-01-13 21:49:09 -08:00
74e503da92 Hold an accounts_db read lock as briefly as possible to avoid deadlocking 2019-01-13 21:49:09 -08:00
d28b643c84 localnet-sanity.sh now supports iterations testing 2019-01-13 21:49:09 -08:00
dc1049a6e7 Bump serde_json from 1.0.34 to 1.0.35
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.34 to 1.0.35.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.34...v1.0.35)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-12 21:26:45 -07:00
f965b3de46 Bump reqwest from 0.9.7 to 0.9.8
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.9.7 to 0.9.8.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-12 12:44:00 -07:00
eb54a4fe91 Update book URL 2019-01-12 11:08:29 -08:00
5d3847d14d Publish book from both the edge and beta channels 2019-01-12 11:08:29 -08:00
5b92286568 Remove channel duplication 2019-01-12 11:08:29 -08:00
094bc59553 refactor: reduce node_info scope 2019-01-12 10:28:38 -07:00
e9a0b3a8f3 Add BPF-to-BPF and PC relative call tests (#2395) 2019-01-11 19:33:08 -08:00
1724430489 Remove clippy override for cyclomatic complexity (#2392)
* Remove clippy override for cyclomatic complexity

* reduce cyclomatic complexity of main function

* fix compilation errors
2019-01-11 16:49:18 -08:00
23c43ed21b Multi-file BPF C builds (#2393) 2019-01-11 15:33:21 -08:00
79b334b7f1 Don't use count_valid_ids in bench 2019-01-11 14:54:17 -07:00
9328ee4f63 Revert "Revert "Delete unused code and its tests""
This reverts commit d6b3991d49.
2019-01-11 14:54:17 -07:00
d7594b19fc Implemented a trait for vote signer service (#2386)
* Implemented a trait for vote signer service

* removes need for RPC in unit tests for vote signing

* fix build errors

* address some review comments
2019-01-11 12:58:31 -08:00
d6b3991d49 Revert "Delete unused code and its tests"
This reverts commit e713ba06f1.
2019-01-11 07:30:28 -08:00
ec63bacdc1 Bump reqwest from 0.9.6 to 0.9.7
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.9.6 to 0.9.7.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/compare/v0.9.6...v0.9.7)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-10 23:23:38 -07:00
e713ba06f1 Delete unused code and its tests 2019-01-10 23:19:38 -07:00
37cb218437 Drop the serialization length 2019-01-10 17:05:03 -08:00
4f79a8a204 Use serialized_size - less fragile 2019-01-10 17:05:03 -08:00
7341298a11 Cleanup tpu forwarder (#2377)
* Use unwrap() on locks

An error there generally indicates a programmer error, not a
runtime error, so a detailed runtime message is not generally useful.

* Only clone Arcs when passing them across thread boundaries

* Cleanup TPU forwarder

By separating the query from the update, all the branches get easier to
test. Also, the update operation gets so simple, that we see it
belongs over in packet.rs.

* Thanks clippy

cute
2019-01-10 13:34:48 -07:00
b9c27e3a9d Bump rocksdb from 0.10.1 to 0.11.0 (#2376)
Bumps [rocksdb](https://github.com/spacejam/rust-rocksdb) from 0.10.1 to 0.11.0.
- [Release notes](https://github.com/spacejam/rust-rocksdb/releases)
- [Changelog](https://github.com/rust-rocksdb/rust-rocksdb/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spacejam/rust-rocksdb/compare/v0.10.1...v0.11.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-10 11:12:06 -08:00
885fe38c01 Move BloomHashIndex into its own module
This trait is for bloom, not crds.

Slightly better would be to put it in the SDK so that the trait
implementations could go into hash and pubkey, but if we don't
want compatibility constraints, this is the next best thing.
2019-01-10 10:22:16 -08:00
2dbe8fc1a9 Refactor vote signer code (#2368)
* Refactor vote signer code

* fixed test compilation errors

* address clippy errors

* fix missing macro_use

* move macro use

* review comments
2019-01-10 09:21:38 -08:00
7122139e12 Rewrite TPU forwarder test (#2344) 2019-01-10 13:50:28 +05:30
4e6c03c9da Avoid holding a read lock during IO 2019-01-10 00:34:50 -07:00
d5f27f9b1e shellcheck 2019-01-09 22:06:58 -07:00
86f19a3ab3 Propagate PS4 to prevent unintentional buildkite log unfolding 2019-01-09 22:02:31 -07:00
be0eefb0af Add timeout to prevent stuck bench-tps when a cluster goes bad 2019-01-09 19:21:53 -07:00
c1cd92bbee Avoid -d arg conflict
-D is now "delete"
-d is now "disk type"
2019-01-09 16:39:24 -08:00
44b7684d56 Fix some instances of ledger to db_ledger 2019-01-09 16:33:37 -08:00
0c90e1eff6 Make entry_sender optional on window_service
window_service in replicator has no need to consume the produced entries.
2019-01-09 15:15:47 -08:00
491bca5e4b Remove ledger.rs
Split into entry.rs for entry-constructing functions and EntrySlice
trait and db_ledger.rs for ledger helper test functions.
2019-01-09 15:15:47 -08:00
ebd676faaa Rename Block to EntrySlice 2019-01-09 15:15:47 -08:00
045c5e8556 Remove most of the old ledger code
Removes LedgerWriter, read_ledger, LedgerWindow
2019-01-09 15:15:47 -08:00
45b4cf2887 Remove store_ledger_stage which is no longer needed 2019-01-09 15:15:47 -08:00
4b5acc065a Give the bootstrap leader one million tokens as a #2355 workaround 2019-01-09 13:30:20 -08:00
73eca72f14 Switch test to send a repair request to try and download from replicator
Removes need for read_ledger in the test and also tests replicator
download path.
2019-01-09 13:24:12 -08:00
28431ff22c Add configurable RUST_LOG for ./net.sh sanity 2019-01-09 12:12:50 -08:00
639bed2f6d Reorder sanity.
1. Check for presence of nodes
2. Check for functioning RPC API
3. Then try the wallet
2019-01-09 12:05:30 -08:00
77794eebdb Remove |cargo package| sanity step
Unfortunately due to our multi-crate repo, as soon as
|./scripts/increment-cargo-version.sh| is run after a release, |cargo
package| will fail for crates that depend on other in-tree crates, as
the new crate version has not yet been published to crates.io.
For now this means that we need to continue flying blind and be prepared
to deal with minor publishing issues on each new release.
2019-01-09 11:59:24 -08:00
eb37aa2bba Kill monitoring scripts by process group to ensure a full shutdown 2019-01-09 11:59:01 -08:00
048fe371aa set -x for more detailed logs 2019-01-09 11:59:01 -08:00
0b666ad9fd De-dup error messages 2019-01-09 11:59:01 -08:00
87c9af142f Preserve config/ when skipSetup 2019-01-09 11:59:01 -08:00
6b46c22b42 Use restart 2019-01-09 11:59:01 -08:00
94494b64d7 whack commented out, obsolete, superseded test 2019-01-09 11:30:07 -08:00
b648f37b97 encapsulate erasure_cf (#2349) 2019-01-09 10:21:55 -08:00
78d3b83900 Remove vestigial vote account configuration from fullnode-config 2019-01-09 09:56:44 -08:00
56b6ed6730 Rerun build if any file in a directory has changed (#2343) 2019-01-09 09:56:23 -08:00
e0c68bf9ad docs: -z is a common option 2019-01-08 21:11:43 -08:00
64ebd9a194 Add update-to-restart operation. Also try to update before restarting on sanity failures 2019-01-08 21:11:43 -08:00
35fe08b3bc Add update support 2019-01-08 21:11:43 -08:00
aedab3f83f Run sanity when previous ledger/setup is preserved 2019-01-08 21:11:43 -08:00
5c87ddc80e nit: hide echo 2019-01-08 21:11:43 -08:00
f53810fcd2 Remove unused exit variable
The exit variable was only used by a test.
2019-01-08 20:22:31 -08:00
56fa3a09c8 Surface the spy node's id, useful for log analysis 2019-01-08 17:43:41 -08:00
58bca04a3f Couple edits 2019-01-08 17:44:09 -07:00
3c6afe7707 Rename get_blob_bytes to read_blobs_bytes 2019-01-08 16:00:39 -08:00
09296e0d71 Fix two storage tests
* test_encrypt_files_many_keys_multiple_keys passing
  - buffer chunk size unified between single key and multiple key path,
    which shouldn't be necessary but can fix later.
* test_encrypt_file_many_keys_bad_key_length passing
2019-01-08 16:00:39 -08:00
4b3d64ec9f Convert chacha_encrypt_file to work with db_ledger blobs directly 2019-01-08 16:00:39 -08:00
a904e15ecc encapsulate data_cf (#2336)
* encapsulate data_cf
2019-01-08 15:53:44 -08:00
a82a5ae184 Delete unused code
The ignored test is still broken, but at least no longer creates a
window for no reason.

Also removed all remaining references to "ncp".
2019-01-08 14:09:50 -08:00
bafd90807d encapsulate meta_cf (#2335) 2019-01-08 11:41:55 -08:00
08924ea36a Bump rand from 0.6.3 to 0.6.4
Bumps [rand](https://github.com/rust-random/rand) from 0.6.3 to 0.6.4.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.6.3...0.6.4)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-08 11:09:03 -07:00
0f8ea6872e Add missing error counters and load_account test cases (#2327) 2019-01-08 09:20:25 -08:00
1b7598e351 Add retries to RPC API probe 2019-01-08 08:50:51 -08:00
d2431128c7 Remove WriteStage from TPU/TVU diagrams
Fixes #2312
2019-01-08 08:42:06 -08:00
8e0e12e5c9 Bump reqwest from 0.9.5 to 0.9.6
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.9.5 to 0.9.6.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/compare/v0.9.5...v0.9.6)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-08 09:11:14 -07:00
e883117a7d Add missing description field, required for crate publishing 2019-01-07 23:02:32 -08:00
cd0e08cae5 Add fullnode-config crate 2019-01-07 23:02:32 -08:00
1490c42d9f Use docker rust docker image to avoid rocksdb build errors 2019-01-07 23:02:32 -08:00
789ee9f138 package or publish. Also package on branch builds 2019-01-07 23:02:32 -08:00
2c52e82352 Use retry_make_rpc_request to avoid occasional CI test failures 2019-01-07 21:25:25 -08:00
0a981a6606 Double publish crate timeout 2019-01-07 20:46:21 -08:00
534f8d7a4e Don't turn the build red if channel cannot be figured (eg, building a tag) 2019-01-07 19:56:07 -08:00
c4ca76e39e Only check TRIGGERED_BUILDKITE_TAG 2019-01-07 19:56:01 -08:00
a8b9899dee Add retry, restore ignored tests 2019-01-07 19:30:08 -08:00
d2cb4e003c Re-enable the --lib tests 2019-01-07 15:28:20 -08:00
0a0c62f384 Fixes to CI bench comparison (#2319)
* Fixes to CI bench comparison

- The table columns did not match the header
- The last commit was not identified correctly

* review comments
2019-01-07 14:26:21 -08:00
6000df9779 Optimize has_duplicates() for short slices 2019-01-07 13:20:04 -07:00
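The short-slice optimization above exploits the fact that for a handful of elements a quadratic scan beats a hash-set membership check: no hashing, no allocation, and the whole slice stays in cache. A hypothetical sketch of the idea, not the repo's actual `has_duplicates`:

```rust
/// Quadratic duplicate check: fast for short slices because it avoids
/// hashing and heap allocation entirely. Illustrative sketch only.
fn has_duplicates<T: PartialEq>(xs: &[T]) -> bool {
    for i in 1..xs.len() {
        // Compare each element against everything after it.
        if xs[i..].contains(&xs[i - 1]) {
            return true;
        }
    }
    false
}

fn main() {
    assert!(has_duplicates(&["a", "b", "a"]));
    assert!(!has_duplicates(&[1, 2, 3, 4]));
    println!("has_duplicates sketch ok");
}
```

The crossover point depends on element size and hasher cost, so a production version would typically benchmark and dispatch on `xs.len()`.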
24963e547c with_subset() -> get_subset_unchecked_mut()
A simpler, safer, and better documented use of unsafe code
2019-01-07 13:20:04 -07:00
3ad3dee4ef Retry node registration to avoid failing before the local vote signer starts 2019-01-07 11:02:35 -08:00
46d44ca99c Add make_rpc_request retry mechanism 2019-01-07 11:02:35 -08:00
06d1af8b18 Remove stale comment 2019-01-07 09:35:39 -08:00
d34b2c4ffd Bump tokio from 0.1.13 to 0.1.14
Bumps [tokio](https://github.com/tokio-rs/tokio) from 0.1.13 to 0.1.14.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Changelog](https://github.com/tokio-rs/tokio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-0.1.13...tokio-0.1.14)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-07 08:38:15 -07:00
0c52df7569 Consolidate locks and error handling when loading accounts (#2309) 2019-01-06 22:06:55 -08:00
91bd38504e Use vote signer service in fullnode (#2009)
* Use vote signer service in fullnode

* Use native types for signature and pubkey, and address other review comments

* Start local vote signer if a remote service address is not provided

* Rebased to master

* Fixes after rebase
2019-01-05 12:57:52 -08:00
71a2b794b4 Enable info logging on non-perf clusters to aid debug of failures 2019-01-05 08:28:32 -08:00
373714bf0b Disable publish snap again 2019-01-04 21:20:33 -08:00
ee769171b9 Restore publish snap 2019-01-04 20:46:44 -08:00
6ebadbcca3 Plot testnet-manager events 2019-01-04 20:12:11 -08:00
3f60d98163 Update comments (#2310) 2019-01-04 19:19:56 -08:00
ea00c1274e Add net sanity failure metric 2019-01-04 18:45:55 -08:00
b7dc9dbc76 RPC API now assumes a drone running on the bootstrap leader 2019-01-04 18:45:55 -08:00
8b357dcb32 cargo fmt 2019-01-04 16:39:04 -08:00
1f6346d880 De-dup ledgers - db_ledger is now the only ledger written to disk 2019-01-04 16:37:00 -08:00
b7bd38744c Spelling and formatting 2019-01-04 16:04:31 -08:00
f8a67e282a Ignore test_tpu_forwarder (#2307) 2019-01-04 16:02:50 -08:00
0a7e199c82 Don't follow the leader: assume drone runs on the network entrypoint 2019-01-04 15:58:42 -08:00
5143f6d6f1 Boot unused crate 2019-01-04 14:34:23 -07:00
30b662df39 Remove clones in native programs 2019-01-04 13:38:03 -07:00
33f2d83506 Add timeout and prints to port search
Otherwise nc can hang forever.
2019-01-04 11:07:17 -08:00
4244a14ad3 Bump rand from 0.6.2 to 0.6.3
Bumps [rand](https://github.com/rust-random/rand) from 0.6.2 to 0.6.3.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.6.2...0.6.3)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-04 11:09:27 -07:00
f031fe58fa Bump rand from 0.6.1 to 0.6.2
Bumps [rand](https://github.com/rust-random/rand) from 0.6.1 to 0.6.2.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.6.1...0.6.2)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-04 08:43:36 -07:00
84cc240f34 Enhance ledger-tool
Add a command-line argument (min-hashes) to restrict the entries
processed by ledger-tool.  For example, --min-hashes 1 will strip
"empty" Entries, i.e. those with num_hashes = 0.

Add basic ledger tool test
2019-01-04 08:17:43 -07:00
b26906df1b Bump rand_chacha from 0.1.0 to 0.1.1
Bumps [rand_chacha](https://github.com/rust-random/rand) from 0.1.0 to 0.1.1.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/rand_chacha-0.1.0...rand_isaac-0.1.1)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-04 08:15:18 -07:00
56a3197f7f spelling 2019-01-03 18:48:24 -08:00
0505d7bd32 Don't double-clone every account 2019-01-03 17:42:37 -07:00
a448c0b81e implement per-thread, per-batch sleep in ms (throttling allows easier UI dev) (#2293)
* implement per-thread, per-batch sleep in ms (throttling allows easier UI development)
* tidy up sleep() call with Duration::from_millis() instead of Duration::new()
* fixup indentation style
* removing multinode-demo/client-throttled.sh (same functionality available via arguments)
2019-01-03 17:16:06 -05:00
8116fe8def Add proposed design for db_ledger (#2253)
* Add proposed design for db_ledger
2019-01-03 14:12:55 -08:00
7c6dcc8c73 Ignore wallet/target 2019-01-03 10:28:43 -08:00
1a9401e1f3 Permit build on Cargo.{lock,toml} changes 2019-01-03 09:35:11 -08:00
00d310f86d Remove some metrics datapoints, as they were causing excessive logging (#2287)
- 100 nodes test was bringing down the influx DB server
2019-01-03 09:25:11 -08:00
c4259fc8cc Bump libc from 0.2.45 to 0.2.46
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.45 to 0.2.46.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.45...0.2.46)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-03 09:13:03 -07:00
8c5614daa1 Bump serde_derive from 1.0.82 to 1.0.84
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.82 to 1.0.84.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.82...v1.0.84)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-02 15:54:13 -08:00
eb668c6466 Bump serde from 1.0.82 to 1.0.84
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.82 to 1.0.84.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.82...v1.0.84)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-01-02 16:42:35 -07:00
a461c5682d First stab at Rust BPF (#2269)
First stab at Rust BPF
2019-01-02 15:12:42 -08:00
e3478ee2ab svg logo and increased font size for better reading experience 2019-01-02 08:33:57 -07:00
0bea870b22 Dynamic N layer 'avalanche' broadcast and retransmit (#2058)
* Dynamic N layer avalanche broadcast and retransmit
2019-01-02 14:16:15 +05:30
5fbdc6450d Bump serde_json from 1.0.33 to 1.0.34
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.33 to 1.0.34.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.33...v1.0.34)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-30 21:15:59 -08:00
1531a1777a Add RPC API check 2018-12-24 22:51:36 -08:00
f38345fdad Cargo.lock 2018-12-24 22:51:36 -08:00
04d46ea33f Run oom-monitor as root 2018-12-24 22:51:36 -08:00
3a2fa9a650 Enable ledger/validator sanity for non-perf testnets 2018-12-24 22:51:36 -08:00
f5bbc5e961 Fix args 2018-12-23 20:56:13 -08:00
95c9fefbd0 Make sanity failure message more visible 2018-12-23 17:30:59 -08:00
073a48ab85 Restore timeout 2018-12-23 17:30:41 -08:00
753a783ba9 Add solana user to adm group for /var/log/syslog access 2018-12-23 17:28:35 -08:00
58f2598d5d Revert "Validators make a transaction to advertise their storage last_id"
This reverts commit a1759aed19.
2018-12-23 14:02:09 -08:00
3c835b692b Use netLogDir 2018-12-23 10:33:43 -08:00
7f2fa8bbcb Collect and upload network logs 2018-12-23 10:19:10 -08:00
a6fd1ca3db Add logs subcommand to fetch remote logs from each network node 2018-12-23 10:19:10 -08:00
58a4905916 Make reconstruct_entries_from_blobs() support Blobs and borrowed SharedBlobs, make distinction between to_blobs and to_shared_blobs (#2270) 2018-12-22 19:30:30 -08:00
2c9607d5da Rename getConfirmation -> getConfirmationTime 2018-12-22 12:47:02 -08:00
371cb4f0f3 Document getConfirmationTime 2018-12-22 12:47:02 -08:00
b46c809544 source ci/upload-ci-artifact.sh 2018-12-22 12:34:30 -08:00
eb29a2898c Upload net/log/ files as buildkite artifacts 2018-12-22 12:22:58 -08:00
c3a74e5e63 Avoid unnecessary shellcheck directives 2018-12-22 11:57:47 -08:00
a1759aed19 Validators make a transaction to advertise their storage last_id
* Also implement more storage contract logic
* Add transactions for proof validation,
* Move storage state members into system storage account userdata
2018-12-21 15:45:30 -08:00
1a3387706d Spawn threads based on cpu count (#2232) 2018-12-21 13:55:45 -08:00
41f8764232 Ignore error while enabling nvidia persistence mode (#2265) 2018-12-21 12:37:51 -08:00
4bf797c8f1 Load nvidia drivers on node startup (#2263)
* Load nvidia drivers on node startup

* added new script to enable nvidia driver persistent mode

* remove set -ex
2018-12-21 11:43:52 -08:00
7e3b54f826 Remove llc step when building BPF C programs (#2254) 2018-12-21 08:49:29 -08:00
23d3a9ae42 Use CUDA for testnet automation performance calculations (#2259) 2018-12-21 04:27:31 -08:00
756156e9db Use SSD for testnet automation (#2257) 2018-12-20 20:13:56 -08:00
4807fb4c5c Use if to be more explicit about error handling (set -e trouble?) 2018-12-20 19:07:17 -08:00
dd25c5b085 Link Cargo.toml features 2018-12-20 19:07:17 -08:00
becfd1e9fa ci/test-stable-perf.sh now runs on macOS 2018-12-20 17:23:22 -08:00
951d6398a0 Rename finality to confirmation (#2250)
* Rename finality to confirmation

* fix cargo fmt errors
2018-12-20 15:47:48 -08:00
7c98545b33 Use newer votes to calculate confirmation time (#2247) 2018-12-20 15:27:47 -08:00
bb1060bdad Reduce ticks per block to increase voting frequency (#2242) 2018-12-20 14:43:03 -08:00
7ad45a91ec Fix compile error 2018-12-20 13:47:36 -08:00
51045962d3 Adjust settings 2018-12-20 12:32:25 -08:00
034c5d0422 db_ledger now fully encapsulates rocksdb 2018-12-20 12:32:25 -08:00
7148c14178 Debug broadcast (#2233)
* Account for duplicate blobs in process_blobs

* Increase max bytes for level base to match write buffer
2018-12-20 12:12:04 -08:00
93fb61dc8f Re-export rocksdb::DBRawIterator until it can be encapsulated 2018-12-20 10:38:03 -08:00
b36ceb5be4 Remove rocksdb dependency from result.rs 2018-12-20 10:38:03 -08:00
37d7ad819b Purge DB::destroy() usage 2018-12-20 10:38:03 -08:00
6bb6785936 Document updating Cargo.lock during a version bump 2018-12-20 09:20:00 -08:00
cb70824ed1 Cargo.lock 2018-12-20 09:20:00 -08:00
ddc1082e8c Stable dashboard can now actually come from the stable channel 2018-12-20 08:04:59 -08:00
f98d72a30b Fetch latest tag 2018-12-19 20:17:49 -08:00
71df71c601 Revert Clang to C lang change 2018-12-19 20:17:23 -08:00
1d0f7c44e2 Copy editing introduction
Not sure how else to comment, but I'd encourage keeping the content timeless rather than tied to an update. So don't talk about recent releases or what we're doing next. (just a suggestion ;)
2018-12-19 20:17:23 -08:00
d78f19f8bb Select correct branch for {testnet,-perf} when using a stable channel tag 2018-12-19 17:46:18 -08:00
0e567381fb v0.12.0 2018-12-19 17:03:28 -08:00
e2225d3b71 Add more Azure details 2018-12-19 16:31:28 -08:00
666af1e62d Debug broadcast (#2208)
* Add per cf rocksdb options, increase compaction and flush threads

* Change broadcast stage to bulk write blobs

* add db_ledger function specifically for broadcast

* fix broken tests

* fix benches
2018-12-19 16:11:47 -08:00
2fe3402362 Use SSD for perf testnet (#2227) 2018-12-19 16:11:26 -08:00
14a236198f nit: rename publish-solana-tar.sh to publish-tarball.sh 2018-12-19 14:26:25 -08:00
cc1b43b90a Retire GCP setup 2018-12-19 14:26:25 -08:00
9448f0ce52 Add more Azure CI documentation 2018-12-19 14:26:25 -08:00
59fdd8f6be Move wallet airdrop retries into fullnode start script 2018-12-19 13:49:04 -08:00
7b20318ee4 Run s3cmd in a container to avoid additional CI system dependencies 2018-12-19 13:09:24 -08:00
c3c955b02e Build/install native programs within cargo-install-all.sh 2018-12-19 11:53:08 -08:00
6e56e41461 Document how to create a new Azure CI machine 2018-12-18 23:35:09 -08:00
d74d5e0e44 nit: prevent shellcheck command from getting expanded by default 2018-12-18 18:44:20 -08:00
cac08171de nit: prevent book build from getting expanded by default 2018-12-18 18:44:20 -08:00
6f6c350781 Skip stable-perf if no .rs files are modified in a PR 2018-12-18 18:44:20 -08:00
506724fc93 Remove non-standard : anchors 2018-12-18 18:44:20 -08:00
b4fe70d3d8 Skip bench if no .rs files are modified in a PR 2018-12-18 18:09:59 -08:00
3efbffe4e3 Run coverage when test-coverage.sh is modified 2018-12-18 18:09:59 -08:00
cafa873f06 run tests in single thread so local runs succeed 2018-12-18 17:38:44 -08:00
b4f4347d6e add some more tests (#2217) 2018-12-18 17:27:03 -08:00
5c866dd000 test drive new coverage stuff (#2216) 2018-12-18 16:44:27 -08:00
974249f2a5 Parallelize entry processing in replay stage in validators (#2212)
* Parallelize entry processing in replay stage in validators

- single threaded entry processing is not utilizing CPU cores to the fullest

* fix tests and address review comments
2018-12-18 16:06:05 -08:00
a65022aed7 DbLedger doesn't need to be mut, doesn't need an RwLock (#2215)
* DbLedger doesn't need to be mut, doesn't need an RwLock

* fix erasure cases
2018-12-18 15:18:57 -08:00
b101f40c32 Initial revision 2018-12-18 14:27:37 -08:00
e8e6c70e19 Remove duplicate _ definitions 2018-12-18 14:25:10 -08:00
c8d27f6424 Drop _ to clean up CI logs, apply more -j 2018-12-18 14:11:15 -08:00
287e8cefda Keep gcno files around to prevent breaking CI builds with a warm target/ cache 2018-12-18 14:07:42 -08:00
db8f2d9f07 Make ulimit non-fatal to keep the ci-cuda machine happy 2018-12-18 14:02:43 -08:00
cd6736d70b Remove duplication between test-stable{,-perf}.sh 2018-12-18 14:02:43 -08:00
0d2e3788ba Justify each coverage flag, and other cleanup 2018-12-18 13:03:38 -08:00
c0dcf67ec8 Move book build into test-checks 2018-12-18 13:03:38 -08:00
bc52336a1b affected_files metadata is only available for PR builds 2018-12-18 13:03:38 -08:00
3bfb052b0a Overhaul coverage setup 2018-12-18 10:48:06 -08:00
c71d5a111e Extract grcov download script 2018-12-18 10:48:06 -08:00
437b62c4d9 Upgrade grcov 2018-12-18 10:48:06 -08:00
cbca0ae264 Remove dead code 2018-12-18 10:48:06 -08:00
e0cde7dfc5 Remove stale log section 2018-12-18 10:32:40 -08:00
e720070945 Flip && style 2018-12-18 09:56:43 -08:00
a8ab6f4caf Preserve stable as default, use +nightly to get nightly 2018-12-18 09:54:47 -08:00
b7b1884950 Pass BUILDKITE_COMMIT env var into containers 2018-12-18 08:53:39 -08:00
755064d3e2 Use |cargo +nightly| to avoid assuming nightly is default 2018-12-18 08:44:33 -08:00
24a984086e nightly is now 1.33 2018-12-18 08:44:33 -08:00
4b831d58b7 Don't fiddle with default rust, humans don't like that 2018-12-18 08:44:33 -08:00
62f36037ea Pass CI env var into containers 2018-12-18 00:47:41 -08:00
ffdc1814c6 Add counters for gossip verification failures (#2094) 2018-12-17 20:12:50 -08:00
29776c0283 Publish book only on content changes instead of on every commit 2018-12-17 16:42:22 -08:00
69d7384cc0 Enable leader rotation on edge testnet (#2204) 2018-12-17 16:04:25 -08:00
9720ac0019 Fix try_erasure() (#2185)
* Fix try_erasure bug

* Re-enable asserts in test_replicator_startup

* Add test for out of order process_blobs
2018-12-17 15:34:19 -08:00
fc56e1e517 Correct crate-type to match other native programs 2018-12-17 15:17:13 -08:00
0f4837980f Switch noop from println to solana_logger 2018-12-17 14:56:12 -08:00
9a6e27ac36 Accounts is too big, should be its own module (#2198)
Account module is too big, should be in its own module.
2018-12-17 12:41:23 -08:00
07202205c4 Revert "ignore unstable tests"
This reverts commit bd7ef5d445071329a3b49b1f8be71b602226bbec.
2018-12-17 10:47:32 -08:00
dc56bbeec8 Ensure the full workspace is built for coverage 2018-12-17 10:47:32 -08:00
4be537c51a Temporarily disable nightly build until it can be fixed 2018-12-17 10:15:38 -08:00
66c568ba67 Add wallet sanity timeout 2018-12-17 09:58:34 -08:00
9ff8abaf29 Ensure port is not inuse before selecting it 2018-12-17 09:31:31 -08:00
b7144560c9 Include port number when gossip bind_to fails 2018-12-17 09:31:31 -08:00
4be6d01dfb Move last ids (#2187)
* Break out last_ids into its own module
* Boot SignatureNotFound from BankError
* No longer return BankError from LastIds methods
* No longer piggyback on BankError for a LastIds signature status
* Drop all dependencies on the bank
* SignatureStatus -> Status and LastIds -> StatusDeque
* Unstable tests, issue 2193
2018-12-17 07:55:56 -08:00
aef84320e0 Double cache size for stable-perf 2018-12-16 23:05:44 -08:00
9a5195e79e Remove CARGO_TARGET_CACHE_NAME, use BUILDKITE_LABEL 2018-12-16 23:05:44 -08:00
cc111941bb Cargo.lock 2018-12-16 23:05:44 -08:00
74ee1e5087 Increase the number of files a node may have open at a time 2018-12-15 17:15:22 -08:00
e5d1bd6589 Drop public suffix on build names 2018-12-15 16:54:23 -08:00
6a0f7a5ceb Update command path 2018-12-15 16:54:23 -08:00
554cd03269 Update buildkite badge URL 2018-12-15 16:54:23 -08:00
9995194cf1 Regenerate secrets 2018-12-15 15:27:58 -08:00
1298ab1647 Use ejson to manage build secrets 2018-12-15 15:10:04 -08:00
b8ab3078fb Add pipeline upload script 2018-12-15 15:10:04 -08:00
50e8666a14 Add format-url.sh 2018-12-15 15:10:04 -08:00
0659971ecf Remove unused cargo dependencies 2018-12-14 23:55:56 -08:00
fd562cb9e2 Rust 2018 cleanup 2018-12-14 21:57:15 -08:00
aaa5cd4615 Remove stray keygen 2018-12-14 21:57:15 -08:00
3f835f8ee3 Use proper match condition for duration (#2182) 2018-12-14 21:18:41 -08:00
5bf9a20d42 fullnode-config no longer depends on src/ 2018-12-14 20:13:34 -08:00
eedc8c7812 Move src/netutil.rs into its own crate 2018-12-14 20:13:34 -08:00
f0d1ed0cc4 |cargo test --all| 2018-12-14 19:32:04 -08:00
8ba1aed5a3 Fix up tests 2018-12-14 19:32:04 -08:00
9ef5e51c0f Cleanup slot remnants in db_ledger (#2153)
* Cleanup slot remnants in db_ledger
2018-12-14 17:05:41 -08:00
fe5566d642 Local testnet info (#2174) 2018-12-14 15:55:58 -08:00
4a2933b0b6 Update README.md 2018-12-14 15:55:16 -08:00
8ee0e9632c Switch to using hashbrown version of HashMap and (#2158)
HashSet for improved performance and memory usage
2018-12-14 15:10:10 -08:00
8fcb7112ec Fetch a new last_id to prevent DuplicateSignature errors during AccountInUse retries 2018-12-14 13:33:31 -08:00
6ac466c0a4 Move src/logger.rs into logger/ crate to unify logging across the workspace 2018-12-14 13:10:43 -08:00
d45fcc4381 Move src/wallet.rs into wallet/ crate 2018-12-14 12:15:18 -08:00
a22e1199cf Add fork selection RFC (#2061)
RFC and simulation for fork generation.
2018-12-14 11:15:23 -08:00
79f12d6b55 Move EntryTree back to proposals 2018-12-14 12:12:34 -07:00
483f6702a6 Rewrite synchronization chapter (#2156)
* Rewrite synchronization chapter
* Add synchronization terminology
2018-12-14 11:06:53 -07:00
f6e3464ab9 bench-tps rebase 2018-12-14 09:38:46 -08:00
708876e9a7 Fix CI and related issues in bench-tps
Rename crate to `solana-bench-tps` in its Cargo.toml

Move crate

Add to ci/publish-crate.sh
2018-12-14 09:38:46 -08:00
29d04aa533 Move bench_tps to new crate in workspace
Separate CLI/clap related code, create a new `Config` struct to hold all
configuration/CLI args

Remove most code from `main.rs`

Add a little documentation
2018-12-14 09:38:46 -08:00
6fcccedb70 align tick entries' tick_height with actual number of ticks in bank (#2147) 2018-12-14 02:25:50 -08:00
60f3aeb4ef clippy fix 2018-12-13 23:40:26 -08:00
c1ad987b04 Run checks over all crates in the workspace 2018-12-13 23:40:26 -08:00
9d0b7c6b31 Remove bench_streamer feature 2018-12-13 22:25:27 -08:00
d489cb1a8b Desnake upload_ci_artifact for consistency 2018-12-13 22:25:27 -08:00
0fe6d61036 Move binaries from src/bin into their own crate 2018-12-13 22:25:27 -08:00
092edabd2d Add homepage field to all crates 2018-12-13 22:25:27 -08:00
1a68bce94c Rename fullnode.rs to main.rs 2018-12-13 22:25:27 -08:00
87fe3ade81 Add noop cuda feature entry 2018-12-13 20:08:24 -08:00
accabca618 Find solana-fullnode-cuda 2018-12-13 20:08:24 -08:00
091b21fae7 Vote every number of ticks (#2141)
* Vote every number of ticks

* address review comments

* fix for failing leader rotation tests

* remove check for vote failure from replay tests
(as votes will be cached and transmitted when leader is available)
2018-12-13 18:43:10 -08:00
85398c728a Disable assert in replicator startup test 2018-12-13 16:50:30 -08:00
7325b19aef Do not allocate for each metrics submission (#2146) 2018-12-13 16:40:00 -08:00
7cdbbfa88e Storage stage updates
* Remove logging init from storage program: saw a crash in a test
  indicating the logger being init'ed twice.
* Add entry_height mining proof to indicate which segment the result is
  for
* Add an interface to get storage miner pubkeys for a given entry_height
* Add an interface to get the current storage mining entry_height
* Set the tvu socket to 0.0.0.0:0 in replicator to stop getting entries
  after the desired ledger segment is downloaded.
* Use signature of PoH height to determine which block to download for
  replicator.
2018-12-13 11:30:12 -08:00
3ce3f1adc1 Move book dev instructions out of top-level readme 2018-12-13 11:17:11 -07:00
9880a86f80 remove prev_id, unused (#2150) 2018-12-13 09:24:38 -08:00
647e5d76b0 Move solana-fullnode into fullnode/ 2018-12-13 01:45:29 -08:00
7e4af9382e Move solana-upload-perf into upload-perf/ 2018-12-13 01:06:40 -08:00
282d4a3563 Move solana-keygen into keygen/ 2018-12-13 01:06:40 -08:00
cafeef33c3 Relocate all keypair generation into one location: sdk/src/signature.rs 2018-12-13 01:06:40 -08:00
4f48f1a850 add db_ledger genesis, rework to_blob(), to_blobs() (#2135) 2018-12-12 20:42:12 -08:00
a05a378db4 cleanup 2018-12-12 19:12:51 -08:00
245362db96 Make a dummy version of serving repairs from db_ledger 2018-12-12 19:12:51 -08:00
b1b190b80d Fix too many args in Tvu::new (#2114)
* Reduce args in Tvu::new to under 8

Now pass in sockets through the crate::tvu::Sockets struct

Move ClusterInfo.keypair to pub(crate) in order to remove redundant
signing keypair parameter

* remove commented code
2018-12-12 18:57:48 -08:00
3408ce89a7 add check_tick_height (#2144) 2018-12-12 18:52:11 -08:00
59a094cb77 Ensure bpf_c files exist to avoid accidental rebuilds as the tree changes 2018-12-12 17:30:41 -08:00
8782b14842 Cargo.lock 2018-12-12 17:14:50 -08:00
0f38b4b856 Remove unused dependencies 2018-12-12 17:14:50 -08:00
75f407e191 Provide entire elf to bpf_loader 2018-12-12 17:14:50 -08:00
4b07778609 Add bench_streamer feature to inhibit building solana-bench-streamer by default
This program is not currently used in any automation and is fairly slow
to build.  Disabling it by default will speed incremental builds.
2018-12-12 16:31:13 -08:00
9b81696a09 remove obsoleted TODO 2018-12-12 16:26:59 -08:00
80e19e0ad7 Encapsulate accounts of solana::bank::Accounts
Make the field private and expose an account_values() method that
returns the values iterator from the internal hashmap
2018-12-12 16:26:59 -08:00
962e8dca1d Fix markdown 2018-12-12 17:19:46 -07:00
8da4be1b34 Prefer the term 'cluster' over 'network'
Use 'network' for the networking stack. Examples:

* The network drops packets.
* The cluster rejects bad transactions.
* The Solana cluster runs on a gigabit network.
2018-12-12 17:19:46 -07:00
f2ef74d1a1 Consistent naming between ToC and chapters 2018-12-12 17:19:46 -07:00
546c92751b 80-char lines 2018-12-12 17:19:46 -07:00
ae903f190e Broadcast for slots (#2081)
* Insert blobs into db_ledger in broadcast stage to support leader to validator transitions

* Add transmitting real slots to broadcast stage

* Handle real slots instead of default slots in window

* Switch to dummy repair on slots and modify erasure to support leader rotation

* Shorten length of holding locks

* Remove logger from replicator test
2018-12-12 15:58:29 -08:00
bf33d9d703 Disable snap build until #2127 is resolved 2018-12-12 15:13:11 -08:00
3a89d80a61 Update name in TPU 2018-12-12 14:55:27 -07:00
fd45e83651 Add web wallet example 2018-12-12 14:55:27 -07:00
27e2fd9b06 Update README.md 2018-12-12 14:35:22 -07:00
9a49ace606 No longer reserve terms from the terminology chapter
We followed the precedent set by the Rust book here, but now that
proposals are integrated, each proposal can simply include its own
terminology section.
2018-12-12 14:12:07 -07:00
3413ecc2bd Change query used to find list of nodes in the network (#2124)
* Change query used to find list of nodes in the network

* include "All" option for host selection
2018-12-12 12:38:00 -08:00
ad8b095677 Capitalize acronyms in book 2018-12-12 12:15:20 -07:00
38c72070fb Update links 2018-12-12 12:11:12 -07:00
93fe1af1a8 Integrate EntryTree description into the TVU doc 2018-12-12 12:11:12 -07:00
504bf4ba84 Bring drone description into the present 2018-12-12 12:11:12 -07:00
9f9c5fcf10 Migrate all RFC content into the book 2018-12-12 12:11:12 -07:00
90a0237457 Cherrypick recent changes to gossip RFC
Delete the RFC since this is all implemented.

See: 02bfcd23a9
2018-12-12 11:55:07 -07:00
c83538a60c Add new proposal process
And move replication and enclave proposals there to get a feel
for how it'd look.
2018-12-12 11:04:57 -07:00
13d4e3f29f Replace the leader rotation chapter with the latest RFC
The content that was originally copied was split into multiple
RFCs, leaving the book copy to bitrot.
2018-12-12 10:48:58 -07:00
cefbb7c27d Fix shared object relocations with multiple static arrays (#2121) 2018-12-12 08:41:45 -08:00
fa98434096 Update variables in dashboard (#2117)
* Update variables in dashboard

* fix escaped strings for query
2018-12-12 06:06:33 -08:00
af3ca02e35 Switch testnet-edge from snap to tarball
Snap publishing has been failing all day, unclear why.  Potentially
revert this commit if/when resolved.
2018-12-11 23:34:41 -08:00
5c396c222a Clean up install-native-programs.sh usage 2018-12-11 23:29:05 -08:00
088bab61a4 Remove |cargo install| duplication 2018-12-11 23:29:05 -08:00
080d18b06e Only run publish-crate on release branches, clarify crate ordering 2018-12-11 23:29:05 -08:00
54fb4e370c Abort make if scripts/install.sh fails 2018-12-11 21:57:53 -08:00
17f1f40140 branch -> fork 2018-12-11 17:37:54 -07:00
b011ed6358 branch -> fork
Save your branches for git
2018-12-11 17:36:16 -07:00
acbc6335af Minor fixes 2018-12-11 17:33:43 -07:00
511c84760e Fix typos, rendering and old terms 2018-12-11 17:27:54 -07:00
6cbf82dbe0 Delete storage.md 2018-12-11 17:10:01 -07:00
896622de64 Delete empty page
Bring this back in after replication is fully integrated.
2018-12-11 17:09:44 -07:00
1a160a86fa Fix typo and curve corners 2018-12-11 17:07:43 -07:00
11abd3cf6e Update tictactoe.md 2018-12-11 17:03:49 -07:00
9552badb16 Reference tic-tac-toe README instead of copying it
Also expand a bit on how it works.
2018-12-11 16:01:35 -08:00
6fd41beccd Reference the JavaScript API docs more directly 2018-12-11 16:01:35 -08:00
c679dea1b7 Add instructions to build and run tic-tac-toe 2018-12-11 16:01:35 -08:00
4788a4f775 Correctly describe repair and retransmit peers (#2110) 2018-12-11 15:51:47 -08:00
9243bc58db Metrics for window repair (#2106)
* Metrics for window repair

- Also increase max repair length

* fix vote counters, and add repair window graph

* update per node graphs

* revert max repair length change
2018-12-11 15:43:41 -08:00
2238725d1c empty entries -> ticks 2018-12-11 15:26:39 -07:00
bffa9f914c Next leader needs to publish empties 2018-12-11 15:26:39 -07:00
eeb31074de Take 2 2018-12-11 15:26:39 -07:00
af22de2cfa Cleanup leader rotation RFC 2018-12-11 15:26:39 -07:00
1d3f05a9d4 Update validator vote count 2018-12-11 13:32:39 -08:00
935524f20c Fix eh frame relocation (#2109)
* Exclude .eh_frame
2018-12-11 12:14:41 -08:00
5847961fec Fix BPF loader messages (#2098) 2018-12-11 11:20:26 -08:00
40d7f5eff8 Bump libc from 0.2.44 to 0.2.45
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.44 to 0.2.45.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.44...0.2.45)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-11 11:52:27 -07:00
c57dedb034 Add missing ld.lld wrapper needed for shared objects linking 2018-12-11 09:56:20 -08:00
b2d7b34082 Add |./net.sh update| command to live update all network nodes 2018-12-11 09:40:22 -08:00
4d67aca919 add genesis and read_ledger to db_ledger (#2097) 2018-12-11 09:14:23 -08:00
e3dfd7b1ab Allow BPF structure passing and returning (#2100)
* Add BPF struct passing and returning tests
2018-12-11 09:03:37 -08:00
166945a461 Bump serde from 1.0.81 to 1.0.82
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.81 to 1.0.82.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.81...v1.0.82)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-11 08:53:20 -08:00
46866be21d Bump serde_derive from 1.0.81 to 1.0.82
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.81 to 1.0.82.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.81...v1.0.82)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-11 09:21:11 -07:00
154e20484d Use hostname in database if env is set (#2101) 2018-12-10 22:59:38 -08:00
aeee25e703 add tick_height to Entry to be able to repair by period, chain forks of Entries, etc. (#2096) 2018-12-10 20:03:04 -08:00
b51bcb55db Fix broken dashboard counters (#2093) 2018-12-10 16:10:44 -08:00
b5784de33f Disable leader rotation for testnet-automation until it's ready 2018-12-10 15:23:11 -08:00
9556a9be17 Update the artwork 2018-12-10 15:26:43 -07:00
01c524ddd2 Revert changes to counter names 2018-12-10 15:26:43 -07:00
5e703dc70a Free up the term 'replicate' for exclusive use in replicator
Also, align Sockets field names with ContactInfo.
2018-12-10 15:26:43 -07:00
bc96bd3410 Fix peer count in edge dashboard (#2090)
Fixes #2075
2018-12-10 14:24:32 -08:00
094f0a8be3 Leader rotation flag plumbing 2018-12-10 14:07:59 -08:00
3d996bf080 Disable leader rotation on CI testnets until it's ready 2018-12-10 14:07:59 -08:00
4b05ee6811 Add hacky sleep 2018-12-10 14:05:00 -08:00
d7032aeb43 Add vote instruction debug log 2018-12-10 13:24:14 -08:00
4ea1c030bc Give bootstrap leader one more token 2018-12-10 13:24:14 -08:00
172e511e56 Use retry_transfer to test multiple times for replicator tokens
May fix failures in CI where replicator is trying to do an airdrop.
2018-12-10 12:19:00 -08:00
4481efd51e Merge pull request #2084 from CriesofCarrots/fix-wallet-accountinuse
Fix wallet accountinuse
2018-12-10 12:20:55 -07:00
337c2bfd29 Fix spelling 2018-12-10 09:31:17 -08:00
ffc82c027e Fix markdown rendering 2018-12-10 09:53:56 -07:00
e8fd5b4600 Correct keypair argument 2018-12-10 08:41:22 -08:00
67f8916aa8 Bump serde from 1.0.80 to 1.0.81
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.80 to 1.0.81.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.80...v1.0.81)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-10 08:38:52 -08:00
96e01f3a79 Bump itertools from 0.7.11 to 0.8.0
Bumps [itertools](https://github.com/bluss/rust-itertools) from 0.7.11 to 0.8.0.
- [Release notes](https://github.com/bluss/rust-itertools/releases)
- [Commits](https://github.com/bluss/rust-itertools/compare/0.7.11...0.8.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-10 08:58:26 -07:00
1e755f261f Bump serde_derive from 1.0.80 to 1.0.81
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.80 to 1.0.81.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.80...v1.0.81)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-12-10 08:56:45 -07:00
b2ddac610c Add option to skip setup during cluster start 2018-12-10 07:47:15 -08:00
ad05f64b13 crdt-vote-count metric is now named cluster_info-vote-count 2018-12-09 19:23:11 -08:00
9b472d36fc Add --path . to keep new cargo content 2018-12-09 18:09:03 -08:00
b54b0a1d25 Document that -P is now available for |config| 2018-12-09 15:25:27 -08:00
f5794de636 Clean up bootstrap leader terminology in comments and variable names 2018-12-09 15:25:27 -08:00
7ae9d9690b mkdir-p for the caller 2018-12-09 09:41:14 -08:00
db3cca7fbe Display wallet address before airdrop to help with debug on airdrop failures 2018-12-09 09:41:14 -08:00
b9743957fa Make directory to hold programs 2018-12-09 08:38:41 -08:00
0ef099421c cargo fmt 2018-12-08 23:19:55 -07:00
f1ae5b1795 Fix warnings 2018-12-08 23:19:55 -07:00
a8d6c75a24 cargo +nightly fix --features=bpf_c,cuda,erasure,chacha --edition-idioms 2018-12-08 23:19:55 -07:00
1c2394227e Enable Rust 2018 2018-12-08 23:19:55 -07:00
c49e2f8bbd cargo +nightly fix --features=bpf_c,cuda,erasure,chacha --edition 2018-12-08 23:19:55 -07:00
af403ba6fa Ignore broken chacha bench 2018-12-08 23:19:55 -07:00
ec5a8141eb cargo fix --edition 2018-12-08 23:19:55 -07:00
92584bd323 Only run the audit 2018-12-08 23:19:55 -07:00
586d9ee850 fix some nits (#2034)
rework maybe_cargo_install(), renamed to cargo_install_unless, updated to take a command to attempt
2018-12-08 19:14:19 -08:00
2de45a4da5 Update airdrop tokens to 3 for fullnode (#2051)
Filter out leader while computing the super majority stake
2018-12-08 16:54:42 -08:00
f5569e76db Relocate native programs to deps/ subdirectory of the current executable
This layout is `cargo build` compatible, no post-build file moves
required.
2018-12-08 16:31:01 -08:00
3a13ecba1f Upgrade to Rust 1.31.0 2018-12-08 11:45:59 -08:00
73b9ee9e84 Add solana_ prefix to native_loader program
This allows its logging to show up in the default RUST_LOG=solana=info
log setting
2018-12-08 11:04:45 -08:00
b1682558a6 Remove optional --identity argument to simplify command 2018-12-08 10:22:51 -08:00
0a7c07977d Follow-up to 872a3317b 2018-12-08 09:23:08 -08:00
0a83b17cdd Upgrade to Rust 1.31.0 (#2052)
* Upgrade to Rust 1.31.0
* Upgrade nightly
* Fix all clippy warnings
* Revert relaxed version check and update
2018-12-07 20:01:28 -07:00
2bad6584f6 Update solana-genesis arguments 2018-12-07 16:57:02 -08:00
872a3317b5 Fully switch to bootstrap-leader for command-line args 2018-12-07 16:57:02 -08:00
38901002b0 Accept an ip address in addition to domain name 2018-12-07 16:57:02 -08:00
1db6a882bb rsync of genesis ledger now works for non-snap deployments 2018-12-07 16:57:02 -08:00
571522e738 Update jsonrpc version 2018-12-07 17:47:54 -07:00
b5a80d3d49 Update ledger replication chapter (#2029)
* ledger block -> ledger segment

The book already defines a *block* to be a slight variation of
how block-based changes define it. It's the thing the cluster
confirms should be the next set of transactions on the ledger.

* Boot storage description from the book
2018-12-07 16:52:36 -07:00
3441d3399b Replicator rework
* Move more of the replicator logic into the replicator class
* Add support for the RPC interface to query the storage last_id value
  that the replicator would sign and use to pick a block.
* Fix replicator connecting to gossip and change test to exercise that
  scenario.
2018-12-07 15:20:36 -08:00
fa288ab197 Remove note about replicators mining on same identity
Replicators pick their own identity; validators sample from
those.
2018-12-07 14:41:53 -08:00
af11562627 Correct ledger path 2018-12-07 11:32:08 -08:00
286f08f095 Drop old validator name, use fullnode instead 2018-12-07 11:32:08 -08:00
92c3e26c7a Flip symlinks 2018-12-07 11:32:08 -08:00
6516c2532d Ensure native programs for the correct platform are installed 2018-12-07 11:32:08 -08:00
82a0cc9d27 Ensure destination is not present 2018-12-07 11:32:08 -08:00
fa58da2401 Explicitly specify build variant when installing native programs 2018-12-07 11:32:08 -08:00
1ddf93fd86 Strip cp -r arg 2018-12-07 10:43:36 -08:00
cba9c5619e Relax stable version check during the transition period between 1.30 and 1.31 2018-12-06 19:44:47 -08:00
70c149c7da Rename leader/validator to bootstrap-leader/fullnode
Only rsyncing the genesis ledger snuck in here as well
2018-12-06 19:44:47 -08:00
b34e197424 Add newline at end of file 2018-12-06 17:46:46 -08:00
f4b26247c0 Genesis only needs a keypair, not the entire fullnode::Config 2018-12-06 16:31:24 -08:00
8f0a1e32d5 Use consistent naming for the mint id file 2018-12-06 16:31:24 -08:00
c4b8f0cd2f bench-tps will now generate an ephemeral identity if not provided with one
Also simplify scripts as a result
2018-12-06 16:30:48 -08:00
aecb06cd2a Update versions in install-libssl-compatibility.sh (#2044) 2018-12-06 15:57:30 -08:00
e3c4f1f586 Move client keygen into client.sh 2018-12-06 14:49:26 -08:00
97b1156a7a Rename Ncp to GossipService
And BroadcastStage to BroadcastService since it's not included in the
TPU pipeline.
2018-12-06 15:48:19 -07:00
02bfcd23a9 review comments (#2033) 2018-12-06 12:53:57 -08:00
cc2f448d92 Add fullnode --no-leader-rotation flag 2018-12-06 11:30:19 -08:00
b45d07c8cb Remove non-common functions from common.sh 2018-12-06 10:15:14 -08:00
f0fe089013 Adapt testnet-deploy metric datapoint names to {,bootnode-}fullnode 2018-12-06 08:04:33 -08:00
a20c1b4547 Apply review feedback
And take a stab at clarifying some other sections too.
2018-12-06 08:44:01 -07:00
56ffb4385d Use gossip RFC to seed the NCP description
And format the gossip RFC for easy diffing.
2018-12-06 08:44:01 -07:00
db3c5f91b6 Update configure 2018-12-05 22:51:44 -08:00
17204b4696 Use 80-character lines for easy diffing 2018-12-05 22:10:55 -07:00
8a83c45bc6 Use the book conventions for easy migration 2018-12-05 22:10:55 -07:00
a6312ba98f Switch snap to bootstrap-fullnode/fullnode naming 2018-12-05 18:59:43 -08:00
4170f11958 More detail for the storage RFC protocol
And section numbers which can be referenced from github issues.
2018-12-05 17:40:46 -08:00
04a0652614 Generalize net/ from leader/validator to bootstrap-fullnode/fullnode 2018-12-05 17:11:16 -08:00
b880dafe28 Cleanup intro 2018-12-05 15:25:11 -08:00
36530fc7c6 Fix link 2018-12-05 15:41:32 -07:00
4fd4218178 update terminology before tearing into RFCs (#1995)
update terminology before tearing into RFCs
2018-12-05 14:35:41 -08:00
632425c7d7 Move native_loader under programs/native/ 2018-12-05 14:32:42 -08:00
ad3e36a7ab Bump rand from 0.5.5 to 0.6.1 (#1891)
* Bump rand from 0.5.5 to 0.6.1

Bumps [rand](https://github.com/rust-random/rand) from 0.5.5 to 0.6.1.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>

* Fix conflicts and deprecated usages

* Fix benches
2018-12-05 14:12:10 -08:00
a29b307554 Reorg programming model to be more top-down
First explain how a client interacts with existing programs and why
you'd do that. Next, mention that users can contribute their own programs.
Then explain how those programs can be written in any language.
Finally, mention persistent storage, which is only needed by
stateful programs.
2018-12-05 13:36:43 -08:00
1bcafca690 Find test_tx again 2018-12-05 13:29:29 -08:00
5d80edd969 Properly check for failure (can't rely on set -e here) 2018-12-05 13:26:06 -08:00
e21b6d9db3 ensure we'd actually have N hashes per tick (#2011) 2018-12-05 12:49:41 -08:00
9c30bddb88 Rocks db erasure decoding (#1900)
* Change erasure to consume new RocksDb window

* Change tests for erasure

* Remove erasure from window

* Integrate erasure decoding back into window

* Remove corrupted blobs from ledger

* Replace Erasure result with result module's Result
2018-12-05 12:47:19 -08:00
7336645501 Move programs into the executable location so native_loader can find them 2018-12-05 10:49:06 -08:00
59e6bd115e system_program must be a static lib as it allocates Account memory 2018-12-05 10:49:06 -08:00
8597701b0f Expand matching to include optional _program suffix 2018-12-05 10:49:06 -08:00
15aef079e3 Include builtin programs for ledger verification 2018-12-05 10:49:06 -08:00
42689d4842 cargo fmt 2018-12-05 10:49:06 -08:00
6e9b8e21ae Drop new-style Result return to avoid error-type wrangling
Plus a backtrace at the point of failure is always nice
2018-12-05 10:49:06 -08:00
424612ea9d Reduce |ulimit -n| on macOS to max supported amount 2018-12-05 10:49:06 -08:00
5afafd9146 Update list of crates to publish 2018-12-05 10:49:06 -08:00
affa76f81d Initialize logger 2018-12-05 10:49:06 -08:00
340d5d557a Add vote program to workspace 2018-12-05 10:49:06 -08:00
214ed3667c Move system_transaction out of src/ 2018-12-05 10:49:06 -08:00
122627dda2 Move loader_transaction out of src/ 2018-12-05 10:49:06 -08:00
7af95eadcc Move vote_transaction out of src/ 2018-12-05 10:49:06 -08:00
9ee858a00c Move budget_program out of src/ 2018-12-05 10:49:06 -08:00
27d456bf93 Move storage_program out of src/ 2018-12-05 10:49:06 -08:00
ea6e042a6f Move vote_program out of src/ 2018-12-05 10:49:06 -08:00
a594f56c02 Add token_program.rs to sdk/ 2018-12-05 10:49:06 -08:00
e6fa74fe69 Remove custom Error enum, just use ProgramError 2018-12-05 10:49:06 -08:00
f184d69c7a Add account userdata errors 2018-12-05 10:49:06 -08:00
228a5aa75d Remove stray comment 2018-12-05 10:49:06 -08:00
9a4f8199d6 Move system_program out of src/ 2018-12-05 10:49:06 -08:00
ae0be1e857 Remove bpf_loader.rs 2018-12-05 10:49:06 -08:00
d010cac8a5 Remove token_program.rs 2018-12-05 10:49:06 -08:00
63a758508a Add sdk native_loader.rs 2018-12-05 10:49:06 -08:00
bf2658cee0 Apply review feedback 2018-12-05 10:30:16 -08:00
6ecb00a1d8 Add account access rules 2018-12-05 10:30:16 -08:00
1990501786 Describe executable and owner account metadata 2018-12-05 10:30:16 -08:00
963de90b7f Apply review feedback 2018-12-05 10:30:16 -08:00
13c7c3b3a6 Rewrite programming model with developer focus
Previous version talked about concurrency, which is described
in detail in the Anatomy of a Fullnode chapter. App developers
probably don't care that their programs run in parallel with
other programs. From their perspective, there's no difference
between 10x parallelism and a 10x faster CPU.
2018-12-05 10:30:16 -08:00
e4049f3733 Ensure subshell failures are reported 2018-12-05 10:28:03 -08:00
3cefa59a14 Remove stray tabs 2018-12-05 08:11:55 -08:00
0cb5ae41c6 Enable BPF shared objects (#2012)
* Switch to BPF ELF shared objects (.so)
2018-12-04 22:03:32 -08:00
209040e80e Free up term "finality" to imply "economic finality" (#2002)
* leader finality -> confirmation

Free up term "finality" to imply "economic finality."

* Reorder chapters
2018-12-04 20:52:38 -07:00
2112c87e13 Initial vote signing service implementation (#1996)
* Initial vote signing service implementation

- Does not use enclave for secure signing

* fix clippy errors

* added some tests

* more tests

* Address review comments + more tests
2018-12-04 11:10:57 -08:00
da44b0f0f6 Move markdown book theme to its default directory
It was getting in the way of my "git grep".
2018-12-04 10:14:41 -08:00
c1c2f1f0a9 Cleanup ad-hoc rpc address formation
Lots of places where we are forming rpc addresses.
2018-12-03 18:13:55 -08:00
777a0a858e Move ProgramError into sdk/ 2018-12-03 13:50:00 -08:00
68e99c18c0 Remove duplicate SYSTEM_PROGRAM_ID 2018-12-03 13:50:00 -08:00
c99f93e40a Remove signature.rs indirection 2018-12-03 13:50:00 -08:00
969016b9e4 Integrate cleanup from book (#1991)
This is backwards. In the future, I'll make changes to the RFC
first. Once the design is implemented, it can be more of a copy-paste
into the book.
2018-12-03 11:53:03 -07:00
4ae58cc854 Change range of leader scheduler to match current broadcasts (#1920) 2018-12-03 00:10:43 -08:00
1fbbf13ec9 Dissuade DOCKER=1 usage 2018-12-02 23:15:43 -08:00
3f9dc08984 Use docker system includes that now exist 2018-12-02 23:04:00 -08:00
1ddf9960a6 Update to llvm 0.0.4 2018-12-02 21:30:57 -08:00
9f45c0eb03 Set OS correctly 2018-12-02 21:11:56 -08:00
67155861e5 generate.sh output 2018-12-02 21:11:56 -08:00
5111255942 Map native filesystem to same location within docker 2018-12-02 21:11:56 -08:00
b405deb55a Always use llvm-native's include, as llvm-docker has no include 2018-12-02 21:11:56 -08:00
9b5368d0ec fixes to rfcs (#1976) 2018-12-02 16:44:14 -07:00
f8aa806d77 Explain how ledger broadcasting works (#1960) 2018-12-02 16:43:40 -07:00
e98ef7306d Update LLVM (#1987)
Build for all targets, use bzip2
2018-12-02 14:33:07 -08:00
188904c318 Fix Docker paths after move (#1986) 2018-12-02 13:47:05 -08:00
9594293804 Write versions in .. 2018-12-02 12:17:44 -08:00
814801d321 Restore OS macro 2018-12-02 12:17:44 -08:00
0896511b14 Echo install.sh output properly 2018-12-02 12:17:44 -08:00
222b177745 Echo cxx instead of cc when building c++ source files 2018-12-02 12:17:44 -08:00
4189a30b13 Check for version.md instead of README.md 2018-12-02 11:28:19 -08:00
f6f0a5d448 Store version info in version.md instead of README.md 2018-12-02 10:12:16 -08:00
b21facab7b Add metrics for prune messages (#1981) 2018-12-01 14:05:40 -08:00
70312ed77f Package package.sh to avoid a special case 2018-12-01 12:37:57 -08:00
ee9255cb1d Avoid unnecessary llvm/ subdirectory 2018-12-01 12:37:57 -08:00
f045e19ddc Remove version info from llvm/criterion install directory 2018-12-01 12:37:57 -08:00
3f1bececdf Update location of bpf sdk 2018-12-01 12:37:57 -08:00
34c3a0cc1f Add signature verification to gossip (#1937) 2018-12-01 12:00:30 -08:00
8ef73eee51 Reject builds faster: if sanity checks fail don't bother with the rest 2018-12-01 11:43:29 -08:00
e52f3f34a4 Autoinstall dependencies in the SDK itself 2018-12-01 10:47:59 -08:00
27b617b340 Remove upstream LLVM install instructions as we now (temporarily) bundle a forked LLVM 2018-12-01 10:47:59 -08:00
21a73d81ee grooming 2018-12-01 10:47:59 -08:00
7c3e6e8e86 Move bpf-sdk to sdk/bpf 2018-12-01 10:47:59 -08:00
42dc18ddfc Avoid exiting when cmd is not found 2018-11-30 20:44:34 -08:00
801df72680 h4,h5 font size increased 2018-11-30 18:03:55 -08:00
c8f161d17f a custom mdbook theme implemented to improve book style and structure 2018-11-30 18:03:55 -08:00
549bfe7412 Vote signing JSON RPC service (#1965)
* Vote signing JSON RPC service

- barebone service that listens for RPC requests

* Daemon for vote signer service

* Add request APIs for JSON RPC

* Cleanup of cargo dependencies

* Fix compiler error
2018-11-30 15:07:08 -08:00
b00011a3f1 Use custom LLVM (#1971)
BPF SDK uses custom LLVM
2018-11-30 14:33:29 -08:00
3ca826a480 re-enable test_tpu_forwarder (#1964) 2018-11-30 13:52:37 -08:00
b8ebb4d609 Cleanup RFCs on branch generation and leader rotation (#1967)
* rework rfcs

* comments
2018-11-30 12:51:40 -08:00
5321b606c1 update gossip and entrytree RFCs (#1972) 2018-11-30 12:26:46 -08:00
a1ad74a986 Bump nix from 0.11.0 to 0.12.0
Bumps [nix](https://github.com/nix-rust/nix) from 0.11.0 to 0.12.0.
- [Release notes](https://github.com/nix-rust/nix/releases)
- [Changelog](https://github.com/nix-rust/nix/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nix-rust/nix/compare/v0.11.0...v0.12.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-11-30 10:39:13 -07:00
29d95328ce Use non-zero exit on channel determination failure 2018-11-30 08:50:17 -08:00
b2eeccbcc2 Find channel-info.sh 2018-11-30 08:49:49 -08:00
bad0b55ab6 Expose which keys signed the Transaction in the SDK 2018-11-30 08:16:23 -08:00
0878bd53d9 Delete stub src/transaction.rs 2018-11-29 23:07:57 -08:00
de910e1169 Make test_pubkey_distribution faster
multi-thread pubkey histogram generation.
2018-11-29 17:37:37 -08:00
f2cf647508 add entry-tree-cache and gossip rfc (#1946) 2018-11-29 15:44:58 -08:00
9684737de7 Add wait before checking confirm again
Otherwise we can quickly check that we
have no signature 4 times in a row.
2018-11-29 15:32:58 -08:00
ecc87ab1aa Add an optional timeout to thin_client
Such that a negative test like test_transaction_count doesn't
have to wait num_retries * default_timeout.
2018-11-29 13:53:40 -08:00
3cc0dd0d1e stabilize testing with --test-threads=1 2018-11-29 12:54:42 -08:00
fa359c6fc4 Merge vote new and register transactions 2018-11-29 12:31:34 -08:00
5c71f2a439 Add ulimit check to stable test suite
cargo test needs a larger ulimit than the default as well.
2018-11-29 11:39:42 -08:00
8cc751d1cc Improve RPC service startup error messages with actual error
The error message was always fixed to one about ports, but that's not the only
error that can occur.
2018-11-29 11:39:42 -08:00
978fd6858f Move replicator_startup_test to integration test set
Sometimes fails when run multithreaded with other tests.
2018-11-29 11:39:42 -08:00
41689256c6 Ensure key[0] is signed 2018-11-29 10:26:46 -08:00
99445f475b Add leader rotation links
Avoid the term "leader selection" here. More precise terms are
"leader scheduling", "leader rotation", and "fork selection."
2018-11-28 18:08:05 -08:00
070d6a2faa Drop mention of CLI tooling
This is a "how does it work?" chapter, not "how do I do it?"
2018-11-28 18:08:05 -08:00
3de63570f6 Better formatting and lots of terminology links 2018-11-28 18:08:05 -08:00
8d1ac37734 More terms 2018-11-28 18:08:05 -08:00
36503ead70 Fix capitalization
And delete JSON RPC Service for now, since it currently has no
content.
2018-11-28 18:08:05 -08:00
f4d3b3f0d6 Merged synchronization, PoH and VDF sections 2018-11-28 18:08:05 -08:00
acee1f7c6c Merged synchronization, PoH and VDF sections 2018-11-28 18:08:05 -08:00
c242467fdf Expand cluster overview, integrate Avalanche chapter 2018-11-28 18:08:05 -08:00
47ae25eeb9 Fix link 2018-11-28 17:48:41 -07:00
ddc4e7ffa0 use fewer transactions for the public, "welcome to Solana" demo 2018-11-28 16:23:22 -08:00
6a2ffafdb9 Update docker-solana location for CI 2018-11-28 16:20:02 -08:00
0c091c1b24 Dockerized LLVM (#1914)
Optionally build with dockerized custom LLVM
2018-11-28 14:41:53 -08:00
55993ef0ce RFC for rendezvous of vote signing service with validator node (#1947) 2018-11-28 14:19:57 -08:00
30a0820cbe Update README.md 2018-11-28 13:33:55 -08:00
194e3100a9 Additional checks in test_bank_checkpoint_zero_balance (#1943) 2018-11-28 12:40:34 -08:00
8ad4464d4b add tests for other "from" indexes signing (or not) 2018-11-28 07:56:04 -08:00
e7b0a736f5 verify signature is on the from account 2018-11-28 07:56:04 -08:00
fa4bdb4613 add --nocapture to get some logs from flaky tests 2018-11-27 23:24:20 -08:00
167eb01735 optimize bench-tps and rpc_request to work on crappy WSL boxes 2018-11-27 22:45:08 -08:00
8fb5d72b13 Make insufficient tokens message more helpful 2018-11-27 17:37:25 -08:00
83c0711760 Rename SolKeyedAccounts to SolKeyedAccount 2018-11-27 15:36:04 -08:00
8947c5a4aa Set account to default if the balance reaches 0 in a checkpoint bank (#1932)
Fixes: #1931
2018-11-27 14:17:29 -08:00
a7562c9be1 Extract execute_transaction() from the bank 2018-11-27 12:35:52 -07:00
08dc169f94 Hoist load_loaders()
This makes execute_transactions() stateless.
2018-11-27 12:35:52 -07:00
f549d8ac74 Hoist loading of loaders
This might cause a TPS boost in batched BPF transactions, since
now it'll only clone its account once per transaction instead of
once per instruction.
2018-11-27 12:35:52 -07:00
1ac7536286 Pass executable_accounts into with_subset() 2018-11-27 12:35:52 -07:00
ec0a56cb9c Tokens are unsigned 2018-11-27 10:14:37 -08:00
f0d24a68ee Configure -rpath to locate libcriterion 2018-11-26 21:16:42 -08:00
2c529f2118 Ancestor verification for vote signing (#1919) 2018-11-26 19:26:54 -08:00
af1d9345e0 De-dup ci book build 2018-11-26 18:38:57 -08:00
03ce45d93a Fix snap build 2018-11-26 18:38:48 -08:00
1695803248 added branch determination and enclave configuration section to encla… (#1873)
* added branch determination and enclave configuration section to enclave rfc

* spelling and grammar
2018-11-26 17:57:38 -08:00
58e3dd4cb6 Avoid trying to install svgbob when already installed 2018-11-26 17:18:55 -08:00
c7f678688d Stub out log functions when building tests 2018-11-26 15:41:49 -08:00
7bf4c08f70 Add BPF C unittest framework 2018-11-26 12:25:29 -08:00
69beee5416 Install svgbob 2018-11-26 09:44:19 -08:00
2200a31331 Generate book images via Make 2018-11-26 09:44:19 -08:00
88e270723f Move markdown book out of src/ 2018-11-26 09:44:19 -08:00
a13e25f083 Ignore flaky test_tpu_forwarder 2018-11-26 09:27:21 -08:00
826ac80e62 Avoid subverting bool return value 2018-11-26 09:11:40 -08:00
4506584c48 Employ stdbool.h, add stub wchar.h 2018-11-26 09:11:40 -08:00
3d3a30e200 Fix mdbook test 2018-11-26 07:51:10 -08:00
76b83ac0f4 Move testnet demos into the book
Have git readme focus on fullnode development and the book focus on
users.
2018-11-26 07:51:10 -08:00
903a9bfd05 s/contract/program/ 2018-11-26 08:20:42 -07:00
655ee1a64b Fix typos 2018-11-26 08:20:42 -07:00
e0e6c3fdb2 Extract execute_instruction() to seed new runtime module
Fixes #1528
2018-11-26 08:20:42 -07:00
31f00974f2 Hoist the lookup of executable accounts 2018-11-26 08:20:42 -07:00
c3218bb9c2 Hoist tick_height 2018-11-26 08:20:42 -07:00
90fb6ed739 Bump itertools from 0.7.9 to 0.7.11
Bumps [itertools](https://github.com/bluss/rust-itertools) from 0.7.9 to 0.7.11.
- [Release notes](https://github.com/bluss/rust-itertools/releases)
- [Commits](https://github.com/bluss/rust-itertools/compare/0.7.9...0.7.11)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-11-26 08:19:20 -07:00
d2972024de Uppercase acronyms
Looks like there will be very little Rust code in the markdown book
so switching back to English capitalization conventions.
2018-11-25 22:58:07 -07:00
3f9ad1253d Re-enable fixed tests (#1907) 2018-11-25 20:51:55 -08:00
a556a54dc9 Use title in link 2018-11-25 20:29:45 -07:00
dc0a2ca656 Move disclaimer down a bit
Odd to see a disclaimer before knowing anything about what you're reading
2018-11-25 20:27:35 -07:00
e9f986e54d Boot comma 2018-11-25 20:22:46 -07:00
357d852382 Add title to markdown book 2018-11-25 20:19:45 -07:00
6e00c6790e Move testnet metrics dashboard management out of the Grafana UI 2018-11-25 16:10:25 -08:00
f36604357e Remove CUDA Snap references 2018-11-25 16:08:29 -08:00
c3fb9d5549 Cleanup book (#1904)
* Cleanup book

* Distinguish upstream from downstream validators
* Add BroadcastStage to Fullnode/Tpu diagrams
* First attempt to re-describe the runtime

* Reorg book

Push back details of the fullnode implementation
2018-11-25 16:58:38 -07:00
f5b5c54d7d Update condition for nosigverify (#1903) 2018-11-25 13:11:07 -08:00
9f0b06bb86 Filter out leader node while retransmitting blobs (#1894) 2018-11-24 20:33:49 -08:00
57a384d6a0 Rocks db window service (#1888)
* Add db_window module for windowing functions from RocksDb

* Replace window with db_window functions in window_service

* Fix tests

* Make note of change in db_window

* Create RocksDb ledger in bin/fullnode

* Make db_ledger functions generic

* Add db_ledger to bin/replicator
2018-11-24 19:32:33 -08:00
69802e141f Add the story of how this codebase came to be 2018-11-24 14:39:53 -07:00
6fc02b7424 Detect legacy programs upfront 2018-11-24 11:56:51 -07:00
30cdd85028 Implement the same interface in all builtin programs 2018-11-24 11:56:51 -07:00
871dd47019 Extract the part of execute_instruction that should only return a ProgramError
TODO: hoist load_executable_accounts() and then change
process_instruction() to return ProgramError.
2018-11-24 11:56:51 -07:00
37f8dd57e2 Extract ProgramError from BankError 2018-11-24 11:56:51 -07:00
f827bfd83f Remove instruction index parameter 2018-11-24 11:56:51 -07:00
b3af930153 Rename process_transaction to process_instruction 2018-11-24 11:56:51 -07:00
cd488b7d07 Hoist program static methods to top-level functions 2018-11-24 11:56:51 -07:00
e2373ff51a add nosigverify command line option to ease debug 2018-11-23 16:55:04 -08:00
b3d2c900cd Rename BudgetState to BudgetProgram 2018-11-23 13:25:17 -07:00
d5adec20a3 get_ip_addr: Fall back to loopback if no better option exists 2018-11-23 13:24:41 -05:00
942256a647 Add db_ledger benchmarks (#1875)
* Add db_ledger benchmarks

* ignore benches in CI, due to timeouts
2018-11-23 06:12:43 -08:00
ca39486d06 Bump libc from 0.2.43 to 0.2.44
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.43 to 0.2.44.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.43...0.2.44)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-11-22 12:32:38 -07:00
db632fcc2a Bump tokio from 0.1.11 to 0.1.13
Bumps [tokio](https://github.com/tokio-rs/tokio) from 0.1.11 to 0.1.13.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Changelog](https://github.com/tokio-rs/tokio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-0.1.11...tokio-0.1.13)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-11-22 11:12:46 -07:00
a3321a5d80 Fix endianness in db_ledger to account for the default byte-comparator used by RocksDB (#1885) 2018-11-22 01:35:19 -08:00
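The fix above leans on a general RocksDB property: the default comparator orders keys as raw bytes, so numeric keys must be encoded big-endian for the byte-wise order to match numeric order. A minimal, self-contained sketch of the idea (not the actual db_ledger code):

```rust
fn main() {
    // Big-endian encoding preserves numeric order under byte-wise
    // comparison, which is what RocksDB's default comparator uses.
    let one = 1u64.to_be_bytes(); // [0,0,0,0,0,0,0,1]
    let big = 256u64.to_be_bytes(); // [0,0,0,0,0,0,1,0]
    assert!(one < big); // byte order agrees with 1 < 256

    // Little-endian keys break this: 256 = [0,1,...] sorts before 1 = [1,0,...].
    let one_le = 1u64.to_le_bytes();
    let big_le = 256u64.to_le_bytes();
    assert!(big_le < one_le); // byte order disagrees with numeric order
}
```

This is why slot/index-style keys in a byte-ordered store are conventionally written with `to_be_bytes` rather than the platform-native (usually little-endian) layout.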
521de13571 Add maximum repair length to db_window (#1886)
* Add maximum repair length to db_window
2018-11-21 23:44:49 -08:00
e6f91269ec Use --no-tty with apt-key in Docker 2018-11-21 16:45:48 -08:00
3abf6a8a30 Reorg the markdown book to cater to app devs
First, talk about how a client interacts with Solana to do useful
things. Then describe how the fullnode you're talking to works and
why it's so very fast.  Last, why that fullnode you don't trust
does what you asked it to anyway.
2018-11-21 15:49:57 -08:00
8d7f380dfd Remove extra version check 2018-11-21 14:30:26 -08:00
59163e2dd9 Optimize some CI stuff (#1880)
* CI Optimizations
2018-11-21 12:16:16 -08:00
574021041d Calculate tag in README
Don't have people test-driving old code. Latest tag should be good.
2018-11-21 11:17:23 -07:00
872adf1031 Update README.md 2018-11-20 16:48:18 -08:00
5fc1167802 Update README to say cuda 10.0
Prebuilts fetched with fetch-perf-libs are built
with cuda 10 now.
2018-11-20 10:07:15 -07:00
c89a09e5d0 Fix build issue seen when launching gce instance (#1874) 2018-11-20 07:37:16 -08:00
d9dabdfc74 Rocks db window utils (#1851)
* Implement new ledger module based on RocksDb

* Add db_window module for windowing functions from RocksDb
2018-11-19 23:20:18 -08:00
6b910d1bd4 add tpu_forwarding, simplify ClusterInfo::new() from Result<Self> to Self 2018-11-19 20:45:49 -08:00
1c4f799845 alphabetize deps (#1872) 2018-11-19 20:13:09 -08:00
bbd9ea8c00 Delete settings.rs.foo 2018-11-19 13:39:08 -08:00
fc67a968e8 Use known keys in the unit test to avoid random false positives. 2018-11-19 13:41:24 -07:00
3d113611cc remove Result<> return from ClusterInfo::new() (#1869)
strip Result<> for ClusterInfo::new()
2018-11-19 11:25:14 -08:00
c1af48bd85 Rename program_id => owner 2018-11-18 16:24:13 -08:00
07667771ef Fix Gossip Pushes going to invalid addresses (#1858) 2018-11-17 19:57:28 -08:00
3822c29415 Route program_id to program entrypoint 2018-11-17 19:42:03 -08:00
ff386d6585 Add disclaimer to markdown book
copy-paste from readme
2018-11-17 19:56:08 -07:00
e3ddfd8dff Remove budget RFC
It describes the wallet CLI, not the Budget program. And all the
same content is now maintained in src/wallet.md.
2018-11-17 19:52:00 -07:00
f0c79fdbca Delete 0005-branches-tags-and-channels.md 2018-11-17 18:34:47 -08:00
88ddb31477 terminology cleanup: leader slots and voting rounds 2018-11-17 18:56:13 -07:00
077d1a41f1 Add to book 2018-11-17 18:56:13 -07:00
857ab8662e backticks and missing variable descriptions 2018-11-17 18:56:13 -07:00
a17f9bd0f4 Work towards adding leader rotation to the book 2018-11-17 18:56:13 -07:00
f4b9e93b11 Migrate storage RFC to book 2018-11-17 18:55:08 -07:00
2c11bf2e66 Various book cleanup
* Merge Leader and Validator diagrams
* New sdk-tools diagram
* Move terminology to just after introduction
* Purge use of LAMPORT as an acronym
* Add notes about persistent storage
2018-11-17 17:50:29 -08:00
0e33773e92 Copy release docs into RELEASE.md
Once the repo implements something proposed in an RFC, no need to acknowledge its existence.

@mvines, please update this if it's no longer accurate.
2018-11-17 18:48:53 -07:00
719e14b30a Add an explicit state of a reserved signature
An RPC client that fetches the signature status before the bank finishes
executing the corresponding Transaction should receive SignatureNotFound
instead of Confirmed
2018-11-17 16:40:23 -08:00
38883d1de4 Clarify comment 2018-11-17 16:40:23 -08:00
c6c8351fca Update env_logger requirement from 0.5.12 to 0.6.0
Updates the requirements on [env_logger](https://github.com/sebasmagri/env_logger) to permit the latest version.
- [Release notes](https://github.com/sebasmagri/env_logger/releases)
- [Commits](https://github.com/sebasmagri/env_logger/commits/v0.6.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-11-17 16:30:44 -08:00
043f50487a Document patch version updates after a release is made 2018-11-17 16:29:19 -08:00
3a2b91f1b7 Add Cargo.lock to avoid getting broken by random upstream changes 2018-11-17 15:54:21 -08:00
a76d11d486 Don't ignore Cargo.lock 2018-11-17 15:54:21 -08:00
d1f01b5209 Fix clippy lint 2018-11-17 15:54:21 -08:00
7a54dbf7d5 Restore clippy, and run clippy sooner 2018-11-17 15:54:21 -08:00
33a5d5fe93 Enable debug builds by default for better backtraces 2018-11-17 10:52:08 -08:00
201a4b7b2a Advance input pointer correctly 2018-11-17 10:30:21 -08:00
591a28d516 Avoid extra commit when publishing book 2018-11-17 10:17:52 -08:00
22d160a3c3 Install drone 2018-11-17 17:20:15 +00:00
903c82d7f1 Add timeouts 2018-11-17 09:09:25 -08:00
b2e0395f19 Bump release tarball build timeout (ahem rocksdb) 2018-11-17 08:12:03 -08:00
d96a6b42a5 Move drone into its own crate 2018-11-16 20:42:21 -08:00
cf95708c18 Set drone address to always be the initial network entry point (#1847)
* Set drone address to always be the initial network entry point, so that even when leaders rotate the client can still find the drone

* Extract drone address as a separate argument to bench-tps

* Add drone port to client.sh instead of setting it in bench-tps

* Add drone entrypoint to scripts

* Fix build error
2018-11-16 19:56:26 -08:00
7fe50d6402 Temporarily disable clippy 2018-11-16 19:55:33 -08:00
e1c7b99450 Accounts get kicked if no tokens 2018-11-16 18:53:37 -08:00
12ae7b9a6b Add test for tvu POH verification (#1844) 2018-11-16 15:48:10 -08:00
6ac5700f2e Move metrics into its own crate 2018-11-16 15:10:07 -08:00
a0dd8617be Remove airdrop from fullnode 2018-11-16 13:25:55 -08:00
1576072edb remove spurious eprintln!() 2018-11-16 10:21:58 -08:00
03d206a7ca Check for valid tvu, not tpu in broadcast (#1836) 2018-11-15 23:30:22 -08:00
c973de1d76 Decouple log and metrics rate (#1839)
Use separate env for log and metrics rate.

Set default log level to WARN if unset.
2018-11-15 22:27:16 -08:00
71336965a6 Limit targets to 4 in bench-tps
Transactions got bigger, so only 4 targets fit in a
Transaction now.
2018-11-15 20:25:07 -08:00
e791d0f74d Drone now returns signed airdrop transactions 2018-11-15 17:13:13 -08:00
3543a9a49f Add check for missing signature with fee'ed transaction
And update fetch-perf-libs version
2018-11-15 16:23:13 -08:00
7dd198a99e Change signed_key to index into account_keys
If index is within the signed keys range.
2018-11-15 16:23:13 -08:00
e048116ab2 Remove signed_keys
Use first signatures.len() of account_keys for signing
2018-11-15 16:23:13 -08:00
cda9ad8565 Multiple signatures for transactions
With multiple instructions, a TX may need
multiple signatures.

Fixes #1531
2018-11-15 16:23:13 -08:00
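The convention these commits converge on is that the first `signatures.len()` entries of `account_keys` are the signing keys, so "is this account a signer?" reduces to an index check. A simplified sketch under that assumption (hypothetical `Tx` struct, not the real `Transaction` type):

```rust
// Hypothetical simplified transaction: by convention, the first
// `signatures.len()` entries of `account_keys` are the signers.
struct Tx {
    signatures: Vec<u8>, // one placeholder byte per signature
    account_keys: Vec<&'static str>,
}

impl Tx {
    // An account key at `index` is a signer iff it falls within
    // the leading signed range of `account_keys`.
    fn is_signer(&self, index: usize) -> bool {
        index < self.signatures.len()
    }
}

fn main() {
    let tx = Tx {
        signatures: vec![0, 1],
        account_keys: vec!["alice", "bob", "budget_program"],
    };
    assert_eq!(tx.account_keys.len(), 3);
    assert!(tx.is_signer(0));
    assert!(tx.is_signer(1));
    assert!(!tx.is_signer(2)); // program account is not a signer
}
```

This removes the need for a separate `signed_keys` list: ordering the keys carries the same information.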
928f375683 Rocks db (#1792)
* Add rocksdb crate

* Implement new ledger module based on RocksDb
2018-11-15 15:53:31 -08:00
d3e521f70e accept other socket errors, ignore unless out of tries (#1835) 2018-11-15 15:49:37 -08:00
96e03eca14 Remove unused dependency 2018-11-15 15:13:50 -08:00
659dfbf51f cargo:rerun always triggers if file does not exist 2018-11-15 14:59:54 -08:00
a7ee428214 Fix build 2018-11-15 14:06:57 -08:00
a41254e18c Add scalable gossip library (#1546)
* Cluster Replicated Data Store

Separate the data storage and merge strategy from the network IO boundary.
Implement an eager push overlay for transporting recent messages.

Simulation shows fast convergence with 20k nodes.
2018-11-15 13:23:26 -08:00
4a3230904e Specify rpc port 2018-11-15 12:32:15 -08:00
c81a3f6ced Fix RPC address clashes on local multi-node testnet (#1821)
* Fix RPC address clashes on local multi-node testnet
2018-11-15 10:42:02 -08:00
a5412fc0cd Fix find port functions 2018-11-15 10:45:39 -07:00
83fc3c10cf Setup CUDA env for local builds 2018-11-15 08:00:52 -08:00
6b6c87e510 Run BPF tests in CI 2018-11-14 17:16:37 -08:00
267f9115ba Add drone RFC (#1754)
* Add stamps RFC

* Don't use the language 'load the program'

* Replace stamps RFC with new more general drone design

* Fix typo

* Describe potential techniques for getting recent last_ids
2018-11-14 15:19:34 -08:00
39c87fd103 Add BPF benchmarks 2018-11-14 12:06:06 -08:00
2ad2fdd235 Remove inline simple program to avoid maintenance burden 2018-11-14 10:39:22 -08:00
1fda4b77ef Expose tick_height to bpf programs 2018-11-14 10:33:27 -08:00
5a8938209b Expose tick_height to native programs 2018-11-14 10:33:27 -08:00
0bf2ff6138 Add convenience macro for native program entrypoint 2018-11-14 10:33:27 -08:00
e33f3a2562 Publish expected native program entrypoint in sdk/ 2018-11-14 10:33:27 -08:00
bba19ce667 Catch up to solana-genesis tokens argument name change 2018-11-14 09:55:33 -08:00
9bf2d1d7b4 Publish BPF SDK to a channel-specific URL to ease downstream pickup 2018-11-14 09:36:44 -08:00
9fe210c454 Add host information to db entries (#1778)
Add new field to each db entry identifying the host
that it originated from.
2018-11-13 21:54:15 -08:00
f99fae3c61 Use exact solana-rbpf version, not maintaining backward compatibility 2018-11-13 17:45:46 -08:00
860dcdb449 Stubs for some libc headers 2018-11-13 17:44:46 -08:00
70cebaf74a Add size_t/ssize_t/sol_memset/sol_strlen 2018-11-13 17:44:46 -08:00
317fe19da7 Fix INC_DIRS usage 2018-11-13 17:44:46 -08:00
e7b6c8b7e0 Accounts get kicked if no tokens 2018-11-13 17:23:13 -08:00
478ba75d6b Update featurized test 2018-11-13 17:19:10 -08:00
4e553ea095 test_replicate fails locally, ignore 2018-11-13 17:13:25 -08:00
0c46f15f94 test_rpc_new fails locally, ignore for now 2018-11-13 17:12:25 -08:00
7b92497d21 Update counters irrespective of logging level (#1799) 2018-11-13 16:55:14 -08:00
4668a798ca Fix Sagar and I crossing wires (#1810) 2018-11-13 15:18:54 -08:00
729d28d910 Add poh verification before processing entries
- Replicate stage now verifies entries delivered
  by the window
- Minor refactor of entries_from_blobs
2018-11-13 14:17:00 -08:00
66e9d30fda Change testnet automation to use TAR instead of snap (#1809) 2018-11-13 13:33:15 -08:00
6335be803c Broadcast last tick before leader rotation (#1766)
* Broadcast last tick before leader rotation to everybody on network

* Add test

* Refactor broadcast
2018-11-13 02:21:37 -08:00
a77b1ff767 Revert "Migrate from ring to ed25519-dalek" (#1798)
* Revert "Migrate from ring to ed25519-dalek"

This reverts commit 7c610b216b.

* Fix test failures with revert
2018-11-12 22:34:43 -08:00
1f6ece233f Remove unused path 2018-11-12 22:24:56 -08:00
d53077bb3e Activate perf-libs compatible CUDA env 2018-11-12 22:24:56 -08:00
2b44d5fb6a Fix snap PR builds 2018-11-12 22:24:56 -08:00
10e1e0c125 Switch to perf-libs v0.11.0 for CUDA 10 support 2018-11-12 20:58:52 -08:00
017c281eaf Remove CUDA support from Snap 2018-11-12 20:31:16 -08:00
c5b1bc1128 Remove obsolete update-default-cuda.sh 2018-11-12 20:31:16 -08:00
dafdab1bbc Add clang dependency to docker images, update validation checks (#1794) 2018-11-12 19:36:36 -08:00
d0ebee5e3b Correct path to solana-perf-CUDA_HOME.txt 2018-11-12 19:17:54 -08:00
aa7c741ec0 Switch to perf-libs v0.10.6 2018-11-12 19:17:54 -08:00
9e7b9487b0 perf-libs now drives setting CUDA_HOME 2018-11-12 18:49:15 -08:00
c7a67b5a02 Add deploy command to test 2018-11-12 18:21:16 -07:00
0e749dad4c Use cluster_info to get rpc address 2018-11-12 18:21:16 -07:00
fa72160c95 add last_id to Entry, PohEntry (#1783)
add prev_id to Entry, PohEntry
2018-11-12 17:03:23 -08:00
851e012c6c Upgrade EC2 image to 18.04 with CUDA 9.2 and 10 2018-11-12 15:17:34 -08:00
7f76403d0a Clean ~/solana during network start to avoid tripping over leftover files 2018-11-12 15:09:14 -08:00
126f065cc9 Extract complex loop from execute_instruction 2018-11-12 14:47:23 -08:00
7ee4dec3f1 Upgrade GCE GPU image to 18.04 2018-11-12 12:18:50 -08:00
c07d09c011 Add net/scp.sh for easier file transfer to/from network nodes 2018-11-12 11:48:53 -08:00
4d98da44e3 Fix possibility of a vote error breaking ledger (#1768)
* Fix possibility of a vote error breaking ledger

* Add test
2018-11-12 11:40:32 -08:00
15c00ea2ef Improve comments 2018-11-12 10:59:01 -08:00
522876c808 Rename Account.program_id to Account.owner 2018-11-12 10:59:01 -08:00
7d05cc8c5d Add missing account fields 2018-11-12 10:59:01 -08:00
49f4be6a2b codemod --extensions rs loader_program_id loader 2018-11-12 10:59:01 -08:00
e702515312 Add basic C++ support 2018-11-12 09:08:40 -08:00
5fce8d2ce1 Don't ignore VoteProgram errors 2018-11-11 22:18:06 -07:00
2696b22348 Cleanup TVU diagram 2018-11-11 20:55:21 -08:00
5df4754579 Don't call instructions transactions 2018-11-11 20:07:15 -08:00
a00284c727 Remove userdata diff and make helper fn 2018-11-11 18:57:28 -07:00
3832602ec4 Move notifications after store_accounts 2018-11-11 18:57:28 -07:00
3466f139a4 set -e shuffling 2018-11-11 16:24:36 -08:00
def7d156f6 codemod --extensions sh '#!/usr/bin/env bash -e' '#!/usr/bin/env bash\nset -e' 2018-11-11 16:24:36 -08:00
33aab094ef codemod --extensions sh '#!/bin/bash' '#!/usr/bin/env bash' 2018-11-11 16:24:36 -08:00
cf6f344ccc Add CUDA_HOME env var to permit overriding the CUDA install location 2018-11-11 16:24:18 -08:00
b670b9bcde Regenerate identity files in CI 2018-11-11 09:22:52 -07:00
fea86b2955 No longer serialize as JSON-encoded pkcs8
That's supposed to be an ASCII format, but we're not making use
of it. We can switch back to that some day, but if we do, it shouldn't
be JSON-encoded.
2018-11-11 09:22:52 -07:00
7c610b216b Migrate from ring to ed25519-dalek
Why?

* Pure Rust, no BoringSSL (or OpenSSL) dependency
* Those avx2 benchmarks
* ring includes far more than what we need
* ring author won't add release tags: https://github.com/briansmith/ring#versioning--stability
2018-11-11 09:22:52 -07:00
bec34496f1 Generate id.json earlier 2018-11-10 18:05:55 -08:00
49014393e1 Be less fancy for bash 4.4 compat 2018-11-10 18:05:55 -08:00
818d03c835 Bump earlyoom version 2018-11-10 15:56:17 -08:00
cdf1a96e23 Revert "V1 Window/Ledger based on RocksDb (#1712)"
This reverts commit bfcdec95cb.
2018-11-09 20:25:53 -07:00
bfcdec95cb V1 Window/Ledger based on RocksDb (#1712)
* Add rocksdb

* Implement new ledger module based on RocksDb
2018-11-09 18:30:26 -08:00
fc55835932 Revert "Boot rpc_port"
This reverts commit 1984b6db06.
2018-11-09 17:52:10 -07:00
3772910bf2 Boot rpc_port 2018-11-09 17:52:10 -07:00
24379c14dc Fix clippy warnings 2018-11-09 17:52:10 -07:00
23846bcf1c Don't require a cluster to query for one's own pubkey 2018-11-09 17:52:10 -07:00
9dd0a6e6a7 Boot drone_addr and rpc_addr from config
WalletConfig is intended for the validated command-line input.
2018-11-09 17:52:10 -07:00
5ca473ac2d Don't get the network from parse_args 2018-11-09 17:52:10 -07:00
e1a551e8f2 Create target/ if it doesn't exist yet 2018-11-09 12:03:07 -08:00
0926702269 Fix grcov download on macos and upload gcda/gcdo files for debugging 2018-11-09 11:19:28 -07:00
0a85347a0d Upgrade Rust stable to 1.30.1
Fixes `cargo doc`
2018-11-09 07:46:51 -08:00
fb59f73c1a Link readme to book (#1750)
* Link readme to book
2018-11-09 07:27:03 -07:00
eaa8b9cb1e Publish book 2018-11-09 02:13:59 -07:00
b8261d7d83 Determine network version for tar and local deploys 2018-11-08 22:02:42 -08:00
f5827d4a83 Fix typo 2018-11-08 17:15:48 -07:00
b0f8a983c4 Add the solana-wallet documentation (#1744)
* Add the solana-wallet documentation

There doesn't seem to be a way to publish bin docs to crates.io.
Until there is, we can include CLI documentation in the appendix
of the markdown book.

* A command to generate all the usage docs

Usage:

$ scripts/wallet-help.sh >> src/wallet.md
2018-11-08 15:42:20 -07:00
56c77bf482 Add IntelliJ files to ignore 2018-11-08 13:00:00 -08:00
d831c5dcc9 remove dead poh code (#1747) 2018-11-08 12:55:23 -08:00
ce474eaf54 Better titles for tpu and tvu 2018-11-08 11:33:52 -07:00
0da1c06b15 Add disk to the hardware used by both Tpu and Tvu 2018-11-08 11:33:52 -07:00
01edc94a4b Move description of the Rust flavor of stages to service.rs 2018-11-08 11:33:52 -07:00
f96563c3f2 Add documentation for pipelining 2018-11-08 11:33:52 -07:00
30697f63f1 add support for slots in erasure (#1736) 2018-11-08 10:20:03 -08:00
433fcef70b Enclave RFC updates for PoH verification (#1739)
* Enclave RFC updates for PoH verification

* fix spelling error
2018-11-08 06:52:14 -08:00
34b5b3d9c5 Add TODO in logs section 2018-11-07 20:46:57 -08:00
ea8b19a40f Update testnet info 2018-11-07 20:43:51 -08:00
b0405db5a9 Assign static IPs to {edge,beta}.testnet.solana.com 2018-11-07 20:11:00 -08:00
f34f0af6b1 Install native programs in the correct location 2018-11-07 19:44:57 -08:00
51ed48941b Continue if docker0 is not present 2018-11-07 19:33:20 -08:00
22b6cbb4da Switch testnet to AWS 2018-11-07 18:57:08 -08:00
87ac549689 Work around AWS key management limitation 2018-11-07 18:48:27 -08:00
2a6046de8e Cleanup TVU code to look like its block diagram (#1737)
* Reorg TVU code to look like TVU diagram

And move channel creation into LedgerWriteStage so that it can
be used in the same way as all the other stages.

* Delete commented out code
2018-11-07 19:25:36 -07:00
25dd5145bb Switch to us-west-1a, us-west-1b is causing trouble 2018-11-07 18:23:28 -08:00
f8f11b7f50 Remove docker0 interface if present 2018-11-07 18:23:24 -08:00
82f914e0dc Work around AWS boot check weirdness 2018-11-07 15:46:04 -08:00
3b41eec199 Shuffle AWS regions 2018-11-07 15:00:55 -08:00
9359cc69d5 Invert gpu check 2018-11-07 14:44:40 -08:00
b02b636b36 Support local tarball deploys 2018-11-07 14:44:40 -08:00
a537154c28 Remove all cuda dependencies from release tarball beyond solana-fullnode-cuda 2018-11-07 14:44:40 -08:00
39e1bdeb71 Initial RFC for use of enclave for vote signing (#1734)
* Initial RFC for use of enclave for vote signing

* Fix grammar

* address review comments
2018-11-07 14:36:16 -08:00
43bd28cdfa Add loader_ prefix to LoaderTransaction methods 2018-11-07 15:06:38 -07:00
6c10458b5b leader slots in Blobs (#1732)
* add leader slot to Blobs
* remove get_X() methods in favor of X() methods for Blob
* add slot to get_scheduled_leader()
2018-11-07 13:18:14 -08:00
3ccbf81646 Update README.md 2018-11-07 13:04:14 -08:00
2e38cd98c0 Update README.md 2018-11-07 12:58:24 -08:00
7780d9bab8 Add ledger write and storage stage to TVU documentation 2018-11-07 12:07:12 -08:00
8feed96eac Update README.md 2018-11-07 11:19:37 -08:00
16d23292dc Improve error messages 2018-11-07 10:35:10 -08:00
812a8bcc6c Permit release tag tarballs 2018-11-07 10:33:58 -08:00
63807935cb Switch testnet/testnet-beta to tarball release 2018-11-07 10:30:02 -08:00
92a8b646df Fix tarball publishing for tags 2018-11-07 10:26:19 -08:00
d9f9e347ab Delete testnet-master, testnet-master-perf 2018-11-07 10:08:29 -08:00
2ef8ebe111 AWS AMIs are region specific 2018-11-07 10:05:58 -08:00
038a46b5ef Integrate the markdown book into the codebase
This implies that the book should describe exactly what is implemented,
and will not lead the way and eventually bitrot as the RFCs do.
2018-11-07 10:58:47 -07:00
3852ad3048 Make markdown docs more modular
No need to assume the book context.
2018-11-07 10:58:47 -07:00
1075a73902 Elf relocations (#1724)
Use relocatable BPF ELFs
2018-11-07 09:40:23 -08:00
863a0c3f8f s/edge/beta/ 2018-11-07 08:54:32 -08:00
f8673931b8 Increase boot timeout 2018-11-07 08:32:15 -08:00
dd4fb7aa90 Add AWS-based nets 2018-11-07 07:47:39 -08:00
2af5aad032 Switch testnet/testnet-perf to the latest beta or stable tag 2018-11-07 07:47:39 -08:00
9027141ff8 Publish release tarballs for tags 2018-11-07 07:47:39 -08:00
c4bc331663 Add support for using a release tar 2018-11-07 07:47:39 -08:00
8be7c13d2d Stub out architecture book (#1674)
* Stub out architecture documentation

* Add book HTML generation and book tests to CI

* Add heading

* Better table of contents

* Reference existing documentation

Move ASCII art from code comments into rendered SVG

* Attempt to fix CI

* Add lamport docs

And truncate lines to 80 characters

* Fix links

And reference shorter, newer description of PoH.

* Replace ASCII art with SVG

* Streamline for Pillbox

* Update path before optional install

* Use $CARGO_HOME instead of $HOME

* Delete code

Attempt to describe all data structures without code.

* Boot RPU from docs, add JsonRpcService

Also, use Rust naming conventions in the block diagrams to
minimize the jump from docs to code.

* Latest code uses tick_height

* Rename bob/ folder to art/

A home for any ASCII art

* Import JSON RPC API

* More mdbook docs

* Add Ncp

* Cleanup links

* Move pipelining description into fullnode description

* Move high-level transaction docs into top-level doc

* Delete unused files
2018-11-06 18:00:58 -07:00
d7ea66b6a1 RPC and Pubsub, bind to 0.0.0.0 2018-11-06 15:45:36 -07:00
371c69d425 Add ledger write stage counters (#1713) 2018-11-06 14:44:54 -08:00
c9c1564d26 Fetch v0.10.5 perf libs (#1727)
- includes SGX enclave for signing
2018-11-06 14:20:22 -08:00
cd18a1b7db t 2018-11-06 14:08:47 -08:00
6aac096c77 Add timeout to prevent a stuck ssh 2018-11-06 14:08:28 -08:00
7b58bd621a Remove node check from client start-up
If the network loses a validator or two, it's the job of the sanity
check to detect this not the bench clients
2018-11-06 13:57:06 -08:00
9b43b00d5c remove tick_count, leader_scheduler, from broadcast code (#1725) 2018-11-06 13:17:41 -08:00
76694bfcf4 remove entry_writer.rs (#1720) 2018-11-06 12:42:31 -08:00
bfad138bb3 Pass any serializable to Transaction constructor 2018-11-06 11:23:59 -07:00
d8d23c9971 Remove unused debug trace 2018-11-06 09:29:39 -08:00
f77b30e81d Fix link 2018-11-06 09:28:55 -07:00
d379478603 Rename stable testnets back to beta 2018-11-06 09:27:40 -07:00
2600684999 Move testnet docs into readme
Also, described testnet and testnet-perf as stable instead of beta.
2018-11-06 09:27:40 -07:00
54968b59bb Update last_id between client retries
Fixes #1694
2018-11-06 09:06:15 -07:00
6b5d12a8bb Set metrics database correctly 2018-11-06 07:25:18 -08:00
c4b9d5d8b9 Remove stray line 2018-11-05 20:53:34 -08:00
f683817b48 Remove RPU; replace with RPC 2018-11-05 20:30:47 -07:00
52491b467a Update testnet deploy docs 2018-11-05 19:12:55 -08:00
7789fda016 Add testnet-manager pipeline 2018-11-05 17:35:30 -08:00
22abc27be4 add tests for bank.purge() (#1711) 2018-11-05 16:43:27 -08:00
c9138f964b Change token type from i64 to u64
Fixes #1526
2018-11-05 15:25:26 -07:00
c4346e6191 Add testnet pipeline for prebuilt images (#1708)
* Add testnet pipeline for prebuilt images

- It'll speed up testnet testing for released images

* removed quotes from variable

* address review comments

* fix testnet automation error
2018-11-05 13:50:33 -08:00
1a7830f460 Set imageName if G 2018-11-05 13:33:42 -08:00
b418c1abab ignore multinode demo logs 2018-11-05 10:57:51 -08:00
1fbf1d2cf2 Add checkpoint, rollback to bank (#1662)
add linked-list capability to accounts

change accounts from a linked list to a VecDeque

add checkpoint and rollback for lastids

add subscriber notifications for rollbacks

checkpoint transaction count, too
2018-11-05 09:47:41 -08:00
5a85cc4626 Rename buildkite-snap to buildkite-secondary 2018-11-05 08:47:51 -08:00
8041461a07 Bump EC2 validator machine type 2018-11-05 08:47:51 -08:00
2ce72a1683 Update version in readme 2018-11-05 08:05:03 -07:00
eae9372a5d Upgrade GCP CPU-based testnet to 18.04 2018-11-04 19:18:47 -08:00
ed09b2bdb8 Document BPF C program limitations 2018-11-04 12:31:38 -08:00
1d7722043f genesis has 3 entries now 2018-11-02 22:02:13 -07:00
95f9488a70 use default buffer size for index, use BLOB_DATA_SIZE for data buffer (#1693) 2018-11-02 21:52:57 -07:00
e7cbbd8d45 cargo fmt 2018-11-02 19:54:49 -07:00
c8c255ad73 Rename Budget to BudgetExpr 2018-11-02 19:54:49 -07:00
a264f8fa9b Fix |cargo test| 2018-11-02 19:04:59 -07:00
40e945b0c8 Move token_program from src/ to programs/native/ 2018-11-02 18:13:02 -07:00
f3b04894b9 Try harder to snap download 2018-11-03 00:29:13 +00:00
35b7e50166 Rebase on new RFC file naming 2018-11-02 16:52:21 -06:00
6b3f684e2a elw staking rfc revisions 2018-11-02 16:50:06 -06:00
63c66ce765 initial staking design overview 2018-11-02 16:50:06 -06:00
0636399b7a Compute finality computation in new ComputeLeaderFinalityService (#1652)
* Move finality computation into a service run from the banking stage, ComputeLeaderFinalityService

* Change last ids nth to tick height, remove separate tick height from bank
2018-11-02 15:49:14 -07:00
2c74815cc9 ci: correct crates.io publishing order 2018-11-02 15:39:24 -07:00
298bd6479a Add first leader to genesis (#1681)
* Add first leader to genesis entries, consume in genesis.sh

* Set bootstrap leader in the bank on startup, remove instantiation of bootstrap leader from bin/fullnode

* Remove need to initialize bootstrap leader in leader_scheduler, now can be read from genesis entries

* Add separate interface new_with_leader() in mint for creating genesis leader entries
2018-11-02 14:32:05 -07:00
a8481215fa Model the process after Rust's RFC process 2018-11-02 14:55:39 -06:00
b7545b08fa Add process for making architectural changes 2018-11-02 14:55:39 -06:00
cf8f3bcbed Ship native programs in snap 2018-11-01 15:59:41 -07:00
b8534a402d shell 2018-11-01 15:25:27 -07:00
45b9a7f8e9 shell 2018-11-01 14:40:21 -07:00
879431ebcd Add timeout to TcpStream connect, and rename test 2018-11-01 14:13:19 -06:00
102354c218 Add balance check retries 2018-11-01 11:28:33 -06:00
af1283e92c Improve airdrop confirmation logic 2018-11-01 11:28:33 -06:00
6b777b066a Find clang 7 better
If LLVM_DIR is defined, use it to locate clang.  Otherwise use brew on
macOS, and assume clang-7 otherwise
2018-11-01 09:48:38 -07:00
1e01088698 Improve clang install info for Linux 2018-11-01 09:48:38 -07:00
3ea0651078 Rename sol_bpf.h to solana_sdk.h 2018-10-31 23:46:34 -07:00
776b1c2294 sol_bpf.h improvements
- Define NULL
- Add sol_memcmp()
- Use sizeof() more
- Add SOL_ARRAY_SIZE
- Make sol_deserialize() more flexible
2018-10-31 23:46:34 -07:00
dffa2eb04f Do not parallelize deserialize operation (#1663)
Deserialize operations are faster when done serially with the
MT banking stage, and improve performance by reducing thread
context switches.
2018-10-31 22:12:15 -07:00
5ecb9da801 Fix up bpf numeric types 2018-10-31 20:53:44 -07:00
00889c5139 Fix bad function arguments (#1682) 2018-10-31 19:55:58 -07:00
af8dc3fd83 Fix snap build
cuda and chacha features required for chacha_cuda
2018-10-31 17:59:31 -07:00
ba884b4e36 Add thin client test for vote functionality, fix sizing errors in vote contract (#1643)
* Added tests to thin client to test VoteContract calls, fix VoteContract sizing errors

* Calculate upper bound on VoteProgram size at runtime, add test for serializing/deserializing a max sized VoteProgram state
2018-10-31 17:47:50 -07:00
6ddd494826 Improve rpc logging 2018-10-31 15:21:55 -06:00
aa2fd3f3bb Storage RFC grammar 2018-10-31 13:44:21 -07:00
cf00354f42 Add storage stage which does storage mining verification for validators 2018-10-31 13:44:21 -07:00
47f1fa3f2e Remove purging of leader id from cluster info (#1642) 2018-10-31 12:30:48 -07:00
db98f7e0b4 Use env variables to disable validator sanity and ledger verification (#1675) 2018-10-31 12:30:33 -07:00
38ee5c4dfb Program may not exit (#1669)
Cap max executed instructions, report number of executed instructions
2018-10-31 10:59:56 -07:00
aca2f9666d Fix deps (#1672) 2018-10-31 10:12:17 -07:00
b74e085538 SYSTEM_INC_DIRS needs immediate expansion 2018-10-31 07:20:09 -07:00
899de2ff56 Revert inclusion change, fix doc 2018-10-31 07:03:38 -07:00
cf521a5bd2 Fix const 2018-10-31 07:03:38 -07:00
bc13248e1c Fix C programs 2018-10-31 07:03:38 -07:00
0529f36fde Run workspace member's tests (#1666)
Run workspace member's tests
2018-10-30 22:53:36 -07:00
74b4ecb7f3 Upgrade to influx_db_client@0.3.6 2018-10-30 19:44:09 -07:00
333f658eb6 Fix lua_loader tests (#1665) 2018-10-30 18:36:18 -07:00
7cb5c0708b Fetch v0.10.4 which has v100 binary compiled in
This may or may not fix high latencies seen on the snap build on v100.
The GPU driver will not have to JIT the device code for V100 though, which
is an improvement.
2018-10-30 18:06:16 -07:00
85869552e0 Update testnet scripts to use release tar ball (#1660)
* Update testnet scripts to use release tar ball

* use curl instead of s3cmd
2018-10-30 18:05:38 -07:00
6f9843c14b Publish a tarball of Solana release binaries (#1656)
* Publish a tarball of solana release binaries

* included native programs in Solana release tar

* Remove PR check from publish script
2018-10-30 15:31:52 -07:00
7d44f60e45 Find native program with solana_ prefix 2018-10-30 13:13:37 -07:00
8d16f69bb9 Improve account subscribe/unsubscribe logging 2018-10-30 12:03:35 -07:00
3a73a09391 Avoid panicking when a native library doesn't exist 2018-10-30 12:03:35 -07:00
009c71f7e2 Demote info logs 2018-10-30 12:03:35 -07:00
073d39df44 Add solana_ prefix to loaders so their logs appear in the default RUST_LOG config 2018-10-30 12:03:35 -07:00
ae7222f0df Work around influxdb panic 2018-10-30 12:03:35 -07:00
4d6c54272a Tweak logging 2018-10-30 12:03:35 -07:00
13bfdde228 remove ledger tail code, WINDOW_SIZE begone (#1617)
* remove WINDOW_SIZE, use window.window_size()
* move ledger tail, redundant with ledger-based repair
2018-10-30 10:05:18 -07:00
3cc78d3a41 Added a new remote node configuration script to set rmem/wmem (#1647)
* Added a new remote node configuration script to set rmem/wmem

* Update common.sh for rmem/wmem configuration
2018-10-30 09:17:35 -07:00
45bb97cad6 Permit {INC,LLVM,OUT,SRC,SYSTEM_INC}_DIRs to be overridden 2018-10-30 07:59:07 -07:00
546e4c5696 Remove bpf tictactoe 2018-10-29 21:43:37 -07:00
6b1917b931 Add programs/bpf/c/sdk entries 2018-10-29 20:52:38 -07:00
30b22c8b78 Use NUM_KA 2018-10-29 20:52:38 -07:00
6f5e92e5b3 README updates 2018-10-29 20:52:38 -07:00
cce5c70f29 LD -> LLC 2018-10-29 20:52:38 -07:00
4af7c82ef0 Add extern "C" block 2018-10-29 20:52:38 -07:00
52e5fb7e0c Use #pragma once, it's widely supported
Fix up some spelling too
2018-10-29 20:52:38 -07:00
a013e8ceb1 Rename sol_bpf_c.h to sol_bpf.h 2018-10-29 20:52:38 -07:00
864632b582 slight reformatting 2018-10-29 20:52:38 -07:00
71d6eaacef Apply some const 2018-10-29 20:52:38 -07:00
4aba05d749 Include system includes in .d, remove unneeded tabs 2018-10-29 20:52:38 -07:00
7d335165ec Tune make output 2018-10-29 19:32:47 -07:00
37213209c5 Create programs/bpf/c/sdk/ 2018-10-29 19:10:29 -07:00
fbde9bb731 Run bench-tps for longer duration in testnet (#1638)
- Increased to 2+ hours
2018-10-29 15:03:08 -07:00
f6b1b5ab37 Remove unnecessary checks 2018-10-29 13:27:52 -07:00
7abd456d45 Increase rmem and wmem for remote nodes in testnet (#1635) 2018-10-29 13:04:54 -07:00
f12743de38 Create/publish bpf-sdk tarball 2018-10-29 12:54:57 -07:00
77e10ed757 Add utility to figure the current crate version 2018-10-29 12:54:57 -07:00
ebcb9a2103 Add llvm install info 2018-10-29 10:00:45 -07:00
6fb2e080bc Ignore out/ 2018-10-29 10:00:45 -07:00
3ac5ffc188 Use V=1 for verbosity, easier to type 2018-10-29 10:00:45 -07:00
88187ef282 Find llvm using brew on macOS 2018-10-29 10:00:45 -07:00
489894cb32 Mention logs more 2018-10-27 08:49:52 -07:00
be003970b7 Program_ids were overlapping (#1626)
Program_ids were overlapping
2018-10-26 19:44:53 -07:00
3488ea7d1c Cleanup c programs (#1620)
Cleanup C programs
2018-10-26 19:38:07 -07:00
9a6a399a29 Bump version number to pick up fixed cuda library
Has fix for unaligned memory access in chacha_encrypt_many_sample
function.
2018-10-26 14:57:14 -07:00
7ab65352be Fix featurized integration test (#1621)
Fix featurized integration test
2018-10-26 11:53:44 -07:00
b28fbfa13e Use a smaller test value for window_size
Otherwise this test takes forever to run.
2018-10-26 11:38:55 -07:00
07c656093c Remove tictactoe programs 2018-10-25 21:22:07 -07:00
c9e8346e6a cargo fmt 2018-10-25 17:24:24 -07:00
9e5ac76855 0.11.0 2018-10-25 17:19:07 -07:00
f671b7f63f Publish root crate too 2018-10-25 17:16:18 -07:00
236113e417 cargo fmt 2018-10-25 17:13:41 -07:00
a340b18b19 Upgrade to rust 1.30 2018-10-25 17:13:41 -07:00
f6c8e1a4bf Vote contract (#1552)
* Add Vote Contract

* Move ownership of LeaderScheduler from Fullnode to the bank

* Modified ReplicateStage to consume leader information from bank

* Restart RPC Services in Leader To Validator Transition

* Make VoteContract Context Free

* Remove voting from ClusterInfo and Tpu

* Remove dependency on ActiveValidators in LeaderScheduler

* Switch VoteContract to have two steps 1) Register 2) Vote. Change thin client to create + register a voting account on fullnode startup

* Remove check in leader_to_validator transition for unique references to bank, b/c jsonrpc service and rpcpubsub hold references through jsonhttpserver
2018-10-25 16:58:40 -07:00
160cff4a30 Check for TRIGGERED_BUILDKITE_TAG 2018-10-25 16:37:54 -07:00
48685cf766 0.10.0-pre2 2018-10-25 16:19:31 -07:00
0f32102684 Restrict characters to those supported by semvar_bash 2018-10-25 16:19:00 -07:00
d46682d1f2 Restrict characters to those supported by semvar_bash 2018-10-25 16:12:29 -07:00
55833e20b1 Create Poh Service (#1604)
* Create new Poh Service, replace tick generation in BankingStage
2018-10-25 14:56:21 -07:00
02cfa76916 Plumb GetTransactionCount through solana-wallet 2018-10-25 14:58:51 -06:00
9314eea7e9 Add leader-readiness test to wallet-sanity 2018-10-25 14:58:51 -06:00
1733beabf7 mv common/ sdk/ 2018-10-25 13:26:10 -07:00
471d8f6ff9 Fix up the version references to all other internal crates 2018-10-25 12:54:32 -07:00
e47fcb196b s/solana_program_interface/solana[_-]sdk/g 2018-10-25 12:31:45 -07:00
3ae53961c8 Support prerelease versioning 2018-10-25 12:31:45 -07:00
113b002095 Delete programs/native/move_funds 2018-10-25 11:37:38 -07:00
9447537d8c Increment internal Cargo references to solana_program_interface 2018-10-25 11:03:03 -07:00
7404b8739e Make template headers smaller 2018-10-25 11:51:37 -06:00
7239395d95 Add Issue and PR templates 2018-10-25 11:51:37 -06:00
926d459c8f Script away cargo version bumping 2018-10-25 09:38:58 -07:00
7cabe203dc Sync version with top-level Cargo.toml 2018-10-25 09:38:58 -07:00
1e53f4266a Fetch perf-libs with configurable packet size
sig verify library uses passed in size directly
to get packet size, so the Rust side can be modified
without changing cuda library.
2018-10-25 08:26:35 -07:00
24b513c3c7 Migrate to latest rbpf (#1605)
Migrate to updated rbpf
2018-10-25 02:58:04 -07:00
b982595c73 Add version check and rustup 2018-10-24 19:48:58 -07:00
af8a36b7fb Exclude chacha_cuda when chacha is disabled 2018-10-24 17:02:46 -07:00
208e7d7943 Explicitly reject transactions larger than PACKET_SIZE 2018-10-24 15:34:27 -07:00
557736f1cf Split leader rotation into separate RFC 2018-10-24 13:16:06 -06:00
61927e1941 Fix compile error for write_entries
Takes a reference now.
2018-10-24 11:31:30 -07:00
fc75827aaf .gitignore *.log 2018-10-24 10:58:27 -07:00
2f2531d921 Add retries to Wallet deploy 2018-10-24 11:13:32 -06:00
d5f20980eb Incorporate preloaded bpf loader 2018-10-24 11:13:32 -06:00
21eae981f9 Add deploy method to solana-wallet 2018-10-24 11:13:32 -06:00
ead7f4287a Storage mining fixups...
* Use IV to make unique identities
* Use hex! macro for hex literal and not string converted to u8 slice
* fix sha sampling to control init/end of sha state
2018-10-24 09:58:41 -07:00
3b33150cfb Bump drone read timeout to 10s
The previous timeout of 3s was occasionally not generous enough
2018-10-24 08:52:41 -07:00
6d34a68e54 Ignore test_leader_restart_validator_start_from_old_ledger (#1586)
Ignore test_leader_restart_validator_start_from_old_ledger
2018-10-23 18:10:31 -07:00
5c483c9928 remove unused variable 2018-10-23 16:52:56 -06:00
a68c99d782 Fix transaction count on testnet dashboard 2018-10-23 16:52:56 -06:00
0aebbae909 Fix message 2018-10-23 15:45:58 -07:00
a3a2215bda Fix warning 2018-10-23 15:45:58 -07:00
eb377993b3 Debug scripts point to debug flavor (#1585) 2018-10-23 14:48:50 -07:00
5ca52d785c Preload BPF loader (#1573)
Preload BPF loader
2018-10-23 14:44:41 -07:00
8d9912b4e2 Move ledger write to its own stage (#1577)
* Move ledger write to its own stage

- Also, rename write_stage to leader_vote_stage, as write functionality
  is moved to a different stage

* Address review comments

* Fix leader rotation test failure

* address review comments
2018-10-23 14:42:48 -07:00
c77b1c9687 i 2018-10-23 14:14:09 -07:00
8849ecd772 capture consensus discussion of 10/10/2018 2018-10-23 15:07:58 -06:00
7977b97227 Surface AccountInUse to JSON RPC users so they know to retry the transaction 2018-10-23 13:55:30 -07:00
4f34822900 Improve logging on various error conditions 2018-10-23 13:40:59 -07:00
bbb38ac106 Increase window size (#1578)
Addresses the following problem
- Validators are not able to keep up with the leader
- The future blobs (outside of window) get dropped
- The validators won't process repair requests for these future blobs
2018-10-23 10:25:01 -07:00
ce934a547e Storage RFC validator incentive clarification 2018-10-23 09:46:38 -06:00
16b19d35dd Disable test_boot_validator_from_file (#1576) 2018-10-23 00:47:15 -07:00
45cfa5b574 Add instruction to transfer account ownership 2018-10-20 21:54:25 -05:00
df9ccce5b2 Remove hostname() from calls to metrics as it's expensive operation (#1557) 2018-10-20 06:38:20 -07:00
f8516b677a Load program data in chunks (#1556)
Load program data in chunks
2018-10-19 18:28:38 -07:00
dfde83bdce Wildcard early OOM deb package revision (#1554) 2018-10-19 14:17:19 -07:00
cb0f19e4f1 Shield rerun-if-changed under the feature flags so
that cargo watch doesn't cause re-build every iteration.
2018-10-19 12:07:29 -07:00
26b99d3f85 Ensure witness and timestamp keys are signed
Before this patch, an attacker could point Budget instructions to
unsigned keys, and authorize a transaction from an unauthorized
party.
2018-10-19 10:06:59 -06:00
2f9c0d1d9e Add method to lookup signed keys 2018-10-19 10:06:59 -06:00
0423cafbeb Cleanup and update Smart Contracts Engine RFC to what is currently in the code (#1539)
* Cleanup and update to the state of the code

* update

* render

* render

* comments on memory allocation
2018-10-19 06:08:49 -07:00
0bd1412562 Switch leader scheduler to use PoH ticks instead of Entry height (#1519)
* Add PoH height to process_ledger()

* Moved broadcast_stage Leader Scheduling logic to use Poh height instead of entry_height

* Moved LeaderScheduler logic to PoH in ReplicateStage

* Fix Leader scheduling tests to use PoH instead of entry height

* Change is_leader detection in repair() to use PoH instead of entry height

* Add tests to LeaderScheduler for new functionality

* fix Entry::new and genesis block PoH counts

* Moved LeaderScheduler to PoH ticks

* Cleanup to resolve PR comments
2018-10-18 22:57:48 -07:00
0339642e77 Added TicTacToe Dashboard and tests (#1547)
* Add tictactoe dashboard and tests
2018-10-18 14:19:25 -07:00
37a0b7b132 Initial validator code for rust side hooks for chacha cuda parallel encrypt 2018-10-18 13:50:19 -07:00
c30b605047 Actually submit the storage mining proof
Get an airdrop so replicator can submit mining transaction

Some other minor type cleanup.
2018-10-18 13:50:19 -07:00
76076d6fad move last_id age checking into the HashMap
* allows for simpler chaining of banks
  * looks 1.5-2% faster than looping through a VecDeque

TODO: remove timestamp()?
2018-10-18 11:07:00 -07:00
0a819ec4e2 Programs were not spawned by SystemProgram (#1533)
* SystemProgram spawns programs
2018-10-18 10:33:30 -07:00
57a717056e Delegate accounts now record the original approved amount 2018-10-18 08:53:25 -07:00
856c48541f Restore elaborate attack
The test is showing how you can sneak by verify_plan() but not
verify_signature().
2018-10-18 08:46:02 -06:00
2045091c4f Add SystemProgram::Move ix to Budget tx 2018-10-18 08:46:02 -06:00
03ac5a6eef Move all source tokens into Budget account
Budget now assumes the source account holds all tokens the program
should spend.

Note: the static guarantees implied by verify_plan() are meaningless
under the new contract engine. The bank no longer calls it. This
serves as a nice example of where comparing code coverage between
integration tests and unit tests would have shown us where a
change rendered unit tests meaningless.
2018-10-18 08:46:02 -06:00
32fadc9c30 Merge debits and credits
Debits no longer need to be applied before credits. Instead, we
lock any accounts we'd debit and so error out on the second attempt
to lock the same account.
2018-10-18 08:46:02 -06:00
15a89d4f17 Boot Contract type from Budget
In the old bank (before the contract engine), Contract wasn't specific
to Budget. It provided the same service as what is now called
SystemProgram::Move, but without requiring a separate account.
2018-10-18 08:46:02 -06:00
d0f43e9934 consolidate tmp ledgers 2018-10-18 08:45:31 -06:00
31e779d3f2 Added counters to track more metrics on dashboard (#1535)
- Total number of IP packets TX/RX from all nodes in the testnet
- Last consumed index on validator
- Last transmitted index on leader
2018-10-17 17:32:50 -07:00
30c79fd40d Change validator node machine type (#1537)
- The current nodes are using lower RAM compared to leader/clients
2018-10-17 17:16:50 -07:00
639c93460a Write stage optimizations (#1534)
- Testnet dashboard shows that channel pressure for write stage
  is incrementing on every iteration of write.
- This change optimizes ledger writing by removing cloning of map
  and reducing calls to flush
2018-10-17 13:02:32 -07:00
7611730cdb move off /tmp 2018-10-17 12:15:30 -07:00
9df9c1433a remove another use of /tmp 2018-10-17 12:15:30 -07:00
4ea422bcec run integration tests serially 2018-10-17 11:37:10 -07:00
6074e4f962 Attempt to stabilize the test suite
The integration tests are allowed to open sockets, so running them
in parallel may cause "Too many open files" errors. This patch
runs the unit tests in parallel and the integration tests serially.
2018-10-17 11:37:10 -07:00
d52e6d01ec typo in readme 2018-10-17 02:04:05 -06:00
63caca33be SystemProgram test was failing due to expected panic 2018-10-16 18:02:44 -07:00
64efa62a74 enable logging in loaders 2018-10-16 16:55:11 -07:00
912eb5e8e9 remove bank.is_leader, dead code (#1516) 2018-10-16 15:26:44 -07:00
bb628e8495 Rename loaders 2018-10-16 14:27:08 -07:00
d0c19c2c97 cargo fmt 2018-10-16 14:11:04 -07:00
926fdb7519 Rename dynamic_program.rs to native_loader.rs 2018-10-16 14:11:04 -07:00
c886625c83 Move from solana/rbpf fork to qmonnet/rbpf (#1511) 2018-10-16 13:13:54 -07:00
f6c10d8a2e Add channel pressure for validator TVU stages (#1509) 2018-10-16 12:54:23 -07:00
2bd877528f Par process entries (#1499)
* Parallel entry processor.
2018-10-16 12:09:48 -07:00
d09889b1dd Program bank integration (#1462)
Native, BPF and Lua loaders integrated into the bank
2018-10-16 09:43:49 -07:00
1b2e9122d5 Pubsub listen on random open port when rpc does (quiet some test errors) 2018-10-16 00:11:26 -06:00
7424388924 Fix session drop 2018-10-16 00:11:26 -06:00
537436bd5e RPC PubSub now uses a well-known socket 2018-10-16 00:11:26 -06:00
32fc0cd7e9 Fix bug introduced during RUST_LOG escaping (#1507)
* Fix bug introduced during RUST_LOG escaping
- remote node configuration should not be quoted

* shellcheck disable SC2090
2018-10-15 16:49:22 -07:00
fb99494858 Improve rpc code coverage (#1487) 2018-10-15 11:01:40 -06:00
5b4d4b97bc Upgrade to latest stable Rust, 1.29.2 2018-10-15 09:54:24 -06:00
c5180c8092 Permit RUST_LOG overrides 2018-10-14 12:40:37 -07:00
515c200d86 Refactor and add test for new Entry::serialized_size() 2018-10-14 10:53:47 -06:00
32aab82e32 Don't allocate to see if transactions will fit in a blob 2018-10-14 10:53:47 -06:00
6aaa350145 efficiently pack gossip responses and only respond up to max size. (#1493) 2018-10-14 06:45:02 -07:00
d3b4dfe104 Add bool return to entrypoint signature to permit programs to fail transactions 2018-10-13 20:01:43 -07:00
9fc30f6db4 Escape RUST_LOG configuration in remote-node.sh (#1489)
* Escape RUST_LOG configuration in remote-node.sh

- If it was set to #, it was causing other parameters to be commented out

* escape other variables as well

* disabled shell check

* Fix shellcheck error
2018-10-13 13:35:54 -07:00
2d0f07091d Handle dynamic program dlopen failures gracefully 2018-10-13 11:31:10 -07:00
3828eda507 Demote log messages 2018-10-13 11:31:10 -07:00
1e736ec16d Demote log messages 2018-10-12 20:16:57 -07:00
bba6437ea9 Use a single structure for last_ids and last_ids_sigs 2018-10-12 16:39:35 -07:00
e5ab9a856c Upload bench output as build artifacts (#1478)
* Upload bench output as build artifacts

* Fix tags types

* Pull previous stats from metrics

* Change the default branch for comparison

* Fix formatting

* Fix build errors

* Address review comments

* Dedup some common code

* Add eval for channel info to find branch name
2018-10-12 15:13:10 -07:00
1515bba9c6 Use cluster_info in rpc to get current leader addresses (#1480) 2018-10-12 14:25:56 -06:00
14a9ef4bbe move PoH verification off bank.last_id() (#1476) 2018-10-12 11:50:34 -07:00
041040c659 pubsub.rs -> rpc_pubsub.rs 2018-10-12 08:39:06 -07:00
47f69f2d24 1) Switch broken tests to generate an empty tick in their ledgers to use as last_id, 2) Fix bug where the PoH generator in BankingStage referenced the last entry instead of the last tick on startup, causing ledger verification to fail on the new tick added by the PoH generator (#1479) 2018-10-12 00:39:10 -07:00
9dd4dc2088 Mark failing tests as ignore 2018-10-11 15:32:36 -07:00
b534c32ee3 New minor version for jsonrpc crates 2018-10-11 13:35:06 -06:00
d2712f1457 Specify patch for jsonrpc crates 2018-10-11 11:38:14 -07:00
183f560d06 Add raw entries interface to ledger for getting slices as [u8] 2018-10-11 09:40:34 -07:00
ae150c0897 Remove getAddress, it doesn't exist 2018-10-11 08:28:39 -07:00
606e1396cf Fix link 2018-10-11 08:25:38 -07:00
5c85e037f8 Tick entry ids as only valid last_ids (#1441)
Generate tick entry ids and only register ticks as the last_id expected by the bank.  Since the bank is MT, the in-flight pipeline of transactions cannot be close to the end of the queue or there is a high possibility that a starved thread will encode an expired last_id into the ledger.  The banking_stage therefore uses a shorter age limit for encoded last_ids than the validators do.

Bench client doesn't send transactions that are older than 30 seconds.
2018-10-10 17:23:06 -07:00
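The age-limit idea in the commit above can be sketched in a few lines. This is an illustrative model only (types and names are invented, not the actual bank structures): only tick ids are registered as valid last_ids, and the banking stage validates against a tighter maximum age than validators use.

```rust
use std::collections::HashMap;

// Hypothetical sketch: track each registered tick id with the tick height
// at which it was seen, and check a transaction's last_id against a
// role-specific maximum age. Ids are simplified to u64 here.
struct LastIds {
    ids: HashMap<u64, u64>, // tick id -> tick height when registered
    tick_height: u64,
}

impl LastIds {
    fn new() -> Self {
        LastIds { ids: HashMap::new(), tick_height: 0 }
    }

    // Only tick entry ids are registered as valid last_ids.
    fn register_tick(&mut self, id: u64) {
        self.tick_height += 1;
        self.ids.insert(id, self.tick_height);
    }

    // The banking stage would pass a smaller `max_age` than validators,
    // so in-flight transactions don't encode an almost-expired last_id.
    fn is_valid(&self, last_id: u64, max_age: u64) -> bool {
        match self.ids.get(&last_id) {
            Some(&height) => self.tick_height - height <= max_age,
            None => false,
        }
    }
}

fn main() {
    let mut last_ids = LastIds::new();
    last_ids.register_tick(42);
    for id in 100..110 {
        last_ids.register_tick(id);
    }
    assert!(last_ids.is_valid(42, 32));  // fresh enough for a validator's limit
    assert!(!last_ids.is_valid(42, 4));  // too old under a tighter banking-stage limit
}
```

The same id can thus be accepted by validators but already rejected at the banking stage, which is exactly the starvation safety margin the commit describes.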
5c523716aa Ship native programs 2018-10-10 16:49:48 -07:00
5f8cbf359e Use cdylib to avoid runtime libstd dependencies 2018-10-10 16:49:48 -07:00
e83834e6be Build native programs in release configuration 2018-10-10 16:49:48 -07:00
02225aa95c Look for native programs in same directory as the current executable 2018-10-10 16:49:48 -07:00
9931ac9780 Leader scheduler plumbing (#1440)
* Added LeaderScheduler module and tests

* plumbing for LeaderScheduler in Fullnode + tests. Add vote processing for active set to ReplicateStage and WriteStage

* Add LeaderScheduler plumbing for Tvu, window, and tests

* Fix bank and switch tests to use new LeaderScheduler

* move leader rotation check from window service to replicate stage

* Add replicate_stage leader rotation exit test

* removed leader scheduler from the window service and associated modules/tests

* Corrected is_leader calculation in repair() function in window.rs

* Integrate LeaderScheduler with write_stage for leader to validator transitions

* Integrated LeaderScheduler with BroadcastStage

* Removed gossip leader rotation from crdt

* Add multi validator, leader test

* Comments and cleanup

* Remove unneeded checks from broadcast stage

* Fix case where a validator/leader need to immediately transition on startup after reading ledger and seeing they are not in the correct role

* Set new leader in validator -> validator transitions

* Clean up for PR comments, refactor LeaderScheduler from process_entry/process_ledger_tail

* Cleaned out LeaderScheduler options, implemented LeaderScheduler strategy that only picks the bootstrap leader to support existing tests, drone/airdrops

* Ignore test_full_leader_validator_network test due to a bug where the next leader in line fails to get the last entry before rotation (b/c it hasn't started up yet). Added a test test_dropped_handoff_recovery to track this bug
2018-10-10 16:49:41 -07:00
2ba2bc72ca Cleanup multisig lua 2018-10-10 17:17:17 -06:00
45b8ba9ede Demo M-N multisig library in Lua 2018-10-10 17:17:17 -06:00
40968e09b7 Do a *little* more than noop 2018-10-10 15:57:30 -07:00
262f26cf76 SystemProgram transactions now fail on invalid arguments 2018-10-10 15:19:03 -07:00
785c619198 Add pubsub module for rpc info subscriptions (#1439) 2018-10-10 14:51:43 -06:00
24a993710d Avoid panic when account.source is None 2018-10-10 10:53:00 -07:00
c240bb12ae Change buildkite agent for testnet automation 2018-10-09 15:04:55 -07:00
eed3b9db94 Add ERC20-like Token program 2018-10-09 12:53:37 -07:00
29a8823db1 Env variables for testnet-automation parameters (#1455)
- This will enable us to create custom pipelines for field events
2018-10-09 11:50:56 -07:00
a80955eacb Change format of data for TPS/Finality metrics in testnet automation (#1446)
* Change format of data for TPS/Finality metrics in testnet automation

* Revert number of nodes for testnet automation

* Split python command to its own script

* Fix python command line arguments
2018-10-09 10:35:01 -07:00
9716c3de71 Add an abort test to justify a key field 2018-10-09 11:06:48 -06:00
34fa3208e0 Demo self-modifying Lua program
Also, drop dependency on bincode.
2018-10-09 11:06:48 -06:00
9c4e19958b Use accounts[1] for Lua code and tx userdata as arg data
This makes the Lua version nearly identical to the C one.
2018-10-09 11:06:48 -06:00
0403299728 Add context-free Lua smart contracts
lua_State is not preserved across runs and account userdata is not converted into
Lua values. All this allows us to do is manipulate the number of tokens
in each account and DoS the Fullnode with those three little words,
"repeat until false".

Why bother? Research. rlua's project goals are well-aligned with the LAMPORT runtime.

What's next:
* rlua to add security limits, such as number of instructions executed
* Add a way to deserialize Account::userdata OR use Account::program_id
  to look up a metatable for lua_newuserdata().
2018-10-09 11:06:48 -06:00
95701114e3 Crdt -> ClusterInfo 2018-10-09 03:49:39 -06:00
a99d17c3ac put temp, test files in OUT_DIR (#1448) 2018-10-08 16:15:17 -07:00
517149d325 Move rpc request methods from wallet into separate module 2018-10-08 13:02:08 -06:00
32aa2575b5 Purge BudgetTransaction from entry 2018-10-08 11:34:04 -07:00
8fe7b96629 Purge BudgetTransaction from banking_stage 2018-10-08 11:34:04 -07:00
9350619afa log to influx once (#1438) 2018-10-06 14:37:14 -07:00
d8d8f0bfc8 Fund all the keys with move many transactions (#1436)
* Fund all the keys with move many transactions

* logs
2018-10-05 16:45:27 -07:00
0a39722719 Add support to trigger testnet from a PR (#1434)
* Add support for different node counts

* Update variable names

* Delete network even after failures

* Add array for node counts

* Changed number of nodes to a space separated string of numbers

* Adjust number of nodes

* Snap will not be published if the env variable DO_NOT_PUBLISH_SNAP is set

* Address review comments

* Replaced influx db URL
2018-10-05 16:32:05 -07:00
9c0fa4d1d2 Upload coverage HTML reports (#1421)
Uploads two reports to Buildkite, one from cargo-cov and one from lcov via grcov.  The lcov one is busted on linux and is what we need to bring codecov.io back up again. It works great on macos if you wanted to generate them locally and prefer lcov HTML reports.

* Also comment out non-coverage build to speed things up.
2018-10-05 10:17:35 -07:00
da0404ad03 Reduce maintenance of maintainers list 2018-10-04 23:05:08 -07:00
b508fdb62c Cleanup field names 2018-10-04 16:51:05 -07:00
680f90df21 Fix comment 2018-10-04 14:21:06 -07:00
1a68807ad9 Enable mt-bank (#1368)
* Enable mt-bank

* cleanup and interleaving lock tests
2018-10-04 13:15:54 -07:00
d901767b54 Makefile is not relevant 2018-10-04 10:35:48 -07:00
13d4443d4d Add BPF support & C-based BPF tic-tac-toe (#1422)
Add initial support for BPF and a C port of tictactoe
2018-10-04 09:44:44 -07:00
74b63c12a0 Add tests to LeaderScheduler to increase code coverage 2018-10-03 21:58:29 -07:00
cd42f6591a PR fixes - remove redundant case 2018-10-03 21:58:29 -07:00
5491422b12 Fix validator_to_leader_transition test to not start up tpu after shutting down tvu, as the tpu now outputs ticks that will mess up the verification check 2018-10-03 21:58:29 -07:00
23f3ff3cf0 Added LeaderScheduler module and tests 2018-10-03 21:58:29 -07:00
f90488c77b Demote 'not enough peers in crdt table' log message 2018-10-02 22:00:54 -07:00
beb4536841 Run a fullnode+drone automatically when the container starts up 2018-10-02 18:09:35 -07:00
3fa46dd66d Add replicator sha sampling
replicator will submit mining proofs with the result of sampling
the encrypted file with a hashing algorithm.
2018-10-02 17:04:46 -07:00
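The sampling scheme above can be sketched minimally. Names and the hash are illustrative assumptions: the real replicator uses a cryptographic hash over the encrypted file, whereas this dependency-free sketch stands in `DefaultHasher` to show the shape of sampling fixed offsets of a segment into a single proof value.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Illustrative sketch: hash the bytes at a set of sample offsets in an
// encrypted ledger segment to produce a mining-proof value. A real
// implementation would use a cryptographic hash such as SHA-256.
fn sample_hash(segment: &[u8], sample_offsets: &[usize], chunk: usize) -> u64 {
    let mut hasher = DefaultHasher::new();
    for &off in sample_offsets {
        if off < segment.len() {
            let end = (off + chunk).min(segment.len());
            hasher.write(&segment[off..end]);
        }
    }
    hasher.finish()
}

fn main() {
    let segment = vec![7u8; 1024];
    let proof = sample_hash(&segment, &[0, 256, 512], 32);
    // The same data and offsets always reproduce the same proof value,
    // so a verifier holding the data can check the submitted proof.
    assert_eq!(proof, sample_hash(&segment, &[0, 256, 512], 32));
}
```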
ad5fcf778f Publish minimal Solana docker images to dockerhub 2018-10-02 16:57:48 -07:00
83b000ae88 Remove SNAP_ prefix 2018-10-02 16:57:48 -07:00
33e179caa6 Update sha2 requirement from 0.7.0 to 0.8.0
Updates the requirements on [sha2](https://github.com/RustCrypto/hashes) to permit the latest version.
- [Release notes](https://github.com/RustCrypto/hashes/releases)
- [Commits](https://github.com/RustCrypto/hashes/commits/sha2-v0.8.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-10-02 09:00:05 -06:00
b1e941cab9 Return all instances 2018-10-01 07:51:48 -07:00
6db961d256 Correct comment 2018-09-30 00:08:09 -07:00
83409ded59 Correctly deserialize large userdata 2018-09-29 19:39:54 -07:00
396b2e9772 Ignore keep alive for completed games 2018-09-29 19:39:54 -07:00
94459deb94 Disable codecov.io reporting 2018-09-28 19:19:16 -07:00
660af84b8d Use the same versions of llvm-cov and libprofile 2018-09-28 19:19:16 -07:00
7b31020903 Add back llvm-dev for llvm-cov 2018-09-28 19:19:16 -07:00
9a4143b4d9 Upgrade llvm-dev and boot kcov
Need clang-dev, not llvm-dev because cargo-cov looks for libprofile
in a clang installation directory.
2018-09-28 19:19:16 -07:00
aebc47ad55 Attempt coverage reporting 2018-09-28 19:19:16 -07:00
b6b5455917 Fix test in coverage build 2018-09-28 19:19:16 -07:00
5bc01cd51a Revive code coverage 2018-09-28 19:19:16 -07:00
c79acac37b Add tic-tac-toe dashboard program 2018-09-28 18:48:34 -07:00
a5f2aa6777 s/grid/board/g 2018-09-28 18:48:34 -07:00
4169e5c510 Simplify game setup messaging 2018-09-28 18:48:34 -07:00
0727c440b3 Add KeepAlive message so players can detect abandoned games 2018-09-28 18:48:34 -07:00
19a7ff0c43 Pin down nightly in benchmark build 2018-09-28 19:29:50 -06:00
5f18403199 Upgrade nightly 2018-09-28 19:29:50 -06:00
9f325fca09 Re-enable cargo audit 2018-09-28 17:53:41 -06:00
10d08acefa Reenable cargo audit 2018-09-28 17:53:41 -06:00
52d50e6bc4 Update for new solana-jsonrpc 2018-09-28 17:53:41 -06:00
e7de7c32db Transactions with multiple programs. (#1381)
Transactions contain a vector of instructions that are executed atomically.
Bench shows a 2.3x speed up when using 5 instructions per tx.
2018-09-28 16:16:35 -07:00
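The atomicity described in #1381 — a vector of instructions that either all apply or none do — can be sketched with toy types (these are not the actual solana structs; the single-balance state and field names are invented for illustration):

```rust
// Toy model of a transaction carrying multiple instructions that are
// executed atomically: work on a scratch copy of state, and commit it
// only if every instruction succeeds.
#[derive(Clone)]
struct Instruction {
    program_id: u8, // which program handles this instruction (illustrative)
    amount: i64,
}

struct Transaction {
    instructions: Vec<Instruction>,
}

fn execute(balance: &mut i64, tx: &Transaction) -> Result<(), &'static str> {
    let mut scratch = *balance;
    for ix in &tx.instructions {
        scratch += ix.amount;
        if scratch < 0 {
            // Any failing instruction aborts the whole transaction.
            return Err("instruction failed; transaction rolled back");
        }
    }
    *balance = scratch; // commit only on full success
    Ok(())
}

fn main() {
    let mut balance = 10;
    let ok_tx = Transaction {
        instructions: vec![Instruction { program_id: 0, amount: -3 }; 2],
    };
    assert!(execute(&mut balance, &ok_tx).is_ok());
    assert_eq!(balance, 4);

    let bad_tx = Transaction {
        instructions: vec![Instruction { program_id: 0, amount: -100 }],
    };
    assert!(execute(&mut balance, &bad_tx).is_err());
    assert_eq!(balance, 4); // unchanged: the failing transaction had no effect
}
```

Batching several instructions into one transaction also amortizes per-transaction overhead (signature checks, last_id lookup), which is where the quoted 2.3x speedup at 5 instructions per tx comes from.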
a5f07638ec Use static str define for ledger files 2018-09-28 14:23:37 -07:00
aa2a3fe201 Add chacha module to encrypt ledger files 2018-09-28 14:23:37 -07:00
abd13ba4ca move program tests to integration 2018-09-28 11:30:10 -07:00
485ba093b3 Install kcov to CI environment 2018-09-28 11:20:27 -06:00
36b18e4fb5 Create new wallet on each run of wallet-sanity 2018-09-28 07:39:31 -07:00
8d92232949 Specify zone 2018-09-28 07:32:49 -07:00
e4d8c094a4 Include -z when deleting network 2018-09-27 21:27:09 -07:00
d26e1c51a9 0.10.0 2018-09-27 16:38:53 -07:00
675ff64094 Fail CI on clippy warnings 2018-09-27 16:21:12 -06:00
423e7ebc3f Pacify clippy 2018-09-27 16:21:12 -06:00
f9fe6a0f72 Move clippy to Rust stable 2018-09-27 16:21:12 -06:00
8d007bd7f7 Upgrade rustc and add clippy to stable 2018-09-27 16:21:12 -06:00
6cdbdfbbcb Enable bench and fix upload-perf 2018-09-27 14:16:56 -07:00
35e6343d61 Update testnet-deploy script to configure GPUs for leader node (#1379) 2018-09-27 13:42:24 -07:00
7fb7839c8f Configure GPU type/count from command line in GCE scripts (#1376)
* Configure GPU type/count from command line in GCE scripts

* Change CLI to input full leader machine type information with GPU
2018-09-27 11:55:56 -07:00
dbc1ffc75e Use jsonrpc fork 2018-09-27 12:50:38 -06:00
1fdbe893c5 Improve game setup experience: X now shares game key and accepts O 2018-09-27 10:44:13 -07:00
55a542bff0 Fix erasure and cuda related compilation errors 2018-09-27 10:42:37 -06:00
e10574c64d Remove recycler and it's usage
- The memory usage due to the recycler was high, and increasing over time.
2018-09-27 10:42:37 -06:00
2e00be262e Remove data from BankError.
This reduces how much memory is written to the last_id_sigs table on every TX, and has a 40% impact on
`cargo +nightly watch -x 'bench bench_banking_stage'`
2018-09-27 09:07:56 -06:00
4172bde081 Only send a vote once a second 2018-09-27 09:06:41 -06:00
9c47e022dc break dependency of programs on solana core (#1371)
* break dependency of programs on Solana core
2018-09-27 07:49:26 -07:00
874addc51a Move KeyedAccount into Account
Now programs don't need to depend on dynamic_program and its
dependencies.
2018-09-26 20:40:40 -06:00
b7ae5b712a Move Pubkey into its own module 2018-09-26 20:40:40 -06:00
c6d7cd2d33 Move Account into its own module
Also use default Default generator, since system program ID is
[0; 32]. Bank should probably be the one to set this anyway.
2018-09-26 20:40:40 -06:00
386a96b7e0 capture multinode logs by default (#1367) 2018-09-26 19:30:40 -07:00
b238c57179 Add trace! when an error is mapped to GenericFailure 2018-09-26 19:30:20 -07:00
1821e72812 Add getSignatureStatus 2018-09-26 19:00:34 -07:00
a23c230603 fix reverse loop in write_stage, simplify banking_stage, add tooling to help find this (#1366) 2018-09-26 18:37:24 -07:00
4e01fd5458 Update test to show when we should collect tx fees
See #1157 for details. The `from` account should be cloned
before execute_transaction(), and that's the only one that should
be stored if there's an error executing the program.
2018-09-26 19:30:27 -06:00
e416cf7adf Let clients know when transactions failed 2018-09-26 19:30:27 -06:00
25edb9e447 fix benches 2018-09-26 19:29:46 -06:00
93c4f6c9b8 Synchronize PoH, bank last_id queue and ledger entry channel.
PoH, bank's last_id queue and the Entry channel need to have a synchronized order of ids.
2018-09-26 16:19:03 -07:00
718031ec35 Ignore the test_leader_to_validator_transition until it can handle PoH entries 2018-09-26 16:59:57 -06:00
d546614936 Handle deserialize failure with error 2018-09-26 15:17:07 -07:00
ac8d738045 Don't call unwrap() in StorageProgram::process_tx 2018-09-26 15:17:07 -07:00
ca962371b8 Fix build
Two PRs crossed in flight.
2018-09-26 14:40:48 -06:00
e6f8922e35 fix issue #1347 (#1355) 2018-09-26 13:31:39 -07:00
7292ece7ad Free up term instruction for new multi-instruction feature 2018-09-26 14:17:15 -06:00
df3b78c18c Move BudgetTransaction into its own module 2018-09-26 14:17:15 -06:00
c83dcea87d Move SystemTransaction into its own module 2018-09-26 14:17:15 -06:00
be20c99758 Promote the one true transaction constructor 2018-09-26 14:17:15 -06:00
694add9919 Move budget-specific and system-specific tx constructors into traits
These functions pull in budget-specific and system-specific
dependencies that aren't needed by the runtime.
2018-09-26 14:17:15 -06:00
afc764752c Permit testnets without a GPU 2018-09-26 10:37:41 -07:00
113c8b5880 Rollback jsonrpc SendTransaction pool for signature; ignore flaky tests 2018-09-26 10:25:29 -07:00
a5b28349ed Add max entry height to download for replicator 2018-09-26 09:57:22 -07:00
bb7ecc7cd9 Migrate to solana-labs fork of jsonrpc
This change aims to be a no-op. Future changes to rev should be
along the new solana-0.1 branch.
2018-09-26 10:08:37 -06:00
14bc160674 Clean up test and add signature return to rpc send tx 2018-09-25 16:38:51 -07:00
d438c22618 Update RFC 2018-09-25 16:38:51 -07:00
bcbae0a64f Fix witness functionality 2018-09-25 16:38:51 -07:00
f636408647 Fix timestamp and cancel functionality
- Also serialize and send helper fn
2018-09-25 16:38:51 -07:00
3ffc7aa5bc Add helper fn to get last id 2018-09-25 16:38:51 -07:00
7b7e8c0d3f Clippy 2018-09-25 16:38:51 -07:00
11ea9e7c4b Add cancelable handling 2018-09-25 16:38:51 -07:00
2b82121325 Fix wallet-sanity to reflect new wallet arg syntax 2018-09-25 16:38:51 -07:00
5038e5ccd7 Preliminary Wallet-Budget functionality 2018-09-25 16:38:51 -07:00
e943ed8caf Expand parse_command and add tests 2018-09-25 16:38:51 -07:00
c196952afd Flesh out Wallet CLI & add placeholder WalletCommands 2018-09-25 16:38:51 -07:00
e7383a7e66 Validator to leader (#1303)
* Add check in window_service to exit in checks for leader rotation, and propagate that service exit up to fullnode

* Added logic to shutdown Tvu once ReplicateStage finishes

* Added test for successfully shutting down validator and starting up leader

* Add test for leader validator interaction

* fix streamer to check for exit signal before checking socket again to prevent busy leaders from never returning

* PR comments - Rewrite make_consecutive_blobs() function, revert genesis function change
2018-09-25 15:41:29 -07:00
8a7545197f move tick generation back to banking_stage, add unit tests (#1332)
* move tick generation back to banking_stage, add unit tests

fixes #1217

* remove channel() stuff for synchronous comm; use a mutex
2018-09-25 15:01:51 -07:00
680072e5e2 No need to special case vote failures 2018-09-25 13:43:35 -06:00
4ca377a655 Delete dead code 2018-09-25 13:43:35 -06:00
751dd7eebb Move vote into ReplicateStage after process_entries 2018-09-25 13:43:35 -06:00
8f0e0c4440 Add tic-tac-toe program 2018-09-25 12:07:41 -07:00
50cf73500e Remove rfc 004 2018-09-25 12:07:41 -07:00
db310a044c Add Budget::And element, and supporting functions (#1329) 2018-09-25 12:38:13 -06:00
88a609ade5 groom write_stage 2018-09-25 00:18:35 -07:00
304d63623f give replication some time to happen
fixes #1307
2018-09-24 23:57:09 -07:00
407b2682e8 remove dead code 2018-09-24 23:12:09 -07:00
0f4fd8367d Add counters for channel pressure and time spent in TPU pipeline (#1324)
* Add counters for channel pressure and time spent in TPU pipeline

* Fixed failing tests

* Fix rust format issue
2018-09-24 17:13:49 -07:00
747ba6a8d3 Boot BudgetState::last_error 2018-09-24 17:14:23 -06:00
bb99fd40de Update transaction status in the bank
This will allow jsonrpc to query the system to find out if a
recent transaction failed.
2018-09-24 17:14:23 -06:00
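The status-tracking idea behind this commit and `getSignatureStatus` can be sketched with simplified types (string signatures and a toy enum; the real bank keys on `Signature` and stores richer results):

```rust
use std::collections::HashMap;

// Simplified sketch: the bank records each processed transaction's
// result keyed by signature, so an RPC layer can later answer whether
// a recent transaction succeeded or failed.
#[derive(Clone, PartialEq, Debug)]
enum Status {
    Confirmed,
    Failed(String),
}

struct StatusCache {
    statuses: HashMap<String, Status>, // signature -> result
}

impl StatusCache {
    fn new() -> Self {
        StatusCache { statuses: HashMap::new() }
    }

    fn record(&mut self, signature: &str, status: Status) {
        self.statuses.insert(signature.to_string(), status);
    }

    // Returns None for signatures the bank has never seen.
    fn get_signature_status(&self, signature: &str) -> Option<&Status> {
        self.statuses.get(signature)
    }
}

fn main() {
    let mut cache = StatusCache::new();
    cache.record("sig1", Status::Confirmed);
    cache.record("sig2", Status::Failed("insufficient funds".into()));
    assert_eq!(cache.get_signature_status("sig1"), Some(&Status::Confirmed));
    assert!(matches!(cache.get_signature_status("sig2"), Some(Status::Failed(_))));
    assert!(cache.get_signature_status("unknown").is_none());
}
```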
e972d6639d Return errors from BudgetProgram::process_transaction 2018-09-24 17:14:23 -06:00
22e77c9485 Add a way of getting transaction errors out of the bank 2018-09-24 17:14:23 -06:00
bc88473030 Increase wmem for kernel network memory usage (#1323)
- Validators were running out of kernel buffer while retransmitting
  blobs
2018-09-24 13:02:56 -07:00
95677a81c5 Pacify clippy 2018-09-24 13:36:31 -06:00
ea37d29d3a Pass Bank::process_transactions() a reference to the txs instead of moving them 2018-09-24 13:36:31 -06:00
e030673c9d Do a recv on join to prevent channel destruction (#1320)
before window thread join
2018-09-24 11:50:37 -07:00
3e76efe97e Fix bench compilation (#1311) 2018-09-24 10:40:42 -07:00
f5a30615c1 Ignore replicator startup for now 2018-09-24 09:43:58 -06:00
e5e325154b Add --shell argument 2018-09-24 08:05:47 -07:00
9e3d2956d8 remove last recycle? 2018-09-24 08:09:41 -06:00
26b1466ef6 Initial integration of dynamic contracts and native module loading (#1256)
* Integration of native dynamic programs
2018-09-23 22:13:44 -07:00
a1f01fb8f8 revert is_some to not is_none, causes test failure 2018-09-23 17:09:18 -06:00
b2be0e2e5e fix clippy warning 2018-09-23 17:09:18 -06:00
1a45587c08 fix clippy warnings 2018-09-23 17:09:18 -06:00
3199f174a3 Add option to pass boot disk type to gce create (#1308) 2018-09-22 16:43:47 -07:00
a51c2f193e fix Rob and Carl crossing wires 2018-09-21 21:37:25 -07:00
be31da3dce lastidnotfound step 2: (#1300)
lastidnotfound step 2:
  * move "record stage", aka poh_service into banking stage
  * remove Entry.has_more, is incompatible with leader rotation
  * rewrite entry_next_hash in terms of Poh
  * simplify and unify transaction hashing (no embedded nulls)
  * register_last_entry from banking stage, fixes #1171 (w00t!)
  * new PoH doesn't generate empty ledger entries, so some fixes necessary in 
         multinode tests that rely on that (e.g. giving validators airdrops)
  * make window repair less patient, if we've been waiting for an answer, 
          don't be shy about most recent blobs
   * delete recorder and record stage
   * make thin_client error reporting more verbose
   * more tracing in window (sigh)
2018-09-21 21:01:13 -07:00
54b407b4ca Wait on blob fetch before window, Seems to fix instability (#1304)
also cleanup ledger.
2018-09-21 18:56:20 -07:00
e87cac06da Request/reqwest improvements
- Use json macro to simplify request builds
- Add proxy option for reqwest to use TLS
- Add rpc port options for configured nodes
2018-09-21 18:06:20 -06:00
ad4fef4f09 Doc for rpc_port configuration 2018-09-21 18:06:20 -06:00
e3b3701e13 Add RPC port option to fullnode 2018-09-21 18:06:20 -06:00
9228fe11c9 Port Wallet to jsonrpc and fix tests 2018-09-21 18:06:20 -06:00
5ab38afa51 Changed the window_service in Replicator to send entries instead of blobs (#1302) 2018-09-21 16:50:58 -07:00
e49b8f0ce7 Update poh_service.rs 2018-09-21 16:03:54 -07:00
c50ac96f75 Moved deserialization of blobs to entries from replicate_stage to window_service (#1287) 2018-09-21 16:01:24 -07:00
a9355c33b2 Placeholder storage contract and replicator client (#1286)
* Add hooks for executing the storage contract

* Add store_ledger stage
  Similar to replicate_stage but no voting/banking stuff, just convert
  blobs to entries and write the ledger out

* Add storage_addr to tests and add new NodeInfo constructor
  to reduce duplication...
2018-09-21 15:32:15 -07:00
3dcee9f79e Update poh_service.rs 2018-09-21 08:01:24 -07:00
2614189157 cargo fmt 2018-09-20 19:46:20 -07:00
beeb09646a suppress warning: unused variable: recycler 2018-09-20 19:46:20 -07:00
67f1fbab5f Treat rustc warnings as errors in CI 2018-09-20 19:46:20 -07:00
c0e7e43e96 fixup! s/contract/program 2018-09-20 19:33:54 -07:00
9bfead2e01 s/contract/program 2018-09-20 19:33:54 -07:00
6073cd57fa Boot Recycler::recycle() 2018-09-20 17:08:51 -06:00
5174be5fe7 Rename getAccount to getAccountInfo 2018-09-20 15:18:56 -07:00
62a18d4c02 step one of lastidnotfound: record_stage->record_service, trim recorder to hashes (#1281)
step one of lastidnotfound

* record_stage->record_service, trim recorder to hashes
* doc updates, hash multiple without alloc()

cc #1171
2018-09-20 15:02:24 -07:00
a6c15684c9 Avoid panicking invalid instructions 2018-09-20 14:08:39 -07:00
5691bf557c Handle bad account userdata better 2018-09-20 14:08:39 -07:00
8f01f7cf21 Trace syscalls for more helpful logs 2018-09-20 14:08:39 -07:00
bb8c94ad2c Add getAccount JSON RPC request 2018-09-20 13:58:15 -07:00
d98e35e095 Delete no longer used PaymentPlan trait 2018-09-20 14:22:45 -06:00
3163fbad0e Remove 'Plan' indirection since it's implied by BUDGET_CONTRACT_ID 2018-09-20 14:22:45 -06:00
0172422961 Require a self-assigned account ID 2018-09-20 14:16:14 -06:00
8ccfb26923 tests for my IP picker 2018-09-20 09:21:09 -07:00
12a474b6ee sort local interfaces before selecting one 2018-09-20 09:21:09 -07:00
270fd6d61c Fix compiler warnings 2018-09-20 09:47:36 -06:00
7b9c7d4150 Cleaned up find_leader_rotation function. Added testing for WriteStage find_leader_rotation_index() function (#1276) 2018-09-19 18:16:00 -07:00
55126f5fb6 Marked Tvu functionality in Fullnode as unused for now 2018-09-19 16:05:31 -07:00
431692d9d0 Use a Drop trait to keep track of lifetimes for recycled objects.
* Move recycler instances to the point of allocation
* sinks no longer need to call `recycle`
* Remove the recycler arguments from all the apis that no longer need them
2018-09-19 16:59:42 -06:00
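The Drop-based recycling described above can be sketched as follows. This is an illustrative model, not the original API: a checked-out buffer returns itself to the shared pool when it goes out of scope, so sinks no longer call `recycle` manually.

```rust
use std::sync::{Arc, Mutex};

// Sketch: a pool of reusable byte buffers, handed out wrapped in a
// guard type whose Drop impl returns the buffer to the pool.
struct Recycler {
    pool: Arc<Mutex<Vec<Vec<u8>>>>,
}

struct Recycled {
    buf: Option<Vec<u8>>,
    pool: Arc<Mutex<Vec<Vec<u8>>>>,
}

impl Recycler {
    fn new() -> Self {
        Recycler { pool: Arc::new(Mutex::new(Vec::new())) }
    }

    fn allocate(&self) -> Recycled {
        // Reuse a pooled buffer if one exists; otherwise allocate fresh.
        let buf = self.pool.lock().unwrap().pop().unwrap_or_else(|| vec![0u8; 1024]);
        Recycled { buf: Some(buf), pool: Arc::clone(&self.pool) }
    }

    fn pool_len(&self) -> usize {
        self.pool.lock().unwrap().len()
    }
}

impl Drop for Recycled {
    fn drop(&mut self) {
        // Return the buffer to the pool instead of freeing it.
        if let Some(buf) = self.buf.take() {
            self.pool.lock().unwrap().push(buf);
        }
    }
}

fn main() {
    let recycler = Recycler::new();
    {
        let _b = recycler.allocate(); // pool empty: fresh allocation
        assert_eq!(recycler.pool_len(), 0);
    } // _b dropped here; its buffer is recycled automatically
    assert_eq!(recycler.pool_len(), 1);
    let _b2 = recycler.allocate(); // reuses the recycled buffer
    assert_eq!(recycler.pool_len(), 0);
}
```

Tying recycling to scope also fixes the class of leak where an early `break` or error path skips a manual `recycle` call.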
6732a9078d Clarify AfterTimestamp wire format 2018-09-19 13:28:35 -07:00
2981076a14 Add solana-upload-perf to parse json from bench and upload to influx (#1166) 2018-09-19 13:16:55 -07:00
5740ea3807 RFC 006: Wallet CLI 2018-09-19 12:10:53 -06:00
cd2d50e06c Changed transition to restart Rpu rather than modify bank to prevent lock contention 2018-09-19 10:48:05 -06:00
8c8a4ba705 debugging commit 2018-09-19 10:48:05 -06:00
b10de40506 Made LEADER_ROTATION_INTERVAL settable so that integration tests don't time out 2018-09-19 10:48:05 -06:00
2030dfa435 Implement PR comments, tidy up 2018-09-19 10:48:05 -06:00
bfe64f5f6e Added integration test for transitioning leader to validator to see that tpu pipeline can exit and restart a tvu. Fixed Tpu and broadcast stage so that exiting later stages in the pipeline also causes earlier stages to exit. 2018-09-19 10:48:05 -06:00
6d27751365 give fullnode ownership of state needed to dynamically start up a tpu or tvu for role transition 2018-09-19 10:48:05 -06:00
1fb1c0a681 added join types to the stages in the tpu involved in leader rotation 2018-09-19 10:48:05 -06:00
062f654fe0 formatted code 2018-09-19 10:48:05 -06:00
d3cb161c36 Added broadcast stage test for leader rotation exit 2018-09-19 10:48:05 -06:00
98b47d2540 Added check in broadcast stage to exit after transmitting last blob before leader rotation. Also added tests 2018-09-19 10:48:05 -06:00
f28ba3937b Added check in write stage to exit when scheduled entry_height for leader rotation is detected 2018-09-19 10:48:05 -06:00
91cf14e641 Rewrote service trait join() method to allow thread join handles to return values other than () 2018-09-19 10:48:05 -06:00
7601a8001c Update reqwest requirement from 0.8.6 to 0.9.0
Updates the requirements on [reqwest](https://github.com/seanmonstar/reqwest) to permit the latest version.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/commits/v0.9.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-09-19 10:47:02 -06:00
0ee6c5bf9d Read multiple entries in write stage (#1259)
- Also use rayon to parallelize to_blobs() to maximize CPU usage
2018-09-18 21:45:49 -07:00
6dee632d67 Remove Signature from ApplySignature 2018-09-18 20:00:42 -07:00
51e5de4d97 Log specific send_transaction error messages 2018-09-18 16:17:08 -07:00
1f08b22c8e Tweak log messages 2018-09-18 16:17:08 -07:00
83ae5bcee2 Detect binary changes in serialized contract userdata 2018-09-18 16:17:08 -07:00
339a570b26 Update comment 2018-09-18 16:17:08 -07:00
5310b6e5a2 Move entry->blob creation out of write stage (#1257)
- The write stage will output vector of entries
- Broadcast stage will create blobs out of the entries
- Helps reduce MIPS requirements for write stage
2018-09-18 13:49:10 -07:00
7d14f44a7c Move register_entry_id() call out of write stage (#1253)
* Move register_entry_id() call out of write stage

- Write stage is MIPS intensive and has become a bottleneck for
  TPU pipeline
- This will reduce the MIPS requirements for the stage

* Fix rust format issues
2018-09-18 11:42:25 -07:00
c830eeeae4 Update RELEASE.md 2018-09-18 10:31:26 -07:00
157fcf1de5 initial RELEASE.md (#1244)
initial RELEASE.md and RELEASE_TEMPLATE.md
2018-09-18 10:23:15 -07:00
e050160ce5 Use tagged perf-libs to enable controlled updates 2018-09-18 09:21:44 -07:00
f273351789 Add missing port number 2018-09-18 09:36:54 -06:00
aebf7f88e5 Various spelling fixes 2018-09-17 19:37:59 -07:00
aac1571670 mint now uses the SystemContract instead of Budget 2018-09-17 18:02:40 -07:00
8bae75a8a6 system contract tests 2018-09-17 14:34:55 -07:00
c2f7ca9d8f Change process_command return type and improve test 2018-09-17 13:45:47 -07:00
6ec0e42220 budget as separate contract and system call contract (#1189)
* budget and system contracts and verification

* contract check_id methods
* system call contract
* verify contract execution rules
* move system into its own file
* allocate before transfer for budget
* store error in budget context
* budget contract and tests without bank
* moved budget of of bank
2018-09-17 13:36:31 -07:00
072b244575 Add perf counters for record/write stages (#1240) 2018-09-17 11:07:04 -07:00
7ac9d6c604 Create keygen helper function for use in Wallet CLI, print keypair statement 2018-09-17 11:53:33 -06:00
0125163190 Remove wallet.sh, update entrypoint syntax for wallet network argument 2018-09-17 11:53:33 -06:00
a06f4b1d44 Update wallet to trigger keygen if no keypair provided and no keypair found in default location 2018-09-17 11:53:33 -06:00
10daa015c4 Simplify timeout arg 2018-09-17 11:53:33 -06:00
0babee39a4 Update wallet to take network arg 2018-09-17 11:53:33 -06:00
7c08b397eb Update testnet documentation 2018-09-17 09:26:25 -07:00
155ee8792f Add GPU support to ec2-provider 2018-09-17 09:26:25 -07:00
f89f121d2b Add AWS EC2 support 2018-09-17 09:26:25 -07:00
27986d7abb Standardize CLI help text 2018-09-16 15:17:10 -06:00
8b7edc6d64 Alphabetize 2018-09-16 15:17:10 -06:00
7dfab867fe Mark --outfile parameter as required 2018-09-16 10:49:02 -07:00
fd36954477 clippy 2018-09-15 05:12:53 -06:00
fd51599fa8 Replace replace(..., None) with take()
This is strictly for simplicity, since Option::take() is implemented with replace().
2018-09-15 05:12:09 -06:00
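The simplification in question, in one small example:

```rust
// `Option::take()` is equivalent to `std::mem::replace(&mut opt, None)`:
// it moves the value out and leaves `None` behind, but reads more clearly.
fn main() {
    let mut slot: Option<String> = Some("blob".to_string());

    // Before this change: let taken = std::mem::replace(&mut slot, None);
    let taken = slot.take();

    assert_eq!(taken.as_deref(), Some("blob"));
    assert!(slot.is_none());
}
```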
3ca80c676c Disable large-network until it's fixed 2018-09-14 20:13:17 -07:00
be7cce1fd2 Tweak GCE scripts for higher node count (#1229)
* Tweak GCE scripts for higher node count

- Some validators were unable to rsync config from leader when
  the node count was high (e.g. 25). Looks like the leader node was
  getting more rsync requests in parallel than it could handle.
- This change staggers the validators bootup, and rsync time

* Address review comments
2018-09-14 17:17:08 -07:00
e142aafca9 Use multiple sockets for receiving blobs on validators (#1228)
* Use multiple sockets for receiving blobs on validators

- The blobs that are broadcasted by leader or retransmitted by peer
  validators are received on replicate_port
- Using reuse_addr/reuse_port, multiple sockets can be opened for
  the same port
- This allows the kernel to queue data to user space app on multiple
  socket queues, preventing over-running one queue
- This helps with reducing packets dropped due to queue over-runs

Fixes #1224

* Fixed failing tests
2018-09-14 16:56:06 -07:00
4196cf43e8 cargo fmt 2018-09-14 16:37:49 -07:00
a344eb7dd0 Upgrade rust stable to 1.29 2018-09-14 16:37:49 -07:00
d12537bdb7 Include UDP sent statistics in net stats (#1225) 2018-09-14 13:32:13 -07:00
bcb3b3c21f Add integration tests to wallet module 2018-09-14 08:21:33 -06:00
d8c9a1aae9 Add method to run local drone for tests 2018-09-14 08:21:33 -06:00
9ca2f5b3f7 Move all handling except network/gossip from /bin to wallet module 2018-09-14 08:21:33 -06:00
9e24775051 update README with v0.8 and update demo scripts to match 2018-09-13 18:37:37 -07:00
4dc30ea104 Add recycler stats (#1187) 2018-09-13 14:49:48 -07:00
90df6237c6 Implements recvmmsg() for UDP packets (#1161)
* Implemented recvmmsg() for UDP packets

- This change implements binding between libc API for recvmmsg()
- The function can receive multiple packets using one system call

Fixes #1141

* Added unit tests for recvmmsg()

* Added recv_mmsg() wrapper for non-Linux OS

* Address review comments for recvmmsg()

* Remove unnecessary imports

* Moved target specific dependencies to the function
2018-09-13 14:41:28 -07:00
80caa8fdce add back some defaults for client.sh 2018-09-13 14:05:53 -07:00
8706774ea7 Rewrote service trait join() method to allow thread join handles to return values other than () (#1213) 2018-09-13 14:00:17 -07:00
1d7e87d430 Increase number of sockets for transaction processing 2018-09-13 14:22:07 -06:00
1a4cd763f8 Fix missing recycle in recv_from (#1205)
In the error case that i>0 (we have blobs to send)
we break out of the loop and do not push the allocated r
to the v array. We should recycle this blob, otherwise it
will be dropped.
2018-09-13 08:29:18 -07:00
ee74b367ce Add docker install script 2018-09-12 17:09:37 -07:00
f06113500d bench-tps/net sanity: add ability to check for unexpected extra nodes 2018-09-12 15:38:57 -07:00
9ab5692acf fix "leak" in Blob::recv_from (#1198)
* fix "leak" in Blob::recv_from

fixes #1199
2018-09-12 14:45:43 -07:00
e7a910b664 v0.9 2018-09-12 10:27:33 -07:00
b52230097e groom Fullnode's new_with_bank() to match new() more 2018-09-12 09:24:42 -07:00
a8fdb8a5a7 use a single BlobRecycler per fullnode 2018-09-11 16:56:54 -07:00
297f859631 Change '>=' back to '>' to fix recycling of blobs/packets (#1192)
Recycler will have a strong ref to the item, so the count will be at
least 1; '>=' would always prevent recycling.
2018-09-11 16:52:45 -07:00
5d19b799af Fix snap configuration for netstat daemon (#1190)
- Also increased the frequency at which the stats are sent
- Fixed file permissions for snapcraft.yaml
2018-09-11 14:49:05 -07:00
af3eb5a16c .sh 2018-09-11 11:29:49 -07:00
b313b7f6f9 Revert "move rpc_server to drop() semantics instead of having its own thread"
This reverts commit 40aa0654fa.
2018-09-10 22:48:33 -07:00
016ee36808 remove -x 2018-09-10 21:40:14 -07:00
c3fc98c48f use gossip to find the leader for every airdrop request 2018-09-10 21:29:45 -07:00
40aa0654fa move rpc_server to drop() semantics instead of having its own thread 2018-09-10 20:25:53 -07:00
bace2880d0 Correct spelling 2018-09-10 19:58:21 -07:00
9d80eefb81 Log the number of accounts each 250k txes (#1178) 2018-09-10 17:40:00 -07:00
1c17c6dd2b Report UDP network statistics (#1176)
* Report UDP network statistics

Fixes #1093

* Address review comments

* Address additional review comments

* Fix shellcheck errors
2018-09-10 15:52:08 -07:00
2be0dbddbb Correct spelling 2018-09-10 13:48:43 -07:00
a91b785ba5 move fullnode trace generation into crdt 2018-09-10 13:47:57 -07:00
0ef05de889 Add sleep to prevent spinning thread 2018-09-10 12:50:28 -07:00
a093d5c809 Fix erasure build 2018-09-10 11:40:26 -06:00
fc64e1853c Initialize Window, not SharedWindow
Wrap with Arc<RwLock>> when/if needed, no earlier.
2018-09-10 11:40:26 -06:00
7f669094de Split window into two modules 2018-09-10 11:40:26 -06:00
5025d89c88 Inline window method implementations 2018-09-10 11:40:26 -06:00
2b44c4504a Use WindowUtil for more idiomatic code 2018-09-10 11:40:26 -06:00
d2c9beb843 Add a trait to pretend Window is an object 2018-09-10 11:40:26 -06:00
9e6d3bf532 Correct spelling 2018-09-10 09:29:01 -07:00
a89b611e9e comments (#1165) 2018-09-09 07:07:38 -07:00
ebcac3c2d1 Use a common solana user on all testnet instances 2018-09-08 22:34:26 -07:00
7029e4395c Fix OOM reporting 2018-09-08 18:57:31 -07:00
5afcdcbbe6 More log grooming 2018-09-08 14:16:34 -07:00
3840b4b516 Groom log output 2018-09-08 14:10:18 -07:00
7aeb6d642b Display log file 2018-09-08 13:59:45 -07:00
1d6c4aacae Retry rsync a couple times before failing 2018-09-08 13:59:45 -07:00
9f5c86e60c Install earlyoom at gce instance startup 2018-09-08 13:59:45 -07:00
9f413fd656 Establish net/scripts/... for better scoping 2018-09-08 13:59:45 -07:00
97c3125a78 improve localnet-sanity's robustness (#1160)
* fix poll_gossip_for_leader() loop to actually wait
         for 30 seconds
    * reduce reuseaddr use to only when necessary,
         try to avoid already bound sockets
    * move nat.rs to netutil.rs
    * add gossip tracing to thin_client and bench-tps
2018-09-09 04:50:43 +09:00
a77aca75b2 Add NO_VALIDATOR_SANITY back 2018-09-07 22:37:05 -07:00
96bfd9478b make all the nodes have a pretty seq id (#1159) 2018-09-08 14:18:18 +09:00
e8206cb2d4 Echo the network address before entering a quiet polling loop 2018-09-07 21:20:00 -07:00
c3af0d9d25 Improve client.log 2018-09-07 21:20:00 -07:00
932c994dc9 Use new bench-tps command-line args 2018-09-07 21:20:00 -07:00
c34d911eaf Migrate Budget DSL to use the Account state (#979)
* Migrate Budget DSL to use the Account state instead of global bank data structures.

* Serialize Instruction into Transaction::userdata.
* Store the pending set in the Account::userdata
* Enforce the token balance rules on contract execution. This becomes the entry point for generic contracts.
* This pr will have a performance impact on the bank. The next set of changes will fix this by locking each account during multi threaded execution of all the contracts.
* With this change a contract transaction needs to store its state under an address. That address could be the destination of the tokens, or any random address. For the latter, an extra step would be needed to claim the tokens which isn't implemented by budget_dsl at the moment.
* test tracking issue 1157
2018-09-07 20:18:36 -07:00
ddd1871840 Install libssl1.1 for solanalabs/rust docker image compat 2018-09-07 19:57:41 -07:00
db825788fa Document how to get ssh access into CD testnets 2018-09-07 19:41:13 -07:00
b1b03ec13b Refine docker image tagging to avoid breaking stabilization branches on updates 2018-09-07 18:42:25 -07:00
73a8441add /var/snap is not writable by most users 2018-09-07 17:41:20 -07:00
bf29590f41 WSL needs ReuseAddr in addition to ReusePort (which it doesn't honor) (#1149) 2018-09-08 07:28:22 +09:00
51b27779c9 client changes for TODOs and looping (#1138)
* remove client.sh from snap
* default to ephemeral instead of ~/.config key
* rework CLI for bench-tps
* remove multinode-demo stuff from remote-client.sh
* remove multinode-demo from remote-sanity and localnet-sanity
2018-09-08 07:07:10 +09:00
5169c8d08f Add method to return hash of bank state 2018-09-07 15:38:53 -06:00
0d945e6a92 Groom testnet-sanity logging 2018-09-07 12:45:48 -07:00
1090254ba5 Add datapoints for leader/validator start 2018-09-07 12:45:48 -07:00
e51445d857 🙃 2018-09-07 12:24:34 -07:00
4b47abd3bf Fix --num-nodes argument parsing 2018-09-07 12:20:42 -07:00
71a617b4dc Fix erasure build 2018-09-07 13:18:19 -06:00
a722802c95 Window write lock to read lock 2018-09-07 13:18:19 -06:00
e9f44b6661 window -> window_service 2018-09-07 13:18:19 -06:00
9693de1867 Reposition parameters 2018-09-07 13:18:19 -06:00
f7ea95aed1 Hoist lock, reposition parameters 2018-09-07 13:18:19 -06:00
f07ce59be8 Toggle parameters 2018-09-07 13:18:19 -06:00
da423b6cf0 Hoist read lock 2018-09-07 13:18:19 -06:00
d5f60b68e4 Hoist window write lock 2018-09-07 13:18:19 -06:00
78b3a8f7f9 Hoist repair_window() branches
This probably would have been done if repair_window() was unit-tested.
2018-09-07 13:18:19 -06:00
d77699c126 Do the easy check first
All functions above operate on immutable values, so this shouldn't
change functionality, but there are no repair_window() tests to be certain.
2018-09-07 13:18:19 -06:00
09ba0dae15 Remove redundant clone() 2018-09-07 13:18:19 -06:00
a5c7575207 Rewrite find_next_missing, call it clear_slots 2018-09-07 13:18:19 -06:00
50f040530b Remove redundant cast 2018-09-07 13:18:19 -06:00
7f99c90539 Simplify using early return and Result::ok() 2018-09-07 13:18:19 -06:00
d8564b725c Don't reference window to get each slot 2018-09-07 13:18:19 -06:00
e4de25442a Hoist write lock
It needed to be passed the lock before, because it contained a
branch where one side didn't require locking. Now that that
defensive programming was hoisted, we can hoist the write lock
as well, leaving a simpler function for unit testing.
2018-09-07 13:18:19 -06:00
3b2ea8fd40 Hoist untested branch in window
If there were unit tests for this function, the author would have
written it this way to make their own life easier.
2018-09-07 13:18:19 -06:00
9a1832ed61 Bump ping timeout 2018-09-07 12:01:43 -07:00
9e45f1f5e2 Doc fixup 2018-09-07 12:01:43 -07:00
ee682d5bc3 Move wallet-sanity.sh out of multinode-demo/ 2018-09-07 12:01:43 -07:00
05decc863f Make set -x more buildkite friendly 2018-09-07 12:01:43 -07:00
506a81e8cc Assume -y 2018-09-07 12:01:43 -07:00
dcb30a8489 Delete leader node first 2018-09-07 12:01:43 -07:00
a2631e89f6 Use consistent style 2018-09-07 12:01:43 -07:00
ab208ddb77 Clean up arg handling 2018-09-07 12:01:43 -07:00
09a48d773a Run bench-tps in a tmux 2018-09-07 12:01:43 -07:00
88298bf321 Add -n option 2018-09-07 12:01:43 -07:00
d252f7f687 Revert "Default to 10 validators"
This reverts commit ed5fbaef06.
2018-09-07 12:01:43 -07:00
533ebc17f2 Install multilog automatically on a CI machine 2018-09-07 11:56:23 -07:00
f4947236dc Keep cargo-target-cache size under 6GB-ish 2018-09-07 11:45:27 -07:00
e088833b81 s/create/start/ 2018-09-06 21:07:11 -07:00
53e16f68d9 Improve error handling 2018-09-06 20:57:05 -07:00
ed5fbaef06 Default to 10 validators 2018-09-06 20:46:49 -07:00
b1bacf12a6 Add some log sections 2018-09-06 20:38:11 -07:00
66ff602659 Rewrite ci/testnet-{deploy,sanity}.sh in terms of net/ primitives 2018-09-06 19:54:39 -07:00
e175c9dea9 Remove ip address hardcode. Fixes #959 2018-09-06 19:54:39 -07:00
5a57d9b5d9 de-y 2018-09-06 19:54:39 -07:00
03e87e4169 Add more metrics 2018-09-06 19:54:39 -07:00
abfff66d53 Retry ssh a couple times before giving up 2018-09-06 19:54:39 -07:00
31dee553d5 Split start/version reporting 2018-09-06 19:54:39 -07:00
9ca6a2d25b Configure boot disk size 2018-09-06 19:54:39 -07:00
a3178c3bc7 Remove unused name tag 2018-09-06 19:54:39 -07:00
aa07bdfbaa Optionally suppress delete confirmation 2018-09-06 19:54:39 -07:00
eaef9be710 Clarify -f 2018-09-06 19:54:39 -07:00
cae345b416 Allow - in prefix 2018-09-06 19:54:39 -07:00
acb1171422 Add -e option 2018-09-06 19:54:39 -07:00
52d8f293b6 Add links to citations
And fix hyphens in quote.
2018-09-06 20:41:05 -06:00
636eb8d058 Add Leslie Lamport quote to README 2018-09-06 20:41:05 -06:00
0fa27f65bb Use the default Pubkey formatter instead of debug_id() 2018-09-06 16:31:47 -06:00
8f94e3f7ae Buffer tokens when switching directions to prevent errors (#1126)
Even if transactions are dropped, accounts will have a buffer
of tokens. Should reduce or eliminate AccountNotFound errors seen in the
leader while bench-tps is running.
2018-09-06 14:20:01 -07:00
05460eec0d Open multiple sockets for transaction UDP port (#1128)
* Reuse UDP port and open multiple sockets for transaction address

* Fixed failing crdt tests

* Add tests for reusing UDP ports

* Address review comments

* Updated bench-streamer to use multiple receive sockets

* Fix minimum number of recv sockets for bench-streamer

* Address review comments

Fixes #1132

* Moved bind_to function to nat.rs
2018-09-06 14:13:40 -07:00
072d0b67e4 Send deploy metrics to the testnet-specific database 2018-09-06 08:30:03 -07:00
fdc48d521c use USER instead of whoami (#1134)
* use USER instead of whoami

make gcloud_FigureRemoteUsername robust against unsolicited output
   (that I get on login ;) )

validate --prefix argument

* Update gcloud.sh
2018-09-07 00:18:05 +09:00
6560b0e2cc s/whoami/id -un/ 2018-09-05 14:26:21 -07:00
ec38dba209 GCE leader nodes can now be provisioned with a static IP address 2018-09-05 14:26:21 -07:00
d9e4bce6ad Add drop stats to bench-tps (#1127)
See how many transactions made it through
2018-09-05 11:58:41 -07:00
1fd4343621 Add total count to stat (#1124) 2018-09-05 09:28:18 -07:00
8d87627a49 t 2018-09-05 09:09:50 -07:00
aacf27fb76 Add convenience link to current Snap log files 2018-09-05 09:02:02 -07:00
a51536d107 Add log tail hint 2018-09-05 09:02:02 -07:00
1c874fbc1b Make this a little more hacky 2018-09-05 09:02:02 -07:00
0362169671 Better scope leader and validator setup 2018-09-05 09:02:02 -07:00
e2e569cb43 Set rsync url for local deployments 2018-09-05 09:02:02 -07:00
8c51b47e85 Preserve existing ssh config 2018-09-05 09:02:02 -07:00
017eb10e76 Add file header doc 2018-09-05 09:02:02 -07:00
f50aeb0e58 Always add perf-libs to LD_LIBRARY_PATH 2018-09-05 09:02:02 -07:00
48c19d3100 Enable cargo features to be specified 2018-09-05 09:02:02 -07:00
aaf0a23134 Add Tips section 2018-09-05 09:02:02 -07:00
89db85dbf9 Work around concurrent |gcloud compute ssh| terminal issue 2018-09-05 09:02:02 -07:00
e677cda027 Private IP networks now work, and are the default 2018-09-05 09:02:02 -07:00
db9219ccc8 Improve error monitoring 2018-09-05 09:02:02 -07:00
06fd945f85 Set node config correctly 2018-09-05 09:02:02 -07:00
6ad4a81123 s/_/-/g in filenames 2018-09-05 09:02:02 -07:00
bcaa0fdcb1 net/ can now deploy Snaps 2018-09-05 09:02:02 -07:00
2cb1375217 Run gcloud_PrepInstancesForSsh in parallel 2018-09-05 09:02:02 -07:00
9365a47d42 Employ a startup script 2018-09-05 09:02:02 -07:00
6ffe205447 Add -g option 2018-09-05 09:02:02 -07:00
ec3e62dd58 Add net/ sanity 2018-09-05 09:02:02 -07:00
fa07c49cc9 net/ can now deploy Snaps 2018-09-05 09:02:02 -07:00
449d7042f0 Configure metrics correctly 2018-09-05 09:02:02 -07:00
7e2b65374d gce instance types are now configurable 2018-09-05 09:02:02 -07:00
8e39465700 Drop .sh extension to hide from shellcheck 2018-09-05 09:02:02 -07:00
43b4207101 Run oom-monitor in net/ testnets 2018-09-05 09:02:02 -07:00
ff991b87da Add support for deploying from non-Linux machines 2018-09-05 09:02:02 -07:00
c81c19234f Improve incremental speed of docker cargo builds outside of CI 2018-09-05 09:02:02 -07:00
399caf343c Morph gce_multinode-based scripts into net/ 2018-09-05 09:02:02 -07:00
ffb72136c8 Remove account from balances table after error seen (#1120)
If balance goes to 0, then the bank removes the account
from its account table and returns a no-account error. The thin client
should also update the account to this state or it will
still have the cached balance from the last successful get_balance().
2018-09-04 21:33:19 -07:00
1a615bde2b Update README.md (#1117)
* Update README.md

* Fix spelling

* Improved punctuation
2018-09-04 20:41:11 -07:00
cf2626a1c5 Update instructions to upgrade nightly docker image 2018-09-04 20:56:40 -06:00
68c72d6f34 Fix nightly build 2018-09-04 20:56:40 -06:00
65f78905cd Install cargo-cov on latest nightly 2018-09-04 20:56:40 -06:00
70a8ae4612 Fixed private IP variable in gcloud script (#1119) 2018-09-04 16:24:19 -07:00
d82ec2634c Fix is_leader boolean (#1115)
A node is the leader if the address is none
2018-09-04 13:38:24 -07:00
b4a7a18334 Update README.md 2018-09-04 13:29:00 -07:00
c44c5f0b09 take into account size of an Entry (#1116) 2018-09-05 05:07:58 +09:00
226d3b9471 Trace recycle() calls (#968)
* trace recycle() calls fixes #810
2018-09-05 05:07:02 +09:00
2752bde683 Print to indicate what drone is doing while waiting for gossip 2018-09-04 13:45:08 -06:00
b8816d722c Fix Block::to_blobs() benchmark
16% speedup, w00t!

name                                control  ns/iter  variable  ns/iter  diff ns/iter   diff %  speedup
bench_block_to_blobs_to_block       29,897            25,807                   -4,090  -13.68%   x 1.16
2018-09-04 07:50:23 -10:00
2aa72cc72e Return a Vec from to_blobs() instead of using a mut parameter 2018-09-04 07:50:23 -10:00
8cc030ef84 Use Vec instead of VecDeque for SharedBlobs 2018-09-04 07:50:23 -10:00
9a9f89293a Better error handling messages for airdrops 2018-09-04 06:46:43 -10:00
501deeef56 accounts should never be negative (#1083) 2018-09-04 06:43:18 -10:00
05f921d544 Don't call println in the test suite 2018-09-04 06:01:32 -10:00
ab7a2960b1 Don't use product name in solana library 2018-09-04 06:01:32 -10:00
4e2deaa33b Less mut 2018-09-04 06:01:32 -10:00
d5ef18337c Remove redundant return value
And don't log the same error twice.
2018-09-04 06:01:32 -10:00
d18ea501b7 Minimize unsafe code 2018-09-04 06:01:32 -10:00
c9a1ac9b8c Don't propagate errors we'll never handle 2018-09-04 06:01:32 -10:00
c2a4cb544e Borrow, don't clone entries 2018-09-04 06:01:32 -10:00
3ab12076e8 Convert voting functions to methods
More idiomatic Rust.
2018-09-04 05:53:58 -10:00
6a383c45fc Update sendTransaction example to reflect new array size 2018-09-04 05:44:10 -10:00
7cc27e7bd1 Doc requestAirdrop rpc method 2018-09-04 05:44:10 -10:00
0464087327 Add api definitions 2018-09-04 05:44:10 -10:00
c193c7de12 Add JSON-RPC API Documentation 2018-09-04 05:44:10 -10:00
61abee204f don't check for snap mode in common.sh, is only relevant to snap daemons (#1113)
snap mode is for daemons, remove it from client (i.e. common.sh)

supply leader info to client via snap
2018-09-04 14:31:54 +09:00
a99dbb2a0c set -x in client.sh 2018-09-04 11:55:04 +09:00
e834c76b40 --count => --num-nodes 2018-09-04 07:07:25 +09:00
7b3c7f148b supply leader and leader_address 2018-09-02 02:27:05 +09:00
fb4b33b81b make the repair_backoff test more robust (#1095)
* make the repair_backoff test more robust

* fix names and magic numbers
2018-08-31 12:40:56 -10:00
25d7dc7b96 fixups 2018-09-01 04:38:18 +09:00
d1f1cbe88f leader-address=>leader-ip 2018-09-01 04:38:18 +09:00
a4e7b6e90c more fixups for client.sh changes 2018-09-01 03:33:21 +09:00
fbc7c9c431 fix client_start to deal with new client.sh 2018-09-01 03:23:05 +09:00
8b248dcf09 specify port 2018-09-01 02:56:24 +09:00
4938aad939 fixups 2018-09-01 02:21:46 +09:00
7e882dfe62 inform all snaps where the network is 2018-09-01 02:21:46 +09:00
5c8cb96f88 rebase fixup 2018-08-31 23:21:07 +09:00
9d1eb4f9ea remove 'localhost' leader (redundant, un-dig-friendly) 2018-08-31 23:21:07 +09:00
210a4d0640 fixup 2018-08-31 23:21:07 +09:00
176e806d94 rework of network rendezvous
* rename NodeInfo field of Node from "data" to "info"
      (touches a lot of files)

  * update client to use gossip to find leader, a la drone

  * rework multinode scripts
      * move more stuff into rust
      * added usage to all
      * no more rsync unless you're a validator (TODO: whack that, too)
  * fullnode doesn't bail if drone isn't up yet, just keeps trying
  * drone doesn't bail if network isn't up yet, just keeps trying
2018-08-31 23:21:07 +09:00
eb4e5a7bd0 fixups 2018-08-31 23:21:07 +09:00
ba27596076 fixups 2018-08-31 23:21:07 +09:00
63e44dcc35 continue rendezvous refactor for gossip and repair
* remove trailing whitespace in ci/audit.sh

  * code review fixups
     * rename GOSSIP_PORT_RANGE => SOLANA_PORT_RANGE
     * remove out-of-date TODO in localnet-sanity.sh

  * remove features=test and code that was using it (localhost prohibitions in
      crdt) added TODO in crdt.rs, maybe we should boot localhost in production
      networks?

  * boot tvu_window from NodeInfo: instead, send repair requests from the repair
      socket (to gossip on peer) and answer repair requests via the sockaddr
      from the repair request

  * remove various unused pub functions

  * banish SocketAddr parse().unwrap() to a macro that can also accept simpler stuff
2018-08-31 23:21:07 +09:00
c0ba676658 fixup 2018-08-31 23:21:07 +09:00
1af4cee63b fix #1079
* move gossip/NCP off assuming anything about its address
  * use a single socket to send and receive gossip
  * remove --addr/-a from CLIs
  * rearrange networking utility code
  * use Arc<UdpSocket> to share the Sync-safe UdpSocket among threads
  * rename TestNode to Node

TODO:

  * re-enable 127.0.0.1 as a valid address in crdt
  * change repair request/response to a similar, single socket
  * pick cloned sockets or Arc<UdpSocket> for all these (rpu uses tryclone())
  * update contact_info with network truthiness instead of what the node
      says?
2018-08-31 23:21:07 +09:00
cb52a335bd re-enable localnet-sanity 2018-08-31 23:21:07 +09:00
e308a4279e Update RPC requestAirdrop endpoint to return airdrop tx signature 2018-08-28 18:27:41 -06:00
513a934ff6 Update request_airdrop utility function to pass along airdrop tx signature 2018-08-28 18:27:41 -06:00
77d820c842 Update drone module to return airdrop tx signature 2018-08-28 18:27:41 -06:00
30cbe7c6a9 Update jsonrpc crate version 2018-08-28 18:27:24 -06:00
18ef643dc7 Keep locals local 2018-08-28 08:11:44 -07:00
73a0bf8d30 Avoid unbounded /var/tmp growth 2018-08-28 08:11:44 -07:00
9d53208d68 Use gcloud_DeleteInstances 2018-08-28 08:11:44 -07:00
d26f135159 Find metrics-write-datapoint.sh again 2018-08-27 22:41:58 -07:00
c8e3ce26a9 Start of scripts/gcloud.sh 2018-08-27 22:35:14 -07:00
f88970a964 source oom-score-adj.sh from validator.sh 2018-08-28 10:01:41 +09:00
51d911e3f4 Update testnet-sanity.sh 2018-08-27 15:44:10 -07:00
bd5c6158ae Move some common scripts from multinode-demo/ to scripts/ 2018-08-27 13:52:38 -07:00
cd0db7842c Remove unused _config.yml 2018-08-27 13:52:38 -07:00
31d1087103 Documentation 2018-08-27 13:52:38 -07:00
0efd64df6f no need for sudo, move ledger copy out of SNAP_DATA 2018-08-28 05:42:05 +09:00
28bdf346f6 clean up after ledger sanity 2018-08-28 05:42:05 +09:00
48762834d9 Randomize repair requests (#1059)
* randomize packet repair requests

* exponential random repair requests

* use gen_range to get a uniform distribution
2018-08-27 07:05:48 -07:00
8d0d429acd update 2018-08-26 23:34:25 -07:00
e5408368f7 fmt 2018-08-26 22:35:26 -07:00
61492fd27e exit if no leader 2018-08-26 22:35:26 -07:00
bbce08a67b bench needs to discover leader as well 2018-08-26 22:35:26 -07:00
a002148098 retry transfer and poll 2018-08-26 16:10:46 -07:00
90ae662e4d Fix packet header offset
And update transaction offsets to use the same approach as packet.rs.
Maybe this should be serialized_size(), but thanks to this
GenericArray update, those values are the same.
2018-08-26 14:27:19 -06:00
60d8f5489f Update transaction layout offsets
24 fewer bytes in minimal transactions. 10% TPS boost?
2018-08-26 14:27:19 -06:00
59dd8b650d Update generic-array requirement from 0.11.1 to 0.12.0
Updates the requirements on [generic-array](https://github.com/fizyk20/generic-array) to permit the latest version.
- [Release notes](https://github.com/fizyk20/generic-array/releases)
- [Changelog](https://github.com/fizyk20/generic-array/blob/master/CHANGELOG.md)
- [Commits](https://github.com/fizyk20/generic-array/commits)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2018-08-26 14:27:19 -06:00
738247ad44 advertise valid gossip address in drone and wallet (#1066)
* advertise valid gossip address in drone and wallet

get rid of asserts

check for valid ip address

check for valid address

ip address

* tests

* cleanup

* cleanup

* print error

* bump

* disable tests

* disable nightly
2018-08-26 11:36:27 -07:00
5b0bb7e607 Skip invalid nodes for finality (#1068)
* skip invalid nodes for finality

* check valid last_ids only

* fixup!

* fixup!
2018-08-25 23:12:41 -07:00
f7c0d30167 Disallow localhost in deployment (#1064)
* disallow localhost in deployment

* tests

* fmt

* integration tests do not have a flag to check

* fmt
2018-08-25 21:09:18 -07:00
8e98c7c9d6 fix purge test 2018-08-25 19:56:09 -07:00
50661e7b8d Added poll_balance_with_timeout method (#1062)
* Added poll_balance_with_timeout method

- updated bench-tps, fullnode and wallet to use this method instead
  of repeatedly calling poll_get_balance()

* Address review comments

- Revert some changes to use wrapper poll_get_balance()

* Reverting bench-tps to use poll_get_balance

- The original code is checking if the balance has been updated,
  instead of just retrieving the balance. The logic is different
  from poll_balance_with_timeout()

* Reverting wallet to use poll_get_balance

- The break condition in the loop is different than poll_balance_with_timeout().
  It's checking if the balance has been updated.
2018-08-25 18:24:25 -07:00
ad159e0906 Fix crash in fullnode when poll_get_balance() returns error (#1058) 2018-08-25 15:25:13 -07:00
d3fac8a06f Dynamically bind to available UDP ports in Fullnode (#920)
* Dynamically bind to available UDP ports in Fullnode

* Added tests for dynamic port binding

- Also removed hard coding of port range from CRDT
2018-08-25 10:24:16 -07:00
c641ba1006 Up network buffers to 64MB max (#1057)
500ms of data at 1Gbps = 125MB/s / 2 ≈ 64MB
Seems to help tx rate in GCP network tests.
2018-08-24 18:17:48 -07:00
de379ed915 Fix sig verify counters to be unique and tweak perf counters (#1056)
print events and add current events to old value to report
2018-08-24 16:05:32 -07:00
d4554c6b78 RFC Branches, Channels, and Tags 2018-08-23 21:28:05 -07:00
6fc21a4223 Don't hang in transaction_count (#1052)
The situation is that there can be bad entries in
the bench-tps CRDT table until they get purged later. Threads, however,
are created for those bad entries and then hang trying
to get the transaction_count from those bad addresses, never ending.
2018-08-23 20:57:13 -07:00
71319978df Up drone request amount (#1051)
Multiple clients will request 500k each so up this to support them.
2018-08-23 15:30:35 -07:00
6147e54686 Cap repair requests timeout (#958) 2018-08-23 15:30:21 -07:00
0c8eec2563 Cleanup Fullnode construction
leader_id was already set by Fullnode constructor. And cleanup the
rest of that code while in the neighborhood.

Thanks @CriesofCarrots!
2018-08-23 13:42:54 -07:00
4ab58f069a Add back JsonRpcService changes 2018-08-23 13:42:54 -07:00
85f96d926a Pacify clippy 2018-08-23 13:42:54 -07:00
816de4f8ec Hoist shared code between leaders and validators 2018-08-23 13:42:54 -07:00
42229a1105 Hoist thread_hdls 2018-08-23 13:42:54 -07:00
d8820053af Inline create_leader_threads and create_validator_threads 2018-08-23 13:42:54 -07:00
731f8512c6 Hoist Arc<Bank> 2018-08-23 13:42:54 -07:00
a133784706 Rename mode-specific constructors and return only thread handles 2018-08-23 13:42:54 -07:00
be58fdf1bb Less constructors 2018-08-23 13:42:54 -07:00
57daeb35d2 Drop all references to new_leader and new_validator 2018-08-23 13:42:54 -07:00
9c5e69bf3d Don't offer two ways to specify a leader 2018-08-23 13:42:54 -07:00
cfac127e4c Extract lower-level constructor
Passing in the bank is useful for unit-tests since Fullnode doesn't
store it in a member variable.
2018-08-23 13:42:54 -07:00
fda4523cbf Fix broken doc 2018-08-23 13:42:54 -07:00
cabe80b129 Increment counter by number of packets received (#1049)
So that we can see the total packets/s
2018-08-23 12:32:50 -07:00
d4c41219f9 Improve gossip use for drone and wallet
- Add utility function
  - Add thread sleep
  - Enable configurable timeout for gossip poll
2018-08-23 13:08:59 -06:00
4fdd9fbfca Wallet: use gossip to identify leader's port config 2018-08-23 13:08:59 -06:00
bdf5ac9c1a Drone: use gossip to identify leader's port config 2018-08-23 13:08:59 -06:00
f1785c76a4 Rework counter increment outside apply_debits loop (#1046)
Reduces prints/atomics work inside the process_transactions loop
2018-08-23 09:42:59 -07:00
2de8fe9c5f Pass bank to rpc as reference 2018-08-23 09:06:17 -06:00
d910ed68a3 Use balance to verify requestAirdrop success 2018-08-23 09:06:17 -06:00
f7f7ecd4c6 Add json-rpc requestAirdrop endpoint 2018-08-23 09:06:17 -06:00
a9c3a28a3b Add json-rpc sendTransaction endpoint 2018-08-23 09:06:17 -06:00
96787ff4ac Use builtin sum 2018-08-22 16:24:19 -06:00
c3ed4d28de Change average TPS to max average tps seen for any node and...
add script to collect perf stats
2018-08-22 14:55:04 -07:00
f1e35c3bc6 GCE script change to use GCE private network for multinode tests (#1042)
- Also the user can specify the zone where the nodes should be created
2018-08-22 13:21:33 -07:00
db3fb3a27c Boot criterion (#1032)
* Revert benchmarks back to libtest

Criterion has too many dependencies, its execution was slower, and
we didn't see the kind of precision we had hoped for that would let
us use it to block CI builds.

* Ignore benchmarks that take more than a few milliseconds per iteration

* Revert "Ignore benchmarks that take more than a few milliseconds per iteration"

This reverts commit b87cdf6ef4.

* Don't run benchmarks in CI

They are already built in the nightly build. Executing them in CI
doesn't add much value until the results are precise enough to act
on.
2018-08-22 08:57:07 -06:00
8282442956 fixes #927 2018-08-22 17:47:59 +09:00
a355d9f46c Add error catch for rpc server builder 2018-08-21 14:04:52 -06:00
be4824c955 Add custom panic hook for RPC port bind 2018-08-21 14:04:52 -06:00
86c1d97c13 Fix validator rpc addr to match leader 2018-08-20 22:35:06 -07:00
0b48aea937 echo commands, use PID (good form) 2018-08-21 11:41:00 +09:00
cdec0cead2 files have to appear in the snap 2018-08-21 11:41:00 +09:00
831709ce7e fixups 2018-08-21 10:36:03 +09:00
b7b8a31532 make a copy of the ledger for sanity check
we can't verify a live ledger, unfortunately, fixes #985
2018-08-21 10:36:03 +09:00
15406545d8 Document how to adjust the number of clients or validators on the testnet 2018-08-20 18:35:01 -07:00
5aced8224f Revert "make a copy of the ledger for sanity check"
This reverts commit af20a43b77.
2018-08-21 10:34:52 +09:00
af20a43b77 make a copy of the ledger for sanity check
we can't verify a live ledger, unfortunately, fixes #985
2018-08-21 09:45:52 +09:00
39c3280860 Don't block on large network test 2018-08-20 16:48:37 -06:00
2d35345c50 Boot unused crates 2018-08-20 16:48:37 -06:00
a02910be32 Remove pubkey from getBalance response 2018-08-20 15:02:48 -07:00
b9ec97a30b Add counter for bank transaction errors (#1015) 2018-08-20 14:56:01 -07:00
2e89999d88 Fix testnet readme
(squash of 4 commits: fix testnet readme, update, typo, cleanup)
2018-08-20 13:49:56 -07:00
24b0031925 Reduce number of nodes in multinode test (#1003) 2018-08-20 13:40:42 -07:00
9eeaf2d502 Bind RPC port on all interfaces 2018-08-20 12:45:50 -07:00
c9e6fb36c3 Avoid unnecessary cargo rebuilds in non-perf configuration 2018-08-20 12:03:44 -07:00
8de317113c clippy: remove identity conversion 2018-08-20 10:55:55 -07:00
a1ec549630 Pin nightly rust for more controlled updating 2018-08-20 10:55:55 -07:00
ecddff98f5 Add --nopull argument 2018-08-20 10:55:55 -07:00
10066d67bf Add llvm deb repository 2018-08-19 09:01:36 -07:00
a07f7435c6 \ 2018-08-19 08:49:29 -07:00
d3523ebbe5 Nightly image now derives from stable image 2018-08-19 08:47:59 -07:00
133ddb11ff typo in README 2018-08-18 18:24:42 -07:00
1bf15ae907 Temporarily disable cargo audit CI failure 2018-08-18 12:29:49 -06:00
f73f3941cd Revert ill-advised jsonrpc marker, and handle jsonrpc server close 2018-08-18 12:29:49 -06:00
d69d79612b Simplify Rpc request processing 2018-08-18 12:29:49 -06:00
64ea5126e0 Fix early return for invalid parameter 2018-08-18 12:29:49 -06:00
9df3aa50d5 Remove unnecessary solana_ prefixes 2018-08-18 12:29:49 -06:00
cab75b7829 Handle potential panics 2018-08-18 12:29:49 -06:00
d9fac86015 Use jsonrpc git repo, allowing removal of Default bound for Metadata 2018-08-18 12:29:49 -06:00
1eb8724a89 Disable Rpc module for other tests to prevent port conflicts 2018-08-18 12:29:49 -06:00
c6662a4512 Implement Rpc in Fullnode 2018-08-18 12:29:49 -06:00
d3c09b4e96 Update jsonrpc dependency syntax 2018-08-18 12:29:49 -06:00
124f6e83d2 Rpc get last id endpoint 2018-08-18 12:29:49 -06:00
569ff73b39 Rpc tests 2018-08-18 12:29:49 -06:00
fc1dbddd93 Implement json-rpc functionality 2018-08-18 12:29:49 -06:00
3ae867bdd6 fixups 2018-08-18 02:22:52 -07:00
bc5f29150b fix erasure, remove Entry "pad"
* fixes #997
 * Entry pad is no longer required since erasure coding aligns data length
2018-08-18 02:22:52 -07:00
46016b8c7e crashes generate_coding() 2018-08-18 02:22:52 -07:00
5dbecd6b6b add logging, more conservative reset 2018-08-18 02:22:52 -07:00
877920e61b Compute snap channel using ci/channel-info.sh 2018-08-17 23:15:48 -07:00
3d1e908dad Add script to fetch latest channel info 2018-08-17 23:15:48 -07:00
6880c2bef0 Exclude ci/semver_bash/; don't want to diverge from upstream 2018-08-17 23:15:48 -07:00
78872ffb4b Vendor https://github.com/cloudflare/semver_bash/tree/c1133faf0e 2018-08-17 23:15:48 -07:00
229d825fe0 Fix master-perf basename 2018-08-17 21:59:36 -07:00
edc5fc098e Make SNAP_CHANNEL more visible in build log 2018-08-17 21:39:54 -07:00
bbe815468d Add instructions on how to run the demo against testnet.solana.com and watch it on the dashboard 2018-08-17 21:26:06 -07:00
82e7725a42 Invert logic 2018-08-17 21:16:35 -07:00
dc61cf1c8d Keep v0.7 snap off the edge channel 2018-08-17 21:12:10 -07:00
aba63e2c6c Log expansion directive must be on its own line 2018-08-17 20:58:14 -07:00
c2ddd056e2 Add option to skip ledger verification 2018-08-17 20:41:30 -07:00
c9508e84f2 0.8.0 2018-08-17 17:56:35 -07:00
f6f0900506 Large network test to not poll validator for sigs (#998)
- The finality is already reached. The test will check the signature
  in validators once, instead of polling. This will help speed up the test.
2018-08-17 14:38:19 -07:00
7aeef27b99 not quite banishing build.rs, but better 2018-08-16 22:33:31 -07:00
98d0ef6df5 Add some wget retries 2018-08-16 20:22:49 -07:00
208a7f16cb Fix bench-tps nokey error 2018-08-16 19:38:26 -06:00
16cf31c3a3 fix #990 2018-08-16 15:52:30 -07:00
2b48daaeba accept multiple expected outputs 2018-08-16 14:44:51 -07:00
79d24ee227 fixed test according to @rob-solana 2018-08-16 14:44:51 -07:00
a284030ecc Account type with state
comments

fixups!

fixups!

fixups for a real Result<> from get_balance()

on 2nd thought, be more rigorous

Merge branch 'rob-solana-accounts_with_state' into accounts_with_state

update

review comments

comments

get rid of option
2018-08-16 14:44:51 -07:00
561 changed files with 71436 additions and 18664 deletions

31
.buildkite/env/README.md vendored Normal file

@ -0,0 +1,31 @@
[ejson](https://github.com/Shopify/ejson) and
[ejson2env](https://github.com/Shopify/ejson2env) are used to manage access
tokens and other secrets required for CI.
#### Setup
```bash
$ sudo gem install ejson ejson2env
```
Then obtain the necessary keypair and place it in `/opt/ejson/keys/`.
#### Usage
Run the following command to decrypt the secrets into the environment:
```bash
eval $(ejson2env secrets.ejson)
```
#### Managing secrets.ejson
To decrypt `secrets.ejson` for modification, run:
```bash
$ ejson decrypt secrets.ejson -o secrets_unencrypted.ejson
```
Edit, then run the following to re-encrypt the file **BEFORE COMMITTING YOUR
CHANGES**:
```bash
$ ejson encrypt secrets_unencrypted.ejson
$ mv secrets_unencrypted.ejson secrets.ejson
```

10
.buildkite/env/secrets.ejson vendored Normal file

@ -0,0 +1,10 @@
{
"_public_key": "ae29f4f7ad2fc92de70d470e411c8426d5d48db8817c9e3dae574b122192335f",
"environment": {
"CODECOV_TOKEN": "EJ[1:Kqnm+k1Z4p8nr7GqMczXnzh6azTk39tj3bAbCKPitUc=:EzVa4Gpj2Qn5OhZQlVfGFchuROgupvnW:CbWc6sNh1GCrAbrncxDjW00zUAD/Sa+ccg7CFSz8Ua6LnCYnSddTBxJWcJEbEs0MrjuZRQ==]",
"CRATES_IO_TOKEN": "EJ[1:Kqnm+k1Z4p8nr7GqMczXnzh6azTk39tj3bAbCKPitUc=:qF7QrUM8j+19mptcE1YS71CqmrCM13Ah:TZCatJeT1egCHiufE6cGFC1VsdJkKaaqV6QKWkEsMPBKvOAdaZbbVz9Kl+lGnIsF]",
"INFLUX_DATABASE": "EJ[1:Kqnm+k1Z4p8nr7GqMczXnzh6azTk39tj3bAbCKPitUc=:PetD/4c/EbkQmFEcK21g3cBBAPwFqHEw:wvYmDZRajy2WngVFs9AlwyHk]",
"INFLUX_USERNAME": "EJ[1:Kqnm+k1Z4p8nr7GqMczXnzh6azTk39tj3bAbCKPitUc=:WcnqZdmDFtJJ01Zu5LbeGgbYGfRzBdFc:a7c5zDDtCOu5L1Qd2NKkxT6kljyBcbck]",
"INFLUX_PASSWORD": "EJ[1:Kqnm+k1Z4p8nr7GqMczXnzh6azTk39tj3bAbCKPitUc=:LIZgP9Tp9yE9OlpV8iogmLOI7iW7SiU3:x0nYdT1A6sxu+O+MMLIN19d2t6rrK1qJ3+HnoWG3PDodsXjz06YJWQKU/mx6saqH+QbGtGV5mk0=]"
}
}


@ -1,2 +1,33 @@
CI_BUILD_START=$(date +%s)
export CI_BUILD_START
#
# Kill any running docker containers, which are potentially left over from the
# previous CI job
#
(
containers=$(docker ps -q)
if [[ $(hostname) != metrics-solana-com && -n $containers ]]; then
echo "+++ Killing stale docker containers"
docker ps
# shellcheck disable=SC2086 # Don't want to double quote $containers
docker kill $containers
fi
)
# Processes from previously aborted CI jobs seem to loiter, unclear why as one
# would expect the buildkite-agent to clean up all child processes of the
# aborted CI job.
# But as a workaround for now manually kill some known loiterers. These
# processes will all have the `init` process as their PPID:
(
victims=
for name in bash cargo docker solana; do
victims="$victims $(pgrep -u "$(id -u)" -P 1 -d \ $name)"
done
for victim in $victims; do
echo "Killing pid $victim"
kill -9 "$victim" || true
done
)


@ -3,15 +3,14 @@
#
# Save target/ for the next CI build on this machine
#
if [[ -n $CARGO_TARGET_CACHE_NAME ]]; then
(
d=$HOME/cargo-target-cache/"$CARGO_TARGET_CACHE_NAME"
mkdir -p "$d"
set -x
rsync -a --delete --link-dest="$PWD" target "$d"
du -hs "$d"
)
fi
(
set -x
d=$HOME/cargo-target-cache/"$BUILDKITE_LABEL"
mkdir -p "$d"
set -x
rsync -a --delete --link-dest="$PWD" target "$d"
du -hs "$d"
)
#
# Add job_stats data point
@ -41,5 +40,5 @@ else
point="job_stats,$point_tags $point_fields"
multinode-demo/metrics_write_datapoint.sh "$point" || true
scripts/metrics-write-datapoint.sh "$point" || true
fi


@ -1,13 +1,29 @@
#!/bin/bash -e
#!/usr/bin/env bash
set -e
[[ -n "$CARGO_TARGET_CACHE_NAME" ]] || exit 0
eval "$(ejson2env .buildkite/env/secrets.ejson)"
# Ensure the pattern "+++ ..." never occurs when |set -x| is set, as buildkite
# interprets this as the start of a log group.
# Ref: https://buildkite.com/docs/pipelines/managing-log-output
export PS4="++"
#
# Restore target/ from the previous CI build on this machine
#
(
d=$HOME/cargo-target-cache/"$CARGO_TARGET_CACHE_NAME"
mkdir -p "$d"/target
set -x
d=$HOME/cargo-target-cache/"$BUILDKITE_LABEL"
if [[ -d $d ]]; then
du -hs "$d"
read -r cacheSizeInGB _ < <(du -s --block-size=1000000000 "$d")
if [[ $cacheSizeInGB -gt 10 ]]; then
echo "$d has gotten too large, removing it"
rm -rf "$d"
fi
fi
mkdir -p "$d"/target
rsync -a --delete --link-dest="$d" "$d"/target .
)

20
.buildkite/pipeline-upload.sh Executable file

@ -0,0 +1,20 @@
#!/usr/bin/env bash
#
# This script is used to upload the full buildkite pipeline. The steps defined
# in the buildkite UI should simply be:
#
# steps:
# - command: ".buildkite/pipeline-upload.sh"
#
set -e
cd "$(dirname "$0")"/..
buildkite-agent pipeline upload ci/buildkite.yml
if [[ $BUILDKITE_BRANCH =~ ^pull ]]; then
# Add helpful link back to the corresponding Github Pull Request
buildkite-agent annotate --style info --context pr-backlink \
"Github Pull Request: https://github.com/solana-labs/solana/$BUILDKITE_BRANCH"
fi


@ -1,5 +1,12 @@
ignore:
- "src/bin"
coverage:
range: 50..100
round: down
precision: 1
status:
project: off
patch: off
comment:
layout: "diff"
behavior: default
require_changes: no

6
.github/ISSUE_TEMPLATE.md vendored Normal file

@ -0,0 +1,6 @@
#### Problem
#### Proposed Solution

5
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@ -0,0 +1,5 @@
#### Problem
#### Summary of Changes
Fixes #

28
.github/RELEASE_TEMPLATE.md vendored Normal file

@ -0,0 +1,28 @@
# Release v0.X.Y <milestone name>
fun blurb about the name, what's in the release
## Major Features And Improvements
* bulleted
* list of features and improvements
## Breaking Changes
* bulleted
* list
* of
* protocol changes/breaks
* API breaks
* CLI changes
* etc.
## Bug Fixes and Other Changes
* can be pulled from commit log, or synthesized
## Thanks to our Contributors
This release contains contributions from many people at Solana, as well as:
pull from commit log

25
.gitignore vendored

@ -1,16 +1,23 @@
Cargo.lock
/target/
/ledger-tool/target/
/wallet/target/
/core/target/
/book/html/
/book/src/img/
/book/src/tests.ok
**/*.rs.bk
.cargo
# node configuration files
# node config that is rsynced
/config/
/config-private/
/config-drone/
/config-validator/
/config-client/
/multinode-demo/test/config-client/
# node config that remains local
/config-local/
# test temp files, ledgers, etc.
/farf/
# log files
*.log
log-*.txt
# intellij files
/.idea/
/solana.iml


@ -8,6 +8,49 @@ don't agree with a convention, submit a PR patching this document and let's disc
the PR is accepted, *all* code should be updated as soon as possible to reflect the new
conventions.
Pull Requests
---
Small, frequent PRs are much preferred to large, infrequent ones. A large PR is difficult
to review, can block others from making progress, and can quickly get its author into
"rebase hell". A large PR oftentimes arises when one change requires another, which requires
another, and then another. When you notice those dependencies, put the fix into a commit of
its own, then check out a new branch, and cherry-pick it. Open a PR to start the review
process and then jump back to your original branch to keep making progress. Once the commit
is merged, you can use git-rebase to purge it from your original branch.
```bash
$ git pull --rebase upstream master
```
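The checkout-and-cherry-pick flow described above can be sketched end to end in a throwaway repository (all repository, branch, and file names below are hypothetical; `git init -b` needs git >= 2.28):

```shell
# Extract a prerequisite commit from a big feature branch onto its own PR branch.
cd "$(mktemp -d)"
git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "base"        # stand-in for upstream master
git checkout -q -b big-feature
echo fix > prerequisite.txt && git add prerequisite.txt
git commit -q -m "standalone prerequisite fix"
fix_sha=$(git rev-parse HEAD)
git checkout -q -b small-pr main             # fresh branch for the small PR
git cherry-pick -q "$fix_sha"                # carries over just the one commit
git log --oneline                            # small-pr now holds only base + the fix
```

Once `small-pr` is merged upstream, a `git pull --rebase upstream master` on `big-feature` drops the now-duplicate commit automatically.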
### How big is too big?
If there are no functional changes, PRs can be very large and that's no problem. If,
however, your PR makes meaningful changes or additions, then about 1,000 lines of
changes is the most you should ask a Solana maintainer to review.
### Should I send small PRs as I develop large, new components?
Add only code to the codebase that is ready to be deployed. If you are building a large
library, consider developing it in a separate git repository. When it is ready to be
integrated, the Solana maintainers will work with you to decide on a path forward. Smaller
libraries may be copied in whereas very large ones may be pulled in with a package manager.
### When will my PR be reviewed?
PRs are typically reviewed and merged in under 7 days. If your PR has been open for longer,
it's a strong indicator that the reviewers aren't confident the change meets the quality
standards of the codebase. You might consider closing it and coming back with smaller PRs
and longer descriptions detailing what problem it solves and how it solves it.
Draft Pull Requests
---
If you want early feedback on your PR, use GitHub's "Draft Pull Request" mechanism. Draft
PRs are a convenient way to collaborate with the Solana maintainers without triggering
notifications as you make changes. When you feel your PR is ready for a broader audience,
you can transition your draft PR to a standard PR with the click of a button.
Rust coding conventions
---
@ -17,7 +60,7 @@ Rust coding conventions
* All Rust code is linted with Clippy. If you'd prefer to ignore its advice, do so explicitly:
```rust
#[cfg_attr(feature = "cargo-clippy", allow(too_many_arguments))]
#[allow(clippy::too_many_arguments)]
```
Note: Clippy defaults can be overridden in the top-level file `.clippy.toml`.
@ -30,7 +73,7 @@ Rust coding conventions
* For function and method names, use `<verb>_<subject>`. For unit tests, that verb should
always be `test` and for benchmarks the verb should always be `bench`. Avoid namespacing
function names with some arbitrary word. Avoid abreviating words in function names.
function names with some arbitrary word. Avoid abbreviating words in function names.
* As they say, "When in Rome, do as the Romans do." A good patch should acknowledge the coding
conventions of the code that surrounds it, even in the case where that code has not yet been
@ -43,11 +86,27 @@ Terminology
Inventing new terms is allowed, but should only be done when the term is widely used and
understood. Avoid introducing new 3-letter terms, which can be confused with 3-letter acronyms.
Some terms we currently use regularly in the codebase:
[Terms currently in use](book/src/terminology.md)
* fullnode: n. A fully participating network node.
* hash: n. A SHA-256 Hash.
* keypair: n. A Ed25519 key-pair, containing a public and private key.
* pubkey: n. The public key of a Ed25519 key-pair.
* sigverify: v. To verify a Ed25519 digital signature.
Proposing architectural changes
---
Solana's architecture is described by a book generated from markdown files in
the `book/src/` directory, maintained by an *editor* (currently @garious). To
change the architecture, you'll need to at least propose a change to the content
under the [Proposed
Changes](https://solana-labs.github.io/book-edge/proposals.html) chapter. Here's
the full process:
1. Propose a change to the architecture by creating a PR that adds a
markdown document to the directory `book/src/` and references it from the
[table of contents](book/src/SUMMARY.md). Add the editor and any relevant
*maintainers* to the PR review.
2. The PR being merged indicates your proposed change was accepted and that the
editor and maintainers support your plan of attack.
3. Submit PRs that implement the proposal. When the implementation reveals the
need for tweaks to the architecture, be sure to update the proposal and have
that change reviewed by the same people as in step 1.
4. Once the implementation is complete, the editor will then work to integrate
the document into the book.

3242
Cargo.lock generated Normal file

File diff suppressed because it is too large


@ -1,119 +1,90 @@
[package]
name = "solana"
name = "solana-workspace"
description = "Blockchain, Rebuilt for Scale"
version = "0.7.1"
version = "0.12.0"
documentation = "https://docs.rs/solana"
homepage = "http://solana.com/"
homepage = "https://solana.com/"
readme = "README.md"
repository = "https://github.com/solana-labs/solana"
authors = [
"Anatoly Yakovenko <anatoly@solana.com>",
"Greg Fitzgerald <greg@solana.com>",
"Stephen Akridge <stephen@solana.com>",
"Michael Vines <mvines@solana.com>",
"Rob Walker <rob@solana.com>",
"Pankaj Garg <pankaj@solana.com>",
"Tyera Eulberg <tyera@solana.com>",
]
authors = ["Solana Maintainers <maintainers@solana.com>"]
license = "Apache-2.0"
[[bin]]
name = "solana-bench-tps"
path = "src/bin/bench-tps.rs"
[[bin]]
name = "solana-bench-streamer"
path = "src/bin/bench-streamer.rs"
[[bin]]
name = "solana-drone"
path = "src/bin/drone.rs"
[[bin]]
name = "solana-fullnode"
path = "src/bin/fullnode.rs"
[[bin]]
name = "solana-fullnode-config"
path = "src/bin/fullnode-config.rs"
[[bin]]
name = "solana-genesis"
path = "src/bin/genesis.rs"
[[bin]]
name = "solana-ledger-tool"
path = "src/bin/ledger-tool.rs"
[[bin]]
name = "solana-keygen"
path = "src/bin/keygen.rs"
[[bin]]
name = "solana-wallet"
path = "src/bin/wallet.rs"
edition = "2018"
[badges]
codecov = { repository = "solana-labs/solana", branch = "master", service = "github" }
[features]
unstable = []
ipv6 = []
cuda = []
erasure = []
[dependencies]
atty = "0.2"
bincode = "1.0.0"
bs58 = "0.2.0"
byteorder = "1.2.1"
chrono = { version = "0.4.0", features = ["serde"] }
clap = "2.31"
dirs = "1.0.2"
env_logger = "0.5.12"
futures = "0.1.21"
generic-array = { version = "0.11.1", default-features = false, features = ["serde"] }
getopts = "0.2"
influx_db_client = "0.3.4"
itertools = "0.7.8"
libc = "0.2.1"
log = "0.4.2"
matches = "0.1.6"
pnet_datalink = "0.21.0"
rand = "0.5.1"
rayon = "1.0.0"
reqwest = "0.8.6"
ring = "0.13.2"
sha2 = "0.7.0"
serde = "1.0.27"
serde_derive = "1.0.27"
serde_json = "1.0.10"
sys-info = "0.5.6"
tokio = "0.1"
tokio-codec = "0.1"
tokio-core = "0.1.17"
tokio-io = "0.1"
untrusted = "0.6.2"
chacha = ["solana/chacha"]
cuda = ["solana/cuda"]
erasure = ["solana/erasure"]
[dev-dependencies]
criterion = "0.2"
[[bench]]
name = "bank"
harness = false
bincode = "1.1.2"
bs58 = "0.2.0"
hashbrown = "0.1.8"
log = "0.4.2"
rand = "0.6.5"
rayon = "1.0.0"
reqwest = "0.9.11"
serde_json = "1.0.39"
solana = { path = "core", version = "0.12.0" }
solana-logger = { path = "logger", version = "0.12.0" }
solana-netutil = { path = "netutil", version = "0.12.0" }
solana-runtime = { path = "runtime", version = "0.12.0" }
solana-sdk = { path = "sdk", version = "0.12.0" }
sys-info = "0.5.6"
[[bench]]
name = "banking_stage"
harness = false
[[bench]]
name = "blocktree"
[[bench]]
name = "ledger"
harness = false
[[bench]]
name = "signature"
harness = false
name = "gen_keys"
[[bench]]
name = "sigverify"
harness = false
[[bench]]
required-features = ["chacha"]
name = "chacha"
[workspace]
members = [
".",
"bench-streamer",
"bench-tps",
"core",
"drone",
"fullnode",
"genesis",
"keygen",
"ledger-tool",
"logger",
"metrics",
"programs/bpf",
"programs/bpf_loader",
"programs/budget",
"programs/budget_api",
"programs/token",
"programs/token_api",
"programs/failure",
"programs/noop",
"programs/rewards",
"programs/rewards_api",
"programs/storage",
"programs/storage_api",
"programs/system",
"programs/vote",
"programs/vote_api",
"replicator",
"sdk",
"upload-perf",
"vote-signer",
"wallet",
]
exclude = ["programs/bpf/rust/noop"]

323
README.md

@ -1,9 +1,9 @@
[![Solana crate](https://img.shields.io/crates/v/solana.svg)](https://crates.io/crates/solana)
[![Solana documentation](https://docs.rs/solana/badge.svg)](https://docs.rs/solana)
[![Build status](https://badge.buildkite.com/d4c4d7da9154e3a8fb7199325f430ccdb05be5fc1e92777e51.svg?branch=master)](https://solana-ci-gate.herokuapp.com/buildkite_public_log?https://buildkite.com/solana-labs/solana/builds/latest/master)
[![Build status](https://badge.buildkite.com/8cc350de251d61483db98bdfc895b9ea0ac8ffa4a32ee850ed.svg?branch=master)](https://buildkite.com/solana-labs/solana/builds?branch=master)
[![codecov](https://codecov.io/gh/solana-labs/solana/branch/master/graph/badge.svg)](https://codecov.io/gh/solana-labs/solana)
Blockchain, Rebuilt for Scale
Blockchain Rebuilt for Scale
===
Solana&trade; is a new blockchain architecture built from the ground up for scale. The architecture supports
@ -17,222 +17,18 @@ All claims, content, designs, algorithms, estimates, roadmaps, specifications, a
Introduction
===
It's possible for a centralized database to process 710,000 transactions per second on a standard gigabit network if the transactions are, on average, no more than 176 bytes. A centralized database can also replicate itself and maintain high availability without significantly compromising that transaction rate using the distributed system technique known as Optimistic Concurrency Control [H.T.Kung, J.T.Robinson (1981)]. At Solana, we're demonstrating that these same theoretical limits apply just as well to blockchain on an adversarial network. The key ingredient? Finding a way to share time when nodes can't trust one-another. Once nodes can trust time, suddenly ~40 years of distributed systems research becomes applicable to blockchain! Furthermore, and much to our surprise, it can implemented using a mechanism that has existed in Bitcoin since day one. The Bitcoin feature is called nLocktime and it can be used to postdate transactions using block height instead of a timestamp. As a Bitcoin client, you'd use block height instead of a timestamp if you don't trust the network. Block height turns out to be an instance of what's being called a Verifiable Delay Function in cryptography circles. It's a cryptographically secure way to say time has passed. In Solana, we use a far more granular verifiable delay function, a SHA 256 hash chain, to checkpoint the ledger and coordinate consensus. With it, we implement Optimistic Concurrency Control and are now well in route towards that theoretical limit of 710,000 transactions per second.
It's possible for a centralized database to process 710,000 transactions per second on a standard gigabit network if the transactions are, on average, no more than 176 bytes. A centralized database can also replicate itself and maintain high availability without significantly compromising that transaction rate using the distributed system technique known as Optimistic Concurrency Control [\[H.T.Kung, J.T.Robinson (1981)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.4735). At Solana, we're demonstrating that these same theoretical limits apply just as well to blockchain on an adversarial network. The key ingredient? Finding a way to share time when nodes can't trust one-another. Once nodes can trust time, suddenly ~40 years of distributed systems research becomes applicable to blockchain!
> Perhaps the most striking difference between algorithms obtained by our method and ones based upon timeout is that using timeout produces a traditional distributed algorithm in which the processes operate asynchronously, while our method produces a globally synchronous one in which every process does the same thing at (approximately) the same time. Our method seems to contradict the whole purpose of distributed processing, which is to permit different processes to operate independently and perform different functions. However, if a distributed system is really a single system, then the processes must be synchronized in some way. Conceptually, the easiest way to synchronize processes is to get them all to do the same thing at the same time. Therefore, our method is used to implement a kernel that performs the necessary synchronization--for example, making sure that two different processes do not try to modify a file at the same time. Processes might spend only a small fraction of their time executing the synchronizing kernel; the rest of the time, they can operate independently--e.g., accessing different files. This is an approach we have advocated even when fault-tolerance is not required. The method's basic simplicity makes it easier to understand the precise properties of a system, which is crucial if one is to know just how fault-tolerant the system is. [\[L.Lamport (1984)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.1078)
Testnet Demos
Furthermore, and much to our surprise, it can be implemented using a mechanism that has existed in Bitcoin since day one. The Bitcoin feature is called nLocktime and it can be used to postdate transactions using block height instead of a timestamp. As a Bitcoin client, you'd use block height instead of a timestamp if you don't trust the network. Block height turns out to be an instance of what's being called a Verifiable Delay Function in cryptography circles. It's a cryptographically secure way to say time has passed. In Solana, we use a far more granular verifiable delay function, a SHA 256 hash chain, to checkpoint the ledger and coordinate consensus. With it, we implement Optimistic Concurrency Control and are now well en route towards that theoretical limit of 710,000 transactions per second.
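The hash-chain idea above can be illustrated with a tiny shell sketch (a toy, not Solana's actual Proof of History implementation; assumes `sha256sum` from GNU coreutils):

```shell
# Toy hash chain: each tick hashes the previous digest, so producing
# tick N requires N sequential SHA-256 evaluations, a crude stand-in
# for a cryptographic proof that time has passed.
h="genesis"
for tick in 1 2 3; do
  h=$(printf '%s' "$h" | sha256sum | awk '{print $1}')
  echo "tick $tick: $h"
done
```

Anyone can replay the chain from the same seed and verify every intermediate digest, but nobody can produce tick 3 without first computing ticks 1 and 2.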
Architecture
===
The Solana repo contains all the scripts you might need to spin up your own
local testnet. Depending on what you're looking to achieve, you may want to
run a different variation, as the full-fledged, performance-enhanced
multinode testnet is considerably more complex to set up than a Rust-only,
singlenode testnode. If you are looking to develop high-level features, such
as experimenting with smart contracts, save yourself some setup headaches and
stick to the Rust-only singlenode demo. If you're doing performance optimization
of the transaction pipeline, consider the enhanced singlenode demo. If you're
doing consensus work, you'll need at least a Rust-only multinode demo. If you want
to reproduce our TPS metrics, run the enhanced multinode demo.
Before you jump into the code, review the online book [Solana: Blockchain Rebuilt for Scale](https://solana-labs.github.io/book/).
For all four variations, you'd need the latest Rust toolchain and the Solana
source code:
First, install Rust's package manager Cargo.
```bash
$ curl https://sh.rustup.rs -sSf | sh
$ source $HOME/.cargo/env
```
Now checkout the code from github:
```bash
$ git clone https://github.com/solana-labs/solana.git
$ cd solana
```
The demo code is sometimes broken between releases as we add new low-level
features, so if this is your first time running the demo, you'll improve
your odds of success if you check out the
[latest release](https://github.com/solana-labs/solana/releases)
before proceeding:
```bash
$ git checkout v0.7.0-beta
```
Configuration Setup
---
The network is initialized with a genesis ledger and leader/validator configuration files.
These files can be generated by running the following script.
```bash
$ ./multinode-demo/setup.sh
```
Drone
---
In order for the leader, client and validators to work, we'll need to
spin up a drone to give out some test tokens. The drone delivers Milton
Friedman-style "air drops" (free tokens to requesting clients) to be used in
test transactions.
Start the drone on the leader node with:
```bash
$ ./multinode-demo/drone.sh
```
Singlenode Testnet
---
Before you start a fullnode, make sure you know the IP address of the machine you
want to be the leader for the demo, and make sure that udp ports 8000-10000 are
open on all the machines you want to test with.
Now start the server:
```bash
$ ./multinode-demo/leader.sh
```
Wait a few seconds for the server to initialize. It will print "Ready." when it's ready to
receive transactions. The leader will request some tokens from the drone if it doesn't have any.
The drone does not need to be running for subsequent leader starts.
Multinode Testnet
---
To run a multinode testnet, after starting a leader node, spin up some validator nodes:
```bash
$ ./multinode-demo/validator.sh ubuntu@10.0.1.51:~/solana 10.0.1.51
```
To run a performance-enhanced leader or validator (on Linux),
[CUDA 9.2](https://developer.nvidia.com/cuda-downloads) must be installed on
your system:
```bash
$ ./fetch-perf-libs.sh
$ SOLANA_CUDA=1 ./multinode-demo/leader.sh
$ SOLANA_CUDA=1 ./multinode-demo/validator.sh ubuntu@10.0.1.51:~/solana 10.0.1.51
```
Testnet Client Demo
---
Now that your singlenode or multinode testnet is up and running, in a separate shell, let's send it some transactions! Note we pass in
the JSON configuration file here, not the genesis ledger.
```bash
$ ./multinode-demo/client.sh ubuntu@10.0.1.51:~/solana 2 #The leader machine and the total number of nodes in the network
```
What just happened? The client demo spins up several threads to send 500,000 transactions
to the testnet as quickly as it can. The client then pings the testnet periodically to see
how many transactions it processed in that time. Take note that the demo intentionally
floods the network with UDP packets, such that the network will almost certainly drop a
bunch of them. This ensures the testnet has an opportunity to reach 710k TPS. The client
demo completes after it has convinced itself the testnet won't process any additional
transactions. You should see several TPS measurements printed to the screen. In the
multinode variation, you'll see TPS measurements for each validator node as well.
Linux Snap
---
A Linux [Snap](https://snapcraft.io/) is available, which can be used to
easily get Solana running on supported Linux systems without building anything
from source. The `edge` Snap channel is updated daily with the latest
development from the `master` branch. To install:
```bash
$ sudo snap install solana --edge --devmode
```
(`--devmode` flag is required only for `solana.fullnode-cuda`)
Once installed the usual Solana programs will be available as `solona.*` instead
of `solana-*`. For example, `solana.fullnode` instead of `solana-fullnode`.
Update to the latest version at any time with:
```bash
$ snap info solana
$ sudo snap refresh solana --devmode
```
### Daemon support
The snap supports running a leader, validator or leader+drone node as a system
daemon.
Run `sudo snap get solana` to view the current daemon configuration. To view
daemon logs:
1. Run `sudo snap logs -n=all solana` to view the daemon initialization log
2. Runtime logging can be found under `/var/snap/solana/current/leader/`,
`/var/snap/solana/current/validator/`, or `/var/snap/solana/current/drone/` depending
on which `mode=` was selected. Within each log directory the file `current`
contains the latest log, and the files `*.s` (if present) contain older rotated
logs.
Disable the daemon at any time by running:
```bash
$ sudo snap set solana mode=
```
Runtime configuration files for the daemon can be found in
`/var/snap/solana/current/config`.
#### Leader daemon
```bash
$ sudo snap set solana mode=leader
```
If CUDA is available:
```bash
$ sudo snap set solana mode=leader enable-cuda=1
```
`rsync` must be configured and running on the leader.
1. Ensure rsync is installed with `sudo apt-get -y install rsync`
2. Edit `/etc/rsyncd.conf` to include the following
```
[config]
path = /var/snap/solana/current/config
hosts allow = *
read only = true
```
3. Run `sudo systemctl enable rsync; sudo systemctl start rsync`
4. Test by running `rsync -Pzravv rsync://<ip-address-of-leader>/config
solana-config` from another machine. **If the leader is running on a cloud
provider it may be necessary to configure the Firewall rules to permit ingress
to port tcp:873, tcp:9900 and the port range udp:8000-udp:10000**
To run both the Leader and Drone:
```bash
$ sudo snap set solana mode=leader+drone
```
#### Validator daemon
```bash
$ sudo snap set solana mode=validator
```
If CUDA is available:
```bash
$ sudo snap set solana mode=validator enable-cuda=1
```
By default the validator will connect to **testnet.solana.com**, override
the leader IP address by running:
```bash
$ sudo snap set solana mode=validator leader-address=127.0.0.1 #<-- change IP address
```
It's assumed that the leader will be running `rsync` configured as described in
the previous **Leader daemon** section.
(The _latest_ development version of the online book is also [available here](https://solana-labs.github.io/book-edge/).)
Developing
===
@ -248,15 +44,16 @@ $ source $HOME/.cargo/env
$ rustup component add rustfmt-preview
```
If your rustc version is lower than 1.26.1, please update it:
If your rustc version is lower than 1.31.0, please update it:
```bash
$ rustup update
```
On Linux systems you may need to install libssl-dev and pkg-config. On Ubuntu:
On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc. On Ubuntu:
```bash
$ sudo apt-get install libssl-dev pkg-config
$ sudo apt-get install libssl-dev pkg-config zlib1g-dev llvm clang
```
Download the source code:
@ -266,47 +63,79 @@ $ git clone https://github.com/solana-labs/solana.git
$ cd solana
```
Build
```bash
$ cargo build --all
```
Then to run a minimal local cluster
```bash
$ ./run.sh
```
Testing
---
Run the test suite:
```bash
$ cargo test
$ cargo test --all
```
To emulate all the tests that will run on a Pull Request, run:
```bash
$ ./ci/run-local.sh
```
Debugging
Local Testnet
---
There are some useful debug messages in the code, you can enable them on a per-module and per-level
basis with the normal RUST\_LOG environment variable. Run the fullnode with this syntax:
```bash
$ RUST_LOG=solana::streamer=debug,solana::server=info cat genesis.log | ./target/release/solana-fullnode > transactions0.log
```
to see the debug and info sections for streamer and server respectively. Generally
we are using debug for infrequent debug messages, trace for potentially frequent messages and
info for performance-related logging.
Start your own testnet locally; instructions are in the book [Solana: Blockchain Rebuilt for Scale: Getting Started](https://solana-labs.github.io/book/getting-started.html).
Attaching to a running process with gdb:
Remote Testnets
---
```
$ sudo gdb
attach <PID>
set logging on
thread apply all bt
```
We maintain several testnets:
* `testnet` - public stable testnet accessible via testnet.solana.com, with an https proxy for web apps at api.testnet.solana.com. Runs 24/7
* `testnet-beta` - public beta channel testnet accessible via beta.testnet.solana.com. Runs 24/7
* `testnet-edge` - public edge channel testnet accessible via edge.testnet.solana.com. Runs 24/7
* `testnet-perf` - permissioned stable testnet running a 24/7 soak test
* `testnet-beta-perf` - permissioned beta channel testnet running a multi-hour soak test weekday mornings
* `testnet-edge-perf` - permissioned edge channel testnet running a multi-hour soak test weekday mornings
## Deploy process
They are deployed with the `ci/testnet-manager.sh` script through a list of [scheduled
buildkite jobs](https://buildkite.com/solana-labs/testnet-management/settings/schedules).
Each testnet can be manually manipulated from buildkite as well.
## How do I reset the testnet?
Manually trigger the [testnet-management](https://buildkite.com/solana-labs/testnet-management) pipeline
and when prompted select the desired testnet
## How can I scale the tx generation rate?
Increase the TX rate by increasing the number of cores on the client machine which is running
`bench-tps` or run multiple clients. Decrease by lowering cores or using the rayon env
variable `RAYON_NUM_THREADS=<xx>`
## How can I test a change on the testnet?
Currently, a merged PR is the only way to test a change on the testnet. But you
can run your own testnet using the scripts in the `net/` directory.
## Adjusting the number of clients or validators on the testnet
Edit `ci/testnet-manager.sh`
This will dump all the threads stack traces into gdb.txt
Benchmarking
---
First install the nightly build of rustc. `cargo bench` requires use of the
unstable features only available in the nightly build.
```bash
$ rustup install nightly
@ -315,28 +144,24 @@ $ rustup install nightly
Run the benchmarks:
```bash
$ cargo +nightly bench
```
Release Process
---
The release process for this project is described [here](RELEASE.md).
Code coverage
---
To generate code coverage statistics:
```bash
$ scripts/coverage.sh
$ open target/cov/lcov-local/index.html
```
Why coverage? While most see coverage as a code quality metric, we see it primarily as a developer
productivity metric. When a developer makes a change to the codebase, presumably it's a *solution* to
@ -349,3 +174,5 @@ problem is solved by this code?" On the other hand, if a test does fail and you
better way to solve the same problem, a Pull Request with your solution would most certainly be
welcome! Likewise, if rewriting a test can better communicate what code it's protecting, please
send us that patch!

RELEASE.md Normal file

@ -0,0 +1,105 @@
# Solana Release process
## Branches and Tags
```
========================= master branch (edge channel) =======================>
         \                     \                       \
          \___v0.7.0 tag        \                       \
           \                     \            v0.9.0 tag__\
            \           v0.8.0 tag__\                      \
             v0.7.1 tag__\           \             v0.9 branch (beta channel)
              \___v0.7.2 tag          \___v0.8.1 tag
                          \                      \
                           \                      \
                            v0.7 branch            v0.8 branch (stable channel)
```
### master branch
All new development occurs on the `master` branch.
Bug fixes that affect a `vX.Y` branch are first made on `master`. This gives
a fix some soak time on `master` before it is applied to one or more
stabilization branches.
Merging to `master` first also helps ensure that fixes applied to one release
are present for future releases. (Sometimes the joy of landing a critical
release blocker in a branch causes you to forget to propagate back to
`master`!)
Once the bug fix lands on `master` it is cherry-picked into the `vX.Y` branch
and potentially the `vX.Y-1` branch. The exception to this rule is when a bug
fix for `vX.Y` doesn't apply to `master` or `vX.Y-1`.
Immediately after a new stabilization branch is forged, the `Cargo.toml` minor
version (*Y*) in the `master` branch is incremented by the release engineer.
Incrementing the major version of the `master` branch is outside the scope of
this document.
### v*X.Y* stabilization branches
These are stabilization branches for a given milestone. They are created off
the `master` branch as late as possible prior to the milestone release.
### v*X.Y.Z* release tag
The release tags are created as desired by the owner of the given stabilization
branch, and cause that *X.Y.Z* release to be shipped to https://crates.io
Immediately after a new v*X.Y.Z* branch tag has been created, the `Cargo.toml`
patch version number (*Z*) of the stabilization branch is incremented by the
release engineer.
## Channels
Channels are used by end-users (humans and bots) to consume the branches
described in the previous section, so they may automatically update to the most
recent version matching their desired stability.
There are three release channels that map to branches as follows:
* edge - tracks the `master` branch, least stable.
* beta - tracks the largest (and latest) `vX.Y` stabilization branch, more stable.
* stable - tracks the second largest `vX.Y` stabilization branch, most stable.
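As a rough sketch (hypothetical names; the real logic lives in `ci/channel-info.sh`, not shown here), the channel-to-branch mapping above amounts to: edge always tracks `master`, beta tracks the highest-versioned `vX.Y` branch, and stable tracks the second highest.

```rust
// Hypothetical sketch (not the actual ci/channel-info.sh logic) of the
// channel-to-branch mapping described above: edge tracks master, beta tracks
// the largest vX.Y stabilization branch, stable tracks the second largest.
fn channel_branches(mut stabilization: Vec<(u32, u32)>) -> (String, Option<String>, Option<String>) {
    // Sort the vX.Y branches in descending (major, minor) order.
    stabilization.sort_unstable_by(|a, b| b.cmp(a));
    let edge = "master".to_string();
    let beta = stabilization.get(0).map(|(x, y)| format!("v{}.{}", x, y));
    let stable = stabilization.get(1).map(|(x, y)| format!("v{}.{}", x, y));
    (edge, beta, stable)
}

fn main() {
    // With v0.7, v0.8 and v0.9 branches cut, beta is v0.9 and stable is v0.8.
    let (edge, beta, stable) = channel_branches(vec![(0, 7), (0, 9), (0, 8)]);
    println!("edge={} beta={:?} stable={:?}", edge, beta, stable);
}
```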
## Release Steps
### Changing channels
When cutting a new channel branch these pre-steps are required:
1. Pick your branch point for release on master.
1. Create the branch. The name should be "v" + the first 2 "version" fields
from Cargo.toml. For example, a Cargo.toml with version = "0.9.0" implies
the next branch name is "v0.9".
1. Push the new branch to the solana repository
1. Update Cargo.toml on master to the next semantic version (e.g. 0.9.0 -> 0.10.0)
by running `./scripts/increment-cargo-version.sh`, then rebuild with a
`cargo build --all` to cause a refresh of `Cargo.lock`.
1. Push your Cargo.toml change and the autogenerated Cargo.lock changes to the
master branch
At this point, ci/channel-info.sh should show your freshly cut release branch as
"BETA_CHANNEL" and the previous release branch as "STABLE_CHANNEL".
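The naming and version-bump rules in the pre-steps above can be sketched as follows (an illustrative sketch only; the actual bump is performed by `./scripts/increment-cargo-version.sh`):

```rust
// Illustrative sketch of the naming rule above; the real work is done by
// ./scripts/increment-cargo-version.sh. The branch name is "v" plus the first
// two version fields, and master then moves to the next minor version.
fn branch_name(version: &str) -> Option<String> {
    let mut parts = version.split('.');
    let (major, minor) = (parts.next()?, parts.next()?);
    Some(format!("v{}.{}", major, minor))
}

fn next_minor(version: &str) -> Option<String> {
    let mut parts = version.split('.');
    let major: u32 = parts.next()?.parse().ok()?;
    let minor: u32 = parts.next()?.parse().ok()?;
    Some(format!("{}.{}.0", major, minor + 1))
}

fn main() {
    // A Cargo.toml with version = "0.9.0" implies the branch name v0.9,
    // and master's Cargo.toml is then bumped to 0.10.0.
    assert_eq!(branch_name("0.9.0").as_deref(), Some("v0.9"));
    assert_eq!(next_minor("0.9.0").as_deref(), Some("0.10.0"));
}
```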
### Updating channels (i.e. "making a release")
We use [github's Releases UI](https://github.com/solana-labs/solana/releases) for tagging a release.
1. Go [there ;)](https://github.com/solana-labs/solana/releases).
1. Click "Draft new release". The release tag must exactly match the `version`
field in `/Cargo.toml` prefixed by `v` (i.e., `<branchname>.X`).
1. If this is the first release on the branch (e.g. v0.8.0), paste in [this
template](https://raw.githubusercontent.com/solana-labs/solana/master/.github/RELEASE_TEMPLATE.md)
and fill it in.
1. Test the release by generating a tag using semver's rules. First try at a
release should be `<branchname>.X-rc.0`.
1. Verify release automation:
1. [Crates.io](https://crates.io/crates/solana) should have an updated Solana version.
1. ...
1. After testnet deployment, verify that testnets are running correct software.
http://metrics.solana.com should show testnet running on a hash from your
newly created branch.
1. Once the release has been made, update Cargo.toml on release to the next
semantic version (e.g. 0.9.0 -> 0.9.1) by running
`./scripts/increment-cargo-version.sh patch`, then rebuild with a `cargo
build --all` to cause a refresh of `Cargo.lock`.
1. Push your Cargo.toml change and the autogenerated Cargo.lock changes to the
release branch
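The tag convention in the steps above (`<branchname>.X`, with `-rc.N` suffixes for release candidates) can be checked mechanically; the sketch below is hypothetical and not part of any repository tooling:

```rust
// Hypothetical checker for the tag convention above: a release tag is
// "<branchname>.X", optionally suffixed with "-rc.N" for a release candidate
// (e.g. "v0.9.0-rc.0" on the v0.9 branch).
fn tag_matches_branch(tag: &str, branch: &str) -> bool {
    let rest = match tag.strip_prefix(branch) {
        Some(r) => r,
        None => return false,
    };
    // The branch name must be followed by ".X".
    let rest = match rest.strip_prefix('.') {
        Some(r) => r,
        None => return false,
    };
    // Split off an optional "-rc.N" suffix.
    let (patch, rc) = match rest.split_once("-rc.") {
        Some((p, n)) => (p, Some(n)),
        None => (rest, None),
    };
    let numeric = |s: &str| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit());
    numeric(patch) && rc.map_or(true, numeric)
}

fn main() {
    assert!(tag_matches_branch("v0.9.0", "v0.9"));
    assert!(tag_matches_branch("v0.9.0-rc.0", "v0.9"));
    assert!(!tag_matches_branch("v0.10.0", "v0.9"));
}
```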


@ -1 +0,0 @@
theme: jekyll-theme-slate

bench-streamer/Cargo.toml Normal file

@ -0,0 +1,17 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-streamer"
version = "0.12.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
clap = "2.32.0"
solana = { path = "../core", version = "0.12.0" }
solana-logger = { path = "../logger", version = "0.12.0" }
solana-netutil = { path = "../netutil", version = "0.12.0" }
[features]
cuda = ["solana/cuda"]


@ -1,9 +1,9 @@
use clap::{App, Arg};
use solana::packet::{Packet, SharedPackets, BLOB_SIZE, PACKET_DATA_SIZE};
use solana::result::Result;
use solana::streamer::{receiver, PacketReceiver};
use std::cmp::max;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::mpsc::channel;
use std::sync::Arc;
@ -12,9 +12,9 @@ use std::thread::{spawn, JoinHandle};
use std::time::Duration;
use std::time::SystemTime;
fn producer(addr: &SocketAddr, exit: Arc<AtomicBool>) -> JoinHandle<()> {
let send = UdpSocket::bind("0.0.0.0:0").unwrap();
let msgs = SharedPackets::default();
let msgs_ = msgs.clone();
msgs.write().unwrap().packets.resize(10, Packet::default());
for w in &mut msgs.write().unwrap().packets {
@ -36,12 +36,7 @@ fn producer(addr: &SocketAddr, recycler: &PacketRecycler, exit: Arc<AtomicBool>)
})
}
fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketReceiver) -> JoinHandle<()> {
spawn(move || loop {
if exit.load(Ordering::Relaxed) {
return;
@ -49,28 +44,55 @@ fn sink(
let timer = Duration::new(1, 0);
if let Ok(msgs) = r.recv_timeout(timer) {
rvs.fetch_add(msgs.read().unwrap().packets.len(), Ordering::Relaxed);
}
})
}
fn main() -> Result<()> {
let mut num_sockets = 1usize;
let matches = App::new("solana-bench-streamer")
.arg(
Arg::with_name("num-recv-sockets")
.long("num-recv-sockets")
.value_name("NUM")
.takes_value(true)
.help("Use NUM receive sockets"),
)
.get_matches();
if let Some(n) = matches.value_of("num-recv-sockets") {
num_sockets = max(num_sockets, n.to_string().parse().expect("integer"));
}
let mut port = 0;
let mut addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), 0);
let exit = Arc::new(AtomicBool::new(false));
let mut read_channels = Vec::new();
let mut read_threads = Vec::new();
for _ in 0..num_sockets {
let read = solana_netutil::bind_to(port, false).unwrap();
read.set_read_timeout(Some(Duration::new(1, 0))).unwrap();
addr = read.local_addr().unwrap();
port = addr.port();
let (s_reader, r_reader) = channel();
read_channels.push(r_reader);
read_threads.push(receiver(Arc::new(read), &exit, s_reader, "bench-streamer"));
}
let t_producer1 = producer(&addr, exit.clone());
let t_producer2 = producer(&addr, exit.clone());
let t_producer3 = producer(&addr, exit.clone());
let rvs = Arc::new(AtomicUsize::new(0));
let sink_threads: Vec<_> = read_channels
.into_iter()
.map(|r_reader| sink(exit.clone(), rvs.clone(), r_reader))
.collect();
let start = SystemTime::now();
let start_val = rvs.load(Ordering::Relaxed);
sleep(Duration::new(5, 0));
@ -81,10 +103,14 @@ fn main() -> Result<()> {
let fcount = (end_val - start_val) as f64;
println!("performance: {:?}", fcount / ftime);
exit.store(true, Ordering::Relaxed);
for t_reader in read_threads {
t_reader.join()?;
}
t_producer1.join()?;
t_producer2.join()?;
t_producer3.join()?;
for t_sink in sink_threads {
t_sink.join()?;
}
Ok(())
}

bench-tps/Cargo.toml Normal file

@ -0,0 +1,21 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-tps"
version = "0.12.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
clap = "2.32.0"
rayon = "1.0.3"
serde_json = "1.0.39"
solana = { path = "../core", version = "0.12.0" }
solana-drone = { path = "../drone", version = "0.12.0" }
solana-logger = { path = "../logger", version = "0.12.0" }
solana-metrics = { path = "../metrics", version = "0.12.0" }
solana-sdk = { path = "../sdk", version = "0.12.0" }
[features]
cuda = ["solana/cuda"]

bench-tps/src/bench.rs Normal file

@ -0,0 +1,540 @@
use solana_metrics;
use rayon::prelude::*;
use solana::client::mk_client;
use solana::contact_info::ContactInfo;
use solana::thin_client::ThinClient;
use solana_drone::drone::request_airdrop_transaction;
use solana_metrics::influxdb;
use solana_sdk::hash::Hash;
use solana_sdk::pubkey::Pubkey;
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_transaction::SystemTransaction;
use solana_sdk::timing::timestamp;
use solana_sdk::timing::{duration_as_ms, duration_as_s};
use solana_sdk::transaction::Transaction;
use std::cmp;
use std::collections::VecDeque;
use std::net::SocketAddr;
use std::process::exit;
use std::sync::atomic::{AtomicBool, AtomicIsize, AtomicUsize, Ordering};
use std::sync::{Arc, RwLock};
use std::thread::sleep;
use std::time::Duration;
use std::time::Instant;
pub struct NodeStats {
/// Maximum TPS reported by this node
pub tps: f64,
/// Total transactions reported by this node
pub tx: u64,
}
pub const MAX_SPENDS_PER_TX: usize = 4;
pub type SharedTransactions = Arc<RwLock<VecDeque<Vec<(Transaction, u64)>>>>;
pub fn metrics_submit_lamport_balance(lamport_balance: u64) {
println!("Token balance: {}", lamport_balance);
solana_metrics::submit(
influxdb::Point::new("bench-tps")
.add_tag("op", influxdb::Value::String("lamport_balance".to_string()))
.add_field("balance", influxdb::Value::Integer(lamport_balance as i64))
.to_owned(),
);
}
pub fn sample_tx_count(
exit_signal: &Arc<AtomicBool>,
maxes: &Arc<RwLock<Vec<(SocketAddr, NodeStats)>>>,
first_tx_count: u64,
v: &ContactInfo,
sample_period: u64,
) {
let mut client = mk_client(&v);
let mut now = Instant::now();
let mut initial_tx_count = client.transaction_count();
let mut max_tps = 0.0;
let mut total;
let log_prefix = format!("{:21}:", v.tpu.to_string());
loop {
let tx_count = client.transaction_count();
assert!(
tx_count >= initial_tx_count,
"expected tx_count({}) >= initial_tx_count({})",
tx_count,
initial_tx_count
);
let duration = now.elapsed();
now = Instant::now();
let sample = tx_count - initial_tx_count;
initial_tx_count = tx_count;
let ns = duration.as_secs() * 1_000_000_000 + u64::from(duration.subsec_nanos());
let tps = (sample * 1_000_000_000) as f64 / ns as f64;
if tps > max_tps {
max_tps = tps;
}
if tx_count > first_tx_count {
total = tx_count - first_tx_count;
} else {
total = 0;
}
println!(
"{} {:9.2} TPS, Transactions: {:6}, Total transactions: {}",
log_prefix, tps, sample, total
);
sleep(Duration::new(sample_period, 0));
if exit_signal.load(Ordering::Relaxed) {
println!("{} Exiting validator thread", log_prefix);
let stats = NodeStats {
tps: max_tps,
tx: total,
};
maxes.write().unwrap().push((v.tpu, stats));
break;
}
}
}
/// Send loopback payment of 0 lamports and confirm the network processed it
pub fn send_barrier_transaction(
barrier_client: &mut ThinClient,
blockhash: &mut Hash,
source_keypair: &Keypair,
dest_id: &Pubkey,
) {
let transfer_start = Instant::now();
let mut poll_count = 0;
loop {
if poll_count > 0 && poll_count % 8 == 0 {
println!(
"polling for barrier transaction confirmation, attempt {}",
poll_count
);
}
*blockhash = barrier_client.get_recent_blockhash();
let signature = barrier_client
.transfer(0, &source_keypair, dest_id, blockhash)
.expect("Unable to send barrier transaction");
let confirmation = barrier_client.poll_for_signature(&signature);
let duration_ms = duration_as_ms(&transfer_start.elapsed());
if confirmation.is_ok() {
println!("barrier transaction confirmed in {} ms", duration_ms);
solana_metrics::submit(
influxdb::Point::new("bench-tps")
.add_tag(
"op",
influxdb::Value::String("send_barrier_transaction".to_string()),
)
.add_field("poll_count", influxdb::Value::Integer(poll_count))
.add_field("duration", influxdb::Value::Integer(duration_ms as i64))
.to_owned(),
);
// Sanity check that the client balance is still 1
let balance = barrier_client
.poll_balance_with_timeout(
&source_keypair.pubkey(),
&Duration::from_millis(100),
&Duration::from_secs(10),
)
.expect("Failed to get balance");
if balance != 1 {
panic!("Expected an account balance of 1 (balance: {})", balance);
}
break;
}
// Timeout after 3 minutes. When running a CPU-only leader+validator+drone+bench-tps on a dev
// machine, some batches of transactions can take upwards of 1 minute...
if duration_ms > 1000 * 60 * 3 {
println!("Error: Couldn't confirm barrier transaction!");
exit(1);
}
let new_blockhash = barrier_client.get_recent_blockhash();
if new_blockhash == *blockhash {
if poll_count > 0 && poll_count % 8 == 0 {
println!("blockhash is not advancing, still at {:?}", *blockhash);
}
} else {
*blockhash = new_blockhash;
}
poll_count += 1;
}
}
pub fn generate_txs(
shared_txs: &SharedTransactions,
source: &[Keypair],
dest: &[Keypair],
threads: usize,
reclaim: bool,
contact_info: &ContactInfo,
) {
let mut client = mk_client(contact_info);
let blockhash = client.get_recent_blockhash();
let tx_count = source.len();
println!("Signing transactions... {} (reclaim={})", tx_count, reclaim);
let signing_start = Instant::now();
let pairs: Vec<_> = if !reclaim {
source.iter().zip(dest.iter()).collect()
} else {
dest.iter().zip(source.iter()).collect()
};
let transactions: Vec<_> = pairs
.par_iter()
.map(|(id, keypair)| {
(
SystemTransaction::new_account(id, &keypair.pubkey(), 1, blockhash, 0),
timestamp(),
)
})
.collect();
let duration = signing_start.elapsed();
let ns = duration.as_secs() * 1_000_000_000 + u64::from(duration.subsec_nanos());
let bsps = (tx_count) as f64 / ns as f64;
let nsps = ns as f64 / (tx_count) as f64;
println!(
"Done. {:.2} thousand signatures per second, {:.2} us per signature, {} ms total time, {}",
bsps * 1_000_000_f64,
nsps / 1_000_f64,
duration_as_ms(&duration),
blockhash,
);
solana_metrics::submit(
influxdb::Point::new("bench-tps")
.add_tag("op", influxdb::Value::String("generate_txs".to_string()))
.add_field(
"duration",
influxdb::Value::Integer(duration_as_ms(&duration) as i64),
)
.to_owned(),
);
let sz = transactions.len() / threads;
let chunks: Vec<_> = transactions.chunks(sz).collect();
{
let mut shared_txs_wl = shared_txs.write().unwrap();
for chunk in chunks {
shared_txs_wl.push_back(chunk.to_vec());
}
}
}
pub fn do_tx_transfers(
exit_signal: &Arc<AtomicBool>,
shared_txs: &SharedTransactions,
contact_info: &ContactInfo,
shared_tx_thread_count: &Arc<AtomicIsize>,
total_tx_sent_count: &Arc<AtomicUsize>,
thread_batch_sleep_ms: usize,
) {
let client = mk_client(&contact_info);
loop {
if thread_batch_sleep_ms > 0 {
sleep(Duration::from_millis(thread_batch_sleep_ms as u64));
}
let txs;
{
let mut shared_txs_wl = shared_txs.write().unwrap();
txs = shared_txs_wl.pop_front();
}
if let Some(txs0) = txs {
shared_tx_thread_count.fetch_add(1, Ordering::Relaxed);
println!(
"Transferring 1 unit {} times... to {}",
txs0.len(),
contact_info.tpu
);
let tx_len = txs0.len();
let transfer_start = Instant::now();
for tx in txs0 {
let now = timestamp();
if now > tx.1 && now - tx.1 > 1000 * 30 {
continue;
}
client.transfer_signed(&tx.0).unwrap();
}
shared_tx_thread_count.fetch_add(-1, Ordering::Relaxed);
total_tx_sent_count.fetch_add(tx_len, Ordering::Relaxed);
println!(
"Tx send done. {} ms {} tps",
duration_as_ms(&transfer_start.elapsed()),
tx_len as f32 / duration_as_s(&transfer_start.elapsed()),
);
solana_metrics::submit(
influxdb::Point::new("bench-tps")
.add_tag("op", influxdb::Value::String("do_tx_transfers".to_string()))
.add_field(
"duration",
influxdb::Value::Integer(duration_as_ms(&transfer_start.elapsed()) as i64),
)
.add_field("count", influxdb::Value::Integer(tx_len as i64))
.to_owned(),
);
}
if exit_signal.load(Ordering::Relaxed) {
break;
}
}
}
pub fn verify_funding_transfer(client: &mut ThinClient, tx: &Transaction, amount: u64) -> bool {
for a in &tx.account_keys[1..] {
if client.get_balance(a).unwrap_or(0) >= amount {
return true;
}
}
false
}
/// fund the dest keys by spending all of the source keys into MAX_SPENDS_PER_TX
/// on every iteration. This allows us to replay the transfers because the source is either empty
/// or full
pub fn fund_keys(client: &mut ThinClient, source: &Keypair, dests: &[Keypair], lamports: u64) {
let total = lamports * dests.len() as u64;
let mut funded: Vec<(&Keypair, u64)> = vec![(source, total)];
let mut notfunded: Vec<&Keypair> = dests.iter().collect();
println!("funding keys {}", dests.len());
while !notfunded.is_empty() {
let mut new_funded: Vec<(&Keypair, u64)> = vec![];
let mut to_fund = vec![];
println!("creating from... {}", funded.len());
for f in &mut funded {
let max_units = cmp::min(notfunded.len(), MAX_SPENDS_PER_TX);
if max_units == 0 {
break;
}
let start = notfunded.len() - max_units;
let per_unit = f.1 / (max_units as u64);
let moves: Vec<_> = notfunded[start..]
.iter()
.map(|k| (k.pubkey(), per_unit))
.collect();
notfunded[start..]
.iter()
.for_each(|k| new_funded.push((k, per_unit)));
notfunded.truncate(start);
if !moves.is_empty() {
to_fund.push((f.0, moves));
}
}
// try to transfer a "few" at a time with recent blockhash
// assume 4MB network buffers, and 512 byte packets
const FUND_CHUNK_LEN: usize = 4 * 1024 * 1024 / 512;
to_fund.chunks(FUND_CHUNK_LEN).for_each(|chunk| {
let mut tries = 0;
// this set of transactions just initializes us for bookkeeping
#[allow(clippy::clone_double_ref)] // sigh
let mut to_fund_txs: Vec<_> = chunk
.par_iter()
.map(|(k, m)| {
(
k.clone(),
SystemTransaction::new_move_many(k, &m, Hash::default(), 0),
)
})
.collect();
let amount = chunk[0].1[0].1;
while !to_fund_txs.is_empty() {
let receivers = to_fund_txs
.iter()
.fold(0, |len, (_, tx)| len + tx.instructions.len());
println!(
"{} {} to {} in {} txs",
if tries == 0 {
"transferring"
} else {
" retrying"
},
amount,
receivers,
to_fund_txs.len(),
);
let blockhash = client.get_recent_blockhash();
// re-sign retained to_fund_txes with updated blockhash
to_fund_txs.par_iter_mut().for_each(|(k, tx)| {
tx.sign(&[*k], blockhash);
});
to_fund_txs.iter().for_each(|(_, tx)| {
client.transfer_signed(&tx).expect("transfer");
});
// retry anything that seems to have dropped through cracks
// again since these txs are all or nothing, they're fine to
// retry
to_fund_txs.retain(|(_, tx)| !verify_funding_transfer(client, &tx, amount));
tries += 1;
}
println!("transferred");
});
println!("funded: {} left: {}", new_funded.len(), notfunded.len());
funded = new_funded;
}
}
pub fn airdrop_lamports(
client: &mut ThinClient,
drone_addr: &SocketAddr,
id: &Keypair,
tx_count: u64,
) {
let starting_balance = client.poll_get_balance(&id.pubkey()).unwrap_or(0);
metrics_submit_lamport_balance(starting_balance);
println!("starting balance {}", starting_balance);
if starting_balance < tx_count {
let airdrop_amount = tx_count - starting_balance;
println!(
"Airdropping {:?} lamports from {} for {}",
airdrop_amount,
drone_addr,
id.pubkey(),
);
let blockhash = client.get_recent_blockhash();
match request_airdrop_transaction(&drone_addr, &id.pubkey(), airdrop_amount, blockhash) {
Ok(transaction) => {
let signature = client.transfer_signed(&transaction).unwrap();
client.poll_for_signature(&signature).unwrap();
}
Err(err) => {
panic!(
"Error requesting airdrop: {:?} to addr: {:?} amount: {}",
err, drone_addr, airdrop_amount
);
}
};
let current_balance = client.poll_get_balance(&id.pubkey()).unwrap_or_else(|e| {
println!("airdrop error {}", e);
starting_balance
});
println!("current balance {}...", current_balance);
metrics_submit_lamport_balance(current_balance);
if current_balance - starting_balance != airdrop_amount {
println!(
"Airdrop failed! {} {} {}",
id.pubkey(),
current_balance,
starting_balance
);
exit(1);
}
}
}
pub fn compute_and_report_stats(
maxes: &Arc<RwLock<Vec<(SocketAddr, NodeStats)>>>,
sample_period: u64,
tx_send_elapsed: &Duration,
total_tx_send_count: usize,
) {
// Compute/report stats
let mut max_of_maxes = 0.0;
let mut max_tx_count = 0;
let mut nodes_with_zero_tps = 0;
let mut total_maxes = 0.0;
println!(" Node address | Max TPS | Total Transactions");
println!("---------------------+---------------+--------------------");
for (sock, stats) in maxes.read().unwrap().iter() {
let maybe_flag = match stats.tx {
0 => "!!!!!",
_ => "",
};
println!(
"{:20} | {:13.2} | {} {}",
(*sock).to_string(),
stats.tps,
stats.tx,
maybe_flag
);
if stats.tps == 0.0 {
nodes_with_zero_tps += 1;
}
total_maxes += stats.tps;
if stats.tps > max_of_maxes {
max_of_maxes = stats.tps;
}
if stats.tx > max_tx_count {
max_tx_count = stats.tx;
}
}
if total_maxes > 0.0 {
let num_nodes_with_tps = maxes.read().unwrap().len() - nodes_with_zero_tps;
let average_max = total_maxes / num_nodes_with_tps as f64;
println!(
"\nAverage max TPS: {:.2}, {} nodes had 0 TPS",
average_max, nodes_with_zero_tps
);
}
println!(
"\nHighest TPS: {:.2} sampling period {}s max transactions: {} clients: {} drop rate: {:.2}",
max_of_maxes,
sample_period,
max_tx_count,
maxes.read().unwrap().len(),
(total_tx_send_count as u64 - max_tx_count) as f64 / total_tx_send_count as f64,
);
println!(
"\tAverage TPS: {}",
max_tx_count as f32 / duration_as_s(tx_send_elapsed)
);
}
// First transfer 3/4 of the lamports to the dest accounts
// then ping-pong 1/4 of the lamports back to the other account
// this leaves 1/4 lamport buffer in each account
pub fn should_switch_directions(num_lamports_per_account: u64, i: u64) -> bool {
i % (num_lamports_per_account / 4) == 0 && (i >= (3 * num_lamports_per_account) / 4)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_switch_directions() {
assert_eq!(should_switch_directions(20, 0), false);
assert_eq!(should_switch_directions(20, 1), false);
assert_eq!(should_switch_directions(20, 14), false);
assert_eq!(should_switch_directions(20, 15), true);
assert_eq!(should_switch_directions(20, 16), false);
assert_eq!(should_switch_directions(20, 19), false);
assert_eq!(should_switch_directions(20, 20), true);
assert_eq!(should_switch_directions(20, 21), false);
assert_eq!(should_switch_directions(20, 99), false);
assert_eq!(should_switch_directions(20, 100), true);
assert_eq!(should_switch_directions(20, 101), false);
}
}

bench-tps/src/cli.rs Normal file

@ -0,0 +1,183 @@
use std::net::SocketAddr;
use std::process::exit;
use std::time::Duration;
use clap::{crate_version, App, Arg, ArgMatches};
use solana_drone::drone::DRONE_PORT;
use solana_sdk::signature::{read_keypair, Keypair, KeypairUtil};
/// Holds the configuration for a single run of the benchmark
pub struct Config {
pub network_addr: SocketAddr,
pub drone_addr: SocketAddr,
pub id: Keypair,
pub threads: usize,
pub num_nodes: usize,
pub duration: Duration,
pub tx_count: usize,
pub thread_batch_sleep_ms: usize,
pub sustained: bool,
pub reject_extra_nodes: bool,
pub converge_only: bool,
}
impl Default for Config {
fn default() -> Config {
Config {
network_addr: SocketAddr::from(([127, 0, 0, 1], 8001)),
drone_addr: SocketAddr::from(([127, 0, 0, 1], DRONE_PORT)),
id: Keypair::new(),
threads: 4,
num_nodes: 1,
duration: Duration::new(std::u64::MAX, 0),
tx_count: 500_000,
thread_batch_sleep_ms: 0,
sustained: false,
reject_extra_nodes: false,
converge_only: false,
}
}
}
/// Defines and builds the CLI args for a run of the benchmark
pub fn build_args<'a, 'b>() -> App<'a, 'b> {
App::new("solana-bench-tps")
.version(crate_version!())
.arg(
Arg::with_name("network")
.short("n")
.long("network")
.value_name("HOST:PORT")
.takes_value(true)
.help("Rendezvous with the network at this gossip entry point; defaults to 127.0.0.1:8001"),
)
.arg(
Arg::with_name("drone")
.short("d")
.long("drone")
.value_name("HOST:PORT")
.takes_value(true)
.help("Location of the drone; defaults to network:DRONE_PORT"),
)
.arg(
Arg::with_name("identity")
.short("i")
.long("identity")
.value_name("PATH")
.takes_value(true)
.help("File containing a client identity (keypair)"),
)
.arg(
Arg::with_name("num-nodes")
.short("N")
.long("num-nodes")
.value_name("NUM")
.takes_value(true)
.help("Wait for NUM nodes to converge"),
)
.arg(
Arg::with_name("reject-extra-nodes")
.long("reject-extra-nodes")
.help("Require exactly `num-nodes` on convergence. Appropriate only for internal networks"),
)
.arg(
Arg::with_name("threads")
.short("t")
.long("threads")
.value_name("NUM")
.takes_value(true)
.help("Number of threads"),
)
.arg(
Arg::with_name("duration")
.long("duration")
.value_name("SECS")
.takes_value(true)
.help("Seconds to run benchmark, then exit; default is forever"),
)
.arg(
Arg::with_name("converge-only")
.long("converge-only")
.help("Exit immediately after converging"),
)
.arg(
Arg::with_name("sustained")
.long("sustained")
.help("Use sustained performance mode vs. peak mode. This overlaps the tx generation with transfers."),
)
.arg(
Arg::with_name("tx_count")
.long("tx_count")
.value_name("NUM")
.takes_value(true)
.help("Number of transactions to send per batch")
)
.arg(
Arg::with_name("thread-batch-sleep-ms")
.short("z")
.long("thread-batch-sleep-ms")
.value_name("NUM")
.takes_value(true)
.help("Per-thread-per-iteration sleep in ms"),
)
}
/// Parses a clap `ArgMatches` structure into a `Config`
/// # Arguments
/// * `matches` - command line arguments parsed by clap
/// # Panics
/// Panics if there is trouble parsing any of the arguments
pub fn extract_args<'a>(matches: &ArgMatches<'a>) -> Config {
let mut args = Config::default();
if let Some(addr) = matches.value_of("network") {
args.network_addr = addr.parse().unwrap_or_else(|e| {
eprintln!("failed to parse network: {}", e);
exit(1)
});
}
if let Some(addr) = matches.value_of("drone") {
args.drone_addr = addr.parse().unwrap_or_else(|e| {
eprintln!("failed to parse drone address: {}", e);
exit(1)
});
}
if matches.is_present("identity") {
args.id = read_keypair(matches.value_of("identity").unwrap())
.expect("can't read client identity");
}
if let Some(t) = matches.value_of("threads") {
args.threads = t.to_string().parse().expect("can't parse threads");
}
if let Some(n) = matches.value_of("num-nodes") {
args.num_nodes = n.to_string().parse().expect("can't parse num-nodes");
}
if let Some(duration) = matches.value_of("duration") {
args.duration = Duration::new(
duration.to_string().parse().expect("can't parse duration"),
0,
);
}
if let Some(s) = matches.value_of("tx_count") {
args.tx_count = s.to_string().parse().expect("can't parse tx_count");
}
if let Some(t) = matches.value_of("thread-batch-sleep-ms") {
args.thread_batch_sleep_ms = t
.to_string()
.parse()
.expect("can't parse thread-batch-sleep-ms");
}
args.sustained = matches.is_present("sustained");
args.converge_only = matches.is_present("converge-only");
args.reject_extra_nodes = matches.is_present("reject-extra-nodes");
args
}

bench-tps/src/main.rs Normal file

@ -0,0 +1,249 @@
mod bench;
mod cli;
use crate::bench::*;
use solana::client::mk_client;
use solana::gen_keys::GenKeys;
use solana::gossip_service::discover;
use solana_metrics;
use solana_sdk::signature::{Keypair, KeypairUtil};
use std::collections::VecDeque;
use std::process::exit;
use std::sync::atomic::{AtomicBool, AtomicIsize, AtomicUsize, Ordering};
use std::sync::{Arc, RwLock};
use std::thread::sleep;
use std::thread::Builder;
use std::time::Duration;
use std::time::Instant;
fn main() {
solana_logger::setup();
solana_metrics::set_panic_hook("bench-tps");
let matches = cli::build_args().get_matches();
let cfg = cli::extract_args(&matches);
let cli::Config {
network_addr: network,
drone_addr,
id,
threads,
thread_batch_sleep_ms,
num_nodes,
duration,
tx_count,
sustained,
reject_extra_nodes,
converge_only,
} = cfg;
let nodes = discover(&network, num_nodes).unwrap_or_else(|err| {
eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err);
exit(1);
});
if nodes.len() < num_nodes {
eprintln!(
"Error: Insufficient nodes discovered. Expecting {} or more",
num_nodes
);
exit(1);
}
if reject_extra_nodes && nodes.len() > num_nodes {
eprintln!(
"Error: Extra nodes discovered. Expecting exactly {}",
num_nodes
);
exit(1);
}
if converge_only {
return;
}
let cluster_entrypoint = nodes[0].clone(); // Pick the first node, why not?
let mut client = mk_client(&cluster_entrypoint);
let mut barrier_client = mk_client(&cluster_entrypoint);
let mut seed = [0u8; 32];
seed.copy_from_slice(&id.public_key_bytes()[..32]);
let mut rnd = GenKeys::new(seed);
println!("Creating {} keypairs...", tx_count * 2);
let mut total_keys = 0;
let mut target = tx_count * 2;
while target > 0 {
total_keys += target;
target /= MAX_SPENDS_PER_TX;
}
let gen_keypairs = rnd.gen_n_keypairs(total_keys as u64);
let barrier_source_keypair = Keypair::new();
let barrier_dest_id = Keypair::new().pubkey();
println!("Get lamports...");
let num_lamports_per_account = 20;
// Sample the last generated keypair; if it already has lamports, resume
// the previous run to avoid lamport loss
let keypair0_balance = client
.poll_get_balance(&gen_keypairs.last().unwrap().pubkey())
.unwrap_or(0);
if num_lamports_per_account > keypair0_balance {
let extra = num_lamports_per_account - keypair0_balance;
let total = extra * (gen_keypairs.len() as u64);
airdrop_lamports(&mut client, &drone_addr, &id, total);
println!("adding more lamports {}", extra);
fund_keys(&mut client, &id, &gen_keypairs, extra);
}
let start = gen_keypairs.len() - (tx_count * 2) as usize;
let keypairs = &gen_keypairs[start..];
airdrop_lamports(&mut barrier_client, &drone_addr, &barrier_source_keypair, 1);
println!("Get last ID...");
let mut blockhash = client.get_recent_blockhash();
println!("Got last ID {:?}", blockhash);
let first_tx_count = client.transaction_count();
println!("Initial transaction count {}", first_tx_count);
let exit_signal = Arc::new(AtomicBool::new(false));
// Setup a thread per validator to sample every period
// collect the max transaction rate and total tx count seen
let maxes = Arc::new(RwLock::new(Vec::new()));
let sample_period = 1; // in seconds
println!("Sampling TPS every {} second...", sample_period);
let v_threads: Vec<_> = nodes
.into_iter()
.map(|v| {
let exit_signal = exit_signal.clone();
let maxes = maxes.clone();
Builder::new()
.name("solana-client-sample".to_string())
.spawn(move || {
sample_tx_count(&exit_signal, &maxes, first_tx_count, &v, sample_period);
})
.unwrap()
})
.collect();
let shared_txs: SharedTransactions = Arc::new(RwLock::new(VecDeque::new()));
let shared_tx_active_thread_count = Arc::new(AtomicIsize::new(0));
let total_tx_sent_count = Arc::new(AtomicUsize::new(0));
let s_threads: Vec<_> = (0..threads)
.map(|_| {
let exit_signal = exit_signal.clone();
let shared_txs = shared_txs.clone();
let cluster_entrypoint = cluster_entrypoint.clone();
let shared_tx_active_thread_count = shared_tx_active_thread_count.clone();
let total_tx_sent_count = total_tx_sent_count.clone();
Builder::new()
.name("solana-client-sender".to_string())
.spawn(move || {
do_tx_transfers(
&exit_signal,
&shared_txs,
&cluster_entrypoint,
&shared_tx_active_thread_count,
&total_tx_sent_count,
thread_batch_sleep_ms,
);
})
.unwrap()
})
.collect();
// generate and send transactions for the specified duration
let start = Instant::now();
let mut reclaim_lamports_back_to_source_account = false;
let mut i = keypair0_balance;
while start.elapsed() < duration {
let balance = client.poll_get_balance(&id.pubkey()).unwrap_or(0);
metrics_submit_lamport_balance(balance);
// ping-pong between source and destination accounts for each loop iteration
// this seems to be faster than trying to determine the balance of individual
// accounts
let len = tx_count as usize;
generate_txs(
&shared_txs,
&keypairs[..len],
&keypairs[len..],
threads,
reclaim_lamports_back_to_source_account,
&cluster_entrypoint,
);
// In sustained mode overlap the transfers with generation
// this has higher average performance but lower peak performance
// in tested environments.
if !sustained {
while shared_tx_active_thread_count.load(Ordering::Relaxed) > 0 {
sleep(Duration::from_millis(100));
}
}
// It's not feasible (would take too much time) to confirm each of the `tx_count / 2`
// transactions sent by `generate_txs()` so instead send and confirm a single transaction
// to validate the network is still functional.
send_barrier_transaction(
&mut barrier_client,
&mut blockhash,
&barrier_source_keypair,
&barrier_dest_id,
);
i += 1;
if should_switch_directions(num_lamports_per_account, i) {
reclaim_lamports_back_to_source_account = !reclaim_lamports_back_to_source_account;
}
}
// Stop the sampling threads so they will collect the stats
exit_signal.store(true, Ordering::Relaxed);
println!("Waiting for validator threads...");
for t in v_threads {
if let Err(err) = t.join() {
println!(" join() failed with: {:?}", err);
}
}
// join the tx send threads
println!("Waiting for transmit threads...");
for t in s_threads {
if let Err(err) = t.join() {
println!(" join() failed with: {:?}", err);
}
}
let balance = client.poll_get_balance(&id.pubkey()).unwrap_or(0);
metrics_submit_lamport_balance(balance);
compute_and_report_stats(
&maxes,
sample_period,
&start.elapsed(),
total_tx_sent_count.load(Ordering::Relaxed),
);
}
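The keypair-sizing loop in `main` above grows `total_keys` by summing a shrinking series of funding rounds: each round can fan out to at most `MAX_SPENDS_PER_TX` new accounts, so the generator needs the sum of the round sizes, not just `tx_count * 2`. A minimal sketch of that computation, where `max_spends_per_tx` is a stand-in parameter for the `MAX_SPENDS_PER_TX` constant defined in bench.rs (its actual value is not shown in this listing):

```rust
// Sum the shrinking per-round key counts: target, target / fanout,
// target / fanout^2, ... until the round size reaches zero.
fn total_keys_needed(tx_count: usize, max_spends_per_tx: usize) -> usize {
    let mut total_keys = 0;
    let mut target = tx_count * 2;
    while target > 0 {
        total_keys += target;
        target /= max_spends_per_tx;
    }
    total_keys
}

fn main() {
    // With tx_count = 8 and a fan-out of 4: 16 + 4 + 1 = 21 keypairs.
    assert_eq!(total_keys_needed(8, 4), 21);
    println!("{}", total_keys_needed(8, 4));
}
```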
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_switch_directions() {
assert_eq!(should_switch_directions(20, 0), false);
assert_eq!(should_switch_directions(20, 1), false);
assert_eq!(should_switch_directions(20, 14), false);
assert_eq!(should_switch_directions(20, 15), true);
assert_eq!(should_switch_directions(20, 16), false);
assert_eq!(should_switch_directions(20, 19), false);
assert_eq!(should_switch_directions(20, 20), true);
assert_eq!(should_switch_directions(20, 21), false);
assert_eq!(should_switch_directions(20, 99), false);
assert_eq!(should_switch_directions(20, 100), true);
assert_eq!(should_switch_directions(20, 101), false);
}
}
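`should_switch_directions` itself is defined in bench.rs and not shown in this listing; the following is a hypothetical implementation, written only to satisfy every assertion in `test_switch_directions` above: switch once the iteration counter is a multiple of a quarter of `num_lamports_per_account` and has reached three quarters of it.

```rust
// Hypothetical reconstruction (the real function lives in bench.rs):
// switch when i is a multiple of n/4 and i >= 3n/4, so i = 0 never switches.
fn should_switch_directions(num_lamports_per_account: u64, i: u64) -> bool {
    i % (num_lamports_per_account / 4) == 0 && i >= (3 * num_lamports_per_account) / 4
}

fn main() {
    assert!(!should_switch_directions(20, 0));
    assert!(!should_switch_directions(20, 14));
    assert!(should_switch_directions(20, 15));
    assert!(should_switch_directions(20, 20));
    assert!(!should_switch_directions(20, 21));
    assert!(should_switch_directions(20, 100));
    println!("all test vectors satisfied");
}
```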

benches/append_vec.rs (new file)

@@ -0,0 +1,248 @@
#![feature(test)]
extern crate rand;
extern crate test;
use bincode::{deserialize, serialize_into, serialized_size};
use rand::{thread_rng, Rng};
use solana_runtime::append_vec::{
deserialize_account, get_serialized_size, serialize_account, AppendVec,
};
use solana_sdk::account::Account;
use solana_sdk::signature::{Keypair, KeypairUtil};
use std::env;
use std::io::Cursor;
use std::path::PathBuf;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, RwLock};
use std::thread::spawn;
use test::Bencher;
const START_SIZE: u64 = 4 * 1024 * 1024;
const INC_SIZE: u64 = 1 * 1024 * 1024;
macro_rules! align_up {
($addr: expr, $align: expr) => {
($addr + ($align - 1)) & !($align - 1)
};
}
fn get_append_vec_bench_path(path: &str) -> PathBuf {
let out_dir = env::var("OUT_DIR").unwrap_or_else(|_| "target".to_string());
let mut buf = PathBuf::new();
buf.push(&format!("{}/{}", out_dir, path));
buf
}
#[bench]
fn append_vec_atomic_append(bencher: &mut Bencher) {
let path = get_append_vec_bench_path("bench_append");
let mut vec = AppendVec::<AtomicUsize>::new(&path, true, START_SIZE, INC_SIZE);
bencher.iter(|| {
if vec.append(AtomicUsize::new(0)).is_none() {
assert!(vec.grow_file().is_ok());
assert!(vec.append(AtomicUsize::new(0)).is_some());
}
});
std::fs::remove_file(path).unwrap();
}
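The bench above exercises AppendVec's append-or-grow protocol: `append` returns `None` when the backing file is full, and the caller calls `grow_file` and retries. A toy stand-in for that protocol, using a capacity-bounded `Vec` in place of a memory-mapped file (`ToyAppendVec` is an illustrative type, not part of solana_runtime):

```rust
// Toy model of AppendVec's append-or-grow contract.
struct ToyAppendVec {
    buf: Vec<u64>,
    capacity: usize,
}

impl ToyAppendVec {
    fn new(capacity: usize) -> Self {
        ToyAppendVec { buf: Vec::new(), capacity }
    }

    // Returns the index of the appended element, or None when full.
    fn append(&mut self, v: u64) -> Option<usize> {
        if self.buf.len() >= self.capacity {
            return None;
        }
        self.buf.push(v);
        Some(self.buf.len() - 1)
    }

    // Stand-in for grow_file(): double the usable capacity.
    fn grow_file(&mut self) {
        self.capacity *= 2;
    }
}

fn main() {
    let mut vec = ToyAppendVec::new(2);
    assert_eq!(vec.append(7), Some(0));
    assert_eq!(vec.append(8), Some(1));
    // Full: grow and retry, mirroring the bench loop above.
    assert_eq!(vec.append(9), None);
    vec.grow_file();
    assert_eq!(vec.append(9), Some(2));
    println!("append-or-grow retry succeeded");
}
```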
#[bench]
fn append_vec_atomic_random_access(bencher: &mut Bencher) {
let path = get_append_vec_bench_path("bench_ra");
let mut vec = AppendVec::<AtomicUsize>::new(&path, true, START_SIZE, INC_SIZE);
let size = 1_000_000;
for _ in 0..size {
if vec.append(AtomicUsize::new(0)).is_none() {
assert!(vec.grow_file().is_ok());
assert!(vec.append(AtomicUsize::new(0)).is_some());
}
}
bencher.iter(|| {
let index = thread_rng().gen_range(0, size as u64);
vec.get(index * std::mem::size_of::<AtomicUsize>() as u64);
});
std::fs::remove_file(path).unwrap();
}
#[bench]
fn append_vec_atomic_random_change(bencher: &mut Bencher) {
let path = get_append_vec_bench_path("bench_rax");
let mut vec = AppendVec::<AtomicUsize>::new(&path, true, START_SIZE, INC_SIZE);
let size = 1_000_000;
for k in 0..size {
if vec.append(AtomicUsize::new(k)).is_none() {
assert!(vec.grow_file().is_ok());
assert!(vec.append(AtomicUsize::new(k)).is_some());
}
}
bencher.iter(|| {
let index = thread_rng().gen_range(0, size as u64);
let atomic1 = vec.get(index * std::mem::size_of::<AtomicUsize>() as u64);
let current1 = atomic1.load(Ordering::Relaxed);
assert_eq!(current1, index as usize);
let next = current1 + 1;
let mut index = vec.append(AtomicUsize::new(next));
if index.is_none() {
assert!(vec.grow_file().is_ok());
index = vec.append(AtomicUsize::new(next));
}
let atomic2 = vec.get(index.unwrap());
let current2 = atomic2.load(Ordering::Relaxed);
assert_eq!(current2, next);
});
std::fs::remove_file(path).unwrap();
}
#[bench]
fn append_vec_atomic_random_read(bencher: &mut Bencher) {
let path = get_append_vec_bench_path("bench_read");
let mut vec = AppendVec::<AtomicUsize>::new(&path, true, START_SIZE, INC_SIZE);
let size = 1_000_000;
for _ in 0..size {
if vec.append(AtomicUsize::new(0)).is_none() {
assert!(vec.grow_file().is_ok());
assert!(vec.append(AtomicUsize::new(0)).is_some());
}
}
bencher.iter(|| {
let index = thread_rng().gen_range(0, size);
let atomic1 = vec.get((index * std::mem::size_of::<AtomicUsize>()) as u64);
let current1 = atomic1.load(Ordering::Relaxed);
assert_eq!(current1, 0);
});
std::fs::remove_file(path).unwrap();
}
#[bench]
fn append_vec_concurrent_lock_append(bencher: &mut Bencher) {
let path = get_append_vec_bench_path("bench_lock_append");
let vec = Arc::new(RwLock::new(AppendVec::<AtomicUsize>::new(
&path, true, START_SIZE, INC_SIZE,
)));
let vec1 = vec.clone();
let size = 1_000_000;
let count = Arc::new(AtomicUsize::new(0));
let count1 = count.clone();
spawn(move || loop {
let mut len = count.load(Ordering::Relaxed);
{
let rlock = vec1.read().unwrap();
loop {
if rlock.append(AtomicUsize::new(0)).is_none() {
break;
}
len = count.fetch_add(1, Ordering::Relaxed);
}
if len >= size {
break;
}
}
{
let mut wlock = vec1.write().unwrap();
if len >= size {
break;
}
assert!(wlock.grow_file().is_ok());
}
});
bencher.iter(|| {
let _rlock = vec.read().unwrap();
let len = count1.load(Ordering::Relaxed);
assert!(len < size * 2);
});
std::fs::remove_file(path).unwrap();
}
#[bench]
fn append_vec_concurrent_get_append(bencher: &mut Bencher) {
let path = get_append_vec_bench_path("bench_get_append");
let vec = Arc::new(RwLock::new(AppendVec::<AtomicUsize>::new(
&path, true, START_SIZE, INC_SIZE,
)));
let vec1 = vec.clone();
let size = 1_000_000;
let count = Arc::new(AtomicUsize::new(0));
let count1 = count.clone();
spawn(move || loop {
let mut len = count.load(Ordering::Relaxed);
{
let rlock = vec1.read().unwrap();
loop {
if rlock.append(AtomicUsize::new(0)).is_none() {
break;
}
len = count.fetch_add(1, Ordering::Relaxed);
}
if len >= size {
break;
}
}
{
let mut wlock = vec1.write().unwrap();
if len >= size {
break;
}
assert!(wlock.grow_file().is_ok());
}
});
bencher.iter(|| {
let rlock = vec.read().unwrap();
let len = count1.load(Ordering::Relaxed);
if len > 0 {
let index = thread_rng().gen_range(0, len);
rlock.get((index * std::mem::size_of::<AtomicUsize>()) as u64);
}
});
std::fs::remove_file(path).unwrap();
}
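The two concurrent benches above share one locking discipline: appends happen under the *read* lock (the storage is preallocated and internally atomic, so concurrent appends are safe), while growing the backing file takes the *write* lock so no reader can observe a partially grown mapping. A minimal sketch of that discipline under stated assumptions — `Slots` is a toy stand-in using a fixed slot array and an atomic cursor instead of a memory-mapped file:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, RwLock};
use std::thread;

// Preallocated storage with an atomic append cursor: appends need only &self.
struct Slots {
    cells: Vec<AtomicUsize>,
    cursor: AtomicUsize,
}

impl Slots {
    // Claim the next slot; None means the storage is full and must be grown
    // (which, in the benches above, happens under the write lock).
    fn append(&self, v: usize) -> Option<usize> {
        let i = self.cursor.fetch_add(1, Ordering::Relaxed);
        if i >= self.cells.len() {
            return None;
        }
        self.cells[i].store(v, Ordering::Relaxed);
        Some(i)
    }
}

fn main() {
    let shared = Arc::new(RwLock::new(Slots {
        cells: (0..1024).map(|_| AtomicUsize::new(0)).collect(),
        cursor: AtomicUsize::new(0),
    }));
    let handles: Vec<_> = (0..4usize)
        .map(|t| {
            let shared = shared.clone();
            thread::spawn(move || {
                // Appends take only the read lock, so all four threads run
                // concurrently.
                let slots = shared.read().unwrap();
                for _ in 0..100 {
                    slots.append(t).unwrap();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(shared.read().unwrap().cursor.load(Ordering::Relaxed), 400);
    println!("appended 400 slots concurrently");
}
```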
#[bench]
fn bench_account_serialize(bencher: &mut Bencher) {
let num: usize = 1000;
let account = Account::new(2, 100, &Keypair::new().pubkey());
let len = get_serialized_size(&account);
let ser_len = align_up!(len + std::mem::size_of::<u64>(), std::mem::size_of::<u64>());
let mut memory = vec![0; num * ser_len];
bencher.iter(|| {
for i in 0..num {
let start = i * ser_len;
serialize_account(&mut memory[start..start + ser_len], &account, len);
}
});
// make sure compiler doesn't delete the code.
let index = thread_rng().gen_range(0, num);
if memory[index] != 0 {
println!("memory: {}", memory[index]);
}
let start = index * ser_len;
let new_account = deserialize_account(&memory[start..start + ser_len], 0, num * len).unwrap();
assert_eq!(new_account, account);
}
#[bench]
fn bench_account_serialize_bincode(bencher: &mut Bencher) {
let num: usize = 1000;
let account = Account::new(2, 100, &Keypair::new().pubkey());
let len = serialized_size(&account).unwrap() as usize;
let mut memory = vec![0u8; num * len];
bencher.iter(|| {
for i in 0..num {
let start = i * len;
let cursor = Cursor::new(&mut memory[start..start + len]);
serialize_into(cursor, &account).unwrap();
}
});
// make sure compiler doesn't delete the code.
let index = thread_rng().gen_range(0, num);
if memory[index] != 0 {
println!("memory: {}", memory[index]);
}
let start = index * len;
let new_account: Account = deserialize(&memory[start..start + len]).unwrap();
assert_eq!(new_account, account);
}

(deleted file)

@@ -1,66 +0,0 @@
#[macro_use]
extern crate criterion;
extern crate bincode;
extern crate rayon;
extern crate solana;
use bincode::serialize;
use criterion::{Bencher, Criterion};
use rayon::prelude::*;
use solana::bank::*;
use solana::hash::hash;
use solana::mint::Mint;
use solana::signature::{Keypair, KeypairUtil};
use solana::transaction::Transaction;
fn bench_process_transaction(bencher: &mut Bencher) {
let mint = Mint::new(100_000_000);
let bank = Bank::new(&mint);
// Create transactions between unrelated parties.
let transactions: Vec<_> = (0..4096)
.into_par_iter()
.map(|i| {
// Seed the 'from' account.
let rando0 = Keypair::new();
let tx = Transaction::new(&mint.keypair(), rando0.pubkey(), 10_000, mint.last_id());
assert!(bank.process_transaction(&tx).is_ok());
// Seed the 'to' account and a cell for its signature.
let last_id = hash(&serialize(&i).unwrap()); // Unique hash
bank.register_entry_id(&last_id);
let rando1 = Keypair::new();
let tx = Transaction::new(&rando0, rando1.pubkey(), 1, last_id);
assert!(bank.process_transaction(&tx).is_ok());
// Finally, return the transaction to the benchmark.
tx
})
.collect();
bencher.iter_with_setup(
|| {
// Since benchmarker runs this multiple times, we need to clear the signatures.
bank.clear_signatures();
transactions.clone()
},
|transactions| {
let results = bank.process_transactions(transactions);
assert!(results.iter().all(Result::is_ok));
},
)
}
fn bench(criterion: &mut Criterion) {
criterion.bench_function("bench_process_transaction", |bencher| {
bench_process_transaction(bencher);
});
}
criterion_group!(
name = benches;
config = Criterion::default().sample_size(2);
targets = bench
);
criterion_main!(benches);

(modified file)

@@ -1,229 +1,241 @@
-extern crate bincode;
-#[macro_use]
-extern crate criterion;
-extern crate rayon;
-extern crate solana;
+#![feature(test)]
-use criterion::{Bencher, Criterion};
+extern crate test;
+use rand::{thread_rng, Rng};
 use rayon::prelude::*;
-use solana::bank::Bank;
-use solana::banking_stage::BankingStage;
-use solana::mint::Mint;
-use solana::packet::{to_packets_chunked, PacketRecycler};
-use solana::record_stage::Signal;
-use solana::signature::{Keypair, KeypairUtil};
-use solana::transaction::Transaction;
+use solana::banking_stage::{create_test_recorder, BankingStage};
+use solana::cluster_info::ClusterInfo;
+use solana::cluster_info::Node;
+use solana::packet::to_packets_chunked;
+use solana::poh_recorder::WorkingBankEntries;
+use solana::service::Service;
+use solana_runtime::bank::Bank;
+use solana_sdk::genesis_block::GenesisBlock;
+use solana_sdk::hash::hash;
+use solana_sdk::pubkey::Pubkey;
+use solana_sdk::signature::{KeypairUtil, Signature};
+use solana_sdk::system_transaction::SystemTransaction;
+use solana_sdk::timing::{DEFAULT_TICKS_PER_SLOT, MAX_RECENT_BLOCKHASHES};
 use std::iter;
+use std::sync::atomic::Ordering;
 use std::sync::mpsc::{channel, Receiver};
-use std::sync::Arc;
+use std::sync::{Arc, RwLock};
+use std::time::Duration;
+use test::Bencher;
// use self::test::Bencher;
// use bank::{Bank, MAX_ENTRY_IDS};
// use bincode::serialize;
// use hash::hash;
// use mint::Mint;
// use rayon::prelude::*;
// use signature::{Keypair, KeypairUtil};
// use std::collections::HashSet;
// use std::time::Instant;
// use transaction::Transaction;
//
// fn bench_process_transactions(_bencher: &mut Bencher) {
// let mint = Mint::new(100_000_000);
// let bank = Bank::new(&mint);
// // Create transactions between unrelated parties.
// let txs = 100_000;
// let last_ids: Mutex<HashSet<Hash>> = Mutex::new(HashSet::new());
// let transactions: Vec<_> = (0..txs)
// .into_par_iter()
// .map(|i| {
// // Seed the 'to' account and a cell for its signature.
// let dummy_id = i % (MAX_ENTRY_IDS as i32);
// let last_id = hash(&serialize(&dummy_id).unwrap()); // Semi-unique hash
// {
// let mut last_ids = last_ids.lock().unwrap();
// if !last_ids.contains(&last_id) {
// last_ids.insert(last_id);
// bank.register_entry_id(&last_id);
// }
// }
//
// // Seed the 'from' account.
// let rando0 = Keypair::new();
// let tx = Transaction::new(&mint.keypair(), rando0.pubkey(), 1_000, last_id);
// bank.process_transaction(&tx).unwrap();
//
// let rando1 = Keypair::new();
// let tx = Transaction::new(&rando0, rando1.pubkey(), 2, last_id);
// bank.process_transaction(&tx).unwrap();
//
// // Finally, return a transaction that's unique
// Transaction::new(&rando0, rando1.pubkey(), 1, last_id)
// })
// .collect();
//
// let banking_stage = EventProcessor::new(bank, &mint.last_id(), None);
//
// let now = Instant::now();
// assert!(banking_stage.process_transactions(transactions).is_ok());
// let duration = now.elapsed();
// let sec = duration.as_secs() as f64 + duration.subsec_nanos() as f64 / 1_000_000_000.0;
// let tps = txs as f64 / sec;
//
// // Ensure that all transactions were successfully logged.
// drop(banking_stage.historian_input);
// let entries: Vec<Entry> = banking_stage.output.lock().unwrap().iter().collect();
// assert_eq!(entries.len(), 1);
// assert_eq!(entries[0].transactions.len(), txs as usize);
//
// println!("{} tps", tps);
// }
-fn check_txs(receiver: &Receiver<Signal>, ref_tx_count: usize) {
+fn check_txs(receiver: &Receiver<WorkingBankEntries>, ref_tx_count: usize) {
 let mut total = 0;
 loop {
-let signal = receiver.recv().unwrap();
-if let Signal::Transactions(transactions) = signal {
-total += transactions.len();
-if total >= ref_tx_count {
-break;
+let entries = receiver.recv_timeout(Duration::new(1, 0));
+if let Ok((_, entries)) = entries {
+for (entry, _) in &entries {
+total += entry.transactions.len();
+}
 } else {
 assert!(false);
+break;
 }
+if total >= ref_tx_count {
+break;
+}
 }
+assert_eq!(total, ref_tx_count);
 }
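The draining pattern in `check_txs` — receive batches with a timeout, accumulate a count, and stop once the expected total is reached — can be sketched with only the standard library. Here the `(id, items)` tuple is a stand-in for `WorkingBankEntries`, which this listing does not define:

```rust
use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;

fn main() {
    let (sender, receiver) = channel::<(u64, Vec<u32>)>();
    thread::spawn(move || {
        sender.send((0, vec![1, 2])).unwrap();
        sender.send((1, vec![3])).unwrap();
    });

    let mut total = 0;
    loop {
        // recv_timeout avoids hanging the bench if the producer stalls.
        match receiver.recv_timeout(Duration::new(1, 0)) {
            Ok((_, items)) => total += items.len(),
            Err(_) => break, // timed out or sender dropped
        }
        if total >= 3 {
            break;
        }
    }
    assert_eq!(total, 3);
    println!("drained {} items", total);
}
```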
#[bench]
#[ignore]
fn bench_banking_stage_multi_accounts(bencher: &mut Bencher) {
let tx = 10_000_usize;
let num_threads = BankingStage::num_threads() as usize;
// a multiple of packet chunk 2X duplicates to avoid races
let txes = 192 * 50 * num_threads * 2;
let mint_total = 1_000_000_000_000;
let mint = Mint::new(mint_total);
let num_dst_accounts = 8 * 1024;
let num_src_accounts = 8 * 1024;
let srckeys: Vec<_> = (0..num_src_accounts).map(|_| Keypair::new()).collect();
let dstkeys: Vec<_> = (0..num_dst_accounts)
.map(|_| Keypair::new().pubkey())
.collect();
let transactions: Vec<_> = (0..tx)
.map(|i| {
Transaction::new(
&srckeys[i % num_src_accounts],
dstkeys[i % num_dst_accounts],
i as i64,
mint.last_id(),
)
})
.collect();
let (genesis_block, mint_keypair) = GenesisBlock::new(mint_total);
let (verified_sender, verified_receiver) = channel();
let (signal_sender, signal_receiver) = channel();
let packet_recycler = PacketRecycler::default();
let setup_transactions: Vec<_> = (0..num_src_accounts)
.map(|i| {
Transaction::new(
&mint.keypair(),
srckeys[i].pubkey(),
mint_total / num_src_accounts as i64,
mint.last_id(),
)
let bank = Arc::new(Bank::new(&genesis_block));
let dummy = SystemTransaction::new_move(
&mint_keypair,
&mint_keypair.pubkey(),
1,
genesis_block.hash(),
0,
);
let transactions: Vec<_> = (0..txes)
.into_par_iter()
.map(|_| {
let mut new = dummy.clone();
let from: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
let to: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
new.account_keys[0] = Pubkey::new(&from[0..32]);
new.account_keys[1] = Pubkey::new(&to[0..32]);
new.signatures = vec![Signature::new(&sig[0..64])];
new
})
.collect();
bencher.iter(move || {
let bank = Arc::new(Bank::new(&mint));
let verified_setup: Vec<_> =
to_packets_chunked(&packet_recycler, &setup_transactions.clone(), tx)
.into_iter()
.map(|x| {
let len = (*x).read().unwrap().packets.len();
(x, iter::repeat(1).take(len).collect())
})
.collect();
let verified_setup_len = verified_setup.len();
verified_sender.send(verified_setup).unwrap();
BankingStage::process_packets(&bank, &verified_receiver, &signal_sender, &packet_recycler)
.unwrap();
check_txs(&signal_receiver, num_src_accounts);
let verified: Vec<_> = to_packets_chunked(&packet_recycler, &transactions.clone(), 192)
.into_iter()
.map(|x| {
let len = (*x).read().unwrap().packets.len();
(x, iter::repeat(1).take(len).collect())
})
.collect();
let verified_len = verified.len();
verified_sender.send(verified).unwrap();
BankingStage::process_packets(&bank, &verified_receiver, &signal_sender, &packet_recycler)
.unwrap();
check_txs(&signal_receiver, tx);
// fund all the accounts
transactions.iter().for_each(|tx| {
let fund = SystemTransaction::new_move(
&mint_keypair,
&tx.account_keys[0],
mint_total / txes as u64,
genesis_block.hash(),
0,
);
let x = bank.process_transaction(&fund);
x.unwrap();
});
}
//sanity check, make sure all the transactions can execute sequentially
transactions.iter().for_each(|tx| {
let res = bank.process_transaction(&tx);
assert!(res.is_ok(), "sanity test transactions");
});
bank.clear_signatures();
//sanity check, make sure all the transactions can execute in parallel
let res = bank.process_transactions(&transactions);
for r in res {
assert!(r.is_ok(), "sanity parallel execution");
}
bank.clear_signatures();
let verified: Vec<_> = to_packets_chunked(&transactions.clone(), 192)
.into_iter()
.map(|x| {
let len = x.read().unwrap().packets.len();
(x, iter::repeat(1).take(len).collect())
})
.collect();
let (exit, poh_recorder, poh_service, signal_receiver) = create_test_recorder(&bank);
let cluster_info = ClusterInfo::new_with_invalid_keypair(Node::new_localhost().info);
let cluster_info = Arc::new(RwLock::new(cluster_info));
let _banking_stage = BankingStage::new(&cluster_info, &poh_recorder, verified_receiver);
poh_recorder.lock().unwrap().set_bank(&bank);
fn bench_banking_stage_single_from(bencher: &mut Bencher) {
let tx = 10_000_usize;
let mint = Mint::new(1_000_000_000_000);
let mut pubkeys = Vec::new();
let num_keys = 8;
for _ in 0..num_keys {
pubkeys.push(Keypair::new().pubkey());
let mut id = genesis_block.hash();
for _ in 0..(MAX_RECENT_BLOCKHASHES * DEFAULT_TICKS_PER_SLOT as usize) {
id = hash(&id.as_ref());
bank.register_tick(&id);
}
let transactions: Vec<_> = (0..tx)
.into_par_iter()
.map(|i| {
Transaction::new(
&mint.keypair(),
pubkeys[i % num_keys],
i as i64,
mint.last_id(),
)
})
.collect();
let half_len = verified.len() / 2;
let mut start = 0;
bencher.iter(move || {
// make sure the transactions are still valid
bank.register_tick(&genesis_block.hash());
for v in verified[start..start + half_len].chunks(verified.len() / num_threads) {
verified_sender.send(v.to_vec()).unwrap();
}
check_txs(&signal_receiver, txes / 2);
bank.clear_signatures();
start += half_len;
start %= verified.len();
});
exit.store(true, Ordering::Relaxed);
poh_service.join().unwrap();
}
#[bench]
#[ignore]
fn bench_banking_stage_multi_programs(bencher: &mut Bencher) {
let progs = 4;
let num_threads = BankingStage::num_threads() as usize;
// a multiple of packet chunk 2X duplicates to avoid races
let txes = 96 * 100 * num_threads * 2;
let mint_total = 1_000_000_000_000;
let (genesis_block, mint_keypair) = GenesisBlock::new(mint_total);
let (verified_sender, verified_receiver) = channel();
let (signal_sender, signal_receiver) = channel();
let packet_recycler = PacketRecycler::default();
let bank = Arc::new(Bank::new(&genesis_block));
let dummy = SystemTransaction::new_move(
&mint_keypair,
&mint_keypair.pubkey(),
1,
genesis_block.hash(),
0,
);
let transactions: Vec<_> = (0..txes)
.into_par_iter()
.map(|_| {
let mut new = dummy.clone();
let from: Vec<u8> = (0..32).map(|_| thread_rng().gen()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
let to: Vec<u8> = (0..32).map(|_| thread_rng().gen()).collect();
new.account_keys[0] = Pubkey::new(&from[0..32]);
new.account_keys[1] = Pubkey::new(&to[0..32]);
let prog = new.instructions[0].clone();
for i in 1..progs {
//generate programs that spend to random keys
let to: Vec<u8> = (0..32).map(|_| thread_rng().gen()).collect();
let to_key = Pubkey::new(&to[0..32]);
new.account_keys.push(to_key);
assert_eq!(new.account_keys.len(), i + 2);
new.instructions.push(prog.clone());
assert_eq!(new.instructions.len(), i + 1);
new.instructions[i].accounts[1] = 1 + i as u8;
assert_eq!(new.key(i, 1), Some(&to_key));
assert_eq!(
new.account_keys[new.instructions[i].accounts[1] as usize],
to_key
);
}
assert_eq!(new.instructions.len(), progs);
new.signatures = vec![Signature::new(&sig[0..64])];
new
})
.collect();
transactions.iter().for_each(|tx| {
let fund = SystemTransaction::new_move(
&mint_keypair,
&tx.account_keys[0],
mint_total / txes as u64,
genesis_block.hash(),
0,
);
bank.process_transaction(&fund).unwrap();
});
//sanity check, make sure all the transactions can execute sequentially
transactions.iter().for_each(|tx| {
let res = bank.process_transaction(&tx);
assert!(res.is_ok(), "sanity test transactions");
});
bank.clear_signatures();
//sanity check, make sure all the transactions can execute in parallel
let res = bank.process_transactions(&transactions);
for r in res {
assert!(r.is_ok(), "sanity parallel execution");
}
bank.clear_signatures();
let verified: Vec<_> = to_packets_chunked(&transactions.clone(), 96)
.into_iter()
.map(|x| {
let len = x.read().unwrap().packets.len();
(x, iter::repeat(1).take(len).collect())
})
.collect();
let (exit, poh_recorder, poh_service, signal_receiver) = create_test_recorder(&bank);
let cluster_info = ClusterInfo::new_with_invalid_keypair(Node::new_localhost().info);
let cluster_info = Arc::new(RwLock::new(cluster_info));
let _banking_stage = BankingStage::new(&cluster_info, &poh_recorder, verified_receiver);
poh_recorder.lock().unwrap().set_bank(&bank);
let mut id = genesis_block.hash();
for _ in 0..(MAX_RECENT_BLOCKHASHES * DEFAULT_TICKS_PER_SLOT as usize) {
id = hash(&id.as_ref());
bank.register_tick(&id);
}
let half_len = verified.len() / 2;
let mut start = 0;
bencher.iter(move || {
let bank = Arc::new(Bank::new(&mint));
let verified: Vec<_> = to_packets_chunked(&packet_recycler, &transactions.clone(), tx)
.into_iter()
.map(|x| {
let len = (*x).read().unwrap().packets.len();
(x, iter::repeat(1).take(len).collect())
})
.collect();
let verified_len = verified.len();
verified_sender.send(verified).unwrap();
BankingStage::process_packets(&bank, &verified_receiver, &signal_sender, &packet_recycler)
.unwrap();
check_txs(&signal_receiver, tx);
// make sure the transactions are still valid
bank.register_tick(&genesis_block.hash());
for v in verified[start..start + half_len].chunks(verified.len() / num_threads) {
verified_sender.send(v.to_vec()).unwrap();
}
check_txs(&signal_receiver, txes / 2);
bank.clear_signatures();
start += half_len;
start %= verified.len();
});
exit.store(true, Ordering::Relaxed);
poh_service.join().unwrap();
}
fn bench(criterion: &mut Criterion) {
criterion.bench_function("bench_banking_stage_multi_accounts", |bencher| {
bench_banking_stage_multi_accounts(bencher);
});
criterion.bench_function("bench_process_stage_single_from", |bencher| {
bench_banking_stage_single_from(bencher);
});
}
criterion_group!(
name = benches;
config = Criterion::default().sample_size(2);
targets = bench
);
criterion_main!(benches);

benches/blocktree.rs (new file)

@@ -0,0 +1,194 @@
#![feature(test)]
use rand;
extern crate test;
#[macro_use]
extern crate solana;
use rand::seq::SliceRandom;
use rand::{thread_rng, Rng};
use solana::blocktree::{get_tmp_ledger_path, Blocktree};
use solana::entry::{make_large_test_entries, make_tiny_test_entries, EntrySlice};
use solana::packet::{Blob, BLOB_HEADER_SIZE};
use test::Bencher;
// Given some blobs and a ledger at ledger_path, benchmark writing the blobs to the ledger
fn bench_write_blobs(bench: &mut Bencher, blobs: &mut Vec<Blob>, ledger_path: &str) {
let blocktree =
Blocktree::open(&ledger_path).expect("Expected to be able to open database ledger");
let num_blobs = blobs.len();
bench.iter(move || {
for blob in blobs.iter_mut() {
let index = blob.index();
blocktree
.put_data_blob_bytes(
blob.slot(),
index,
&blob.data[..BLOB_HEADER_SIZE + blob.size()],
)
.unwrap();
blob.set_index(index + num_blobs as u64);
}
});
Blocktree::destroy(&ledger_path).expect("Expected successful database destruction");
}
// Insert some blobs into the ledger in preparation for read benchmarks
fn setup_read_bench(
blocktree: &mut Blocktree,
num_small_blobs: u64,
num_large_blobs: u64,
slot: u64,
) {
// Make some big and small entries
let mut entries = make_large_test_entries(num_large_blobs as usize);
entries.extend(make_tiny_test_entries(num_small_blobs as usize));
// Convert the entries to blobs, write the blobs to the ledger
let mut blobs = entries.to_blobs();
for (index, b) in blobs.iter_mut().enumerate() {
b.set_index(index as u64);
b.set_slot(slot);
}
blocktree
.write_blobs(&blobs)
.expect("Expected successful insertion of blobs into ledger");
}
// Write small blobs to the ledger
#[bench]
#[ignore]
fn bench_write_small(bench: &mut Bencher) {
let ledger_path = get_tmp_ledger_path!();
let num_entries = 32 * 1024;
let entries = make_tiny_test_entries(num_entries);
let mut blobs = entries.to_blobs();
for (index, b) in blobs.iter_mut().enumerate() {
b.set_index(index as u64);
}
bench_write_blobs(bench, &mut blobs, &ledger_path);
}
// Write big blobs to the ledger
#[bench]
#[ignore]
fn bench_write_big(bench: &mut Bencher) {
let ledger_path = get_tmp_ledger_path!();
let num_entries = 32 * 1024;
let entries = make_large_test_entries(num_entries);
let mut blobs = entries.to_blobs();
for (index, b) in blobs.iter_mut().enumerate() {
b.set_index(index as u64);
}
bench_write_blobs(bench, &mut blobs, &ledger_path);
}
#[bench]
#[ignore]
fn bench_read_sequential(bench: &mut Bencher) {
let ledger_path = get_tmp_ledger_path!();
let mut blocktree =
Blocktree::open(&ledger_path).expect("Expected to be able to open database ledger");
// Insert some big and small blobs into the ledger
let num_small_blobs = 32 * 1024;
let num_large_blobs = 32 * 1024;
let total_blobs = num_small_blobs + num_large_blobs;
let slot = 0;
setup_read_bench(&mut blocktree, num_small_blobs, num_large_blobs, slot);
let num_reads = total_blobs / 15;
let mut rng = rand::thread_rng();
bench.iter(move || {
// Generate random starting point in the range [0, total_blobs - 1], read num_reads blobs sequentially
let start_index = rng.gen_range(0, num_small_blobs + num_large_blobs);
for i in start_index..start_index + num_reads {
let _ = blocktree.get_data_blob(slot, i as u64 % total_blobs);
}
});
Blocktree::destroy(&ledger_path).expect("Expected successful database destruction");
}
#[bench]
#[ignore]
fn bench_read_random(bench: &mut Bencher) {
let ledger_path = get_tmp_ledger_path!();
let mut blocktree =
Blocktree::open(&ledger_path).expect("Expected to be able to open database ledger");
// Insert some big and small blobs into the ledger
let num_small_blobs = 32 * 1024;
let num_large_blobs = 32 * 1024;
let total_blobs = num_small_blobs + num_large_blobs;
let slot = 0;
setup_read_bench(&mut blocktree, num_small_blobs, num_large_blobs, slot);
let num_reads = total_blobs / 15;
// Generate a num_reads sized random sample of indexes in range [0, total_blobs - 1],
// simulating random reads
let mut rng = rand::thread_rng();
let indexes: Vec<usize> = (0..num_reads)
.map(|_| rng.gen_range(0, total_blobs) as usize)
.collect();
bench.iter(move || {
for i in indexes.iter() {
let _ = blocktree.get_data_blob(slot, *i as u64);
}
});
Blocktree::destroy(&ledger_path).expect("Expected successful database destruction");
}
#[bench]
#[ignore]
fn bench_insert_data_blob_small(bench: &mut Bencher) {
let ledger_path = get_tmp_ledger_path!();
let blocktree =
Blocktree::open(&ledger_path).expect("Expected to be able to open database ledger");
let num_entries = 32 * 1024;
let entries = make_tiny_test_entries(num_entries);
let mut blobs = entries.to_blobs();
blobs.shuffle(&mut thread_rng());
bench.iter(move || {
for blob in blobs.iter_mut() {
let index = blob.index();
blob.set_index(index + num_entries as u64);
}
blocktree.write_blobs(&blobs).unwrap();
});
Blocktree::destroy(&ledger_path).expect("Expected successful database destruction");
}
#[bench]
#[ignore]
fn bench_insert_data_blob_big(bench: &mut Bencher) {
let ledger_path = get_tmp_ledger_path!();
let blocktree =
Blocktree::open(&ledger_path).expect("Expected to be able to open database ledger");
let num_entries = 32 * 1024;
let entries = make_large_test_entries(num_entries);
let mut shared_blobs = entries.to_shared_blobs();
shared_blobs.shuffle(&mut thread_rng());
bench.iter(move || {
for blob in shared_blobs.iter_mut() {
let index = blob.read().unwrap().index();
blocktree.write_shared_blobs(vec![blob.clone()]).unwrap();
blob.write().unwrap().set_index(index + num_entries as u64);
}
});
Blocktree::destroy(&ledger_path).expect("Expected successful database destruction");
}

benches/chacha.rs Normal file

@ -0,0 +1,29 @@
//#![feature(test)]
//
//extern crate solana;
//extern crate test;
//
//use solana::chacha::chacha_cbc_encrypt_files;
//use std::fs::remove_file;
//use std::fs::File;
//use std::io::Write;
//use std::path::Path;
//use test::Bencher;
//
//#[bench]
//fn bench_chacha_encrypt(bench: &mut Bencher) {
// let in_path = Path::new("bench_chacha_encrypt_file_input.txt");
// let out_path = Path::new("bench_chacha_encrypt_file_output.txt.enc");
// {
// let mut in_file = File::create(in_path).unwrap();
// for _ in 0..1024 {
// in_file.write("123456foobar".as_bytes()).unwrap();
// }
// }
// bench.iter(move || {
// chacha_cbc_encrypt_files(in_path, out_path, "thetestkey".to_string()).unwrap();
// });
//
// remove_file(in_path).unwrap();
// remove_file(out_path).unwrap();
//}

benches/gen_keys.rs Normal file

@ -0,0 +1,12 @@
#![feature(test)]
extern crate test;
use solana::gen_keys::GenKeys;
use test::Bencher;
#[bench]
fn bench_gen_keys(b: &mut Bencher) {
let mut rnd = GenKeys::new([0u8; 32]);
b.iter(|| rnd.gen_n_keypairs(1000));
}


@ -1,40 +1,24 @@
#[macro_use]
extern crate criterion;
extern crate solana;
#![feature(test)]
use criterion::{Bencher, Criterion};
use solana::hash::{hash, Hash};
use solana::ledger::{next_entries, reconstruct_entries_from_blobs, Block};
use solana::packet::BlobRecycler;
use solana::signature::{Keypair, KeypairUtil};
use solana::transaction::Transaction;
use std::collections::VecDeque;
extern crate test;
use solana::entry::{next_entries, reconstruct_entries_from_blobs, EntrySlice};
use solana_sdk::hash::{hash, Hash};
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_transaction::SystemTransaction;
use test::Bencher;
#[bench]
fn bench_block_to_blobs_to_block(bencher: &mut Bencher) {
let zero = Hash::default();
let one = hash(&zero.as_ref());
let keypair = Keypair::new();
let tx0 = Transaction::new(&keypair, keypair.pubkey(), 1, one);
let tx0 = SystemTransaction::new_move(&keypair, &keypair.pubkey(), 1, one, 0);
let transactions = vec![tx0; 10];
let entries = next_entries(&zero, 1, transactions);
let blob_recycler = BlobRecycler::default();
bencher.iter(|| {
let mut blob_q = VecDeque::new();
entries.to_blobs(&blob_recycler, &mut blob_q);
assert_eq!(reconstruct_entries_from_blobs(blob_q).unwrap(), entries);
let blobs = entries.to_blobs();
assert_eq!(reconstruct_entries_from_blobs(blobs).unwrap().0, entries);
});
}
fn bench(criterion: &mut Criterion) {
criterion.bench_function("bench_block_to_blobs_to_block", |bencher| {
bench_block_to_blobs_to_block(bencher);
});
}
criterion_group!(
name = benches;
config = Criterion::default().sample_size(2);
targets = bench
);
criterion_main!(benches);


@ -1,24 +0,0 @@
#[macro_use]
extern crate criterion;
extern crate solana;
use criterion::{Bencher, Criterion};
use solana::signature::GenKeys;
fn bench_gen_keys(b: &mut Bencher) {
let mut rnd = GenKeys::new([0u8; 32]);
b.iter(|| rnd.gen_n_keypairs(1000));
}
fn bench(criterion: &mut Criterion) {
criterion.bench_function("bench_gen_keys", |bencher| {
bench_gen_keys(bencher);
});
}
criterion_group!(
name = benches;
config = Criterion::default().sample_size(2);
targets = bench
);
criterion_main!(benches);


@ -1,36 +1,21 @@
#[macro_use]
extern crate criterion;
extern crate bincode;
extern crate rayon;
extern crate solana;
#![feature(test)]
use criterion::{Bencher, Criterion};
use solana::packet::{to_packets, PacketRecycler};
extern crate test;
use solana::packet::to_packets;
use solana::sigverify;
use solana::transaction::test_tx;
use solana::test_tx::test_tx;
use test::Bencher;
#[bench]
fn bench_sigverify(bencher: &mut Bencher) {
let tx = test_tx();
// generate packet vector
let packet_recycler = PacketRecycler::default();
let batches = to_packets(&packet_recycler, &vec![tx; 128]);
let batches = to_packets(&vec![tx; 128]);
// verify packets
bencher.iter(|| {
let _ans = sigverify::ed25519_verify(&batches);
})
}
fn bench(criterion: &mut Criterion) {
criterion.bench_function("bench_sigverify", |bencher| {
bench_sigverify(bencher);
});
}
criterion_group!(
name = benches;
config = Criterion::default().sample_size(2);
targets = bench
);
criterion_main!(benches);

book/README.md Normal file

@ -0,0 +1,26 @@
Building the Solana book
---
Install the book's dependencies, build, and test the book:
```bash
$ ./build.sh
```
Run any Rust tests in the markdown:
```bash
$ make test
```
Render markdown as HTML:
```bash
$ make build
```
Render and view the book:
```bash
$ make open
```


@ -0,0 +1,25 @@
+---------------------------------------------------------------------------------------------------------+
| Neighborhood Above |
| |
| +----------------+ +----------------+ +----------------+ +----------------+ |
| | +------>+ +------>+ +------>+ | |
| | Neighbor 1 | | Neighbor 2 | | Neighbor 3 | | Neighbor 4 | |
| | +<------+ +<------+ +<------+ | |
| +--+-------------+ +--+-------------+ +-----+----------+ +--+-------------+ |
| | | | | |
+---------------------------------------------------------------------------------------------------------+
| | | |
| | | |
| | | |
| | | |
| | | |
+---------------------------------------------------------------------------------------------------------+
| | | Neighborhood Below | | |
| v v v v |
| +--+-------------+ +--+-------------+ +-----+----------+ +--+-------------+ |
| | +------>+ +------>+ +------>+ | |
| | Neighbor 1 | | Neighbor 2 | | Neighbor 3 | | Neighbor 4 | |
| | +<------+ +<------+ +<------+ | |
| +----------------+ +----------------+ +----------------+ +----------------+ |
| |
+---------------------------------------------------------------------------------------------------------+

book/art/data-plane.bob Normal file

@ -0,0 +1,28 @@
+--------------+
| |
+------------+ Leader +------------+
| | | |
| +--------------+ |
v v
+--------+--------+ +--------+--------+
| +--------------------->+ |
+-----------------+ Validator 1 | | Validator 2 +-------------+
| | +<---------------------+ | |
| +------+-+-+------+ +---+-+-+---------+ |
| | | | | | | |
| | | | | | | |
| +---------------------------------------------+ | | |
| | | | | | | |
| | | | | +----------------------+ | |
| | | | | | | |
| | | | +--------------------------------------------+ |
| | | | | | | |
| | | +----------------------+ | | |
| | | | | | | |
v v v v v v v v
+--------------------+ +--------------------+ +--------------------+ +--------------------+
| | | | | | | |
| Neighborhood 1 | | Neighborhood 2 | | Neighborhood 3 | | Neighborhood 4 |
| | | | | | | |
+--------------------+ +--------------------+ +--------------------+ +--------------------+


@ -0,0 +1,13 @@
validator action
+----+ ----------------
| | L1 | E1
| +----+ / \ vote(E1)
| | L2 | E2 x
| +----+ / \ / \ vote(E2)
time | | L3 | E3 x E3' x
| +----+ / \ / \ / \ / \ slash(E3)
| | L4 | x x E4 x x x x x
| +----+ | | | | | | | | vote(E4)
v | L5 | xx xx xx E5 xx xx xx xx
+----+ hang on to E4 and E5 for more...


@ -0,0 +1,9 @@
1
|
2
/|
/ |
| |
| 4
|
5


@ -0,0 +1,11 @@
1
|
3
|\
| \
| |
| |
| |
6 |
|
7

book/art/forks.bob Normal file

@ -0,0 +1,13 @@
1
|\
2 \
/| |
/ | 3
| | |\
| 4 | \
| | |
5 | |
| |
6 |
|
7

book/art/fullnode.bob Normal file

@ -0,0 +1,30 @@
.--------------------------------------.
| Fullnode |
| |
.--------. | .-------------------. |
| |---->| | |
| Client | | | JSON RPC Service | |
| |<----| | |
`----+---` | `-------------------` |
| | ^ |
| | | .----------------. | .------------------.
| | | | Gossip Service |<----------| Validators |
| | | `----------------` | | |
| | | ^ | | |
| | | | | | .------------. |
| | .---+---. .----+---. .-----------. | | | | |
| | | Bank |<-+ Replay | | BlobFetch |<------+ Upstream | |
| | | Forks | | Stage | | Stage | | | | Validators | |
| | `-------` `--------` `--+--------` | | | | |
| | ^ ^ | | | `------------` |
| | | | v | | |
| | | .--+--------. | | |
| | | | Blocktree | | | |
| | | `-----------` | | .------------. |
| | | ^ | | | | |
| | | | | | | Downstream | |
| | .--+--. .-------+---. | | | Validators | |
`-------->| TPU +---->| Broadcast +--------------->| | |
| `-----` | Service | | | `------------` |
| `-----------` | `------------------`
`--------------------------------------`

book/art/runtime.bob Normal file

@ -0,0 +1,10 @@
.------------. .-----------. .---------------. .--------------. .-----------------------.
| PoH verify +---> | sigverify +--->| lock accounts +--->| validate fee +--->| allocate new accounts +--->
| TVU | `-----------` `---------------` `--------------` `-----------------------`
`------------`
.---------------. .---------. .------------. .-----------------. .-----------------.
--->| load accounts +--->| execute +--->| PoH record +--->| commit accounts +-->| unlock accounts |
`---------------` `---------` | TPU | `-----------------` `-----------------`
`------------`

book/art/sdk-tools.bob Normal file

@ -0,0 +1,20 @@
.----------------------------------------.
| Solana Runtime |
| |
| .------------. .------------. |
| | | | | |
.-------->| Verifier +-->| Accounts | |
| | | | | | |
.----------. | | `------------` `------------` |
| +--------` | ^ |
| Client | | LoadAccounts | |
| +--------. | .----------------` |
`----------` | | | |
| | .------+-----. .-------------. |
| | | | | | |
`-------->| Loader +-->| Interpreter | |
| | | | | |
| `------------` `-------------` |
| |
`----------------------------------------`

book/art/tpu.bob Normal file

@ -0,0 +1,18 @@
.-------------------------------------------.
| TPU .-------------. |
| | PoH Service | |
| `--------+----` |
| ^ | |
| | v |
| .-------. .-----------. .-+-------. | .------------.
.---------. | | Fetch | | SigVerify | | Banking | | | Broadcast |
| Clients |--->| Stage |->| Stage |->| Stage |------>| Service |
`---------` | | | | | | | | | |
| `-------` `-----------` `----+----` | `------------`
| | |
`---------------------------------|---------`
|
v
.------.
| Bank |
`------`

book/art/tvu.bob Normal file

@ -0,0 +1,22 @@
.--------.
| Leader |
`--------`
^
|
.------------------------------------|--------------------.
| TVU | |
| | |
| .-------. .------------. .----+---. .---------. |
.------------. | | Blob | | Retransmit | | Replay | | Storage | |
| Upstream +----->| Fetch +-->| Stage +-->| Stage +-->| Stage | |
| Validators | | | Stage | | | | | | | |
`------------` | `-------` `----+-------` `----+---` `---------` |
| ^ | | |
| | | | |
`--------|----------|----------------|--------------------`
| | |
| V v
.+-----------. .------.
| Gossip | | Bank |
| Service | `------`
`------------`

book/book.toml Normal file

@ -0,0 +1,10 @@
[book]
title = "Solana: Blockchain Rebuilt for Scale"
authors = ["The Solana Team"]
[build]
build-dir = "html"
create-missing = false
[output.html]
theme = "theme"

book/build.sh Executable file

@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
cargo_install_unless() {
declare crate=$1
shift
"$@" > /dev/null 2>&1 || \
cargo install "$crate"
}
export PATH=$CARGO_HOME/bin:$PATH
cargo_install_unless mdbook mdbook --help
cargo_install_unless svgbob_cli svgbob --help
make -j"$(nproc)"

book/makefile Normal file

@ -0,0 +1,33 @@
BOB_SRCS=$(wildcard art/*.bob)
MD_SRCS=$(wildcard src/*.md)
SVG_IMGS=$(BOB_SRCS:art/%.bob=src/img/%.svg)
all: html/index.html
test: src/tests.ok
open: all
mdbook build --open
watch: $(SVG_IMGS)
mdbook watch
src/img/%.svg: art/%.bob
@mkdir -p $(@D)
svgbob < $< > $@
src/%.md: %.md
@mkdir -p $(@D)
@cp $< $@
src/tests.ok: $(SVG_IMGS) $(MD_SRCS)
mdbook test
touch $@
html/index.html: src/tests.ok
mdbook build
clean:
rm -f $(SVG_IMGS) src/tests.ok
rm -rf html

book/src/SUMMARY.md Normal file

@ -0,0 +1,59 @@
# Solana Architecture
- [Introduction](introduction.md)
- [Terminology](terminology.md)
- [Getting Started](getting-started.md)
- [Example: Web Wallet](webwallet.md)
- [Programming Model](programs.md)
- [Example: Tic-Tac-Toe](tictactoe.md)
- [Drones](drones.md)
- [A Solana Cluster](cluster.md)
- [Synchronization](synchronization.md)
- [Leader Rotation](leader-rotation.md)
- [Fork Generation](fork-generation.md)
- [Managing Forks](managing-forks.md)
- [Data Plane Fanout](data-plane-fanout.md)
- [Ledger Replication](ledger-replication.md)
- [Secure Vote Signing](vote-signing.md)
- [Staking Delegation and Rewards](stake-delegation-and-rewards.md)
- [Anatomy of a Fullnode](fullnode.md)
- [TPU](tpu.md)
- [TVU](tvu.md)
- [Blocktree](blocktree.md)
- [Gossip Service](gossip.md)
- [The Runtime](runtime.md)
- [API Reference](api-reference.md)
- [Blockstreamer](blockstreamer.md)
- [JSON RPC API](jsonrpc-api.md)
- [JavaScript API](javascript-api.md)
- [solana-wallet CLI](wallet.md)
- [Proposed Architectural Changes](proposals.md)
- [Ledger Replication](ledger-replication-to-implement.md)
- [Secure Vote Signing](vote-signing-to-implement.md)
- [Staking Rewards](staking-rewards.md)
- [Fork Selection](fork-selection.md)
- [Reliable Vote Transmission](reliable-vote-transmission.md)
- [Persistent Account Storage](persistent-account-storage.md)
- [Leader to Leader Transition](leader-leader-transition.md)
- [Cluster Economics](ed_overview.md)
- [Validation-client Economics](ed_validation_client_economics.md)
- [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md)
- [State-validation Transaction Fees](ed_vce_state_validation_transaction_fees.md)
- [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md)
- [Validation Stake Delegation](ed_vce_validation_stake_delegation.md)
- [Replication-client Economics](ed_replication_client_economics.md)
- [Storage-replication Rewards](ed_rce_storage_replication_rewards.md)
- [Replication-client Reward Auto-delegation](ed_rce_replication_client_reward_auto_delegation.md)
- [Economic Sustainability](ed_economic_sustainability.md)
- [Attack Vectors](ed_attack_vectors.md)
- [References](ed_references.md)
- [Leader-to-Validator Transition](leader-validator-transition.md)
- [Cluster Test Framework](cluster-test-framework.md)
- [Testing Programs](testing-programs.md)


@ -0,0 +1,4 @@
# API Reference
The following sections contain API reference material you may find useful
when developing applications that utilize a Solana cluster.


@ -0,0 +1,84 @@
# Block Confirmation
A validator votes on a PoH hash for two purposes. First, the vote indicates it
believes the ledger is valid up until that point in time. Second, since many
valid forks may exist at a given height, the vote also indicates exclusive
support for the fork. This document describes only the former. The latter is
described in [fork selection](fork-selection.md).
## Current Design
To start voting, a validator first registers an account to which it will send
its votes. It then sends votes to that account. The vote contains the tick
height of the block it is voting on. The account stores the 32 highest heights.
### Problems
* Only the validator knows how to find its own votes directly.
Other components, such as the one that calculates confirmation time, need to
be baked into the fullnode code. The fullnode code queries the bank for all
accounts owned by the vote program.
* Voting ballots do not contain a PoH hash. The validator is only voting that
it has observed an arbitrary block at some height.
* Voting ballots do not contain a hash of the bank state. Without that hash,
there is no evidence that the validator executed the transactions and
verified there were no double spends.
## Proposed Design
### No Cross-block State Initially
At the moment a block is produced, the leader shall add a NewBlock transaction
to the ledger with a number of tokens that represents the validation reward.
It is effectively an incremental multisig transaction that sends tokens from
the mining pool to the validators. The account should allocate just enough
space to collect the votes required to achieve a supermajority. When a
validator observes the NewBlock transaction, it has the option to submit a vote
that includes a hash of its ledger state (the bank state). Once the account has
sufficient votes, the vote program should disperse the tokens to the
validators, which causes the account to be deleted.
#### Logging Confirmation Time
The bank will need to be aware of the vote program. After each transaction, it
should check if it is a vote transaction and if so, check the state of that
account. If the transaction caused the supermajority to be achieved, it should
log the time since the NewBlock transaction was submitted.
### Finality and Payouts
Locktower is the proposed [fork selection](fork-selection.md) algorithm. It
proposes that payment to miners be postponed until the *stack* of validator
votes reaches a certain depth, at which point rollback is not economically
feasible. The vote program may therefore implement locktower. Vote instructions
would need to reference a global locktower account so that it can track
cross-block state.
## Challenges
### On-chain voting
Using programs and accounts to implement this is a bit tedious. The hardest
part is figuring out how much space to allocate in NewBlock. The two variables
are the *active set* and the stakes of those validators. If we calculate the
active set at the time NewBlock is submitted, the number of validators to
allocate space for is known upfront. If, however, we allow new validators to
vote on old blocks, then we'd need a way to allocate space dynamically.
Similar in spirit, if the leader caches stakes at the time of NewBlock, the
vote program doesn't need to interact with the bank when it processes votes. If
we don't, then we have the option to allow stakes to float until a vote is
submitted. A validator could conceivably reference its own staking account, but
that'd be the current account value instead of the account value of the most
recently finalized bank state. The bank currently doesn't offer a means to
reference accounts from particular points in time.
### Voting Implications on Previous Blocks
Does a vote on one height imply a vote on all blocks of lower heights of
that fork? If it does, we'll need a way to lookup the accounts of all
blocks that haven't yet reached supermajority. If not, the validator could
send votes to all blocks explicitly to get the block rewards.

book/src/blockstreamer.md Normal file

@ -0,0 +1,37 @@
# Blockstreamer
Solana supports a node type called a *blockstreamer*. This fullnode variation
is intended for applications that need to observe the data plane without
participating in transaction validation or ledger replication.
A blockstreamer runs without a vote signer, and can optionally stream ledger
entries out to a Unix domain socket as they are processed. The JSON-RPC service
still functions as on any other node.
To run a blockstreamer, include the `--no-signer` argument and, optionally, a
`--blockstream` socket location:
```bash
$ ./multinode-demo/fullnode-x.sh --no-signer --blockstream <SOCKET>
```
The stream will output a series of JSON objects:
- An Entry event JSON object is sent when each ledger entry is processed, with
the following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "entry"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `entry`, the entry, as JSON object
- A Block event JSON object is sent when a block is complete, with the
following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "block"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `l`, the slot leader id, as base-58 encoded string
* `id`, the block id, as base-58 encoded string
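For illustration, an Entry event on the stream might look like the following (all field values, and the inner `entry` fields, are hypothetical placeholders, not output captured from a real node):

```json
{
  "dt": "2019-03-11T22:01:02Z",
  "t": "entry",
  "s": 42,
  "h": 1337,
  "entry": { "num_hashes": 0, "hash": "<base-58 hash>", "transactions": [] }
}
```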

book/src/blocktree.md Normal file

@ -0,0 +1,102 @@
# Blocktree
After a block reaches finality, all blocks from that one on down
to the genesis block form a linear chain with the familiar name
blockchain. Until that point, however, the validator must maintain all
potentially valid chains, called *forks*. The process by which forks
naturally form as a result of leader rotation is described in
[fork generation](fork-generation.md). The *blocktree* data structure
described here is how a validator copes with those forks until blocks
are finalized.
The blocktree allows a validator to record every blob it observes
on the network, in any order, as long as the blob is signed by the expected
leader for a given slot.
Blobs are moved to a fork-able key space: the tuple of `leader slot` + `blob
index` (within the slot). This permits the skip-list structure of the Solana
protocol to be stored in its entirety, without a priori choosing which fork to
follow, which Entries to persist or when to persist them.
Repair requests for recent blobs are served out of RAM or recent files and out
of deeper storage for less recent blobs, as implemented by the store backing
Blocktree.
### Functionalities of Blocktree
1. Persistence: the Blocktree lives at the front of the node's verification
pipeline, right behind network receive and signature verification. If the
blob received is consistent with the leader schedule (i.e. was signed by the
leader for the indicated slot), it is immediately stored.
2. Repair: repair is the same as window repair above, but able to serve any
blob that's been received. Blocktree stores blobs with signatures,
preserving the chain of origination.
3. Forks: Blocktree supports random access of blobs, so can support a
validator's need to rollback and replay from a Bank checkpoint.
4. Restart: with proper pruning/culling, the Blocktree can be replayed by
ordered enumeration of entries from slot 0. The logic of the replay stage
(i.e. dealing with forks) will have to be used for the most recent entries in
the Blocktree.
### Blocktree Design
1. Entries in the Blocktree are stored as key-value pairs, where the key is the concatenated
slot index and blob index for an entry, and the value is the entry data. Note blob indexes are zero-based for each slot (i.e. they're slot-relative).
2. The Blocktree maintains metadata for each slot, in the `SlotMeta` struct containing:
* `slot_index` - The index of this slot
* `num_blocks` - The number of blocks in the slot (used for chaining to a previous slot)
* `consumed` - The highest blob index `n`, such that for all `m < n`, there exists a blob in this slot with blob index equal to `m` (i.e. the highest consecutive blob index).
* `received` - The highest received blob index for the slot
* `next_slots` - A list of future slots this slot could chain to. Used when rebuilding
the ledger to find possible fork points.
* `last_index` - The index of the blob that is flagged as the last blob for this slot. This flag on a blob will be set by the leader for a slot when they are transmitting the last blob for a slot.
* `is_rooted` - True iff every block from 0...slot forms a full sequence without any holes. We can derive is_rooted for each slot with the following rules. Let slot(n) be the slot with index `n`, and slot(n).is_full() is true if the slot with index `n` has all the ticks expected for that slot. Let is_rooted(n) be the statement that "the slot(n).is_rooted is true". Then:
is_rooted(0)
is_rooted(n+1) iff (is_rooted(n) and slot(n).is_full())
3. Chaining - When a blob for a new slot `x` arrives, we check the number of blocks (`num_blocks`) for that new slot (this information is encoded in the blob). We then know that this new slot chains to slot `x - num_blocks`.
4. Subscriptions - The Blocktree records a set of slots that have been "subscribed" to. This means entries that chain to these slots will be sent on the Blocktree channel for consumption by the ReplayStage. See the `Blocktree APIs` for details.
5. Update notifications - The Blocktree notifies listeners when slot(n).is_rooted is flipped from false to true for any `n`.
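The `is_rooted` recursion above can be unrolled into a small sketch: slot `n` is rooted exactly when every earlier slot is full. The `Slot` type and field names here are illustrative stand-ins, not the real implementation.

```rust
// Illustrative sketch only: a slot is "full" when it holds all of its
// expected ticks, and is_rooted(n) holds iff every slot before n is full.
struct Slot {
    ticks_received: u64,
    ticks_expected: u64,
}

impl Slot {
    fn is_full(&self) -> bool {
        self.ticks_received == self.ticks_expected
    }
}

// is_rooted(0) holds; is_rooted(n+1) iff is_rooted(n) and slot(n).is_full().
fn is_rooted(slots: &[Slot], n: usize) -> bool {
    slots[..n].iter().all(Slot::is_full)
}
```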
### Blocktree APIs
The Blocktree offers a subscription based API that ReplayStage uses to ask for entries it's interested in. The entries will be sent on a channel exposed by the Blocktree. These subscription API's are as follows:
1. `fn get_slots_since(slot_indexes: &[u64]) -> Vec<SlotMeta>`: Returns new slots connecting to any element of the list `slot_indexes`.
2. `fn get_slot_entries(slot_index: u64, entry_start_index: usize, max_entries: Option<u64>) -> Vec<Entry>`: Returns the entry vector for the slot starting with `entry_start_index`, capping the result at `max` if `max_entries == Some(max)`, otherwise, no upper limit on the length of the return vector is imposed.
Note: Cumulatively, this means that the replay stage will now have to know when a slot is finished, and subscribe to the next slot it's interested in to get the next set of entries. Previously, the burden of chaining slots fell on the Blocktree.
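As a toy model of the access pattern `get_slot_entries` describes (a consumer asks for a slot's entries from a start index, with an optional cap), the following uses simplified stand-ins for `Blocktree` and `Entry`, not the real types:

```rust
// Simplified sketch of the slot-entry lookup described above. Storage is a
// plain in-memory map from slot index to that slot's entries.
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Entry(u64);

struct Blocktree {
    // slot index -> entries in that slot
    slots: HashMap<u64, Vec<Entry>>,
}

impl Blocktree {
    fn get_slot_entries(&self, slot: u64, start: usize, max_entries: Option<u64>) -> Vec<Entry> {
        // Entries from `start` onward, or nothing if the slot is unknown/short.
        let entries = match self.slots.get(&slot) {
            Some(e) if start < e.len() => &e[start..],
            _ => return vec![],
        };
        match max_entries {
            Some(max) => entries.iter().take(max as usize).cloned().collect(),
            None => entries.to_vec(),
        }
    }
}
```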
### Interfacing with Bank
The bank exposes to replay stage:
1. `prev_hash`: which PoH chain it's working on as indicated by the hash of the last
entry it processed
2. `tick_height`: the ticks in the PoH chain currently being verified by this
bank
3. `votes`: a stack of records that contain:
1. `prev_hashes`: what anything after this vote must chain to in PoH
2. `tick_height`: the tick height at which this vote was cast
3. `lockout period`: how long a chain must be observed to be in the ledger to
be able to be chained below this vote
Replay stage uses Blocktree APIs to find the longest chain of entries it can
hang off a previous vote. If that chain of entries does not hang off the
latest vote, the replay stage rolls back the bank to that vote and replays the
chain from there.
### Pruning Blocktree
Once Blocktree entries are old enough, representing all the possible forks
becomes less useful, perhaps even problematic for replay upon restart. Once a
validator's votes have reached max lockout, however, any Blocktree contents
that are not on the PoH chain for that vote can be pruned, expunged.
Replicator nodes will be responsible for storing really old ledger contents,
and validators need only persist their bank periodically.


@ -0,0 +1,122 @@
# Cluster Test Framework
This document proposes the Cluster Test Framework (CTF). CTF is a test harness
that allows tests to execute against a local, in-process cluster or a
deployed cluster.
## Motivation
The goal of CTF is to provide a framework for writing tests independent of where
and how the cluster is deployed. Regressions can be captured in these tests and
the tests can be run against deployed clusters to verify the deployment. The
focus of these tests should be on cluster stability, consensus, fault tolerance,
API stability.
Tests should verify a single bug or scenario, and should be written with the
least amount of internal plumbing exposed to the test.
## Design Overview
Tests are provided an entry point, which is a `contact_info::ContactInfo`
structure, and a keypair that has already been funded.
Each node in the cluster is configured with a `fullnode::FullnodeConfig` at boot
time. At boot time this configuration specifies any extra cluster configuration
required for the test. The cluster should boot with the configuration when it
is run in-process or in a data center.
Once booted, the test will discover the cluster through a gossip entry point and
configure any runtime behaviors via fullnode RPC.
## Test Interface
Each CTF test starts with an opaque entry point and a funded keypair. The test
should not depend on how the cluster is deployed, and should be able to exercise
all the cluster functionality through the publicly available interfaces.
```rust,ignore
use crate::contact_info::ContactInfo;
use solana_sdk::signature::{Keypair, KeypairUtil};
pub fn test_this_behavior(
entry_point_info: &ContactInfo,
funding_keypair: &Keypair,
num_nodes: usize,
)
```
## Cluster Discovery
At test start, the cluster has already been established and is fully connected.
The test can discover most of the available nodes over a few seconds.
```rust,ignore
use crate::gossip_service::discover;
// Discover the cluster over a few seconds.
let cluster_nodes = discover(&entry_point_info, num_nodes);
```
## Cluster Configuration
To enable specific scenarios, the cluster needs to be booted with special
configurations. These configurations can be captured in
`fullnode::FullnodeConfig`.
For example:
```rust,ignore
let mut fullnode_config = FullnodeConfig::default();
fullnode_config.rpc_config.enable_fullnode_exit = true;
let local = LocalCluster::new_with_config(
num_nodes,
10_000,
100,
&fullnode_config
);
```
## How to design a new test
For example, there is a bug that shows that the cluster fails when it is flooded
with invalid advertised gossip nodes. Our gossip library and protocol may
change, but the cluster still needs to stay resilient to floods of invalid
advertised gossip nodes.
Configure the RPC service:
```rust,ignore
let mut fullnode_config = FullnodeConfig::default();
fullnode_config.rpc_config.enable_rpc_gossip_push = true;
fullnode_config.rpc_config.enable_rpc_gossip_refresh_active_set = true;
```
Wire the RPCs and write a new test:
```rust,ignore
pub fn test_large_invalid_gossip_nodes(
entry_point_info: &ContactInfo,
funding_keypair: &Keypair,
num_nodes: usize,
) {
let cluster = discover(&entry_point_info, num_nodes);
// Poison the cluster.
let mut client = mk_client(&entry_point_info);
for _ in 0..(num_nodes * 100) {
client.gossip_push(
cluster_info::invalid_contact_info()
);
}
sleep(Duration::from_millis(1000));
// Force refresh of the active set.
for node in &cluster {
let mut client = mk_client(&node);
client.gossip_refresh_active_set();
}
// Verify that spends still work.
verify_spends(&cluster);
}
```

book/src/cluster.md Normal file

@ -0,0 +1,100 @@
# A Solana Cluster
A Solana cluster is a set of fullnodes working together to serve client
transactions and maintain the integrity of the ledger. Many clusters may
coexist. When two clusters share a common genesis block, they attempt to
converge. Otherwise, they simply ignore the existence of the other.
Transactions sent to the wrong one are quietly rejected. In this chapter, we'll
discuss how a cluster is created, how nodes join the cluster, how they share
the ledger, how they ensure the ledger is replicated, and how they cope with
buggy and malicious nodes.
## Creating a Cluster
Before starting any fullnodes, one first needs to create a *genesis block*.
The block contains entries referencing two public keys, a *mint* and a
*bootstrap leader*. The fullnode holding the bootstrap leader's secret key is
responsible for appending the first entries to the ledger. It initializes its
internal state with the mint's account. That account will hold the number of
native tokens defined by the genesis block. The second fullnode then contacts
the bootstrap leader to register as a *validator* or *replicator*. Additional
fullnodes then register with any registered member of the cluster.
A validator receives all entries from the leader and submits votes confirming
those entries are valid. After voting, the validator is expected to store those
entries until replicator nodes submit proofs that they have stored copies of
them. Once the validator observes that a sufficient number of copies exist, it deletes
its copy.
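The bootstrap sequence above can be sketched with illustrative types. The field and function names here are hypothetical stand-ins, not the real genesis APIs:

```rust
// Hypothetical shapes for illustration only.
struct Account {
    tokens: u64,
}

struct GenesisBlock {
    mint_id: [u8; 32],
    bootstrap_leader_id: [u8; 32],
    native_tokens: u64,
}

// The bootstrap leader initializes its internal state with the mint's
// account, which holds every native token defined by the genesis block.
fn init_mint_account(genesis: &GenesisBlock) -> Account {
    Account {
        tokens: genesis.native_tokens,
    }
}
```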
## Joining a Cluster
Fullnodes and replicators enter the cluster via registration messages sent to
its *control plane*. The control plane is implemented using a *gossip*
protocol, meaning that a node may register with any existing node, and expect
its registration to propagate to all nodes in the cluster. The time it takes
for all nodes to synchronize is proportional to the square of the number of
nodes participating in the cluster. Algorithmically, that's considered very
slow, but in exchange for that time, a node is assured that it eventually has
all the same information as every other node, and that that information cannot
be censored by any one node.
## Sending Transactions to a Cluster
Clients send transactions to any fullnode's Transaction Processing Unit (TPU)
port. If the node is in the validator role, it forwards the transaction to the
designated leader. If in the leader role, the node bundles incoming
transactions, timestamps them creating an *entry*, and pushes them onto the
cluster's *data plane*. Once on the data plane, the transactions are validated
by validator nodes and replicated by replicator nodes, effectively appending
them to the ledger.
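A minimal sketch of that role-dependent handling, with made-up types (a real TPU processes signed binary transactions and the leader batches many of them into a timestamped entry):

```rust
#[derive(Debug, PartialEq)]
enum Role {
    Leader,
    Validator,
}

#[derive(Debug, PartialEq)]
enum Action {
    ForwardToLeader(Vec<u8>),
    BundleIntoEntry(Vec<u8>),
}

fn handle_tpu_packet(role: &Role, tx: Vec<u8>) -> Action {
    match role {
        // A validator passes the transaction along to the designated leader.
        Role::Validator => Action::ForwardToLeader(tx),
        // The leader bundles transactions for timestamping into an entry.
        Role::Leader => Action::BundleIntoEntry(tx),
    }
}
```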
## Confirming Transactions
A Solana cluster is capable of subsecond *confirmation* for up to 150 nodes
with plans to scale up to hundreds of thousands of nodes. Once fully
implemented, confirmation times are expected to increase only with the
logarithm of the number of validators, where the logarithm's base is very high.
If the base is one thousand, for example, it means that for the first thousand
nodes, confirmation will be the duration of three network hops plus the time it
takes the slowest validator of a supermajority to vote. For the next million
nodes, confirmation increases by only one network hop.
Solana defines confirmation as the duration of time from when the leader
timestamps a new entry to the moment when it recognizes a supermajority of
ledger votes.
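The claimed scaling can be checked with quick arithmetic. Assuming a fanout base, reaching `num_nodes` nodes takes one broadcast layer per power of the base; treating the first layer as three hops and each additional layer as one more hop, per the example above:

```rust
/// Broadcast layers needed for `fanout` to reach `num_nodes` nodes.
fn layers(num_nodes: u64, fanout: u64) -> u32 {
    let mut layers = 1;
    let mut reach = fanout;
    while reach < num_nodes {
        reach *= fanout;
        layers += 1;
    }
    layers
}

/// Hops per the example above: three for the first layer,
/// plus one per additional layer.
fn confirmation_hops(num_nodes: u64, fanout: u64) -> u32 {
    layers(num_nodes, fanout) + 2
}
```

With a base of one thousand, the first thousand nodes confirm in three hops and the next million add only one more, matching the text.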
A gossip network is much too slow to achieve subsecond confirmation once the
network grows beyond a certain size. The time it takes to send messages to all
nodes is proportional to the square of the number of nodes. If a blockchain
wants to achieve low confirmation times and attempts to do it using a gossip network,
it will be forced to centralize to just a handful of nodes.
Scalable confirmation can be achieved using the following combination of
techniques:
1. Timestamp transactions with a VDF sample and sign the timestamp.
2. Split the transactions into batches, send each to separate nodes and have
each node share its batch with its peers.
3. Repeat the previous step recursively until all nodes have all batches.
Solana rotates leaders at fixed intervals, called *slots*. Each leader may only
produce entries during its allotted slot. The leader therefore timestamps
transactions so that validators may look up the public key of the designated
leader. The leader then signs the timestamp so that a validator may verify the
signature, proving the signer is the owner of the designated leader's public key.
Next, transactions are broken into batches so that a node can send transactions
to multiple parties without making multiple copies. If, for example, the leader
needed to send 60 transactions to 6 nodes, it would break that collection of 60
into batches of 10 transactions and send one to each node. This allows the
leader to put 60 transactions on the wire, not 60 transactions for each node.
Each node then shares its batch with its peers. Once the node has collected all
6 batches, it reconstructs the original set of 60 transactions.
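The 60-transactions, 6-nodes example above can be sketched directly (`u32` stands in for a real transaction; chunking and reassembly are the whole idea):

```rust
// Split a collection of transactions into one batch per node.
// Assumes a non-empty `txs` slice.
fn split_into_batches(txs: &[u32], num_nodes: usize) -> Vec<Vec<u32>> {
    let batch_size = (txs.len() + num_nodes - 1) / num_nodes; // ceiling division
    txs.chunks(batch_size).map(|c| c.to_vec()).collect()
}

// A node that has collected every batch reconstructs the original set.
fn reassemble(batches: &[Vec<u32>]) -> Vec<u32> {
    batches.iter().flatten().cloned().collect()
}
```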
A batch of transactions can only be split so many times before it is so small
that header information becomes the primary consumer of network bandwidth. At
the time of this writing, the approach is scaling well up to about 150
validators. To scale up to hundreds of thousands of validators, each node can
apply the same technique as the leader node to another set of nodes of equal
size. We call the technique *data plane fanout*; learn more in the [data plane
fanout](data-plane-fanout.md) section.

# Data Plane Fanout
A Solana cluster uses a multi-layer mechanism called *data plane fanout* to
broadcast transaction blobs to all nodes in a very quick and efficient manner.
In order to establish the fanout, the cluster divides itself into small
collections of nodes, called *neighborhoods*. Each node is responsible for
sharing any data it receives with the other nodes in its neighborhood, as well
as propagating the data on to a small set of nodes in other neighborhoods.
During its slot, the leader node distributes blobs between the validator nodes
in one neighborhood (layer 1). Each validator shares its data within its
neighborhood, but also retransmits the blobs to one node in each of multiple
neighborhoods in the next layer (layer 2). The layer-2 nodes each share their
data with their neighborhood peers, and retransmit to nodes in the next layer,
etc, until all nodes in the cluster have received all the blobs.
<img alt="Two layer cluster" src="img/data-plane.svg" class="center"/>
## Neighborhood Assignment - Weighted Selection
In order for data plane fanout to work, the entire cluster must agree on how the
cluster is divided into neighborhoods. To achieve this, all the recognized
validator nodes (the TVU peers) are sorted by stake and stored in a list. This
list is then indexed in different ways to figure out neighborhood boundaries and
retransmit peers. For example, the leader will simply select the first nodes to
make up layer 1. These will automatically be the highest stake holders, allowing
the heaviest votes to come back to the leader first. Layer-1 and lower-layer
nodes use the same logic to find their neighbors and lower layer peers.
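The stake-sorted list described above can be sketched as follows; `Peer` and `assign_neighborhoods` are illustrative names, not the real cluster-info types:

```rust
#[derive(Clone)]
struct Peer {
    id: u64,
    stake: u64,
}

// Sort the TVU peers by stake, descending, then slice the list into
// fixed-size neighborhoods; the first chunk holds the highest stakeholders,
// i.e. the nodes the leader selects for layer 1.
fn assign_neighborhoods(mut peers: Vec<Peer>, neighborhood_size: usize) -> Vec<Vec<Peer>> {
    peers.sort_by(|a, b| b.stake.cmp(&a.stake));
    peers.chunks(neighborhood_size).map(|c| c.to_vec()).collect()
}
```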
## Layer and Neighborhood Structure
The current leader makes its initial broadcasts to at most `DATA_PLANE_FANOUT`
nodes. If this layer 1 is smaller than the number of nodes in the cluster, then
the data plane fanout mechanism adds layers below. Subsequent layers follow
these constraints to determine layer-capacity: Each neighborhood contains
`NEIGHBORHOOD_SIZE` nodes and each layer may have up to `DATA_PLANE_FANOUT/2`
neighborhoods.
As mentioned above, each node in a layer only has to broadcast its blobs to its
neighbors and to exactly 1 node in each next-layer neighborhood, instead of to
every TVU peer in the cluster. In the default mode, each layer contains
`DATA_PLANE_FANOUT/2` neighborhoods. The retransmit mechanism also supports a
second, `grow`, mode of operation that squares the number of neighborhoods
allowed each layer. This dramatically reduces the number of layers needed to
support a large cluster, but can also have a negative impact on the network
pressure on each node in the lower layers. A good way to think of the default
mode (when `grow` is disabled) is to imagine it as a chain of layers, where the
leader sends blobs to layer-1, layer-1 to layer-2, and so on. The layer
capacities remain constant, so all layers past layer-2 will have the same
number of nodes until the whole cluster is covered. When `grow` is enabled, this
becomes a traditional fanout where layer-3 will have the square of the number of
nodes in layer-2 and so on.
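A sketch of the layer capacities implied by those rules. The constants passed in are placeholders, and the grow-mode squaring follows the neighborhood-count description above:

```rust
/// Nodes a given layer can hold. Layer 1 holds `fanout` nodes; deeper layers
/// hold `fanout / 2` neighborhoods of `neighborhood_size` nodes each, and
/// `grow` squares the neighborhood count with every additional layer.
fn layer_capacity(layer: u32, fanout: usize, neighborhood_size: usize, grow: bool) -> usize {
    if layer == 1 {
        return fanout;
    }
    let mut neighborhoods = fanout / 2;
    if grow {
        for _ in 2..layer {
            neighborhoods *= neighborhoods;
        }
    }
    neighborhoods * neighborhood_size
}
```

In the default mode every layer past layer 1 has the same capacity; with `grow`, capacity explodes with depth, so far fewer layers are needed.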
#### Configuration Values
`DATA_PLANE_FANOUT` - Determines the size of layer 1. Subsequent
layers have `DATA_PLANE_FANOUT/2` neighborhoods when `grow` is inactive.
`NEIGHBORHOOD_SIZE` - The number of nodes allowed in a neighborhood.
Neighborhoods will fill to capacity before new ones are added, i.e. if a
neighborhood isn't full, it _must_ be the last one.
`GROW_LAYER_CAPACITY` - Whether or not retransmit should behave like a
_traditional fanout_, i.e. whether each additional layer should have growing
capacity. When this mode is disabled (default), all layers after layer 1 have
the same capacity, keeping the network pressure on all nodes equal.
Currently, configuration is set when the cluster is launched. In the future,
these parameters may be hosted on-chain, allowing modification on the fly as the
cluster sizes change.
## Neighborhoods
The following diagram shows how two neighborhoods in different layers interact.
What this diagram doesn't capture is that each neighbor actually receives
blobs from one validator per neighborhood above it. This means that, to
cripple a neighborhood, enough nodes (erasure codes +1 per neighborhood) from
the layer above need to fail. Since multiple neighborhoods exist in the upper
layer and a node will receive blobs from a node in each of those neighborhoods,
we'd need a big network failure in the upper layers to end up with incomplete
data.
<img alt="Inner workings of a neighborhood"
src="img/data-plane-neighborhood.svg" class="center"/>

# Creating Signing Services with Drones
This chapter defines an off-chain service called a *drone*, which acts as
custodian of a user's private key. In its simplest form, it can be used to
create *airdrop* transactions, a token transfer from the drone's account to a
client's account.
## Signing Service
A drone is a simple signing service. It listens for requests to sign
*transaction data*. Once received, the drone validates the request however it
sees fit. It may, for example, only accept transaction data with a
`SystemInstruction::Move` instruction transferring only up to a certain amount
of tokens. If the drone accepts the transaction, it returns an `Ok(Signature)`
where `Signature` is a signature of the transaction data using the drone's
private key. If it rejects the transaction data, it returns a `DroneError`
describing why.
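A minimal sketch of that accept/reject decision. The types are stand-ins and the signing step is stubbed; a real drone would produce an ed25519 signature over the transaction data with its private key:

```rust
#[derive(Debug, PartialEq)]
enum DroneError {
    LimitExceeded,
}

struct Drone {
    // Hypothetical policy: cap tokens moved per request.
    per_request_cap: u64,
}

impl Drone {
    fn sign_request(&self, move_tokens: u64, tx_data: &[u8]) -> Result<Vec<u8>, DroneError> {
        // Validate however the drone sees fit; here, cap the Move amount.
        if move_tokens > self.per_request_cap {
            return Err(DroneError::LimitExceeded);
        }
        // Stand-in for signing tx_data with the drone's private key.
        Ok(tx_data.to_vec())
    }
}
```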
## Examples
### Granting access to an on-chain game
The creator of an on-chain tic-tac-toe game hosts a drone that responds to
airdrop requests containing an `InitGame` instruction. The drone signs the
transaction data in the request and returns it, thereby authorizing its account
to pay the transaction fee as well as seed the game's account with enough
tokens to play. The user then creates a transaction from the transaction data
and the drone's signature and submits it to the Solana cluster. Each time the user
interacts with the game, the game pays the user enough tokens to pay the next
transaction fee to advance the game. At that point, the user may choose to keep
the tokens instead of advancing the game. If the creator wants to defend
against that case, they could require the user to return to the drone to sign
each instruction.
### Worldwide airdrop of a new token
The creator of a new on-chain token (ERC-20 interface) may wish to do a worldwide
airdrop to distribute its tokens to millions of users over just a few seconds.
That drone cannot spend resources interacting with the Solana cluster. Instead,
the drone should only verify the client is unique and human, and then return
the signature. It may also want to listen to the Solana cluster for recent
entry IDs to support client retries and to ensure the airdrop is targeting the
desired cluster.
## Attack vectors
### Invalid recent_blockhash
The drone may prefer its airdrops only target a particular Solana cluster. To
do that, it listens to the cluster for new entry IDs and ensures any requests
reference a recent one.
Note: listening for new entry IDs assumes the drone is either a fullnode or a
*light* client. At the time of this writing, light clients have not been
implemented and no proposal describes them. This document assumes one of the
following approaches be taken:
1. Define and implement a light client
2. Embed a fullnode
3. Query the jsonrpc API for the latest last id at a rate slightly faster than
ticks are produced.
### Double spends
A client may request multiple airdrops before the first has been submitted to
the ledger. The client may do this maliciously or simply because it thinks the
first request was dropped. The drone should not simply query the cluster to
ensure the client has not already received an airdrop. Instead, it should use
`recent_blockhash` to ensure the previous request is expired before signing another.
Note that the Solana cluster will reject any transaction with a `recent_blockhash`
beyond a certain *age*.
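The expiry check above can be sketched with a fixed-size cache of observed entry IDs. This is a hypothetical helper; the real cluster enforces a protocol-defined maximum `recent_blockhash` age:

```rust
use std::collections::VecDeque;

struct RecentHashes {
    max_age: usize,
    hashes: VecDeque<[u8; 32]>,
}

impl RecentHashes {
    fn new(max_age: usize) -> Self {
        RecentHashes { max_age, hashes: VecDeque::new() }
    }

    // Record each entry ID observed from the cluster, evicting the oldest.
    fn observe(&mut self, hash: [u8; 32]) {
        self.hashes.push_back(hash);
        while self.hashes.len() > self.max_age {
            self.hashes.pop_front();
        }
    }

    // A request is signable only if its recent_blockhash is still in the window.
    fn is_recent(&self, hash: &[u8; 32]) -> bool {
        self.hashes.contains(hash)
    }
}
```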
### Denial of Service
If the transaction data size is smaller than the size of the returned signature
(or descriptive error), a single client can flood the network. Considering
that a simple `Move` operation requires two public keys (each 32 bytes) and a
`fee` field, and that the returned signature is 64 bytes (and a byte to
indicate `Ok`), consideration for this attack may not be required.
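The arithmetic, assuming a `u64` `fee` field, shows no amplification, since the request is larger than the response:

```rust
// Request: two 32-byte public keys plus an 8-byte fee.
const MOVE_REQUEST_BYTES: usize = 32 + 32 + 8;
// Response: a 64-byte signature plus one byte to indicate Ok.
const RESPONSE_BYTES: usize = 64 + 1;
```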
In the current design, the drone accepts TCP connections. This allows clients
to DoS the service by simply opening lots of idle connections. Switching to UDP
may be preferred. The transaction data will be smaller than a UDP packet since
the transaction sent to the Solana cluster is already pinned to using UDP.

## Attack Vectors
### Colluding validation and replication clients
A colluding validation-client may take the strategy of marking PoReps from non-colluding replicator nodes as invalid in an attempt to maximize the rewards for the colluding replicator nodes. In this case, it isn't feasible for the offended-against replicator nodes to petition the network for resolution, as this would result in a network-wide vote on each offending PoRep and create too much overhead for the network to progress adequately. Also, this mitigation attempt would still be vulnerable to a >= 51% staked colluder.
Alternatively, transaction fees from submitted PoReps are pooled and distributed across validation-clients in proportion to the number of valid PoReps discounted by the number of invalid PoReps as voted by each validator-client. Thus invalid votes are directly dis-incentivized through this reward channel. Invalid votes that are revealed by replicator nodes as fishing PoReps, will not be discounted from the payout PoRep count.
Another collusion attack involves a validator-client who adopts the strategy of ignoring invalid PoReps from colluding replicators and voting them as valid. In this case, colluding replicator-clients would not have to store the data while still receiving rewards for validated PoReps. Additionally, colluding validator nodes would also receive rewards for validating these PoReps. To mitigate this attack, validators must randomly sample PoReps corresponding to the ledger block they are validating; because of this, multiple validators will receive the colluding replicator's invalid submissions. These non-colluding validators will be incentivized to mark these PoReps as invalid, as they have no way to determine whether the proposed invalid PoRep is actually a fishing PoRep, for which a confirmation vote would result in the validator's stake being slashed.
In this case, the proportion of time a colluding pair will be successful has an upper limit determined by the % of stake of the network claimed by the colluding validator. This also sets bounds to the value of such an attack. For example, if a colluding validator controls 10% of the total validator stake, transaction fees will be lost (likely sent to mining pool) by the colluding replicator 90% of the time, and so the attack vector is only profitable if the per-PoRep reward is at least 90% higher than the average PoRep transaction fee. While, probabilistically, some colluding replicator-client PoReps will find their way to colluding validation-clients, the network can also monitor rates of paired (validator + replicator) discrepancies in voting patterns and censor identified colluders in these cases.

## Economic Sustainability
Long term economic sustainability is one of the guiding principles of Solana's economic design. While it is impossible to predict how decentralized economies will develop over time, especially economies with flexible decentralized governances, we can arrange economic components such that, under certain conditions, a sustainable economy may take shape in the long term. In the case of Solana's network, these components take the form of the remittances and deposits into and out of the reserve mining pool.
The dominant remittances from the Solana mining pool are validator and replicator rewards. The deposit mechanism is a flat, protocol-specified and adjusted, % of each transaction fee.
The Replicator rewards are to be delivered to replicators from the mining pool after successful PoRep validation. The per-PoRep reward amount is determined as a function of the total network storage redundancy at the time of the PoRep validation and the network goal redundancy. This function is likely to take the form of a discount from a base reward to be delivered when the network has achieved and maintained its goal redundancy. An example of such a reward function is shown in **Figure 3**.
<!-- ![image alt text](porep_reward.png) -->
<p style="text-align:center;"><img src="img/porep_reward.png" alt="==PoRep Reward Curve ==" width="800"/></p>
**Figure 3**: Example PoRep reward design as a function of global network storage redundancy.
In the example shown in Figure 3, multiple per PoRep base rewards are explored (as a % of Tx Fee) to be delivered when the global ledger replication redundancy meets 10X. When the global ledger replication redundancy is less than 10X, the base reward is discounted as a function of the square of the ratio of the actual ledger replication redundancy to the goal redundancy (i.e. 10X).
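One reading of that discount, sketched as a function (illustrative, not normative): the full base reward at or above goal redundancy, scaled by the square of the redundancy ratio below it.

```rust
// Base reward paid in full at/above goal redundancy; otherwise discounted
// by the square of (actual / goal), per the description above.
fn porep_reward(base_reward: f64, actual_redundancy: f64, goal_redundancy: f64) -> f64 {
    let ratio = (actual_redundancy / goal_redundancy).min(1.0);
    base_reward * ratio * ratio
}
```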
The other protocol-based remittance goes to validation-clients as a reward distributed in proportion to stake-weight for voting to validate the ledger state. The functional issuance of this reward is described in [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md) and is designed to reduce over time until validators are incentivized solely through collection of transaction fees. Therefore, in the long-run, protocol-based rewards to replication-nodes will be the only remittances from the mining pool, and will have to be countered by the portion of each non-PoRep transaction fee that is directed back into the mining pool. I.e. for a long-term self-sustaining economy, replicator-client rewards must be subsidized through a minimum fee on each non-PoRep transaction pre-allocated to the mining pool. Through this constraint, we can write the following inequality:
**== WIP [here](https://docs.google.com/document/d/1HBDasdkjS4Ja9wC_tIUsZPVcxGAWTuYOq9zf6xoQNps/edit?usp=sharing) ==**

## Economic Design Overview
Solana's crypto-economic system is designed to promote a healthy, long term self-sustaining economy with participant incentives aligned to the security and decentralization of the network. The main participants in this economy are validation-clients and replication-clients. Their contributions to the network, state validation and data storage respectively, and their requisite remittance mechanisms are discussed below.
The main channels of participant remittances are referred to as protocol-based rewards and transaction fees. Protocol-based rewards are protocol-derived issuances from a network-controlled reserve of tokens (sometimes referred to as the mining pool). These rewards will constitute the total reward delivered to replication clients and a portion of the total rewards for validation clients, the remaining sourced from transaction fees. In the early days of the network, it is likely that protocol-based rewards, deployed based on predefined issuance schedule, will drive the majority of participant incentives to join the network.
These protocol-based rewards, to be distributed to participating validation and replication clients, are to be specified as annual interest rates calculated per real-time Solana epoch [DEFINITION]. As discussed further below, the issuance rates are determined as a function of total network validator staked percentage and total replication provided by replicators in each previous epoch. The choice for validator and replicator client rewards to be based on participation rates, rather than a global fixed inflation or interest rate, emphasizes a protocol priority of overall economic security, rather than monetary supply predictability. Due to Solana's hard total supply cap of 1B tokens and the bounds of client participant rates in the protocol, we believe that global interest, and supply issuance, scenarios should be able to be modeled with reasonable uncertainties.
Transaction fees are market-based participant-to-participant transfers, attached to network interactions as a necessary motivation and compensation for the inclusion and execution of a proposed transaction (be it a state execution or proof-of-replication verification). A mechanism for continuous and long-term funding of the mining pool through a pre-dedicated portion of transaction fees is also discussed below.
A high-level schematic of Solana's crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics.md), [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md). Also, the chapter titled [Validation Stake Delegation](ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. The [Replication-client Economics](ed_replication_client_economics.md) chapter will review the Solana network design for global ledger storage/redundancy and replicator-client economics ([Storage-replication rewards](ed_rce_storage_replication_rewards.md)) along with a replicator-to-validator delegation mechanism designed to aid participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_rce_replication_client_reward_auto_delegation.md). The [Economic Sustainability](ed_economic_sustainability.md) section dives deeper into Solana's design for long-term economic sustainability and outlines the constraints and conditions for a self-sustaining economy. Finally, in chapter [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
<!-- ![img alt text](solana_economic_design.png) -->
<p style="text-align:center;"><img src="img/solana_economic_design.png" alt="== Solana Economic Design Diagram ==" width="800"/></p>
**Figure 1**: Schematic overview of Solana economic incentive design.

### Replication-client Reward Auto-delegation
The ability for Solana network participants to earn rewards by providing storage service is a unique on-boarding path that requires little hardware overhead and minimal upfront capital. It offers an avenue for individuals with extra-storage space on their home laptops or PCs to contribute to the security of the network and become integrated into the Solana economy.
To enhance this on-boarding ramp and facilitate further participation and investment in the Solana economy, replication-clients have the opportunity to auto-delegate their rewards to validation-clients of their choice. Much like the automatic reinvestment of stock dividends, in this scenario, a replicator-client can earn Solana tokens by providing some storage capacity to the network (i.e. via submitting valid PoReps), have the protocol-based rewards automatically assigned as delegation to a staked validator node and therefore earning interest in the validation-client reward pool.

### Storage-replication Rewards
Replicator-clients download, encrypt and submit PoReps for ledger block sections.[3] PoReps submitted to the PoH stream, and subsequently validated, function as evidence that the submitting replicator client is indeed storing the assigned ledger block sections on local hard drive space as a service to the network. Therefore, replicator clients should earn protocol rewards proportional to the amount of storage, and the number of successfully validated PoReps, that they are verifiably providing to the network.
Additionally, replicator clients have the opportunity to capture a portion of slashed bounties [TBD] of dishonest validator clients. This can be accomplished by a replicator client submitting a verifiably false PoRep for which a dishonest validator client receives and signs as a valid PoRep. This reward incentive is to prevent lazy validators and minimize validator-replicator collusion attacks, more on this below.

## References
1. [https://blog.ethereum.org/2016/07/27/inflation-transaction-fees-cryptocurrency-monetary-policy/](https://blog.ethereum.org/2016/07/27/inflation-transaction-fees-cryptocurrency-monetary-policy/)
2. [https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281](https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281)
3. [https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281](https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281)

## Replication-client economics
Replication-clients should be rewarded for providing the network with storage space. Incentivization of the set of replicators provides data security through redundancy of the historical ledger. Replication nodes are rewarded in proportion to the amount of ledger data storage provided. These rewards are captured by generating and entering Proofs of Replication (PoReps) into the PoH stream which can be validated by Validation nodes as described above in the [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md) chapter.

## Validation-client Economics
Validator-clients are eligible to receive protocol-based (i.e. via mining pool) rewards issued via stake-based annual interest rates by providing compute (CPU+GPU) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic schedule as a function of total amount of Solana tokens staked in the system and duration since network launch (genesis block). Additionally, these clients may earn revenue through two types of transaction fees: state-validation transaction fees and pooled Proof-of-Replication (PoRep) transaction fees. The distribution of these two types of transaction fees to the participating validation set are designed independently, as economic goals and attack vectors are unique between the state-generation/validation mechanism and the ledger replication/validation mechanism. For clarity, we separately describe the design and motivation of the three types of potential revenue streams for validation-clients below: state-validation protocol-based rewards, state-validation transaction fees and PoRep-validation transaction fees.

### Replication-validation Transaction Fees
As previously mentioned, validator-clients will also be responsible for validating PoReps submitted into the PoH stream by replicator-clients. In this case, validators are providing compute (CPU/GPU) and light storage resources to confirm that these replication proofs could only be generated by a client that is storing the referenced PoH ledger block.[2]
While replication-clients are incentivized and rewarded through protocol-based rewards schedule (see [Replication-client Economics](ed_replication_client_economics.md)), validator-clients will be incentivized to include and validate PoReps in PoH through the distribution of the transaction fees associated with the submitted PoRep. As will be described in detail in the Section 3.1, replication-client rewards are protocol-based and designed to reward based on a global data redundancy factor. I.e. the protocol will incentivize replication-client participation through rewards based on a target ledger redundancy (e.g. 10x data redundancy). It was chosen not to include a distribution of these rewards to PoRep validators, and to rely only on the collection of PoRep attached transaction fees, due to the fact that the confluence of two participation incentive modes (state-validation inflation rate via global staked % and replication-validation rewards based on global redundancy factor) on the incentives of a single network participant (a validator-client) potentially opened up a significant incentive-driven attack surface area.
The validation of PoReps by validation-clients is computationally more expensive than state-validation (detail in the [Economic Sustainability](ed_economic_sustainability.md) chapter), thus the transaction fees are expected to be proportionally higher. However, because replication-client rewards are distributed in proportion to and only after submitted PoReps are validated, they are uniquely motivated for the inclusion and validation of their proofs. This pressure is expected to generate an adequate market economy between replication-clients and validation-clients. Additionally, transaction fees submitted with PoReps have no minimum amount pre-allocated to the mining pool, as do state-validation transaction fees.
There are various attack vectors available for colluding validation and replication clients, as described in detail below in [Economic Sustainability](ed_economic_sustainability.md). To protect against various collusion attack vectors, for a given epoch, PoRep transaction fees are pooled, and redistributed across participating validation-clients in proportion to the number of validated PoReps in the epoch less the number of invalidated PoReps [DIAGRAM]. This design rewards validators proportional to the number of PoReps they process and validate, while providing negative pressure for validation-clients to submit lazy or malicious invalid votes on submitted PoReps (note that it is computationally prohibitive to determine whether a validator-client has marked a valid PoRep as invalid).

### State-validation protocol-based rewards
Validator-clients have two functional roles in the Solana network:
* Validate (vote) the current global state of that PoH along with any Proofs-of-Replication (see [Replication Client Economics](ed_replication_client_economics.md)) that they are eligible to validate
* Be elected as leader on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and Proofs-of-Replication and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. Compensation for validator-clients is provided via a protocol-based annual interest rate disbursed in proportion to the stake-weight of each validator (see below), along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each non-PoRep transaction fee, less a protocol-specified amount that is returned to the mining pool (see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)). PoRep transaction fees are not collected directly by the leader client but pooled and returned to the validator set in proportion to the number of successfully validated PoReps (see [Replication-client Transaction Fees](ed_vce_replication_validation_transaction_fees.md)).
The protocol-based annual interest-rate (%) per epoch to be distributed to validation-clients is to be a function of:
* the current fraction of staked SOLs out of the current total circulating supply,
* the global time since the genesis block instantiation
* the up-time/participation [% of available slots/blocks that validator had opportunity to vote on?] of a given validator over the previous epoch.
The first two factors are protocol parameters only (i.e. independent of validator behavior in a given epoch) and describe a global validation reward schedule designed both to incentivize early participation and to provide optimal security in the network. This schedule sets a maximum annual validator-client interest rate per epoch.
At any given point in time, this interest rate is pegged to a defined value given a specific % staked SOL out of the circulating supply (e.g. 10% interest rate when 66% of circulating SOL is staked). The interest rate adjusts as the square-root [TBD] of the % staked, leading to higher validation-client interest rates as the % staked drops below the targeted goal, thus incentivizing more participation leading to more security in the network. An example of such a schedule, for a specified point in time (e.g. network launch) is shown in **Table 1**.
| Percentage circulating supply staked [%] | Annual validator-client interest rate [%] |
| ---: | ---: |
| 5 | 13.87 |
| 15 | 13.31 |
| 25 | 12.73 |
| 35 | 12.12 |
| 45 | 11.48 |
| 55 | 10.80 |
| **66** | **10.00** |
| 75 | 9.29 |
| 85 | 8.44 |
**Table 1:** Example interest rate schedule based on % SOL staked out of circulating supply. In this case, interest rates are fixed at 10% for 66% of staked circulating supply
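As a toy illustration only (this hypothetical formula does not reproduce Table 1, and the square-root adjustment itself is marked TBD above), the peg could look like:

```python
import math

def validator_interest_rate(pct_staked, target_pct=66.0, target_rate=10.0):
    """Hypothetical peg: the rate rises as the square root of how far the
    staked percentage falls below the target (10% at 66% staked).
    Parameter names and the exact curve are assumptions for illustration."""
    return target_rate * math.sqrt(target_pct / pct_staked)
```

At the 66% target this returns exactly the 10% pegged rate, and the rate increases as the staked percentage drops, which is the qualitative behavior described above.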
Over time, the interest rate, at any network staked percentage, will drop as described by an algorithmic schedule. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. This mining-pool-provided interest rate will reduce over time until a network-chosen baseline value is reached. This is a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients, as transaction fees for both state-validation and ledger storage replication (PoReps) are not accounted for here. A validation-client interest rate schedule as a function of % network staked and time is shown in **Figure 2**.
<!-- ![== Validation Client Interest Rates Figure ==](validation_client_interest_rates.png =250x) -->
<p style="text-align:center;"><img src="img/validation_client_interest_rates.png" alt="drawing" width="800"/></p>
**Figure 2:** In this example schedule, the annual interest rate [%] reduces at around 16.7% per year, until it reaches the long-term, fixed, 4% rate.
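A minimal sketch of the decay schedule in Figure 2, assuming an initial 10% rate and the stated ~16.7% annual reduction down to the fixed 4% floor (parameter names are ours):

```python
def annual_rate(year, initial_rate=10.0, decay=0.167, floor=4.0):
    """Illustrative schedule: the annual rate decays ~16.7% per year
    from its initial value until it reaches the 4% long-term floor."""
    return max(floor, initial_rate * (1.0 - decay) ** year)
```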
This epoch-specific protocol-defined interest rate sets an upper limit on the *protocol-generated* annual interest rate (not the absolute total interest rate) that can be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch. Each epoch comprises XXX slots. The protocol-defined interest rate is then discounted by the log [TBD] of the % of slots a given validator submitted a vote on a PoH branch during that epoch, see **Figure XX**.

### State-validation Transaction Fees
Each message sent through the network, to be processed by the current leader validation-client and confirmed as a global state transaction, must contain a transaction fee. Transaction fees offer many benefits in the Solana economic design, for example they:
* provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
* reduce network spam by introducing real cost to transactions,
* open avenues for a transaction market to incentivize validation-clients to collect and process submitted transactions in their function as leader,
* and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
Many current blockchain economies (e.g. Bitcoin, Ethereum) rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol-derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is sent to the mining pool, with the remaining fee going to the current leader processing the transaction. These pooled fees then re-enter the system through rewards distributed to validation-clients, through the process described above, and to replication-clients, as discussed below.
The intent of this design is to retain leader incentive to include as many transactions as possible within the leader-slot time, while providing a redistribution avenue that protects against "tax evasion" attacks (i.e. side-channel fee payments)<sup>[1](ed_referenced.md)</sup>. Constraints on the fixed portion of transaction fees going to the mining pool, to establish long-term economic sustainability, are established and discussed in detail in the [Economic Sustainability](ed_economic_sustainability.md) section.
This minimum, protocol-earmarked, portion of each transaction fee can be dynamically adjusted depending on historical gas usage. In this way, the protocol can use the minimum fee to target a desired hardware utilisation. By monitoring a protocol specified gas usage with respect to a desired, target usage amount (e.g. 50% of a block's capacity), the minimum fee can be raised/lowered which should, in turn, lower/raise the actual gas usage per block until it reaches the target amount. This adjustment process can be thought of as similar to the difficulty adjustment algorithm in the Bitcoin protocol, however in this case it is adjusting the minimum transaction fee to guide the transaction processing hardware usage to a desired level.
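A sketch of such an adjustment loop, with assumed parameter names and a hypothetical 5% step size (the protocol's actual controller is not specified here):

```python
def adjust_min_fee(min_fee, observed_usage, target_usage=0.5, step=0.05):
    """Nudge the protocol-earmarked minimum fee toward a target gas
    usage (e.g. 50% of a block's capacity): raise the fee when blocks
    run above target, lower it when they run below."""
    if observed_usage > target_usage:
        return min_fee * (1.0 + step)
    if observed_usage < target_usage:
        return min_fee * (1.0 - step)
    return min_fee
```

Raising the minimum fee makes transactions costlier, which should in turn lower gas usage per block toward the target, mirroring the difficulty-adjustment analogy above.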
Additionally, the minimum protocol-captured fee can be a consideration in fork selection. In the case of a PoH fork with a malicious, censoring leader, we would expect the total protocol-captured fee to be less than that of a comparable honest fork, due to the fees lost from censoring. To compensate for these lost protocol fees, the censoring leader would have to replace the fees on their fork themselves, thus potentially reducing the incentive to censor in the first place.

### Validation Stake Delegation
Running a Solana validation-client requires a relatively modest upfront hardware capital investment. **Table 2** provides an example hardware configuration to support ~1M tx/s with estimated off-the-shelf costs:
|Component|Example|Estimated Cost|
|--- |--- |--- |
|GPU|2x 2080 Ti|$2500|
|or|4x 1080 Ti|$2800|
|OS/Ledger Storage|Samsung 860 Evo 2TB|$370|
|Accounts storage|2x Samsung 970 Pro M.2 512GB|$340|
|RAM|32 GB|$300|
|Motherboard|AMD x399|$400|
|CPU|AMD Threadripper 2920x|$650|
|Case||$100|
|Power supply|EVGA 1600W|$300|
|Network|> 500 Mbps||
|Network (1)|Google webpass business bay area 1gbps unlimited|$5500/mo|
|Network (2)|Hurricane Electric bay area colo 1gbps|$500/mo|
**Table 2:** Example high-end hardware setup for running a Solana client.
Despite the low barrier to entry as a validation-client, from a capital investment perspective, as in any developing economy, there will be much opportunity and need for trusted validation services as evidenced by node reliability, UX/UI, APIs and other software accessibility tools. Additionally, although Solana's validator node startup costs are nominal when compared to similar networks, they may still be somewhat restrictive for some potential participants. In the spirit of developing a true decentralized, permissionless network, these interested parties still have two options to become involved in the Solana network/economy:
1. Delegation of previously acquired tokens to a reliable validation node to earn a portion of interest generated
2. Provide local storage space as a replication-client and receive rewards by submitting Proof-of-Replication (see [Replication-client Economics](ed_replication_client_economics.md)).
a. This participant has the additional option to directly delegate their earned storage rewards ([Replication-client Reward Auto-delegation](ed_rce_replication_client_reward_auto_delegation.md))
Delegation of tokens to validation-clients, via option 1, provides a way for passive Solana token holders to become part of the active Solana economy and earn interest rates proportional to the interest rate generated by the delegated validation-client. Additionally, this feature creates a healthy validation-client market, with potential validation-client nodes competing to build reliable, transparent and profitable delegation services.

# Fork Generation
This chapter describes how forks naturally occur as a consequence of [leader
rotation](leader-rotation.md).
## Overview
Nodes take turns being leader and generating the PoH that encodes state
changes. The cluster can tolerate loss of connection to any leader by
synthesizing what the leader ***would*** have generated had it been connected
but not ingesting any state changes. The possible number of forks is thereby
limited to a "there/not-there" skip list of forks that may arise on leader
rotation slot boundaries. At any given slot, only a single leader's
transactions will be accepted.
## Message Flow
1. Transactions are ingested by the current leader.
2. Leader filters valid transactions.
3. Leader executes valid transactions updating its state.
4. Leader packages transactions into entries based off its current PoH slot.
5. Leader transmits the entries to validator nodes (in signed blobs)
1. The PoH stream includes ticks; empty entries that indicate liveness of
the leader and the passage of time on the cluster.
2. A leader's stream begins with the tick entries necessary to complete the PoH
back to the leader's most recently observed prior leader slot.
6. Validators retransmit entries to peers in their set and to further
downstream nodes.
7. Validators validate the transactions and execute them on their state.
8. Validators compute the hash of the state.
9. At specific times, i.e. specific PoH tick counts, validators transmit votes
to the leader.
1. Votes are signatures of the hash of the computed state at that PoH tick
count
2. Votes are also propagated via gossip
10. Leader executes the votes as any other transaction and broadcasts them to
the cluster.
11. Validators observe their votes and all the votes from the cluster.
## Partitions, Forks
Forks can arise at PoH tick counts that correspond to a vote. The next leader
may not have observed the last vote slot and may start their slot with
generated virtual PoH entries. These empty ticks are generated by all nodes in
the cluster at a cluster-configured rate of `Z` hashes per tick.
There are only two possible versions of the PoH during a voting slot: PoH with
`T` ticks and entries generated by the current leader, or PoH with just ticks.
The "just ticks" version of the PoH can be thought of as a virtual ledger, one
that all nodes in the cluster can derive from the last tick in the previous
slot.
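Because a tick is just a fixed number of sequential hashes, any node can derive the "just ticks" fork locally. A sketch, assuming SHA-256 for the hash function and illustrative parameters:

```python
import hashlib

def generate_ticks(last_hash, num_ticks, hashes_per_tick):
    """Derive the virtual 'just ticks' PoH stream from the last tick of
    the previous slot: each tick is `hashes_per_tick` (`Z`) sequential
    hashes with no entries mixed in, so every node that has the
    previous slot's last tick computes an identical stream."""
    ticks = []
    h = last_hash
    for _ in range(num_ticks):
        for _ in range(hashes_per_tick):
            h = hashlib.sha256(h).digest()
        ticks.append(h)
    return ticks
```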
Validators can ignore forks at other points (e.g. from the wrong leader), or
slash the leader responsible for the fork.
Validators vote based on a greedy choice to maximize their reward described in
[forks selection](fork-selection.md).
### Validator's View
#### Time Progression
The diagram below represents a validator's view of the
PoH stream with possible forks over time. L1, L2, etc. are leader slots, and
`E`s represent entries from that leader during that leader's slot. The `x`s
represent ticks only, and time flows downwards in the diagram.
<img alt="Fork generation" src="img/fork-generation.svg" class="center"/>
Note that an `E` appearing on 2 forks at the same slot is a slashable
condition, so a validator observing `E3` and `E3'` can slash L3 and safely
choose `x` for that slot. Once a validator commits to a fork, other forks can
be discarded below that tick count. For any slot, validators need only
consider a single "has entries" chain or a "ticks only" chain to be proposed by
a leader. But multiple virtual entries may overlap as they link back to a
previous slot.
#### Time Division
It's useful to consider leader rotation over PoH tick count as time division of
the job of encoding state for the cluster. The following table presents the
above tree of forks as a time-divided ledger.
leader slot | L1 | L2 | L3 | L4 | L5
-------|----|----|----|----|----
data | E1| E2 | E3 | E4 | E5
ticks since prev | | | | x | xx
Note that only data from leader L3 will be accepted during leader slot L3.
Data from L3 may include "catchup" ticks back to a slot other than L2 if L3 did
not observe L2's data. L4 and L5's transmissions include the "ticks to prev"
PoH entries.
This arrangement of the network data streams permits nodes to save exactly this
data to the ledger for replay, restart, and checkpoints.
### Leader's View
When a new leader begins a slot, it must first transmit any PoH (ticks)
required to link the new slot with the most recently observed and voted slot.
The fork the leader proposes would link the current slot to a previous fork
that the leader has voted on with virtual ticks.

# Fork Selection
This design describes a *Fork Selection* algorithm. It addresses the following
problems:
* Some forks may not end up accepted by the super-majority of the cluster, and
voters need to recover from voting on such forks.
* Many forks may be votable by different voters, and each voter may see a
different set of votable forks. The selected forks should eventually converge
for the cluster.
* Reward based votes have an associated risk. Voters should have the ability to
configure how much risk they take on.
* The [cost of rollback](#cost-of-rollback) needs to be computable. It is
important to clients that rely on some measurable form of Consistency. The
costs to break consistency need to be computable, and increase super-linearly
for older votes.
* ASIC speeds are different between nodes, and attackers could employ Proof of
History ASICs that are much faster than the rest of the cluster. Consensus
needs to be resistant to attacks that exploit the variability in Proof of
History ASIC speed.
For brevity this design assumes that a single voter with a stake is deployed as
an individual validator in the cluster.
## Time
The Solana cluster generates a source of time via a Verifiable Delay Function we
are calling [Proof of History](synchronization.md).
Proof of History is used to create a deterministic round robin schedule for all
the active leaders. At any given time only 1 leader, which can be computed from
the ledger itself, can propose a fork. For more details, see [fork
generation](fork-generation.md) and [leader rotation](leader-rotation.md).
## Lockouts
The purpose of the lockout is to force a validator to commit opportunity cost to
a specific fork. Lockouts are measured in slots, and therefore represent a
real-time forced delay that a validator needs to wait before breaking the
commitment to a fork.
Validators that violate the lockouts and vote for a diverging fork within the
lockout should be punished. The proposed punishment is to slash the validator
stake if a concurrent vote within a lockout for a non-descendant fork can be
proven to the cluster.
## Algorithm
The basic idea of this approach is to stack consensus votes and double lockouts.
Each vote in the stack is a confirmation of a fork. Each confirmed fork is an
ancestor of the fork above it. Each vote has a `lockout` in units of slots
before the validator can submit a vote that does not contain the confirmed fork
as an ancestor.
When a vote is added to the stack, the lockouts of all the previous votes in the
stack are doubled (more on this in [Rollback](#Rollback)). With each new vote,
a validator commits the previous votes to an ever-increasing lockout. At 32
votes we can consider the vote to be at `max lockout`; any votes with a lockout
equal to or above `1<<32` are dequeued (FIFO). Dequeuing a vote is the trigger
for a reward. If a vote expires before it is dequeued, it and all the votes
above it are popped (LIFO) from the vote stack. The validator needs to start
rebuilding the stack from that point.
### Rollback
Before a vote is pushed to the stack, all the votes leading up to it with a
lower lock expiration time than the new vote are popped. After a rollback,
lockouts are not doubled until the validator catches up to the rollback height
of votes.
For example, a vote stack with the following state:
| vote | vote time | lockout | lock expiration time |
|-----:|----------:|--------:|---------------------:|
| 4 | 4 | 2 | 6 |
| 3 | 3 | 4 | 7 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
*Vote 5* is at time 9, and the resulting state is
| vote | vote time | lockout | lock expiration time |
|-----:|----------:|--------:|---------------------:|
| 5 | 9 | 2 | 11 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
*Vote 6* is at time 10
| vote | vote time | lockout | lock expiration time |
|-----:|----------:|--------:|---------------------:|
| 6 | 10 | 2 | 12 |
| 5 | 9 | 4 | 13 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
At time 10 the new votes caught up to the previous votes. But *vote 2* expires
at 10, so when *vote 7* at time 11 is applied, the votes including and above
*vote 2* will be popped.
| vote | vote time | lockout | lock expiration time |
|-----:|----------:|--------:|---------------------:|
| 7 | 11 | 2 | 13 |
| 1 | 1 | 16 | 17 |
The lockout for vote 1 will not increase from 16 until the stack contains 5
votes.
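The rollback example above can be replayed with a small model of the vote stack. The names and the exact doubling rule below are our reading of the text, not the reference implementation:

```python
class Vote:
    def __init__(self, time):
        self.time = time
        self.confirmation_count = 1  # lockout = 2 ** confirmation_count

    @property
    def lockout(self):
        return 2 ** self.confirmation_count

    @property
    def expiration(self):
        return self.time + self.lockout


def apply_vote(stack, time):
    # A vote that expired before `time` is popped together with every
    # vote above it (LIFO).
    for i, vote in enumerate(stack):
        if vote.expiration < time:
            del stack[i:]
            break
    stack.append(Vote(time))
    # Double a lockout only once the stack above it is again deeper than
    # its confirmation count; after a rollback this leaves the surviving
    # lower lockouts untouched until the stack is rebuilt.
    for i, vote in enumerate(stack):
        if len(stack) - i > vote.confirmation_count:
            vote.confirmation_count += 1
```

Applying votes at times 1, 2, 3, 4, then 9, 10, and 11 reproduces the three tables above, including the un-doubled lockout of 16 on *vote 1*.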
### Slashing and Rewards
Validators should be rewarded for selecting the fork that the rest of the
cluster selected as often as possible. This is well-aligned with generating a
reward when the vote stack is full and the oldest vote needs to be dequeued.
Thus a reward should be generated for each successful dequeue.
### Cost of Rollback
Cost of rollback of *fork A* is defined as the cost in terms of lockout time to
the validator to confirm any other fork that does not include *fork A* as an
ancestor.
The **Economic Finality** of *fork A* can be calculated as the loss of all the
rewards from rollback of *fork A* and its descendants, plus the opportunity cost
of reward due to the exponentially growing lockout of the votes that have
confirmed *fork A*.
### Thresholds
Each validator can independently set a threshold of cluster commitment to a fork
before that validator commits to a fork. For example, at vote stack index 7,
the lockout is 256 time units. A validator may withhold votes and let votes 0-7
expire unless the vote at index 7 has greater than 50% commitment in the
cluster. This allows each validator to independently control how much risk to
commit to a fork. Committing to forks at a higher frequency would allow the
validator to earn more rewards.
### Algorithm parameters
The following parameters need to be tuned:
* Number of votes in the stack before dequeue occurs (32).
* Rate of growth for lockouts in the stack (2x).
* Starting default lockout (2).
* Threshold depth for minimum cluster commitment before committing to the fork
(8).
* Minimum cluster commitment size at threshold depth (50%+).
### Free Choice
A "Free Choice" is an unenforceable validator action. There is no way for the
protocol to encode and enforce these actions since each validator can modify the
code and adjust the algorithm. A validator that maximizes self-reward over all
possible futures should behave in such a way that the system is stable, and the
local greedy choice should result in a greedy choice over all possible futures.
A set of validators that engage in choices to disrupt the protocol should
be bound by their stake weight to the denial of service. Two options exist for
a validator:
* a validator can outrun previous validators in virtual generation and submit a
concurrent fork
* a validator can withhold a vote to observe multiple forks before voting
In both cases, the validators in the cluster have several forks to pick from
concurrently, even though each fork represents a different height. In both
cases it is impossible for the protocol to detect if the validator behavior is
intentional or not.
### Greedy Choice for Concurrent Forks
When evaluating multiple forks, each validator should use the following rules:
1. Forks must satisfy the *Threshold* rule.
2. Pick the fork that maximizes the total cluster lockout time for all the
ancestor forks.
3. Pick the fork that has the greatest amount of cluster transaction fees.
4. Pick the latest fork in terms of PoH.
Cluster transaction fees are fees that are deposited to the mining pool as
described in the [Staking Rewards](staking-rewards.md) section.
## PoH ASIC Resistance
Votes and lockouts grow exponentially while ASIC speed up is linear. There are
two possible attack vectors involving a faster ASIC.
### ASIC censorship
An attacker generates a concurrent fork that outruns previous leaders in an
effort to censor them. A fork proposed by this attacker will be available
concurrently with the next available leader. For nodes to pick this fork it
must satisfy the *Greedy Choice* rule.
1. Fork must have an equal number of votes for the ancestor fork.
2. Fork cannot be so far ahead as to cause expired votes.
3. Fork must have a greater amount of cluster transaction fees.
This attack is then limited to censoring the previous leader's fees and
individual transactions. But it cannot halt the cluster or reduce the
validator set compared to the concurrent fork. Fee censorship is limited to
the fees going to the leaders, but not the validators.
### ASIC Rollback
An attacker generates a concurrent fork from an older block to try to rollback
the cluster. In this attack the concurrent fork is competing with forks that
have already been voted on. This attack is limited by the exponential growth of
the lockouts.
* 1 vote has a lockout of 2 slots. Concurrent fork must be at least 2 slots
ahead, and be produced in 1 slot. Therefore requires an ASIC 2x faster.
* 2 votes have a lockout of 4 slots. Concurrent fork must be at least 4 slots
ahead and produced in 2 slots. Therefore requires an ASIC 2x faster.
* 3 votes have a lockout of 8 slots. Concurrent fork must be at least 8 slots
ahead and produced in 3 slots. Therefore requires an ASIC 2.6x faster.
* 10 votes have a lockout of 1024 slots. 1024/10, or 102.4x faster ASIC.
* 20 votes have a lockout of 2^20 slots. 2^20/20, or 52,428.8x faster ASIC.
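The required speedup figures follow directly from lockouts growing as `2^n` while production time grows only as `n`; a one-line helper (illustrative):

```python
def required_asic_speedup(num_votes):
    """After n votes the lockout is 2**n slots; a rollback fork must
    cover that many slots in the n slots it took to accumulate the
    votes, so the attacker's ASIC must be (2**n)/n times faster."""
    return (2 ** num_votes) / num_votes
```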

# Anatomy of a Fullnode
<img alt="Fullnode block diagrams" src="img/fullnode.svg" class="center"/>
## Pipelining
The fullnodes make extensive use of an optimization common in CPU design,
called *pipelining*. Pipelining is the right tool for the job when there's a
stream of input data that needs to be processed by a sequence of steps, and
there's different hardware responsible for each. The quintessential example is
using a washer and dryer to wash/dry/fold several loads of laundry. Washing
must occur before drying and drying before folding, but each of the three
operations is performed by a separate unit. To maximize efficiency, one creates
a pipeline of *stages*. We'll call the washer one stage, the dryer another, and
the folding process a third. To run the pipeline, one adds a second load of
laundry to the washer just after the first load is added to the dryer.
Likewise, the third load is added to the washer after the second is in the
dryer and the first is being folded. In this way, one can make progress on
three loads of laundry simultaneously. Given infinite loads, the pipeline will
consistently complete a load at the rate of the slowest stage in the pipeline.
## Pipelining in the Fullnode
The fullnode contains two pipelined processes, one used in leader mode called
the TPU and one used in validator mode called the TVU. In both cases, the
hardware being pipelined is the same, the network input, the GPU cards, the CPU
cores, writes to disk, and the network output. What it does with that hardware
is different. The TPU exists to create ledger entries whereas the TVU exists
to validate them.

# Getting Started
The Solana git repository contains all the scripts you might need to spin up your
own local testnet. Depending on what you're looking to achieve, you may want to
run a different variation, as the full-fledged, performance-enhanced
multinode testnet is considerably more complex to set up than a Rust-only,
singlenode testnet. If you are looking to develop high-level features, such
as experimenting with smart contracts, save yourself some setup headaches and
stick to the Rust-only singlenode demo. If you're doing performance optimization
of the transaction pipeline, consider the enhanced singlenode demo. If you're
doing consensus work, you'll need at least a Rust-only multinode demo. If you want
to reproduce our TPS metrics, run the enhanced multinode demo.
For all four variations, you'd need the latest Rust toolchain and the Solana
source code:
First, install Rust's package manager Cargo.
```bash
$ curl https://sh.rustup.rs -sSf | sh
$ source $HOME/.cargo/env
```
Now checkout the code from github:
```bash
$ git clone https://github.com/solana-labs/solana.git
$ cd solana
```
The demo code is sometimes broken between releases as we add new low-level
features, so if this is your first time running the demo, you'll improve
your odds of success if you check out the
[latest release](https://github.com/solana-labs/solana/releases)
before proceeding:
```bash
$ TAG=$(git describe --tags $(git rev-list --tags --max-count=1))
$ git checkout $TAG
```
### Configuration Setup
Ensure important programs such as the vote program are built before any
nodes are started
```bash
$ cargo build --all
```
The network is initialized with a genesis ledger and fullnode configuration files.
These files can be generated by running the following script.
```bash
$ ./multinode-demo/setup.sh
```
### Drone
In order for the fullnodes and clients to work, we'll need to
spin up a drone to give out some test tokens. The drone delivers Milton
Friedman-style "air drops" (free tokens to requesting clients) to be used in
test transactions.
Start the drone with:
```bash
$ ./multinode-demo/drone.sh
```
### Singlenode Testnet
Before you start a fullnode, make sure you know the IP address of the machine you
want to be the bootstrap leader for the demo, and make sure that udp ports 8000-10000 are
open on all the machines you want to test with.
Now start the bootstrap leader in a separate shell:
```bash
$ ./multinode-demo/bootstrap-leader.sh
```
Wait a few seconds for the server to initialize. It will print "leader ready..." when it's ready to
receive transactions. The leader will request some tokens from the drone if it doesn't have any.
The drone does not need to be running for subsequent leader starts.
### Multinode Testnet
To run a multinode testnet, after starting a leader node, spin up some
additional full nodes in separate shells:
```bash
$ ./multinode-demo/fullnode-x.sh
```
To run a performance-enhanced full node on Linux,
[CUDA 10.0](https://developer.nvidia.com/cuda-downloads) must be installed on
your system:
```bash
$ ./fetch-perf-libs.sh
$ SOLANA_CUDA=1 ./multinode-demo/bootstrap-leader.sh
$ SOLANA_CUDA=1 ./multinode-demo/fullnode-x.sh
```
### Testnet Client Demo
Now that your singlenode or multinode testnet is up and running let's send it
some transactions!
In a separate shell start the client:
```bash
$ ./multinode-demo/client.sh # runs against localhost by default
```
What just happened? The client demo spins up several threads to send 500,000 transactions
to the testnet as quickly as it can. The client then pings the testnet periodically to see
how many transactions it processed in that time. Take note that the demo intentionally
floods the network with UDP packets, such that the network will almost certainly drop a
bunch of them. This ensures the testnet has an opportunity to reach 710k TPS. The client
demo completes after it has convinced itself the testnet won't process any additional
transactions. You should see several TPS measurements printed to the screen. In the
multinode variation, you'll see TPS measurements for each validator node as well.
### Testnet Debugging
There are some useful debug messages in the code; you can enable them on a per-module and per-level
basis. Before running a leader or validator, set the normal RUST\_LOG environment variable.
For example
* To enable `info` everywhere and `debug` only in the solana::banking_stage module:
```bash
$ export RUST_LOG=solana=info,solana::banking_stage=debug
```
* To enable BPF program logging:
```bash
$ export RUST_LOG=solana_bpf_loader=trace
```
Generally we are using `debug` for infrequent debug messages, `trace` for potentially frequent
messages and `info` for performance-related logging.
You can also attach to a running process with GDB. The leader's process is named
_solana-fullnode_:
```bash
$ sudo gdb
attach <PID>
set logging on
thread apply all bt
```
This will dump all the threads' stack traces into gdb.txt.
## Public Testnet
In this example the client connects to our public testnet. To run validators on the testnet you would need to open udp ports `8000-10000`.
```bash
$ ./multinode-demo/client.sh --network $(dig +short testnet.solana.com):8001 --duration 60
```
You can observe the effects of your client's transactions on our [dashboard](https://metrics.solana.com:3000/d/testnet/testnet-hud?orgId=2&from=now-30m&to=now&refresh=5s&var-testnet=testnet)

# Gossip Service
The Gossip Service acts as a gateway to nodes in the control plane. Fullnodes
use the service to ensure information is available to all other nodes in a cluster.
The service broadcasts information using a gossip protocol.
## Gossip Overview
Nodes continuously share signed data objects among themselves in order to
manage a cluster. For example, they share their contact information, ledger
height, and votes.
Every tenth of a second, each node sends a "push" message and/or a "pull"
message. Push and pull messages may elicit responses, and push messages may be
forwarded on to others in the cluster.
Gossip runs on a well-known UDP/IP port or a port in a well-known range. Once
a cluster is bootstrapped, nodes advertise to each other where to find their
gossip endpoint (a socket address).
## Gossip Records
Records shared over gossip are arbitrary, but signed and versioned (with a
timestamp) as needed to make sense to the node receiving them. If a node
receives two records from the same source, it updates its own copy with the
record that has the most recent timestamp.
## Gossip Service Interface
### Push Message
A node sends a push message to tell the cluster it has information to share.
Nodes send push messages to `PUSH_FANOUT` push peers.
Upon receiving a push message, a node examines the message for:
1. Duplication: if the message has been seen before, the node responds with
`PushMessagePrune` and drops the message
2. New data: if the message is new to the node
* Stores the new information with an updated version in its cluster info and
purges any previous older value
* Stores the message in `pushed_once` (used for detecting duplicates,
purged after `PUSH_MSG_TIMEOUT * 5` ms)
* Retransmits the message to its own push peers
3. Expiration: nodes drop push messages that are older than `PUSH_MSG_TIMEOUT`
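The three checks above can be sketched as a small decision function. This is an illustrative sketch, not the actual implementation; the timeout value and the `u64` message id are assumptions:

```rust
use std::collections::HashMap;

/// Timeout after which push messages expire (illustrative value).
const PUSH_MSG_TIMEOUT: u64 = 30_000; // ms

#[derive(Debug, PartialEq)]
enum PushAction {
    /// 1. Duplication: respond with `PushMessagePrune` and drop the message.
    Prune,
    /// 2. New data: store it and retransmit to our own push peers.
    StoreAndRetransmit,
    /// 3. Expiration: the message is older than `PUSH_MSG_TIMEOUT`; drop it.
    Expired,
}

/// Examine an incoming push message, given the set of messages already
/// pushed once (message id -> time first seen, in ms).
fn handle_push(
    pushed_once: &mut HashMap<u64, u64>,
    msg_id: u64,
    msg_wallclock_ms: u64,
    now_ms: u64,
) -> PushAction {
    if now_ms.saturating_sub(msg_wallclock_ms) > PUSH_MSG_TIMEOUT {
        return PushAction::Expired;
    }
    if pushed_once.contains_key(&msg_id) {
        return PushAction::Prune;
    }
    // Entries in `pushed_once` are purged after `PUSH_MSG_TIMEOUT * 5` ms.
    pushed_once.retain(|_, seen| now_ms.saturating_sub(*seen) <= PUSH_MSG_TIMEOUT * 5);
    pushed_once.insert(msg_id, now_ms);
    PushAction::StoreAndRetransmit
}
```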
### Push Peers, Prune Message
A node selects its push peers at random from the active set of known peers.
The node keeps this selection for a relatively long time. When a prune message
is received, the node drops the push peer that sent the prune. Prune is an
indication that there is another, faster path to that node than direct push.
The set of push peers is kept fresh by rotating a new node into the set every
`PUSH_MSG_TIMEOUT/2` milliseconds.
### Pull Message
A node sends a pull message to ask the cluster if there is any new information.
A pull message is sent to a single peer at random and comprises a Bloom filter
that represents things it already has. A node receiving a pull message
iterates over its values and constructs a pull response of things that miss the
filter and would fit in a message.
A node constructs the pull Bloom filter by iterating over current values and
recently purged values.
A node handles items in a pull response the same way it handles new data in a
push message.
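A minimal sketch of the pull exchange, using a toy Bloom filter over `u64` value ids. The real filter and value types are richer; this only illustrates the "respond with things that miss the filter" rule:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// A minimal Bloom filter over `u64` value ids.
struct Bloom {
    bits: Vec<bool>,
    num_hashes: u64,
}

impl Bloom {
    fn new(num_bits: usize, num_hashes: u64) -> Self {
        Bloom { bits: vec![false; num_bits], num_hashes }
    }

    fn index(&self, item: u64, seed: u64) -> usize {
        let mut h = DefaultHasher::new();
        (item, seed).hash(&mut h);
        (h.finish() as usize) % self.bits.len()
    }

    fn add(&mut self, item: u64) {
        for seed in 0..self.num_hashes {
            let i = self.index(item, seed);
            self.bits[i] = true;
        }
    }

    /// May return false positives, never false negatives.
    fn contains(&self, item: u64) -> bool {
        (0..self.num_hashes).all(|seed| self.bits[self.index(item, seed)])
    }
}

/// Build a pull response: every local value that misses the requester's
/// filter, up to `max_items` that would fit in one response message.
fn pull_response(local_values: &[u64], filter: &Bloom, max_items: usize) -> Vec<u64> {
    local_values
        .iter()
        .copied()
        .filter(|v| !filter.contains(*v))
        .take(max_items)
        .collect()
}
```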
## Purging
Nodes retain prior versions of values (those updated by a pull or push) and
expired values (those older than `GOSSIP_PULL_CRDS_TIMEOUT_MS`) in
`purged_values` (values the node recently had). Nodes purge entries from
`purged_values` that are older than `5 * GOSSIP_PULL_CRDS_TIMEOUT_MS`.
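The retention rule can be sketched as follows; the timeout constant here is an assumed illustrative value:

```rust
use std::collections::HashMap;

/// Assumed illustrative value; the real constant lives in the gossip code.
const GOSSIP_PULL_CRDS_TIMEOUT_MS: u64 = 15_000;

/// Drop entries of `purged_values` (value id -> time purged, in ms) that
/// are older than `5 * GOSSIP_PULL_CRDS_TIMEOUT_MS`.
fn purge_old(purged_values: &mut HashMap<u64, u64>, now_ms: u64) {
    purged_values
        .retain(|_, purged_at| now_ms.saturating_sub(*purged_at) <= 5 * GOSSIP_PULL_CRDS_TIMEOUT_MS);
}
```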
## Eclipse Attacks
An eclipse attack is an attempt to take over the set of node connections with
adversarial endpoints.
This is relevant to our implementation in the following ways.
* Pull messages select a random node from the network. An eclipse attack on
*pull* would require an attacker to influence the random selection in such a way
that only adversarial nodes are selected for pull.
* Push messages maintain an active set of nodes and select a random fanout for
every push message. An eclipse attack on *push* would influence the active set
selection, or the random fanout selection.
### Time and Stake based weights
Weights are calculated based on `time since last picked` and the `natural log` of the `stake weight`.
Taking the `ln` of the stake weight allows giving all nodes a fairer chance of network
coverage in a reasonable amount of time. It helps normalize the large possible `stake weight` differences between nodes.
This way a node with low `stake weight`, compared to a node with a large `stake weight`,
will only have to wait a few multiples of ln(`stake`) seconds before it gets picked.
There is no way for an adversary to influence these parameters.
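An illustrative weight function under these rules. The exact formula is an assumption; the text only specifies time since last picked and the natural log of the stake as inputs:

```rust
/// Illustrative weight: grows with time since the node was last picked,
/// scaled by ln(stake) so large stakes don't dominate selection.
fn weight(secs_since_last_picked: u64, stake: u64) -> f64 {
    let time_factor = (secs_since_last_picked + 1) as f64;
    // Clamp so zero-stake and tiny-stake nodes still get a nonzero weight.
    let stake_factor = (stake.max(1) as f64).ln().max(1.0);
    time_factor * stake_factor
}
```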
### Pull Message
A node is selected as a pull target based on the weights described above.
### Push Message
A prune message can only remove an adversary from a potential connection.
Just like *pull message*, nodes are selected into the active set based on weights.
## Notable differences from PlumTree
The active push protocol described here is based on [Plum
Tree](https://haslab.uminho.pt/jop/files/lpr07a.pdf). The main differences are:
* Push messages have a wallclock that is signed by the originator. Once the
wallclock expires the message is dropped. A hop limit is difficult to implement
in an adversarial setting.
* Lazy Push is not implemented because it's not obvious how to prevent an
adversary from forging the message fingerprint. A naive approach would allow an
adversary to be prioritized for pull based on their input.


---
*File: `book/src/introduction.md`*
# What is Solana?
Solana is the name of an open source project that is implementing a new,
high-performance, permissionless blockchain. Solana is also the name of a
company headquartered in San Francisco that maintains the open source project.
# About this Book
This book describes the Solana open source project, a blockchain built from the
ground up for scale. The book covers why it's useful, how to use it, how it
works, and why it will continue to work long after the company Solana closes
its doors. The goal of the Solana architecture is to demonstrate there exists a
set of software algorithms that when used in combination to implement a
blockchain, removes software as a performance bottleneck, allowing transaction
throughput to scale proportionally with network bandwidth. The architecture
goes on to satisfy all three desirable properties of a proper blockchain:
it is scalable, secure and decentralized.
The architecture describes a theoretical upper bound of 710 thousand
transactions per second (tps) on a standard gigabit network and 28.4 million
tps on 40 gigabit. Furthermore, the architecture supports safe, concurrent
execution of programs authored in general purpose programming languages such as
C or Rust.
# Disclaimer
All claims, content, designs, algorithms, estimates, roadmaps, specifications,
and performance measurements described in this project are done with the
author's best effort. It is up to the reader to check and validate their
accuracy and truthfulness. Furthermore, nothing in this project constitutes a
solicitation for investment.
# History of the Solana Codebase
In November of 2017, Anatoly Yakovenko published a whitepaper describing Proof
of History, a technique for keeping time between computers that do not trust
one another. From Anatoly's previous experience designing distributed systems
at Qualcomm, Mesosphere and Dropbox, he knew that a reliable clock makes
network synchronization very simple. When synchronization is simple the
resulting network can be blazing fast, bound only by network bandwidth.
Anatoly watched as blockchain systems without clocks, such as Bitcoin and
Ethereum, struggled to scale beyond 15 transactions per second worldwide when
centralized payment systems such as Visa required peaks of 65,000 tps. Without a
clock, it was clear they'd never graduate to being the global payment system or
global supercomputer most had dreamed them to be. When Anatoly solved the problem of
getting computers that don't trust each other to agree on time, he knew he had
the key to bring 40 years of distributed systems research to the world of
blockchain. The resulting cluster wouldn't be just 10 times faster, or a 100
times, or a 1,000 times, but 10,000 times faster, right out of the gate!
Anatoly's implementation began in a private codebase and was implemented in the
C programming language. Greg Fitzgerald, who had previously worked with Anatoly
at semiconductor giant Qualcomm Incorporated, encouraged him to reimplement the
project in the Rust programming language. Greg had worked on the LLVM compiler
infrastructure, which underlies both the Clang C/C++ compiler as well as the
Rust compiler. Greg claimed that the language's safety guarantees would improve
software productivity and that its lack of a garbage collector would allow
programs to perform as well as those written in C. Anatoly gave it a shot and
just two weeks later, had migrated his entire codebase to Rust. Sold. With
plans to weave all the world's transactions together on a single, scalable
blockchain, Anatoly called the project Loom.
On February 13th of 2018, Greg began prototyping the first open source
implementation of Anatoly's whitepaper. The project was published to GitHub
under the name Silk in the loomprotocol organization. On February 28th, Greg
made his first release, demonstrating 10 thousand signed transactions could be
verified and processed in just over half a second. Shortly after, another
former Qualcomm cohort, Stephen Akridge, demonstrated throughput could be
massively improved by offloading signature verification to graphics processors.
Anatoly recruited Greg, Stephen and three others to co-found a company, then
called Loom.
Around the same time, Ethereum-based project Loom Network sprung up and many
people were confused about whether they were the same project. The Loom team decided it
would rebrand. They chose the name Solana, a nod to a small beach town North of
San Diego called Solana Beach, where Anatoly, Greg and Stephen lived and surfed
for three years when they worked for Qualcomm. On March 28th, the team created
the Solana Labs GitHub organization and renamed Greg's prototype Silk to
Solana.
In June of 2018, the team scaled up the technology to run on cloud-based
networks and on July 19th, published a 50-node, permissioned, public testnet
consistently supporting bursts of 250,000 transactions per second. In a later release in
December, called v0.10 Pillbox, the team published a permissioned testnet
running 150 nodes on a gigabit network and demonstrated soak tests processing
an *average* of 200 thousand transactions per second with bursts over 500
thousand. The project was also extended to support on-chain programs written in
the C programming language and run concurrently in a safe execution environment
called BPF.
# What is a Solana Cluster?
A cluster is a set of computers that work together and can be viewed from the
outside as a single system. A Solana cluster is a set of independently owned
computers working together (and sometimes against each other) to verify the
output of untrusted, user-submitted programs. A Solana cluster can be utilized
any time a user wants to preserve an immutable record of events in time or
programmatic interpretations of those events. One use is to track which of the
computers did meaningful work to keep the cluster running. Another use might be
to track the possession of real-world assets. In each case, the cluster
produces a record of events called the ledger. It will be preserved for the
lifetime of the cluster. As long as someone somewhere in the world maintains a
copy of the ledger, the output of its programs (which may contain a record of
who possesses what) will forever be reproducible, independent of the
organization that launched it.
# What are Sols?
A sol is the name of Solana's native token, which can be passed to nodes in a
Solana cluster in exchange for running an on-chain program or validating its
output. The Solana protocol defines that only 1 billion sols will ever exist,
but that the system may perform micropayments of fractional sols, and that a sol
may be split as many as 34 times. The fractional sol is called a *lamport*. It
is named in honor of Solana's biggest technical influence, [Leslie
Lamport](https://en.wikipedia.org/wiki/Leslie_Lamport). A lamport has a value
of approximately 0.0000000000582 sol (2^-34).
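The arithmetic can be checked directly; a sketch, where the constants follow from the 34-way split described above:

```rust
/// A sol may be split as many as 34 times, so one sol is 2^34 lamports.
fn lamports_per_sol() -> u64 {
    1u64 << 34
}

/// The value of one lamport in sol: 2^-34, approximately 0.0000000000582.
fn lamport_value_in_sol() -> f64 {
    1.0 / lamports_per_sol() as f64
}
```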

---
# JavaScript API
See [solana-web3](https://solana-labs.github.io/solana-web3.js/).

---
*File: `book/src/jsonrpc-api.md`*
JSON RPC API
===
Solana nodes accept HTTP requests using the [JSON-RPC 2.0](https://www.jsonrpc.org/specification) specification.
To interact with a Solana node inside a JavaScript application, use the [solana-web3.js](https://github.com/solana-labs/solana-web3.js) library, which gives a convenient interface for the RPC methods.
RPC HTTP Endpoint
---
**Default port:** 8899
e.g. http://localhost:8899, http://192.168.1.88:8899
RPC PubSub WebSocket Endpoint
---
**Default port:** 8900
e.g. ws://localhost:8900, ws://192.168.1.88:8900
Methods
---
* [confirmTransaction](#confirmtransaction)
* [getAccountInfo](#getaccountinfo)
* [getBalance](#getbalance)
* [getRecentBlockhash](#getrecentblockhash)
* [getSignatureStatus](#getsignaturestatus)
* [getTransactionCount](#gettransactioncount)
* [requestAirdrop](#requestairdrop)
* [sendTransaction](#sendtransaction)
* [startSubscriptionChannel](#startsubscriptionchannel)
* [Subscription Websocket](#subscription-websocket)
* [accountSubscribe](#accountsubscribe)
* [accountUnsubscribe](#accountunsubscribe)
* [programSubscribe](#programsubscribe)
* [programUnsubscribe](#programunsubscribe)
* [signatureSubscribe](#signaturesubscribe)
* [signatureUnsubscribe](#signatureunsubscribe)
Request Formatting
---
To make a JSON-RPC request, send an HTTP POST request with a `Content-Type: application/json` header. The JSON request data should contain 4 fields:
* `jsonrpc`, set to `"2.0"`
* `id`, a unique client-generated identifying integer
* `method`, a string containing the method to be invoked
* `params`, a JSON array of ordered parameter values
Example using curl:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getBalance", "params":["83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri"]}' 192.168.1.88:8899
```
The response output will be a JSON object with the following fields:
* `jsonrpc`, matching the request specification
* `id`, matching the request identifier
* `result`, requested data or success confirmation
Requests can be sent in batches by sending an array of JSON-RPC request objects as the data for a single POST.
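A sketch of assembling such a batch body. The helper names are hypothetical and `params` is passed as pre-encoded JSON text; this only illustrates the request shape:

```rust
/// Build a single JSON-RPC 2.0 request object as a JSON string.
/// `params` must already be valid JSON text (e.g. "null" or "[\"...\"]").
fn rpc_request(id: u64, method: &str, params: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"{}","params":{}}}"#,
        id, method, params
    )
}

/// Batch several requests into one POST body: a JSON array of request objects.
fn rpc_batch(requests: &[String]) -> String {
    format!("[{}]", requests.join(","))
}
```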
Definitions
---
* Hash: A SHA-256 hash of a chunk of data.
* Pubkey: The public key of an Ed25519 key-pair.
* Signature: An Ed25519 signature of a chunk of data.
* Transaction: A Solana instruction signed by a client key-pair.
JSON RPC API Reference
---
### confirmTransaction
Returns a transaction receipt
##### Parameters:
* `string` - Signature of Transaction to confirm, as base-58 encoded string
##### Results:
* `boolean` - Transaction status, true if Transaction is confirmed
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"confirmTransaction", "params":["5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":true,"id":1}
```
---
### getBalance
Returns the balance of the account of provided Pubkey
##### Parameters:
* `string` - Pubkey of account to query, as base-58 encoded string
##### Results:
* `integer` - quantity, as a signed 64-bit integer
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getBalance", "params":["83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":0,"id":1}
```
---
### getAccountInfo
Returns all information associated with the account of provided Pubkey
##### Parameters:
* `string` - Pubkey of account to query, as base-58 encoded string
##### Results:
The result field will be a JSON object with the following sub fields:
* `lamports`, number of lamports assigned to this account, as a signed 64-bit integer
* `owner`, array of 32 bytes representing the program this account has been assigned to
* `userdata`, array of bytes representing any userdata associated with the account
* `executable`, boolean indicating if the account contains a program (and is strictly read-only)
* `loader`, array of 32 bytes representing the loader for this program (if `executable`), otherwise all zeros
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getAccountInfo", "params":["2gVkYWexTHR5Hb2aLeQN3tnngvWzisFKXDUPrgMHpdST"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"executable":false,"loader":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"owner":[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lamports":1,"userdata":[3,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,20,0,0,0,0,0,0,0,50,48,53,48,45,48,49,45,48,49,84,48,48,58,48,48,58,48,48,90,252,10,7,28,246,140,88,177,98,82,10,227,89,81,18,30,194,101,199,16,11,73,133,20,246,62,114,39,20,113,189,32,50,0,0,0,0,0,0,0,247,15,36,102,167,83,225,42,133,127,82,34,36,224,207,130,109,230,224,188,163,33,213,13,5,117,211,251,65,159,197,51,0,0,0,0,0,0]},"id":1}
```
---
### getRecentBlockhash
Returns a recent block hash from the ledger
##### Parameters:
None
##### Results:
* `string` - a Hash as base-58 encoded string
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getRecentBlockhash"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"GH7ome3EiwEr7tu9JuTh2dpYWBJK3z69Xm1ZE3MEE6JC","id":1}
```
---
### getSignatureStatus
Returns the status of a given signature. This method is similar to
[confirmTransaction](#confirmtransaction) but provides more resolution for error
events.
##### Parameters:
* `string` - Signature of Transaction to confirm, as base-58 encoded string
##### Results:
* `string` - Transaction status:
* `Confirmed` - Transaction was successful
* `SignatureNotFound` - Unknown transaction
* `ProgramRuntimeError` - An error occurred in the program that processed this Transaction
* `AccountInUse` - Another Transaction had a write lock on one of the Accounts specified in this Transaction. The Transaction may succeed if retried
* `GenericFailure` - Some other error occurred. **Note**: In the future new Transaction statuses may be added to this list. It's safe to assume that all new statuses will be more specific error conditions that previously presented as `GenericFailure`
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getSignatureStatus", "params":["5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"SignatureNotFound","id":1}
```
---
### getTransactionCount
Returns the current Transaction count from the ledger
##### Parameters:
None
##### Results:
* `integer` - count, as unsigned 64-bit integer
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getTransactionCount"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":268,"id":1}
```
---
### requestAirdrop
Requests an airdrop of lamports to a Pubkey
##### Parameters:
* `string` - Pubkey of account to receive lamports, as base-58 encoded string
* `integer` - lamports, as a signed 64-bit integer
##### Results:
* `string` - Transaction Signature of airdrop, as base-58 encoded string
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"requestAirdrop", "params":["83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri", 50]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW","id":1}
```
---
### sendTransaction
Submits a signed Transaction to the cluster for processing
##### Parameters:
* `array` - array of octets containing a fully-signed Transaction
##### Results:
* `string` - Transaction Signature, as base-58 encoded string
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"sendTransaction", "params":[[61, 98, 55, 49, 15, 187, 41, 215, 176, 49, 234, 229, 228, 77, 129, 221, 239, 88, 145, 227, 81, 158, 223, 123, 14, 229, 235, 247, 191, 115, 199, 71, 121, 17, 32, 67, 63, 209, 239, 160, 161, 2, 94, 105, 48, 159, 235, 235, 93, 98, 172, 97, 63, 197, 160, 164, 192, 20, 92, 111, 57, 145, 251, 6, 40, 240, 124, 194, 149, 155, 16, 138, 31, 113, 119, 101, 212, 128, 103, 78, 191, 80, 182, 234, 216, 21, 121, 243, 35, 100, 122, 68, 47, 57, 13, 39, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 0, 0, 40, 240, 124, 194, 149, 155, 16, 138, 31, 113, 119, 101, 212, 128, 103, 78, 191, 80, 182, 234, 216, 21, 121, 243, 35, 100, 122, 68, 47, 57, 11, 12, 106, 49, 74, 226, 201, 16, 161, 192, 28, 84, 124, 97, 190, 201, 171, 186, 6, 18, 70, 142, 89, 185, 176, 154, 115, 61, 26, 163, 77, 1, 88, 98, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"2EBVM6cB8vAAD93Ktr6Vd8p67XPbQzCJX47MpReuiCXJAtcjaxpvWpcg9Ege1Nr5Tk3a2GFrByT7WPBjdsTycY9b","id":1}
```
---
### Subscription Websocket
After connecting to the RPC PubSub websocket at `ws://<ADDRESS>/`:
- Submit subscription requests to the websocket using the methods below
- Multiple subscriptions may be active at once
---
### accountSubscribe
Subscribe to an account to receive notifications when the lamports or userdata
for a given account public key changes
##### Parameters:
* `string` - account Pubkey, as base-58 encoded string
##### Results:
* `integer` - Subscription id (needed to unsubscribe)
##### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"accountSubscribe", "params":["CM78CPUeXjn8o3yroDHxUtKsZZgoy4GPkPPXfouKNH12"]}
// Result
{"jsonrpc": "2.0","result": 0,"id": 1}
```
##### Notification Format:
```bash
{"jsonrpc": "2.0","method": "accountNotification", "params": {"result": {"executable":false,"loader":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"owner":[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lamports":1,"userdata":[3,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,20,0,0,0,0,0,0,0,50,48,53,48,45,48,49,45,48,49,84,48,48,58,48,48,58,48,48,90,252,10,7,28,246,140,88,177,98,82,10,227,89,81,18,30,194,101,199,16,11,73,133,20,246,62,114,39,20,113,189,32,50,0,0,0,0,0,0,0,247,15,36,102,167,83,225,42,133,127,82,34,36,224,207,130,109,230,224,188,163,33,213,13,5,117,211,251,65,159,197,51,0,0,0,0,0,0]},"subscription":0}}
```
---
### accountUnsubscribe
Unsubscribe from account change notifications
##### Parameters:
* `integer` - id of account Subscription to cancel
##### Results:
* `bool` - unsubscribe success message
##### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"accountUnsubscribe", "params":[0]}
// Result
{"jsonrpc": "2.0","result": true,"id": 1}
```
---
### programSubscribe
Subscribe to a program to receive notifications when the lamports or userdata
for a given account owned by the program changes
##### Parameters:
* `string` - program_id Pubkey, as base-58 encoded string
##### Results:
* `integer` - Subscription id (needed to unsubscribe)
##### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"programSubscribe", "params":["9gZbPtbtHrs6hEWgd6MbVY9VPFtS5Z8xKtnYwA2NynHV"]}
// Result
{"jsonrpc": "2.0","result": 0,"id": 1}
```
##### Notification Format:
* `string` - account Pubkey, as base-58 encoded string
* `object` - account info JSON object (see [getAccountInfo](#getaccountinfo) for field details)
```bash
{"jsonrpc":"2.0","method":"programNotification","params":{{"result":["8Rshv2oMkPu5E4opXTRyuyBeZBqQ4S477VG26wUTFxUM",{"executable":false,"lamports":1,"owner":[129,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"userdata":[1,1,1,0,0,0,0,0,0,0,20,0,0,0,0,0,0,0,50,48,49,56,45,49,50,45,50,52,84,50,51,58,53,57,58,48,48,90,235,233,39,152,15,44,117,176,41,89,100,86,45,61,2,44,251,46,212,37,35,118,163,189,247,84,27,235,178,62,55,89,0,0,0,0,50,0,0,0,0,0,0,0,235,233,39,152,15,44,117,176,41,89,100,86,45,61,2,44,251,46,212,37,35,118,163,189,247,84,27,235,178,62,45,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]}],"subscription":0}}
```
---
### programUnsubscribe
Unsubscribe from program-owned account change notifications
##### Parameters:
* `integer` - id of account Subscription to cancel
##### Results:
* `bool` - unsubscribe success message
##### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"programUnsubscribe", "params":[0]}
// Result
{"jsonrpc": "2.0","result": true,"id": 1}
```
---
### signatureSubscribe
Subscribe to a transaction signature to receive notification when the transaction is confirmed
On `signatureNotification`, the subscription is automatically cancelled
##### Parameters:
* `string` - Transaction Signature, as base-58 encoded string
##### Results:
* `integer` - subscription id (needed to unsubscribe)
##### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"signatureSubscribe", "params":["2EBVM6cB8vAAD93Ktr6Vd8p67XPbQzCJX47MpReuiCXJAtcjaxpvWpcg9Ege1Nr5Tk3a2GFrByT7WPBjdsTycY9b"]}
// Result
{"jsonrpc": "2.0","result": 0,"id": 1}
```
##### Notification Format:
```bash
{"jsonrpc": "2.0","method": "signatureNotification", "params": {"result": "Confirmed","subscription":0}}
```
---
### signatureUnsubscribe
Unsubscribe from signature confirmation notification
##### Parameters:
* `integer` - subscription id to cancel
##### Results:
* `bool` - unsubscribe success message
##### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"signatureUnsubscribe", "params":[0]}
// Result
{"jsonrpc": "2.0","result": true,"id": 1}
```

---
# Leader to Leader Transition
This design describes how leaders transition production of the PoH ledger
between each other as each leader generates its own slot.
## Challenges
The current leader and the next leader are both racing to generate the final tick
for the current slot. The next leader may arrive at that slot while still
processing the current leader's entries.
The ideal scenario would be that the next leader generated its own slot right
after it was able to vote for the current leader. It is very likely that the
next leader will arrive at their PoH slot height before the current leader
finishes broadcasting the entire block.
The next leader has to make the decision of attaching its own block to the last
completed block, or wait to finalize the pending block. It is possible that the
next leader will produce a block that proposes that the current leader failed,
even though the rest of the network observes that block succeeding.
The current leader has incentives to start its slot as early as possible to
capture economic rewards. Those incentives need to be balanced by the leader's
need to attach its block to a block that has the most commitment from the rest
of the network.
## Leader timeout
While a leader is actively receiving entries for the previous slot, the leader
can delay broadcasting the start of its block in real time. The delay is
locally configurable by each leader, and can be adjusted dynamically based on the
previous leader's behavior. If the previous leader's block is confirmed by the
leader's TVU before the timeout, the PoH is reset to the start of the slot and
this leader produces its block immediately.
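The decision the next leader faces can be sketched as follows; the names and the tri-state outcome are illustrative, not the actual implementation:

```rust
/// What a leader does at the start of its slot under the timeout scheme.
#[derive(Debug, PartialEq)]
enum StartAction {
    /// Previous block confirmed in time: reset PoH to the slot start and begin now.
    ResetPohAndStart,
    /// Timeout expired without confirmation: build on the last completed block.
    StartOnLastCompleted,
    /// Keep waiting for the previous leader's entries.
    Wait,
}

/// `timeout_ms` is the locally configured delay; it may be tuned per-leader.
fn decide(prev_block_confirmed: bool, elapsed_ms: u64, timeout_ms: u64) -> StartAction {
    if prev_block_confirmed {
        StartAction::ResetPohAndStart
    } else if elapsed_ms >= timeout_ms {
        StartAction::StartOnLastCompleted
    } else {
        StartAction::Wait
    }
}
```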
The downsides:
* Leader delays its own slot, potentially allowing the next leader more time to
catch up.
The upsides compared to guards:
* All the space in a block is used for entries.
* The timeout is not fixed.
* The timeout is local to the leader, and therefore can be clever. The leader's
heuristic can take into account avalanche performance.
* This design doesn't require a ledger hard fork to update.
* The previous leader can redundantly transmit the last entry in the block to
the next leader, and the next leader can speculatively decide to trust it to
generate its block without verification of the previous block.
* The leader can speculatively generate the last tick from the last received
entry.
* The leader can speculatively process transactions and guess which ones are not
going to be encoded by the previous leader. This is also a censorship attack
vector. The current leader may withhold transactions that it receives from the
clients so it can encode them into its own slot. Once processed, entries can be
replayed into PoH quickly.
## Alternative design options
### Guard tick at the end of the slot
A leader does not produce entries in its block after the *penultimate tick*,
which is the last tick before the first tick of the next slot. The network
votes on the *last tick*, so the time difference between the *penultimate tick*
and the *last tick* is the forced delay for the entire network, as well as the
next leader before a new slot can be generated. The network can produce the
*last tick* from the *penultimate tick*.
If the next leader receives the *penultimate tick* before it produces its own
*first tick*, it will reset its PoH and produce the *first tick* from the
previous leader's *penultimate tick*. The rest of the network will also reset
its PoH to produce the *last tick* as the id to vote on.
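A sketch of the guard-tick boundary, assuming ticks are numbered from 0 and each slot contains `ticks_per_slot` ticks (both assumptions for illustration):

```rust
/// Last tick of slot `slot`: the tick the network votes on.
fn last_tick(slot: u64, ticks_per_slot: u64) -> u64 {
    (slot + 1) * ticks_per_slot - 1
}

/// Penultimate tick of slot `slot`: the last tick before the first tick of
/// the next slot at which a leader may still attach entries.
fn penultimate_tick(slot: u64, ticks_per_slot: u64) -> u64 {
    last_tick(slot, ticks_per_slot) - 1
}

/// A leader may record entries at `tick` only up to the penultimate tick.
fn may_record_entries(tick: u64, slot: u64, ticks_per_slot: u64) -> bool {
    tick <= penultimate_tick(slot, ticks_per_slot)
}
```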
The downsides:
* Every vote, and therefore confirmation, is delayed by a fixed timeout of 1
tick, or around 100 ms.
* Average case confirmation time for a transaction would be at least 50ms worse.
* It is part of the ledger definition, so to change this behavior would require
a hard fork.
* Not all the available space is used for entries.
The upsides compared to leader timeout:
* The next leader has received all the previous entries, so it can start
processing transactions without recording them into PoH.
* The previous leader can redundantly transmit the last entry containing the
*penultimate tick* to the next leader. The next leader can speculatively
generate the *last tick* as soon as it receives the *penultimate tick*, even
before verifying it.

---
*File: `book/src/leader-rotation.md`*
# Leader Rotation
At any given moment, a cluster expects only one fullnode to produce ledger
entries. By having only one leader at a time, all validators are able to replay
identical copies of the ledger. The drawback of only one leader at a time,
however, is that a malicious leader is capable of censoring votes and
transactions. Since censoring cannot be distinguished from the network dropping
packets, the cluster cannot simply elect a single node to hold the leader role
indefinitely. Instead, the cluster minimizes the influence of a malicious
leader by rotating which node takes the lead.
Each validator selects the expected leader using the same algorithm, described
below. When the validator receives a new signed ledger entry, it can be certain
that entry was produced by the expected leader. The order in which slots are
assigned to leaders is called a *leader schedule*.
## Leader Schedule Rotation
A validator rejects blocks that are not signed by the *slot leader*. The list
of identities of all slot leaders is called a *leader schedule*. The leader
schedule is recomputed locally and periodically. It assigns slot leaders for a
duration of time called an _epoch_. The schedule must be computed far in advance
of the slots it assigns, such that the ledger state it uses to compute the
schedule is finalized. That duration is called the *leader schedule offset*.
Solana sets the offset to the duration of slots until the next epoch. That is,
the leader schedule for an epoch is calculated from the ledger state at the
start of the previous epoch. The offset of one epoch is fairly arbitrary and
assumed to be sufficiently long such that all validators will have finalized
their ledger state before the next schedule is generated. A cluster may choose
to shorten the offset to reduce the time between stake changes and leader
schedule updates.
While operating without partitions lasting longer than an epoch, the schedule
only needs to be generated when the root fork crosses the epoch boundary. Since
the schedule is for the next epoch, any new stakes committed to the root fork
will not be active until the next epoch. The block used for generating the
leader schedule is the first block to cross the epoch boundary.
Without a partition lasting longer than an epoch, the cluster will work as
follows:
1. A validator continuously updates its own root fork as it votes.
2. The validator updates its leader schedule each time the slot height crosses
an epoch boundary.
For example:
The epoch duration is 100 slots. The root fork is updated from fork computed at
slot height 99 to a fork computed at slot height 102. Forks with slots at height
100,101 were skipped because of failures. The new leader schedule is computed
using fork at slot height 102. It is active from slot 200 until it is updated
again.
No inconsistency can exist because every validator that is voting with the
cluster has skipped 100 and 101 when its root passes 102. All validators,
regardless of voting pattern, would be committing to a root that is either 102,
or a descendant of 102.
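The update rule in the example can be sketched as follows. The 100-slot epoch length and the helper names here are illustrative stand-ins, not the actual implementation:

```rust
/// Epoch length in slots; matches the example above, not a real constant.
const SLOTS_PER_EPOCH: u64 = 100;

/// Epoch that a slot belongs to.
fn epoch_of(slot: u64) -> u64 {
    slot / SLOTS_PER_EPOCH
}

/// Slot at which a schedule computed from `root_slot` becomes active:
/// the start of the epoch after the root's epoch.
fn schedule_active_from(root_slot: u64) -> u64 {
    (epoch_of(root_slot) + 1) * SLOTS_PER_EPOCH
}

/// True if moving the root from `old_root` to `new_root` crosses an epoch
/// boundary, which is when a new leader schedule must be generated.
fn needs_new_schedule(old_root: u64, new_root: u64) -> bool {
    epoch_of(new_root) > epoch_of(old_root)
}
```

With these helpers, a root moving from slot 99 to slot 102 crosses the boundary, and the schedule computed at 102 is active from slot 200, as in the example.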
### Leader Schedule Rotation with Epoch-Sized Partitions
The duration of the leader schedule offset has a direct relationship to the
likelihood of a cluster having an inconsistent view of the correct leader
schedule.
Consider the following scenario:
Two partitions, each generating half of the blocks. Neither is coming
to a definitive supermajority fork. Both will cross epochs 100 and 200 without
actually committing to a root and therefore a cluster wide commitment to a new
leader schedule.
In this unstable scenario, multiple valid leader schedules exist.
* A leader schedule is generated for every fork whose direct parent is in the
previous epoch.
* The leader schedule is valid after the start of the next epoch for descendant
forks until it is updated.
Each partition's schedule will diverge after the partition lasts more than an
epoch. For this reason, the epoch duration should be selected to be much
larger than the slot time and the expected time for a fork to be committed to
root.
After observing the cluster for a sufficient amount of time, the leader schedule
offset can be selected based on the median partition duration and its standard
deviation. For example, an offset longer than the median partition duration
plus six standard deviations would reduce the likelihood of an inconsistent
leader schedule in the cluster to 1 in 1 million.
## Leader Schedule Generation at Genesis
The genesis block declares the first leader for the first epoch. This leader
ends up scheduled for the first two epochs because the leader schedule is also
generated at slot 0 for the next epoch. The length of the first two epochs can
be specified in the genesis block as well. The minimum length of the first two
epochs must be greater than or equal to the maximum rollback depth as defined in
[fork selection](fork-selection.md).
## Leader Schedule Generation Algorithm
The leader schedule is generated using a predefined seed. The process is as follows:
1. Periodically use the PoH tick height (a monotonically increasing counter) to
seed a stable pseudo-random algorithm.
2. At that height, sample the bank for all the staked accounts with leader
identities that have voted within a cluster-configured number of ticks. The
sample is called the *active set*.
3. Sort the active set by stake weight.
4. Use the random seed to select nodes weighted by stake to create a
stake-weighted ordering.
5. This ordering becomes valid after a cluster-configured number of ticks.
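The steps above can be sketched as follows. The linear-congruential generator stands in for the seeded pseudo-random algorithm, and the identity/stake types are simplified; active-set filtering is assumed to have already happened:

```rust
/// Deterministic PRNG seeded from the PoH tick height (a stand-in for
/// the real seeded algorithm).
struct Prng(u64);

impl Prng {
    fn next(&mut self) -> u64 {
        // Constants from Knuth's MMIX LCG.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

/// Pick `slots` leaders from `(identity, stake)` pairs, each draw
/// weighted by stake. Assumes a non-empty active set with positive
/// total stake.
fn leader_schedule(seed: u64, active_set: &[(u8, u64)], slots: usize) -> Vec<u8> {
    // Sort by stake (ties broken by identity) so every node derives an
    // identical ordering regardless of input order.
    let mut nodes = active_set.to_vec();
    nodes.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(&b.0)));
    let total: u64 = nodes.iter().map(|n| n.1).sum();
    let mut rng = Prng(seed);
    (0..slots)
        .map(|_| {
            // Walk the sorted list, consuming stake until the draw lands.
            let mut pick = rng.next() % total;
            for &(id, stake) in &nodes {
                if pick < stake {
                    return id;
                }
                pick -= stake;
            }
            unreachable!("pick is always less than total stake")
        })
        .collect()
}
```

Because the seed, the sort, and the draws are all deterministic, every validator computes the same schedule from the same bank state.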
## Schedule Attack Vectors
### Seed
The seed that is selected is predictable but unbiasable. There is no grinding
attack to influence its outcome.
### Active Set
A leader can bias the active set by censoring validator votes. Two possible
ways exist for leaders to censor the active set:
* Ignore votes from validators
* Refuse to vote for blocks with votes from validators
To reduce the likelihood of censorship, the active set is calculated at the
leader schedule offset boundary over an *active set sampling duration*. The
active set sampling duration is long enough such that votes will have been
collected by multiple leaders.
### Staking
Leaders can censor new staking transactions or refuse to validate blocks with
new stakes. This attack is similar to censorship of validator votes.
### Validator operational key loss
Leaders and validators are expected to use ephemeral keys for operation, and
stake owners authorize the validators to do work with their stake via
delegation.
The cluster should be able to recover from the loss of all the ephemeral keys
used by leaders and validators, which could occur through a common software
vulnerability shared by all the nodes. Stake owners should be able to directly
co-sign a validator vote even though the stake is currently delegated
to a validator.
## Appending Entries
The lifetime of a leader schedule is called an *epoch*. The epoch is split into
*slots*, where each slot has a duration of `T` PoH ticks.
A leader transmits entries during its slot. After `T` ticks, all the
validators switch to the next scheduled leader. Validators must ignore entries
sent outside a leader's assigned slot.
All `T` ticks must be observed by the next leader for it to build its own
entries on. If entries are not observed (leader is down) or entries are invalid
(leader is buggy or malicious), the next leader must produce ticks to fill the
previous leader's slot. Note that the next leader should do repair requests in
parallel, and postpone sending ticks until it is confident other validators
also failed to observe the previous leader's entries. If a leader incorrectly
builds on its own ticks, the leader following it must replace all its ticks.

# Leader-to-Validator Transition
A fullnode typically operates as a validator. If, however, a staker delegates
its stake to a fullnode, it will occasionally be selected as a *slot leader*.
As a slot leader, the fullnode is responsible for producing blocks during an
assigned *slot*. A slot has a duration of some number of preconfigured *ticks*.
The duration of those ticks is estimated with a *PoH Recorder*, described later
in this document.
## BankFork
BankFork tracks changes to the bank state over a specific slot. Once the final
tick has been registered, the state is frozen. Any attempts to write to it are
rejected.
## Validator
A validator operates on many different concurrent forks of the bank state until
it generates a PoH hash with a height within its leader slot.
## Slot Leader
A slot leader builds blocks on top of only one fork, the one it last voted on.
## PoH Recorder
Slot leaders and validators use a PoH Recorder for both estimating slot height
and for recording transactions.
### PoH Recorder when Validating
The PoH Recorder acts as a simple VDF when validating. It tells the validator
when it needs to switch to the slot leader role. Every time the validator votes
on a fork, it should use the fork's latest block id to re-seed the VDF.
Re-seeding solves two problems. First, it synchronizes its VDF to the leader's,
allowing it to more accurately determine when its leader slot begins. Second,
if the previous leader goes down, all wallclock time is accounted for in the
next leader's PoH stream. For example, if one block is missing when the leader
starts, the block it produces should have a PoH duration of two blocks. The
longer duration ensures the following leader isn't attempting to snip all the
transactions from the previous leader's slot.
### PoH Recorder when Leading
A slot leader uses the PoH Recorder to record transactions, locking their
positions in time. The PoH hash must be derived from a previous leader's last
block. If it isn't, its block will fail PoH verification and be rejected by
the cluster.
The PoH Recorder also serves to inform the slot leader when its slot is over.
The leader needs to take care not to modify its bank if recording the
transaction would generate a PoH height outside its designated slot. The
leader, therefore, should not commit account changes until after it generates
the entry's PoH hash. When the PoH height falls outside its slot any
transactions in its pipeline may be dropped or forwarded to the next leader.
Forwarding is preferred, as it would minimize network congestion, allowing the
cluster to advertise higher TPS capacity.
## Fullnode Loop
The PoH Recorder manages the transition between modes. Once a ledger is
replayed, the validator can run until the recorder indicates it should be
the slot leader. As a slot leader, the node can then execute and record
transactions.
The loop is synchronized to PoH and does a synchronous start and stop of the
slot leader functionality. After stopping, the validator's TVU should find
itself in the same state as if a different leader had sent it the same block.
The following is pseudocode for the loop:
1. Query the LeaderScheduler for the next assigned slot.
2. Run the TVU over all the forks.
1. TVU will send votes to what it believes is the "best" fork.
2. After each vote, restart the PoH Recorder to run until the next assigned
slot.
3. When time to be a slot leader, start the TPU. Point it to the last fork the
TVU voted on.
4. Produce entries until the end of the slot.
1. For the duration of the slot, the TVU must not vote on other forks.
2. After the slot ends, the TPU freezes its BankFork. After freezing,
the TVU may resume voting.
5. Goto 1.

# Ledger Replication
Replication behavior is yet to be implemented.
### Validator behavior
3. Every NUM\_KEY\_ROTATION\_TICKS it also validates samples received from
replicators. It signs the PoH hash at that point and uses the following
algorithm with the signature as the input:
- The low 5 bits of the first byte of the signature creates an index into
another starting byte of the signature.
- The validator then looks at the set of storage proofs where the byte of
the proof's sha state vector starting from the low byte matches exactly
with the chosen byte(s) of the signature.
- If the set of proofs is larger than the validator can handle, then it
increases to matching 2 bytes in the signature.
- Validator continues to increase the number of matching bytes until a
workable set is found.
- It then creates a mask of valid proofs and fake proofs and sends it to
the leader. This is a storage proof confirmation transaction.
5. After a lockout period of NUM\_SECONDS\_STORAGE\_LOCKOUT seconds, the
validator then submits a storage proof claim transaction which then causes the
distribution of the storage reward if no challenges were seen for the proof to
the validators and replicators party to the proofs.
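The byte-matching selection in step 3 can be sketched as below. The field sizes, the widening bound, and the exact indexing rule are assumptions, since the description above leaves them under-specified:

```rust
/// Select a workable subset of storage proofs by matching leading bytes of
/// each proof's sha state against bytes of the validator's signature,
/// widening the match until the set fits within `max_set`. A sketch only;
/// the real layouts and bounds are hypothetical.
fn select_proofs<'a>(
    signature: &[u8; 64],
    proofs: &'a [[u8; 32]],
    max_set: usize,
) -> Vec<&'a [u8; 32]> {
    // The low 5 bits of the first signature byte index another signature byte.
    let start = (signature[0] & 0x1f) as usize;
    // Widen the required match one byte at a time (capped at 4 here).
    for width in 1..=4 {
        let selected: Vec<&[u8; 32]> = proofs
            .iter()
            .filter(|state| state[..width] == signature[start..start + width])
            .collect();
        if selected.len() <= max_set {
            return selected;
        }
        // Too many proofs matched: require one more matching byte.
    }
    Vec::new()
}
```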
### Replicator behavior
9. The replicator then generates another set of offsets for which it submits a
fake proof with an incorrect sha state. It can be proven to be fake by providing the
seed for the hash result.
- A fake proof should consist of a replicator hash of a signature of a PoH
value. That way when the replicator reveals the fake proof, it can be
verified on chain.
10. The replicator monitors the ledger, if it sees a fake proof integrated, it
creates a challenge transaction and submits it to the current leader. The
transaction proves the validator incorrectly validated a fake storage proof.
The replicator is rewarded and the validator's staking balance is slashed or
frozen.

# Ledger Replication
At full capacity on a 1gbps network, Solana will generate 4 petabytes of data
per year. To prevent the network from centralizing around full nodes that have
to store the full data set this protocol proposes a way for mining nodes to
provide storage capacity for pieces of the network.
The basic idea of Proof of Replication is to encrypt a dataset with a public
symmetric key using CBC encryption, then hash the encrypted dataset. The main
problem with the naive approach is that a dishonest storage node can stream the
encryption and delete the data as it is hashed. The simple solution is to force
the hash to be done on the reverse of the encryption, or perhaps with a random
order. This ensures that all the data is present during the generation of the
proof and it also requires the validator to have the entirety of the encrypted
data present for verification of every proof of every identity. So the space
required to validate is `number_of_proofs * data_size`
## Optimization with PoH
Our improvement on this approach is to randomly sample the encrypted segments
faster than it takes to encrypt, and record the hash of those samples into the
PoH ledger. Thus the segments stay in the exact same order for every PoRep and
verification can stream the data and verify all the proofs in a single batch.
This way we can verify multiple proofs concurrently, each one on its own CUDA
core. The total space required for verification is `1_ledger_segment +
2_cbc_blocks * number_of_identities` with core count equal to
`number_of_identities`. We use a 64-byte chacha CBC block size.
## Network
Validators for PoRep are the same validators that are verifying transactions.
They have some stake that they have put up as collateral that ensures that
their work is honest. If you can prove that a validator verified a fake PoRep,
then the validator's stake can be slashed.
Replicators are specialized *light clients*. They download a part of the ledger
and store it, and provide PoReps of storing the ledger. For each verified PoRep
replicators earn a reward of sol from the mining pool.
## Constraints
We have the following constraints:
* Verification requires generating the CBC blocks. That requires space of 2
blocks per identity, and 1 CUDA core per identity for the same dataset. So as
many identities as possible should be batched, with the proofs for those
identities verified concurrently against the same dataset.
* Validators will randomly sample the set of storage proofs down to the set
that they can handle, and only the creators of those chosen proofs will be
rewarded. The validator can run a benchmark whenever its hardware configuration
changes to determine what rate it can validate storage proofs.
## Validation and Replication Protocol
### Constants
1. NUM\_STORAGE\_ENTRIES: Number of entries in a segment of ledger data. The
unit of storage for a replicator.
2. NUM\_KEY\_ROTATION\_TICKS: Number of ticks to save a PoH value and cause a
key generation for the section of ledger just generated and the rotation of
another key in the set.
3. NUM\_STORAGE\_PROOFS: Number of storage proofs required for a storage proof
claim to be successfully rewarded.
4. RATIO\_OF\_FAKE\_PROOFS: Ratio of fake proofs to real proofs that a storage
mining proof claim has to contain to be valid for a reward.
5. NUM\_STORAGE\_SAMPLES: Number of samples required for a storage mining
proof.
6. NUM\_CHACHA\_ROUNDS: Number of encryption rounds performed to generate
encrypted state.
### Validator behavior
1. Validator joins the network and submits a storage validation capacity
transaction which tells the network how many proofs it can process in a given
period defined by NUM\_KEY\_ROTATION\_TICKS.
2. Every NUM\_KEY\_ROTATION\_TICKS the validator stores the PoH value at that
height.
3. Validator generates a storage proof confirmation transaction.
4. The storage proof confirmation transaction is integrated into the ledger.
6. Validator responds to RPC interfaces for what the last storage epoch PoH
value is and its entry\_height.
### Replicator behavior
1. Since a replicator is somewhat of a light client that does not download all
the ledger data, it has to rely on other full nodes (validators) for
information. Any given validator may or may not be malicious and give incorrect
information, although there are not any obvious attack vectors that this could
accomplish besides having the replicator do extra wasted work. For many of the
operations there are a number of options depending on how paranoid a replicator
is:
- (a) replicator can ask a validator
- (b) replicator can ask multiple validators
- (c) replicator can subscribe to the full transaction stream and generate
the information itself
- (d) replicator can subscribe to an abbreviated transaction stream to
generate the information itself
2. A replicator obtains the PoH hash corresponding to the last key rotation
along with its entry\_height.
3. The replicator signs the PoH hash with its keypair. That signature is the
seed used to pick the segment to replicate and also the encryption key. The
replicator mods the signature with the entry\_height to get which segment to
replicate.
4. The replicator retrieves the ledger by asking peer validators and
replicators. See 6.5.
5. The replicator then encrypts that segment with the key with chacha algorithm
in CBC mode with NUM\_CHACHA\_ROUNDS of encryption.
6. The replicator initializes a chacha rng with the signature from step 2 as
the seed.
7. The replicator generates NUM\_STORAGE\_SAMPLES samples in the range of the
entry size and samples the encrypted segment with sha256 for 32-bytes at each
offset value. Sampling the state should be faster than generating the encrypted
segment.
8. The replicator sends a PoRep proof transaction which contains its sha state
at the end of the sampling operation, its seed and the samples it used to the
current leader and it is put onto the ledger.
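Steps 2 through 7 can be sketched as below. The raw signature bytes stand in for a real Ed25519 signature, the LCG stands in for the chacha rng, and `num_segments` (entry\_height divided by NUM\_STORAGE\_ENTRIES) and the constants are illustrative:

```rust
/// Number of samples per storage proof; illustrative, not the real value.
const NUM_STORAGE_SAMPLES: usize = 4;

/// Interpret the first 8 signature bytes as a little-endian integer, used
/// both for segment selection and as the sampling seed.
fn sig_seed(signature: &[u8; 64]) -> u64 {
    u64::from_le_bytes(signature[..8].try_into().unwrap())
}

/// Segment to replicate: the signature modded by the number of segments
/// available at the current entry height.
fn segment_to_replicate(signature: &[u8; 64], num_segments: u64) -> u64 {
    sig_seed(signature) % num_segments
}

/// Sample offsets within the encrypted segment, seeded by the signature.
/// A simple LCG stands in for the chacha rng of step 6.
fn sample_offsets(signature: &[u8; 64], segment_len: u64) -> Vec<u64> {
    let mut state = sig_seed(signature);
    (0..NUM_STORAGE_SAMPLES)
        .map(|_| {
            state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
            state % segment_len
        })
        .collect()
}
```

Because both the segment choice and the offsets derive from the replicator's signature, a validator that knows the public key can later re-derive and check them.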
### Finding who has a given block of ledger
1. Validators monitor the transaction stream for storage mining proofs, and
keep a mapping of ledger segments by entry\_height to public keys. When it sees
a storage mining proof it updates this mapping and provides an RPC interface
which takes an entry\_height and hands back a list of public keys. The client
then looks up in its cluster\_info table to see which network address that
corresponds to and sends a repair request to retrieve the necessary blocks of
ledger.
2. Validators would need to prune this list, which they could do by periodically
looking at the oldest entries in their mappings and doing a network query to see
if the storage host is still serving the first entry.
## Sybil attacks
For any random seed, we force everyone to use a signature that is derived from
a PoH hash. Everyone must use the same count, so the same PoH hash is signed by
every participant. The signatures are then each cryptographically tied to the
keypair, which prevents a leader from grinding on the resulting value for more
than 1 identity.
Since there are many more client identities than encryption identities, we need
to split the reward among multiple clients and prevent Sybil attacks from
generating many clients to acquire the same block of data. To remain BFT, we
want to prevent a single human entity from storing all the replications of a
single chunk of the ledger.
Our solution to this is to force the clients to continue using the same
identity. If the first round is used to acquire the same block for many client
identities, the second round for the same client identities will force a
redistribution of the signatures, and therefore PoRep identities and blocks.
Thus, for replicators to get a reward, they need to store the first block for
free, and the network can reward long-lived client identities more than new ones.
## Validator attacks
- If a validator approves fake proofs, replicator can easily out them by
showing the initial state for the hash.
- If a validator marks real proofs as fake, no on-chain computation can be done
to distinguish who is correct. Rewards would have to rely on the results from
multiple validators in a stake-weighted fashion to catch bad actors and keep
replicators from being locked out of the network.
- A validator could steal mining proof results for itself. The proofs are
derived from a signature from a replicator; since the validator does not know
the private key used to generate the encryption key, it cannot be the generator
of the proof.
## Reward incentives
Fake proofs are easy to generate but difficult to verify. For this reason,
PoRep proof transactions generated by replicators may require a higher fee than
a normal transaction to represent the computational cost required by
validators.
Some percentage of fake proofs are also necessary to receive a reward from
storage mining.
## Notes
* We can reduce the costs of verification of PoRep by using PoH, and actually
make it feasible to verify a large number of proofs for a global dataset.
* We can eliminate grinding by forcing everyone to sign the same PoH hash and
use the signatures as the seed
* The game between validators and replicators is over random blocks and random
encryption identities and random data samples. The goal of randomization is
to prevent colluding groups from having overlap on data or validation.
* Replicator clients fish for lazy validators by submitting fake proofs that
they can prove are fake.
* To defend against Sybil client identities that try to store the same block we
force the clients to store for multiple rounds before receiving a reward.
* Validators should also get rewarded for validating submitted storage proofs
as incentive for storing the ledger. They can only validate proofs if they
are storing that slice of the ledger.

# Managing Forks in the Ledger
The ledger is permitted to fork at slot boundaries. The resulting data
structure forms a tree called a *blocktree*. When the fullnode interprets the
blocktree, it must maintain state for each fork in the chain. We call each
instance an *active fork*. It is the responsibility of a fullnode to weigh
those forks, such that it may eventually select a fork.
A fullnode selects a fork by submitting a vote to a slot leader on that fork.
The vote commits the fullnode for a duration of time called a *lockout period*.
The fullnode is not permitted to vote on a different fork until that lockout
period expires. Each subsequent vote on the same fork doubles the length of the
lockout period. After some cluster-configured number of votes (currently 32),
the length of the lockout period reaches what's called *max lockout*. Until the
max lockout is reached, the fullnode has the option to wait until the lockout
period is over and then vote on another fork. When it votes on another fork, it
performs an operation called *rollback*, whereby the state rolls back in time to
a shared checkpoint and then jumps forward to the tip of the fork that it just
voted on. The maximum distance that a fork may roll back is called the
*rollback depth*. Rollback depth is the number of votes required to achieve
max lockout. Whenever a fullnode votes, any checkpoints beyond the rollback
depth become unreachable. That is, there is no scenario in which the fullnode
will need to roll back beyond rollback depth. It therefore may safely *prune*
unreachable forks and *squash* all checkpoints beyond rollback depth into the
root checkpoint.
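The lockout-doubling rule can be sketched as below. The constants mirror the description above (32 votes to reach max lockout), but the actual implementation's constants and units may differ:

```rust
/// Base of the lockout doubling; each vote on the same fork doubles the
/// lockout period. Illustrative constant.
const INITIAL_LOCKOUT: u64 = 2;
/// Number of consecutive votes after which lockout stops growing; this is
/// the rollback depth described above.
const MAX_LOCKOUT_HISTORY: u32 = 32;

/// Lockout (in slots) after `confirmation_count` consecutive votes on the
/// same fork. The lockout doubles per vote and saturates at max lockout.
fn lockout(confirmation_count: u32) -> u64 {
    INITIAL_LOCKOUT.pow(confirmation_count.min(MAX_LOCKOUT_HISTORY))
}
```

For example, after 5 consecutive votes the fullnode is locked out for 32 slots; after 32 votes the lockout reaches its maximum and further votes no longer extend it.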
## Active Forks
An active fork is a sequence of checkpoints that has a length at least one
longer than the rollback depth. The shortest fork will have a length exactly
one longer than the rollback depth. For example:
<img alt="Forks" src="img/forks.svg" class="center"/>
The following sequences are *active forks*:
* {4, 2, 1}
* {5, 2, 1}
* {6, 3, 1}
* {7, 3, 1}
## Pruning and Squashing
A fullnode may vote on any checkpoint in the tree. In the diagram above,
that's every node except the leaves of the tree. After voting, the fullnode
prunes nodes that fork from a distance farther than the rollback depth and then
takes the opportunity to minimize its memory usage by squashing any nodes it
can into the root.
Starting from the example above, with a rollback depth of 2, consider a vote on
5 versus a vote on 6. First, a vote on 5:
<img alt="Forks after pruning" src="img/forks-pruned.svg" class="center"/>
The new root is 2, and any active forks that are not descendants from 2 are
pruned.
Alternatively, a vote on 6:
<img alt="Forks" src="img/forks-pruned2.svg" class="center"/>
The tree remains with a root of 1, since the active fork starting at 6 is only
2 checkpoints from the root.

# Persistent Account Storage
The set of Accounts represents the current computed state of all the transactions
that have been processed by a fullnode. Each fullnode needs to maintain this
entire set. Each block that is proposed by the network represents a change to
this set, and since each block is a potential rollback point the changes need to
be reversible.
Persistent storage like NVMe SSDs is 20 to 40 times cheaper than DDR. The
problem with persistent storage is that write and read performance is much
slower than DDR, so care must be taken in how data is read or written. Both reads and
writes can be split between multiple storage drives and accessed in parallel.
This design proposes a data structure that allows for concurrent reads and
concurrent writes of storage. Writes are optimized by using an AppendVec data
structure, which allows a single writer to append while allowing access to many
concurrent readers. The accounts index maintains a pointer to a spot where the
account was appended to every fork, thus removing the need for explicit
checkpointing of state.
# AppendVec
AppendVec is a data structure that allows for random reads concurrent with a
single append-only writer. Growing or resizing the capacity of the AppendVec
requires exclusive access. This is implemented with an atomic `offset`, which
is updated at the end of a completed append.
The underlying memory for an AppendVec is a memory-mapped file. Memory-mapped
files allow for fast random access and paging is handled by the OS.
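The atomic-offset scheme can be sketched with a minimal in-memory stand-in. The real AppendVec is backed by a memory-mapped file and allows many concurrent readers alongside the single writer; the struct and method names here are illustrative:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Minimal AppendVec sketch: a fixed-capacity byte buffer with a single
/// appender and lock-free readers. A plain Vec stands in for the mmap.
struct AppendVec {
    data: Vec<u8>,       // pre-allocated backing store
    offset: AtomicUsize, // bytes currently visible to readers
}

impl AppendVec {
    fn with_capacity(cap: usize) -> Self {
        AppendVec { data: vec![0; cap], offset: AtomicUsize::new(0) }
    }

    /// Append by the single writer. Returns the start offset of the entry,
    /// or None when capacity is exhausted (triggering a resize, which
    /// requires exclusive access in the real implementation).
    fn append(&mut self, bytes: &[u8]) -> Option<usize> {
        let start = self.offset.load(Ordering::Acquire);
        if start + bytes.len() > self.data.len() {
            return None;
        }
        self.data[start..start + bytes.len()].copy_from_slice(bytes);
        // Publish the new length only after the data is fully written.
        self.offset.store(start + bytes.len(), Ordering::Release);
        Some(start)
    }

    /// Random read of a committed entry; valid for any range below the
    /// published offset.
    fn get(&self, start: usize, len: usize) -> Option<&[u8]> {
        if start + len <= self.offset.load(Ordering::Acquire) {
            Some(&self.data[start..start + len])
        } else {
            None
        }
    }
}
```

The key design point is that readers never take a lock: they only observe offsets at or below the published `offset`, which the writer updates after the append completes.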
# Account Index
The account index is designed to support a single index for all the currently
forked Accounts.
```rust,ignore
use std::collections::HashMap;

type AppendVecId = usize;
type Fork = u64;
struct AccountMap(HashMap<Fork, (AppendVecId, u64)>);
type AccountIndex = HashMap<Pubkey, AccountMap>;
```
The index is a map of account Pubkeys to a map of Forks and the location of the
Account data in an AppendVec. To get the version of an account for a specific Fork:
```rust,ignore
/// Load the account for the pubkey.
/// This function will load the account from the specified fork, falling back
/// to the fork's parents.
/// * fork - The fork to load the account from. Forks track their parents,
///   which serve as the fallback path into the persistent store.
/// * pubkey - The Account's public key.
pub fn load_slow(&self, id: Fork, pubkey: &Pubkey) -> Option<&Account>
```
The read is satisfied by pointing to a memory-mapped location in the
`AppendVecId` at the stored offset. A reference can be returned without a copy.
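The parent-fallback lookup behind `load_slow` can be sketched with simplified types. The struct layout, the `parents` map, and the `locate` helper are illustrative stand-ins, not the actual implementation:

```rust
use std::collections::HashMap;

type Fork = u64;
type AppendVecId = usize;
type Pubkey = [u8; 32];

/// Simplified account index: per-pubkey versions keyed by fork, plus the
/// fork parentage needed for fallback.
struct AccountIndex {
    /// pubkey -> fork -> (AppendVec id, offset within it)
    index: HashMap<Pubkey, HashMap<Fork, (AppendVecId, u64)>>,
    /// fork -> parent fork
    parents: HashMap<Fork, Fork>,
}

impl AccountIndex {
    /// Find the account's stored location at `fork`, walking up the fork's
    /// parents until a version is found or the chain of parents ends.
    fn locate(&self, mut fork: Fork, pubkey: &Pubkey) -> Option<(AppendVecId, u64)> {
        let versions = self.index.get(pubkey)?;
        loop {
            if let Some(&loc) = versions.get(&fork) {
                return Some(loc);
            }
            // Fall back to the parent fork; stop at the root.
            fork = *self.parents.get(&fork)?;
        }
    }
}
```

The returned location points into a memory-mapped AppendVec, so the actual read can hand back a reference without copying, as described above.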
## Root Forks
The [fork selection algorithm](fork-selection.md) eventually selects a fork as a
root fork and the fork is squashed. A squashed/root fork cannot be rolled back.
When a fork is squashed, all accounts in its parents not already present in the
fork are pulled up into the fork by updating the indexes. Accounts with zero
balance in the squashed fork are removed from the fork by updating the indexes.
An account can be *garbage-collected* when squashing makes it unreachable.
Three possible options exist:
* Maintain a `HashSet<u64>` of root forks. One is expected to be created every
second. The entire tree can be garbage-collected later. Alternatively, if
every fork keeps a reference count of accounts, garbage collection could occur
any time an index location is updated.
* Remove any pruned forks from the index. Any remaining forks lower in number
than the root can be considered roots.
* Scan the index, migrate any old roots into the new one. Any remaining forks
lower than the new root can be deleted later.
# Append-only Writes
All the updates to Accounts occur as append-only updates. For every account
update, a new version is stored in the AppendVec.
It is possible to optimize updates within a single fork by returning a mutable
reference to an already stored account in a fork. The Bank already tracks
concurrent access of accounts and guarantees that a write to a specific account
fork will not be concurrent with a read to an account at that fork. To support
this operation, AppendVec should implement this function:
```rust,ignore
fn get_mut(&self, index: u64) -> &mut T;
```
This API allows for concurrent mutable access to a memory region at `index`. It
relies on the Bank to guarantee exclusive access to that index.
# Garbage collection
As accounts get updated, they move to the end of the AppendVec. Once capacity
has run out, a new AppendVec can be created and updates can be stored there.
Eventually references to an older AppendVec will disappear because all the
accounts have been updated, and the old AppendVec can be deleted.
To speed up this process, it's possible to move Accounts that have not been
recently updated to the front of a new AppendVec. This form of garbage
collection can be done without requiring exclusive locks to any of the data
structures except for the index update.
The initial implementation of garbage collection is that once all the accounts in
an AppendVec become stale versions, the AppendVec is reused. The accounts are not updated
or moved around once appended.
# Index Recovery
Each bank thread has exclusive access to the accounts during append, since the
accounts locks cannot be released until the data is committed. But there is no
explicit order of writes between the separate AppendVec files. To create an
ordering, the index maintains an atomic write version counter. Each append to
the AppendVec records the index write version number for that append in the
entry for the Account in the AppendVec.
To recover the index, all the AppendVec files can be read in any order, and the
latest write version for every fork should be stored in the index.
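That recovery pass can be sketched as below, assuming a simplified entry layout where the write version is stored alongside each append (the field names and key types are illustrative):

```rust
use std::collections::HashMap;

/// Simplified stand-in for an entry read back out of an AppendVec file.
#[derive(Clone, Copy)]
struct StoredEntry {
    pubkey: u8,          // simplified account key
    fork: u64,
    write_version: u64,  // global counter recorded at append time
    location: (usize, u64), // (AppendVec id, offset)
}

/// Rebuild the index from entries read in any order: for each
/// (pubkey, fork) pair, keep only the entry with the highest write version.
fn recover_index(
    entries: &[StoredEntry],
) -> HashMap<(u8, u64), (u64, (usize, u64))> {
    let mut index = HashMap::new();
    for e in entries {
        let slot = index
            .entry((e.pubkey, e.fork))
            .or_insert((e.write_version, e.location));
        // Later writes carry a higher version, regardless of file order.
        if e.write_version >= slot.0 {
            *slot = (e.write_version, e.location);
        }
    }
    index
}
```

Because the write version is a single global counter, the file-read order does not matter: the highest version per (pubkey, fork) always wins.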
# Snapshots
To snapshot, the underlying memory-mapped files in the AppendVec need to be
flushed to disk. The index can be written out to disk as well.
# Performance
* Append-only writes are fast. SSDs and NVMEs, as well as all the OS level
kernel data structures, allow for appends to run as fast as PCI or NVMe bandwidth
will allow (2,700 MB/s).
* Each replay and banking thread writes concurrently to its own AppendVec.
* Each AppendVec could potentially be hosted on a separate NVMe.
* Each replay and banking thread has concurrent read access to all the
AppendVecs without blocking writes.
* Index requires an exclusive write lock for writes. Single-thread performance
for HashMap updates is on the order of 10 million per second.
* Banking and Replay stages should use 32 threads per NVMe. NVMes have
optimal performance with 32 concurrent readers or writers.

# Programming Model
A client *app* interacts with a Solana cluster by sending it *transactions*
with one or more *instructions*. The Solana *runtime* passes those instructions
to user-contributed *programs*. An instruction might, for example, tell a
program to move *lamports* from one *account* to another or create an interactive
contract that governs how lamports are moved. Instructions are executed
atomically. If any instruction is invalid, any changes made within the
transaction are discarded.
## Deploying Programs to a Cluster
<img alt="SDK tools" src="img/sdk-tools.svg" class="center"/>
As shown in the diagram above, a client creates a program and compiles it to an
ELF shared object containing BPF bytecode and sends it to the Solana cluster.
The cluster stores the program locally and makes it available to clients via a
*program ID*. The program ID is a *public key* generated by the client and is
used to reference the program in subsequent transactions.
A program may be written in any programming language that can target the
Berkeley Packet Filter (BPF) safe execution environment. The Solana SDK offers
the best support for C programs, which are compiled to BPF using the [LLVM
compiler infrastructure](https://llvm.org).
## Storing State between Transactions
If the program needs to store state between transactions, it does so using
*accounts*. Accounts are similar to files in operating systems such as Linux.
Like a file, an account may hold arbitrary data and that data persists beyond
the lifetime of a program. Also like a file, an account includes metadata that
tells the runtime who is allowed to access the data and how. Unlike a file, the
account includes metadata for its own lifetime. That lifetime is
expressed in "tokens", which is a number of fractional native tokens, called
*lamports*. Accounts are held in validator memory and pay "rent" to stay there.
Each fullnode periodically scans all accounts and collects rent. Any account
that drops to zero lamports is purged.
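The rent scan described above can be sketched as follows; this is a toy model (string keys, a flat per-scan rent), not the actual account layout or rent schedule:

```rust
use std::collections::HashMap;

/// Deduct `rent` lamports from every account and purge those that hit zero.
fn collect_rent(accounts: &mut HashMap<String, u64>, rent: u64) {
    for balance in accounts.values_mut() {
        // saturating_sub keeps balances from underflowing below zero
        *balance = balance.saturating_sub(rent);
    }
    // Any account that drops to zero lamports is purged.
    accounts.retain(|_, balance| *balance > 0);
}
```

An account funded with more lamports simply survives more scans, which is the "pay rent to stay in memory" behavior described above.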
If an account is marked "executable", it will only be used by a *loader* to run
programs. For example, a BPF-compiled program is marked executable and loaded
by the BPF loader. No program is allowed to modify the contents of an
executable account.
An account also includes "owner" metadata. The owner is a program ID. The
runtime grants the program write access to the account if its ID matches the
owner. If an account is not owned by a program, the program is permitted to
read its data and credit the account.
In the same way that a Linux user uses a path to look up a file, a Solana
client uses public keys to look up accounts. To create an account, the client
generates a *keypair* and registers its public key using the CreateAccount
instruction. Once registered, transactions reference account keys to grant
programs access to accounts. The runtime grants programs read access by
default. To grant write access, the client must either assign the account to a
program or sign the transaction using the keypair's *secret key*. Since only
the holder of the secret key can produce valid signatures matching the
account's public key, the runtime recognizes the signature as authorization to
modify account data or debit the account.
After the runtime executes each of the transaction's instructions, it uses the
account metadata and transaction signatures to verify that none of the access
rules were violated. If a program violates an access rule, the runtime discards
all account changes made by all instructions and marks the transaction as
failed.

book/src/proposals.md

@ -0,0 +1,7 @@
# Proposed Architectural Changes
The following architectural proposals have been accepted by the Solana team, but
are not yet fully implemented. The proposals may be implemented as described,
implemented differently as issues in the designs become evident, or not
implemented at all. If implemented, the descriptions will be moved from this
section to earlier chapters in a future version of this book.


@ -0,0 +1,124 @@
# Reliable Vote Transmission
Validator votes are messages that have a critical function for consensus and
continuous operation of the network. Therefore it is critical that they are
reliably delivered and encoded into the ledger.
## Challenges
1. Leader rotation is triggered by PoH, which is a clock with high drift. As a
result, many nodes are likely to have an incorrect view of whether the next
leader is currently active.
2. The next leader may easily be flooded. Thus a DDoS would not only prevent
delivery of regular transactions, but also consensus messages.
3. UDP is unreliable, and our asynchronous protocol requires any message that is
transmitted to be retransmitted until it is observed in the ledger.
Retransmission could potentially cause an unintentional *thundering herd*
against the leader with a large number of validators. Worst case flood would be
`(num_nodes * num_retransmits)`.
4. Tracking if the vote has been transmitted or not via the ledger does not
guarantee it will appear in a confirmed block. The current observed block may
be unrolled. Validators would need to maintain state for each vote and fork.
## Design
1. Send votes as a push message through gossip. This ensures delivery of the
vote to all the next leaders, not just the immediate next one.
2. Leaders will read the Crds table for new votes and encode any newly received
votes into the blocks they propose. This allows for validator votes to be
included in rollback forks by all the future leaders.
3. Validators that receive votes in the ledger will add them directly to their
local Crds table rather than issuing a push request. This shortcuts
the push message protocol, so the validation messages do not need to be
retransmitted twice around the network.
4. The CrdsValue for a vote should look like this: `Votes(Vec<Transaction>)`
Each vote transaction should maintain a `wallclock` in its userdata. The merge
strategy for Votes will keep the last N set of votes as configured by the local
client. For push/pull the vector is traversed recursively and each Transaction
is treated as an individual CrdsValue with its own local wallclock and
signature.
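The "keep the last N votes" merge strategy above can be sketched as follows; `VoteTx` and `merge_votes` are illustrative stand-ins, not the actual Crds types:

```rust
// Hypothetical sketch of the vote merge strategy. `VoteTx` stands in for a
// vote transaction carrying a `wallclock` in its userdata.
#[derive(Debug, Clone)]
struct VoteTx {
    wallclock: u64, // milliseconds, taken from the vote's userdata
    slot: u64,
}

/// Merge newly received votes into the local set, retaining only the
/// `max_votes` most recent entries by wallclock.
fn merge_votes(local: &mut Vec<VoteTx>, incoming: Vec<VoteTx>, max_votes: usize) {
    local.extend(incoming);
    local.sort_by(|a, b| b.wallclock.cmp(&a.wallclock)); // newest first
    local.truncate(max_votes); // keep the last N as configured locally
}
```

Because ordering is by the wallclock each vote carries, the merge is commutative across gossip peers, which is what the push/pull traversal relies on.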
Gossip is designed for efficient propagation of state. Messages that are sent
through gossip-push are batched and propagated with a minimum spanning tree to
the rest of the network. Any partial failures in the tree are actively repaired
with the gossip-pull protocol while minimizing the amount of data transferred
between any nodes.
## How this design solves the Challenges
1. Because there is no easy way for validators to be in sync with leaders on the
leader's "active" state, gossip allows for eventual delivery regardless of that
state.
2. Gossip will deliver the messages to all the subsequent leaders, so if the
current leader is flooded the next leader would have already received these
votes and is able to encode them.
3. Gossip minimizes the number of requests through the network by maintaining an
efficient spanning tree, and using bloom filters to repair state. So retransmit
back-off is not necessary and messages are batched.
4. Leaders that read the crds table for votes will encode all the new valid
votes that appear in the table. Even if this leader's block is unrolled, the
next leader will try to add the same votes without any additional work done by
the validator, ensuring not only eventual delivery but also eventual encoding
into the ledger.
## Performance
1. Worst case propagation time to the next leader is Log(N) hops with a base
depending on the fanout. With our current default fanout of 6, it is about 6
hops to 20k nodes.
2. The leader should receive 20k validation votes aggregated by gossip-push into
64kb blobs, which would reduce the packet count for a 20k-node network to 80
blobs.
3. Each validator's votes are replicated across the entire network. To maintain
a queue of 5 previous votes, the Crds table would grow by 25 megabytes
(`20,000 nodes * 256 bytes * 5`).
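As a sanity check, the hop count and memory figures above follow directly from the fanout and vote size (a quick back-of-envelope sketch, not cluster code):

```rust
/// Worst-case gossip propagation in hops: ceil(log_fanout(nodes)).
fn worst_case_hops(nodes: f64, fanout: f64) -> u32 {
    nodes.log(fanout).ceil() as u32
}

/// Crds growth for `nodes` validators each keeping `queue` votes of
/// `vote_bytes` bytes.
fn crds_growth_bytes(nodes: u64, vote_bytes: u64, queue: u64) -> u64 {
    nodes * vote_bytes * queue
}
```

With a fanout of 6, `worst_case_hops(20_000.0, 6.0)` is 6, and the Crds growth for 20k nodes keeping 5 votes of 256 bytes is 25,600,000 bytes, about 25 megabytes.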
## Two step implementation rollout
Initially the network can perform reliably with just 1 vote transmitted and
maintained through the network with the current Vote implementation. For small
networks a fanout of 6 is sufficient. With a small network, the memory and push
overhead is minor.
### Sub 1k validator network
1. Crds just maintains the validator's latest vote.
2. Votes are pushed and retransmitted regardless of whether they appear in the
ledger.
3. Fanout of 6.
* Worst case 256kb memory overhead per node.
* Worst case 4 hops to propagate to every node.
* Leader should receive the entire validator vote set in 4 push message blobs.
### Sub 20k network
Everything above plus the following:
1. CRDS table maintains a vector of 5 latest validator votes.
2. Votes encode a wallclock. CrdsValue::Votes is a type that recurses into the
transaction vector for all the gossip protocols.
3. Increase fanout to 20.
* Worst case 25mb memory overhead per node.
* Sub 4 hops worst case to deliver to the entire network.
* 80 blobs received by the leader for all the validator messages.

book/src/runtime.md

@ -0,0 +1,116 @@
# The Runtime
The runtime is a concurrent transaction processor. Transactions specify their
data dependencies upfront and dynamic memory allocation is explicit. By
separating program code from the state it operates on, the runtime is able to
choreograph concurrent access. Transactions accessing only credit-only
accounts are executed in parallel whereas transactions accessing writable
accounts are serialized. The runtime interacts with the program through an
entrypoint with a well-defined interface. The userdata stored in an account is
an opaque type, an array of bytes. The program has full control over its
contents.
The transaction structure specifies a list of public keys and signatures for
those keys and a sequential list of instructions that will operate over the
states associated with the account keys. For the transaction to be committed
all the instructions must execute successfully; if any abort the whole
transaction fails to commit.
### Account Structure
Accounts maintain a lamport balance and program-specific memory.
# Transaction Engine
The engine maps public keys to accounts and routes them to the program's
entrypoint.
## Execution
Transactions are batched and processed in a pipeline. The TPU and TVU follow a
slightly different path. The TPU runtime ensures that the PoH record occurs before
memory is committed.
The TVU runtime ensures that PoH verification occurs before the runtime
processes any transactions.
<img alt="Runtime pipeline" src="img/runtime.svg" class="center"/>
At the *execute* stage, the loaded accounts have no data dependencies, so all the
programs can be executed in parallel.
The runtime enforces the following rules:
1. Only the *owner* program may modify the contents of an account. This means
that upon assignment the userdata vector is guaranteed to be zero.
2. The total balance across all the accounts is equal before and after the
execution of a transaction.
3. After the transaction is executed, balances of credit-only accounts must be
greater than or equal to the balances before the transaction.
4. All instructions in the transaction are executed atomically. If one fails, all
account modifications are discarded.
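Rules 2 and 3 above amount to a simple invariant over pre- and post-execution balances. A minimal sketch, using plain slices rather than the runtime's actual account types:

```rust
/// Returns true if a transaction's balance changes respect rules 2 and 3:
/// the total is conserved, and credit-only accounts never lose lamports.
fn balances_ok(pre: &[u64], post: &[u64], credit_only: &[bool]) -> bool {
    // Rule 2: total balance is the same before and after execution.
    let conserved = pre.iter().sum::<u64>() == post.iter().sum::<u64>();
    // Rule 3: a credit-only account's balance may only grow or stay equal.
    let credit_ok = pre
        .iter()
        .zip(post)
        .zip(credit_only)
        .all(|((&p, &q), &ro)| !ro || q >= p);
    conserved && credit_ok
}
```

If either check fails, per rule 4, the runtime discards all account modifications and marks the transaction as failed.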
Execution of the program involves mapping the program's public key to an
entrypoint which takes a pointer to the transaction, and an array of loaded
accounts.
## SystemProgram Interface
The interface is best described by the `Instruction::userdata` that the user
encodes.
* `CreateAccount` - This allows the user to create an account with an allocated
userdata array and assign it to a Program.
* `Assign` - Allows the user to assign an existing account to a program.
* `Move` - Moves lamports between accounts.
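A hedged sketch of that interface as a Rust enum; the field names and layout here are illustrative, not the exact on-chain encoding of `Instruction::userdata`:

```rust
type Pubkey = [u8; 32]; // stand-in for the real public-key type

enum SystemInstruction {
    /// Create an account with `space` bytes of zeroed userdata, fund it with
    /// `lamports`, and assign it to the `owner` program.
    CreateAccount { lamports: u64, space: u64, owner: Pubkey },
    /// Assign an existing account to a program.
    Assign { owner: Pubkey },
    /// Move lamports between accounts.
    Move { lamports: u64 },
}

/// A human-readable label for each instruction variant.
fn describe(ix: &SystemInstruction) -> &'static str {
    match ix {
        SystemInstruction::CreateAccount { .. } => "create account",
        SystemInstruction::Assign { .. } => "assign",
        SystemInstruction::Move { .. } => "move lamports",
    }
}
```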
## Program State Security
For the blockchain to function correctly, the program code must be resilient to
user inputs. That is why in this design the program-specific code is the only code
that can change the state of the userdata byte array in the Accounts that are
assigned to it. It is also the reason why `Assign` or `CreateAccount` must zero
out the userdata. Otherwise there would be no possible way for the program to
distinguish the recently assigned account userdata from a natively generated
state transition without some additional metadata from the runtime to indicate
that this memory is assigned instead of natively generated.
To pass messages between programs, the receiving program must accept the message
and copy the state over. But in practice a copy isn't needed and is
undesirable. The receiving program can read the state belonging to other
Accounts without copying it, and during the read it has a guarantee of the
sender program's state.
## Notes
* There is no dynamic memory allocation. Clients need to use `CreateAccount`
instructions to create memory before passing it to another program. This
instruction can be composed into a single transaction with the call to the
program itself.
* `CreateAccount` and `Assign` guarantee that when an account is assigned to a
program, the Account's userdata is zero-initialized.
* Once assigned to a program, an Account cannot be reassigned.
* Runtime guarantees that a program's code is the only code that can modify
the userdata of the Accounts assigned to it.
* Runtime guarantees that the program can only spend lamports that are in
accounts that are assigned to it.
* Runtime guarantees that the total of all account balances is the same before
and after the transaction.
* Runtime guarantees that all instructions executed successfully when a
transaction is committed.
# Future Work
* [Continuations and Signals for long running
Transactions](https://github.com/solana-labs/solana/issues/1485)


@ -0,0 +1,68 @@
# Stake Delegation and Rewards
Stakers are rewarded for helping validate the ledger. They do it by delegating
their stake to fullnodes. Those fullnodes do the legwork and send votes to the
stakers' staking accounts. The rest of the cluster uses those stake-weighted
votes to select a block when forks arise. Both the fullnode and staker need
some economic incentive to play their part. The fullnode needs to be
compensated for its hardware and the staker needs to be compensated for risking
getting its stake slashed. The economics are covered in [staking
rewards](staking-rewards.md). This chapter, on the other hand, describes the
underlying mechanics of its implementation.
## Vote and Rewards accounts
The rewards process is split into two on-chain programs. The Vote program
solves the problem of making stakes slashable. The Rewards account acts as
custodian of the rewards pool. It is responsible for paying out each staker
once the staker proves to the Rewards program that it participated in
validating the ledger.
The Vote account contains the following state information:
* votes - The submitted votes.
* `delegate_id` - An identity that may operate with the weight of this
account's stake. It is typically the identity of a fullnode, but may be any
identity involved in stake-weighted computations.
* `authorized_voter_id` - Only this identity is authorized to submit votes.
* `credits` - The amount of unclaimed rewards.
* `root_slot` - The last slot to reach the full lockout commitment necessary
for rewards.
The Rewards program is stateless and pays out rewards when a staker submits its
Vote account to the program. Claiming a reward requires a transaction that
includes the following instructions:
1. `RewardsInstruction::RedeemVoteCredits`
2. `VoteInstruction::ClearCredits`
The Rewards program transfers lamports from the Rewards account to the Vote
account's public key. The Rewards program also ensures that the `ClearCredits`
instruction follows the `RedeemVoteCredits` instruction, such that a staker may
not claim rewards for the same work more than once.
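The ordering requirement can be sketched as a simple check over a transaction's instruction list; `Ix` and `valid_claim` are illustrative names, not the actual program code:

```rust
#[derive(PartialEq)]
enum Ix {
    RedeemVoteCredits,
    ClearCredits,
    Other,
}

/// A claim is well-formed only if `ClearCredits` immediately follows
/// `RedeemVoteCredits`, so the same credits cannot be redeemed twice.
fn valid_claim(ixs: &[Ix]) -> bool {
    ixs.windows(2)
        .any(|w| w[0] == Ix::RedeemVoteCredits && w[1] == Ix::ClearCredits)
}
```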
### Delegating Stake
`VoteInstruction::DelegateStake` allows the staker to choose a fullnode to
validate the ledger on its behalf. By being a delegate, the fullnode is
entitled to collect transaction fees when it is the leader. The larger the stake,
the more often the fullnode will be able to collect those fees.
### Authorizing a Vote Signer
`VoteInstruction::AuthorizeVoter` allows a staker to choose a signing service
for its votes. That service is responsible for ensuring the vote won't cause
the staker to be slashed.
## Limitations
Many stakers may delegate their stakes to the same fullnode. The fullnode must
send a separate vote to each staking account. If there are far more stakers
than fullnodes, that's a lot of network traffic. An alternative design might
have fullnodes submit each vote to just one account and then have each staker
submit that account along with their own to collect its reward.

book/src/staking-rewards.md

@ -0,0 +1,136 @@
# Staking Rewards
Initial Proof of Stake (PoS) design ideas (i.e. using the in-protocol asset,
SOL, to provide secure consensus) are outlined here. Solana will implement a proof of
stake reward/security scheme for node validators in the cluster. The purpose is
threefold:
- Align validator incentives with that of the greater cluster through
skin-in-the-game deposits at risk
- Avoid 'nothing at stake' fork voting issues by implementing slashing rules
aimed at promoting fork convergence
- Provide an avenue for validator rewards provided as a function of validator
participation in the cluster.
While many of the details of the specific implementation are currently under
consideration and are expected to come into focus through specific modeling
studies and parameter exploration on the Solana testnet, we outline here our
current thinking on the main components of the PoS system. Much of this
thinking is based on the current status of Casper FFG, with optimizations and
specific attributes to be modified as is allowed by Solana's Proof of History
(PoH) blockchain data structure.
### General Overview
Solana's ledger validation design is based on a rotating, stake-weighted
leader broadcasting transactions in a PoH data
structure to validating nodes. These nodes, upon receiving the leader's
broadcast, have the opportunity to vote on the current state and PoH height by
signing a transaction into the PoH stream.
To become a Solana validator, a fullnode must deposit/lock-up some amount
of SOL in a contract. This SOL will not be accessible for a specific time
period. The precise duration of the staking lockup period has not been
determined. However, we can consider three phases of this time for which
specific parameters will be necessary:
- *Warm-up period*: in which SOL is deposited and inaccessible to the node,
but PoH transaction validation has not yet begun. Most likely on the order of
days to weeks
- *Validation period*: a minimum duration for which the deposited SOL will be
inaccessible, at risk of slashing (see slashing rules below) and earning
rewards for the validator participation. Likely duration of months to a
year.
- *Cool-down period*: a duration of time following the submission of a
'withdrawal' transaction. During this period validation responsibilities have
been removed and the funds continue to be inaccessible. Accumulated rewards
should be delivered at the end of this period, along with the return of the
initial deposit.
Solana's trustless sense of time and ordering provided by its PoH data
structure, along with its
[avalanche](https://www.youtube.com/watch?v=qt_gDRXHrHQ&t=1s) data broadcast
and transmission design, should provide sub-second transaction confirmation times that scale
with the log of the number of nodes in the cluster. This means we shouldn't
have to restrict the number of validating nodes with a prohibitive 'minimum
deposits' and expect nodes to be able to become validators with nominal amounts
of SOL staked. At the same time, Solana's focus on high throughput should
create an incentive for validation clients to provide high-performance and
reliable hardware. Combined with a potential minimum network speed threshold to
join as a validation-client, we expect a healthy validation delegation market
to emerge. To this end, Solana's testnet will lead into a "Tour de SOL"
validation-client competition, focusing on throughput and uptime to rank and
reward testnet validators.
### Slashing rules
Unlike Proof of Work (PoW) where off-chain capital expenses are already
deployed at the time of block construction/voting, PoS systems require
capital-at-risk to prevent a logical/optimal strategy of multiple chain voting.
We intend to implement slashing rules which, if broken, result in some amount of
the offending validator's deposited stake being removed from circulation. Given
the ordering properties of the PoH data structure, we believe we can simplify
our slashing rules to the level of a voting lockout time assigned per vote.
I.e. each vote has an associated lockout time (PoH duration) that represents a
duration within which any additional vote from that validator must be in a PoH
that contains the original vote, or else a portion of that validator's stake is
slashable. This duration time is a function of the initial vote PoH count and
all additional vote PoH counts. It will likely take the form:
Lockout<sub>i</sub>(PoH<sub>i</sub>, PoH<sub>j</sub>) = PoH<sub>j</sub> + K *
exp((PoH<sub>j</sub> - PoH<sub>i</sub>) / K)
Where PoH<sub>i</sub> is the height of the vote that the lockout is to be
applied to and PoH<sub>j</sub> is the height of the current vote on the same
fork. If the validator submits a vote on a different PoH fork on any
PoH<sub>k</sub> where k > j > i and PoH<sub>k</sub> < Lockout(PoH<sub>i</sub>,
PoH<sub>j</sub>), then a portion of that validator's stake is at risk of being
slashed.
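Numerically, the lockout function above can be written as follows; K is left as a free scale parameter, since its value is not fixed by this document:

```rust
/// Lockout(PoH_i, PoH_j) = PoH_j + K * exp((PoH_j - PoH_i) / K)
/// `poh_i` is the height of the vote the lockout applies to and `poh_j`
/// is the height of the current vote on the same fork.
fn lockout(poh_i: f64, poh_j: f64, k: f64) -> f64 {
    poh_j + k * ((poh_j - poh_i) / k).exp()
}
```

Note the exponential growth: each additional vote on the same fork pushes the lockout of earlier votes out dramatically faster than linearly.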
In addition to the functional form lockout described above, early
implementation may be a numerical approximation based on a First In, First Out
(FIFO) data structure and the following logic:
- FIFO queue holding 32 votes per active validator
- new votes are pushed on top of queue (`push_front`)
- expired votes are popped off top (`pop_front`)
- as votes are pushed into the queue, the lockout of each queued vote doubles
- votes are removed from back of queue if `queue.len() > 32`
- the earliest and latest height that has been removed from the back of the
queue should be stored
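The FIFO approximation above can be sketched as follows, assuming a base lockout of 1 slot and doubling on each push; the real constants and expiry rules may differ:

```rust
use std::collections::VecDeque;

struct QueuedVote {
    height: u64,
    lockout: u64,
}

struct VoteQueue {
    votes: VecDeque<QueuedVote>, // front = most recent vote
}

impl VoteQueue {
    fn push_vote(&mut self, height: u64) {
        // Expired votes are popped off the top (`pop_front`).
        while let Some(v) = self.votes.front() {
            if v.height + v.lockout < height {
                self.votes.pop_front();
            } else {
                break;
            }
        }
        // As a vote is pushed, the lockout of each queued vote doubles.
        for v in self.votes.iter_mut() {
            v.lockout *= 2;
        }
        // New votes are pushed on top (`push_front`).
        self.votes.push_front(QueuedVote { height, lockout: 1 });
        // Votes are removed from the back if the queue exceeds 32 entries.
        while self.votes.len() > 32 {
            self.votes.pop_back();
        }
    }
}
```

Because lockouts double with each subsequent vote, the vote at depth 32 has a lockout of roughly 2^32 slots, which is what makes the deepest queued vote effectively final.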
It is likely that a reward will be offered as a % of the slashed amount to any
node that submits proof of this slashing condition being violated to the PoH.
#### Partial Slashing
In the schema described so far, when a validator votes on a given PoH stream,
they are committing themselves to that fork for a time determined by the vote
lockout. An open question is whether validators will be hesitant to begin
voting on an available fork if the penalties are perceived too harsh for an
honest mistake or flipped bit.
One way to address this concern would be a partial slashing design that results
in a slashable amount as a function of either:
1. the fraction of validators, out of the total validator pool, that were also
slashed during the same time period (ala Casper)
2. the amount of time since the vote was cast (e.g. a linearly increasing % of
total deposited as slashable amount over time), or both.
This is an area currently under exploration.
### Penalties
As discussed in the [Economic Design](ed_overview.md) section, annual validator interest rates are to be specified as a
function of total percentage of circulating supply that has been staked. The cluster rewards validators who are online
and actively participating in the validation process throughout the entirety of
their *validation period*. For validators that go offline/fail to validate
transactions during this period, their annual reward is effectively reduced.
Similarly, we may consider an algorithmic reduction in a validator's active
staked amount in the case that it is offline. I.e. if a validator is
inactive for some amount of time, either due to a partition or otherwise, the
amount of their stake that is considered active (eligible to earn rewards)
may be reduced. This design would be structured to help long-lived partitions
to eventually reach finality on their respective chains as the % of non-voting
total stake is reduced over time until a super-majority can be achieved by the
active validators in each partition. Similarly, upon re-engaging, the active
amount staked will come back online at some defined rate. Different rates of
stake reduction may be considered depending on the size of the partition/active
set.


@ -0,0 +1,87 @@
# Synchronization
Fast, reliable synchronization is the biggest reason Solana is able to achieve
such high throughput. Traditional blockchains synchronize on large chunks of
transactions called blocks. By synchronizing on blocks, a transaction cannot be
processed until a duration called "block time" has passed. In Proof of Work
consensus, these block times need to be very large (~10 minutes) to minimize
the odds of multiple fullnodes producing a new valid block at the same time.
There's no such constraint in Proof of Stake consensus, but without reliable
timestamps, a fullnode cannot determine the order of incoming blocks. The
popular workaround is to tag each block with a [wallclock
timestamp](https://en.bitcoin.it/wiki/Block_timestamp). Because of clock drift
and variance in network latencies, the timestamp is only accurate within an
hour or two. To work around the workaround, these systems lengthen block times
to provide reasonable certainty that the median timestamp on each block is
always increasing.
Solana takes a very different approach, which it calls *Proof of History* or
*PoH*. Leader nodes "timestamp" blocks with cryptographic proofs that some
duration of time has passed since the last proof. All data hashed into the
proof most certainly occurred before the proof was generated. The node
then shares the new block with validator nodes, which are able to verify those
proofs. The blocks can arrive at validators in any order, or could even be
replayed years later. With such reliable synchronization guarantees, Solana is
able to break blocks into smaller batches of transactions called *entries*.
Entries are streamed to validators in realtime, before any notion of block
consensus.
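A toy version of the hash-chain timestamping described above. This uses the standard library's non-cryptographic `DefaultHasher` purely to keep the sketch dependency-free; the real chain uses a cryptographic hash such as SHA-256:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Advance the chain one step, optionally mixing in observed data.
fn next_hash(prev: u64, data: Option<&[u8]>) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    if let Some(d) = data {
        d.hash(&mut h); // data hashed in "proves history"
    }
    h.finish()
}

/// Run `ticks` empty steps (pure passage of time), then record one datum.
fn chain(ticks: u32, data: &[u8]) -> u64 {
    let mut state = 0u64;
    for _ in 0..ticks {
        state = next_hash(state, None);
    }
    next_hash(state, Some(data))
}
```

Because each step depends on the previous hash, the chain must be computed sequentially, yet anyone can re-run it and verify that the recorded data was present at a specific point in the sequence.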
Solana technically never sends a *block*, but uses the term to describe the
sequence of entries that fullnodes vote on to achieve *confirmation*. In that
way, Solana's confirmation times can be compared apples to apples to
block-based systems. The current implementation sets block time to 800ms.
What's happening under the hood is that entries are streamed to validators as
quickly as a leader node can batch a set of valid transactions into an entry.
Validators process those entries long before it is time to vote on their
validity. By processing the transactions optimistically, there is effectively
no delay between the time the last entry is received and the time when the node
can vote. In the event consensus is **not** achieved, a node simply rolls back
its state. This optimistic processing technique was introduced in 1981 and
called [Optimistic Concurrency
Control](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.4735). It
can be applied to blockchain architecture where a cluster votes on a hash that
represents the full ledger up to some *block height*. In Solana, it is
implemented trivially using the last entry's PoH hash.
### Relationship to VDFs
The Proof of History technique was first described for use in blockchain by
Solana in November of 2017. In June of the following year, a similar technique
was described at Stanford and called a [verifiable delay
function](https://eprint.iacr.org/2018/601.pdf) or *VDF*.
A desirable property of a VDF is that verification time is very fast. Solana's
approach to verifying its delay function is proportional to the time it took to
create it. Split over a 4000 core GPU, it is sufficiently fast for Solana's
needs, but if you asked the authors of the paper cited above, they might tell you
([and have](https://github.com/solana-labs/solana/issues/388)) that Solana's
approach is algorithmically slow and it shouldn't be called a VDF. We argue the
term VDF should represent the category of verifiable delay functions and not
just the subset with certain performance characteristics. Until that's
resolved, Solana will likely continue using the term PoH for its
application-specific VDF.
Another difference between PoH and VDFs is that a VDF is used only for tracking
duration. PoH's hash chain, on the other hand, includes hashes of any data the
application observed. That data is a double-edged sword. On one side, the data
"proves history" - that the data most certainly existed before the hashes that
follow it. On the other side, it means the application can manipulate the hash
chain by changing
*when* the data is hashed. The PoH chain therefore does not serve as a good
source of randomness whereas a VDF without that data could. Solana's [leader
rotation algorithm](#leader-rotation), for example, is derived only from the
VDF *height* and not its hash at that height.
### Relationship to Consensus Mechanisms
Proof of History is not a consensus mechanism, but it is used to improve the
performance of Solana's Proof of Stake consensus. It is also used to improve
the performance of the data plane and replication protocols.
### More on Proof of History
* [water clock
analogy](https://medium.com/solana-labs/proof-of-history-explained-by-a-water-clock-e682183417b8)
* [Proof of History
overview](https://medium.com/solana-labs/proof-of-history-a-clock-for-blockchain-cf47a61a9274)

book/src/terminology.md

@ -0,0 +1,312 @@
# Terminology
The following terms are used throughout this book.
#### account
A persistent file addressed by [public key](#public-key) and with
[lamports](#lamport) tracking its lifetime.
#### app
A front-end application that interacts with a Solana cluster.
#### blob
A fraction of a [block](#block); the smallest unit sent between
[fullnodes](#fullnode).
#### block
A contiguous set of [entries](#entry) on the ledger covered by a
[vote](#ledger-vote). A [leader](#leader) produces at most one block per
[slot](#slot).
#### block height
The number of [blocks](#block) beneath the current block. The first block after
the [genesis block](#genesis-block) has height zero.
#### block id
The [entry id](#entry-id) of the last entry in a [block](#block).
#### bootstrap leader
The first [fullnode](#fullnode) to take the [leader](#leader) role.
#### CBC block
The smallest encrypted chunk of the ledger; an encrypted ledger segment is made
of many CBC blocks (`ledger_segment_size / cbc_block_size`, to be exact).
#### client
A [node](#node) that utilizes the [cluster](#cluster).
#### cluster
A set of [fullnodes](#fullnode) maintaining a single [ledger](#ledger).
#### confirmation
The wallclock duration between a [leader](#leader) creating a [tick
entry](#tick) and recognizing a supermajority of [ledger votes](#ledger-vote)
with a ledger interpretation that matches the leader's.
#### control plane
A gossip network connecting all [nodes](#node) of a [cluster](#cluster).
#### data plane
A multicast network used to efficiently validate [entries](#entry) and gain
consensus.
#### drone
An off-chain service that acts as a custodian for a user's private key. It
typically serves to validate and sign transactions.
#### fake storage proof
A proof which has the same format as a storage proof, but the sha state is
actually from hashing a known ledger value which the storage client can reveal
and is also easily verifiable by the network on-chain.
#### entry
An entry on the [ledger](#ledger), either a [tick](#tick) or a [transactions
entry](#transactions-entry).
#### entry id
A globally unique identifier that is also a proof that the [entry](#entry) was
generated after a duration of time, all [transactions](#transaction) included
in the entry, and all previous entries on the [ledger](#ledger). See [Proof of
History](#proof-of-history).
#### epoch
The time, i.e. number of [slots](#slot), for which a [leader
schedule](#leader-schedule) is valid.
#### fork
A [ledger](#ledger) derived from common entries but then diverged.
#### fullnode
A full participant in the [cluster](#cluster), either a [leader](#leader) or
[validator](#validator) node.
#### fullnode state
The result of interpreting all programs on the ledger at a given [tick
height](#tick-height). It includes at least the set of all [accounts](#account)
holding nonzero [native tokens](#native-tokens).
#### genesis block
The configuration file that prepares the [ledger](#ledger) for the first [block](#block).
#### hash
A digital fingerprint of a sequence of bytes.
#### instruction
The smallest unit of a [program](#program) that a [client](#client) can include
in a [transaction](#transaction).
#### keypair
A [public key](#public-key) and corresponding [secret key](#secret-key).
#### lamport
A fractional [native token](#native-token) with the value of approximately
0.0000000000582 [sol](#sol) (2^-34).
#### loader
A [program](#program) with the ability to interpret the binary encoding of
other on-chain programs.
#### leader
The role of a [fullnode](#fullnode) when it is appending [entries](#entry) to
the [ledger](#ledger).
#### leader schedule
A sequence of [fullnode](#fullnode) [public keys](#public-key). The cluster
uses the leader schedule to determine which fullnode is the [leader](#leader)
at any moment in time.
#### ledger
A list of [entries](#entry) containing [transactions](#transaction) signed by
[clients](#client).
#### ledger segment
Portion of the ledger which is downloaded by the replicator where storage proof
data is derived.
#### ledger vote
A [hash](#hash) of the [fullnode's state](#fullnode-state) at a given [tick
height](#tick-height). It comprises a validator's affirmation that a
[block](#block) it has received has been verified, as well as a promise not to
vote for a conflicting [block](#block) (i.e. [fork](#fork)) for a specific
amount of time, the [lockout](#lockout) period.
#### light client
A type of [client](#client) that can verify it's pointing to a valid
[cluster](#cluster). It performs more ledger verification than a [thin
client](#thin-client) and less than a [fullnode](#fullnode).
#### lockout
The duration of time for which a [fullnode](#fullnode) is unable to
[vote](#ledger-vote) on another [fork](#fork).
#### native token
The [token](#token) used to track work done by [nodes](#node) in a cluster.
#### node
A computer participating in a [cluster](#cluster).
#### node count
The number of [fullnodes](#fullnode) participating in a [cluster](#cluster).
#### PoH
See [Proof of History](#proof-of-history).
#### program
The code that interprets [instructions](#instruction).
#### program id
The public key of the [account](#account) containing a [program](#program).
#### Proof of History
A stack of proofs, each of which proves that some data existed before the proof
was created and that a precise duration of time passed since the previous
proof. Like a [VDF](#verifiable-delay-function), a Proof of History can be
verified in less time than it took to produce.
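As an illustrative sketch only (the real design uses a cryptographic hash such
as SHA-256, not the toy hasher below), a Proof of History chain can be modeled
as repeated hashing, where each link depends on the previous one:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy Proof of History chain. Each link hashes the previous hash, so
// producing N links requires N sequential steps, while a verifier can
// recheck disjoint ranges of the chain in parallel.
fn next_hash(prev: u64, mixin: Option<&[u8]>) -> u64 {
    let mut hasher = DefaultHasher::new();
    prev.hash(&mut hasher);
    if let Some(data) = mixin {
        // Mixing data into the chain proves it existed before later links.
        data.hash(&mut hasher);
    }
    hasher.finish()
}

fn poh_chain(seed: u64, ticks: usize) -> Vec<u64> {
    let mut hashes = Vec::with_capacity(ticks);
    let mut prev = seed;
    for _ in 0..ticks {
        prev = next_hash(prev, None);
        hashes.push(prev);
    }
    hashes
}
```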
#### public key
The public key of a [keypair](#keypair).
#### replicator
Storage mining client, stores some part of the ledger enumerated in blocks and
submits storage proofs to the chain. Not a full-node.
#### runtime
The component of a [fullnode](#fullnode) responsible for [program](#program)
execution.
#### secret key
The private key of a [keypair](#keypair).
#### slot
The period of time for which a [leader](#leader) ingests transactions and
produces a [block](#block).
#### sol
The [native token](#native-token) tracked by a [cluster](#cluster) recognized
by the company Solana.
#### stake
Tokens forfeited to the [cluster](#cluster) if malicious [fullnode](#fullnode)
behavior can be proven.
#### storage proof
A set of SHA hash states constructed by sampling the encrypted version of the
stored ledger segment at certain offsets.
#### storage proof challenge
A transaction from a replicator that verifiably proves that a validator
confirmed a fake proof.
#### storage proof claim
A transaction from a validator, submitted after the timeout period following
the storage proof confirmation and during which no successful challenges have
been observed, that rewards the parties to the storage proofs and
confirmations.
#### storage proof confirmation
A transaction by a validator which indicates the set of real and fake proofs
submitted by a storage miner. The transaction would contain a list of proof
hash values and a bit indicating whether each hash is valid or fake.
#### storage validation capacity
The number of keys and samples that a validator can verify each storage epoch.
#### thin client
A type of [client](#client) that trusts it is communicating with a valid
[cluster](#cluster).
#### tick
A ledger [entry](#entry) that estimates wallclock duration.
#### tick height
The Nth [tick](#tick) in the [ledger](#ledger).
#### token
A scarce, fungible member of a set of tokens.
#### tps
[Transactions](#transaction) per second.
#### transaction
One or more [instructions](#instruction) signed by the [client](#client) and
executed atomically.
#### transactions entry
A set of [transactions](#transaction) that may be executed in parallel.
#### validator
The role of a [fullnode](#fullnode) when it is validating the
[leader's](#leader) latest [entries](#entry).
#### VDF
See [verifiable delay function](#verifiable-delay-function).
#### verifiable delay function
A function that takes a fixed amount of time to execute that produces a proof
that it ran, which can then be verified in less time than it took to produce.
#### vote
See [ledger vote](#ledger-vote).

## Testing Programs
Applications send transactions to a Solana cluster and query validators to
confirm the transactions were processed and to check each transaction's result.
When the cluster doesn't behave as anticipated, it could be for a number of
reasons:
* The program is buggy
* The BPF loader rejected an unsafe program instruction
* The transaction was too big
* The transaction was invalid
* The Runtime tried to execute the transaction when another one was accessing
the same account
* The network dropped the transaction
* The cluster rolled back the ledger
* A validator responded to queries maliciously
### The Transact Trait
To troubleshoot, the application should retarget a lower-level component, where
fewer errors are possible. Retargeting can be done with different
implementations of the Transact trait.
When Futures 0.3.0 is released, the Transact trait may look like this:
```rust,ignore
trait Transact {
async fn send_transactions(txs: &[Transaction]) -> Vec<Result<(), BankError>>;
}
```
Users send transactions and asynchronously await their results.
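For illustration, a hypothetical in-memory implementation of a synchronous
variant of the trait might look like the following; the names `MockTransport`
and `BankError::AccountInUse`, and the shape of `Transaction`, are assumptions
for this sketch, not the actual API:

```rust
#[derive(Debug, PartialEq)]
enum BankError {
    AccountInUse,
}

struct Transaction {
    from: String, // the signing account, simplified to a string key
}

// Synchronous variant of the Transact trait, for this sketch only.
trait Transact {
    fn send_transactions(&mut self, txs: &[Transaction]) -> Vec<Result<(), BankError>>;
}

struct MockTransport {
    locked: std::collections::HashSet<String>,
}

impl Transact for MockTransport {
    fn send_transactions(&mut self, txs: &[Transaction]) -> Vec<Result<(), BankError>> {
        txs.iter()
            .map(|tx| {
                // Mimic the runtime's account locking: a second transaction
                // touching an already-locked account is rejected.
                if self.locked.insert(tx.from.clone()) {
                    Ok(())
                } else {
                    Err(BankError::AccountInUse)
                }
            })
            .collect()
    }
}
```

Swapping such an implementation in behind the trait lets the application's
logic run unchanged while errors from lower layers are ruled out one at a time.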
#### Transact with Clusters
The highest level implementation targets a Solana cluster, which may be a
deployed testnet or a local cluster running on a development machine.
#### Transact with the TPU
The next level is the TPU implementation of Transact. At the TPU level, the
application sends transactions over Rust channels, where there can be no
surprises from network queues or dropped packets. The TPU implements all
"normal" transaction errors. It performs signature verification, may report
account-in-use errors, and otherwise produces a ledger, complete with Proof of
History hashes.
### Testing with the Bank
Below the TPU level is the Bank. The Bank doesn't do signature verification or
generate a ledger. The Bank is a convenient layer at which to test new on-chain
programs. It allows developers to toggle between native program implementations
and BPF-compiled variants. No need for the Transact trait here. The Bank's API
is synchronous.
### Unit-testing with the Runtime
Below the Bank is the Runtime. The Runtime is the ideal test environment for
unit-testing. By statically linking the Runtime into a native program
implementation, the developer gains the shortest possible edit-compile-run
loop. Without any dynamic linking, stack traces include debug symbols and
program errors are straightforward to troubleshoot.

# Example app: Tic-Tac-Toe
[Click here to play
Tic-Tac-Toe](https://solana-example-tictactoe.herokuapp.com/) on the Solana
testnet. Open the link and wait for another player to join, or open the link
in a second browser tab to play against yourself. You will see that every
move a player makes stores a transaction on the ledger.
## Build and run Tic-Tac-Toe locally
First fetch the latest release of the example code:
```sh
$ git clone https://github.com/solana-labs/example-tictactoe.git
$ cd example-tictactoe
$ TAG=$(git describe --tags $(git rev-list --tags --max-count=1))
$ git checkout $TAG
```
Next, follow the steps in the git repository's
[README](https://github.com/solana-labs/example-tictactoe/blob/master/README.md).
## Getting lamports to users
You may have noticed you interacted with the Solana cluster without first
needing to acquire lamports to pay transaction fees. Under the hood, the web
app creates a new ephemeral identity and sends a request to an off-chain
service for a signed transaction authorizing a user to start a new game.
The service is called a *drone*. When the app sends the signed transaction
to the Solana cluster, the drone's lamports are spent to pay the transaction
fee and start the game. In a real-world app, the drone might request that the
user watch an ad or pass a CAPTCHA before signing over its lamports.

# The Transaction Processing Unit
<img alt="TPU Block Diagram" src="img/tpu.svg" class="center"/>

# The Transaction Validation Unit
<img alt="TVU Block Diagram" src="img/tvu.svg" class="center"/>

# Secure Vote Signing
This design describes additional vote signing behavior that will make the
process more secure.
Currently, Solana implements a vote-signing service that evaluates each vote to
ensure it does not violate a slashing condition. The service could potentially
have different variations, depending on the hardware platform capabilities. In
particular, it could be used in conjunction with a secure enclave (such as SGX).
The enclave could generate an asymmetric key, exposing an API for user
(untrusted) code to sign the vote transactions, while keeping the vote-signing
private key in its protected memory.
The following sections outline how this architecture would work:
## Message Flow
1. The node initializes the enclave at startup
* The enclave generates an asymmetric key and returns the public key to the
node
* The keypair is ephemeral. A new keypair is generated on node bootup. A
new keypair might also be generated at runtime based on some TBD
criteria.
* The enclave returns its attestation report to the node
2. The node performs attestation of the enclave (e.g. using Intel's IAS APIs)
* The node ensures that the Secure Enclave is running on a TPM and is
signed by a trusted party
3. The stakeholder of the node grants ephemeral key permission to use its stake.
This process is TBD.
4. The node's untrusted, non-enclave software calls trusted enclave software
using its interface to sign transactions and other data.
* In case of vote signing, the node needs to verify the PoH. The PoH
verification is an integral part of signing. The enclave would be
presented with some verifiable data to check before signing the vote.
* The process of generating the verifiable data in untrusted space is TBD
## PoH Verification
1. When the node votes on an entry `X`, there's a lockout period `N`, during
which it cannot vote on a fork that does not contain `X` in its history.
2. Every time the node votes on the derivative of `X`, say `X+y`, the lockout
period for `X` increases by a factor `F` (i.e. the duration node cannot vote on
a fork that does not contain `X` increases).
* The lockout period for `X+y` is still `N` until the node votes again.
3. The lockout period increment is capped (e.g. factor `F` applies at most 32
times).
4. The signing enclave must not sign a vote that violates this policy. This
means
* Enclave is initialized with `N`, `F` and `Factor cap`
* Enclave stores `Factor cap` number of entry IDs on which the node had
previously voted
* The sign request contains the entry ID for the new vote
* Enclave verifies that new vote's entry ID is on the correct fork
(following the rules #1 and #2 above)
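Assuming the geometric growth implied by rules #1-#3, the lockout the enclave
must enforce could be computed as follows (the function name and the use of
saturating arithmetic are illustrative choices, not the actual implementation):

```rust
// Lockout after `confirmations` consecutive votes on descendants of an
// entry: N * F^confirmations, with the exponent capped at `factor_cap`.
fn lockout(n: u64, f: u64, factor_cap: u32, confirmations: u32) -> u64 {
    let applied = confirmations.min(factor_cap);
    n.saturating_mul(f.saturating_pow(applied))
}
```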
## Ancestor Verification
This is an alternate, albeit less certain, approach to verifying the voting fork.
1. The validator maintains an active set of nodes in the cluster
2. It observes the votes from the active set in the last voting period
3. It stores the ancestor/last_tick at which each node voted
4. It sends new vote request to vote-signing service
* It includes previous votes from nodes in the active set, and their
corresponding ancestors
5. The signer checks whether the previous votes contain a vote from the
validator, and whether its vote ancestor matches that of the majority of nodes
* It signs the new vote if the check is successful
* It asserts (raises an alarm of some sort) if the check is unsuccessful
The premise is that the validator can be spoofed at most once to vote on
incorrect data. If someone hijacks the validator and submits a vote request for
bogus data, that vote will not be included in the PoH (as it'll be rejected by
the cluster). The next time the validator sends a request to sign the vote, the
signing service will detect that validator's last vote is missing (as part of
#5 above).
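A sketch of the check in step 5, under the assumption that the observed votes
can be summarized as a map from node public key to the ancestor (e.g.
last_tick) it voted on; the function name and data shape are hypothetical:

```rust
use std::collections::HashMap;

// Returns true if the validator's previous vote is present and its
// ancestor agrees with a strict majority of the observed active set;
// a false result corresponds to raising an alarm.
fn check_previous_vote(validator: &str, observed: &HashMap<String, u64>) -> bool {
    let my_ancestor = match observed.get(validator) {
        Some(&a) => a,
        None => return false, // the validator's last vote is missing
    };
    let agree = observed.values().filter(|&&a| a == my_ancestor).count();
    agree * 2 > observed.len()
}
```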
## Fork determination
Because the enclave cannot process PoH, it has no direct knowledge of the fork
history of a submitted validator vote. Each enclave should be initialized
with the current *active set* of public keys. A validator should submit its
current vote along with the votes of the active set (including itself) that it
observed in the slot of its previous vote. In this way, the enclave can surmise
the votes accompanying the validator's previous vote and thus the fork being
voted on. This is not possible for the validator's initial submitted vote, as
it will not have a 'previous' slot to reference. To account for this, a short
voting freeze should apply until the second vote is submitted containing the
votes within the active set, along with its own vote, at the height of the
initial vote.
## Enclave configuration
A staking client should be configurable to prevent voting on inactive forks.
This mechanism should use the client's known active set `N_active` along with a
threshold vote `N_vote` and a threshold depth `N_depth` to determine whether or
not to continue voting on a submitted fork. This configuration should take the
form of a rule such that the client will only vote on a fork if it observes
more than `N_vote` votes at `N_depth`. Practically, this represents the client
confirming that it has observed some probability of economic finality of the
submitted fork at a depth where an additional vote would create a lockout for
an undesirable amount of time if that fork turns out not to be live.
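The rule could be reduced to a predicate like the following, assuming the
observations are summarized as the depth at which each active-set vote was
seen; the data shape and function name are assumptions for this sketch:

```rust
// Vote on the submitted fork only if strictly more than `n_vote` votes
// from the active set were observed at depth `n_depth` or deeper.
fn should_vote(observed_depths: &[u64], n_vote: usize, n_depth: u64) -> bool {
    let votes_at_depth = observed_depths.iter().filter(|&&d| d >= n_depth).count();
    votes_at_depth > n_vote
}
```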
## Challenges
1. Generation of verifiable data in untrusted space for PoH verification in the
enclave.
2. Need infrastructure for granting stake to an ephemeral key.
