* Fix funding math; make generate_keypairs and fund_keys consistent
* Add test, and fix inconsistencies it exposes
* De-pow math, and use assert_eq in tests for better failure msgs
* Update config program to accommodate multiple signers (#4946)
* Update config program to accommodate multiple signers
* Update install CLI
* Remove account_type u32; add handling for unsigned keys in list
* ConfigKeys doc
* Make config_api more robust (#4980)
* Make config_api more robust
* Add test and update store instruction
* Improve signature checks in config_api (#5001)
automerge
* Add validator-info CLI (#4970)
* Add validator-info CLI
* Add GetProgramAccounts method to solana-client
* Update validator-info args, and add get subcommand
* Update ValidatorInfo lengths
* Add account filter for get --all
* Update testnet participation doc to reflect validator-info
* Flesh out tests
* Review comments
* reduce duplicated code in accounts, fix cast to i64 (#5025)
* add accounts_index_scan_accounts (#5020)
* Plumb scan_accounts into accounts_db, adding load from storage (#5029)
* Fix getProgramAccounts RPC (#5024)
* Use scan_accounts to load accounts by program_id
* Add bank test
* Use get_program_accounts in RPC
* Rebase for v0.16
* core/rpc: Name magic number for minted lamports in tests genesis block
* core/rpc: Expose bank::capitalization() via getSolTotalSupply RPC method
* book: Add entry for getTotalSupply RPC method
* Add DeadSlots column family
* Filter dead forks from get_slots_since
* Mark erroring slots as dead in replay stage, add test
* Mark dead forks in progress instead of removing them
* Fix logging process_entries failures in replay_stage
* Unignore test_fail_entry_verification_leader
* Refactor BroadcastStage to support custom implementations, add FailEntryVerificationBroadcastRun implementation
* Plumb switch on broadcast type through validator
* Add test for validator generating non-verifiable entries to local_cluster
* Fix bad initializers
* Refactor broadcast run code into utils
* Only a single snapshot is maintained to avoid unbounded disk growth
* Snapshot is stored as a compressed tar archive for faster rsyncing
* Any validator node may now generate snapshots
* Updated testnet scripts to generate snapshots on the blockstreamer node
Use more chunks to avoid duplicate signature failures:
Duplicate signatures can occur because bank.clear_signatures()
can occur before the bank has actually committed the signatures
to the status cache and then error out on the next set of transactions.
* add MiningPool, fund validator MiningPools from inflation
* fixup
* finish rename of MINIMUM_SLOT_LENGTH to MINIMUM_SLOTS_PER_EPOCH
* deterministic miningpool location
* point_value, not credit_value... use f64
* Add support for contracts based on account data to Budget
* Add program_id to account constraints
* No longer require a signature for the account data witness
* Rename bank::store to store_account
* fmt
* Add a doc
* clippy
* Add signature to blob
* Change Signable trait to support returning references to signable data
* Add signing to broadcast
* Verify signatures in window_service
* Add testing for signatures to erasure
* Add RPC for getting current slot, consume RPC call in test_repairman_catchup for more deterministic results
* make runtime depend on bpf_loader
* remove vote redundancy, move bpf_loader to genesis, export program! from bpf_loader crate
* move bpf_loader specification into genesis
* bpf tests to use genesis with bpf
* need to avoid depending on programs, except for macros
* Add credit-only debit/data check to verify_instruction
* Store credits and pass to accounts_db
* Add InstructionErrors and tests
* Relax account locks for credit-only accounts
* Collect credit-only account credits before passing to accounts_db to store properly
* Convert System Transfer accounts to credit-only, and fixup test
* Functionalize collect_accounts to unit test
* Review comments
* Rebase
* Add accounts_index bench
* Don't take the accounts index lock unless needed
* Accounts_index remove insert return vec and add capacity stats
* Use hashbrown hashmap for accounts_index
* Rework PoRep design doc
* Define the stages of the PoRep game
* Add that the stages are really transactions
* Update turn count
* Review comment + clarification
* More clarification
* Rephrase for clarity
* Revert "Prevent run.sh from running beyond the first epoch under normal use (#4498)"
This reverts commit d343c409e6.
* Separate bootstrap leader's stake lamports from its identity lamports
The local cluster that run.sh starts will typically only have a single
node, the bootstrap leader. With epoch warmup enabled, run.sh will fail
after ~90 seconds once the warmup period has been exceeded due to lack
of votes from other validators.
As a workaround, disable epoch warmup and set slots-per-epoch to 1
million to keep run.sh alive for more than a fortnight.
* Sort repairmen before shuffling so order is the same across all validators
* Reduce repair redundancy to 1 for now
* Fix local cache of roots so that: 1) timestamps are only updated to acknowledge a repair was sent, 2) roots are updated even when timestamps aren't, to keep in sync with the network
* Refactor code, add test
* Add credit-only flag to AccountMeta, default to false
* Sort keys by is_credit_only within signed/unsigned groupings
* Process and de-dupe program keys along with other account keys
* Add message helper functions
* Fix test
* Improve comment
* s/is_credit_only/is_debitable
* Add InstructionKeys helper struct, and simplify program_position method
* Some counters for real poh perf analysis
* more metrics
* Comment on CPU affinity change, and reduce hash batch size based on TPS perf
* review comments
* Add num_readonly_accounts slice
* Impl programs in account_keys
* Emulate current account-loading functionality using program-account_keys (breaks exchange_program_api tests)
* Fix test
* Add temporary exchange faucet id
* Update chacha golden
* Split num_credit_only_accounts into separate fields
* Improve readability
* Move message field constants into Message
* Add MessageHeader struct and fixup comments
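A minimal sketch of the header layout these commits describe, using simplified stand-in types; exact field names and types in the real `Message` may differ:
```rust
// Illustrative only: simplified stand-ins, not the actual solana-sdk types.
type Pubkey = [u8; 32];

struct MessageHeader {
    /// Number of signatures required; the signing keys come first in account_keys.
    num_required_signatures: u8,
    /// Of the signed keys, how many are credit-only (may be credited, never debited).
    num_credit_only_signed_accounts: u8,
    /// Of the unsigned keys, how many are credit-only.
    num_credit_only_unsigned_accounts: u8,
}

struct Message {
    header: MessageHeader,
    account_keys: Vec<Pubkey>,
    // recent_blockhash, instructions, ...
}
```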
* Add bootstrap storage account to genesis
* Add storage account genesis command to run.sh
* Update airdrop for all validators
* Remove unhelpful Short for arg
* Set the correct program owner
* Use Rc to prevent clone of Packets
* Fix min => max in banking_stage threads.
Coalesce packet buffers better since a larger batch will
be faster through banking and sigverify.
Deconstruct batches into banking_stage from sigverify since
sigverify likes to accumulate batches but then a single banking_stage
thread will be stuck with a large batch. Maximize parallelism by
creating more chunks of work for banking_stage.
* Remove unused PohServiceConfig::Step
* Clarify variable name
* Poh::hash() now takes an iteration counter
* man -> max
* Inline functions with single call site
* Move PohServiceConfig into GenesisBlock
* Add plumbing to enable real PoH on testnets
* Batch hashes to improve PoH hash rate
* Ensure a constant hashes_per_tick
* Remove PohEntry mixin field
* Poh/PohEntry no longer maintains tick_height
* Ensure a constant hashes_per_tick
* ci/localnet-sanity.sh: Use real PoH
* Rework Poh/PohService to keep PohRecorder unlocked as much as possible while hashing
* Correctly remove replicator from data plane after its done repairing
* Update discover to report nodes and replicators separately
* Fix print and condition to be spy
Cleanup
Raise limit on submission threshold
Pick nits and add metrics point
fmt
Fixup compiler warning
Cleanup if-else
Append new point to vec rather than submit
* support passive staking with wallet, use it
* fixups
* clippy
* cleanup app generation in wallet, finish fullnode.sh staking
* _id and _keypair => pubkey
use keygen, not wallet to get pubkey
* found 'em
* Be able to create bank snapshots
* fix clippy
* load snapshot on start
* regenerate account index from the storage
* Remove rc feature dependency
* cleanup
* save snapshot for slot 0
* Add Epoch Slots to gossip
* Add new gossip structure to support Repair
* remove unnecessary clones
* Setup dummy fast repair in repair_service
* PR comments
* Introduce record locks on txs that will be recorded
* Add tests for LockedAccountsResults
* Fix broken bench
* Exit process_entries on detecting conflicting locks within same entry
* Add optional depth parameter to pubsub, and store in subscriptions
* Pass bank_forks into rpc_subscription; add method to check depth before notify and impl for account subscriptions
* Impl check-depth for signature subscriptions
* Impl check-depth for program subscriptions
* Plumb fork id through accounts
* Use fork id and root to prevent repeated account notifications; also s/Depth/Confirmations
* Write tests in terms of bank_forks
* Fixup accounts tests
* Add pubsub-confirmations tests
* Update pubsub documentation
* add support for single-crate coverage to help iterate, update to latest grcov
* shellcheck
* fixup
* remove unused
* install grcov before setting RUSTFLAGS ;)
* rely on nightly having grcov installed
* net.sh: Add -F to discard validator nodes that didn't bootup successfully
* Relax sanity node count when validator bootup failure is permitted
* Less sanity for testnet-demo
* net.sh: Add -F to discard validator nodes that didn't bootup successfully
* testnet-demo now runs across more GCE zones
* Save zone info to config file
* Add geoip whitelist for common data centers
* Skip more of start
* Include -x for config
* Fetch private key from first validator node if necessary
* Correct -r propagation
And reduce metrics rate for exchange contract counters.
Since we can run tens to hundreds of thousands of contracts per second,
some metrics would be dropped if submitting every time.
* Fix inserting bogus is_last blobs into blocktree
* Check for pre-existing blob before insert
* Ignore test that performs concurrent writes on blocktree as that is not supported
* Fix erasure metadata race condition
* make erasure return the underlying error without wrapping it in the `solana::Error` type
* Add metric for erasure failures
* add tests to `ErasureMeta` indexing logic
* Add test to ensure erasure recovery failures don't cause panics
* Add support for Azure instances in testnet creation
* Fixup
* Fix shellcheck errors
* More shellcheck and cleanup node creation and deletion
* More shellcheck and cleanup node creation and deletion
* Fixup instance wait API
* Fix review comments and add GPU installation extension
* Revert "Revert "account storage is not in sync with the index after gc (#3914)" (#3936)"
This reverts commit 4f47fc00bc.
* fix get_exclusive_storage
* clippy
* account storage is not in sync with the index after gc
* builds
* clippy fmt
* test
* purge dead forks on store
* rm println
* also fixed count_stores
* comments
* rename Entry::serialized_size() to Entry::to_blob_size() to better
reduce confusion with bincode, et al. and to better reflect its
real meaning
* fix implementation of to_blob_size() to actually return what happens
when we do entries.to_blobs() (i.e. we serialize Vec<Entry>, not Entry)
* update tests to be more rigorous
* clippy
* Migrate from ring to ed25519-dalek
* Move gen_keypair_file test to a more appropriate location
* Fixup bench-exchange and add helper fn for single deterministic keypair
* Update golden
* fix erasure, more tests for full blobs, more metrics
* Revert "Revert "Use Rust erasure library and turn on erasure (#3768)" (#3827)"
This reverts commit 4b8cb72977.
* split out erasure into new crate; add implementation using rust reed-solomon-library
* Track erasures with a &[bool] instead of indexes
* fix bug that reported the number of erasures incorrectly
* Introduce erasure `Session` for consistent config
* Increase test coverage; fix bugs
* Add ability to remove blobs from erasure meta tracking. test added
* Track deletion of coding blobs in blocktree via ErasureMeta. Added to
test
* Remove unused functions in blocktree
* add randomness to recovery thread to exercise recovery due to either new
data or coding blobs
* Add unit test for ErasureMeta index handling
* Re-enable test in broadcast stage
* Rename programs to instruction_processors
* Updates around the code base to support instruction_processors rename
* Kabab instruction_processors
* Update Cargo.toml files and scripts to use instruction-processors
* Update Cargo.toml to use instruction-processors
* Update CI scripts to use instruction-processors
* Add --bootstrap-leader-lamports
* Generalize --no-stake into --stake NUM
* Use a large stake for net/ fullnodes
* Setup vote account before starting fullnode to avoid mixed log output
- packets are buffered on leader rotation, when the next leader is
unknown
- shortening the wait allows the banking stage to poll for next
leader more frequently
This will allow BankClient to spin up a thread to use the Bank.
It'll also ease the transition from BankClient to ThinClient since
it won't let you depend on Bank.
Drawback: the transition from Bank to BankClient will be harder
because the Bank methods are inaccessible.
Use transaction count instead of batch count,
and set the recv_start from when we finished processing
the previous batch to get a more accurate number.
* Move append_vec bench to the crate with append_vec
* Use black_box to tell the compiler not to optimize away test data
```
pub fn black_box<T>(dummy: T) -> T {
    unsafe {
        let ret = std::ptr::read_volatile(&dummy);
        std::mem::forget(dummy);
        ret
    }
}
```
* Revert "Use black_box to tell the compiler not to optimize away test data"
This reverts commit 5610b8ee95.
* Use black_box to tell the compiler not to optimize away test data
* Create bench directories
* Refactor Storage Program
* Replace KeyedAccount trait with StorageAccount struct
* Implement State for Account, not StorageAccount
* Make State trait more generic
* Move validation check into function
The previous code was assuming the instruction index and the
program_id index were the same. That's always true for
single-instruction transactions, but not for multiples.
Added CompiledInstruction::program_id() so that we don't need to pass
around instruction indexes just for Message::program_id().
Also added Message.program_ids() that returns a slice so that we
can move those pubkeys into Message::account_keys.
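A rough sketch of those helpers with simplified stand-in types (not the actual SDK definitions), showing how resolving the program id through `account_keys` removes the need to pass instruction indexes around:
```rust
// Simplified stand-ins for illustration; not the actual SDK types.
pub struct Pubkey([u8; 32]);

pub struct CompiledInstruction {
    /// Index into the message's account_keys of the program that runs this instruction.
    pub program_ids_index: u8,
    pub accounts: Vec<u8>,
    pub data: Vec<u8>,
}

impl CompiledInstruction {
    /// Resolve this instruction's program id against the shared key table.
    pub fn program_id<'a>(&self, account_keys: &'a [Pubkey]) -> &'a Pubkey {
        &account_keys[self.program_ids_index as usize]
    }
}

pub struct Message {
    pub account_keys: Vec<Pubkey>,
    pub instructions: Vec<CompiledInstruction>,
}

impl Message {
    /// All program ids referenced by this message, one per instruction.
    pub fn program_ids(&self) -> Vec<&Pubkey> {
        self.instructions
            .iter()
            .map(|ix| ix.program_id(&self.account_keys))
            .collect()
    }
}
```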
Prefer the term "instruction processor" over "program". Reserve
the term "native" for the loader and shared object it loads.
Compiling an instruction processor to BPF shouldn't imply changing
to a non-native entrypoint.
* Implement finalizer so that all locked accounts are dropped when finalizer goes out of scope
* Add test for tx error with lock conflict
* Fix double unlock from destructor running after a call to unlock
* Add contains_all_parents flag to SlotMeta to prep for tracking detached heads
* Add new DetachedHeads column family
* Remove has_complete_parents
* Fix test
verify_signature() was only used in a test that was testing
binary layout. It only worked because the test transaction only
had one signature.
from() was only used by verify_signature() and that's something
we'd typically called `pubkey()`.
hash() didn't return the hash of the Transaction, as you might
guess. It's only used for PoH, so move it into Entry.
highlight.js has a big dictionary of words. When git-grep includes
one of those words, it floods the screen with the whole dictionary.
Mark it as binary so that it'll now just report one line:
Binary file book/theme/highlight.js matches
This module predates Accounts. That was a separate module because
it used to be part of Bank and those types could be sent to any
smart contract. Now each instruction processor defines for itself
what instructions it accepts.
* Restart node test (#3459)
* Add test to local_cluster for restarting a node
* fix so that we don't hit end of epoch - leader not found before trying to transfer
* Do not look for confirmations, b/c nobody is voting on empty transmissions in this single node test
is_rooted is now is_connected and (still) indicates the set of connected
completed slots. 'rooted' slot terminology is used for a different
meaning in bank_forks and replay_stage.
* use bincode for SSTable serialization; add tests
* Fix bug uncovered in merge algorithm by unit tests
* use bincode in write-ahead-log serialization
* Add helper `Fill` trait for zeroing buffers
Add an interface to query the storage slot a
replicator is holding on storage_addr port.
Fix logic to poll blocktree for all slots
replicated being filled.
Add test logic to ask replicator what slot it
is replicating and then download an entry in
the slot.
* move core tests to core
* remove window
* fix up flaky tests
* test_entryfication needs a singly-threaded banking_stage
* move core benches to core
* remove unnecessary dependencies
* remove core as a member for now, test it like runtime
* stop running tests twice
* remove duplicate runs of tests in perf
* Lift all shared mutable state into Kvstore
commit is now an AtomicUsize
In-memory table and write-log are now struct members behind individual RwLocks
* locktower components and tests
* integrate locktower into replay stage
* track locktower duration
* make sure threshold is checked after simulating the vote
* check vote lockouts using the VoteState program
* duplicate vote test
* epoch stakes
* disable impossible to verify tests
These tests were from back in the day when Bank(then-called Accountant)
would call `verify_plan()` on all transactions. Nowadays `verify_plan`
is only useful to the client. It can be used to ensure a transaction
won't trigger runtime errors.
A sequence of instructions. A client compiles the script and then uses
the compiled script to construct a transaction. Then it adds a
blockhash, signs the transaction, and sends it off for
processing.
The correct thing to do here is retry until you get a
BlockhashNotFound error, and then send another request
to the drone for a new signed transaction.
Also, this test is an integration test masquerading as a unit test.
* Rename userdata to data
Instead of saying "userdata", which is ambiguous and imprecise,
say "instruction data" or "account data".
Also, add `ProgramError::InvalidInstructionData`
Fixes #2761
* Don't vote for empty leader transmissions
* Add is_delta flag to bank to detect empty leader transmissions
* Plumb new is_votable flag through replay stage
* Fix PohRecorder tests
* Change is_delta to AtomicBool to avoid making Bank references mutable
* Reset start slot in poh_recorder when working bank is cleared, so that consecutive TPUs will start from the correct place
* Use proper max tick height calculation
* Test for not voting on empty transmission
* tests for is_votable
* Mostly implement key-value store and add integration points
Essential key-value store functionality is implemented, needs more work to be integrated, tested, and activated.
Behind the `kvstore` feature.
* wake up replay stage when the poh bank is cleared
* bump ticks per second
* Increase ticks per slot to match faster tick rate
* Remove check that working bank must be the bank for the greatest slot
* Make start_leader() skip starting TPU for slots we've already been leader for
This code wasn't updated after we started batching instructions.
The current code does allocations instead of using CreateAccount.
The runtime shouldn't allow that, so getting this code out of the
way before we lock down the runtime.
Vote program currently offers no path to vote with the
authorized voter. There should be a
VoteInstruction::new_authorized_vote() that accepts the
keypair of the authorized voter and the pubkey of the vote
account. The only option in current code is
VoteInstruction::new_vote() that accepts the voter's keypair
and assumes that pubkey is the vote account.
Multiple threads can enter the read lock and
all store the new empty set to account_maps.
Check again after taking write lock to make sure
only one thread actually inserts the new entry.
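A minimal sketch of that check-again-under-the-write-lock pattern, with placeholder types standing in for the real accounts index:
```rust
use std::collections::{HashMap, HashSet};
use std::sync::RwLock;

// Placeholder types; the real accounts index stores more than a fork set.
type Pubkey = [u8; 32];
type Fork = u64;

struct AccountsIndex {
    account_maps: RwLock<HashMap<Pubkey, RwLock<HashSet<Fork>>>>,
}

impl AccountsIndex {
    fn insert(&self, pubkey: Pubkey, fork: Fork) {
        // Fast path: most keys already exist, so a shared read lock suffices.
        {
            let maps = self.account_maps.read().unwrap();
            if let Some(forks) = maps.get(&pubkey) {
                forks.write().unwrap().insert(fork);
                return;
            }
        }
        // Slow path: several threads may race here after all seeing "missing"
        // under the read lock. Take the write lock and check again via entry();
        // only the first thread actually inserts the new (empty) set.
        let mut maps = self.account_maps.write().unwrap();
        let forks = maps
            .entry(pubkey)
            .or_insert_with(|| RwLock::new(HashSet::new()));
        forks.get_mut().unwrap().insert(fork);
    }
}
```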
Also:
* Add an assertion to the transaction builder if not enough
keypairs were provided for all keys that require signatures.
* Expose bugs in the runtime.
vote_index not being maintained correctly during a squash.
The tokens==0 shielding accounts were being inserted with
owner=default Pubkey so they didn't know they are vote accounts
and should update the vote accounts set.
* Moved supermajority functions into new module, staking_utils
* Move staking functions out of bank, and into staking_utils, change get_supermajority_slot to only use state from epoch boundary
* Move bank slot height in staked_nodes_at_slot() to be bank id
* Add slot 3 back to ASCII art
* New slot-oriented diagrams
With 1 block per slot, slots are drawn vertically. That's the ideal
case. Abandoning a block is what should look like something forking
off to the side.
* Add lockouts to vote program
* Rename MAX_VOTE_HISTORY TO MAX_LOCKOUT_HISTORY, change process_vote() to only pop votes after MAX_LOCKOUT_HISTORY + 1 votes have arrived
* Correctly calculate serialized size of an Option, rename root_block to root_slot
* add bank.id() which can be used by BankForks, blocktree_processor
* add bank.hash(), make hash_internal_state() private
* add bank.freeze()/is_frozen(), also useful for blocktree_processor, eventual freeze()ing in replay
We have lots of tests that work off genesis block. Also, one
might want to generate a future leader schedule under the assumption
the stakers stay the same.
I had suggested the last_id, but that puts an unnecessary dependency
on LastIdsQueue. Using epoch height is pretty interesting in that
given the same set of stakers, you simply increment the seed once
per epoch.
Also, tighten up the LeaderSchedule code.
Cleanup poh_recorder and poh_service.
* ticks are sent only if poh.tick_height > WorkingBank::min_tick_height and <= WorkingBank::max_tick_height
* entries are recorded only if poh.tick_height >= WorkingBank::min_tick_height and < WorkingBank::max_tick_height
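The two windows above differ only at their endpoints; a small sketch of the checks, with `WorkingBank` reduced to its two bounds:
```rust
// Sketch only: WorkingBank here is reduced to the two tick-height bounds.
struct WorkingBank {
    min_tick_height: u64,
    max_tick_height: u64,
}

/// Ticks are forwarded strictly after min, up to and including max.
fn should_send_tick(tick_height: u64, bank: &WorkingBank) -> bool {
    tick_height > bank.min_tick_height && tick_height <= bank.max_tick_height
}

/// Entries are recorded from min (inclusive) up to, but not including, max.
fn can_record_entry(tick_height: u64, bank: &WorkingBank) -> bool {
    tick_height >= bank.min_tick_height && tick_height < bank.max_tick_height
}
```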
Also, update LeaderScheduler's code to use node_id as well.
Unfortunately, no unit tests for this, because there's currently
only one way to set staker_id/node_id, and they are both set
to the same value.
It's useful for unit-testing, but generally isn't a variable
validators should be modifying. Blockstream and BlockstreamService
were the only ones using it. Switching them from a hard-coded 10
to the default didn't cause any test failures, so running with it.
This ensures GenesisBlock is always configured with the same
ticks_per_slot as LeaderScheduler. This will make it easier
to migrate to bank-generated schedules.
Anything that affects how the ledger is interpreted needs to be
in the genesis block or someplace on the ledger before later
parts of the ledger are interpreted. We currently don't have an
on-chain program for cluster parameters, so that leaves only
the genesis block option.
This test passes consistently when the test suite is run with a
single thread. It fails consistently on MacOS when run as part
of the unit-test suite.
No idea why it passes in CI.
Borrow checker
query the previous parents accounts
cleanup!
s/tree/parents
Tests! Last_ids need to be inherited as well otherwise nothing works.
new_from_parent
* Use the accounts list from parents up to finalized bank for Account::load apis.
* Borrow checker
* query the previous parents accounts
* cleanup!
* s/tree/parents
* Tests! Last_ids need to be inherited as well otherwise nothing works.
Functional change: the leader scheduler is no longer implicitly
updated by PohRecorder via register_tick(). That's intended to
be a "feature" (crossing fingers).
By passing a config and not a Arc'ed LeaderScheduler, callers
need to use `Bank::leader_scheduler` to access the scheduler.
By using the new constructor, there should be no incentive to
reach into the bank for that object.
That way we don't need TODOs saying "don't forget to iterate
over checkpoints too". It should be assumed that when the bank
references its previous checkpoint all its methods would
acknowledge it.
Fullnode was the only real consumer of process_ledger and it was
only there to process a Blocktree. Blocktree is a tree, and a
ledger is a sequence, so something's clearly not right here.
Drop all other dependencies on process_ledger (only one test) so
that it can be fixed up in isolation.
Making `solana_vote_program` is not an option because
then vote_program's entrypoint conflicts with reward_program's
entrypoint.
This unfortunately turns the SDK into a dumping ground for all
things shared between vote_program and other programs. Better
would be to create a solana-vote-api crate similar to the
solana-rewards-api crate.
Now clients can use all the libraries to create transactions
and dissect account data without needing to be constrained about
what can be compiled into a shared object or BPF.
Likewise, program development can move forward without being
concerned with bloating the shared object.
* Modify db_ledger to support per_slot metadata, add signal for updates, and add chaining to slots in db_ledger
* Modify replay stage to ask db_ledger for updates based on slots
* Add repair send/receive metrics
* Add repair service, remove old repair code
* Fix tmp_copy_ledger and setup for tests to account for multiple slots and tick limits within slots
The assert_counters() helper creates unreadable tests and makes
us have to update every test any time a counter is added. Instead,
we can just assert the values of any particular counters the test
may have affected.
* Modify replay stage to ask db_ledger for updates instead of reading from upstream channel
* Add signal for db_ledger to update listeners about updates
* fix flaky test
- Update leader/validator pipeline stage graph, as any node can be
doing either of the roles
- Update network stats graphs to remove hostname based filtering
And add a new 'staker_id' VoteState member variable to offer a path to
work our way out. Update leader scheduler to use staker_id, but
continue setting it to 'from_id' for the moment.
No functional changes here.
The bpfloader crate was triggering cargo to perform excessive rebuilds
of in-workspace dependencies. Unclear why exactly, but seems related to
the special dual crate-type employed by bpfloader.
* Groundwork for entry tree, align constants with definitions in the book
* Fix edge case in test, node can keep generating ticks between handle_role_transition and exit() call
* Connect TPU's broadcast service with TVU's blob fetch stage
- This is needed since ledger is being written only in TVU now
* fix clippy warnings
* fix failing test
* fix broken tests
* fixed failing tests
Split up StatusDeque into different modules
* LastIdQueue tracks last_ids
* StatusCache keeps track of signature statuses
* StatusCache stores success as a bit in a bloom filter
* Overhead for 1m Ok transactions is 4mb in memory
* Less concurrency between the objects, last_id and status_cache are read and written to at different points in the pipeline
* Each object has its own strategy for merging into the root checkpoint
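A rough sketch of the StatusCache half of that split, with a plain bit set standing in for the real bloom filter; names, sizes, and the hash are illustrative assumptions:
```rust
use std::collections::HashMap;

type Signature = [u8; 64];

// Ok signatures are recorded as set bits; failures keep the full error.
struct StatusCache<E> {
    ok_bits: Vec<u64>,               // 512k u64 words = 4 MB of bits
    failures: HashMap<Signature, E>, // failures are comparatively rare
}

impl<E> StatusCache<E> {
    fn new() -> Self {
        Self {
            ok_bits: vec![0u64; 512 * 1024],
            failures: HashMap::new(),
        }
    }

    // Toy hash; the real cache uses proper bloom-filter hashing with
    // several independent keys to keep the false-positive rate low.
    fn bit_index(&self, sig: &Signature) -> usize {
        let mut h: u64 = 0;
        for &b in sig.iter() {
            h = h.wrapping_mul(31).wrapping_add(u64::from(b));
        }
        (h as usize) % (self.ok_bits.len() * 64)
    }

    fn add_ok(&mut self, sig: &Signature) {
        let i = self.bit_index(sig);
        self.ok_bits[i / 64] |= 1u64 << (i % 64);
    }

    /// May return a false positive, never a false negative.
    fn maybe_ok(&self, sig: &Signature) -> bool {
        let i = self.bit_index(sig);
        (self.ok_bits[i / 64] & (1u64 << (i % 64))) != 0
    }
}
```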
* Add timeout to Replicator::new; used when polling for leader
* Add timeout functionality to replicator ledger download
Shares the same timeout as polling for leader
Defaults to 30 seconds
* Add docs for Replicator::new
* split load+execute from commit in bank, insert record between them in TPU code
* clippy
* remove clear_signatures() race with commit_transactions()
* add #[test] back
* Allow chained BudgetExpr via indirection
Change `And`, `Or`, and `After` expressions to contain
`Box<BudgetExpr>`s instead of directly holding payments
* run cargo fmt
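A sketch of the chained form described above: `After`, `And`, and `Or` hold boxed sub-expressions instead of flat payments, so budgets can nest (`Condition` and `Payment` are simplified stand-ins):
```rust
type Pubkey = [u8; 32];

struct Payment {
    lamports: u64,
    to: Pubkey,
}

enum Condition {
    Timestamp(u64, Pubkey),
    Signature(Pubkey),
}

enum BudgetExpr {
    /// Unconditional payment.
    Pay(Payment),
    /// Evaluate the nested expression once the condition is met.
    After(Condition, Box<BudgetExpr>),
    /// Evaluate the nested expression once both conditions are met.
    And(Condition, Condition, Box<BudgetExpr>),
    /// Two alternative (condition, expression) branches; the first met wins.
    Or((Condition, Box<BudgetExpr>), (Condition, Box<BudgetExpr>)),
}
```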
* Revert "Fix link to book in Local Testnet section (#2416)"
This reverts commit 710c0c9980.
* Revert "Add current leader information to dashboard (#2413)"
This reverts commit f0300c1711.
* Revert "remove window code from most places (#2389)"
This reverts commit e3c0bd5a3f.
* Add trait for RpcRequestHandler trait for RpcClient and add MockRpcClient for unit tests
* Add request_airdrop integration test
* Add timestamp_tx, witness_tx, and cancel_tx to wallet integration tests; add wallet integration tests to test-stable
* Add test cases
* Ignore plentiful sleeps in unit tests
* Use unwrap() on locks
An error there generally indicates a programmer error, not a
runtime error, so a detailed runtime message is not generally useful.
* Only clone Arcs when passing them across thread boundaries
* Cleanup TPU forwarder
By separating the query from the update, all the branches get easier to
test. Also, the update operation gets so simple, that we see it
belongs over in packet.rs.
* Thanks clippy
cute
This trait is for bloom, not crds.
Slightly better would be to put it in the SDK so that the trait
implementations could go into hash and pubkey, but if we don't
want compatibility constraints, this is the next best thing.
Unfortunately due to our multi-crate repo, as soon as
|./scripts/increment-cargo-version.sh| is run after a release, |cargo
package| will fail for crates that depend on other in-tree crates, as
the new crate version has not yet been published to crates.io.
For now this means that we need to continue flying blind and be prepared
to deal with minor publishing issues on each new release.
* test_encrypt_files_many_keys_multiple_keys passing
- buffer chunk size unified between single key and multiple key path,
which shouldn't be necessary but can fix later.
* test_encrypt_file_many_keys_bad_key_length passing
* Use vote signer service in fullnode
* Use native types for signature and pubkey, and address other review comments
* Start local vote signer if a remote service address is not provided
* Rebased to master
* Fixes after rebase
Add a command-line argument (min-hashes) to restrict the entries
processed by ledger-tool. For example, --min-hashes 1 will strip
"empty" Entries, i.e. those with num_hashes = 0.
Add basic ledger tool test
* implement per-thread, per-batch sleep in ms (throttling allows easier UI development)
* tidy up sleep() call with Duration::from_millis() instead of Duration::new()
* fixup indentation style
* removing multinode-demo/client-throttled.sh (same functionality available via arguments)
* Also implement more storage contract logic
* Add transactions for proof validation,
* Move storage state members into system storage account userdata
Not sure how else to comment, but I'd encourage keeping the content timeless rather than an update. So don't talk about recent releases or next we're doing X. (just a suggestion ;)
@@ -39,10 +75,10 @@ Install rustc, cargo and rustfmt:
```bash
$ curl https://sh.rustup.rs -sSf | sh
$ source $HOME/.cargo/env
$ rustup component add rustfmt-preview
$ rustup component add rustfmt
```
If your rustc version is lower than 1.31.0, please update it:
If your rustc version is lower than 1.34.0, please update it:
```bash
$ rustup update
@@ -64,7 +100,12 @@ $ cd solana
Build
```bash
$ cargo build --all
$ cargo build
```
Then to run a minimal local cluster
```bash
$ ./run.sh
```
Testing
@@ -73,40 +114,28 @@ Testing
Run the test suite:
```bash
$ cargo test --all
```
To emulate all the tests that will run on a Pull Request, run:
```bash
$ ./ci/run-local.sh
$ cargo test
```
Local Testnet
---
Start your own testnet locally, instructions are in the book [Solana: Blockchain Rebuild for Scale: Getting Started](https://solana-labs.github.io/solana/getting-started.html).
Start your own testnet locally, instructions are in the book [Solana: Blockchain Rebuild for Scale: Getting Started](https://solana-labs.github.io/book/getting-started.html).
Remote Testnets
---
We maintain several testnets:
* `testnet` - public stable testnet accessible via testnet.solana.com, with an https proxy for web apps at api.testnet.solana.com. Runs 24/7
* `testnet` - public stable testnet accessible via testnet.solana.com. Runs 24/7
* `testnet-beta` - public beta channel testnet accessible via beta.testnet.solana.com. Runs 24/7
* `testnet-edge` - public edge channel testnet accessible via edge.testnet.solana.com. Runs 24/7
* `testnet-perf` - permissioned stable testnet running a 24/7 soak test
* `testnet-beta-perf` - permissioned beta channel testnet running a multi-hour soak test weekday mornings
* `testnet-edge-perf` - permissioned edge channel testnet running a multi-hour soak test weekday mornings
## Deploy process
They are deployed with the `ci/testnet-manager.sh` script through a list of [scheduled
@@ -43,8 +43,7 @@ the `master` branch as late as possible prior to the milestone release.
### v*X.Y.Z* release tag
The release tags are created as desired by the owner of the given stabilization
branch, and cause that *X.Y.Z* release to be shipped to https://crates.io,
https://snapcraft.io/, and elsewhere.
branch, and cause that *X.Y.Z* release to be shipped to https://crates.io
Immediately after a new v*X.Y.Z* branch tag has been created, the `Cargo.toml`
patch version number (*Z*) of the stabilization branch is incremented by the
@@ -62,29 +61,116 @@ There are three release channels that map to branches as follows:
## Release Steps
### Changing channels
When cutting a new channel branch these pre-steps are required:
### Creating a new branch from master
#### Create the new branch
1. Pick your branch point for release on master.
2. Create the branch. The name should be "v" + the first 2 "version" fields from Cargo.toml. For example, a Cargo.toml with version = "0.9.0" implies the next branch name is "v0.9".
4. Push the new branch to the solana repository
3. Update Cargo.toml on master to the next semantic version (e.g. 0.9.0 -> 0.10.0) by running `./scripts/increment-cargo-version.sh`.
5. Land your Cargo.toml change as a master PR.
1. Create the branch. The name should be "v" + the first 2 "version" fields
from Cargo.toml. For example, a Cargo.toml with version = "0.9.0" implies
the next branch name is "v0.9".
1. Note the Cargo.toml in the repo root directory does not contain a version. Look at any other Cargo.toml file.
1. Create a new branch and push this branch to the solana repository.
1. `git checkout -b <branchname>`
1. `git push -u origin <branchname>`
At this point, ci/channel-info.sh should show your freshly cut release branch as "BETA_CHANNEL" and the previous release branch as "STABLE_CHANNEL".
#### Update master with the next version
### Updating channels (i.e. "making a release")
1. After the new branch has been created and pushed, update Cargo.toml on **master** to the next semantic version (e.g. 0.9.0 -> 0.10.0)
by running `./scripts/increment-cargo-version.sh`, then rebuild with
`cargo build` to cause a refresh of `Cargo.lock`.
1. Push your Cargo.toml change and the autogenerated Cargo.lock changes to the
master branch
At this point, `ci/channel-info.sh` should show your freshly cut release branch as
"BETA_CHANNEL" and the previous release branch as "STABLE_CHANNEL".
in book/src/testnet-participation.md on the release (beta) branch.
### Make the Release
We use [github's Releases UI](https://github.com/solana-labs/solana/releases) for tagging a release.
1. Go [there ;)](https://github.com/solana-labs/solana/releases).
2. Click "Draft new release". The release tag must exactly match the `version` field in `/Cargo.toml` prefixed by `v` (ie, `<branchname>.X`).
3. If the first major release on the branch (e.g. v0.8.0), paste in [this template](https://raw.githubusercontent.com/solana-labs/solana/master/.github/RELEASE_TEMPLATE.md) and fill it in.
4. Test the release by generating a tag using semver's rules. First try at a release should be `<branchname>.X-rc.0`.
5. Verify release automation:
1. Click "Draft new release". The release tag must exactly match the `version`
field in `/Cargo.toml` prefixed by `v` (ie, `<branchname>.X`).
1. If the Cargo.toml version field is **0.12.3**, then the release tag must be **v0.12.3**
1. If this is the first release on the branch (e.g. v0.13.**0**), paste in [this template](https://raw.githubusercontent.com/solana-labs/solana/master/.github/RELEASE_TEMPLATE.md) and fill it in.
1. Test the release by generating a tag using semver's rules. First try at a
release should be `<branchname>.X-rc.0`.
1. Verify release automation:
1. [Crates.io](https://crates.io/crates/solana) should have an updated Solana version.
2. ...
6. After testnet deployment, verify that testnets are running correct software. http://metrics.solana.com should show testnet running on a hash from your newly created branch.
7. Once the release has been made, update Cargo.toml on release to the next semantic version (e.g. 0.9.0 -> 0.9.1) by running `./scripts/increment-cargo-version.sh patch`.
1. Once the release has been made, update Cargo.toml on the release branch to the next
semantic version (e.g. 0.9.0 -> 0.9.1) by running
`./scripts/increment-cargo-version.sh patch`, then rebuild with `cargo
build` to cause a refresh of `Cargo.lock`.
1. Push your Cargo.toml change and the autogenerated Cargo.lock changes to the
release branch.
### Publish updated Book
We maintain three copies of the "book" as official documentation:
1) "Book" is the documentation for the latest official release. This should get manually updated whenever a new release is made. It is published here:
https://solana-labs.github.io/book/
2) "Book-edge" tracks the tip of the master branch and updates automatically.
https://solana-labs.github.io/book-edge/
3) "Book-beta" tracks the tip of the beta branch and updates automatically.
https://solana-labs.github.io/book-beta/
To manually trigger an update of the "Book", create a new job of the manual-update-book pipeline.
Set the tag of the latest release as the PUBLISH_BOOK_TAG environment variable.
After a block reaches finality, all blocks from that one on down
to the genesis block form a linear chain with the familiar name
blockchain. Until that point, however, the validator must maintain all
potentially valid chains, called *forks*. The process by which forks
naturally form as a result of leader rotation is described in
[fork generation](fork-generation.md). The *blocktree* data structure
described here is how a validator copes with those forks until blocks
are finalized.
The blocktree allows a validator to record every blob it observes
on the network, in any order, as long as the blob is signed by the expected
leader for a given slot.
Blobs are moved to a fork-able key space: the tuple of `leader slot` + `blob
index` (within the slot). This permits the skip-list structure of the Solana
protocol to be stored in its entirety, without a priori choosing which fork to
follow, which Entries to persist or when to persist them.
Repair requests for recent blobs are served out of RAM or recent files and out
of deeper storage for less recent blobs, as implemented by the store backing
Blocktree.
### Functionalities of Blocktree
1. Persistence: the Blocktree lives in the front of the node's verification
pipeline, right behind network receive and signature verification. If the
blob received is consistent with the leader schedule (i.e. was signed by the
leader for the indicated slot), it is immediately stored.
2. Repair: repair is the same as window repair above, but able to serve any
blob that's been received. Blocktree stores blobs with signatures,
preserving the chain of origination.
3. Forks: Blocktree supports random access of blobs, so can support a
validator's need to rollback and replay from a Bank checkpoint.
4. Restart: with proper pruning/culling, the Blocktree can be replayed by
ordered enumeration of entries from slot 0. The logic of the replay stage
(i.e. dealing with forks) will have to be used for the most recent entries in
the Blocktree.
### Blocktree Design
1. Entries in the Blocktree are stored as key-value pairs, where the key is the concatenated
slot index and blob index for an entry, and the value is the entry data. Note blob indexes are zero-based for each slot (i.e. they're slot-relative).
2. The Blocktree maintains metadata for each slot, in the `SlotMeta` struct containing:
* `slot_index` - The index of this slot
* `num_blocks` - The number of blocks in the slot (used for chaining to a previous slot)
* `consumed` - The highest blob index `n`, such that for all `m < n`, there exists a blob in this slot with blob index equal to `m` (i.e. the highest consecutive blob index).
* `received` - The highest received blob index for the slot
* `next_slots` - A list of future slots this slot could chain to. Used when rebuilding
the ledger to find possible fork points.
* `last_index` - The index of the blob that is flagged as the last blob for this slot. This flag on a blob will be set by the leader for a slot when they are transmitting the last blob for a slot.
* `is_rooted` - True iff every block from 0...slot forms a full sequence without any holes. We can derive is_rooted for each slot with the following rules. Let slot(n) be the slot with index `n`, and slot(n).is_full() is true if the slot with index `n` has all the ticks expected for that slot. Let is_rooted(n) be the statement that "the slot(n).is_rooted is true". Then:
is_rooted(0)
is_rooted(n+1) iff (is_rooted(n) and slot(n).is_full())
3. Chaining - When a blob for a new slot `x` arrives, we check the number of blocks (`num_blocks`) for that new slot (this information is encoded in the blob). We then know that this new slot chains to slot `x - num_blocks`.
4. Subscriptions - The Blocktree records a set of slots that have been "subscribed" to. This means entries that chain to these slots will be sent on the Blocktree channel for consumption by the ReplayStage. See the `Blocktree APIs` for details.
5. Update notifications - The Blocktree notifies listeners when slot(n).is_rooted is flipped from false to true for any `n`.
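A sketch of `SlotMeta` as described above; field types are assumptions for illustration:
```rust
struct SlotMeta {
    /// The index of this slot.
    slot_index: u64,
    /// Number of blocks in the slot, used to chain back to a previous slot.
    num_blocks: u64,
    /// Highest consecutive blob index: every blob with index < consumed is present.
    consumed: u64,
    /// Highest blob index received so far for this slot.
    received: u64,
    /// Future slots that may chain to this one (possible fork points).
    next_slots: Vec<u64>,
    /// Index of the blob the leader flagged as the last blob for this slot.
    last_index: u64,
    /// True iff every slot from 0 up to and including this one is full.
    is_rooted: bool,
}
```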
### Blocktree APIs
The Blocktree offers a subscription-based API that ReplayStage uses to ask for entries it's interested in. The entries will be sent on a channel exposed by the Blocktree. These subscription APIs are as follows:
1. `fn get_slots_since(slot_indexes: &[u64]) -> Vec<SlotMeta>`: Returns new slots connecting to any element of the list `slot_indexes`.
2. `fn get_slot_entries(slot_index: u64, entry_start_index: usize, max_entries: Option<u64>) -> Vec<Entry>`: Returns the entry vector for the slot starting with `entry_start_index`, capping the result at `max` if `max_entries == Some(max)`; otherwise, no upper limit on the length of the return vector is imposed.
Note: Cumulatively, this means that the replay stage will now have to know when a slot is finished, and subscribe to the next slot it's interested in to get the next set of entries. Previously, the burden of chaining slots fell on the Blocktree.
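A hypothetical usage sketch of the two APIs above from the replay stage's point of view; the stub types exist only to keep the example self-contained:
```rust
// Stubs standing in for the real Blocktree, SlotMeta, and Entry.
struct Entry;
struct SlotMeta {
    slot_index: u64,
}
struct Blocktree;

impl Blocktree {
    fn get_slots_since(&self, _slot_indexes: &[u64]) -> Vec<SlotMeta> {
        vec![]
    }
    fn get_slot_entries(&self, _slot: u64, _start: usize, _max: Option<u64>) -> Vec<Entry> {
        vec![]
    }
}

fn replay_interested_slots(blocktree: &Blocktree, interested: &[u64]) {
    // Discover which new slots now chain to the slots we subscribed to...
    for meta in blocktree.get_slots_since(interested) {
        // ...then pull that slot's entries from index 0 with no cap.
        let entries = blocktree.get_slot_entries(meta.slot_index, 0, None);
        // Hand `entries` to the bank / fork-choice logic here.
        let _ = entries;
    }
}
```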
### Interfacing with Bank
The bank exposes to replay stage:
1. `prev_hash`: which PoH chain it's working on as indicated by the hash of the last
entry it processed
2. `tick_height`: the ticks in the PoH chain currently being verified by this
bank
3. `votes`: a stack of records that contain:
1. `prev_hashes`: what anything after this vote must chain to in PoH
2. `tick_height`: the tick height at which this vote was cast
3. `lockout period`: how long a chain must be observed to be in the ledger to
be able to be chained below this vote
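A sketch of one such vote record, with illustrative names and types:
```rust
type Hash = [u8; 32];

struct VoteRecord {
    /// Anything recorded after this vote must chain to these PoH hashes.
    prev_hashes: Vec<Hash>,
    /// Tick height at which the vote was cast.
    tick_height: u64,
    /// How long a chain must be observed in the ledger before it may
    /// be chained below this vote.
    lockout_period: u64,
}
```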
Replay stage uses Blocktree APIs to find the longest chain of entries it can
hang off a previous vote. If that chain of entries does not hang off the
latest vote, the replay stage rolls back the bank to that vote and replays the
chain from there.
### Pruning Blocktree
Once Blocktree entries are old enough, representing all the possible forks
becomes less useful, perhaps even problematic for replay upon restart. Once a
validator's votes have reached max lockout, however, any Blocktree contents
that are not on the PoH chain for that vote can be pruned, expunged.
Replicator nodes will be responsible for storing really old ledger contents,
and validators need only persist their bank periodically.
A colluding validation-client may take the strategy of marking PoReps from non-colluding replicator nodes as invalid in an attempt to maximize the rewards for the colluding replicator nodes. In this case, it isn’t feasible for the offended-against replicator nodes to petition the network for resolution as this would result in a network-wide vote on each offending PoRep and create too much overhead for the network to progress adequately. Also, this mitigation attempt would still be vulnerable to a >= 51% staked colluder.
Alternatively, transaction fees from submitted PoReps are pooled and distributed across validation-clients in proportion to the number of valid PoReps discounted by the number of invalid PoReps as voted by each validator-client. Thus invalid votes are directly dis-incentivized through this reward channel. Invalid votes that are revealed by replicator nodes as fishing PoReps will not be discounted from the payout PoRep count.
Another collusion attack involves a validator-client who may take the strategy to ignore invalid PoReps from colluding replicators and vote them as valid. In this case, colluding replicator-clients would not have to store the data while still receiving rewards for validated PoReps. Additionally, colluding validator nodes would also receive rewards for validating these PoReps. To mitigate this attack, validators must randomly sample PoReps corresponding to the ledger block they are validating and because of this, there will be multiple validators that will receive the colluding replicator’s invalid submissions. These non-colluding validators will be incentivized to mark these PoReps as invalid as they have no way to determine whether the proposed invalid PoRep is actually a fishing PoRep, for which a confirmation vote would result in the validator’s stake being slashed.
In this case, the proportion of time a colluding pair will be successful has an upper limit determined by the % of stake of the network claimed by the colluding validator. This also sets bounds to the value of such an attack. For example, if a colluding validator controls 10% of the total validator stake, transaction fees will be lost (likely sent to mining pool) by the colluding replicator 90% of the time and so the attack vector is only profitable if the per-PoRep reward is at least 90% higher than the average PoRep transaction fee. While, probabilistically, some colluding replicator-client PoReps will find their way to colluding validation-clients, the network can also monitor rates of paired (validator + replicator) discrepancies in voting patterns and censor identified colluders in these cases.
Long term economic sustainability is one of the guiding principles of Solana’s economic design. While it is impossible to predict how decentralized economies will develop over time, especially economies with flexible decentralized governances, we can arrange economic components such that, under certain conditions, a sustainable economy may take shape in the long term. In the case of Solana’s network, these components take the form of the remittances and deposits into and out of the reserve ‘mining pool’.
The dominant remittances from the Solana mining pool are validator and replicator rewards. The deposit mechanism is a flat, protocol-specified and adjusted, % of each transaction fee.
The Replicator rewards are to be delivered to replicators from the mining pool after successful PoRep validation. The per-PoRep reward amount is determined as a function of the total network storage redundancy at the time of the PoRep validation and the network goal redundancy. This function is likely to take the form of a discount from a base reward to be delivered when the network has achieved and maintained its goal redundancy. An example of such a reward function is shown in **Figure 3**
**Figure 3**: Example PoRep reward design as a function of global network storage redundancy.
In the example shown in Figure 3, multiple per-PoRep base rewards are explored (as a % of Tx Fee) to be delivered when the global ledger replication redundancy meets 10X. When the global ledger replication redundancy is less than 10X, the base reward is discounted as a function of the square of the ratio of the actual ledger replication redundancy to the goal redundancy (i.e. 10X).
The other protocol-based remittance goes to validation-clients as a reward distributed in proportion to stake-weight for voting to validate the ledger state. The functional issuance of this reward is described in [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md) and is designed to reduce over time until validators are incentivized solely through collection of transaction fees. Therefore, in the long-run, protocol-based rewards to replication-nodes will be the only remittances from the mining pool, and will have to be countered by the portion of each non-PoRep transaction fee that is directed back into the mining pool. I.e. for a long-term self-sustaining economy, replicator-client rewards must be subsidized through a minimum fee on each non-PoRep transaction pre-allocated to the mining pool. Through this constraint, we can write the following inequality:
The preceding sections, outlined in the [Economic Design Overview](ed_overview.md), describe a long-term vision of a sustainable Solana economy. Of course, we don't expect the final implementation to perfectly match what has been described above. We intend to fully engage with network stakeholders throughout the implementation phases (i.e. pre-testnet, testnet, mainnet) to ensure the system supports, and is representative of, the various network participants' interests. The first step toward this goal, however, is outlining some desired MVP economic features to be available for early pre-testnet and testnet participants. Below is a rough sketch outlining basic economic functionality from which a more complete and functional system can be developed.
### MVP Economic Features
* Faucet to deliver testnet SOLs to validators for staking and dapp development.
* Mechanism by which validators are rewarded in proportion to their stake. Interest rate mechanism (i.e. to be determined by total % staked) to come later.
* Ability to delegate tokens to validator nodes.
* Replicators to receive fixed, arbitrary reward for submitting validated PoReps. Reward size mechanism (i.e. PoRep reward as a function of total ledger redundancy) to come later.
* Pooling of replicator PoRep transaction fees and weighted distribution to validators based on PoRep verification (see [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md)). It will be useful to test this protection against attacks on testnet.
* Nice-to-have: auto-delegation of replicator rewards to validator.
Solana’s crypto-economic system is designed to promote a healthy, long term self-sustaining economy with participant incentives aligned to the security and decentralization of the network. The main participants in this economy are validation-clients and replication-clients. Their contributions to the network, state validation and data storage respectively, and their requisite remittance mechanisms are discussed below.
The main channels of participant remittances are referred to as protocol-based rewards and transaction fees. Protocol-based rewards are protocol-derived issuances from a network-controlled reserve of tokens (sometimes referred to as the ‘mining pool’). These rewards will constitute the total reward delivered to replication clients and a portion of the total rewards for validation clients, the remaining sourced from transaction fees. In the early days of the network, it is likely that protocol-based rewards, deployed based on predefined issuance schedule, will drive the majority of participant incentives to join the network.
These protocol-based rewards, to be distributed to participating validation and replication clients, are to be specified as annual interest rates calculated per, real-time, Solana epoch [DEFINITION]. As discussed further below, the issuance rates are determined as a function of total network validator staked percentage and total replication provided by replicators in each previous epoch. The choice for validator and replicator client rewards to be based on participation rates, rather than a global fixed inflation or interest rate, emphasizes a protocol priority of overall economic security, rather than monetary supply predictability. Due to Solana’s hard total supply cap of 1B tokens and the bounds of client participant rates in the protocol, we believe that global interest, and supply issuance, scenarios should be able to be modeled with reasonable uncertainties.
Transaction fees are market-based participant-to-participant transfers, attached to network interactions as a necessary motivation and compensation for the inclusion and execution of a proposed transaction (be it a state execution or proof-of-replication verification). A mechanism for continuous and long-term funding of the mining pool through a pre-dedicated portion of transaction fees is also discussed below.
A high-level schematic of Solana’s crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics.md), [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md). Also, the chapter titled [Validation Stake Delegation](ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. The [Replication-client Economics](ed_replication_client_economics.md) chapter will review the Solana network design for global ledger storage/redundancy and replicator-client economics ([Storage-replication rewards](ed_rce_storage_replication_rewards.md)) along with a replicator-to-validator delegation mechanism designed to aid participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_rce_replication_client_reward_auto_delegation.md). The [Economic Sustainability](ed_economic_sustainability.md) section dives deeper into Solana’s design for long-term economic sustainability and outlines the constraints and conditions for a self-sustaining economy. An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section. Finally, in chapter [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
<!--  -->
The ability for Solana network participants to earn rewards by providing storage service is a unique on-boarding path that requires little hardware overhead and minimal upfront capital. It offers an avenue for individuals with extra storage space on their home laptops or PCs to contribute to the security of the network and become integrated into the Solana economy.
To enhance this on-boarding ramp and facilitate further participation and investment in the Solana economy, replication-clients have the opportunity to auto-delegate their rewards to validation-clients of their choice. Much like the automatic reinvestment of stock dividends, in this scenario, a replicator-client can earn Solana tokens by providing some storage capacity to the network (i.e. via submitting valid PoReps), have the protocol-based rewards automatically assigned as delegation to a staked validator node and therefore earning interest in the validation-client reward pool.
Replicator-clients download, encrypt and submit PoReps for ledger block sections. PoReps submitted to the PoH stream, and subsequently validated, function as evidence that the submitting replicator client is indeed storing the assigned ledger block sections on local hard drive space as a service to the network. Therefore, replicator clients should earn protocol rewards proportional to the amount of storage, and the number of successfully validated PoReps, that they are verifiably providing to the network.
Additionally, replicator clients have the opportunity to capture a portion of slashed bounties [TBD] of dishonest validator clients. This can be accomplished by a replicator client submitting a verifiably false PoRep for which a dishonest validator client receives and signs as a valid PoRep. This reward incentive is to prevent lazy validators and minimize validator-replicator collusion attacks, more on this below.
Replication-clients should be rewarded for providing the network with storage space. Incentivization of the set of replicators provides data security through redundancy of the historical ledger. Replication nodes are rewarded in proportion to the amount of ledger data storage provided. These rewards are captured by generating and entering Proofs of Replication (PoReps) into the PoH stream which can be validated by Validation nodes as described above in the [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md) chapter.
Validator-clients are eligible to receive protocol-based (i.e. via mining pool) rewards issued via stake-based annual interest rates by providing compute (CPU+GPU) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic schedule as a function of the total amount of Solana tokens staked in the system and the duration since network launch (genesis block). Additionally, these clients may earn revenue through two types of transaction fees: state-validation transaction fees and pooled Proof-of-Replication (PoRep) transaction fees. The distribution of these two types of transaction fees to the participating validation set is designed independently, as the economic goals and attack vectors differ between the state-generation/validation mechanism and the ledger replication/validation mechanism. For clarity, we separately describe the design and motivation of the three types of potential revenue streams for validation-clients below: state-validation protocol-based rewards, state-validation transaction fees and PoRep-validation transaction fees.
As previously mentioned, validator-clients will also be responsible for validating PoReps submitted into the PoH stream by replicator-clients. In this case, validators are providing compute (CPU/GPU) and light storage resources to confirm that these replication proofs could only be generated by a client that is storing the referenced PoH ledger block.
While replication-clients are incentivized and rewarded through a protocol-based rewards schedule (see [Replication-client Economics](ed_replication_client_economics.md)), validator-clients will be incentivized to include and validate PoReps in PoH through the distribution of the transaction fees associated with the submitted PoReps. As will be described in detail in Section 3.1, replication-client rewards are protocol-based and designed to reward a global data redundancy factor; i.e. the protocol will incentivize replication-client participation through rewards based on a target ledger redundancy (e.g. 10x data redundancy). It was chosen not to include a distribution of these rewards to PoRep validators, and to rely only on the collection of PoRep-attached transaction fees, because the confluence of two participation incentive modes (state-validation inflation rate via global staked % and replication-validation rewards based on global redundancy factor) acting on the incentives of a single network participant (a validator-client) would potentially open up a significant incentive-driven attack surface.
The validation of PoReps by validation-clients is computationally more expensive than state-validation (detailed in the [Economic Sustainability](ed_economic_sustainability.md) chapter), thus the transaction fees are expected to be proportionally higher. However, because replication-client rewards are distributed in proportion to, and only after, submitted PoReps are validated, replication-clients are uniquely motivated to see their proofs included and validated. This pressure is expected to generate an adequate market economy between replication-clients and validation-clients. Additionally, transaction fees submitted with PoReps have no minimum amount pre-allocated to the mining pool, unlike state-validation transaction fees.
There are various attack vectors available to colluding validation and replication clients, as described in detail below in [Economic Sustainability](ed_economic_sustainability.md). To protect against various collusion attack vectors, for a given epoch, PoRep transaction fees are pooled and redistributed across participating validation-clients in proportion to the number of validated PoReps in the epoch less the number of invalidated PoReps [DIAGRAM]. This design rewards validators proportional to the number of PoReps they process and validate, while providing negative pressure for validation-clients to submit lazy or malicious invalid votes on submitted PoReps (note that it is computationally prohibitive to determine whether a validator-client has marked a valid PoRep as invalid).
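As a rough illustration of this pooling rule, the sketch below splits an epoch's pooled PoRep fees in proportion to each validator's validated-minus-invalidated PoRep count. The struct and function names here are hypothetical and stand in for whatever bookkeeping the runtime actually uses.

```rust
use std::collections::HashMap;

/// Hypothetical per-validator PoRep tally for one epoch.
struct PoRepTally {
    validated: u64,
    invalidated: u64,
}

/// Split `pooled_fees` (in lamports) across validators in proportion to
/// validated-minus-invalidated PoReps, never crediting a negative share.
fn distribute_porep_pool(
    pooled_fees: u64,
    tallies: &HashMap<String, PoRepTally>,
) -> HashMap<String, u64> {
    let weights: Vec<(&String, u64)> = tallies
        .iter()
        .map(|(id, t)| (id, t.validated.saturating_sub(t.invalidated)))
        .collect();
    let total: u128 = weights.iter().map(|(_, w)| *w as u128).sum();
    weights
        .into_iter()
        .map(|(id, w)| {
            let share = if total == 0 {
                0
            } else {
                (pooled_fees as u128 * w as u128 / total) as u64
            };
            (id.clone(), share)
        })
        .collect()
}
```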
Validator-clients have two functional roles in the Solana network:
* Validate (vote) the current global state of that PoH along with any Proofs-of-Replication (see [Replication Client Economics](ed_replication_client_economics.md)) that they are eligible to validate
* Be elected as ‘leader’ on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and Proofs-of-Replication and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. Compensation for validator-clients is provided via a protocol-based annual interest rate dispersed in proportion to the stake-weight of each validator (see below) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each non-PoRep transaction fee, less a protocol-specified amount that is returned to the mining pool (see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)). PoRep transaction fees are not collected directly by the leader client but pooled and returned to the validator set in proportion to the number of successfully validated PoReps. (see [Replication-client Transaction Fees](ed_vce_replication_validation_transaction_fees.md))
The protocol-based annual interest-rate (%) per epoch to be distributed to validation-clients is to be a function of:
* the current fraction of staked SOLs out of the current total circulating supply,
* the global time since the genesis block instantiation
* the up-time/participation [% of available slots/blocks that validator had opportunity to vote on?] of a given validator over the previous epoch.
The first two factors are protocol parameters only (i.e. independent of validator behavior in a given epoch) and describe a global validation reward schedule designed to incentivize both early participation and optimal security in the network. This schedule sets a maximum annual validator-client interest rate per epoch.
At any given point in time, this interest rate is pegged to a defined value given a specific % staked SOL out of the circulating supply (e.g. 10% interest rate when 66% of circulating SOL is staked). The interest rate adjusts as the square-root [TBD] of the % staked, leading to higher validation-client interest rates as the % staked drops below the targeted goal, thus incentivizing more participation leading to more security in the network. An example of such a schedule, for a specified point in time (e.g. network launch) is shown in **Table 1**.
**Table 1:** Example interest rate schedule based on % SOL staked out of circulating supply. In this case, interest rates are fixed at 10% for 66% of staked circulating supply
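To make the adjustment rule concrete, here is a minimal sketch assuming the illustrative peg from the text (10% annual interest at 66% staked) and the square-root adjustment, which the text still marks as TBD; all constants are placeholders, not protocol values.

```rust
/// Illustrative peg from the example schedule: 10% annual interest when
/// 66% of circulating SOL is staked. Both values are placeholders.
const TARGET_STAKED_FRACTION: f64 = 0.66;
const PEGGED_ANNUAL_RATE: f64 = 0.10;

/// Annual validator interest rate as a function of the fraction of
/// circulating supply staked: the rate rises as staking falls below the
/// target, following the square-root adjustment described above.
fn annual_interest_rate(staked_fraction: f64) -> f64 {
    PEGGED_ANNUAL_RATE * (TARGET_STAKED_FRACTION / staked_fraction).sqrt()
}

fn main() {
    for &staked in [0.33_f64, 0.50, 0.66, 0.80].iter() {
        println!(
            "{:>3.0}% staked -> {:.2}% annual rate",
            staked * 100.0,
            annual_interest_rate(staked) * 100.0
        );
    }
}
```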
Over time, the interest rate, at any network staked percentage, will drop as described by an algorithmic schedule. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. This mining-pool-provided interest rate will reduce over time until a network-chosen baseline value is reached. This is a fixed, long-term interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients, as transaction fees for both state-validation and ledger storage replication (PoReps) are not accounted for here. A validation-client interest rate schedule as a function of % network staked and time is shown in **Figure 2**.
**Figure 2:** In this example schedule, the annual interest rate [%] reduces at around 16.7% per year, until it reaches the long-term, fixed, 4% rate.
This epoch-specific, protocol-defined interest rate sets an upper limit on the *protocol-generated* annual interest rate (not the absolute total interest rate) that can be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch. Each epoch is comprised of XXX slots. The protocol-defined interest rate is then discounted by the log [TBD] of the % of slots a given validator submitted a vote on a PoH branch during that epoch, see **Figure XX**.
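Putting the two discounts together, the following sketch combines the time-based decay from Figure 2 (roughly 16.7% reduction per year toward the 4% floor) with a participation discount. The exact discount function is marked TBD in the text, so a simple linear proportion is used here as a stand-in, and all constants are illustrative.

```rust
/// Long-term floor and initial rate from the example schedule in Figure 2
/// (illustrative values only; the real schedule is protocol-defined).
const INITIAL_ANNUAL_RATE: f64 = 0.10;
const LONG_TERM_RATE: f64 = 0.04;
const ANNUAL_REDUCTION: f64 = 0.167; // ~16.7% per year

/// Upper-bound protocol rate for a given year since genesis.
fn scheduled_rate(years_since_genesis: f64) -> f64 {
    let decayed = INITIAL_ANNUAL_RATE * (1.0 - ANNUAL_REDUCTION).powf(years_since_genesis);
    decayed.max(LONG_TERM_RATE)
}

/// Discount the scheduled rate by a validator's participation in the previous
/// epoch (fraction of slots voted on). The text leaves the exact discount
/// function TBD; a linear discount is used here purely as a placeholder.
fn effective_rate(years_since_genesis: f64, slots_voted: u64, slots_in_epoch: u64) -> f64 {
    let participation = slots_voted as f64 / slots_in_epoch as f64;
    scheduled_rate(years_since_genesis) * participation
}

fn main() {
    println!("year 2, 90% participation: {:.2}%", effective_rate(2.0, 90, 100) * 100.0);
}
```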
Each message sent through the network, to be processed by the current leader validation-client and confirmed as a global state transaction, must contain a transaction fee. Transaction fees offer many benefits in the Solana economic design, for example they:
* provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
* reduce network spam by introducing real cost to transactions,
* open avenues for a transaction market to incentivize validation-clients to collect and process submitted transactions in their function as leader,
* and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
Many current blockchain economies (e.g. Bitcoin, Ethereum) rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol-derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is sent to the mining pool, with the remaining fee going to the current leader processing the transaction. These pooled fees then re-enter the system through rewards distributed to validation-clients, through the process described above, and to replication-clients, as discussed below.
The intent of this design is to retain leader incentive to include as many transactions as possible within the leader-slot time, while providing a redistribution avenue that protects against "tax evasion" attacks (i.e. side-channel fee payments)<sup>[1](ed_referenced.md)</sup>. Constraints on the fixed portion of transaction fees going to the mining pool, to establish long-term economic sustainability, are established and discussed in detail in the [Economic Sustainability](ed_economic_sustainability.md) section.
This minimum, protocol-earmarked portion of each transaction fee can be dynamically adjusted depending on historical gas usage. In this way, the protocol can use the minimum fee to target a desired hardware utilization. By monitoring protocol-specified gas usage with respect to a desired, target usage amount (e.g. 50% of a block's capacity), the minimum fee can be raised/lowered, which should, in turn, lower/raise the actual gas usage per block until it reaches the target amount. This adjustment process can be thought of as similar to the difficulty adjustment algorithm in the Bitcoin protocol, however in this case it is adjusting the minimum transaction fee to guide the transaction processing hardware usage to a desired level.
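A minimal sketch of both mechanisms follows; the split fraction, target utilization and adjustment gain are purely illustrative constants, since the real parameters are protocol-defined and still open.

```rust
/// Illustrative parameters; the real values are protocol-defined and TBD.
const POOL_PORTION: f64 = 0.5; // fraction of each fee earmarked for the mining pool
const TARGET_UTILIZATION: f64 = 0.5; // target fraction of block capacity used
const ADJUSTMENT_GAIN: f64 = 0.1; // how aggressively the minimum fee moves

/// Split a transaction fee between the mining pool and the current leader.
fn split_fee(fee_lamports: u64) -> (u64, u64) {
    let to_pool = (fee_lamports as f64 * POOL_PORTION) as u64;
    (to_pool, fee_lamports - to_pool)
}

/// Nudge the protocol minimum fee up when recent blocks run fuller than the
/// target and down when they run emptier, analogous to Bitcoin's difficulty
/// adjustment but targeting hardware utilization instead of block time.
fn adjust_min_fee(current_min_fee: f64, observed_utilization: f64) -> f64 {
    let error = observed_utilization - TARGET_UTILIZATION;
    (current_min_fee * (1.0 + ADJUSTMENT_GAIN * error)).max(0.0)
}

fn main() {
    let (pool, leader) = split_fee(10_000);
    println!("pool: {}, leader: {}", pool, leader);
    println!("new min fee: {:.2}", adjust_min_fee(100.0, 0.8));
}
```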
Additionally, the minimum protocol-captured fee can be a consideration in fork selection. In the case of a PoH fork with a malicious, censoring leader, we would expect the total protocol-captured fee to be less than that of a comparable honest fork, due to the fees lost from censoring. If the censoring leader is to compensate for these lost protocol fees, they would have to replace the fees on their fork themselves, thus potentially reducing the incentive to censor in the first place.
Running a Solana validation-client requires a relatively modest upfront hardware capital investment. **Table 2** provides an example hardware configuration to support ~1M tx/s with estimated ‘off-the-shelf’ costs:
|Component|Example|Estimated Cost|
|--- |--- |--- |
|GPU|2x 2080 Ti|$2500|
|or|4x 1080 Ti|$2800|
|OS/Ledger Storage|Samsung 860 Evo 2TB|$370|
|Accounts storage|2x Samsung 970 Pro M.2 512GB|$340|
|RAM|32 GB|$300|
|Motherboard|AMD x399|$400|
|CPU|AMD Threadripper 2920x|$650|
|Case||$100|
|Power supply|EVGA 1600W|$300|
|Network|> 500 Mbps||
|Network (1)|Google Webpass business, Bay Area, 1 Gbps unlimited|$5500/mo|
|Network (2)|Hurricane Electric, Bay Area colo, 1 Gbps|$500/mo|
**Table 2:** Example high-end hardware setup for running a Solana client.
Despite the low barrier to entry as a validation-client from a capital investment perspective, as in any developing economy there will be much opportunity and need for trusted validation services, as evidenced by node reliability, UX/UI, APIs and other software accessibility tools. Additionally, although Solana’s validator node startup costs are nominal when compared to similar networks, they may still be somewhat restrictive for some potential participants. In the spirit of developing a true decentralized, permissionless network, these interested parties still have two options to become involved in the Solana network/economy:
1. Delegation of previously acquired tokens with a reliable validation node to earn a portion of interest generated
2. Provide local storage space as a replication-client and receive rewards by submitting Proof-of-Replication (see [Replication-client Economics](ed_replication_client_economics.md)).
a. This participant has the additional option to directly delegate their earned storage rewards ([Replication-client Reward Auto-delegation](ed_rce_replication_client_reward_auto_delegation.md))
Delegation of tokens to validation-clients, via option 1, provides a way for passive Solana token holders to become part of the active Solana economy and earn interest rates proportional to the interest rate generated by the delegated validation-client. Additionally, this feature creates a healthy validation-client market, with potential validation-client nodes competing to build reliable, transparent and profitable delegation services.
You can observe the effects of your client's transactions on our [dashboard](https://metrics.solana.com:3000/d/testnet/testnet-hud?orgId=2&from=now-30m&to=now&refresh=5s&var-testnet=testnet)
## Linux Snap
A Linux [Snap](https://snapcraft.io/) is available, which can be used to easily
get Solana running on supported Linux systems without building anything from
source for evaluation. Note that CUDA is not supported by the Snap so
performance will be limited.
The `edge` Snap channel is updated daily with the latest
development from the `master` branch. To install:
```bash
$ sudo snap install solana --edge --devmode
```
Once installed, the usual Solana programs will be available as `solana.*` instead
of `solana-*`. For example, `solana.fullnode` instead of `solana-fullnode`.
Update to the latest version at any time with:
```bash
$ snap info solana
$ sudo snap refresh solana --devmode
```
### Daemon Support
The snap supports running fullnodes and a drone as system daemons.
Run `sudo snap get solana` to view the current daemon configuration. To view
daemon logs:
1. Run `sudo snap logs -n=all solana` to view the daemon initialization log
2. Runtime logging can be found under `/var/snap/solana/current/bootstrap-leader/`,
`/var/snap/solana/current/fullnode/`, or `/var/snap/solana/current/drone/` depending
on which `mode=` was selected. Within each log directory the file `current`
contains the latest log, and the files `*.s` (if present) contain older rotated
logs.
Disable the daemon at any time by running:
```bash
$ sudo snap set solana mode=
```
Runtime configuration files for the daemon can be found in
`/var/snap/solana/current/config`.
#### Leader Daemon
```bash
$ sudo snap set solana mode=bootstrap-leader
```
`rsync` must be configured and running on the leader.
1. Ensure rsync is installed with `sudo apt-get -y install rsync`
2. Edit `/etc/rsyncd.conf` to include the following
```ini
[config]
path = /var/snap/solana/current/config
hosts allow = *
read only = true
```
3. Run `sudo systemctl enable rsync; sudo systemctl start rsync`
4. Test by running `rsync -Pzravv rsync://<ip-address-of-leader>/config
solana-config` from another machine. **If the leader is running on a cloud
provider it may be necessary to configure the Firewall rules to permit ingress
to port tcp:873, tcp:9900 and the port range udp:8000-udp:10000**
To run both the Leader and Drone:
```bash
$ sudo snap set solana mode=bootstrap-leader+drone
```
#### Validator daemon
```bash
$ sudo snap set solana mode=fullnode
```
By default the node will attempt to connect to **testnet.solana.com**; override the
cluster entrypoint IP address by running:
```bash
$ sudo snap set solana mode=fullnode entrypoint-ip=127.0.0.1 #<-- change IP address
```
It's assumed that the node at the entrypoint IP will be running `rsync`
configured as described in the previous **Leader daemon** section.
##### Parameters:
* `string` - Signature of Transaction to confirm, as base-58 encoded string
##### Results:
* `string` - Transaction status:
  * `Confirmed` - Transaction was successful
  * `SignatureNotFound` - Unknown transaction
  * `ProgramRuntimeError` - An error occurred in the program that processed this Transaction
  * `AccountInUse` - Another Transaction had a write lock on one of the Accounts specified in this Transaction. The Transaction may succeed if retried
  * `GenericFailure` - Some other error occurred. **Note**: In the future new Transaction statuses may be added to this list. It's safe to assume that all new statuses will be more specific error conditions that previously presented as `GenericFailure`
* `null` - Unknown transaction
* `object` - Transaction status:
  * `"Ok": null` - Transaction was successful
  * `"Err": <ERR>` - Transaction failed with TransactionError `<ERR>` [TransactionError definitions](https://github.com/solana-labs/solana/blob/master/sdk/src/transaction.rs#L14)
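For illustration, a client could decode the newer object form of the status with serde. The enum below is a hypothetical mirror of the JSON shape shown above, not a type from the client library, and it assumes the `serde` (with derive) and `serde_json` crates.

```rust
use serde::Deserialize;
use serde_json::json;

/// Hypothetical mirror of the `{"Ok": null}` / `{"Err": <ERR>}` status object.
#[derive(Debug, Deserialize)]
enum TransactionStatus {
    Ok(Option<()>),
    Err(serde_json::Value), // keep the error variant opaque in this sketch
}

fn main() {
    let ok: TransactionStatus = serde_json::from_value(json!({ "Ok": null })).unwrap();
    let err: TransactionStatus =
        serde_json::from_value(json!({ "Err": "AccountInUse" })).unwrap();
    println!("{:?} / {:?}", ok, err);
}
```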
The RepairService is in charge of retrieving missing blobs that failed to be delivered by primary communication protocols like Avalanche. It is responsible for managing the protocols described in the `Repair Protocols` section below.
# Challenges:
1) Validators can fail to receive particular blobs due to network failures
2) Consider a scenario where blocktree contains the set of slots {1, 3, 5}. Then Blocktree receives blobs for some slot 7, where for each of the blobs b, b.parent == 6, so then the parent-child relation 6 -> 7 is stored in blocktree. However, there is no way to chain these slots to any of the existing banks in Blocktree, and thus the `Blob Repair` protocol will not repair these slots. If these slots happen to be part of the main chain, this will halt replay progress on this node.
3) Validators that find themselves behind the cluster by an entire epoch struggle/fail to catch up because they do not have a leader schedule for future epochs. If nodes were to blindly accept repair blobs in these future epochs, this exposes nodes to spam.
# Repair Protocols
The repair protocol makes best attempts to progress the forking structure of Blocktree.
The different protocol strategies to address the above challenges:
1. Blob Repair (Addresses Challenge #1):
This is the most basic repair protocol, with the purpose of detecting and filling "holes" in the ledger. Blocktree tracks the latest root slot. RepairService will then periodically iterate every fork in blocktree starting from the root slot, sending repair requests to validators for any missing blobs. It will send at most some `N` repair requests per iteration (see the sketch below).
Note: Validators will only accept blobs within the current verifiable epoch (epoch the validator has a leader schedule for).
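A rough sketch of one blob-repair iteration, using placeholder types and a placeholder cap in place of blocktree's real metadata and the protocol's `N`:

```rust
/// Hypothetical summary of a slot's blob occupancy, standing in for
/// blocktree's real SlotMeta (the actual struct has more fields).
struct SlotInfo {
    slot: u64,
    received: Vec<u64>, // blob indexes already present
    expected_last_index: u64,
}

const MAX_REPAIRS_PER_ITERATION: usize = 32; // stand-in for the protocol's `N`

/// One iteration of blob repair: walk every fork reachable from the root and
/// emit (slot, blob_index) repair requests for holes, up to the per-iteration cap.
fn generate_repair_requests(forks_from_root: &[SlotInfo]) -> Vec<(u64, u64)> {
    let mut requests = Vec::new();
    for slot in forks_from_root {
        for index in 0..=slot.expected_last_index {
            if !slot.received.contains(&index) {
                requests.push((slot.slot, index));
                if requests.len() >= MAX_REPAIRS_PER_ITERATION {
                    return requests;
                }
            }
        }
    }
    requests
}
```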
2. Orphan Repair (Addresses Challenge #2):
The goal of this protocol is to discover the chaining relationship of "orphan" slots that do not currently chain to any known fork.
* Blocktree will track the set of "orphan" slots in a separate column family.
* RepairService will periodically make `RequestOrphan` requests for each of the orphans in blocktree.
* `RequestOrphan(orphan)` request - `orphan` is the orphan slot that the requestor wants to know the parents of
* `RequestOrphan(orphan)` response - The highest blobs for each of the first `N` parents of the requested `orphan`
On receiving the responses `p`, where `p` is some blob in a parent slot, validators will:
* Insert an empty `SlotMeta` in blocktree for `p.slot` if it doesn't already exist.
* If `p.slot` does exist, update the parent of `p` based on `parents`
Note that once these empty slots are added to blocktree, the `Blob Repair` protocol should attempt to fill those slots.
Note: Validators will only accept responses containing blobs within the current verifiable epoch (epoch the validator has a leader schedule for).
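A sketch of the response-handling rule above, again with placeholder types standing in for blocktree's actual structures:

```rust
use std::collections::HashMap;

/// Placeholder for a blob belonging to a parent slot of the requested orphan.
struct OrphanResponseBlob {
    slot: u64,
    parent: u64,
}

/// Placeholder for blocktree's per-slot metadata.
#[derive(Default)]
struct SlotMeta {
    parent_slot: Option<u64>,
}

/// Apply the rule described above: create an empty SlotMeta for unknown slots,
/// otherwise record the parent so the slot can chain to the rest of the ledger.
fn handle_orphan_response(blocktree: &mut HashMap<u64, SlotMeta>, p: &OrphanResponseBlob) {
    let meta = blocktree.entry(p.slot).or_insert_with(SlotMeta::default);
    meta.parent_slot = Some(p.parent);
}
```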
3. Repairmen (Addresses Challenge #3):
This part of the repair protocol is the primary mechanism by which new nodes joining the cluster catch up after loading a snapshot. This protocol works in a "forward" fashion, so validators can verify every blob that they receive against a known leader schedule.
Each validator advertises in gossip:
* Current root
* The set of all completed slots in the confirmed epochs (an epoch that was calculated based on a bank <= current root) past the current root
Observers of this gossip message with higher epochs (repairmen) send blobs to catch the lagging node up with the rest of the cluster. The repairmen are responsible for sending the slots within the epochs that are confirmed by the advertised `root` in gossip. The repairmen divide the responsibility of sending each of the missing slots in these epochs based on a random seed (simple blob.index iteration by N, seeded with the repairman's node_pubkey). Ideally, each repairman in an N node cluster (N nodes whose epochs are higher than that of the repairee) sends 1/N of the missing blobs. Both data and coding blobs for missing slots are sent. Repairmen do not send blobs again to the same validator until they see the message in gossip updated, at which point they perform another iteration of this protocol.
Gossip messages are updated every time a validator receives a complete slot within the epoch. Completed slots are detected by blocktree and sent over a channel to RepairService. It is important to note that we know that by the time a slot X is complete, the epoch schedule must exist for the epoch that contains slot X because WindowService will reject blobs for unconfirmed epochs. When a newly completed slot is detected, we also update the current root if it has changed since the last update. The root is made available to RepairService through Blocktree, which holds the latest root.
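The strided partition described above might look like the following sketch, where the seed would in practice be derived from the repairman's node_pubkey; the function names are illustrative, not the service's actual API.

```rust
/// Decide whether this repairman is responsible for a given blob index, using
/// the simple strided partition described above: with N repairmen, each sends
/// every Nth blob, offset by a seed derived from its own pubkey.
fn is_my_blob(blob_index: u64, num_repairmen: u64, my_seed: u64) -> bool {
    blob_index % num_repairmen == my_seed % num_repairmen
}

/// Collect the blob indexes of a missing slot that this repairman should send.
fn blobs_to_send(last_index: u64, num_repairmen: u64, my_seed: u64) -> Vec<u64> {
    (0..=last_index)
        .filter(|&i| is_my_blob(i, num_repairmen, my_seed))
        .collect()
}

fn main() {
    // With 4 repairmen and seed 1, this node sends indexes 1, 5, 9, ...
    println!("{:?}", blobs_to_send(10, 4, 1));
}
```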
A Proof of Stake (PoS) (i.e. using the in-protocol asset, SOL, to provide
secure consensus) design is outlined here. Solana implements a proof of
stake reward/security scheme for validator nodes in the cluster. The purpose is
threefold:
- Align validator incentives with that of the greater cluster through
### General Overview
Solana's ledger validation design is based on a rotating, stake-weighted selected leader broadcasting transactions in a PoH data
structure to validating nodes. These nodes, upon receiving the leader's
broadcast, have the opportunity to vote on the current state and PoH height by
signing a transaction into the PoH stream.
Solana's trustless sense of time and ordering provided by its PoH data
structure, along with its
[turbine](https://www.youtube.com/watch?v=qt_gDRXHrHQ&t=1s) data broadcast
and transmission design, should provide sub-second transaction confirmation times that scale
with the log of the number of nodes in the cluster. This means we shouldn't
have to restrict the number of validating nodes with prohibitive 'minimum
deposits' and expect nodes to be able to become validators with nominal amounts
of SOL staked. At the same time, Solana's focus on high throughput should create an incentive for validation-clients to provide high-performance and reliable hardware. Combined with a potential minimum network speed threshold to join as a validation-client, we expect a healthy validation delegation market to emerge. To this end, Solana's testnet will lead into a "Tour de SOL" validation-client competition, focusing on throughput and uptime to rank and reward testnet validators.
### Stake-weighted Rewards
Rewards are expected to be paid out to active validators as a function of
validator activity and as a proportion of the percentage of SOL they have at
stake out of the entirety of the staking pool.
We expect to define a baseline annual validator payout/inflation rate based on
the total SOL deposited. E.g. 10% annual interest on SOL deposited with X total
SOL deposited as slashable on the cluster. This is the same design as currently
proposed in Casper FFG, which additionally specifies how inflation rates
adjust as a function of total ETH deposited. Specifically, Casper validator
returns are proportional to the inverse square root of the total deposits and
initial annual rates are estimated as:
| Deposit Size | Annual Validator Interest |
|-------------:|--------------------------:|
| 2.5M ETH | 10.12% |
| 10M ETH | 5.00% |
| 20M ETH | 3.52% |
| 40M ETH | 2.48% |
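The table's rates roughly follow an inverse square root of total deposits; the short sketch below reproduces them approximately, with the proportionality constant calibrated to the 10M ETH row.

```rust
/// Annual validator interest as a function of total deposits, proportional to
/// the inverse square root, calibrated so that 10M ETH yields 5.00%.
fn casper_style_rate(total_deposits_eth: f64) -> f64 {
    let k = 5.00 * (10_000_000.0_f64).sqrt(); // ≈ 15811
    k / total_deposits_eth.sqrt()
}

fn main() {
    for &deposits in [2_500_000.0, 10_000_000.0, 20_000_000.0, 40_000_000.0].iter() {
        // Prints roughly 10.00%, 5.00%, 3.54%, 2.50%, close to the table above.
        println!("{:>12.0} ETH -> {:.2}%", deposits, casper_style_rate(deposits));
    }
}
```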
This has the nice property of potentially incentivizing participation around a
target deposit size. Incentivizing specific participation rates more
directly (rather than deposit size) may also be worth exploring.
The specifics of the Solana validator reward scheme are to be worked out in
parallel with a design for transaction fee assignment as well as our storage
mining reward scheme.
### Slashing rules
Unlike Proof of Work (PoW), where off-chain capital expenses are already deployed at the time of block construction/voting, PoS systems require capital-at-risk to prevent a logical/optimal strategy of multiple chain voting.
We intend to implement slashing rules which, if broken, result in some amount of
the offending validator's deposited stake to be removed from circulation. Given
the ordering properties of the PoH data structure, we believe we can simplify
our slashing rules to the level of a voting lockout time assigned per vote.
I.e. Each vote has an associated lockout time (PoH duration) that represents a
duration by which any additional vote from that validator must be in a PoH that
in a slashable amount as a function of either:
1. the fraction of validators, out of the total validator pool, that were also
slashed during the same time period (ala Casper)
2. the amount of time since the vote was cast (e.g. a linearly increasing % of
total deposited as slashable amount over time), or both.
This is an area currently under exploration.
### Penalties
As discussed in the [Economic Design](ed_overview.md) section, annual validator interest rates are to be specified as a
function of total percentage of circulating supply that has been staked. The cluster rewards validators who are online
and actively participating in the validation process throughout the entirety of
their *validation period*. For validators that go offline/fail to validate
transactions during this period, their annual reward is effectively reduced.
This class of function is known as a [verifiable delay function](https://eprint.iacr.org/2018/601.pdf), or *VDF*.
A desirable property of a VDF is that verification time is very fast. Solana's
approach to verifying its delay function is proportional to the time it took to
create it. Split over a 4000 core GPU, it is sufficiently fast for Solana's
needs, but if you asked the authors of the paper cited above, they might tell you
([and have](https://github.com/solana-labs/solana/issues/388)) that Solana's
approach is algorithmically slow and it shouldn't be called a VDF. We argue the
term VDF should represent the category of verifiable delay functions and not
just the subset with certain performance characteristics. Until that's
resolved, Solana will likely continue using the term PoH for its application-specific VDF.