* Revert "Add test program for BPF memory corruption bug (#5603)"
This reverts commit 63d62c33c6.
* Revert "Revert "Add test program for BPF memory corruption bug (#5603)""
This reverts commit 9502082cda.
* Fix clippy and fmt issues
* net: init-metrics.sh - urlencode influx password
* old backticks bad!
* Move urlencode() to common.sh
* Make urlencode() vars local
Co-Authored-By: Michael Vines <mvines@gmail.com>
* Insert data shreds in blocktree and database
* Integrate data shreds with rest of the code base
* address review comments, and some clippy fixes
* Fixes to some tests
* more test fixes
* ignore some local cluster tests
* ignore replicator local cluster tests
* Coalesce gossip pull requests and serve them in batches
* batch all filters and immediately respond to messages in gossip
* Fix tests
* make download_from_replicator perform a greedy recv
* Remove unnecessary entry_height from BankInfo
* Refactor process_blocktree to support process_blocktree_from_root
* Refactor to process blocktree after loading from snapshot
* On restart make sure bank_forks contains all the banks between the root and the tip of each fork, not just the head of each fork
* Account for 1 tick_per_slot in bank 0 so that blockhash of bank0 matches the tick
* fixed bloom filter math
* Split each pull request into multiple pulls with different filters
* Rework CrdsFilter to generate all possible masks to cover the keyspace
* Limit the bloom sizes such that each pull request is no larger than mtu
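The mask scheme these commits describe can be pictured with a small sketch (illustrative only; the names and bit widths are assumptions, not the actual CrdsFilter code): every hash falls into exactly one of `2^mask_bits` filters, so the union of all pull requests covers the whole keyspace while each individual bloom stays under the MTU.

```rust
// Illustrative sketch: the top `mask_bits` bits of a hash select which
// of the 2^mask_bits filters it belongs to.
fn filter_index(hash_prefix: u64, mask_bits: u32) -> u64 {
    assert!((1..=63).contains(&mask_bits));
    hash_prefix >> (64 - mask_bits)
}

fn main() {
    let mask_bits = 3; // 8 filters cover the keyspace
    assert_eq!(filter_index(u64::MAX, mask_bits), 7);
    assert_eq!(filter_index(0, mask_bits), 0);
}
```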
* Rate limit transaction counters
* @sakridge feedback
* Set default high metrics rate for multinode demo
* Fix tests
* Swap defaults and fix env var tests
* Only set metrics rate if not already set
* Implement shred erasure recovery and reassembly
* fixes and unit test
* clippy
* review comments, additional tests, and some fixes
* address review comments
* more tests and cleanup
* Remove 'configured_flag' for vote/storage account, instead detect if they exist with the wallet
* Require --voting-keypair when using release binaries
* Refuse to delegate stake to a vote account with a stale root slot
* Remove sdk-c from the virtual manifest temporarily
For an unknown reason |cargo clippy| is getting stuck in CI
intermittently when trying to build this crate.
* Revert "Revert "Default log level to to RUST_LOG=solana=info (#5296)" (#5302)"
This reverts commit 7796e87814.
* Default to error logs, override with info only for those programs that need it
* Add confidence cache to BankForks
* Include stake-weighted lockouts in cache
* Add cache test
* Move confidence cache updates to handle_votable_bank
* Prune confidence cache on prune_non_root()
* Spin thread to process aggregate_stake_lockouts
* Add long-running thread for stake_weighted_lockouts computation
* change paths to something accounts_db (the singleton) owns, fixes SIGILL
* fail deserialize if paths don't work
serialize paths, too
* test that paths are populated from a bank snapshot
* accounts_index: RwLock per-account
Lots of lock contention on the accounts_index lock,
only take write-lock on accounts_index if we need to insert/remove an
account.
For updates, take a read-lock and then write-lock on the individual
account.
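A minimal sketch of that locking pattern, with illustrative types standing in for the real accounts-index data:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

type Pubkey = [u8; 32];
type Forks = Vec<(u64, usize)>; // illustrative per-account data

struct AccountsIndex {
    index: RwLock<HashMap<Pubkey, RwLock<Forks>>>,
}

impl AccountsIndex {
    fn update(&self, pubkey: &Pubkey, entry: (u64, usize)) {
        // Fast path: read-lock the map, write-lock only this account.
        if let Some(account) = self.index.read().unwrap().get(pubkey) {
            account.write().unwrap().push(entry);
            return;
        }
        // Slow path: the account is new, so take the map's write lock.
        self.index
            .write()
            .unwrap()
            .entry(*pubkey)
            .or_insert_with(|| RwLock::new(Vec::new()))
            .write()
            .unwrap()
            .push(entry);
    }
}

fn main() {
    let idx = AccountsIndex { index: RwLock::new(HashMap::new()) };
    idx.update(&[0u8; 32], (1, 0)); // insert: map write lock
    idx.update(&[0u8; 32], (2, 0)); // update: read lock + per-account write lock
}
```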
* Remove unneeded enumerate and add comments.
* runtime: Add bench for accounts::hash_internal_state
* fixup! cargo fmt
* fixup! cargo clippy
* fixup! Use a more representative number of accounts
* fixup! More descriptive name for accounts creation helper
* groom poh_recorder
* fixup
* nits
* slot() from the outside means "the slot the recorder is working on"
* remove redundant check
* review comments, put next_tick back in the "is reset" check
* remove redundant check
* Enable V100 GPUs over 3 regions for TdS cluster
* Turn on secondary config-local drive for tds net
* Enable long args bypass for GPU machine details
* bypass quoted long arg
* Pull external account file from wget
* typo
* Symlink config-local instead of changing the path variables
* Fix link path
* Add support for additional disks for config-local
* Restore wrongly deleted lines
* Shellcheck
* add args in the right place dummy
* Fix nits
* typo
* var naming cleanup
* Add stub function for remaining cloud providers
* Use local erasure session to create/broadcast coding blobs
* Individual session for each recovery (as the config might be different)
* address review comments
* new constructors for session and coding generator
* unit test for dynamic erasure config
* Decouple Segments from Turns in Storage
* Get replicator local cluster tests running in a reasonable amount of time
* Fix unused imports
* Document new RPC APIs
* Check for exit while polling
* Implement new Index Column
* Correct slicing of blobs
* Mark coding blobs as coding when they're recovered
* Prevent broadcast stages from mixing coding and data blobs in blocktree
* Mark recovered blobs as present in the index
* Fix indexing error in recovery
* Fix broken tests, and some bug fixes
* increase min stack size for coverage runs
* Ensure signable data is not out of range
* Add a broadcast stage that puts bad sizes in blobs
* Resign blob after modifying size
* Remove assertions that fail when size != meta.size
* Return result from deserializing blobs in blocktree instead of assuming deserialization will succeed
* Mark bad deserialization as dead fork
* Add test for corrupted blobs in blocktree and replay_stage
* Add crate to serialize some tests
* Ignore unused attribute warning
* Enable parallel run in CI
* Try to fix lograte tests
* Fix interdependent counter tests
* Broadcast run for injecting fake blobs in turbine
* address review comments
* new local cluster test that uses fake blob broadcast
* added a test to make sure tvu_peers ordering is guaranteed
* Fix funding math; make generate_keypairs and fund_keys consistent
* Add test, and fix inconsistencies it exposes
* De-pow math, and use assert_eq in tests for better failure msgs
* change consecutive leader slots to 4
* reduce polling frequency for transaction signature confirmation
* adjust wait time for transaction signature confirmation
* fix nominal test
* fix flakiness in wallet pay test
* Add committable transactions that cause errors like InstructionErrors back to retryable list on MaxHeightReached
* Remove unnecessary logic
* Add comments/renaming for clarity
* Implement CreditOnlyLocks
* Update credit-only atomic on account load
* Update credit-only atomic after bank.freeze_lock; store credits if all credit-only lock references are dropped
* Commit credit-only credits on bank freeze
* Update core to CreditAccountLocks
* Impl credit-only in System Transfer
* Rework CreditAccountLocks, test, and fix bugs
* Review comments: Pass CreditAccountLocks by reference; Tighten up insert block
* Only store credits on completed slot
* Check balance in bench_exchange funding to ensure commit_credits has completed
* Add is_debitable info to KeyedAccount meta to pass into programs
* Reinstate CreditOnlyLocks check on lock_account
* Rework CreditAccountLocks to remove strong_count usage
* Add multi-threaded credit-only locks test
* Improve RwLocks usage
* Review comments: panic if bad things happen; tighter code
* Assert lock_accounts race does not happen
* Revert panic if bad things happen; not a bad thing
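A hedged sketch of the credit-only locking idea behind these commits (types and names are illustrative, not the actual CreditAccountLocks implementation): credit-only accounts take shared locks, credits accumulate in an atomic, and the total is committed when the bank freezes.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, RwLock};

type Pubkey = [u8; 32];

#[derive(Default)]
struct CreditOnlyLock {
    credits: AtomicU64,    // lamports credited while the lock is held
    lock_count: AtomicU64, // in-flight txs sharing this lock
}

#[derive(Default)]
struct CreditAccountLocks {
    locks: RwLock<HashMap<Pubkey, Arc<CreditOnlyLock>>>,
}

impl CreditAccountLocks {
    fn lock(&self, key: Pubkey) -> Arc<CreditOnlyLock> {
        let mut map = self.locks.write().unwrap();
        let entry = map.entry(key).or_default();
        entry.lock_count.fetch_add(1, Ordering::Relaxed);
        Arc::clone(entry)
    }
    fn credit(lock: &CreditOnlyLock, lamports: u64) {
        lock.credits.fetch_add(lamports, Ordering::Relaxed);
    }
}

fn main() {
    let locks = CreditAccountLocks::default();
    let lock = locks.lock([0u8; 32]);
    CreditAccountLocks::credit(&lock, 100);
    assert_eq!(lock.credits.load(Ordering::Relaxed), 100);
}
```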
* Page-pin packet memory for cuda
Bring back recyclers and pin offset buffers
* Add packet recycler to streamer
* Add set_pinnable to sigverify vecs to pin them
* Add packets reset test
* Add test for recycler and reduce the gc lock critical section
* Add comments/tests to cuda_runtime
* Add recycler to recv_blobs path.
* Add trace/names for debug and PacketsRecycler to bench-streamer
* Predict realloc and unpin beforehand.
* Add helper to reserve and pin
* Cap buffered packets length
* Call cuda wrapper functions
* Add storage point tracking and tie in storage rewards to epochs and economics
* Prevent validators from updating their validations for a segment
* Fix test
* Retain syscall scoping for readability
* Update Credits to own epoch tracking
* core/rpc: Name magic number for minted lamports in tests genesis block
* core/rpc: Expose bank::capitalization() via getSolTotalSupply RPC method
* book: Add entry for getTotalSupply RPC method
* Add DeadSlots column family
* Filter dead forks from get_slots_since
* Mark erroring slots as dead in replay stage, add test
* Mark dead forks in progress instead of removing them
* Fix logging process_entries failures in replay_stage
* Unignore test_fail_entry_verification_leader
* Refactor BroadcastStage to support custom implementations, add FailEntryVerificationBroadcastRun implementation
* Plumb switch on broadcast type through validator
* Add test for validator generating non-verifiable entries to local_cluster
* Fix bad initializers
* Refactor broadcast run code into utils
* Only a single snapshot is maintained to avoid unbounded disk growth
* Snapshot is stored as a compressed tar archive for faster rsyncing
* Any validator node may now generate snapshots
* Updated testnet scripts to generate snapshots on the blockstreamer node
Use more chunks to avoid duplicate signature failures:
Duplicate signatures can occur because bank.clear_signatures()
can occur before the bank has actually committed the signatures
to the status cache and then error out on the next set of transactions.
* add MiningPool, fund validator MiningPools from inflation
* fixup
* finish rename of MINIMUM_SLOT_LENGTH to MINIMUM_SLOTS_PER_EPOCH
* deterministic miningpool location
* point_value, not credit_value... use f64
* Add support for contracts based on account data to Budget
* Add program_id to account constraints
* No longer require a signature for the account data witness
* Rename bank::store to store_account
* fmt
* Add a doc
* clippy
* Add signature to blob
* Change Signable trait to support returning references to signable data
* Add signing to broadcast
* Verify signatures in window_service
* Add testing for signatures to erasure
* Add RPC for getting current slot, consume RPC call in test_repairman_catchup for more deterministic results
* make runtime depend on bpf_loader
* remove vote redundancy, move bpf_loader to genesis, export program! from bpf_loader crate
* move bpf_loader specification into genesis
* bpf tests to use genesis with bpf
* need to avoid depending on programs, except for macros
* Add credit-only debit/data check to verify_instruction
* Store credits and pass to accounts_db
* Add InstructionErrors and tests
* Relax account locks for credit-only accounts
* Collect credit-only account credits before passing to accounts_db to store properly
* Convert System Transfer accounts to credit-only, and fixup test
* Functionalize collect_accounts to unit test
* Review comments
* Rebase
* Add accounts_index bench
* Don't take the accounts index lock unless needed
* Accounts_index remove insert return vec and add capacity stats
* Use hashbrown hashmap for accounts_index
* Rework PoRep design doc
* Define the stages of the PoRep game
* Add that the stages are really transactions
* Update turn count
* Review comment + clarification
* More clarification
* Rephrase for clarity
* Revert "Prevent run.sh from running beyond the first epoch under normal use (#4498)"
This reverts commit d343c409e6.
* Separate bootstrap leader's stake lamports from its identity lamports
The local cluster that run.sh starts will typically only have a single
node, the bootstrap leader. With epoch warmup enabled, run.sh will fail
after ~90 seconds once the warmup period has been exceeded due to lack
of votes from other validators.
As a workaround, disable epoch warmup and set slots-per-epoch to 1
million to keep run.sh alive for more than a fortnight.
* Sort repairmen before shuffling so order is the same across all validators
* Reduce repair redundancy to 1 for now
* Fix local cache of roots so that 1) Timestamps are only updated to acknowledge a repair was sent 2) Roots are updated even when timestamps aren't updated to keep in sync with network
* Refactor code, add test
* Add credit-only flag to AccountMeta, default to false
* Sort keys by is_credit_only within signed/unsigned groupings
* Process and de-dupe program keys along with other account keys
* Add message helper functions
* Fix test
* Improve comment
* s/is_credit_only/is_debitable
* Add InstructionKeys helper struct, and simplify program_position method
* Some counters for real poh perf analysis
* more metrics
* Comment on CPU affinity change, and reduce hash batch size based on TPS perf
* review comments
* Add num_readonly_accounts slice
* Impl programs in account_keys
* Emulate current account-loading functionality using program-account_keys (breaks exchange_program_api tests)
* Fix test
* Add temporary exchange faucet id
* Update chacha golden
* Split num_credit_only_accounts into separate fields
* Improve readability
* Move message field constants into Message
* Add MessageHeader struct and fixup comments
* Add bootstrap storage account to genesis
* Add storage account genesis command to run.sh
* Update airdrop for all validators
* Remove unhelpful Short for arg
* Set the correct program owner
* Use Rc to prevent clone of Packets
* Fix min => max in banking_stage threads.
Coalesce packet buffers better since a larger batch will
be faster through banking and sigverify.
Deconstruct batches into banking_stage from sigverify since
sigverify likes to accumulate batches but then a single banking_stage
thread will be stuck with a large batch. Maximize parallelism by
creating more chunks of work for banking_stage.
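The re-chunking idea reads roughly like this sketch (CHUNK_SIZE and the data shape are assumptions, not the actual banking-stage code):

```rust
// Flatten incoming verified batches, then re-chunk them so several
// banking threads can each take a reasonably sized piece of work.
const CHUNK_SIZE: usize = 128; // illustrative

fn rechunk<T: Clone>(batches: Vec<Vec<T>>) -> Vec<Vec<T>> {
    batches
        .into_iter()
        .flatten()
        .collect::<Vec<T>>()
        .chunks(CHUNK_SIZE)
        .map(|c| c.to_vec())
        .collect()
}

fn main() {
    let big = vec![vec![0u8; 300], vec![1u8; 50]];
    // 350 packets become 3 chunks of at most 128 each.
    assert_eq!(rechunk(big).len(), 3);
}
```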
* Remove unused PohServiceConfig::Step
* Clarify variable name
* Poh::hash() now takes an iteration counter
* man -> max
* Inline functions with single call site
* Move PohServiceConfig into GenesisBlock
* Add plumbing to enable real PoH on testnets
* Batch hashes to improve PoH hash rate
* Ensure a constant hashes_per_tick
* Remove PohEntry mixin field
* Poh/PohEntry no longer maintains tick_height
* Ensure a constant hashes_per_tick
* ci/localnet-sanity.sh: Use real PoH
* Rework Poh/PohService to keep PohRecorder unlocked as much as possible while hashing
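A toy illustration of the batched-hashing loop these commits describe (not the real Poh code; a non-cryptographic hasher stands in for the actual hash function):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;
use std::sync::{Arc, Mutex};

struct Poh {
    state: u64,      // stand-in for the running hash
    num_hashes: u64,
}

impl Poh {
    // hash() takes an iteration counter, per the commits above.
    fn hash(&mut self, max_num_hashes: u64) {
        for _ in 0..max_num_hashes {
            let mut h = DefaultHasher::new();
            h.write_u64(self.state);
            self.state = h.finish();
        }
        self.num_hashes += max_num_hashes;
    }
}

fn main() {
    let poh = Arc::new(Mutex::new(Poh { state: 1, num_hashes: 0 }));
    for _ in 0..10 {
        // Lock only for one bounded batch, then release, so a recorder
        // can interleave record()/tick() calls between batches.
        poh.lock().unwrap().hash(1_000);
    }
    assert_eq!(poh.lock().unwrap().num_hashes, 10_000);
}
```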
* Correctly remove replicator from data plane after it's done repairing
* Update discover to report nodes and replicators separately
* Fix print and condition to be spy
Cleanup
Raise limit on submission threshold
Pick nits and add metrics point
fmt
Fixup compiler warning
Cleanup if-else
Append new point to vec rather than submit
* support passive staking with wallet, use it
* fixups
* clippy
* cleanup app generation in wallet, finish fullnode.sh staking
* _id and _keypair => pubkey
use keygen, not wallet to get pubkey
* found 'em
* Be able to create bank snapshots
* fix clippy
* load snapshot on start
* regenerate account index from the storage
* Remove rc feature dependency
* cleanup
* save snapshot for slot 0
* Add Epoch Slots to gossip
* Add new gossip structure to support Repair
* remove unnecessary clones
* Setup dummy fast repair in repair_service
* PR comments
* Introduce record locks on txs that will be recorded
* Add tests for LockedAccountsResults
* Fix broken bench
* Exit process_entries on detecting conflicting locks within same entry
* Add optional depth parameter to pubsub, and store in subscriptions
* Pass bank_forks into rpc_subscription; add method to check depth before notify and impl for account subscriptions
* Impl check-depth for signature subscriptions
* Impl check-depth for program subscriptions
* Plumb fork id through accounts
* Use fork id and root to prevent repeated account notifications; also s/Depth/Confirmations
* Write tests in terms of bank_forks
* Fixup accounts tests
* Add pubsub-confirmations tests
* Update pubsub documentation
* add support for single-crate coverage to help iterate, update to latest grcov
* shellcheck
* fixup
* remove unused
* install grcov before setting RUSTFLAGS ;)
* rely on nightly having grcov installed
* net.sh: Add -F to discard validator nodes that didn't bootup successfully
* Relax sanity node count when validator bootup failure is permitted
* Less sanity for testnet-demo
* net.sh: Add -F to discard validator nodes that didn't bootup successfully
* testnet-demo now runs across more GCE zones
* Save zone info to config file
* Add geoip whitelist for common data centers
* Skip more of start
* Include -x for config
* Fetch private key from first validator node if necessary
* Correct -r propagation
And reduce metrics rate for exchange contract counters.
Since we can go 10s-100s thousands of contracts per second,
some metrics would be dropped if submitting every time.
* Fix inserting bogus is_last blobs into blocktree
* Check for pre-existing blob before insert
* Ignore test that performs concurrent writes on blocktree as that is not supported
* Fix erasure metadata race condition
* make erasure return the underlying error without wrapping it in the `solana::Error` type
* Add metric for erasure failures
* add tests to `ErasureMeta` indexing logic
* Add test to ensure erasure recovery failures don't cause panics
* Add support for Azure instances in testnet creation
* Fixup
* Fix shellcheck errors
* More shellcheck and cleanup node creation and deletion
* More shellcheck and cleanup node creation and deletion
* Fixup instance wait API
* Fix review comments and add GPU installation extension
* Revert "Revert "account storage is not in sync with the index after gc (#3914)" (#3936)"
This reverts commit 4f47fc00bc.
* fix get_exclusive_storage
* clippy
* account storage is not in sync with the index after gc
* builds
* clippy fmt
* test
* purge dead forks on store
* rm println
* also fixed count_stores
* comments
* * rename Entry::serialized_size() to Entry::to_blob_size() to better
reduce confusion with bincode, et al. and to better reflect its
real meaning
* fix implementation of to_blob_size() to actually return what happens
when we do entries.to_blobs() (i.e. we serialize Vec<Entry>, not Entry)
* update tests to be more rigorous
* clippy
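The Vec-vs-single-Entry distinction above can be demonstrated with bincode directly (a sketch assuming the bincode 1.x and serde derive crates; the Entry fields are illustrative):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Entry {
    num_hashes: u64,
    hash: [u8; 32],
}

fn main() {
    let entry = Entry { num_hashes: 0, hash: [0; 32] };
    let single = bincode::serialized_size(&entry).unwrap();
    let as_vec = bincode::serialized_size(&vec![entry]).unwrap();
    // bincode prefixes the Vec with an 8-byte length, so serializing
    // Vec<Entry> is not just the sum of per-Entry serialized sizes.
    assert_eq!(as_vec, single + 8);
}
```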
* Migrate from ring to ed25519-dalek
* Move gen_keypair_file test to a more appropriate location
* Fixup bench-exchange and add helper fn for single deterministic keypair
* Update golden
* fix erasure, more tests for full blobs, more metrics
* Revert "Revert "Use Rust erasure library and turn on erasure (#3768)" (#3827)"
This reverts commit 4b8cb72977.
* split out erasure into new crate; add implementation using rust reed-solomon-library
* Track erasures with a &[bool] instead of indexes
* fix bug that reported the number of erasures incorrectly
* Introduce erasure `Session` for consistent config
* Increase test coverage; fix bugs
* Add ability to remove blobs from erasure meta tracking. test added
* Track deletion of coding blobs in blocktree via ErasureMeta. Added to
test
* Remove unused functions in blocktree
* add randomness to recovery thread to exercise recovery due to either new
data or coding blobs
* Add unit test for ErasureMeta index handling
* Re-enable test in broadcast stage
* Rename programs to instruction_processors
* Updates around the code base to support instruction_processors rename
* Kabab instruction_processors
* Update Cargo.toml files and scripts to use instruction-processors
* Update Cargo.toml to use instruction-processors
* Update CI scripts to use instruction-processors
* Add --bootstrap-leader-lamports
* Generalize --no-stake into --stake NUM
* Use a large stake for net/ fullnodes
* Setup vote account before starting fullnode to avoid mixed log output
Solana’s crypto-economic system is designed to promote a healthy, long term self-sustaining economy with participant incentives aligned to the security and decentralization of the network. The main participants in this economy are validation-clients and replication-clients. Their contributions to the network, state validation and data storage respectively, and their requisite remittance mechanisms are discussed below.
The main channels of participant remittances are referred to as protocol-based rewards and transaction fees. Protocol-based rewards are protocol-derived issuances from a protocol-defined, global inflation rate. These rewards will constitute the total reward delivered to replication clients and a portion of the total rewards for validation clients, the remainder sourced from transaction fees. In the early days of the network, it is likely that protocol-based rewards, deployed based on a predefined issuance schedule, will drive the majority of participant incentives to join the network.
These protocol-based rewards, to be distributed to participating validation and replication clients, are to be a result of a global supply inflation rate, calculated per Solana epoch and distributed amongst the active validator set. As discussed further below, the per annum inflation rate is based on a pre-determined disinflationary schedule. This provides the network with monetary supply predictability which supports long term economic stability and security.
Transaction fees are market-based participant-to-participant transfers, attached to network interactions as a necessary motivation and compensation for the inclusion and execution of a proposed transaction (be it a state execution or proof-of-replication verification). A mechanism for continuous and long-term economic stability through partial burning of each transaction fee is also discussed below.
A high-level schematic of Solana’s crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics.md), [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md). Also, the chapter titled [Validation Stake Delegation](ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. Additionally, in [Storage Rent Economics](ed_storage_rend_economics.md), we describe an implementation of storage rent to account for the externality costs of maintaining the active state of the ledger. The [Replication-client Economics](ed_replication_client_economics.md) chapter will review the Solana network design for global ledger storage/redundancy and replicator-client economics ([Storage-replication rewards](ed_rce_storage_replication_rewards.md)) along with a replicator-to-validator delegation mechanism designed to aid participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_rce_replication_client_reward_auto_delegation.md). The [Economic Sustainability](ed_economic_sustainability.md) section dives deeper into Solana’s design for long-term economic sustainability and outlines the constraints and conditions for a self-sustaining economy. An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section. Finally, in chapter [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
Validator-clients are eligible to receive protocol-based (i.e. via inflation) rewards issued via stake-based annual interest rates (calculated per epoch) by providing compute (CPU+GPU) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic disinflationary schedule as a function of total amount of circulating tokens. Additionally, these clients may earn revenue through fees via state-validation transactions and Proof-of-Replication (PoRep) transactions. For clarity, we separately describe the design and motivation of these revenue distributions for validation-clients below: state-validation protocol-based rewards, state-validation transaction fees and rent, and PoRep-validation transaction fees.
As previously mentioned, validator-clients will also be responsible for validating PoReps submitted into the PoH stream by replicator-clients. In this case, validators are providing compute (CPU/GPU) and light storage resources to confirm that these replication proofs could only be generated by a client that is storing the referenced PoH ledger block.<sup>2</sup>
While replication-clients are incentivized and rewarded through a protocol-based rewards schedule (see [Replication-client Economics](ed_replication_client_economics.md)), validator-clients will be incentivized to include and validate PoReps in PoH through collection of transaction fees associated with the submitted PoReps and distribution of protocol rewards proportional to the validated PoReps. As will be described in detail in Section 3.1, replication-client rewards are protocol-based and designed to reward based on a global data redundancy factor. I.e. the protocol will incentivize replication-client participation through rewards based on a target ledger redundancy (e.g. 10x data redundancy).
The validation of PoReps by validation-clients is computationally more expensive than state-validation (detail in the [Economic Sustainability](ed_economic_sustainability.md) chapter), thus the transaction fees are expected to be proportionally higher.
There are various attack vectors available for colluding validation and replication clients, as described in detail below in [Economic Sustainability](ed_economic_sustainability). To protect against various collusion attack vectors, for a given epoch, validator rewards are distributed across participating validation-clients in proportion to the number of validated PoReps in the epoch less the number of PoReps that mismatch the replicator's challenge. The PoRep challenge game is described in [Ledger Replication](https://github.com/solana-labs/solana/blob/master/book/src/ledger-replication.md#the-porep-game). This design rewards validators proportional to the number of PoReps they process and validate, while providing negative pressure for validation-clients to submit lazy or malicious invalid votes on submitted PoReps (note that it is computationally prohibitive to determine whether a validator-client has marked a valid PoRep as invalid).
Validator-clients have two functional roles in the Solana network:
* Validate (vote) the current global state of that PoH along with any Proofs-of-Replication (see [Replication Client Economics](ed_replication_client_economics.md)) that they are eligible to validate.
* Be elected as ‘leader’ on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and Proofs-of-Replication and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. Compensation for validator-clients is provided via a protocol-based annual inflation rate dispersed in proportion to the stake-weight of each validator (see below) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each transaction fee, less a protocol-specified amount that is destroyed (see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)). PoRep transaction fees are also collected by the leader client and validator PoRep rewards are distributed in proportion to the number of validated PoReps less the number of PoReps that mismatch a replicator's challenge. (see [Replication-client Transaction Fees](ed_vce_replication_validation_transaction_fees.md))
The effective protocol-based annual interest rate (%) per epoch to be distributed to validation-clients is to be a function of:
* the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule
* the fraction of staked SOLs out of the current total circulating supply,
* the up-time/participation [% of available slots that validator had opportunity to vote on] of a given validator over the previous epoch.
The first factor is a function of protocol parameters only (i.e. independent of validator behavior in a given epoch) and results in a global validation reward schedule designed to incentivize early participation, provide clear monetary stability and provide optimal security in the network.
At any given point in time, a specific validator's interest rate can be determined based on the proportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For an illustrative example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens with an additional 250MM vesting over 3 years. Additionally, an inflation rate of 7.5% is specified at network launch, with a disinflationary schedule of 20% decrease in inflation rate per year (the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch). With these broad assumptions, the 10-year inflation rate (adjusted daily for this example) is shown in **Figure 2**, while the total circulating token supply is illustrated in **Figure 3**. Neglected in this toy-model is the inflation suppression due to the portion of each transaction fee that is to be destroyed.
**Figure 2:** In this example schedule, the annual inflation rate [%] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.
**Figure 3:** The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in **Figure 2**
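A toy calculation of this example schedule (ignoring the 250MM vesting over 3 years and the fee-burning suppression; all numbers are illustrative, not protocol values):

```rust
// Toy disinflation schedule: 7.5% initial rate, 20% decrease per year,
// 1.5% long-term floor, starting from 250MM circulating tokens.
fn main() {
    let mut supply = 250_000_000.0_f64;
    let mut rate = 0.075_f64;
    for year in 1..=10 {
        supply *= 1.0 + rate;
        println!("year {year}: inflation {:.2}%, supply {supply:.0}", rate * 100.0);
        rate = (rate * 0.8).max(0.015); // disinflation with a floor
    }
}
```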
Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabilize near 1-2%, which also results in a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients as transaction fees for both state-validation and ledger storage replication (PoReps) are not accounted for here.
Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators and replicator nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assumed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of % circulating token supply that is staked is shown in **Figure 4**.
**Figure 4:** Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.
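Under the example assumptions above, the validator rate reduces to a one-line function (illustrative only; the 80% split is the assumption stated in the text, not a protocol constant):

```rust
// With an 80/20 validator/replicator split and 100% uptime, the
// annualized validator interest rate is the inflation directed to
// validators spread over the staked fraction of supply.
fn validator_interest_rate(inflation: f64, staked_fraction: f64) -> f64 {
    let validator_share = 0.8; // assumed split from the example above
    inflation * validator_share / staked_fraction
}

fn main() {
    // e.g. 7.5% inflation with 50% of supply staked -> 12% validator rate
    assert!((validator_interest_rate(0.075, 0.5) - 0.12).abs() < 1e-9);
}
```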
This epoch-specific protocol-defined interest rate sets an upper limit of *protocol-generated* annual interest rate (not absolute total interest rate) possible to be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch.
Each transaction sent through the network, to be processed by the current leader validation-client and confirmed as a global state transaction, must contain a transaction fee. Transaction fees offer many benefits in the Solana economic design, for example they:
* provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
* and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
Many current blockchain economies (e.g. Bitcoin, Ethereum), rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is destroyed, with the remaining fee going to the current leader processing the transaction. A scheduled global inflation rate provides a source for rewards distributed to validation-clients, through the process described above, and replication-clients, as discussed below.
Transaction fees are set by the network cluster based on recent historical throughput, see [Congestion Driven Fees](transaction-fees.md#congestion-driven-fees). This minimum, protocol-earmarked, portion of each transaction fee can be dynamically adjusted depending on historical gas usage. In this way, the protocol can use the minimum fee to target a desired hardware utilisation. By monitoring a protocol specified gas usage with respect to a desired, target usage amount (e.g. 50% of a block's capacity), the minimum fee can be raised/lowered which should, in turn, lower/raise the actual gas usage per block until it reaches the target amount. This adjustment process can be thought of as similar to the difficulty adjustment algorithm in the Bitcoin protocol, however in this case it is adjusting the minimum transaction fee to guide the transaction processing hardware usage to a desired level.
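One possible shape of such an adjustment loop, as a hedged sketch (the 1% step and floor are illustrative constants, not protocol values):

```rust
// Nudge the minimum fee toward a target gas usage per block; a crude
// proportional controller in the spirit of the paragraph above.
fn adjust_min_fee(min_fee: u64, observed_gas: u64, target_gas: u64) -> u64 {
    if observed_gas > target_gas {
        min_fee.saturating_add(min_fee / 100 + 1) // raise ~1% to damp usage
    } else if observed_gas < target_gas {
        min_fee.saturating_sub(min_fee / 100).max(1) // lower ~1%, stay nonzero
    } else {
        min_fee
    }
}

fn main() {
    assert!(adjust_min_fee(1_000, 900_000, 500_000) > 1_000);
    assert!(adjust_min_fee(1_000, 100_000, 500_000) < 1_000);
}
```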
As mentioned, a fixed proportion of each transaction fee is to be destroyed. The intent of this design is to retain leader incentive to include as many transactions as possible within the leader-slot time, while providing an inflation-limiting mechanism that protects against "tax evasion" attacks (i.e. side-channel fee payments)<sup>[1](ed_referenced.md)</sup>.
Additionally, the burnt fees can be a consideration in fork selection. In the case of a PoH fork with a malicious, censoring leader, we would expect the total fees destroyed to be less than a comparable honest fork, due to the fees lost from censoring. If the censoring leader is to compensate for these lost protocol fees, they would have to replace the burnt fees on their fork themselves, thus potentially reducing the incentive to censor in the first place.
You can observe the effects of your client's transactions on our [dashboard](https://metrics.solana.com:3000/d/testnet/testnet-hud?orgId=2&from=now-30m&to=now&refresh=5s&var-testnet=testnet)
The RepairService is in charge of retrieving missing blobs that failed to be delivered by primary communication protocols like Avalanche. It manages the protocols described in the `Repair Protocols` section below.
# Challenges:
1) Validators can fail to receive particular blobs due to network failures
2) Consider a scenario where blocktree contains the set of slots {1, 3, 5}. Then Blocktree receives blobs for some slot 7, where for each of the blobs b, b.parent == 6, so then the parent-child relation 6 -> 7 is stored in blocktree. However, there is no way to chain these slots to any of the existing banks in Blocktree, and thus the `Blob Repair` protocol will not repair these slots. If these slots happen to be part of the main chain, this will halt replay progress on this node.
3) Validators that find themselves behind the cluster by an entire epoch struggle/fail to catch up because they do not have a leader schedule for future epochs. If nodes were to blindly accept repair blobs in these future epochs, this exposes nodes to spam.
# Repair Protocols
The repair protocol makes best attempts to progress the forking structure of Blocktree.
The different protocol strategies to address the above challenges:
1. Blob Repair (Addresses Challenge #1):
This is the most basic repair protocol, with the purpose of detecting and filling "holes" in the ledger. Blocktree tracks the latest root slot. RepairService will then periodically iterate every fork in blocktree starting from the root slot, sending repair requests to validators for any missing blobs. It will send at most some `N` repair requests per iteration, as sketched below.
Note: Validators will only accept blobs within the current verifiable epoch (epoch the validator has a leader schedule for).
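A rough sketch of this repair iteration (simplified: it treats every index between the contiguous and highest-received marks as missing, where the real code checks actual presence; names are illustrative):

```rust
// Walk every fork from the root and request missing blob indexes, at
// most MAX_REPAIRS_PER_ITERATION per pass.
const MAX_REPAIRS_PER_ITERATION: usize = 32; // the `N` above, illustrative

struct SlotMeta {
    slot: u64,
    consumed: u64, // highest contiguous blob index + 1
    received: u64, // highest blob index seen + 1
}

fn generate_repairs(forks_from_root: &[SlotMeta]) -> Vec<(u64, u64)> {
    let mut repairs = Vec::new();
    for meta in forks_from_root {
        for index in meta.consumed..meta.received {
            if repairs.len() >= MAX_REPAIRS_PER_ITERATION {
                return repairs;
            }
            repairs.push((meta.slot, index)); // (slot, blob index) to request
        }
    }
    repairs
}

fn main() {
    let forks = [SlotMeta { slot: 3, consumed: 5, received: 9 }];
    assert_eq!(generate_repairs(&forks).len(), 4); // indexes 5..9
}
```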
2. Orphans (Addresses Challenge #2):
The goal of this protocol is to discover the chaining relationship of "orphan" slots that do not currently chain to any known fork.
* Blocktree will track the set of "orphan" slots in a separate column family.
* RepairService will periodically make `RequestOrphan` requests for each of the orphans in blocktree.
`RequestOrphan(orphan)` request - `orphan` is the orphan slot that the requestor wants to know the parents of
`RequestOrphan(orphan)` response - The highest blobs for each of the first `N` parents of the requested `orphan`
On receiving the responses `p`, where `p` is some blob in a parent slot, validators will:
* Insert an empty `SlotMeta` in blocktree for `p.slot` if it doesn't already exist.
* If `p.slot` does exist, update the parent of `p` based on `parents`
Note: once these empty slots are added to blocktree, the `Blob Repair` protocol should attempt to fill those slots.
Note: Validators will only accept responses containing blobs within the current verifiable epoch (epoch the validator has a leader schedule for).
3. Repairmen (Addresses Challenge #3):
This part of the repair protocol is the primary mechanism by which new nodes joining the cluster catch up after loading a snapshot. This protocol works in a "forward" fashion, so validators can verify every blob that they receive against a known leader schedule.
Each validator advertises in gossip:
* Current root
* The set of all completed slots in the confirmed epochs (an epoch that was calculated based on a bank <= current root) past the current root
Observers of this gossip message with higher epochs (repairmen) send blobs to catch the lagging node up with the rest of the cluster. The repairmen are responsible for sending the slots within the epochs that are confirmed by the advertised `root` in gossip. The repairmen divide the responsibility of sending each of the missing slots in these epochs based on a random seed (simple blob.index iteration by N, seeded with the repairman's node_pubkey). Ideally, each repairman in an N node cluster (N nodes whose epochs are higher than that of the repairee) sends 1/N of the missing blobs. Both data and coding blobs for missing slots are sent. Repairmen do not send blobs again to the same validator until they see the message in gossip updated, at which point they perform another iteration of this protocol.
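The seeded division of labor might look like the following sketch (illustrative; the real seeding and iteration details may differ):

```rust
// Each repairman derives an offset from its pubkey, then steps through
// the missing indexes by the repairman count, splitting work ~1/N each.
fn my_blob_indexes(missing: &[u64], pubkey_seed: u64, num_repairmen: u64) -> Vec<u64> {
    assert!(num_repairmen > 0);
    let start = (pubkey_seed % num_repairmen) as usize;
    missing
        .iter()
        .skip(start)
        .step_by(num_repairmen as usize)
        .copied()
        .collect()
}

fn main() {
    let missing: Vec<u64> = (0..10).collect();
    // Two repairmen with different seeds cover disjoint halves.
    assert_eq!(my_blob_indexes(&missing, 0, 2), vec![0, 2, 4, 6, 8]);
    assert_eq!(my_blob_indexes(&missing, 1, 2), vec![1, 3, 5, 7, 9]);
}
```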
Gossip messages are updated every time a validator receives a complete slot within the epoch. Completed slots are detected by blocktree and sent over a channel to RepairService. It is important to note that we know that by the time a slot X is complete, the epoch schedule must exist for the epoch that contains slot X because WindowService will reject blobs for unconfirmed epochs. When a newly completed slot is detected, we also update the current root if it has changed since the last update. The root is made available to RepairService through Blocktree, which holds the latest root.
Stakers are rewarded for helping to validate the ledger. They do this by
delegating their stake to validator nodes. Those validators do the legwork of
replaying the ledger and send votes to a per-node vote account to which stakers
can delegate their stakes. The rest of the cluster uses those stake-weighted
votes to select a block when forks arise. Both the validator and staker need
some economic incentive to play their part. The validator needs to be
compensated for its hardware and the staker needs to be compensated for the risk
of getting its stake slashed. The economics are covered in [staking
rewards](staking-rewards.md). This chapter, on the other hand, describes the
underlying mechanics of its implementation.
## Basic Design
The general idea is that the validator owns a Vote account. The Vote account
tracks validator votes, counts validator generated credits, and provides any
additional validator specific state. The Vote account is not aware of any
stakes delegated to it and has no staking weight.
A separate Stake account (created by a staker) names a Vote account to which the
stake is delegated. Rewards generated are proportional to the amount of
lamports staked. The Stake account is owned by the staker only. Some portion of the lamports
stored in this account are the stake.
## Passive Delegation
Any number of Stake accounts can delegate to a single
Vote account without an interactive action from the identity controlling
the Vote account or submitting votes to the account.
The total stake allocated to a Vote account can be calculated by the sum of
all the Stake accounts that have the Vote account pubkey as the
`StakeState::Stake::voter_pubkey`.
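Illustratively, that sum is just a filter over the stake accounts (the types here are stand-ins, not the actual StakeState definitions):

```rust
// Sum the stake of every Stake account that names this vote account.
type Pubkey = [u8; 32];

struct StakeAccount {
    voter_pubkey: Pubkey, // StakeState::Stake::voter_pubkey
    stake: u64,
}

fn total_stake(stakes: &[StakeAccount], vote_pubkey: &Pubkey) -> u64 {
    stakes
        .iter()
        .filter(|s| s.voter_pubkey == *vote_pubkey)
        .map(|s| s.stake)
        .sum()
}

fn main() {
    let vote = [7u8; 32];
    let stakes = [
        StakeAccount { voter_pubkey: vote, stake: 100 },
        StakeAccount { voter_pubkey: [0u8; 32], stake: 50 },
    ];
    assert_eq!(total_stake(&stakes, &vote), 100);
}
```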
## Vote and Stake accounts
The rewards process is split into two on-chain programs. The Vote program solves
the problem of making stakes slashable. The Stake account acts as custodian of
the rewards pool, and provides passive delegation. The Stake program is
responsible for paying out each staker once the staker proves to the Stake
program that its delegate has participated in validating the ledger.
### VoteState
VoteState is the current state of all the votes the validator has submitted to
the network. VoteState contains the following state information (a sketch of the
struct follows the list):
*`votes` - The submitted votes data structure.
*`credits` - The total number of rewards this vote program has generated over its
lifetime.
*`root_slot` - The last slot to reach the full lockout commitment necessary for
rewards.
*`commission` - The commission taken by this VoteState for any rewards claimed by
staker's Stake accounts. This is the percentage ceiling of the reward.
* Account::lamports - The accumulated lamports from the commission. These do not
count as stakes.
*`authorized_vote_signer` - Only this identity is authorized to submit votes. This field can only be modified by this identity.
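A minimal sketch of VoteState assembled from the fields above (the field types are assumptions for illustration, not the program's actual definition):

```rust,ignore
pub struct VoteState {
    pub votes: Vec<Vote>,               // the submitted votes data structure
    pub credits: u64,                   // lifetime rewards generated
    pub root_slot: Option<u64>,         // last slot to reach full lockout
    pub commission: u8,                 // percentage ceiling of claimed rewards
    pub authorized_vote_signer: Pubkey, // only identity allowed to submit votes
}
```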
### VoteInstruction::Initialize
*`account[0]` - RW - The VoteState
`VoteState::authorized_vote_signer` is initialized to `account[0]`; other VoteState members are defaulted.
### VoteInstruction::AuthorizeVoteSigner(Pubkey)
*`account[0]` - RW - The VoteState
`VoteState::authorized_vote_signer` is set to `Pubkey`; the transaction must be
signed by the Vote account's current `authorized_vote_signer`.
`VoteInstruction::AuthorizeVoteSigner` allows a validator to choose a signing
service for its votes. That service is responsible for ensuring a vote won't
cause the signer to be slashed.
### VoteInstruction::Vote(Vec<Vote>)
*`account[0]` - RW - The VoteState
`VoteState::lockouts` and `VoteState::credits` are updated according to the voting lockout rules; see [Tower BFT](tower-bft.md)
*`account[1]` - RO - A list of some N most recent slots and their hashes for the vote to be verified against.
### StakeState
A StakeState takes one of three forms, StakeState::Uninitialized, StakeState::Stake and StakeState::RewardsPool.
### StakeState::Stake
StakeState::Stake is the current delegation preference of the **staker** and
contains the following state information (a sketch of the enum follows the list):
* Account::lamports - The lamports available for staking.
*`stake` - the staked amount (subject to warm up and cool down) for generating rewards, always less than or equal to Account::lamports
*`voter_pubkey` - The pubkey of the VoteState instance the lamports are
delegated to.
*`credits_observed` - The total credits claimed over the lifetime of the
program.
*`activated` - the epoch at which this stake was activated/delegated. The full stake will be counted after warm up.
*`deactivated` - the epoch at which this stake will be completely de-activated, which is `cool down` epochs after StakeInstruction::Deactivate is issued.
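A minimal sketch of the three StakeState forms with the fields above (types are assumptions for illustration):

```rust,ignore
pub enum StakeState {
    Uninitialized,
    Stake {
        voter_pubkey: Pubkey,  // the VoteState the lamports are delegated to
        credits_observed: u64, // total credits claimed over the program's lifetime
        stake: u64,            // staked amount, always <= Account::lamports
        activated: u64,        // epoch the stake was activated/delegated
        deactivated: u64,      // epoch of full deactivation; u64::MAX until Deactivate
    },
    RewardsPool,
}
```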
### StakeState::RewardsPool
To avoid a single network-wide lock or contention in redemption, 256 RewardsPools are part of genesis under pre-determined keys, each with std::u64::MAX credits, enough to satisfy redemptions according to point value.
The Stake accounts and the RewardsPool accounts are owned by the same `Stake` program.
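For illustration, a client redeeming credits might pick one of the 256 pools at random; a sketch assuming the `rand` crate and a hypothetical `rewards_pool_keys` table:

```rust,ignore
use rand::Rng;

// Pools sit at 256 pre-determined genesis keys; pick one uniformly so
// concurrent redemptions spread across pools instead of contending on one.
let index = rand::thread_rng().gen_range(0, 256);
let rewards_pool_pubkey = rewards_pool_keys[index];
```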
### StakeInstruction::DelegateStake(u64)
The Stake account is moved from Uninitialized to StakeState::Stake form. This is
how stakers choose their initial delegate validator node and activate their
stake account lamports.
*`account[0]` - RW - The StakeState::Stake instance. <br>
`StakeState::Stake::credits_observed` is initialized to `VoteState::credits`,<br>
`StakeState::Stake::voter_pubkey` is initialized to `account[1]`,<br>
`StakeState::Stake::stake` is initialized to the u64 passed as an argument above,<br>
`StakeState::Stake::activated` is initialized to current Bank epoch, and<br>
`StakeState::Stake::deactivated` is initialized to std::u64::MAX
*`account[1]` - R - The VoteState instance.
*`account[2]` - R - sysvar::current account, carries information about current Bank epoch
*`account[3]` - R - stake_api::Config account, carries warmup, cooldown, and slashing configuration
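The initialization above can be summarized in code; a sketch with illustrative names, following the account list:

```rust,ignore
// account[0] is rewritten from Uninitialized to an active Stake:
stake_account.state = StakeState::Stake {
    credits_observed: vote_state.credits, // from account[1]
    voter_pubkey: vote_account_pubkey,    // pubkey of account[1]
    stake: amount,                        // the u64 instruction argument
    activated: current.epoch,             // from sysvar::current (account[2])
    deactivated: std::u64::MAX,           // not deactivating yet
};
```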
### StakeInstruction::RedeemVoteCredits
The staker or the owner of the Stake account sends a transaction with this
instruction to claim rewards.
The Vote account and the Stake account pair maintain a lifetime counter of total
rewards generated and claimed. Rewards are paid according to a point value
supplied by the Bank from inflation. A `point` is one credit * one staked
lamport, so rewards paid are proportional to the number of lamports staked.
*`account[0]` - RW - The StakeState::Stake instance that is redeeming rewards.
*`account[1]` - R - The VoteState instance, must be the same as `StakeState::voter_pubkey`
*`account[2]` - RW - The StakeState::RewardsPool instance that will fulfill the request (picked at random).
*`account[3]` - R - sysvar::rewards account from the Bank that carries point value.
*`account[4]` - R - sysvar::stake_history account from the Bank that carries stake warmup/cooldown history
Reward is paid out for the difference between `VoteState::credits` and
`StakeState::Stake::credits_observed`, multiplied by
`sysvar::rewards::Rewards::validator_point_value`.
`StakeState::Stake::credits_observed` is updated to `VoteState::credits`. The
commission is deposited into the Vote account token balance, and the reward is
deposited to the Stake account token balance.
```rust,ignore
let credits_to_claim = vote_state.credits - stake_state.credits_observed;
stake_state.credits_observed = vote_state.credits;
```
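Extending that snippet, a hedged sketch of the payout arithmetic described above (variable names are illustrative; only the point value and `commission` come from the accounts named in this section):

```rust,ignore
// points = credits * staked lamports, per the definition of a point above.
let points = credits_to_claim * stake_state.stake;
let reward = (points as f64 * rewards_sysvar.validator_point_value) as u64;
// `commission` is the percentage ceiling taken by the Vote account.
let commission = reward * u64::from(vote_state.commission) / 100;
vote_account.lamports += commission;           // commission to the Vote account
stake_account.lamports += reward - commission; // remainder to the Stake account
```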
#### control plane
A gossip network connecting all [nodes](#node) of a [cluster](#cluster).
#### cooldown period
Some number of epochs after stake has been deactivated while it progressively
becomes available for withdrawal. During this period, the stake is considered to
be "deactivating". More info about:
[warmup and cooldown](stake-delegation-and-rewards.md#stake-warmup-cooldown-withdrawal)
#### credit
See [vote credit](#vote-credit).
#### data plane
A multicast network used to efficiently validate [entries](#entry) and gain
consensus.
#### epoch
The time, i.e. number of [slots](#slot), for which a [leader
schedule](#leader-schedule) is valid.
#### finality
When nodes representing 2/3rd of the stake have a common [root](#root).
#### fork
A [ledger](#ledger) derived from common entries but then diverged.
#### node count
The number of [fullnodes](#fullnode) participating in a [cluster](#cluster).
#### PoH
See [Proof of History](#proof-of-history).
#### point
A weighted [credit](#credit) in a rewards regime. In the validator [rewards regime](staking-rewards.md), the number of points owed to a stake during redemption is the product of the [vote credits](#vote-credit) earned and the number of lamports staked.
#### program
The code that interprets [instructions](#instruction).
#### public key
The public key of a [keypair](#keypair).
#### replicator
Storage mining client, stores some part of the ledger enumerated in blocks and
submits storage proofs to the chain. Not a full-node.
#### root
A [block](#block) or [slot](#slot) that has reached maximum [lockout](#lockout)
on a validator. The root is the highest block that is an ancestor of all active
forks on a validator. All ancestor blocks of a root are also transitively a
root. Blocks that are not an ancestor and not a descendant of the root are
excluded from consideration for consensus and can be discarded.
#### runtime
The component of a [fullnode](#fullnode) responsible for [program](#program)
#### storage validation capacity
The number of keys and samples that a validator can verify each storage epoch.
#### sysvar
A synthetic [account](#account) provided by the runtime to allow programs to
access network state such as current tick height, rewards [points](#point) values, etc.
#### thin client
A type of [client](#client) that trusts it is communicating with a valid
[fullnode](#fullnode).
#### vote
See [ledger vote](#ledger-vote).
#### vote credit
A reward tally for validators. A vote credit is awarded to a validator in its
vote account when the validator reaches a [root](#root).
#### warmup period
Some number of epochs after stake has been delegated while it progressively
becomes effective. During this period, the stake is considered to be
"activating". More info about:
[warmup and cooldown](stake-delegation-and-rewards.md#stake-warmup-cooldown-withdrawal)
Look for your contributions on our [metrics dashboard](https://metrics.solana.com:3000/d/U9-26Cqmk/testnet-monitor-cloud?refresh=60s&orgId=2&var-hostid=All).
Since the testnet is not intended for stress testing of max transaction
throughput, a higher-end machine with a GPU is not necessary to participate.
However, ensure the machine used is not behind a residential NAT to avoid NAT
traversal issues. A cloud-hosted machine works best. **Ensure that IP ports
8000 through 10000 are not blocked for Internet inbound and outbound traffic.**
Prebuilt binaries are available for Linux x86_64 (Ubuntu 18.04 recommended).
MacOS or WSL users may build from source.
## Recommended Setups
For a performance testnet with many transactions we have some preliminary recommended setups:
| | Low end | Medium end | High end | Notes |
| --- | ---------|------------|----------| -- |
| CPU | AMD Threadripper 1900x | AMD Threadripper 2920x | AMD Threadripper 2950x | Consider a 10Gb-capable motherboard with as many PCIe lanes and m.2 slots as possible. |
| RAM | 16GB | 32GB | 64GB | |
| OS Drive | Samsung 860 Evo 2TB | Samsung 860 Evo 4TB | Samsung 860 Evo 4TB | Or equivalent SSD |
| Accounts Drive(s) | None | Samsung 970 Pro 1TB | 2x Samsung 970 Pro 1TB | |
| GPU | 4x Nvidia 1070 or 2x Nvidia 1080 Ti or 2x Nvidia 2070 | 2x Nvidia 2080 Ti | 4x Nvidia 2080 Ti | Any number of cuda-capable GPUs are supported on Linux platforms. |
## GPU Requirements
CUDA is required to make use of the GPU on your system. The provided Solana
release binaries are built on Ubuntu 18.04 with the
<a href="https://developer.nvidia.com/cuda-toolkit-archive">CUDA Toolkit</a>.
As noted in the overview, Solana currently maintains several testnets, each featuring a validator that can serve as the entrypoint to the cluster for your validator.
Current testnet entrypoints:
- Stable, testnet.solana.com
- Beta, beta.testnet.solana.com
- Edge, edge.testnet.solana.com
Prior to mainnet, the testnets may be running different versions of the Solana
software, which may feature breaking changes. Generally, the edge testnet tracks
the tip of master, beta tracks the latest tagged minor release, and stable
tracks the most stable tagged release.
### Get Testnet Version
You can submit a JSON-RPC request to see the specific version of the cluster.
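For example, a sketch using the `reqwest` and `serde_json` crates against the standard RPC port (8899) and the `getVersion` method:

```rust,ignore
let response: serde_json::Value = reqwest::blocking::Client::new()
    .post("http://testnet.solana.com:8899")
    .json(&serde_json::json!({ "jsonrpc": "2.0", "id": 1, "method": "getVersion" }))
    .send()?
    .json()?;
// The cluster's software version is reported under "result".
println!("{}", response["result"]);
```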