* Add snapshot hash of full accounts state
* Use normal hashing for the accounts delta state
* Add merkle
(cherry picked from commit 947a339714)
Co-authored-by: sakridge <sakridge@gmail.com>
* Add keypair_util_from_path helper
* Cli: impl config.keypair as a trait object
* SDK: Add Debug and PartialEq for dyn Signer
* ClapUtils: Arg parsing from pubkey+signers to Presigner
* Impl Signers for &dyn Signer collections
* CLI: Add helper for getting signers from args
* CLI: Replace SigningAuthority with Signer trait-objs
* CLI: Drop disused signers command field
* CLI: Drop redundant tests
* Add clap validator that handles all current signer types
* clap_utils: Factor Presigner resolution to helper
* SDK: `From` for boxing Signer implementors to trait objects
* SDK: Derive `Clone` for `Presigner`
* Remove panic
* Cli: dedup signers in transfer for remote-wallet ergonomics
* Update docs vis-a-vis ASK changes
* Cli: update transaction types to use new dynamic-signer methods
* CLI: Fix tests No. 1
what to do about write_keypair outstanding
* Work around `CliConfig`'s signer not necessarily being a `Keypair`
* CLI: Fix tests No. 2
* Remove unused arg
* Remove unused methods
* Move offline arg constants upstream
* Make cli signing fallible
Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
* Update epoch slots to include all missing slots
* new test for compress/decompress
* address review comments
* limit cache based on size, instead of comparing roots
* Add fallible methods to KeypairUtil
* Add RemoteKeypair struct and impl KeypairUtil
* Implement RemoteKeypair in keygen; also add parse_keypair_path for cleanup
* Show insufficient purge_zero_lamport_account logic
* Add another pass to detect non-deleted values and increment the count
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
* Fixup sign_transaction; pass derivation_path by reference
* Pass total message length as BE u16
* Remove live integration tests (to ledger-app-solana)
* CLI: Don't sanity-check stake account when offline
* Add test helper returning vote pubkey with validator
* Delegate to the BSL. No need to force
* Be sure our offline ops are truly offline
* Specify our authorities correctly
* checks
* Add CrdsValue timeout checks on Pull Responses
* Allow older values to enter Crds as long as a ContactInfo exists
* Allow staked contact infos to be inserted into crds if they haven't expired
* Try and handle overflows
* Fix test
* Some comments
* Fix compile
* fix test deadlock
* Add a test for processing timed out values received via pull response
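A minimal sketch of the admission rule these commits describe, with illustrative names (`now_ms`, `have_contact_info_for_origin`) rather than the actual Crds types:

```rust
/// Sketch only: accept a pull-response value that is past its timeout iff we
/// already track a ContactInfo for its origin; fresh values are always taken.
fn accept_pull_response(
    now_ms: u64,
    value_wallclock_ms: u64,
    timeout_ms: u64,
    have_contact_info_for_origin: bool,
) -> bool {
    let expired = now_ms.saturating_sub(value_wallclock_ms) > timeout_ms;
    !expired || have_contact_info_for_origin
}
```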
When a new root is created, the oldest slot is popped off
but when the logic checks for identical slots, it assumes
that any difference means a slot was popped off the front.
* Use solana-cli config keypair in solana-keygen
* s/infile/keypair for consistency across modules and more generality across access methods
* Move config into separate crate
* Consensus fix, don't consider threshold check if
lockouts are not increased
* Change partition tests to wait for epoch with > lockout slots
* Use atomic bool to signal partition
* Verb-noun-ify Nonce API
* Unify instruction naming with API naming
The more verbose nonce_account/NonceAccount was chosen for clarity
that these instructions work on a unique species of system account
* Rename bootstrap leader to bootstrap validator
It's a normal validator as soon as other validators enter the
leader schedule.
* cargo fmt
* Fix build
Thanks @CriesofCarrots!
* Split timestamp calculation into separate fn for math unit testing
* Add failing test
* Fix failing test; also bump stakes to near expected cluster max supply
* Don't error on timestamp of slot 0
* Spy just for RPC to avoid premature supermajority
* Make gossip_content_info private
Co-Authored-By: Michael Vines <mvines@gmail.com>
* Fix misindent...
Co-authored-by: Michael Vines <mvines@gmail.com>
* Consolidate entry tick verification into one function
* Mark bad slots as dead in blocktree processor
* more feedback
* Add bank.is_complete
* feedback
* Bank: Return nonce pubkey/account from `check_tx_durable_nonce`
* Forward account with HashAgeKind::DurableNonce
* Add durable nonce helper for HashAgeKind
* Add nonce util for advancing stored nonce in runtime
* Advance nonce in runtime
* Store rolled back nonce account on TX InstructionError
* nonce: Add test for replayed InstErr fee theft
* save limit deserialize
* save
* Save
* Clean up
* rustfmt
* rustfmt
* Just comment out to please CI
* Fix ci...
* Move code
* Rustfmt
* Clean up control flow
* Add another comment
* Introduce predetermined constant limit on snapshot data files (deserialize side)
* Introduce predetermined constant limit on snapshot data files (serialize side)
* rustfmt
* Tweak message
* Revert dynamic memory limit
* Limit size of snapshot data file (de)serialization
* Fix test breakage
* Clean up
* Fix uses formatting
* Rename: deserialize_{for,from}_snapshot
* Simplify comment
* Use Slot
* Provide slot for status cache
* Align variable name with snapshot_status_cache_file_path
* Define serialize_snapshot_data_file_with_metrics
* Fix build.......
* De-macro serialize_snapshot_data_file_with_metrics
* Revert u64 => Slot
* Propose Solana ABI management
* Mention fuzz testing
* Address minor review comments
* Remove versioning and unit tests
* Rename
* Clean up a bit
* Pass through Grammarly
* Yet more tweaks...
* Check append vec file size
* Don't use panic
* Clean up a bit
* Clean up
* Clean ups
* Change assertion into sanitization check
* Remove...
* Clean up
* More clean up
* More clean up
* Use assert_matches
This reverts commit a217920561.
This commit is causing trouble when the TdS cluster is reset and
validators running an older genesis config are still present.
Occasionally an RPC URL from an older validator will be selected,
causing a new node to fail to boot.
The blockstreamer instance is the TdS cluster entrypoint. Running an
additional solana-gossip node allows other participants to join a
cluster even if the validator node on the blockstreamer instance goes down.
* Stabilize fn coverage by pruning all updated files
* Pruning didn't work; Switch to clean room dir
* Oh, shellcheck...
* Remove the data_dir variable
* Comment about rationale for find + while read
* Move implemented proposals to implemented section of the book
Leave "Slashing" commentary in a new proposal.
* Remove considered considerations
@CriesofCarrots says meh about the first concern, and has moved the
second concern into GitHub issue #7485.
* Update "limit-ledger-size" to use DeleteRange for much faster deletes
* Update core/src/ledger_cleanup_service.rs
Co-Authored-By: Michael Vines <mvines@gmail.com>
* Rewrite more idiomatically
* Move max_ledger_slots to a fn for clippy
* Remove unused import
* Detect when all columns have been purged and fix a bug in deletion
* Check that more than 1 column is actually deleted
* Add helper to test that ledger meets minimum slot bounds
* Remove manual batching of deletes
* Refactor to keep some N slots older than the highest root
* Define MAX_LEDGER_SLOTS that ledger_cleanup_service will try to keep around
* Refactor compact range
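A hedged sketch of the retention policy these commits describe; the constant's value and all names here are illustrative, not the repo's actual ones:

```rust
/// Keep the most recent MAX_LEDGER_SLOTS slots behind the highest root and
/// purge everything older in a single range delete (DeleteRange) instead of
/// batched per-slot deletes. The value is illustrative.
const MAX_LEDGER_SLOTS: u64 = 270_000;

fn purge_range(lowest_slot: u64, highest_root: u64) -> Option<(u64, u64)> {
    let keep_from = highest_root.saturating_sub(MAX_LEDGER_SLOTS);
    if lowest_slot < keep_from {
        // Inclusive range of slots that may be deleted in one DeleteRange.
        Some((lowest_slot, keep_from - 1))
    } else {
        None
    }
}
```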
* Clean up align_to_8byte!
* small clean up
* Strictly sanitize mmapped AppendVec files
* Clean up
* Fix typo
* Rename align_to_8byte => u64_align
* Fix typo
* Clean up unsafe into methods of StoredAccount
* Made oddness more apparent
* Yet more clarification
* Promote a PR comment into a src comment
* Fix typo...
* Move ref_executable_byte out of tests impl
* Add blocktree timestamp helper functions and tests
* Flesh out blocktree::get_block_time
* Move stakes up into rpc to make testing easier; expand tests
* Review comments
* Fix up is_amount to handle floats for SOL; expand amount_of test
* Use required_lamports_from and is_amount across CLI
* Remove obsolete test (now handled by clap)
* Towards accounting for all tokens
* Move 5m tokens back into the big pool
* Flesh out batch 4
* Add a script to generate ValidatorInfo structs from a CSV file
* Remove commented out code and improve test
* Rework transaction processing result forwarding
Durable nonce prereq
* Add Durable Nonce program API
* Add runtime changes for Durable Nonce program
* Register Durable Nonce program
* Concise comments and bad math
* Fix c/p error
* Add rent sysvar to withdraw ix
* Remove rent exempt required balance from Meta struct
* Use the helper
* Add intermittent timestamp to Vote
* Add timestamp to VoteState, add timestamp processing to program
* Print recent timestamp with solana show-vote-account
* Add offset of 1 to timestamp Vote interval to initialize at node boot (slot 1)
* Review comments
* Cache last_timestamp in Tower and use for interval check
* Move work into Tower method
* Clarify timestamping interval
* Replace tuple with struct
* Fix repair when most peers are incapable of serving requests
* Add a test for getting the lowest slot in blocktree
* Replace some more u64s with Slot
* Refactor local cluster to support killing a partition
* Rework run_network_partition
* Introduce fixed leader schedule
* Plumb fixed schedule into test
* Add validator timestamp oracle proposal
* Make timestamping part of the Vote program
* Describe extending Vote to include timestamp: Option<UnixTimestamp>
* Qualify getBlockTime-eligible blocks as rooted
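A sketch of the stake-weighted block-time idea behind these commits, assuming a simple stake-weighted mean over recent vote timestamps; the actual calculation and types may differ:

```rust
type UnixTimestamp = i64;

/// Estimate a block time from (timestamp, stake) samples taken from recent
/// votes, weighting each validator's timestamp by its stake.
fn stake_weighted_timestamp(samples: &[(UnixTimestamp, u64)]) -> Option<UnixTimestamp> {
    let total_stake: u128 = samples.iter().map(|(_, stake)| u128::from(*stake)).sum();
    if total_stake == 0 {
        return None;
    }
    let weighted_sum: i128 = samples
        .iter()
        .map(|(ts, stake)| i128::from(*ts) * i128::from(*stake))
        .sum();
    Some((weighted_sum / total_stake as i128) as UnixTimestamp)
}
```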
* New daemon to tune system parameters like PoH service priority
* fixes for Linux
* integrate with poh_service
* fixes
* address review comments
* remove `dead_code` directive
* Colo: Dump escaping mess in remote script templates
* Colo: Rename script templates so shellcheck can get 'em
* shellcheck and nits
* Brace all of the things
* Consistent heredoc tags
* Use bash built-in square bracketing consistently
* simplify logic
* add investor stake placeholders
fixups
fixups
review comments, fixups
make more data-looky for easier management
rent may be zero
rework with more tables, derived keys
fixups
rebase-fix
fixups
fixups
* genesis is now too big to boot in 10 seconds
* Use clap_utils
* Create genesis.tar.bz2 in solana-genesis
* Remove shell-based genesis.tar.bz2 generation
* Make Option=>Result conv more rusty
* stop using solana_logger
* Simplify by just using vec!
* clean up a bit
* Allow vest's terminator to recapture tokens
* Less code
* Add a VestAll instruction
The terminator may decide it's impractical to maintain a vest
contract and want to make all tokens immediately redeemable.
* Pass blocktree into execute_batch, if persist_transaction_status
* Add validator arg to enable persistent transaction status store
* Pass blocktree into banking_stage, if persist_transaction_status
* Add validator params to bash scripts
* Expose actual transaction statuses outside Bank; add tests
* Fix benches
* Offload transaction status writes to a separate thread
* Enable persistent transaction status along with rpc service
* nudge
* Review comments
--gossip-port now specifies exactly that, the gossip port to use. The
new --gossip-host argument can be used to specify the DNS name/IP
address for gossip if --entrypoint is not supplied (when --entrypoint is
supplied, the gossip address is automatically set to the node's ip
address as observed by the entrypoint)
* Fix bank hash not changing when no internal state has changed
* Fix unnecessary call to hash_internal_state
* Add blockhash into the bank_hash
* Add blockhash into the bank_hash and update tests
* Refactor accounts_db slot_hashes
* More clarity in comments
* Add clippy suggestion
* Grammar
* Fix compile after clippy made me break it
* Schooled by clippy
* Add non-fungible token program
* Remove issuer and id from state
* Boot NftInstruction and NftState
* Rename NFT to Ownable
Maybe this should be "Owned" to avoid confusion with an Ownable trait?
* Rename directory
* Delete unreachable branch
* Don't use copy_from_slice - need an error, not a panic.
* Rename contract_pubkey to account_pubkey
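On the `copy_from_slice` point above: a sketch of returning an error on length mismatch instead of panicking. `ProgramError` here is a stand-in, not the program's actual error type:

```rust
#[derive(Debug)]
enum ProgramError {
    InvalidInput,
}

/// copy_from_slice panics when the lengths differ; surface a mismatch as an
/// error the runtime can handle instead.
fn set_account_data(dst: &mut [u8], src: &[u8]) -> Result<(), ProgramError> {
    if dst.len() != src.len() {
        return Err(ProgramError::InvalidInput);
    }
    dst.copy_from_slice(src);
    Ok(())
}
```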
* run.sh: Create genesis file for ad-hoc validators
* run.sh: Prefer release under NDEBUG
* run.sh: Add sanity test for run.sh
* run.sh: Conditionally re-gen drone and faucet keys
* Make shellcheck happy
* Address code review comments
* Clean up a bit
* Remove the name "blob" from archivers
* Remove the name "blob" from broadcast
* Remove the name "blob" from Cluset Info
* Remove the name "blob" from Repair
* Remove the name "blob" from a bunch more places
* Remove the name "blob" from tests and book
* Remove Blobs and switch to Packets
* Fix some gossip messages not respecting MTU size
* Failure to serialize is not fatal
* Add log macros
* Remove unused extern
* Apparently macro use is required
* Explicitly scope macro
* Fix test compile
* Make solana-validator check vote account at start
* Don't abort tests...
* Fix test breakage
* Remove extra semicolon
* Attempt to fix cluster-tests
* rustfmt
* Change behavior of vote_account ephemeral pubkeys
* save
* clean up
* clean up
* rustfmt && clippy
* Reorder for simpler diff
* Fix rebase...
* Fix message a bit
* Still more rebase fixes....
* Fix yet more
* Use find_map over filter_map & next and revert message
* More thorough error checks
* rustfmt & clippy
* Revert
* Revert core/src/validator.rs
* Cleanup
* Cleanup
* Cleanup
* Rebase fix
* Make clippy & rustfmt happy
* save
* Clean up
* Show rpc error detail
* Check node lamports only after pubkey matching
* rustfmt
* keygen: grind --ignore-case was not honored
* keygen: Improve grind --ignore-case ergonomics
Don't silently require the user to know their search term needs to be lowercase
* fmt
* Name anonymous parameters for clarity
* Add CommitmentConfig to select bank for rpc
* Add commitment information to jsonrpc docs
* Update send_and_confirm retries as per commitment defaults
* Pass CommitmentConfig into client requests; also various 'use' cleanup
* Use _with_commitment methods to speed local_cluster tests
* Pass CommitmentConfig into Archiver in order to enable quick confirmations in local_cluster tests
* Restore solana ping speed
* Increase wallet-sanity timeout to account for longer confirmation time
* Add 'cmake' to default DC node installer
* Add 'sysstat' to default DC node installer
For 'iostat'
* Add 'perf' to default DC node installer
* Add 'iftop' to default DC node installer
* vote array
wip
wip
wip
update
gossip index should match tower index
tests build
clippy
test index after expired vote
test
bank specific last vote sync time
* verify
* we are likely to see many more warnings about old votes now
* SDK: Add sysvar to expose recent block hashes to programs
* Blockhashes is one word
* Missed one
* Avoid allocs on update
* unwrap_or_else
* Use iterators
* Add microbench
* Revert "unwrap_or_else"
This reverts commit a8f8c3bfbe.
* Revert "Avoid allocs on update"
This reverts commit 486f01790c.
* sign gpu shreds
* wip
* checks
* tests build
* test
* tests
* test
* nits
* sign cpu test
* write out the sigs in parallel
* clippy
* cpu test
* prepare secret for gpu
* woot!
* update
* bump perf libs
This node gets overloaded at high TPS trying to manage both a validator
and the blockexplorer. Reduce its workload by turning off sigverify,
which doesn't really matter since this node doesn't even vote
* Specify machine type without necessarily enabling GPU
* Make long arg, extend --enable-gpu to automation
* Set machine types only in one place
* Fixup
* Fixup flag in automation
* Typo
* shellcheck
* owner_checks
* only system program may assign owner, and only if pre.owner is system
* moar coverage!
* moar coverage, allow re-assignment IFF data is zeroed
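A hedged sketch of the assignment rule these commits describe; the real check lives in the runtime's account verification, and the types and error variant here are illustrative:

```rust
#[derive(PartialEq)]
struct Pubkey([u8; 32]);

#[derive(Debug)]
enum InstructionError {
    ModifiedProgramId,
}

/// Owner may change only when: the system program executes the instruction,
/// the account is currently system-owned, and its data has been zeroed.
fn verify_owner_change(
    program_id: &Pubkey,
    system_program_id: &Pubkey,
    pre_owner: &Pubkey,
    pre_data: &[u8],
    post_owner: &Pubkey,
) -> Result<(), InstructionError> {
    if pre_owner != post_owner {
        if program_id != system_program_id
            || pre_owner != system_program_id
            || !pre_data.iter().all(|byte| *byte == 0)
        {
            return Err(InstructionError::ModifiedProgramId);
        }
    }
    Ok(())
}
```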
* credit_only_credits_forwarding
* whack transfer_now()
* fixup
* bench should retry the airdrop TX
* fixup
* try to make bench-exchange a bit more robust, informative
* Cut down on liberal use of borrow()
* No need to map_err(Into::into)
* Group From instances
* Remove Direction indirection
* Let rustfmt order imports
* Better copypasta
* Cleanup copypasta
* Add explicit lifetimes so that it doesn't get pegged to 'static when we upgrade rocksdb
* Remove redundant type aliases
* Async poh verify
* Up ticks_per_s to 160
GPU poh verify needs shorter poh sequences or it takes forever to
verify. Keep slot time the same at 400ms; at 160 ticks/s that works out to 160 × 0.4 = 64 ticks per slot.
* Fix stats
* Don't halt on ticks
* Increase retries for local_cluster tests and make repairman test serial
* Add script to publish testnet results to slack
* Obscure webhook URL
* fixup
* Replace read with cat redirection
* Turn back on net restart
* Pick nits
* Make symlink before trying to delete its contents
* Display test config in slack and pick Trent's nit not to maybe rm -rf /*
* Clean up results print
* Minor nits
* Turn the test settings back up to 11
* typo
* Shellcheck
* Just a few more fields
* fix payload formatting
* Del clear-config.sh
* Mount secondary
* Add commit SHA link and Grafana time range URL
* Add fancy buttons instead of text URLs
* Tighten up test config display
* Fixup display nits
* shellcheck
* Rebase and fix typo
* Make parse_command consistent
* Strip pubkey out of parse_stake_create_account
* Move validator-info args into module
* Strip pubkey out of parse_validator_info_command
* Strip pubkey out of parse_vote_create_account
* Strip pubkey out of balance parsing
* Strip pubkey out of parse pay
* Only verify keypair existence if command requires it
* Use struct instead of tuple
* Remove core::result dependency from blocktree
* Remove core::result dependency from shred
* Move Packet from core::packet to sdk::packet
This way we don't need to split perf_libs yet.
* Disable packet when compiling BPF programs
* Stabilize some banking stage tests
Fixes #5660
* Fix CI...
* clean up
* Fix ci
* Address review nits
* Use bank.max_tick_height due to off-by-one for no PohRecord's clearing bank
* Fix CI...
* Use bank.max_tick_height() instead for clarity
* collect rent from credit-debit accounts
* collect rent from credit only account
* rent_collector now can deduct partial rent + no mem copy + improved design
* adding a test to test credit only rent
* add bank level test for rent deduction
* add test to check if hash value changes or not
* adding test scenario for lamport circulation
* collect rent from credit-debit account
* collect rent from credit only account
* improved design for rent collection
* only process if collected rent is non zero
* rent_collector now can deduct partial rent + no mem copy
* adding a test to test credit only rent
* add bank level test for rent deduction
* add test to check if hash value changes or not
* adding test scenario for lamport circulation
* combining rent debtors into credit only locks
* SDK: Refactor (read|write)_keypair
Split file opening and data writing operations
Drop filename == "-" stdio signal. It is an app-level feature
* keygen: Move all non-key printing to stderr
* keygen: Adapt to SDK refactor
* keygen: Factor keypair output out to a helper function
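A sketch of the `(read|write)_keypair` split described above: opening the file is separated from reading the data, so the reader half works over any `io::Read`. The byte format is simplified here to a raw 64-byte array, a stand-in for the real keypair encoding:

```rust
use std::fs::File;
use std::io::{self, Read};

type Keypair = [u8; 64]; // stand-in for the real ed25519 keypair type

/// Parse a keypair from any reader; knows nothing about files.
fn read_keypair<R: Read>(reader: &mut R) -> io::Result<Keypair> {
    let mut bytes = [0u8; 64];
    reader.read_exact(&mut bytes)?;
    Ok(bytes)
}

/// Open the file here; all parsing stays in read_keypair.
fn read_keypair_file(path: &str) -> io::Result<Keypair> {
    read_keypair(&mut File::open(path)?)
}
```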
* Refactor blocktree processor args and support full leader cache
* Add entry callback option
* Rename num_threads to override_num_threads
* Add test for entry callback
* Refactor cached leader schedule changes
* Add tests for blocktree process options
* Refactor test
* @mvines feedback
* add missing convenience method
* require vote account to be exempt
* make stake account rent exempt
* making executable rent exempt
* rent will be initialized in genesis
* add test for update_rent
* split wallet staking commands
* elide real home
* unit->UNIT for usage
* unit->UNIT, don't try to run SUBCOMMANDS: ;)
* more fixup
* fixups
* actually check
* shellcheck
* preserve #6158 after rebase
* fixup
* test
* too hard
* remove test
* server side new rpc endpoint
* client side rpc
* take data_len as usize
Co-Authored-By: Tyera Eulberg <teulberg@gmail.com>
* add test and documentation
* Remove serialization of AccountStorageEntry fields
* Add metric for evaluating BankRc serialization time
* Serialize AppendVec current len
* Add dashboard metrics
* Move flush of AppendVecs to packaging thread
* Move status cache serialization to the Snapshot Packager service
* Minor comment updates
* use ok_or_else instead of ok_or
* status cache
* Remove assert when snapshot format is wrong
* Fix compile
* Remove slots_to_snapshot from bank forks
* Address review comment
* Remove unused imports
* Change confidence parameters
* Add status_cache_ancestors to get all relevant ancestors of a bank including roots from status cache
* Fix and add tests
* Clippy
* require vote account to be exempt
* make stake account rent exempt
* add rent exempted system instruction
* use rent exemption instruction in vote and stake api
* use rent exempted account while creating executable account
* updating chacha golden hash as instruction data has changed
* rent will be initialized for genesis bank too
* Check if an update is current before deploying it again
* Add (new) update command to deploy testnet updates
* Add --deploy-if-newer flag to permit conditional net updates
* Release builds for test
* Remove setting thread count in local cluster
* Increase timeout
* Move local cluster to separate job
* Extract out local cluster test from bench-tps
* Make local cluster inaccessible from outside crate
* Update test-stable.sh to exclude local_cluster in stable, include it in local-cluster CI job
* Move bench-exchange to local cluster
* Remove local cluster from coverage
* Clarify runtime vs program rules
And define "smart contract"
* Apply review feedback
* Rename secret key to private key
* Rename pubkey to public key in book
"pubkey" is a great shorthand in code, but it's not common in the
industry or something we want to spend time explaining to users.
* rename rent.rs to rent_calculator.rs
* add rent sysvar
* integrate rent_calculator with bank
* rent_calculator integration with genesis
* add test for rent sysvar
* Add mnemonic keypair generation and recovery to cli
* Use password input to retrieve mnemonic phrase
* Direct users without keypair file to use solana-keygen
* Cleanup shreds to remove FirstShred data structure
* Also reduce size used by parent slot information in shred header
* clippy
* fixes
* fix chacha test
* Refactor shreds to prevent insertion of any metadata on bad shreds
* Refactor fetching Index in blocktree
* Refactor get_slot_meta_entry
* Re-enable local cluster test
* cleanup
* Add tests for success/fail insertion of coding/data shreds
* Remove assert
* Fix and add tests for should_insert coding and data blobs
* btc_spv program directories
* add spv-instruction spv-state
* added spv_processor file
* cargo.tomls - bump versions, rm unnecessary deps
* add btc_spv_bin and top lvl workspace entry
* hex_decode util & errors
* add header parsing test
* update dependencies
* rustfmt
* refactor Requests
* fix dependencies/versions
* clippy fixes
* test improvements
* add gitignores
Add framework for the rest of the BTC-SPV stuff to be built on top of. This PR defines the components, data structures, accessors, etc. but is not quite complete. It still needs the headerstore component finished along with some of the validation utils, hashing stuff, and more tests.
* Factor out hardcoded testnet ssh key path
* Build/create test net ssh key path
* Rename testnet ssh dir
* Give testnetSSHDir a more generic name
* shellcheck
* favor hardcoded paths over `paths.sh`
* Put instance-startup-complete stamp in the scratch dir as well
* Rename `/solana` > `/solana-scratch`
churn
cleanup
reverse test slot hashes
test check_slots_are_valid
updates
only send the minimum bank vote difference
fixup! only send the minimum bank vote difference
some banks may not have a voting account setup
fixup! votes only need slots and the last bank hash
fixup! fixup! votes only need slots and the last bank hash
fmt
fixed compare
fixed vote
fixup! fixed vote
poke ci
filter the local votes via the last bank vote
Summary of Changes:
This change adds functionality to randomize tx execution for every entry. It does this by implementing an OrderedIterator that iterates the tx slice in the order specified. The order is generated randomly for every entry.
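A minimal sketch of such an iterator; the name `OrderedIterator` comes from the summary above, everything else is illustrative:

```rust
/// Yields elements of a slice in a caller-specified order, e.g. a random
/// permutation generated per entry.
struct OrderedIterator<'a, T> {
    slice: &'a [T],
    order: &'a [usize], // a permutation of 0..slice.len()
    pos: usize,
}

impl<'a, T> OrderedIterator<'a, T> {
    fn new(slice: &'a [T], order: &'a [usize]) -> Self {
        assert_eq!(slice.len(), order.len());
        Self { slice, order, pos: 0 }
    }
}

impl<'a, T> Iterator for OrderedIterator<'a, T> {
    type Item = &'a T;
    fn next(&mut self) -> Option<&'a T> {
        let index = *self.order.get(self.pos)?;
        self.pos += 1;
        self.slice.get(index)
    }
}
```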
* Integrate coding shreds and recovery
* More tests for shreds and some fixes
* address review comments
* fixes to code shred generation
* unignore tests
* fixes to recovery
* Revert "Add test program for BPF memory corruption bug (#5603)"
This reverts commit 63d62c33c6.
* Revert "Revert "Add test program for BPF memory corruption bug (#5603)""
This reverts commit 9502082cda.
* Fix clippy and fmt issues
* net: init-metrics.sh - urlencode influx password
* old backticks bad!
* Move urlencode() to common.sh
* Make urlencode() vars local
Co-Authored-By: Michael Vines <mvines@gmail.com>
* Insert data shreds in blocktree and database
* Integrate data shreds with rest of the code base
* address review comments, and some clippy fixes
* Fixes to some tests
* more test fixes
* ignore some local cluster tests
* ignore replicator local cluster tests
* Coalesce gossip pull requests and serve them in batches
* batch all filters and immediately respond to messages in gossip
* Fix tests
* make download_from_replicator perform a greedy recv
* Remove unnecessary entry_height from BankInfo
* Refactor process_blocktree to support process_blocktree_from_root
* Refactor to process blocktree after loading from snapshot
* On restart make sure bank_forks contains all the banks between the root and the tip of each fork, not just the head of each fork
* Account for 1 tick_per_slot in bank 0 so that blockhash of bank0 matches the tick
* fixed bloom filter math
* Add split each pull request into multiple pulls with different filters
* Rework CrdsFilter to generate all possible masks to cover the keyspace
* Limit the bloom sizes such that each pull request is no larger than mtu
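A hedged sketch of the mask math: partition the 64-bit key space into 2^bits ranges, one bloom filter each, choosing `bits` so that every filter (and thus every pull request) fits in an MTU-sized packet. All names are illustrative:

```rust
/// Mask for partition `index`: its top `bits` bits select the range and the
/// remaining bits are ones.
fn mask_for(index: u64, bits: u32) -> u64 {
    if bits == 0 {
        return u64::MAX; // a single filter covers the whole key space
    }
    (index << (64 - bits)) | (u64::MAX >> bits)
}

/// A key belongs to a filter iff its top `bits` bits match the mask's.
fn mask_matches(mask: u64, bits: u32, key: u64) -> bool {
    bits == 0 || (key >> (64 - bits)) == (mask >> (64 - bits))
}

/// Smallest number of partition bits such that no filter is expected to hold
/// more than `max_items` keys.
fn mask_bits(num_items: f64, max_items: f64) -> u32 {
    (num_items / max_items).log2().ceil().max(0.0) as u32
}
```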
* Rate limit transaction counters
* @sakridge feedback
* Set default high metrics rate for multinode demo
* Fix tests
* Swap defaults and fix env var tests
* Only set metrics rate if not already set
* Implement shred erasure recovery and reassembly
* fixes and unit test
* clippy
* review comments, additional tests, and some fixes
* address review comments
* more tests and cleanup
* Remove 'configured_flag' for vote/storage account, instead detect if they exist with the wallet
* Require --voting-keypair when using release binaries
* Refuse to delegate stake to a vote account with a stale root slot
* Remove sdk-c from the virtual manifest temporarily
For an unknown reason |cargo clippy| is getting stuck in CI
intermittently when trying to build this crate.
* Revert "Revert "Default log level to to RUST_LOG=solana=info (#5296)" (#5302)"
This reverts commit 7796e87814.
* Default to error logs, override with info only for those programs that need it
@@ -23,12 +23,12 @@ It's possible for a centralized database to process 710,000 transactions per sec
Furthermore, and much to our surprise, it can be implemented using a mechanism that has existed in Bitcoin since day one. The Bitcoin feature is called nLocktime and it can be used to postdate transactions using block height instead of a timestamp. As a Bitcoin client, you'd use block height instead of a timestamp if you don't trust the network. Block height turns out to be an instance of what's being called a Verifiable Delay Function in cryptography circles. It's a cryptographically secure way to say time has passed. In Solana, we use a far more granular verifiable delay function, a SHA 256 hash chain, to checkpoint the ledger and coordinate consensus. With it, we implement Optimistic Concurrency Control and are now well en route towards that theoretical limit of 710,000 transactions per second.
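A minimal sketch of that hash chain, using the `sha2` crate (an assumption; the repo has its own hashing plumbing). Each state depends on the previous one, so producing `num_hashes` states is inherently sequential, which is what makes elapsed time verifiable:

```rust
use sha2::{Digest, Sha256};

/// Advance a Proof of History state by num_hashes sequential SHA-256 steps.
fn poh_extend(mut state: [u8; 32], num_hashes: u64) -> [u8; 32] {
    for _ in 0..num_hashes {
        let digest = Sha256::digest(&state);
        state.copy_from_slice(&digest);
    }
    state
}
```

Verification, by contrast, can slice the chain into segments and check them in parallel, which is why GPU verification of many short sequences is practical.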
Architecture
Documentation
===
Before you jump into the code, review the online book [Solana: Blockchain Rebuilt for Scale](https://solana-labs.github.io/book/).
Before you jump into the code, review the documentation [Solana: Blockchain Rebuilt for Scale](https://docs.solana.com).
(The _latest_ development version of the online book is also [available here](https://solana-labs.github.io/book-edge/).)
(The _latest_ development version of the docs is [available here](https://docs.solana.com/v/master).)
Release Binaries
===
@@ -78,7 +78,7 @@ $ source $HOME/.cargo/env
$ rustup component add rustfmt
```
If your rustc version is lower than 1.34.0, please update it:
If your rustc version is lower than 1.39.0, please update it:
```bash
$ rustup update
@@ -87,7 +87,8 @@ $ rustup update
On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc. On Ubuntu:
Start your own testnet locally, instructions are in the book [Solana: Blockchain Rebuilt for Scale: Getting Started](https://solana-labs.github.io/book/getting-started.html).
Start your own testnet locally, instructions are in the online docs [Solana: Blockchain Rebuilt for Scale: Getting Started](https://docs.solana.com/building-from-source).
Remote Testnets
---
We maintain several testnets:
* `testnet` - public stable testnet accessible via devnet.solana.com. Runs 24/7
* `testnet` - public stable testnet accessible via testnet.solana.com. Runs 24/7
* `testnet-beta` - public beta channel testnet accessible via beta.testnet.solana.com. Runs 24/7
* `testnet-edge` - public edge channel testnet accessible via edge.testnet.solana.com. Runs 24/7
## Deploy process
@@ -240,5 +238,3 @@ problem is solved by this code?" On the other hand, if a test does fail and you
better way to solve the same problem, a Pull Request with your solution would most certainly be
welcome! Likewise, if rewriting a test can better communicate what code it's protecting, please
@@ -59,81 +59,90 @@ There are three release channels that map to branches as follows:
* beta - tracks the largest (and latest) `vX.Y` stabilization branch, more stable.
* stable - tracks the second largest `vX.Y` stabilization branch, most stable.
## Release Steps
## Steps to Create a Branch
### Creating a new branch from master
#### Create the new branch
1. Pick your branch point for release on master.
1. Create the branch. The name should be "v" + the first 2 "version" fields
### Create the new branch
1. Check out the latest commit on `master` branch:
```
git fetch --all
git checkout upstream/master
```
1. Determine the new branch name. The name should be "v" + the first 2 version fields
from Cargo.toml. For example, a Cargo.toml with version = "0.9.0" implies
the next branch name is "v0.9".
1. Note the Cargo.toml in the repo root directory does not contain a version. Look at any other Cargo.toml file.
1. Create a new branch and push this branch to the solana repository.
1. `git checkout -b <branchname>`
1. `git push -u origin <branchname>`
1. Create the new branch and push this branch to the `solana` repository:
```
git checkout -b <branchname>
git push -u origin <branchname>
```
#### Update master with the next version
### Update master branch with the next version
1. After the new branch has been created and pushed, update Cargo.toml on **master** to the next semantic version (e.g. 0.9.0 -> 0.10.0)
by running `./scripts/increment-cargo-version.sh`, then rebuild with
`cargo build` to cause a refresh of `Cargo.lock`.
1. Push your Cargo.toml change and the autogenerated Cargo.lock changes to the
master branch
1. After the new branch has been created and pushed, update the Cargo.toml files on **master** to the next semantic version (e.g. 0.9.0 -> 0.10.0) with:
```
scripts/increment-cargo-version.sh minor
```
1. Rebuild to get an updated version of `Cargo.lock`:
```
cargo build
```
1. Push all the changed Cargo.toml and Cargo.lock files to the `master` branch with something like:
```
git co -b version_update
git ls-files -m | xargs git add
git commit -m 'Update Cargo.toml versions from X.Y to X.Y+1'
git push -u origin version_update
```
1. Confirm that your freshly cut release branch is shown as `BETA_CHANNEL` and the previous release branch as `STABLE_CHANNEL`:
```
ci/channel-info.sh
```
At this point, `ci/channel-info.sh` should show your freshly cut release branch as
"BETA_CHANNEL" and the previous release branch as "STABLE_CHANNEL".
## Steps to Create a Release
### Create the Release Tag on GitHub
1. Go to [GitHub's Releases UI](https://github.com/solana-labs/solana/releases) for tagging a release.
1. Click "Draft new release". The release tag must exactly match the `version`
field in `/Cargo.toml` prefixed by `v`.
1. If the Cargo.toml version field is **0.12.3**, then the release tag must be **v0.12.3**
1. Make sure the Target Branch field matches the branch you want to make a release on.
1. If you want to release v0.12.0, the target branch must be v0.12
1. If this is the first release on the branch (e.g. v0.13.**0**), paste in [this
template](https://raw.githubusercontent.com/solana-labs/solana/master/.github/RELEASE_TEMPLATE.md). Engineering Lead can provide summary contents for release notes if needed.
1. Click "Save Draft", then confirm the release notes look good and the tag name and branch are correct. Go back into edit the release and click "Publish release" when ready.
### Update release branch with the next patch version
1. After the new release has been tagged, update the Cargo.toml files on **release branch** to the next semantic version (e.g. 0.9.0 -> 0.9.1) with:
```
scripts/increment-cargo-version.sh patch
```
1. Rebuild to get an updated version of `Cargo.lock`:
```
cargo build
```
1. Push all the changed Cargo.toml and Cargo.lock files to the **release branch** with something like:
```
git co -b version_update
git ls-files -m | xargs git add
git commit -m 'Update Cargo.toml versions from X.Y.Z to X.Y.Z+1'
git push -u origin version_update
```
### Verify release automation success
1. Go to [Solana Releases](https://github.com/solana-labs/solana/releases) and click on the latest release that you just published. Verify that all of the build artifacts are present. This can take up to 90 minutes after creating the tag.
1. The `solana-secondary` Buildkite pipeline handles creating the binary tarballs and updated crates. Look for a job under the tag name of the release: https://buildkite.com/solana-labs/solana-secondary
1. [Crates.io](https://crates.io/crates/solana) should have an updated Solana version.
### Update documentation
TODO: Documentation update procedure is WIP as we move to gitbook
Document the new recommended version by updating `docs/src/running-archiver.md` and `docs/src/validator-testnet.md` on the release (beta) branch to point at the `solana-install` for the upcoming release version.
### Make the Release
### Update software on devnet.solana.com
We use [github's Releases UI](https://github.com/solana-labs/solana/releases) for tagging a release.
1. Go [there ;)](https://github.com/solana-labs/solana/releases).
1. Click "Draft new release". The release tag must exactly match the `version`
field in `/Cargo.toml` prefixed by `v` (i.e., `<branchname>.X`).
1. If the Cargo.toml version field is **0.12.3**, then the release tag must be **v0.12.3**
1. If this is the first release on the branch (e.g. v0.13.**0**), paste in [this
After a block reaches finality, all blocks from that one on down
to the genesis block form a linear chain with the familiar name
blockchain. Until that point, however, the validator must maintain all
potentially valid chains, called *forks*. The process by which forks
naturally form as a result of leader rotation is described in
[fork generation](fork-generation.md). The *blocktree* data structure
described here is how a validator copes with those forks until blocks
are finalized.
The blocktree allows a validator to record every blob it observes
on the network, in any order, as long as the blob is signed by the expected
leader for a given slot.
Blobs are moved to a fork-able key space: the tuple of `leader slot` + `blob
index` (within the slot). This permits the skip-list structure of the Solana
protocol to be stored in its entirety, without a priori choosing which fork to
follow, which Entries to persist, or when to persist them.
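A sketch of that key space, assuming big-endian `(slot, index)` keys so the backing store sorts by slot first and then by blob index within the slot; the exact layout is illustrative:

```rust
/// Key a blob by (leader slot, blob index). Big-endian encoding makes the
/// store's lexicographic order match numeric order, so one slot's blobs are
/// contiguous and per-slot range scans are cheap.
fn blob_key(slot: u64, index: u64) -> [u8; 16] {
    let mut key = [0u8; 16];
    key[..8].copy_from_slice(&slot.to_be_bytes());
    key[8..].copy_from_slice(&index.to_be_bytes());
    key
}
```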
Repair requests for recent blobs are served out of RAM or recent files and out
of deeper storage for less recent blobs, as implemented by the store backing
Blocktree.
### Functionalities of Blocktree
1. Persistence: the Blocktree lives at the front of the node's verification
pipeline, right behind network receive and signature verification. If the
blob received is consistent with the leader schedule (i.e. was signed by the
leader for the indicated slot), it is immediately stored.
2. Repair: repair is the same as window repair above, but able to serve any
blob that's been received. Blocktree stores blobs with signatures,
preserving the chain of origination.
3. Forks: Blocktree supports random access of blobs, so can support a
validator's need to rollback and replay from a Bank checkpoint.
4. Restart: with proper pruning/culling, the Blocktree can be replayed by
ordered enumeration of entries from slot 0. The logic of the replay stage
(i.e. dealing with forks) will have to be used for the most recent entries in
the Blocktree.
### Blocktree Design
1. Entries in the Blocktree are stored as key-value pairs, where the key is the concatenated
slot index and blob index for an entry, and the value is the entry data. Note blob indexes are zero-based for each slot (i.e. they're slot-relative).
2. The Blocktree maintains metadata for each slot, in the `SlotMeta` struct (sketched in Rust after this list) containing:
* `slot_index` - The index of this slot
* `num_blocks` - The number of blocks in the slot (used for chaining to a previous slot)
* `consumed` - The highest blob index `n`, such that for all `m < n`, there exists a blob in this slot with blob index equal to `m` (i.e. the highest consecutive blob index).
* `received` - The highest received blob index for the slot
* `next_slots` - A list of future slots this slot could chain to. Used when rebuilding
the ledger to find possible fork points.
* `last_index` - The index of the blob that is flagged as the last blob for this slot. This flag on a blob will be set by the leader for a slot when they are transmitting the last blob for a slot.
* `is_rooted` - True iff every block from 0...slot forms a full sequence without any holes. We can derive is_rooted for each slot with the following rules. Let slot(n) be the slot with index `n`, and slot(n).is_full() be true if the slot with index `n` has all the ticks expected for that slot. Let is_rooted(n) be the statement that "slot(n).is_rooted is true". Then:
is_rooted(0)
is_rooted(n+1) iff (is_rooted(n) and slot(n).is_full())
3. Chaining - When a blob for a new slot `x` arrives, we check the number of blocks (`num_blocks`) for that new slot (this information is encoded in the blob). We then know that this new slot chains to slot `x - num_blocks`.
4. Subscriptions - The Blocktree records a set of slots that have been "subscribed" to. This means entries that chain to these slots will be sent on the Blocktree channel for consumption by the ReplayStage. See the `Blocktree APIs` for details.
5. Update notifications - The Blocktree notifies listeners when slot(n).is_rooted is flipped from false to true for any `n`.
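A hedged Rust sketch of the `SlotMeta` fields listed in item 2 above; the real struct may differ in names and types:

```rust
struct SlotMeta {
    slot_index: u64,      // index of this slot
    num_blocks: u64,      // blocks in the slot, for chaining to a previous slot
    consumed: u64,        // highest consecutive blob index received
    received: u64,        // highest blob index received, gaps allowed
    next_slots: Vec<u64>, // future slots this slot could chain to
    last_index: u64,      // index of the blob flagged as last for the slot
    is_rooted: bool,      // slots 0..=slot_index form a hole-free sequence
}
```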
### Blocktree APIs
The Blocktree offers a subscription-based API that ReplayStage uses to ask for entries it's interested in. The entries will be sent on a channel exposed by the Blocktree. These subscription APIs are as follows:
1. `fn get_slots_since(slot_indexes: &[u64]) -> Vec<SlotMeta>`: Returns new slots connecting to any element of the list `slot_indexes`.
2. `fn get_slot_entries(slot_index: u64, entry_start_index: usize, max_entries: Option<u64>) -> Vec<Entry>`: Returns the entry vector for the slot starting with `entry_start_index`, capping the result at `max` if `max_entries == Some(max)`, otherwise, no upper limit on the length of the return vector is imposed.
Note: Cumulatively, this means that the replay stage will now have to know when a slot is finished, and subscribe to the next slot it's interested in to get the next set of entries. Previously, the burden of chaining slots fell on the Blocktree.
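A hedged usage sketch of the two calls, reusing the `SlotMeta` sketch above; `BlocktreeApi` and `Entry` are stand-ins for illustration only:

```rust
struct Entry; // stand-in for the real ledger entry type

trait BlocktreeApi {
    fn get_slots_since(&self, slot_indexes: &[u64]) -> Vec<SlotMeta>;
    fn get_slot_entries(
        &self,
        slot_index: u64,
        entry_start_index: usize,
        max_entries: Option<u64>,
    ) -> Vec<Entry>;
}

/// Poll for newly connected slots, drain their entries, and subscribe to the
/// slots that chain on, since chaining is now the replay stage's burden.
fn poll_entries(blocktree: &dyn BlocktreeApi, subscriptions: &mut Vec<u64>) -> Vec<Entry> {
    let mut entries = Vec::new();
    for meta in blocktree.get_slots_since(subscriptions) {
        entries.extend(blocktree.get_slot_entries(meta.slot_index, 0, None));
        subscriptions.extend(&meta.next_slots);
    }
    entries
}
```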
### Interfacing with Bank
The bank exposes to replay stage:
1. `prev_hash`: which PoH chain it's working on as indicated by the hash of the last
entry it processed
2. `tick_height`: the ticks in the PoH chain currently being verified by this
bank
3. `votes`: a stack of records that contain:
1. `prev_hashes`: what anything after this vote must chain to in PoH
2. `tick_height`: the tick height at which this vote was cast
3. `lockout period`: how long a chain must be observed to be in the ledger to
be able to be chained below this vote
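A sketch of that vote record as a struct, with a stand-in `Hash` type; field names follow the list above:

```rust
struct Hash([u8; 32]); // stand-in for the real PoH hash type

struct VoteRecord {
    prev_hashes: Hash,   // what anything after this vote must chain to in PoH
    tick_height: u64,    // tick height at which this vote was cast
    lockout_period: u64, // how long a chain must be observed to be in the
                         // ledger before it may chain below this vote
}
```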
Replay stage uses Blocktree APIs to find the longest chain of entries it can
hang off a previous vote. If that chain of entries does not hang off the
latest vote, the replay stage rolls back the bank to that vote and replays the
chain from there.
### Pruning Blocktree
Once Blocktree entries are old enough, representing all the possible forks
becomes less useful, perhaps even problematic for replay upon restart. Once a
validator's votes have reached max lockout, however, any Blocktree contents
that are not on the PoH chain for that vote can be pruned, expunged.
Replicator nodes will be responsible for storing really old ledger contents,
and validators need only persist their bank periodically.
A colluding validation-client may take the strategy of marking PoReps from non-colluding replicator nodes as invalid in an attempt to maximize the rewards for the colluding replicator nodes. In this case, it isn’t feasible for the offended-against replicator nodes to petition the network for resolution as this would result in a network-wide vote on each offending PoRep and create too much overhead for the network to progress adequately. Also, this mitigation attempt would still be vulnerable to a >= 51% staked colluder.
Alternatively, transaction fees from submitted PoReps are pooled and distributed across validation-clients in proportion to the number of valid PoReps discounted by the number of invalid PoReps as voted by each validator-client. Thus invalid votes are directly dis-incentivized through this reward channel. Invalid votes that are revealed by replicator nodes as fishing PoReps, will not be discounted from the payout PoRep count.
Another collusion attack involves a validator-client who may take the strategy of ignoring invalid PoReps from a colluding replicator and voting them as valid. In this case, colluding replicator-clients would not have to store the data while still receiving rewards for validated PoReps. Additionally, colluding validator nodes would also receive rewards for validating these PoReps. To mitigate this attack, validators must randomly sample PoReps corresponding to the ledger block they are validating and because of this, there will be multiple validators that will receive the colluding replicator’s invalid submissions. These non-colluding validators will be incentivized to mark these PoReps as invalid as they have no way to determine whether the proposed invalid PoRep is actually a fishing PoRep, for which a confirmation vote would result in the validator’s stake being slashed.
In this case, the proportion of time a colluding pair will be successful has an upper limit determined by the % of stake of the network claimed by the colluding validator. This also sets bounds to the value of such an attack. For example, if a colluding validator controls 10% of the total validator stake, transaction fees will be lost (likely sent to mining pool) by the colluding replicator 90% of the time and so the attack vector is only profitable if the per-PoRep reward is at least 90% higher than the average PoRep transaction fee. While, probabilistically, some colluding replicator-client PoReps will find their way to colluding validation-clients, the network can also monitor rates of paired (validator + replicator) discrepancies in voting patterns and censor identified colluders in these cases.
Long term economic sustainability is one of the guiding principles of Solana’s economic design. While it is impossible to predict how decentralized economies will develop over time, especially economies with flexible decentralized governances, we can arrange economic components such that, under certain conditions, a sustainable economy may take shape in the long term. In the case of Solana’s network, these components take the form of the remittances and deposits into and out of the reserve ‘mining pool’.
The dominant remittances from the Solana mining pool are validator and replicator rewards. The deposit mechanism is a flat, protocol-specified and adjusted percentage of each transaction fee.
The Replicator rewards are to be delivered to replicators from the mining pool after successful PoRep validation. The per-PoRep reward amount is determined as a function of the total network storage redundancy at the time of the PoRep validation and the network goal redundancy. This function is likely to take the form of a discount from a base reward to be delivered when the network has achieved and maintained its goal redundancy. An example of such a reward function is shown in **Figure 3**
**Figure 3**: Example PoRep reward design as a function of global network storage redundancy.
In the example shown in Figure 3, multiple per-PoRep base rewards are explored (as a % of Tx Fee) to be delivered when the global ledger replication redundancy meets 10X. When the global ledger replication redundancy is less than 10X, the base reward is discounted as a function of the square of the ratio of the actual ledger replication redundancy to the goal redundancy (i.e. 10X).
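A schematic form of that quadratic discount; the notation here is ours, not the document's:

```latex
% R      : actual global ledger replication redundancy
% R_g    : goal redundancy (10x in the example)
% P_base : per-PoRep base reward, as a % of the transaction fee
P(R) = P_{\text{base}} \cdot \min\left(1,\; \left(\frac{R}{R_g}\right)^{2}\right)
```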
The other protocol-based remittance goes to validation-clients as a reward distributed in proportion to stake-weight for voting to validate the ledger state. The functional issuance of this reward is described in [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md) and is designed to reduce over time until validators are incentivized solely through collection of transaction fees. Therefore, in the long-run, protocol-based rewards to replication-nodes will be the only remittances from the mining pool, and will have to be countered by the portion of each non-PoRep transaction fee that is directed back into the mining pool. I.e. for a long-term self-sustaining economy, replicator-client rewards must be subsidized through a minimum fee on each non-PoRep transaction pre-allocated to the mining pool. Through this constraint, we can write the following inequality:
The preceding sections, outlined in the [Economic Design Overview](ed_overview.md), describe a long-term vision of a sustainable Solana economy. Of course, we don't expect the final implementation to perfectly match what has been described above. We intend to fully engage with network stakeholders throughout the implementation phases (i.e. pre-testnet, testnet, mainnet) to ensure the system supports, and is representative of, the various network participants' interests. The first step toward this goal, however, is outlining some desired MVP economic features to be available for early pre-testnet and testnet participants. Below is a rough sketch outlining basic economic functionality from which a more complete and functional system can be developed.
### MVP Economic Features
* Faucet to deliver testnet SOLs to validators for staking and dapp development.
* Mechanism by which validators are rewarded in proportion to their stake. Interest rate mechanism (i.e. to be determined by total % staked) to come later.
* Ability to delegate tokens to validator nodes.
* Replicators to receive fixed, arbitrary reward for submitting validated PoReps. Reward size mechanism (i.e. PoRep reward as a function of total ledger redundancy) to come later.
* Pooling of replicator PoRep transaction fees and weighted distribution to validators based on PoRep verification (see [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md)). It will be useful to test this protection against attacks on testnet.
* Nice-to-have: auto-delegation of replicator rewards to validator.
Solana’s crypto-economic system is designed to promote a healthy, long term self-sustaining economy with participant incentives aligned to the security and decentralization of the network. The main participants in this economy are validation-clients and replication-clients. Their contributions to the network, state validation and data storage respectively, and their requisite remittance mechanisms are discussed below.
The main channels of participant remittances are referred to as protocol-based rewards and transaction fees. Protocol-based rewards are protocol-derived issuances from a network-controlled reserve of tokens (sometimes referred to as the ‘mining pool’). These rewards will constitute the total reward delivered to replication clients and a portion of the total rewards for validation clients, the remaining sourced from transaction fees. In the early days of the network, it is likely that protocol-based rewards, deployed based on a predefined issuance schedule, will drive the majority of participant incentives to join the network.
These protocol-based rewards, to be distributed to participating validation and replication clients, are to be specified as annual interest rates calculated per real-time Solana epoch [DEFINITION]. As discussed further below, the issuance rates are determined as a function of total network validator staked percentage and total replication provided by replicators in each previous epoch. The choice for validator and replicator client rewards to be based on participation rates, rather than a global fixed inflation or interest rate, emphasizes a protocol priority of overall economic security, rather than monetary supply predictability. Due to Solana’s hard total supply cap of 1B tokens and the bounds of client participant rates in the protocol, we believe that global interest, and supply issuance, scenarios should be able to be modeled with reasonable uncertainties.
Transaction fees are market-based participant-to-participant transfers, attached to network interactions as a necessary motivation and compensation for the inclusion and execution of a proposed transaction (be it a state execution or proof-of-replication verification). A mechanism for continuous and long-term funding of the mining pool through a pre-dedicated portion of transaction fees is also discussed below.
A high-level schematic of Solana’s crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics.md), [State-validation Protocol-based Rewards](ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md). Also, the chapter titled [Validation Stake Delegation](ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. The [Replication-client Economics](ed_replication_client_economics.md) chapter will review the Solana network design for global ledger storage/redundancy and replicator-client economics ([Storage-replication rewards](ed_rce_storage_replication_rewards.md)) along with a replicator-to-validator delegation mechanism designed to aid participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_rce_replication_client_reward_auto_delegation.md). The [Economic Sustainability](ed_economic_sustainability.md) section dives deeper into Solana’s design for long-term economic sustainability and outlines the constraints and conditions for a self-sustaining economy. An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section. Finally, in chapter [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
<!--  -->
The ability for Solana network participants to earn rewards by providing storage service is a unique on-boarding path that requires little hardware overhead and minimal upfront capital. It offers an avenue for individuals with extra storage space on their home laptops or PCs to contribute to the security of the network and become integrated into the Solana economy.
To enhance this on-boarding ramp and facilitate further participation and investment in the Solana economy, replication-clients have the opportunity to auto-delegate their rewards to validation-clients of their choice. Much like the automatic reinvestment of stock dividends, in this scenario, a replicator-client can earn Solana tokens by providing some storage capacity to the network (i.e. via submitting valid PoReps), have the protocol-based rewards automatically assigned as delegation to a staked validator node and therefore earn interest in the validation-client reward pool.
Replicator-clients download, encrypt and submit PoReps for ledger block sections. PoReps submitted to the PoH stream, and subsequently validated, function as evidence that the submitting replicator client is indeed storing the assigned ledger block sections on local hard drive space as a service to the network. Therefore, replicator clients should earn protocol rewards proportional to the amount of storage, and the number of successfully validated PoReps, that they are verifiably providing to the network.
Additionally, replicator clients have the opportunity to capture a portion of slashed bounties [TBD] of dishonest validator clients. This can be accomplished by a replicator client submitting a verifiably false PoRep for which a dishonest validator client receives and signs as a valid PoRep. This reward incentive is to prevent lazy validators and minimize validator-replicator collusion attacks, more on this below.
Replication-clients should be rewarded for providing the network with storage space. Incentivization of the set of replicators provides data security through redundancy of the historical ledger. Replication nodes are rewarded in proportion to the amount of ledger data storage provided. These rewards are captured by generating and entering Proofs of Replication (PoReps) into the PoH stream which can be validated by Validation nodes as described above in the [Replication-validation Transaction Fees](ed_vce_replication_validation_transaction_fees.md) chapter.
Validator-clients are eligible to receive protocol-based (i.e. via mining pool) rewards issued via stake-based annual interest rates by providing compute (CPU+GPU) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic schedule as a function of total amount of Solana tokens staked in the system and duration since network launch (genesis block). Additionally, these clients may earn revenue through two types of transaction fees: state-validation transaction fees and pooled Proof-of-Replication (PoRep) transaction fees. The distribution of these two types of transaction fees to the participating validation set is designed independently, as the economic goals and attack vectors are unique between the state-generation/validation mechanism and the ledger replication/validation mechanism. For clarity, we separately describe the design and motivation of the three types of potential revenue streams for validation-clients below: state-validation protocol-based rewards, state-validation transaction fees and PoRep-validation transaction fees.
As previously mentioned, validator-clients will also be responsible for validating PoReps submitted into the PoH stream by replicator-clients. In this case, validators are providing compute (CPU/GPU) and light storage resources to confirm that these replication proofs could only be generated by a client that is storing the referenced PoH ledger block.
While replication-clients are incentivized and rewarded through a protocol-based rewards schedule (see [Replication-client Economics](ed_replication_client_economics.md)), validator-clients will be incentivized to include and validate PoReps in PoH through the distribution of the transaction fees associated with the submitted PoRep. As will be described in detail in Section 3.1, replication-client rewards are protocol-based and designed to reward based on a global data redundancy factor. I.e. the protocol will incentivize replication-client participation through rewards based on a target ledger redundancy (e.g. 10x data redundancy). It was chosen not to include a distribution of these rewards to PoRep validators, and to rely only on the collection of PoRep attached transaction fees, due to the fact that the confluence of two participation incentive modes (state-validation inflation rate via global staked % and replication-validation rewards based on global redundancy factor) on the incentives of a single network participant (a validator-client) potentially opened up a significant incentive-driven attack surface area.
The validation of PoReps by validation-clients is computationally more expensive than state-validation (detailed in the [Economic Sustainability](ed_economic_sustainability.md) chapter), thus the transaction fees are expected to be proportionally higher. However, because replication-client rewards are distributed in proportion to and only after submitted PoReps are validated, replication-clients are uniquely motivated to see their proofs included and validated. This pressure is expected to generate an adequate market economy between replication-clients and validation-clients. Additionally, transaction fees submitted with PoReps have no minimum amount pre-allocated to the mining pool, unlike state-validation transaction fees.
There are various attack vectors available to colluding validation and replication clients, as described in detail in [Economic Sustainability](ed_economic_sustainability.md). To protect against these collusion attack vectors, for a given epoch, PoRep transaction fees are pooled and redistributed across participating validation-clients in proportion to the number of validated PoReps in the epoch less the number of invalidated PoReps [DIAGRAM]. This design rewards validators in proportion to the number of PoReps they process and validate, while discouraging validation-clients from submitting lazy or malicious invalid votes on submitted PoReps (note that it is computationally prohibitive to determine whether a validator-client has marked a valid PoRep as invalid).
Validator-clients have two functional roles in the Solana network:
* Validate (vote) the current global state of that PoH along with any Proofs-of-Replication (see [Replication Client Economics](ed_replication_client_economics.md)) that they are eligible to validate
* Be elected as ‘leader’ on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and Proofs-of-Replication and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. Compensation for validator-clients is provided via a protocol-based annual interest rate disbursed in proportion to the stake-weight of each validator (see below) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each non-PoRep transaction fee, less a protocol-specified amount that is returned to the mining pool (see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)). PoRep transaction fees are not collected directly by the leader client but are pooled and returned to the validator set in proportion to the number of successfully validated PoReps (see [Replication-client Transaction Fees](ed_vce_replication_validation_transaction_fees.md)).
The protocol-based annual interest-rate (%) per epoch to be distributed to validation-clients is to be a function of:
* the current fraction of staked SOL out of the current total circulating supply,
* the global time since the genesis block instantiation, and
* the up-time/participation [% of available slots/blocks that the validator had opportunity to vote on?] of a given validator over the previous epoch.
The first two factors are protocol parameters only (i.e. independent of validator behavior in a given epoch) and describe a global validation reward schedule designed to incentivize both early participation and optimal security in the network. This schedule sets a maximum annual validator-client interest rate per epoch.
At any given point in time, this interest rate is pegged to a defined value given a specific % staked SOL out of the circulating supply (e.g. 10% interest rate when 66% of circulating SOL is staked). The interest rate adjusts as the square-root [TBD] of the % staked, leading to higher validation-client interest rates as the % staked drops below the targeted goal, thus incentivizing more participation leading to more security in the network. An example of such a schedule, for a specified point in time (e.g. network launch) is shown in **Table 1**.
**Table 1:** Example interest rate schedule based on % SOL staked out of circulating supply. In this case, interest rates are fixed at 10% for 66% of staked circulating supply
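The exact rate adjustment is marked TBD above, so the following is purely illustrative: a minimal Rust sketch that pegs a 10% rate at 66% staked (the example from the text) and scales inversely with the square root of the staked fraction.

```rust,ignore
// Illustrative only: the peg values come from the example above and the
// square-root shape is the TBD candidate named in the text.
fn validator_interest_rate(fraction_staked: f64) -> f64 {
    let (peg_rate, peg_fraction) = (0.10, 0.66);
    // Rate rises as the staked fraction falls below the 66% target:
    // e.g. 33% staked yields sqrt(2) * 10% ~= 14.1%.
    peg_rate * (peg_fraction / fraction_staked).sqrt()
}
```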
Over time, the interest rate, at any network staked percentage, will drop as described by an algorithmic schedule. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. This mining-pool-provided interest rate will reduce over time until a network-chosen baseline value is reached. This is a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients, as transaction fees for both state-validation and ledger storage replication (PoReps) are not accounted for here. A validation-client interest rate schedule as a function of % network staked and time is shown in **Figure 2**.
**Figure 2:** In this example schedule, the annual interest rate [%] reduces at around 16.7% per year, until it reaches the long-term, fixed, 4% rate.
This epoch-specific protocol-defined interest rate sets an upper limit on the *protocol-generated* annual interest rate (not the absolute total interest rate) possible to be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch. Each epoch is comprised of XXX slots. The protocol-defined interest rate is then discounted by the log [TBD] of the % of slots a given validator submitted a vote on a PoH branch during that epoch, see **Figure XX**.
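Since the log discount is likewise marked TBD, the following sketch shows only one plausible reading, in which full participation earns the full protocol rate and the discount grows on a log10 scale as participation drops; every name and the log base here are assumptions.

```rust,ignore
// Hypothetical participation discount: ln-style readings are equally
// possible; under this one, 10% participation or less earns nothing.
fn epoch_interest_rate(max_rate: f64, slots_voted: u64, slots_in_epoch: u64) -> f64 {
    let participation = slots_voted as f64 / slots_in_epoch as f64;
    let discount = (1.0 + participation.log10()).max(0.0);
    max_rate * discount
}
```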
Each message sent through the network, to be processed by the current leader validation-client and confirmed as a global state transaction, must contain a transaction fee. Transaction fees offer many benefits in the Solana economic design, for example they:
* provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
* reduce network spam by introducing real cost to transactions,
* open avenues for a transaction market to incentivize validation-clients to collect and process submitted transactions in their function as leader,
* and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
Many current blockchain economies (e.g. Bitcoin, Ethereum) rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol-derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is sent to the mining pool, with the remaining fee going to the current leader processing the transaction. These pooled fees then re-enter the system through rewards distributed to validation-clients, through the process described above, and replication-clients, as discussed below.
The intent of this design is to retain leader incentive to include as many transactions as possible within the leader-slot time, while providing a redistribution avenue that protects against "tax evasion" attacks (i.e. side-channel fee payments)<sup>[1](ed_referenced.md)</sup>. Constraints on the fixed portion of transaction fees going to the mining pool, to establish long-term economic sustainability, are established and discussed in detail in the [Economic Sustainability](ed_economic_sustainability.md) section.
This minimum, protocol-earmarked, portion of each transaction fee can be dynamically adjusted depending on historical gas usage. In this way, the protocol can use the minimum fee to target a desired hardware utilization. By monitoring protocol-specified gas usage with respect to a desired, target usage amount (e.g. 50% of a block's capacity), the minimum fee can be raised/lowered, which should, in turn, lower/raise the actual gas usage per block until it reaches the target amount. This adjustment process can be thought of as similar to the difficulty adjustment algorithm in the Bitcoin protocol, however in this case it is adjusting the minimum transaction fee to guide the transaction processing hardware usage to a desired level.
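A minimal sketch of such an adjustment loop follows; the unit step size is an assumption, while the 50%-of-capacity target echoes the example above.

```rust,ignore
// Sketch only: step size and the floor of 1 are assumptions, not protocol
// values. Intended to be called once per block.
fn adjust_min_fee(min_fee: u64, observed_usage: f64, target_usage: f64) -> u64 {
    if observed_usage > target_usage {
        // Usage above target: raise the minimum fee to damp demand.
        min_fee.saturating_add(1)
    } else {
        // Usage below target: lower the minimum fee, but keep it nonzero.
        min_fee.saturating_sub(1).max(1)
    }
}
```

For example, with `target_usage = 0.5` the fee drifts upward whenever blocks run more than half full, and downward otherwise.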
Additionally, the minimum protocol-captured fee can be a consideration in fork selection. In the case of a PoH fork with a malicious, censoring leader, we would expect the total protocol-captured fee to be less than that of a comparable honest fork, due to the fees lost from censoring. If the censoring leader is to compensate for these lost protocol fees, they would have to replace the fees on their fork themselves, thus potentially reducing the incentive to censor in the first place.
You can observe the effects of your client's transactions on our [dashboard](https://metrics.solana.com:3000/d/testnet/testnet-hud?orgId=2&from=now-30m&to=now&refresh=5s&var-testnet=testnet)
Solana nodes accept HTTP requests using the [JSON-RPC 2.0](https://www.jsonrpc.org/specification) specification.
To interact with a Solana node inside a JavaScript application, use the [solana-web3.js](https://github.com/solana-labs/solana-web3.js) library, which gives a convenient interface for the RPC methods.
The RepairService is in charge of retrieving missing blobs that failed to be delivered by primary communication protocols like Avalanche. It manages the protocols described in the `Repair Protocols` section below.
# Challenges:
1) Validators can fail to receive particular blobs due to network failures
2) Consider a scenario where blocktree contains the set of slots {1, 3, 5}. Then Blocktree receives blobs for some slot 7, where for each of the blobs b, b.parent == 6, so then the parent-child relation 6 -> 7 is stored in blocktree. However, there is no way to chain these slots to any of the existing banks in Blocktree, and thus the `Blob Repair` protocol will not repair these slots. If these slots happen to be part of the main chain, this will halt replay progress on this node.
3) Validators that find themselves behind the cluster by an entire epoch struggle/fail to catch up because they do not have a leader schedule for future epochs. If nodes were to blindly accept repair blobs in these future epochs, this exposes nodes to spam.
# Repair Protocols
The repair protocol makes best attempts to progress the forking structure of Blocktree.
The different protocol strategies to address the above challenges:
1. Blob Repair (Addresses Challenge #1):
This is the most basic repair protocol, with the purpose of detecting and filling "holes" in the ledger. Blocktree tracks the latest root slot. RepairService will then periodically iterate every fork in blocktree starting from the root slot, sending repair requests to validators for any missing blobs. It will send at most some `N` repair requests per iteration; a sketch of one iteration follows the note below.
Note: Validators will only accept blobs within the current verifiable epoch (epoch the validator has a leader schedule for).
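```rust,ignore
// Sketch of one Blob Repair iteration. `last_root`, `forks_from`, and
// `missing_blob_indexes` are illustrative names for Blocktree accessors,
// not the actual API, and the value of `N` is assumed.
const MAX_REPAIRS_PER_ITERATION: usize = 32; // the `N` above; value assumed

fn generate_repairs(blocktree: &Blocktree) -> Vec<RepairRequest> {
    let mut repairs = Vec::new();
    // Walk every fork starting from the latest root slot tracked by Blocktree.
    for slot in blocktree.forks_from(blocktree.last_root()) {
        // Request each blob index the slot is known to be missing.
        for index in blocktree.missing_blob_indexes(slot) {
            repairs.push(RepairRequest::Blob { slot, index });
            if repairs.len() >= MAX_REPAIRS_PER_ITERATION {
                return repairs;
            }
        }
    }
    repairs
}
```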
2. Orphan Repair (Addresses Challenge #2):
The goal of this protocol is to discover the chaining relationship of "orphan" slots that do not currently chain to any known fork.
* Blocktree will track the set of "orphan" slots in a separate column family.
* RepairService will periodically make `RequestOrphan` requests for each of the orphans in blocktree.
`RequestOrphan(orphan)` request - `orphan` is the orphan slot that the requestor wants to know the parents of
`RequestOrphan(orphan)` response - The highest blobs for each of the first `N` parents of the requested `orphan`
On receiving the responses `p`, where `p` is some blob in a parent slot, validators will:
* Insert an empty `SlotMeta` in blocktree for `p.slot` if it doesn't already exist.
* If `p.slot` does exist, update the parent of `p` based on `parents`
Note: once these empty slots are added to blocktree, the `Blob Repair` protocol should attempt to fill those slots.
Note: Validators will only accept responses containing blobs within the current verifiable epoch (epoch the validator has a leader schedule for).
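Put together, handling a single response blob `p` might look like the following sketch; the method names are illustrative, not the actual Blocktree API.

```rust,ignore
fn handle_orphan_response(blocktree: &mut Blocktree, p: &Blob) {
    // Insert an empty SlotMeta for p.slot if it doesn't already exist, so
    // that the Blob Repair protocol will later attempt to fill the slot.
    if !blocktree.has_slot_meta(p.slot()) {
        blocktree.insert_empty_slot_meta(p.slot());
    }
    // Record the parent-child relation carried by the blob.
    blocktree.set_parent(p.slot(), p.parent());
}
```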
3. Repairmen (Addresses Challenge #3):
This part of the repair protocol is the primary mechanism by which new nodes joining the cluster catch up after loading a snapshot. This protocol works in a "forward" fashion, so validators can verify every blob that they receive against a known leader schedule.
Each validator advertises in gossip:
* Current root
* The set of all completed slots in the confirmed epochs (an epoch that was calculated based on a bank <= current root) past the current root
Observers of this gossip message with higher epochs (repairmen) send blobs to catch the lagging node up with the rest of the cluster. The repairmen are responsible for sending the slots within the epochs that are confirmed by the advertised `root` in gossip. The repairmen divide the responsibility of sending each of the missing slots in these epochs based on a random seed (simple blob.index iteration by N, seeded with the repairman's node_pubkey); a sketch of this partitioning appears below. Ideally, each repairman in an N node cluster (N nodes whose epochs are higher than that of the repairee) sends 1/N of the missing blobs. Both data and coding blobs for missing slots are sent. Repairmen do not send blobs again to the same validator until they see the message in gossip updated, at which point they perform another iteration of this protocol.
Gossip messages are updated every time a validator receives a complete slot within the epoch. Completed slots are detected by blocktree and sent over a channel to RepairService. It is important to note that we know that by the time a slot X is complete, the epoch schedule must exist for the epoch that contains slot X because WindowService will reject blobs for unconfirmed epochs. When a newly completed slot is detected, we also update the current root if it has changed since the last update. The root is made available to RepairService through Blocktree, which holds the latest root.
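A sketch of the 1/N partitioning follows. Only the every-Nth `blob.index` iteration keyed off the repairman's pubkey comes from the text; `seed_from_pubkey` is a hypothetical helper.

```rust,ignore
// Each repairman takes every Nth missing blob index, offset by a value
// derived from its node_pubkey, so N repairmen cover the set with little
// overlap and, ideally, each sends 1/N of the missing blobs.
fn my_share_of_missing_blobs(
    missing: &[u64],
    node_pubkey: &Pubkey,
    n_repairmen: u64,
) -> Vec<u64> {
    let offset = seed_from_pubkey(node_pubkey) % n_repairmen;
    missing
        .iter()
        .cloned()
        .filter(|index| index % n_repairmen == offset)
        .collect()
}
```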
Stakers are rewarded for helping to validate the ledger. They do this by
delegating their stake to validator nodes. Those validators do the legwork of
replaying the ledger and send votes to a per-node vote account to which stakers
can delegate their stakes. The rest of the cluster uses those stake-weighted
votes to select a block when forks arise. Both the validator and staker need
some economic incentive to play their part. The validator needs to be
compensated for its hardware and the staker needs to be compensated for the risk
of getting its stake slashed. The economics are covered in [staking
rewards](staking-rewards.md). This chapter, on the other hand, describes the
underlying mechanics of its implementation.
## Basic Design
The general idea is that the validator owns a Vote account. The Vote account
tracks validator votes, counts validator generated credits, and provides any
additional validator specific state. The Vote account is not aware of any
stakes delegated to it and has no staking weight.
A separate Stake account (created by a staker) names a Vote account to which the
stake is delegated. Rewards generated are proportional to the amount of
lamports staked. The Stake account is owned by the staker only. Some portion of the lamports
stored in this account are the stake.
## Passive Delegation
Any number of Stake accounts can delegate to a single
Vote account without an interactive action from the identity controlling
the Vote account or submitting votes to the account.
The total stake allocated to a Vote account can be calculated by the sum of
all the Stake accounts that have the Vote account pubkey as the
`StakeState::Stake::voter_pubkey`.
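As a minimal sketch of that sum (assuming an iterable view of all Stake accounts; the enum shape follows the StakeState forms described below):

```rust,ignore
fn total_stake(stakes: &[StakeState], vote_pubkey: &Pubkey) -> u64 {
    stakes
        .iter()
        .filter_map(|state| match state {
            // Only Stake-form accounts delegated to this Vote account count.
            StakeState::Stake { voter_pubkey, stake, .. } if voter_pubkey == vote_pubkey => {
                Some(*stake)
            }
            _ => None,
        })
        .sum()
}
```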
## Vote and Stake accounts
The rewards process is split into two on-chain programs. The Vote program solves
the problem of making stakes slashable. The Stake program acts as custodian of
the rewards pool and provides passive delegation. The Stake program is
responsible for paying out each staker once the staker proves to the Stake
program that its delegate has participated in validating the ledger.
### VoteState
VoteState is the current state of all the votes the validator has submitted to
the network. VoteState contains the following state information:
* `votes` - The submitted votes data structure.
* `credits` - The total number of rewards this vote program has generated over its lifetime.
* `root_slot` - The last slot to reach the full lockout commitment necessary for rewards.
* `commission` - The commission taken by this VoteState for any rewards claimed by a staker's Stake accounts. This is the percentage ceiling of the reward.
* Account::lamports - The accumulated lamports from the commission. These do not count as stakes.
* `authorized_vote_signer` - Only this identity is authorized to submit votes. This field can only be modified by this identity.
### VoteInstruction::Initialize
* `account[0]` - RW - The VoteState
`VoteState::authorized_vote_signer` is initialized to `account[0]`; other VoteState members are defaulted.
### VoteInstruction::AuthorizeVoteSigner(Pubkey)
* `account[0]` - RW - The VoteState
`VoteState::authorized_vote_signer` is set to `Pubkey`; the transaction must be
signed by the Vote account's current `authorized_vote_signer`. <br>
`VoteInstruction::AuthorizeVoteSigner` allows a staker to choose a signing service
for its votes. That service is responsible for ensuring the vote won't cause
the staker to be slashed.
### VoteInstruction::Vote(Vec<Vote>)
* `account[0]` - RW - The VoteState
`VoteState::lockouts` and `VoteState::credits` are updated according to the voting lockout rules; see [Tower BFT](tower-bft.md).
* `account[1]` - RO - A list of some N most recent slots and their hashes for the vote to be verified against.
### StakeState
A StakeState takes one of three forms: StakeState::Uninitialized, StakeState::Stake, and StakeState::RewardsPool.
### StakeState::Stake
StakeState::Stake is the current delegation preference of the **staker** and
contains the following state information:
* Account::lamports - The lamports available for staking.
* `stake` - the staked amount (subject to warm up and cool down) for generating rewards, always less than or equal to Account::lamports
* `voter_pubkey` - The pubkey of the VoteState instance the lamports are delegated to.
* `credits_observed` - The total credits claimed over the lifetime of the program.
* `activated` - the epoch at which this stake was activated/delegated. The full stake will be counted after warm up.
* `deactivated` - the epoch at which this stake will be completely de-activated, which is `cool down` epochs after StakeInstruction::Deactivate is issued.
### StakeState::RewardsPool
To avoid a single network wide lock or contention in redemption, 256 RewardsPools are part of genesis under pre-determined keys, each with std::u64::MAX credits to be able to satisfy redemptions according to point value.
The Stakes and the RewardsPool are accounts that are owned by the same `Stake` program.
### StakeInstruction::DelegateStake(u64)
The Stake account is moved from Uninitialized to StakeState::Stake form. This is
how stakers choose their initial delegate validator node and activate their
stake account lamports.
* `account[0]` - RW - The StakeState::Stake instance. <br>
`StakeState::Stake::credits_observed` is initialized to `VoteState::credits`,<br>
`StakeState::Stake::voter_pubkey` is initialized to `account[1]`,<br>
`StakeState::Stake::stake` is initialized to the u64 passed as an argument above,<br>
`StakeState::Stake::activated` is initialized to the current Bank epoch, and<br>
`StakeState::Stake::deactivated` is initialized to std::u64::MAX
* `account[1]` - R - The VoteState instance.
* `account[2]` - R - syscall::current account, carries information about the current Bank epoch.
### StakeInstruction::RedeemVoteCredits
The staker or the owner of the Stake account sends a transaction with this
instruction to claim rewards.
The Vote account and the Stake account pair maintain a lifetime counter of total
rewards generated and claimed. Rewards are paid according to a point value
supplied by the Bank from inflation. A `point` is one credit * one staked
lamport; rewards paid are proportional to the number of lamports staked.
* `account[0]` - RW - The StakeState::Stake instance that is redeeming rewards.
* `account[1]` - R - The VoteState instance, must be the same as `StakeState::voter_pubkey`.
* `account[2]` - RW - The StakeState::RewardsPool instance that will fulfill the request (picked at random).
* `account[3]` - R - syscall::rewards account from the Bank that carries the point value.
The reward is paid out for the difference between `VoteState::credits` and
`StakeState::Stake::credits_observed`, multiplied by `syscall::rewards::Rewards::validator_point_value`.
`StakeState::Stake::credits_observed` is updated to `VoteState::credits`. The commission is deposited into the Vote account token
balance, and the reward is deposited into the Stake account token balance.
```rust,ignore
let credits_to_claim = vote_state.credits - stake_state.credits_observed;
stake_state.credits_observed = vote_state.credits;
```
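Expanding that snippet into the full redemption flow described above gives the sketch below; the point and commission arithmetic follows the prose, but the field layouts and integer math are assumptions.

```rust,ignore
// A point is one credit * one staked lamport.
let credits_to_claim = vote_state.credits - stake_state.credits_observed;
let points = credits_to_claim * stake_state.stake;
// validator_point_value comes from the syscall::rewards account.
let rewards = (points as f64 * rewards_account.validator_point_value) as u64;
// The commission is a percentage ceiling of the reward (assumed 0-100 here).
let commission = rewards * vote_state.commission as u64 / 100;
vote_account.lamports += commission;            // commission to the Vote account
stake_account.lamports += rewards - commission; // remainder to the Stake account
rewards_pool.lamports -= rewards;               // funded by the RewardsPool
stake_state.credits_observed = vote_state.credits;
```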
A Proof of Stake (PoS) design (i.e. using the in-protocol asset, SOL, to provide
secure consensus) is outlined here. Solana implements a proof of
stake reward/security scheme for validator nodes in the cluster. The purpose is
threefold:
- Align validator incentives with that of the greater cluster through
skin-in-the-game deposits at risk
- Avoid 'nothing at stake' fork voting issues by implementing slashing rules
aimed at promoting fork convergence
- Provide an avenue for validator rewards as a function of validator
participation in the cluster.
While many of the details of the specific implementation are currently under
consideration and are expected to come into focus through specific modeling
studies and parameter exploration on the Solana testnet, we outline here our
current thinking on the main components of the PoS system. Much of this
thinking is based on the current status of Casper FFG, with optimizations and
specific attributes to be modified as is allowed by Solana's Proof of History
(PoH) blockchain data structure.
### General Overview
Solana's ledger validation design is based on a rotating, stake-weighted leader broadcasting transactions in a PoH data
structure to validating nodes. These nodes, upon receiving the leader's
broadcast, have the opportunity to vote on the current state and PoH height by
signing a transaction into the PoH stream.
To become a Solana validator, a fullnode must deposit/lock-up some amount
of SOL in a contract. This SOL will not be accessible for a specific time
period. The precise duration of the staking lockup period has not been
determined. However, we can consider three phases of this time for which
specific parameters will be necessary:
- *Warm-up period*: in which SOL is deposited and inaccessible to the node,
however PoH transaction validation has not begun. Most likely on the order of
days to weeks.
- *Validation period*: a minimum duration for which the deposited SOL will be
inaccessible, at risk of slashing (see slashing rules below) and earning
rewards for validator participation. Likely a duration of months to a
year.
- *Cool-down period*: a duration of time following the submission of a
'withdrawal' transaction. During this period validation responsibilities have
been removed and the funds continue to be inaccessible. Accumulated rewards
should be delivered at the end of this period, along with the return of the
initial deposit.
Solana's trustless sense of time and ordering provided by its PoH data
structure, along with its
[turbine](https://www.youtube.com/watch?v=qt_gDRXHrHQ&t=1s) data broadcast
and transmission design, should provide sub-second transaction confirmation times that scale
with the log of the number of nodes in the cluster. This means we shouldn't
have to restrict the number of validating nodes with a prohibitive 'minimum
deposit' and can expect nodes to be able to become validators with nominal amounts
of SOL staked. At the same time, Solana's focus on high throughput should create an incentive for validation clients to provide high-performance and reliable hardware. Combined with a potential minimum network speed threshold to join as a validation-client, we expect a healthy validation delegation market to emerge. To this end, Solana's testnet will lead into a "Tour de SOL" validation-client competition, focusing on throughput and uptime to rank and reward testnet validators.
### Slashing rules
Unlike Proof of Work (PoW) where off-chain capital expenses are already
deployed at the time of block construction/voting, PoS systems require
capital-at-risk to prevent a logical/optimal strategy of multiple chain voting.
We intend to implement slashing rules which, if broken, result in some amount of
the offending validator's deposited stake being removed from circulation. Given
the ordering properties of the PoH data structure, we believe we can simplify
our slashing rules to the level of a voting lockout time assigned per vote.
I.e. each vote has an associated lockout time (PoH duration) that represents a
duration within which any additional vote from that validator must be in a PoH that
contains the original vote, or a portion of that validator's stake is
slashable. This duration is a function of the initial vote PoH count and
all additional vote PoH counts. It will likely take the form:
Lockout<sub>i</sub>(PoH<sub>i</sub>, PoH<sub>j</sub>) = PoH<sub>j</sub> + K *
exp((PoH<sub>j</sub> - PoH<sub>i</sub>) / K)
Where PoH<sub>i</sub> is the height of the vote that the lockout is to be
applied to and PoH<sub>j</sub> is the height of the current vote on the same
fork. If the validator submits a vote on a different PoH fork on any
PoH<sub>k</sub> where k > j > i and PoH<sub>k</sub> < Lockout(PoH<sub>i</sub>,
PoH<sub>j</sub>), then a portion of that validator's stake is at risk of being
slashed.
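Transcribed into code (the constant K and the PoH-count units are still open in the text, so this is only illustrative):

```rust,ignore
// Lockout_i(PoH_i, PoH_j) = PoH_j + K * exp((PoH_j - PoH_i) / K)
fn lockout(poh_i: f64, poh_j: f64, k: f64) -> f64 {
    poh_j + k * ((poh_j - poh_i) / k).exp()
}

// A vote on a different fork at height poh_k, with k > j > i, risks slashing
// when poh_k < lockout(poh_i, poh_j, K).
```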
In addition to the functional form lockout described above, an early
implementation may use a numerical approximation based on a First In, First Out
(FIFO) data structure and the following logic (a sketch in code follows the list):
- FIFO queue holding 32 votes per active validator
- new votes are pushed on top of queue (`push_front`)
- expired votes are popped off top (`pop_front`)
- as votes are pushed into the queue, the lockout of each queued vote doubles
- votes are removed from back of queue if `queue.len() > 32`
- the earliest and latest height that has been removed from the back of the
queue should be stored
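A sketch of that FIFO logic, as a plain Rust structure rather than the actual Tower BFT code (the queue depth and doubling behavior come from the list above; everything else is illustrative):

```rust,ignore
use std::collections::VecDeque;

const MAX_VOTES: usize = 32;

struct Vote {
    slot: u64,
    lockout: u64, // doubles as later votes stack on top
}

struct LockoutQueue {
    votes: VecDeque<Vote>, // front is the "top" of the queue
}

impl LockoutQueue {
    fn record_vote(&mut self, slot: u64) {
        // Expired votes are popped off the top: a queued vote expires when
        // the new vote lands beyond its lockout.
        while let Some(v) = self.votes.front() {
            if slot > v.slot + v.lockout {
                self.votes.pop_front();
            } else {
                break;
            }
        }
        // As the new vote is pushed, the lockout of each queued vote doubles.
        for v in self.votes.iter_mut() {
            v.lockout *= 2;
        }
        self.votes.push_front(Vote { slot, lockout: 1 });
        // Votes are removed from the back if the queue exceeds 32 entries;
        // per the list above, the removed heights would also be recorded.
        while self.votes.len() > MAX_VOTES {
            self.votes.pop_back();
        }
    }
}
```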
It is likely that a reward will be offered as a % of the slashed amount to any
node that submits proof of this slashing condition being violated to the PoH.
#### Partial Slashing
In the schema described so far, when a validator votes on a given PoH stream,
they are committing themselves to that fork for a time determined by the vote
lockout. An open question is whether validators will be hesitant to begin
voting on an available fork if the penalties are perceived too harsh for an
honest mistake or flipped bit.
One way to address this concern would be a partial slashing design that results
in a slashable amount as a function of either:
1. the fraction of validators, out of the total validator pool, that were also
slashed during the same time period (ala Casper)
2. the amount of time since the vote was cast (e.g. a linearly increasing % of
total deposited as slashable amount over time), or both.
This is an area currently under exploration.
### Penalties
As discussed in the [Economic Design](ed_overview.md) section, annual validator interest rates are to be specified as a
function of total percentage of circulating supply that has been staked. The cluster rewards validators who are online
and actively participating in the validation process throughout the entirety of
their *validation period*. For validators that go offline/fail to validate
transactions during this period, their annual reward is effectively reduced.
Similarly, we may consider an algorithmic reduction in a validator's active
staked amount in the case that they are offline. I.e. if a validator is
inactive for some amount of time, either due to a partition or otherwise, the
amount of their stake that is considered ‘active’ (eligible to earn rewards)
may be reduced. This design would be structured to help long-lived partitions
to eventually reach finality on their respective chains as the % of non-voting
total stake is reduced over time until a super-majority can be achieved by the
active validators in each partition. Similarly, upon re-engaging, the ‘active’
amount staked will come back online at some defined rate. Different rates of
stake reduction may be considered depending on the size of the partition/active set.
Since the testnet is not intended for stress testing of max transaction
throughput, a higher-end machine with a GPU is not necessary to participate.
However, ensure the machine used is not behind a residential NAT to avoid NAT
traversal issues. A cloud-hosted machine works best. **Ensure that IP ports
8000 through 10000 are not blocked for Internet inbound and outbound traffic.**
Prebuilt binaries are available for Linux x86_64 (Ubuntu 18.04 recommended).
MacOS or WSL users may build from source.
For a performance testnet with many transactions, we have some preliminary recommended setups:
| | Low end | Medium end | High end | Notes |
| --- | ---------|------------|----------| -- |
| CPU | AMD Threadripper 1900x | AMD Threadripper 2920x | AMD Threadripper 2950x | Consider a 10Gb-capable motherboard with as many PCIe lanes and m.2 slots as possible. |
| RAM | 16GB | 32GB | 64GB | |
| OS Drive | Samsung 860 Evo 2TB | Samsung 860 Evo 4TB | Samsung 860 Evo 4TB | Or equivalent SSD |
| Accounts Drive(s) | None | Samsung 970 Pro 1TB | 2x Samsung 970 Pro 1TB | |
| GPU | 4x Nvidia 1070 or 2x Nvidia 1080 Ti or 2x Nvidia 2070 | 2x Nvidia 2080 Ti | 4x Nvidia 2080 Ti | Any number of cuda-capable GPUs are supported on Linux platforms. |
#### GPU Requirements
CUDA is required to make use of the GPU on your system. The provided Solana
release binaries are built on Ubuntu 18.04 with the CUDA Toolkit.
Transactions currently include a fee field that indicates the maximum fee
a slot leader is permitted to charge to process a transaction. The cluster, on
the other hand, agrees on a minimum fee. If the network is congested, the slot
leader may prioritize the transactions offering higher fees. That means the
client won't know how much was collected until the transaction is confirmed by
the cluster and the remaining balance is checked. It smells of exactly what we
dislike about Ethereum's "gas", non-determinism.
### Congestion-driven fees
Each validator uses *signatures per slot* (SPS) to estimate network congestion
and *SPS target* to estimate the desired processing capacity of the cluster.
The validator learns the SPS target from the genesis block, whereas it
calculates SPS from recently processed transactions. The genesis block also
defines a target `lamports_per_signature`, which is the fee to charge per
signature when the cluster is operating at *SPS target*.
### Calculating fees
The client uses the JSON RPC API to query the cluster for the current fee
parameters. Those parameters are tagged with a blockhash and remain valid
until that blockhash is old enough to be rejected by the slot leader.
Before sending a transaction to the cluster, a client may submit the
transaction and fee account data to an SDK module called the *fee calculator*.
So long as the client's SDK version matches the slot leader's version, the
client is assured that its account will be changed exactly the same number of
lamports as returned by the fee calculator.
### Fee Parameters
In the first implementation of this design, the only fee parameter is
`lamports_per_signature`. The more signatures the cluster needs to verify, the
higher the fee. The exact number of lamports is determined by the ratio of SPS
to the SPS target. At the end of each slot, the cluster lowers
`lamports_per_signature` when SPS is below the target and raises it when above
the target. The minimum value for `lamports_per_signature` is 50% of the target
`lamports_per_signature` and the maximum value is 10x the target
`lamports_per_signature`. A sketch of this adjustment follows.
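The step size in the sketch below is an assumption; the 50% floor and 10x ceiling are the bounds stated above.

```rust,ignore
fn adjust_lamports_per_signature(current: u64, target: u64, sps: f64, sps_target: f64) -> u64 {
    let step = (target / 20).max(1); // adjustment step is an assumption
    let proposed = if sps > sps_target {
        current.saturating_add(step) // SPS above target: raise the fee
    } else {
        current.saturating_sub(step) // SPS below target: lower the fee
    };
    // Clamp to [50% of target, 10x target] per the bounds above.
    proposed.max(target / 2).min(target * 10)
}
```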
Future parameters might include:
* `lamports_per_pubkey` - cost to load an account
* `lamports_per_slot_distance` - higher cost to load very old accounts
* `lamports_per_byte` - cost per size of account loaded
* `lamports_per_bpf_instruction` - cost to run a program
### Attacks
#### Hijacking the SPS Target
A group of validators can centralize the cluster if they can convince it to
raise the SPS Target above a point where the rest of the validators can keep
up. Raising the target will cause fees to drop, presumably creating more demand
and therefore higher TPS. If the validator doesn't have hardware that can
process that many transactions that fast, its confirmation votes will
eventually lag so far behind that the cluster will be forced to boot it.