* Drop the recommendation that `--expected-shred-version` be set by validators
`--expected-shred-version` is another knob for users to get wrong, and is
documentation that can go stale due to cluster restarts. It turns out
it's also generally no longer required because:
1. The cluster entrypoint can always be expected to be using the correct
shred version, and that shred version will be adopted by the new node
(earlier this was not the case when the `solana-gossip spy` node on
mainnet-beta.solana.com:8001 ran with shred version 0)
2. On a cluster restart, `--expected-bank-hash` is a much stronger
assertion that the validator is starting from the correct place (and
didn't exist when `--expected-shred-version` was first recommended)
(cherry picked from commit 4ada4d43f2)
# Conflicts:
# docs/src/clusters.md
# docs/src/running-validator/restart-cluster.md
* Update clusters.md
Co-authored-by: Michael Vines <mvines@gmail.com>
* getMinimumBalanceForRentExemption now only responds to valid account lengths
(cherry picked from commit 9e96180ce4)
# Conflicts:
# core/src/rpc.rs
* Update rpc.rs
Co-authored-by: Michael Vines <mvines@gmail.com>
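For reference, a minimal sketch of exercising the endpoint from the Rust client, assuming the long-standing `RpcClient::get_minimum_balance_for_rent_exemption` helper (the URL and data length are placeholders); after this change an invalid length should surface as an RPC error rather than a number:
```rust
use solana_client::rpc_client::RpcClient;

fn main() {
    let client = RpcClient::new("http://localhost:8899".to_string());
    match client.get_minimum_balance_for_rent_exemption(128) {
        Ok(lamports) => println!("rent-exempt minimum: {} lamports", lamports),
        // Invalid (e.g. oversized) account lengths are now rejected.
        Err(err) => eprintln!("rejected: {}", err),
    }
}
```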
* Compress snapshot archive within the validator to reduce system dependencies
(cherry picked from commit d3750b47d2)
* Default snapshot compression to zstd instead of bzip2 for quicker snapshot generation
(cherry picked from commit 9ade73841f)
Co-authored-by: Michael Vines <mvines@gmail.com>
* Rpc: add getMultipleAccounts endpoint (#12005)
* Add rpc endpoint to return the state of multiple accounts from the same bank
* Add docs
* Review comments: Dedupe account code, default to base64, add max const
* Add get_multiple_accounts to rpc-client
(cherry picked from commit b22de369b7)
# Conflicts:
# core/src/rpc.rs
* Fix conflicts
* Use new_response for consistency
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
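A hedged sketch of the new endpoint from the client side, assuming the `get_multiple_accounts` helper added here returns one `Option<Account>` per requested key (URL and keys are placeholders):
```rust
use solana_client::rpc_client::RpcClient;
use solana_sdk::pubkey::Pubkey;
use std::str::FromStr;

fn main() {
    let client = RpcClient::new("http://localhost:8899".to_string());
    let keys = vec![
        Pubkey::from_str("Vote111111111111111111111111111111111111111").unwrap(),
        Pubkey::from_str("Stake11111111111111111111111111111111111111").unwrap(),
    ];
    // All accounts are read from the same bank; missing ones come back as None.
    match client.get_multiple_accounts(&keys) {
        Ok(accounts) => {
            for (key, account) in keys.iter().zip(accounts) {
                println!("{}: {:?}", key, account.map(|a| a.lamports));
            }
        }
        Err(err) => eprintln!("request failed: {}", err),
    }
}
```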
* Add (hidden) --use-deprecated-loader flag to `solana deploy`
(cherry picked from commit de736e00ad)
# Conflicts:
# cli/src/cli.rs
# cli/tests/deploy.rs
* resolve conflicts
Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Jack May <jack@solana.com>
* Align host addresses
* support new program abi
* update epoch rollout
* Enforce aligned pointers in cross-program invocations
(cherry picked from commit 9290e561e1)
# Conflicts:
# programs/bpf_loader/src/lib.rs
Co-authored-by: Jack May <jack@solana.com>
* Sync FD limit and max maps to 500k
(cherry picked from commit 11951eb009)
* Expand system tuning docs
(cherry picked from commit 5354df8c1c)
* clippy
Co-authored-by: Trent Nelson <trent@solana.com>
* sdk: Make PubKey::create_program_address available in program unit tests (#11745)
* sdk: Make PubKey::create_program_address available in program unit tests
This finishes the work started in #11604 to have
`create_program_address` available when `target_arch` is not `bpf` and
`program` is enabled. Otherwise, there is an undefined reference error
to `sol_create_program_address`, which is only defined in `bpf`.
A small test to simply call the function has been added in order to catch
the problem in the future.
The default dependency on `solana-sdk/default` doesn't cause a problem with
existing programs since `build.sh` always specifies
`--no-default-features`, and programs in `solana-program-library` all
use it too.
* Add `default-features = false` for inter-program dependencies
Fix the build error found during CI. The `--no-default-features` flag
only applies to the top-level package, and not to dependencies. A program that
depends on another program, i.e. `128bit` which depends on `128bit_dep`,
must specify `default-features = false` when including that package,
otherwise the `bpf` build will try to pull in default packages, which
includes `std`.
(cherry picked from commit 9a366281d3)
# Conflicts:
# programs/bpf/rust/128bit/Cargo.toml
# programs/bpf/rust/invoke/Cargo.toml
# programs/bpf/rust/many_args/Cargo.toml
# programs/bpf/rust/param_passing/Cargo.toml
* Fix merge conflicts
Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
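A minimal sketch of the kind of off-BPF test this enables, assuming the standard `Pubkey::create_program_address(&[seed], &program_id)` signature (the program id is a placeholder):
```rust
use solana_sdk::pubkey::Pubkey;
use std::str::FromStr;

#[test]
fn create_program_address_links_off_bpf() {
    let program_id = Pubkey::from_str("BPFLoader1111111111111111111111111111111111").unwrap();
    // Merely calling the function catches the undefined-reference regression;
    // Err is acceptable for seeds whose derived point lands on the curve.
    let _ = Pubkey::create_program_address(&[b"seed"], &program_id);
}
```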
* Allow the sendTransaction preflight commitment level to be configured
(cherry picked from commit b660704faa)
# Conflicts:
# cli/src/cli.rs
# core/src/rpc.rs
* rebase
Co-authored-by: Michael Vines <mvines@gmail.com>
* Add per-request cap; also use clap-utils
* Clean up arg names and take cap inputs as SOL
(cherry picked from commit 71d5409b3b)
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
* Refactor bigtable apis to accept start and end keys
* Make helper fn to deserialize cell data
* Refactor get_confirmed_signatures_for_address to use get_row_data range
* Add until param to get_confirmed_signatures_for_address
* Add until param to blockstore api
* Plumb until through client/cli
* Simplify client params
(cherry picked from commit 6c5b8f324a)
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
* Use index to filter address-signatures correctly
* Pull additional keys to account for filtered records
* Clarify variable name
(cherry picked from commit 820af533a4)
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
* Add failing test for decoding ShortU16 alias values
(cherry picked from commit 338f66f9aa)
* Factor out ShortU16 deser visitor logic to helper
(cherry picked from commit 6222fbcc66)
* Reimplement decode_len() with ShortU16 visitor helper
(cherry picked from commit 30dbe257cf)
Co-authored-by: Trent Nelson <trent@solana.com>
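For context, a self-contained sketch of the wire format the visitor decodes: little-endian groups of 7 bits with the high bit as a continuation flag, at most 3 bytes, rejecting overlong "alias" encodings such as `[0x80, 0x00]` for 0. The exact rejection rules here approximate the change rather than copy it.
```rust
use std::convert::TryFrom;

/// Decode a ShortU16-encoded length, returning (value, bytes consumed).
fn decode_shortu16(bytes: &[u8]) -> Option<(u16, usize)> {
    let mut value: u32 = 0;
    for (i, byte) in bytes.iter().take(3).enumerate() {
        value |= u32::from(byte & 0x7f) << (7 * i);
        if byte & 0x80 == 0 {
            // A trailing zero byte means a shorter encoding existed: an alias.
            if i > 0 && *byte == 0 {
                return None;
            }
            // try_from rejects values that don't fit in a u16.
            return u16::try_from(value).ok().map(|v| (v, i + 1));
        }
    }
    None // input exhausted, or a third byte still had its continuation bit set
}

fn main() {
    assert_eq!(decode_shortu16(&[0x05]), Some((5, 1)));
    assert_eq!(decode_shortu16(&[0x80, 0x01]), Some((128, 2)));
    assert_eq!(decode_shortu16(&[0x80, 0x00]), None); // alias of 0
}
```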
* Fix bad rent in Bank::deposit as if since epoch 0 (#10468)
* Fix bad rent in Bank::deposit as if since epoch 0
* Remove redundant predicate
* Rename
* Start to add tests with some cleanup
* Forgot to add refactor code...
* Enhance test
* Really fix rent timing in deposit with robust test
* Simplify new behavior by disabling rent altogether
(cherry picked from commit 6c242f3fec)
# Conflicts:
# runtime/src/accounts.rs
# runtime/src/rent_collector.rs
* Fix conflict
* Fix clippy
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
* Add config param to specify offset/length for single and program account info (#11515)
* Add config param to specify dataSlice for account info and program accounts
* Use match instead of if
(cherry picked from commit 88ca04dbdb)
# Conflicts:
# core/src/rpc.rs
* Fix conflicts
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
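A sketch of what the new `dataSlice` config (`{ offset, length }`) does to the returned account data; the function name here is illustrative, not the actual implementation:
```rust
fn slice_account_data(data: &[u8], offset: usize, length: usize) -> &[u8] {
    if offset >= data.len() {
        &[] // entirely out of range: return empty rather than erroring
    } else {
        let end = data.len().min(offset.saturating_add(length));
        &data[offset..end]
    }
}

fn main() {
    let data = [0u8, 1, 2, 3, 4, 5, 6, 7];
    assert_eq!(slice_account_data(&data, 2, 3), &[2, 3, 4]);
    assert_eq!(slice_account_data(&data, 6, 10), &[6, 7]); // clamped to the end
}
```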
* Freeze address-signature index in the middle of slot to show failure case
* Secondary filter on signature
* Use AddressSignatures iterator instead of manually decrementing slots
* Remove unused method
* Add metrics
* Add transaction-status-index documentation
(cherry picked from commit de5fb3ba0e)
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
* Force program address off the curve (#11323)
(cherry picked from commit 03263c850a)
* nudge
* trailing whitespace
Co-authored-by: Jack May <jack@solana.com>
Co-authored-by: Trent Nelson <trent@solana.com>
* Add failing test for unsane tx in RPC preflight
(cherry picked from commit e25846e1ad)
* Add From for SanitizeError > TransactionError
(cherry picked from commit 3f73affb2e)
* Sanitize transactions during RPC preflight test
(cherry picked from commit 29b3265dc7)
* Harden RPC preflight test inputs
(cherry picked from commit 14339dec0a)
Co-authored-by: Trent Nelson <trent@solana.com>
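The conversion described above, sketched with toy stand-in enums (the real sdk types and variant names may differ); the point is that a failed sanitize check maps into the transaction-error type so preflight can reject the transaction cleanly:
```rust
// Toy stand-ins for the real sdk types, for illustration only.
#[derive(Debug)]
enum SanitizeError {
    IndexOutOfBounds,
}

#[derive(Debug)]
enum TransactionError {
    SanitizeFailure,
}

impl From<SanitizeError> for TransactionError {
    fn from(_err: SanitizeError) -> Self {
        TransactionError::SanitizeFailure
    }
}

fn main() {
    // A preflight path can now use `?` or `.into()` on sanitize results.
    let err: TransactionError = SanitizeError::IndexOutOfBounds.into();
    println!("preflight rejects with: {:?}", err);
}
```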
* Allow inspection of signature verification failures
(cherry picked from commit 251f974b50)
* Test that off-curve pubkeys fail signature verify
(cherry picked from commit c421d7f1b8)
Co-authored-by: Trent Nelson <trent@solana.com>
* Withdraw authority no longer implies a custodian
Before this change, if the withdraw authority and custodian had
the same public key, then a withdraw authority signature would
imply a custodian signature and lockup would not be enforced.
After this change, the client's withdraw instruction must
explicitly reference a custodian account in its optional sixth
account argument (see the sketch after this entry).
Likewise, the fee-payer no longer implies either a withdraw
authority or custodian.
* Fix test
The test was configuring the stake account with the fee-payer as
the withdraw authority, but then passing in a different key to
the withdraw instruction's withdraw authority parameter. It only
worked because the second transaction was signed by the fee-payer.
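A hedged sketch of the resulting client-side call, assuming the stake program's `withdraw` instruction helper takes the custodian as an optional final argument (keys and the amount are placeholders):
```rust
use solana_sdk::pubkey::Pubkey;
use solana_stake_program::stake_instruction;

fn main() {
    let stake_account = Pubkey::new_unique();
    let withdraw_authority = Pubkey::new_unique();
    let recipient = Pubkey::new_unique();
    let custodian = Pubkey::new_unique();

    // Lockup is only waived when the custodian account is passed explicitly;
    // a withdraw-authority signature alone no longer implies it.
    let instruction = stake_instruction::withdraw(
        &stake_account,
        &withdraw_authority,
        &recipient,
        1_000_000, // lamports
        Some(&custodian),
    );
    println!("{} account inputs", instruction.accounts.len());
}
```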
* Add failing test for TX sent via RPC with no signatures
(cherry picked from commit b962b2ce2d)
* Dereplicode send_transaction and request_airdrop RPC handlers
(cherry picked from commit a7079e4dde)
* Add new RPC error for TXs with no signatures
(cherry picked from commit 9778fedd7a)
* Reject TXs sent via RPC with no signatures
(cherry picked from commit a888f2f516)
Co-authored-by: Trent Nelson <trent@solana.com>
* Windows binaries are now built with the MSVC toolchain instead of the GNU toolchain.
Update `solana-install-init` target info to match
(cherry picked from commit 01ff6846f7)
# Conflicts:
# ci/publish-tarball.sh
* Update publish-tarball.sh
Co-authored-by: Michael Vines <mvines@gmail.com>
* Split out get-first-err for unit testing
* Add failing test
* Add missing ordering
(cherry picked from commit 6c38369042)
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
* fix rewards points (#10914)
* fix rewards points
* fixups
* verify that we don't spend more in rewards than we've allocated for rewards
* purge f64s from calculations that could be done with integers
* test typical values
* simplify iteration over delegations some
* fixups
* Use try_from
* Add a comment for commission_split()
* Add assertion to detect inconsistent reward dist.
* Fix vote_balance_and_staked
* Don't overwrite accounts with stale copies
* Fix CI...
* Add tests for vote_balance_and_staked
* Add test for the determinism of update_rewards
* Revert "Don't overwrite accounts with stale copies"
This reverts commit 9886d085a6.
* Make stake_delegation_accounts to return hashmap
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
(cherry picked from commit 7cc2a6801b)
# Conflicts:
# runtime/src/serde_snapshot/tests.rs
* Fix conflict
Co-authored-by: Rob Walker <rwalker@rwalker.com>
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
* Plumb replay vote channel
* Don't send redundant slots to repair_service
* Update test
* Keep gossip only for debugging gossip in the future
* Add comments
* Switch to using select()
* Fix replay -> gossip vote not counting toward gossip only stake
* tests
Co-authored-by: Carl <carl@solana.com>
* Add Bank support for "upgrade epochs" where all non-vote transactions will be rejected
(cherry picked from commit e5d8c4383f)
# Conflicts:
# runtime/src/bank.rs
* Fix merge conflict
Co-authored-by: Michael Vines <mvines@gmail.com>
* Revert get_inflation()
* Temporarily disable inflation to fix it later
* Use match and proper type aliases
(cherry picked from commit d81c7250b0)
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
* Fix hygiene issues in `declare_program!` and `declare_loader!`
The `declare_program!` and `declare_loader!` macros both expand to
new macro definitions (based on the `$name` argument). These 'inner'
macros make use of the special `$crate` metavariable to access items in
the crate where the 'inner' macros are defined.
However, this only works due to a bug in rustc. When a macro is
expanded, all `$crate` tokens in its output are 'marked' as being
resolved in the defining crate of that macro. An inner macro (including
the body of its arms) is 'just' another set of tokens that appears in
the body of the outer macro, so any `$crate` identifiers used there are
resolved relative to the 'outer' macro.
For example, consider the following code:
```rust
macro_rules! outer {
() => {
macro_rules! inner {
() => {
$crate::Foo
}
}
}
}
```
The path `$crate::Foo` will be resolved relative to the crate that defines `outer`,
**not** the crate which defines `inner`.
However, rustc currently loses this extra resolution information
(referred to as 'hygiene' information) when a crate is serialized.
In the above example, this means that the macro `inner` (which gets
defined in whatever crate invokes `outer!`) will behave differently
depending on which crate it is invoked from:
When `inner` is invoked from the same crate in which it is defined,
the hygiene information will still be available,
which will cause `$crate::Foo` to be resolved in the crate which defines 'outer'.
When `inner` is invoked from a different crate, it will be loaded from
the metadata of the crate which defines 'inner'. Since the hygiene
information is currently lost, rust will 'forget' that `$crate::Foo` is
supposed to be resolved in the context of 'outer'. Instead, it will be
resolved relative to the crate which defines 'inner', which can cause
incorrect code to compile.
This bug will soon be fixed in rust (see https://github.com/rust-lang/rust/pull/72121),
which will break `declare_program!` and `declare_loader!`. Fortunately,
it's possible to obtain the desired behavior (`$crate` resolving in the
context of the 'inner' macro) by use of a procedural macro.
This commit adds a `respan!` proc-macro to the `sdk/macro` crate.
Using the newly-stabilized (on Nightly) `Span::resolved_at` method,
the `$crate` identifier can be made to be resolved in the context of the
proper crate.
Since `Span::resolved_at` is only stable on the latest nightly,
referencing it on an earlier version of Rust will cause a compilation error.
This requires the `rustversion` crate, which allows conditionally
compiling code depending on the Rust compiler version in use. Since this
method is already stabilized in the latest nightly, there will never be a
situation where the hygiene fix (https://github.com/rust-lang/rust/pull/72121)
is merged but we are unable to call `Span::resolved_at`. A sketch of the
respan mechanism follows at the end of this entry.
(cherry picked from commit 05445c718e)
# Conflicts:
# Cargo.lock
# sdk/Cargo.toml
* Replace FIXME with an issue link
(cherry picked from commit b0cb2b0106)
* Update lock files
(cherry picked from commit 42f88484f4)
# Conflicts:
# programs/bpf/Cargo.lock
# programs/librapay/Cargo.lock
# programs/move_loader/Cargo.lock
* Split comment over multiple lines
Due to https://github.com/rust-lang/rustfmt/issues/4325, leaving this as
one line causes rustfmt to add extra indentation to the surrounding
code.
(cherry picked from commit fed69e96a9)
* Fix clippy lints
(cherry picked from commit e7387f60a7)
* Apply #![feature(proc_macro_hygiene)] when needed
This allows the rust-bpf-builder toolchain to build the sdk
(cherry picked from commit 95490ff56e)
# Conflicts:
# sdk/build.rs
# sdk/src/lib.rs
* Update Cargo.toml
* Update lib.rs
* Add rustc_version
* lock file updates
Co-authored-by: Aaron Hill <aa1ronham@gmail.com>
Co-authored-by: Jack May <jack@solana.com>
Co-authored-by: Michael Vines <mvines@gmail.com>
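To make the `respan!` mechanism described in this entry concrete, here is a hedged sketch of a function-like proc-macro that rewrites the resolution context of a group of tokens via `Span::resolved_at`. It illustrates the technique on a nightly toolchain; it is not the actual `sdk/macro` implementation.
```rust
// lib.rs of a proc-macro crate; Span::resolved_at requires nightly.
extern crate proc_macro;
use proc_macro::{TokenStream, TokenTree};

/// Usage sketch: respan!((tokens to fix), span_donor_token)
#[proc_macro]
pub fn respan(input: TokenStream) -> TokenStream {
    let mut iter = input.into_iter();
    // First token tree: a group wrapping the tokens to be re-spanned.
    let tokens = match iter.next() {
        Some(TokenTree::Group(group)) => group.stream(),
        _ => panic!("respan! expects a parenthesized token group first"),
    };
    // Skip the separating comma; the next token tree donates its span.
    let donor = iter
        .find(|tt| !matches!(tt, TokenTree::Punct(_)))
        .expect("respan! expects a span-donor token");
    let context = donor.span();
    tokens
        .into_iter()
        .map(|mut tt| {
            // Keep each token's source location, but resolve names as though
            // the token had been written where the donor token was written.
            tt.set_span(tt.span().resolved_at(context));
            tt
        })
        .collect()
}
```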
* Bump spl-memo
* spl memo linking windows (#11000)
* Update spl-memo to fix windows linking error
* Only programs need the stubs
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
Co-authored-by: Jack May <jack@solana.com>
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
* Add failing test
* Pass fee_calculator to prepare_if_nonce_account; only overwrite in error case
(cherry picked from commit 25228ca957)
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
* Add RpcFilterType, and implement CompareBytes for getProgramAccounts
* Accept bytes in bs58
* Rename to memcmp
* Add Memcmp optional encoding field
* Add dataSize filter
* Update docs
* Clippy
* Simplify tests that don't need to test account contents; add multiple-filter tests
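A small sketch of the filter semantics, independent of the RPC plumbing: `memcmp` compares a byte slice at a given offset into the account data, and `dataSize` matches the exact data length. Helper names are illustrative.
```rust
fn memcmp_matches(data: &[u8], offset: usize, bytes: &[u8]) -> bool {
    offset
        .checked_add(bytes.len())
        .and_then(|end| data.get(offset..end))
        == Some(bytes)
}

fn data_size_matches(data: &[u8], size: usize) -> bool {
    data.len() == size
}

fn main() {
    let account_data = [1u8, 2, 3, 4, 5];
    assert!(memcmp_matches(&account_data, 1, &[2, 3]));
    assert!(!memcmp_matches(&account_data, 4, &[5, 6])); // runs past the end
    assert!(data_size_matches(&account_data, 5));
}
```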
* Skip and warn for hard-forks which are less than the start slot
The option is used during a restart, but once the restart is
complete it is no longer needed if the start slot is past the
hard fork, since the hard fork should already be in the snapshot
the node started from.
* Update ledger/src/blockstore_processor.rs
Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 658de5b347)
Co-authored-by: sakridge <sakridge@gmail.com>
* remote-node.sh: Factor out init wait to own script
* remote-node.sh: Allow nodes to initialize asynchronously
* testnet-automation: Plumb --async-node-init
(cherry picked from commit 7021e1c584)
Co-authored-by: Trent Nelson <trent@solana.com>
* Don't skip eager rent collection across gapped epochs
* Adjust style and comment
* Adjust ascii chart and comment a bit
* Moar assert
* Relax the partition_count assert for completeness
* Tweak comment...
* tweak a bit
* Add gating logic
* Address reviews
* small formatting
* Clarify the code by replacing auto_generated...
* small formatting
* small formatting
* small formatting
* small formatting
* Narrow down conditional compilation scope
(cherry picked from commit 50f7ed80c8)
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
* Add prune message counter
* Switch verification time to microseconds (us) to match other counters
* Add separate transaction/poh verify timing
Co-authored-by: sakridge <sakridge@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: publish-docs.sh <maintainers@solana.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
(cherry picked from commit 36ca43e15b)
Co-authored-by: Dan Albert <dan@solana.com>
* Multi-version snapshot support
* rustfmt
* Remove CLI options and runtime support for selecting the output snapshot version.
Address some clippy complaints.
* Muzzle clippy type complexity warning.
Despite clippy's suggestion, it is not currently possible to create type aliases
for traits, so everything within the `Box<...>` cannot be type-aliased.
That leaves creating full-blown traits, and either implementing
said traits by closure (somehow) or moving the closures into new structs
implementing said traits, which seems a bit of a palaver.
Alternatively it is possible to define and use the type alias `type ResultBox<T> = Result<Box<T>>`,
which seems rather pointless and not a great reduction in complexity, but is enough to keep clippy happy.
In the end I simply went with squelching the clippy warning.
* Remove now unused Serialize/Deserialize trait implementations for AccountStorageEntry and AppendVec
* refactor versioned de/serialisers
* rename serde_utils to serde_snapshot
* move call to accounts_db.generate_index() back down to context_accountsdb_from_stream()
* update version 1.1.1 to 1.2.0
remove nested use of serialize_bytes
* cleanups
* Add back measurement of account storage entry serialization.
Remove construction of Vec and HashMap temporaries during serialization.
* consolidate serialisation test cases into serde_snapshot.
clean up leakage of implementation details in serde_snapshot.
* move short term / legacy snapshot code into child module
* add serialize_iter_as_tuple
* preliminary integration of following commit
commit 6d58b73c47294bfb93465d5a83cd2175660b6e6d
Author: Ryo Onodera <ryoqun@gmail.com>
Date: Wed May 20 14:02:02 2020 +0900
Confine snapshot 1.1 relic to versioned codepath
* refactored serde_snapshot, rustfmt
legacy accounts_db format now "owns" both leading u64s, legacy bank_rc format has none
* reduce type complexity (clippy)
* Fixup commitment-aggregation metric
* Trigger notifications after commitment-cache update
* Fixup fn name
* Add single-confirmation commitment level
* Rename to highest_confirmed_slot
* Pass commitment-cache info directly to notifications
* Use match
* Update commitment docs
* Update out of date pubsub docs
* Don't discard transaction records
Not enough certainty, because only half the blockhashes
are returned via the RPC call. Even if all 300 were, there's
still the possibility other validators are behind, and
a supermajority will vote on a block that includes
the transaction.
* fmt
* lamports->SOL in user-facing error msg
* Check for sufficient balance for spend and fee
* Add ALL option to solana transfer
* Rework TransferAmount to check for sign_only in parse
* Refactor TransferAmount & fee-check handling to be more general
* Add addl checks mechanism
* Move checks out of cli.rs
* Rename to SpendAmount to be more general & move
* Impl ALL/spend helpers for create-nonce-account
* Impl spend helpers for create-vote-account
* Impl ALL/spend helpers for create-stake-account
* Impl spend helpers for ping
* Impl ALL/spend helpers for pay
* Impl spend helpers for validator-info
* Remove unused fns
* Remove retry_get_balance
* Add a couple unit tests
* Rework send_util fn signatures
* Initial commit
* Execute transfers
* Refactor for testing
* Cleanup readme
* Rewrite
* Cleanup
* Cleanup
* Cleanup client
* Use a Null Client to move prints closer to where messages are sent
* Upgrade Solana
* Move core functionality into its own module
* Handle transaction errors
* Merge allocations
* Fixes
* Cleanup readme
* Fix markdown
* Add example input
* Add integration test - currently fails
* Add integration test
* Add metrics
* Use RpcClient in dry-run, just don't send messages
* More metrics
* Fix dry run with no keys
* Only require one approval if fee-payer is the sender keypair
* Fix bugs
* Don't create the transaction log if there is nothing to put into it;
otherwise the next invocation won't add the header
* Apply previous transactions to allocations with matching recipients
* Bail out if any account already has a balance
* Polish
* Add new 'balances' command
* 9 decimal places
* Add missing file
* Better dry-run; keypair options now optional
* Change field name from 'bid' to 'accepted'
Also, tolerate precision change from 2 decimal places to 4
* Write to transaction log immediately
* Rename allocations_csv to bids_csv
So that we can bypass bids_csv with an allocations CSV file
* Upgrade Solana
* Remove faucet from integration test
* Cleaner integration test
Won't work until this lands and is released:
https://github.com/solana-labs/solana/pull/9717
* Update README
* Add TravisCI script to build and test (#1)
* Add distribute-stake command (#2)
* Distribute -> DistributeTokens (#3)
* Cache cargo deps (#4)
* Add docs (#5)
* Switch to latest Solana 1.1 release (#7)
* distribute -> distribute-tokens (#9)
* Switch from CSV to a pickledb database (#8)
* Switch from CSV to a pickledb database
* Allow PickleDb errors to bubble up
* Dedup
* Hoist db
* Add finalized field to TransactionInfo
* Don't allow RPC client to resign transactions
* Remove dead code
* Use transport::Result
* Record unconfirmed transaction
* Fix: separate stake account per allocation
* Catch transport errors
* Panic if we attempt to replay a transaction that hasn't been finalized
* Attempt to fix CI
PickleDb isn't calling flush() or close() after writing to files.
No issue on MacOS, but looks racy in CI.
* Revert "Attempt to fix CI"
This reverts commit 1632394f636c54402b3578120e8817dd1660e19b.
* Poll for signature before returning
* Add --sol-for-fees option for stake distributions
* Add --allocations-csv option (#14)
* Add allocations-csv option
* Add tests or GTFO
* Apply review feedback
* apply feedback
* Add read_allocations function
* Update arg_parser.rs
* Fix balances command (#17)
* Fix balances command
* Fix readme
* Add --force to transfer to non-empty accounts (#18)
* Add --no-wait (#16)
* Add ThinClient methods to implement --no-wait
* Plumb --no-wait through
No tests yet
* Check transaction status on startup
* Easier to test
* Wait until transaction is finalized before checking if it failed with an error
It's possible that a minority fork thinks it failed.
* Add unit tests
* Remove dead code and rustfmt
* Don't flush database to file if doing a dry-run
* Continue when transactions not yet finalized (#20)
If those transactions are dropped, the next run will execute them.
* Return the number of confirmations (#21)
* Add read_allocations() unit-test (#22)
Delete the copy-pasted top-level test.
Fixes #19
* Add a CSV printer (#23)
* Remove all the copypasta (#24)
* Move resolve_distribute_stake_args into its own function
* Add stake args to token args
* Unify option names
* Move Command::DistributeStake into DistributeTokens
* Remove process_distribute_stake
* Only unique signers
* Use sender keypair to fund new fee-payer accounts
* Unify distribute_tokens and distribute_stake
* Rename print-database command to transaction-log (#25)
* Send all transactions as quickly as possible, then wait (#26)
* Send all transactions as quickly as possible, then wait
* Exit when finalized or blockhashes have expired
* Don't need blockhash in the CSV output
* Better types
CSV library was choking on Pubkey as a type. PickleDb doesn't have that problem.
* Resend if blockhash has not expired
* Attempt to fix CI
* Move log to stderr
* Add constructor, tuck away client (#30)
* Add constructor, tuck away client
* Fix unwrap() caught by CI
* Fix optional option flagged as required
* Bunch of cleanup (#31)
* Remove untested --no-wait feature
* Make --transactions-db an option, not an arg
So that in the future, we can make it optional
* Remove more untested features
Too many false positives in that sanity check. Use --dry-run
instead.
* Add dry-run mode to ThinClient
* Cleaner dry-run
* Make key parameters required
Just don't use them in --dry-run
* Add option to write the transaction log
--dry-run doesn't write to the database. Use this option if you
want a copy of the transaction log before the final run.
* Revert --transaction-log addition
Implement #27 first
* Fix CI
* Update readme
* Fix CI in copypasta
* Sort transaction log by finalized date (#33)
* Make --transaction-db option implicit (#34)
* Move db functionality into its own module (#35)
* Move db functionality into its own module
* Rename tokens module to commands
* Version bump
* Upgrade Solana
* Add solana-tokens to build
* Remove Cargo.lock
* Remove vscode file
* Remove TravisCI build script
* Install solana-tokens
Co-authored-by: Dan Albert <dan@solana.com>
* Switch AccountsIndex.account_maps from HashMap to BTreeMap
* Introduce eager rent collection
* Start to add tests
* Avoid too short eager rent collection cycles
* Add more tests
* Add more tests...
* Refactor!!!!!!
* Refactoring follow up
* More tiny cleanups
* Don't rewrite 0-lamport accounts to be deterministic
* Refactor a bit
* Do hard fork, restore tests, and perf. mitigation
* Fix build...
* Refactor and add switch over for testnet (TdS)
* Use to_be_bytes
* cleanup
* More tiny cleanup
* Rebase cleanup
* Set Bank::genesis_hash when resuming from snapshot
* Reorder fns and clean ups
* Better naming and commenting
* Yet more naming clarifications
* Make prefix width strictly uniform for 2-base partition_count
* Fix typo...
* Revert cluster-dependent gate
* kick ci?
* kick ci?
* kick ci?
* Account for descendants < root not existing in BankForks, purge ancestors/descendants map for consistency with BankForks and progress map
Co-authored-by: Carl <carl@solana.com>
* Switch subscriptions to use commitment instead of confirmations
* Add bank method to return account and last-modified slot
* Add last_modified_slot to subscription data and use to filter account subscriptions
* Update tests to non-zero last_notified_slot
* Add accounts subscriptions to test; fails at higher tx load
* Pass BankForks to RpcSubscriptions
* Use BankForks on add_account_subscription to properly initialize last_notified_slot
* Bundle subscriptions
* Check for non-equality
* Use commitment to initialize last_notified_slot; revert context.slot change
* Focus bench on squash
Squash performance does not depend on adding a small number of accounts. It mainly depends on the total number of accounts that need to be hashed during the freeze operation. A new bank means a clone of accounts db, so we don't get previous errors in the log.
* Fix fmt and add slot counter
Indexing into the accounts array does not match account_keys otherwise.
Also enforce that program accounts are not at index 0.
Enforce at least 1 read-write signing fee-payer account.
* Revert "Untar is called for shred archives that do not exist. (#9565)"
This reverts commit 729cb5eec6.
* Revert "Dont insert shred payload into rocksdb (#9366)"
This reverts commit 5ed39de8c5.
* Add largest_confirmed_root to BlockCommitmentCache
* clippy
* Add blockstore to BlockCommitmentCache to check root
* Add rooted_stake helper fn and test
* Nodes that are behind should correctly identify confirmed roots
* Simplify rooted_stake collector
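A toy sketch of the rooted-stake idea: a slot counts as a confirmed root once the validators whose roots are at or beyond it hold a supermajority of stake. Types and threshold handling are simplified.
```rust
/// Sum the stake of validators whose root slot is at least `slot`.
fn rooted_stake(roots_and_stakes: &[(u64, u64)], slot: u64) -> u64 {
    roots_and_stakes
        .iter()
        .filter(|(root, _stake)| *root >= slot)
        .map(|(_root, stake)| stake)
        .sum()
}

fn main() {
    // (validator's root slot, validator's stake); 100 total stake
    let cluster = [(10, 40), (12, 35), (7, 25)];
    let confirmed = rooted_stake(&cluster, 10);
    // 75 of 100 stake exceeds 2/3, so even a lagging node can call slot 10
    // a confirmed root.
    assert!(confirmed * 3 > 100 * 2);
}
```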
* Experiment to backpropagate Cargo.lock updates to all lock files
* Move most of dependabot-specific code to its own file
* Various cleanups
* Fine tune..
* Clean up shells and stop obscure API...
* Clean up node setup scripts for new CI boxes
* Move files under ci directory
* Set CUDA env var to setup cuda drivers
* Fixup and add README
* shellcheck
* Apply review feedback, rename dir and setup files
Co-authored-by: publish-docs.sh <maintainers@solana.com>
* Add filesystem wallet page
* Move validator paper wallet instructions to validator page
* Remove paper wallet staking section
* Add steps for multiple fs and paper wallets
* Add keypair convention page and better multi-wallet example
* Introduce background AppendVec shrink mechanism
* Support ledger tool
* Clean up
* save
* save
* Fix CI
* More clean up
* Add tests
* Clean up yet more
* Use account.hash...
* Fix typo....
* Add comment
* Rename accounts_cleanup_service
* Fix purging happening every slot when cleanup service is not started at slot 0
* Purge by shred count instead of slots, since slots can have a variable
number of shreds
* Add runtime methods to simply get status and slot
* Add helper function to get slot confirmation_count from BlockCommitmentCache
* Return cluster confirmations in getSignatureStatus
* Remove use of invalid get_signature_confirmation_status
* Remove unused methods
* Update pubsub to use cluster confirmations
* Fix test_check_signature_subscribe failure
* Refactor confirmations to read commitment cache only once
* Review comments
* Use bank, root from BlockCommitmentCache
* Update docs
* Add metric for block-commitment aggregations
Co-authored-by: Justin Starry <justin@solana.com>
* Separate client types into own crate, so ledger does not need it
Removes about 50 crates of dependency from ledger
* Drop Rpc name from transaction-status types
* CLI: Fix `create-nonce-account --seed ...`
* CLI: Add test another for `create-nonce-account --seed...`
Explicitly demonstrates a partner workflow with the following
requirements:
1) Nonce account address derived from an offline nonce
authority address
2) Fully online account creation
3) Account creation in a single signing session
* alphabetize
* SDK: Add `NullSigner` implementation
* SDK: Split `Transaction::verify()` to gain access to results
* CLI: Minor refactor of --sign-only result parsing
* CLI: Enable partial signing
Signers specified by pubkey, but without a matching --signer arg
supplied fall back to a `NullSigner` when --sign-only is in effect.
This allows their pubkey to be used for TX construction as usual,
but leaves their `sign_message()` a NOP. As such, with --sign-only
in effect, signing and verification must be done separately, with
the latter's per-signature results considered
* CLI: Surface/report missing/bad signers to user
* CLI: Suppress --sign-only JSON output
* nits
* Docs for multi-session offline signing
* Accounts hash consistency halting
* Add option to inject account hash faults for testing.
Enable option in local cluster test to see that node halts.
* feat: cli command for vote account withdraw
* Rework names
* Update to flexible signer, and make consistent with other cli apis
* Add integration test
* Clean up default help msg
Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
* Add failing test
* Fix create-address-with-seed regression
* Add apis to enable generating a pubkey from all various signers
* Enable other signers as --from in create-with-seed
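For reference, a minimal sketch of the derivation behind `create-address-with-seed`, using `Pubkey::create_with_seed(base, seed, owner)`; the base key and seed are placeholders, and the fix above lets any `--from` signer supply `base`:
```rust
use solana_sdk::{pubkey::Pubkey, system_program};
use std::str::FromStr;

fn main() {
    let base = Pubkey::from_str("11111111111111111111111111111111").unwrap();
    let derived = Pubkey::create_with_seed(&base, "my-seed", &system_program::id())
        .expect("seed within length limit");
    println!("derived address: {}", derived);
}
```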
* Copy current state version to v0
* Add `FeeCalculator` to nonce state
* fixup compile
* Dump v0 handling...
Since we new account data is all zeros, new `Current` versioned accounts
look like v0. We could hack around this with some data size checks, but
the `account_utils::*State` traits are applied to `Account`, not the
state data, so we're kind SOL...
* Create more representative test `RecentBlockhashes`
* Improve CLI nonce account display
Co-Authored-By: Michael Vines <mvines@gmail.com>
* Fix that last bank test...
* clippy/fmt
Co-authored-by: Michael Vines <mvines@gmail.com>
* watchtower: send error message as notification
* watchtower: send all clear notification when ok again
* watchtower: add twilio sms notifications
* watchtower: flag to suppress duplicate notifications
* remove trailing space character
* changes as per suggestion on PR
* all changes together
* cargo fmt
* Do periodic inbound compaction for rooted slots
* Add comment
* nits
* Consider not_compacted_roots in cleanup_dead_slot
* Renames in AccountsIndex
* Rename to reflect expansion of removed accounts
* Fix a comment
* rename
* Parallelize clean over AccountsIndex
* Some niceties
* Reduce locks and real chunked parallelism
* Measure each step for sampling opportunities
* Just noticed par iter is maybe lazy
* Replace storage scan with optimized index scan
* Various clean-ups
* Clear uncleared_roots even if no updates
* SDK: Split new `FeeRateGovernor` out of `FeeCalculator`
Leaving `FeeCalculator` to *only* calculate transaction fees
* Replace `FeeCalculator` with `FeeRateGovernor` as appropriate
* Expose recent `FeeRateGovernor` to clients
* Move `burn()` back into `FeeCalculator`
Appease BPF tests
* Revert "Move `burn()` back into `FeeCalculator`"
This reverts commit f3035624307196722b62ff8b74c12cfcc13b1941.
* Adjust BPF `Fee` sysvar test to reflect removal of `burn()` from `FeeCalculator`
* Make `FeeRateGovernor`'s `lamports_per_signature` private
* rebase artifacts
* fmt
* Drop 'Recent'
* Drop _with_commitment variant
* Use a more portable integer for `target_signatures_per_slot`
* Add docs for `getFeeRateGovernor` JSON RPC method
* Don't return `lamports_per_signature` in `getFeeRateGovernor` JSONRPC reply
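A toy illustration of the split (field and method names are simplified stand-ins, not the real types): the governor owns the fee policy and mints `FeeCalculator`s, which only compute fees.
```rust
#[derive(Clone)]
struct FeeCalculator {
    lamports_per_signature: u64,
}

impl FeeCalculator {
    fn calculate_fee(&self, num_signatures: u64) -> u64 {
        self.lamports_per_signature * num_signatures
    }
}

struct FeeRateGovernor {
    lamports_per_signature: u64, // private fee policy after this change
    target_signatures_per_slot: u64,
}

impl FeeRateGovernor {
    fn create_fee_calculator(&self) -> FeeCalculator {
        FeeCalculator {
            lamports_per_signature: self.lamports_per_signature,
        }
    }
}

fn main() {
    let governor = FeeRateGovernor {
        lamports_per_signature: 5_000,
        target_signatures_per_slot: 20_000,
    };
    let fee = governor.create_fee_calculator().calculate_fee(2);
    println!(
        "fee for 2 signatures: {} lamports (target {} sigs/slot)",
        fee, governor.target_signatures_per_slot
    );
}
```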
* Rename (keypair util is not a thing)
* Add method to generate_unique_signers
* Cli: refactor signer handling and remote-wallet init
* Fixup unit tests
* Fixup integration tests
* Update keypair path print statement
* Remove &None
* Use deterministic key in test
* Retain storage-account as index
* Make signer index-handling less brittle
* Cache pubkey on RemoteKeypair::new
* Make signer_of consistent + return pubkey
* Remove &matches double references
* Nonce authorities need special handling
Solana™ is a new blockchain architecture built from the ground up for scale. The architecture supports
up to 710 thousand transactions per second on a gigabit network.
Disclaimer
===
All claims, content, designs, algorithms, estimates, roadmaps, specifications, and performance measurements described in this project are done with the author's best effort. It is up to the reader to check and validate their accuracy and truthfulness. Furthermore nothing in this project constitutes a solicitation for investment.
Introduction
===
It's possible for a centralized database to process 710,000 transactions per second on a standard gigabit network if the transactions are, on average, no more than 176 bytes. A centralized database can also replicate itself and maintain high availability without significantly compromising that transaction rate using the distributed system technique known as Optimistic Concurrency Control [\[H.T.Kung, J.T.Robinson (1981)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.4735). At Solana, we're demonstrating that these same theoretical limits apply just as well to blockchain on an adversarial network. The key ingredient? Finding a way to share time when nodes can't trust one another. Once nodes can trust time, suddenly ~40 years of distributed systems research becomes applicable to blockchain!
> Perhaps the most striking difference between algorithms obtained by our method and ones based upon timeout is that using timeout produces a traditional distributed algorithm in which the processes operate asynchronously, while our method produces a globally synchronous one in which every process does the same thing at (approximately) the same time. Our method seems to contradict the whole purpose of distributed processing, which is to permit different processes to operate independently and perform different functions. However, if a distributed system is really a single system, then the processes must be synchronized in some way. Conceptually, the easiest way to synchronize processes is to get them all to do the same thing at the same time. Therefore, our method is used to implement a kernel that performs the necessary synchronization--for example, making sure that two different processes do not try to modify a file at the same time. Processes might spend only a small fraction of their time executing the synchronizing kernel; the rest of the time, they can operate independently--e.g., accessing different files. This is an approach we have advocated even when fault-tolerance is not required. The method's basic simplicity makes it easier to understand the precise properties of a system, which is crucial if one is to know just how fault-tolerant the system is. [\[L.Lamport (1984)\]](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.1078)
Furthermore, and much to our surprise, it can be implemented using a mechanism that has existed in Bitcoin since day one. The Bitcoin feature is called nLocktime and it can be used to postdate transactions using block height instead of a timestamp. As a Bitcoin client, you'd use block height instead of a timestamp if you don't trust the network. Block height turns out to be an instance of what's being called a Verifiable Delay Function in cryptography circles. It's a cryptographically secure way to say time has passed. In Solana, we use a far more granular verifiable delay function, a SHA 256 hash chain, to checkpoint the ledger and coordinate consensus. With it, we implement Optimistic Concurrency Control and are now well en route towards that theoretical limit of 710,000 transactions per second.
Architecture
===
Before you jump into the code, review the online book [Solana: Blockchain Rebuilt for Scale](https://docs.solana.com/book/).
(The _latest_ development version of the online book is also [available here](https://docs.solana.com/book/v/master/).)
Release Binaries
===
Official release binaries are available at [Github Releases](https://github.com/solana-labs/solana/releases).
Additionally we provide pre-release binaries for the latest code on the edge and
beta channels. Note that these pre-release binaries may be less stable than an
official release.
Remote Testnets
---
### Starting a local testnet
Start your own testnet locally, instructions are in the [online docs](https://docs.solana.com/bench-tps).
### Accessing the remote testnet
* `testnet` - public stable testnet accessible via devnet.solana.com. Runs 24/7
## Deploy process
They are deployed with the `ci/testnet-manager.sh` script through a list of scheduled Buildkite jobs.
First install the nightly build of rustc. `cargo bench` requires use of the
unstable features only available in the nightly build.
Run the benchmarks:
```
$ cargo +nightly bench
```
Release Process
---
The release process for this project is described [here](RELEASE.md).
Code coverage
---
To generate code coverage statistics:
```
$ scripts/coverage.sh
$ open target/cov/lcov-local/index.html
```
Why coverage? While most see coverage as a code quality metric, we see it primarily as a developer
productivity metric. When a developer makes a change to the codebase, presumably it's a *solution* to
some problem. Our unit-test suite is how we encode the set of *problems* the codebase solves. Running
the test suite should indicate that your change didn't *infringe* on anyone else's solutions. Adding a
test *protects* your solution from future changes. Say you don't understand why a line of code exists,
try deleting it and running the unit-tests. The nearest test failure should tell you what problem is
solved by that code. If no test fails, go ahead and submit a Pull Request that asks, "what
problem is solved by this code?" On the other hand, if a test does fail and you can think of a
better way to solve the same problem, a Pull Request with your solution would most certainly be
welcome! Likewise, if rewriting a test can better communicate what code it's protecting, please
send us that patch!
1. After the new release has been tagged, update the Cargo.toml files on **release branch** to the next semantic version (e.g. 0.9.0 -> 0.9.1) with:
```
$ scripts/increment-cargo-version.sh patch
$ ./scripts/cargo-for-all-lock-files.sh tree
```
1. Rebuild to get an updated version of `Cargo.lock`:
```
$ cargo build
```
### Update documentation
TODO: Documentation update procedure is WIP as we move to gitbook
Document the new recommended version by updating `docs/src/running-archiver.md` and `docs/src/validator-testnet.md` on the release (beta) branch to point at the `solana-install` for the upcoming release version.
An _app_ interacts with a Solana cluster by sending it _transactions_ with one or more _instructions_. The Solana _runtime_ passes those instructions to user-contributed _programs_. An instruction might, for example, tell a program to transfer _lamports_ from one _account_ to another or create an interactive contract that governs how lamports are transferred. Instructions are executed sequentially and atomically. If any instruction is invalid, any changes made within the transaction are discarded.
### Accounts and Signatures
Each transaction explicitly lists all account public keys referenced by the transaction's instructions. A subset of those public keys are each accompanied by a transaction signature. Those signatures signal on-chain programs that the account holder has authorized the transaction. Typically, the program uses the authorization to permit debiting the account or modifying its data.
The transaction also marks some accounts as _read-only accounts_. The runtime permits read-only accounts to be read concurrently. If a program attempts to modify a read-only account, the transaction is rejected by the runtime.
### Recent Blockhash
A Transaction includes a recent blockhash to prevent duplication and to give transactions lifetimes. Any transaction that is completely identical to a previous one is rejected, so adding a newer blockhash allows multiple transactions to repeat the exact same action. Transactions also have lifetimes that are defined by the blockhash, as any transaction whose blockhash is too old will be rejected.
### Instructions
Each instruction specifies a single program account \(which must be marked executable\), a subset of the transaction's accounts that should be passed to the program, and a data byte array instruction that is passed to the program. The program interprets the data array and operates on the accounts specified by the instructions. The program can return successfully, or with an error code. An error return causes the entire transaction to fail immediately.
## Deploying Programs to a Cluster

As shown in the diagram above, a client creates a program, compiles it to an ELF shared object containing BPF bytecode, and sends it to the Solana cluster. The cluster stores the program locally and makes it available to clients via a _program ID_. The program ID is a _public key_ generated by the client and is used to reference the program in subsequent transactions.
A program may be written in any programming language that can target the Berkeley Packet Filter \(BPF\) safe execution environment. The Solana SDK offers the best support for C programs, which are compiled to BPF using the [LLVM compiler infrastructure](https://llvm.org).
## Storing State between Transactions
If the program needs to store state between transactions, it does so using _accounts_. Accounts are similar to files in operating systems such as Linux. Like a file, an account may hold arbitrary data, and that data persists beyond the lifetime of a program. Also like a file, an account includes metadata that tells the runtime who is allowed to access the data and how. Unlike a file, the account includes metadata for the lifetime of the account. That lifetime is expressed in "tokens", which is a number of fractional native tokens, called _lamports_. Accounts are held in validator memory and pay "rent" to stay there. Each validator periodically scans all accounts and collects rent. Any account that drops to zero lamports is purged.
If an account is marked "executable", it will only be used by a _loader_ to run programs. For example, a BPF-compiled program is marked executable and loaded by the BPF loader. No program is allowed to modify the contents of an executable account.
An account also includes "owner" metadata. The owner is a program ID. The runtime grants the program write access to the account if its ID matches the owner. If an account is not owned by a program, the program is permitted to read its data and credit the account.
In the same way that a Linux user uses a path to look up a file, a Solana client uses public keys to look up accounts. To create an account, the client generates a _keypair_ and registers its public key using the `CreateAccount` instruction. The account created by `CreateAccount` is called a _system account_ and is owned by a built-in program called the System program. The System program allows clients to transfer lamports and assign account ownership.
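For example, a client might build the `CreateAccount` instruction like this (a hedged sketch using the Rust SDK's `system_instruction::create_account` helper; keys and amounts are placeholders):
```rust
use solana_sdk::{
    signature::{Keypair, Signer},
    system_instruction, system_program,
};

fn main() {
    let payer = Keypair::new();
    let new_account = Keypair::new();
    // Fund the account, reserve space for its data, and assign an owner.
    let instruction = system_instruction::create_account(
        &payer.pubkey(),
        &new_account.pubkey(),
        1_000_000,             // lamports to deposit
        0,                     // bytes of account data
        &system_program::id(), // owner program
    );
    println!("{} accounts referenced", instruction.accounts.len());
}
```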
The runtime only permits the owner to debit the account or modify its data. The program then defines additional rules for whether the client can modify accounts it owns. In the case of the System program, it allows users to transfer lamports by recognizing transaction signatures. If it sees the client signed the transaction using the keypair's _private key_, it knows the client authorized the token transfer.
After the runtime executes each of the transaction's instructions, it uses the account metadata to verify that none of the access rules were violated. If a program violates an access rule, the runtime discards all account changes made by all instructions and marks the transaction as failed.
## Smart Contracts
Programs don't always require transaction signatures, as the System program does. Instead, the program may manage _smart contracts_. A smart contract is a set of constraints that, once satisfied, signal to a program that a token transfer or account update is permitted. For example, one could use the Budget program to create a smart contract that authorizes a token transfer only after some date. Once evidence that the date has passed is provided, the contract progresses and the token transfer completes.
[Click here to play Tic-Tac-Toe](https://solana-example-tictactoe.herokuapp.com/) on the Solana testnet. Open the link and wait for another player to join, or open the link in a second browser tab to play against yourself. You will see that every move a player makes stores a transaction on the ledger.
## Build and run Tic-Tac-Toe locally
First fetch the latest release of the example code:
Next, follow the steps in the git repository's [README](https://github.com/solana-labs/example-tictactoe/blob/master/README.md).
## Getting lamports to users
You may have noticed you interacted with the Solana cluster without first needing to acquire lamports to pay transaction fees. Under the hood, the web app creates a new ephemeral identity and sends a request to an off-chain service for a signed transaction authorizing a user to start a new game. The service is called a _drone_. When the app sends the signed transaction to the Solana cluster, the drone's lamports are spent to pay the transaction fee and start the game. In a real world app, the drone might request the user watch an ad or pass a CAPTCHA before signing over its lamports.
Solana’s crypto-economic system is designed to promote a healthy, long term self-sustaining economy with participant incentives aligned to the security and decentralization of the network. The main participants in this economy are validation-clients and replication-clients. Their contributions to the network, state validation and data storage respectively, and their requisite incentive mechanisms are discussed below.
The main channels of participant remittances are referred to as protocol-based rewards and transaction fees. Protocol-based rewards are issuances from a global, protocol-defined, inflation rate. These rewards will constitute the total reward delivered to replication and validation clients, with the remainder sourced from transaction fees. In the early days of the network, it is likely that protocol-based rewards, deployed based on a predefined issuance schedule, will drive the majority of participant incentives to participate in the network.
These protocol-based rewards, to be distributed to participating validation and replication clients, are to be a result of a global supply inflation rate, calculated per Solana epoch and distributed amongst the active validator set. As discussed further below, the per annum inflation rate is based on a pre-determined disinflationary schedule. This provides the network with monetary supply predictability which supports long term economic stability and security.
Transaction fees are market-based participant-to-participant transfers, attached to network interactions as a necessary motivation and compensation for the inclusion and execution of a proposed transaction \(be it a state execution or proof-of-replication verification\). A mechanism for long-term economic stability and forking protection through partial burning of each transaction fee is also discussed below.
A high-level schematic of Solana’s crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics/), [State-validation Protocol-based Rewards](ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_validation_client_economics/ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md). Also, the chapter titled [Validation Stake Delegation](ed_validation_client_economics/ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. Additionally, in [Storage Rent Economics](ed_storage_rent_economics.md), we describe an implementation of storage rent to account for the externality costs of maintaining the active state of the ledger. The [Replication-client Economics](ed_replication_client_economics/) chapter will review the Solana network design for global ledger storage/redundancy and archiver-client economics \([Storage-replication rewards](ed_replication_client_economics/ed_rce_storage_replication_rewards.md)\) along with an archiver-to-validator delegation mechanism designed to aid participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md). An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section. Finally, in chapter [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
**Figure 1**: Schematic overview of Solana economic incentive design.
A colluding validation-client may take the strategy of marking PoReps from non-colluding archiver nodes as invalid in an attempt to maximize the rewards for the colluding archiver nodes. In this case, it isn’t feasible for the offended-against archiver nodes to petition the network for resolution as this would result in a network-wide vote on each offending PoRep and create too much overhead for the network to progress adequately. Also, this mitigation attempt would still be vulnerable to a >= 51% staked colluder.
Alternatively, transaction fees from submitted PoReps are pooled and distributed across validation-clients in proportion to the number of valid PoReps discounted by the number of invalid PoReps as voted by each validator-client. Thus invalid votes are directly dis-incentivized through this reward channel. Invalid votes that are revealed by archiver nodes as fishing PoReps will not be discounted from the payout PoRep count.
Another collusion attack involves a validator-client who may take the strategy to ignore invalid PoReps from colluding archivers and vote them as valid. In this case, colluding archiver-clients would not have to store the data while still receiving rewards for validated PoReps. Additionally, colluding validator nodes would also receive rewards for validating these PoReps. To mitigate this attack, validators must randomly sample PoReps corresponding to the ledger block they are validating and because of this, there will be multiple validators that will receive the colluding archiver’s invalid submissions. These non-colluding validators will be incentivized to mark these PoReps as invalid as they have no way to determine whether the proposed invalid PoRep is actually a fishing PoRep, for which a confirmation vote would result in the validator’s stake being slashed.
In this case, the proportion of time a colluding pair will be successful has an upper limit determined by the % of stake of the network claimed by the colluding validator. This also sets bounds to the value of such an attack. For example, if a colluding validator controls 10% of the total validator stake, transaction fees will be lost \(likely sent to mining pool\) by the colluding archiver 90% of the time and so the attack vector is only profitable if the per-PoRep reward is at least 90% higher than the average PoRep transaction fee. While, probabilistically, some colluding archiver-client PoReps will find their way to colluding validation-clients, the network can also monitor rates of paired \(validator + archiver\) discrepancies in voting patterns and censor identified colluders in these cases.
Long term economic sustainability is one of the guiding principles of Solana’s economic design. While it is impossible to predict how decentralized economies will develop over time, especially economies with flexible decentralized governances, we can arrange economic components such that, under certain conditions, a sustainable economy may take shape in the long term. In the case of Solana’s network, these components take the form of token issuance \(via inflation\) and token burning.
The dominant remittances from the Solana mining pool are validator and archiver rewards. The disinflationary mechanism is a flat, protocol-specified and adjusted, % of each transaction fee.
The Archiver rewards are to be delivered to archivers as a portion of the network inflation after successful PoRep validation. The per-PoRep reward amount is determined as a function of the total network storage redundancy at the time of the PoRep validation and the network goal redundancy. This function is likely to take the form of a discount from a base reward to be delivered when the network has achieved and maintained its goal redundancy. An example of such a reward function is shown in **Figure 3**.
**Figure 3**: Example PoRep reward design as a function of global network storage redundancy.
In the example shown in Figure 3, multiple per-PoRep base rewards are explored \(as a % of Tx Fee\) to be delivered when the global ledger replication redundancy meets 10X. When the global ledger replication redundancy is less than 10X, the base reward is discounted as a function of the square of the ratio of the actual ledger replication redundancy to the goal redundancy \(i.e. 10X\).
The preceding sections, outlined in the [Economic Design Overview](./), describe a long-term vision of a sustainable Solana economy. Of course, we don't expect the final implementation to perfectly match what has been described above. We intend to fully engage with network stakeholders throughout the implementation phases \(i.e. pre-testnet, testnet, mainnet\) to ensure the system supports, and is representative of, the various network participants' interests. The first step toward this goal, however, is outlining some desired MVP economic features to be available for early pre-testnet and testnet participants. Below is a rough sketch outlining basic economic functionality from which a more complete and functional system can be developed.
## MVP Economic Features
* Faucet to deliver testnet SOLs to validators for staking and application development.
* Mechanism by which validators are rewarded via network inflation.
* Ability to delegate tokens to validator nodes.
* Validator-set commission fees on interest from delegated tokens.
* Archivers to receive fixed, arbitrary reward for submitting validated PoReps. Reward size mechanism \(i.e. PoRep reward as a function of total ledger redundancy\) to come later.
* Pooling of archiver PoRep transaction fees and weighted distribution to validators based on PoRep verification \(see [Replication-validation Transaction Fees](ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md)\). It will be useful to test this protection against attacks on testnet.
* Nice-to-have: auto-delegation of archiver rewards to validator.
Replication-clients should be rewarded for providing the network with storage space. Incentivization of the set of archivers provides data security through redundancy of the historical ledger. Replication nodes are rewarded in proportion to the amount of ledger data storage provided, as proved by successfully submitting Proofs-of-Replication to the cluster. These rewards are captured by generating and entering Proofs of Replication \(PoReps\) into the PoH stream, which can be validated by validation nodes as described above in the [Replication-validation Transaction Fees](../ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md) chapter.
The ability for Solana network participants to earn rewards by providing storage service is a unique on-boarding path that requires little hardware overhead and minimal upfront capital. It offers an avenue for individuals with extra-storage space on their home laptops or PCs to contribute to the security of the network and become integrated into the Solana economy.
To enhance this on-boarding ramp and facilitate further participation and investment in the Solana economy, replication-clients have the opportunity to auto-delegate their rewards to validation-clients of their choice. Much like the automatic reinvestment of stock dividends, in this scenario, an archiver-client can earn Solana tokens by providing some storage capacity to the network \(i.e. via submitting valid PoReps\), have the protocol-based rewards automatically assigned as delegation to a staked validator node of the archiver's choice and earn interest, less a fee, from the validation-client's network participation.
Archiver-clients download, encrypt and submit PoReps for ledger block sections. PoReps submitted to the PoH stream, and subsequently validated, function as evidence that the submitting archiver client is indeed storing the assigned ledger block sections on local hard drive space as a service to the network. Therefore, archiver clients should earn protocol rewards proportional to the amount of storage, and the number of successfully validated PoReps, that they are verifiably providing to the network.
Additionally, archiver clients have the opportunity to capture a portion of slashed bounties \[TBD\] of dishonest validator clients. This can be accomplished by an archiver client submitting a verifiably false PoRep which a dishonest validator client then signs as valid. This reward incentive is to prevent lazy validators and minimize validator-archiver collusion attacks; more on this below.
Each transaction that is submitted to the Solana ledger imposes costs. Transaction fees paid by the submitter, and collected by a validator, in theory, account for the acute, transactional, costs of validating and adding that data to the ledger. At the same time, our compensation design for archivers (see [Replication-client Economics](ed_replication_client_economics.md)), in theory, accounts for the long term storage of the historical ledger. Unaccounted for in this process is the mid-term storage of active ledger state, necessarily maintained by the rotating validator set. This type of storage imposes costs not only on validators but also on the broader network: as active state grows, so does data transmission and validation overhead. To account for these costs, we describe here our preliminary design and implementation of storage rent.
Storage rent can be paid via one of two methods:
**Method 1: Set it and forget it**
With this approach, accounts with two-years worth of rent deposits secured are exempt from network rent charges. By maintaining this minimum-balance, the broader network benefits from reduced liquidity and the account holder can trust that their `Account::data` will be retained for continual access/usage.
**Method 2: Pay per byte**
If an account has less than two-years worth of deposited rent the network charges rent on a per-epoch basis, in credit for the next epoch (but in arrears when necessary). This rent is deducted at a rate specified in genesis, in lamports per kilobyte-year.
For information on the technical implementation details of this design, see the [Rent](rent.md) section.
Validator-clients are eligible to receive protocol-based \(i.e. inflation-based\) rewards issued via stake-based annual interest rates \(calculated per epoch\) by providing compute \(CPU+GPU\) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic disinflationary schedule as a function of total amount of circulating tokens. The network is expected to launch with an annual inflation rate around 15%, set to decrease by 15% per year until a long-term stable rate of 1-2% is reached. These issuances are to be split and distributed to participating validators and archivers, with around 90% of the issued tokens allocated for validator rewards. Because the network will be distributing a fixed amount of inflation rewards across the stake-weighted validator set, any individual validator's interest rate will be a function of the amount of staked SOL in relation to the circulating SOL.
Additionally, validator clients may earn revenue through fees via state-validation transactions and Proof-of-Replication \(PoRep\) transactions. For clarity, we separately describe the design and motivation of these revenue distributions for validation-clients below: state-validation protocol-based rewards, state-validation transaction fees and rent, and PoRep-validation transaction fees.
As previously mentioned, validator-clients will also be responsible for validating PoReps submitted into the PoH stream by archiver-clients. In this case, validators are providing compute \(CPU/GPU\) and light storage resources to confirm that these replication proofs could only be generated by a client that is storing the referenced PoH ledger block.
While replication-clients are incentivized and rewarded through a protocol-based rewards schedule \(see [Replication-client Economics](../ed_replication_client_economics/)\), validator-clients will be incentivized to include and validate PoReps in PoH through collection of transaction fees associated with the submitted PoReps and distribution of protocol rewards proportional to the validated PoReps. As will be described in detail in Section 3.1, replication-client rewards are protocol-based and designed to reward based on a global data redundancy factor. I.e. the protocol will incentivize replication-client participation through rewards based on a target ledger redundancy \(e.g. 10x data redundancy\).
The validation of PoReps by validation-clients is computationally more expensive than state-validation \(detail in the [Economic Sustainability](../ed_economic_sustainability.md) chapter\), thus the transaction fees are expected to be proportionally higher.
There are various attack vectors available for colluding validation and replication clients, also described in detail below in [Economic Sustainability](../ed_economic_sustainability/README.md). To protect against various collusion attack vectors, for a given epoch, validator rewards are distributed across participating validation-clients in proportion to the number of validated PoReps in the epoch less the number of PoReps that mismatch the archiver's challenge. The PoRep challenge game is described in [Ledger Replication](https://github.com/solana-labs/solana/blob/master/book/src/ledger-replication.md#the-porep-game). This design rewards validators proportional to the number of PoReps they process and validate, while providing negative pressure for validation-clients to submit lazy or malicious invalid votes on submitted PoReps \(note that it is computationally prohibitive to determine whether a validator-client has marked a valid PoRep as invalid\).
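As a rough sketch of this distribution rule \(the names below are illustrative, not protocol definitions\), a validator's share of an epoch's PoRep rewards might be computed as:

```rust
/// Credit each validator with its validated PoReps less those that
/// mismatched an archiver's challenge, then pay a pro-rata share of
/// the epoch's PoRep rewards.
fn porep_reward_share(
    validated: u64,
    mismatched: u64,
    cluster_net_validated: u64, // sum of (validated - mismatched) across validators
    epoch_porep_rewards: u64,
) -> u64 {
    let net = validated.saturating_sub(mismatched);
    epoch_porep_rewards * net / cluster_net_validated.max(1)
}
```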
Validator-clients have two functional roles in the Solana network:
* Validate \(vote\) the current global state of that PoH along with any Proofs-of-Replication \(see [Replication Client Economics](../ed_replication_client_economics/)\) that they are eligible to validate.
* Be elected as ‘leader’ on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and Proofs-of-Replication and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. As previously discussed, compensation for validator-clients is provided via a protocol-based annual inflation rate dispersed in proportion to the stake-weight of each validator \(see below\) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each transaction fee, less a protocol-specified amount that is destroyed \(see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)\). PoRep transaction fees are also collected by the leader client and validator PoRep rewards are distributed in proportion to the number of validated PoReps less the number of PoReps that mismatch an archiver's challenge. \(see [Replication-client Transaction Fees](ed_vce_replication_validation_transaction_fees.md)\)
The effective protocol-based annual interest rate \(%\) per epoch received by validation-clients is to be a function of:
* the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule \(see [Validation-client Economics](.)\)
* the fraction of staked SOLs out of the current total circulating supply,
* the up-time/participation \[% of available slots that validator had opportunity to vote on\] of a given validator over the previous epoch.
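Combining these three factors, a toy calculation of a validator's effective rate \(an illustrative model, not the protocol formula\) might look like:

```rust
/// Toy model: the validator-allocated share of global inflation is
/// spread across the staked fraction of supply, then discounted by
/// the validator's uptime. All parameters are illustrative.
fn effective_interest_rate(
    global_inflation: f64, // from the disinflationary issuance schedule
    validator_share: f64,  // fraction of issuance allocated to validators
    staked_fraction: f64,  // staked SOL / circulating SOL
    uptime: f64,           // fraction of eligible slots voted on last epoch
) -> f64 {
    (global_inflation * validator_share / staked_fraction) * uptime
}
```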
The first factor is a function of protocol parameters only \(i.e. independent of validator behavior in a given epoch\) and results in a global validation reward schedule designed to incentivize early participation, provide clear monetary stability and provide optimal security in the network.
At any given point in time, a specific validator's interest rate can be determined based on the proportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens, with an additional 250MM vesting over 3 years. Additionally, an inflation rate is specified at network launch of 7.5%, with a disinflationary schedule of 20% decrease in inflation rate per year \(the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch\). With these broad assumptions, the 10-year inflation rate \(adjusted daily for this example\) is shown in **Figure 2**, while the total circulating token supply is illustrated in **Figure 3**. Neglected in this toy-model is the inflation suppression due to the portion of each transaction fee that is to be destroyed.
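A toy model of this example issuance schedule, assuming the 7.5% initial rate decays by 20% per year down to a long-term 1.5% floor:

```rust
/// Example disinflationary schedule: 7.5% initial annual inflation,
/// reduced 20% per year, floored at the long-term 1.5% rate.
fn annual_inflation_rate(years_since_launch: f64) -> f64 {
    (0.075 * 0.8f64.powf(years_since_launch)).max(0.015)
}
```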
**Figure 2:** In this example schedule, the annual inflation rate \[%\] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.

**Figure 3:** The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in **Figure 2**.

Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabilize near 1-2%, which also results in a fixed, long-term interest rate for validator-clients. This value does not represent the total interest available to validator-clients, as transaction fees for state-validation and ledger-storage replication \(PoReps\) are not accounted for here. Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators and archiver nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assumed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of the % of circulating token supply that is staked is shown in **Figure 4**.
**Figure 4:** Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.
This epoch-specific protocol-defined interest rate sets an upper limit on the _protocol-generated_ annual interest rate \(not absolute total interest rate\) possible to be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch.
Running a Solana validation-client requires relatively modest upfront hardware capital investment. **Table 2** provides an example hardware configuration to support ~1M tx/s with estimated ‘off-the-shelf’ costs:
| Component | Example | Estimated Cost |
| :--- | :--- | :--- |
| Network \(1\) | Google webpass business bay area 1gbps unlimited | $5500/mo |
| Network \(2\) | Hurricane Electric bay area colo 1gbps | $500/mo |

**Table 2:** Example high-end hardware setup for running a Solana client.
Despite the low barrier to entry as a validation-client, from a capital investment perspective, as in any developing economy, there will be much opportunity and need for trusted validation services as evidenced by node reliability, UX/UI, APIs and other software accessibility tools. Additionally, although Solana’s validator node startup costs are nominal when compared to similar networks, they may still be somewhat restrictive for some potential participants. In the spirit of developing a true decentralized, permissionless network, these interested parties still have two options to become involved in the Solana network/economy:
1. Delegation of previously acquired tokens with a reliable validation node to earn a portion of interest generated
2. Provide local storage space as a replication-client and receive rewards by submitting Proof-of-Replication \(see [Replication-client Economics](../ed_replication_client_economics/)\).
a. This participant has the additional option to directly delegate their earned storage rewards \([Replication-client Reward Auto-delegation](../ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md)\)
Delegation of tokens to validation-clients, via option 1, provides a way for passive Solana token holders to become part of the active Solana economy and earn interest rates proportional to the interest rate generated by the delegated validation-client. Additionally, this feature intends to create a healthy validation-client market, with potential validation-client nodes competing to build reliable, transparent and profitable delegation services.
Accounts on Solana may have owner-controlled state \(`Account::data`\) that's separate from the account's balance \(`Account::lamports`\). Since validators on the network need to maintain a working copy of this state in memory, the network charges a time-and-space based fee for this resource consumption, also known as Rent.
## Two-tiered rent regime
Accounts which maintain a minimum balance equivalent to 2 years of rent payments are exempt. Accounts whose balance falls below this threshold are charged rent at a rate specified in genesis, in lamports per kilobyte-year. The network charges rent on a per-epoch basis, in credit for the next epoch \(but in arrears when necessary\), and `Account::rent_epoch` keeps track of the next time rent should be collected from the account.
## Collecting rent
Rent is due at account creation time for one epoch's worth of time, and the new account has `Account::rent_epoch` of `current_epoch + 1`. After that, the bank deducts rent from accounts during normal transaction processing as part of the load phase.
If the account is in the exempt regime, `Account::rent_epoch` is simply pushed to `current_epoch + 1`.
If the account is non-exempt, the difference between the next epoch and `Account::rent_epoch` is used to calculate the amount of rent owed by this account \(via `Rent::due()`\). Any fractional lamports of the calculation are truncated. Rent due is deducted from `Account::lamports` and `Account::rent_epoch` is updated to the next epoch. If the amount of rent due is less than one lamport, no changes are made to the account.
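A minimal sketch of this collection logic, assuming genesis-configured rate and epoch-length parameters \(the names below are stand-ins, not the actual runtime values\):

```rust
/// Sketch of per-account rent collection at load time. The rate is
/// expressed per byte-year here purely for illustration.
fn collect_rent(
    lamports: &mut u64,
    data_len: usize,
    rent_epoch: &mut u64,
    current_epoch: u64,
    lamports_per_byte_year: f64, // genesis-configured rate (stand-in)
    years_per_epoch: f64,        // derived from slot timing (stand-in)
) {
    // Exempt regime: a balance covering two years of rent.
    let exempt_minimum = (data_len as f64 * lamports_per_byte_year * 2.0) as u64;
    if *lamports >= exempt_minimum {
        *rent_epoch = current_epoch + 1;
        return;
    }
    // Non-exempt: charge for the epochs between rent_epoch and the next
    // epoch; fractional lamports are truncated by the cast.
    let epochs_elapsed = (current_epoch + 1).saturating_sub(*rent_epoch);
    let due = (data_len as f64
        * lamports_per_byte_year
        * years_per_epoch
        * epochs_elapsed as f64) as u64;
    // Less than one lamport due: leave the account untouched.
    if due >= 1 {
        *lamports = lamports.saturating_sub(due);
        *rent_epoch = current_epoch + 1;
    }
}
```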
Accounts whose balance is insufficient to satisfy the rent that would be due simply fail to load.
A percentage of the rent collected is destroyed. The rest is distributed to validator accounts by stake weight, a la transaction fees, at the end of every slot.
## Read-only accounts
Read-only accounts are not charged rent in the current implementation.
## Design considerations, others considered
Under this design, it is possible to have accounts that linger, never get touched, and never have to pay rent. `Noop` instructions that name these accounts can be used to "garbage collect", but it'd also be possible for accounts that never get touched to migrate out of a validator's working set, thereby reducing memory consumption and obviating the need to charge rent.
### Ad-hoc collection
Collecting rent on an as-needed basis \(i.e. whenever accounts were loaded/accessed\) was considered. The issues with such an approach are:
* accounts loaded as "credit only" for a transaction could very reasonably be expected to have rent due, but would not be writable during any such transaction
* a mechanism to "beat the bushes" \(i.e. go find accounts that need to pay rent\) is desirable, lest accounts that are loaded infrequently get a free ride
### System instruction for collecting rent
Collecting rent via a system instruction was considered, as it would naturally have distributed rent to active and stake-weighted nodes and could have been done incrementally. However:
* it would have adversely affected network throughput
* it would require special-casing by the runtime, as accounts with non-SystemProgram owners may be debited by this instruction
* someone would have to issue the transactions
### Account scans on every epoch
Scanning the entire Bank for accounts that owe rent at the beginning of each epoch was considered. This would have been an expensive operation, and would require that the entire current state of the network be present on every validator at the beginning of each epoch.
Solana is an open source project implementing a new, high-performance, permissionless blockchain. Solana is also the name of a company headquartered in San Francisco that maintains the open source project.
## About this Book
This book describes the Solana open source project, a blockchain built from the ground up for scale. The book covers why Solana is useful, how to use it, how it works, and why it will continue to work long after the company Solana closes its doors. The goal of the Solana architecture is to demonstrate there exists a set of software algorithms that when used in combination to implement a blockchain, removes software as a performance bottleneck, allowing transaction throughput to scale proportionally with network bandwidth. The architecture goes on to satisfy all three desirable properties of a proper blockchain: it is scalable, secure and decentralized.
The architecture describes a theoretical upper bound of 710 thousand transactions per second \(tps\) on a standard gigabit network and 28.4 million tps on 40 gigabit. Furthermore, the architecture supports safe, concurrent execution of programs authored in general purpose programming languages such as C or Rust.
## Disclaimer
All claims, content, designs, algorithms, estimates, roadmaps, specifications, and performance measurements described in this project are done with the author's best effort. It is up to the reader to check and validate their accuracy and truthfulness. Furthermore, nothing in this project constitutes a solicitation for investment.
## History of the Solana Codebase
In November of 2017, Anatoly Yakovenko published a whitepaper describing Proof of History, a technique for keeping time between computers that do not trust one another. From Anatoly's previous experience designing distributed systems at Qualcomm, Mesosphere and Dropbox, he knew that a reliable clock makes network synchronization very simple. When synchronization is simple the resulting network can be blazing fast, bound only by network bandwidth.
Anatoly watched as blockchain systems without clocks, such as Bitcoin and Ethereum, struggled to scale beyond 15 transactions per second worldwide when centralized payment systems such as Visa required peaks of 65,000 tps. Without a clock, it was clear they'd never graduate to being the global payment system or global supercomputer most had dreamed them to be. When Anatoly solved the problem of getting computers that don’t trust each other to agree on time, he knew he had the key to bring 40 years of distributed systems research to the world of blockchain. The resulting cluster wouldn't be just 10 times faster, or a 100 times, or a 1,000 times, but 10,000 times faster, right out of the gate!
Anatoly's implementation began in a private codebase and was implemented in the C programming language. Greg Fitzgerald, who had previously worked with Anatoly at semiconductor giant Qualcomm Incorporated, encouraged him to reimplement the project in the Rust programming language. Greg had worked on the LLVM compiler infrastructure, which underlies both the Clang C/C++ compiler as well as the Rust compiler. Greg claimed that the language's safety guarantees would improve software productivity and that its lack of a garbage collector would allow programs to perform as well as those written in C. Anatoly gave it a shot and just two weeks later, had migrated his entire codebase to Rust. Sold. With plans to weave all the world's transactions together on a single, scalable blockchain, Anatoly called the project Loom.
On February 13th of 2018, Greg began prototyping the first open source implementation of Anatoly's whitepaper. The project was published to GitHub under the name Silk in the loomprotocol organization. On February 28th, Greg made his first release, demonstrating 10 thousand signed transactions could be verified and processed in just over half a second. Shortly after, another former Qualcomm cohort, Stephen Akridge, demonstrated throughput could be massively improved by offloading signature verification to graphics processors. Anatoly recruited Greg, Stephen and three others to co-found a company, then called Loom.
Around the same time, Ethereum-based project Loom Network sprung up and many people were confused about whether they were the same project. The Loom team decided it would rebrand. They chose the name Solana, a nod to a small beach town North of San Diego called Solana Beach, where Anatoly, Greg and Stephen lived and surfed for three years when they worked for Qualcomm. On March 28th, the team created the Solana Labs GitHub organization and renamed Greg's prototype Silk to Solana.
In June of 2018, the team scaled up the technology to run on cloud-based networks and on July 19th, published a 50-node, permissioned, public testnet consistently supporting bursts of 250,000 transactions per second. In a later release in December, called v0.10 Pillbox, the team published a permissioned testnet running 150 nodes on a gigabit network and demonstrated soak tests processing an _average_ of 200 thousand transactions per second with bursts over 500 thousand. The project was also extended to support on-chain programs written in the C programming language and run concurrently in a safe execution environment called BPF.
## What is a Solana Cluster?
A cluster is a set of computers that work together and can be viewed from the outside as a single system. A Solana cluster is a set of independently owned computers working together \(and sometimes against each other\) to verify the output of untrusted, user-submitted programs. A Solana cluster can be utilized any time a user wants to preserve an immutable record of events in time or programmatic interpretations of those events. One use is to track which of the computers did meaningful work to keep the cluster running. Another use might be to track the possession of real-world assets. In each case, the cluster produces a record of events called the ledger. It will be preserved for the lifetime of the cluster. As long as someone somewhere in the world maintains a copy of the ledger, the output of its programs \(which may contain a record of who possesses what\) will forever be reproducible, independent of the organization that launched it.
## What are SOLs?
A SOL is the name of Solana's native token, which can be passed to nodes in a Solana cluster in exchange for running an on-chain program or validating its output. The system may perform micropayments of fractional SOLs, which are called _lamports_. They are named in honor of Solana's biggest technical influence, [Leslie Lamport](https://en.wikipedia.org/wiki/Leslie_Lamport). A lamport has a value of 0.000000001 SOL.
The following architectural proposals have been accepted by the Solana team, but are not yet fully implemented. The proposals may be implemented as described, implemented differently as issues in the designs become evident, or not implemented at all. If implemented, the descriptions will be moved from this section to earlier chapters in a future version of this book.
Currently, there is no way to create instruction `pay_and_launch_missiles` that executes `token_instruction::pay` from the `acme` program. The workaround is to extend the `acme` program with the implementation of the `token` program, and create `token` accounts with `ACME_PROGRAM_ID`, which the `acme` program is permitted to modify. With that workaround, `acme` can modify token-like accounts created by the `acme` program, but not token accounts created by the `token` program.
## Proposed Solution
The goal of this design is to modify Solana's runtime such that an on-chain program can invoke an instruction from another program.
Given two on-chain programs `token` and `acme`, each implementing instructions `pay()` and `launch_missiles()` respectively, we would ideally like to implement the `acme` module with a call to a function defined in the `token` module:
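A minimal sketch of that ideal, assuming the `token` crate could be linked directly and that both programs operate on the same `keyed_accounts` slice \(types as used elsewhere in this proposal\):

```rust
use token;

fn launch_missiles(keyed_accounts: &[KeyedAccount]) -> Result<()> {
    // acme-specific logic on acme-owned accounts...
    Ok(())
}

fn pay_and_launch_missiles(keyed_accounts: &[KeyedAccount]) -> Result<()> {
    token::pay(&keyed_accounts[1..])?; // direct call into the token module
    launch_missiles(keyed_accounts)?;
    Ok(())
}
```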
The above code would require that the `token` crate be dynamically linked, so that a custom linker could intercept calls and validate accesses to `keyed_accounts`. That is, even though the client intends to modify both `token` and `acme` accounts, only the `token` program is permitted to modify the `token` account, and only the `acme` program is permitted to modify the `acme` account.
Backing off from that ideal cross-program call, a slightly more verbose solution is to expose the token's existing `process_instruction()` entrypoint to the `acme` program:

```rust
fn pay_and_launch_missiles(keyed_accounts: &[KeyedAccount]) -> Result<()> {
    let alice_pubkey = keyed_accounts[1].key;
    let instruction = token_instruction::pay(&alice_pubkey);
    process_instruction(&instruction)?;
    launch_missiles(keyed_accounts)?;
    Ok(())
}
```
where `process_instruction()` is built into Solana's runtime and responsible for routing the given instruction to the `token` program via the instruction's `program_id` field. Before invoking `pay()`, the runtime must also ensure that `acme` didn't modify any accounts owned by `token`. It does this by calling `runtime::verify_account_changes()` and then afterward updating all the `pre_*` variables to tentatively commit `acme`'s account modifications. After `pay()` completes, the runtime must again ensure that `token` didn't modify any accounts owned by `acme`. It should call `verify_account_changes()` again, but this time with the `token` program ID. Lastly, after `pay_and_launch_missiles()` completes, the runtime must call `verify_account_changes()` one more time, where it normally would, but using all updated `pre_*` variables. If executing `pay_and_launch_missiles()` up to `pay()` made no invalid account changes, `pay()` made no invalid changes, and executing from `pay()` until `pay_and_launch_missiles()` returns made no invalid changes, then the runtime can transitively assume `pay_and_launch_missiles()` as whole made no invalid account changes, and therefore commit all account modifications.
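To make the ordering concrete, here is a sketch of that check sequence in the spirit of the proposal's illustrative snippets; the `Runtime` handle and method names mirror the prose and are assumptions, not an actual runtime API:

```rust
// Illustrative sequence around the cross-program call inside
// pay_and_launch_missiles().
fn cross_program_invoke(
    runtime: &mut Runtime,
    instruction: &Instruction,
) -> Result<(), InstructionError> {
    // acme has executed up to the cross-program call at this point.
    // 1. Verify acme made no invalid account changes so far, then
    //    tentatively commit them by refreshing the pre_* snapshots.
    runtime.verify_account_changes(&ACME_PROGRAM_ID)?;
    runtime.refresh_pre_state();

    // 2. Route the instruction to the token program (via the
    //    instruction's program_id field) and execute pay().
    runtime.process_instruction(instruction)?;

    // 3. Verify token made no invalid changes; refresh snapshots again.
    runtime.verify_account_changes(&TOKEN_PROGRAM_ID)?;
    runtime.refresh_pre_state();

    // 4. acme resumes with launch_missiles(); after it returns, the
    //    runtime runs verify_account_changes() one final time against
    //    the updated pre_* state before committing all modifications.
    Ok(())
}
```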
### Setting `KeyedAccount.is_signer`
When `process_instruction()` is invoked, the runtime must create a new `KeyedAccounts` parameter using the signatures from the _original_ transaction data. Since the `token` program is immutable and existed on-chain prior to the `acme` program, the runtime can safely treat the transaction signature as a signature of a transaction with a `token` instruction. When the runtime sees the given instruction references `alice_pubkey`, it looks up the key in the transaction to see if that key corresponds to a transaction signature. In this case it does and so sets `KeyedAccount.is_signer`, thereby authorizing the `token` program to modify Alice's account.
The storage epoch should be the number of slots which results in around 100GB-1TB of ledger to be generated for archivers to store. Archivers will start storing ledger when a given fork has a high probability of not being rolled back.
## Validator behavior
1. Every NUM\_KEY\_ROTATION\_TICKS the validator also validates samples received
   from archivers. It signs the PoH hash at that point and uses the following
   algorithm with the signature as the input:
   * The low 5 bits of the first byte of the signature create an index into
     another starting byte of the signature.
   * The validator then looks at the set of storage proofs where the bytes of
     the proof's sha state vector, starting from the low byte, match exactly
     with the chosen byte\(s\) of the signature.
   * If the set of proofs is larger than the validator can handle, then it
     increases to matching 2 bytes in the signature.
   * The validator continues to increase the number of matching bytes until a
     workable set is found \(a sketch of this sampling rule follows the list\).
   * It then creates a mask of valid proofs and fake proofs and sends it to
     the leader. This is a storage proof confirmation transaction.
2. After a lockout period of NUM\_SECONDS\_STORAGE\_LOCKOUT seconds, the
   validator then submits a storage proof claim transaction, which triggers
   distribution of the storage reward to the validators and archivers party
   to the proofs, provided no challenges were seen for the proof.
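A sketch of that sampling rule in isolation \(the types and the widening bound here are illustrative\):

```rust
/// Hypothetical stand-in for an on-chain storage proof.
struct StorageProof {
    sha_state: [u8; 32],
}

/// Select proofs whose low sha_state bytes match the signature bytes
/// at an index derived from the signature itself, widening the match
/// until the set is small enough to validate.
fn sample_proofs<'a>(
    signature: &[u8; 64],
    proofs: &'a [StorageProof],
    max_workable: usize,
) -> Vec<&'a StorageProof> {
    let index = (signature[0] & 0x1f) as usize; // low 5 bits of first byte
    for match_len in 1..=4 {
        let selected: Vec<&StorageProof> = proofs
            .iter()
            .filter(|p| p.sha_state[..match_len] == signature[index..index + match_len])
            .collect();
        if selected.len() <= max_workable {
            return selected;
        }
    }
    Vec::new()
}
```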
## Archiver behavior
1. The archiver then generates another set of offsets for which it submits a
   fake proof with an incorrect sha state. It can be proven to be fake by
   providing the seed for the hash result.
   * A fake proof should consist of an archiver hash of a signature of a PoH
     value. That way when the archiver reveals the fake proof, it can be
     verified on chain.
2. The archiver monitors the ledger; if it sees a fake proof integrated, it
   creates a challenge transaction and submits it to the current leader. The
   transaction proves the validator incorrectly validated a fake storage
   proof. The archiver is rewarded and the validator's staking balance is
   slashed or frozen.
## Storage proof contract logic
Each archiver and validator will have their own storage account. The validator's account would be separate from their gossip id, similar to their vote account. These should be implemented as two programs: one which handles the validator as the key signer and one for the archiver. In that way when the programs reference other accounts, they can check the program id to ensure it is a validator or archiver account they are referencing.
### SubmitMiningProof
```text
SubmitMiningProof {
slot: u64,
sha_state: Hash,
signature: Signature,
};
keys = [archiver_keypair]
```
Archivers create these after mining their stored ledger data for a certain hash value. The slot is the end slot of the segment of ledger they are storing, and the sha\_state is the result of the archiver using the hash function to sample its encrypted ledger segment. The signature is the signature that was created when they signed a PoH value for the current storage epoch. The list of proofs from the current storage epoch should be saved in the account state, and then transferred to a list of proofs for the previous epoch when the epoch passes. In a given storage epoch a given archiver should only submit proofs for one segment.
The program should have a list of slots which are valid storage mining slots. This list should be maintained by keeping track of rooted slots which a significant portion of the network has voted on with a high lockout value, maybe 32-votes old. Every SLOTS\_PER\_SEGMENT number of slots would be added to this set. The program should check that the slot is in this set. The set can be maintained by receiving an AdvertiseStorageRecentBlockhash transaction and checking with its bank/Tower BFT state.
The program should do a signature verify check on the signature, public key from the transaction submitter and the message of the previous storage epoch PoH value.
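### ProofValidation

A sketch of this transaction's layout, inferred from the proof\_mask and keys described below \(the field type is an assumption\):

```text
ProofValidation {
    proof_mask: Vec<ProofStatus>,
}
keys = [validator_keypair, archiver_keypair(s) (unsigned)]
```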
A validator will submit this transaction to indicate that a set of proofs for a given segment are valid, not valid, or skipped \(where the validator did not look at them\). The keypairs for the archivers that it looked at should be referenced in the keys so the program logic can go to those accounts and see that the proofs were generated in the previous epoch. The sampling of the storage proofs should be verified, ensuring that the correct proofs are skipped by the validator according to the sampling logic outlined in the validator behavior above.
The included archiver keys will indicate the storage samples which are being referenced; the length of the proof\_mask should be verified against the set of storage proofs in the referenced archiver account\(s\), and should match the number of proofs submitted in the previous storage epoch in the state of said archiver account.
### ClaimStorageReward
```text
ClaimStorageReward {
}
keys = [validator_keypair or archiver_keypair, validator/archiver_keypairs (unsigned)]
```
Archivers and validators will use this transaction to get paid tokens from a program state where SubmitMiningProof, ProofValidation and ChallengeProofValidation transactions are in a state where proofs have been submitted and validated and there are no ChallengeProofValidations referencing those proofs. A validator should reference the archiver keypairs whose proofs it validated in the relevant epoch, and an archiver should reference the validator keypairs which validated its proofs and from which it wants to be rewarded.
### ChallengeProofValidation
```text
ChallengeProofValidation {
proof_index: u64,
hash_seed_value: Vec<u8>,
}
keys = [archiver_keypair, validator_keypair]
```
This transaction is for catching lazy validators who are not doing the work to validate proofs. An archiver will submit this transaction when it sees that a validator has approved a fake SubmitMiningProof transaction. Since the archiver is a light client not looking at the full chain, it will have to ask a validator or some set of validators for this information, perhaps via an RPC call, to obtain all ProofValidations for a certain segment in the previous storage epoch. The program will look in the validator account state, see that a ProofValidation was submitted in the previous storage epoch, hash the hash\_seed\_value, and check that the hash matches the SubmitMiningProof transaction and that the validator marked it as valid. If so, it will save the challenge to the list of challenges that it has in its state.
### AdvertiseStorageRecentBlockhash
```text
AdvertiseStorageRecentBlockhash {
hash: Hash,
slot: u64,
}
```
Validators and archivers will submit this to indicate that a new storage epoch has passed and that the storage proofs which are current proofs should now be for the previous epoch. Other transactions should check to see that the epoch that they are referencing is accurate according to current chain state.
This document describes how to set up an archiver in the testnet.
Please note some of the information and instructions described here may change in future releases.
## Overview
Archivers are specialized light clients. They download a part of the ledger \(a.k.a. a segment\) and store it. They earn rewards for storing segments.
The testnet features a validator running at devnet.solana.com, which serves as the entrypoint to the cluster for your archiver node.
Additionally there is a blockexplorer available at [http://devnet.solana.com/](http://devnet.solana.com/).
The testnet is configured to reset the ledger daily, or sooner should the hourly automated cluster sanity test fail.
## Machine Requirements
Archivers don't need specialized hardware. Anything with more than 128GB of disk space will be able to participate in the cluster as an archiver node.
Currently the disk space requirements are very low but we expect them to change in the future.
Prebuilt binaries are available for Linux x86\_64 \(Ubuntu 18.04 recommended\), macOS, and Windows.
### Confirm The Testnet Is Reachable
Before starting an archiver node, sanity check that the cluster is accessible to your machine by running some simple commands. If any of the commands fail, please retry 5-10 minutes later to confirm the testnet is not just restarting itself before debugging further.
Fetch the current transaction count over JSON RPC:
```bash
curl -X POST -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","id":1, "method":"getTransactionCount"}' http://devnet.solana.com:8899
```
Inspect the blockexplorer at [http://devnet.solana.com/](http://devnet.solana.com/) for activity.
View the [metrics dashboard](https://metrics.solana.com:3000/d/testnet-beta/testnet-monitor-beta?var-testnet=testnet) for more detail on cluster activity.
## Archiver Setup
#### Obtaining The Software
#### Bootstrap with `solana-install`
The `solana-install` tool can be used to easily install and upgrade the cluster software.
#### Linux and macOS
```bash
curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.18.0/install/solana-install-init.sh | sh -s
```
Alternatively build the `solana-install` program from source and run the following command to obtain the same result:
```bash
solana-install init
```
#### Windows
Download and install **solana-install-init** from [https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest)
After a successful install, `solana-install update` may be used to easily update the software to a newer version at any time.
#### Download Prebuilt Binaries
If you would rather not use `solana-install` to manage the install, you can manually download and install the binaries.
#### Linux
Download the binaries by navigating to [https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest), download **solana-release-x86\_64-unknown-linux-gnu.tar.bz2**, then extract the archive:
```bash
tar jxf solana-release-x86_64-unknown-linux-gnu.tar.bz2
cd solana-release/
export PATH=$PWD/bin:$PATH
```
#### macOS
Download the binaries by navigating to [https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest), download **solana-release-x86\_64-apple-darwin.tar.bz2**, then extract the archive:
```bash
tar jxf solana-release-x86_64-apple-darwin.tar.bz2
cd solana-release/
export PATH=$PWD/bin:$PATH
```
#### Windows
Download the binaries by navigating to [https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest), download **solana-release-x86\_64-pc-windows-msvc.tar.bz2**, then extract it into a folder. It is a good idea to add this extracted folder to your Windows PATH.
## Starting The Archiver
Try running the following command to join the gossip network and view all the other nodes in the cluster:
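```bash
# A likely invocation, assuming the devnet entrypoint described above:
solana-gossip spy --entrypoint devnet.solana.com:8001
```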
* We recommend a CPU with as many cores as possible. AMD Threadripper or Intel Server \(Xeon\) CPUs are fine.
* We recommend AMD Threadripper as you get a larger number of cores for parallelization compared to Intel.
* Threadripper also has a cost-per-core advantage and a greater number of PCIe lanes compared to the equivalent Intel part. PoH \(Proof of History\) is based on sha256 and Threadripper also supports sha256 hardware instructions.
* SSD size and I/O style \(SATA vs NVMe/M.2\) for a validator
* Minimum example - Samsung 860 Evo 2TB
* Mid-range example - Samsung 860 Evo 4TB
* High-end example - Samsung 860 Evo 4TB
* GPUs
* While a CPU-only node may be able to keep up with the initial idling network, once transaction throughput increases, GPUs will be necessary
* What kind of GPU?
* We recommend Nvidia 2080Ti or 1080Ti series consumer GPU or Tesla series server GPUs.
* We do not currently support OpenCL and therefore do not support AMD GPUs. We have a bounty out for someone to port us to OpenCL. Interested? [Check out our GitHub.](https://github.com/solana-labs/solana)
* Power Consumption
* Approximate power consumption for a validator node running an AMD Threadripper 2950x and 2x 2080Ti GPUs is 800-1000W.
### Preconfigured Setups
Here are our recommendations for low, medium, and high end machine specifications:
| | Low end | Medium end | High end | Notes |
| :--- | :--- | :--- | :--- | :--- |
| CPU | AMD Threadripper 1900x | AMD Threadripper 2920x | AMD Threadripper 2950x | Consider a 10Gb-capable motherboard with as many PCIe lanes and m.2 slots as possible. |
| RAM | 16GB | 32GB | 64GB | |
| OS Drive | Samsung 860 Evo 2TB | Samsung 860 Evo 4TB | Samsung 860 Evo 4TB | Or equivalent SSD |
| Accounts Drive\(s\) | None | Samsung 970 Pro 1TB | 2x Samsung 970 Pro 1TB | |
| GPU | 4x Nvidia 1070 or 2x Nvidia 1080 Ti or 2x Nvidia 2070 | 2x Nvidia 2080 Ti | 4x Nvidia 2080 Ti | Any number of CUDA-capable GPUs are supported on Linux platforms. |
## Software
* We build and run on Ubuntu 18.04. Some users have had trouble when running on Ubuntu 16.04.
* See [Validator Software](validator-software.md) for the current Solana software release.
Ensure that the machine used is not behind a residential NAT, to avoid
NAT traversal issues. A cloud-hosted machine works best. **Ensure that IP ports 8000 through 10000 are not blocked for Internet inbound and outbound traffic.**
For more information on port forwarding with regards to residential networks,
see [this document](http://www.mcs.sdsmt.edu/lpyeatt/courses/314/PortForwardingSetup.pdf).
Prebuilt binaries are available for Linux x86\_64 \(Ubuntu 18.04 recommended\).
MacOS or WSL users may build from source.
## GPU Requirements
CUDA is required to make use of the GPU on your system. The provided Solana
release binaries are built on Ubuntu 18.04 with [CUDA Toolkit 10.1 update 1](https://developer.nvidia.com/cuda-toolkit-archive). If your machine is using
a different CUDA version then you will need to rebuild from source.
* [\#validator-support](https://discord.gg/rZsenD) General support channel for any Validator related queries.
* [\#tourdesol](https://discord.gg/BdujK2) Discussion and support channel for Tour de SOL participants ([What is Tour de SOL?](https://solana.com/tds/)).
* [\#tourdesol-announcements](https://discord.gg/Q5TxEC) The single source of truth for critical information relating to Tour de SOL.
* [\#tourdesol-stage0](https://discord.gg/Xf8tES) Discussion for events within Tour de SOL Stage 0. Stage 0 includes all the dry-runs.