Move from gitbook to docusaurus, build docs in Travis CI (#10970)

* fix: ignore unknown fields in more RPC responses

* Remove mdbook infrastructure

* Delete gitattributes and other theme related items

Move all docs to /docs folder to support Docusaurus

* all docs need to be moved to /docs

* can be changed in the future

Add Docusaurus infrastructure

* initialize docusaurus repo

Remove trailing whitespace, add support for eslint

Change Docusaurus configuration to support `src`

* No need to rename the folder! Change a setting and we're all good to
go.

* Fixing rebase items

* Remove unnecessary markdown file, fix typo

* Some fonts are hard to read. Others, not so much. Rubik, you've been
sidelined. Roboto, into the limelight!

* As much as we all love tutorials, I think we all can navigate around a
markdown file. Say goodbye, `mdx.md`.

* Setup deployment infrastructure

* Move docs job from buildkite to travis

* Fix travis config

* Add vercel token to travis config

* Only deploy docs after merge

* Docker rust env

* Revert "Docker rust env"

This reverts commit f84bc208e807aab1c0d97c7588bbfada1fedfa7c.

* Build CLI usage from docker

* Pacify shellcheck

* Run job on PR and new commits for publication

* Update README

* Fix svg image building

* shellcheck

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Ryan Shea <rmshea@users.noreply.github.com>
Co-authored-by: publish-docs.sh <maintainers@solana.com>
This commit is contained in:
Dan Albert, 2020-07-10 23:11:07 -06:00, committed by GitHub
parent 4046f87134 · commit ffeac298a2
172 changed files with 2862 additions and 3429 deletions

@ -1,4 +1,5 @@
# Implemented Design Proposals
---
title: Implemented Design Proposals
---
The following design proposals are fully implemented.

@ -1,4 +1,6 @@
# Solana ABI management process
---
title: Solana ABI management process
---
This document proposes the Solana ABI management process. The ABI management
process is an engineering practice and a supporting technical framework to avoid
@ -109,7 +111,7 @@ This part is a bit complex. There are three inter-dependent parts: `AbiExample`,
First, the generated test creates an example instance of the digested type with
a trait called `AbiExample`, which should be implemented for all digested
types, like `Serialize`, and return `Self`, like the `Default` trait. Usually,
it's provided via generic trait specialization for most common types. It is
also possible to `derive` it for `struct` and `enum` types, and it can be
hand-written if needed.

@ -1,4 +1,6 @@
# Commitment
---
title: Commitment
---
The commitment metric aims to give clients a measure of the network confirmation
and stake levels on a particular block. Clients can then use this information to
@ -47,9 +49,10 @@ banks are not included in the commitment calculations here.
Now we can naturally augment the above computation to also build a
`BlockCommitment` array for every bank `b` by:
1) Adding a `ForkCommitmentCache` to collect the `BlockCommitment` structs
2) Replacing `f` with `f'` such that the above computation also builds this
`BlockCommitment` for every bank `b`.
1. Adding a `ForkCommitmentCache` to collect the `BlockCommitment` structs
2. Replacing `f` with `f'` such that the above computation also builds this
`BlockCommitment` for every bank `b`.
We will proceed with the details of 2) as 1) is trivial.
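The per-bank aggregation that `f'` performs can be sketched as follows. This is a hypothetical illustration only; the type and method names (`BlockCommitment`, `increase_confirmation_stake`, `MAX_LOCKOUT_HISTORY`) are illustrative stand-ins, not the actual runtime's definitions.

```rust
// Illustrative cap on tracked confirmation depths.
const MAX_LOCKOUT_HISTORY: usize = 32;

#[derive(Default)]
struct BlockCommitment {
    // commitment[i] holds the total stake that has reached i + 1
    // confirmations on this bank.
    commitment: [u64; MAX_LOCKOUT_HISTORY],
}

impl BlockCommitment {
    fn increase_confirmation_stake(&mut self, confirmation_count: usize, stake: u64) {
        assert!(confirmation_count > 0 && confirmation_count <= MAX_LOCKOUT_HISTORY);
        self.commitment[confirmation_count - 1] += stake;
    }
}
```

Each vote observed while walking a bank's ancestors would fold its stake into the array at the appropriate confirmation depth.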
@ -75,6 +78,7 @@ Now more specifically, we augment the above computation to:
```
where `f'` is defined as:
```text
fn f'(
stake: &mut Stake,

@ -1,4 +1,6 @@
# Cross-Program Invocation
---
title: Cross-Program Invocation
---
## Problem
@ -67,13 +69,13 @@ mod acme {
`invoke()` is built into Solana's runtime and is responsible for routing the given instruction to the `token` program via the instruction's `program_id` field.
Before invoking `pay()`, the runtime must ensure that `acme` didn't modify any accounts owned by `token`. It does this by applying the runtime's policy to the current state of the accounts at the time `acme` calls `invoke` vs. the initial state of the accounts at the beginning of the `acme` instruction. After `pay()` completes, the runtime must again ensure that `token` didn't modify any accounts owned by `acme` by again applying the runtime's policy, but this time with the `token` program ID. Lastly, after `pay_and_launch_missiles()` completes, the runtime must apply the runtime policy one more time, where it normally would, but using all updated `pre_*` variables. If executing `pay_and_launch_missiles()` up to `pay()` made no invalid account changes, `pay()` made no invalid changes, and executing from `pay()` until `pay_and_launch_missiles()` returns made no invalid changes, then the runtime can transitively assume `pay_and_launch_missiles()` as a whole made no invalid account changes, and therefore commit all these account modifications.
### Instructions that require privileges
The runtime uses the privileges granted to the caller program to determine what privileges can be extended to the callee. Privileges in this context refer to signers and writable accounts. For example, if the instruction the caller is processing contains a signer or writable account, then the caller can invoke an instruction that also contains that signer and/or writable account.
This privilege extension relies on the fact that programs are immutable. In the case of the `acme` program, the runtime can safely treat the transaction's signature as a signature of a `token` instruction. When the runtime sees the `token` instruction references `alice_pubkey`, it looks up the key in the `acme` instruction to see if that key corresponds to a signed account. In this case, it does and thereby authorizes the `token` program to modify Alice's account.
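The privilege-extension rule can be sketched as a simple check: a callee account may only claim signer or writable status if the caller's instruction already granted it. This is an illustrative sketch; `Meta` and `may_extend` are hypothetical names, and `key` stands in for a full `Pubkey`.

```rust
#[derive(Clone, Copy, PartialEq)]
struct Meta {
    key: u8, // stand-in for a Pubkey
    is_signer: bool,
    is_writable: bool,
}

// The callee account may only hold privileges the caller already holds
// for the same key.
fn may_extend(caller: &[Meta], callee: Meta) -> bool {
    caller.iter().any(|m| {
        m.key == callee.key
            && (!callee.is_signer || m.is_signer)
            && (!callee.is_writable || m.is_writable)
    })
}
```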
### Program signed accounts
@ -86,11 +88,11 @@ To sign an account with program derived addresses, a program may `invoke_signed(
invoke_signed(
&instruction,
accounts,
&[&["First addresses seed"],
&["Second addresses first seed", "Second addresses second seed"]],
)?;
```
### Reentrancy
Reentrancy is currently limited to direct self recursion capped at a fixed depth. This restriction prevents situations where a program might invoke another from an intermediary state without the knowledge that it might later be called back into. Direct recursion gives the program full control of its state at the point that it gets called back.

@ -1,4 +1,6 @@
# Durable Transaction Nonces
---
title: Durable Transaction Nonces
---
## Problem
@ -11,8 +13,8 @@ offline network participants.
## Requirements
1) The transaction's signature needs to cover the nonce value
2) The nonce must not be reusable, even in the case of signing key disclosure
1. The transaction's signature needs to cover the nonce value
2. The nonce must not be reusable, even in the case of signing key disclosure
## A Contract-based Solution
@ -25,8 +27,8 @@ When making use of a durable nonce, the client must first query its value from
account data. A transaction is now constructed in the normal way, but with the
following additional requirements:
1) The durable nonce value is used in the `recent_blockhash` field
2) An `AdvanceNonceAccount` instruction is the first issued in the transaction
1. The durable nonce value is used in the `recent_blockhash` field
2. An `AdvanceNonceAccount` instruction is the first issued in the transaction
### Contract Mechanics
@ -63,7 +65,7 @@ WithdrawInstruction(to, lamports)
success
```
A client wishing to use this feature starts by creating a nonce account under
the system program. This account will be in the `Uninitialized` state with no
stored hash, and thus unusable.
@ -95,11 +97,7 @@ can be changed using the `AuthorizeNonceAccount` instruction. It takes one param
the `Pubkey` of the new authority. Executing this instruction grants full
control over the account and its balance to the new authority.
{% hint style="info" %}
`AdvanceNonceAccount`, `WithdrawNonceAccount` and `AuthorizeNonceAccount` all require the current
[nonce authority](../offline-signing/durable-nonce.md#nonce-authority) for the
account to sign the transaction.
{% endhint %}
> `AdvanceNonceAccount`, `WithdrawNonceAccount` and `AuthorizeNonceAccount` all require the current [nonce authority](../offline-signing/durable-nonce.md#nonce-authority) for the account to sign the transaction.
### Runtime Support
@ -114,11 +112,11 @@ instruction as the first instruction in the transaction.
If the runtime determines that a Durable Transaction Nonce is in use, it will
take the following additional actions to validate the transaction:
1) The `NonceAccount` specified in the `Nonce` instruction is loaded.
2) The `NonceState` is deserialized from the `NonceAccount`'s data field and
confirmed to be in the `Initialized` state.
3) The nonce value stored in the `NonceAccount` is tested to match against the
one specified in the transaction's `recent_blockhash` field.
1. The `NonceAccount` specified in the `Nonce` instruction is loaded.
2. The `NonceState` is deserialized from the `NonceAccount`'s data field and
confirmed to be in the `Initialized` state.
3. The nonce value stored in the `NonceAccount` is tested to match against the
one specified in the transaction's `recent_blockhash` field.
If all three of the above checks succeed, the transaction is allowed to continue
validation.
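Checks 2 and 3 above can be sketched as follows. The `NonceState` enum and `nonce_checks_pass` function here are illustrative stand-ins, not the actual runtime types.

```rust
type Hash = [u8; 32];

enum NonceState {
    Uninitialized,
    Initialized(Hash), // stores the current nonce value
}

fn nonce_checks_pass(state: &NonceState, recent_blockhash: &Hash) -> bool {
    match state {
        // Check 2: the account must be Initialized.
        // Check 3: the stored nonce must match the transaction's
        // recent_blockhash field.
        NonceState::Initialized(stored) => stored == recent_blockhash,
        NonceState::Uninitialized => false,
    }
}
```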

@ -1,4 +1,6 @@
# Cluster Economics
---
title: Cluster Economics
---
**Subject to change.**
@ -12,6 +14,6 @@ Transaction fees are market-based participant-to-participant transfers, attached
A high-level schematic of Solana's crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics/README.md), [State-validation Protocol-based Rewards](ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_validation_client_economics/ed_vce_state_validation_transaction_fees.md). Also, the section titled [Validation Stake Delegation](ed_validation_client_economics/ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. Additionally, in [Storage Rent Economics](ed_storage_rent_economics.md), we describe an implementation of storage rent to account for the externality costs of maintaining the active state of the ledger. An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section.
![](../../.gitbook/assets/economic_design_infl_230719.png)
![](/img/economic_design_infl_230719.png)
**Figure 1**: Schematic overview of Solana economic incentive design.

@ -1,4 +1,6 @@
# Economic Sustainability
---
title: Economic Sustainability
---
**Subject to change.**

@ -1,4 +1,6 @@
# Economic Design MVP
---
title: Economic Design MVP
---
**Subject to change.**
@ -6,7 +8,7 @@ The preceding sections, outlined in the [Economic Design Overview](../README.md)
## MVP Economic Features
* Faucet to deliver testnet SOLs to validators for staking and application development.
* Mechanism by which validators are rewarded via network inflation.
* Ability to delegate tokens to validator nodes
* Validator set commission fees on interest from delegated tokens.
- Faucet to deliver testnet SOLs to validators for staking and application development.
- Mechanism by which validators are rewarded via network inflation.
- Ability to delegate tokens to validator nodes
- Validator set commission fees on interest from delegated tokens.

@ -1,6 +1,7 @@
# References
---
title: References
---
1. [https://blog.ethereum.org/2016/07/27/inflation-transaction-fees-cryptocurrency-monetary-policy/](https://blog.ethereum.org/2016/07/27/inflation-transaction-fees-cryptocurrency-monetary-policy/)
2. [https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281](https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281)
3. [https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281](https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281)

@ -1,4 +1,6 @@
## Storage Rent Economics
---
title: Storage Rent Economics
---
Each transaction that is submitted to the Solana ledger imposes costs. Transaction fees paid by the submitter, and collected by a validator, in theory, account for the acute, transactional, costs of validating and adding that data to the ledger. Unaccounted in this process is the mid-term storage of active ledger state, necessarily maintained by the rotating validator set. This type of storage imposes costs not only to validators but also to the broader network as active state grows so does data transmission and validation overhead. To account for these costs, we describe here our preliminary design and implementation of storage rent.
@ -13,6 +15,3 @@ Method 2: Pay per byte
If an account has less than two years' worth of deposited rent, the network charges rent on a per-epoch basis, in credit for the next epoch. This rent is deducted at a rate specified in genesis, in lamports per kilobyte-year.
For information on the technical implementation details of this design, see the [Rent](../rent.md) section.
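As a rough illustration of the per-byte charge, the deduction could be computed as sketched below. This is illustrative arithmetic only; the rate is genesis-configured, and the value used here is made up, not a real network parameter.

```rust
// Rent owed for holding `account_bytes` of state for `years`, at a
// genesis-configured rate in lamports per kilobyte-year (made-up value
// in the example usage below).
fn rent_due(lamports_per_kilobyte_year: f64, account_bytes: u64, years: f64) -> u64 {
    (lamports_per_kilobyte_year * account_bytes as f64 / 1024.0 * years).round() as u64
}
```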

@ -1,8 +1,9 @@
# Validation-client Economics
---
title: Validation-client Economics
---
**Subject to change.**
Validator-clients are eligible to receive protocol-based \(i.e. inflation-based\) rewards issued via stake-based annual interest rates \(calculated per epoch\) by providing compute \(CPU+GPU\) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic disinflationary schedule as a function of total amount of circulating tokens. The network is expected to launch with an annual inflation rate around 15%, set to decrease by 15% per year until a long-term stable rate of 1-2% is reached. These issuances are to be split and distributed to participating validators, with around 90% of the issued tokens allocated for validator rewards. Because the network will be distributing a fixed amount of inflation rewards across the stake-weighted validator set, any individual validator's interest rate will be a function of the amount of staked SOL in relation to the circulating SOL.
Additionally, validator clients may earn revenue through fees via state-validation transactions. For clarity, we separately describe the design and motivation of these revenue distributions for validation-clients below: state-validation protocol-based rewards and state-validation transaction fees and rent.

@ -1,33 +1,35 @@
# State-validation Protocol-based Rewards
---
title: State-validation Protocol-based Rewards
---
**Subject to change.**
Validator-clients have two functional roles in the Solana network:
* Validate \(vote\) the current global state of that PoH.
* Be elected as leader on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
- Validate \(vote\) the current global state of that PoH.
- Be elected as leader on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. As previously discussed, compensation for validator-clients is provided via a protocol-based annual inflation rate dispersed in proportion to the stake-weight of each validator \(see below\) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each transaction fee, less a protocol-specified amount that is destroyed \(see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)\).
The effective protocol-based annual interest rate \(%\) per epoch received by validation-clients is to be a function of:
* the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule \(see [Validation-client Economics](README.md)\)
* the fraction of staked SOLs out of the current total circulating supply,
* the up-time/participation \[% of available slots that validator had opportunity to vote on\] of a given validator over the previous epoch.
- the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule \(see [Validation-client Economics](README.md)\)
- the fraction of staked SOLs out of the current total circulating supply,
- the up-time/participation \[% of available slots that validator had opportunity to vote on\] of a given validator over the previous epoch.
The first factor is a function of protocol parameters only \(i.e. independent of validator behavior in a given epoch\) and results in a global validation reward schedule designed to incentivize early participation, provide clear monetary stability and provide optimal security in the network.
At any given point in time, a specific validator's interest rate can be determined based on the proportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens with an additional 250MM vesting over 3 years. Additionally an inflation rate is specified at network launch of 7.5%, and a disinflationary schedule of 20% decrease in inflation rate per year \(the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch\). With these broad assumptions, the 10-year inflation rate \(adjusted daily for this example\) is shown in **Figure 1**, while the total circulating token supply is illustrated in **Figure 2**. Neglected in this toy-model is the inflation suppression due to the portion of each transaction fee that is to be destroyed.
![](../../../.gitbook/assets/p_ex_schedule.png)
![](/img/p_ex_schedule.png)
**Figure 1:** In this example schedule, the annual inflation rate \[%\] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.
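The toy schedule behind Figures 1-3 can be sketched numerically. The parameters below are the document's example values (7.5% initial inflation, 20% disinflation per year), not launch values, and the function names are hypothetical.

```rust
// Example disinflationary schedule: the rate decays by `yearly_decrease`
// per year from `initial`.
fn inflation_rate(initial: f64, yearly_decrease: f64, year: f64) -> f64 {
    initial * (1.0 - yearly_decrease).powf(year)
}

// A validator's annualized interest rate scales inversely with the
// fraction of circulating supply that is staked (100% uptime assumed,
// transaction fees neglected, as in the example above).
fn validator_interest_rate(inflation: f64, staked_fraction: f64) -> f64 {
    inflation / staked_fraction
}
```

For instance, with half of the supply staked, validators collectively earn double the headline inflation rate on their stake.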
![](../../../.gitbook/assets/p_ex_supply.png)
![](/img/p_ex_supply.png)
**Figure 2:** The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in **Figure 1**. Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabilize near 1-2% which also results in a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients as transaction fees for state-validation are not accounted for here. Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assumed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of % circulating token supply that is staked is shown in **Figure 3**.
![](../../../.gitbook/assets/p_ex_interest.png)
![](/img/p_ex_interest.png)
**Figure 3:** Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.

@ -1,13 +1,15 @@
# State-validation Transaction Fees
---
title: State-validation Transaction Fees
---
**Subject to change.**
Each transaction sent through the network, to be processed by the current leader validation-client and confirmed as a global state transaction, must contain a transaction fee. Transaction fees offer many benefits in the Solana economic design, for example they:
* provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
* reduce network spam by introducing real cost to transactions,
* open avenues for a transaction market to incentivize validation-client to collect and process submitted transactions in their function as leader,
* and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
- provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
- reduce network spam by introducing real cost to transactions,
- open avenues for a transaction market to incentivize validation-client to collect and process submitted transactions in their function as leader,
- and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
Many current blockchain economies \(e.g. Bitcoin, Ethereum\), rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is destroyed, with the remaining fee going to the current leader processing the transaction. A scheduled global inflation rate provides a source for rewards distributed to validation-clients, through the process described above.
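The burn-and-forward split described above can be sketched as follows; the burn fraction used in the example is a placeholder, not a protocol value.

```rust
// Split a transaction fee into a destroyed (burned) portion and the
// current leader's share.
fn split_fee(fee: u64, burn_percent: u64) -> (u64, u64) {
    let burned = fee * burn_percent / 100;
    (burned, fee - burned)
}
```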

@ -1,27 +1,28 @@
# Validation Stake Delegation
---
title: Validation Stake Delegation
---
**Subject to change.**
Running a Solana validation-client requires relatively modest upfront hardware capital investment. **Table 2** provides an example hardware configuration to support ~1M tx/s with estimated off-the-shelf costs:
| Component | Example | Estimated Cost |
| :--- | :--- | :--- |
| GPU | 2x 2080 Ti | $2500 |
| or | 4x 1080 Ti | $2800 |
| OS/Ledger Storage | Samsung 860 Evo 2TB | $370 |
| Accounts storage | 2x Samsung 970 Pro M.2 512GB | $340 |
| RAM | 32 Gb | $300 |
| Motherboard | AMD x399 | $400 |
| CPU | AMD Threadripper 2920x | $650 |
| Case | | $100 |
| Power supply | EVGA 1600W | $300 |
| Network | &gt; 500 mbps | |
| Network \(1\) | Google webpass business bay area 1gbps unlimited | $5500/mo |
| Network \(2\) | Hurricane Electric bay area colo 1gbps | $500/mo |
| Component | Example | Estimated Cost |
| :---------------- | :----------------------------------------------- | :------------- |
| GPU | 2x 2080 Ti | \$2500 |
| or | 4x 1080 Ti | \$2800 |
| OS/Ledger Storage | Samsung 860 Evo 2TB | \$370 |
| Accounts storage | 2x Samsung 970 Pro M.2 512GB | \$340 |
| RAM | 32 Gb | \$300 |
| Motherboard | AMD x399 | \$400 |
| CPU | AMD Threadripper 2920x | \$650 |
| Case | | \$100 |
| Power supply | EVGA 1600W | \$300 |
| Network | &gt; 500 mbps | |
| Network \(1\) | Google webpass business bay area 1gbps unlimited | \$5500/mo |
| Network \(2\) | Hurricane Electric bay area colo 1gbps | \$500/mo |
**Table 2** example high-end hardware setup for running a Solana client.
Despite the low barrier to entry as a validation-client, from a capital investment perspective, as in any developing economy, there will be much opportunity and need for trusted validation services as evidenced by node reliability, UX/UI, APIs and other software accessibility tools. Additionally, although Solana's validator node startup costs are nominal when compared to similar networks, they may still be somewhat restrictive for some potential participants. In the spirit of developing a truly decentralized, permissionless network, these interested parties can become involved in the Solana network/economy via delegation of previously acquired tokens with a reliable validation node to earn a portion of the interest generated.
Delegation of tokens to validation-clients provides a way for passive Solana token holders to become part of the active Solana economy and earn interest rates proportional to the interest rate generated by the delegated validation-client. Additionally, this feature intends to create a healthy validation-client market, with potential validation-client nodes competing to build reliable, transparent and profitable delegation services.

@ -1,4 +1,6 @@
# Embedding the Move Language
---
title: Embedding the Move Language
---
## Problem
@ -10,15 +12,15 @@ The biggest design difference between Solana's runtime and Libra's Move VM is ho
This proposal attempts to define a way to embed the Move VM such that:
* cross-module invocations within Move do not require the runtime's
- cross-module invocations within Move do not require the runtime's
cross-program runtime checks
* Move programs can leverage functionality in other Solana programs and vice
- Move programs can leverage functionality in other Solana programs and vice
versa
* Solana's runtime parallelism is exposed to batches of Move and non-Move
- Solana's runtime parallelism is exposed to batches of Move and non-Move
transactions
@ -33,4 +35,3 @@ All data accounts owned by Move modules must set their owners to the loader, `MO
### Interacting with Solana programs
To invoke instructions in non-Move programs, Solana would need to extend the Move VM with a `process_instruction()` system call. It would work the same as `process_instruction()` Rust BPF programs.

@ -1,4 +1,6 @@
# Cluster Software Installation and Updates
---
title: Cluster Software Installation and Updates
---
Currently, users are required to build the Solana cluster software themselves from the git repository and manually update it, which is error prone and inconvenient.
@ -93,11 +95,11 @@ To guard against rollback attacks, `solana-install` will refuse to install an up
A release archive is expected to be a tar file compressed with bzip2 with the following internal structure:
* `/version.yml` - a simple YAML file containing the field `"target"` - the
- `/version.yml` - a simple YAML file containing the field `"target"` - the
target tuple. Any additional fields are ignored.
* `/bin/` -- directory containing available programs in the release.
- `/bin/` -- directory containing available programs in the release.
`solana-install` will symlink this directory to
@ -105,7 +107,7 @@ A release archive is expected to be a tar file compressed with bzip2 with the fo
variable.
* `...` -- any additional files and directories are permitted
- `...` -- any additional files and directories are permitted
## solana-install Tool
@ -113,9 +115,9 @@ The `solana-install` tool is used by the user to install and update their cluste
It manages the following files and directories in the user's home directory:
* `~/.config/solana/install/config.yml` - user configuration and information about currently installed software version
* `~/.local/share/solana/install/bin` - a symlink to the current release. eg, `~/.local/share/solana-update/<update-pubkey>-<manifest_signature>/bin`
* `~/.local/share/solana/install/releases/<download_sha256>/` - contents of a release
- `~/.config/solana/install/config.yml` - user configuration and information about currently installed software version
- `~/.local/share/solana/install/bin` - a symlink to the current release. eg, `~/.local/share/solana-update/<update-pubkey>-<manifest_signature>/bin`
- `~/.local/share/solana/install/releases/<download_sha256>/` - contents of a release
### Command-line Interface
@ -212,4 +214,3 @@ ARGS:
The program will be restarted upon a successful software update
```

@ -1,4 +1,6 @@
# Leader-to-Leader Transition
---
title: Leader-to-Leader Transition
---
This design describes how leaders transition production of the PoH ledger between each other as each leader generates its own slot.
@ -18,19 +20,19 @@ While a leader is actively receiving entries for the previous slot, the leader c
The downsides:
* Leader delays its own slot, potentially allowing the next leader more time to
- Leader delays its own slot, potentially allowing the next leader more time to
catch up.
The upsides compared to guards:
* All the space in a block is used for entries.
* The timeout is not fixed.
* The timeout is local to the leader, and therefore can be clever. The leader's heuristic can take into account turbine performance.
* This design doesn't require a ledger hard fork to update.
* The previous leader can redundantly transmit the last entry in the block to the next leader, and the next leader can speculatively decide to trust it to generate its block without verification of the previous block.
* The leader can speculatively generate the last tick from the last received entry.
* The leader can speculatively process transactions and guess which ones are not going to be encoded by the previous leader. This is also a censorship attack vector. The current leader may withhold transactions that it receives from the clients so it can encode them into its own slot. Once processed, entries can be replayed into PoH quickly.
- All the space in a block is used for entries.
- The timeout is not fixed.
- The timeout is local to the leader, and therefore can be clever. The leader's heuristic can take into account turbine performance.
- This design doesn't require a ledger hard fork to update.
- The previous leader can redundantly transmit the last entry in the block to the next leader, and the next leader can speculatively decide to trust it to generate its block without verification of the previous block.
- The leader can speculatively generate the last tick from the last received entry.
- The leader can speculatively process transactions and guess which ones are not going to be encoded by the previous leader. This is also a censorship attack vector. The current leader may withhold transactions that it receives from the clients so it can encode them into its own slot. Once processed, entries can be replayed into PoH quickly.
## Alternative design options
@ -42,13 +44,12 @@ If the next leader receives the _penultimate tick_ before it produces its own _f
The downsides:
* Every vote, and therefore confirmation, is delayed by a fixed timeout. 1 tick, or around 100ms.
* Average case confirmation time for a transaction would be at least 50ms worse.
* It is part of the ledger definition, so to change this behavior would require a hard fork.
* Not all the available space is used for entries.
- Every vote, and therefore confirmation, is delayed by a fixed timeout. 1 tick, or around 100ms.
- Average case confirmation time for a transaction would be at least 50ms worse.
- It is part of the ledger definition, so to change this behavior would require a hard fork.
- Not all the available space is used for entries.
The upsides compared to leader timeout:
* The next leader has received all the previous entries, so it can start processing transactions without recording them into PoH.
* The previous leader can redundantly transmit the last entry containing the _penultimate tick_ to the next leader. The next leader can speculatively generate the _last tick_ as soon as it receives the _penultimate tick_, even before verifying it.
- The next leader has received all the previous entries, so it can start processing transactions without recording them into PoH.
- The previous leader can redundantly transmit the last entry containing the _penultimate tick_ to the next leader. The next leader can speculatively generate the _last tick_ as soon as it receives the _penultimate tick_, even before verifying it.
View File
@ -1,4 +1,6 @@
# Leader-to-Validator Transition
---
title: Leader-to-Validator Transition
---
A validator typically spends its time validating blocks. If, however, a staker delegates its stake to a validator, it will occasionally be selected as a _slot leader_. As a slot leader, the validator is responsible for producing blocks during an assigned _slot_. A slot has a duration of some number of preconfigured _ticks_. The duration of those ticks are estimated with a _PoH Recorder_ described later in this document.
@ -48,4 +50,3 @@ The loop is synchronized to PoH and does a synchronous start and stop of the slo
the TVU may resume voting.
5. Goto 1.
View File
@ -1,4 +1,6 @@
# Persistent Account Storage
---
title: Persistent Account Storage
---
## Persistent Account Storage
@ -49,9 +51,9 @@ An account can be _garbage-collected_ when squashing makes it unreachable.
Three possible options exist:
* Maintain a HashSet of root forks. One is expected to be created every second. The entire tree can be garbage-collected later. Alternatively, if every fork keeps a reference count of accounts, garbage collection could occur any time an index location is updated.
* Remove any pruned forks from the index. Any remaining forks lower in number than the root can be considered root.
* Scan the index, migrate any old roots into the new one. Any remaining forks lower than the new root can be deleted later.
- Maintain a HashSet of root forks. One is expected to be created every second. The entire tree can be garbage-collected later. Alternatively, if every fork keeps a reference count of accounts, garbage collection could occur any time an index location is updated.
- Remove any pruned forks from the index. Any remaining forks lower in number than the root can be considered root.
- Scan the index, migrate any old roots into the new one. Any remaining forks lower than the new root can be deleted later.
## Append-only Writes
@ -85,10 +87,9 @@ To snapshot, the underlying memory-mapped files in the AppendVec need to be flus
## Performance
* Append-only writes are fast. SSDs and NVMEs, as well as all the OS level kernel data structures, allow for appends to run as fast as PCI or NVMe bandwidth will allow \(2,700 MB/s\).
* Each replay and banking thread writes concurrently to its own AppendVec.
* Each AppendVec could potentially be hosted on a separate NVMe.
* Each replay and banking thread has concurrent read access to all the AppendVecs without blocking writes.
* Index requires an exclusive write lock for writes. Single-thread performance for HashMap updates is on the order of 10m per second.
* Banking and Replay stages should use 32 threads per NVMe. NVMes have optimal performance with 32 concurrent readers or writers.
- Append-only writes are fast. SSDs and NVMEs, as well as all the OS level kernel data structures, allow for appends to run as fast as PCI or NVMe bandwidth will allow \(2,700 MB/s\).
- Each replay and banking thread writes concurrently to its own AppendVec.
- Each AppendVec could potentially be hosted on a separate NVMe.
- Each replay and banking thread has concurrent read access to all the AppendVecs without blocking writes.
- Index requires an exclusive write lock for writes. Single-thread performance for HashMap updates is on the order of 10m per second.
- Banking and Replay stages should use 32 threads per NVMe. NVMes have optimal performance with 32 concurrent readers or writers.
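The append-only access pattern above can be sketched with an in-memory stand-in for an AppendVec. The type and method names here are illustrative, not the real `solana-runtime` types, and the production implementation appends to a memory-mapped file rather than a `Vec<u8>`:

```rust
use std::sync::RwLock;

/// In-memory stand-in for an AppendVec: writers only ever append, and an
/// entry's offset is stable for its whole lifetime, so readers can hold
/// offsets without coordinating with writers.
pub struct AppendVec {
    data: RwLock<Vec<u8>>, // the real implementation memory-maps a file
}

impl AppendVec {
    pub fn new() -> Self {
        Self { data: RwLock::new(Vec::new()) }
    }

    /// Append a serialized account, returning its stable offset.
    pub fn append(&self, bytes: &[u8]) -> usize {
        let mut data = self.data.write().unwrap();
        let offset = data.len();
        data.extend_from_slice(bytes);
        offset
    }

    /// Read back a previously appended entry by offset and length.
    pub fn read(&self, offset: usize, len: usize) -> Vec<u8> {
        self.data.read().unwrap()[offset..offset + len].to_vec()
    }
}

fn main() {
    let store = AppendVec::new();
    let off = store.append(b"account-data");
    assert_eq!(store.read(off, 12), b"account-data");
}
```

Because entries are never moved or overwritten, the index only needs to store `(AppendVec id, offset)` pairs, and concurrent readers never block the appending writer for long.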
View File
@ -1,4 +1,6 @@
# Program Derived Addresses
---
title: Program Derived Addresses
---
## Problem
@ -7,14 +9,14 @@ other programs as defined in the [Cross-Program Invocations](cross-program-invoc
design.
The lack of programmatic signature generation limits the kinds of programs
that can be implemented in Solana. A program may be given the
that can be implemented in Solana. A program may be given the
authority over an account and later want to transfer that authority to another.
This is impossible today because the program cannot act as the signer in the transaction that gives authority.
For example, if two users want
to make a wager on the outcome of a game in Solana, they must each
transfer their wager's assets to some intermediary that will honor
their agreement. Currently, there is no way to implement this intermediary
their agreement. Currently, there is no way to implement this intermediary
as a program in Solana because the intermediary program cannot transfer the
assets to the winner.
@ -22,24 +24,24 @@ This capability is necessary for many DeFi applications since they
require assets to be transferred to an escrow agent until some event
occurs that determines the new owner.
* Decentralized Exchanges that transfer assets between matching bid and
ask orders.
- Decentralized Exchanges that transfer assets between matching bid and
ask orders.
* Auctions that transfer assets to the winner.
- Auctions that transfer assets to the winner.
* Games or prediction markets that collect and redistribute prizes to
the winners.
- Games or prediction markets that collect and redistribute prizes to
the winners.
## Proposed Solution
The key to the design is two-fold:
1. Allow programs to control specific addresses, called Program-Addresses, in such a way that no external
user can generate valid transactions with signatures for those
addresses.
user can generate valid transactions with signatures for those
addresses.
2. Allow programs to programmatically sign for Program-Addresses that are
present in instructions invoked via [Cross-Program Invocations](cross-program-invocation.md).
present in instructions invoked via [Cross-Program Invocations](cross-program-invocation.md).
Given the two conditions, users can securely transfer or assign
the authority of on-chain assets to Program-Addresses and the program
@ -48,13 +50,13 @@ can then assign that authority elsewhere at its discretion.
### Private keys for Program Addresses
A Program-Address has no private key associated with it, and generating
a signature for it is impossible. While it has no private key of
a signature for it is impossible. While it has no private key of
its own, it can issue an instruction that includes the Program-Address as a signer.
### Hash-based generated Program Addresses
All 256-bit values are valid ed25519 curve points and valid ed25519 public
keys. All are equally secure and equally as hard to break.
keys. All are equally secure and equally as hard to break.
Based on this assumption, Program Addresses can be deterministically
derived from a base seed using a 256-bit preimage resistant hash function.
@ -81,7 +83,7 @@ pub fn create_address_with_seed(
```
Programs can deterministically derive any number of addresses by
using keywords. These keywords can symbolically identify how the addresses are used.
using keywords. These keywords can symbolically identify how the addresses are used.
```rust,ignore
//! Generate a derived program address
@ -146,9 +148,9 @@ fn transfer_one_token_from_escrow(
### Instructions that require signers
The addresses generated with `create_program_address` are indistinguishable
from any other public key. The only way for the runtime to verify that the
from any other public key. The only way for the runtime to verify that the
address belongs to a program is for the program to supply the keywords used
to generate the address.
The runtime will internally call `create_program_address`, and compare the
result against the addresses supplied in the instruction.
result against the addresses supplied in the instruction.
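The derive-and-compare check can be sketched as follows. This is a toy model: the production scheme uses a 256-bit preimage-resistant hash over the seeds and program id, while `DefaultHasher` stands in here only so the example runs on the standard library alone, and all names are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a pseudo-address from a program id and a list of seed keywords.
/// (Toy 64-bit hash; the real derivation is 256-bit and preimage resistant.)
fn derive_address(program_id: &[u8], seeds: &[&[u8]]) -> u64 {
    let mut hasher = DefaultHasher::new();
    program_id.hash(&mut hasher);
    for seed in seeds {
        seed.hash(&mut hasher);
    }
    hasher.finish()
}

/// Runtime-side check: re-derive from the supplied seeds and compare
/// against the address that appeared in the instruction.
fn verify_address(claimed: u64, program_id: &[u8], seeds: &[&[u8]]) -> bool {
    derive_address(program_id, seeds) == claimed
}

fn main() {
    let program = b"escrow-program";
    let seeds: &[&[u8]] = &[b"escrow".as_slice(), b"wager-42".as_slice()];
    let addr = derive_address(program, seeds);
    assert!(verify_address(addr, program, seeds));
    // A different seed derives a different address, so the check fails.
    assert!(!verify_address(addr, program, &[b"escrow".as_slice(), b"wager-43".as_slice()]));
}
```

The key property is that only the seeds plus the program id reproduce the address, so supplying the seeds proves the address belongs to that program without any private key existing.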
View File
@ -1,4 +1,6 @@
# Read-Only Accounts
---
title: Read-Only Accounts
---
This design covers the handling of readonly and writable accounts in the [runtime](../validator/runtime.md). Multiple transactions that modify the same account must be processed serially so that they are always replayed in the same order. Otherwise, this could introduce non-determinism to the ledger. Some transactions, however, only need to read, and not modify, the data in particular accounts. Multiple transactions that only read the same account can be processed in parallel, since replay order does not matter, providing a performance benefit.
@ -10,7 +12,7 @@ Runtime transaction processing rules need to be updated slightly. Programs still
Readonly accounts have the following property:
* Read-only access to all account fields, including lamports (cannot be credited or debited), and account data
- Read-only access to all account fields, including lamports (cannot be credited or debited), and account data
Instructions that credit, debit, or modify the readonly account will fail.
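One minimal way to picture the rule is a reader-writer lock per account: any number of readonly users may hold the lock at once, while a writable user excludes everyone else. The sketch below is only an illustration of that scheduling property, not the runtime's actual locking code:

```rust
use std::sync::RwLock;

/// Toy per-account state: `u64` stands in for the account's lamports.
struct Account {
    lamports: RwLock<u64>,
}

fn main() {
    let account = Account { lamports: RwLock::new(100) };

    // Two readonly users of the same account proceed concurrently:
    // both read guards are held at the same time without blocking.
    let r1 = account.lamports.read().unwrap();
    let r2 = account.lamports.read().unwrap();
    assert_eq!(*r1 + *r2, 200);
    drop(r1);
    drop(r2);

    // A writable user (e.g. a credit) needs exclusive access and would
    // block until all readers are gone; here they were dropped above.
    *account.lamports.write().unwrap() += 50;
    assert_eq!(*account.lamports.read().unwrap(), 150);
}
```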
View File
@ -1,4 +1,6 @@
# Reliable Vote Transmission
---
title: Reliable Vote Transmission
---
Validator votes are messages that have a critical function for consensus and continuous operation of the network. Therefore it is critical that they are reliably delivered and encoded into the ledger.
@ -56,4 +58,3 @@ Everything above plus the following:
4. Worst case 25mb memory overhead per node.
5. Sub 4 hops worst case to deliver to the entire network.
6. 80 shreds received by the leader for all the validator messages.
View File
@ -1,4 +1,6 @@
# Rent
---
title: Rent
---
Accounts on Solana may have owner-controlled state \(`Account::data`\) that's separate from the account's balance \(`Account::lamports`\). Since validators on the network need to maintain a working copy of this state in memory, the network charges a time-and-space based fee for this resource consumption, also known as Rent.
@ -42,11 +44,11 @@ As the overall consequence of this design, all of accounts is stored equally as
Collecting rent on an as-needed basis \(i.e. whenever accounts were loaded/accessed\) was considered. The issues with such an approach are:
* accounts loaded as "credit only" for a transaction could very reasonably be expected to have rent due,
- accounts loaded as "credit only" for a transaction could very reasonably be expected to have rent due,
but would not be writable during any such transaction
* a mechanism to "beat the bushes" \(i.e. go find accounts that need to pay rent\) is desirable,
- a mechanism to "beat the bushes" \(i.e. go find accounts that need to pay rent\) is desirable,
lest accounts that are loaded infrequently get a free ride
@ -54,6 +56,6 @@ Collecting rent on an as-needed basis \(i.e. whenever accounts were loaded/acces
Collecting rent via a system instruction was considered, as it would naturally have distributed rent to active and stake-weighted nodes and could have been done incrementally. However:
* it would have adversely affected network throughput
* it would require special-casing by the runtime, as accounts with non-SystemProgram owners may be debited by this instruction
* someone would have to issue the transactions
- it would have adversely affected network throughput
- it would require special-casing by the runtime, as accounts with non-SystemProgram owners may be debited by this instruction
- someone would have to issue the transactions
View File
@ -1,4 +1,6 @@
# Repair Service
---
title: Repair Service
---
## Repair Service
@ -19,25 +21,27 @@ repair these slots. If these slots happen to be part of the main chain, this
will halt replay progress on this node.
## Repair-related primitives
Epoch Slots:
Each validator advertises separately on gossip the various parts of an
`Epoch Slots`:
* The `stash`: An epoch-long compressed set of all completed slots.
* The `cache`: The Run-length Encoding (RLE) of the latest `N` completed
slots starting from some slot `M`, where `N` is the number of slots
that will fit in an MTU-sized packet.
Each validator advertises separately on gossip the various parts of an
`Epoch Slots`:
`Epoch Slots` in gossip are updated every time a validator receives a
complete slot within the epoch. Completed slots are detected by blockstore
and sent over a channel to RepairService. It is important to note that we
know that by the time a slot `X` is complete, the epoch schedule must exist
for the epoch that contains slot `X` because WindowService will reject
shreds for unconfirmed epochs.
- The `stash`: An epoch-long compressed set of all completed slots.
- The `cache`: The Run-length Encoding (RLE) of the latest `N` completed
slots starting from some slot `M`, where `N` is the number of slots
that will fit in an MTU-sized packet.
`Epoch Slots` in gossip are updated every time a validator receives a
complete slot within the epoch. Completed slots are detected by blockstore
and sent over a channel to RepairService. It is important to note that we
know that by the time a slot `X` is complete, the epoch schedule must exist
for the epoch that contains slot `X` because WindowService will reject
shreds for unconfirmed epochs.
Every `N/2` completed slots, the oldest `N/2` slots are moved from the
`cache` into the `stash`. The base value `M` for the RLE should also
be updated.
Every `N/2` completed slots, the oldest `N/2` slots are moved from the
`cache` into the `stash`. The base value `M` for the RLE should also
be updated.
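The `cache`-to-`stash` rotation described above can be sketched with plain collections. In the real structure the `cache` is run-length encoded and the `stash` is compressed; the value of `N` below is an arbitrary illustrative packet budget:

```rust
use std::collections::BTreeSet;

const N: usize = 8; // slots that fit in one MTU-sized packet (illustrative)

struct EpochSlots {
    stash: BTreeSet<u64>, // older completed slots in the epoch (compressed in reality)
    cache: Vec<u64>,      // latest completed slots; base value M = cache[0] (RLE in reality)
}

impl EpochSlots {
    fn record_completed(&mut self, slot: u64) {
        self.cache.push(slot);
        if self.cache.len() >= N {
            // Every N/2 completed slots, move the oldest N/2 slots from
            // the cache into the stash; the base value M advances.
            for old in self.cache.drain(..N / 2) {
                self.stash.insert(old);
            }
        }
    }
}

fn main() {
    let mut es = EpochSlots { stash: BTreeSet::new(), cache: Vec::new() };
    for slot in 0..N as u64 {
        es.record_completed(slot);
    }
    assert_eq!(es.stash.len(), N / 2);      // slots 0..4 moved to the stash
    assert_eq!(es.cache.first(), Some(&4)); // new base value M
}
```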
## Repair Request Protocols
The repair protocol makes best attempts to progress the forking structure of
@ -46,28 +50,29 @@ Blockstore.
The different protocol strategies to address the above challenges:
1. Shred Repair \(Addresses Challenge \#1\): This is the most basic repair
protocol, with the purpose of detecting and filling "holes" in the ledger.
Blockstore tracks the latest root slot. RepairService will then periodically
iterate every fork in blockstore starting from the root slot, sending repair
requests to validators for any missing shreds. It will send at most some `N`
repair requests per iteration. Shred repair should prioritize repairing
forks based on the leader's fork weight. Validators should only send repair
requests to validators who have marked that slot as completed in their
EpochSlots. Validators should prioritize repairing shreds in each slot
that they are responsible for retransmitting through turbine. Validators can
compute which shreds they are responsible for retransmitting because the
seed for turbine is based on leader id, slot, and shred index.
protocol, with the purpose of detecting and filling "holes" in the ledger.
Blockstore tracks the latest root slot. RepairService will then periodically
iterate every fork in blockstore starting from the root slot, sending repair
requests to validators for any missing shreds. It will send at most some `N`
repair requests per iteration. Shred repair should prioritize repairing
forks based on the leader's fork weight. Validators should only send repair
requests to validators who have marked that slot as completed in their
EpochSlots. Validators should prioritize repairing shreds in each slot
that they are responsible for retransmitting through turbine. Validators can
compute which shreds they are responsible for retransmitting because the
seed for turbine is based on leader id, slot, and shred index.
Note: Validators will only accept shreds within the current verifiable
epoch \(epoch the validator has a leader schedule for\).
2. Preemptive Slot Repair \(Addresses Challenge \#2\): The goal of this
protocol is to discover the chaining relationship of "orphan" slots that do not
currently chain to any known fork. Shred repair should prioritize repairing
orphan slots based on the leader's fork weight.
* Blockstore will track the set of "orphan" slots in a separate column family.
* RepairService will periodically make `Orphan` requests for each of
the orphans in blockstore.
protocol is to discover the chaining relationship of "orphan" slots that do not
currently chain to any known fork. Shred repair should prioritize repairing
orphan slots based on the leader's fork weight.
- Blockstore will track the set of "orphan" slots in a separate column family.
- RepairService will periodically make `Orphan` requests for each of
the orphans in blockstore.
`Orphan(orphan)` request - `orphan` is the orphan slot that the
requestor wants to know the parents of `Orphan(orphan)` response -
@ -77,9 +82,9 @@ orphan slots based on the leader's fork weight.
On receiving the responses `p`, where `p` is some shred in a parent slot,
validators will:
* Insert an empty `SlotMeta` in blockstore for `p.slot` if it doesn't
already exist.
* If `p.slot` does exist, update the parent of `p` based on `parents`
- Insert an empty `SlotMeta` in blockstore for `p.slot` if it doesn't
already exist.
- If `p.slot` does exist, update the parent of `p` based on `parents`
Note: that once these empty slots are added to blockstore, the
`Shred Repair` protocol should attempt to fill those slots.
@ -95,10 +100,9 @@ randomly select a validator in a stake-weighted fashion.
## Repair Response Protocol
When a validator receives a request for a shred `S`, they respond with the
shred if they have it.
shred if they have it.
When a validator receives a shred through a repair response, they check
`EpochSlots` to see if <= `1/3` of the network has marked this slot as
completed. If so, they resubmit this shred through its associated turbine
path, but only if this validator has not retransmitted this shred before.
View File
@ -1,4 +1,6 @@
# Snapshot Verification
---
title: Snapshot Verification
---
## Problem
@ -18,11 +20,11 @@ To verify the snapshot, we do the following:
On account store of non-zero lamport accounts, we hash the following data:
* Account owner
* Account data
* Account pubkey
* Account lamports balance
* Fork the account is stored on
- Account owner
- Account data
- Account pubkey
- Account lamports balance
- Fork the account is stored on
Use this resulting hash value as input to an expansion function which expands the hash value into an image value.
The function will create a 440 byte block of data where the first 32 bytes are the hash value, and the next 440 - 32 bytes are
@ -42,7 +44,7 @@ a validator bank to read that an account is not present when it really should be
An attack on the xor state could be made to influence its value:
Thus the 440 byte image size comes from this paper, avoiding xor collision with 0 \(or thus any other given bit pattern\): \[[https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9\_19.pdf](https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9_19.pdf)\]
Thus the 440 byte image size comes from this paper, avoiding xor collision with 0 \(or thus any other given bit pattern\): \[[https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9_19.pdf](https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9_19.pdf)\]
The math provides 128 bit security in this case:
@ -52,4 +54,3 @@ k=2^40 accounts
n=440
2^(40) * 2^(448 * 8 / 41) ~= O(2^(128))
```
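The hash, expand, and xor steps can be sketched as below. The toy 64-bit hash and the counter-chained expansion are stand-ins chosen so the sketch runs on the standard library alone; the production scheme hashes the five account fields with a 256-bit cryptographic hash and uses a cryptographic expansion function:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const IMAGE_LEN: usize = 440;

/// Toy 64-bit hash standing in for the 256-bit account hash.
fn toy_hash(parts: &[&[u8]]) -> u64 {
    let mut h = DefaultHasher::new();
    for p in parts {
        p.hash(&mut h);
    }
    h.finish()
}

/// Expand an account hash into a 440-byte image: hash bytes first,
/// then counter-chained filler for the remaining bytes.
fn expand(hash: u64) -> [u8; IMAGE_LEN] {
    let mut image = [0u8; IMAGE_LEN];
    image[..8].copy_from_slice(&hash.to_le_bytes());
    // 432 remaining bytes divide evenly into 54 chunks of 8.
    for (i, chunk) in image[8..].chunks_mut(8).enumerate() {
        let filler = toy_hash(&[hash.to_le_bytes().as_slice(), (i as u64).to_le_bytes().as_slice()]);
        chunk.copy_from_slice(&filler.to_le_bytes());
    }
    image
}

/// Xor an image into the running cluster state; xor is its own inverse,
/// so the same call removes a previously added account.
fn xor_into(state: &mut [u8; IMAGE_LEN], image: &[u8; IMAGE_LEN]) {
    for (s, b) in state.iter_mut().zip(image.iter()) {
        *s ^= b;
    }
}

fn main() {
    let mut state = [0u8; IMAGE_LEN];
    let h = toy_hash(&[b"owner".as_slice(), b"data".as_slice(), b"pubkey".as_slice(),
                       b"lamports".as_slice(), b"fork".as_slice()]);
    let image = expand(h);
    xor_into(&mut state, &image); // add the account
    xor_into(&mut state, &image); // remove it again
    assert!(state.iter().all(|&b| b == 0));
}
```

The round trip back to all zeros is the property the snapshot check relies on: adding and later removing an account are the same xor, so the running state stays consistent with the set of live accounts.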
View File
@ -1,16 +1,18 @@
# Staking Rewards
---
title: Staking Rewards
---
A Proof of Stake \(PoS\) design \(i.e. using the in-protocol asset, SOL, to provide secure consensus\) is outlined here. Solana implements a proof of stake reward/security scheme for validator nodes in the cluster. The purpose is threefold:
* Align validator incentives with that of the greater cluster through
- Align validator incentives with that of the greater cluster through
skin-in-the-game deposits at risk
* Avoid 'nothing at stake' fork voting issues by implementing slashing rules
- Avoid 'nothing at stake' fork voting issues by implementing slashing rules
aimed at promoting fork convergence
* Provide an avenue for validator rewards provided as a function of validator
- Provide an avenue for validator rewards provided as a function of validator
participation in the cluster.
@ -22,13 +24,13 @@ Solana's ledger validation design is based on a rotating, stake-weighted selecte
To become a Solana validator, one must deposit/lock-up some amount of SOL in a contract. This SOL will not be accessible for a specific time period. The precise duration of the staking lockup period has not been determined. However, we can consider three phases of this time for which specific parameters will be necessary:
* _Warm-up period_: during which SOL is deposited and inaccessible to the node,
- _Warm-up period_: during which SOL is deposited and inaccessible to the node,
however PoH transaction validation has not begun. Most likely on the order of
days to weeks
* _Validation period_: a minimum duration for which the deposited SOL will be
- _Validation period_: a minimum duration for which the deposited SOL will be
inaccessible, at risk of slashing \(see slashing rules below\) and earning
@ -36,7 +38,7 @@ To become a Solana validator, one must deposit/lock-up some amount of SOL in a c
year.
* _Cool-down period_: a duration of time following the submission of a
- _Cool-down period_: a duration of time following the submission of a
'withdrawal' transaction. During this period validation responsibilities have
@ -53,4 +55,3 @@ Solana's trustless sense of time and ordering provided by its PoH data structure
As discussed in the [Economic Design](../implemented-proposals/ed_overview/README.md) section, annual validator interest rates are to be specified as a function of total percentage of circulating supply that has been staked. The cluster rewards validators who are online and actively participating in the validation process throughout the entirety of their _validation period_. For validators that go offline/fail to validate transactions during this period, their annual reward is effectively reduced.
Similarly, we may consider an algorithmic reduction in a validator's active staked amount in the case that it is offline. I.e. if a validator is inactive for some amount of time, either due to a partition or otherwise, the amount of their stake that is considered active \(eligible to earn rewards\) may be reduced. This design would be structured to help long-lived partitions to eventually reach finality on their respective chains as the % of non-voting total stake is reduced over time until a supermajority can be achieved by the active validators in each partition. Similarly, upon re-engaging, the active amount staked will come back online at some defined rate. Different rates of stake reduction may be considered depending on the size of the partition/active set.
View File
@ -1,18 +1,20 @@
# Testing Programs
---
title: Testing Programs
---
Applications send transactions to a Solana cluster and query validators to confirm the transactions were processed and to check each transaction's result. When the cluster doesn't behave as anticipated, it could be for a number of reasons:
* The program is buggy
* The BPF loader rejected an unsafe program instruction
* The transaction was too big
* The transaction was invalid
* The Runtime tried to execute the transaction when another one was accessing
- The program is buggy
- The BPF loader rejected an unsafe program instruction
- The transaction was too big
- The transaction was invalid
- The Runtime tried to execute the transaction when another one was accessing
the same account
* The network dropped the transaction
* The cluster rolled back the ledger
* A validator responded to queries maliciously
- The network dropped the transaction
- The cluster rolled back the ledger
- A validator responded to queries maliciously
## The AsyncClient and SyncClient Traits
@ -49,4 +51,3 @@ Below the TPU level is the Bank. The Bank doesn't do signature verification or g
## Unit-testing with the Runtime
Below the Bank is the Runtime. The Runtime is the ideal test environment for unit-testing. By statically linking the Runtime into a native program implementation, the developer gains the shortest possible edit-compile-run loop. Without any dynamic linking, stack traces include debug symbols and program errors are straightforward to troubleshoot.
View File
@ -1,12 +1,14 @@
# Tower BFT
---
title: Tower BFT
---
This design describes Solana's _Tower BFT_ algorithm. It addresses the following problems:
* Some forks may not end up accepted by the supermajority of the cluster, and voters need to recover from voting on such forks.
* Many forks may be votable by different voters, and each voter may see a different set of votable forks. The selected forks should eventually converge for the cluster.
* Reward based votes have an associated risk. Voters should have the ability to configure how much risk they take on.
* The [cost of rollback](tower-bft.md#cost-of-rollback) needs to be computable. It is important to clients that rely on some measurable form of Consistency. The costs to break consistency need to be computable, and increase super-linearly for older votes.
* ASIC speeds are different between nodes, and attackers could employ Proof of History ASICS that are much faster than the rest of the cluster. Consensus needs to be resistant to attacks that exploit the variability in Proof of History ASIC speed.
- Some forks may not end up accepted by the supermajority of the cluster, and voters need to recover from voting on such forks.
- Many forks may be votable by different voters, and each voter may see a different set of votable forks. The selected forks should eventually converge for the cluster.
- Reward based votes have an associated risk. Voters should have the ability to configure how much risk they take on.
- The [cost of rollback](tower-bft.md#cost-of-rollback) needs to be computable. It is important to clients that rely on some measurable form of Consistency. The costs to break consistency need to be computable, and increase super-linearly for older votes.
- ASIC speeds are different between nodes, and attackers could employ Proof of History ASICS that are much faster than the rest of the cluster. Consensus needs to be resistant to attacks that exploit the variability in Proof of History ASIC speed.
For brevity this design assumes that a single voter with a stake is deployed as an individual validator in the cluster.
@ -35,35 +37,35 @@ Before a vote is pushed to the stack, all the votes leading up to vote with a lo
For example, a vote stack with the following state:
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 4 | 4 | 2 | 6 |
| 3 | 3 | 4 | 7 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 4 | 4 | 2 | 6 |
| 3 | 3 | 4 | 7 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
_Vote 5_ is at time 9, and the resulting state is
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 5 | 9 | 2 | 11 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 5 | 9 | 2 | 11 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
_Vote 6_ is at time 10
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 6 | 10 | 2 | 12 |
| 5 | 9 | 4 | 13 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 6 | 10 | 2 | 12 |
| 5 | 9 | 4 | 13 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
At time 10 the new votes caught up to the previous votes. But _vote 2_ expires at 10, so when _vote 7_ at time 11 is applied the votes including and above _vote 2_ will be popped.
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 7 | 11 | 2 | 13 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 7 | 11 | 2 | 13 |
| 1 | 1 | 16 | 17 |
The lockout for vote 1 will not increase from 16 until the stack contains 5 votes.
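A toy model reproduces the tables above: a vote's lockout is `2^confirmations` and expires at `vote time + lockout`; applying a new vote first pops the deepest expired vote and everything above it, then pushes the new vote, then doubles the lockout of any deeper vote that now has at least `confirmations` newer votes stacked on top of it. This is a sketch of the doubling/expiry arithmetic only, not the production vote-state code:

```rust
struct Vote {
    time: u64,
    confirmations: u32,
}

impl Vote {
    fn lockout(&self) -> u64 {
        2u64.pow(self.confirmations)
    }
    fn expiration(&self) -> u64 {
        self.time + self.lockout()
    }
}

fn apply_vote(stack: &mut Vec<Vote>, time: u64) {
    // Pop the deepest expired vote and everything above it.
    if let Some(i) = stack.iter().position(|v| v.expiration() < time) {
        stack.truncate(i);
    }
    stack.push(Vote { time, confirmations: 1 });
    // Deeper votes double once enough newer votes sit on top of them.
    let depth = stack.len();
    for (i, v) in stack.iter_mut().enumerate() {
        if (depth - 1 - i) as u32 >= v.confirmations {
            v.confirmations += 1;
        }
    }
}

fn main() {
    let mut stack: Vec<Vote> = Vec::new();
    for t in [1, 2, 3, 4, 9, 10, 11] {
        apply_vote(&mut stack, t);
    }
    // Matches the final table: vote 7 (lockout 2) above vote 1 (lockout 16).
    let lockouts: Vec<u64> = stack.iter().map(|v| v.lockout()).collect();
    assert_eq!(lockouts, vec![16, 2]);
}
```

Running the vote sequence 1, 2, 3, 4, 9, 10, 11 through `apply_vote` walks through exactly the four table states shown above.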
@ -85,18 +87,18 @@ Each validator can independently set a threshold of cluster commitment to a fork
The following parameters need to be tuned:
* Number of votes in the stack before dequeue occurs \(32\).
* Rate of growth for lockouts in the stack \(2x\).
* Starting default lockout \(2\).
* Threshold depth for minimum cluster commitment before committing to the fork \(8\).
* Minimum cluster commitment size at threshold depth \(50%+\).
- Number of votes in the stack before dequeue occurs \(32\).
- Rate of growth for lockouts in the stack \(2x\).
- Starting default lockout \(2\).
- Threshold depth for minimum cluster commitment before committing to the fork \(8\).
- Minimum cluster commitment size at threshold depth \(50%+\).
### Free Choice
A "Free Choice" is an unenforceable validator action. There is no way for the protocol to encode and enforce these actions since each validator can modify the code and adjust the algorithm. A validator that maximizes self-reward over all possible futures should behave in such a way that the system is stable, and the local greedy choice should result in a greedy choice over all possible futures. A set of validators engaging in choices to disrupt the protocol is bound by its stake weight in the denial of service it can cause. Two options exist for a validator:
* a validator can outrun previous validator in virtual generation and submit a concurrent fork
* a validator can withhold a vote to observe multiple forks before voting
- a validator can outrun previous validator in virtual generation and submit a concurrent fork
- a validator can withhold a vote to observe multiple forks before voting
In both cases, the validators in the cluster have several forks to pick from concurrently, even though each fork represents a different height. In both cases it is impossible for the protocol to detect if the validator behavior is intentional or not.
@ -129,8 +131,8 @@ This attack is then limited to censoring the previous leaders fees, and individu
An attacker generates a concurrent fork from an older block to try to roll back the cluster. In this attack the concurrent fork is competing with forks that have already been voted on. This attack is limited by the exponential growth of the lockouts.
- 1 vote has a lockout of 2 slots. Concurrent fork must be at least 2 slots ahead, and be produced in 1 slot. Therefore requires an ASIC 2x faster.
- 2 votes have a lockout of 4 slots. Concurrent fork must be at least 4 slots ahead and produced in 2 slots. Therefore requires an ASIC 2x faster.
- 3 votes have a lockout of 8 slots. Concurrent fork must be at least 8 slots ahead and produced in 3 slots. Therefore requires an ASIC 2.6x faster.
- 10 votes have a lockout of 1024 slots. 1024/10, or 102.4x faster ASIC.
- 20 votes have a lockout of 2^20 slots. 2^20/20, or 52,428.8x faster ASIC.
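The figures above follow directly from the doubling rule, as this small standalone sketch shows (the helper names are illustrative and not part of the codebase): after `n` consecutive votes the lockout is 2^n slots, so a rollback fork must be produced with roughly a 2^n / n speed advantage.

```rust
// Lockout doubles with each vote in the stack, starting from 2 slots.
fn lockout_slots(votes: u32) -> u64 {
    2u64.pow(votes)
}

// A concurrent fork must cover `lockout_slots(n)` slots in the time the
// cluster produces `n`, hence the required speed multiple.
fn required_speedup(votes: u32) -> f64 {
    lockout_slots(votes) as f64 / votes as f64
}

fn main() {
    for &n in &[1, 2, 3, 10, 20] {
        println!(
            "{} votes: lockout {} slots, {:.1}x faster ASIC required",
            n,
            lockout_slots(n),
            required_speedup(n)
        );
    }
}
```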
View File
@ -1,4 +1,6 @@
---
title: Deterministic Transaction Fees
---
Transactions currently include a fee field that indicates the maximum fee a slot leader is permitted to charge to process a transaction. The cluster, on the other hand, agrees on a minimum fee. If the network is congested, the slot leader may prioritize the transactions offering higher fees. That means the client won't know how much was collected until the transaction is confirmed by the cluster and the remaining balance is checked. It smells of exactly what we dislike about Ethereum's "gas": non-determinism.
@ -14,14 +16,14 @@ Before sending a transaction to the cluster, a client may submit the transaction
## Fee Parameters
In the first implementation of this design, the only fee parameter is `lamports_per_signature`. The more signatures the cluster needs to verify, the higher the fee. The exact number of lamports is determined by the ratio of SPS to the SPS target. At the end of each slot, the cluster lowers `lamports_per_signature` when SPS is below the target and raises it when above the target. The minimum value for `lamports_per_signature` is 50% of the target `lamports_per_signature` and the maximum value is 10x the target `lamports_per_signature`.
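The clamping rule described above can be sketched as follows (function and variable names are illustrative; this is not the actual runtime code):

```rust
// Clamp a proposed per-signature fee to [50% of target, 10x target].
fn clamp_fee(proposed: u64, target: u64) -> u64 {
    proposed.max(target / 2).min(target * 10)
}

fn main() {
    let target = 10_000; // assumed target lamports_per_signature
    assert_eq!(clamp_fee(1_000, target), 5_000); // raised to the 50% floor
    assert_eq!(clamp_fee(500_000, target), 100_000); // capped at 10x target
    assert_eq!(clamp_fee(12_000, target), 12_000); // within bounds, unchanged
    println!("ok");
}
```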
Future parameters might include:
- `lamports_per_pubkey` - cost to load an account
- `lamports_per_slot_distance` - higher cost to load very old accounts
- `lamports_per_byte` - cost per size of account loaded
- `lamports_per_bpf_instruction` - cost to run a program
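If such parameters were added, the total fee could plausibly be a weighted sum over the transaction's resource usage. The sketch below is hypothetical; of the fields shown, only `lamports_per_signature` exists in the current implementation.

```rust
// Hypothetical fee schedule combining the parameters listed above.
struct FeeParams {
    lamports_per_signature: u64,
    lamports_per_pubkey: u64,
    lamports_per_byte: u64,
    lamports_per_bpf_instruction: u64,
}

// Total fee as a weighted sum of the transaction's resource usage.
fn total_fee(p: &FeeParams, sigs: u64, pubkeys: u64, bytes: u64, bpf_ins: u64) -> u64 {
    sigs * p.lamports_per_signature
        + pubkeys * p.lamports_per_pubkey
        + bytes * p.lamports_per_byte
        + bpf_ins * p.lamports_per_bpf_instruction
}

fn main() {
    let p = FeeParams {
        lamports_per_signature: 5_000,
        lamports_per_pubkey: 10,
        lamports_per_byte: 1,
        lamports_per_bpf_instruction: 2,
    };
    // 2 signatures, 3 accounts, 100 bytes loaded, 50 BPF instructions
    println!("{}", total_fee(&p, 2, 3, 100, 50)); // 10230
}
```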
## Attacks
View File
@ -1,4 +1,6 @@
---
title: Validator Timestamp Oracle
---
Third-party users of Solana sometimes need to know the real-world time a block
was produced, generally to meet compliance requirements for external auditors or
@ -10,17 +12,18 @@ The general outline of the proposed implementation is as follows:
- At regular intervals, each validator records its observed time for a known slot
on-chain (via a Timestamp added to a slot Vote)
- A client can request a block time for a rooted block using the `getBlockTime`
RPC method. When a client requests a timestamp for block N:
1. A validator determines a "cluster" timestamp for a recent timestamped slot
before block N by observing all the timestamped Vote instructions recorded on
the ledger that reference that slot, and determining the stake-weighted mean
timestamp.
2. This recent mean timestamp is then used to calculate the timestamp of
block N using the cluster's established slot duration
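The two steps above can be sketched as follows, under simplified assumptions (integer math, illustrative types, stake expressed as a plain `u64`; this is not the actual validator code):

```rust
// Step 1: stake-weighted mean of the timestamps validators reported for a
// recent slot. Each sample is (stake, observed unix timestamp).
fn stake_weighted_mean(samples: &[(u64, i64)]) -> i64 {
    let total_stake: u64 = samples.iter().map(|(s, _)| s).sum();
    let weighted: i64 = samples.iter().map(|(s, t)| (*s as i64) * t).sum();
    weighted / total_stake as i64
}

fn main() {
    // stake 100 reports t = 1_600_000_000; stake 300 reports t = 1_600_000_004
    let mean = stake_weighted_mean(&[(100, 1_600_000_000), (300, 1_600_000_004)]);
    assert_eq!(mean, 1_600_000_003);

    // Step 2: extrapolate to block N using the slot duration (~400 ms here).
    let slot_offset = 10_i64;
    let block_n_timestamp = mean + (slot_offset * 400) / 1000;
    println!("{}", block_n_timestamp); // 1600000007
}
```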
Requirements:
- Any validator replaying the ledger in the future must come up with the same
time for every block since genesis
- Estimated block times should not drift more than an hour or so before resolving
@ -43,8 +46,7 @@ records its observed time by including a timestamp in its Vote instruction
submission. The corresponding slot for the timestamp is the newest Slot in the
Vote vector (`Vote::slots.iter().max()`). It is signed by the validator's
identity keypair as a usual Vote. In order to enable this reporting, the Vote
struct needs to be extended to include a timestamp field, `timestamp: Option<UnixTimestamp>`, which will be set to `None` in most Votes.
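A minimal sketch of the extended struct might look like this (the type aliases stand in for the real SDK types, and the field layout is simplified):

```rust
type Slot = u64;
type UnixTimestamp = i64; // seconds since the unix epoch
type Hash = [u8; 32]; // stand-in for the SDK Hash type

struct Vote {
    /// A stack of voted-on slots, oldest first
    slots: Vec<Slot>,
    /// Hash of the bank state at the last voted slot (simplified here)
    hash: Hash,
    /// `None` in most Votes; populated roughly every 30 minutes
    timestamp: Option<UnixTimestamp>,
}

fn main() {
    let vote = Vote { slots: vec![5, 6, 7], hash: [0; 32], timestamp: None };
    // The slot a timestamp refers to is the newest slot in the Vote vector.
    println!("{:?}", vote.slots.iter().max()); // Some(7)
}
```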
This proposal suggests that Vote instructions with `Some(timestamp)` be issued
every 30min, which should be short enough to prevent block times drifting very
@ -67,7 +69,7 @@ A validator's vote account will hold its most recent slot-timestamp in VoteState
### Vote Program
The on-chain Vote program needs to be extended to process a timestamp sent with
a Vote instruction from validators. In addition to its current process_vote
functionality (including loading the correct Vote account and verifying that the
transaction signer is the expected validator), this process needs to compare the
timestamp and corresponding slot to the currently stored values to verify that
@ -86,7 +88,7 @@ let timestamp_slot = floor(current_slot / timestamp_interval);
Then the validator needs to gather all Vote WithTimestamp transactions from the
ledger that reference that slot, using `Blockstore::get_slot_entries()`. As these
transactions could have taken some time to reach and be processed by the leader,
the validator needs to scan several completed blocks after the timestamp_slot to
get a reasonable set of Timestamps. The exact number of slots will need to be
tuned: More slots will enable greater cluster participation and more timestamp
datapoints; fewer slots will speed up timestamp filtering.
@ -109,5 +111,5 @@ let block_n_timestamp = mean_timestamp + (block_n_slot_offset * slot_duration);
```
where `block_n_slot_offset` is the difference between the slot of block N and
the timestamp_slot, and `slot_duration` is derived from the cluster's
`slots_per_year` stored in each Bank
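Putting the pieces together, the final calculation can be sketched as below; the concrete `slots_per_year` value is illustrative only (it corresponds to a 400 ms slot), not the value stored in any particular Bank.

```rust
// Derive the slot duration (in seconds) from the cluster's slots_per_year.
fn slot_duration_secs(slots_per_year: f64) -> f64 {
    let seconds_per_year = 365.25 * 24.0 * 60.0 * 60.0; // 31,557,600
    seconds_per_year / slots_per_year
}

// Extrapolate from the mean timestamp to block N's timestamp.
fn block_timestamp(mean_timestamp: f64, block_n_slot_offset: f64, slot_duration: f64) -> f64 {
    mean_timestamp + block_n_slot_offset * slot_duration
}

fn main() {
    // 78,894,000 slots/year corresponds to a 400 ms slot (illustrative value).
    let slot_duration = slot_duration_secs(78_894_000.0);
    let ts = block_timestamp(1_600_000_000.0, 25.0, slot_duration);
    println!("slot_duration = {:.3} s, block timestamp = {:.0}", slot_duration, ts);
}
```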