Move from gitbook to docusaurus, build docs in Travis CI (#10970)
* fix: ignore unknown fields in more RPC responses
* Remove mdbook infrastructure
* Delete gitattributes and other theme related items. Move all docs to /docs folder to support Docusaurus
* all docs need to be moved to /docs
* can be changed in the future. Add Docusaurus infrastructure
* initialize docusaurus repo. Remove trailing whitespace, add support for eslint. Change Docusaurus configuration to support `src`
* No need to rename the folder! Change a setting and we're all good to go.
* Fixing rebase items
* Remove unnecessary markdown file, fix type
* Some fonts are hard to read. Others, not so much. Rubik, you've been sidelined. Roboto, into the limelight!
* As much as we all love tutorials, I think we all can navigate around a markdown file. Say goodbye, `mdx.md`.
* Setup deployment infrastructure
* Move docs job from buildkite to travis
* Fix travis config
* Add vercel token to travis config
* Only deploy docs after merge
* Docker rust env
* Revert "Docker rust env". This reverts commit f84bc208e807aab1c0d97c7588bbfada1fedfa7c.
* Build CLI usage from docker
* Pacify shellcheck
* Run job on PR and new commits for publication
* Update README
* Fix svg image building
* shellcheck

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Ryan Shea <rmshea@users.noreply.github.com>
Co-authored-by: publish-docs.sh <maintainers@solana.com>
@ -1,3 +1,5 @@
|
||||
# Accepted Design Proposals
|
||||
---
|
||||
title: Accepted Design Proposals
|
||||
---
|
||||
|
||||
The following architectural proposals have been accepted by the Solana team, but are not yet fully implemented. The proposals may be implemented as described, implemented differently as issues in the designs become evident, or not implemented at all. If implemented, the proposal will be moved to [Implemented Proposals](../implemented-proposals/README.md) and the details will be added to relevant sections of the docs.
|
||||
|
@ -1,10 +1,12 @@
|
||||
# Bankless Leader
|
||||
---
|
||||
title: Bankless Leader
|
||||
---
|
||||
|
||||
A bankless leader does the minimum amount of work to produce a valid block. The leader is tasked with ingesting transactions, sorting and filtering valid transactions, arranging them into entries, shredding the entries and broadcasting the shreds, while a validator only needs to reassemble the block and replay execution of well-formed entries. The leader does 3x more memory operations per processed transaction than the validator before any bank execution.
|
||||
|
||||
## Rationale
|
||||
|
||||
Normal bank operation for a spend needs to do 2 loads and 2 stores. With this design the leader does just 1 load, so there is 4x less account\_db work before generating the block. The store operations are likely to be more expensive than reads.
Normal bank operation for a spend needs to do 2 loads and 2 stores. With this design the leader does just 1 load, so there is 4x less account_db work before generating the block. The store operations are likely to be more expensive than reads.
|
||||
|
||||
When replay stage starts processing the same transactions, it can assume that PoH is valid, and that all the entries are safe for parallel execution. The fee accounts that have been loaded to produce the block are likely to still be in memory, so the additional load should be warm and the cost is likely to be amortized.
|
||||
|
||||
@ -25,7 +27,7 @@ The balance cache lookups must reference the same base fork for the entire durat
|
||||
Prior to the balance check, the leader validates all the signatures in the transaction.
|
||||
|
||||
1. Verify the accounts are not in use and BlockHash is valid.
|
||||
2. Check if the fee account is present in the cache, or load the account from accounts\_db and store the lamport balance in the cache.
|
||||
2. Check if the fee account is present in the cache, or load the account from accounts_db and store the lamport balance in the cache.
|
||||
3. If the balance is less than the fee, drop the transaction.
|
||||
4. Subtract the fee from the balance.
|
||||
5. For all the keys in the transaction that are Credit-Debit and are referenced by an instruction, reduce their balance to 0 in the cache. The account fee is declared as Credit-Debit, but as long as it is not used in any instruction its balance will not be reduced to 0.
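A minimal Rust sketch of the balance-cache check in steps 2-5 above (the `BalanceCache` type and function names are illustrative assumptions, not the actual leader implementation):

```rust
use std::collections::HashMap;

type Pubkey = [u8; 32]; // stand-in for the real Pubkey type

/// Hypothetical per-fork cache of fee-payer lamport balances.
struct BalanceCache {
    balances: HashMap<Pubkey, u64>,
}

impl BalanceCache {
    /// Steps 2-4: look up (or load) the fee account, drop the transaction if it
    /// cannot pay the fee, otherwise debit the fee in the cache.
    fn check_and_debit_fee(
        &mut self,
        fee_payer: &Pubkey,
        fee: u64,
        load_from_accounts_db: impl Fn(&Pubkey) -> u64,
    ) -> bool {
        let balance = self
            .balances
            .entry(*fee_payer)
            .or_insert_with(|| load_from_accounts_db(fee_payer));
        if *balance < fee {
            return false; // drop the transaction
        }
        *balance -= fee;
        true
    }

    /// Step 5: a Credit-Debit account referenced by an instruction is
    /// pessimistically reduced to 0 in the cache.
    fn zero_credit_debit_account(&mut self, key: &Pubkey) {
        self.balances.insert(*key, 0);
    }
}
```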
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Block Confirmation
|
||||
---
|
||||
title: Block Confirmation
|
||||
---
|
||||
|
||||
A validator votes on a PoH hash for two purposes. First, the vote indicates it
|
||||
believes the ledger is valid up until that point in time. Second, since many
|
||||
@ -14,16 +16,16 @@ height of the block it is voting on. The account stores the 32 highest heights.
|
||||
|
||||
### Problems
|
||||
|
||||
* Only the validator knows how to find its own votes directly.
|
||||
- Only the validator knows how to find its own votes directly.
|
||||
|
||||
Other components, such as the one that calculates confirmation time, need to
|
||||
be baked into the validator code. The validator code queries the bank for all
|
||||
accounts owned by the vote program.
|
||||
|
||||
* Voting ballots do not contain a PoH hash. The validator is only voting that
|
||||
- Voting ballots do not contain a PoH hash. The validator is only voting that
|
||||
it has observed an arbitrary block at some height.
|
||||
|
||||
* Voting ballots do not contain a hash of the bank state. Without that hash,
|
||||
- Voting ballots do not contain a hash of the bank state. Without that hash,
|
||||
there is no evidence that the validator executed the transactions and
|
||||
verified there were no double spends.
|
||||
|
||||
@ -50,8 +52,8 @@ log the time since the NewBlock transaction was submitted.
|
||||
|
||||
### Finality and Payouts
|
||||
|
||||
[Tower BFT](../implemented-proposals/tower-bft.md) is the proposed fork selection algorithm. It proposes
|
||||
that payment to miners be postponed until the *stack* of validator votes reaches
|
||||
[Tower BFT](../implemented-proposals/tower-bft.md) is the proposed fork selection algorithm. It proposes
|
||||
that payment to miners be postponed until the _stack_ of validator votes reaches
|
||||
a certain depth, at which point rollback is not economically feasible. The vote
|
||||
program may therefore implement Tower BFT. Vote instructions would need to
|
||||
reference a global Tower account so that it can track cross-block state.
|
||||
@ -62,7 +64,7 @@ reference a global Tower account so that it can track cross-block state.
|
||||
|
||||
Using programs and accounts to implement this is a bit tedious. The hardest
|
||||
part is figuring out how much space to allocate in NewBlock. The two variables
|
||||
are the *active set* and the stakes of those validators. If we calculate the
|
||||
are the _active set_ and the stakes of those validators. If we calculate the
|
||||
active set at the time NewBlock is submitted, the number of validators to
|
||||
allocate space for is known upfront. If, however, we allow new validators to
|
||||
vote on old blocks, then we'd need a way to allocate space dynamically.
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Cluster Test Framework
|
||||
---
|
||||
title: Cluster Test Framework
|
||||
---
|
||||
|
||||
This document proposes the Cluster Test Framework \(CTF\). CTF is a test harness that allows tests to execute against a local, in-process cluster or a deployed cluster.
|
||||
|
||||
@ -99,4 +101,3 @@ pub fn test_large_invalid_gossip_nodes(
|
||||
verify_spends(&cluster);
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -1,12 +1,14 @@
|
||||
# Inter-chain Transaction Verification
|
||||
---
|
||||
title: Inter-chain Transaction Verification
|
||||
---
|
||||
|
||||
## Problem
|
||||
|
||||
Inter-chain applications are not new to the digital asset ecosystem; in fact, even the smaller centralized exchanges still categorically dwarf all single chain applications put together in terms of users and volume. They command massive valuations and have spent years effectively optimizing their core products for a broad range of end users. However, their basic operations center around mechanisms that require their users to unilaterally trust them, typically with little to no recourse or protection from accidental loss. This has led to the broader digital asset ecosystem being fractured along network lines because interoperability solutions typically:
|
||||
|
||||
* Are technically complex to fully implement
|
||||
* Create unstable network scale incentive structures
|
||||
* Require consistent and high level cooperation between stakeholders
|
||||
- Are technically complex to fully implement
|
||||
- Create unstable network scale incentive structures
|
||||
- Require consistent and high level cooperation between stakeholders
|
||||
|
||||
## Proposed Solution
|
||||
|
||||
@ -36,9 +38,9 @@ The Solana Inter-chain SPV mechanism consists of the following components and pa
|
||||
|
||||
A contract deployed on Solana which statelessly verifies SPV proofs for the caller. It takes as arguments for validation:
|
||||
|
||||
* An SPV proof in the correct format of the blockchain associated with the program
|
||||
* Reference\(s\) to the relevant block headers to compare that proof against
|
||||
* The necessary parameters of the transaction to verify
|
||||
- An SPV proof in the correct format of the blockchain associated with the program
|
||||
- Reference\(s\) to the relevant block headers to compare that proof against
|
||||
- The necessary parameters of the transaction to verify
|
||||
|
||||
If the proof in question is successfully validated, the SPV program saves proof
|
||||
|
||||
@ -54,9 +56,9 @@ A contract deployed on Solana which statelessly verifies SPV proofs for the call
|
||||
|
||||
A contract deployed on Solana which coordinates and intermediates the interaction between Clients and Provers and manages the validation of requests, headers, proofs, etc. It is the primary point of access for Client contracts to access the inter-chain SPV mechanism. It offers the following core features:
|
||||
|
||||
* Submit Proof Request - allows client to place a request for a specific proof or set of proofs
|
||||
* Cancel Proof Request - allows client to invalidate a pending request
|
||||
* Fill Proof Request - used by Provers to submit for validation a proof corresponding to a given Proof Request
|
||||
- Submit Proof Request - allows client to place a request for a specific proof or set of proofs
|
||||
- Cancel Proof Request - allows client to invalidate a pending request
|
||||
- Fill Proof Request - used by Provers to submit for validation a proof corresponding to a given Proof Request
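A hedged Rust sketch of how these three entry points might be expressed as program instructions (the enum and field names are illustrative, not the deployed program's interface):

```rust
/// Illustrative instruction set for the SPV request/fill flow described above.
enum SpvInstruction {
    /// Client places a request for a specific proof or set of proofs,
    /// including the fee it is willing to pay for fulfillment.
    SubmitProofRequest { transaction_hash: [u8; 32], confirmations: u8, fee: u64 },
    /// Client invalidates one of its pending requests.
    CancelProofRequest { request_id: [u8; 32] },
    /// Prover submits a proof corresponding to a given pending request
    /// for validation by the SPV program.
    FillProofRequest { request_id: [u8; 32], proof: Vec<u8> },
}
```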
|
||||
|
||||
The SPV program maintains a publicly available listing of valid pending Proof
|
||||
|
||||
@ -90,15 +92,14 @@ An account-based data structure used to maintain block headers for the purpose o
|
||||
|
||||
Store Headers in program sub-accounts indexed by Public address:
|
||||
|
||||
* Each sub-account holds one header and has a public key matching the blockhash
|
||||
* Requires same number of account data lookups as confirmations per verification
|
||||
* Limit on number of confirmations \(15-20\) via max transaction data ceiling
|
||||
* No network-wide duplication of individual headers
|
||||
- Each sub-account holds one header and has a public key matching the blockhash
|
||||
- Requires same number of account data lookups as confirmations per verification
|
||||
- Limit on number of confirmations \(15-20\) via max transaction data ceiling
|
||||
- No network-wide duplication of individual headers
|
||||
|
||||
Linked List of multiple sub-accounts storing headers:
|
||||
|
||||
* Maintain sequential index of storage accounts, many headers per storage account
|
||||
* Max 2 account data lookups for >99.9% of verifications \(1 for most\)
|
||||
* Compact sequential data address format allows any number of confirmations and fast lookups
|
||||
* Facilitates network-wide header duplication inefficiencies
|
||||
|
||||
- Maintain sequential index of storage accounts, many headers per storage account
|
||||
- Max 2 account data lookups for >99.9% of verifications \(1 for most\)
|
||||
- Compact sequential data address format allows any number of confirmations and fast lookups
|
||||
- Facilitates network-wide header duplication inefficiencies
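A hedged sketch of what one storage account in the linked-list layout above might hold (the field names and raw-header representation are illustrative assumptions, not the proposal's actual format):

```rust
/// Illustrative account data for the linked-list header store option:
/// many headers per storage account, chained by a sequential index.
struct HeaderStoreAccount {
    /// Position of this account in the sequential chain of storage accounts.
    index: u64,
    /// Address of the next storage account in the chain, if one exists.
    next_account: Option<[u8; 32]>,
    /// Raw serialized block headers from the remote chain, in height order.
    headers: Vec<Vec<u8>>,
}
```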
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Ledger Replication
|
||||
---
|
||||
title: Ledger Replication
|
||||
---
|
||||
|
||||
Note: this ledger replication solution was partially implemented, but not
|
||||
completed. The partial implementation was removed by
|
||||
@ -28,7 +30,7 @@ Archivers are specialized _light clients_. They download a part of the ledger \(
|
||||
|
||||
We have the following constraints:
|
||||
|
||||
* Verification requires generating the CBC blocks. That requires space of 2
|
||||
- Verification requires generating the CBC blocks. That requires space of 2
|
||||
|
||||
blocks per identity, and 1 CUDA core per identity for the same dataset. So as
|
||||
|
||||
@ -36,7 +38,7 @@ We have the following constraints:
|
||||
|
||||
identities verified concurrently for the same dataset.
|
||||
|
||||
* Validators will randomly sample the set of storage proofs to the set that
|
||||
- Validators will randomly sample the set of storage proofs to the set that
|
||||
|
||||
they can handle, and only the creators of those chosen proofs will be
|
||||
|
||||
@ -48,31 +50,31 @@ We have the following constraints:
|
||||
|
||||
### Constants
|
||||
|
||||
1. SLOTS\_PER\_SEGMENT: Number of slots in a segment of ledger data. The
|
||||
1. SLOTS_PER_SEGMENT: Number of slots in a segment of ledger data. The
|
||||
|
||||
unit of storage for an archiver.
|
||||
|
||||
2. NUM\_KEY\_ROTATION\_SEGMENTS: Number of segments after which archivers
|
||||
2. NUM_KEY_ROTATION_SEGMENTS: Number of segments after which archivers
|
||||
|
||||
regenerate their encryption keys and select a new dataset to store.
|
||||
|
||||
3. NUM\_STORAGE\_PROOFS: Number of storage proofs required for a storage proof
|
||||
3. NUM_STORAGE_PROOFS: Number of storage proofs required for a storage proof
|
||||
|
||||
claim to be successfully rewarded.
|
||||
|
||||
4. RATIO\_OF\_FAKE\_PROOFS: Ratio of fake proofs to real proofs that a storage
|
||||
4. RATIO_OF_FAKE_PROOFS: Ratio of fake proofs to real proofs that a storage
|
||||
|
||||
mining proof claim has to contain to be valid for a reward.
|
||||
|
||||
5. NUM\_STORAGE\_SAMPLES: Number of samples required for a storage mining
|
||||
5. NUM_STORAGE_SAMPLES: Number of samples required for a storage mining
|
||||
|
||||
proof.
|
||||
|
||||
6. NUM\_CHACHA\_ROUNDS: Number of encryption rounds performed to generate
|
||||
6. NUM_CHACHA_ROUNDS: Number of encryption rounds performed to generate
|
||||
|
||||
encrypted state.
|
||||
|
||||
7. NUM\_SLOTS\_PER\_TURN: Number of slots that define a single storage epoch or
|
||||
7. NUM_SLOTS_PER_TURN: Number of slots that define a single storage epoch or
|
||||
|
||||
a "turn" of the PoRep game.
|
||||
|
||||
@ -114,14 +116,14 @@ We have the following constraints:
|
||||
|
||||
depending on how paranoid an archiver is:
|
||||
|
||||
* \(a\) archiver can ask a validator
|
||||
* \(b\) archiver can ask multiple validators
|
||||
* \(c\) archiver can ask other archivers
|
||||
* \(d\) archiver can subscribe to the full transaction stream and generate
|
||||
- \(a\) archiver can ask a validator
|
||||
- \(b\) archiver can ask multiple validators
|
||||
- \(c\) archiver can ask other archivers
|
||||
- \(d\) archiver can subscribe to the full transaction stream and generate
|
||||
|
||||
the information itself \(assuming the slot is recent enough\)
|
||||
|
||||
* \(e\) archiver can subscribe to an abbreviated transaction stream to
|
||||
- \(e\) archiver can subscribe to an abbreviated transaction stream to
|
||||
|
||||
generate the information itself \(assuming the slot is recent enough\)
|
||||
|
||||
@ -181,17 +183,17 @@ The Proof of Replication game has 4 primary stages. For each "turn" multiple PoR
|
||||
The 4 stages of the PoRep Game are as follows:
|
||||
|
||||
1. Proof submission stage
|
||||
* Archivers: submit as many proofs as possible during this stage
|
||||
* Validators: No-op
|
||||
- Archivers: submit as many proofs as possible during this stage
|
||||
- Validators: No-op
|
||||
2. Proof verification stage
|
||||
* Archivers: No-op
|
||||
* Validators: Select archivers and verify their proofs from the previous turn
|
||||
- Archivers: No-op
|
||||
- Validators: Select archivers and verify their proofs from the previous turn
|
||||
3. Proof challenge stage
|
||||
* Archivers: Submit the proof mask with justifications \(for fake proofs submitted 2 turns ago\)
|
||||
* Validators: No-op
|
||||
- Archivers: Submit the proof mask with justifications \(for fake proofs submitted 2 turns ago\)
|
||||
- Validators: No-op
|
||||
4. Reward collection stage
|
||||
* Archivers: Collect rewards for 3 turns ago
|
||||
* Validators: Collect rewards for 3 turns ago
|
||||
- Archivers: Collect rewards for 3 turns ago
|
||||
- Validators: Collect rewards for 3 turns ago
|
||||
|
||||
For each turn of the PoRep game, both Validators and Archivers evaluate each stage. The stages are run as separate transactions on the storage program.
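A minimal sketch of those four stages as they might be modeled in the storage program (the enum is illustrative; the real program encodes the turn state differently):

```rust
/// Illustrative stages of one PoRep turn, run as separate storage-program
/// transactions as described above.
enum PoRepStage {
    ProofSubmission,   // archivers submit as many proofs as possible
    ProofVerification, // validators verify proofs from the previous turn
    ProofChallenge,    // archivers reveal fake proofs submitted 2 turns ago
    RewardCollection,  // both sides collect rewards for 3 turns ago
}
```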
|
||||
|
||||
@ -207,7 +209,7 @@ For each turn of the PoRep game, both Validators and Archivers evaluate each sta
|
||||
|
||||
The validator provides an RPC interface to access this map. Using this API, clients
|
||||
|
||||
can map a segment to an archiver's network address \(correlating it via cluster\_info table\).
|
||||
can map a segment to an archiver's network address \(correlating it via cluster_info table\).
|
||||
|
||||
The clients can then send repair requests to the archiver to retrieve segments.
|
||||
|
||||
@ -223,17 +225,17 @@ Our solution to this is to force the clients to continue using the same identity
|
||||
|
||||
## Validator attacks
|
||||
|
||||
* If a validator approves fake proofs, archiver can easily out them by
|
||||
- If a validator approves fake proofs, archiver can easily out them by
|
||||
|
||||
showing the initial state for the hash.
|
||||
|
||||
* If a validator marks real proofs as fake, no on-chain computation can be done
|
||||
- If a validator marks real proofs as fake, no on-chain computation can be done
|
||||
|
||||
to distinguish who is correct. Rewards would have to rely on the results from
|
||||
|
||||
multiple validators to catch bad actors and archivers from being denied rewards.
|
||||
|
||||
* Validator stealing mining proof results for itself. The proofs are derived
|
||||
- Validator stealing mining proof results for itself. The proofs are derived
|
||||
|
||||
from a signature from an archiver, since the validator does not know the
|
||||
|
||||
@ -249,29 +251,29 @@ Some percentage of fake proofs are also necessary to receive a reward from stora
|
||||
|
||||
## Notes
|
||||
|
||||
* We can reduce the costs of verification of PoRep by using PoH, and actually
|
||||
- We can reduce the costs of verification of PoRep by using PoH, and actually
|
||||
|
||||
make it feasible to verify a large number of proofs for a global dataset.
|
||||
|
||||
* We can eliminate grinding by forcing everyone to sign the same PoH hash and
|
||||
- We can eliminate grinding by forcing everyone to sign the same PoH hash and
|
||||
|
||||
use the signatures as the seed
|
||||
|
||||
* The game between validators and archivers is over random blocks and random
|
||||
- The game between validators and archivers is over random blocks and random
|
||||
|
||||
encryption identities and random data samples. The goal of randomization is
|
||||
|
||||
to prevent colluding groups from having overlap on data or validation.
|
||||
|
||||
* Archiver clients fish for lazy validators by submitting fake proofs that
|
||||
- Archiver clients fish for lazy validators by submitting fake proofs that
|
||||
|
||||
they can prove are fake.
|
||||
|
||||
* To defend against Sybil client identities that try to store the same block we
|
||||
- To defend against Sybil client identities that try to store the same block we
|
||||
|
||||
force the clients to store for multiple rounds before receiving a reward.
|
||||
|
||||
* Validators should also get rewarded for validating submitted storage proofs
|
||||
- Validators should also get rewarded for validating submitted storage proofs
|
||||
|
||||
as incentive for storing the ledger. They can only validate proofs if they
|
||||
|
||||
@ -287,35 +289,35 @@ The storage epoch should be the number of slots which results in around 100GB-1T
|
||||
|
||||
## Validator behavior
|
||||
|
||||
1. Every NUM\_KEY\_ROTATION\_TICKS it also validates samples received from
|
||||
1. Every NUM_KEY_ROTATION_TICKS it also validates samples received from
|
||||
|
||||
archivers. It signs the PoH hash at that point and uses the following
|
||||
|
||||
algorithm with the signature as the input:
|
||||
|
||||
* The low 5 bits of the first byte of the signature creates an index into
|
||||
- The low 5 bits of the first byte of the signature creates an index into
|
||||
|
||||
another starting byte of the signature.
|
||||
|
||||
* The validator then looks at the set of storage proofs where the byte of
|
||||
- The validator then looks at the set of storage proofs where the byte of
|
||||
|
||||
the proof's sha state vector starting from the low byte matches exactly
|
||||
|
||||
with the chosen byte\(s\) of the signature.
|
||||
|
||||
* If the set of proofs is larger than the validator can handle, then it
|
||||
- If the set of proofs is larger than the validator can handle, then it
|
||||
|
||||
increases to matching 2 bytes in the signature.
|
||||
|
||||
* Validator continues to increase the number of matching bytes until a
|
||||
- Validator continues to increase the number of matching bytes until a
|
||||
|
||||
workable set is found.
|
||||
|
||||
* It then creates a mask of valid proofs and fake proofs and sends it to
|
||||
- It then creates a mask of valid proofs and fake proofs and sends it to
|
||||
|
||||
the leader. This is a storage proof confirmation transaction.
|
||||
|
||||
2. After a lockout period of NUM\_SECONDS\_STORAGE\_LOCKOUT seconds, the
|
||||
2. After a lockout period of NUM_SECONDS_STORAGE_LOCKOUT seconds, the
|
||||
|
||||
validator then submits a storage proof claim transaction which then causes the
|
||||
|
||||
@ -331,7 +333,7 @@ The storage epoch should be the number of slots which results in around 100GB-1T
|
||||
|
||||
seed for the hash result.
|
||||
|
||||
* A fake proof should consist of an archiver hash of a signature of a PoH
|
||||
- A fake proof should consist of an archiver hash of a signature of a PoH
|
||||
|
||||
value. That way when the archiver reveals the fake proof, it can be
|
||||
|
||||
@ -362,9 +364,9 @@ SubmitMiningProof {
|
||||
keys = [archiver_keypair]
|
||||
```
|
||||
|
||||
Archivers create these after mining their stored ledger data for a certain hash value. The slot is the end slot of the segment of ledger they are storing, the sha\_state is the result of the archiver using the hash function to sample their encrypted ledger segment. The signature is the signature that was created when they signed a PoH value for the current storage epoch. The list of proofs from the current storage epoch should be saved in the account state, and then transferred to a list of proofs for the previous epoch when the epoch passes. In a given storage epoch a given archiver should only submit proofs for one segment.
Archivers create these after mining their stored ledger data for a certain hash value. The slot is the end slot of the segment of ledger they are storing, the sha_state is the result of the archiver using the hash function to sample their encrypted ledger segment. The signature is the signature that was created when they signed a PoH value for the current storage epoch. The list of proofs from the current storage epoch should be saved in the account state, and then transferred to a list of proofs for the previous epoch when the epoch passes. In a given storage epoch a given archiver should only submit proofs for one segment.
|
||||
|
||||
The program should have a list of slots which are valid storage mining slots. This list should be maintained by keeping track of rooted slots that a significant portion of the network has voted on with a high lockout value, maybe 32 votes old. Every SLOTS\_PER\_SEGMENT number of slots would be added to this set. The program should check that the slot is in this set. The set can be maintained by receiving an AdvertiseStorageRecentBlockHash and checking with its bank/Tower BFT state.
The program should have a list of slots which are valid storage mining slots. This list should be maintained by keeping track of rooted slots that a significant portion of the network has voted on with a high lockout value, maybe 32 votes old. Every SLOTS_PER_SEGMENT number of slots would be added to this set. The program should check that the slot is in this set. The set can be maintained by receiving an AdvertiseStorageRecentBlockHash and checking with its bank/Tower BFT state.
|
||||
|
||||
The program should do a signature verify check on the signature, public key from the transaction submitter and the message of the previous storage epoch PoH value.
|
||||
|
||||
@ -379,7 +381,7 @@ keys = [validator_keypair, archiver_keypair(s) (unsigned)]
|
||||
|
||||
A validator will submit this transaction to indicate that a set of proofs for a given segment are valid/not-valid or skipped where the validator did not look at it. The keypairs for the archivers that it looked at should be referenced in the keys so the program logic can go to those accounts and see that the proofs are generated in the previous epoch. The sampling of the storage proofs should be verified ensuring that the correct proofs are skipped by the validator according to the logic outlined in the validator behavior of sampling.
|
||||
|
||||
The included archiver keys will indicate the storage samples which are being referenced; the length of the proof\_mask should be verified against the set of storage proofs in the referenced archiver account\(s\), and should match with the number of proofs submitted in the previous storage epoch in the state of said archiver account.
The included archiver keys will indicate the storage samples which are being referenced; the length of the proof_mask should be verified against the set of storage proofs in the referenced archiver account\(s\), and should match with the number of proofs submitted in the previous storage epoch in the state of said archiver account.
|
||||
|
||||
### ClaimStorageReward
|
||||
|
||||
@ -401,7 +403,7 @@ ChallengeProofValidation {
|
||||
keys = [archiver_keypair, validator_keypair]
|
||||
```
|
||||
|
||||
This transaction is for catching lazy validators who are not doing the work to validate proofs. An archiver will submit this transaction when it sees a validator has approved a fake SubmitMiningProof transaction. Since the archiver is a light client not looking at the full chain, it will have to ask a validator or some set of validators for this information, maybe via an RPC call, to obtain all ProofValidations for a certain segment in the previous storage epoch. The program will look in the validator account state, see that a ProofValidation was submitted in the previous storage epoch, hash the hash\_seed\_value, and see that the hash matches the SubmitMiningProof transaction and that the validator marked it as valid. If so, then it will save the challenge to the list of challenges that it has in its state.
This transaction is for catching lazy validators who are not doing the work to validate proofs. An archiver will submit this transaction when it sees a validator has approved a fake SubmitMiningProof transaction. Since the archiver is a light client not looking at the full chain, it will have to ask a validator or some set of validators for this information, maybe via an RPC call, to obtain all ProofValidations for a certain segment in the previous storage epoch. The program will look in the validator account state, see that a ProofValidation was submitted in the previous storage epoch, hash the hash_seed_value, and see that the hash matches the SubmitMiningProof transaction and that the validator marked it as valid. If so, then it will save the challenge to the list of challenges that it has in its state.
|
||||
|
||||
### AdvertiseStorageRecentBlockhash
|
||||
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Optimistic Confirmation and Slashing
|
||||
---
|
||||
title: Optimistic Confirmation and Slashing
|
||||
---
|
||||
|
||||
Progress on optimistic confirmation can be tracked here
|
||||
|
||||
@ -7,7 +9,7 @@ https://github.com/solana-labs/solana/projects/52
|
||||
At the end of May, the mainnet-beta is moving to 1.1, and testnet is
|
||||
moving to 1.2. With 1.2, testnet will behave as if it has optimistic
|
||||
finality as long as no more than 4.66% of the validators are
|
||||
acting maliciously. Applications can assume that 2/3+ votes observed in
|
||||
acting maliciously. Applications can assume that 2/3+ votes observed in
|
||||
gossip confirm a block or that at least 4.66% of the network is violating
|
||||
the protocol.
|
||||
|
||||
@ -16,38 +18,37 @@ the protocol.
|
||||
The general idea is that validators must continue voting following their
|
||||
last fork, unless the validator can construct a proof that their current
|
||||
fork may not reach finality. The way validators construct this proof is
|
||||
by collecting votes for all the forks excluding their own. If the set
|
||||
by collecting votes for all the forks excluding their own. If the set
|
||||
of valid votes represents over 1/3+X of the epoch stake weight, there
|
||||
may not be a way for the validators current fork to reach 2/3+ finality.
|
||||
The validator hashes the proof (creates a witness) and submits it with
|
||||
their vote for the alternative fork. But if 2/3+ votes for the same
|
||||
their vote for the alternative fork. But if 2/3+ votes for the same
|
||||
block, it is impossible for any of the validators to construct this proof,
|
||||
and therefore no validator is able to switch forks and this block will
|
||||
be eventually finalized.
|
||||
|
||||
|
||||
## Tradeoffs
|
||||
|
||||
The safety margin is 1/3+X, where X represents the minimum amount of stake
|
||||
that will be slashed in case the protocol is violated. The tradeoff is
|
||||
that liveness is now reduced by 2X in the worst case. If more than 1/3 -
|
||||
that liveness is now reduced by 2X in the worst case. If more than 1/3 -
|
||||
2X of the network is unavailable, the network may stall and will only
|
||||
resume finalizing blocks after the network recovers below 1/3 - 2X of
|
||||
failing nodes. So far, we haven’t observed a large unavailability hit
|
||||
failing nodes. So far, we haven’t observed a large unavailability hit
|
||||
on our mainnet, cosmos, or tezos. For our network, which is primarily
|
||||
composed of high availability systems, this seems unlikely. Currently,
|
||||
we have set the threshold percentage to 4.66%, which means that if 23.68%
|
||||
have failed the network may stop finalizing blocks. For our network,
|
||||
have failed the network may stop finalizing blocks. For our network,
|
||||
which is primarily composed of high availability systems a 23.68% drop
|
||||
in availability seems unlikely. 1:10^12 odds assuming five 4.7% staked
in availability seems unlikely. 1:10^12 odds assuming five 4.7% staked
|
||||
nodes with 0.995 of uptime.
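As a rough check on that figure (assuming those five 4.7%-staked nodes fail independently): the probability that all five are down at once is `(1 - 0.995)^5 = 0.005^5 ≈ 3 * 10^-12`, on the order of the quoted 1:10^12 odds.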
|
||||
|
||||
## Security
|
||||
|
||||
Long term average votes per slot has been 670,000,000 votes / 12,000,000
|
||||
slots, or 55 out of 64 voting validators. This includes missed blocks due
|
||||
slots, or 55 out of 64 voting validators. This includes missed blocks due
|
||||
to block producer failures. When a client sees 55/64, or ~86% confirming
|
||||
a block, it can expect that ~24% or `(86 - 66.666.. + 4.666..)%` of
|
||||
a block, it can expect that ~24% or `(86 - 66.666.. + 4.666..)%` of
|
||||
the network must be slashed for this block to fail full finalization.
|
||||
|
||||
## Why Solana?
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Optimistic Confirmation
|
||||
---
|
||||
title: Optimistic Confirmation
|
||||
---
|
||||
|
||||
## Primitives
|
||||
|
||||
@ -16,11 +18,11 @@ Given a vote `vote(X, S)`, let `S.last == vote.last` be the last slot in `S`.
|
||||
Now we define some "Optimistic Slashing" slashing conditions. The intuition
|
||||
for these is described below:
|
||||
|
||||
* `Intuition`: If a validator submits `vote(X, S)`, the same validator
|
||||
should not have voted on a different fork that "overlaps" this fork.
|
||||
More concretely, this validator should not have cast another vote
|
||||
`vote(X', S')` where the range `[X, S.last]` overlaps the range
|
||||
`[X', S'.last]`, `X != X'`, as shown below:
|
||||
- `Intuition`: If a validator submits `vote(X, S)`, the same validator
|
||||
should not have voted on a different fork that "overlaps" this fork.
|
||||
More concretely, this validator should not have cast another vote
|
||||
`vote(X', S')` where the range `[X, S.last]` overlaps the range
|
||||
`[X', S'.last]`, `X != X'`, as shown below:
|
||||
|
||||
```text
|
||||
+-------+
|
||||
@ -72,7 +74,7 @@ More concretely, this validator should not have cast another vote
|
||||
|
||||
In the diagram above, note that the vote for `S.last` must have been sent after
|
||||
the vote for `S'.last` (due to lockouts, the higher vote must have been sent
|
||||
later). Thus, the sequence of votes must have been: `X ... S'.last ... S.last`.
|
||||
later). Thus, the sequence of votes must have been: `X ... S'.last ... S.last`.
|
||||
This means after the vote on `S'.last`, the validator must have switched back
|
||||
to the other fork at some slot `s > S'.last > X`. Thus, the vote for `S.last`
|
||||
should have used `s` as the "reference" point, not `X`, because that was the
|
||||
@ -82,21 +84,21 @@ To enforce this, we define the "Optimistic Slashing" slashing conditions. Given
|
||||
any two distinct votes `vote(X, S)`and `vote(X', S')` by the same validator,
|
||||
the votes must satisfy:
|
||||
|
||||
* `X <= S.last`, `X' <= S'.last`
|
||||
* All `s` in `S` are ancestors/descendants of one another,
|
||||
all `s'` in `S'` are ancestors/descendants of one another,
|
||||
*
|
||||
* `X == X'` implies `S` is parent of `S'` or `S'` is a parent of `S`
|
||||
* `X' > X` implies `X' > S.last` and `S'.last > S.last`
|
||||
and for all `s` in `S`, `s + lockout(s) < X'`
|
||||
* `X > X'` implies `X > S'.last` and `S.last > S'.last`
|
||||
and for all `s` in `S'`, `s + lockout(s) < X`
|
||||
- `X <= S.last`, `X' <= S'.last`
|
||||
- All `s` in `S` are ancestors/descendants of one another,
|
||||
all `s'` in `S'` are ancestors/descendants of one another,
|
||||
-
|
||||
- `X == X'` implies `S` is parent of `S'` or `S'` is a parent of `S`
|
||||
- `X' > X` implies `X' > S.last` and `S'.last > S.last`
|
||||
and for all `s` in `S`, `s + lockout(s) < X'`
|
||||
- `X > X'` implies `X > S'.last` and `S.last > S'.last`
|
||||
and for all `s` in `S'`, `s + lockout(s) < X`
|
||||
|
||||
(The last two rules imply the ranges cannot overlap):
|
||||
Otherwise the validator is slashed.
|
||||
|
||||
`Range(vote)` - Given a vote `v = vote(X, S)`, define `Range(v)` to be the range
|
||||
of slots `[X, S.last]`.
|
||||
of slots `[X, S.last]`.
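A minimal sketch of the range-overlap rule implied by the conditions above (the types are illustrative; this is not the on-chain slashing logic):

```rust
/// Illustrative vote: reference slot X plus the last slot in S.
struct Vote {
    x: u64,    // reference slot X
    last: u64, // S.last
}

/// Range(v) = [X, S.last]
fn range(v: &Vote) -> (u64, u64) {
    (v.x, v.last)
}

/// Two distinct votes by the same validator with different reference slots
/// must have disjoint ranges; overlapping ranges are slashable.
fn is_slashable_overlap(a: &Vote, b: &Vote) -> bool {
    if a.x == b.x {
        return false; // same reference point: parent/child relation, checked elsewhere
    }
    let (a_lo, a_hi) = range(a);
    let (b_lo, b_hi) = range(b);
    a_lo <= b_hi && b_lo <= a_hi // [X, S.last] and [X', S'.last] overlap
}
```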
|
||||
|
||||
`SP(old_vote, new_vote)` - This is the "Switching Proof" for `old_vote`, the
|
||||
validator's latest vote. Such a proof is necessary anytime a validator switches
|
||||
@ -113,12 +115,11 @@ The proof is a list of elements `(validator_id, validator_vote(X, S))`, where:
|
||||
1. The sum of the stakes of all the validator id's `> 1/3`
|
||||
|
||||
2. For each `(validator_id, validator_vote(X, S))`, there exists some slot `s`
|
||||
in `S` where:
|
||||
* a.`s` is not a common ancestor of both `validator_vote.last` and
|
||||
`old_vote.last` and `new_vote.last`.
|
||||
* b. `s` is not a descendant of `validator_vote.last`.
|
||||
* c. `s + s.lockout() >= old_vote.last` (implies validator is still locked
|
||||
out on slot `s` at slot `old_vote.last`).
|
||||
in `S` where:
- a. `s` is not a common ancestor of both `validator_vote.last` and
  `old_vote.last` and `new_vote.last`.
- b. `s` is not a descendant of `validator_vote.last`.
- c. `s + s.lockout() >= old_vote.last` (implies validator is still locked
  out on slot `s` at slot `old_vote.last`).
|
||||
|
||||
Switching forks without a valid switching proof is slashable.
|
||||
|
||||
@ -140,12 +141,13 @@ A block `B` that has reached optimistic confirmation will not be reverted
|
||||
unless at least one validator is slashed.
|
||||
|
||||
## Proof:
|
||||
|
||||
Assume for the sake of contradiction, a block `B` has achieved
|
||||
`optimistic confirmation` at some slot `B + n` for some `n`, and:
|
||||
|
||||
* Another block `B'` that is not a parent or descendant of `B`
|
||||
was finalized.
|
||||
* No validators violated any slashing conditions.
|
||||
- Another block `B'` that is not a parent or descendant of `B`
|
||||
was finalized.
|
||||
- No validators violated any slashing conditions.
|
||||
|
||||
By the definition of `optimistic confirmation`, this means `> 2/3` of validators
|
||||
have each shown some vote `v` of the form `Vote(X, S)` where `X <= B <= v.last`.
|
||||
@ -164,12 +166,13 @@ Because we know from above `X` for all such votes made by `v` is unique, we know
|
||||
there is such a unique `maximal` vote.
|
||||
|
||||
### Lemma 1:
|
||||
|
||||
`Claim:` Given a vote `Vote(X, S)` made by a validator `V` in the
|
||||
`Optimistic Validators` set, and `S` contains a vote for a slot `s`
|
||||
for which:
|
||||
|
||||
* `s + s.lockout > B`,
|
||||
* `s` is not an ancestor or descendant of `B`,
|
||||
- `s + s.lockout > B`,
|
||||
- `s` is not an ancestor or descendant of `B`,
|
||||
|
||||
then `X > B`.
|
||||
|
||||
@ -264,6 +267,7 @@ Since none of these cases are valid, the assumption must have been invalid,
|
||||
and the claim is proven.
|
||||
|
||||
### Lemma 2:
|
||||
|
||||
Recall `B'` was the block finalized on a different fork than
|
||||
"optimistically" confirmed" block `B`.
|
||||
|
||||
@ -308,13 +312,13 @@ true that `B' > X`
|
||||
```
|
||||
|
||||
`Proof`: Let `Vote(X, S)` be a vote in the `Optimistic Votes` set. Then by
|
||||
definition, given the "optimistically confirmed" block `B`, `X <= B <= S.last`.
definition, given the "optimistically confirmed" block `B`, `X <= B <= S.last`.
|
||||
|
||||
Because `X` is a parent of `B`, and `B'` is not a parent or ancestor of `B`,
|
||||
then:
|
||||
|
||||
* `B' != X`
|
||||
* `B'` is not a parent of `X`
|
||||
- `B' != X`
|
||||
- `B'` is not a parent of `X`
|
||||
|
||||
Now consider if `B'` < `X`:
|
||||
|
||||
@ -324,16 +328,17 @@ and `B'` is not a parent of `X`, then the validator should not have been able
|
||||
to vote on the higher slot `X` that does not descend from `B'`.
|
||||
|
||||
### Proof of Safety:
|
||||
|
||||
We now aim to show at least one of the validators in the
|
||||
`Optimistic Validators` set violated a slashing rule.
|
||||
|
||||
First note that in order for `B'` to have been rooted, there must have been
|
||||
`> 2/3` stake that voted on `B'` or a descendant of `B'`. Given that the
|
||||
`> 2/3` stake that voted on `B'` or a descendant of `B'`. Given that the
|
||||
`Optimistic Validator` set also contains `> 2/3` of the staked validators,
|
||||
it follows that `> 1/3` of the staked validators:
|
||||
|
||||
* Rooted `B'` or a descendant of `B'`
|
||||
* Also submitted a vote `v` of the form `Vote(X, S)` where `X <= B <= v.last`.
|
||||
- Rooted `B'` or a descendant of `B'`
|
||||
- Also submitted a vote `v` of the form `Vote(X, S)` where `X <= B <= v.last`.
|
||||
|
||||
Let the `Delinquent` set be the set of validators that meet the above
|
||||
criteria.
|
||||
@ -341,10 +346,10 @@ criteria.
|
||||
By definition, in order to root `B'`, each validator `V` in `Delinquent`
|
||||
must have each made some "switching vote" of the form `Vote(X_v, S_v)` where:
|
||||
|
||||
* `S_v.last > B'`
|
||||
* `S_v.last` is a descendant of `B'`, so it can't be a descendant of `B`
|
||||
* Because `S_v.last` is not a descendant of `B`, then `X_v` cannot be a
|
||||
descendant or ancestor of `B`.
|
||||
- `S_v.last > B'`
|
||||
- `S_v.last` is a descendant of `B'`, so it can't be a descendant of `B`
|
||||
- Because `S_v.last` is not a descendant of `B`, then `X_v` cannot be a
|
||||
descendant or ancestor of `B`.
|
||||
|
||||
By definition, this delinquent validator `V` also made some vote `Vote(X, S)`
|
||||
in the `Optimistic Votes` where by definition of that set (optimistically
|
||||
@ -377,19 +382,20 @@ fact that the set of validators in the `Optimistic Voters` set consists of
|
||||
a switching proof),`Vote(X_w, S_w)` that was included in validator `V`'s
|
||||
switching proof for slot `X'`, where `S_w` contains a slot `s` such that:
|
||||
|
||||
* `s` is not a common ancestor of `S.last` and `X'`
|
||||
* `s` is not a descendant of `S.last`.
|
||||
* `s' + s'.lockout > S.last`
|
||||
- `s` is not a common ancestor of `S.last` and `X'`
|
||||
- `s` is not a descendant of `S.last`.
|
||||
- `s' + s'.lockout > S.last`
|
||||
|
||||
Because `B` is an ancestor of `S.last`, it is also true then:
|
||||
* `s` is not a common ancestor of `B` and `X'`
|
||||
* `s' + s'.lockout > B`
|
||||
|
||||
- `s` is not a common ancestor of `B` and `X'`
|
||||
- `s' + s'.lockout > B`
|
||||
|
||||
which was included in `V`'s switching proof.
|
||||
|
||||
Now because `W` is also a member of `Optimistic Voters`, then by the `Lemma 1`
|
||||
above, given a vote by `W`, `Vote(X_w, S_w)`, where `S_w` contains a vote for
|
||||
a slot `s` where `s + s.lockout > B`, and `s` is not an ancestor of `B`, then
|
||||
a slot `s` where `s + s.lockout > B`, and `s` is not an ancestor of `B`, then
|
||||
`X_w > B`.
|
||||
|
||||
Because validator `V` included vote `Vote(X_w, S_w)` in its proof of switching
|
||||
@ -399,4 +405,4 @@ for slot `X'`, then his implies validator `V'` submitted vote `Vote(X_w, S_w)`
|
||||
|
||||
But this is a contradiction because we chose `Vote(X', S')` to be the first vote
|
||||
made by any validator in the `Optimistic Voters` set where `X' > B` and `X'` is
|
||||
not a descendant of `B`.
|
||||
not a descendant of `B`.
|
||||
|
@ -1,9 +1,11 @@
|
||||
# Rust Clients
|
||||
---
|
||||
title: Rust Clients
|
||||
---
|
||||
|
||||
## Problem
|
||||
|
||||
High-level tests, such as bench-tps, are written in terms of the `Client`
|
||||
trait. When we execute these tests as part of the test suite, we use the
|
||||
trait. When we execute these tests as part of the test suite, we use the
|
||||
low-level `BankClient` implementation. When we need to run the same test
|
||||
against a cluster, we use the `ThinClient` implementation. The problem with
|
||||
that approach is that it means the trait will continually expand to include new
|
||||
@ -24,7 +26,7 @@ of `Client`. We would then add a new implementation of `Client`, called
|
||||
`ThinClient` currently resides.
|
||||
|
||||
After this reorg, any code needing a client would be written in terms of
|
||||
`ThinClient`. In unit tests, the functionality would be invoked with
|
||||
`ThinClient`. In unit tests, the functionality would be invoked with
|
||||
`ThinClient<BankClient>`, whereas `main()` functions, benchmarks and
|
||||
integration tests would invoke it with `ThinClient<ClusterClient>`.
|
||||
|
||||
@ -46,7 +48,7 @@ that the `Custom(String)` field should be changed to `Custom(Box<dyn Error>)`.
|
||||
`RpcClientTng` and an `AsyncClient` implementation
|
||||
4. Move all unit-tests from `BankClient` to `ThinClientTng<BankClient>`
|
||||
5. Add `ClusterClient`
|
||||
5. Move `ThinClient` users to `ThinClientTng<ClusterClient>`
|
||||
6. Delete `ThinClient` and rename `ThinClientTng` to `ThinClient`
|
||||
7. Move `RpcClient` users to new `ThinClient<ClusterClient>`
|
||||
8. Delete `RpcClient` and rename `RpcClientTng` to `RpcClient`
|
||||
6. Move `ThinClient` users to `ThinClientTng<ClusterClient>`
|
||||
7. Delete `ThinClient` and rename `ThinClientTng` to `ThinClient`
|
||||
8. Move `RpcClient` users to new `ThinClient<ClusterClient>`
|
||||
9. Delete `RpcClient` and rename `RpcClientTng` to `RpcClient`
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Simple Payment and State Verification
|
||||
---
|
||||
title: Simple Payment and State Verification
|
||||
---
|
||||
|
||||
It is often useful to allow low resourced clients to participate in a Solana
|
||||
cluster. Be this participation economic or contract execution, verification
|
||||
@ -67,11 +69,11 @@ sorted by signature.
|
||||
|
||||
A Block-Merkle is the Merkle Root of all the Entry-Merkles sequenced in the block.
|
||||
|
||||

|
||||

|
||||
|
||||
A Bank-Hash is the hash of the concatenation of the Block-Merkle and Accounts-Hash
|
||||
|
||||

|
||||

|
||||
|
||||
An Accounts-Hash is the hash of the concatenation of the state hashes of each
|
||||
account modified during the current slot.
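A hedged Rust sketch of these two hash relationships (the `sha2` crate and exact byte layout are assumptions for illustration; the real bank hash covers additional fields):

```rust
use sha2::{Digest, Sha256};

/// Hash of the concatenation of the state hashes of each account
/// modified during the current slot.
fn accounts_hash(modified_account_state_hashes: &[[u8; 32]]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    for h in modified_account_state_hashes {
        hasher.update(h);
    }
    hasher.finalize().into()
}

/// Hash of the concatenation of the Block-Merkle and the Accounts-Hash.
fn bank_hash(block_merkle: &[u8; 32], accounts_hash: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(block_merkle);
    hasher.update(accounts_hash);
    hasher.finalize().into()
}
```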
|
||||
@ -86,7 +88,7 @@ code, but a single status bit to indicate the transaction's success.
|
||||
### Account State Verification
|
||||
|
||||
An account's state (balance or other data) can be verified by submitting a
|
||||
transaction with a ___TBD___ Instruction to the cluster. The client can then
|
||||
transaction with a **_TBD_** Instruction to the cluster. The client can then
|
||||
use a [Transaction Inclusion Proof](#transaction-inclusion-proof) to verify
|
||||
whether the cluster agrees that the account has reached the expected state.
|
||||
|
||||
@ -102,13 +104,13 @@ of consecutive validation votes.
|
||||
|
||||
It contains the following:
|
||||
|
||||
* Transaction -> Entry-Merkle -> Block-Merkle -> Bank-Hash
|
||||
- Transaction -> Entry-Merkle -> Block-Merkle -> Bank-Hash
|
||||
|
||||
And a vector of PoH entries:
|
||||
|
||||
* Validator vote entries
|
||||
* Ticks
|
||||
* Light entries
|
||||
- Validator vote entries
|
||||
- Ticks
|
||||
- Light entries
|
||||
|
||||
```text
|
||||
/// This Entry definition skips over the transactions and only contains the
|
||||
@ -148,8 +150,8 @@ generated state.
|
||||
|
||||
For example:
|
||||
|
||||
* Epoch validator accounts and their stakes and weights.
|
||||
* Computed fee rates
|
||||
- Epoch validator accounts and their stakes and weights.
|
||||
- Computed fee rates
|
||||
|
||||
These values should have an entry in the Bank-Hash. They should live under known
|
||||
accounts, and therefore have an index into the hash concatenation.
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Slashing rules
|
||||
---
|
||||
title: Slashing rules
|
||||
---
|
||||
|
||||
Unlike Proof of Work \(PoW\) where off-chain capital expenses are already
|
||||
deployed at the time of block construction/voting, PoS systems require
|
||||
@ -28,12 +30,12 @@ In addition to the functional form lockout described above, early
|
||||
implementation may be a numerical approximation based on a First In, First Out
|
||||
\(FIFO\) data structure and the following logic:
|
||||
|
||||
* FIFO queue holding 32 votes per active validator
|
||||
* new votes are pushed on top of queue \(`push_front`\)
|
||||
* expired votes are popped off top \(`pop_front`\)
|
||||
* as votes are pushed into the queue, the lockout of each queued vote doubles
|
||||
* votes are removed from back of queue if `queue.len() > 32`
|
||||
* the earliest and latest height that has been removed from the back of the
|
||||
- FIFO queue holding 32 votes per active validator
|
||||
- new votes are pushed on top of queue \(`push_front`\)
|
||||
- expired votes are popped off top \(`pop_front`\)
|
||||
- as votes are pushed into the queue, the lockout of each queued vote doubles
|
||||
- votes are removed from back of queue if `queue.len() > 32`
|
||||
- the earliest and latest height that has been removed from the back of the
|
||||
queue should be stored
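A minimal Rust sketch of that FIFO approximation (struct and constant names are illustrative; the real Tower implementation differs in detail):

```rust
use std::collections::VecDeque;

const MAX_LOCKOUT_HISTORY: usize = 32;

/// Illustrative queued vote with a doubling lockout.
#[derive(Clone, Copy)]
struct Lockout {
    slot: u64,
    confirmation_count: u32, // lockout = 2^confirmation_count slots
}

impl Lockout {
    fn lockout(&self) -> u64 {
        1u64 << self.confirmation_count
    }
    fn expiration_slot(&self) -> u64 {
        self.slot + self.lockout()
    }
}

/// Push a new vote: pop expired votes off the top, double the lockout of the
/// remaining queued votes, push the new vote, and drop votes off the back
/// once the queue exceeds 32 entries.
fn record_vote(queue: &mut VecDeque<Lockout>, slot: u64) {
    while queue.front().map_or(false, |v| v.expiration_slot() < slot) {
        queue.pop_front(); // expired votes are popped off the top
    }
    for vote in queue.iter_mut() {
        vote.confirmation_count += 1; // lockout of each queued vote doubles
    }
    queue.push_front(Lockout { slot, confirmation_count: 1 });
    queue.truncate(MAX_LOCKOUT_HISTORY); // removed from back if len > 32
}
```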
|
||||
|
||||
It is likely that a reward will be offered as a % of the slashed amount to any
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Snapshot Verification
|
||||
---
|
||||
title: Snapshot Verification
|
||||
---
|
||||
|
||||
## Problem
|
||||
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Tick Verification
|
||||
---
|
||||
title: Tick Verification
|
||||
---
|
||||
|
||||
This design describes the criteria and validation of ticks in a slot. It also describes
|
||||
error handling and slashing conditions encompassing how the system handles
|
||||
@ -7,8 +9,8 @@ transmissions that do not meet these requirements.
|
||||
# Slot structure
|
||||
|
||||
Each slot must contain an expected `ticks_per_slot` number of ticks. The last
|
||||
shred in a slot must contain only the entirety of the last tick, and nothing
|
||||
else. The leader must also mark this shred containing the last tick with the
|
||||
shred in a slot must contain only the entirety of the last tick, and nothing
|
||||
else. The leader must also mark this shred containing the last tick with the
|
||||
`LAST_SHRED_IN_SLOT` flag. Between ticks, there must be `hashes_per_tick`
|
||||
number of hashes.
|
||||
|
||||
@ -16,54 +18,53 @@ number of hashes.
|
||||
|
||||
Malicious transmissions `T` are handled in two ways:
|
||||
|
||||
1) If a leader can generate some erroneous transmission `T` and also some
|
||||
alternate transmission `T'` for the same slot without violating any slashing
|
||||
rules for duplicate transmissions (for instance if `T'` is a subset of `T`),
|
||||
then the cluster must handle the possibility of both transmissions being live.
|
||||
1. If a leader can generate some erroneous transmission `T` and also some
|
||||
alternate transmission `T'` for the same slot without violating any slashing
|
||||
rules for duplicate transmissions (for instance if `T'` is a subset of `T`),
|
||||
then the cluster must handle the possibility of both transmissions being live.
|
||||
|
||||
Thus this means we cannot mark the erroneous transmission `T` as dead because
|
||||
the cluster may have reached consensus on `T'`. These cases necessitate a
|
||||
the cluster may have reached consensus on `T'`. These cases necessitate a
|
||||
slashing proof to punish this bad behavior.
|
||||
|
||||
2) Otherwise, we can simply mark the slot as dead and not playable. A slashing
|
||||
proof may or may not be necessary depending on feasibility.
|
||||
2. Otherwise, we can simply mark the slot as dead and not playable. A slashing
|
||||
proof may or may not be necessary depending on feasibility.
|
||||
|
||||
# Blockstore receiving shreds
|
||||
|
||||
When blockstore receives a new shred `s`, there are two cases:
|
||||
|
||||
1) `s` is marked as `LAST_SHRED_IN_SLOT`, then check if there exists a shred
|
||||
`s'` in blockstore for that slot where `s'.index > s.index` If so, together `s`
|
||||
and `s'` constitute a slashing proof.
|
||||
1. `s` is marked as `LAST_SHRED_IN_SLOT`, then check if there exists a shred
|
||||
`s'` in blockstore for that slot where `s'.index > s.index` If so, together `s`
|
||||
and `s'` constitute a slashing proof.
|
||||
|
||||
2) Blockstore has already received a shred `s'` marked as `LAST_SHRED_IN_SLOT`
|
||||
with index `i`. If `s.index > i`, then together `s` and `s'`constitute a
|
||||
slashing proof. In this case, blockstore will also not insert `s`.
|
||||
|
||||
3) Duplicate shreds for the same index are ignored. Non-duplicate shreds for
|
||||
the same index are a slashable condition. Details for this case are covered
|
||||
in the `Leader Duplicate Block Slashing` section.
|
||||
2. Blockstore has already received a shred `s'` marked as `LAST_SHRED_IN_SLOT`
|
||||
with index `i`. If `s.index > i`, then together `s` and `s'`constitute a
|
||||
slashing proof. In this case, blockstore will also not insert `s`.
|
||||
|
||||
3. Duplicate shreds for the same index are ignored. Non-duplicate shreds for
|
||||
the same index are a slashable condition. Details for this case are covered
|
||||
in the `Leader Duplicate Block Slashing` section.
|
||||
|
||||
# Replaying and validating ticks
|
||||
|
||||
1) Replay stage replays entries from blockstore, keeping track of the number of
|
||||
ticks it has seen per slot, and verifying there are `hashes_per_tick` number of
|
||||
hashes between ticks. After the tick from this last shred has been played,
|
||||
replay stage then checks the total number of ticks.
|
||||
1. Replay stage replays entries from blockstore, keeping track of the number of
|
||||
ticks it has seen per slot, and verifying there are `hashes_per_tick` number of
|
||||
hashes between ticks. After the tick from this last shred has been played,
|
||||
replay stage then checks the total number of ticks.
|
||||
|
||||
Failure scenario 1: If ever there are two consecutive ticks between which the
|
||||
number of hashes is `!= hashes_per_tick`, mark this slot as dead.
|
||||
|
||||
Failure scenario 2: If the number of ticks != `ticks_per_slot`, mark slot as
|
||||
dead.
|
||||
Failure scenario 2: If the number of ticks != `ticks_per_slot`, mark slot as
|
||||
dead.
|
||||
|
||||
Failure scenario 3: If the number of ticks reaches `ticks_per_slot`, but we still
|
||||
haven't seen the `LAST_SHRED_IN_SLOT`, mark this slot as dead.
|
||||
haven't seen the `LAST_SHRED_IN_SLOT`, mark this slot as dead.
|
||||
|
||||
2) When ReplayStage reaches a shred marked as the last shred, it checks if this
|
||||
last shred is a tick.
|
||||
2. When ReplayStage reaches a shred marked as the last shred, it checks if this
|
||||
last shred is a tick.
|
||||
|
||||
Failure scenario: If the signed shred with the `LAST_SHRED_IN_SLOT` flag cannot
|
||||
be deserialized into a tick (either fails to deserialize or deserializes into
|
||||
an entry), mark this slot as dead.
|
||||
be deserialized into a tick (either fails to deserialize or deserializes into
|
||||
an entry), mark this slot as dead.
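A hedged sketch of the per-slot tick validation described in the failure scenarios above (the `Entry` shape is a simplified stand-in for the real replay-stage types, and "return false" stands in for marking the slot dead):

```rust
/// Simplified entry: number of hashes since the previous entry, and whether
/// this entry is a tick.
struct Entry {
    num_hashes: u64,
    is_tick: bool,
}

fn slot_is_valid(entries: &[Entry], ticks_per_slot: u64, hashes_per_tick: u64) -> bool {
    let mut ticks_seen = 0;
    let mut hashes_since_last_tick = 0;
    for entry in entries {
        hashes_since_last_tick += entry.num_hashes;
        if entry.is_tick {
            // Failure scenario 1: wrong number of hashes between consecutive ticks.
            if hashes_since_last_tick != hashes_per_tick {
                return false; // mark slot as dead
            }
            hashes_since_last_tick = 0;
            ticks_seen += 1;
        }
    }
    // Failure scenario 2: total tick count must match ticks_per_slot.
    ticks_seen == ticks_per_slot
}
```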
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Validator
|
||||
---
|
||||
title: Validator
|
||||
---
|
||||
|
||||
## History
|
||||
|
||||
@ -36,17 +38,17 @@ We unwrap the many abstraction layers and build a single pipeline that can
|
||||
toggle leader mode on whenever the validator's ID shows up in the leader
|
||||
schedule.
|
||||
|
||||

|
||||

|
||||
|
||||
## Notable changes
|
||||
|
||||
* Hoist FetchStage and BroadcastStage out of TPU
|
||||
* BankForks renamed to Banktree
|
||||
* TPU moves to new socket-free crate called solana-tpu.
|
||||
* TPU's BankingStage absorbs ReplayStage
|
||||
* TVU goes away
|
||||
* New RepairStage absorbs Shred Fetch Stage and repair requests
|
||||
* JSON RPC Service is optional - used for debugging. It should instead be part
|
||||
- Hoist FetchStage and BroadcastStage out of TPU
|
||||
- BankForks renamed to Banktree
|
||||
- TPU moves to new socket-free crate called solana-tpu.
|
||||
- TPU's BankingStage absorbs ReplayStage
|
||||
- TVU goes away
|
||||
- New RepairStage absorbs Shred Fetch Stage and repair requests
|
||||
- JSON RPC Service is optional - used for debugging. It should instead be part
|
||||
of a separate `solana-blockstreamer` executable.
|
||||
* New MulticastStage absorbs retransmit part of RetransmitStage
|
||||
* MulticastStage downstream of Blockstore
|
||||
- New MulticastStage absorbs retransmit part of RetransmitStage
|
||||
- MulticastStage downstream of Blockstore
|
||||
|
@ -1,4 +1,6 @@
|
||||
# Secure Vote Signing
|
||||
---
|
||||
title: Secure Vote Signing
|
||||
---
|
||||
|
||||
## Secure Vote Signing
|
||||
|
||||
@ -11,21 +13,25 @@ The following sections outline how this architecture would work:
|
||||
### Message Flow
|
||||
|
||||
1. The node initializes the enclave at startup
|
||||
* The enclave generates an asymmetric key and returns the public key to the
|
||||
|
||||
- The enclave generates an asymmetric key and returns the public key to the
|
||||
|
||||
node
|
||||
|
||||
* The keypair is ephemeral. A new keypair is generated on node bootup. A
|
||||
- The keypair is ephemeral. A new keypair is generated on node bootup. A
|
||||
|
||||
new keypair might also be generated at runtime based on some to be determined
|
||||
|
||||
criteria.
|
||||
|
||||
* The enclave returns its attestation report to the node
|
||||
- The enclave returns its attestation report to the node
|
||||
|
||||
2. The node performs attestation of the enclave \(e.g using Intel's IAS APIs\)
|
||||
* The node ensures that the Secure Enclave is running on a TPM and is
|
||||
|
||||
- The node ensures that the Secure Enclave is running on a TPM and is
|
||||
|
||||
signed by a trusted party
|
||||
|
||||
3. The stakeholder of the node grants ephemeral key permission to use its stake.
|
||||
|
||||
This process is to be determined.
|
||||
@ -34,13 +40,13 @@ The following sections outline how this architecture would work:
|
||||
|
||||
using its interface to sign transactions and other data.
|
||||
|
||||
* In case of vote signing, the node needs to verify the PoH. The PoH
|
||||
- In case of vote signing, the node needs to verify the PoH. The PoH
|
||||
|
||||
verification is an integral part of signing. The enclave would be
|
||||
|
||||
presented with some verifiable data to check before signing the vote.
|
||||
|
||||
* The process of generating the verifiable data in untrusted space is to be determined
|
||||
- The process of generating the verifiable data in untrusted space is to be determined
|
||||
|
||||
### PoH Verification
|
||||
|
||||
@ -54,7 +60,7 @@ The following sections outline how this architecture would work:
|
||||
|
||||
a fork that does not contain `X` increases\).
|
||||
|
||||
* The lockout period for `X+y` is still `N` until the node votes again.
|
||||
- The lockout period for `X+y` is still `N` until the node votes again.
|
||||
|
||||
3. The lockout period increment is capped \(e.g. factor `F` applies maximum 32
|
||||
|
||||
@ -64,21 +70,21 @@ The following sections outline how this architecture would work:
|
||||
|
||||
means
|
||||
|
||||
* Enclave is initialized with `N`, `F` and `Factor cap`
|
||||
* Enclave stores `Factor cap` number of entry IDs on which the node had
|
||||
- Enclave is initialized with `N`, `F` and `Factor cap`
|
||||
- Enclave stores `Factor cap` number of entry IDs on which the node had
|
||||
|
||||
previously voted
|
||||
|
||||
* The sign request contains the entry ID for the new vote
|
||||
* Enclave verifies that new vote's entry ID is on the correct fork
|
||||
- The sign request contains the entry ID for the new vote
|
||||
- Enclave verifies that new vote's entry ID is on the correct fork
|
||||
|
||||
\(following the rules \#1 and \#2 above\)
|
||||
|
||||
### Ancestor Verification
|
||||
|
||||
This is an alternate, albeit less certain, approach to verifying the voting fork. 1. The validator maintains an active set of nodes in the cluster 2. It observes the votes from the active set in the last voting period 3. It stores the ancestor/last\_tick at which each node voted 4. It sends new vote request to vote-signing service
This is an alternate, albeit less certain, approach to verifying the voting fork. 1. The validator maintains an active set of nodes in the cluster 2. It observes the votes from the active set in the last voting period 3. It stores the ancestor/last_tick at which each node voted 4. It sends new vote request to vote-signing service
|
||||
|
||||
* It includes previous votes from nodes in the active set, and their
|
||||
- It includes previous votes from nodes in the active set, and their
|
||||
|
||||
corresponding ancestors
|
||||
|
||||
@ -86,8 +92,8 @@ This is alternate, albeit, less certain approach to verifying voting fork. 1. Th
|
||||
|
||||
and the vote ancestor matches with majority of the nodes
|
||||
|
||||
* It signs the new vote if the check is successful
|
||||
* It asserts \(raises an alarm of some sort\) if the check is unsuccessful
|
||||
- It signs the new vote if the check is successful
|
||||
- It asserts \(raises an alarm of some sort\) if the check is unsuccessful
|
||||
|
||||
The premise is that the validator can be spoofed at most once to vote on incorrect data. If someone hijacks the validator and submits a vote request for bogus data, that vote will not be included in the PoH \(as it'll be rejected by the cluster\). The next time the validator sends a request to sign the vote, the signing service will detect that validator's last vote is missing \(as part of
|
||||
|
||||
@ -108,4 +114,3 @@ A staking client should be configurable to prevent voting on inactive forks. Thi
|
||||
enclave.
|
||||
|
||||
2. Need infrastructure for granting stake to an ephemeral key.
|
||||
|
||||
|