Move from gitbook to docusaurus, build docs in Travis CI (#10970)
* fix: ignore unknown fields in more RPC responses
* Remove mdbook infrastructure
* Delete gitattributes and other theme-related items; move all docs to /docs folder to support Docusaurus
* all docs need to be moved to /docs; can be changed in the future
* Add Docusaurus infrastructure: initialize docusaurus repo
* Remove trailing whitespace, add support for eslint
* Change Docusaurus configuration to support `src`. No need to rename the folder! Change a setting and we're all good to go.
* Fixing rebase items
* Remove unnecessary markdown file, fix typo
* Some fonts are hard to read. Others, not so much. Rubik, you've been sidelined. Roboto, into the limelight!
* As much as we all love tutorials, I think we all can navigate around a markdown file. Say goodbye, `mdx.md`.
* Setup deployment infrastructure
* Move docs job from buildkite to travis
* Fix travis config
* Add vercel token to travis config
* Only deploy docs after merge
* Docker rust env
* Revert "Docker rust env" (reverts commit f84bc208e807aab1c0d97c7588bbfada1fedfa7c)
* Build CLI usage from docker
* Pacify shellcheck
* Run job on PR and new commits for publication
* Update README
* Fix svg image building
* shellcheck

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Ryan Shea <rmshea@users.noreply.github.com>
Co-authored-by: publish-docs.sh <maintainers@solana.com>
@@ -1,6 +1,8 @@
---
title: Anatomy of a Validator
---

![Validator block diagrams](/img/validator.svg)

## Pipelining
@@ -1,4 +1,6 @@
---
title: Blockstore
---

After a block reaches finality, all blocks from that one on down to the genesis block form a linear chain with the familiar name blockchain. Until that point, however, the validator must maintain all potentially valid chains, called _forks_. The process by which forks naturally form as a result of leader rotation is described in [fork generation](../cluster/fork-generation.md). The _blockstore_ data structure described here is how a validator copes with those forks until blocks are finalized.
@@ -40,21 +42,23 @@ Repair requests for recent shreds are served out of RAM or recent files and out
1. Entries in the Blockstore are stored as key-value pairs, where the key is the concatenated slot index and shred index for an entry, and the value is the entry data. Note shred indexes are zero-based for each slot (i.e. they're slot-relative).
2. The Blockstore maintains metadata for each slot, in the `SlotMeta` struct containing:
   - `slot_index` - The index of this slot
   - `num_blocks` - The number of blocks in the slot (used for chaining to a previous slot)
   - `consumed` - The highest shred index `n`, such that for all `m < n`, there exists a shred in this slot with shred index equal to `m` (i.e. the highest consecutive shred index).
   - `received` - The highest received shred index for the slot
   - `next_slots` - A list of future slots this slot could chain to. Used when rebuilding the ledger to find possible fork points.
   - `last_index` - The index of the shred that is flagged as the last shred for this slot. This flag on a shred will be set by the leader for a slot when they are transmitting the last shred for a slot.
   - `is_rooted` - True iff every block from 0...slot forms a full sequence without any holes. We can derive is_rooted for each slot with the following rules. Let slot(n) be the slot with index `n`, and slot(n).is_full() is true if the slot with index `n` has all the ticks expected for that slot. Let is_rooted(n) be the statement that "the slot(n).is_rooted is true". Then:

     is_rooted(0)
     is_rooted(n+1) iff (is_rooted(n) and slot(n).is_full())

3. Chaining - When a shred for a new slot `x` arrives, we check the number of blocks (`num_blocks`) for that new slot (this information is encoded in the shred). We then know that this new slot chains to slot `x - num_blocks`.
4. Subscriptions - The Blockstore records a set of slots that have been "subscribed" to. This means entries that chain to these slots will be sent on the Blockstore channel for consumption by the ReplayStage. See the `Blockstore APIs` for details.
5. Update notifications - The Blockstore notifies listeners when slot(n).is_rooted is flipped from false to true for any `n`.
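The bookkeeping in the items above can be sketched compactly. This is an illustrative, stdlib-only sketch with hypothetical names (the real `solana-ledger` `SlotMeta` and column layout differ): it shows the concatenated key from item 1, the `consumed` invariant, and the `is_rooted` recurrence.

```rust
use std::collections::HashSet;

// Illustrative sketch of the per-slot metadata described above; the real
// solana-ledger SlotMeta differs in detail.
#[derive(Default)]
struct SlotMeta {
    slot_index: u64,
    num_blocks: u64,         // chaining: this slot chains to slot_index - num_blocks
    consumed: u64,           // highest consecutive shred index
    received: u64,           // highest received shred index
    next_slots: Vec<u64>,    // used when rebuilding the ledger to find fork points
    last_index: Option<u64>, // shred flagged by the leader as last for this slot
    is_rooted: bool,
}

// Item 1: the key is the concatenated slot index and shred index.
fn shred_key(slot: u64, index: u64) -> [u8; 16] {
    let mut key = [0u8; 16];
    key[..8].copy_from_slice(&slot.to_be_bytes());
    key[8..].copy_from_slice(&index.to_be_bytes());
    key
}

// `consumed`: highest n such that every m < n is present (slot-relative indexes).
fn consumed(present: &HashSet<u64>) -> u64 {
    let mut n = 0;
    while present.contains(&n) {
        n += 1;
    }
    n
}

// is_rooted(0); is_rooted(n+1) iff (is_rooted(n) and slot(n).is_full()).
// Assumes `slots` is ordered by slot index starting at 0.
fn derive_roots(slots: &mut [SlotMeta], is_full: impl Fn(u64) -> bool) {
    let mut rooted = true; // slot 0 is rooted by definition
    for meta in slots.iter_mut() {
        meta.is_rooted = rooted;
        rooted = rooted && is_full(meta.slot_index);
    }
}

fn main() {
    assert_eq!(shred_key(1, 2)[7], 1);
    assert_eq!(shred_key(1, 2)[15], 2);

    let present: HashSet<u64> = [0, 1, 2, 4].into_iter().collect();
    assert_eq!(consumed(&present), 3); // index 3 is missing

    let mut slots: Vec<SlotMeta> = (0..3)
        .map(|i| SlotMeta { slot_index: i, ..Default::default() })
        .collect();
    derive_roots(&mut slots, |n| n == 0); // only slot 0 is full
    let roots: Vec<bool> = slots.iter().map(|m| m.is_rooted).collect();
    assert_eq!(roots, vec![true, true, false]);
}
```

Note how `derive_roots` walks slots in order, mirroring the recurrence: a slot is rooted only if every earlier slot is full.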
## Blockstore APIs
@@ -1,4 +1,6 @@
---
title: Gossip Service
---

The Gossip Service acts as a gateway to nodes in the control plane. Validators use the service to ensure information is available to all other nodes in a cluster. The service broadcasts information using a gossip protocol.
@@ -24,15 +26,17 @@ Upon receiving a push message, a node examines the message for:
1. Duplication: if the message has been seen before, the node drops the message and may respond with `PushMessagePrune` if forwarded from a low-staked node
2. New data: if the message is new to the node
   - Stores the new information with an updated version in its cluster info and purges any previous older value
   - Stores the message in `pushed_once` (used for detecting duplicates, purged after `PUSH_MSG_TIMEOUT * 5` ms)
   - Retransmits the message to its own push peers
3. Expiration: nodes drop push messages that are older than `PUSH_MSG_TIMEOUT`
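A minimal sketch of these three checks, with hypothetical types (the real solana-gossip implementation is considerably richer), assuming a 30-second `PUSH_MSG_TIMEOUT` for illustration:

```rust
use std::collections::HashSet;

// Hypothetical sketch of the push-message checks above; names and the
// timeout value are illustrative, not the real solana-gossip code.
const PUSH_MSG_TIMEOUT_MS: u64 = 30_000; // assumed value for illustration

#[derive(Clone, Hash, PartialEq, Eq)]
struct MsgId {
    origin: u64,       // stand-in for the originator's pubkey
    wallclock_ms: u64, // wallclock signed by the originator
}

enum PushAction {
    DropExpired,        // 3. older than PUSH_MSG_TIMEOUT
    DropDuplicate,      // 1. seen before (may also trigger PushMessagePrune)
    StoreAndRetransmit, // 2. new data: store, purge older value, retransmit
}

fn handle_push(pushed_once: &mut HashSet<MsgId>, id: MsgId, now_ms: u64) -> PushAction {
    if now_ms.saturating_sub(id.wallclock_ms) > PUSH_MSG_TIMEOUT_MS {
        return PushAction::DropExpired;
    }
    // `pushed_once` detects duplicates; entries are purged after
    // PUSH_MSG_TIMEOUT * 5 ms (purging elided in this sketch).
    if !pushed_once.insert(id) {
        return PushAction::DropDuplicate;
    }
    PushAction::StoreAndRetransmit
}

fn main() {
    let mut pushed_once = HashSet::new();
    let id = MsgId { origin: 1, wallclock_ms: 1_000 };
    assert!(matches!(handle_push(&mut pushed_once, id.clone(), 2_000), PushAction::StoreAndRetransmit));
    assert!(matches!(handle_push(&mut pushed_once, id, 2_000), PushAction::DropDuplicate));
    let stale = MsgId { origin: 2, wallclock_ms: 0 };
    assert!(matches!(handle_push(&mut pushed_once, stale, 60_000), PushAction::DropExpired));
}
```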
### Push Peers, Prune Message
@@ -59,8 +63,8 @@ An eclipse attack is an attempt to take over the set of node connections with ad
This is relevant to our implementation in the following ways.

- Pull messages select a random node from the network. An eclipse attack on _pull_ would require an attacker to influence the random selection in such a way that only adversarial nodes are selected for pull.
- Push messages maintain an active set of nodes and select a random fanout for every push message. An eclipse attack on _push_ would influence the active set selection, or the random fanout selection.
### Time and Stake based weights
@@ -86,6 +90,5 @@ The active push protocol described here is based on
[Plum Tree](https://haslab.uminho.pt/sites/default/files/jop/files/lpr07a.pdf).
The main differences are:

- Push messages have a wallclock that is signed by the originator. Once the wallclock expires the message is dropped. A hop limit is difficult to implement in an adversarial setting.
- Lazy Push is not implemented because it's not obvious how to prevent an adversary from forging the message fingerprint. A naive approach would allow an adversary to be prioritized for pull based on their input.
@@ -1,4 +1,6 @@
---
title: The Runtime
---

## The Runtime
@@ -20,7 +22,7 @@ Transactions are batched and processed in a pipeline. The TPU and TVU follow a s
The TVU runtime ensures that PoH verification occurs before the runtime processes any transactions.

![Runtime pipeline](/img/runtime.svg)

At the _execute_ stage, the loaded accounts have no data dependencies, so all the programs can be executed in parallel.
@@ -37,13 +39,13 @@ Execution of the program involves mapping the program's public key to an entrypo
The interface is best described by the `Instruction::data` that the user encodes.

- `CreateAccount` - This allows the user to create an account with an allocated data array and assign it to a Program.
- `CreateAccountWithSeed` - Same as `CreateAccount`, but the new account's address is derived from
  - the funding account's pubkey,
  - a mnemonic string (seed), and
  - the pubkey of the Program
- `Assign` - Allows the user to assign an existing account to a program.
- `Transfer` - Transfers lamports between accounts.
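The interface above maps naturally onto an enum. The following is a hypothetical sketch (the real `SystemInstruction` in solana-sdk has more variants and fields and is bincode-serialized; the one-byte tag here is a stand-in for that encoding):

```rust
// Hypothetical sketch of the instruction interface described above; not the
// real solana-sdk definition.
type Pubkey = [u8; 32];

#[allow(dead_code)]
enum SystemInstruction {
    /// Create an account with an allocated data array and assign it to a program.
    CreateAccount { lamports: u64, space: u64, owner: Pubkey },
    /// Like CreateAccount, but the address is derived from base pubkey, seed, and owner.
    CreateAccountWithSeed { base: Pubkey, seed: String, lamports: u64, space: u64, owner: Pubkey },
    /// Assign an existing account to a program.
    Assign { owner: Pubkey },
    /// Move lamports between accounts.
    Transfer { lamports: u64 },
}

// Stand-in for encoding a variant into Instruction::data (one-byte tag only;
// the real encoding is bincode).
fn encode(ix: &SystemInstruction) -> Vec<u8> {
    let tag: u8 = match ix {
        SystemInstruction::CreateAccount { .. } => 0,
        SystemInstruction::CreateAccountWithSeed { .. } => 1,
        SystemInstruction::Assign { .. } => 2,
        SystemInstruction::Transfer { .. } => 3,
    };
    vec![tag]
}

fn main() {
    let ix = SystemInstruction::Transfer { lamports: 42 };
    assert_eq!(encode(&ix), vec![3]);
    let assign = SystemInstruction::Assign { owner: [0u8; 32] };
    assert_eq!(encode(&assign), vec![2]);
}
```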
### Program State Security
@@ -53,15 +55,15 @@ To pass messages between programs, the receiving program must accept the message
### Notes
- There is no dynamic memory allocation. Clients need to use `CreateAccount` instructions to create memory before passing it to another program. This instruction can be composed into a single transaction with the call to the program itself.
- `CreateAccount` and `Assign` guarantee that when an account is assigned to the program, the Account's data is zero initialized.
- Transactions that assign an account to a program or allocate space must be signed by the Account address' private key unless the Account is being created by `CreateAccountWithSeed`, in which case there is no corresponding private key for the account's address/pubkey.
- Once assigned to a program, an Account cannot be reassigned.
- Runtime guarantees that a program's code is the only code that can modify Account data that the Account is assigned to.
- Runtime guarantees that the program can only spend lamports that are in accounts that are assigned to it.
- Runtime guarantees the balances belonging to accounts are balanced before and after the transaction.
- Runtime guarantees that instructions all executed successfully when a transaction is committed.
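The balance guarantee above amounts to lamport conservation across the accounts a transaction touches; a minimal illustrative check (names hypothetical, not the runtime's actual code):

```rust
// Illustrative only: the runtime's balance guarantee reduces to lamport
// conservation over the pre- and post-transaction account balances.
fn lamports_balanced(pre: &[u64], post: &[u64]) -> bool {
    pre.iter().sum::<u64>() == post.iter().sum::<u64>()
}

fn main() {
    // A 2-lamport transfer conserves the total...
    assert!(lamports_balanced(&[5, 0], &[3, 2]));
    // ...while a transaction that minted lamports would be rejected.
    assert!(!lamports_balanced(&[5, 0], &[5, 2]));
}
```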
## Future Work
- [Continuations and Signals for long running Transactions](https://github.com/solana-labs/solana/issues/1485)
@@ -1,3 +1,5 @@
---
title: TPU
---

![TPU Block Diagram](/img/tpu.svg)
@@ -1,7 +1,9 @@
---
title: TVU
---

![TVU Block Diagram](/img/tvu.svg)

## Retransmit Stage

![Retransmit Block Diagram](/img/retransmit_stage.svg)