Compare commits


8 Commits

670 changed files with 20502 additions and 38774 deletions

View File

@ -1,4 +1,4 @@
root: ./docs/src
root: ./book/src
structure:
readme: introduction.md

6
.gitignore vendored
View File

@ -1,6 +1,6 @@
/docs/html/
/docs/src/tests.ok
/docs/src/.gitbook/assets/*.svg
/book/html/
/book/src/tests.ok
/book/src/.gitbook/assets/*.svg
/farf/
/solana-release/
/solana-release.tar.bz2

View File

@ -19,6 +19,22 @@ pull_request_rules:
label:
add:
- automerge
- name: v0.21 backport
conditions:
- base=master
- label=v0.21
actions:
backport:
branches:
- v0.21
- name: v0.22 backport
conditions:
- base=master
- label=v0.22
actions:
backport:
branches:
- v0.22
- name: v0.23 backport
conditions:
- base=master
@ -27,27 +43,3 @@ pull_request_rules:
backport:
branches:
- v0.23
- name: v1.0 backport
conditions:
- base=master
- label=v1.0
actions:
backport:
branches:
- v1.0
- name: v1.1 backport
conditions:
- base=master
- label=v1.1
actions:
backport:
branches:
- v1.1
- name: v1.2 backport
conditions:
- base=master
- label=v1.2
actions:
backport:
branches:
- v1.2

View File

@ -45,7 +45,7 @@ $ git pull --rebase upstream master
If there are no functional changes, PRs can be very large and that's no
problem. If, however, your changes are making meaningful changes or additions,
then about 1.0.1 lines of changes is about the most you should ask a Solana
then about 1,000 lines of changes is about the most you should ask a Solana
maintainer to review.
### Should I send small PRs as I develop large, new components?
@ -224,20 +224,21 @@ Inventing new terms is allowed, but should only be done when the term is widely
used and understood. Avoid introducing new 3-letter terms, which can be
confused with 3-letter acronyms.
[Terms currently in use](docs/src/terminology.md)
[Terms currently in use](book/src/terminology.md)
## Design Proposals
Solana's architecture is described by docs generated from markdown files in
the `docs/src/` directory, maintained by an *editor* (currently @garious). To
add a design proposal, you'll need to include it in the
[Accepted Design Proposals](https://docs.solana.com/proposals)
section of the Solana docs. Here's the full process:
Solana's architecture is described by a book generated from markdown files in
the `book/src/` directory, maintained by an *editor* (currently @garious). To
add a design proposal, you'll need to at least propose a change to the content
under the [Accepted Design
Proposals](https://docs.solana.com/book/v/master/proposals) chapter. Here's
the full process:
1. Propose a design by creating a PR that adds a markdown document to the
`docs/src/proposals` directory and references it from the [table of
contents](docs/src/SUMMARY.md). Add any relevant *maintainers* to the PR
directory `book/src/` and references it from the [table of
contents](book/src/SUMMARY.md). Add any relevant *maintainers* to the PR
review.
2. The PR being merged indicates your proposed change was accepted and that the
maintainers support your plan of attack.

2327
Cargo.lock generated

File diff suppressed because it is too large

View File

@ -4,10 +4,7 @@ members = [
"bench-streamer",
"bench-tps",
"banking-bench",
"chacha",
"chacha-cuda",
"chacha-sys",
"cli-config",
"client",
"core",
"faucet",
@ -41,9 +38,6 @@ members = [
"programs/vest",
"programs/vote",
"archiver",
"archiver-lib",
"archiver-utils",
"remote-wallet",
"runtime",
"sdk",
"sdk-c",
@ -51,6 +45,7 @@ members = [
"sys-tuner",
"upload-perf",
"net-utils",
"fixed-buf",
"vote-signer",
"cli",
"rayon-threadlimit",

View File

@ -23,12 +23,12 @@ It's possible for a centralized database to process 710,000 transactions per sec
Furthermore, and much to our surprise, it can be implemented using a mechanism that has existed in Bitcoin since day one. The Bitcoin feature is called nLocktime and it can be used to postdate transactions using block height instead of a timestamp. As a Bitcoin client, you'd use block height instead of a timestamp if you don't trust the network. Block height turns out to be an instance of what's being called a Verifiable Delay Function in cryptography circles. It's a cryptographically secure way to say time has passed. In Solana, we use a far more granular verifiable delay function, a SHA 256 hash chain, to checkpoint the ledger and coordinate consensus. With it, we implement Optimistic Concurrency Control and are now well en route towards that theoretical limit of 710,000 transactions per second.
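To make the hash-chain idea concrete, here is a minimal, illustrative sketch (not the validator's actual Proof of History implementation) using `solana_sdk`'s hashing helpers: each state depends on the previous hash, so producing the chain requires strictly sequential work, which is what lets it act as a verifiable clock.

```rust
use solana_sdk::hash::{hash, Hash};

// Toy SHA-256 hash chain: state N is the hash of state N-1, so the chain can
// only be produced one tick at a time; replaying the hashes verifies that the
// corresponding amount of sequential work (i.e. time) has passed.
fn hash_chain(seed: &[u8], ticks: u64) -> Hash {
    let mut state = hash(seed);
    for _ in 0..ticks {
        state = hash(state.as_ref());
    }
    state
}
```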
Documentation
Architecture
===
Before you jump into the code, review the documentation [Solana: Blockchain Rebuilt for Scale](https://docs.solana.com).
Before you jump into the code, review the online book [Solana: Blockchain Rebuilt for Scale](https://docs.solana.com/book/).
(The _latest_ development version of the docs is [available here](https://docs.solana.com/v/master).)
(The _latest_ development version of the online book is also [available here](https://docs.solana.com/book/v/master/).)
Release Binaries
===
@ -87,8 +87,7 @@ $ rustup update
On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc. On Ubuntu:
```bash
$ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang
$ sudo apt-get install libssl-dev pkg-config zlib1g-dev llvm clang
```
Download the source code:
@ -121,13 +120,16 @@ $ cargo test
Local Testnet
---
Start your own testnet locally; instructions are in the online docs [Solana: Blockchain Rebuilt for Scale: Getting Started](https://docs.solana.com/building-from-source).
Start your own testnet locally; instructions are in the book [Solana: Blockchain Rebuilt for Scale: Getting Started](https://docs.solana.com/book/getting-started).
Remote Testnets
---
* `testnet` - public stable testnet accessible via devnet.solana.com. Runs 24/7
We maintain several testnets:
* `testnet` - public stable testnet accessible via testnet.solana.com. Runs 24/7
* `testnet-beta` - public beta channel testnet accessible via beta.testnet.solana.com. Runs 24/7
* `testnet-edge` - public edge channel testnet accessible via edge.testnet.solana.com. Runs 24/7
## Deploy process

View File

@ -138,11 +138,30 @@ There are three release channels that map to branches as follows:
### Update documentation
TODO: Documentation update procedure is WIP as we move to gitbook
Document the new recommended version by updating `docs/src/running-archiver.md` and `docs/src/validator-testnet.md` on the release (beta) branch to point at the `solana-install` for the upcoming release version.
Document the new recommended version by updating `book/src/running-archiver.md` and `book/src/validator-testnet.md` on the release (beta) branch to point at the `solana-install` for the upcoming release version.
### Update software on devnet.solana.com
#### Publish updated Book
We maintain three copies of the "book" as official documentation:
The testnet running on devnet.solana.com is set to use a fixed release tag
1) "Book" is the documentation for the latest official release. This should get manually updated whenever a new release is made. It is published here:
https://solana-labs.github.io/book/
2) "Book-edge" tracks the tip of the master branch and updates automatically.
https://solana-labs.github.io/book-edge/
3) "Book-beta" tracks the tip of the beta branch and updates automatically.
https://solana-labs.github.io/book-beta/
To manually trigger an update of the "Book", create a new job of the manual-update-book pipeline.
Set the tag of the latest release as the PUBLISH_BOOK_TAG environment variable.
```bash
PUBLISH_BOOK_TAG=v0.16.6
```
https://buildkite.com/solana-labs/manual-update-book
### Update software on testnet.solana.com
The testnet running on testnet.solana.com is set to use a fixed release tag
which is set in the Buildkite testnet-management pipeline.
This tag needs to be updated and the testnet restarted after a new release
tag is created.
@ -182,4 +201,4 @@ TESTNET_OP=create-and-start
### Alert the community
Notify Discord users on #validator-support that a new release for
devnet.solana.com is available
testnet.solana.com is available

View File

@ -1,39 +0,0 @@
[package]
name = "solana-archiver-lib"
version = "1.0.1"
description = "Solana Archiver Library"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
edition = "2018"
[dependencies]
bincode = "1.2.1"
crossbeam-channel = "0.3"
ed25519-dalek = "=1.0.0-pre.1"
log = "0.4.8"
rand = "0.6.5"
rand_chacha = "0.1.1"
solana-client = { path = "../client", version = "1.0.1" }
solana-storage-program = { path = "../programs/storage", version = "1.0.1" }
thiserror = "1.0"
serde = "1.0.104"
serde_json = "1.0.46"
serde_derive = "1.0.103"
solana-net-utils = { path = "../net-utils", version = "1.0.1" }
solana-chacha = { path = "../chacha", version = "1.0.1" }
solana-chacha-sys = { path = "../chacha-sys", version = "1.0.1" }
solana-ledger = { path = "../ledger", version = "1.0.1" }
solana-logger = { path = "../logger", version = "1.0.1" }
solana-perf = { path = "../perf", version = "1.0.1" }
solana-sdk = { path = "../sdk", version = "1.0.1" }
solana-core = { path = "../core", version = "1.0.1" }
solana-archiver-utils = { path = "../archiver-utils", version = "1.0.1" }
solana-metrics = { path = "../metrics", version = "1.0.1" }
[dev-dependencies]
hex = "0.4.0"
[lib]
name = "solana_archiver_lib"

View File

@ -1,11 +0,0 @@
#[macro_use]
extern crate log;
#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate solana_metrics;
pub mod archiver;
mod result;

View File

@ -1,48 +0,0 @@
use serde_json;
use solana_client::client_error;
use solana_ledger::blockstore;
use solana_sdk::transport;
use std::any::Any;
use thiserror::Error;
#[derive(Error, Debug)]
pub enum ArchiverError {
#[error("IO error")]
IO(#[from] std::io::Error),
#[error("blockstore error")]
BlockstoreError(#[from] blockstore::BlockstoreError),
#[error("crossbeam error")]
CrossbeamSendError(#[from] crossbeam_channel::SendError<u64>),
#[error("send error")]
SendError(#[from] std::sync::mpsc::SendError<u64>),
#[error("join error")]
JoinError(Box<dyn Any + Send + 'static>),
#[error("transport error")]
TransportError(#[from] transport::TransportError),
#[error("client error")]
ClientError(#[from] client_error::ClientError),
#[error("Json parsing error")]
JsonError(#[from] serde_json::error::Error),
#[error("Storage account has no balance")]
EmptyStorageAccountBalance,
#[error("No RPC peers..")]
NoRpcPeers,
#[error("Couldn't download full segment")]
SegmentDownloadError,
}
impl std::convert::From<Box<dyn Any + Send + 'static>> for ArchiverError {
fn from(e: Box<dyn Any + Send + 'static>) -> ArchiverError {
ArchiverError::JoinError(e)
}
}
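As a usage note, the `#[from]` attributes above are what let callers lift library errors into `ArchiverError` with `?`. A hypothetical helper illustrating the pattern (the function name is illustrative, not part of the crate):

```rust
use std::fs::File;

// The io::Error from File::open is converted into ArchiverError::IO
// automatically by the `#[from]` conversion declared above.
fn open_segment(path: &str) -> Result<File, ArchiverError> {
    Ok(File::open(path)?)
}
```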

View File

@ -1,25 +0,0 @@
[package]
name = "solana-archiver-utils"
version = "1.0.1"
description = "Solana Archiver Utils"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
edition = "2018"
[dependencies]
log = "0.4.8"
rand = "0.6.5"
solana-chacha = { path = "../chacha", version = "1.0.1" }
solana-chacha-sys = { path = "../chacha-sys", version = "1.0.1" }
solana-ledger = { path = "../ledger", version = "1.0.1" }
solana-logger = { path = "../logger", version = "1.0.1" }
solana-perf = { path = "../perf", version = "1.0.1" }
solana-sdk = { path = "../sdk", version = "1.0.1" }
[dev-dependencies]
hex = "0.4.0"
[lib]
name = "solana_archiver_utils"

View File

@ -1,120 +0,0 @@
#[macro_use]
extern crate log;
use solana_sdk::hash::{Hash, Hasher};
use std::fs::File;
use std::io::{self, BufReader, ErrorKind, Read, Seek, SeekFrom};
use std::mem::size_of;
use std::path::Path;
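/// Reads one `size_of::<Hash>()`-byte sample from the file at `in_path` for
/// each entry in `sample_offsets` (offsets are in units of the sample size,
/// not bytes) and returns the hash of all samples combined. Fails if the file
/// is shorter than one sample or an offset lies past the last full sample.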
pub fn sample_file(in_path: &Path, sample_offsets: &[u64]) -> io::Result<Hash> {
let in_file = File::open(in_path)?;
let metadata = in_file.metadata()?;
let mut buffer_file = BufReader::new(in_file);
let mut hasher = Hasher::default();
let sample_size = size_of::<Hash>();
let sample_size64 = sample_size as u64;
let mut buf = vec![0; sample_size];
let file_len = metadata.len();
if file_len < sample_size64 {
return Err(io::Error::new(ErrorKind::Other, "file too short!"));
}
for offset in sample_offsets {
if *offset > (file_len - sample_size64) / sample_size64 {
return Err(io::Error::new(ErrorKind::Other, "offset too large"));
}
buffer_file.seek(SeekFrom::Start(*offset * sample_size64))?;
trace!("sampling @ {} ", *offset);
match buffer_file.read(&mut buf) {
Ok(size) => {
assert_eq!(size, buf.len());
hasher.hash(&buf);
}
Err(e) => {
warn!("Error sampling file");
return Err(e);
}
}
}
Ok(hasher.result())
}
#[cfg(test)]
mod tests {
use super::*;
use rand::{thread_rng, Rng};
use std::fs::{create_dir_all, remove_file};
use std::io::Write;
use std::path::PathBuf;
extern crate hex;
fn tmp_file_path(name: &str) -> PathBuf {
use std::env;
let out_dir = env::var("FARF_DIR").unwrap_or_else(|_| "farf".to_string());
let mut rand_bits = [0u8; 32];
thread_rng().fill(&mut rand_bits[..]);
let mut path = PathBuf::new();
path.push(out_dir);
path.push("tmp");
create_dir_all(&path).unwrap();
path.push(format!("{}-{:?}", name, hex::encode(rand_bits)));
println!("path: {:?}", path);
path
}
#[test]
fn test_sample_file() {
solana_logger::setup();
let in_path = tmp_file_path("test_sample_file_input.txt");
let num_strings = 4096;
let string = "12foobar";
{
let mut in_file = File::create(&in_path).unwrap();
for _ in 0..num_strings {
in_file.write(string.as_bytes()).unwrap();
}
}
let num_samples = (string.len() * num_strings / size_of::<Hash>()) as u64;
let samples: Vec<_> = (0..num_samples).collect();
let res = sample_file(&in_path, samples.as_slice());
let ref_hash: Hash = Hash::new(&[
173, 251, 182, 165, 10, 54, 33, 150, 133, 226, 106, 150, 99, 192, 179, 1, 230, 144,
151, 126, 18, 191, 54, 67, 249, 140, 230, 160, 56, 30, 170, 52,
]);
let res = res.unwrap();
assert_eq!(res, ref_hash);
// Sample just past the end
assert!(sample_file(&in_path, &[num_samples]).is_err());
remove_file(&in_path).unwrap();
}
#[test]
fn test_sample_file_invalid_offset() {
let in_path = tmp_file_path("test_sample_file_invalid_offset_input.txt");
{
let mut in_file = File::create(&in_path).unwrap();
for _ in 0..4096 {
in_file.write("123456foobar".as_bytes()).unwrap();
}
}
let samples = [0, 200000];
let res = sample_file(&in_path, &samples);
assert!(res.is_err());
remove_file(in_path).unwrap();
}
#[test]
fn test_sample_file_missing_file() {
let in_path = tmp_file_path("test_sample_file_that_doesnt_exist.txt");
let samples = [0, 5];
let res = sample_file(&in_path, &samples);
assert!(res.is_err());
}
}

View File

@ -2,19 +2,18 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-archiver"
version = "1.0.1"
version = "0.22.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
clap = "2.33.0"
console = "0.9.2"
solana-clap-utils = { path = "../clap-utils", version = "1.0.1" }
solana-core = { path = "../core", version = "1.0.1" }
solana-logger = { path = "../logger", version = "1.0.1" }
solana-metrics = { path = "../metrics", version = "1.0.1" }
solana-archiver-lib = { path = "../archiver-lib", version = "1.0.1" }
solana-net-utils = { path = "../net-utils", version = "1.0.1" }
solana-sdk = { path = "../sdk", version = "1.0.1" }
console = "0.9.1"
solana-clap-utils = { path = "../clap-utils", version = "0.22.0" }
solana-core = { path = "../core", version = "0.22.0" }
solana-logger = { path = "../logger", version = "0.22.0" }
solana-metrics = { path = "../metrics", version = "0.22.0" }
solana-net-utils = { path = "../net-utils", version = "0.22.0" }
solana-sdk = { path = "../sdk", version = "0.22.0" }

View File

@ -1,6 +1,5 @@
use clap::{crate_description, crate_name, App, Arg};
use console::style;
use solana_archiver_lib::archiver::Archiver;
use solana_clap_utils::{
input_validators::is_keypair,
keypair::{
@ -9,10 +8,11 @@ use solana_clap_utils::{
},
};
use solana_core::{
archiver::Archiver,
cluster_info::{Node, VALIDATOR_PORT_RANGE},
contact_info::ContactInfo,
};
use solana_sdk::{commitment_config::CommitmentConfig, signature::Signer};
use solana_sdk::{commitment_config::CommitmentConfig, signature::KeypairUtil};
use std::{net::SocketAddr, path::PathBuf, process::exit, sync::Arc};
fn main() {

View File

@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-banking-bench"
version = "1.0.1"
version = "0.22.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -10,11 +10,11 @@ homepage = "https://solana.com/"
[dependencies]
log = "0.4.6"
rayon = "1.2.0"
solana-core = { path = "../core", version = "1.0.1" }
solana-ledger = { path = "../ledger", version = "1.0.1" }
solana-logger = { path = "../logger", version = "1.0.1" }
solana-runtime = { path = "../runtime", version = "1.0.1" }
solana-measure = { path = "../measure", version = "1.0.1" }
solana-sdk = { path = "../sdk", version = "1.0.1" }
solana-core = { path = "../core", version = "0.22.0" }
solana-ledger = { path = "../ledger", version = "0.22.0" }
solana-logger = { path = "../logger", version = "0.22.0" }
solana-runtime = { path = "../runtime", version = "0.22.0" }
solana-measure = { path = "../measure", version = "0.22.0" }
solana-sdk = { path = "../sdk", version = "0.22.0" }
rand = "0.6.5"
crossbeam-channel = "0.3"

View File

@ -10,7 +10,7 @@ use solana_core::packet::to_packets_chunked;
use solana_core::poh_recorder::PohRecorder;
use solana_core::poh_recorder::WorkingBankEntry;
use solana_ledger::bank_forks::BankForks;
use solana_ledger::{blockstore::Blockstore, get_tmp_ledger_path};
use solana_ledger::{blocktree::Blocktree, get_tmp_ledger_path};
use solana_measure::measure::Measure;
use solana_runtime::bank::Bank;
use solana_sdk::hash::Hash;
@ -139,11 +139,11 @@ fn main() {
let mut verified: Vec<_> = to_packets_chunked(&transactions.clone(), PACKETS_PER_BATCH);
let ledger_path = get_tmp_ledger_path!();
{
let blockstore = Arc::new(
Blockstore::open(&ledger_path).expect("Expected to be able to open database ledger"),
let blocktree = Arc::new(
Blocktree::open(&ledger_path).expect("Expected to be able to open database ledger"),
);
let (exit, poh_recorder, poh_service, signal_receiver) =
create_test_recorder(&bank, &blockstore, None);
create_test_recorder(&bank, &blocktree, None);
let cluster_info = ClusterInfo::new_with_invalid_keypair(Node::new_localhost().info);
let cluster_info = Arc::new(RwLock::new(cluster_info));
let banking_stage = BankingStage::new(
@ -162,8 +162,8 @@ fn main() {
// If it is dropped before poh_service, then poh_service will error when
// calling send() on the channel.
let signal_receiver = Arc::new(signal_receiver);
let mut total_us = 0;
let mut tx_total_us = 0;
let mut total = 0;
let mut tx_total = 0;
let mut txs_processed = 0;
let mut root = 1;
let collector = Pubkey::new_rand();
@ -173,7 +173,6 @@ fn main() {
chunk_len,
num_threads,
};
let mut total_sent = 0;
for _ in 0..ITERS {
let now = Instant::now();
let mut sent = 0;
@ -224,7 +223,7 @@ fn main() {
);
assert!(txs_processed < bank.transaction_count());
txs_processed = bank.transaction_count();
tx_total_us += duration_as_us(&now.elapsed());
tx_total += duration_as_us(&now.elapsed());
let mut poh_time = Measure::start("poh_time");
poh_recorder.lock().unwrap().reset(
@ -256,21 +255,20 @@ fn main() {
poh_time.as_us(),
);
} else {
tx_total_us += duration_as_us(&now.elapsed());
tx_total += duration_as_us(&now.elapsed());
}
// This signature clear may not actually clear the signatures
// in this chunk, but since we rotate between CHUNKS then
// we should clear them by the time we come around again to re-use that chunk.
bank.clear_signatures();
total_us += duration_as_us(&now.elapsed());
total += duration_as_us(&now.elapsed());
debug!(
"time: {} us checked: {} sent: {}",
duration_as_us(&now.elapsed()),
txes / CHUNKS,
sent,
);
total_sent += sent;
if bank.slot() > 0 && bank.slot() % 16 == 0 {
for tx in transactions.iter_mut() {
@ -286,11 +284,11 @@ fn main() {
}
eprintln!(
"{{'name': 'banking_bench_total', 'median': '{}'}}",
(1000.0 * 1000.0 * total_sent as f64) / (total_us as f64),
total / ITERS as u64,
);
eprintln!(
"{{'name': 'banking_bench_tx_total', 'median': '{}'}}",
(1000.0 * 1000.0 * total_sent as f64) / (tx_total_us as f64),
tx_total / ITERS as u64,
);
drop(verified_sender);
@ -302,5 +300,5 @@ fn main() {
sleep(Duration::from_secs(1));
debug!("waited for poh_service");
}
let _unused = Blockstore::destroy(&ledger_path);
let _unused = Blocktree::destroy(&ledger_path);
}
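A note on the two metric computations in this hunk: `total_us` and `tx_total_us` accumulate elapsed microseconds, so `(1000.0 * 1000.0 * total_sent) / total_us` reports transactions per second, while the `total / ITERS` variant reports mean microseconds per iteration; the two sides of the diff emit different units under the same metric names.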

View File

@ -2,33 +2,40 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-exchange"
version = "1.0.1"
version = "0.22.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
bincode = "1.2.1"
bs58 = "0.3.0"
clap = "2.32.0"
env_logger = "0.7.1"
itertools = "0.8.2"
log = "0.4.8"
num-derive = "0.3"
num-traits = "0.2"
rand = "0.6.5"
rayon = "1.2.0"
serde_json = "1.0.46"
serde = "1.0.104"
serde_derive = "1.0.103"
serde_json = "1.0.44"
serde_yaml = "0.8.11"
solana-clap-utils = { path = "../clap-utils", version = "1.0.1" }
solana-core = { path = "../core", version = "1.0.1" }
solana-genesis = { path = "../genesis", version = "1.0.1" }
solana-client = { path = "../client", version = "1.0.1" }
solana-faucet = { path = "../faucet", version = "1.0.1" }
solana-exchange-program = { path = "../programs/exchange", version = "1.0.1" }
solana-logger = { path = "../logger", version = "1.0.1" }
solana-metrics = { path = "../metrics", version = "1.0.1" }
solana-net-utils = { path = "../net-utils", version = "1.0.1" }
solana-runtime = { path = "../runtime", version = "1.0.1" }
solana-sdk = { path = "../sdk", version = "1.0.1" }
solana-clap-utils = { path = "../clap-utils", version = "0.22.0" }
solana-core = { path = "../core", version = "0.22.0" }
solana-genesis = { path = "../genesis", version = "0.22.0" }
solana-client = { path = "../client", version = "0.22.0" }
solana-faucet = { path = "../faucet", version = "0.22.0" }
solana-exchange-program = { path = "../programs/exchange", version = "0.22.0" }
solana-logger = { path = "../logger", version = "0.22.0" }
solana-metrics = { path = "../metrics", version = "0.22.0" }
solana-net-utils = { path = "../net-utils", version = "0.22.0" }
solana-runtime = { path = "../runtime", version = "0.22.0" }
solana-sdk = { path = "../sdk", version = "0.22.0" }
untrusted = "0.7.0"
ws = "0.9.1"
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "1.0.1" }
solana-local-cluster = { path = "../local-cluster", version = "0.22.0" }

View File

@ -15,7 +15,7 @@ use solana_sdk::{
client::{Client, SyncClient},
commitment_config::CommitmentConfig,
pubkey::Pubkey,
signature::{Keypair, Signer},
signature::{Keypair, KeypairUtil},
timing::{duration_as_ms, duration_as_s},
transaction::Transaction,
{system_instruction, system_program},
@ -701,7 +701,7 @@ fn verify_funding_transfer<T: SyncClient + ?Sized>(
false
}
pub fn fund_keys<T: Client>(client: &T, source: &Keypair, dests: &[Arc<Keypair>], lamports: u64) {
pub fn fund_keys(client: &dyn Client, source: &Keypair, dests: &[Arc<Keypair>], lamports: u64) {
let total = lamports * (dests.len() as u64 + 1);
let mut funded: Vec<(&Keypair, u64)> = vec![(source, total)];
let mut notfunded: Vec<&Arc<Keypair>> = dests.iter().collect();
@ -824,11 +824,7 @@ pub fn fund_keys<T: Client>(client: &T, source: &Keypair, dests: &[Arc<Keypair>]
}
}
pub fn create_token_accounts<T: Client>(
client: &T,
signers: &[Arc<Keypair>],
accounts: &[Keypair],
) {
pub fn create_token_accounts(client: &dyn Client, signers: &[Arc<Keypair>], accounts: &[Keypair]) {
let mut notfunded: Vec<(&Arc<Keypair>, &Keypair)> = signers.iter().zip(accounts).collect();
while !notfunded.is_empty() {
@ -972,12 +968,7 @@ fn generate_keypairs(num: u64) -> Vec<Keypair> {
rnd.gen_n_keypairs(num)
}
pub fn airdrop_lamports<T: Client>(
client: &T,
faucet_addr: &SocketAddr,
id: &Keypair,
amount: u64,
) {
pub fn airdrop_lamports(client: &dyn Client, faucet_addr: &SocketAddr, id: &Keypair, amount: u64) {
let balance = client.get_balance_with_commitment(&id.pubkey(), CommitmentConfig::recent());
let balance = balance.unwrap_or(0);
if balance >= amount {
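The signature rewrites in this file (`fund_keys`, `create_token_accounts`, `airdrop_lamports`) trade a generic `T: Client` parameter for a `&dyn Client` trait object. A minimal, self-contained sketch of that trade-off, using a stand-in trait rather than the real `Client`:

```rust
trait Client {
    fn get_balance(&self, account: &str) -> u64;
}

struct MockClient;

impl Client for MockClient {
    fn get_balance(&self, _account: &str) -> u64 {
        42
    }
}

// Generic version: monomorphized per concrete client type; statically dispatched.
fn balance_generic<T: Client>(client: &T) -> u64 {
    client.get_balance("payer")
}

// Trait-object version: one compiled copy, dispatched through a vtable.
fn balance_dyn(client: &dyn Client) -> u64 {
    client.get_balance("payer")
}

fn main() {
    let client = MockClient;
    assert_eq!(balance_generic(&client), balance_dyn(&client));
}
```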

View File

@ -1,7 +1,7 @@
use clap::{crate_description, crate_name, value_t, App, Arg, ArgMatches};
use solana_core::gen_keys::GenKeys;
use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::signature::{read_keypair_file, Keypair};
use solana_sdk::signature::{read_keypair_file, Keypair, KeypairUtil};
use std::net::SocketAddr;
use std::process::exit;
use std::time::Duration;

View File

@ -5,7 +5,7 @@ pub mod order_book;
use crate::bench::{airdrop_lamports, create_client_accounts_file, do_bench_exchange, Config};
use log::*;
use solana_core::gossip_service::{discover_cluster, get_multi_client};
use solana_sdk::signature::Signer;
use solana_sdk::signature::KeypairUtil;
fn main() {
solana_logger::setup();

View File

@ -10,13 +10,12 @@ use solana_local_cluster::local_cluster::{ClusterConfig, LocalCluster};
use solana_runtime::bank::Bank;
use solana_runtime::bank_client::BankClient;
use solana_sdk::genesis_config::create_genesis_config;
use solana_sdk::signature::{Keypair, Signer};
use solana_sdk::signature::{Keypair, KeypairUtil};
use std::process::exit;
use std::sync::mpsc::channel;
use std::time::Duration;
#[test]
#[ignore]
fn test_exchange_local_cluster() {
solana_logger::setup();

View File

@ -2,14 +2,14 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-streamer"
version = "1.0.1"
version = "0.22.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
clap = "2.33.0"
solana-clap-utils = { path = "../clap-utils", version = "1.0.1" }
solana-core = { path = "../core", version = "1.0.1" }
solana-logger = { path = "../logger", version = "1.0.1" }
solana-net-utils = { path = "../net-utils", version = "1.0.1" }
solana-clap-utils = { path = "../clap-utils", version = "0.22.0" }
solana-core = { path = "../core", version = "0.22.0" }
solana-logger = { path = "../logger", version = "0.22.0" }
solana-net-utils = { path = "../net-utils", version = "0.22.0" }

View File

@ -1,5 +1,6 @@
use clap::{crate_description, crate_name, App, Arg};
use solana_core::packet::{Packet, Packets, PacketsRecycler, PACKET_DATA_SIZE};
use solana_core::result::Result;
use solana_core::streamer::{receiver, PacketReceiver};
use std::cmp::max;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
@ -7,7 +8,7 @@ use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::mpsc::channel;
use std::sync::Arc;
use std::thread::sleep;
use std::thread::{spawn, JoinHandle, Result};
use std::thread::{spawn, JoinHandle};
use std::time::Duration;
use std::time::SystemTime;
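The import swap above is not a rename: `std::thread::Result<T>` is shorthand for `Result<T, Box<dyn Any + Send + 'static>>`, the panic payload returned by `join()`, while the crate-level `Result` carries ordinary error values. A small illustration of the std type:

```rust
use std::thread;

// join() yields std::thread::Result: its Err side holds a panic payload, not
// an I/O or protocol error, so it is the wrong type for a receiver loop.
fn main() {
    let handle = thread::spawn(|| 42u32);
    let joined: thread::Result<u32> = handle.join();
    assert_eq!(joined.unwrap(), 42);
}
```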

View File

@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-tps"
version = "1.0.1"
version = "0.22.0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@ -12,26 +12,28 @@ bincode = "1.2.1"
clap = "2.33.0"
log = "0.4.8"
rayon = "1.2.0"
serde_json = "1.0.46"
serde = "1.0.104"
serde_derive = "1.0.103"
serde_json = "1.0.44"
serde_yaml = "0.8.11"
solana-clap-utils = { path = "../clap-utils", version = "1.0.1" }
solana-core = { path = "../core", version = "1.0.1" }
solana-genesis = { path = "../genesis", version = "1.0.1" }
solana-client = { path = "../client", version = "1.0.1" }
solana-faucet = { path = "../faucet", version = "1.0.1" }
solana-librapay = { path = "../programs/librapay", version = "1.0.1", optional = true }
solana-logger = { path = "../logger", version = "1.0.1" }
solana-metrics = { path = "../metrics", version = "1.0.1" }
solana-measure = { path = "../measure", version = "1.0.1" }
solana-net-utils = { path = "../net-utils", version = "1.0.1" }
solana-runtime = { path = "../runtime", version = "1.0.1" }
solana-sdk = { path = "../sdk", version = "1.0.1" }
solana-move-loader-program = { path = "../programs/move_loader", version = "1.0.1", optional = true }
solana-clap-utils = { path = "../clap-utils", version = "0.22.0" }
solana-core = { path = "../core", version = "0.22.0" }
solana-genesis = { path = "../genesis", version = "0.22.0" }
solana-client = { path = "../client", version = "0.22.0" }
solana-faucet = { path = "../faucet", version = "0.22.0" }
solana-librapay = { path = "../programs/librapay", version = "0.22.0", optional = true }
solana-logger = { path = "../logger", version = "0.22.0" }
solana-metrics = { path = "../metrics", version = "0.22.0" }
solana-measure = { path = "../measure", version = "0.22.0" }
solana-net-utils = { path = "../net-utils", version = "0.22.0" }
solana-runtime = { path = "../runtime", version = "0.22.0" }
solana-sdk = { path = "../sdk", version = "0.22.0" }
solana-move-loader-program = { path = "../programs/move_loader", version = "0.22.0", optional = true }
[dev-dependencies]
serial_test = "0.3.2"
serial_test_derive = "0.4.0"
solana-local-cluster = { path = "../local-cluster", version = "1.0.1" }
serial_test_derive = "0.3.1"
solana-local-cluster = { path = "../local-cluster", version = "0.22.0" }
[features]
move = ["solana-librapay", "solana-move-loader-program"]

View File

@ -7,7 +7,7 @@ use solana_faucet::faucet::request_airdrop_transaction;
#[cfg(feature = "move")]
use solana_librapay::{create_genesis, upload_mint_script, upload_payment_script};
use solana_measure::measure::Measure;
use solana_metrics::{self, datapoint_info};
use solana_metrics::{self, datapoint_debug};
use solana_sdk::{
client::Client,
clock::{DEFAULT_TICKS_PER_SECOND, DEFAULT_TICKS_PER_SLOT, MAX_PROCESSING_AGE},
@ -15,13 +15,14 @@ use solana_sdk::{
fee_calculator::FeeCalculator,
hash::Hash,
pubkey::Pubkey,
signature::{Keypair, Signer},
signature::{Keypair, KeypairUtil},
system_instruction, system_transaction,
timing::{duration_as_ms, duration_as_s, duration_as_us, timestamp},
transaction::Transaction,
};
use std::{
collections::{HashSet, VecDeque},
cmp,
collections::VecDeque,
net::SocketAddr,
process::exit,
sync::{
@ -65,9 +66,10 @@ fn get_recent_blockhash<T: Client>(client: &T) -> (Hash, FeeCalculator) {
}
pub fn do_bench_tps<T>(
client: Arc<T>,
clients: Vec<T>,
config: Config,
gen_keypairs: Vec<Keypair>,
keypair0_balance: u64,
libra_args: Option<LibraKeys>,
) -> u64
where
@ -80,9 +82,13 @@ where
duration,
tx_count,
sustained,
num_lamports_per_account,
..
} = config;
let clients: Vec<_> = clients.into_iter().map(Arc::new).collect();
let client = &clients[0];
let mut source_keypair_chunks: Vec<Vec<&Keypair>> = Vec::new();
let mut dest_keypair_chunks: Vec<VecDeque<&Keypair>> = Vec::new();
assert!(gen_keypairs.len() >= 2 * tx_count);
@ -109,17 +115,20 @@ where
let maxes = Arc::new(RwLock::new(Vec::new()));
let sample_period = 1; // in seconds
info!("Sampling TPS every {} second...", sample_period);
let sample_thread = {
let exit_signal = exit_signal.clone();
let maxes = maxes.clone();
let client = client.clone();
Builder::new()
.name("solana-client-sample".to_string())
.spawn(move || {
sample_txs(&exit_signal, &maxes, sample_period, &client);
})
.unwrap()
};
let v_threads: Vec<_> = clients
.iter()
.map(|client| {
let exit_signal = exit_signal.clone();
let maxes = maxes.clone();
let client = client.clone();
Builder::new()
.name("solana-client-sample".to_string())
.spawn(move || {
sample_txs(&exit_signal, &maxes, sample_period, &client);
})
.unwrap()
})
.collect();
let shared_txs: SharedTransactions = Arc::new(RwLock::new(VecDeque::new()));
@ -165,10 +174,11 @@ where
// generate and send transactions for the specified duration
let start = Instant::now();
let keypair_chunks = source_keypair_chunks.len();
let keypair_chunks = source_keypair_chunks.len() as u64;
let mut reclaim_lamports_back_to_source_account = false;
let mut chunk_index = 0;
let mut i = keypair0_balance;
while start.elapsed() < duration {
let chunk_index = (i % keypair_chunks) as usize;
generate_txs(
&shared_txs,
&recent_blockhash,
@ -187,9 +197,7 @@ where
sleep(Duration::from_millis(1));
}
} else {
while !shared_txs.read().unwrap().is_empty()
|| shared_tx_active_thread_count.load(Ordering::Relaxed) > 0
{
while shared_tx_active_thread_count.load(Ordering::Relaxed) > 0 {
sleep(Duration::from_millis(1));
}
}
@ -198,11 +206,8 @@ where
// transaction signatures even when blockhash is reused.
dest_keypair_chunks[chunk_index].rotate_left(1);
// Move on to next chunk
chunk_index = (chunk_index + 1) % keypair_chunks;
// Switch directions after transfering for each "chunk"
if chunk_index == 0 {
i += 1;
if should_switch_directions(num_lamports_per_account, keypair_chunks, i) {
reclaim_lamports_back_to_source_account = !reclaim_lamports_back_to_source_account;
}
}
@ -210,9 +215,11 @@ where
// Stop the sampling threads so it will collect the stats
exit_signal.store(true, Ordering::Relaxed);
info!("Waiting for sampler threads...");
if let Err(err) = sample_thread.join() {
info!(" join() failed with: {:?}", err);
info!("Waiting for validator threads...");
for t in v_threads {
if let Err(err) = t.join() {
info!(" join() failed with: {:?}", err);
}
}
// join the tx send threads
@ -244,7 +251,7 @@ where
fn metrics_submit_lamport_balance(lamport_balance: u64) {
info!("Token balance: {}", lamport_balance);
datapoint_info!(
datapoint_debug!(
"bench-tps-lamport_balance",
("balance", lamport_balance, i64)
);
@ -375,7 +382,7 @@ fn generate_txs(
duration_as_ms(&duration),
blockhash,
);
datapoint_info!(
datapoint_debug!(
"bench-tps-generate_txs",
("duration", duration_as_us(&duration), i64)
);
@ -481,7 +488,7 @@ fn do_tx_transfers<T: Client>(
duration_as_ms(&transfer_start.elapsed()),
tx_len as f32 / duration_as_s(&transfer_start.elapsed()),
);
datapoint_info!(
datapoint_debug!(
"bench-tps-do_tx_transfers",
("duration", duration_as_us(&transfer_start.elapsed()), i64),
("count", tx_len, i64)
@ -493,218 +500,177 @@ fn do_tx_transfers<T: Client>(
}
}
fn verify_funding_transfer<T: Client>(client: &Arc<T>, tx: &Transaction, amount: u64) -> bool {
fn verify_funding_transfer<T: Client>(client: &T, tx: &Transaction, amount: u64) -> bool {
for a in &tx.message().account_keys[1..] {
match client.get_balance_with_commitment(a, CommitmentConfig::recent()) {
Ok(balance) => return balance >= amount,
Err(err) => error!("failed to get balance {:?}", err),
if client
.get_balance_with_commitment(a, CommitmentConfig::recent())
.unwrap_or(0)
>= amount
{
return true;
}
}
false
}
trait FundingTransactions<'a> {
fn fund<T: 'static + Client + Send + Sync>(
&mut self,
client: &Arc<T>,
to_fund: &[(&'a Keypair, Vec<(Pubkey, u64)>)],
to_lamports: u64,
);
fn make(&mut self, to_fund: &[(&'a Keypair, Vec<(Pubkey, u64)>)]);
fn sign(&mut self, blockhash: Hash);
fn send<T: Client>(&self, client: &Arc<T>);
fn verify<T: 'static + Client + Send + Sync>(&mut self, client: &Arc<T>, to_lamports: u64);
}
impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
fn fund<T: 'static + Client + Send + Sync>(
&mut self,
client: &Arc<T>,
to_fund: &[(&'a Keypair, Vec<(Pubkey, u64)>)],
to_lamports: u64,
) {
self.make(to_fund);
let mut tries = 0;
while !self.is_empty() {
info!(
"{} {} each to {} accounts in {} txs",
if tries == 0 {
"transferring"
} else {
" retrying"
},
to_lamports,
self.len() * MAX_SPENDS_PER_TX as usize,
self.len(),
);
let (blockhash, _fee_calculator) = get_recent_blockhash(client.as_ref());
// re-sign retained to_fund_txes with updated blockhash
self.sign(blockhash);
self.send(&client);
// Sleep a few slots to allow transactions to process
sleep(Duration::from_secs(1));
self.verify(&client, to_lamports);
// retry anything that seems to have dropped through cracks
// again since these txs are all or nothing, they're fine to
// retry
tries += 1;
}
info!("transferred");
}
fn make(&mut self, to_fund: &[(&'a Keypair, Vec<(Pubkey, u64)>)]) {
let mut make_txs = Measure::start("make_txs");
let to_fund_txs: Vec<(&Keypair, Transaction)> = to_fund
.par_iter()
.map(|(k, t)| {
let tx = Transaction::new_unsigned_instructions(system_instruction::transfer_many(
&k.pubkey(),
&t,
));
(*k, tx)
})
.collect();
make_txs.stop();
debug!(
"make {} unsigned txs: {}us",
to_fund_txs.len(),
make_txs.as_us()
);
self.extend(to_fund_txs);
}
fn sign(&mut self, blockhash: Hash) {
let mut sign_txs = Measure::start("sign_txs");
self.par_iter_mut().for_each(|(k, tx)| {
tx.sign(&[*k], blockhash);
});
sign_txs.stop();
debug!("sign {} txs: {}us", self.len(), sign_txs.as_us());
}
fn send<T: Client>(&self, client: &Arc<T>) {
let mut send_txs = Measure::start("send_txs");
self.iter().for_each(|(_, tx)| {
client.async_send_transaction(tx.clone()).expect("transfer");
});
send_txs.stop();
debug!("send {} txs: {}us", self.len(), send_txs.as_us());
}
fn verify<T: 'static + Client + Send + Sync>(&mut self, client: &Arc<T>, to_lamports: u64) {
let starting_txs = self.len();
let verified_txs = Arc::new(AtomicUsize::new(0));
let too_many_failures = Arc::new(AtomicBool::new(false));
let loops = if starting_txs < 1000 { 3 } else { 1 };
// Only loop multiple times for small (quick) transaction batches
for _ in 0..loops {
let failed_verify = Arc::new(AtomicUsize::new(0));
let client = client.clone();
let verified_txs = &verified_txs;
let failed_verify = &failed_verify;
let too_many_failures = &too_many_failures;
let verified_set: HashSet<Pubkey> = self
.par_iter()
.filter_map(move |(k, tx)| {
if too_many_failures.load(Ordering::Relaxed) {
return None;
}
let verified = if verify_funding_transfer(&client, &tx, to_lamports) {
verified_txs.fetch_add(1, Ordering::Relaxed);
Some(k.pubkey())
} else {
failed_verify.fetch_add(1, Ordering::Relaxed);
None
};
let verified_txs = verified_txs.load(Ordering::Relaxed);
let failed_verify = failed_verify.load(Ordering::Relaxed);
let remaining_count = starting_txs.saturating_sub(verified_txs + failed_verify);
if failed_verify > 100 && failed_verify > verified_txs {
too_many_failures.store(true, Ordering::Relaxed);
warn!(
"Too many failed transfers... {} remaining, {} verified, {} failures",
remaining_count, verified_txs, failed_verify
);
}
if remaining_count % 100 == 0 {
info!(
"Verifying transfers... {} remaining, {} verified, {} failures",
remaining_count, verified_txs, failed_verify
);
}
verified
})
.collect();
self.retain(|(k, _)| !verified_set.contains(&k.pubkey()));
if self.is_empty() {
break;
}
info!("Looping verifications");
let verified_txs = verified_txs.load(Ordering::Relaxed);
let failed_verify = failed_verify.load(Ordering::Relaxed);
let remaining_count = starting_txs.saturating_sub(verified_txs + failed_verify);
info!(
"Verifying transfers... {} remaining, {} verified, {} failures",
remaining_count, verified_txs, failed_verify
);
sleep(Duration::from_millis(100));
}
}
}
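`FundingTransactions` above is an extension trait: it bolts `fund`, `make`, `sign`, `send`, and `verify` onto the existing type `Vec<(&Keypair, Transaction)>`. A minimal standalone sketch of the pattern:

```rust
// Extension-trait pattern in miniature: declare a trait, then implement it
// for a type you don't own (here Vec<u64>) to attach new methods to it.
trait TotalExt {
    fn total(&self) -> u64;
}

impl TotalExt for Vec<u64> {
    fn total(&self) -> u64 {
        self.iter().sum()
    }
}

fn main() {
    assert_eq!(vec![1, 2, 3].total(), 6);
}
```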
/// fund the dests keys by spending all of the source keys into MAX_SPENDS_PER_TX
/// on every iteration. This allows us to replay the transfers because the source is either empty,
/// or full
pub fn fund_keys<T: 'static + Client + Send + Sync>(
client: Arc<T>,
pub fn fund_keys<T: Client>(
client: &T,
source: &Keypair,
dests: &[Keypair],
total: u64,
max_fee: u64,
lamports_per_account: u64,
mut extra: u64,
) {
let mut funded: Vec<&Keypair> = vec![source];
let mut funded_funds = total;
let mut not_funded: Vec<&Keypair> = dests.iter().collect();
while !not_funded.is_empty() {
// Build to fund list and prepare funding sources for next iteration
let mut new_funded: Vec<&Keypair> = vec![];
let mut to_fund: Vec<(&Keypair, Vec<(Pubkey, u64)>)> = vec![];
let to_lamports = (funded_funds - lamports_per_account - max_fee) / MAX_SPENDS_PER_TX;
for f in funded {
let start = not_funded.len() - MAX_SPENDS_PER_TX as usize;
let dests: Vec<_> = not_funded.drain(start..).collect();
let spends: Vec<_> = dests.iter().map(|k| (k.pubkey(), to_lamports)).collect();
to_fund.push((f, spends));
new_funded.extend(dests.into_iter());
let mut funded: Vec<(&Keypair, u64)> = vec![(source, total)];
let mut notfunded: Vec<&Keypair> = dests.iter().collect();
let lamports_per_account = (total - (extra * max_fee)) / (notfunded.len() as u64 + 1);
info!(
"funding keys {} with lamports: {:?} total: {}",
dests.len(),
client.get_balance(&source.pubkey()),
total
);
while !notfunded.is_empty() {
let mut new_funded: Vec<(&Keypair, u64)> = vec![];
let mut to_fund = vec![];
info!("creating from... {}", funded.len());
let mut build_to_fund = Measure::start("build_to_fund");
for f in &mut funded {
let max_units = cmp::min(notfunded.len() as u64, MAX_SPENDS_PER_TX);
if max_units == 0 {
break;
}
let start = notfunded.len() - max_units as usize;
let fees = if extra > 0 { max_fee } else { 0 };
let per_unit = (f.1 - lamports_per_account - fees) / max_units;
let moves: Vec<_> = notfunded[start..]
.iter()
.map(|k| (k.pubkey(), per_unit))
.collect();
notfunded[start..]
.iter()
.for_each(|k| new_funded.push((k, per_unit)));
notfunded.truncate(start);
if !moves.is_empty() {
to_fund.push((f.0, moves));
}
extra -= 1;
}
build_to_fund.stop();
debug!("build to_fund vec: {}us", build_to_fund.as_us());
// try to transfer a "few" at a time with recent blockhash
// assume 4MB network buffers, and 512 byte packets
const FUND_CHUNK_LEN: usize = 4 * 1024 * 1024 / 512;
to_fund.chunks(FUND_CHUNK_LEN).for_each(|chunk| {
Vec::<(&Keypair, Transaction)>::with_capacity(chunk.len()).fund(
&client,
chunk,
to_lamports,
);
});
let mut tries = 0;
info!("funded: {} left: {}", new_funded.len(), not_funded.len());
let mut make_txs = Measure::start("make_txs");
// this set of transactions just initializes us for bookkeeping
#[allow(clippy::clone_double_ref)] // sigh
let mut to_fund_txs: Vec<_> = chunk
.par_iter()
.map(|(k, m)| {
let tx = Transaction::new_unsigned_instructions(
system_instruction::transfer_many(&k.pubkey(), &m),
);
(k.clone(), tx)
})
.collect();
make_txs.stop();
debug!(
"make {} unsigned txs: {}us",
to_fund_txs.len(),
make_txs.as_us()
);
let amount = chunk[0].1[0].1;
while !to_fund_txs.is_empty() {
let receivers = to_fund_txs
.iter()
.fold(0, |len, (_, tx)| len + tx.message().instructions.len());
info!(
"{} {} to {} in {} txs",
if tries == 0 {
"transferring"
} else {
" retrying"
},
amount,
receivers,
to_fund_txs.len(),
);
let (blockhash, _fee_calculator) = get_recent_blockhash(client);
// re-sign retained to_fund_txes with updated blockhash
let mut sign_txs = Measure::start("sign_txs");
to_fund_txs.par_iter_mut().for_each(|(k, tx)| {
tx.sign(&[*k], blockhash);
});
sign_txs.stop();
debug!("sign {} txs: {}us", to_fund_txs.len(), sign_txs.as_us());
let mut send_txs = Measure::start("send_txs");
to_fund_txs.iter().for_each(|(_, tx)| {
client.async_send_transaction(tx.clone()).expect("transfer");
});
send_txs.stop();
debug!("send {} txs: {}us", to_fund_txs.len(), send_txs.as_us());
let mut verify_txs = Measure::start("verify_txs");
let mut starting_txs = to_fund_txs.len();
let mut verified_txs = 0;
let mut failed_verify = 0;
// Only loop multiple times for small (quick) transaction batches
for _ in 0..(if starting_txs < 1000 { 3 } else { 1 }) {
let mut timer = Instant::now();
to_fund_txs.retain(|(_, tx)| {
if timer.elapsed() >= Duration::from_secs(5) {
if failed_verify > 0 {
debug!("total txs failed verify: {}", failed_verify);
}
info!(
"Verifying transfers... {} remaining",
starting_txs - verified_txs
);
timer = Instant::now();
}
let verified = verify_funding_transfer(client, &tx, amount);
if verified {
verified_txs += 1;
} else {
failed_verify += 1;
}
!verified
});
if to_fund_txs.is_empty() {
break;
}
debug!("Looping verifications");
info!("Verifying transfers... {} remaining", to_fund_txs.len());
sleep(Duration::from_millis(100));
}
starting_txs -= to_fund_txs.len();
verify_txs.stop();
debug!("verified {} txs: {}us", starting_txs, verify_txs.as_us());
// retry anything that seems to have dropped through cracks
// again since these txs are all or nothing, they're fine to
// retry
tries += 1;
}
info!("transferred");
});
info!("funded: {} left: {}", new_funded.len(), notfunded.len());
funded = new_funded;
funded_funds = to_lamports;
}
}
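Both versions of `fund_keys` fan out geometrically: every already-funded key funds up to `MAX_SPENDS_PER_TX` new keys per round, so the number of rounds grows only logarithmically with the destination count. A small illustrative helper (assuming only that `MAX_SPENDS_PER_TX` bounds the spends per transaction):

```rust
// Illustrative: rounds needed to fund `dests` keys starting from one source
// key when each funded key can fund up to `max_spends` keys per round.
fn rounds_needed(dests: u64, max_spends: u64) -> u32 {
    assert!(max_spends > 1, "fan-out requires more than one spend per tx");
    let (mut funded, mut rounds) = (1u64, 0u32);
    while funded < dests {
        funded = funded.saturating_mul(max_spends);
        rounds += 1;
    }
    rounds
}
```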
@ -712,14 +678,14 @@ pub fn airdrop_lamports<T: Client>(
client: &T,
faucet_addr: &SocketAddr,
id: &Keypair,
desired_balance: u64,
tx_count: u64,
) -> Result<()> {
let starting_balance = client.get_balance(&id.pubkey()).unwrap_or(0);
metrics_submit_lamport_balance(starting_balance);
info!("starting balance {}", starting_balance);
if starting_balance < desired_balance {
let airdrop_amount = desired_balance - starting_balance;
if starting_balance < tx_count {
let airdrop_amount = tx_count - starting_balance;
info!(
"Airdropping {:?} lamports from {} for {}",
airdrop_amount,
@ -844,6 +810,17 @@ fn compute_and_report_stats(
);
}
// First transfer 2/3 of the lamports to the dest accounts
// then ping-pong 1/3 of the lamports back to the other account
// this leaves 1/3 lamport buffer in each account
fn should_switch_directions(num_lamports_per_account: u64, keypair_chunks: u64, i: u64) -> bool {
if i < keypair_chunks * (2 * num_lamports_per_account) / 3 {
return false;
}
i % (keypair_chunks * num_lamports_per_account / 3) == 0
}
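Spelling out the arithmetic in the comment above: with `num_lamports_per_account = 30` and `keypair_chunks = 1`, no switch occurs before iteration `1 * (2 * 30) / 3 = 20`; from then on the direction flips whenever `i` is a multiple of `1 * 30 / 3 = 10`, which is exactly what `test_switch_directions` below exercises.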
pub fn generate_keypairs(seed_keypair: &Keypair, count: u64) -> (Vec<Keypair>, u64) {
let mut seed = [0u8; 32];
seed.copy_from_slice(&seed_keypair.to_bytes()[..32]);
@ -1027,25 +1004,23 @@ fn fund_move_keys<T: Client>(
info!("done funding keys, took {} ms", funding_time.as_ms());
}
pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
client: Arc<T>,
pub fn generate_and_fund_keypairs<T: Client>(
client: &T,
faucet_addr: Option<SocketAddr>,
funding_key: &Keypair,
keypair_count: usize,
lamports_per_account: u64,
use_move: bool,
) -> Result<(Vec<Keypair>, Option<LibraKeys>)> {
) -> Result<(Vec<Keypair>, Option<LibraKeys>, u64)> {
info!("Creating {} keypairs...", keypair_count);
let (mut keypairs, extra) = generate_keypairs(funding_key, keypair_count as u64);
info!("Get lamports...");
// Sample the first keypair, to prevent lamport loss on repeated solana-bench-tps executions
let first_key = keypairs[0].pubkey();
let first_keypair_balance = client.get_balance(&first_key).unwrap_or(0);
// Sample the last keypair, to check if funding was already completed
let last_key = keypairs[keypair_count - 1].pubkey();
let last_keypair_balance = client.get_balance(&last_key).unwrap_or(0);
// Sample the first keypair, see if it has lamports, if so then resume.
// This logic is to prevent lamport loss on repeated solana-bench-tps executions
let last_keypair_balance = client
.get_balance(&keypairs[keypair_count - 1].pubkey())
.unwrap_or(0);
#[cfg(feature = "move")]
let mut move_keypairs_ret = None;
@ -1053,38 +1028,31 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
#[cfg(not(feature = "move"))]
let move_keypairs_ret = None;
// Repeated runs will eat up keypair balances from transaction fees. In order to quickly
// start another bench-tps run without re-funding all of the keypairs, check if the
// keypairs still have at least 80% of the expected funds. That should be enough to
// pay for the transaction fees in a new run.
let enough_lamports = 8 * lamports_per_account / 10;
if first_keypair_balance < enough_lamports || last_keypair_balance < enough_lamports {
let (_blockhash, fee_calculator) = get_recent_blockhash(client.as_ref());
let max_fee = fee_calculator.max_lamports_per_signature;
let extra_fees = extra * max_fee;
let total_keypairs = keypairs.len() as u64 + 1; // Add one for funding keypair
let mut total = lamports_per_account * total_keypairs + extra_fees;
if lamports_per_account > last_keypair_balance {
let (_blockhash, fee_calculator) = get_recent_blockhash(client);
let account_desired_balance =
lamports_per_account - last_keypair_balance + fee_calculator.max_lamports_per_signature;
let extra_fees = extra * fee_calculator.max_lamports_per_signature;
let mut total = account_desired_balance * (1 + keypairs.len() as u64) + extra_fees;
if use_move {
total *= 3;
}
let funding_key_balance = client.get_balance(&funding_key.pubkey()).unwrap_or(0);
info!(
"Funding keypair balance: {} max_fee: {} lamports_per_account: {} extra: {} total: {}",
funding_key_balance, max_fee, lamports_per_account, extra, total
);
info!("Previous key balance: {} max_fee: {} lamports_per_account: {} extra: {} desired_balance: {} total: {}",
last_keypair_balance, fee_calculator.max_lamports_per_signature, lamports_per_account, extra,
account_desired_balance, total
);
if client.get_balance(&funding_key.pubkey()).unwrap_or(0) < total {
airdrop_lamports(client.as_ref(), &faucet_addr.unwrap(), funding_key, total)?;
airdrop_lamports(client, &faucet_addr.unwrap(), funding_key, total)?;
}
#[cfg(feature = "move")]
{
if use_move {
let libra_genesis_keypair =
create_genesis(&funding_key, client.as_ref(), 10_000_000);
let libra_mint_program_id = upload_mint_script(&funding_key, client.as_ref());
let libra_pay_program_id = upload_payment_script(&funding_key, client.as_ref());
let libra_genesis_keypair = create_genesis(&funding_key, client, 10_000_000);
let libra_mint_program_id = upload_mint_script(&funding_key, client);
let libra_pay_program_id = upload_payment_script(&funding_key, client);
// Generate another set of keypairs for move accounts.
// Still fund the solana ones which will be used for fees.
@ -1092,7 +1060,7 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
let mut rnd = GenKeys::new(seed);
let move_keypairs = rnd.gen_n_keypairs(keypair_count as u64);
fund_move_keys(
client.as_ref(),
client,
funding_key,
&move_keypairs,
total / 3,
@ -1117,15 +1085,15 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
funding_key,
&keypairs,
total,
max_fee,
lamports_per_account,
fee_calculator.max_lamports_per_signature,
extra,
);
}
// 'generate_keypairs' generates extra keys to be able to have size-aligned funding batches for fund_keys.
keypairs.truncate(keypair_count);
Ok((keypairs, move_keypairs_ret))
Ok((keypairs, move_keypairs_ret, last_keypair_balance))
}
#[cfg(test)]
@ -1137,11 +1105,30 @@ mod tests {
use solana_sdk::fee_calculator::FeeCalculator;
use solana_sdk::genesis_config::create_genesis_config;
#[test]
fn test_switch_directions() {
assert_eq!(should_switch_directions(30, 1, 0), false);
assert_eq!(should_switch_directions(30, 1, 1), false);
assert_eq!(should_switch_directions(30, 1, 20), true);
assert_eq!(should_switch_directions(30, 1, 21), false);
assert_eq!(should_switch_directions(30, 1, 30), true);
assert_eq!(should_switch_directions(30, 1, 90), true);
assert_eq!(should_switch_directions(30, 1, 91), false);
assert_eq!(should_switch_directions(30, 2, 0), false);
assert_eq!(should_switch_directions(30, 2, 1), false);
assert_eq!(should_switch_directions(30, 2, 20), false);
assert_eq!(should_switch_directions(30, 2, 40), true);
assert_eq!(should_switch_directions(30, 2, 90), false);
assert_eq!(should_switch_directions(30, 2, 100), true);
assert_eq!(should_switch_directions(30, 2, 101), false);
}
#[test]
fn test_bench_tps_bank_client() {
let (genesis_config, id) = create_genesis_config(10_000);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let clients = vec![BankClient::new(bank)];
let mut config = Config::default();
config.id = id;
@ -1149,24 +1136,23 @@ mod tests {
config.duration = Duration::from_secs(5);
let keypair_count = config.tx_count * config.keypair_multiplier;
let (keypairs, _move_keypairs) =
generate_and_fund_keypairs(client.clone(), None, &config.id, keypair_count, 20, false)
let (keypairs, _move_keypairs, _keypair_balance) =
generate_and_fund_keypairs(&clients[0], None, &config.id, keypair_count, 20, false)
.unwrap();
do_bench_tps(client, config, keypairs, None);
do_bench_tps(clients, config, keypairs, 0, None);
}
#[test]
fn test_bench_tps_fund_keys() {
let (genesis_config, id) = create_genesis_config(10_000);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let client = BankClient::new(bank);
let keypair_count = 20;
let lamports = 20;
let (keypairs, _move_keypairs) =
generate_and_fund_keypairs(client.clone(), None, &id, keypair_count, lamports, false)
.unwrap();
let (keypairs, _move_keypairs, _keypair_balance) =
generate_and_fund_keypairs(&client, None, &id, keypair_count, lamports, false).unwrap();
for kp in &keypairs {
assert_eq!(
@ -1184,16 +1170,23 @@ mod tests {
let fee_calculator = FeeCalculator::new(11, 0);
genesis_config.fee_calculator = fee_calculator;
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let client = BankClient::new(bank);
let keypair_count = 20;
let lamports = 20;
let (keypairs, _move_keypairs) =
generate_and_fund_keypairs(client.clone(), None, &id, keypair_count, lamports, false)
.unwrap();
let (keypairs, _move_keypairs, _keypair_balance) =
generate_and_fund_keypairs(&client, None, &id, keypair_count, lamports, false).unwrap();
let max_fee = client
.get_recent_blockhash_with_commitment(CommitmentConfig::recent())
.unwrap()
.1
.max_lamports_per_signature;
for kp in &keypairs {
assert_eq!(client.get_balance(&kp.pubkey()).unwrap(), lamports);
assert_eq!(
client.get_balance(&kp.pubkey()).unwrap(),
lamports + max_fee
);
}
}
}

View File

@ -1,10 +1,10 @@
use clap::{crate_description, crate_name, App, Arg, ArgMatches};
use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::fee_calculator::FeeCalculator;
use solana_sdk::signature::{read_keypair_file, Keypair};
use solana_sdk::signature::{read_keypair_file, Keypair, KeypairUtil};
use std::{net::SocketAddr, process::exit, time::Duration};
const NUM_LAMPORTS_PER_ACCOUNT_DEFAULT: u64 = solana_sdk::native_token::LAMPORTS_PER_SOL;
const NUM_LAMPORTS_PER_ACCOUNT_DEFAULT: u64 = solana_sdk::native_token::SOL_LAMPORTS;
/// Holds the configuration for a single run of the benchmark
pub struct Config {

View File

@ -4,15 +4,15 @@ use solana_bench_tps::cli;
use solana_core::gossip_service::{discover_cluster, get_client, get_multi_client};
use solana_genesis::Base64Account;
use solana_sdk::fee_calculator::FeeCalculator;
use solana_sdk::signature::{Keypair, Signer};
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_program;
use std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc};
use std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit};
/// Number of signatures for all transactions in ~1 week at ~100K TPS
pub const NUM_SIGNATURES_FOR_TXS: u64 = 100_000 * 60 * 60 * 24 * 7;
fn main() {
solana_logger::setup_with_default("solana=info");
solana_logger::setup_with_filter("solana=info");
solana_metrics::set_panic_hook("bench-tps");
let matches = cli::build_args(solana_clap_utils::version!()).get_matches();
@ -82,12 +82,12 @@ fn main() {
);
exit(1);
}
Arc::new(client)
client
} else {
Arc::new(get_client(&nodes))
get_client(&nodes)
};
let (keypairs, move_keypairs) = if *read_from_client_file && !use_move {
let (keypairs, move_keypairs, keypair_balance) = if *read_from_client_file && !use_move {
let path = Path::new(&client_ids_and_stake_file);
let file = File::open(path).unwrap();
@ -117,10 +117,10 @@ fn main() {
// This prevents the amount of storage needed for bench-tps accounts from creeping up
// across multiple runs.
keypairs.sort_by(|x, y| x.pubkey().to_string().cmp(&y.pubkey().to_string()));
(keypairs, None)
(keypairs, None, last_balance)
} else {
generate_and_fund_keypairs(
client.clone(),
&client,
Some(*faucet_addr),
&id,
keypair_count,
@ -133,5 +133,11 @@ fn main() {
})
};
do_bench_tps(client, cli_config, keypairs, move_keypairs);
do_bench_tps(
vec![client],
cli_config,
keypairs,
keypair_balance,
move_keypairs,
);
}

View File

@ -8,8 +8,8 @@ use solana_faucet::faucet::run_local_faucet;
use solana_local_cluster::local_cluster::{ClusterConfig, LocalCluster};
#[cfg(feature = "move")]
use solana_sdk::move_loader::solana_move_loader_program;
use solana_sdk::signature::{Keypair, Signer};
use std::sync::{mpsc::channel, Arc};
use solana_sdk::signature::{Keypair, KeypairUtil};
use std::sync::mpsc::channel;
use std::time::Duration;
fn test_bench_tps_local_cluster(config: Config) {
@ -36,10 +36,10 @@ fn test_bench_tps_local_cluster(config: Config) {
100_000_000,
);
let client = Arc::new(create_client(
let client = create_client(
(cluster.entry_point_info.rpc, cluster.entry_point_info.tpu),
VALIDATOR_PORT_RANGE,
));
);
let (addr_sender, addr_receiver) = channel();
run_local_faucet(faucet_keypair, addr_sender, None);
@ -48,8 +48,8 @@ fn test_bench_tps_local_cluster(config: Config) {
let lamports_per_account = 100;
let keypair_count = config.tx_count * config.keypair_multiplier;
let (keypairs, move_keypairs) = generate_and_fund_keypairs(
client.clone(),
let (keypairs, move_keypairs, _keypair_balance) = generate_and_fund_keypairs(
&client,
Some(faucet_addr),
&config.id,
keypair_count,
@ -58,7 +58,7 @@ fn test_bench_tps_local_cluster(config: Config) {
)
.unwrap();
let _total = do_bench_tps(client, config, keypairs, move_keypairs);
let _total = do_bench_tps(vec![client], config, keypairs, 0, move_keypairs);
#[cfg(not(debug_assertions))]
assert!(_total > 100);

View File

@ -1,7 +1,7 @@
Building the Solana book
---
Install dependencies, build, and test the docs:
Install the book's dependencies, build, and test the book:
```bash
$ ./build.sh
@ -19,7 +19,7 @@ Render markdown as HTML:
$ make build
```
Render and view the docs:
Render and view the book:
```bash
$ make open

View File

@ -24,7 +24,7 @@ msc {
... ;
Validator abox Validator [label="\nmax\nlockout\n"];
|||;
Cluster box Cluster [label="credits redeemed (at epoch)"];
StakerX => Cluster [label="StakeState::RedeemCredits()"];
StakerY => Cluster [label="StakeState::RedeemCredits()"] ;
}

View File

@ -0,0 +1,18 @@
+------------+
| Bank-Merkle|
+------------+
^ ^
/ \
+-----------------+ +-------------+
| Bank-Diff-Merkle| | Block-Merkle|
+-----------------+ +-------------+
^ ^
/ \
+------+ +--------------------------+
| Hash | | Previous Bank-Diff-Merkle|
+------+ +--------------------------+
^ ^
/ \
+---------------+ +---------------+
| Hash(Account1)| | Hash(Account2)|
+---------------+ +---------------+

View File

@ -18,9 +18,9 @@
| | `-------` `--------` `--+---------` | | | | |
| | ^ ^ | | | `------------` |
| | | | v | | |
| | | .--+---------. | | |
| | | | Blockstore | | | |
| | | `------------` | | .------------. |
| | | .--+--------. | | |
| | | | Blocktree | | | |
| | | `-----------` | | .------------. |
| | | ^ | | | | |
| | | | | | | Downstream | |
| | .--+--. .-------+---. | | | Validators | |

View File

@ -5,9 +5,9 @@ cd "$(dirname "$0")"
usage=$(cargo -q run -p solana-cli -- -C ~/.foo --help | sed 's|'"$HOME"'|~|g')
out=${1:-src/cli/usage.md}
out=${1:-src/api-reference/cli.md}
cat src/cli/.usage.md.header > "$out"
cat src/api-reference/.cli.md > "$out"
section() {
declare mark=${2:-"###"}

6
book/build.sh Executable file
View File

@ -0,0 +1,6 @@
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
make -j"$(nproc)" test

View File

@ -1,6 +1,6 @@
BOB_SRCS=$(wildcard art/*.bob)
MSC_SRCS=$(wildcard art/*.msc)
MD_SRCS=$(wildcard src/*.md src/*/*.md)
MD_SRCS=$(wildcard src/*.md)
SVG_IMGS=$(BOB_SRCS:art/%.bob=src/.gitbook/assets/%.svg) $(MSC_SRCS:art/%.msc=src/.gitbook/assets/%.svg)

View File

Binary image changed (before and after: 542 KiB)

View File

Binary image changed (before and after: 256 KiB)

View File

Binary image changed (before and after: 269 KiB)

View File

@ -1,54 +1,14 @@
# Table of contents
* [Introduction](introduction.md)
* [Using Solana from the Command-line](cli/README.md)
* [Command-line Usage](cli/usage.md)
* [Remote Wallet](remote-wallet/README.md)
* [Ledger Hardware Wallet](remote-wallet/ledger.md)
* [Paper Wallet](paper-wallet/README.md)
* [Installation](paper-wallet/installation.md)
* [Paper Wallet Usage](paper-wallet/usage.md)
* [Offline Signing](offline-signing/README.md)
* [Durable Transaction Nonces](offline-signing/durable-nonce.md)
* [Developing Applications](apps/README.md)
* [Example: Web Wallet](apps/webwallet.md)
* [Example: Tic-Tac-Toe](apps/tictactoe.md)
* [Drones](apps/drones.md)
* [Anatomy of a Transaction](transaction.md)
* [JSON RPC API](apps/jsonrpc-api.md)
* [JavaScript API](apps/javascript-api.md)
* [Participating in Tour de SOL](tour-de-sol/README.md)
* [Useful Links & Discussion](tour-de-sol/useful-links.md)
* [Registration](tour-de-sol/registration/README.md)
* [How To Register](tour-de-sol/registration/how-to-register.md)
* [Terms of Participation](tour-de-sol/registration/terms-of-participation.md)
* [Rewards](tour-de-sol/registration/rewards.md)
* [Confidentiality](tour-de-sol/registration/confidentiality.md)
* [Registration FAQ](tour-de-sol/registration/validator-registration-and-rewards-faq.md)
* [Participation](tour-de-sol/participation/README.md)
* [Requirements to run a validator](tour-de-sol/participation/validator-technical-requirements.md)
* [Steps to create a validator](tour-de-sol/participation/steps-to-create-a-validator/README.md)
* [Install the Solana software](tour-de-sol/participation/steps-to-create-a-validator/install-the-solana-software.md)
* [Create a validator public key](tour-de-sol/participation/steps-to-create-a-validator/validator-public-key-registration.md)
* [Create and configure a Solana validator](tour-de-sol/participation/steps-to-create-a-validator/connecting-your-validator.md)
* [Confirm the Solana network is running](tour-de-sol/participation/steps-to-create-a-validator/confirm-the-solana-network-is-running.md)
* [Connect to the Solana network](tour-de-sol/participation/steps-to-create-a-validator/connect-to-the-solana-network.md)
* [Validator catch-up](tour-de-sol/participation/steps-to-create-a-validator/monitoring-your-validator.md)
* [Staking](tour-de-sol/participation/steps-to-create-a-validator/delegating-stake.md)
* [Publish information about your validator](tour-de-sol/participation/steps-to-create-a-validator/publishing-information-about-your-validator.md)
* [Dry Run 6](tour-de-sol/participation/dry-run-6.md)
* [Submitting Bugs](tour-de-sol/submitting-bugs.md)
* [Running a Validator](running-validator/README.md)
* [Validator Requirements](running-validator/validator-reqs.md)
* [Choosing a Testnet](running-validator/validator-testnet.md)
* [Installing the Validator Software](running-validator/validator-software.md)
* [Starting a Validator](running-validator/validator-start.md)
* [Staking](running-validator/validator-stake.md)
* [Monitoring a Validator](running-validator/validator-monitor.md)
* [Publishing Validator Info](running-validator/validator-info.md)
* [Troubleshooting](running-validator/validator-troubleshoot.md)
* [Running an Archiver](running-archiver.md)
* [Understanding Solana's Architecture](cluster/README.md)
* [Terminology](terminology.md)
* [Getting Started](getting-started/README.md)
* [Testnet Participation](getting-started/testnet-participation.md)
* [Example Client: Web Wallet](getting-started/webwallet.md)
* [Programming Model](programs/README.md)
* [Example: Tic-Tac-Toe](programs/tictactoe.md)
* [Drones](programs/drones.md)
* [A Solana Cluster](cluster/README.md)
* [Synchronization](cluster/synchronization.md)
* [Leader Rotation](cluster/leader-rotation.md)
* [Fork Generation](cluster/fork-generation.md)
@ -60,13 +20,45 @@
* [Performance Metrics](cluster/performance-metrics.md)
* [Anatomy of a Validator](validator/README.md)
* [TPU](validator/tpu.md)
* [TVU](validator/tvu.md)
* [Blockstore](validator/blockstore.md)
* [TVU](validator/tvu/README.md)
* [Blocktree](validator/tvu/blocktree.md)
* [Gossip Service](validator/gossip.md)
* [The Runtime](validator/runtime.md)
* [Building from Source](building-from-source.md)
* [Terminology](terminology.md)
* [Anatomy of a Transaction](transaction.md)
* [Running a Validator](running-validator/README.md)
* [Validator Requirements](running-validator/validator-reqs.md)
* [Choosing a Testnet](running-validator/validator-testnet.md)
* [Installing the Validator Software](running-validator/validator-software.md)
* [Starting a Validator](running-validator/validator-start.md)
* [Staking](running-validator/validator-stake.md)
* [Monitoring a Validator](running-validator/validator-monitor.md)
* [Publishing Validator Info](running-validator/validator-info.md)
* [Troubleshooting](running-validator/validator-troubleshoot.md)
* [Running an Archiver](running-archiver.md)
* [Paper Wallet](paper-wallet/README.md)
* [Installation](paper-wallet/installation.md)
* [Paper Wallet Usage](paper-wallet/usage.md)
* [Offline Signing](offline-signing/README.md)
* [API Reference](api-reference/README.md)
* [Transaction](api-reference/transaction-api.md)
* [Instruction](api-reference/instruction-api.md)
* [Blockstreamer](api-reference/blockstreamer.md)
* [JSON RPC API](api-reference/jsonrpc-api.md)
* [JavaScript API](api-reference/javascript-api.md)
* [solana CLI](api-reference/cli.md)
* [Accepted Design Proposals](proposals/README.md)
* [Ledger Replication](proposals/ledger-replication-to-implement.md)
* [Secure Vote Signing](proposals/vote-signing-to-implement.md)
* [Cluster Test Framework](proposals/cluster-test-framework.md)
* [Validator](proposals/validator-proposal.md)
* [Simple Payment and State Verification](proposals/simple-payment-and-state-verification.md)
* [Cross-Program Invocation](proposals/cross-program-invocation.md)
* [Inter-chain Transaction Verification](proposals/interchain-transaction-verification.md)
* [Snapshot Verification](proposals/snapshot-verification.md)
* [Bankless Leader](proposals/bankless-leader.md)
* [Slashing](proposals/slashing.md)
* [Implemented Design Proposals](implemented-proposals/README.md)
* [Blocktree](implemented-proposals/blocktree.md)
* [Cluster Software Installation and Updates](implemented-proposals/installer.md)
* [Cluster Economics](implemented-proposals/ed_overview/README.md)
* [Validation-client Economics](implemented-proposals/ed_overview/ed_validation_client_economics/README.md)
@ -77,7 +69,6 @@
* [Replication-client Economics](implemented-proposals/ed_overview/ed_replication_client_economics/README.md)
* [Storage-replication Rewards](implemented-proposals/ed_overview/ed_replication_client_economics/ed_rce_storage_replication_rewards.md)
* [Replication-client Reward Auto-delegation](implemented-proposals/ed_overview/ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md)
* [Storage Rent Economics](implemented-proposals/ed_overview/ed_storage_rent_economics.md)
* [Economic Sustainability](implemented-proposals/ed_overview/ed_economic_sustainability.md)
* [Attack Vectors](implemented-proposals/ed_overview/ed_attack_vectors.md)
* [Economic Design MVP](implemented-proposals/ed_overview/ed_mvp.md)
@ -96,19 +87,3 @@
* [Rent](implemented-proposals/rent.md)
* [Durable Transaction Nonces](implemented-proposals/durable-tx-nonces.md)
* [Validator Timestamp Oracle](implemented-proposals/validator-timestamp-oracle.md)
* [Commitment](implemented-proposals/commitment.md)
* [Snapshot Verification](implemented-proposals/snapshot-verification.md)
* [Accepted Design Proposals](proposals/README.md)
* [Ledger Replication](proposals/ledger-replication-to-implement.md)
* [Secure Vote Signing](proposals/vote-signing-to-implement.md)
* [Cluster Test Framework](proposals/cluster-test-framework.md)
* [Validator](proposals/validator-proposal.md)
* [Simple Payment and State Verification](proposals/simple-payment-and-state-verification.md)
* [Cross-Program Invocation](proposals/cross-program-invocation.md)
* [Inter-chain Transaction Verification](proposals/interchain-transaction-verification.md)
* [Snapshot Verification](proposals/snapshot-verification.md)
* [Bankless Leader](proposals/bankless-leader.md)
* [Slashing](proposals/slashing.md)
* [Tick Verification](proposals/tick-verification.md)
* [Block Confirmation](proposals/block-confirmation.md)
* [ABI Management](proposals/abi-management.md)

View File

@ -22,6 +22,12 @@ $ solana airdrop 2
// Return
"2.00000000 SOL"
// Command
$ solana airdrop 123 --lamports
// Return
"123 lamports"
```
### Get Balance

View File

@ -0,0 +1,4 @@
# API Reference
The following sections contain API reference material you may find useful when developing applications that utilize a Solana cluster.

View File

@ -0,0 +1,28 @@
# Blockstreamer
Solana supports a node type called a _blockstreamer_. This validator variation is intended for applications that need to observe the data plane without participating in transaction validation or ledger replication.
A blockstreamer runs without a vote signer, and can optionally stream ledger entries out to a Unix domain socket as they are processed. The JSON-RPC service still functions as on any other node.
To run a blockstreamer, include the `--no-signer` argument and, optionally, a `--blockstream` socket location:
```bash
$ ./multinode-demo/validator-x.sh --no-signer --blockstream <SOCKET>
```
The stream will output a series of JSON objects:
* An Entry event JSON object is sent when each ledger entry is processed, with the following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "entry"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `entry`, the entry, as JSON object
* A Block event JSON object is sent when a block is complete, with the following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "block"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `l`, the slot leader id, as base-58 encoded string
* `hash`, the [blockhash](terminology.md#blockhash), as base-58 encoded string
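For illustration, a single entry event might appear as the following line on the stream (all field values here are hypothetical, and the `entry` object is elided):

```bash
{"dt":"2019-11-14T00:00:00Z","t":"entry","s":104,"h":416,"entry":{...}}
```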

File diff suppressed because it is too large

View File

@ -0,0 +1,38 @@
# Instruction
For the purposes of building a [Transaction](../transaction.md), a more verbose instruction format is used:
* **Instruction:**
* **program\_id:** The pubkey of the on-chain program that executes the
instruction
* **accounts:** An ordered list of accounts that should be passed to
the program processing the instruction, including metadata detailing
if an account is a signer of the transaction and if it is a credit
only account.
* **data:** A byte array that is passed to the program executing the
instruction
A more compact form is actually included in a `Transaction`:
* **CompiledInstruction:**
* **program\_id\_index:** The index of the `program_id` in the
`account_keys` list
* **accounts:** An ordered list of indices into `account_keys`
specifying the accounts that should be passed to the program
processing the instruction.
* **data:** A byte array that is passed to the program executing the
instruction
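To illustrate how the two forms relate, consider a hypothetical transaction whose `account_keys` list holds three keys; an instruction that passes the first two accounts to the program referenced by the third compiles down to index form roughly as follows (the names `payer`, `recipient`, and `program` are illustrative only):

```bash
// account_keys: 0 = payer, 1 = recipient, 2 = program
// Instruction         { program_id: program, accounts: [payer, recipient], data: [...] }
// CompiledInstruction { program_id_index: 2, accounts: [0, 1], data: [...] }
```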

View File

@ -0,0 +1,994 @@
# JSON RPC API
Solana nodes accept HTTP requests using the [JSON-RPC 2.0](https://www.jsonrpc.org/specification) specification.
To interact with a Solana node inside a JavaScript application, use the [solana-web3.js](https://github.com/solana-labs/solana-web3.js) library, which gives a convenient interface for the RPC methods.
## RPC HTTP Endpoint
**Default port:** 8899, e.g. [http://localhost:8899](http://localhost:8899), [http://192.168.1.88:8899](http://192.168.1.88:8899)
## RPC PubSub WebSocket Endpoint
**Default port:** 8900, e.g. ws://localhost:8900, ws://192.168.1.88:8900
## Methods
* [confirmTransaction](jsonrpc-api.md#confirmtransaction)
* [getAccountInfo](jsonrpc-api.md#getaccountinfo)
* [getBalance](jsonrpc-api.md#getbalance)
* [getBlockCommitment](jsonrpc-api.md#getblockcommitment)
* [getBlockTime](jsonrpc-api.md#getblocktime)
* [getClusterNodes](jsonrpc-api.md#getclusternodes)
* [getConfirmedBlock](jsonrpc-api.md#getconfirmedblock)
* [getConfirmedBlocks](jsonrpc-api.md#getconfirmedblocks)
* [getEpochInfo](jsonrpc-api.md#getepochinfo)
* [getEpochSchedule](jsonrpc-api.md#getepochschedule)
* [getGenesisHash](jsonrpc-api.md#getgenesishash)
* [getLeaderSchedule](jsonrpc-api.md#getleaderschedule)
* [getMinimumBalanceForRentExemption](jsonrpc-api.md#getminimumbalanceforrentexemption)
* [getNumBlocksSinceSignatureConfirmation](jsonrpc-api.md#getnumblockssincesignatureconfirmation)
* [getProgramAccounts](jsonrpc-api.md#getprogramaccounts)
* [getRecentBlockhash](jsonrpc-api.md#getrecentblockhash)
* [getSignatureStatus](jsonrpc-api.md#getsignaturestatus)
* [getSlot](jsonrpc-api.md#getslot)
* [getSlotLeader](jsonrpc-api.md#getslotleader)
* [getSlotsPerSegment](jsonrpc-api.md#getslotspersegment)
* [getStorageTurn](jsonrpc-api.md#getstorageturn)
* [getStorageTurnRate](jsonrpc-api.md#getstorageturnrate)
* [getTransactionCount](jsonrpc-api.md#gettransactioncount)
* [getTotalSupply](jsonrpc-api.md#gettotalsupply)
* [getVersion](jsonrpc-api.md#getversion)
* [getVoteAccounts](jsonrpc-api.md#getvoteaccounts)
* [requestAirdrop](jsonrpc-api.md#requestairdrop)
* [sendTransaction](jsonrpc-api.md#sendtransaction)
* [startSubscriptionChannel](jsonrpc-api.md#startsubscriptionchannel)
* [Subscription Websocket](jsonrpc-api.md#subscription-websocket)
* [accountSubscribe](jsonrpc-api.md#accountsubscribe)
* [accountUnsubscribe](jsonrpc-api.md#accountunsubscribe)
* [programSubscribe](jsonrpc-api.md#programsubscribe)
* [programUnsubscribe](jsonrpc-api.md#programunsubscribe)
* [signatureSubscribe](jsonrpc-api.md#signaturesubscribe)
* [signatureUnsubscribe](jsonrpc-api.md#signatureunsubscribe)
## Request Formatting
To make a JSON-RPC request, send an HTTP POST request with a `Content-Type: application/json` header. The JSON request data should contain 4 fields:
* `jsonrpc`, set to `"2.0"`
* `id`, a unique client-generated identifying integer
* `method`, a string containing the method to be invoked
* `params`, a JSON array of ordered parameter values
Example using curl:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getBalance", "params":["83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri"]}' 192.168.1.88:8899
```
The response output will be a JSON object with the following fields:
* `jsonrpc`, matching the request specification
* `id`, matching the request identifier
* `result`, requested data or success confirmation
Requests can be sent in batches by sending an array of JSON-RPC request objects as the data for a single POST.
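For example, this hypothetical batch issues `getSlot` and `getTransactionCount` in a single POST; the `id` of each response matches it to its request:

```bash
curl -X POST -H "Content-Type: application/json" -d '[{"jsonrpc":"2.0","id":1,"method":"getSlot"},{"jsonrpc":"2.0","id":2,"method":"getTransactionCount"}]' http://localhost:8899
```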
## Definitions
* Hash: A SHA-256 hash of a chunk of data.
* Pubkey: The public key of an Ed25519 key-pair.
* Signature: An Ed25519 signature of a chunk of data.
* Transaction: A list of Solana instructions signed by a client key-pair.
## Configuring State Commitment
Solana nodes choose which bank state to query based on a commitment requirement
set by the client. Clients may specify either:
* `{"commitment":"max"}` - the node will query the most recent bank having reached `MAX_LOCKOUT_HISTORY` confirmations
* `{"commitment":"recent"}` - the node will query its most recent bank state
The commitment parameter should be included as the last element in the `params` array:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getBalance", "params":["83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri",{"commitment":"max"}]}' 192.168.1.88:8899
```
#### Default:
If commitment configuration is not provided, the node will default to `"commitment":"max"`
Only methods that query bank state accept the commitment parameter. They are indicated in the API Reference below.
#### RpcResponse Structure
Many methods that take a commitment parameter return an RpcResponse JSON object comprised of two parts:
* `context` : An RpcResponseContext JSON structure including a `slot` field at which the operation was evaluated.
* `value` : The value returned by the operation itself.
## JSON RPC API Reference
### confirmTransaction
Returns a transaction receipt
#### Parameters:
* `string` - Signature of Transaction to confirm, as base-58 encoded string
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `RpcResponse<boolean>` - RpcResponse JSON object with `value` field set to Transaction status, boolean true if Transaction is confirmed
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"confirmTransaction", "params":["5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"context":{"slot":1},"value":true},"id":1}
```
### getAccountInfo
Returns all information associated with the account of provided Pubkey
#### Parameters:
* `string` - Pubkey of account to query, as base-58 encoded string
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
The result value will be an RpcResponse JSON object containing an AccountInfo JSON object.
* `RpcResponse<AccountInfo>`, RpcResponse JSON object with `value` field set to AccountInfo, a JSON object containing:
* `lamports`, number of lamports assigned to this account, as a u64
* `owner`, array of 32 bytes representing the program this account has been assigned to
* `data`, array of bytes representing any data associated with the account
* `executable`, boolean indicating if the account contains a program \(and is strictly read-only\)
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getAccountInfo", "params":["2gVkYWexTHR5Hb2aLeQN3tnngvWzisFKXDUPrgMHpdST"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"context":{"slot":1},"value":{"executable":false,"owner":[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lamports":1,"data":[3,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0.22.0,0,0,0,0,0,0,50,48,53,48,45,48,49,45,48,49,84,48,48,58,48,48,58,48,48,90,252,10,7,28,246,140,88,177,98,82,10,227,89,81,18,30,194,101,199,16,11,73,133,20,246,62,114,39,20,113,189,32,50,0,0,0,0,0,0,0,247,15,36,102,167,83,225,42,133,127,82,34,36,224,207,130,109,230,224,188,163,33,213,13,5,117,211,251,65,159,197,51,0,0,0,0,0,0]}},"id":1}
```
### getBalance
Returns the balance of the account of provided Pubkey
#### Parameters:
* `string` - Pubkey of account to query, as base-58 encoded string
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `RpcResponse<u64>` - RpcResponse JSON object with `value` field set to quantity
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getBalance", "params":["83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"context":{"slot":1},"value":0},"id":1}
```
### getBlockCommitment
Returns commitment for a particular block
#### Parameters:
* `u64` - block, identified by Slot
#### Results:
The result field will be an array with two fields:
* Commitment
* `null` - Unknown block
* `object` - BlockCommitment
* `array` - commitment, array of u64 integers logging the amount of cluster stake in lamports that has voted on the block at each depth from 0 to `MAX_LOCKOUT_HISTORY`
* `integer` - total active stake, in lamports, of the current epoch
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getBlockCommitment","params":[5]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":[{"commitment":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,32]},42],"id":1}
```
### getBlockTime
Returns the estimated production time of a block. Validators report their UTC
time to the ledger on a regular interval. A block's time is calculated as an
offset from the median value of the most recent validator time report.
#### Parameters:
* `u64` - block, identified by Slot
#### Results:
* `null` - block has not yet been produced
* `i64` - estimated production time, as Unix timestamp (seconds since the Unix epoch)
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getBlockTime","params":[5]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":1574721591,"id":1}
```
### getClusterNodes
Returns information about all the nodes participating in the cluster
#### Parameters:
None
#### Results:
The result field will be an array of JSON objects, each with the following sub fields:
* `pubkey` - Node public key, as base-58 encoded string
* `gossip` - Gossip network address for the node
* `tpu` - TPU network address for the node
* `rpc` - JSON RPC network address for the node, or `null` if the JSON RPC service is not enabled
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getClusterNodes"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":[{"gossip":"10.239.6.48:8001","pubkey":"9QzsJf7LPLj8GkXbYT3LFDKqsj2hHG7TA3xinJHu8epQ","rpc":"10.239.6.48:8899","tpu":"10.239.6.48:8856"}],"id":1}
```
### getConfirmedBlock
Returns identity and transaction information about a confirmed block in the ledger
#### Parameters:
* `integer` - slot, as u64 integer
#### Results:
The result field will be an object with the following fields:
* `blockhash` - the blockhash of this block
* `previousBlockhash` - the blockhash of this block's parent
* `parentSlot` - the slot index of this block's parent
* `transactions` - an array of tuples containing:
* [Transaction](transaction-api.md) object, in JSON format
* Transaction status object, containing:
* `status` - Transaction status:
* `"Ok": null` - Transaction was successful
* `"Err": <ERR>` - Transaction failed with TransactionError [TransactionError definitions](https://github.com/solana-labs/solana/blob/master/sdk/src/transaction.rs#L14)
* `fee` - fee this transaction was charged, as u64 integer
* `preBalances` - array of u64 account balances from before the transaction was processed
* `postBalances` - array of u64 account balances after the transaction was processed
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0","id":1,"method":"getConfirmedBlock","params":[430]}' localhost:8899
// Result
{"jsonrpc":"2.0","result":{"blockhash":[165,245,120,183,32,205,89,222,249,114,229,49,250,231,149,122,156,232,181,83,238,194,157,153,7,213,180,54,177,6,25,101],"parentSlot":429,"previousBlockhash":[21,108,181,90,139,241,212,203,45,78,232,29,161,31,159,188,110,82,81,11,250,74,47,140,188,28,23,96,251,164,208,166],"transactions":[[{"message":{"accountKeys":[[5],[219,181,202,40,52,148,34,136,186,59,137,160,250,225,234,17,244,160,88,116,24,176,30,227,68,11,199,38,141,68,131,228],[233,48,179,56,91,40,254,206,53,48,196,176,119,248,158,109,121,77,11,69,108,160,128,27,228,122,146,249,53,184,68,87],[6,167,213,23,25,47,10,175,198,242,101,227,251,119,204,122,218,130,197,41,208,190,59,19,110,45,0,85,32,0,0,0],[6,167,213,23,24,199,116,201,40,86,99,152,105,29,94,182,139,94,184,163,155,75,109,92,115,85,91,33,0,0,0,0],[7,97,72,29,53,116,116,187,124,77,118,36,235,211,189,179,216,53,94,115,209,16,67,252,13,163,83,128,0,0,0,0]],"header":{"numReadonlySignedAccounts":0,"numReadonlyUnsignedAccounts":3,"numRequiredSignatures":2},"instructions":[[1],{"accounts":[[3],1,2,3],"data":[[52],2,0,0,0,1,0,0,0,0,0,0,0,173,1,0,0,0,0,0,0,86,55,9,248,142,238,135,114,103,83,247,124,67,68,163,233,55,41,59,129,64,50,110,221,234,234,27,213,205,193,219,50],"program_id_index":4}],"recentBlockhash":[21,108,181,90,139,241,212,203,45,78,232,29,161,31,159,188,110,82,81,11,250,74,47,140,188,28,23,96,251,164,208,166]},"signatures":[[2],[119,9,95,108,35,95,7,1,69,101,65,45,5,204,61,114,172,88,123,238,32,201,135,229,57,50,13,21,106,216,129,183,238,43,37,101,148,81,56,232,88,136,80,65,46,189,39,106,94,13,238,54,186,48,118,186,0,62,121,122,172,171,66,5],[78,40,77,250,10,93,6,157,48,173,100,40,251,9,7,218,7,184,43,169,76,240,254,34,235,48,41,175,119,126,75,107,106,248,45,161,119,48,174,213,57,69,111,225,245,60,148,73,124,82,53,6,203,126,120,180,111,169,89,64,29,23,237,13]]},{"fee":100000,"status":{"Ok":null},"preBalances":[499998337500,15298080,1,1,1],"postBalances":[499998237500,15298080,1,1,1]}]]},"id":1}
```
### getConfirmedBlocks
Returns a list of confirmed blocks
#### Parameters:
* `integer` - start_slot, as u64 integer
* `integer` - (optional) end_slot, as u64 integer
#### Results:
The result field will be an array of u64 integers listing confirmed blocks
between start_slot and either end_slot, if provided, or the latest confirmed block,
inclusive.
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0","id":1,"method":"getConfirmedBlocks","params":[5, 10]}' localhost:8899
// Result
{"jsonrpc":"2.0","result":[5,6,7,8,9,10],"id":1}
```
### getEpochInfo
Returns information about the current epoch
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
The result field will be an object with the following fields:
* `epoch`, the current epoch
* `slotIndex`, the current slot relative to the start of the current epoch
* `slotsInEpoch`, the number of slots in this epoch
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getEpochInfo"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"epoch":3,"slotIndex":126,"slotsInEpoch":256},"id":1}
```
### getEpochSchedule
Returns epoch schedule information from this cluster's genesis config
#### Parameters:
None
#### Results:
The result field will be an object with the following fields:
* `slots_per_epoch`, the maximum number of slots in each epoch
* `leader_schedule_slot_offset`, the number of slots before the beginning of an epoch to calculate a leader schedule for that epoch
* `warmup`, whether epochs start short and grow
* `first_normal_epoch`, first normal-length epoch, log2(slots_per_epoch) - log2(MINIMUM_SLOTS_PER_EPOCH)
* `first_normal_slot`, MINIMUM_SLOTS_PER_EPOCH * (2.pow(first_normal_epoch) - 1)
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getEpochSchedule"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"first_normal_epoch":8,"first_normal_slot":8160,"leader_schedule_slot_offset":8192,"slots_per_epoch":8192,"warmup":true},"id":1}
```
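As a sanity check on the formulas above, and assuming `MINIMUM_SLOTS_PER_EPOCH` is 32: the example result gives first_normal_epoch = log2(8192) - log2(32) = 13 - 5 = 8, and first_normal_slot = 32 \* (2^8 - 1) = 8160, matching the returned values.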
### getGenesisHash
Returns the genesis hash
#### Parameters:
None
#### Results:
* `string` - a Hash as base-58 encoded string
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getGenesisHash"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"GH7ome3EiwEr7tu9JuTh2dpYWBJK3z69Xm1ZE3MEE6JC","id":1}
```
### getLeaderSchedule
Returns the leader schedule for an epoch
#### Parameters:
* `slot` - (optional) Fetch the leader schedule for the epoch that corresponds to the provided slot. If unspecified, the leader schedule for the current epoch is fetched
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
The result field will be a dictionary of leader public keys \(as base-58 encoded
strings\) and their corresponding leader slot indices as values \(indices are
relative to the first slot in the requested epoch\)
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getLeaderSchedule"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"4Qkev8aNZcqFNSRhQzwyLMFSsi94jHqE8WNVTJzTP99F":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]},"id":1}
```
### getMinimumBalanceForRentExemption
Returns the minimum balance required to make an account rent exempt.
#### Parameters:
* `u64` - account data length
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `u64` - minimum lamports required in account
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getMinimumBalanceForRentExemption", "params":[50]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":500,"id":1}
```
### getNumBlocksSinceSignatureConfirmation
Returns the current number of blocks since the signature has been confirmed.
#### Parameters:
* `string` - Signature of Transaction to confirm, as base-58 encoded string
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `u64` - count
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getNumBlocksSinceSignatureConfirmation", "params":["5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":8,"id":1}
```
### getProgramAccounts
Returns all accounts owned by the provided program Pubkey
#### Parameters:
* `string` - Pubkey of program, as base-58 encoded string
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
The result field will be an array of arrays. Each sub array will contain:
* `string` - the account Pubkey, as base-58 encoded string, paired with a JSON object containing the following sub fields:
* `lamports`, number of lamports assigned to this account, as a u64
* `owner`, array of 32 bytes representing the program this account has been assigned to
* `data`, array of bytes representing any data associated with the account
* `executable`, boolean indicating if the account contains a program \(and is strictly read-only\)
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getProgramAccounts", "params":["8nQwAgzN2yyUzrukXsCa3JELBYqDQrqJ3UyHiWazWxHR"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":[["BqGKYtAKu69ZdWEBtZHh4xgJY1BYa2YBiBReQE3pe383", {"executable":false,"owner":[50,28,250,90,221,24,94,136,147,165,253,136,1,62,196,215,225,34,222,212,99,84,202,223,245,13,149,99,149,231,91,96],"lamports":1,"data":[]], ["4Nd1mBQtrMJVYVfKf2PJy9NZUZdTAsp7D4xWLs4gDB4T", {"executable":false,"owner":[50,28,250,90,221,24,94,136,147,165,253,136,1,62,196,215,225,34,222,212,99,84,202,223,245,13,149,99,149,231,91,96],"lamports":10,"data":[]]]},"id":1}
```
### getRecentBlockhash
Returns a recent block hash from the ledger, and a fee schedule that can be used to compute the cost of submitting a transaction using it.
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
An RpcResponse containing an array consisting of a string blockhash and a FeeCalculator JSON object.
* `RpcResponse<array>` - RpcResponse JSON object with `value` field set to an array including:
* `string` - a Hash as base-58 encoded string
* `FeeCalculator object` - the fee schedule for this block hash
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getRecentBlockhash"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"context":{"slot":1},"value":["GH7ome3EiwEr7tu9JuTh2dpYWBJK3z69Xm1ZE3MEE6JC",{"lamportsPerSignature": 0}]},"id":1}
```
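As a rough guide to using the fee schedule: the fee for a transaction is `lamportsPerSignature` multiplied by the transaction's number of required signatures, so with a hypothetical `lamportsPerSignature` of 5000, a transaction requiring two signatures would cost 10,000 lamports.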
### getSignatureStatus
Returns the status of a given signature. This method is similar to [confirmTransaction](jsonrpc-api.md#confirmtransaction) but provides more resolution for error events.
#### Parameters:
* `string` - Signature of Transaction to confirm, as base-58 encoded string
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `null` - Unknown transaction
* `object` - Transaction status:
* `"Ok": null` - Transaction was successful
* `"Err": <ERR>` - Transaction failed with TransactionError [TransactionError definitions](https://github.com/solana-labs/solana/blob/master/sdk/src/transaction.rs#L14)
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "method":"getSignatureStatus", "params":["5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW"]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"SignatureNotFound","id":1}
```
### getSlot
Returns the current slot the node is processing
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `u64` - Current slot
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getSlot"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"1234","id":1}
```
### getSlotLeader
Returns the current slot leader
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `string` - Node Id as base-58 encoded string
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getSlotLeader"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"ENvAW7JScgYq6o4zKZwewtkzzJgDzuJAFxYasvmEQdpS","id":1}
```
### getSlotsPerSegment
Returns the current storage segment size in terms of slots
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `u64` - Number of slots in a storage segment
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getSlotsPerSegment"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"1024","id":1}
```
### getStorageTurn
Returns the current storage turn's blockhash and slot
#### Parameters:
None
#### Results:
An array consisting of
* `string` - a Hash as base-58 encoded string indicating the blockhash of the turn slot
* `u64` - the current storage turn slot
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getStorageTurn"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":["GH7ome3EiwEr7tu9JuTh2dpYWBJK3z69Xm1ZE3MEE6JC", "2048"],"id":1}
```
### getStorageTurnRate
Returns the current storage turn rate in terms of slots per turn
#### Parameters:
None
#### Results:
* `u64` - Number of slots in storage turn
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getStorageTurnRate"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"1024","id":1}
```
### getTransactionCount
Returns the current Transaction count from the ledger
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `u64` - count
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getTransactionCount"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":268,"id":1}
```
### getTotalSupply
Returns the current total supply in lamports
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
* `u64` - Total supply
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getTotalSupply"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":10126,"id":1}
```
### getVersion
Returns the current Solana version running on the node
#### Parameters:
None
#### Results:
The result field will be a JSON object with the following sub fields:
* `solana-core`, software version of solana-core
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"solana-core": "0.17.2"},"id":1}
```
### getVoteAccounts
Returns the account info and associated stake for all the voting accounts in the current bank.
#### Parameters:
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment)
#### Results:
The result field will be a JSON object of `current` and `delinquent` accounts, each containing an array of JSON objects with the following sub fields:
* `votePubkey` - Vote account public key, as base-58 encoded string
* `nodePubkey` - Node public key, as base-58 encoded string
* `activatedStake` - the stake, in lamports, delegated to this vote account and active in this epoch
* `epochVoteAccount` - bool, whether the vote account is staked for this epoch
* `commission`, percentage (0-100) of rewards payout owed to the vote account
* `lastVote` - Most recent slot voted on by this vote account
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVoteAccounts"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":{"current":[{"commission":0,"epochVoteAccount":true,"nodePubkey":"B97CCUW3AEZFGy6uUg6zUdnNYvnVq5VG8PUtb2HayTDD","lastVote":147,"activatedStake":42,"votePubkey":"3ZT31jkAGhUaw8jsy4bTknwBMP8i4Eueh52By4zXcsVw"}],"delinquent":[{"commission":127,"epochVoteAccount":false,"nodePubkey":"6ZPxeQaDo4bkZLRsdNrCzchNQr5LN9QMc9sipXv9Kw8f","lastVote":0,"activatedStake":0,"votePubkey":"CmgCk4aMS7KW1SHX3s9K5tBJ6Yng2LBaC8MFov4wx9sm"}]},"id":1}
```
### requestAirdrop
Requests an airdrop of lamports to a Pubkey
#### Parameters:
* `string` - Pubkey of account to receive lamports, as base-58 encoded string
* `integer` - lamports, as a u64
* `object` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment) (used for retrieving blockhash and verifying airdrop success)
#### Results:
* `string` - Transaction Signature of airdrop, as base-58 encoded string
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"requestAirdrop", "params":["83astBRguLMdt2h5U1Tpdq5tjFoJ6noeGwaY3mDLVcri", 50]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"5VERv8NMvzbJMEkV8xnrLkEaWRtSz9CosKDYjCJjBRnbJLgp8uirBgmQpjKhoR4tjF3ZpRzrFmBV6UjKdiSZkQUW","id":1}
```
### sendTransaction
Submits a signed transaction to the cluster for processing
#### Parameters:
* `array` - array of octets containing a fully-signed Transaction
#### Results:
* `string` - Transaction Signature, as base-58 encoded string
#### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"sendTransaction", "params":[[61, 98, 55, 49, 15, 187, 41, 215, 176, 49, 234, 229, 228, 77, 129, 221, 239, 88, 145, 227, 81, 158, 223, 123, 14, 229, 235, 247, 191, 115, 199, 71, 121, 17, 32, 67, 63, 209, 239, 160, 161, 2, 94, 105, 48, 159, 235, 235, 93, 98, 172, 97, 63, 197, 160, 164, 192, 20, 92, 111, 57, 145, 251, 6, 40, 240, 124, 194, 149, 155, 16, 138, 31, 113, 119, 101, 212, 128, 103, 78, 191, 80, 182, 234, 216, 21, 121, 243, 35, 100, 122, 68, 47, 57, 13, 39, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 50, 0, 0, 0, 0, 0, 0, 0, 40, 240, 124, 194, 149, 155, 16, 138, 31, 113, 119, 101, 212, 128, 103, 78, 191, 80, 182, 234, 216, 21, 121, 243, 35, 100, 122, 68, 47, 57, 11, 12, 106, 49, 74, 226, 201, 16, 161, 192, 28, 84, 124, 97, 190, 201, 171, 186, 6, 18, 70, 142, 89, 185, 176, 154, 115, 61, 26, 163, 77, 1, 88, 98, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"2EBVM6cB8vAAD93Ktr6Vd8p67XPbQzCJX47MpReuiCXJAtcjaxpvWpcg9Ege1Nr5Tk3a2GFrByT7WPBjdsTycY9b","id":1}
```
### Subscription Websocket
After connecting to the RPC PubSub websocket at `ws://<ADDRESS>/`:
* Submit subscription requests to the websocket using the methods below
* Multiple subscriptions may be active at once
* All subscriptions take an optional `confirmations` parameter, which defines
how many confirmed blocks the node should wait before sending a notification.
The greater the number, the more likely the notification is to represent
consensus across the cluster, and the less likely it is to be affected by
forking or rollbacks. If unspecified, the default value is 0; the node will
send a notification as soon as it witnesses the event. The maximum
`confirmations` wait length is the cluster's `MAX_LOCKOUT_HISTORY`, which
represents the economic finality of the chain.
### accountSubscribe
Subscribe to an account to receive notifications when the lamports or data for a given account public key changes
#### Parameters:
* `string` - account Pubkey, as base-58 encoded string
* `integer` - optional, number of confirmed blocks to wait before notification.
Default: 0, Max: `MAX_LOCKOUT_HISTORY` \(greater integers rounded down\)
#### Results:
* `integer` - Subscription id \(needed to unsubscribe\)
#### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"accountSubscribe", "params":["CM78CPUeXjn8o3yroDHxUtKsZZgoy4GPkPPXfouKNH12"]}
{"jsonrpc":"2.0", "id":1, "method":"accountSubscribe", "params":["CM78CPUeXjn8o3yroDHxUtKsZZgoy4GPkPPXfouKNH12", 15]}
// Result
{"jsonrpc": "2.0","result": 0,"id": 1}
```
#### Notification Format:
```bash
{"jsonrpc": "2.0","method": "accountNotification", "params": {"result": {"executable":false,"owner":[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lamports":1,"data":[3,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0.22.0,0,0,0,0,0,0,50,48,53,48,45,48,49,45,48,49,84,48,48,58,48,48,58,48,48,90,252,10,7,28,246,140,88,177,98,82,10,227,89,81,18,30,194,101,199,16,11,73,133,20,246,62,114,39,20,113,189,32,50,0,0,0,0,0,0,0,247,15,36,102,167,83,225,42,133,127,82,34,36,224,207,130,109,230,224,188,163,33,213,13,5,117,211,251,65,159,197,51,0,0,0,0,0,0]},"subscription":0}}
```
### accountUnsubscribe
Unsubscribe from account change notifications
#### Parameters:
* `integer` - id of account Subscription to cancel
#### Results:
* `bool` - unsubscribe success message
#### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"accountUnsubscribe", "params":[0]}
// Result
{"jsonrpc": "2.0","result": true,"id": 1}
```
### programSubscribe
Subscribe to a program to receive notifications when the lamports or data for a given account owned by the program changes
#### Parameters:
* `string` - program\_id Pubkey, as base-58 encoded string
* `integer` - optional, number of confirmed blocks to wait before notification.
Default: 0, Max: `MAX_LOCKOUT_HISTORY` \(greater integers rounded down\)
#### Results:
* `integer` - Subscription id \(needed to unsubscribe\)
#### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"programSubscribe", "params":["9gZbPtbtHrs6hEWgd6MbVY9VPFtS5Z8xKtnYwA2NynHV"]}
{"jsonrpc":"2.0", "id":1, "method":"programSubscribe", "params":["9gZbPtbtHrs6hEWgd6MbVY9VPFtS5Z8xKtnYwA2NynHV", 15]}
// Result
{"jsonrpc": "2.0","result": 0,"id": 1}
```
#### Notification Format:
* `string` - account Pubkey, as base-58 encoded string
* `object` - account info JSON object \(see [getAccountInfo](jsonrpc-api.md#getaccountinfo) for field details\)
```bash
{"jsonrpc":"2.0","method":"programNotification","params":{{"result":["8Rshv2oMkPu5E4opXTRyuyBeZBqQ4S477VG26wUTFxUM",{"executable":false,"lamports":1,"owner":[129,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"data":[1,1,1,0,0,0,0,0,0,0.22.0,0,0,0,0,0,0,50,48,49,56,45,49,50,45,50,52,84,50,51,58,53,57,58,48,48,90,235,233,39,152,15,44,117,176,41,89,100,86,45,61,2,44,251,46,212,37,35,118,163,189,247,84,27,235,178,62,55,89,0,0,0,0,50,0,0,0,0,0,0,0,235,233,39,152,15,44,117,176,41,89,100,86,45,61,2,44,251,46,212,37,35,118,163,189,247,84,27,235,178,62,45,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]}],"subscription":0}}
```
### programUnsubscribe
Unsubscribe from program-owned account change notifications
#### Parameters:
* `integer` - id of account Subscription to cancel
#### Results:
* `bool` - unsubscribe success message
#### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"programUnsubscribe", "params":[0]}
// Result
{"jsonrpc": "2.0","result": true,"id": 1}
```
### signatureSubscribe
Subscribe to a transaction signature to receive notification when the transaction is confirmed. On `signatureNotification`, the subscription is automatically cancelled.
#### Parameters:
* `string` - Transaction Signature, as base-58 encoded string
* `integer` - optional, number of confirmed blocks to wait before notification.
Default: 0, Max: `MAX_LOCKOUT_HISTORY` \(greater integers rounded down\)
#### Results:
* `integer` - subscription id \(needed to unsubscribe\)
#### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"signatureSubscribe", "params":["2EBVM6cB8vAAD93Ktr6Vd8p67XPbQzCJX47MpReuiCXJAtcjaxpvWpcg9Ege1Nr5Tk3a2GFrByT7WPBjdsTycY9b"]}
{"jsonrpc":"2.0", "id":1, "method":"signatureSubscribe", "params":["2EBVM6cB8vAAD93Ktr6Vd8p67XPbQzCJX47MpReuiCXJAtcjaxpvWpcg9Ege1Nr5Tk3a2GFrByT7WPBjdsTycY9b", 15]}
// Result
{"jsonrpc": "2.0","result": 0,"id": 1}
```
#### Notification Format:
```bash
{"jsonrpc": "2.0","method": "signatureNotification", "params": {"result": "Confirmed","subscription":0}}
```
### signatureUnsubscribe
Unsubscribe from signature confirmation notification
#### Parameters:
* `integer` - subscription id to cancel
#### Results:
* `bool` - unsubscribe success message
#### Example:
```bash
// Request
{"jsonrpc":"2.0", "id":1, "method":"signatureUnsubscribe", "params":[0]}
// Result
{"jsonrpc": "2.0","result": true,"id": 1}
```

View File

@ -0,0 +1,62 @@
# Transaction
## Components of a `Transaction`
* **Transaction:**
* **message:** Defines the transaction
* **header:** Details the account types of and signatures required by
the transaction
* **num\_required\_signatures:** The total number of signatures
required to make the transaction valid.
* **num\_credit\_only\_signed\_accounts:** The last
`num_readonly_signed_accounts` signatures refer to signing
credit only accounts. Credit only accounts can be used concurrently
by multiple parallel transactions, but their balance may only be
increased, and their account data is read-only.
* **num\_credit\_only\_unsigned\_accounts:** The last
`num_readonly_unsigned_accounts` public keys in `account_keys` refer
to non-signing credit only accounts
* **account\_keys:** List of public keys used by the transaction, both by
the instructions and for signatures. The first
`num_required_signatures` public keys must sign the transaction.
* **recent\_blockhash:** The ID of a recent ledger entry. Validators will
reject transactions with a `recent_blockhash` that is too old.
* **instructions:** A list of [instructions](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/instruction.md) that are
run sequentially and committed in one atomic transaction if all
succeed.
* **signatures:** A list of signatures applied to the transaction. The
list is always of length `num_required_signatures`, and the signature
at index `i` corresponds to the public key at index `i` in `account_keys`.
The list is initialized with empty signatures \(i.e. zeros\), and
populated as signatures are added.
## Transaction Signing
A `Transaction` is signed by using an ed25519 keypair to sign the serialization of the `message`. The resulting signature is placed at the index of `signatures` matching the index of the keypair's public key in `account_keys`.
## Transaction Serialization
`Transaction`s \(and their `message`s\) are serialized and deserialized using the [bincode](https://crates.io/crates/bincode) crate with a non-standard vector serialization that uses only one byte for the length if it can be encoded in 7 bits, 2 bytes if it fits in 14 bits, or 3 bytes if it requires 15 or 16 bits. The vector serialization is defined by Solana's [short-vec](https://github.com/solana-labs/solana/blob/master/sdk/src/short_vec.rs).
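A few worked length encodings illustrate the scheme \(7 bits of the length per byte, least-significant bits first, with the high bit of a byte set when more bytes follow\):

```bash
# length 69 (0x45)      -> 1 byte:  0x45
# length 128 (0x80)     -> 2 bytes: 0x80 0x01
# length 16384 (0x4000) -> 3 bytes: 0x80 0x80 0x01
```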

View File

@ -1,10 +1,10 @@
# A Solana Cluster
A Solana cluster is a set of validators working together to serve client transactions and maintain the integrity of the ledger. Many clusters may coexist. When two clusters share a common genesis block, they attempt to converge. Otherwise, they simply ignore the existence of the other. Transactions sent to the wrong one are quietly rejected. In this section, we'll discuss how a cluster is created, how nodes join the cluster, how they share the ledger, how they ensure the ledger is replicated, and how they cope with buggy and malicious nodes.
A Solana cluster is a set of validators working together to serve client transactions and maintain the integrity of the ledger. Many clusters may coexist. When two clusters share a common genesis block, they attempt to converge. Otherwise, they simply ignore the existence of the other. Transactions sent to the wrong one are quietly rejected. In this chapter, we'll discuss how a cluster is created, how nodes join the cluster, how they share the ledger, how they ensure the ledger is replicated, and how they cope with buggy and malicious nodes.
## Creating a Cluster
Before starting any validators, one first needs to create a _genesis config_. The config references two public keys, a _mint_ and a _bootstrap validator_. The validator holding the bootstrap validator's private key is responsible for appending the first entries to the ledger. It initializes its internal state with the mint's account. That account will hold the number of native tokens defined by the genesis config. The second validator then contacts the bootstrap validator to register as a _validator_ or _archiver_. Additional validators then register with any registered member of the cluster.
Before starting any validators, one first needs to create a _genesis config_. The config references two public keys, a _mint_ and a _bootstrap leader_. The validator holding the bootstrap leader's private key is responsible for appending the first entries to the ledger. It initializes its internal state with the mint's account. That account will hold the number of native tokens defined by the genesis config. The second validator then contacts the bootstrap leader to register as a _validator_ or _archiver_. Additional validators then register with any registered member of the cluster.
A validator receives all entries from the leader and submits votes confirming those entries are valid. After voting, the validator is expected to store those entries until archiver nodes submit proofs that they have stored copies of it. Once the validator observes a sufficient number of copies exist, it deletes its copy.
@ -37,4 +37,4 @@ Solana rotates leaders at fixed intervals, called _slots_. Each leader may only
Next, transactions are broken into batches so that a node can send transactions to multiple parties without making multiple copies. If, for example, the leader needed to send 60 transactions to 6 nodes, it would break that collection of 60 into batches of 10 transactions and send one to each node. This allows the leader to put 60 transactions on the wire, not 60 transactions for each node. Each node then shares its batch with its peers. Once the node has collected all 6 batches, it reconstructs the original set of 60 transactions.
A batch of transactions can only be split so many times before it is so small that header information becomes the primary consumer of network bandwidth. At the time of this writing, the approach is scaling well up to about 150 validators. To scale up to hundreds of thousands of validators, each node can apply the same technique as the leader node to another set of nodes of equal size. We call the technique [_Turbine Block Propagation_](turbine-block-propagation.md).
A batch of transactions can only be split so many times before it is so small that header information becomes the primary consumer of network bandwidth. At the time of this writing, the approach is scaling well up to about 150 validators. To scale up to hundreds of thousands of validators, each node can apply the same technique as the leader node to another set of nodes of equal size. We call the technique _data plane fanout_; learn more in the [data plane fanout](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/data-plane-fanout.md) section.

View File

@ -1,6 +1,6 @@
# Fork Generation
This section describes how forks naturally occur as a consequence of [leader rotation](leader-rotation.md).
This chapter describes how forks naturally occur as a consequence of [leader rotation](leader-rotation.md).
## Overview

View File

@ -1,6 +1,6 @@
# Managing Forks
The ledger is permitted to fork at slot boundaries. The resulting data structure forms a tree called a _blockstore_. When the validator interprets the blockstore, it must maintain state for each fork in the chain. We call each instance an _active fork_. It is the responsibility of a validator to weigh those forks, such that it may eventually select a fork.
The ledger is permitted to fork at slot boundaries. The resulting data structure forms a tree called a _blocktree_. When the validator interprets the blocktree, it must maintain state for each fork in the chain. We call each instance an _active fork_. It is the responsibility of a validator to weigh those forks, such that it may eventually select a fork.
A validator selects a fork by submitting a vote to a slot leader on that fork. The vote commits the validator for a duration of time called a _lockout period_. The validator is not permitted to vote on a different fork until that lockout period expires. Each subsequent vote on the same fork doubles the length of the lockout period. After some cluster-configured number of votes \(currently 32\), the length of the lockout period reaches what's called _max lockout_. Until the max lockout is reached, the validator has the option to wait until the lockout period is over and then vote on another fork. When it votes on another fork, it performs an operation called _rollback_, whereby the state rolls back in time to a shared checkpoint and then jumps forward to the tip of the fork that it just voted on. The maximum distance that a fork may roll back is called the _rollback depth_. Rollback depth is the number of votes required to achieve max lockout. Whenever a validator votes, any checkpoints beyond the rollback depth become unreachable. That is, there is no scenario in which the validator will need to roll back beyond rollback depth. It therefore may safely _prune_ unreachable forks and _squash_ all checkpoints beyond rollback depth into the root checkpoint.
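To make the doubling concrete, assuming the base lockout is two slots: a validator's first vote on a fork locks it out for 2 slots, a second consecutive vote for 4, a third for 8, and so on, until after 32 consecutive votes the lockout reaches 2^32 slots, the max lockout.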

View File

@ -1,6 +1,6 @@
# Stake Delegation and Rewards
Stakers are rewarded for helping to validate the ledger. They do this by delegating their stake to validator nodes. Those validators do the legwork of replaying the ledger and send votes to a per-node vote account to which stakers can delegate their stakes. The rest of the cluster uses those stake-weighted votes to select a block when forks arise. Both the validator and staker need some economic incentive to play their part. The validator needs to be compensated for its hardware and the staker needs to be compensated for the risk of getting its stake slashed. The economics are covered in [staking rewards](../implemented-proposals/staking-rewards.md). This section, on the other hand, describes the underlying mechanics of its implementation.
Stakers are rewarded for helping to validate the ledger. They do this by delegating their stake to validator nodes. Those validators do the legwork of replaying the ledger and send votes to a per-node vote account to which stakers can delegate their stakes. The rest of the cluster uses those stake-weighted votes to select a block when forks arise. Both the validator and staker need some economic incentive to play their part. The validator needs to be compensated for its hardware and the staker needs to be compensated for the risk of getting its stake slashed. The economics are covered in [staking rewards](../proposals/staking-rewards.md). This chapter, on the other hand, describes the underlying mechanics of its implementation.
## Basic Design
@ -29,7 +29,11 @@ VoteState is the current state of all the votes the validator has submitted to t
* Account::lamports - The accumulated lamports from the commission. These do not count as stakes.
* `authorized_voter` - Only this identity is authorized to submit votes. This field can only be modified by this identity.
* `node_pubkey` - The Solana node that votes in this account.
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address and the authorized vote signer
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's
```text
address and the authorized vote signer
```
### VoteInstruction::Initialize\(VoteInit\)
@ -44,11 +48,13 @@ VoteState is the current state of all the votes the validator has submitted to t
Updates the account with a new authorized voter or withdrawer, according to the VoteAuthorize parameter \(`Voter` or `Withdrawer`\). The transaction must be signed by the Vote account's current `authorized_voter` or `authorized_withdrawer`.
* `account[0]` - RW - The VoteState
`VoteState::authorized_voter` or `authorized_withdrawer` is set to `Pubkey`.
### VoteInstruction::Vote\(Vote\)
* `account[0]` - RW - The VoteState
`VoteState::lockouts` and `VoteState::credits` are updated according to voting lockout rules see [Tower BFT](../implemented-proposals/tower-bft.md)
* `account[1]` - RO - `sysvar::slot_hashes` A list of some N most recent slots and their hashes for the vote to be verified against.
@ -67,10 +73,18 @@ StakeState::Stake is the current delegation preference of the **staker** and con
* `voter_pubkey` - The pubkey of the VoteState instance the lamports are delegated to.
* `credits_observed` - The total credits claimed over the lifetime of the program.
* `activated` - the epoch at which this stake was activated/delegated. The full stake will be counted after warm up.
* `deactivated` - the epoch at which this stake was de-activated, some cool down epochs are required before the account is fully deactivated, and the stake available for withdrawal
* `deactivated` - the epoch at which this stake was de-activated, some cool down epochs are required before the account
```text
is fully deactivated, and the stake available for withdrawal
```
* `authorized_staker` - the pubkey of the entity that must sign delegation, activation, and deactivation transactions
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address, and the authorized staker
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's
```text
address, and the authorized staker
```
### StakeState::RewardsPool
@ -80,22 +94,42 @@ The Stakes and the RewardsPool are accounts that are owned by the same `Stake` p
### StakeInstruction::DelegateStake
The Stake account is moved from Initialized to StakeState::Stake form, or from a deactivated (i.e. fully cooled-down) StakeState::Stake to activated StakeState::Stake. This is how stakers choose the vote account and validator node to which their stake account lamports are delegated. The transaction must be signed by the stake's `authorized_staker`.
The Stake account is moved from Ininitialized to StakeState::Stake form. This is how stakers choose their initial delegate validator node and activate their stake account lamports. The transaction must be signed by the stake's `authorized_staker`. If the stake account is already StakeState::Stake \(i.e. already activated\), the stake is re-delegated. Stakes may be re-delegated at any time, and updated stakes are reflected immediately, but only one re-delegation is permitted per epoch.
* `account[0]` - RW - The StakeState::Stake instance. `StakeState::Stake::credits_observed` is initialized to `VoteState::credits`, `StakeState::Stake::voter_pubkey` is initialized to `account[1]`. If this is the initial delegation of stake, `StakeState::Stake::stake` is initialized to the account's balance in lamports, `StakeState::Stake::activated` is initialized to the current Bank epoch, and `StakeState::Stake::deactivated` is initialized to std::u64::MAX
* `account[1]` - R - The VoteState instance.
* `account[2]` - R - sysvar::clock account, carries information about current Bank epoch
* `account[3]` - R - sysvar::stakehistory account, carries information about stake history
* `account[4]` - R - stake::Config account, carries warmup, cooldown, and slashing configuration
* `account[3]` - R - stake::Config account, carries warmup, cooldown, and slashing configuration
### StakeInstruction::Authorize\(Pubkey, StakeAuthorize\)
Updates the account with a new authorized staker or withdrawer, according to the StakeAuthorize parameter \(`Staker` or `Withdrawer`\). The transaction must be signed by the Stake account's current `authorized_staker` or `authorized_withdrawer`. Any stake lock-up must have expired, or the lock-up custodian must also sign the transaction.
Updates the account with a new authorized staker or withdrawer, according to the StakeAuthorize parameter \(`Staker` or `Withdrawer`\). The transaction must be signed by the Stake account's current `authorized_staker` or `authorized_withdrawer`.
* `account[0]` - RW - The StakeState
`StakeState::authorized_staker` or `authorized_withdrawer` is set to `Pubkey`.
### StakeInstruction::RedeemVoteCredits
The staker or the owner of the Stake account sends a transaction with this instruction to claim rewards.
The Vote account and the Stake account pair maintain a lifetime counter of total rewards generated and claimed. Rewards are paid according to a point value supplied by the Bank from inflation. A `point` is one credit \* one staked lamport; rewards paid are proportional to the number of lamports staked.
* `account[0]` - RW - The StakeState::Stake instance that is redeeming rewards.
* `account[1]` - R - The VoteState instance, must be the same as `StakeState::voter_pubkey`
* `account[2]` - RW - The StakeState::RewardsPool instance that will fulfill the request \(picked at random\).
* `account[3]` - R - sysvar::rewards account from the Bank that carries point value.
* `account[4]` - R - sysvar::stake\_history account from the Bank that carries stake warmup/cooldown history
Reward is paid out for the difference between `VoteState::credits` and `StakeState::Stake::credits_observed`, multiplied by `sysvar::rewards::Rewards::validator_point_value`. `StakeState::Stake::credits_observed` is updated to `VoteState::credits`. The commission is deposited into the Vote account token balance, and the reward is deposited to the Stake account token balance and the stake account's `stake` is increased by the same amount \(re-invested\).
```text
let credits_to_claim = vote_state.credits - stake_state.credits_observed;
stake_state.credits_observed = vote_state.credits;
```
`credits_to_claim` is used to compute the reward and commission, and `StakeState::Stake::credits_observed` is updated to the latest `VoteState::credits` value.
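Putting the pieces above together, a minimal sketch of the payout arithmetic \(the names are illustrative; the real program operates on the `VoteState` and `StakeState` accounts listed above\):

```rust
struct VoteView { credits: u64, commission_percent: u64 }
struct StakeView { stake_lamports: u64, credits_observed: u64 }

/// Returns (stake_reward, commission) in lamports. A point is one
/// credit * one staked lamport; `point_value` comes from sysvar::rewards.
fn redeem(vote: &VoteView, stake: &mut StakeView, point_value: f64) -> (u64, u64) {
    let credits_to_claim = vote.credits - stake.credits_observed;
    let points = credits_to_claim * stake.stake_lamports;
    let total = (points as f64 * point_value) as u64;
    let commission = total * vote.commission_percent / 100;
    stake.credits_observed = vote.credits; // advance the observation
    // The commission goes to the Vote account; the remainder is deposited
    // to the Stake account and re-invested by increasing `stake`.
    (total - commission, commission)
}
```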
### StakeInstruction::Deactivate
A staker may wish to withdraw from the network. To do so they must first deactivate their stake and wait for cool-down.
@ -128,11 +162,11 @@ Lamports build up over time in a Stake account and any excess over activated sta
## Staking Rewards
The specific mechanics and rules of the validator rewards regime are outlined here. Rewards are earned by delegating stake to a validator that is voting correctly. Voting incorrectly exposes that validator's stakes to [slashing](../proposals/slashing.md).
The specific mechanics and rules of the validator rewards regime are outlined here. Rewards are earned by delegating stake to a validator that is voting correctly. Voting incorrectly exposes that validator's stakes to [slashing](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/staking-and-rewards.md).
### Basics
The network pays rewards from a portion of network [inflation](../terminology.md#inflation). The number of lamports available to pay rewards for an epoch is fixed and must be evenly divided among all staked nodes according to their relative stake weight and participation. The weighting unit is called a [point](../terminology.md#point).
The network pays rewards from a portion of network [inflation](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/inflation.md). The number of lamports available to pay rewards for an epoch is fixed and must be evenly divided among all staked nodes according to their relative stake weight and participation. The weighting unit is called a [point](../terminology.md#point).
Rewards for an epoch are not available until the end of that epoch.
@ -154,7 +188,7 @@ Stakers who have delegated to that validator earn points in proportion to their
Stakes, once delegated, do not become effective immediately. They must first pass through a warm up period. During this period some portion of the stake is considered "effective", while the rest is considered "activating". Changes occur on epoch boundaries.
The stake program limits the rate of change to total network stake, reflected in the stake program's `config::warmup_rate` \(set to 25% per epoch in the current implementation\).
The stake program limits the rate of change to total network stake, reflected in the stake program's `config::warmup_rate` \(typically 25% per epoch\).
The amount of stake that can be warmed up each epoch is a function of the previous epoch's total effective stake, total activating stake, and the stake program's configured warmup rate.
@ -166,14 +200,14 @@ Rewards are paid against the "effective" portion of the stake for that epoch.
#### Warmup example
Consider the situation of a single stake of 1,000 activated at epoch N, with network warmup rate of 20%, and a quiescent total network stake at epoch N of 2,000.
Consider the situation of a single stake of 1,000 activated at epoch N, with network warmup rate of 20%, and a quiescent total network stake at epoch N of 2,000.
At epoch N+1, the amount available to be activated for the network is 400 \(20% of 2,000\), and at epoch N, this example stake is the only stake activating, and so is entitled to all of the warmup room available.
| epoch | effective | activating | total effective | total activating |
| :--- | ---: | ---: | ---: | ---: |
| N-1 | | | 2,000 | 0 |
| N | 0 | 1,000 | 2,000 | 1,000 |
| N | 0 | 1,000 | 2,000 | 1,000 |
| N+1 | 400 | 600 | 2,400 | 600 |
| N+2 | 880 | 120 | 2,880 | 120 |
| N+3 | 1,000 | 0 | 3,000 | 0 |
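The table above can be reproduced with a short loop. A sketch using the example's parameters \(the real computation also consults the stake-history sysvar\):

```rust
fn main() {
    let rate = 0.20; // example warmup rate
    let base = 2_000.0; // quiescent total effective stake at epoch N
    let (mut effective, mut activating) = (0.0f64, 1_000.0f64);
    for epoch in 1..=3 {
        // Warmup room is a fraction of the prior epoch's total effective stake.
        let room = rate * (base + effective);
        let newly_effective = activating.min(room);
        effective += newly_effective;
        activating -= newly_effective;
        // Prints N+1: 400/600, N+2: 880/120, N+3: 1000/0 (the table rows above).
        println!("N+{}: effective={} activating={}", epoch, effective, activating);
    }
}
```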
@ -183,7 +217,7 @@ Were 2 stakes \(X and Y\) to activate at epoch N, they would be awarded a portio
| epoch | X eff | X act | Y eff | Y act | total effective | total activating |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| N-1 | | | | | 2,000 | 0 |
| N | 0 | 1,000 | 0 | 200 | 2,000 | 1,200 |
| N | 0 | 1,000 | 0 | 200 | 2,000 | 1,200 |
| N+1 | 333 | 667 | 67 | 133 | 2,400 | 800 |
| N+2 | 733 | 267 | 146 | 54 | 2,880 | 321 |
| N+3 | 1,000 | 0 | 200 | 0 | 3,200 | 0 |
@ -194,4 +228,4 @@ Only lamports in excess of effective+activating stake may be withdrawn at any ti
### Lock-up
Stake accounts support the notion of lock-up, wherein the stake account balance is unavailable for withdrawal until a specified time. Lock-up is specified as an epoch height, i.e. the minimum epoch height that must be reached by the network before the stake account balance is available for withdrawal, unless the transaction is also signed by a specified custodian. This information is gathered when the stake account is created, and stored in the Lockup field of the stake account's state. Changing the authorized staker or withdrawer is also subject to lock-up, as such an operation is effectively a transfer.
Stake accounts support the notion of lock-up, wherein the stake account balance is unavailable for withdrawal until a specified time. Lock-up is specified as an epoch height, i.e. the minimum epoch height that must be reached by the network before the stake account balance is available for withdrawal, unless the transaction is also signed by a specified custodian. This information is gathered when the stake account is created, and stored in the Lockup field of the stake account's state.
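A minimal sketch of that withdrawal gate, assuming illustrative types \(the real check lives in the stake program\):

```rust
struct Lockup {
    epoch: u64,          // minimum network epoch before withdrawal
    custodian: [u8; 32], // may override the lock-up by co-signing
}

/// Withdrawal is permitted once the lock-up epoch is reached, or at any
/// time if the designated custodian signed the transaction.
fn may_withdraw(lockup: &Lockup, current_epoch: u64, signers: &[[u8; 32]]) -> bool {
    current_epoch >= lockup.epoch || signers.contains(&lockup.custodian)
}
```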

View File

@ -8,7 +8,7 @@ During its slot, the leader node distributes shreds between the validator nodes
In order for data plane fanout to work, the entire cluster must agree on how the cluster is divided into neighborhoods. To achieve this, all the recognized validator nodes \(the TVU peers\) are sorted by stake and stored in a list. This list is then indexed in different ways to figure out neighborhood boundaries and retransmit peers. For example, the leader will simply select the first nodes to make up layer 0. These will automatically be the highest stake holders, allowing the heaviest votes to come back to the leader first. Layer-0 and lower-layer nodes use the same logic to find their neighbors and next layer peers.
To reduce the possibility of attack vectors, each shred is transmitted over a random tree of neighborhoods. Each node uses the same set of nodes representing the cluster. A random tree is generated from the set for each shred using a seed derived from the leader id, slot and shred index.
To reduce the possibility of attack vectors, each shred is transmitted over a random tree of neighborhoods. Each node uses the same set of nodes representing the cluster. A random tree is generated from the set for each shred using randomness derived from the shred itself. Since the random seed is not known in advance, attacks that try to eclipse neighborhoods from certain leaders or blocks become very difficult, and should require almost complete control of the stake in the cluster.
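A toy sketch of the determinism requirement: every node derives the same seed from per-shred data and runs the same shuffle, so all nodes agree on the tree without coordinating. The hash and PRNG below are stand-ins, not the production choices:

```rust
/// Deterministically shuffle the cluster's node list for one shred.
/// `seed` must be nonzero and derived identically on every node
/// (e.g. from the shred's signature, or leader/slot/index data).
fn shuffle_peers(mut peers: Vec<u64>, mut seed: u64) -> Vec<u64> {
    // Fisher-Yates driven by a tiny xorshift PRNG (placeholder).
    let mut next = |bound: usize| {
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        (seed % bound as u64) as usize
    };
    for i in (1..peers.len()).rev() {
        let j = next(i + 1);
        peers.swap(i, j);
    }
    peers
}
```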
## Layer and Neighborhood Structure

View File

@ -1,12 +1,12 @@
## Storage Rent Economics
Each transaction that is submitted to the Solana ledger imposes costs. Transaction fees paid by the submitter, and collected by a validator, in theory, account for the acute, transactional, costs of validating and adding that data to the ledger. At the same time, our compensation design for archivers (see [Replication-client Economics](ed_replication_client_economics.md)), in theory, accounts for the long term storage of the historical ledger. Unaccounted in this process is the mid-term storage of active ledger state, necessarily maintained by the rotating validator set. This type of storage imposes costs not only to validators but also to the broader network as active state grows so does data transmission and validation overhead. To account for these costs, we describe here our preliminary design and implementation of storage rent.
Each transaction that is submitted to the Solana ledger imposes costs. Transaction fees paid by the submitter, and collected by a validator, in theory, account for the acute, transacitonal, costs of validating and adding that data to the ledger. At the same time, our compensation design for archivers (see [Replication-client Economics](ed_replication_client_economics.md)), in theory, accounts for the long term storage of the historical ledger. Unaccounted in this process is the mid-term storage of active ledger state, necessarily maintined by the rotating validator set. This type of storage imposes costs not only to validators but also to the broader network as active state grows so does data transmission and validation overhead. To account for these costs, we describe here our preliminary design and implementation of storage rent.
Storage rent can be paid via one of two methods:
Method 1: Set it and forget it
With this approach, accounts with two-years worth of rent deposits secured are exempt from network rent charges. By maintaining this minimum-balance, the broader network benefits from reduced liquidity and the account holder can trust that their `Account::data` will be retained for continual access/usage.
With this approach, accounts with two-years worth of rent deposits secured are exempt from network rent charges. By maintaining this minimum-balance, the broader network benefits from reduced liquitity and the account holder can trust that their `Account::data` will be retained for continual access/usage.
Method 2: Pay per byte

View File

@ -1,4 +1,4 @@
# Building from Source
# Getting Started
The Solana git repository contains all the scripts you might need to spin up your own local testnet. Depending on what you're looking to achieve, you may want to run a different variation, as the full-fledged, performance-enhanced multinode testnet is considerably more complex to set up than a Rust-only, singlenode testnode. If you are looking to develop high-level features, such as experimenting with smart contracts, save yourself some setup headaches and stick to the Rust-only singlenode demo. If you're doing performance optimization of the transaction pipeline, consider the enhanced singlenode demo. If you're doing consensus work, you'll need at least a Rust-only multinode demo. If you want to reproduce our TPS metrics, run the enhanced multinode demo.
@ -52,12 +52,12 @@ $ NDEBUG=1 ./multinode-demo/faucet.sh
### Singlenode Testnet
Before you start a validator, make sure you know the IP address of the machine you want to be the bootstrap validator for the demo, and make sure that udp ports 8000-10000 are open on all the machines you want to test with.
Before you start a validator, make sure you know the IP address of the machine you want to be the bootstrap leader for the demo, and make sure that udp ports 8000-10000 are open on all the machines you want to test with.
Now start the bootstrap validator in a separate shell:
Now start the bootstrap leader in a separate shell:
```bash
$ NDEBUG=1 ./multinode-demo/bootstrap-validator.sh
$ NDEBUG=1 ./multinode-demo/bootstrap-leader.sh
```
Wait a few seconds for the server to initialize. It will print "leader ready..." when it's ready to receive transactions. The leader will request some tokens from the faucet if it doesn't have any. The faucet does not need to be running for subsequent leader starts.
@ -74,7 +74,7 @@ To run a performance-enhanced validator on Linux, [CUDA 10.0](https://developer.
```bash
$ ./fetch-perf-libs.sh
$ NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/bootstrap-validator.sh
$ NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/bootstrap-leader.sh
$ NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/validator.sh
```
@ -121,40 +121,12 @@ thread apply all bt
This will dump all the threads stack traces into gdb.txt
### Blockstreamer
Solana supports a node type called a _blockstreamer_. This validator variation is intended for applications that need to observe the data plane without participating in transaction validation or ledger replication.
A blockstreamer runs without a vote signer, and can optionally stream ledger entries out to a Unix domain socket as they are processed. The JSON-RPC service still functions as on any other node.
To run a blockstreamer, include the argument `no-signer` and \(optional\) `blockstream` socket location:
```bash
$ NDEBUG=1 ./multinode-demo/validator-x.sh --no-signer --blockstream <SOCKET>
```
The stream will output a series of JSON objects:
* An Entry event JSON object is sent when each ledger entry is processed, with the following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "entry"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `entry`, the entry, as JSON object
* A Block event JSON object is sent when a block is complete, with the following fields:
* `dt`, the system datetime, as RFC3339-formatted string
* `t`, the event type, always "block"
* `s`, the slot height, as unsigned 64-bit integer
* `h`, the tick height, as unsigned 64-bit integer
* `l`, the slot leader id, as base-58 encoded string
* `hash`, the [blockhash](terminology.md#blockhash), as base-58 encoded string
## Public Testnet
In this example the client connects to our public testnet. To run validators on the testnet you would need to open udp ports `8000-10000`.
In this example the client connects to our public testnet. To run validators on the testnet you would need to open udp ports `8000-10000`.
```bash
$ NDEBUG=1 ./multinode-demo/bench-tps.sh --entrypoint devnet.solana.com:8001 --faucet devnet.solana.com:9900 --duration 60 --tx_count 50
$ NDEBUG=1 ./multinode-demo/bench-tps.sh --entrypoint testnet.solana.com:8001 --faucet testnet.solana.com:9900 --duration 60 --tx_count 50
```
You can observe the effects of your client's transactions on our [dashboard](https://metrics.solana.com:3000/d/testnet/testnet-hud?orgId=2&from=now-30m&to=now&refresh=5s&var-testnet=testnet)

View File

@ -0,0 +1,7 @@
# Testnet Participation
Participate in our testnet:
* [Running a Validator](../running-validator/)
* [Running an Archiver](../running-archiver.md)

View File

@ -0,0 +1,90 @@
# Blocktree
After a block reaches finality, all blocks from that one on down to the genesis block form a linear chain with the familiar name blockchain. Until that point, however, the validator must maintain all potentially valid chains, called _forks_. The process by which forks naturally form as a result of leader rotation is described in [fork generation](../cluster/fork-generation.md). The _blocktree_ data structure described here is how a validator copes with those forks until blocks are finalized.
The blocktree allows a validator to record every shred it observes on the network, in any order, as long as the shred is signed by the expected leader for a given slot.
Shreds are moved to a fork-able key space: the tuple of `leader slot` + `shred index` \(within the slot\). This permits the skip-list structure of the Solana protocol to be stored in its entirety, without a-priori choosing which fork to follow, which Entries to persist or when to persist them.
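A sketch of that key space using an ordered map \(illustrative; the actual store is a database backing Blocktree\):

```rust
use std::collections::BTreeMap;

type ShredKey = (u64, u64); // (leader slot, shred index within the slot)

fn main() {
    let mut store: BTreeMap<ShredKey, Vec<u8>> = BTreeMap::new();
    store.insert((7, 0), vec![]); // shred payloads elided
    store.insert((7, 1), vec![]);
    store.insert((8, 0), vec![]); // a competing fork's slot coexists here
    // Ordered keys let a range scan recover one slot's shreds in index order:
    let slot7 = store.range((7, 0)..(8, 0)).count();
    assert_eq!(slot7, 2);
}
```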
Repair requests for recent shreds are served out of RAM or recent files and out of deeper storage for less recent shreds, as implemented by the store backing Blocktree.
## Functionalities of Blocktree
1. Persistence: the Blocktree lives in the front of the node's verification
pipeline, right behind network receive and signature verification. If the
shred received is consistent with the leader schedule \(i.e. was signed by the
leader for the indicated slot\), it is immediately stored.
2. Repair: repair is the same as window repair above, but able to serve any
shred that's been received. Blocktree stores shreds with signatures,
preserving the chain of origination.
3. Forks: Blocktree supports random access of shreds, so can support a
validator's need to rollback and replay from a Bank checkpoint.
4. Restart: with proper pruning/culling, the Blocktree can be replayed by
ordered enumeration of entries from slot 0. The logic of the replay stage
\(i.e. dealing with forks\) will have to be used for the most recent entries in
the Blocktree.
## Blocktree Design
1. Entries in the Blocktree are stored as key-value pairs, where the key is the concatenated slot index and shred index for an entry, and the value is the entry data. Note shred indexes are zero-based for each slot \(i.e. they're slot-relative\).
2. The Blocktree maintains metadata for each slot, in the `SlotMeta` struct \(sketched in code after this list\) containing:
* `slot_index` - The index of this slot
* `num_blocks` - The number of blocks in the slot \(used for chaining to a previous slot\)
* `consumed` - The highest shred index `n`, such that for all `m < n`, there exists a shred in this slot with shred index equal to `m` \(i.e. the highest consecutive shred index\).
* `received` - The highest received shred index for the slot
* `next_slots` - A list of future slots this slot could chain to. Used when rebuilding
the ledger to find possible fork points.
* `last_index` - The index of the shred that is flagged as the last shred for this slot. This flag on a shred will be set by the leader for a slot when they are transmitting the last shred for a slot.
* `is_rooted` - True iff every block from 0...slot forms a full sequence without any holes. We can derive is\_rooted for each slot with the following rules. Let slot\(n\) be the slot with index `n`, and slot\(n\).is\_full\(\) is true if the slot with index `n` has all the ticks expected for that slot. Let is\_rooted\(n\) be the statement that "the slot\(n\).is\_rooted is true". Then:
is\_rooted\(0\), and is\_rooted\(n+1\) iff \(is\_rooted\(n\) and slot\(n\).is\_full\(\)\)
3. Chaining - When a shred for a new slot `x` arrives, we check the number of blocks \(`num_blocks`\) for that new slot \(this information is encoded in the shred\). We then know that this new slot chains to slot `x - num_blocks`.
4. Subscriptions - The Blocktree records a set of slots that have been "subscribed" to. This means entries that chain to these slots will be sent on the Blocktree channel for consumption by the ReplayStage. See the `Blocktree APIs` for details.
5. Update notifications - The Blocktree notifies listeners when slot\(n\).is\_rooted is flipped from false to true for any `n`.
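As a sketch, the `SlotMeta` fields above map onto a struct like this \(types illustrative\):

```rust
struct SlotMeta {
    slot_index: u64,      // index of this slot
    num_blocks: u64,      // blocks in the slot, used for chaining backward
    consumed: u64,        // highest consecutive shred index received
    received: u64,        // highest shred index received for the slot
    next_slots: Vec<u64>, // candidate fork points when rebuilding the ledger
    last_index: u64,      // index of the shred the leader flagged as last
    is_rooted: bool,      // slots 0..=slot form a full sequence
}
```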
## Blocktree APIs
The Blocktree offers a subscription based API that ReplayStage uses to ask for entries it's interested in. The entries will be sent on a channel exposed by the Blocktree. These subscription APIs are as follows:
1. `fn get_slots_since(slot_indexes: &[u64]) -> Vec<SlotMeta>`: Returns new slots connecting to any element of the list `slot_indexes`.
2. `fn get_slot_entries(slot_index: u64, entry_start_index: usize, max_entries: Option<u64>) -> Vec<Entry>`: Returns the entry vector for the slot starting with `entry_start_index`, capping the result at `max` if `max_entries == Some(max)`, otherwise, no upper limit on the length of the return vector is imposed.
Note: Cumulatively, this means that the replay stage will now have to know when a slot is finished, and subscribe to the next slot it's interested in to get the next set of entries. Previously, the burden of chaining slots fell on the Blocktree.
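A sketch of how ReplayStage might drive these APIs, with `Blocktree` reduced to a trait and `SlotMeta`/`Entry` simplified to bare indexes and bytes:

```rust
trait Blocktree {
    /// Simplified: returns new slot indexes connecting to `slot_indexes`.
    fn get_slots_since(&self, slot_indexes: &[u64]) -> Vec<u64>;
    fn get_slot_entries(&self, slot: u64, start: usize, max: Option<u64>) -> Vec<Vec<u8>>;
}

fn replay(bt: &dyn Blocktree, mut frontier: Vec<u64>) {
    loop {
        let new_slots = bt.get_slots_since(&frontier);
        if new_slots.is_empty() {
            break; // in practice, block on the Blocktree channel instead
        }
        for &slot in &new_slots {
            let _entries = bt.get_slot_entries(slot, 0, None);
            // ... verify and replay entries, noting when the slot is finished
        }
        frontier = new_slots; // then subscribe to the next slots of interest
    }
}
```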
## Interfacing with Bank
The bank exposes to replay stage:
1. `prev_hash`: which PoH chain it's working on as indicated by the hash of the last
entry it processed
2. `tick_height`: the ticks in the PoH chain currently being verified by this
bank
3. `votes`: a stack of records that contain:
   1. `prev_hashes`: what anything after this vote must chain to in PoH
   2. `tick_height`: the tick height at which this vote was cast
   3. `lockout period`: how long a chain must be observed to be in the ledger to be able to be chained below this vote
Replay stage uses Blocktree APIs to find the longest chain of entries it can hang off a previous vote. If that chain of entries does not hang off the latest vote, the replay stage rolls back the bank to that vote and replays the chain from there.
## Pruning Blocktree
Once Blocktree entries are old enough, representing all the possible forks becomes less useful, perhaps even problematic for replay upon restart. Once a validator's votes have reached max lockout, however, any Blocktree contents that are not on the PoH chain for that vote can be pruned and expunged.
Archiver nodes will be responsible for storing really old ledger contents, and validators need only persist their bank periodically.

View File

@ -28,7 +28,7 @@ lockout on a bank `b`.
This computation is performed on a votable candidate bank `b` as follows.
```text
```
let output: HashMap<b, StakeLockout> = HashMap::new();
for vote_account in b.vote_accounts {
for v in vote_account.vote_stack {
@ -62,7 +62,7 @@ votes > v as the number of confirmations will be lower).
Now more specifically, we augment the above computation to:
```text
```
let output: HashMap<b, StakeLockout> = HashMap::new();
let fork_commitment_cache = ForkCommitmentCache::default();
for vote_account in b.vote_accounts {
@ -76,7 +76,7 @@ Now more specifically, we augment the above computation to:
```
where `f'` is defined as:
```text
```
fn f'(
stake_lockout: &mut StakeLockout,
some_ancestor: &mut BlockCommitment,

View File

@ -0,0 +1,123 @@
# Durable Transaction Nonces
## Problem
To prevent replay, Solana transactions contain a nonce field populated with a
"recent" blockhash value. A transaction containing a blockhash that is too old
(~2min as of this writing) is rejected by the network as invalid. Unfortunately
certain use cases, such as custodial services, require more time to produce a
signature for the transaction. A mechanism is needed to enable these potentially
offline network participants.
## Requirements
1) The transaction's signature needs to cover the nonce value
2) The nonce must not be reusable, even in the case of signing key disclosure
## A Contract-based Solution
Here we describe a contract-based solution to the problem, whereby a client can
"stash" a nonce value for future use in a transaction's `recent_blockhash`
field. This approach is akin to the Compare and Swap atomic instruction,
implemented by some CPU ISAs.
When making use of a durable nonce, the client must first query its value from
account data. A transaction is now constructed in the normal way, but with the
following additional requirements:
1) The durable nonce value is used in the `recent_blockhash` field
2) A `Nonce` instruction is issued (first?)
3) The appropriate transaction flag is set, signaling that the usual
hash age check should be skipped and the previous requirements enforced. This
may be unnecessary, see [Runtime Support](#runtime-support) below
### Contract Mechanics
TODO: svgbob this into a flowchart
```text
Start
Create Account
state = Uninitialized
NonceInstruction
if state == Uninitialized
if account.balance < rent_exempt
error InsufficientFunds
state = Initialized
elif state != Initialized
error BadState
if sysvar.recent_blockhashes.is_empty()
error EmptyRecentBlockhashes
if !sysvar.recent_blockhashes.contains(stored_nonce)
error NotReady
stored_hash = sysvar.recent_blockhashes[0]
success
WithdrawInstruction(to, lamports)
if state == Uninitialized
if !signers.contains(owner)
error MissingRequiredSignatures
elif state == Initialized
if !sysvar.recent_blockhashes.contains(stored_nonce)
error NotReady
if lamports != account.balance && lamports + rent_exempt > account.balance
error InsufficientFunds
account.balance -= lamports
to.balance += lamports
success
```
A client wishing to use this feature starts by creating a nonce account and
depositing sufficient lamports as to make it rent-exempt. The resultant account
will be in the `Uninitialized` state with no stored hash and thus unusable.
The `Nonce` instruction is used to request that a new nonce be stored for the
calling account. The first `Nonce` instruction run on a newly created account
will drive the account's state to `Initialized`. As such, a `Nonce` instruction
MUST be issued before the account can be used.
To discard a `NonceAccount`, the client should issue a `Withdraw` instruction
which withdraws all lamports, leaving a zero balance and making the account
eligible for deletion.
`Nonce` and `Withdraw` instructions each will only succeed if the stored
blockhash is no longer resident in sysvar.recent_blockhashes.
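From the client's perspective, the flow above might look like the following sketch \(the struct and helper names are hypothetical, not the actual SDK API\):

```rust
struct Instruction; // placeholder
struct Transaction {
    recent_blockhash: [u8; 32],
    instructions: Vec<Instruction>,
}

/// Build a transaction that uses a durable nonce: the stored nonce value
/// (read from the nonce account's data) goes in `recent_blockhash`, and a
/// `Nonce` instruction is issued first, per the requirements above.
fn build_durable_tx(stored_nonce: [u8; 32], nonce_ix: Instruction, payload: Vec<Instruction>) -> Transaction {
    let mut instructions = vec![nonce_ix];
    instructions.extend(payload);
    Transaction { recent_blockhash: stored_nonce, instructions }
}
```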
### Runtime Support
The contract alone is not sufficient for implementing this feature. To enforce
an extant `recent_blockhash` on the transaction and prevent fee theft via
failed transaction replay, runtime modifications are necessary.
Any transaction failing the usual `check_hash_age` validation will be tested
for a Durable Transaction Nonce. The specifics of this test are undecided; some
options:
1) Require that the `Nonce` instruction be the first in the transaction
* + No ABI changes
* + Fast and simple
* - Sets a precedent that may lead to incompatible instruction combinations
2) Blind search for a `Nonce` instruction over all instructions in the
transaction
* + No ABI changes
* - Potentially slow
3) [2], but guarded by a transaction flag
* - ABI changes
* - Wire size increase
* + We'll probably end up with some sort of flags eventually anyway
Current prototyping will use [1]. If it is determined that a Durable Transaction
Nonce is in use, the runtime will take the following actions to validate the
transaction:
1) The `NonceAccount` specified in the `Nonce` instruction is loaded.
2) The `NonceState` is deserialized from the `NonceAccount`'s data field and
confirmed to be in the `Initialized` state.
3) The nonce value stored in the `NonceAccount` is tested to match against the
one specified in the transaction's `recent_blockhash` field.
If all three of the above checks succeed, the transaction is allowed to continue
validation.
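A sketch of checks 2 and 3 \(check 1, loading the `NonceAccount`, is elided; the types here are illustrative\):

```rust
enum NonceState {
    Uninitialized,
    Initialized { stored_nonce: [u8; 32] },
}

/// Returns true when the deserialized state is `Initialized` and the
/// stored nonce matches the transaction's `recent_blockhash` field.
fn validate_durable_nonce(state: &NonceState, tx_recent_blockhash: &[u8; 32]) -> bool {
    match state {
        NonceState::Initialized { stored_nonce } => stored_nonce == tx_recent_blockhash,
        NonceState::Uninitialized => false,
    }
}
```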
### Open Questions
* Should this feature be restricted in the number of uses per transaction?

View File

@ -10,8 +10,7 @@ These protocol-based rewards, to be distributed to participating validation and
Transaction fees are market-based participant-to-participant transfers, attached to network interactions as a necessary motivation and compensation for the inclusion and execution of a proposed transaction \(be it a state execution or proof-of-replication verification\). A mechanism for long-term economic stability and forking protection through partial burning of each transaction fee is also discussed below.
A high-level schematic of Solana's crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics/), [State-validation Protocol-based Rewards](ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_validation_client_economics/ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md). Also, the section titled [Validation Stake Delegation](ed_validation_client_economics/ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. Additionally, in [Storage Rent Economics](ed_storage_rent_economics.md), we describe an implementation of storage rent to account for the externality costs of maintaining the active state of the ledger. [Replication-client Economics](ed_replication_client_economics/) will review the Solana network design for global ledger storage/redundancy and archiver-client economics \([Storage-replication rewards](ed_replication_client_economics/ed_rce_storage_replication_rewards.md)\) along with an archiver-to-validator delegation mechanism designed to aid participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md). An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section. Finally, in [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
![](../../.gitbook/assets/economic_design_infl_230719.png)
A high-level schematic of Solana's crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics/), [State-validation Protocol-based Rewards](ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_validation_client_economics/ed_vce_state_validation_transaction_fees.md) and [Replication-validation Transaction Fees](ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md). Also, the chapter titled [Validation Stake Delegation](ed_validation_client_economics/ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunties and marketplace. Additionally, in [Storage Rent Economics](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_storage_rent_economics.md), we describe an implementation of storage rent to account for the externality costs of maintaining the active state of the ledger. The [Replication-client Economics](ed_replication_client_economics/) chapter will review the Solana network design for global ledger storage/redundancy and archiver-client economics \([Storage-replication rewards](ed_replication_client_economics/ed_rce_storage_replication_rewards.md)\) along with an archiver-to-validator delegation mechanism designed to aid participant on-boarding into the Solana economy discussed in [Replication-client Reward Auto-delegation](ed_replication_client_economics/ed_rce_replication_client_reward_auto_delegation.md). An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section. Finally, in chapter [Attack Vectors](ed_attack_vectors.md), various attack vectors will be described and potential vulnerabilities explored and parameterized.
**Figure 1**: Schematic overview of Solana economic incentive design.

View File

@ -2,15 +2,13 @@
**Subject to change.**
Long term economic sustainability is one of the guiding principles of Solana's economic design. While it is impossible to predict how decentralized economies will develop over time, especially economies with flexible decentralized governances, we can arrange economic components such that, under certain conditions, a sustainable economy may take shape in the long term. In the case of Solana's network, these components take the form of token issuance \(via inflation\) and token burning.
Long term economic sustainability is one of the guiding principles of Solana's economic design. While it is impossible to predict how decentralized economies will develop over time, especially economies with flexible decentralized governances, we can arrange economic components such that, under certain conditions, a sustainable economy may take shape in the long term. In the case of Solana's network, these components take the form of token issuance \(via inflation\) and token burning.
The dominant remittances from the Solana mining pool are validator and archiver rewards. The disinflationary mechanism is a flat, protocol-specified and adjusted percentage of each transaction fee.
The Archiver rewards are to be delivered to archivers as a portion of the network inflation after successful PoRep validation. The per-PoRep reward amount is determined as a function of the total network storage redundancy at the time of the PoRep validation and the network goal redundancy. This function is likely to take the form of a discount from a base reward to be delivered when the network has achieved and maintained its goal redundancy. An example of such a reward function is shown in **Figure 1**
The Archiver rewards are to be delivered to archivers as a portion of the network inflation after successful PoRep validation. The per-PoRep reward amount is determined as a function of the total network storage redundancy at the time of the PoRep validation and the network goal redundancy. This function is likely to take the form of a discount from a base reward to be delivered when the network has achieved and maintained its goal redundancy. An example of such a reward function is shown in **Figure 3**
![](../../.gitbook/assets/porep_reward.png)
**Figure 3**: Example PoRep reward design as a function of global network storage redundancy.
**Figure 1**: Example PoRep reward design as a function of global network storage redundancy.
In the example shown in **Figure 1**, multiple per PoRep base rewards are explored \(as a % of Tx Fee\) to be delivered when the global ledger replication redundancy meets 10X. When the global ledger replication redundancy is less than 10X, the base reward is discounted as a function of the square of the ratio of the actual ledger replication redundancy to the goal redundancy \(i.e. 10X\).
In the example shown in Figure 1, multiple per PoRep base rewards are explored \(as a % of Tx Fee\) to be delivered when the global ledger replication redundancy meets 10X. When the global ledger replication redundancy is less than 10X, the base reward is discounted as a function of the square of the ratio of the actual ledger replication redundancy to the goal redundancy \(i.e. 10X\).

View File

@ -2,7 +2,7 @@
**Subject to change.**
The preceding sections, outlined in the [Economic Design Overview](./), describe a long-term vision of a sustainable Solana economy. Of course, we don't expect the final implementation to perfectly match what has been described above. We intend to fully engage with network stakeholders throughout the implementation phases \(i.e. pre-testnet, testnet, mainnet\) to ensure the system supports, and is representative of, the various network participants' interests. The first step toward this goal, however, is outlining some desired MVP economic features to be available for early pre-testnet and testnet participants. Below is a rough sketch outlining basic economic functionality from which a more complete and functional system can be developed.
The preceeding sections, outlined in the [Economic Design Overview](./), describe a long-term vision of a sustainable Solana economy. Of course, we don't expect the final implementation to perfectly match what has been described above. We intend to fully engage with network stakeholders throughout the implementation phases \(i.e. pre-testnet, testnet, mainnet\) to ensure the system supports, and is representative of, the various network participants' interests. The first step toward this goal, however, is outlining some desired MVP economic features to be available for early pre-testnet and testnet participants. Below is a rough sketch outlining basic economic functionality from which a more complete and functional system can be developed.
## MVP Economic Features

View File

@ -2,5 +2,5 @@
**Subject to change.**
Replication-clients should be rewarded for providing the network with storage space. Incentivization of the set of archivers provides data security through redundancy of the historical ledger. Replication nodes are rewarded in proportion to the amount of ledger data storage provided, as proved by successfully submitting Proofs-of-Replication to the cluster. These rewards are captured by generating and entering Proofs of Replication \(PoReps\) into the PoH stream which can be validated by Validation nodes as described in [Replication-validation Transaction Fees](../ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md).
Replication-clients should be rewarded for providing the network with storage space. Incentivization of the set of archivers provides data security through redundancy of the historical ledger. Replication nodes are rewarded in proportion to the amount of ledger data storage provided, as proved by successfully submitting Proofs-of-Replication to the cluster. These rewards are captured by generating and entering Proofs of Replication \(PoReps\) into the PoH stream which can be validated by Validation nodes as described above in the [Replication-validation Transaction Fees](../ed_validation_client_economics/ed_vce_replication_validation_transaction_fees.md) chapter.

View File

@ -0,0 +1,12 @@
# Replication-validation Transaction Fees
**Subject to change.**
As previously mentioned, validator-clients will also be responsible for validating PoReps submitted into the PoH stream by archiver-clients. In this case, validators are providing compute \(CPU/GPU\) and light storage resources to confirm that these replication proofs could only be generated by a client that is storing the referenced PoH ledger block.
While replication-clients are incentivized and rewarded through protocol-based rewards schedule \(see [Replication-client Economics](../ed_replication_client_economics/)\), validator-clients will be incentivized to include and validate PoReps in PoH through collection of transaction fees associated with the submitted PoReps and distribution of protocol rewards proportional to the validated PoReps. As will be described in detail in Section 3.1, replication-client rewards are protocol-based and designed to reward based on a global data redundancy factor. I.e. the protocol will incentivize replication-client participation through rewards based on a target ledger redundancy \(e.g. 10x data redundancy\).
The validation of PoReps by validation-clients is computationally more expensive than state-validation \(detail in the [Economic Sustainability](../ed_economic_sustainability.md) chapter\), thus the transaction fees are expected to be proportionally higher.
There are various attack vectors available for colluding validation and replication clients, also described in detail below in [Economic Sustainability](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_economic_sustainability/README.md). To protect against various collusion attack vectors, for a given epoch, validator rewards are distributed across participating validation-clients in proportion to the number of validated PoReps in the epoch less the number of PoReps that mismatch the archiver's challenge. The PoRep challenge game is described in [Ledger Replication](https://github.com/solana-labs/solana/blob/master/book/src/ledger-replication.md#the-porep-game). This design rewards validators proportional to the number of PoReps they process and validate, while providing negative pressure for validation-clients to submit lazy or malicious invalid votes on submitted PoReps \(note that it is computationally prohibitive to determine whether a validator-client has marked a valid PoRep as invalid\).

View File

@ -11,24 +11,20 @@ Validator-client rewards for these services are to be distributed at the end of
The effective protocol-based annual interest rate \(%\) per epoch received by validation-clients is to be a function of:
* the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule \(see [Validation-client Economics](.)\)
* the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule \(see [Validation-client Economics](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_validartion_client_economics.md)\)
* the fraction of staked SOLs out of the current total circulating supply,
* the up-time/participation \[% of available slots that validator had opportunity to vote on\] of a given validator over the previous epoch.
The first factor is a function of protocol parameters only \(i.e. independent of validator behavior in a given epoch\) and results in a global validation reward schedule designed to incentivize early participation, provide clear monetary stability and provide optimal security in the network.
The first factor is a function of protocol parameters only \(i.e. independent of validator behavior in a given epoch\) and results in a global validation reward schedule designed to incentivize early participation, provide clear montetary stability and provide optimal security in the network.
At any given point in time, a specific validator's interest rate can be determined based on the proportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens with an additional 250MM vesting over 3 years. Additionally an inflation rate is specified at network launch of 7.5%, and a disinflationary schedule of 20% decrease in inflation rate per year \(the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch\). With these broad assumptions, the 10-year inflation rate \(adjusted daily for this example\) is shown in **Figure 1**, while the total circulating token supply is illustrated in **Figure 2**. Neglected in this toy-model is the inflation suppression due to the portion of each transaction fee that is to be destroyed.
At any given point in time, a specific validator's interest rate can be determined based on the porportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens with an additional 250MM vesting over 3 years. Additionally an inflation rate is specified at network launch of 7.5%, and a disinflationary schedule of 20% decrease in inflation rate per year \(the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch\). With these broad assumptions, the 10-year inflation rate \(adjusted daily for this example\) is shown in **Figure 2**, while the total circulating token supply is illustrated in **Figure 3**. Neglected in this toy-model is the inflation supression due to the portion of each transaction fee that is to be destroyed.
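The toy-model schedule above reduces to a short loop. A sketch using the example's parameters \(7.5% initial rate, 20% yearly decrease, and the 1.5% long-term floor noted in the figure captions below\):

```rust
fn main() {
    let mut rate: f64 = 7.5; // launch inflation rate, %
    for year in 0..10 {
        println!("year {year}: {rate:.2}% inflation");
        rate = (rate * 0.8).max(1.5); // disinflate 20%/yr, floor at 1.5%
    }
}
```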
![](../../../.gitbook/assets/p_ex_schedule.png)
![drawing](../../../.gitbook/assets/p_ex_schedule.png) \*\*Figure 2:\*\* In this example schedule, the annual inflation rate \[%\] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.
**Figure 1:** In this example schedule, the annual inflation rate \[%\] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.
![drawing](../../../.gitbook/assets/p_ex_supply.png) \*\*Figure 3:\*\* The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in \*\*Figure 2\*\* Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabalize near 1-2% which also results in a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients as transaction fees for state-validation and ledger storage replication \(PoReps\) are not accounted for here. Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators and archiver nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assummed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of % circulating token supply that is staked is shown in\*\* Figure 4\*\*.
![](../../../.gitbook/assets/p_ex_supply.png)
![drawing](../../../.gitbook/assets/p_ex_interest.png)
**Figure 2:** The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in **Figure 1**. Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabilize near 1-2% which also results in a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients as transaction fees for state-validation and ledger storage replication \(PoReps\) are not accounted for here. Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators and archiver nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assumed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of % circulating token supply that is staked is shown in **Figure 3**.
![](../../../.gitbook/assets/p_ex_interest.png)
**Figure 3:** Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.
**Figure 4:** Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.
This epoch-specific protocol-defined interest rate sets an upper limit on the _protocol-generated_ annual interest rate \(not the absolute total interest rate\) that can be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch.

View File

@ -11,8 +11,9 @@ Each transaction sent through the network, to be processed by the current leader
Many current blockchain economies \(e.g. Bitcoin, Ethereum\), rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is destroyed, with the remaining fee going to the current leader processing the transaction. A scheduled global inflation rate provides a source for rewards distributed to validation-clients, through the process described above, and replication-clients, as discussed below.
Transaction fees are set by the network cluster based on recent historical throughput, see [Congestion Driven Fees](../../transaction-fees.md#congestion-driven-fees). This minimum portion of each transaction fee can be dynamically adjusted depending on historical gas usage. In this way, the protocol can use the minimum fee to target a desired hardware utilization. By monitoring a protocol specified gas usage with respect to a desired, target usage amount, the minimum fee can be raised/lowered which should, in turn, lower/raise the actual gas usage per block until it reaches the target amount. This adjustment process can be thought of as similar to the difficulty adjustment algorithm in the Bitcoin protocol, however in this case it is adjusting the minimum transaction fee to guide the transaction processing hardware usage to a desired level.
Transaction fees are set by the network cluster based on recent historical throughput, see [Congestion Driven Fees](../../transaction-fees.md#congestion-driven-fees). This minimum portion of each transaction fee can be dynamically adjusted depending on historical gas usage. In this way, the protocol can use the minimum fee to target a desired hardware utilisation. By monitoring a protocol specified gas usage with respect to a desired, target usage amount, the minimum fee can be raised/lowered which should, in turn, lower/raise the actual gas usage per block until it reaches the target amount. This adjustment process can be thought of as similar to the difficulty adjustment algorithm in the Bitcoin protocol, however in this case it is adjusting the minimum transaction fee to guide the transaction processing hardware usage to a desired level.
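One way to picture the adjustment loop described above, as a sketch with an illustrative gain constant \(the actual controller parameters are unspecified here\):

```rust
/// Nudge the minimum fee toward a target per-block gas usage: raise it
/// when recent usage runs hot, lower it when usage runs cold.
fn adjust_min_fee(min_fee: f64, recent_gas_used: f64, target_gas: f64) -> f64 {
    let error = (recent_gas_used - target_gas) / target_gas;
    (min_fee * (1.0 + 0.05 * error)).max(0.0) // 5% proportional step (illustrative)
}
```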
As mentioned, a fixed proportion of each transaction fee is to be destroyed. The intent of this design is to retain the leader's incentive to include as many transactions as possible within the leader-slot time, while providing an inflation-limiting mechanism that protects against "tax evasion" attacks \(i.e. side-channel fee payments\)[1](../ed_references.md).
As mentioned, a fixed proportion of each transaction fee is to be destroyed. The intent of this design is to retain the leader's incentive to include as many transactions as possible within the leader-slot time, while providing an inflation-limiting mechanism that protects against "tax evasion" attacks \(i.e. side-channel fee payments\)[1](https://github.com/solana-labs/solana/tree/aacead62c0eb052068172eba6b53fc85874d6d54/book/src/ed_referenced.md).
Additionally, the burnt fees can be a consideration in fork selection. In the case of a PoH fork with a malicious, censoring leader, we would expect the total fees destroyed to be less than on a comparable honest fork, due to the fees lost through censoring. If the censoring leader were to compensate for these lost protocol fees, they would have to replace the burnt fees on their fork themselves, thus potentially reducing the incentive to censor in the first place.
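The fork-selection intuition can be stated as a toy comparison; the types below are invented for the example, and real fork choice weighs far more than fees.

```rust
/// Toy illustration of the argument above: a censoring leader includes
/// fewer transactions, so their fork burns fewer fees, unless they top
/// up the difference themselves. These types are invented for the sketch.
struct Fork {
    total_fees_burned: u64,
}

/// All else equal, prefer the fork that destroyed more fees.
fn prefer_by_burned_fees<'a>(a: &'a Fork, b: &'a Fork) -> &'a Fork {
    if a.total_fees_burned >= b.total_fees_burned {
        a
    } else {
        b
    }
}
```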

View File

@ -11,7 +11,7 @@ This document proposes an easy to use software install and updater that can be u
The easiest install method for supported platforms:
```bash
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v1.0.0/install/solana-install-init.sh | sh
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.18.0/install/solana-install-init.sh | sh
```
This script will check GitHub for the latest tagged release and download and run the `solana-install-init` binary from there.
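For flavor, here is a rough standard-library-only sketch of the first step the script performs \(discovering the latest tagged release via the GitHub API\); it shells out to `curl` and naively scans the JSON, which the real script and installer do more carefully.

```rust
use std::process::Command;

/// Rough sketch, not the actual installer: ask the GitHub API for the
/// latest tagged release. Error handling is minimal and the JSON
/// "parsing" is a deliberate simplification for the example.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = "https://api.github.com/repos/solana-labs/solana/releases/latest";
    let body = String::from_utf8(Command::new("curl").args(["-sSf", api]).output()?.stdout)?;
    // Naive extraction of `"tag_name": "vX.Y.Z"` from the response body.
    let tag = body
        .split("\"tag_name\":")
        .nth(1)
        .and_then(|s| s.split('"').nth(1))
        .ok_or("tag_name not found")?;
    println!("latest release tag: {}", tag);
    Ok(())
}
```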
@ -20,7 +20,7 @@ If additional arguments need to be specified during the installation, the follow
```bash
$ init_args=.... # arguments for `solana-install-init ...`
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v1.0.0/install/solana-install-init.sh | sh -s - ${init_args}
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.18.0/install/solana-install-init.sh | sh -s - ${init_args}
```
### Fetch and run a pre-built installer from a GitHub release
@ -28,7 +28,7 @@ $ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v1.0.0/install/
With a well-known release URL, a pre-built binary can be obtained for supported platforms:
```bash
$ curl -o solana-install-init https://github.com/solana-labs/solana/releases/download/v1.0.0/solana-install-init-x86_64-apple-darwin
$ curl -o solana-install-init https://github.com/solana-labs/solana/releases/download/v0.18.0/solana-install-init-x86_64-apple-darwin
$ chmod +x ./solana-install-init
$ ./solana-install-init --help
```
@ -77,7 +77,7 @@ pub struct UpdateManifest {
pub download_sha256: String, // SHA256 digest of the release tar.bz2 file
}
/// Data of an Update Manifest program Account.
/// Userdata of an Update Manifest program Account.
#[derive(Serialize, Deserialize, Default, Debug, PartialEq)]
pub struct SignedUpdateManifest {
pub manifest: UpdateManifest,
@ -154,7 +154,7 @@ FLAGS:
OPTIONS:
-d, --data_dir <PATH> Directory to store install data [default: .../Library/Application Support/solana]
-u, --url <URL> JSON RPC URL for the solana cluster [default: http://devnet.solana.com:8899]
-u, --url <URL> JSON RPC URL for the solana cluster [default: http://testnet.solana.com:8899]
-p, --pubkey <PUBKEY> Public key of the update manifest [default: 9XX329sPuskWhH4DQh6k16c87dHKhXLBZTL3Gxmve8Gp]
```
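Tying the two hunks above together: the `download_sha256` field exists so the installer can verify what it fetched, and the `--pubkey` option identifies the key the manifest signature is checked against. Below is a sketch of just the digest check, assuming the `sha2` crate as a dependency; the installer's actual verification, including the signature check, is more involved.

```rust
use sha2::{Digest, Sha256};

/// Sketch of the integrity check implied by `download_sha256`: hash the
/// downloaded release archive and compare against the manifest's digest.
/// Assumes the `sha2` crate; not the installer's actual code.
fn release_matches_manifest(release_tar_bz2: &[u8], expected_sha256_hex: &str) -> bool {
    let digest = Sha256::digest(release_tar_bz2);
    let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    hex == expected_sha256_hex.to_lowercase()
}
```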

Some files were not shown because too many files have changed in this diff.