---
title: Cluster Test Framework
---

This document proposes the Cluster Test Framework (CTF). CTF is a test harness that allows tests to execute against a local, in-process cluster or against a deployed cluster.

## Motivation

The goal of CTF is to provide a framework for writing tests independent of where and how the cluster is deployed. Regressions can be captured in these tests, and the tests can be run against deployed clusters to verify the deployment. The focus of these tests should be on cluster stability, consensus, fault tolerance, and API stability.

Tests should verify a single bug or scenario, and should be written with the least amount of internal plumbing exposed to the test.

## Design Overview

Tests are provided an entry point, which is a `contact_info::ContactInfo` structure, and a keypair that has already been funded.

Each node in the cluster is configured with a `validator::ValidatorConfig` at boot time. This configuration specifies any extra cluster configuration required for the test. The cluster should boot with the same configuration whether it runs in-process or in a data center.

Once booted, the test will discover the cluster through a gossip entry point and configure any runtime behaviors via validator RPC.

## Test Interface

Each CTF test starts with an opaque entry point and a funded keypair. The test should not depend on how the cluster is deployed, and should be able to exercise all the cluster functionality through the publicly available interfaces.

```rust
use crate::contact_info::ContactInfo;
use solana_sdk::signature::{Keypair, Signer};

pub fn test_this_behavior(
    entry_point_info: &ContactInfo,
    funding_keypair: &Keypair,
    num_nodes: usize,
)
```

## Cluster Discovery

At test start, the cluster has already been established and is fully connected. The test can discover most of the available nodes within a few seconds.

```rust
use crate::gossip_service::discover_nodes;

// Discover the cluster over a few seconds.
let cluster_nodes = discover_nodes(&entry_point_info, num_nodes);
```
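
Before exercising the scenario, a test will typically sanity-check that discovery found the expected number of nodes. A minimal sketch:

```rust
// Bail out early if gossip discovery came up short.
assert!(cluster_nodes.len() >= num_nodes);
```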

## Cluster Configuration

To enable specific scenarios, the cluster needs to be booted with special configurations. These configurations can be captured in `validator::ValidatorConfig`.

For example:

```rust
let mut validator_config = ValidatorConfig::default();
validator_config.rpc_config.enable_validator_exit = true;
let local = LocalCluster::new_with_config(
    num_nodes,
    10_000,
    100,
    &validator_config,
);
```
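
With the in-process cluster booted, the same opaque test entry point can be run against it. A minimal sketch, assuming `LocalCluster` exposes the boot node's contact info and the pre-funded keypair as `entry_point_info` and `funding_keypair` fields:

```rust
// Drive the CTF test against the freshly booted local cluster.
test_this_behavior(
    &local.entry_point_info,
    &local.funding_keypair,
    num_nodes,
);
```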

## How to design a new test

For example, suppose a bug shows that the cluster fails when it is flooded with invalid advertised gossip nodes. Our gossip library and protocol may change, but the cluster must remain resilient to floods of invalid advertised gossip nodes.

Configure the RPC service:

```rust
let mut validator_config = ValidatorConfig::default();
validator_config.rpc_config.enable_rpc_gossip_push = true;
validator_config.rpc_config.enable_rpc_gossip_refresh_active_set = true;
```

Wire the RPCs and write a new test:

```rust
use std::thread::sleep;
use std::time::Duration;

pub fn test_large_invalid_gossip_nodes(
    entry_point_info: &ContactInfo,
    funding_keypair: &Keypair,
    num_nodes: usize,
) {
    let cluster = discover_nodes(&entry_point_info, num_nodes);

    // Poison the cluster with invalid advertised nodes.
    let client = create_client(entry_point_info.client_facing_addr(), VALIDATOR_PORT_RANGE);
    for _ in 0..(num_nodes * 100) {
        client.gossip_push(
            cluster_info::invalid_contact_info()
        );
    }
    sleep(Duration::from_millis(1000));

    // Force a refresh of each node's active set.
    for node in &cluster {
        let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
        client.gossip_refresh_active_set();
    }

    // Verify that spends still work.
    verify_spends(&cluster);
}
```
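
The example leaves `verify_spends` undefined. A hypothetical sketch of what it might check, written in the same pseudo-API style as the test above (the `transfer` and `get_balance` calls, and the extra `funding_keypair` parameter, are assumptions rather than the real client interface):

```rust
// Hypothetical sketch: confirm each node still processes transfers
// after the gossip flood.
fn verify_spends(cluster: &[ContactInfo], funding_keypair: &Keypair) {
    for node in cluster {
        let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
        let recipient = Pubkey::new_unique(); // throwaway recipient per node
        client.transfer(funding_keypair, &recipient, 1);
        assert_eq!(client.get_balance(&recipient), 1);
    }
}
```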