Our CI infrastructure is built around Buildkite with some additional GitHub integration provided by https://github.com/mvines/ci-gate.

Agent Queues

We define two Agent Queues: queue=default and queue=cuda. The default queue should be favored and runs on lower-cost CPU instances. The cuda queue is only necessary for running tests that require GPU (CUDA) access. CUDA builds may still be run on the default queue, with the Buildkite artifact system used to transfer build products over to a GPU instance for testing.
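
For example, a CUDA build produced on the default queue can hand its outputs to a GPU test step via the Buildkite artifact commands. The paths and script name below are illustrative only, not taken from our pipelines:

buildkite-agent artifact upload "build-output/**/*"        # in the build step, running on queue=default
buildkite-agent artifact download "build-output/**/*" .    # in the test step, running on queue=cuda
./run-gpu-tests.sh                                         # hypothetical GPU test entry point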

Buildkite Agent Management

Manual Node Setup for Colocated Hardware

This section describes how to set up a new machine that does not have a pre-configured image with all the requirements installed. This is used for custom-built hardware at a colocation or office facility, and also works for vanilla Ubuntu cloud instances.

Pre-Requisites

  • Install Ubuntu 18.04 LTS Server
  • Log in as a local or remote user with sudo privileges

Install Core Requirements

Non-GPU enabled machines
sudo ./setup-new-buildkite-agent/setup-new-machine.sh
GPU-enabled machines
  • 1 or more NVIDIA GPUs should be installed in the machine (tested with 2080Ti)
sudo CUDA=1 ./setup-new-buildkite-agent/setup-new-machine.sh

Configure Node for Buildkite-agent based CI

  • Install buildkite-agent and set up its user environment with:
sudo ./setup-new-buildkite-agent/setup-buildkite.sh
  • Copy the pubkey contents from ~buildkite-agent/.ssh/id_ecdsa.pub and add the pubkey as an authorized SSH key on GitHub:
    • In net/scripts/solana-user-authorized_keys.sh
    • Bug mvines to add it to the "solana-grimes" github user
  • Edit /etc/buildkite-agent/buildkite-agent.cfg and/or /etc/systemd/system/buildkite-agent@* to the desired configuration of the agent(s)
  • Copy ejson keys from another CI node at /opt/ejson/keys/ to the same location on the new node.
  • Start the new agent(s) with sudo systemctl enable --now buildkite-agent
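
As an illustrative sketch of the configuration and startup steps above (the name, tag, and priority values below are examples, not our actual settings):

# /etc/buildkite-agent/buildkite-agent.cfg (example values)
name="colo-node-1"
tags="queue=default"
priority=1

sudo systemctl enable --now buildkite-agent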

Reference

This section contains details regarding previous CI setups that have been used, and that we may return to one day.

Buildkite Azure Setup

Create a new Azure-based "queue=default" agent by running the following command:

$ az vm create \
   --resource-group ci \
   --name XYZ \
   --image boilerplate \
   --admin-username $(whoami) \
   --ssh-key-value ~/.ssh/id_rsa.pub

The "boilerplate" image contains all the required packages pre-installed so the new machine should immediately show up in the Buildkite agent list once it has been provisioned and be ready for service.

Creating a "queue=cuda" agent follows the same process but additionally:

  1. Resize the image from the Azure portal to include a GPU
  2. Edit the tags field in /etc/buildkite-agent/buildkite-agent.cfg to tags="queue=cuda,queue=default" and decrease the value of the priority field by one
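
For example, if the boilerplate image shipped with priority=1 (an assumption; check the actual value on the agent), the edited fields in /etc/buildkite-agent/buildkite-agent.cfg would end up as:

tags="queue=cuda,queue=default"
priority=0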

Updating the CI Disk Image

  1. Create a new VM Instance as described above
  2. Modify it as required
  3. When ready, ssh into the instance and start a root shell with sudo -i. Then prepare it for deallocation by running: waagent -deprovision+user; cd /etc; ln -s ../run/systemd/resolve/stub-resolv.conf resolv.conf
  4. Run az vm deallocate --resource-group ci --name XYZ
  5. Run az vm generalize --resource-group ci --name XYZ
  6. Run az image create --resource-group ci --source XYZ --name boilerplate
  7. Go to the ci resource group in the Azure portal and remove all resources with the XYZ name in them
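
Assuming the instance is named XYZ as above, steps 3 through 6 amount to roughly the following:

# on the instance, as root
waagent -deprovision+user
cd /etc && ln -s ../run/systemd/resolve/stub-resolv.conf resolv.conf

# then from your workstation
az vm deallocate --resource-group ci --name XYZ
az vm generalize --resource-group ci --name XYZ
az image create --resource-group ci --source XYZ --name boilerplate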

Buildkite AWS CloudFormation Setup

AWS CloudFormation is currently inactive, although it may be restored in the future.

AWS CloudFormation can be used to scale machines up and down based on the current CI load. If no machine is currently running, it can take up to 60 seconds to spin up a new instance; please remain calm during this time.

AMI

We use a custom AWS AMI built via https://github.com/solana-labs/elastic-ci-stack-for-aws/tree/solana/cuda.

Use the following process to update this AMI as dependencies change:

$ export AWS_ACCESS_KEY_ID=my_access_key
$ export AWS_SECRET_ACCESS_KEY=my_secret_access_key
$ git clone https://github.com/solana-labs/elastic-ci-stack-for-aws.git -b solana/cuda
$ cd elastic-ci-stack-for-aws/
$ make build
$ make build-ami

Watch for the "amazon-ebs: AMI:" log message to extract the name of the new AMI. For example:

amazon-ebs: AMI: ami-07118545e8b4ce6dc

The new AMI should also now be visible in your EC2 Dashboard. Go to the desired AWS CloudFormation stack, update the ImageId field to the new AMI id, and apply the stack changes.
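
The same change can also be made from the CLI. This is only a sketch: the stack name is a placeholder, the parameter key is assumed to be ImageId as in the console, and every other stack parameter generally needs a ParameterKey=...,UsePreviousValue=true entry so it is not reset to its default:

$ aws cloudformation update-stack \
   --stack-name XYZ \
   --use-previous-template \
   --parameters ParameterKey=ImageId,ParameterValue=ami-07118545e8b4ce6dc \
   --capabilities CAPABILITY_NAMED_IAM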

Buildkite GCP Setup

CI runs on Google Cloud Platform via two Compute Engine Instance groups: ci-default and ci-cuda. Autoscaling is currently disabled and the number of VM Instances in each group is manually adjusted.
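
To adjust capacity by hand, the groups can be resized from the console or with gcloud; the size and zone below are illustrative (and the groups may be regional rather than zonal):

$ gcloud compute instance-groups managed resize ci-default --size 4 --zone us-east1-b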

Updating a CI Disk Image

Each Instance group has its own disk image, ci-default-vX and ci-cuda-vY, where X and Y are incremented each time the image is changed.

The manual process to update a disk image is as follows:

  1. Create a new VM Instance using the disk image to modify.
  2. Once the VM boots, ssh to it and modify the disk as desired.
  3. Stop the VM Instance running the modified disk. Remember the name of the VM disk.
  4. From another machine, run gcloud auth login, then create a new Disk Image based off the modified VM Instance:
 $ gcloud compute images create ci-default-$(date +%Y%m%d%H%M) --source-disk xxx --source-disk-zone us-east1-b --family ci-default

or

  $ gcloud compute images create ci-cuda-$(date +%Y%m%d%H%M) --source-disk xxx --source-disk-zone us-east1-b --family ci-cuda
  5. Delete the new VM instance.
  6. Go to the Instance templates tab, find the existing template named ci-default-vX or ci-cuda-vY and select it. Use the "Copy" button to create a new Instance template called ci-default-vX+1 or ci-cuda-vY+1 with the newly created Disk image.
  7. Go to the Instance Groups tab and find the applicable group, ci-default or ci-cuda. Edit the Instance Group in two steps: (a) Set the number of instances to 0 and wait for them all to terminate, (b) Update the Instance template and restore the number of instances to the original value.
  8. Clean up the previous version by deleting it from Instance Templates and Images.
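
The two-step Instance Group edit described above can also be driven from the CLI; a rough equivalent, with the template name, size, and zone as placeholders, is:

$ gcloud compute instance-groups managed resize ci-default --size 0 --zone us-east1-b
# wait for the old instances to terminate, then:
$ gcloud compute instance-groups managed set-instance-template ci-default --template ci-default-vX+1 --zone us-east1-b
$ gcloud compute instance-groups managed resize ci-default --size 4 --zone us-east1-b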