Compare commits

..

91 Commits

Author SHA1 Message Date
Michael Vines
42c7d57fc0 Revert "Bump version to 0.18.0-pre1"
This reverts commit 14f6d5c82b.
2019-08-05 15:53:55 -07:00
Tyera Eulberg
efd09ecd37 Revert fork metrics (#5427)
* Revert "Remove duplicate row (#5419)"

This reverts commit a81dd80d60.

* Revert "Log fork stake-percentage in metrics, and display (#5406)"

This reverts commit 92e419f1c7.
2019-08-05 15:53:36 -07:00
Michael Vines
14f6d5c82b Bump version to 0.18.0-pre1 2019-08-05 15:11:44 -07:00
Michael Vines
c7710fdd24 Add wallet get-slot command and document how to use it (#5424)
* Add wallet get-slot command and document how to use it

* ,
2019-08-05 13:17:03 -07:00
Michael Vines
b5aa03dd7c Rename --config-dir to --ledger (progress towards deleting validator.sh) (#5423) 2019-08-05 12:42:52 -07:00
Tyera Eulberg
a81dd80d60 Remove duplicate row (#5419) 2019-08-05 11:45:52 -06:00
Michael Vines
09ca92d416 Surface --voting-keypair to release users (#5420)
* Remove 'configured_flag' for vote/storage account, instead detect if they exist with the wallet

* Require --voting-keypair when using release binaries
2019-08-05 10:39:16 -07:00
Michael Vines
56ed033233 Remove unused var 2019-08-04 21:29:20 -07:00
Michael Vines
e56efe237c Move testnet from ec2 to gcp 2019-08-04 21:02:27 -07:00
Michael Vines
3f0ff45de0 Move edge/beta testnets from ec2 to gcp 2019-08-04 20:42:28 -07:00
Michael Vines
3709dc6558 Reduce size of cpu-only gcp instances 2019-08-04 20:36:23 -07:00
Michael Vines
6ec0318bae Reduce AWS node count 2019-08-03 23:50:52 -07:00
Tyera Eulberg
92e419f1c7 Log fork stake-percentage in metrics, and display (#5406)
* Log fork stake percentage data

* Add fork stake percentage to dashboard

* Call out parent slot
2019-08-02 19:16:23 -06:00
Michael Vines
ccc0f2d956 show-stake-account now works for reward pool accounts (#5416)
automerge
2019-08-02 17:15:26 -07:00
Pankaj Garg
80bb0158b7 Initial implementation of packet shredder (#5401)
* Initial implementation of packet shredder

* tests

* clippy

* review comments
2019-08-02 15:53:42 -07:00
Michael Vines
f12592826f Disable snapshots #5411 2019-08-02 15:48:51 -07:00
Michael Vines
8d38777c1f Remove stray --stake 0 2019-08-02 15:06:40 -07:00
sakridge
832dfd4ab0 Change bank to not create default (#5409) 2019-08-02 14:46:53 -07:00
Michael Vines
04d2db4dbb Force boot_from_snapshot=0 for now 2019-08-02 14:21:45 -07:00
Michael Vines
6f269e5a0e Improve error messages when a vote account is rejected for delegation (#5407) 2019-08-02 10:09:09 -07:00
Michael Vines
eb3991b9ba Replay stage log message nits (#5408) 2019-08-02 10:08:42 -07:00
Michael Vines
aee63f15c2 Rename state.tgz to snapshot.tgz to match rpc service 2019-08-02 10:07:29 -07:00
Michael Vines
aced847735 validator-info --help text tweaks (#5402) 2019-08-02 08:30:08 -07:00
Tyera Eulberg
e360e63b74 getProgramAccounts to check for existing validator-info (#5404) 2019-08-02 07:40:54 -07:00
Michael Vines
a6c4525998 RPC to the bootstrap leader instead of the local node, which may not yet be fully initialized 2019-08-01 23:34:55 -07:00
Michael Vines
77b196a226 Show vote account details 2019-08-01 23:34:25 -07:00
Michael Vines
b6b9c2cf56 Delegate stake from the pre-created identity keypair if it exists 2019-08-01 23:00:15 -07:00
Michael Vines
59d900977d Avoid airdroping when airdrops are disabled 2019-08-01 22:43:09 -07:00
Michael Vines
0f5acb86d3 wallet: Refuse to delegate stake to a vote account with a stale root slot (#5282)
* Refuse to delegate stake to a vote account with a stale root slot

* Remove sdk-c from the virtual manifest temporarily

For an unknown reason |cargo clippy| is getting stuck in CI
intermittently when trying to build this crate.
2019-08-01 21:08:24 -07:00
Michael Vines
911dee24c5 Give a unique port range for each validator node (#5397)
automerge
2019-08-01 14:37:59 -07:00
dependabot-preview[bot]
f03e066ec5 Bump log from 0.4.7 to 0.4.8 (#5382)
Bumps [log](https://github.com/rust-lang/log) from 0.4.7 to 0.4.8.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang-nursery/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/commits)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-08-01 14:31:18 -07:00
Rob Walker
f7d3f55566 fix epoch_stakes again (#5396) 2019-08-01 14:27:47 -07:00
Michael Vines
4298b1f595 Document the --limit-ledger-size flag (#5393) 2019-08-01 14:06:40 -07:00
Michael Vines
870503ee36 Introduce delegate-stake.sh for adding stake to a validator.sh (#5380) 2019-08-01 13:48:00 -07:00
Michael Vines
4d14abbd04 Document getSlot 2019-08-01 13:16:23 -07:00
Michael Vines
5212b2716c Don't rebuild/retest release tags (#5385) 2019-08-01 13:11:42 -07:00
TristanDebrunner
97c0573c7d Change default location of solana.h to OUT_DIR (#5389)
automerge
2019-08-01 12:33:30 -07:00
Justin Starry
43cc9fcb1d Update mean tx/s to use the correct counter (#5390) 2019-08-01 15:30:36 -04:00
Justin Starry
47b5ba44e9 Add tag suffix to remaining metrics host_id queries (#5388) 2019-08-01 14:43:13 -04:00
Justin Starry
e95397e0a8 Clarify that host_id is a tag in metrics influx queries (#5387) 2019-08-01 14:34:07 -04:00
dependabot-preview[bot]
c7cdf8ba93 Bump winreg from 0.6.1 to 0.6.2 (#5367)
Bumps [winreg](https://github.com/gentoo90/winreg-rs) from 0.6.1 to 0.6.2.
- [Release notes](https://github.com/gentoo90/winreg-rs/releases)
- [Commits](https://github.com/gentoo90/winreg-rs/compare/v0.6.1...v0.6.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-08-01 08:48:29 -07:00
Michael Vines
6ee734e1b4 Depersonalize paths 2019-08-01 08:36:54 -07:00
Justin Starry
3ab1b46ef7 Fix vote metrics (#5377) 2019-08-01 09:11:49 -04:00
Patrick Amato
22891b39d6 bench-exc: readme changes (#5373)
replace token pair, direction
replace "swapper" with "matcher"
2019-07-31 23:08:56 -06:00
sakridge
b6ce7ec782 Default to solana=info log level for drone (#5374)
Otherwise prints nothing..
2019-07-31 20:00:52 -07:00
Justin Starry
a41c7451f1 Add testnet prefix to the metrics queries without it (#5376) 2019-07-31 21:07:25 -04:00
carllin
6cb2040a1b Snapshot Packaging Service (#5262)
* Snapshot serialization and packaging
2019-07-31 17:58:10 -07:00
Michael Vines
937f9ad049 Teach solana-install about release channels (#5372)
$ solana-install init edge  # <-- setup an install using the edge channel
$ solana-install update     # <-- update to the latest edge channel release
2019-07-31 17:30:17 -07:00
sakridge
c2fc0f2418 Plumb libra accounts to genesis (#5333)
* Plumb move_loader to genesis

* Remove core dependency on genesis-programs
2019-07-31 16:10:55 -07:00
Rob Walker
9278201198 fix epoch_stakes (#5355)
* fix epoch_stakes

* fix stake_state to use stakers_epoch

* don't allow withdrawal before deactivation
2019-07-31 15:13:26 -07:00
Pankaj Garg
149a63100d remove no-snapshot option from tds testnet (#5368) 2019-07-31 14:51:54 -07:00
Jack May
d09afdbefe Synchronize and cleanup instruction processor lists (#5356) 2019-07-31 14:28:14 -07:00
Michael Vines
1d6bafbc77 Move tds to edge (#5366) 2019-07-31 14:18:05 -07:00
dependabot-preview[bot]
01d2b4e952 Bump jsonrpc-http-server from 12.1.0 to 13.0.0 (#5361)
* Bump jsonrpc-http-server from 12.1.0 to 13.0.0

Bumps [jsonrpc-http-server](https://github.com/paritytech/jsonrpc) from 12.1.0 to 13.0.0.
- [Release notes](https://github.com/paritytech/jsonrpc/releases)
- [Commits](https://github.com/paritytech/jsonrpc/compare/v12.1.0...v13.0.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

* Update all jsonrpc crates to v13.0.0
2019-07-31 14:30:47 -06:00
sakridge
05f3437601 Handle paying for move transactions with unique solana system transactions (#5317) 2019-07-31 11:15:14 -07:00
Michael Vines
f859243191 Remove unused var 2019-07-31 10:51:30 -07:00
Michael Vines
9ddc25283c Adapt validator sanity args 2019-07-31 10:46:25 -07:00
Michael Vines
388d4a8592 Remove obsolete --generate-snapshots argument 2019-07-31 10:26:22 -07:00
Patrick Amato
0b0b679120 exchange update: replace direction (#5362)
* replace direction with OrderSide

* bench exchange: update direction uses to OrderSide
2019-07-31 11:19:09 -06:00
dependabot-preview[bot]
3b752876ac Bump ws from 0.8.1 to 0.9.0 (#5360)
Bumps [ws](https://github.com/housleyjk/ws-rs) from 0.8.1 to 0.9.0.
- [Release notes](https://github.com/housleyjk/ws-rs/releases)
- [Changelog](https://github.com/housleyjk/ws-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/housleyjk/ws-rs/compare/v0.8.1...v0.9.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-31 10:13:52 -07:00
Michael Vines
9b8b7dbfd7 Avoid setting RUST_LOG to the empty string (#5338) 2019-07-31 10:13:30 -07:00
Michael Vines
c209e14e40 validator.sh now supports an --entrypoint arg, mimicking the solana-validator CLI API (#5363) 2019-07-31 09:54:39 -07:00
Michael Vines
6df1f6450f Drop rsync address 2019-07-31 09:24:49 -07:00
Jack May
6d7cb23c61 Add command to create genesis accounts (#5343) 2019-07-30 23:43:12 -07:00
Michael Vines
bd7e269280 Kill rsync (#5336)
automerge
2019-07-30 22:43:47 -07:00
Pankaj Garg
b05b42d74d Reduce max blob size (#5345)
* Reduce max blob size

* ignore test_star_network_push_rstar_200
2019-07-30 22:15:07 -07:00
dependabot-preview[bot]
af733a678a Bump serde_derive from 1.0.97 to 1.0.98 (#5314)
Bumps [serde_derive](https://github.com/serde-rs/serde) from 1.0.97 to 1.0.98.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.97...v1.0.98)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-30 21:45:34 -07:00
Michael Vines
8a5045f05c Bump timeouts for publish docker/tarball builds 2019-07-30 20:09:47 -07:00
Michael Vines
4a336eb5ff ValidatorConfig path reform: use Path/PathBuf for paths (#5353) 2019-07-30 19:47:24 -07:00
dependabot-preview[bot]
b7e08052ae Bump serde from 1.0.97 to 1.0.98 (#5315)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.97 to 1.0.98.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.97...v1.0.98)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-30 19:46:50 -07:00
dependabot-preview[bot]
f6a4acfac3 Bump dirs from 2.0.1 to 2.0.2 (#5312)
Bumps [dirs](https://github.com/soc/dirs-rs) from 2.0.1 to 2.0.2.
- [Release notes](https://github.com/soc/dirs-rs/releases)
- [Commits](https://github.com/soc/dirs-rs/commits)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-30 19:46:39 -07:00
Jack May
68eff230f0 Fix name-id reporting dependency (#5354) 2019-07-30 16:22:20 -07:00
Michael Vines
c78db6a94b ledger path reform: use Path/PathBuf instead of strings (#5344) 2019-07-30 15:53:41 -07:00
Michael Vines
294d9288d2 Update remote-node.sh to use bootstrap-leader.sh (#5352) 2019-07-30 15:53:03 -07:00
carllin
7dc5cc26a6 Make max_epoch check in next_leader_at in leader schedule (#5342) 2019-07-30 15:51:02 -07:00
Sagar Dhawan
d7a2b790dc Limit the size of gossip push and gossip pull response (#5348)
* Limit the size of gossip push and gossip pull response

* Remove Default::default

* Rename var
2019-07-30 15:43:17 -07:00
Pankaj Garg
a7a10e12c7 Forward transactions as packets instead of blobs (#5334)
* Forward transactions as packets instead of blobs

* clippy
2019-07-30 14:50:02 -07:00
sakridge
8d243221f0 Ignore flaky local cluster tests (#5347)
* Add logging to local_cluster tests

* Ignore flaky test_leader_failure_4, test_repairman_catchup

And crashing banking benchmarks.
2019-07-30 13:48:46 -07:00
Justin Starry
84368697af Fix metrics when leader does not report metrics (#5291) 2019-07-30 16:18:33 -04:00
Rob Walker
4a57cd3300 Update bank.rs 2019-07-30 11:33:06 -07:00
Michael Vines
2214d2dbb5 Eject bootstrap-leader support from fullnode.sh (#5301) 2019-07-29 21:25:28 -07:00
Rob Walker
50a991fdf9 add executable checks to verify_instruction (#5326) 2019-07-29 15:29:20 -07:00
Michael Vines
4e093525c7 Default to error logs, override with info only for those programs that need it (#5321)
* Revert "Revert "Default log level to to RUST_LOG=solana=info (#5296)" (#5302)"

This reverts commit 7796e87814.

* Default to error logs, override with info only for those programs that need it
2019-07-29 10:57:00 -07:00
Michael Vines
506b305959 Move coverage back to the default queue (#5318) 2019-07-28 22:20:54 -07:00
Michael Vines
e83efcfc80 Tidy test-checks.sh (#5319) 2019-07-28 22:19:03 -07:00
sakridge
4f1c881227 Add --use_move mode to bench-tps (#5311)
* Add --use_move mode to bench-tps

substitute for global flag.

* Use cuda queue for coverage build.
2019-07-28 10:43:42 -07:00
sakridge
a642168369 Add move to bench-tps (#5250) 2019-07-27 15:28:00 -07:00
Greg Fitzgerald
8d296d0969 Move credit-only and Move proposals to the implemented section of the book (#5308)
automerge
2019-07-27 15:08:44 -07:00
Greg Fitzgerald
68b11c1c29 Pull all libra crates from crates.io (#5306) 2019-07-27 15:06:27 -06:00
sakridge
c209718a6f Add librapay_api (#5304)
Simple move-based payment api
2019-07-27 12:11:51 -07:00
Dan Albert
b8835312bb Update cargo.toml files to 0.18.0-pre0 (#5303) 2019-07-27 11:42:06 -06:00
155 changed files with 4003 additions and 2571 deletions


@@ -1,3 +1,4 @@
os: Visual Studio 2017
version: '{build}'
branches:
@@ -15,7 +16,7 @@ build_script:
notifications:
- provider: Slack
incoming_webhook:
secure: GJsBey+F5apAtUm86MHVJ68Uqa6WN1SImcuIc4TsTZrDhA8K1QWUNw9FFQPybUWDyOcS5dly3kubnUqlGt9ux6Ad2efsfRIQYWv0tOVXKeY=
secure: 6HnLbeS6/Iv7JSMrrHQ7V9OSIjH/3KFzvZiinNWgQqEN0e9A6zaE4MwEXUYDWbcvVJiQneWit6dswY8Scoms2rS1PWEN5N6sjgLgyzroptc=
channel: ci-status
on_build_success: false
on_build_failure: true
@@ -24,16 +25,16 @@ notifications:
deploy:
- provider: S3
access_key_id:
secure: fTbJl6JpFebR40J7cOWZ2mXBa3kIvEiXgzxAj6L3N7A=
secure: G6uzyGqbkMCXS2+sCeBCT/+s/11AHLWXCuGayfKcMEE=
secret_access_key:
secure: vItsBXb2rEFLvkWtVn/Rcxu5a5+2EwC+b7GsA0waJy9hXh6XuBAD0lnHd9re3g/4
secure: Lc+aVrbcPSXoDV7h2J7gqKT+HX0n3eEzp3JIrSP2pcKxbAikGnCtOogCiHO9/er2
bucket: release.solana.com
region: us-west-1
set_public: true
- provider: GitHub
auth_token:
secure: 81fEmPZ0cV1wLtNuUrcmtgxKF6ROQF1+/ft5m+fHX21z6PoeCbaNo8cTyLioWBj7
secure: JdggY+mrznklWDcV0yvetHhD9eRcNdc627q6NcZdZAJsDidYcGgZ/tgYJiXb9D1A
draft: false
prerelease: false
on:

.gitignore (3 changes)

@@ -11,10 +11,7 @@
**/*.rs.bk
.cargo
# node config that is rsynced
/config/
# node config that remains local
/config-local/
# log files
*.log

Cargo.lock (generated, 982 changes): file diff suppressed because it is too large.


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-exchange"
version = "0.17.2"
version = "0.18.0-pre0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -14,28 +14,28 @@ bs58 = "0.2.0"
clap = "2.32.0"
env_logger = "0.6.2"
itertools = "0.8.0"
log = "0.4.7"
log = "0.4.8"
num-derive = "0.2"
num-traits = "0.2"
rand = "0.6.5"
rayon = "1.1.0"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
serde_json = "1.0.40"
serde_yaml = "0.8.9"
# solana-runtime = { path = "../solana/runtime"}
solana = { path = "../core", version = "0.17.2" }
solana-client = { path = "../client", version = "0.17.2" }
solana-drone = { path = "../drone", version = "0.17.2" }
solana-exchange-api = { path = "../programs/exchange_api", version = "0.17.2" }
solana-exchange-program = { path = "../programs/exchange_program", version = "0.17.2" }
solana-logger = { path = "../logger", version = "0.17.2" }
solana-metrics = { path = "../metrics", version = "0.17.2" }
solana-netutil = { path = "../netutil", version = "0.17.2" }
solana-runtime = { path = "../runtime", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana = { path = "../core", version = "0.18.0-pre0" }
solana-client = { path = "../client", version = "0.18.0-pre0" }
solana-drone = { path = "../drone", version = "0.18.0-pre0" }
solana-exchange-api = { path = "../programs/exchange_api", version = "0.18.0-pre0" }
solana-exchange-program = { path = "../programs/exchange_program", version = "0.18.0-pre0" }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-metrics = { path = "../metrics", version = "0.18.0-pre0" }
solana-netutil = { path = "../netutil", version = "0.18.0-pre0" }
solana-runtime = { path = "../runtime", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
untrusted = "0.7.0"
ws = "0.8.1"
ws = "0.9.0"
[features]
cuda = ["solana/cuda"]


@@ -23,7 +23,7 @@ demo demonstrates one way to host an exchange on the Solana blockchain by
emulating a currency exchange.
The assets are virtual tokens held by investors who may post order requests to
the exchange. A Swapper monitors the exchange and posts swap requests for
the exchange. A Matcher monitors the exchange and posts swap requests for
matching orders. All the transactions can execute concurrently.
## Premise
@@ -42,30 +42,26 @@ matching orders. All the transactions can execute concurrently.
- A request to create a token account
- Token request
- A request to deposit tokens of a particular type into a token account.
- Token pair
- A unique ordered list of two tokens. For the four types of tokens used in
this demo, the valid pairs are AB, AC, AD, BC, BD, CD.
- Direction of trade
- Describes which token in the pair the investor wants to sell and buy and can
be either "To" or "From". For example, if an investor issues a "To" trade
for "AB" then they which to exchange A tokens to B tokens. A "From" order
would read the other way, A tokens from B tokens.
- Asset pair
- A struct with fields Base and Quote, representing the two assets which make up a
trading pair, which themselves are Tokens. The Base or 'primary' asset is the
numerator and the Quote is the denominator for pricing purposes.
- Order side
- Describes which side of the market an investor wants to place a trade on. Options
are "Bid" or "Ask", where a bid represents an offer to purchase the Base asset of
the AssetPair for a sum of the Quote Asset and an Ask is an offer to sell Base asset
for the Quote asset.
- Price ratio
- An expression of the relative prices of two tokens. They consist of the
price of the primary token and the price of the secondary token. For
simplicity sake, the primary token's price is always 1, which forces the
secondary to be the common denominator. For example, if token A was worth
2 and token B was worth 6, the price ratio would be 1:3 or just 3. Price
ratios are represented as fixed point numbers. The fixed point scaler is
defined in
- An expression of the relative prices of two tokens. Calculated with the Base
Asset as the numerator and the Quote Asset as the denominator. Ratios are
represented as fixed point numbers. The fixed point scaler is defined in
[exchange_state.rs](https://github.com/solana-labs/solana/blob/c2fdd1362a029dcf89c8907c562d2079d977df11/programs/exchange_api/src/exchange_state.rs#L7)
- Order request
- A Solana transaction executed by the exchange requesting the trade of one
type of token for another. order requests are made up of the token pair,
the direction of the trade, quantity of the primary token, the price ratio,
and the two token accounts to be credited/deducted. An example trade
request looks like "T AB 5 2" which reads "Exchange 5 A tokens to B tokens
at a price ratio of 1:2" A fulfilled trade would result in 5 A tokens
- A Solana transaction sent by a trader to the exchange to submit an order.
Order requests are made up of the token pair, the order side (bid or ask),
quantity of the primary token, the price ratio, and the two token accounts
to be credited/deducted. An example trade request looks like "T AB 5 2"
which reads "Exchange 5 A tokens to B tokens at a price ratio of 1:2" A fulfilled trade would result in 5 A tokens
deducted and 10 B tokens credited to the trade initiator's token accounts.
Successful order requests result in an order.
- Order
@@ -75,59 +71,62 @@ matching orders. All the transactions can execute concurrently.
contain the same information as the order request.
- Price spread
- The difference between the two matching orders. The spread is the
profit of the Swapper initiating the swap request.
- Swap requirements
profit of the Matcher initiating the swap request.
- Match requirements
- Policies that result in a successful trade swap.
- Swap request
- A request to exchange tokens between to orders
- Trade swap
- A successful trade. A swap consists of two matching orders that meet
swap requirements. A trade swap may not wholly satisfy one or both of the
orders in which case the orders are adjusted appropriately. As
long as the swap requirements are met there will be an exchange of tokens
between accounts. Any price spread is deposited into the Swapper's profit
account. All trade swaps are recorded in a new account for posterity.
- Match request
- A request to fill two complementary orders (bid/ask), resulting if successful,
in a trade being created.
- Trade
- A successful trade is created from two matching orders that meet
swap requirements which are submitted in a Match Request by a Matcher and
executed by the exchange. A trade may not wholly satisfy one or both of the
orders in which case the orders are adjusted appropriately. Upon execution,
tokens are distributed to the traders' accounts and any overlap or
"negative spread" between orders is deposited into the Matcher's profit
account. All successful trades are recorded in the data of a new solana
account for posterity.
- Investor
- Individual investors who hold a number of tokens and wish to trade them on
the exchange. Investors operate as Solana thin clients who own a set of
accounts containing tokens and/or order requests. Investors post
transactions to the exchange in order to request tokens and post or cancel
order requests.
- Swapper
- An agent who facilitates trading between investors. Swappers operate as
- Matcher
- An agent who facilitates trading between investors. Matchers operate as
Solana thin clients who monitor all the orders looking for a trade
match. Once found, the Swapper issues a swap request to the exchange.
Swappers are the engine of the exchange and are rewarded for their efforts by
accumulating the price spreads of the swaps they initiate. Swappers also
match. Once found, the Matcher issues a swap request to the exchange.
Matchers are the engine of the exchange and are rewarded for their efforts by
accumulating the price spreads of the swaps they initiate. Matchers also
provide current bid/ask price and OHLCV (Open, High, Low, Close, Volume)
information on demand via a public network port.
- Transaction fees
- Solana transaction fees are paid for by the transaction submitters who are
the Investors and Swappers.
the Investors and Matchers.
## Exchange startup
The exchange is up and running when it reaches a state where it can take
investor's trades and Swapper's swap requests. To achieve this state the
investors' trades and Matchers' match requests. To achieve this state the
following must occur in order:
- Start the Solana blockchain
- Start the Swapper thin-client
- The Swapper subscribes to change notifications for all the accounts owned by
- Start the thin-client
- The Matcher subscribes to change notifications for all the accounts owned by
the exchange program id. The subscription is managed via Solana's JSON RPC
interface.
- The Swapper starts responding to queries for bid/ask price and OHLCV
- The Matcher starts responding to queries for bid/ask price and OHLCV
The Swapper responding successfully to price and OHLCV requests is the signal to
The Matcher responding successfully to price and OHLCV requests is the signal to
the investors that trades submitted after that point will be analyzed. <!--This
is not ideal, and instead investors should be able to submit trades at any time,
and the Swapper could come and go without missing a trade. One way to achieve
this is for the Swapper to read the current state of all accounts looking for all
and the Matcher could come and go without missing a trade. One way to achieve
this is for the Matcher to read the current state of all accounts looking for all
open orders.-->
Investors will initially query the exchange to discover their current balance
for each type of token. If the investor does not already have an account for
each type of token, they will submit account requests. Swappers as well will
each type of token, they will submit account requests. Matchers as well will
request accounts to hold the tokens they earn by initiating trade swaps.
```rust
@@ -165,7 +164,7 @@ pub struct TokenAccountInfo {
}
```
For this demo investors or Swappers can request more tokens from the exchange at
For this demo investors or Matchers can request more tokens from the exchange at
any time by submitting token requests. In non-demos, an exchange of this type
would provide another way to exchange a 3rd party asset into tokens.
@@ -269,10 +268,10 @@ pub enum ExchangeInstruction {
## Trade swaps
The Swapper is monitoring the accounts assigned to the exchange program and
The Matcher is monitoring the accounts assigned to the exchange program and
building a trade-order table. The order table is used to identify
matching orders which could be fulfilled. When a match is found the
Swapper should issue a swap request. Swap requests may not satisfy the entirety
Matcher should issue a swap request. Swap requests may not satisfy the entirety
of either order, but the exchange will greedily fulfill it. Any leftover tokens
in either account will keep the order valid for further swap requests in
the future.
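The greedy-fill behavior described above can be sketched in a few lines; the snippet below is purely illustrative (it is not the exchange_api implementation, and it uses plain integer quantities and prices rather than the program's fixed-point price ratios):

```rust
// Illustrative sketch of one greedy match between complementary orders.
// Not the exchange_api implementation; integer prices stand in for the
// program's fixed-point price ratios.
#[derive(Debug)]
struct Order {
    tokens: u64, // remaining quantity of the Base asset
    price: u64,  // Quote tokens per Base token
}

/// Fill as much of the two orders as possible. Returns
/// (base_filled, quote_to_seller, matcher_profit), or None if the
/// orders do not cross; leftover tokens keep an order on the table.
fn greedy_match(bid: &mut Order, ask: &mut Order) -> Option<(u64, u64, u64)> {
    if bid.price < ask.price {
        return None; // no overlap, nothing to fill
    }
    let filled = bid.tokens.min(ask.tokens);
    bid.tokens -= filled;
    ask.tokens -= filled;
    let paid = filled * bid.price;     // what the buyer committed
    let received = filled * ask.price; // what the seller asked for
    Some((filled, received, paid - received)) // the spread goes to the Matcher
}

fn main() {
    let mut bid = Order { tokens: 2, price: 5 }; // buy 2 Base at 5 Quote each
    let mut ask = Order { tokens: 3, price: 4 }; // sell 3 Base at 4 Quote each
    println!("{:?}", greedy_match(&mut bid, &mut ask)); // Some((2, 8, 2))
}
```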
@@ -310,14 +309,14 @@ whole for clarity.
| 5 | 1 T AB 2 10 | 2 F AB 1 5 |
As part of a successful swap request, the exchange will credit tokens to the
Swapper's account equal to the difference in the price ratios or the two orders.
These tokens are considered the Swapper's profit for initiating the trade.
Matcher's account equal to the difference in the price ratios or the two orders.
These tokens are considered the Matcher's profit for initiating the trade.
The Swapper would initiate the following swap on the order table above:
The Matcher would initiate the following swap on the order table above:
- Row 1, To: Investor 1 trades 2 A tokens to 8 B tokens
- Row 1, From: Investor 2 trades 2 A tokens from 8 B tokens
- Swapper takes 8 B tokens as profit
- Matcher takes 8 B tokens as profit
Both row 1 trades are fully realized, table becomes:
@@ -328,11 +327,11 @@ Both row 1 trades are fully realized, table becomes:
| 3 | 1 T AB 2 8 | 2 F AB 3 6 |
| 4 | 1 T AB 2 10 | 2 F AB 1 5 |
The Swapper would initiate the following swap:
The Matcher would initiate the following swap:
- Row 1, To: Investor 1 trades 1 A token to 4 B tokens
- Row 1, From: Investor 2 trades 1 A token from 4 B tokens
- Swapper takes 4 B tokens as profit
- Matcher takes 4 B tokens as profit
Row 1 From is not fully realized, table becomes:
@@ -343,11 +342,11 @@ Row 1 From is not fully realized, table becomes:
| 3 | 1 T AB 2 10 | 2 F AB 3 6 |
| 4 | | 2 F AB 1 5 |
The Swapper would initiate the following swap:
The Matcher would initiate the following swap:
- Row 1, To: Investor 1 trades 1 A token to 6 B tokens
- Row 1, From: Investor 2 trades 1 A token from 6 B tokens
- Swapper takes 2 B tokens as profit
- Matcher takes 2 B tokens as profit
Row 1 To is now fully realized, table becomes:
@@ -357,11 +356,11 @@ Row 1 To is now fully realized, table becomes:
| 2 | 1 T AB 2 8 | 2 F AB 3 5 |
| 3 | 1 T AB 2 10 | 2 F AB 1 5 |
The Swapper would initiate the following last swap:
The Matcher would initiate the following last swap:
- Row 1, To: Investor 1 trades 2 A token to 12 B tokens
- Row 1, From: Investor 2 trades 2 A token from 12 B tokens
- Swapper takes 4 B tokens as profit
- Matcher takes 4 B tokens as profit
Table becomes:
@@ -383,7 +382,7 @@ pub enum ExchangeInstruction {
/// key 3 - `From` order
/// key 4 - Token account associated with the To Trade
/// key 5 - Token account associated with From trade
/// key 6 - Token account in which to deposit the Swappers profit from the swap.
/// key 6 - Token account in which to deposit the Matcher profit from the swap.
SwapRequest,
}
@@ -442,14 +441,14 @@ pub enum ExchangeInstruction {
/// key 3 - `From` order
/// key 4 - Token account associated with the To Trade
/// key 5 - Token account associated with From trade
/// key 6 - Token account in which to deposit the Swappers profit from the swap.
/// key 6 - Token account in which to deposit the Matcher profit from the swap.
SwapRequest,
}
```
## Quotes and OHLCV
The Swapper will provide current bid/ask price quotes based on trade actively and
The Matcher will provide current bid/ask price quotes based on trade actively and
also provide OHLCV based on some time window. The details of how the bid/ask
price quotes are calculated are yet to be decided.


@@ -527,21 +527,21 @@ fn trader<T>(
let mut trade_infos = vec![];
let start = account_group * batch_size as usize;
let end = account_group * batch_size as usize + batch_size as usize;
let mut direction = Direction::To;
let mut side = OrderSide::Ask;
for (signer, trade, src) in izip!(
signers[start..end].iter(),
trade_keys,
srcs[start..end].iter(),
) {
direction = if direction == Direction::To {
Direction::From
side = if side == OrderSide::Ask {
OrderSide::Bid
} else {
Direction::To
OrderSide::Ask
};
let order_info = OrderInfo {
/// Owner of the trade order
owner: Pubkey::default(), // don't care
direction,
side,
pair,
tokens,
price,
@@ -551,7 +551,7 @@ fn trader<T>(
trade_account: trade.pubkey(),
order_info,
});
trades.push((signer, trade.pubkey(), direction, src));
trades.push((signer, trade.pubkey(), side, src));
}
account_group = (account_group + 1) % account_groups as usize;
@@ -562,7 +562,7 @@ fn trader<T>(
trades.chunks(chunk_size).for_each(|chunk| {
let trades_txs: Vec<_> = chunk
.par_iter()
.map(|(signer, trade, direction, src)| {
.map(|(signer, trade, side, src)| {
let s: &Keypair = &signer;
let owner = &signer.pubkey();
let space = mem::size_of::<ExchangeState>() as u64;
@@ -571,7 +571,7 @@ fn trader<T>(
vec![
system_instruction::create_account(owner, trade, 1, space, &id()),
exchange_instruction::trade_request(
owner, trade, *direction, pair, tokens, price, src,
owner, trade, *side, pair, tokens, price, src,
),
],
blockhash,


@@ -96,12 +96,12 @@ impl OrderBook {
// Ok(())
// }
pub fn push(&mut self, pubkey: Pubkey, info: OrderInfo) -> Result<(), Box<dyn error::Error>> {
check_trade(info.direction, info.tokens, info.price)?;
match info.direction {
Direction::To => {
check_trade(info.side, info.tokens, info.price)?;
match info.side {
OrderSide::Ask => {
self.to_ab.push(ToOrder { pubkey, info });
}
Direction::From => {
OrderSide::Bid => {
self.from_ab.push(FromOrder { pubkey, info });
}
}


@@ -2,16 +2,16 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-streamer"
version = "0.17.2"
version = "0.18.0-pre0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
clap = "2.33.0"
solana = { path = "../core", version = "0.17.2" }
solana-logger = { path = "../logger", version = "0.17.2" }
solana-netutil = { path = "../netutil", version = "0.17.2" }
solana = { path = "../core", version = "0.18.0-pre0" }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-netutil = { path = "../netutil", version = "0.18.0-pre0" }
[features]
cuda = ["solana/cuda"]


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-bench-tps"
version = "0.17.2"
version = "0.18.0-pre0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,24 +10,24 @@ homepage = "https://solana.com/"
[dependencies]
bincode = "1.1.4"
clap = "2.33.0"
log = "0.4.7"
log = "0.4.8"
rayon = "1.1.0"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
serde_json = "1.0.40"
serde_yaml = "0.8.9"
solana = { path = "../core", version = "0.17.2" }
solana-client = { path = "../client", version = "0.17.2" }
solana-drone = { path = "../drone", version = "0.17.2" }
solana-librapay-api = { path = "../programs/librapay_api", version = "0.17.2" }
solana-logger = { path = "../logger", version = "0.17.2" }
solana-metrics = { path = "../metrics", version = "0.17.2" }
solana-measure = { path = "../measure", version = "0.17.2" }
solana-netutil = { path = "../netutil", version = "0.17.2" }
solana-runtime = { path = "../runtime", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-move-loader-program = { path = "../programs/move_loader_program", version = "0.17.2" }
solana-move-loader-api = { path = "../programs/move_loader_api", version = "0.17.2" }
solana = { path = "../core", version = "0.18.0-pre0" }
solana-client = { path = "../client", version = "0.18.0-pre0" }
solana-drone = { path = "../drone", version = "0.18.0-pre0" }
solana-librapay-api = { path = "../programs/librapay_api", version = "0.18.0-pre0" }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-metrics = { path = "../metrics", version = "0.18.0-pre0" }
solana-measure = { path = "../measure", version = "0.18.0-pre0" }
solana-netutil = { path = "../netutil", version = "0.18.0-pre0" }
solana-runtime = { path = "../runtime", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
solana-move-loader-program = { path = "../programs/move_loader_program", version = "0.18.0-pre0" }
solana-move-loader-api = { path = "../programs/move_loader_api", version = "0.18.0-pre0" }
[features]
cuda = ["solana/cuda"]


@@ -21,7 +21,7 @@ use std::process::exit;
pub const NUM_SIGNATURES_FOR_TXS: u64 = 100_000 * 60 * 60 * 24 * 7;
fn main() {
solana_logger::setup();
solana_logger::setup_with_filter("solana=info");
solana_metrics::set_panic_hook("bench-tps");
let matches = cli::build_args().get_matches();


@@ -30,6 +30,7 @@ Methods
* [getProgramAccounts](#getprogramaccounts)
* [getRecentBlockhash](#getrecentblockhash)
* [getSignatureStatus](#getsignaturestatus)
* [getSlot](#getslot)
* [getSlotLeader](#getslotleader)
* [getSlotsPerSegment](#getslotspersegment)
* [getStorageTurn](#getstorageturn)
@@ -226,7 +227,7 @@ Returns all accounts owned by the provided program Pubkey
##### Results:
The result field will be an array of arrays. Each sub array will contain:
* `string` - a the account Pubkey as base-58 encoded string
* `string` - the account Pubkey as base-58 encoded string
and a JSON object, with the following sub fields:
* `lamports`, number of lamports assigned to this account, as a signed 64-bit integer
@@ -293,6 +294,25 @@ curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id":1, "
-----
### getSlot
Returns the current slot the node is processing
##### Parameters:
None
##### Results:
* `u64` - Current slot
##### Example:
```bash
// Request
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getSlot"}' http://localhost:8899
// Result
{"jsonrpc":"2.0","result":"1234","id":1}
```
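As a rough illustration of consuming this endpoint programmatically, here is a minimal Rust sketch (assuming the `reqwest` and `serde_json` crates; the URL is illustrative, and the result is parsed as either a number or a string to match the example response above):

```rust
// Minimal sketch: query getSlot over JSON-RPC with reqwest's blocking client.
// The endpoint URL is illustrative; point it at your node's RPC port.
fn get_slot(rpc_url: &str) -> Result<u64, Box<dyn std::error::Error>> {
    let request = serde_json::json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getSlot"
    });
    let mut response = reqwest::Client::new().post(rpc_url).json(&request).send()?;
    let value: serde_json::Value = response.json()?;
    // Accept either a numeric or a string-encoded slot, as in the example above.
    match &value["result"] {
        serde_json::Value::Number(n) => Ok(n.as_u64().unwrap_or(0)),
        serde_json::Value::String(s) => Ok(s.parse()?),
        other => Err(format!("unexpected getSlot result: {}", other).into()),
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("current slot: {}", get_slot("http://localhost:8899")?);
    Ok(())
}
```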
-----
### getSlotLeader
Returns the current slot leader


@@ -74,7 +74,7 @@ The `solana-install` tool can be used to easily install and upgrade the cluster
software on Linux x86_64 and mac OS systems.
```bash
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.17.0/install/solana-install-init.sh | sh -s
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.16.5/install/solana-install-init.sh | sh -s
```
Alternatively build the `solana-install` program from source and run the
@@ -144,28 +144,42 @@ $ solana-gossip --entrypoint testnet.solana.com:8001 spy
# Press ^C to exit
```
Now configure a key pair for your validator by running:
Now create an identity keypair for your validator by running:
```bash
$ solana-keygen new -o ~/validator-keypair.json
```
and airdrop yourself some lamports to get started:
```bash
$ solana-wallet --keypair ~/validator-keypair.json airdrop 1000
```
Your validator will need a vote account. Create it now with the following
commands:
```bash
$ solana-keygen new -o ~/validator-vote-keypair.json
$ VOTE_PUBKEY=$(solana-keygen pubkey ~/validator-vote-keypair.json)
$ IDENTITY_PUBKEY=$(solana-keygen pubkey ~/validator-keypair.json)
$ solana-wallet create-vote-account "$VOTE_PUBKEY" "$IDENTITY_PUBKEY" 1
```
Then use one of the following commands, depending on your installation
choice, to start the node:
If this is a `solana-install`-installation:
```bash
$ validator.sh --identity ~/validator-keypair.json --config-dir ~/validator-config --rpc-port 8899 --poll-for-new-genesis-block testnet.solana.com
$ validator.sh --identity ~/validator-keypair.json --voting-keypair ~/validator-vote-keypair.json --ledger ~/validator-config --rpc-port 8899 --poll-for-new-genesis-block testnet.solana.com
```
Alternatively, the `solana-install run` command can be used to run the validator
node while periodically checking for and applying software updates:
```bash
$ solana-install run validator.sh -- --identity ~/validator-keypair.json --config-dir ~/validator-config --rpc-port 8899 --poll-for-new-genesis-block testnet.solana.com
$ solana-install run validator.sh -- --identity ~/validator-keypair.json --voting-keypair ~/validator-vote-keypair.json --ledger ~/validator-config --rpc-port 8899 --poll-for-new-genesis-block testnet.solana.com
```
If you built from source:
```bash
$ NDEBUG=1 USE_INSTALL=1 ./multinode-demo/validator.sh --identity ~/validator-keypair.json --rpc-port 8899 --poll-for-new-genesis-block testnet.solana.com
$ NDEBUG=1 USE_INSTALL=1 ./multinode-demo/validator.sh --identity ~/validator-keypair.json --voting-keypair ~/validator-vote-keypair.json --rpc-port 8899 --poll-for-new-genesis-block testnet.solana.com
```
#### Enabling CUDA
@@ -185,6 +199,11 @@ By default the validator will dynamically select available network ports in the
example, `validator.sh --dynamic-port-range 11000-11010 ...` will restrict the
validator to ports 11000-11011.
#### Limiting ledger size to conserve disk space
By default the validator will retain the full ledger. To conserve disk space
start the validator with the `--limit-ledger-size` flag, which will instruct the
validator to only retain the last couple of hours of ledger.
### Validator Monitoring
When `validator.sh` starts, it will output a validator configuration that looks
similar to:
@@ -216,12 +235,28 @@ $ solana-wallet show-vote-account 2ozWvfaXQd1X6uKh8jERoRGApDqSqcEy6fF1oN13LL2G
The vote pubkey for the validator can also be found by running:
```bash
# If this is a `solana-install`-installation run:
$ solana-keygen pubkey ~/.local/share/solana/install/active_release/config-local/validator-vote-keypair.json
# Otherwise run:
$ solana-keygen pubkey ./config-local/validator-vote-keypair.json
$ solana-keygen pubkey ~/validator-vote-keypair.json
```
#### Has my validator caught up?
After your validator boots, it may take some time to catch up with the cluster.
Use the `get-slot` wallet command to view the current slot that the cluster is
processing:
```bash
$ solana-wallet get-slot
```
The current slot that your validator is processing can then be seen with:
```bash
$ solana-wallet --url http://127.0.0.1:8899 get-slot
```
Until your validator has caught up, it will not be able to vote successfully and
stake cannot be delegated to it.
Also if you find the cluster's slot advancing faster than yours, you will likely
never catch up. This typically implies some kind of networking issue between
your validator and the rest of the cluster.
#### Validator Metrics
Metrics are available for local monitoring of your validator.
@@ -282,3 +317,33 @@ Keybase:
`https://keybase.pub/<KEYBASE_USERNAME>/solana/validator-<PUBKEY>`
3. Add or update your `solana-validator-info` with your Keybase username. The
CLI will verify the `validator-<PUBKEY>` file
### Staking
When your validator starts it will have no stake, which means it will be ineligible to become leader.
Adding stake can be accomplished by using the `solana-wallet` command. First
obtain the public key for your validator's vote account with:
```bash
$ solana-keygen pubkey ~/validator-config/vote-keypair.json
```
This will output a base58-encoded value that looks similar to
`DhUYZR98qFLLrnHg2HWeGhBQJ9tru7nwdEfYm8L8HdR9`. Then create a stake account
keypair with `solana-keygen`:
```bash
$ solana-keygen new -o ~/validator-config/stake-keypair.json
```
and use the wallet's `delegate-stake` command to stake your validator with 42 lamports:
```bash
$ solana-wallet delegate-stake ~/validator-config/stake-keypair.json [VOTE PUBKEY] 42
```
Note that stake changes are applied at Epoch boundaries so it can take an hour
or more for the change to take effect.
Stake can be deactivated by running:
```bash
$ solana-wallet deactivate-stake ~/validator-config/stake-keypair.json
```
Note that a stake account may only be used once, so after deactivation use the
wallet's `withdraw-stake` command to recover the previously staked lamports.


@@ -53,7 +53,8 @@ software.
##### Linux and mac OS
```bash
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.17.0/install/solana-install-init.sh | sh -s
$ export SOLANA_RELEASE=v0.16.0 # skip this line to install the latest release
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v0.16.0/install/solana-install-init.sh | sh -s
```
Alternatively build the `solana-install` program from source and run the


@@ -1,6 +1,6 @@
[package]
name = "solana-chacha-sys"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana chacha-sys"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"


@@ -6,7 +6,7 @@ steps:
timeout_in_minutes: 60
name: "publish docker"
- command: "ci/publish-crate.sh"
timeout_in_minutes: 120
timeout_in_minutes: 90
name: "publish crate"
branches: "!master"
- command: "ci/publish-bpf-sdk.sh"


@@ -74,20 +74,23 @@ source scripts/configure-metrics.sh
nodes=(
"multinode-demo/drone.sh"
"multinode-demo/bootstrap-leader.sh \
--enable-rpc-exit \
--no-restart \
--init-complete-file init-complete-node1.log"
--init-complete-file init-complete-node1.log \
--dynamic-port-range 8000-8019"
"multinode-demo/validator.sh \
--enable-rpc-exit \
--no-restart \
--dynamic-port-range 8020-8039
--init-complete-file init-complete-node2.log \
--rpc-port 18899"
)
for i in $(seq 1 $extraNodes); do
portStart=$((8040 + i * 20))
portEnd=$((portStart + 19))
nodes+=(
"multinode-demo/validator.sh \
--no-restart \
--dynamic-port-range $portStart-$portEnd
--label dyn$i \
--init-complete-file init-complete-node$((2 + i)).log"
)
@@ -267,7 +270,7 @@ verifyLedger() {
(
source multinode-demo/common.sh
set -x
$solana_ledger_tool --ledger "$SOLANA_CONFIG_DIR"/$ledger-ledger verify
$solana_ledger_tool --ledger "$SOLANA_CONFIG_DIR"/$ledger verify
) || flag_error
done
}


@@ -101,7 +101,8 @@ echo --- Creating tarball
set -e
cd "$(dirname "$0")"/..
export USE_INSTALL=1
export REQUIRE_CONFIG_DIR=1
export REQUIRE_LEDGER_DIR=1
export REQUIRE_KEYPAIRS=1
exec multinode-demo/validator.sh "$@"
EOF
chmod +x solana-release/bin/validator.sh


@@ -10,35 +10,6 @@ source ci/rust-version.sh nightly
export RUST_BACKTRACE=1
export RUSTFLAGS="-D warnings"
do_bpf_check() {
_ cargo +"$rust_stable" fmt --all -- --check
_ cargo +"$rust_nightly" test --all
_ cargo +"$rust_nightly" clippy --version
_ cargo +"$rust_nightly" clippy --all -- --deny=warnings
_ cargo +"$rust_stable" audit
}
(
(
cd sdk/bpf/rust/rust-no-std
do_bpf_check
)
(
cd sdk/bpf/rust/rust-utils
do_bpf_check
)
(
cd sdk/bpf/rust/rust-test
do_bpf_check
)
for project in programs/bpf/rust/*/ ; do
(
cd "$project"
do_bpf_check
)
done
)
_ cargo +"$rust_stable" fmt --all -- --check
_ cargo +"$rust_stable" clippy --version
_ cargo +"$rust_stable" clippy --all -- --deny=warnings
@@ -48,4 +19,16 @@ _ ci/nits.sh
_ ci/order-crates-for-publishing.py
_ book/build.sh
for project in sdk/bpf/rust/{rust-no-std,rust-utils,rust-test} programs/bpf/rust/*/ ; do
echo "+++ do_bpf_check $project"
(
cd "$project"
_ cargo +"$rust_stable" fmt --all -- --check
_ cargo +"$rust_nightly" test --all
_ cargo +"$rust_nightly" clippy --version
_ cargo +"$rust_nightly" clippy --all -- --deny=warnings
_ cargo +"$rust_stable" audit
)
done
echo --- ok


@@ -154,8 +154,8 @@ testnet-demo)
: "${GCE_LOW_QUOTA_NODE_COUNT:=70}"
;;
tds)
CHANNEL_OR_TAG=beta
CHANNEL_BRANCH=$BETA_CHANNEL
CHANNEL_OR_TAG=edge
CHANNEL_BRANCH=$EDGE_CHANNEL
;;
*)
echo "Error: Invalid TESTNET=$TESTNET"
@@ -350,7 +350,7 @@ deploy() {
NO_VALIDATOR_SANITY=1 \
ci/testnet-deploy.sh -p beta-testnet-solana-com -C gce -z us-west1-b \
-t "$CHANNEL_OR_TAG" -n 2 -c 0 -u -P \
-a beta-testnet-solana-com --letsencrypt beta.testnet.solana.com \
-a beta-testnet-solana-com --letsencrypt beta.testnet.solana.com \
${skipCreate:+-e} \
${skipStart:+-s} \
${maybeStop:+-S} \
@@ -551,8 +551,7 @@ deploy() {
${maybeExternalAccountsFile} \
${maybeLamports} \
${maybeAdditionalDisk} \
--skip-deploy-update \
--no-snapshot
--skip-deploy-update
)
;;
*)


@@ -1,6 +1,6 @@
[package]
name = "solana-client"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana Client"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -11,18 +11,18 @@ edition = "2018"
[dependencies]
bincode = "1.1.4"
bs58 = "0.2.0"
jsonrpc-core = "12.1.0"
log = "0.4.7"
jsonrpc-core = "13.0.0"
log = "0.4.8"
rand = "0.6.5"
rayon = "1.1.0"
reqwest = "0.9.19"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
serde_json = "1.0.40"
solana-netutil = { path = "../netutil", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-netutil = { path = "../netutil", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
[dev-dependencies]
jsonrpc-core = "12.1.0"
jsonrpc-http-server = "12.1.0"
solana-logger = { path = "../logger", version = "0.17.2" }
jsonrpc-core = "13.0.0"
jsonrpc-http-server = "13.0.0"
solana-logger = { path = "../logger", version = "0.18.0-pre0" }


@@ -1,7 +1,7 @@
[package]
name = "solana"
description = "Blockchain, Rebuilt for Scale"
version = "0.17.2"
version = "0.18.0-pre0"
documentation = "https://docs.rs/solana"
homepage = "https://solana.com/"
readme = "../README.md"
@@ -25,16 +25,19 @@ chrono = { version = "0.4.7", features = ["serde"] }
core_affinity = "0.5.9"
crc = { version = "1.8.1", optional = true }
crossbeam-channel = "0.3"
dir-diff = "0.3.1"
flate2 = "1.0.9"
fs_extra = "1.1.0"
hashbrown = "0.2.0"
indexmap = "1.0"
itertools = "0.8.0"
jsonrpc-core = "12.1.0"
jsonrpc-derive = "12.1.0"
jsonrpc-http-server = "12.1.0"
jsonrpc-pubsub = "12.0.0"
jsonrpc-ws-server = "12.1.0"
jsonrpc-core = "13.0.0"
jsonrpc-derive = "13.0.0"
jsonrpc-http-server = "13.0.0"
jsonrpc-pubsub = "13.0.0"
jsonrpc-ws-server = "13.0.0"
libc = "0.2.58"
log = "0.4.7"
log = "0.4.8"
memmap = { version = "0.7.0", optional = true }
nix = "0.14.1"
num-traits = "0.2"
@@ -43,31 +46,36 @@ rand_chacha = "0.1.1"
rayon = "1.1.0"
reqwest = "0.9.19"
rocksdb = "0.11.0"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
serde_json = "1.0.40"
solana-budget-api = { path = "../programs/budget_api", version = "0.17.2" }
solana-budget-program = { path = "../programs/budget_program", version = "0.17.2" }
solana-chacha-sys = { path = "../chacha-sys", version = "0.17.2" }
solana-client = { path = "../client", version = "0.17.2" }
solana-drone = { path = "../drone", version = "0.17.2" }
solana-budget-api = { path = "../programs/budget_api", version = "0.18.0-pre0" }
solana-budget-program = { path = "../programs/budget_program", version = "0.18.0-pre0" }
solana-chacha-sys = { path = "../chacha-sys", version = "0.18.0-pre0" }
solana-client = { path = "../client", version = "0.18.0-pre0" }
solana-drone = { path = "../drone", version = "0.18.0-pre0" }
solana-ed25519-dalek = "0.2.0"
solana-kvstore = { path = "../kvstore", version = "0.17.2", optional = true }
solana-logger = { path = "../logger", version = "0.17.2" }
solana-merkle-tree = { path = "../merkle-tree", version = "0.17.2" }
solana-metrics = { path = "../metrics", version = "0.17.2" }
solana-measure = { path = "../measure", version = "0.17.2" }
solana-netutil = { path = "../netutil", version = "0.17.2" }
solana-runtime = { path = "../runtime", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-stake-api = { path = "../programs/stake_api", version = "0.17.2" }
solana-storage-api = { path = "../programs/storage_api", version = "0.17.2" }
solana-storage-program = { path = "../programs/storage_program", version = "0.17.2" }
solana-vote-api = { path = "../programs/vote_api", version = "0.17.2" }
solana-vote-signer = { path = "../vote-signer", version = "0.17.2" }
solana-kvstore = { path = "../kvstore", version = "0.18.0-pre0", optional = true }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-merkle-tree = { path = "../merkle-tree", version = "0.18.0-pre0" }
solana-metrics = { path = "../metrics", version = "0.18.0-pre0" }
solana-measure = { path = "../measure", version = "0.18.0-pre0" }
solana-netutil = { path = "../netutil", version = "0.18.0-pre0" }
solana-runtime = { path = "../runtime", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
solana-stake-api = { path = "../programs/stake_api", version = "0.18.0-pre0" }
solana-storage-api = { path = "../programs/storage_api", version = "0.18.0-pre0" }
solana-storage-program = { path = "../programs/storage_program", version = "0.18.0-pre0" }
solana-vote-api = { path = "../programs/vote_api", version = "0.18.0-pre0" }
solana-vote-signer = { path = "../vote-signer", version = "0.18.0-pre0" }
symlink = "0.1.0"
sys-info = "0.5.7"
tar = "0.4.26"
tempfile = "3.1.0"
tokio = "0.1"
tokio-codec = "0.1"
tokio-fs = "0.1"
tokio-io = "0.1"
untrusted = "0.7.0"
# reed-solomon-erasure's simd_c feature fails to build for x86_64-pc-windows-msvc, use pure-rust


@@ -11,12 +11,13 @@ use rand::{thread_rng, Rng};
use solana::blocktree::{get_tmp_ledger_path, Blocktree};
use solana::entry::{make_large_test_entries, make_tiny_test_entries, EntrySlice};
use solana::packet::{Blob, BLOB_HEADER_SIZE};
use std::path::Path;
use test::Bencher;
// Given some blobs and a ledger at ledger_path, benchmark writing the blobs to the ledger
fn bench_write_blobs(bench: &mut Bencher, blobs: &mut Vec<Blob>, ledger_path: &str) {
fn bench_write_blobs(bench: &mut Bencher, blobs: &mut Vec<Blob>, ledger_path: &Path) {
let blocktree =
Blocktree::open(&ledger_path).expect("Expected to be able to open database ledger");
Blocktree::open(ledger_path).expect("Expected to be able to open database ledger");
let num_blobs = blobs.len();
@@ -36,7 +37,7 @@ fn bench_write_blobs(bench: &mut Bencher, blobs: &mut Vec<Blob>, ledger_path: &s
}
});
Blocktree::destroy(&ledger_path).expect("Expected successful database destruction");
Blocktree::destroy(ledger_path).expect("Expected successful database destruction");
}
// Insert some blobs into the ledger in preparation for read benchmarks


@@ -1,25 +1,61 @@
//! The `bank_forks` module implements BankForks, a DAG of checkpointed Banks
use bincode::{deserialize_from, serialize_into};
use crate::result::{Error, Result};
use crate::snapshot_package::SnapshotPackageSender;
use crate::snapshot_utils;
use solana_measure::measure::Measure;
use solana_metrics::inc_new_counter_info;
use solana_runtime::bank::{Bank, BankRc, StatusCacheRc};
use solana_runtime::status_cache::MAX_CACHE_ENTRIES;
use solana_sdk::genesis_block::GenesisBlock;
use solana_sdk::timing;
use std::collections::{HashMap, HashSet};
use std::fs;
use std::fs::File;
use std::io::{BufReader, BufWriter, Error, ErrorKind};
use std::io::{Error as IOError, ErrorKind};
use std::ops::Index;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Instant;
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct SnapshotConfig {
snapshot_path: PathBuf,
snapshot_package_output_path: PathBuf,
snapshot_interval_slots: usize,
}
impl SnapshotConfig {
pub fn new(
snapshot_path: PathBuf,
snapshot_package_output_path: PathBuf,
snapshot_interval_slots: usize,
) -> Self {
Self {
snapshot_path,
snapshot_package_output_path,
snapshot_interval_slots,
}
}
pub fn snapshot_path(&self) -> &Path {
self.snapshot_path.as_path()
}
pub fn snapshot_package_output_path(&self) -> &Path {
&self.snapshot_package_output_path.as_path()
}
pub fn snapshot_interval_slots(&self) -> usize {
self.snapshot_interval_slots
}
}
pub struct BankForks {
banks: HashMap<u64, Arc<Bank>>,
working_bank: Arc<Bank>,
root: u64,
slots: HashSet<u64>,
snapshot_path: Option<String>,
snapshot_config: Option<SnapshotConfig>,
last_snapshot: u64,
confidence: HashMap<u64, Confidence>,
}
@@ -71,8 +107,8 @@ impl BankForks {
banks,
working_bank,
root: 0,
slots: HashSet::new(),
snapshot_path: None,
snapshot_config: None,
last_snapshot: 0,
confidence: HashMap::new(),
}
}
@@ -136,8 +172,8 @@ impl BankForks {
root,
banks,
working_bank,
slots: HashSet::new(),
snapshot_path: None,
snapshot_config: None,
last_snapshot: 0,
confidence: HashMap::new(),
}
}
@@ -156,7 +192,7 @@ impl BankForks {
self.working_bank.clone()
}
pub fn set_root(&mut self, root: u64) {
pub fn set_root(&mut self, root: u64, snapshot_package_sender: &Option<SnapshotPackageSender>) {
self.root = root;
let set_root_start = Instant::now();
let root_bank = self
@@ -172,6 +208,33 @@ impl BankForks {
let new_tx_count = root_bank.transaction_count();
self.prune_non_root(root);
// Generate a snapshot if snapshots are configured and it's been an appropriate number
// of banks since the last snapshot
if self.snapshot_config.is_some() {
let config = self
.snapshot_config
.as_ref()
.expect("Called package_snapshot without a snapshot configuration");
if root - self.last_snapshot >= config.snapshot_interval_slots as u64 {
let mut snapshot_time = Measure::start("total-snapshot-ms");
let r = self.generate_snapshot(
root,
snapshot_package_sender.as_ref().unwrap(),
snapshot_utils::get_snapshot_tar_path(&config.snapshot_package_output_path),
);
if r.is_err() {
warn!("Error generating snapshot for bank: {}, err: {:?}", root, r);
} else {
self.last_snapshot = root;
}
// Cleanup outdated snapshots
self.purge_old_snapshots();
snapshot_time.stop();
inc_new_counter_info!("total-snapshot-setup-ms", snapshot_time.as_ms() as usize);
}
}
inc_new_counter_info!(
"bank-forks_set_root_ms",
timing::duration_as_ms(&set_root_start.elapsed()) as usize
@@ -186,30 +249,60 @@ impl BankForks {
self.root
}
fn prune_non_root(&mut self, root: u64) {
let slots: HashSet<u64> = self
.banks
.iter()
.filter(|(_, b)| b.is_frozen())
.map(|(k, _)| *k)
.collect();
let descendants = self.descendants();
self.banks
.retain(|slot, _| descendants[&root].contains(slot));
self.confidence
.retain(|slot, _| slot == &root || descendants[&root].contains(slot));
if self.snapshot_path.is_some() {
let diff: HashSet<_> = slots.symmetric_difference(&self.slots).collect();
trace!("prune non root {} - {:?}", root, diff);
for slot in diff.iter() {
if **slot > root {
let _ = self.add_snapshot(**slot, root);
} else {
BankForks::remove_snapshot(**slot, &self.snapshot_path);
}
fn purge_old_snapshots(&self) {
// Remove outdated snapshots
let config = self.snapshot_config.as_ref().unwrap();
let names = snapshot_utils::get_snapshot_names(&config.snapshot_path);
let num_to_remove = names.len().saturating_sub(MAX_CACHE_ENTRIES);
for old_root in &names[..num_to_remove] {
let r = snapshot_utils::remove_snapshot(*old_root, &config.snapshot_path);
if r.is_err() {
warn!("Couldn't remove snapshot at: {:?}", config.snapshot_path);
}
}
self.slots = slots.clone();
}
fn generate_snapshot<P: AsRef<Path>>(
&self,
root: u64,
snapshot_package_sender: &SnapshotPackageSender,
tar_output_file: P,
) -> Result<()> {
let config = self.snapshot_config.as_ref().unwrap();
// Add a snapshot for the new root
let bank = self
.get(root)
.cloned()
.expect("root must exist in BankForks");
snapshot_utils::add_snapshot(&config.snapshot_path, &bank, root)?;
// Package the relevant snapshots
let names = snapshot_utils::get_snapshot_names(&config.snapshot_path);
// We only care about the last MAX_CACHE_ENTRIES snapshots of roots because
// the status cache of anything older is thrown away by the bank in
// status_cache.prune_roots()
let start = names.len().saturating_sub(MAX_CACHE_ENTRIES);
let package = snapshot_utils::package_snapshot(
&bank,
&names[start..],
&config.snapshot_path,
tar_output_file,
)?;
// Send the package to the packaging thread
snapshot_package_sender.send(package)?;
Ok(())
}
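// A minimal usage sketch of the flow above, assuming the SnapshotConfig::new argument
// order (snapshot_path, snapshot_package_output_path, snapshot_interval_slots) used in
// the tests below; the receiver end is drained by a SnapshotPackagerService:
//
//     let (sender, receiver) = channel();
//     let snapshot_config = SnapshotConfig::new(
//         PathBuf::from("snapshots"),
//         PathBuf::from("snapshot_output"),
//         100,
//     );
//     bank_forks.set_snapshot_config(snapshot_config);
//     bank_forks.set_root(new_root, &Some(sender));
//     let snapshot_packager_service = SnapshotPackagerService::new(receiver, &exit);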
fn prune_non_root(&mut self, root: u64) {
let descendants = self.descendants();
self.banks
.retain(|slot, _| slot == &root || descendants[&root].contains(slot));
self.confidence
.retain(|slot, _| slot == &root || descendants[&root].contains(slot));
}
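// Illustration of the retention rule above: with forks 0 -> 1 -> 2 and 0 -> 3,
// prune_non_root(1) keeps slots {1, 2} (the root and its descendants) and drops
// {0, 3} from both `banks` and `confidence`.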
pub fn cache_fork_confidence(
@@ -247,115 +340,20 @@ impl BankForks {
self.confidence.get(&fork)
}
fn get_io_error(error: &str) -> Error {
warn!("BankForks error: {:?}", error);
Error::new(ErrorKind::Other, error)
pub fn set_snapshot_config(&mut self, snapshot_config: SnapshotConfig) {
self.snapshot_config = Some(snapshot_config);
}
fn get_snapshot_path(path: &Option<String>) -> PathBuf {
Path::new(&path.clone().unwrap()).to_path_buf()
}
pub fn add_snapshot(&self, slot: u64, root: u64) -> Result<(), Error> {
let path = BankForks::get_snapshot_path(&self.snapshot_path);
fs::create_dir_all(path.clone())?;
let bank_file = format!("{}", slot);
let bank_file_path = path.join(bank_file);
trace!("path: {:?}", bank_file_path);
let file = File::create(bank_file_path)?;
let mut stream = BufWriter::new(file);
let bank_slot = self.get(slot);
if bank_slot.is_none() {
return Err(BankForks::get_io_error("bank_forks get error"));
}
let bank = bank_slot.unwrap().clone();
serialize_into(&mut stream, &*bank)
.map_err(|_| BankForks::get_io_error("serialize bank error"))?;
let mut parent_slot: u64 = 0;
if let Some(parent_bank) = bank.parent() {
parent_slot = parent_bank.slot();
}
serialize_into(&mut stream, &parent_slot)
.map_err(|_| BankForks::get_io_error("serialize bank parent error"))?;
serialize_into(&mut stream, &root)
.map_err(|_| BankForks::get_io_error("serialize root error"))?;
serialize_into(&mut stream, &bank.src)
.map_err(|_| BankForks::get_io_error("serialize bank status cache error"))?;
serialize_into(&mut stream, &bank.rc)
.map_err(|_| BankForks::get_io_error("serialize bank accounts error"))?;
Ok(())
}
pub fn remove_snapshot(slot: u64, path: &Option<String>) {
let path = BankForks::get_snapshot_path(path);
let bank_file = format!("{}", slot);
let bank_file_path = path.join(bank_file);
let _ = fs::remove_file(bank_file_path);
}
pub fn set_snapshot_config(&mut self, path: Option<String>) {
self.snapshot_path = path;
}
fn load_snapshots(
names: &[u64],
bank0: &mut Bank,
bank_maps: &mut Vec<(u64, u64, Bank)>,
status_cache_rc: &StatusCacheRc,
snapshot_path: &Option<String>,
) -> Option<u64> {
let path = BankForks::get_snapshot_path(snapshot_path);
let mut bank_root: Option<u64> = None;
for bank_slot in names.iter().rev() {
let bank_path = format!("{}", bank_slot);
let bank_file_path = path.join(bank_path.clone());
info!("Load from {:?}", bank_file_path);
let file = File::open(bank_file_path);
if file.is_err() {
warn!("Snapshot file open failed for {}", bank_slot);
continue;
}
let file = file.unwrap();
let mut stream = BufReader::new(file);
let bank: Result<Bank, std::io::Error> = deserialize_from(&mut stream)
.map_err(|_| BankForks::get_io_error("deserialize bank error"));
let slot: Result<u64, std::io::Error> = deserialize_from(&mut stream)
.map_err(|_| BankForks::get_io_error("deserialize bank parent error"));
let parent_slot = if slot.is_ok() { slot.unwrap() } else { 0 };
let root: Result<u64, std::io::Error> = deserialize_from(&mut stream)
.map_err(|_| BankForks::get_io_error("deserialize root error"));
let status_cache: Result<StatusCacheRc, std::io::Error> = deserialize_from(&mut stream)
.map_err(|_| BankForks::get_io_error("deserialize bank status cache error"));
if bank_root.is_none() && bank0.rc.update_from_stream(&mut stream).is_ok() {
bank_root = Some(root.unwrap());
}
if bank_root.is_some() {
match bank {
Ok(v) => {
if status_cache.is_ok() {
status_cache_rc.append(&status_cache.unwrap());
}
bank_maps.push((*bank_slot, parent_slot, v));
}
Err(_) => warn!("Load snapshot failed for {}", bank_slot),
}
} else {
BankForks::remove_snapshot(*bank_slot, snapshot_path);
warn!("Load snapshot rc failed for {}", bank_slot);
}
}
bank_root
pub fn snapshot_config(&self) -> &Option<SnapshotConfig> {
&self.snapshot_config
}
fn setup_banks(
bank_maps: &mut Vec<(u64, u64, Bank)>,
bank_rc: &BankRc,
status_cache_rc: &StatusCacheRc,
) -> (HashMap<u64, Arc<Bank>>, HashSet<u64>, u64) {
) -> (HashMap<u64, Arc<Bank>>, u64) {
let mut banks = HashMap::new();
let mut slots = HashSet::new();
let (last_slot, last_parent_slot, mut last_bank) = bank_maps.remove(0);
last_bank.set_bank_rc(&bank_rc, &status_cache_rc);
@@ -368,7 +366,6 @@ impl BankForks {
}
if slot > 0 {
banks.insert(slot, Arc::new(bank));
slots.insert(slot);
}
}
if last_parent_slot != 0 {
@@ -377,57 +374,54 @@ impl BankForks {
}
}
banks.insert(last_slot, Arc::new(last_bank));
slots.insert(last_slot);
(banks, slots, last_slot)
(banks, last_slot)
}
pub fn load_from_snapshot(
genesis_block: &GenesisBlock,
account_paths: Option<String>,
snapshot_path: &Option<String>,
) -> Result<Self, Error> {
let path = BankForks::get_snapshot_path(snapshot_path);
let paths = fs::read_dir(path)?;
let mut names = paths
.filter_map(|entry| {
entry.ok().and_then(|e| {
e.path()
.file_name()
.and_then(|n| n.to_str().map(|s| s.parse::<u64>().unwrap()))
})
})
.collect::<Vec<u64>>();
names.sort();
snapshot_config: &SnapshotConfig,
) -> Result<Self> {
fs::create_dir_all(&snapshot_config.snapshot_path)?;
let names = snapshot_utils::get_snapshot_names(&snapshot_config.snapshot_path);
if names.is_empty() {
return Err(Error::IO(IOError::new(
ErrorKind::Other,
"no snapshots found",
)));
}
let mut bank_maps = vec![];
let status_cache_rc = StatusCacheRc::default();
let id = (names[names.len() - 1] + 1) as usize;
let mut bank0 =
Bank::create_with_genesis(&genesis_block, account_paths.clone(), &status_cache_rc, id);
bank0.freeze();
let bank_root = BankForks::load_snapshots(
let bank_root = snapshot_utils::load_snapshots(
&names,
&mut bank0,
&mut bank_maps,
&status_cache_rc,
snapshot_path,
&snapshot_config.snapshot_path,
);
if bank_maps.is_empty() || bank_root.is_none() {
BankForks::remove_snapshot(0, snapshot_path);
return Err(Error::new(ErrorKind::Other, "no snapshots loaded"));
return Err(Error::IO(IOError::new(
ErrorKind::Other,
"no snapshots loaded",
)));
}
let root = bank_root.unwrap();
let (banks, slots, last_slot) =
let (banks, last_slot) =
BankForks::setup_banks(&mut bank_maps, &bank0.rc, &status_cache_rc);
let working_bank = banks[&last_slot].clone();
Ok(BankForks {
banks,
working_bank,
root,
slots,
snapshot_path: snapshot_path.clone(),
snapshot_config: None,
last_snapshot: *names.last().unwrap(),
confidence: HashMap::new(),
})
}
@@ -437,12 +431,17 @@ impl BankForks {
mod tests {
use super::*;
use crate::genesis_utils::{create_genesis_block, GenesisBlockInfo};
use crate::service::Service;
use crate::snapshot_package::SnapshotPackagerService;
use fs_extra::dir::CopyOptions;
use itertools::Itertools;
use solana_sdk::hash::Hash;
use solana_sdk::pubkey::Pubkey;
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_transaction;
use std::env;
use std::fs::remove_dir_all;
use std::sync::atomic::AtomicBool;
use std::sync::mpsc::channel;
use tempfile::TempDir;
#[test]
fn test_bank_forks() {
@@ -552,72 +551,25 @@ mod tests {
);
}
struct TempPaths {
pub paths: String,
}
impl TempPaths {
fn remove_all(&self) {
let paths: Vec<String> = self.paths.split(',').map(|s| s.to_string()).collect();
paths.iter().for_each(|p| {
let _ignored = remove_dir_all(p);
});
}
}
#[macro_export]
macro_rules! tmp_bank_accounts_name {
() => {
&format!("{}-{}", file!(), line!())
};
}
#[macro_export]
macro_rules! get_tmp_bank_accounts_path {
() => {
get_tmp_bank_accounts_path(tmp_bank_accounts_name!())
};
}
impl Drop for TempPaths {
fn drop(&mut self) {
self.remove_all()
}
}
fn get_paths_vec(paths: &str) -> Vec<String> {
paths.split(',').map(|s| s.to_string()).collect()
}
fn get_tmp_snapshots_path() -> TempPaths {
let out_dir = env::var("FARF_DIR").unwrap_or_else(|_| "farf".to_string());
let path = format!("{}/snapshots", out_dir);
TempPaths {
paths: path.to_string(),
}
}
fn get_tmp_bank_accounts_path(paths: &str) -> TempPaths {
let vpaths = get_paths_vec(paths);
let out_dir = env::var("FARF_DIR").unwrap_or_else(|_| "farf".to_string());
let vpaths: Vec<_> = vpaths
.iter()
.map(|path| format!("{}/{}", out_dir, path))
.collect();
TempPaths {
paths: vpaths.join(","),
}
}
fn restore_from_snapshot(
genesis_block: &GenesisBlock,
bank_forks: BankForks,
account_paths: Option<String>,
last_slot: u64,
) {
let new =
BankForks::load_from_snapshot(&genesis_block, account_paths, &bank_forks.snapshot_path)
.unwrap();
let snapshot_path = bank_forks
.snapshot_config
.as_ref()
.map(|c| &c.snapshot_path)
.unwrap();
let new = BankForks::load_from_snapshot(
&genesis_block,
account_paths,
bank_forks.snapshot_config.as_ref().unwrap(),
)
.unwrap();
for (slot, _) in new.banks.iter() {
if *slot > 0 {
let bank = bank_forks.banks.get(slot).unwrap().clone();
@@ -625,31 +577,39 @@ mod tests {
bank.compare_bank(&new_bank);
}
}
assert_eq!(new.working_bank().slot(), last_slot);
for (slot, _) in new.banks.iter() {
BankForks::remove_snapshot(*slot, &bank_forks.snapshot_path);
snapshot_utils::remove_snapshot(*slot, snapshot_path).unwrap();
}
}
#[test]
fn test_bank_forks_snapshot_n() {
solana_logger::setup();
let path = get_tmp_bank_accounts_path!();
let spath = get_tmp_snapshots_path();
let accounts_dir = TempDir::new().unwrap();
let snapshot_dir = TempDir::new().unwrap();
let snapshot_output_path = TempDir::new().unwrap();
let GenesisBlockInfo {
genesis_block,
mint_keypair,
..
} = create_genesis_block(10_000);
path.remove_all();
spath.remove_all();
for index in 0..10 {
let bank0 = Bank::new_with_paths(&genesis_block, Some(path.paths.clone()));
let bank0 = Bank::new_with_paths(
&genesis_block,
Some(accounts_dir.path().to_str().unwrap().to_string()),
);
bank0.freeze();
let slot = bank0.slot();
let mut bank_forks = BankForks::new(0, bank0);
bank_forks.set_snapshot_config(Some(spath.paths.clone()));
bank_forks.add_snapshot(slot, 0).unwrap();
let snapshot_config = SnapshotConfig::new(
PathBuf::from(snapshot_dir.path()),
PathBuf::from(snapshot_output_path.path()),
100,
);
bank_forks.set_snapshot_config(snapshot_config.clone());
let bank0 = bank_forks.get(0).unwrap();
snapshot_utils::add_snapshot(&snapshot_config.snapshot_path, bank0, 0).unwrap();
for forks in 0..index {
let bank = Bank::new_from_parent(&bank_forks[forks], &Pubkey::default(), forks + 1);
let key1 = Keypair::new().pubkey();
@@ -661,11 +621,146 @@ mod tests {
);
assert_eq!(bank.process_transaction(&tx), Ok(()));
bank.freeze();
let slot = bank.slot();
snapshot_utils::add_snapshot(&snapshot_config.snapshot_path, &bank, 0).unwrap();
bank_forks.insert(bank);
bank_forks.add_snapshot(slot, 0).unwrap();
}
restore_from_snapshot(&genesis_block, bank_forks, Some(path.paths.clone()), index);
restore_from_snapshot(
&genesis_block,
bank_forks,
Some(accounts_dir.path().to_str().unwrap().to_string()),
index,
);
}
}
#[test]
fn test_concurrent_snapshot_packaging() {
solana_logger::setup();
let accounts_dir = TempDir::new().unwrap();
let snapshots_dir = TempDir::new().unwrap();
let snapshot_output_path = TempDir::new().unwrap();
let GenesisBlockInfo {
genesis_block,
mint_keypair,
..
} = create_genesis_block(10_000);
let (sender, receiver) = channel();
let (fake_sender, _fake_receiver) = channel();
let bank0 = Bank::new_with_paths(
&genesis_block,
Some(accounts_dir.path().to_str().unwrap().to_string()),
);
bank0.freeze();
// Set up bank forks
let mut bank_forks = BankForks::new(0, bank0);
let snapshot_config = SnapshotConfig::new(
PathBuf::from(snapshots_dir.path()),
PathBuf::from(snapshot_output_path.path()),
1,
);
bank_forks.set_snapshot_config(snapshot_config.clone());
// Take snapshot of zeroth bank
let bank0 = bank_forks.get(0).unwrap();
snapshot_utils::add_snapshot(&snapshot_config.snapshot_path, bank0, 0).unwrap();
// Create the next MAX_CACHE_ENTRIES + 2 banks and snapshots. Every bank will get snapshotted
// and the snapshot purging logic will run on every snapshot taken. This means the three
// earliest snapshots (including the snapshot for bank0 created above) will have been purged
// by the time this loop is done.
// Also, make a saved copy of the state of the snapshot for a bank with
// bank.slot == saved_slot, so we can use it for a correctness check later.
let saved_snapshots_dir = TempDir::new().unwrap();
let saved_accounts_dir = TempDir::new().unwrap();
let saved_slot = 4;
let saved_tar = snapshot_config
.snapshot_package_output_path
.join(saved_slot.to_string());
for forks in 0..MAX_CACHE_ENTRIES + 2 {
let bank = Bank::new_from_parent(
&bank_forks[forks as u64],
&Pubkey::default(),
(forks + 1) as u64,
);
let slot = bank.slot();
let key1 = Keypair::new().pubkey();
let tx = system_transaction::create_user_account(
&mint_keypair,
&key1,
1,
genesis_block.hash(),
);
assert_eq!(bank.process_transaction(&tx), Ok(()));
bank.freeze();
bank_forks.insert(bank);
let package_sender = {
if slot == saved_slot as u64 {
// Only send one package on the real sender so that the packaging service
// doesn't take forever to run the packaging logic on all MAX_CACHE_ENTRIES
// packages later
&sender
} else {
&fake_sender
}
};
bank_forks
.generate_snapshot(
slot,
&package_sender,
snapshot_config
.snapshot_package_output_path
.join(slot.to_string()),
)
.unwrap();
if slot == saved_slot as u64 {
let options = CopyOptions::new();
fs_extra::dir::copy(&accounts_dir, &saved_accounts_dir, &options).unwrap();
fs_extra::dir::copy(&snapshots_dir, &saved_snapshots_dir, &options).unwrap();
}
}
// Purge all the outdated snapshots, including the ones needed to generate the package
// currently sitting in the channel
bank_forks.purge_old_snapshots();
let mut snapshot_names = snapshot_utils::get_snapshot_names(&snapshots_dir);
snapshot_names.sort();
assert_eq!(
snapshot_names,
(3..=MAX_CACHE_ENTRIES as u64 + 2).collect_vec()
);
// Create a SnapshotPackagerService to create tarballs from all the pending
// SnapshotPackages on the channel. By the time this service starts, we have already
// purged the first two snapshots, which are needed by every snapshot other than
// the last two snapshots. However, the packaging service should still be able to
// correctly construct the earlier snapshots because the SnapshotPackages on the
// channel hold hard links to these deleted snapshots. We verify this is the case below.
let exit = Arc::new(AtomicBool::new(false));
let snapshot_packager_service = SnapshotPackagerService::new(receiver, &exit);
// Close the channel so that the package service will exit after reading all the
// packages off the channel
drop(sender);
// Wait for service to finish
snapshot_packager_service
.join()
.expect("SnapshotPackagerService exited with error");
// Check the tar we cached the state for earlier was generated correctly
snapshot_utils::tests::verify_snapshot_tar(
saved_tar,
saved_snapshots_dir
.path()
.join(snapshots_dir.path().file_name().unwrap()),
saved_accounts_dir
.path()
.join(accounts_dir.path().file_name().unwrap()),
);
}
}
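// The hard-link behavior that the packaging test above relies on can be seen in
// isolation with std::fs (a sketch assuming a writable working directory, not part
// of the snapshot code):
//
//     use std::fs;
//     fs::write("snap", b"state").unwrap();
//     fs::hard_link("snap", "snap_link").unwrap();
//     fs::remove_file("snap").unwrap();
//     assert_eq!(fs::read("snap_link").unwrap(), b"state".to_vec()); // data still reachable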

View File

@@ -134,15 +134,13 @@ impl BankingStage {
fn forward_buffered_packets(
socket: &std::net::UdpSocket,
tpu_via_blobs: &std::net::SocketAddr,
tpu_forwards: &std::net::SocketAddr,
unprocessed_packets: &[PacketsAndOffsets],
) -> std::io::Result<()> {
let packets = Self::filter_valid_packets_for_forwarding(unprocessed_packets);
inc_new_counter_info!("banking_stage-forwarded_packets", packets.len());
let blobs = packet::packets_to_blobs(&packets);
for blob in blobs {
socket.send_to(&blob.data[..blob.meta.size], tpu_via_blobs)?;
for p in packets {
socket.send_to(&p.data[..p.meta.size], &tpu_forwards)?;
}
Ok(())
@@ -316,7 +314,7 @@ impl BankingStage {
.read()
.unwrap()
.lookup(&leader_pubkey)
.map(|leader| leader.tpu_via_blobs)
.map(|leader| leader.tpu_forwards)
};
leader_addr.map_or(Ok(()), |leader_addr| {

View File

@@ -10,6 +10,7 @@ use serde_json::json;
use solana_sdk::hash::Hash;
use solana_sdk::pubkey::Pubkey;
use std::cell::RefCell;
use std::path::{Path, PathBuf};
pub trait EntryWriter: std::fmt::Debug {
fn write(&self, payload: String) -> Result<()>;
@@ -41,7 +42,7 @@ impl EntryVec {
#[derive(Debug)]
pub struct EntrySocket {
socket: String,
unix_socket: PathBuf,
}
impl EntryWriter for EntrySocket {
@@ -50,11 +51,10 @@ impl EntryWriter for EntrySocket {
use std::io::prelude::*;
use std::net::Shutdown;
use std::os::unix::net::UnixStream;
use std::path::Path;
const MESSAGE_TERMINATOR: &str = "\n";
let mut socket = UnixStream::connect(Path::new(&self.socket))?;
let mut socket = UnixStream::connect(&self.unix_socket)?;
socket.write_all(payload.as_bytes())?;
socket.write_all(MESSAGE_TERMINATOR.as_bytes())?;
socket.shutdown(Shutdown::Write)?;
@@ -144,9 +144,11 @@ where
pub type SocketBlockstream = Blockstream<EntrySocket>;
impl SocketBlockstream {
pub fn new(socket: String) -> Self {
pub fn new(unix_socket: &Path) -> Self {
Blockstream {
output: EntrySocket { socket },
output: EntrySocket {
unix_socket: unix_socket.to_path_buf(),
},
}
}
}
@@ -154,7 +156,7 @@ impl SocketBlockstream {
pub type MockBlockstream = Blockstream<EntryVec>;
impl MockBlockstream {
pub fn new(_: String) -> Self {
pub fn new(_: &Path) -> Self {
Blockstream {
output: EntryVec::new(),
}
@@ -183,6 +185,7 @@ mod test {
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_transaction;
use std::collections::HashSet;
use std::path::PathBuf;
#[test]
fn test_serialize_transactions() {
@@ -205,7 +208,7 @@ mod test {
#[test]
fn test_blockstream() -> () {
let blockstream = MockBlockstream::new("test_stream".to_string());
let blockstream = MockBlockstream::new(&PathBuf::from("test_stream"));
let ticks_per_slot = 5;
let mut blockhash = Hash::default();

View File

@@ -11,6 +11,7 @@ use crate::blocktree::Blocktree;
use crate::result::{Error, Result};
use crate::service::Service;
use solana_sdk::pubkey::Pubkey;
use std::path::Path;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::sync::Arc;
@@ -26,10 +27,10 @@ impl BlockstreamService {
pub fn new(
slot_full_receiver: Receiver<(u64, Pubkey)>,
blocktree: Arc<Blocktree>,
blockstream_socket: String,
unix_socket: &Path,
exit: &Arc<AtomicBool>,
) -> Self {
let mut blockstream = Blockstream::new(blockstream_socket);
let mut blockstream = Blockstream::new(unix_socket);
let exit = exit.clone();
let t_blockstream = Builder::new()
.name("solana-blockstream".to_string())
@@ -116,6 +117,7 @@ mod test {
use solana_sdk::hash::Hash;
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_transaction;
use std::path::PathBuf;
use std::sync::mpsc::channel;
#[test]
@@ -133,7 +135,7 @@ mod test {
let blocktree = Blocktree::open(&ledger_path).unwrap();
// Set up blockstream
let mut blockstream = Blockstream::new("test_stream".to_string());
let mut blockstream = Blockstream::new(&PathBuf::from("test_stream"));
// Set up dummy channel to receive a full-slot notification
let (slot_full_sender, slot_full_receiver) = channel();

View File

@@ -27,6 +27,7 @@ use std::cell::RefCell;
use std::cmp;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};
use std::sync::{Arc, RwLock};
@@ -113,14 +114,12 @@ pub const INDEX_CF: &str = "index";
impl Blocktree {
/// Opens a Ledger in directory, provides "infinite" window of blobs
pub fn open(ledger_path: &str) -> Result<Blocktree> {
use std::path::Path;
pub fn open(ledger_path: &Path) -> Result<Blocktree> {
fs::create_dir_all(&ledger_path)?;
let ledger_path = Path::new(&ledger_path).join(BLOCKTREE_DIRECTORY);
let blocktree_path = ledger_path.join(BLOCKTREE_DIRECTORY);
// Open the database
let db = Database::open(&ledger_path)?;
let db = Database::open(&blocktree_path)?;
let batch_processor = unsafe { Arc::new(RwLock::new(db.batch_processor())) };
@@ -162,7 +161,7 @@ impl Blocktree {
}
pub fn open_with_signal(
ledger_path: &str,
ledger_path: &Path,
) -> Result<(Self, Receiver<bool>, CompletedSlotsReceiver)> {
let mut blocktree = Self::open(ledger_path)?;
let (signal_sender, signal_receiver) = sync_channel(1);
@@ -174,11 +173,11 @@ impl Blocktree {
Ok((blocktree, signal_receiver, completed_slots_receiver))
}
pub fn destroy(ledger_path: &str) -> Result<()> {
// Database::destroy() fails is the path doesn't exist
pub fn destroy(ledger_path: &Path) -> Result<()> {
// Database::destroy() fails if the path doesn't exist
fs::create_dir_all(ledger_path)?;
let path = std::path::Path::new(ledger_path).join(BLOCKTREE_DIRECTORY);
Database::destroy(&path)
let blocktree_path = ledger_path.join(BLOCKTREE_DIRECTORY);
Database::destroy(&blocktree_path)
}
pub fn meta(&self, slot: u64) -> Result<Option<SlotMeta>> {
@@ -1958,7 +1957,7 @@ fn slot_has_updates(slot_meta: &SlotMeta, slot_meta_backup: &Option<SlotMeta>) -
// Creates a new ledger with slot 0 full of ticks (and only ticks).
//
// Returns the blockhash that can be used to append entries with.
pub fn create_new_ledger(ledger_path: &str, genesis_block: &GenesisBlock) -> Result<Hash> {
pub fn create_new_ledger(ledger_path: &Path, genesis_block: &GenesisBlock) -> Result<Hash> {
let ticks_per_slot = genesis_block.ticks_per_slot;
Blocktree::destroy(ledger_path)?;
genesis_block.write(&ledger_path)?;
@@ -1971,7 +1970,7 @@ pub fn create_new_ledger(ledger_path: &str, genesis_block: &GenesisBlock) -> Res
Ok(entries.last().unwrap().hash)
}
pub fn genesis<'a, I>(ledger_path: &str, keypair: &Keypair, entries: I) -> Result<()>
pub fn genesis<'a, I>(ledger_path: &Path, keypair: &Keypair, entries: I) -> Result<()>
where
I: IntoIterator<Item = &'a Entry>,
{
@@ -2008,12 +2007,18 @@ macro_rules! get_tmp_ledger_path {
};
}
pub fn get_tmp_ledger_path(name: &str) -> String {
pub fn get_tmp_ledger_path(name: &str) -> PathBuf {
use std::env;
let out_dir = env::var("FARF_DIR").unwrap_or_else(|_| "farf".to_string());
let keypair = Keypair::new();
let path = format!("{}/ledger/{}-{}", out_dir, name, keypair.pubkey());
let path = [
out_dir,
"ledger".to_string(),
format!("{}-{}", name, keypair.pubkey()),
]
.iter()
.collect();
// whack any possible collision
let _ignored = fs::remove_dir_all(&path);
@@ -2032,7 +2037,7 @@ macro_rules! create_new_tmp_ledger {
//
// Note: like `create_new_ledger` the returned ledger will have slot 0 full of ticks (and only
// ticks)
pub fn create_new_tmp_ledger(name: &str, genesis_block: &GenesisBlock) -> (String, Hash) {
pub fn create_new_tmp_ledger(name: &str, genesis_block: &GenesisBlock) -> (PathBuf, Hash) {
let ledger_path = get_tmp_ledger_path(name);
let blockhash = create_new_ledger(&ledger_path, genesis_block).unwrap();
(ledger_path, blockhash)
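// The iterator-collect in get_tmp_ledger_path above relies on PathBuf's FromIterator
// impl; a minimal standalone sketch of the same construction (hypothetical names,
// Unix-style separator shown):
//
//     use std::path::PathBuf;
//     let path: PathBuf = ["farf", "ledger", "my-ledger-1"].iter().collect();
//     assert_eq!(path, PathBuf::from("farf/ledger/my-ledger-1"));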
@@ -2045,7 +2050,7 @@ macro_rules! tmp_copy_blocktree {
};
}
pub fn tmp_copy_blocktree(from: &str, name: &str) -> String {
pub fn tmp_copy_blocktree(from: &Path, name: &str) -> PathBuf {
let path = get_tmp_ledger_path(name);
let blocktree = Blocktree::open(from).unwrap();

View File

@@ -241,6 +241,7 @@ mod test {
use solana_sdk::hash::Hash;
use solana_sdk::pubkey::Pubkey;
use solana_sdk::signature::{Keypair, KeypairUtil};
use std::path::Path;
use std::sync::atomic::AtomicBool;
use std::sync::mpsc::channel;
use std::sync::{Arc, RwLock};
@@ -255,7 +256,7 @@ mod test {
fn setup_dummy_broadcast_service(
leader_pubkey: &Pubkey,
ledger_path: &str,
ledger_path: &Path,
entry_receiver: Receiver<WorkingBankEntries>,
) -> MockBroadcastStage {
// Make the database ledger

View File

@@ -32,9 +32,7 @@ use rand::SeedableRng;
use rand::{thread_rng, Rng};
use rand_chacha::ChaChaRng;
use rayon::prelude::*;
use solana_metrics::{
datapoint_debug, inc_new_counter_debug, inc_new_counter_error, inc_new_counter_warn,
};
use solana_metrics::{datapoint_debug, inc_new_counter_debug, inc_new_counter_error};
use solana_netutil::{
bind_in_range, bind_to, find_available_port_in_range, multi_bind_in_range, PortRange,
};
@@ -65,6 +63,9 @@ pub const GOSSIP_SLEEP_MILLIS: u64 = 100;
/// the number of slots to respond with when responding to `Orphan` requests
pub const MAX_ORPHAN_REPAIR_RESPONSES: usize = 10;
/// Allow protocol messages to carry only 1KB of data a time
const TARGET_PROTOCOL_PAYLOAD_SIZE: u64 = 1024;
#[derive(Debug, PartialEq, Eq)]
pub enum ClusterInfoError {
NoPeers,
@@ -719,9 +720,10 @@ impl ClusterInfo {
last_err?;
inc_new_counter_debug!("cluster_info-broadcast-max_idx", blobs_len);
if broadcast_table_len != 0 {
inc_new_counter_warn!("broadcast_service-num_peers", broadcast_table_len + 1);
}
datapoint_info!(
"cluster_info-num_nodes",
("count", broadcast_table_len + 1, i64)
);
Ok(())
}
@@ -850,6 +852,29 @@ impl ClusterInfo {
}
}
/// Splits a Vec of CrdsValues into a nested Vec, trying to make sure that
/// each Vec is no larger than `TARGET_PROTOCOL_PAYLOAD_SIZE`
/// Note: some messages cannot be contained within that size so in the worst case this returns
/// N nested Vecs with 1 item each.
fn split_gossip_messages(mut msgs: Vec<CrdsValue>) -> Vec<Vec<CrdsValue>> {
let mut messages = vec![];
while !msgs.is_empty() {
let mut size = 0;
let mut payload = vec![];
while let Some(msg) = msgs.pop() {
// always put at least one msg. TARGET_PROTOCOL_PAYLOAD_SIZE is not a hard limit
let msg_size = msg.size();
size += msg_size;
payload.push(msg);
if size > TARGET_PROTOCOL_PAYLOAD_SIZE {
break;
}
}
messages.push(payload);
}
messages
}
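// A worked example of the grouping above, assuming 200-byte CrdsValues and
// TARGET_PROTOCOL_PAYLOAD_SIZE = 1024: each payload accumulates values until its running
// size first exceeds 1024 bytes, i.e. 6 values (1200 bytes), so 30 values split into 5
// payloads of 6. A single value larger than 1024 bytes still gets a payload of its own,
// since at least one value is always taken before the size check.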
fn new_pull_requests(&mut self, stakes: &HashMap<Pubkey, u64>) -> Vec<(SocketAddr, Protocol)> {
let now = timestamp();
let pulls: Vec<_> = self
@@ -893,7 +918,12 @@ impl ClusterInfo {
.and_then(CrdsValue::contact_info)
.map(|p| (p.gossip, messages))
})
.map(|(peer, msgs)| (peer, Protocol::PushMessage(self_id, msgs)))
.map(|(peer, msgs)| {
Self::split_gossip_messages(msgs)
.into_iter()
.map(move |payload| (peer, Protocol::PushMessage(self_id, payload)))
})
.flatten()
.collect()
}
@@ -1089,11 +1119,18 @@ impl ClusterInfo {
.process_pull_request(caller, filter, now);
let len = data.len();
trace!("get updates since response {}", len);
let rsp = Protocol::PullResponse(self_id, data);
let responses: Vec<_> = Self::split_gossip_messages(data)
.into_iter()
.map(move |payload| Protocol::PullResponse(self_id, payload))
.collect();
// The remote node may not know its public IP:PORT. Instead of responding to the caller's
// gossip addr, respond to the origin addr.
inc_new_counter_debug!("cluster_info-pull_request-rsp", len);
to_shared_blob(rsp, *from_addr).ok().into_iter().collect()
responses
.into_iter()
.map(|rsp| to_shared_blob(rsp, *from_addr).ok().into_iter())
.flatten()
.collect()
}
fn handle_pull_response(me: &Arc<RwLock<Self>>, from: &Pubkey, data: Vec<CrdsValue>) {
@@ -1464,7 +1501,7 @@ pub struct Sockets {
pub gossip: UdpSocket,
pub tvu: Vec<UdpSocket>,
pub tpu: Vec<UdpSocket>,
pub tpu_via_blobs: Vec<UdpSocket>,
pub tpu_forwards: Vec<UdpSocket>,
pub broadcast: UdpSocket,
pub repair: UdpSocket,
pub retransmit: UdpSocket,
@@ -1509,7 +1546,7 @@ impl Node {
gossip,
tvu: vec![tvu],
tpu: vec![],
tpu_via_blobs: vec![],
tpu_forwards: vec![],
broadcast,
repair,
retransmit,
@@ -1521,7 +1558,7 @@ impl Node {
let tpu = UdpSocket::bind("127.0.0.1:0").unwrap();
let gossip = UdpSocket::bind("127.0.0.1:0").unwrap();
let tvu = UdpSocket::bind("127.0.0.1:0").unwrap();
let tpu_via_blobs = UdpSocket::bind("127.0.0.1:0").unwrap();
let tpu_forwards = UdpSocket::bind("127.0.0.1:0").unwrap();
let repair = UdpSocket::bind("127.0.0.1:0").unwrap();
let rpc_port = find_available_port_in_range((1024, 65535)).unwrap();
let rpc_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), rpc_port);
@@ -1537,7 +1574,7 @@ impl Node {
gossip.local_addr().unwrap(),
tvu.local_addr().unwrap(),
tpu.local_addr().unwrap(),
tpu_via_blobs.local_addr().unwrap(),
tpu_forwards.local_addr().unwrap(),
storage.local_addr().unwrap(),
rpc_addr,
rpc_pubsub_addr,
@@ -1549,7 +1586,7 @@ impl Node {
gossip,
tvu: vec![tvu],
tpu: vec![tpu],
tpu_via_blobs: vec![tpu_via_blobs],
tpu_forwards: vec![tpu_forwards],
broadcast,
repair,
retransmit,
@@ -1583,7 +1620,7 @@ impl Node {
let (tpu_port, tpu_sockets) = multi_bind_in_range(port_range, 32).expect("tpu multi_bind");
let (tpu_via_blobs_port, tpu_via_blobs_sockets) =
let (tpu_forwards_port, tpu_forwards_sockets) =
multi_bind_in_range(port_range, 8).expect("tpu multi_bind");
let (_, repair) = Self::bind(port_range);
@@ -1595,7 +1632,7 @@ impl Node {
SocketAddr::new(gossip_addr.ip(), gossip_port),
SocketAddr::new(gossip_addr.ip(), tvu_port),
SocketAddr::new(gossip_addr.ip(), tpu_port),
SocketAddr::new(gossip_addr.ip(), tpu_via_blobs_port),
SocketAddr::new(gossip_addr.ip(), tpu_forwards_port),
socketaddr_any!(),
socketaddr_any!(),
socketaddr_any!(),
@@ -1609,7 +1646,7 @@ impl Node {
gossip,
tvu: tvu_sockets,
tpu: tpu_sockets,
tpu_via_blobs: tpu_via_blobs_sockets,
tpu_forwards: tpu_forwards_sockets,
broadcast,
repair,
retransmit,
@@ -1630,9 +1667,9 @@ impl Node {
let empty = socketaddr_any!();
new.info.tpu = empty;
new.info.tpu_via_blobs = empty;
new.info.tpu_forwards = empty;
new.sockets.tpu = vec![];
new.sockets.tpu_via_blobs = vec![];
new.sockets.tpu_forwards = vec![];
new
}
@@ -2202,45 +2239,79 @@ mod tests {
assert_eq!(votes, vec![]);
assert_eq!(max_ts, new_max_ts);
}
}
#[test]
fn test_add_entrypoint() {
let node_keypair = Arc::new(Keypair::new());
let mut cluster_info = ClusterInfo::new(
ContactInfo::new_localhost(&node_keypair.pubkey(), timestamp()),
node_keypair,
);
let entrypoint_pubkey = Pubkey::new_rand();
let entrypoint = ContactInfo::new_localhost(&entrypoint_pubkey, timestamp());
cluster_info.set_entrypoint(entrypoint.clone());
let pulls = cluster_info.new_pull_requests(&HashMap::new());
assert_eq!(1, pulls.len());
match pulls.get(0) {
Some((addr, msg)) => {
assert_eq!(*addr, entrypoint.gossip);
match msg {
Protocol::PullRequest(_, value) => {
assert!(value.verify());
assert_eq!(value.pubkey(), cluster_info.id())
#[test]
fn test_add_entrypoint() {
let node_keypair = Arc::new(Keypair::new());
let mut cluster_info = ClusterInfo::new(
ContactInfo::new_localhost(&node_keypair.pubkey(), timestamp()),
node_keypair,
);
let entrypoint_pubkey = Pubkey::new_rand();
let entrypoint = ContactInfo::new_localhost(&entrypoint_pubkey, timestamp());
cluster_info.set_entrypoint(entrypoint.clone());
let pulls = cluster_info.new_pull_requests(&HashMap::new());
assert_eq!(1, pulls.len());
match pulls.get(0) {
Some((addr, msg)) => {
assert_eq!(*addr, entrypoint.gossip);
match msg {
Protocol::PullRequest(_, value) => {
assert!(value.verify());
assert_eq!(value.pubkey(), cluster_info.id())
}
_ => panic!("wrong protocol"),
}
_ => panic!("wrong protocol"),
}
None => panic!("entrypoint should be a pull destination"),
}
None => panic!("entrypoint should be a pull destination"),
// now add this message back to the table and make sure after the next pull, the entrypoint is unset
let entrypoint_crdsvalue = CrdsValue::ContactInfo(entrypoint.clone());
let cluster_info = Arc::new(RwLock::new(cluster_info));
ClusterInfo::handle_pull_response(
&cluster_info,
&entrypoint_pubkey,
vec![entrypoint_crdsvalue],
);
let pulls = cluster_info
.write()
.unwrap()
.new_pull_requests(&HashMap::new());
assert_eq!(1, pulls.len());
assert_eq!(cluster_info.read().unwrap().entrypoint, Some(entrypoint));
}
#[test]
fn test_split_messages_small() {
let value = CrdsValue::ContactInfo(ContactInfo::default());
test_split_messages(value);
}
#[test]
fn test_split_messages_large() {
let mut btree_slots = BTreeSet::new();
for i in 0..128 {
btree_slots.insert(i);
}
let value = CrdsValue::EpochSlots(EpochSlots {
from: Pubkey::default(),
root: 0,
slots: btree_slots,
signature: Signature::default(),
wallclock: 0,
});
test_split_messages(value);
}
fn test_split_messages(value: CrdsValue) {
const NUM_VALUES: usize = 30;
let value_size = value.size();
let expected_len = NUM_VALUES / (TARGET_PROTOCOL_PAYLOAD_SIZE / value_size).max(1) as usize;
let msgs = vec![value; NUM_VALUES];
let split = ClusterInfo::split_gossip_messages(msgs);
assert!(split.len() <= expected_len);
}
// now add this message back to the table and make sure after the next pull, the entrypoint is unset
let entrypoint_crdsvalue = CrdsValue::ContactInfo(entrypoint.clone());
let cluster_info = Arc::new(RwLock::new(cluster_info));
ClusterInfo::handle_pull_response(
&cluster_info,
&entrypoint_pubkey,
vec![entrypoint_crdsvalue],
);
let pulls = cluster_info
.write()
.unwrap()
.new_pull_requests(&HashMap::new());
assert_eq!(1, pulls.len());
assert_eq!(cluster_info.read().unwrap().entrypoint, Some(entrypoint));
}

View File

@@ -22,6 +22,7 @@ use solana_sdk::timing::{
NUM_CONSECUTIVE_LEADER_SLOTS,
};
use solana_sdk::transport::TransportError;
use std::path::Path;
use std::thread::sleep;
use std::time::Duration;
@@ -94,7 +95,7 @@ pub fn fullnode_exit(entry_point_info: &ContactInfo, nodes: usize) {
}
}
pub fn verify_ledger_ticks(ledger_path: &str, ticks_per_slot: usize) {
pub fn verify_ledger_ticks(ledger_path: &Path, ticks_per_slot: usize) {
let ledger = Blocktree::open(ledger_path).unwrap();
let zeroth_slot = ledger.get_slot_entries(0, 0, None).unwrap();
let last_id = zeroth_slot.last().unwrap().hash;

View File

@@ -23,7 +23,7 @@ pub struct ContactInfo {
/// transactions address
pub tpu: SocketAddr,
/// address to forward unprocessed transactions to
pub tpu_via_blobs: SocketAddr,
pub tpu_forwards: SocketAddr,
/// storage data address
pub storage_addr: SocketAddr,
/// address to which to send JSON-RPC requests
@@ -78,7 +78,7 @@ impl Default for ContactInfo {
gossip: socketaddr_any!(),
tvu: socketaddr_any!(),
tpu: socketaddr_any!(),
tpu_via_blobs: socketaddr_any!(),
tpu_forwards: socketaddr_any!(),
storage_addr: socketaddr_any!(),
rpc: socketaddr_any!(),
rpc_pubsub: socketaddr_any!(),
@@ -94,7 +94,7 @@ impl ContactInfo {
gossip: SocketAddr,
tvu: SocketAddr,
tpu: SocketAddr,
tpu_via_blobs: SocketAddr,
tpu_forwards: SocketAddr,
storage_addr: SocketAddr,
rpc: SocketAddr,
rpc_pubsub: SocketAddr,
@@ -106,7 +106,7 @@ impl ContactInfo {
gossip,
tvu,
tpu,
tpu_via_blobs,
tpu_forwards,
storage_addr,
rpc,
rpc_pubsub,
@@ -157,7 +157,7 @@ impl ContactInfo {
let tpu_addr = *bind_addr;
let gossip_addr = next_port(&bind_addr, 1);
let tvu_addr = next_port(&bind_addr, 2);
let tpu_via_blobs_addr = next_port(&bind_addr, 3);
let tpu_forwards_addr = next_port(&bind_addr, 3);
let rpc_addr = SocketAddr::new(bind_addr.ip(), rpc_port::DEFAULT_RPC_PORT);
let rpc_pubsub_addr = SocketAddr::new(bind_addr.ip(), rpc_port::DEFAULT_RPC_PUBSUB_PORT);
Self::new(
@@ -165,7 +165,7 @@ impl ContactInfo {
gossip_addr,
tvu_addr,
tpu_addr,
tpu_via_blobs_addr,
tpu_forwards_addr,
"0.0.0.0:0".parse().unwrap(),
rpc_addr,
rpc_pubsub_addr,
@@ -233,7 +233,7 @@ impl Signable for ContactInfo {
gossip: SocketAddr,
tvu: SocketAddr,
tpu: SocketAddr,
tpu_via_blobs: SocketAddr,
tpu_forwards: SocketAddr,
storage_addr: SocketAddr,
rpc: SocketAddr,
rpc_pubsub: SocketAddr,
@@ -247,7 +247,7 @@ impl Signable for ContactInfo {
tvu: me.tvu,
tpu: me.tpu,
storage_addr: me.storage_addr,
tpu_via_blobs: me.tpu_via_blobs,
tpu_forwards: me.tpu_forwards,
rpc: me.rpc,
rpc_pubsub: me.rpc_pubsub,
wallclock: me.wallclock,
@@ -287,7 +287,7 @@ mod tests {
let ci = ContactInfo::default();
assert!(ci.gossip.ip().is_unspecified());
assert!(ci.tvu.ip().is_unspecified());
assert!(ci.tpu_via_blobs.ip().is_unspecified());
assert!(ci.tpu_forwards.ip().is_unspecified());
assert!(ci.rpc.ip().is_unspecified());
assert!(ci.rpc_pubsub.ip().is_unspecified());
assert!(ci.tpu.ip().is_unspecified());
@@ -298,7 +298,7 @@ mod tests {
let ci = ContactInfo::new_multicast();
assert!(ci.gossip.ip().is_multicast());
assert!(ci.tvu.ip().is_multicast());
assert!(ci.tpu_via_blobs.ip().is_multicast());
assert!(ci.tpu_forwards.ip().is_multicast());
assert!(ci.rpc.ip().is_multicast());
assert!(ci.rpc_pubsub.ip().is_multicast());
assert!(ci.tpu.ip().is_multicast());
@@ -310,7 +310,7 @@ mod tests {
let ci = ContactInfo::new_gossip_entry_point(&addr);
assert_eq!(ci.gossip, addr);
assert!(ci.tvu.ip().is_unspecified());
assert!(ci.tpu_via_blobs.ip().is_unspecified());
assert!(ci.tpu_forwards.ip().is_unspecified());
assert!(ci.rpc.ip().is_unspecified());
assert!(ci.rpc_pubsub.ip().is_unspecified());
assert!(ci.tpu.ip().is_unspecified());
@@ -323,7 +323,7 @@ mod tests {
assert_eq!(ci.tpu, addr);
assert_eq!(ci.gossip.port(), 11);
assert_eq!(ci.tvu.port(), 12);
assert_eq!(ci.tpu_via_blobs.port(), 13);
assert_eq!(ci.tpu_forwards.port(), 13);
assert_eq!(ci.rpc.port(), 8899);
assert_eq!(ci.rpc_pubsub.port(), 8900);
assert!(ci.storage_addr.ip().is_unspecified());
@@ -338,7 +338,7 @@ mod tests {
assert_eq!(d1.id, keypair.pubkey());
assert_eq!(d1.gossip, socketaddr!("127.0.0.1:1235"));
assert_eq!(d1.tvu, socketaddr!("127.0.0.1:1236"));
assert_eq!(d1.tpu_via_blobs, socketaddr!("127.0.0.1:1237"));
assert_eq!(d1.tpu_forwards, socketaddr!("127.0.0.1:1237"));
assert_eq!(d1.tpu, socketaddr!("127.0.0.1:1234"));
assert_eq!(d1.rpc, socketaddr!("127.0.0.1:8899"));
assert_eq!(d1.rpc_pubsub, socketaddr!("127.0.0.1:8900"));

View File

@@ -1,5 +1,5 @@
use crate::contact_info::ContactInfo;
use bincode::serialize;
use bincode::{serialize, serialized_size};
use solana_sdk::pubkey::Pubkey;
use solana_sdk::signature::{Keypair, Signable, Signature};
use solana_sdk::transaction::Transaction;
@@ -189,6 +189,11 @@ impl CrdsValue {
CrdsValueLabel::EpochSlots(*key),
]
}
/// Returns the size (in bytes) of a CrdsValue
pub fn size(&self) -> u64 {
serialized_size(&self).expect("unable to serialize contact info")
}
}
impl Signable for CrdsValue {

View File

@@ -334,6 +334,7 @@ pub mod test {
use solana_sdk::signature::Signable;
use solana_sdk::signature::{Keypair, KeypairUtil};
use std::borrow::Borrow;
use std::path::Path;
/// Specifies the contents of a 16-data-blob and 4-coding-blob erasure set
/// Exists to be passed to `generate_blocktree_with_coding`
@@ -748,7 +749,7 @@ pub mod test {
/// Generates a ledger according to the given specs.
/// Blocktree should have correct SlotMeta and ErasureMeta and so on but will not have done any
/// possible recovery.
pub fn generate_blocktree_with_coding(ledger_path: &str, specs: &[SlotSpec]) -> Blocktree {
pub fn generate_blocktree_with_coding(ledger_path: &Path, specs: &[SlotSpec]) -> Blocktree {
let blocktree = Blocktree::open(ledger_path).unwrap();
let model = generate_ledger_model(specs);

View File

@@ -22,28 +22,28 @@ impl FetchStage {
#[allow(clippy::new_ret_no_self)]
pub fn new(
sockets: Vec<UdpSocket>,
tpu_via_blobs_sockets: Vec<UdpSocket>,
tpu_forwards_sockets: Vec<UdpSocket>,
exit: &Arc<AtomicBool>,
poh_recorder: &Arc<Mutex<PohRecorder>>,
) -> (Self, PacketReceiver) {
let (sender, receiver) = channel();
(
Self::new_with_sender(sockets, tpu_via_blobs_sockets, exit, &sender, &poh_recorder),
Self::new_with_sender(sockets, tpu_forwards_sockets, exit, &sender, &poh_recorder),
receiver,
)
}
pub fn new_with_sender(
sockets: Vec<UdpSocket>,
tpu_via_blobs_sockets: Vec<UdpSocket>,
tpu_forwards_sockets: Vec<UdpSocket>,
exit: &Arc<AtomicBool>,
sender: &PacketSender,
poh_recorder: &Arc<Mutex<PohRecorder>>,
) -> Self {
let tx_sockets = sockets.into_iter().map(Arc::new).collect();
let tpu_via_blobs_sockets = tpu_via_blobs_sockets.into_iter().map(Arc::new).collect();
let tpu_forwards_sockets = tpu_forwards_sockets.into_iter().map(Arc::new).collect();
Self::new_multi_socket(
tx_sockets,
tpu_via_blobs_sockets,
tpu_forwards_sockets,
exit,
&sender,
&poh_recorder,
@@ -83,7 +83,7 @@ impl FetchStage {
fn new_multi_socket(
sockets: Vec<Arc<UdpSocket>>,
tpu_via_blobs_sockets: Vec<Arc<UdpSocket>>,
tpu_forwards_sockets: Vec<Arc<UdpSocket>>,
exit: &Arc<AtomicBool>,
sender: &PacketSender,
poh_recorder: &Arc<Mutex<PohRecorder>>,
@@ -100,9 +100,15 @@ impl FetchStage {
});
let (forward_sender, forward_receiver) = channel();
let tpu_via_blobs_threads = tpu_via_blobs_sockets
.into_iter()
.map(|socket| streamer::blob_packet_receiver(socket, &exit, forward_sender.clone()));
let tpu_forwards_threads = tpu_forwards_sockets.into_iter().map(|socket| {
streamer::receiver(
socket,
&exit,
forward_sender.clone(),
recycler.clone(),
"fetch_forward_stage",
)
});
let sender = sender.clone();
let poh_recorder = poh_recorder.clone();
@@ -124,7 +130,7 @@ impl FetchStage {
})
.unwrap();
let mut thread_hdls: Vec<_> = tpu_threads.chain(tpu_via_blobs_threads).collect();
let mut thread_hdls: Vec<_> = tpu_threads.chain(tpu_forwards_threads).collect();
thread_hdls.push(fwd_thread_hdl);
Self { thread_hdls }
}

View File

@@ -78,6 +78,15 @@ impl LeaderScheduleCache {
let (mut epoch, mut start_index) = bank.get_epoch_and_slot_index(current_slot + 1);
let mut first_slot = None;
let mut last_slot = current_slot;
let max_epoch = *self.max_epoch.read().unwrap();
if epoch > max_epoch {
debug!(
"Requested next leader in slot: {} of unconfirmed epoch: {}",
current_slot + 1,
epoch
);
return None;
}
while let Some(leader_schedule) = self.get_epoch_schedule_else_compute(epoch, bank) {
// clippy thinks I should do this:
// for (i, <item>) in leader_schedule
@@ -110,6 +119,9 @@ impl LeaderScheduleCache {
}
epoch += 1;
if epoch > max_epoch {
break;
}
start_index = 0;
}
first_slot.and_then(|slot| Some((slot, last_slot)))
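// Behavioral note: with the max_epoch guard above, next_leader_slot() only reports leader
// slots for epochs at or below max_epoch (advanced via set_root(), per the test below);
// queries that reach past that boundary return None rather than computing a speculative
// schedule for an unconfirmed epoch.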
@@ -480,6 +492,12 @@ mod tests {
}
expected_slot += index;
// If the max root isn't set, we'll get None
assert!(cache
.next_leader_slot(&node_pubkey, 0, &bank, None)
.is_none());
cache.set_root(&bank);
assert_eq!(
cache
.next_leader_slot(&node_pubkey, 0, &bank, None)

View File

@@ -61,8 +61,11 @@ pub mod rpc_pubsub_service;
pub mod rpc_service;
pub mod rpc_subscriptions;
pub mod service;
pub mod shred;
pub mod sigverify;
pub mod sigverify_stage;
pub mod snapshot_package;
pub mod snapshot_utils;
pub mod staking_utils;
pub mod storage_stage;
pub mod streamer;
@@ -101,3 +104,8 @@ extern crate solana_metrics;
extern crate matches;
extern crate crossbeam_channel;
extern crate dir_diff;
extern crate flate2;
extern crate fs_extra;
extern crate tar;
extern crate tempfile;

View File

@@ -27,23 +27,24 @@ use solana_vote_api::vote_state::VoteState;
use std::collections::HashMap;
use std::fs::remove_dir_all;
use std::io::{Error, ErrorKind, Result};
use std::path::PathBuf;
use std::sync::Arc;
pub struct ValidatorInfo {
pub keypair: Arc<Keypair>,
pub voting_keypair: Arc<Keypair>,
pub storage_keypair: Arc<Keypair>,
pub ledger_path: String,
pub ledger_path: PathBuf,
pub contact_info: ContactInfo,
}
pub struct ReplicatorInfo {
pub replicator_storage_pubkey: Pubkey,
pub ledger_path: String,
pub ledger_path: PathBuf,
}
impl ReplicatorInfo {
fn new(storage_pubkey: Pubkey, ledger_path: String) -> Self {
fn new(storage_pubkey: Pubkey, ledger_path: PathBuf) -> Self {
Self {
replicator_storage_pubkey: storage_pubkey,
ledger_path,
@@ -381,7 +382,7 @@ impl LocalCluster {
.chain(self.replicator_infos.values().map(|info| &info.ledger_path))
{
remove_dir_all(&ledger_path)
.unwrap_or_else(|_| panic!("Unable to remove {}", ledger_path));
.unwrap_or_else(|_| panic!("Unable to remove {:?}", ledger_path));
}
}

View File

@@ -12,13 +12,11 @@ pub use solana_sdk::packet::PACKET_DATA_SIZE;
use solana_sdk::pubkey::Pubkey;
use solana_sdk::signature::Signable;
use solana_sdk::signature::Signature;
use std::borrow::Borrow;
use std::borrow::Cow;
use std::cmp;
use std::fmt;
use std::io;
use std::io::Cursor;
use std::io::Write;
use std::mem;
use std::mem::size_of;
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr, UdpSocket};
@@ -30,11 +28,11 @@ pub type SharedBlob = Arc<RwLock<Blob>>;
pub type SharedBlobs = Vec<SharedBlob>;
pub const NUM_PACKETS: usize = 1024 * 8;
pub const BLOB_SIZE: usize = (64 * 1024 - 128); // wikipedia says there should be 20b for ipv4 headers
pub const BLOB_SIZE: usize = (2 * 1024 - 128); // wikipedia says there should be 20b for ipv4 headers
pub const BLOB_DATA_SIZE: usize = BLOB_SIZE - (BLOB_HEADER_SIZE * 2);
pub const BLOB_DATA_ALIGN: usize = 16; // safe for erasure input pointers, gf.c needs 16byte-aligned buffers
pub const NUM_BLOBS: usize = (NUM_PACKETS * PACKET_DATA_SIZE) / BLOB_SIZE;
pub const PACKETS_PER_BLOB: usize = 256; // reasonable estimate for payment packets per blob based on ~200b transaction size
pub const PACKETS_PER_BLOB: usize = 8; // reasonable estimate for payment packets per blob based on ~200b transaction size
#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
#[repr(C)]
@@ -293,7 +291,7 @@ impl Packets {
total_size += size;
// Try to batch into blob-sized buffers
// will cause less re-shuffling later on.
if start.elapsed().as_millis() > 1 || total_size >= (BLOB_DATA_SIZE - 4096) {
if start.elapsed().as_millis() > 1 || total_size >= (BLOB_DATA_SIZE - 512) {
break;
}
}
@@ -365,18 +363,6 @@ pub fn to_shared_blobs<T: Serialize>(rsps: Vec<(T, SocketAddr)>) -> Result<Share
Ok(blobs)
}
pub fn packets_to_blobs<T: Borrow<Packet>>(packets: &[T]) -> Vec<Blob> {
let mut current_index = 0;
let mut blobs = vec![];
while current_index < packets.len() {
let mut blob = Blob::default();
current_index += blob.store_packets(&packets[current_index..]) as usize;
blobs.push(blob);
}
blobs
}
macro_rules! range {
($prev:expr, $type:ident) => {
$prev..$prev + size_of::<$type>()
@@ -558,52 +544,6 @@ impl Blob {
&self.data[SIGNATURE_RANGE]
}
pub fn store_packets<T: Borrow<Packet>>(&mut self, packets: &[T]) -> u64 {
let size = self.size();
let mut cursor = Cursor::new(&mut self.data_mut()[size..]);
let mut written = 0;
let mut last_index = 0;
for packet in packets {
if bincode::serialize_into(&mut cursor, &packet.borrow().meta.size).is_err() {
break;
}
let packet = packet.borrow();
if cursor.write_all(&packet.data[..packet.meta.size]).is_err() {
break;
}
written = cursor.position() as usize;
last_index += 1;
}
self.set_size(size + written);
last_index
}
// other side of store_packets
pub fn load_packets(&self, packets: &mut PinnedVec<Packet>) {
// rough estimate
let mut pos = 0;
let size_len = bincode::serialized_size(&0usize).unwrap() as usize;
while pos + size_len < self.size() {
let size: usize = bincode::deserialize_from(&self.data()[pos..]).unwrap();
pos += size_len;
if size > PACKET_DATA_SIZE || pos + size > self.size() {
break;
}
let mut packet = Packet::default();
packet.meta.size = size;
packet.data[..size].copy_from_slice(&self.data()[pos..pos + size]);
pos += size;
packets.push(packet);
}
}
pub fn recv_blob(socket: &UdpSocket, r: &SharedBlob) -> io::Result<()> {
let mut p = r.write().unwrap();
trace!("receiving on {}", socket.local_addr().unwrap());
@@ -701,8 +641,6 @@ pub fn index_blobs(blobs: &[SharedBlob], id: &Pubkey, mut blob_index: u64, slot:
#[cfg(test)]
mod tests {
use super::*;
use bincode;
use rand::Rng;
use solana_sdk::hash::Hash;
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::system_transaction;
@@ -834,62 +772,6 @@ mod tests {
assert_eq!(config, b.erasure_config());
}
#[test]
fn test_store_blobs_max() {
let serialized_size_size = bincode::serialized_size(&0usize).unwrap() as usize;
let serialized_packet_size = serialized_size_size + PACKET_DATA_SIZE;
let num_packets = (BLOB_SIZE - BLOB_HEADER_SIZE) / serialized_packet_size + 1;
let mut blob = Blob::default();
let packets: Vec<_> = (0..num_packets)
.map(|_| {
let mut packet = Packet::default();
packet.meta.size = PACKET_DATA_SIZE;
packet
})
.collect();
// Everything except the last packet should have been written
assert_eq!(blob.store_packets(&packets[..]), (num_packets - 1) as u64);
blob = Blob::default();
// Store packets such that blob only has room for one more
assert_eq!(
blob.store_packets(&packets[..num_packets - 2]),
(num_packets - 2) as u64
);
// Fill the last packet in the blob
assert_eq!(blob.store_packets(&packets[..num_packets - 2]), 1);
// Blob is now full
assert_eq!(blob.store_packets(&packets), 0);
}
#[test]
fn test_packets_to_blobs() {
let mut rng = rand::thread_rng();
let packets: Vec<_> = (0..2)
.map(|_| {
let mut packet = Packet::default();
packet.meta.size = rng.gen_range(1, PACKET_DATA_SIZE);
for i in 0..packet.meta.size {
packet.data[i] = rng.gen_range(1, std::u8::MAX);
}
packet
})
.collect();
let blobs = packets_to_blobs(&packets[..]);
let mut reconstructed_packets = PinnedVec::default();
blobs
.iter()
.for_each(|b| b.load_packets(&mut reconstructed_packets));
assert_eq!(reconstructed_packets[..], packets[..]);
}
#[test]
fn test_blob_data_align() {
assert_eq!(std::mem::align_of::<BlobData>(), BLOB_DATA_ALIGN);

View File

@@ -12,6 +12,7 @@ use crate::poh_recorder::PohRecorder;
use crate::result::{Error, Result};
use crate::rpc_subscriptions::RpcSubscriptions;
use crate::service::Service;
use crate::snapshot_package::SnapshotPackageSender;
use solana_metrics::{datapoint_warn, inc_new_counter_info};
use solana_runtime::bank::Bank;
use solana_sdk::hash::Hash;
@@ -41,6 +42,7 @@ impl Finalizer {
Finalizer { exit_sender }
}
}
// Implement a destructor for Finalizer.
impl Drop for Finalizer {
fn drop(&mut self) {
@@ -90,6 +92,7 @@ impl ReplayStage {
poh_recorder: &Arc<Mutex<PohRecorder>>,
leader_schedule_cache: &Arc<LeaderScheduleCache>,
slot_full_senders: Vec<Sender<(u64, Pubkey)>>,
snapshot_package_sender: Option<SnapshotPackageSender>,
) -> (Self, Receiver<Vec<Arc<Bank>>>)
where
T: 'static + KeypairUtil + Send + Sync,
@@ -144,14 +147,25 @@ impl ReplayStage {
if let Some((_, bank, lockouts)) = votable.into_iter().last() {
subscriptions.notify_subscribers(bank.slot(), &bank_forks);
if let Some(new_leader) =
if let Some(votable_leader) =
leader_schedule_cache.slot_leader_at(bank.slot(), Some(&bank))
{
Self::log_leader_change(
&my_pubkey,
bank.slot(),
&mut current_leader,
&new_leader,
&votable_leader,
);
}
let next_slot = bank.slot() + 1;
if let Some(new_leader) =
leader_schedule_cache.slot_leader_at(next_slot, Some(&bank))
{
datapoint_info!(
"replay_stage-new_leader",
("slot", next_slot, i64),
("leader", new_leader.to_string(), String),
);
}
@@ -168,6 +182,7 @@ impl ReplayStage {
&root_bank_sender,
lockouts,
&lockouts_sender,
&snapshot_package_sender,
)?;
Self::reset_poh_recorder(
@@ -235,14 +250,14 @@ impl ReplayStage {
if let Some(ref current_leader) = current_leader {
if current_leader != new_leader {
let msg = if current_leader == my_pubkey {
"I am no longer the leader"
". I am no longer the leader"
} else if new_leader == my_pubkey {
"I am the new leader"
". I am now the leader"
} else {
""
};
info!(
"LEADER CHANGE at slot: {} leader: {}. {}",
"LEADER CHANGE at slot: {} leader: {}{}",
bank_slot, new_leader, msg
);
}
@@ -261,7 +276,7 @@ impl ReplayStage {
assert!(!poh_recorder.lock().unwrap().has_bank());
let (reached_leader_tick, grace_ticks, poh_slot, parent_slot) =
let (reached_leader_tick, _grace_ticks, poh_slot, parent_slot) =
poh_recorder.lock().unwrap().reached_leader_tick();
if !reached_leader_tick {
@@ -303,10 +318,10 @@ impl ReplayStage {
return;
}
datapoint_warn!(
datapoint_info!(
"replay_stage-new_leader",
("count", poh_slot, i64),
("grace", grace_ticks, i64)
("slot", poh_slot, i64),
("leader", next_leader.to_string(), String),
);
let tpu_bank = bank_forks
@@ -335,13 +350,16 @@ impl ReplayStage {
}
}
// Returns the replay result and the number of replayed transactions
fn replay_blocktree_into_bank(
bank: &Bank,
blocktree: &Blocktree,
progress: &mut HashMap<u64, ForkProgress>,
) -> Result<()> {
) -> (Result<()>, usize) {
let mut tx_count = 0;
let result =
Self::load_blocktree_entries(bank, blocktree, progress).and_then(|(entries, num)| {
tx_count += entries.iter().map(|e| e.transactions.len()).sum::<usize>();
Self::replay_entries_into_bank(bank, entries, progress, num)
});
@@ -354,7 +372,7 @@ impl ReplayStage {
Self::mark_dead_slot(bank.slot(), blocktree, progress);
}
result
(result, tx_count)
}
fn mark_dead_slot(slot: u64, blocktree: &Blocktree, progress: &mut HashMap<u64, ForkProgress>) {
@@ -382,6 +400,7 @@ impl ReplayStage {
root_bank_sender: &Sender<Vec<Arc<Bank>>>,
lockouts: HashMap<u64, StakeLockout>,
lockouts_sender: &Sender<LockoutAggregationData>,
snapshot_package_sender: &Option<SnapshotPackageSender>,
) -> Result<()>
where
T: 'static + KeypairUtil + Send + Sync,
@@ -405,7 +424,10 @@ impl ReplayStage {
// is consumed by repair_service to update gossip, so we don't want to get blobs for
// repair on gossip before we update leader schedule, otherwise they may get dropped.
leader_schedule_cache.set_root(rooted_banks.last().unwrap());
bank_forks.write().unwrap().set_root(new_root);
bank_forks
.write()
.unwrap()
.set_root(new_root, snapshot_package_sender);
Self::handle_new_root(&bank_forks, progress);
trace!("new root {}", new_root);
if let Err(e) = root_bank_sender.send(rooted_banks) {
@@ -478,13 +500,13 @@ impl ReplayStage {
.reset(bank.last_blockhash(), bank.slot(), next_leader_slot);
let next_leader_msg = if let Some(next_leader_slot) = next_leader_slot {
format!("My next leader slot is #{}", next_leader_slot.0)
format!("My next leader slot is {}", next_leader_slot.0)
} else {
"I am not in the upcoming leader schedule yet".to_owned()
"I am not in the leader schedule yet".to_owned()
};
info!(
"{} voted and reset poh at {}. {}",
"{} voted and reset PoH at tick height {}. {}",
my_pubkey,
bank.tick_height(),
next_leader_msg,
@@ -499,6 +521,7 @@ impl ReplayStage {
slot_full_senders: &[Sender<(u64, Pubkey)>],
) -> bool {
let mut did_complete_bank = false;
let mut tx_count = 0;
let active_banks = bank_forks.read().unwrap().active_banks();
trace!("active banks {:?}", active_banks);
@@ -509,15 +532,16 @@ impl ReplayStage {
}
let bank = bank_forks.read().unwrap().get(*bank_slot).unwrap().clone();
if bank.collector_id() != my_pubkey
&& Self::is_replay_result_fatal(&Self::replay_blocktree_into_bank(
&bank, &blocktree, progress,
))
{
trace!("replay_result_fatal slot {}", bank_slot);
// If the bank was corrupted, don't try to run the below logic to check if the
// bank is completed
continue;
if bank.collector_id() != my_pubkey {
let (replay_result, replay_tx_count) =
Self::replay_blocktree_into_bank(&bank, &blocktree, progress);
tx_count += replay_tx_count;
if Self::is_replay_result_fatal(&replay_result) {
trace!("replay_result_fatal slot {}", bank_slot);
// If the bank was corrupted, don't try to run the below logic to check if the
// bank is completed
continue;
}
}
assert_eq!(*bank_slot, bank.slot());
if bank.tick_height() == bank.max_tick_height() {
@@ -532,6 +556,7 @@ impl ReplayStage {
);
}
}
inc_new_counter_info!("replay_stage-replay_transactions", tx_count);
did_complete_bank
}
@@ -989,7 +1014,8 @@ mod test {
progress.insert(bank0.slot(), ForkProgress::new(last_blockhash));
let blob = blob_to_insert(&last_blockhash);
blocktree.insert_data_blobs(&[blob]).unwrap();
let res = ReplayStage::replay_blocktree_into_bank(&bank0, &blocktree, &mut progress);
let (res, _tx_count) =
ReplayStage::replay_blocktree_into_bank(&bank0, &blocktree, &mut progress);
// Check that the erroring bank was marked as dead in the progress map
assert!(progress

View File

@@ -62,7 +62,7 @@ pub struct Replicator {
struct ReplicatorMeta {
slot: u64,
slots_per_segment: u64,
ledger_path: String,
ledger_path: PathBuf,
signature: Signature,
ledger_data_file_encrypted: PathBuf,
sampling_offsets: Vec<u64>,
@@ -200,7 +200,7 @@ impl Replicator {
/// * `keypair` - Keypair for this replicator
#[allow(clippy::new_ret_no_self)]
pub fn new(
ledger_path: &str,
ledger_path: &Path,
node: Node,
cluster_entrypoint: ContactInfo,
keypair: Arc<Keypair>,
@@ -263,7 +263,7 @@ impl Replicator {
let exit = exit.clone();
let node_info = node.info.clone();
let mut meta = ReplicatorMeta {
ledger_path: ledger_path.to_string(),
ledger_path: ledger_path.to_path_buf(),
..ReplicatorMeta::default()
};
spawn(move || {
@@ -516,8 +516,7 @@ impl Replicator {
}
fn encrypt_ledger(meta: &mut ReplicatorMeta, blocktree: &Arc<Blocktree>) -> Result<()> {
let ledger_path = Path::new(&meta.ledger_path);
meta.ledger_data_file_encrypted = ledger_path.join(ENCRYPTED_FILENAME);
meta.ledger_data_file_encrypted = meta.ledger_path.join(ENCRYPTED_FILENAME);
{
let mut ivec = [0u8; 64];

View File

@@ -30,6 +30,7 @@ pub enum Error {
SendError,
PohRecorderError(poh_recorder::PohRecorderError),
BlocktreeError(blocktree::BlocktreeError),
FsExtra(fs_extra::error::Error),
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -102,6 +103,11 @@ impl std::convert::From<std::io::Error> for Error {
Error::IO(e)
}
}
impl std::convert::From<fs_extra::error::Error> for Error {
fn from(e: fs_extra::error::Error) -> Error {
Error::FsExtra(e)
}
}
impl std::convert::From<serde_json::Error> for Error {
fn from(e: serde_json::Error) -> Error {
Error::JSON(e)

View File

@@ -14,6 +14,7 @@ use rand::SeedableRng;
use rand_chacha::ChaChaRng;
use solana_metrics::{datapoint_info, inc_new_counter_error};
use solana_runtime::epoch_schedule::EpochSchedule;
use std::cmp;
use std::net::UdpSocket;
use std::sync::atomic::AtomicBool;
use std::sync::mpsc::channel;
@@ -39,12 +40,13 @@ fn retransmit(
let r_bank = bank_forks.read().unwrap().working_bank();
let bank_epoch = r_bank.get_stakers_epoch(r_bank.slot());
let mut peers_len = 0;
for blob in &blobs {
let (my_index, mut peers) = cluster_info.read().unwrap().shuffle_peers_and_index(
staking_utils::staked_nodes_at_epoch(&r_bank, bank_epoch).as_ref(),
ChaChaRng::from_seed(blob.read().unwrap().seed()),
);
peers_len = cmp::max(peers_len, peers.len());
peers.remove(my_index);
let (neighbors, children) = compute_retransmit_peers(DATA_PLANE_FANOUT, my_index, peers);
@@ -58,6 +60,7 @@ fn retransmit(
ClusterInfo::retransmit_to(&cluster_info, &children, blob, leader, sock, true)?;
}
}
datapoint_info!("cluster_info-num_nodes", ("count", peers_len, i64));
Ok(())
}

View File

@@ -6,12 +6,17 @@ use crate::rpc::*;
use crate::service::Service;
use crate::storage_stage::StorageState;
use jsonrpc_core::MetaIoHandler;
use jsonrpc_http_server::{hyper, AccessControlAllowOrigin, DomainsValidation, ServerBuilder};
use jsonrpc_http_server::{
hyper, AccessControlAllowOrigin, DomainsValidation, RequestMiddleware, RequestMiddlewareAction,
ServerBuilder,
};
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, RwLock};
use std::thread::{self, sleep, Builder, JoinHandle};
use std::time::Duration;
use tokio::prelude::Future;
pub struct JsonRpcService {
thread_hdl: JoinHandle<()>,
@@ -20,6 +25,61 @@ pub struct JsonRpcService {
pub request_processor: Arc<RwLock<JsonRpcRequestProcessor>>, // Used only by test_rpc_new()...
}
#[derive(Default)]
struct RpcRequestMiddleware {
ledger_path: PathBuf,
}
impl RpcRequestMiddleware {
pub fn new(ledger_path: PathBuf) -> Self {
Self { ledger_path }
}
fn not_found() -> hyper::Response<hyper::Body> {
hyper::Response::builder()
.status(hyper::StatusCode::NOT_FOUND)
.body(hyper::Body::empty())
.unwrap()
}
fn internal_server_error() -> hyper::Response<hyper::Body> {
hyper::Response::builder()
.status(hyper::StatusCode::INTERNAL_SERVER_ERROR)
.body(hyper::Body::empty())
.unwrap()
}
fn get(&self, filename: &str) -> RequestMiddlewareAction {
let filename = self.ledger_path.join(filename);
RequestMiddlewareAction::Respond {
should_validate_hosts: true,
response: Box::new(
tokio_fs::file::File::open(filename)
.and_then(|file| {
let buf: Vec<u8> = Vec::new();
tokio_io::io::read_to_end(file, buf)
.and_then(|item| Ok(hyper::Response::new(item.1.into())))
.or_else(|_| Ok(RpcRequestMiddleware::internal_server_error()))
})
.or_else(|_| Ok(RpcRequestMiddleware::not_found())),
),
}
}
}
impl RequestMiddleware for RpcRequestMiddleware {
fn on_request(&self, request: hyper::Request<hyper::Body>) -> RequestMiddlewareAction {
trace!("request uri: {}", request.uri());
match request.uri().path() {
"/snapshot.tgz" => self.get("snapshot.tgz"),
"/genesis.tgz" => self.get("genesis.tgz"),
_ => RequestMiddlewareAction::Proceed {
should_continue_on_invalid_cors: false,
request,
},
}
}
}
impl JsonRpcService {
pub fn new(
cluster_info: &Arc<RwLock<ClusterInfo>>,
@@ -27,6 +87,7 @@ impl JsonRpcService {
storage_state: StorageState,
config: JsonRpcConfig,
bank_forks: Arc<RwLock<BankForks>>,
ledger_path: &Path,
exit: &Arc<AtomicBool>,
) -> Self {
info!("rpc bound to {:?}", rpc_addr);
@@ -41,6 +102,7 @@ impl JsonRpcService {
let cluster_info = cluster_info.clone();
let exit_ = exit.clone();
let ledger_path = ledger_path.to_path_buf();
let thread_hdl = Builder::new()
.name("solana-jsonrpc".to_string())
@@ -57,11 +119,13 @@ impl JsonRpcService {
.cors(DomainsValidation::AllowOnly(vec![
AccessControlAllowOrigin::Any,
]))
.request_middleware(RpcRequestMiddleware::new(ledger_path))
.start_http(&rpc_addr);
if let Err(e) = server {
warn!("JSON RPC service unavailable error: {:?}. \nAlso, check that port {} is not already in use by another application", e, rpc_addr.port());
return;
}
while !exit_.load(Ordering::Relaxed) {
sleep(Duration::from_millis(100));
}
@@ -116,6 +180,7 @@ mod tests {
StorageState::default(),
JsonRpcConfig::default(),
bank_forks,
&PathBuf::from("farf"),
&exit,
);
let thread = rpc_service.thread_hdl.thread();
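With the middleware above in place, the snapshot and genesis archives are served on the same port as JSON RPC. A hedged sketch of fetching one with reqwest 0.9's blocking client (the address and the fetch_snapshot helper are illustrative):
use std::fs::File;
// Hypothetical: download snapshot.tgz from a validator's RPC port and write it to disk.
fn fetch_snapshot(rpc_addr: &str, out: &std::path::Path) -> Result<(), Box<dyn std::error::Error>> {
    let url = format!("http://{}/snapshot.tgz", rpc_addr);
    let mut response = reqwest::get(url.as_str())?;
    let mut file = File::create(out)?;
    // copy_to streams the response body into any Write sink
    response.copy_to(&mut file)?;
    Ok(())
}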

431
core/src/shred.rs Normal file
View File

@@ -0,0 +1,431 @@
//! The `shred` module defines data structures and methods to pull MTU-sized data frames from the network.
use bincode::serialized_size;
use core::borrow::BorrowMut;
use serde::{Deserialize, Serialize};
use solana_sdk::signature::{Keypair, KeypairUtil, Signature};
use std::io::Write;
use std::sync::{Arc, RwLock};
use std::{cmp, io};
pub type SharedShred = Arc<RwLock<SignedShred>>;
pub type SharedShreds = Vec<SharedShred>;
pub type Shreds = Vec<SignedShred>;
// Assume 1500 bytes MTU size
// (subtract 8 bytes of UDP header + 20 bytes ipv4 OR 40 bytes ipv6 header)
pub const MAX_DGRAM_SIZE: usize = (1500 - 48);
#[derive(Serialize, Deserialize, PartialEq, Debug)]
pub struct SignedShred {
pub signature: Signature,
pub shred: Shred,
}
impl SignedShred {
fn new(shred: Shred) -> Self {
SignedShred {
signature: Signature::default(),
shred,
}
}
pub fn sign(&mut self, keypair: &Keypair) {
let data = bincode::serialize(&self.shred).expect("Failed to serialize shred");
self.signature = keypair.sign_message(&data);
}
}
#[derive(Serialize, Deserialize, PartialEq, Debug)]
pub enum Shred {
FirstInSlot(FirstDataShred),
FirstInFECSet(DataShred),
Data(DataShred),
LastInFECSetData(DataShred),
LastInSlotData(DataShred),
Coding(CodingShred),
LastInFECSetCoding(CodingShred),
LastInSlotCoding(CodingShred),
}
#[derive(Serialize, Deserialize, Default, PartialEq, Debug)]
pub struct ShredCommonHeader {
pub slot: u64,
pub index: u32,
}
#[derive(Serialize, Deserialize, Default, PartialEq, Debug)]
pub struct DataShredHeader {
_reserved: CodingShredHeader,
pub common_header: ShredCommonHeader,
}
#[derive(Serialize, Deserialize, Default, PartialEq, Debug)]
pub struct FirstDataShredHeader {
pub data_header: DataShredHeader,
pub parent: u64,
}
#[derive(Serialize, Deserialize, Default, PartialEq, Debug)]
pub struct CodingShredHeader {
pub common_header: ShredCommonHeader,
pub num_data_shreds: u8,
pub num_coding_shreds: u8,
pub position: u8,
}
#[derive(Serialize, Deserialize, PartialEq, Debug)]
pub struct FirstDataShred {
pub header: FirstDataShredHeader,
pub payload: Vec<u8>,
}
#[derive(Serialize, Deserialize, PartialEq, Debug)]
pub struct DataShred {
pub header: DataShredHeader,
pub payload: Vec<u8>,
}
#[derive(Serialize, Deserialize, PartialEq, Debug)]
pub struct CodingShred {
pub header: CodingShredHeader,
pub payload: Vec<u8>,
}
impl Default for FirstDataShred {
fn default() -> Self {
let empty_shred = Shred::FirstInSlot(FirstDataShred {
header: FirstDataShredHeader::default(),
payload: vec![],
});
let size =
MAX_DGRAM_SIZE - serialized_size(&SignedShred::new(empty_shred)).unwrap() as usize;
FirstDataShred {
header: FirstDataShredHeader::default(),
payload: vec![0; size],
}
}
}
impl Default for DataShred {
fn default() -> Self {
let empty_shred = Shred::Data(DataShred {
header: DataShredHeader::default(),
payload: vec![],
});
let size =
MAX_DGRAM_SIZE - serialized_size(&SignedShred::new(empty_shred)).unwrap() as usize;
DataShred {
header: DataShredHeader::default(),
payload: vec![0; size],
}
}
}
impl Default for CodingShred {
fn default() -> Self {
let empty_shred = Shred::Coding(CodingShred {
header: CodingShredHeader::default(),
payload: vec![],
});
let size =
MAX_DGRAM_SIZE - serialized_size(&SignedShred::new(empty_shred)).unwrap() as usize;
CodingShred {
header: CodingShredHeader::default(),
payload: vec![0; size],
}
}
}
pub trait WriteAtOffset {
fn write_at(&mut self, offset: usize, buf: &[u8]) -> usize;
}
impl WriteAtOffset for FirstDataShred {
fn write_at(&mut self, offset: usize, buf: &[u8]) -> usize {
let slice_len = cmp::min(self.payload.len().saturating_sub(offset), buf.len());
if slice_len > 0 {
self.payload[offset..offset + slice_len].copy_from_slice(&buf[..slice_len]);
}
slice_len
}
}
impl WriteAtOffset for DataShred {
fn write_at(&mut self, offset: usize, buf: &[u8]) -> usize {
let slice_len = cmp::min(self.payload.len().saturating_sub(offset), buf.len());
if slice_len > 0 {
self.payload[offset..offset + slice_len].copy_from_slice(&buf[..slice_len]);
}
slice_len
}
}
impl WriteAtOffset for CodingShred {
fn write_at(&mut self, offset: usize, buf: &[u8]) -> usize {
let slice_len = cmp::min(self.payload.len().saturating_sub(offset), buf.len());
if slice_len > 0 {
self.payload[offset..offset + slice_len].copy_from_slice(&buf[..slice_len]);
}
slice_len
}
}
#[derive(Default)]
pub struct Shredder {
slot: u64,
index: u32,
parent: Option<u64>,
_fec_ratio: u64,
signer: Arc<Keypair>,
pub shreds: Shreds,
pub active_shred: Option<SignedShred>,
pub active_offset: usize,
}
impl Write for Shredder {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let mut current_shred = self
.active_shred
.take()
.or_else(|| {
Some(
self.parent
.take()
.map(|parent| {
// If a parent slot is provided, assume this is the first shred in the slot
SignedShred::new(Shred::FirstInSlot(self.new_first_shred(parent)))
})
.unwrap_or_else(||
// If no parent slot is provided, and since there's no existing shred,
// assume it's the first shred in the FEC block
SignedShred::new(Shred::FirstInFECSet(self.new_data_shred()))),
)
})
.unwrap();
let written = self.active_offset;
let slice_len = match current_shred.shred.borrow_mut() {
Shred::FirstInSlot(s) => s.write_at(written, buf),
Shred::FirstInFECSet(s) => s.write_at(written, buf),
Shred::Data(s) => s.write_at(written, buf),
Shred::LastInFECSetData(s) => s.write_at(written, buf),
Shred::LastInSlotData(s) => s.write_at(written, buf),
Shred::Coding(s) => s.write_at(written, buf),
Shred::LastInFECSetCoding(s) => s.write_at(written, buf),
Shred::LastInSlotCoding(s) => s.write_at(written, buf),
};
let active_shred = if buf.len() > slice_len {
self.finalize_shred(current_shred);
// Continue generating more data shreds.
// If the caller later decides to finalize the FEC block or the slot, this data shred
// will morph into the appropriate shred type
SignedShred::new(Shred::Data(DataShred::default()))
} else {
self.active_offset += slice_len;
current_shred
};
self.active_shred = Some(active_shred);
Ok(slice_len)
}
fn flush(&mut self) -> io::Result<()> {
if self.active_shred.is_none() {
return Ok(());
}
let current_shred = self.active_shred.take().unwrap();
self.finalize_shred(current_shred);
Ok(())
}
}
impl Shredder {
pub fn new(
slot: u64,
parent: Option<u64>,
fec_ratio: u64,
signer: &Arc<Keypair>,
index: u32,
) -> Self {
Shredder {
slot,
index,
parent,
_fec_ratio: fec_ratio,
signer: signer.clone(),
..Shredder::default()
}
}
pub fn finalize_shred(&mut self, mut shred: SignedShred) {
shred.sign(&self.signer);
self.shreds.push(shred);
self.active_offset = 0;
self.index += 1;
}
pub fn new_data_shred(&self) -> DataShred {
let mut data_shred = DataShred::default();
data_shred.header.common_header.slot = self.slot;
data_shred.header.common_header.index = self.index;
data_shred
}
pub fn new_first_shred(&self, parent: u64) -> FirstDataShred {
let mut first_shred = FirstDataShred::default();
first_shred.header.parent = parent;
first_shred.header.data_header.common_header.slot = self.slot;
first_shred.header.data_header.common_header.index = self.index;
first_shred
}
fn make_final_data_shred(&mut self) -> DataShred {
self.active_shred
.take()
.map_or(self.new_data_shred(), |current_shred| {
match current_shred.shred {
Shred::FirstInSlot(s) => {
self.finalize_shred(SignedShred::new(Shred::FirstInSlot(s)));
self.new_data_shred()
}
Shred::FirstInFECSet(s) => s,
Shred::Data(s) => s,
Shred::LastInFECSetData(s) => s,
Shred::LastInSlotData(s) => s,
Shred::Coding(_) => self.new_data_shred(),
Shred::LastInFECSetCoding(_) => self.new_data_shred(),
Shred::LastInSlotCoding(_) => self.new_data_shred(),
}
})
}
pub fn finalize_fec_block(&mut self) {
let final_shred = self.make_final_data_shred();
self.finalize_shred(SignedShred::new(Shred::LastInFECSetData(final_shred)));
}
pub fn finalize_slot(&mut self) {
let final_shred = self.make_final_data_shred();
self.finalize_shred(SignedShred::new(Shred::LastInSlotData(final_shred)));
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_data_shredder() {
let keypair = Arc::new(Keypair::new());
let mut shredder = Shredder::new(0x123456789abcdef0, Some(5), 0, &keypair, 0);
assert!(shredder.shreds.is_empty());
assert_eq!(shredder.active_shred, None);
assert_eq!(shredder.active_offset, 0);
// Test0: Write some data to shred. Not enough to create a signed shred
let data: Vec<u8> = (0..25).collect();
assert_eq!(shredder.write(&data).unwrap(), data.len());
assert!(shredder.shreds.is_empty());
assert_ne!(shredder.active_shred, None);
assert_eq!(shredder.active_offset, 25);
// Test1: Write some more data to shred. Not enough to create a signed shred
assert_eq!(shredder.write(&data).unwrap(), data.len());
assert!(shredder.shreds.is_empty());
assert_eq!(shredder.active_offset, 50);
// Test2: Write enough data to create a shred (> MAX_DGRAM_SIZE)
let data: Vec<_> = (0..MAX_DGRAM_SIZE).collect();
let data: Vec<u8> = data.iter().map(|x| *x as u8).collect();
let offset = shredder.write(&data).unwrap();
assert_ne!(offset, data.len());
// Assert that we have at least one signed shred
assert!(!shredder.shreds.is_empty());
// Assert that a new active shred was also created
assert_ne!(shredder.active_shred, None);
// Assert that the new active shred was not populated
assert_eq!(shredder.active_offset, 0);
// Test3: Assert that the first shred in slot was created (since we gave a parent to shredder)
let shred = shredder.shreds.pop().unwrap();
assert_matches!(shred.shred, Shred::FirstInSlot(_));
let pdu = bincode::serialize(&shred).unwrap();
assert_eq!(pdu.len(), MAX_DGRAM_SIZE);
info!("Len: {}", pdu.len());
info!("{:?}", pdu);
// Test4: Try deserialize the PDU and assert that it matches the original shred
let deserialized_shred: SignedShred =
bincode::deserialize(&pdu).expect("Failed in deserializing the PDU");
assert_eq!(shred, deserialized_shred);
// Test5: Write left over data, and assert that a data shred is being created
shredder.write(&data[offset..]).unwrap();
// assert_matches!(shredder.active_shred.unwrap().shred, Shred::Data(_));
// It shouldn't generate a signed shred
assert!(shredder.shreds.is_empty());
// Test6: Let's finalize the FEC block. That should cause the current shred to morph into
// a signed LastInFECSetData shred
shredder.finalize_fec_block();
// We should have a new signed shred
assert!(!shredder.shreds.is_empty());
// Must be Last in FEC Set
let shred = shredder.shreds.pop().unwrap();
assert_matches!(shred.shred, Shred::LastInFECSetData(_));
let pdu = bincode::serialize(&shred).unwrap();
assert_eq!(pdu.len(), MAX_DGRAM_SIZE);
// Test7: Let's write some more data to the shredder.
// Now we should get a new FEC block
let data: Vec<_> = (0..MAX_DGRAM_SIZE).collect();
let data: Vec<u8> = data.iter().map(|x| *x as u8).collect();
let offset = shredder.write(&data).unwrap();
assert_ne!(offset, data.len());
// We should have a new signed shred
assert!(!shredder.shreds.is_empty());
// Must be FirstInFECSet
let shred = shredder.shreds.pop().unwrap();
assert_matches!(shred.shred, Shred::FirstInFECSet(_));
let pdu = bincode::serialize(&shred).unwrap();
assert_eq!(pdu.len(), MAX_DGRAM_SIZE);
// Test8: Write more data to generate an intermediate data shred
let offset = shredder.write(&data).unwrap();
assert_ne!(offset, data.len());
// We should have a new signed shred
assert!(!shredder.shreds.is_empty());
// Must be a Data shred
let shred = shredder.shreds.pop().unwrap();
assert_matches!(shred.shred, Shred::Data(_));
let pdu = bincode::serialize(&shred).unwrap();
assert_eq!(pdu.len(), MAX_DGRAM_SIZE);
// Test9: Write some data to shredder
let data: Vec<u8> = (0..25).collect();
assert_eq!(shredder.write(&data).unwrap(), data.len());
// And, finish the slot
shredder.finalize_slot();
// We should have a new signed shred
assert!(!shredder.shreds.is_empty());
// Must be LastInSlot
let shred = shredder.shreds.pop().unwrap();
assert_matches!(shred.shred, Shred::LastInSlotData(_));
let pdu = bincode::serialize(&shred).unwrap();
assert_eq!(pdu.len(), MAX_DGRAM_SIZE);
}
}
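A hedged usage sketch of the shredder outside the test above: entry bytes are pushed through the io::Write impl, and finalizing the slot signs and emits the trailing LastInSlotData shred (the slot numbers and the shred_entries_example helper are illustrative; std::io::Write is assumed to be in scope as in the module above):
// Illustrative only: shred a buffer of serialized entries for slot 1, parent slot 0,
// with FEC disabled (ratio 0).
fn shred_entries_example(entry_bytes: &[u8], keypair: &Arc<Keypair>) -> Shreds {
    let mut shredder = Shredder::new(1, Some(0), 0, keypair, 0);
    let mut offset = 0;
    while offset < entry_bytes.len() {
        // Each write() consumes at most one shred's worth of payload
        offset += shredder.write(&entry_bytes[offset..]).expect("shred write");
    }
    // Signs whatever is buffered and pushes the final LastInSlotData shred
    shredder.finalize_slot();
    shredder.shreds
}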

View File

@@ -0,0 +1,195 @@
use crate::result::{Error, Result};
use crate::service::Service;
use flate2::write::GzEncoder;
use flate2::Compression;
use solana_runtime::accounts_db::AccountStorageEntry;
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::{Receiver, RecvTimeoutError, Sender};
use std::sync::Arc;
use std::thread::{self, Builder, JoinHandle};
use std::time::Duration;
pub type SnapshotPackageSender = Sender<SnapshotPackage>;
pub type SnapshotPackageReceiver = Receiver<SnapshotPackage>;
pub const TAR_SNAPSHOT_DIR: &str = "snapshots";
pub const TAR_ACCOUNTS_DIR: &str = "accounts";
pub struct SnapshotPackage {
snapshot_path: PathBuf,
storage_entries: Vec<Arc<AccountStorageEntry>>,
tar_output_file: PathBuf,
}
impl SnapshotPackage {
pub fn new(
snapshot_path: PathBuf,
storage_entries: Vec<Arc<AccountStorageEntry>>,
tar_output_file: PathBuf,
) -> Self {
Self {
snapshot_path,
storage_entries,
tar_output_file,
}
}
}
impl Drop for SnapshotPackage {
fn drop(&mut self) {
let _ = fs::remove_dir_all(&self.snapshot_path);
}
}
pub struct SnapshotPackagerService {
t_snapshot_packager: JoinHandle<()>,
}
impl SnapshotPackagerService {
pub fn new(snapshot_package_receiver: SnapshotPackageReceiver, exit: &Arc<AtomicBool>) -> Self {
let exit = exit.clone();
let t_snapshot_packager = Builder::new()
.name("solana-snapshot-packager".to_string())
.spawn(move || loop {
if exit.load(Ordering::Relaxed) {
break;
}
if let Err(e) = Self::package_snapshots(&snapshot_package_receiver) {
match e {
Error::RecvTimeoutError(RecvTimeoutError::Disconnected) => break,
Error::RecvTimeoutError(RecvTimeoutError::Timeout) => (),
_ => info!("Error from package_snapshots: {:?}", e),
}
}
})
.unwrap();
Self {
t_snapshot_packager,
}
}
pub fn package_snapshots(snapshot_receiver: &SnapshotPackageReceiver) -> Result<()> {
let snapshot_package = snapshot_receiver.recv_timeout(Duration::from_secs(1))?;
// Create the tar builder
let tar_gz = tempfile::Builder::new()
.prefix("new_state")
.suffix(".tgz")
.tempfile()?;
let temp_tar_path = tar_gz.path();
let enc = GzEncoder::new(&tar_gz, Compression::default());
let mut tar = tar::Builder::new(enc);
// Create the list of paths to compress, starting with the snapshots
let tar_output_snapshots_dir = Path::new(&TAR_SNAPSHOT_DIR);
// Add the snapshots to the tarball and delete the directory of hardlinks to the snapshots
// that was created to persist those snapshots while this package was being created
let res = tar.append_dir_all(
tar_output_snapshots_dir,
snapshot_package.snapshot_path.as_path(),
);
let _ = fs::remove_dir_all(snapshot_package.snapshot_path.as_path());
res?;
// Add the AppendVecs into the compressible list
let tar_output_accounts_dir = Path::new(&TAR_ACCOUNTS_DIR);
for storage in &snapshot_package.storage_entries {
let storage_path = storage.get_path();
let output_path = tar_output_accounts_dir.join(
storage_path
.file_name()
.expect("Invalid AppendVec file path"),
);
// `output_path` - The directory where the AppendVec will be placed in the tarball.
// `storage_path` - The file path where the AppendVec itself is located
tar.append_path_with_name(storage_path, output_path)?;
}
// Once everything is successful, overwrite the previous tarball so that other validators
// can rsync this newly packaged snapshot
tar.finish()?;
let _ = fs::remove_file(&snapshot_package.tar_output_file);
fs::hard_link(&temp_tar_path, &snapshot_package.tar_output_file)?;
Ok(())
}
}
impl Service for SnapshotPackagerService {
type JoinReturnType = ();
fn join(self) -> thread::Result<()> {
self.t_snapshot_packager.join()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::snapshot_utils;
use std::fs::OpenOptions;
use std::io::Write;
use std::sync::mpsc::channel;
use tempfile::TempDir;
#[test]
fn test_package_snapshots() {
// Create temporary placeholder directory for all test files
let temp_dir = TempDir::new().unwrap();
let (sender, receiver) = channel();
let accounts_dir = temp_dir.path().join("accounts");
let snapshots_dir = temp_dir.path().join("snapshots");
let snapshot_package_output_path = temp_dir.path().join("snapshots_output");
fs::create_dir_all(&snapshot_package_output_path).unwrap();
// Create some storage entries
let storage_entries: Vec<_> = (0..5)
.map(|i| Arc::new(AccountStorageEntry::new(&accounts_dir, 0, i, 10)))
.collect();
// Create some fake snapshot
fs::create_dir_all(&snapshots_dir).unwrap();
let snapshots_paths: Vec<_> = (0..5)
.map(|i| {
let fake_snapshot_path = snapshots_dir.join(format!("fake_snapshot_{}", i));
let mut fake_snapshot_file = OpenOptions::new()
.read(true)
.write(true)
.create(true)
.open(&fake_snapshot_path)
.unwrap();
fake_snapshot_file.write_all(b"Hello, world!").unwrap();
fake_snapshot_path
})
.collect();
// Create directory of hard links for snapshots
let link_snapshots_dir = temp_dir.path().join("link_snapshots");
fs::create_dir_all(&link_snapshots_dir).unwrap();
for snapshots_path in snapshots_paths {
let snapshot_file_name = snapshots_path.file_name().unwrap();
let link_path = link_snapshots_dir.join(snapshot_file_name);
fs::hard_link(&snapshots_path, &link_path).unwrap();
}
// Create a packageable snapshot
let output_tar_path = snapshot_utils::get_snapshot_tar_path(&snapshot_package_output_path);
let snapshot_package = SnapshotPackage::new(
link_snapshots_dir,
storage_entries.clone(),
output_tar_path.clone(),
);
sender.send(snapshot_package).unwrap();
// Make tarball from packageable snapshot
SnapshotPackagerService::package_snapshots(&receiver).unwrap();
// Check tarball is correct
snapshot_utils::tests::verify_snapshot_tar(output_tar_path, snapshots_dir, accounts_dir);
}
}
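Beyond the direct package_snapshots call exercised in the test, the service is meant to run on its own thread; a hedged wiring sketch (the channel endpoints would normally be threaded through ReplayStage, and run_packager_example is illustrative):
use crate::service::Service;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::channel;
use std::sync::Arc;
// Illustrative only.
fn run_packager_example() {
    let exit = Arc::new(AtomicBool::new(false));
    let (snapshot_package_sender, snapshot_package_receiver) = channel();
    let packager = SnapshotPackagerService::new(snapshot_package_receiver, &exit);
    // ... hand snapshot_package_sender to the code that builds SnapshotPackage values ...
    drop(snapshot_package_sender);
    exit.store(true, Ordering::Relaxed);
    packager.join().expect("packager thread panicked"); // join() comes from the Service trait
}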

255
core/src/snapshot_utils.rs Normal file
View File

@@ -0,0 +1,255 @@
use crate::result::{Error, Result};
use crate::snapshot_package::SnapshotPackage;
use bincode::{deserialize_from, serialize_into};
use flate2::read::GzDecoder;
use solana_runtime::bank::{Bank, StatusCacheRc};
use std::fs;
use std::fs::File;
use std::io::{BufReader, BufWriter, Error as IOError, ErrorKind};
use std::path::{Path, PathBuf};
use tar::Archive;
pub fn package_snapshot<P: AsRef<Path>, Q: AsRef<Path>>(
bank: &Bank,
snapshot_names: &[u64],
snapshot_dir: P,
snapshot_package_output_file: Q,
) -> Result<SnapshotPackage> {
let slot = bank.slot();
// Hard link all the snapshots we need for this package
let snapshot_hard_links_dir = get_snapshots_hardlink_dir_for_package(
snapshot_package_output_file
.as_ref()
.parent()
.expect("Invalid output path for tar"),
slot,
);
let _ = fs::remove_dir_all(&snapshot_hard_links_dir);
fs::create_dir_all(&snapshot_hard_links_dir)?;
// Get a reference to all the relevant AccountStorageEntries
let account_storage_entries = bank.rc.get_storage_entries();
// Create a snapshot package
trace!(
"Snapshot for bank: {} has {} account storage entries",
slot,
account_storage_entries.len()
);
let package = SnapshotPackage::new(
snapshot_hard_links_dir.clone(),
account_storage_entries,
snapshot_package_output_file.as_ref().to_path_buf(),
);
// Any errors from this point on will cause the above SnapshotPackage to drop, clearing
// any temporary state created for the SnapshotPackage (like the snapshot_hard_links_dir)
for name in snapshot_names {
hardlink_snapshot_directory(&snapshot_dir, &snapshot_hard_links_dir, *name)?;
}
Ok(package)
}
pub fn get_snapshot_names<P: AsRef<Path>>(snapshot_path: P) -> Vec<u64> {
let paths = fs::read_dir(snapshot_path).expect("Invalid snapshot path");
let mut names = paths
.filter_map(|entry| {
entry.ok().and_then(|e| {
e.path()
.file_name()
.and_then(|n| n.to_str().map(|s| s.parse::<u64>().unwrap()))
})
})
.collect::<Vec<u64>>();
names.sort();
names
}
pub fn add_snapshot<P: AsRef<Path>>(snapshot_path: P, bank: &Bank, root: u64) -> Result<()> {
let slot = bank.slot();
let slot_snapshot_dir = get_bank_snapshot_dir(snapshot_path, slot);
fs::create_dir_all(slot_snapshot_dir.clone()).map_err(Error::from)?;
let snapshot_file_path = slot_snapshot_dir.join(get_snapshot_file_name(slot));
trace!(
"creating snapshot {}, path: {:?}",
bank.slot(),
snapshot_file_path
);
let file = File::create(&snapshot_file_path)?;
let mut stream = BufWriter::new(file);
// Create the snapshot
serialize_into(&mut stream, &*bank).map_err(|_| get_io_error("serialize bank error"))?;
let mut parent_slot: u64 = 0;
if let Some(parent_bank) = bank.parent() {
parent_slot = parent_bank.slot();
}
serialize_into(&mut stream, &parent_slot)
.map_err(|_| get_io_error("serialize bank parent error"))?;
serialize_into(&mut stream, &root).map_err(|_| get_io_error("serialize root error"))?;
serialize_into(&mut stream, &bank.src)
.map_err(|_| get_io_error("serialize bank status cache error"))?;
serialize_into(&mut stream, &bank.rc)
.map_err(|_| get_io_error("serialize bank accounts error"))?;
trace!(
"successfully created snapshot {}, path: {:?}",
bank.slot(),
snapshot_file_path
);
Ok(())
}
pub fn remove_snapshot<P: AsRef<Path>>(slot: u64, snapshot_path: P) -> Result<()> {
let slot_snapshot_dir = get_bank_snapshot_dir(&snapshot_path, slot);
// Remove the snapshot directory for this slot
fs::remove_dir_all(slot_snapshot_dir)?;
Ok(())
}
pub fn load_snapshots<P: AsRef<Path>>(
names: &[u64],
bank0: &mut Bank,
bank_maps: &mut Vec<(u64, u64, Bank)>,
status_cache_rc: &StatusCacheRc,
snapshot_path: P,
) -> Option<u64> {
let mut bank_root: Option<u64> = None;
for (i, bank_slot) in names.iter().rev().enumerate() {
let snapshot_file_name = get_snapshot_file_name(*bank_slot);
let snapshot_dir = get_bank_snapshot_dir(&snapshot_path, *bank_slot);
let snapshot_file_path = snapshot_dir.join(snapshot_file_name.clone());
trace!("Load from {:?}", snapshot_file_path);
let file = File::open(snapshot_file_path);
if file.is_err() {
warn!("Snapshot file open failed for {}", bank_slot);
continue;
}
let file = file.unwrap();
let mut stream = BufReader::new(file);
let bank: Result<Bank> =
deserialize_from(&mut stream).map_err(|_| get_io_error("deserialize bank error"));
let slot: Result<u64> = deserialize_from(&mut stream)
.map_err(|_| get_io_error("deserialize bank parent error"));
let parent_slot = if slot.is_ok() { slot.unwrap() } else { 0 };
let root: Result<u64> =
deserialize_from(&mut stream).map_err(|_| get_io_error("deserialize root error"));
let status_cache: Result<StatusCacheRc> = deserialize_from(&mut stream)
.map_err(|_| get_io_error("deserialize bank status cache error"));
if bank_root.is_none() && bank0.rc.update_from_stream(&mut stream).is_ok() {
bank_root = Some(root.unwrap());
}
if bank_root.is_some() {
match bank {
Ok(v) => {
if status_cache.is_ok() {
let status_cache = status_cache.unwrap();
status_cache_rc.append(&status_cache);
// On the last snapshot, purge all outdated status cache
// entries
if i == names.len() - 1 {
status_cache_rc.purge_roots();
}
}
bank_maps.push((*bank_slot, parent_slot, v));
}
Err(_) => warn!("Load snapshot failed for {}", bank_slot),
}
} else {
warn!("Load snapshot rc failed for {}", bank_slot);
}
}
bank_root
}
pub fn get_snapshot_tar_path<P: AsRef<Path>>(snapshot_output_dir: P) -> PathBuf {
snapshot_output_dir.as_ref().join("snapshot.tgz")
}
pub fn untar_snapshot_in<P: AsRef<Path>, Q: AsRef<Path>>(
snapshot_tar: P,
unpack_dir: Q,
) -> Result<()> {
let tar_gz = File::open(snapshot_tar)?;
let tar = GzDecoder::new(tar_gz);
let mut archive = Archive::new(tar);
archive.unpack(&unpack_dir)?;
Ok(())
}
fn hardlink_snapshot_directory<P: AsRef<Path>, Q: AsRef<Path>>(
snapshot_dir: P,
snapshot_hardlink_dir: Q,
slot: u64,
) -> Result<()> {
// Create a new directory in snapshot_hardlink_dir
let new_slot_hardlink_dir = snapshot_hardlink_dir.as_ref().join(slot.to_string());
let _ = fs::remove_dir_all(&new_slot_hardlink_dir);
fs::create_dir_all(&new_slot_hardlink_dir)?;
// Hardlink the contents of the directory
let snapshot_file = snapshot_dir
.as_ref()
.join(slot.to_string())
.join(slot.to_string());
fs::hard_link(
&snapshot_file,
&new_slot_hardlink_dir.join(slot.to_string()),
)?;
Ok(())
}
fn get_snapshot_file_name(slot: u64) -> String {
slot.to_string()
}
fn get_bank_snapshot_dir<P: AsRef<Path>>(path: P, slot: u64) -> PathBuf {
path.as_ref().join(slot.to_string())
}
fn get_snapshots_hardlink_dir_for_package<P: AsRef<Path>>(parent_dir: P, slot: u64) -> PathBuf {
let file_name = format!("snapshot_{}_hard_links", slot);
parent_dir.as_ref().join(file_name)
}
fn get_io_error(error: &str) -> Error {
warn!("BankForks error: {:?}", error);
Error::IO(IOError::new(ErrorKind::Other, error))
}
#[cfg(test)]
pub mod tests {
use super::*;
use crate::snapshot_package::{TAR_ACCOUNTS_DIR, TAR_SNAPSHOT_DIR};
use tempfile::TempDir;
pub fn verify_snapshot_tar<P, Q, R>(
snapshot_tar: P,
snapshots_to_verify: Q,
storages_to_verify: R,
) where
P: AsRef<Path>,
Q: AsRef<Path>,
R: AsRef<Path>,
{
let temp_dir = TempDir::new().unwrap();
let unpack_dir = temp_dir.path();
untar_snapshot_in(snapshot_tar, &unpack_dir).unwrap();
// Check snapshots are the same
let unpacked_snapshots = unpack_dir.join(&TAR_SNAPSHOT_DIR);
assert!(!dir_diff::is_different(&snapshots_to_verify, unpacked_snapshots).unwrap());
// Check the account entries are the same
let unpacked_accounts = unpack_dir.join(&TAR_ACCOUNTS_DIR);
assert!(!dir_diff::is_different(&storages_to_verify, unpacked_accounts).unwrap());
}
}
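A hedged sketch of how the helpers above compose on the write side: persist a rooted bank, list the slot snapshots on disk, then stage them into a package for the SnapshotPackagerService (the snapshot_and_package helper and its paths are illustrative):
// Illustrative only; errors propagate with `?`.
fn snapshot_and_package(
    bank: &Bank,
    root: u64,
    snapshot_path: &Path,
    snapshot_output_dir: &Path,
) -> Result<SnapshotPackage> {
    // 1. Serialize this bank into <snapshot_path>/<slot>/<slot>
    add_snapshot(snapshot_path, bank, root)?;
    // 2. Collect every slot snapshot currently on disk, sorted ascending
    let names = get_snapshot_names(snapshot_path);
    // 3. Hard-link the snapshots into a staging dir and describe the tarball to build
    let tar_path = get_snapshot_tar_path(snapshot_output_dir);
    package_snapshot(bank, &names, snapshot_path, &tar_path)
}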

View File

@@ -134,46 +134,6 @@ pub fn blob_receiver(
.unwrap()
}
fn recv_blob_packets(sock: &UdpSocket, s: &PacketSender, recycler: &PacketsRecycler) -> Result<()> {
trace!(
"recv_blob_packets: receiving on {}",
sock.local_addr().unwrap()
);
let blobs = Blob::recv_from(sock)?;
for blob in blobs {
let mut packets =
Packets::new_with_recycler(recycler.clone(), PACKETS_PER_BLOB, "recv_blob_packets");
blob.read().unwrap().load_packets(&mut packets.packets);
s.send(packets)?;
}
Ok(())
}
pub fn blob_packet_receiver(
sock: Arc<UdpSocket>,
exit: &Arc<AtomicBool>,
s: PacketSender,
) -> JoinHandle<()> {
//DOCUMENTED SIDE-EFFECT
//1 second timeout on socket read
let timer = Duration::new(1, 0);
sock.set_read_timeout(Some(timer))
.expect("set socket timeout");
let exit = exit.clone();
let recycler = PacketsRecycler::default();
Builder::new()
.name("solana-blob_packet_receiver".to_string())
.spawn(move || loop {
if exit.load(Ordering::Relaxed) {
break;
}
let _ = recv_blob_packets(&sock, &s, &recycler);
})
.unwrap()
}
#[cfg(test)]
mod test {
use super::*;

View File

@@ -33,7 +33,7 @@ impl Tpu {
poh_recorder: &Arc<Mutex<PohRecorder>>,
entry_receiver: Receiver<WorkingBankEntries>,
transactions_sockets: Vec<UdpSocket>,
tpu_via_blobs_sockets: Vec<UdpSocket>,
tpu_forwards_sockets: Vec<UdpSocket>,
broadcast_socket: UdpSocket,
sigverify_disabled: bool,
blocktree: &Arc<Blocktree>,
@@ -44,7 +44,7 @@ impl Tpu {
let (packet_sender, packet_receiver) = channel();
let fetch_stage = FetchStage::new_with_sender(
transactions_sockets,
tpu_via_blobs_sockets,
tpu_forwards_sockets,
&exit,
&packet_sender,
&poh_recorder,

View File

@@ -24,10 +24,12 @@ use crate::replay_stage::ReplayStage;
use crate::retransmit_stage::RetransmitStage;
use crate::rpc_subscriptions::RpcSubscriptions;
use crate::service::Service;
use crate::snapshot_package::SnapshotPackagerService;
use crate::storage_stage::{StorageStage, StorageState};
use solana_sdk::pubkey::Pubkey;
use solana_sdk::signature::{Keypair, KeypairUtil};
use std::net::UdpSocket;
use std::path::PathBuf;
use std::sync::atomic::AtomicBool;
use std::sync::mpsc::{channel, Receiver};
use std::sync::{Arc, Mutex, RwLock};
@@ -40,6 +42,7 @@ pub struct Tvu {
blockstream_service: Option<BlockstreamService>,
ledger_cleanup_service: Option<LedgerCleanupService>,
storage_stage: StorageStage,
snapshot_packager_service: Option<SnapshotPackagerService>,
}
pub struct Sockets {
@@ -65,7 +68,7 @@ impl Tvu {
sockets: Sockets,
blocktree: Arc<Blocktree>,
storage_state: &StorageState,
blockstream: Option<&String>,
blockstream_unix_socket: Option<&PathBuf>,
max_ledger_slots: Option<u64>,
ledger_signal_receiver: Receiver<bool>,
subscriptions: &Arc<RpcSubscriptions>,
@@ -115,6 +118,17 @@ impl Tvu {
let (blockstream_slot_sender, blockstream_slot_receiver) = channel();
let (ledger_cleanup_slot_sender, ledger_cleanup_slot_receiver) = channel();
let (snapshot_packager_service, snapshot_package_sender) = {
let snapshot_config = { bank_forks.read().unwrap().snapshot_config().clone() };
if snapshot_config.is_some() {
// Start a snapshot packaging service
let (sender, receiver) = channel();
let snapshot_packager_service = SnapshotPackagerService::new(receiver, exit);
(Some(snapshot_packager_service), Some(sender))
} else {
(None, None)
}
};
let (replay_stage, root_bank_receiver) = ReplayStage::new(
&keypair.pubkey(),
@@ -129,13 +143,14 @@ impl Tvu {
poh_recorder,
leader_schedule_cache,
vec![blockstream_slot_sender, ledger_cleanup_slot_sender],
snapshot_package_sender,
);
let blockstream_service = if blockstream.is_some() {
let blockstream_service = if blockstream_unix_socket.is_some() {
let blockstream_service = BlockstreamService::new(
blockstream_slot_receiver,
blocktree.clone(),
blockstream.unwrap().to_string(),
blockstream_unix_socket.unwrap(),
&exit,
);
Some(blockstream_service)
@@ -170,6 +185,7 @@ impl Tvu {
blockstream_service,
ledger_cleanup_service,
storage_stage,
snapshot_packager_service,
}
}
}
@@ -188,6 +204,9 @@ impl Service for Tvu {
self.ledger_cleanup_service.unwrap().join()?;
}
self.replay_stage.join()?;
if let Some(s) = self.snapshot_packager_service {
s.join()?;
}
Ok(())
}
}

View File

@@ -1,6 +1,6 @@
//! The `fullnode` module hosts all the fullnode microservices.
use crate::bank_forks::BankForks;
use crate::bank_forks::{BankForks, SnapshotConfig};
use crate::blocktree::{Blocktree, CompletedSlotsReceiver};
use crate::blocktree_processor::{self, BankForksInfo};
use crate::broadcast_stage::BroadcastStageType;
@@ -26,6 +26,7 @@ use solana_sdk::pubkey::Pubkey;
use solana_sdk::signature::{Keypair, KeypairUtil};
use solana_sdk::timing::{timestamp, DEFAULT_SLOTS_PER_TURN};
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::Receiver;
use std::sync::{Arc, Mutex, RwLock};
@@ -35,11 +36,11 @@ use std::thread::Result;
pub struct ValidatorConfig {
pub sigverify_disabled: bool,
pub voting_disabled: bool,
pub blockstream: Option<String>,
pub blockstream_unix_socket: Option<PathBuf>,
pub storage_slots_per_turn: u64,
pub account_paths: Option<String>,
pub rpc_config: JsonRpcConfig,
pub snapshot_path: Option<String>,
pub snapshot_config: Option<SnapshotConfig>,
pub max_ledger_slots: Option<u64>,
pub broadcast_stage_type: BroadcastStageType,
pub erasure_config: ErasureConfig,
@@ -50,12 +51,12 @@ impl Default for ValidatorConfig {
Self {
sigverify_disabled: false,
voting_disabled: false,
blockstream: None,
blockstream_unix_socket: None,
storage_slots_per_turn: DEFAULT_SLOTS_PER_TURN,
max_ledger_slots: None,
account_paths: None,
rpc_config: JsonRpcConfig::default(),
snapshot_path: None,
snapshot_config: None,
broadcast_stage_type: BroadcastStageType::Standard,
erasure_config: ErasureConfig::default(),
}
@@ -79,7 +80,7 @@ impl Validator {
pub fn new(
mut node: Node,
keypair: &Arc<Keypair>,
ledger_path: &str,
ledger_path: &Path,
vote_account: &Pubkey,
voting_keypair: &Arc<Keypair>,
storage_keypair: &Arc<Keypair>,
@@ -104,7 +105,7 @@ impl Validator {
) = new_banks_from_blocktree(
ledger_path,
config.account_paths.clone(),
config.snapshot_path.clone(),
config.snapshot_config.clone(),
verify_ledger,
);
@@ -133,7 +134,7 @@ impl Validator {
&leader_schedule_cache,
&poh_config,
);
if config.snapshot_path.is_some() {
if config.snapshot_config.is_some() {
poh_recorder.set_bank(&bank);
}
@@ -175,6 +176,7 @@ impl Validator {
storage_state.clone(),
config.rpc_config.clone(),
bank_forks.clone(),
ledger_path,
&exit,
))
};
@@ -248,7 +250,7 @@ impl Validator {
sockets,
blocktree.clone(),
&storage_state,
config.blockstream.as_ref(),
config.blockstream_unix_socket.as_ref(),
config.max_ledger_slots,
ledger_signal_receiver,
&subscriptions,
@@ -267,7 +269,7 @@ impl Validator {
&poh_recorder,
entry_receiver,
node.sockets.tpu,
node.sockets.tpu_via_blobs,
node.sockets.tpu_forwards,
node.sockets.broadcast,
config.sigverify_disabled,
&blocktree,
@@ -276,7 +278,7 @@ impl Validator {
&exit,
);
datapoint_info!("validator-new");
datapoint_info!("validator-new", ("id", id.to_string(), String));
Self {
id,
gossip_service,
@@ -306,43 +308,57 @@ fn get_bank_forks(
genesis_block: &GenesisBlock,
blocktree: &Blocktree,
account_paths: Option<String>,
snapshot_path: Option<String>,
snapshot_config: Option<SnapshotConfig>,
verify_ledger: bool,
) -> (BankForks, Vec<BankForksInfo>, LeaderScheduleCache) {
if snapshot_path.is_some() {
let bank_forks =
BankForks::load_from_snapshot(&genesis_block, account_paths.clone(), &snapshot_path);
match bank_forks {
Ok(v) => {
let bank = &v.working_bank();
let fork_info = BankForksInfo {
bank_slot: bank.slot(),
entry_height: bank.tick_height(),
};
return (v, vec![fork_info], LeaderScheduleCache::new_from_bank(bank));
let (mut bank_forks, bank_forks_info, leader_schedule_cache) = {
let mut result = None;
if snapshot_config.is_some() {
let bank_forks = BankForks::load_from_snapshot(
&genesis_block,
account_paths.clone(),
snapshot_config.as_ref().unwrap(),
);
match bank_forks {
Ok(v) => {
let bank = &v.working_bank();
let fork_info = BankForksInfo {
bank_slot: bank.slot(),
entry_height: bank.tick_height(),
};
result = Some((v, vec![fork_info], LeaderScheduleCache::new_from_bank(bank)));
}
Err(_) => warn!("Failed to load from snapshot, fallback to load from ledger"),
}
Err(_) => warn!("Failed to load from snapshot, fallback to load from ledger"),
}
// If loading from a snapshot failed or the snapshot didn't exist
if result.is_none() {
result = Some(
blocktree_processor::process_blocktree(
&genesis_block,
&blocktree,
account_paths,
verify_ledger,
)
.expect("process_blocktree failed"),
);
}
result.unwrap()
};
if snapshot_config.is_some() {
bank_forks.set_snapshot_config(snapshot_config.unwrap());
}
let (mut bank_forks, bank_forks_info, leader_schedule_cache) =
blocktree_processor::process_blocktree(
&genesis_block,
&blocktree,
account_paths,
verify_ledger,
)
.expect("process_blocktree failed");
if snapshot_path.is_some() {
bank_forks.set_snapshot_config(snapshot_path);
let _ = bank_forks.add_snapshot(0, 0);
}
(bank_forks, bank_forks_info, leader_schedule_cache)
}
pub fn new_banks_from_blocktree(
blocktree_path: &str,
blocktree_path: &Path,
account_paths: Option<String>,
snapshot_path: Option<String>,
snapshot_config: Option<SnapshotConfig>,
verify_ledger: bool,
) -> (
BankForks,
@@ -364,7 +380,7 @@ pub fn new_banks_from_blocktree(
&genesis_block,
&blocktree,
account_paths,
snapshot_path,
snapshot_config,
verify_ledger,
);
@@ -401,7 +417,7 @@ impl Service for Validator {
}
}
pub fn new_validator_for_tests() -> (Validator, ContactInfo, Keypair, String) {
pub fn new_validator_for_tests() -> (Validator, ContactInfo, Keypair, PathBuf) {
use crate::blocktree::create_new_tmp_ledger;
use crate::genesis_utils::{create_genesis_block_with_leader, GenesisBlockInfo};

View File

@@ -456,6 +456,7 @@ fn test_star_network_push_star_200() {
let mut network = star_network_create(200);
network_simulator(&mut network, 0.9);
}
#[ignore]
#[test]
fn test_star_network_push_rstar_200() {
let mut network = rstar_network_create(200);

View File

@@ -79,10 +79,8 @@ fn test_replicator_startup_1_node() {
run_replicator_startup_basic(1, 1);
}
#[allow(unused_attributes)]
#[test]
#[serial]
#[ignore]
fn test_replicator_startup_2_nodes() {
run_replicator_startup_basic(2, 1);
}
@@ -95,7 +93,7 @@ fn test_replicator_startup_leader_hang() {
solana_logger::setup();
info!("starting replicator test");
let leader_ledger_path = "replicator_test_leader_ledger";
let leader_ledger_path = std::path::PathBuf::from("replicator_test_leader_ledger");
let (genesis_block, _mint_keypair) = create_genesis_block(10_000);
let (replicator_ledger_path, _blockhash) = create_new_tmp_ledger!(&genesis_block);

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-drone"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana Drone"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -17,12 +17,12 @@ bincode = "1.1.4"
byteorder = "1.3.2"
bytes = "0.4"
clap = "2.33"
log = "0.4.7"
serde = "1.0.97"
serde_derive = "1.0.97"
solana-logger = { path = "../logger", version = "0.17.2" }
solana-metrics = { path = "../metrics", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
log = "0.4.8"
serde = "1.0.98"
serde_derive = "1.0.98"
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-metrics = { path = "../metrics", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
tokio = "0.1"
tokio-codec = "0.1"

View File

@@ -8,7 +8,7 @@ use std::sync::{Arc, Mutex};
use std::thread;
fn main() -> Result<(), Box<error::Error>> {
solana_logger::setup();
solana_logger::setup_with_filter("solana=info");
solana_metrics::set_panic_hook("drone");
let matches = App::new(crate_name!())
.about(crate_description!())

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-genesis"
description = "Blockchain, Rebuilt for Scale"
version = "0.17.2"
version = "0.18.0-pre0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -11,16 +11,16 @@ homepage = "https://solana.com/"
[dependencies]
bincode = "1.1.4"
clap = "2.33.0"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
serde_json = "1.0.40"
serde_yaml = "0.8.9"
solana = { path = "../core", version = "0.17.2" }
solana-genesis-programs = { path = "../genesis_programs", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-stake-api = { path = "../programs/stake_api", version = "0.17.2" }
solana-storage-api = { path = "../programs/storage_api", version = "0.17.2" }
solana-vote-api = { path = "../programs/vote_api", version = "0.17.2" }
solana = { path = "../core", version = "0.18.0-pre0" }
solana-genesis-programs = { path = "../genesis_programs", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
solana-stake-api = { path = "../programs/stake_api", version = "0.18.0-pre0" }
solana-storage-api = { path = "../programs/storage_api", version = "0.18.0-pre0" }
solana-vote-api = { path = "../programs/vote_api", version = "0.18.0-pre0" }
[dev-dependencies]
hashbrown = "0.3.0"

View File

@@ -18,6 +18,7 @@ use std::collections::HashMap;
use std::error;
use std::fs::File;
use std::io;
use std::path::PathBuf;
use std::str::FromStr;
use std::time::{Duration, Instant};
@@ -232,7 +233,7 @@ fn main() -> Result<(), Box<dyn error::Error>> {
let bootstrap_storage_keypair_file =
matches.value_of("bootstrap_storage_keypair_file").unwrap();
let mint_keypair_file = matches.value_of("mint_keypair_file").unwrap();
let ledger_path = matches.value_of("ledger_path").unwrap();
let ledger_path = PathBuf::from(matches.value_of("ledger_path").unwrap());
let lamports = value_t_or_exit!(matches, "lamports", u64);
let bootstrap_leader_lamports = value_t_or_exit!(matches, "bootstrap_leader_lamports", u64);
let bootstrap_leader_stake_lamports =
@@ -334,7 +335,7 @@ fn main() -> Result<(), Box<dyn error::Error>> {
builder = solana_storage_api::rewards_pools::genesis(builder);
builder = solana_stake_api::rewards_pools::genesis(builder);
create_new_ledger(ledger_path, &builder.build())?;
create_new_ledger(&ledger_path, &builder.build())?;
Ok(())
}

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-genesis-programs"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana genesis programs"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -10,25 +10,25 @@ edition = "2018"
[dependencies]
hashbrown = "0.2.0"
solana-bpf-loader-api = { path = "../programs/bpf_loader_api", version = "0.17.2" }
solana-bpf-loader-program = { path = "../programs/bpf_loader_program", version = "0.17.2" }
solana-budget-api= { path = "../programs/budget_api", version = "0.17.0" }
solana-budget-program = { path = "../programs/budget_program", version = "0.17.2" }
solana-config-api = { path = "../programs/config_api", version = "0.17.2" }
solana-config-program = { path = "../programs/config_program", version = "0.17.2" }
solana-exchange-api = { path = "../programs/exchange_api", version = "0.17.2" }
solana-exchange-program = { path = "../programs/exchange_program", version = "0.17.2" }
solana-move-loader-program = { path = "../programs/move_loader_program", version = "0.17.2" }
solana-move-loader-api = { path = "../programs/move_loader_api", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-stake-api = { path = "../programs/stake_api", version = "0.17.2" }
solana-stake-program = { path = "../programs/stake_program", version = "0.17.2" }
solana-storage-api = { path = "../programs/storage_api", version = "0.17.2" }
solana-storage-program = { path = "../programs/storage_program", version = "0.17.2" }
solana-token-api = { path = "../programs/token_api", version = "0.17.2" }
solana-token-program = { path = "../programs/token_program", version = "0.17.2" }
solana-vote-api = { path = "../programs/vote_api", version = "0.17.2" }
solana-vote-program = { path = "../programs/vote_program", version = "0.17.2" }
solana-bpf-loader-api = { path = "../programs/bpf_loader_api", version = "0.18.0-pre0" }
solana-bpf-loader-program = { path = "../programs/bpf_loader_program", version = "0.18.0-pre0" }
solana-budget-api= { path = "../programs/budget_api", version = "0.18.0-pre0" }
solana-budget-program = { path = "../programs/budget_program", version = "0.18.0-pre0" }
solana-config-api = { path = "../programs/config_api", version = "0.18.0-pre0" }
solana-config-program = { path = "../programs/config_program", version = "0.18.0-pre0" }
solana-exchange-api = { path = "../programs/exchange_api", version = "0.18.0-pre0" }
solana-exchange-program = { path = "../programs/exchange_program", version = "0.18.0-pre0" }
solana-move-loader-program = { path = "../programs/move_loader_program", version = "0.18.0-pre0" }
solana-move-loader-api = { path = "../programs/move_loader_api", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
solana-stake-api = { path = "../programs/stake_api", version = "0.18.0-pre0" }
solana-stake-program = { path = "../programs/stake_program", version = "0.18.0-pre0" }
solana-storage-api = { path = "../programs/storage_api", version = "0.18.0-pre0" }
solana-storage-program = { path = "../programs/storage_program", version = "0.18.0-pre0" }
solana-token-api = { path = "../programs/token_api", version = "0.18.0-pre0" }
solana-token-program = { path = "../programs/token_program", version = "0.18.0-pre0" }
solana-vote-api = { path = "../programs/vote_api", version = "0.18.0-pre0" }
solana-vote-program = { path = "../programs/vote_program", version = "0.18.0-pre0" }
[lib]
crate-type = ["lib"]

View File

@@ -3,18 +3,18 @@ authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-gossip"
description = "Blockchain, Rebuilt for Scale"
version = "0.17.2"
version = "0.18.0-pre0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
[dependencies]
clap = "2.33.0"
env_logger = "0.6.2"
solana = { path = "../core", version = "0.17.2" }
solana-client = { path = "../client", version = "0.17.2" }
solana-netutil = { path = "../netutil", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana = { path = "../core", version = "0.18.0-pre0" }
solana-client = { path = "../client", version = "0.18.0-pre0" }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-netutil = { path = "../netutil", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
[features]
cuda = []

View File

@@ -20,7 +20,7 @@ fn pubkey_validator(pubkey: String) -> Result<(), String> {
}
fn main() -> Result<(), Box<dyn error::Error>> {
env_logger::Builder::from_env(env_logger::Env::new().default_filter_or("solana=info")).init();
solana_logger::setup_with_filter("solana=info");
let mut entrypoint_addr = SocketAddr::from(([127, 0, 0, 1], 8001));
let entrypoint_string = entrypoint_addr.to_string();

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-install"
description = "The solana cluster software installer"
version = "0.17.2"
version = "0.18.0-pre0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -21,21 +21,21 @@ chrono = { version = "0.4.7", features = ["serde"] }
clap = { version = "2.33.0" }
console = "0.7.7"
ctrlc = { version = "3.1.3", features = ["termination"] }
dirs = "2.0.1"
dirs = "2.0.2"
indicatif = "0.11.0"
lazy_static = "1.3.0"
log = "0.4.7"
log = "0.4.8"
nix = "0.14.1"
reqwest = "0.9.19"
semver = "0.9.0"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
serde_yaml = "0.8.9"
sha2 = "0.8.0"
solana-client = { path = "../client", version = "0.17.2" }
solana-config-api = { path = "../programs/config_api", version = "0.17.2" }
solana-logger = { path = "../logger", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-client = { path = "../client", version = "0.18.0-pre0" }
solana-config-api = { path = "../programs/config_api", version = "0.18.0-pre0" }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
tar = "0.4.26"
tempdir = "0.3.7"
url = "2.0.0"

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-keygen"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana key generation utility"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -14,8 +14,8 @@ cuda = []
[dependencies]
clap = "2.33"
dirs = "2.0.1"
solana-sdk = { path = "../sdk", version = "0.17.2" }
dirs = "2.0.2"
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
[[bin]]
name = "solana-keygen"

View File

@@ -1,7 +1,7 @@
[package]
name = "solana-kvstore"
description = "Embedded Key-Value store for solana"
version = "0.17.2"
version = "0.18.0-pre0"
homepage = "https://solana.com/"
repository = "https://github.com/solana-labs/solana"
authors = ["Solana Maintainers <maintainers@solana.com>"]
@@ -15,8 +15,8 @@ chrono = "0.4.7"
crc = "1.8.1"
memmap = "0.7.0"
rand = "0.6.5"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
[dev-dependencies]
tempfile = "3.1.0"

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.com>"]
edition = "2018"
name = "solana-ledger-tool"
description = "Blockchain, Rebuilt for Scale"
version = "0.17.2"
version = "0.18.0-pre0"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -11,14 +11,14 @@ homepage = "https://solana.com/"
[dependencies]
bincode = "1.1.4"
clap = "2.33.0"
serde = "1.0.97"
serde_derive = "1.0.97"
serde = "1.0.98"
serde_derive = "1.0.98"
serde_json = "1.0.40"
serde_yaml = "0.8.9"
solana = { path = "../core", version = "0.17.2" }
solana-logger = { path = "../logger", version = "0.17.2" }
solana-runtime = { path = "../runtime", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana = { path = "../core", version = "0.18.0-pre0" }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
solana-runtime = { path = "../runtime", version = "0.18.0-pre0" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
[dev-dependencies]
assert_cmd = "0.11"

View File

@@ -5,6 +5,7 @@ use solana_sdk::genesis_block::GenesisBlock;
use std::collections::BTreeMap;
use std::fs::File;
use std::io::{stdout, Write};
use std::path::PathBuf;
use std::process::exit;
use std::str::FromStr;
@@ -119,20 +120,20 @@ fn main() {
))
.get_matches();
let ledger_path = matches.value_of("ledger").unwrap();
let ledger_path = PathBuf::from(matches.value_of("ledger").unwrap());
let genesis_block = GenesisBlock::load(ledger_path).unwrap_or_else(|err| {
let genesis_block = GenesisBlock::load(&ledger_path).unwrap_or_else(|err| {
eprintln!(
"Failed to open ledger genesis_block at {}: {}",
"Failed to open ledger genesis_block at {:?}: {}",
ledger_path, err
);
exit(1);
});
let blocktree = match Blocktree::open(ledger_path) {
let blocktree = match Blocktree::open(&ledger_path) {
Ok(blocktree) => blocktree,
Err(err) => {
eprintln!("Failed to open ledger at {}: {}", ledger_path, err);
eprintln!("Failed to open ledger at {:?}: {}", ledger_path, err);
exit(1);
}
};

View File

@@ -38,6 +38,8 @@ fn nominal() {
let (ledger_path, _blockhash) = create_new_tmp_ledger!(&genesis_block);
let ticks = ticks_per_slot as usize;
let ledger_path = ledger_path.to_str().unwrap();
// Basic validation
let output = run_ledger_tool(&["-l", &ledger_path, "verify"]);
assert!(output.status.success());

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-logger"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana Logger"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"

View File

@@ -6,11 +6,14 @@ use std::sync::Once;
static INIT: Once = Once::new();
/// Setup function that is only run once, even if called multiple times.
pub fn setup() {
pub fn setup_with_filter(filter: &str) {
INIT.call_once(|| {
env_logger::Builder::from_default_env()
env_logger::Builder::from_env(env_logger::Env::new().default_filter_or(filter))
.default_format_timestamp_nanos(true)
.init();
});
}
pub fn setup() {
setup_with_filter("error");
}
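A hedged note on the new helper: the string passed to setup_with_filter is only a default, so an explicit RUST_LOG still takes precedence, and setup() itself now defaults to "error":
fn main() {
    // Logs solana crates at info when RUST_LOG is unset; RUST_LOG overrides this default.
    solana_logger::setup_with_filter("solana=info");
}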

View File

@@ -1,7 +1,7 @@
[package]
name = "solana-measure"
description = "Blockchain, Rebuilt for Scale"
version = "0.17.2"
version = "0.18.0-pre0"
documentation = "https://docs.rs/solana"
homepage = "https://solana.com/"
readme = "../README.md"
@@ -11,4 +11,4 @@ license = "Apache-2.0"
edition = "2018"
[dependencies]
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-merkle-tree"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana Merkle Tree"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -9,7 +9,7 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
[dev-dependencies]
hex = "0.3.2"

View File

@@ -1,6 +1,6 @@
[package]
name = "solana-metrics"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana Metrics"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,9 +12,9 @@ edition = "2018"
env_logger = "0.6.2"
influx_db_client = "0.3.6"
lazy_static = "1.3.0"
log = "0.4.7"
log = "0.4.8"
reqwest = "0.9.19"
solana-sdk = { path = "../sdk", version = "0.17.2" }
solana-sdk = { path = "../sdk", version = "0.18.0-pre0" }
sys-info = "0.5.7"
[dev-dependencies]

View File

@@ -55,7 +55,7 @@ if channel == 'local':
'multi': False,
'name': 'hostid',
'options': [],
'query': 'SELECT DISTINCT(\"host_id\") FROM \"$testnet\".\"autogen\".\"validator-new\" ',
'query': 'SELECT DISTINCT(\"id\") FROM \"$testnet\".\"autogen\".\"validator-new\" ',
'refresh': 2,
'regex': '',
'sort': 1,
@@ -138,7 +138,7 @@ else:
'multi': False,
'name': 'hostid',
'options': [],
'query': 'SELECT DISTINCT(\"host_id\") FROM \"$testnet\".\"autogen\".\"validator-new\" ',
'query': 'SELECT DISTINCT(\"id\") FROM \"$testnet\".\"autogen\".\"validator-new\" ',
'refresh': 2,
'regex': '',
'sort': 1,

View File

@@ -103,7 +103,7 @@
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "last",
"tableColumn": "mode",
"targets": [
{
"groupBy": [
@@ -123,7 +123,7 @@
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"host_id\") FROM \"$testnet\".\"autogen\".\"replay_stage-new_leader\" WHERE $timeFilter \n",
"query": "SELECT MODE(last) FROM ( SELECT last(\"leader\") FROM \"$testnet\".\"autogen\".\"replay_stage-new_leader\" WHERE $timeFilter GROUP BY host_id )\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "table",
@@ -236,7 +236,7 @@
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"count\") FROM \"$testnet\".\"autogen\".\"broadcast_service-num_peers\" WHERE $timeFilter GROUP BY time(1s) \n\n",
"query": "SELECT LAST(median) FROM ( SELECT median(count) FROM \"$testnet\".\"autogen\".\"cluster_info-num_nodes\" WHERE $timeFilter AND count > 0 GROUP BY time(5s) )\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -325,7 +325,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT count(\"count\") AS \"total\" FROM \"$testnet\".\"autogen\".\"vote-native\" WHERE $timeFilter GROUP BY time($__interval) FILL(0)",
"query": "SELECT MEDIAN(\"host_count\") AS \"total\" FROM ( SELECT COUNT(\"count\") as host_count FROM \"$testnet\".\"autogen\".\"vote-native\" WHERE $timeFilter GROUP BY time($__interval), host_id ) GROUP BY time($__interval) fill(0)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -344,43 +344,6 @@
]
],
"tags": []
},
{
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT count(\"count\") AS \" \" FROM \"$testnet\".\"autogen\".\"validator-vote_sent\" WHERE $timeFilter GROUP BY time($__interval) FILL(0)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
}
],
"thresholds": [],
@@ -502,7 +465,7 @@
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT ROUND(MEAN(\"sum\")) FROM ( SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-record_transactions\" WHERE $timeFilter GROUP BY time(1s) )\n\n",
"query": "SELECT ROUND(MEAN(\"sum\")) FROM ( SELECT MEDIAN(tx_count) AS sum FROM (SELECT SUM(\"count\") AS tx_count FROM \"$testnet\".\"autogen\".\"replay_stage-replay_transactions\" WHERE $timeFilter AND count > 0 GROUP BY time(1s), host_id) GROUP BY time(1s) )\n\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -614,7 +577,7 @@
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT MAX(\"sum\") FROM ( SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-record_transactions\" WHERE $timeFilter GROUP BY time(1s) )\n\n",
"query": "SELECT MAX(\"median_sum\") FROM ( SELECT MEDIAN(tx_count) AS median_sum FROM (SELECT SUM(\"count\") AS tx_count FROM \"$testnet\".\"autogen\".\"bank-process_transactions\" WHERE $timeFilter AND count > 0 GROUP BY time(1s), host_id) GROUP BY time(1s) )\n\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -726,7 +689,7 @@
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"transactions\" FROM \"$testnet\".\"autogen\".\"banking_stage-record_transactions\" WHERE $timeFilter \n\n",
"query": "SELECT MEDIAN(tx_count) AS transactions FROM (SELECT SUM(\"count\") AS tx_count FROM \"$testnet\".\"autogen\".\"bank-process_transactions\" WHERE $timeFilter GROUP BY host_id) WHERE tx_count > 0\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -818,7 +781,7 @@
"lineColor": "rgb(31, 120, 193)",
"show": false
},
"tableColumn": "sum",
"tableColumn": "median",
"targets": [
{
"groupBy": [
@@ -838,7 +801,7 @@
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"vote-native\" WHERE $timeFilter \n",
"query": "SELECT MEDIAN(\"vote_count\") FROM ( SELECT sum(\"count\") as vote_count FROM \"$testnet\".\"autogen\".\"vote-native\" WHERE $timeFilter GROUP BY host_id )\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "table",
@@ -1556,6 +1519,44 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "select median(\"tx_count\") as \"transactions\" from (select sum(\"count\") / 2 as \"tx_count\" from \"$testnet\".\"autogen\".\"bank-process_transactions\" where $timeFilter AND count > 0 GROUP BY time(2s), host_id) group by time(2s) fill(0)",
"rawQuery": true,
"refId": "E",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
},
{
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"hide": true,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") / 2 AS \"transactions\" FROM \"$testnet\".\"autogen\".\"banking_stage-record_transactions\" WHERE $timeFilter GROUP BY time(2s) FILL(0)\n",
"rawQuery": true,
"refId": "A",
@@ -1575,6 +1576,44 @@
]
],
"tags": []
},
{
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"hide": true,
"orderByTime": "ASC",
"policy": "default",
"query": "select median(\"tx_count\") as \"transactions\" from (select sum(\"count\") / 2 as \"tx_count\" from \"$testnet\".\"autogen\".\"bank-process_transactions\" where $timeFilter AND count > 0 GROUP BY time(2s), host_id) group by time(2s) fill(0)",
"rawQuery": true,
"refId": "D",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
}
],
"thresholds": [],
@@ -1655,44 +1694,6 @@
"stack": false,
"steppedLine": false,
"targets": [
{
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT mean(\"total_peers\") as \"total peers\" FROM \"$testnet\".\"autogen\".\"vote_stage-peer_count\" WHERE $timeFilter GROUP BY time($__interval) FILL(0)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"count"
],
"type": "field"
},
{
"params": [],
"type": "sum"
}
]
],
"tags": []
},
{
"groupBy": [
{
@@ -1711,45 +1712,7 @@
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"valid_peers\") as \"valid peers\" FROM \"$testnet\".\"autogen\".\"vote_stage-peer_count\" WHERE $timeFilter GROUP BY time($__interval) FILL(0)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": []
},
{
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"hide": false,
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"count\") AS \"peers\" FROM \"$testnet\".\"autogen\".\"broadcast_service-num_peers\" WHERE $timeFilter GROUP BY time(1s) FILL(0)",
"query": "SELECT median(\"count\") AS \"total\" FROM \"$testnet\".\"autogen\".\"cluster_info-num_nodes\" WHERE $timeFilter AND count > 0 GROUP BY time(5s)",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -1876,7 +1839,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT mean(\"duration_ms\") FROM \"$testnet\".\"autogen\".\"validator-confirmation\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time(1s) FILL(0)\n",
"query": "SELECT mean(\"duration_ms\") FROM \"$testnet\".\"autogen\".\"validator-confirmation\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time(1s) FILL(0)\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -4311,7 +4274,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") / 3 FROM \"$testnet\".\"autogen\".\"exchange_processor-trades\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"query": "SELECT sum(\"count\") / 3 FROM \"$testnet\".\"autogen\".\"exchange_processor-trades\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -4348,7 +4311,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") / 3 FROM \"$testnet\".\"autogen\".\"exchange_processor-swaps\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"query": "SELECT sum(\"count\") / 3 FROM \"$testnet\".\"autogen\".\"exchange_processor-swaps\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -4462,7 +4425,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") / 3 AS \"trades\" FROM \"$testnet\".\"autogen\".\"bench-exchange-trades\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"query": "SELECT sum(\"count\") / 3 AS \"trades\" FROM \"$testnet\".\"autogen\".\"bench-exchange-trades\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -4499,7 +4462,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") / 3 AS \"swaps\" FROM \"$testnet\".\"autogen\".\"bench-exchange-swaps\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"query": "SELECT sum(\"count\") / 3 AS \"swaps\" FROM \"$testnet\".\"autogen\".\"bench-exchange-swaps\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -4536,7 +4499,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") / 3 AS \"transfers\" FROM \"$testnet\".\"autogen\".\"bench-exchange-do_tx_transfers\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"query": "SELECT sum(\"count\") / 3 AS \"transfers\" FROM \"$testnet\".\"autogen\".\"bench-exchange-do_tx_transfers\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time(3s)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -4963,7 +4926,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"retransmit\" FROM \"$testnet\".\"autogen\".\"retransmit-stage\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"query": "SELECT sum(\"count\") AS \"retransmit\" FROM \"$testnet\".\"autogen\".\"retransmit-stage\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"rawQuery": true,
"refId": "I",
"resultFormat": "time_series",
@@ -5080,7 +5043,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT sum(\"count\") AS \"sigverify\" FROM \"$testnet\".\"autogen\".\"sigverify_stage-time_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"query": "SELECT sum(\"count\") AS \"sigverify\" FROM \"$testnet\".\"autogen\".\"sigverify_stage-time_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -5117,7 +5080,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"banking\" FROM \"$testnet\".\"autogen\".\"banking_stage-time_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"query": "SELECT sum(\"count\") AS \"banking\" FROM \"$testnet\".\"autogen\".\"banking_stage-time_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -5154,7 +5117,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"request\" FROM \"$testnet\".\"autogen\".\"request_stage-time_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"query": "SELECT sum(\"count\") AS \"request\" FROM \"$testnet\".\"autogen\".\"request_stage-time_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -5191,7 +5154,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"write\" FROM \"$testnet\".\"autogen\".\"ledger_writer_stage-time_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"query": "SELECT sum(\"count\") AS \"write\" FROM \"$testnet\".\"autogen\".\"ledger_writer_stage-time_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"rawQuery": true,
"refId": "D",
"resultFormat": "time_series",
@@ -5228,7 +5191,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"broadcast\" FROM \"$testnet\".\"autogen\".\"broadcast_service-time_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"query": "SELECT sum(\"count\") AS \"broadcast\" FROM \"$testnet\".\"autogen\".\"broadcast_service-time_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)\n",
"rawQuery": true,
"refId": "E",
"resultFormat": "time_series",
@@ -5265,7 +5228,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"banking_stall\" FROM \"$testnet\".\"autogen\".\"banking_stage-stall_time_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"query": "SELECT sum(\"count\") AS \"banking_stall\" FROM \"$testnet\".\"autogen\".\"banking_stage-stall_time_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"rawQuery": true,
"refId": "F",
"resultFormat": "time_series",
@@ -5302,7 +5265,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") AS \"banking_buffered\" FROM \"$testnet\".\"autogen\".\"banking_stage-buffered_packet_time_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"query": "SELECT sum(\"count\") AS \"banking_buffered\" FROM \"$testnet\".\"autogen\".\"banking_stage-buffered_packet_time_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"rawQuery": true,
"refId": "G",
"resultFormat": "time_series",
@@ -5426,7 +5389,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT mean(\"count\") FROM \"$testnet\".\"autogen\".\"bank-forks_set_root_ms\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT mean(\"count\") FROM \"$testnet\".\"autogen\".\"bank-forks_set_root_ms\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -5465,7 +5428,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT mean(\"squash_accounts_ms\") AS \"squash_account\" FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT mean(\"squash_accounts_ms\") AS \"squash_account\" FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -5504,7 +5467,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT mean(\"squash_cache_ms\") AS \"squash_cache\" FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT mean(\"squash_cache_ms\") AS \"squash_cache\" FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -5635,7 +5598,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-buffered_packets\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-buffered_packets\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -5672,7 +5635,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-forwarded_packets\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-forwarded_packets\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -5709,7 +5672,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-consumed_buffered_packets\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-consumed_buffered_packets\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -5746,7 +5709,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-rebuffered_packets\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"banking_stage-rebuffered_packets\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "D",
"resultFormat": "time_series",
@@ -5783,7 +5746,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"fetch_stage-honor_forwards\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"fetch_stage-honor_forwards\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "E",
"resultFormat": "time_series",
@@ -5820,7 +5783,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"fetch_stage-discard_forwards\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"fetch_stage-discard_forwards\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "F",
"resultFormat": "time_series",
@@ -5951,7 +5914,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"poh_recorder-tick_lock_contention\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"poh_recorder-tick_lock_contention\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "F",
"resultFormat": "time_series",
@@ -5988,7 +5951,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"poh_recorder-record_lock_contention\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"poh_recorder-record_lock_contention\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -6025,7 +5988,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"poh_recorder-tick_overhead\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT sum(\"count\") FROM \"$testnet\".\"autogen\".\"poh_recorder-tick_overhead\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -6334,7 +6297,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"repair-ix\") AS \"repair-ix\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"repair-ix\") AS \"repair-ix\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -6371,7 +6334,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"repair-slot\") AS \"repair-slot\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"repair-slot\") AS \"repair-slot\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -6490,7 +6453,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"repair-orphan\") AS \"slot\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair_orphan\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"repair-orphan\") AS \"slot\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair_orphan\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -6609,7 +6572,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"repair-highest-slot\") AS \"slot\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair_highest\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"repair-highest-slot\") AS \"slot\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair_highest\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "C",
"resultFormat": "time_series",
@@ -6646,7 +6609,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"repair-highest-ix\") AS \"ix\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair_highest\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"repair-highest-ix\") AS \"ix\" FROM \"$testnet\".\"autogen\".\"cluster_info-repair_highest\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -6916,7 +6879,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT last(\"consumed\") AS \"validator\" FROM \"$testnet\".\"autogen\".\"window-stage\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"query": "SELECT last(\"consumed\") AS \"validator\" FROM \"$testnet\".\"autogen\".\"window-stage\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval) FILL(0)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -7089,7 +7052,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT last(\"latest\") - last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-vote\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"latest\") - last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-vote\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -7126,7 +7089,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"slot\") - last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"slot\") - last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -7249,7 +7212,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-vote\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-vote\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -7286,7 +7249,7 @@
],
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT last(\"root\") FROM \"$testnet\".\"autogen\".\"tower-observed\" WHERE host_id::tag =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
@@ -7408,7 +7371,7 @@
"measurement": "cluster_info-vote-count",
"orderByTime": "ASC",
"policy": "autogen",
"query": "SELECT last(\"count\") FROM \"$testnet\".\"autogen\".\"replay_stage-new_leader\" WHERE host_id =~ /$hostid/ AND $timeFilter GROUP BY time($__interval)",
"query": "SELECT median(\"slot\") FROM \"$testnet\".\"autogen\".\"replay_stage-new_leader\" WHERE $timeFilter GROUP BY time($__interval)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
@@ -7432,7 +7395,7 @@
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Leader Change ($hostid)",
"title": "Leader Change",
"tooltip": {
"shared": true,
"sort": 0,
@@ -8055,7 +8018,7 @@
"multi": false,
"name": "hostid",
"options": [],
"query": "SELECT DISTINCT(\"host_id\") FROM \"$testnet\".\"autogen\".\"counter-fullnode-new\" ",
"query": "SELECT DISTINCT(\"id\") FROM \"$testnet\".\"autogen\".\"validator-new\" ",
"refresh": 2,
"regex": "",
"sort": 1,


@@ -56,11 +56,6 @@ macro_rules! datapoint {
(@point $name:expr) => {
$crate::influxdb::Point::new(&$name)
};
($name:expr) => {
if log_enabled!(log::Level::Debug) {
$crate::submit($crate::datapoint!(@point $name), log::Level::Debug);
}
};
($name:expr, $($fields:tt)+) => {
if log_enabled!(log::Level::Debug) {
$crate::submit($crate::datapoint!(@point $name, $($fields)+), log::Level::Debug);
@@ -245,7 +240,7 @@ impl MetricsAgent {
let extra = influxdb::Point::new("metrics")
.add_timestamp(timing::timestamp() as i64)
.add_field("host_id", influxdb::Value::String(HOST_ID.to_string()))
.add_tag("host_id", influxdb::Value::String(HOST_ID.to_string()))
.add_field(
"points_written",
influxdb::Value::Integer(points_written as i64),
@@ -342,7 +337,6 @@ impl MetricsAgent {
}
pub fn submit(&self, mut point: influxdb::Point, level: log::Level) {
point.add_field("host_id", influxdb::Value::String(HOST_ID.to_string()));
if point.timestamp.is_none() {
point.timestamp = Some(timing::timestamp() as i64);
}
@@ -383,7 +377,8 @@ fn get_singleton_agent() -> Arc<Mutex<MetricsAgent>> {
/// Submits a new point from any thread. Note that points are internally queued
/// and transmitted periodically in batches.
pub fn submit(point: influxdb::Point, level: log::Level) {
pub fn submit(mut point: influxdb::Point, level: log::Level) {
point.add_tag("host_id", influxdb::Value::String(HOST_ID.to_string()));
let agent_mutex = get_singleton_agent();
let agent = agent_mutex.lock().unwrap();
agent.submit(point, level);
@@ -435,6 +430,7 @@ pub fn set_panic_hook(program: &'static str) {
thread::current().name().unwrap_or("?").to_string(),
),
)
.add_tag("host_id", influxdb::Value::String(HOST_ID.to_string()))
// The 'one' field exists to give Kapacitor Alerts a numerical value
// to filter on
.add_field("one", influxdb::Value::Integer(1))
@@ -452,7 +448,6 @@ pub fn set_panic_hook(program: &'static str) {
None => "?".to_string(),
}),
)
.add_field("host_id", influxdb::Value::String(HOST_ID.to_string()))
.to_owned(),
Level::Error,
);
@@ -610,7 +605,6 @@ mod test {
}
};
}
datapoint!("name");
datapoint!("name", ("field name", "test".to_string(), String));
datapoint!("name", ("field name", 12.34_f64, f64));
datapoint!("name", ("field name", true, bool));


@@ -2,6 +2,79 @@
#
# Start the bootstrap leader node
#
set -e
here=$(dirname "$0")
exec "$here"/fullnode.sh --bootstrap-leader "$@"
# shellcheck source=multinode-demo/common.sh
source "$here"/common.sh
if [[ -n $SOLANA_CUDA ]]; then
program=$solana_validator_cuda
else
program=$solana_validator
fi
args=()
while [[ -n $1 ]]; do
if [[ ${1:0:1} = - ]]; then
if [[ $1 = --init-complete-file ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --gossip-port ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --dynamic-port-range ]]; then
args+=("$1" "$2")
shift 2
else
echo "Unknown argument: $1"
$program --help
exit 1
fi
else
echo "Unknown argument: $1"
$program --help
exit 1
fi
done
if [[ -z $CI ]]; then # Skip in CI
# shellcheck source=scripts/tune-system.sh
source "$here"/../scripts/tune-system.sh
fi
setup_secondary_mount
# These keypairs are created by ./setup.sh and included in the genesis block
identity_keypair=$SOLANA_CONFIG_DIR/bootstrap-leader/identity-keypair.json
vote_keypair="$SOLANA_CONFIG_DIR"/bootstrap-leader/vote-keypair.json
storage_keypair=$SOLANA_CONFIG_DIR/bootstrap-leader/storage-keypair.json
ledger_dir="$SOLANA_CONFIG_DIR"/bootstrap-leader
[[ -d "$ledger_dir" ]] || {
echo "$ledger_dir does not exist"
echo
echo "Please run: $here/setup.sh"
exit 1
}
args+=(
--accounts "$SOLANA_CONFIG_DIR"/bootstrap-leader/accounts
--enable-rpc-exit
--identity "$identity_keypair"
--ledger "$ledger_dir"
--rpc-port 8899
--snapshot-path "$SOLANA_CONFIG_DIR"/bootstrap-leader/snapshots
--snapshot-interval-slots 100
--storage-keypair "$storage_keypair"
--voting-keypair "$vote_keypair"
--rpc-drone-address 127.0.0.1:9900
)
default_arg --gossip-port 8001
identity_pubkey=$($solana_keygen pubkey "$identity_keypair")
export SOLANA_METRICS_HOST_ID="$identity_pubkey"
set -x
# shellcheck disable=SC2086 # Don't want to double quote $program
exec $program "${args[@]}"
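bootstrap-leader.sh is now a thin wrapper that execs solana-validator directly with a fixed set of flags instead of delegating to fullnode.sh; only --init-complete-file, --gossip-port, and --dynamic-port-range are passed through. A minimal local bring-up sketch, assuming a prior setup.sh run:
# Create the genesis config, then start the bootstrap leader:
./multinode-demo/setup.sh
./multinode-demo/bootstrap-leader.sh --init-complete-file /tmp/leader-init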


@@ -9,11 +9,11 @@ source "$here"/common.sh
set -e
for i in "$SOLANA_RSYNC_CONFIG_DIR" "$SOLANA_CONFIG_DIR"; do
echo "Cleaning $i"
rm -rvf "${i:?}/" # <-- $i might be a symlink, rm the other side of it first
rm -rvf "$i"
mkdir -p "$i"
done
(
set -x
rm -rf "${SOLANA_CONFIG_DIR:?}/" # <-- $i might be a symlink, rm the other side of it first
rm -rf "$SOLANA_CONFIG_DIR"
mkdir -p "$SOLANA_CONFIG_DIR"
)
setup_secondary_mount


@@ -9,8 +9,6 @@
SOLANA_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. || exit 1; pwd)"
rsync=rsync
if [[ $(uname) != Linux ]]; then
# Protect against unsupported configurations to prevent non-obvious errors
# later. Arguably these should be fatal errors but for now prefer tolerance.
@@ -60,25 +58,20 @@ solana_ledger_tool=$(solana_program ledger-tool)
solana_wallet=$(solana_program wallet)
solana_replicator=$(solana_program replicator)
export RUST_LOG=${RUST_LOG:-solana=info} # if RUST_LOG is unset, default to info
export RUST_BACKTRACE=1
# shellcheck source=scripts/configure-metrics.sh
source "$SOLANA_ROOT"/scripts/configure-metrics.sh
# The directory on the cluster entrypoint that is rsynced by other full nodes
SOLANA_RSYNC_CONFIG_DIR=$SOLANA_ROOT/config
# Configuration that remains local
SOLANA_CONFIG_DIR=$SOLANA_ROOT/config-local
SOLANA_CONFIG_DIR=$SOLANA_ROOT/config
SECONDARY_DISK_MOUNT_POINT=/mnt/extra-disk
setup_secondary_mount() {
# If there is a secondary disk, symlink the config-local dir there
# If there is a secondary disk, symlink the config/ dir there
if [[ -d $SECONDARY_DISK_MOUNT_POINT ]]; then
mkdir -p $SECONDARY_DISK_MOUNT_POINT/config-local
mkdir -p $SECONDARY_DISK_MOUNT_POINT/config
rm -rf "$SOLANA_CONFIG_DIR"
ln -sfT $SECONDARY_DISK_MOUNT_POINT/config-local "$SOLANA_CONFIG_DIR"
ln -sfT $SECONDARY_DISK_MOUNT_POINT/config "$SOLANA_CONFIG_DIR"
fi
}

multinode-demo/delegate-stake.sh Executable file

@@ -0,0 +1,112 @@
#!/usr/bin/env bash
#
# Delegate stake to a validator
#
set -e
here=$(dirname "$0")
# shellcheck source=multinode-demo/common.sh
source "$here"/common.sh
stake_lamports=42 # default number of lamports to assign as stake
url=http://127.0.0.1:8899 # default RPC url
usage() {
if [[ -n $1 ]]; then
echo "$*"
echo
fi
cat <<EOF
usage: $0 [OPTIONS] <lamports to stake ($stake_lamports)>
Add stake to a validator
OPTIONS:
--url RPC_URL - RPC URL to the cluster ($url)
--label LABEL - Append the given label to the configuration files, useful when running
multiple validators in the same workspace
--no-airdrop - Do not attempt to airdrop the stake
--keypair FILE - Keypair to fund the stake from
--force - Override delegate-stake sanity checks
EOF
exit 1
}
common_args=()
label=
airdrops_enabled=1
maybe_force=
positional_args=()
while [[ -n $1 ]]; do
if [[ ${1:0:1} = - ]]; then
if [[ $1 = --label ]]; then
label="-$2"
shift 2
elif [[ $1 = --keypair || $1 = -k ]]; then
common_args+=("$1" "$2")
shift 2
elif [[ $1 = --force ]]; then
maybe_force=--force
shift 1
elif [[ $1 = --url || $1 = -u ]]; then
url=$2
shift 2
elif [[ $1 = --no-airdrop ]]; then
airdrops_enabled=0
shift
elif [[ $1 = -h ]]; then
usage "$@"
else
echo "Unknown argument: $1"
exit 1
fi
else
positional_args+=("$1")
shift
fi
done
common_args+=(--url "$url")
if [[ ${#positional_args[@]} -gt 1 ]]; then
usage "$@"
fi
if [[ -n ${positional_args[0]} ]]; then
stake_lamports=${positional_args[0]}
fi
config_dir="$SOLANA_CONFIG_DIR/validator$label"
vote_keypair_path="$config_dir"/vote-keypair.json
stake_keypair_path=$config_dir/stake-keypair.json
if [[ ! -f $vote_keypair_path ]]; then
echo "Error: $vote_keypair_path not found"
exit 1
fi
if [[ -f $stake_keypair_path ]]; then
# TODO: Add ability to add multiple stakes with this script?
echo "Error: $stake_keypair_path already exists"
exit 1
fi
vote_pubkey=$($solana_keygen pubkey "$vote_keypair_path")
if ((airdrops_enabled)); then
declare fees=100 # TODO: No hardcoded transaction fees, fetch the current cluster fees
$solana_wallet "${common_args[@]}" airdrop $((stake_lamports+fees))
fi
$solana_keygen new -o "$stake_keypair_path"
stake_pubkey=$($solana_keygen pubkey "$stake_keypair_path")
set -x
$solana_wallet "${common_args[@]}" \
show-vote-account "$vote_pubkey"
$solana_wallet "${common_args[@]}" \
delegate-stake $maybe_force "$stake_keypair_path" "$vote_pubkey" "$stake_lamports"
$solana_wallet "${common_args[@]}" show-stake-account "$stake_pubkey"


@@ -1,518 +0,0 @@
#!/usr/bin/env bash
#
# Start a fullnode
#
here=$(dirname "$0")
# shellcheck source=multinode-demo/common.sh
source "$here"/common.sh
# shellcheck source=scripts/oom-score-adj.sh
source "$here"/../scripts/oom-score-adj.sh
fullnode_usage() {
if [[ -n $1 ]]; then
echo "$*"
echo
fi
cat <<EOF
Fullnode Usage:
usage: $0 [--config-dir PATH] [--blockstream PATH] [--init-complete-file FILE] [--label LABEL] [--stake LAMPORTS] [--no-voting] [--rpc-port port] [rsync network path to bootstrap leader configuration] [cluster entry point]
Start a validator
--config-dir PATH - store configuration and data files under this PATH
--blockstream PATH - open blockstream at this unix domain socket location
--init-complete-file FILE - create this file, if it doesn't already exist, once node initialization is complete
--label LABEL - Append the given label to the configuration files, useful when running
multiple fullnodes in the same workspace
--stake LAMPORTS - Number of lamports to stake
--node-lamports LAMPORTS - Number of lamports this node has been funded from the genesis block
--no-voting - start node without vote signer
--rpc-port port - custom RPC port for this node
--no-restart - do not restart the node if it exits
--no-airdrop - The genesis block has an account for the node. Airdrops are not required.
EOF
exit 1
}
find_entrypoint() {
declare entrypoint entrypoint_address
declare shift=0
if [[ -z $1 ]]; then
entrypoint="$SOLANA_ROOT" # Default to local tree for rsync
entrypoint_address=127.0.0.1:8001 # Default to local entrypoint
elif [[ -z $2 ]]; then
entrypoint=$1
entrypoint_address=$entrypoint:8001
shift=1
else
entrypoint=$1
entrypoint_address=$2
shift=2
fi
echo "$entrypoint" "$entrypoint_address" "$shift"
}
rsync_url() { # adds the 'rsync://` prefix to URLs that need it
declare url="$1"
if [[ $url =~ ^.*:.*$ ]]; then
# assume remote-shell transport when colon is present, use $url unmodified
echo "$url"
return 0
fi
if [[ -d $url ]]; then
# assume local directory if $url is a valid directory, use $url unmodified
echo "$url"
return 0
fi
# Default to rsync:// URL
echo "rsync://$url"
}
setup_validator_accounts() {
declare entrypoint_ip=$1
declare node_lamports=$2
declare stake_lamports=$3
if [[ -f $configured_flag ]]; then
echo "Vote and stake accounts have already been configured"
else
if ((airdrops_enabled)); then
echo "Fund the node with enough tokens to fund its Vote, Staking, and Storage accounts"
(
declare fees=100 # TODO: No hardcoded transaction fees, fetch the current cluster fees
set -x
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" \
airdrop $((node_lamports+stake_lamports+fees))
) || return $?
else
echo "current account balance is "
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" balance || return $?
fi
echo "Fund the vote account from the node's identity pubkey"
(
set -x
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" \
create-vote-account "$vote_pubkey" "$identity_pubkey" 1 --commission 127
) || return $?
echo "Delegate the stake account to the node's vote account"
(
set -x
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" \
delegate-stake "$stake_keypair_path" "$vote_pubkey" "$stake_lamports"
) || return $?
echo "Create validator storage account"
(
set -x
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" \
create-validator-storage-account "$identity_pubkey" "$storage_pubkey"
) || return $?
touch "$configured_flag"
fi
echo "Identity account balance:"
(
set -x
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" balance
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" \
show-vote-account "$vote_pubkey"
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" \
show-stake-account "$stake_pubkey"
$solana_wallet --keypair "$identity_keypair_path" --url "http://$entrypoint_ip:8899" \
show-storage-account "$storage_pubkey"
)
return 0
}
ledger_not_setup() {
echo "Error: $*"
echo
echo "Please run: ${here}/setup.sh"
exit 1
}
args=()
node_type=validator
node_lamports=424242424242 # number of lamports to assign the node for transaction fees
stake_lamports=42 # number of lamports to assign as stake
poll_for_new_genesis_block=0
label=
identity_keypair_path=
no_restart=0
airdrops_enabled=1
generate_snapshots=0
reset_ledger=0
config_dir=
positional_args=()
while [[ -n $1 ]]; do
if [[ ${1:0:1} = - ]]; then
if [[ $1 = --label ]]; then
label="-$2"
shift 2
elif [[ $1 = --no-restart ]]; then
no_restart=1
shift
elif [[ $1 = --bootstrap-leader ]]; then
node_type=bootstrap_leader
generate_snapshots=1
shift
elif [[ $1 = --generate-snapshots ]]; then
generate_snapshots=1
shift
elif [[ $1 = --no-snapshot ]]; then
# Ignore
shift
elif [[ $1 = --validator ]]; then
node_type=validator
shift
elif [[ $1 = --poll-for-new-genesis-block ]]; then
poll_for_new_genesis_block=1
shift
elif [[ $1 = --blockstream ]]; then
stake_lamports=0
args+=("$1" "$2")
shift 2
elif [[ $1 = --identity ]]; then
identity_keypair_path=$2
args+=("$1" "$2")
shift 2
elif [[ $1 = --voting-keypair ]]; then
voting_keypair_path=$2
args+=("$1" "$2")
shift 2
elif [[ $1 = --storage-keypair ]]; then
storage_keypair_path=$2
args+=("$1" "$2")
shift 2
elif [[ $1 = --enable-rpc-exit ]]; then
args+=("$1")
shift
elif [[ $1 = --init-complete-file ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --stake ]]; then
stake_lamports="$2"
shift 2
elif [[ $1 = --node-lamports ]]; then
node_lamports="$2"
shift 2
elif [[ $1 = --no-voting ]]; then
args+=("$1")
shift
elif [[ $1 = --skip-ledger-verify ]]; then
args+=("$1")
shift
elif [[ $1 = --no-sigverify ]]; then
args+=("$1")
shift
elif [[ $1 = --limit-ledger-size ]]; then
args+=("$1")
shift
elif [[ $1 = --rpc-port ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --dynamic-port-range ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --gossip-port ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --no-airdrop ]]; then
airdrops_enabled=0
shift
elif [[ $1 = --reset-ledger ]]; then
reset_ledger=1
shift
elif [[ $1 = --config-dir ]]; then
config_dir=$2
shift 2
elif [[ $1 = -h ]]; then
fullnode_usage "$@"
else
echo "Unknown argument: $1"
exit 1
fi
else
positional_args+=("$1")
shift
fi
done
if [[ -n $REQUIRE_CONFIG_DIR ]]; then
if [[ -z $config_dir ]]; then
fullnode_usage "Error: --config-dir not specified"
fi
SOLANA_RSYNC_CONFIG_DIR="$config_dir"/config
SOLANA_CONFIG_DIR="$config_dir"/config-local
fi
setup_secondary_mount
if [[ $node_type = bootstrap_leader ]]; then
if [[ ${#positional_args[@]} -ne 0 ]]; then
fullnode_usage "Unknown argument: ${positional_args[0]}"
fi
[[ -f "$SOLANA_CONFIG_DIR"/bootstrap-leader-keypair.json ]] ||
ledger_not_setup "$SOLANA_CONFIG_DIR/bootstrap-leader-keypair.json not found"
$solana_ledger_tool --ledger "$SOLANA_CONFIG_DIR"/bootstrap-leader-ledger verify
# These four keypairs are created by ./setup.sh and encoded into the genesis
# block
identity_keypair_path=$SOLANA_CONFIG_DIR/bootstrap-leader-keypair.json
voting_keypair_path="$SOLANA_CONFIG_DIR"/bootstrap-leader-vote-keypair.json
stake_keypair_path=$SOLANA_CONFIG_DIR/bootstrap-leader-stake-keypair.json
storage_keypair_path=$SOLANA_CONFIG_DIR/bootstrap-leader-storage-keypair.json
ledger_config_dir="$SOLANA_CONFIG_DIR"/bootstrap-leader-ledger
state_dir="$SOLANA_CONFIG_DIR"/bootstrap-leader-state
configured_flag=$SOLANA_CONFIG_DIR/bootstrap-leader.configured
default_arg --rpc-port 8899
if ((airdrops_enabled)); then
default_arg --rpc-drone-address 127.0.0.1:9900
fi
default_arg --gossip-port 8001
elif [[ $node_type = validator ]]; then
if [[ ${#positional_args[@]} -gt 2 ]]; then
fullnode_usage "$@"
fi
read -r entrypoint entrypoint_address shift < <(find_entrypoint "${positional_args[@]}")
shift "$shift"
mkdir -p "$SOLANA_CONFIG_DIR"
: "${identity_keypair_path:=$SOLANA_CONFIG_DIR/validator-keypair$label.json}"
[[ -r "$identity_keypair_path" ]] || $solana_keygen new -o "$identity_keypair_path"
: "${voting_keypair_path:=$SOLANA_CONFIG_DIR/validator-vote-keypair$label.json}"
[[ -r "$voting_keypair_path" ]] || $solana_keygen new -o "$voting_keypair_path"
: "${storage_keypair_path:=$SOLANA_CONFIG_DIR/validator-storage-keypair$label.json}"
[[ -r "$storage_keypair_path" ]] || $solana_keygen new -o "$storage_keypair_path"
stake_keypair_path=$SOLANA_CONFIG_DIR/validator-stake-keypair$label.json
[[ -r "$stake_keypair_path" ]] || $solana_keygen new -o "$stake_keypair_path"
ledger_config_dir=$SOLANA_CONFIG_DIR/validator-ledger$label
state_dir="$SOLANA_CONFIG_DIR"/validator-state$label
configured_flag=$SOLANA_CONFIG_DIR/validator$label.configured
default_arg --entrypoint "$entrypoint_address"
if ((airdrops_enabled)); then
default_arg --rpc-drone-address "${entrypoint_address%:*}:9900"
fi
rsync_entrypoint_url=$(rsync_url "$entrypoint")
else
echo "Error: Unknown node_type: $node_type"
exit 1
fi
identity_pubkey=$($solana_keygen pubkey "$identity_keypair_path")
export SOLANA_METRICS_HOST_ID="$identity_pubkey"
accounts_config_dir="$state_dir"/accounts
snapshot_config_dir="$state_dir"/snapshots
default_arg --identity "$identity_keypair_path"
default_arg --voting-keypair "$voting_keypair_path"
default_arg --storage-keypair "$storage_keypair_path"
default_arg --ledger "$ledger_config_dir"
default_arg --accounts "$accounts_config_dir"
default_arg --snapshot-path "$snapshot_config_dir"
if [[ -n $SOLANA_CUDA ]]; then
program=$solana_validator_cuda
else
program=$solana_validator
fi
if [[ -z $CI ]]; then # Skip in CI
# shellcheck source=scripts/tune-system.sh
source "$here"/../scripts/tune-system.sh
fi
new_genesis_block() {
(
set -x
$rsync -r "${rsync_entrypoint_url:?}"/config/ledger "$SOLANA_RSYNC_CONFIG_DIR"
) || (
echo "Error: failed to rsync genesis ledger"
)
! diff -q "$SOLANA_RSYNC_CONFIG_DIR"/ledger/genesis.bin "$ledger_config_dir"/genesis.bin >/dev/null 2>&1
}
set -e
PS4="$(basename "$0"): "
pid=
kill_fullnode() {
# Note: do not echo anything from this function to ensure $pid is actually
# killed when stdout/stderr are redirected
set +ex
if [[ -n $pid ]]; then
declare _pid=$pid
pid=
kill "$_pid" || true
wait "$_pid" || true
fi
exit
}
trap 'kill_fullnode' INT TERM ERR
if ((reset_ledger)); then
echo "Resetting ledger..."
(
set -x
rm -rf "$state_dir"
rm -rf "$ledger_config_dir"
)
if [[ -d "$SOLANA_RSYNC_CONFIG_DIR"/ledger/ ]]; then
cp -a "$SOLANA_RSYNC_CONFIG_DIR"/ledger/ "$ledger_config_dir"
fi
fi
while true; do
if [[ $node_type != bootstrap_leader ]] && new_genesis_block; then
# If the genesis block has changed remove the now stale ledger and start all
# over again
(
set -x
rm -rf "$ledger_config_dir" "$state_dir" "$configured_flag"
)
fi
if [[ $node_type = bootstrap_leader && ! -d "$SOLANA_RSYNC_CONFIG_DIR"/ledger ]]; then
ledger_not_setup "$SOLANA_RSYNC_CONFIG_DIR/ledger does not exist"
fi
if [[ ! -d "$ledger_config_dir" ]]; then
if [[ $node_type = validator ]]; then
(
cd "$SOLANA_RSYNC_CONFIG_DIR"
echo "Rsyncing genesis ledger from ${rsync_entrypoint_url:?}..."
SECONDS=
while ! $rsync -Pr "${rsync_entrypoint_url:?}"/config/ledger .; do
echo "Genesis ledger rsync failed"
sleep 5
done
echo "Fetched genesis ledger in $SECONDS seconds"
)
fi
(
set -x
cp -a "$SOLANA_RSYNC_CONFIG_DIR"/ledger/ "$ledger_config_dir"
)
fi
vote_pubkey=$($solana_keygen pubkey "$voting_keypair_path")
stake_pubkey=$($solana_keygen pubkey "$stake_keypair_path")
storage_pubkey=$($solana_keygen pubkey "$storage_keypair_path")
if [[ $node_type = validator ]] && ((stake_lamports)); then
setup_validator_accounts "${entrypoint_address%:*}" \
"$node_lamports" \
"$stake_lamports"
fi
cat <<EOF
======================[ $node_type configuration ]======================
identity pubkey: $identity_pubkey
vote pubkey: $vote_pubkey
stake pubkey: $stake_pubkey
storage pubkey: $storage_pubkey
ledger: $ledger_config_dir
accounts: $accounts_config_dir
snapshots: $snapshot_config_dir
========================================================================
EOF
echo "$PS4$program ${args[*]}"
$program "${args[@]}" &
pid=$!
echo "pid: $pid"
oom_score_adj "$pid" 1000
if ((no_restart)); then
wait "$pid"
exit $?
fi
secs_to_next_genesis_poll=5
secs_to_next_snapshot=30
while true; do
if [[ -z $pid ]] || ! kill -0 "$pid"; then
[[ -z $pid ]] || wait "$pid"
echo "############## $node_type exited, restarting ##############"
break
fi
sleep 1
if ((generate_snapshots && --secs_to_next_snapshot == 0)); then
(
SECONDS=
new_state_dir="$SOLANA_RSYNC_CONFIG_DIR"/new_state
new_state_archive="$SOLANA_RSYNC_CONFIG_DIR"/new_state.tgz
(
rm -rf "$new_state_dir" "$new_state_archive"
mkdir -p "$new_state_dir"
# When saving the state, it's necessary to save the snapshots first,
# followed by the accounts folder. This avoids picking up incomplete
# accounts that are still being updated and have not been frozen yet.
cp -a "$state_dir"/snapshots "$new_state_dir"
cp -a "$state_dir"/accounts "$new_state_dir"
cd "$new_state_dir"
tar zcfS "$new_state_archive" ./*
)
ln -f "$new_state_archive" "$SOLANA_RSYNC_CONFIG_DIR"/state.tgz
rm -rf "$new_state_dir" "$new_state_archive"
ls -hl "$SOLANA_RSYNC_CONFIG_DIR"/state.tgz
echo "Snapshot generated in $SECONDS seconds"
) || (
echo "Error: failed to generate snapshot"
)
secs_to_next_snapshot=60
fi
if ((poll_for_new_genesis_block && --secs_to_next_genesis_poll == 0)); then
echo "Polling for new genesis block..."
if new_genesis_block; then
echo "############## New genesis detected, restarting $node_type ##############"
break
fi
secs_to_next_genesis_poll=60
fi
done
kill_fullnode
# give the cluster time to come back up
(
set -x
sleep 60
)
done


@@ -53,7 +53,7 @@ while [[ -n $1 ]]; do
done
: "${identity_keypair:="$SOLANA_ROOT"/farf/replicator-identity-keypair"$label".json}"
: "${storage_keypair:="$SOLANA_ROOT"/farf/storage-keypair"$label".json}"
: "${storage_keypair:="$SOLANA_ROOT"/farf/replicator-storage-keypair"$label".json}"
ledger="$SOLANA_ROOT"/farf/replicator-ledger"$label"
rpc_url=$("$here"/rpc-url.sh "$entrypoint")


@@ -10,24 +10,29 @@ set -e
# Create genesis ledger
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/mint-keypair.json
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader-keypair.json
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader-vote-keypair.json
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader-stake-keypair.json
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader-storage-keypair.json
mkdir "$SOLANA_CONFIG_DIR"/bootstrap-leader
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader/identity-keypair.json
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader/vote-keypair.json
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader/stake-keypair.json
$solana_keygen new -o "$SOLANA_CONFIG_DIR"/bootstrap-leader/storage-keypair.json
args=("$@")
default_arg --bootstrap-leader-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader-keypair.json
default_arg --bootstrap-vote-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader-vote-keypair.json
default_arg --bootstrap-stake-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader-stake-keypair.json
default_arg --bootstrap-storage-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader-storage-keypair.json
default_arg --ledger "$SOLANA_RSYNC_CONFIG_DIR"/ledger
default_arg --bootstrap-leader-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader/identity-keypair.json
default_arg --bootstrap-vote-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader/vote-keypair.json
default_arg --bootstrap-stake-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader/stake-keypair.json
default_arg --bootstrap-storage-keypair "$SOLANA_CONFIG_DIR"/bootstrap-leader/storage-keypair.json
default_arg --ledger "$SOLANA_CONFIG_DIR"/bootstrap-leader
default_arg --mint "$SOLANA_CONFIG_DIR"/mint-keypair.json
default_arg --lamports 100000000000000
default_arg --bootstrap-leader-lamports 424242424242
default_arg --bootstrap-leader-lamports 424242
default_arg --target-lamports-per-signature 42
default_arg --target-signatures-per-slot 42
default_arg --hashes-per-tick auto
$solana_genesis "${args[@]}"
test -d "$SOLANA_RSYNC_CONFIG_DIR"/ledger
cp -a "$SOLANA_RSYNC_CONFIG_DIR"/ledger "$SOLANA_CONFIG_DIR"/bootstrap-leader-ledger
(
cd "$SOLANA_CONFIG_DIR"/bootstrap-leader
set -x
tar zcvfS genesis.tgz genesis.bin rocksdb
)
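setup.sh now places the bootstrap leader's keypairs and ledger under a single config/bootstrap-leader directory and packs genesis.tgz there for validators to fetch over RPC. A quick sanity check of the expected layout, assuming the default SOLANA_CONFIG_DIR of config:
# Keypairs, raw genesis, and the archive served to validators should all exist:
ls config/bootstrap-leader/identity-keypair.json \
   config/bootstrap-leader/vote-keypair.json \
   config/bootstrap-leader/genesis.tgz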


@@ -1,4 +1,406 @@
#!/usr/bin/env bash
#
# Start a validator
#
here=$(dirname "$0")
exec "$here"/fullnode.sh --validator "$@"
# shellcheck source=multinode-demo/common.sh
source "$here"/common.sh
usage() {
if [[ -n $1 ]]; then
echo "$*"
echo
fi
cat <<EOF
usage: $0 [OPTIONS] [cluster entry point hostname]
Start a validator with no stake
OPTIONS:
--ledger PATH - store ledger under this PATH
--blockstream PATH - open blockstream at this unix domain socket location
--init-complete-file FILE - create this file, if it doesn't already exist, once node initialization is complete
--label LABEL - Append the given label to the configuration files, useful when running
multiple validators in the same workspace
--node-lamports LAMPORTS - Number of lamports this node has been funded from the genesis block
--no-voting - start node without vote signer
--rpc-port port - custom RPC port for this node
--no-restart - do not restart the node if it exits
--no-airdrop - The genesis block has an account for the node. Airdrops are not required.
EOF
exit 1
}
args=()
airdrops_enabled=1
node_lamports=424242 # number of lamports to airdrop the node for transaction fees (ignored if airdrops_enabled=0)
poll_for_new_genesis_block=0
label=
identity_keypair_path=
voting_keypair_path=
no_restart=0
# TODO: Enable boot_from_snapshot when snapshots work
#boot_from_snapshot=1
boot_from_snapshot=0
reset_ledger=0
gossip_entrypoint=
ledger_dir=
positional_args=()
while [[ -n $1 ]]; do
if [[ ${1:0:1} = - ]]; then
if [[ $1 = --label ]]; then
label="-$2"
shift 2
elif [[ $1 = --no-restart ]]; then
no_restart=1
shift
elif [[ $1 = --no-snapshot ]]; then
boot_from_snapshot=0
shift
elif [[ $1 = --poll-for-new-genesis-block ]]; then
poll_for_new_genesis_block=1
shift
elif [[ $1 = --blockstream ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --entrypoint ]]; then
gossip_entrypoint=$2
args+=("$1" "$2")
shift 2
elif [[ $1 = --identity ]]; then
identity_keypair_path=$2
args+=("$1" "$2")
shift 2
elif [[ $1 = --voting-keypair ]]; then
voting_keypair_path=$2
args+=("$1" "$2")
shift 2
elif [[ $1 = --storage-keypair ]]; then
storage_keypair_path=$2
args+=("$1" "$2")
shift 2
elif [[ $1 = --enable-rpc-exit ]]; then
args+=("$1")
shift
elif [[ $1 = --init-complete-file ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --node-lamports ]]; then
node_lamports="$2"
shift 2
elif [[ $1 = --no-voting ]]; then
args+=("$1")
shift
elif [[ $1 = --skip-ledger-verify ]]; then
args+=("$1")
shift
elif [[ $1 = --no-sigverify ]]; then
args+=("$1")
shift
elif [[ $1 = --limit-ledger-size ]]; then
args+=("$1")
shift
elif [[ $1 = --rpc-port ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --dynamic-port-range ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --gossip-port ]]; then
args+=("$1" "$2")
shift 2
elif [[ $1 = --no-airdrop ]]; then
airdrops_enabled=0
shift
elif [[ $1 = --reset-ledger ]]; then
reset_ledger=1
shift
elif [[ $1 = --ledger ]]; then
ledger_dir=$2
shift 2
elif [[ $1 = -h ]]; then
usage "$@"
else
echo "Unknown argument: $1"
exit 1
fi
else
positional_args+=("$1")
shift
fi
done
if [[ ${#positional_args[@]} -gt 1 ]]; then
usage "$@"
fi
if [[ -n $REQUIRE_LEDGER_DIR ]]; then
if [[ -z $ledger_dir ]]; then
usage "Error: --ledger not specified"
fi
SOLANA_CONFIG_DIR="$ledger_dir"
fi
if [[ -n $REQUIRE_KEYPAIRS ]]; then
if [[ -z $identity_keypair_path ]]; then
usage "Error: --identity not specified"
fi
if [[ -z $voting_keypair_path ]]; then
usage "Error: --voting-keypair not specified"
fi
fi
if [[ -z "$ledger_dir" ]]; then
ledger_dir="$SOLANA_CONFIG_DIR/validator$label"
fi
mkdir -p "$ledger_dir"
setup_secondary_mount
if [[ -n $gossip_entrypoint ]]; then
# Prefer the --entrypoint argument if supplied...
if [[ ${#positional_args[@]} -gt 0 ]]; then
usage "$@"
fi
else
# ...but also support providing the entrypoint's hostname as the first
# positional argument
entrypoint_hostname=${positional_args[0]}
if [[ -z $entrypoint_hostname ]]; then
gossip_entrypoint=127.0.0.1:8001
else
gossip_entrypoint="$entrypoint_hostname":8001
fi
fi
rpc_url=$("$here"/rpc-url.sh "$gossip_entrypoint")
drone_address="${gossip_entrypoint%:*}":9900
: "${identity_keypair_path:=$ledger_dir/identity-keypair.json}"
: "${voting_keypair_path:=$ledger_dir/vote-keypair.json}"
: "${storage_keypair_path:=$ledger_dir/storage-keypair.json}"
default_arg --entrypoint "$gossip_entrypoint"
if ((airdrops_enabled)); then
default_arg --rpc-drone-address "$drone_address"
fi
accounts_dir="$ledger_dir"/accounts
snapshot_dir="$ledger_dir"/snapshots
default_arg --identity "$identity_keypair_path"
default_arg --voting-keypair "$voting_keypair_path"
default_arg --storage-keypair "$storage_keypair_path"
default_arg --ledger "$ledger_dir"
default_arg --accounts "$accounts_dir"
#default_arg --snapshot-path "$snapshot_dir"
#default_arg --snapshot-interval-slots 100
if [[ -n $SOLANA_CUDA ]]; then
program=$solana_validator_cuda
else
program=$solana_validator
fi
if [[ -z $CI ]]; then # Skip in CI
# shellcheck source=scripts/tune-system.sh
source "$here"/../scripts/tune-system.sh
fi
new_genesis_block() {
if [[ ! -d "$ledger_dir" ]]; then
return
fi
rm -f "$ledger_dir"/new-genesis.tgz
(
set -x
curl -f "$rpc_url"/genesis.tgz -o "$ledger_dir"/new-genesis.tgz
) || {
echo "Error: failed to fetch new genesis ledger"
}
! diff -q "$ledger_dir"/new-genesis.tgz "$ledger_dir"/genesis.tgz >/dev/null 2>&1
}
set -e
PS4="$(basename "$0"): "
pid=
kill_node() {
# Note: do not echo anything from this function to ensure $pid is actually
# killed when stdout/stderr are redirected
set +ex
if [[ -n $pid ]]; then
declare _pid=$pid
pid=
kill "$_pid" || true
wait "$_pid" || true
fi
exit
}
kill_node_and_exit() {
kill_node
exit
}
trap 'kill_node_and_exit' INT TERM ERR
if ((reset_ledger)); then
echo "Resetting ledger..."
(
set -x
rm -rf "$ledger_dir"
)
fi
wallet() {
(
set -x
$solana_wallet --keypair "$identity_keypair_path" --url "$rpc_url" "$@"
)
}
setup_validator_accounts() {
declare node_lamports=$1
if ((airdrops_enabled)); then
echo "Adding $node_lamports to validator identity account:"
(
declare fees=100 # TODO: No hardcoded transaction fees, fetch the current cluster fees
wallet airdrop $((node_lamports+fees))
) || return $?
else
echo "Validator identity account balance:"
wallet balance || return $?
fi
if ! wallet show-vote-account "$vote_pubkey"; then
echo "Creating validator vote account"
wallet create-vote-account "$vote_pubkey" "$identity_pubkey" 1 --commission 127 || return $?
fi
echo "Validator vote account configured"
if ! wallet show-storage-account "$storage_pubkey"; then
echo "Creating validator storage account"
wallet create-validator-storage-account "$identity_pubkey" "$storage_pubkey" || return $?
fi
echo "Validator storage account configured"
return 0
}
while true; do
  if new_genesis_block; then
    # If the genesis block has changed remove the now stale ledger and start all
    # over again
    (
      set -x
      rm -rf "$ledger_dir"
    )
  fi
  if [[ ! -f "$ledger_dir"/.genesis ]]; then
    echo "Fetching ledger from $rpc_url/genesis.tgz..."
    SECONDS=
    mkdir -p "$ledger_dir"
    while ! curl -f "$rpc_url"/genesis.tgz -o "$ledger_dir"/genesis.tgz; do
      echo "Genesis ledger fetch failed"
      sleep 5
    done
    echo "Fetched genesis ledger in $SECONDS seconds"

    (
      set -x
      cd "$ledger_dir"
      tar -zxf genesis.tgz
      touch .genesis
    )

    (
      if ((boot_from_snapshot)); then
        SECONDS=
        echo "Fetching state snapshot $rpc_url/snapshot.tgz..."
        mkdir -p "$snapshot_dir"
        if ! curl -f "$rpc_url"/snapshot.tgz -o "$snapshot_dir"/snapshot.tgz; then
          echo "State snapshot fetch failed"
          rm -f "$snapshot_dir"/snapshot.tgz
          exit 0 # Non-fatal
        fi
        echo "Fetched snapshot in $SECONDS seconds"

        SECONDS=
        (
          set -x
          cd "$snapshot_dir"
          tar -zxf snapshot.tgz
          rm snapshot.tgz
        )
        echo "Extracted snapshot in $SECONDS seconds"
      fi
    )
  fi
[[ -r "$identity_keypair_path" ]] || $solana_keygen new -o "$identity_keypair_path"
[[ -r "$voting_keypair_path" ]] || $solana_keygen new -o "$voting_keypair_path"
[[ -r "$storage_keypair_path" ]] || $solana_keygen new -o "$storage_keypair_path"
vote_pubkey=$($solana_keygen pubkey "$voting_keypair_path")
storage_pubkey=$($solana_keygen pubkey "$storage_keypair_path")
identity_pubkey=$($solana_keygen pubkey "$identity_keypair_path")
export SOLANA_METRICS_HOST_ID="$identity_pubkey"
setup_validator_accounts "$node_lamports"
cat <<EOF
======================[ validator configuration ]======================
identity pubkey: $identity_pubkey
vote pubkey: $vote_pubkey
storage pubkey: $storage_pubkey
ledger: $ledger_dir
accounts: $accounts_dir
snapshots: $snapshot_dir
========================================================================
EOF
echo "$PS4$program ${args[*]}"
$program "${args[@]}" &
pid=$!
echo "pid: $pid"
if ((no_restart)); then
wait "$pid"
exit $?
fi
  secs_to_next_genesis_poll=5
  while true; do
    if [[ -z $pid ]] || ! kill -0 "$pid"; then
      [[ -z $pid ]] || wait "$pid"
      echo "############## validator exited, restarting ##############"
      break
    fi
    sleep 1
    if ((poll_for_new_genesis_block && --secs_to_next_genesis_poll == 0)); then
      echo "Polling for new genesis block..."
      if new_genesis_block; then
        echo "############## New genesis detected, restarting ##############"
        break
      fi
      secs_to_next_genesis_poll=5
    fi
  done
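  # Reaching this point means the validator exited or a new genesis block was
  # detected; tear the node down and go around the outer loop to restart it.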
  kill_node

  # give the cluster time to come back up
  (
    set -x
    sleep 60
  )
done
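Taken together with the flag handling above, a typical invocation of this wrapper might look like the following; the entrypoint host and keypair paths are purely illustrative:

./multinode-demo/validator.sh \
  --entrypoint testnet.example.com:8001 \
  --identity ~/validator-keys/identity-keypair.json \
  --voting-keypair ~/validator-keys/vote-keypair.json \
  --ledger ~/validator-ledger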


@@ -128,8 +128,8 @@ Manage testnet instances
DNS name (useful only when the -a and -P options
are also provided)
--fullnode-additional-disk-size-gb [number]
- Add an additional [number] GB SSD to all fullnodes to store the config-local directory.
If not set, config-local will be written to the boot disk by default.
- Add an additional [number] GB SSD to all fullnodes to store the config directory.
If not set, config will be written to the boot disk by default.
Only supported on GCE.
config-specific options:
-P - Use public network IP addresses (default: $publicNetwork)
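For example, assuming this usage text belongs to the GCE deployment script, an instance set could be provisioned with the extra disk roughly like this (node count and size are illustrative):

./net/gce.sh create -n 3 --fullnode-additional-disk-size-gb 400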
@@ -412,6 +412,7 @@ EOF
declare failOnFailure="$6"
declare arrayName="$7"
# This check should eventually be moved to cloud provider specific script
if [ "$publicIp" = "TERMINATED" ] || [ "$privateIp" = "TERMINATED" ]; then
if $failOnFailure; then
exit 1


@@ -491,8 +491,6 @@ startClient() {
}
sanity() {
declare skipBlockstreamerSanity=$1
$metricsWriteDatapoint "testnet-deploy net-sanity-begin=1"
declare ok=true
@@ -510,7 +508,7 @@ sanity() {
) || ok=false
$ok || exit 1
if [[ -z $skipBlockstreamerSanity && -n $blockstreamer ]]; then
if [[ -n $blockstreamer ]]; then
# If there's a blockstreamer node run a reduced sanity check on it as well
echo "--- Sanity: $blockstreamer"
(
@@ -677,8 +675,7 @@ deploy() {
stopNode "$ipAddress" true
done
fi
sanity skipBlockstreamerSanity # skip sanity on blockstreamer node, it may not
# have caught up to the bootstrap leader yet
sanity
SECONDS=0
for ((i=0; i < "$numClients" && i < "$numClientsRequested"; i++)) do


@@ -8,11 +8,12 @@ echo "$(date) | $0 $*" > client.log
deployMethod="$1"
entrypointIp="$2"
clientToRun="$3"
RUST_LOG="$4"
if [[ -n $4 ]]; then
export RUST_LOG="$4"
fi
benchTpsExtraArgs="$5"
benchExchangeExtraArgs="$6"
clientIndex="$7"
export RUST_LOG=${RUST_LOG:-solana=info} # if RUST_LOG is unset, default to info
missing() {
echo "Error: $1 not specified"


@@ -25,8 +25,9 @@ missing() {
[[ -n $updatePlatform ]] || missing updatePlatform
[[ -f update_manifest_keypair.json ]] || missing update_manifest_keypair.json
RUST_LOG="$2"
export RUST_LOG=${RUST_LOG:-solana=info} # if RUST_LOG is unset, default to info
if [[ -n $2 ]]; then
export RUST_LOG="$2"
fi
source net/common.sh
loadConfigFile
@@ -35,5 +36,5 @@ PATH="$HOME"/.cargo/bin:"$PATH"
set -x
scripts/solana-install-deploy.sh \
--keypair config-local/mint-keypair.json \
--keypair config/mint-keypair.json \
localhost "$releaseChannel" "$updatePlatform"


@@ -8,7 +8,9 @@ deployMethod="$1"
nodeType="$2"
entrypointIp="$3"
numNodes="$4"
RUST_LOG="$5"
if [[ -n $5 ]]; then
export RUST_LOG="$5"
fi
skipSetup="$6"
failOnValidatorBootupFailure="$7"
externalPrimordialAccountsFile="$8"
@@ -23,7 +25,6 @@ benchExchangeExtraArgs="${16}"
genesisOptions="${17}"
extraNodeArgs="${18}"
set +x
export RUST_LOG
# Use a very large stake (relative to the default multinode-demo/ stake of 42)
# for the testnet validators set up by net/. This makes it less likely that
@@ -58,6 +59,7 @@ genesisOptions="$genesisOptions"
airdropsEnabled=$airdropsEnabled
EOF
source scripts/oom-score-adj.sh
source net/common.sh
loadConfigFile
@@ -82,12 +84,6 @@ local|tar|skip)
PATH="$HOME"/.cargo/bin:"$PATH"
export USE_INSTALL=1
# Setup `/var/snap/solana/current` symlink so rsyncing the genesis
# ledger works (reference: `net/scripts/install-rsync.sh`)
sudo rm -rf /var/snap/solana/current
sudo mkdir -p /var/snap/solana
sudo ln -sT /home/solana/solana /var/snap/solana/current
./fetch-perf-libs.sh
# shellcheck source=/dev/null
source ./target/perf-libs/env.sh
@@ -156,29 +152,27 @@ local|tar|skip)
args=(
--bootstrap-leader-stake-lamports "$stake"
)
if [[ -n $internalNodesLamports ]]; then
args+=(--bootstrap-leader-lamports "$internalNodesLamports")
fi
)
if [[ -n $internalNodesLamports ]]; then
args+=(--bootstrap-leader-lamports "$internalNodesLamports")
fi
# shellcheck disable=SC2206 # Do not want to quote $genesisOptions
args+=($genesisOptions)
./multinode-demo/setup.sh "${args[@]}"
fi
args=(
--gossip-port "$entrypointIp":8001
--init-complete-file "$initCompleteFile"
)
if [[ $airdropsEnabled = true ]]; then
./multinode-demo/drone.sh > drone.log 2>&1 &
fi
args=(
--enable-rpc-exit
--gossip-port "$entrypointIp":8001
)
if [[ $airdropsEnabled != true ]]; then
args+=(--no-airdrop)
fi
args+=(--init-complete-file "$initCompleteFile")
# shellcheck disable=SC2206 # Don't want to double quote $extraNodeArgs
args+=($extraNodeArgs)
nohup ./multinode-demo/validator.sh --bootstrap-leader "${args[@]}" > fullnode.log 2>&1 &
nohup ./multinode-demo/bootstrap-leader.sh "${args[@]}" > fullnode.log 2>&1 &
pid=$!
oom_score_adj "$pid" 1000
waitForNodeToInit
;;
validator|blockstreamer)
@@ -197,7 +191,7 @@ local|tar|skip)
fi
args=(
"$entrypointIp":~/solana "$entrypointIp:8001"
--entrypoint "$entrypointIp:8001"
--gossip-port 8001
--rpc-port 8899
)
@@ -205,11 +199,8 @@ local|tar|skip)
args+=(
--blockstream /tmp/solana-blockstream.sock
--no-voting
--stake 0
--generate-snapshots
)
else
args+=(--stake "$stake")
args+=(--enable-rpc-exit)
if [[ -n $internalNodesLamports ]]; then
args+=(--node-lamports "$internalNodesLamports")
@@ -234,7 +225,7 @@ local|tar|skip)
# with it on the blockstreamer node. Typically the blockstreamer node has
# a static IP/DNS name for hosting the blockexplorer web app, and is
# a location that somebody would expect to be able to airdrop from
scp "$entrypointIp":~/solana/config-local/mint-keypair.json config-local/
scp "$entrypointIp":~/solana/config/mint-keypair.json config/
if [[ $airdropsEnabled = true ]]; then
./multinode-demo/drone.sh > drone.log 2>&1 &
fi
@@ -246,7 +237,7 @@ local|tar|skip)
fi
export BLOCKEXPLORER_GEOIP_WHITELIST=$PWD/net/config/geoip.yml
npm install @solana/blockexplorer@1.41.0
npm install @solana/blockexplorer@1
npx solana-blockexplorer > blockexplorer.log 2>&1 &
# Confirm the blockexplorer is accessible
@@ -265,7 +256,25 @@ local|tar|skip)
# shellcheck disable=SC2206 # Don't want to double quote $extraNodeArgs
args+=($extraNodeArgs)
nohup ./multinode-demo/validator.sh "${args[@]}" > fullnode.log 2>&1 &
pid=$!
oom_score_adj "$pid" 1000
waitForNodeToInit
if [[ $skipSetup != true && $nodeType != blockstreamer ]]; then
args=(
--url http://"$entrypointIp":8899
--force
"$stake"
)
if [[ $airdropsEnabled != true ]]; then
args+=(--no-airdrop)
fi
if [[ -f ~/solana/fullnode-identity.json ]]; then
args+=(--keypair ~/solana/fullnode-identity.json)
fi
./multinode-demo/delegate-stake.sh "${args[@]}"
fi
;;
replicator)
if [[ $deployMethod != skip ]]; then
@@ -284,6 +293,8 @@ local|tar|skip)
fi
nohup ./multinode-demo/replicator.sh "${args[@]}" > fullnode.log 2>&1 &
pid=$!
oom_score_adj "$pid" 1000
sleep 1
;;
*)
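The oom_score_adj calls added above come from the newly sourced scripts/oom-score-adj.sh. The helper is assumed to simply write the requested score into procfs so the kernel prefers to reclaim the node process first under memory pressure; a sketch, not the repository's exact implementation:

oom_score_adj() {
  declare pid=$1
  declare score=$2  # -1000 (never kill) .. 1000 (kill first)
  [[ $(uname) = Linux ]] || return 0
  echo "$score" > "/proc/$pid/oom_score_adj" || true
}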


@@ -60,8 +60,9 @@ while [[ $1 = -o ]]; do
esac
done
RUST_LOG="$1"
export RUST_LOG=${RUST_LOG:-solana=info} # if RUST_LOG is unset, default to info
if [[ -n $1 ]]; then
export RUST_LOG="$1"
fi
source net/common.sh
loadConfigFile
@@ -75,15 +76,13 @@ local|tar|skip)
source target/perf-libs/env.sh
fi
entrypointRsyncUrl="$sanityTargetIp:~/solana"
solana_gossip=solana-gossip
solana_install=solana-install
solana_keygen=solana-keygen
solana_ledger_tool=solana-ledger-tool
ledger=config-local/bootstrap-leader-ledger
client_id=config-local/client-id.json
ledger=config/bootstrap-leader
client_id=config/client-id.json
;;
*)
echo "Unknown deployment method: $deployMethod"
@@ -158,9 +157,8 @@ echo "--- $sanityTargetIp: validator sanity"
if $validatorSanity; then
(
set -x -o pipefail
timeout 10s ./multinode-demo/validator-x.sh --no-restart --stake 0 \
"$entrypointRsyncUrl" \
"$sanityTargetIp:8001" 2>&1 | tee validator-sanity.log
timeout 10s ./multinode-demo/validator-x.sh \
--no-restart --identity "$sanityTargetIp:8001" 2>&1 | tee validator-sanity.log
) || {
exitcode=$?
[[ $exitcode -eq 124 ]] || exit $exitcode


@@ -57,25 +57,6 @@
}
]
},
{
"PrefixListIds": [],
"FromPort": 873,
"IpRanges": [
{
"Description": "rsync",
"CidrIp": "0.0.0.0/0"
}
],
"ToPort": 873,
"IpProtocol": "tcp",
"UserIdGroupPairs": [],
"Ipv6Ranges": [
{
"CidrIpv6": "::/0",
"Description": "rsync"
}
]
},
{
"PrefixListIds": [],
"FromPort": 3001,


@@ -36,17 +36,6 @@ __cloud_FindInstances() {
--filter "$filter" \
--format 'value(name,networkInterfaces[0].accessConfigs[0].natIP,networkInterfaces[0].networkIP,status,zone)' \
| grep RUNNING)
while read -r name status zone; do
privateIp=TERMINATED
publicIp=TERMINATED
printf "%-30s | publicIp=%-16s privateIp=%s status=%s zone=%s\n" "$name" "$publicIp" "$privateIp" "$status" "$zone"
instances+=("$name:$publicIp:$privateIp:$zone")
done < <(gcloud compute instances list \
--filter "$filter" \
--format 'value(name,status,zone)' \
| grep TERMINATED)
}
#
@@ -262,9 +251,6 @@ cloud_WaitForInstanceReady() {
# declare instanceZone="$3"
declare timeout="$4"
if [[ $instanceIp = "TERMINATED" ]]; then
return 1
fi
timeout "${timeout}"s bash -c "set -o pipefail; until ping -c 3 $instanceIp | tr - _; do echo .; done"
}
@@ -282,10 +268,6 @@ cloud_FetchFile() {
declare localFile="$4"
declare zone="$5"
if [[ $publicIp = "TERMINATED" ]]; then
return 1
fi
(
set -x
gcloud compute scp --zone "$zone" "$instanceName:$remoteFile" "$localFile"


@@ -8,13 +8,3 @@ set -ex
[[ $USER = root ]] || exit 1
apt-get --assume-yes install rsync
cat > /etc/rsyncd.conf <<-EOF
[config]
path = /var/snap/solana/current/config
hosts allow = *
read only = true
EOF
systemctl enable rsync
systemctl start rsync


@@ -1,6 +1,6 @@
[package]
name = "solana-netutil"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana Network Utilities"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -11,11 +11,11 @@ edition = "2018"
[dependencies]
bincode = "1.1.4"
clap = "2.33.0"
log = "0.4.7"
log = "0.4.8"
nix = "0.14.1"
rand = "0.6.1"
socket2 = "0.3.9"
solana-logger = { path = "../logger", version = "0.17.2" }
solana-logger = { path = "../logger", version = "0.18.0-pre0" }
tokio = "0.1"
[lib]
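This and the remaining Cargo.toml hunks are mechanical version bumps from 0.17.2 to 0.18.0-pre0 across the workspace manifests, plus an occasional dependency pin such as log 0.4.7 to 0.4.8. A bump like this is normally applied in one pass rather than by hand, for example (a sketch, not the project's release tooling):

git grep -l '"0.17.2"' -- 'Cargo.toml' '*/Cargo.toml' \
  | xargs sed -i 's/"0\.17\.2"/"0.18.0-pre0"/g'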


@@ -1,7 +1,7 @@
[package]
name = "solana-bpf-programs"
description = "Blockchain, Rebuilt for Scale"
version = "0.17.2"
version = "0.18.0-pre0"
documentation = "https://docs.rs/solana"
homepage = "https://solana.com/"
readme = "README.md"
@@ -22,10 +22,10 @@ walkdir = "2"
bincode = "1.1.4"
byteorder = "1.3.2"
elf = "0.0.10"
solana-bpf-loader-api = { path = "../bpf_loader_api", version = "0.17.2" }
solana-logger = { path = "../../logger", version = "0.17.2" }
solana-runtime = { path = "../../runtime", version = "0.17.2" }
solana-sdk = { path = "../../sdk", version = "0.17.2" }
solana-bpf-loader-api = { path = "../bpf_loader_api", version = "0.18.0-pre0" }
solana-logger = { path = "../../logger", version = "0.18.0-pre0" }
solana-runtime = { path = "../../runtime", version = "0.18.0-pre0" }
solana-sdk = { path = "../../sdk", version = "0.18.0-pre0" }
solana_rbpf = "=0.1.13"
[[bench]]


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-128bit"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF iter program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,12 +12,12 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.17.2" }
solana-bpf-rust-128bit-dep = { path = "../128bit_dep", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.18.0-pre0" }
solana-bpf-rust-128bit-dep = { path = "../128bit_dep", version = "0.18.0-pre0" }
[dev_dependencies]
solana-sdk-bpf-test = { path = "../../../../sdk/bpf/rust/rust-test", version = "0.17.2" }
solana-sdk-bpf-test = { path = "../../../../sdk/bpf/rust/rust-test", version = "0.18.0-pre0" }
[workspace]
members = []


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-128bit-dep"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF many-args-dep program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,10 +12,10 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
[dev_dependencies]
solana-sdk-bpf-test = { path = "../../../../sdk/bpf/rust/rust-test", version = "0.17.2" }
solana-sdk-bpf-test = { path = "../../../../sdk/bpf/rust/rust-test", version = "0.18.0-pre0" }
[workspace]
members = []


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-alloc"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF alloc program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,8 +12,8 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.18.0-pre0" }
[workspace]
members = []


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-dep-crate"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF dep-crate program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -13,8 +13,8 @@ edition = "2018"
[dependencies]
byteorder = { version = "1", default-features = false }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.18.0-pre0" }
[workspace]
members = []


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-external-spend"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF external spend program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,8 +12,8 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.18.0-pre0" }
[workspace]
members = []


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-iter"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF iter program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,8 +12,8 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.18.0-pre0" }
[workspace]
members = []


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-many-args"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF many-args program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,9 +12,9 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.17.2" }
solana-bpf-rust-many-args-dep = { path = "../many_args_dep", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
solana-sdk-bpf-no-std = { path = "../../../../sdk/bpf/rust/rust-no-std", version = "0.18.0-pre0" }
solana-bpf-rust-many-args-dep = { path = "../many_args_dep", version = "0.18.0-pre0" }
[workspace]
members = []


@@ -3,7 +3,7 @@
[package]
name = "solana-bpf-rust-many-args-dep"
version = "0.17.2"
version = "0.18.0-pre0"
description = "Solana BPF many-args-dep program written in Rust"
authors = ["Solana Maintainers <maintainers@solana.com>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,10 +12,10 @@ homepage = "https://solana.com/"
edition = "2018"
[dependencies]
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.17.2" }
solana-sdk-bpf-utils = { path = "../../../../sdk/bpf/rust/rust-utils", version = "0.18.0-pre0" }
[dev_dependencies]
solana-sdk-bpf-test = { path = "../../../../sdk/bpf/rust/rust-test", version = "0.17.2" }
solana-sdk-bpf-test = { path = "../../../../sdk/bpf/rust/rust-test", version = "0.18.0-pre0" }
[workspace]
members = []

Some files were not shown because too many files have changed in this diff Show More