Compare commits

...

271 Commits

SHA1 Message Date
33ca98ece9 params: release Geth v1.10.5, Exodus Cluster 2021-07-14 11:01:38 +03:00
1fac96c1f9 internal/web3ext: remove unused console APIs (#23208) 2021-07-14 10:57:07 +03:00
b9e6e43722 consensus/clique: implement getSigner API method (#22987)
* clique: implement getSignerForBlock

* consensus/clique: use blockNrOrHash in getSignerForBlock

* consensus/clique: implement getSigner

* consensus/clique: fixed rlp decoding

* consensus/clique: use Author instead of getSigner

* consensus/clique: nit nit nit

* consensus/clique: nit nit nit
2021-07-13 14:40:22 +03:00
c49e065fea internal: get pending and queued transaction by address (#22992)
* core, eth, internal, les, light: get pending and queued transaction by address

* core: tiny nitpick fixes

* light: tiny nitpick

Co-authored-by: mark <mark@amis.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-07-13 13:40:58 +03:00
846badc480 internal/ethapi: fix transaction APIs (#23179)
* internal/ethapi: fix transaction APIs

* internal/ethapi: fix typo

* internal/ethapi: address comments

* internal/ethapi: address comment from Peter
2021-07-13 13:40:01 +03:00
8fe47b0a0d core/state: avoid unnecessary alloc in trie prefetcher (#23198) 2021-07-12 21:34:20 +02:00
58b0420a8a Merge pull request #23183 from karalabe/cht-1.10.5
params: update CHTs for the 1.10.5 release
2021-07-12 11:25:08 +03:00
afd4227df8 params: update CHTs for the 1.10.5 release 2021-07-09 14:27:41 +03:00
9624f92ede Merge pull request #23178 from karalabe/feeapi-fixes
eth/gasprice, internal/ethapi, miner: minor feehistory fixes
2021-07-09 07:47:23 +03:00
dea71556cc eth/gasprice, internal/ethapi, miner: minor feehistory fixes 2021-07-08 21:50:35 +03:00
ff4ff30a68 core, params: define london block at 12965000 (#23176)
* core, params: define london block at 12965000

* core/forkid: fix test
2021-07-08 12:34:56 +03:00
00b922fc5d core/types: go generate (#23177) 2021-07-08 07:53:28 +02:00
7522642393 core/types: remove LogForStorage type (#23173)
The encoding of Log and LogForStorage is exactly the same
now. After tracking it down it seems like #17106 changed the
storage schema of logs to be the same as the consensus
encoding.

Support for the legacy format was dropped in #22852 and if
I'm not wrong there's no reason anymore to have these two
equivalent types.

Since the RLP encoding simply contains the first three fields
of Log, we can also avoid creating a temporary struct for
encoding/decoding, and use the rlp:"-" tag in Log instead.

Note: this is an API change in core/types. We decided it's OK
to make this change because LogForStorage is an implementation
detail of go-ethereum and the type has zero uses outside of
package core/types.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-07-07 19:52:55 +02:00
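A minimal sketch of the pattern described above, under the assumption of a hypothetical exampleLog type (not the real core/types.Log): consensus fields are encoded normally, while derived metadata is excluded with the rlp:"-" tag, so no separate storage type is needed.

    package main

    import (
        "bytes"
        "fmt"

        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/rlp"
    )

    // exampleLog mirrors the idea: the first three fields form the RLP
    // encoding, the rest is derived metadata skipped via the rlp:"-" tag.
    type exampleLog struct {
        Address common.Address
        Topics  []common.Hash
        Data    []byte

        BlockNumber uint64      `rlp:"-"`
        TxHash      common.Hash `rlp:"-"`
    }

    func main() {
        l := exampleLog{Data: []byte{0x01}, BlockNumber: 42}
        var buf bytes.Buffer
        if err := rlp.Encode(&buf, &l); err != nil {
            panic(err)
        }
        var dec exampleLog
        if err := rlp.Decode(&buf, &dec); err != nil {
            panic(err)
        }
        // BlockNumber was never encoded, so it decodes to the zero value.
        fmt.Println(dec.BlockNumber) // 0
    }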
b9d4412715 cmd/devp2p: fixes for eth and discv4 tests (#23155)
This PR fixes a false positive PONG 'to' endpoint mismatch seen in hive tests:

    got {IP:172.17.0.7 UDP:44025 TCP:44025}, want {IP:172.17.0.7 UDP:44025 TCP:0}

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-07-07 17:28:14 +02:00
5441a8fa47 all: remove noop vm config flags (#23111)
* all: rm external interpreter and ewasm config

* core/vm: rm Interpreter interface

* cmd/geth: deprecate interpreter config fields
2021-07-06 22:03:09 +02:00
e13d14e6a3 core/types: sanity check the basefee length inside a header (#23171) 2021-07-06 22:02:38 +02:00
d21a069619 eth, miner: add RPC method to modify miner gaslimit (pre london: ceiling) (#23134) 2021-07-06 10:35:39 +02:00
13bc9c0c6e fuzzing: fix typo in fuzzer definitions (#23169) 2021-07-06 09:48:29 +02:00
5afc82de6e p2p: fix array out of bounds issue (#23165) 2021-07-06 09:33:51 +02:00
bd566977e8 core: fix bad parent hash when jumping to genesis in setHead (#23162) 2021-07-06 10:32:26 +03:00
99169016d2 Merge pull request #23168 from karalabe/puppeth-fix-dashboard
cmd/puppeth: fix dashboard crash caused by updated base image
2021-07-06 09:59:24 +03:00
78c34fdc3c cmd/puppeth: fix dashboard crash caused by updated base image 2021-07-06 09:58:24 +03:00
d081c935d7 Merge pull request #23167 from karalabe/docker-nomake
dockerfile: get rid of make and env, see if that fixes builds
2021-07-06 09:34:54 +03:00
bb0191f22b dockerfile: get rid of make and env, see if that fixes builds 2021-07-06 09:33:31 +03:00
c619562313 Merge pull request #23159 from karalabe/ethstats-fix-fullnode
ethstats: fix full node interface post 1559
2021-07-05 11:18:51 +03:00
6b6d3190cf ethstats: fix full node interface post 1559 2021-07-05 10:49:52 +03:00
3b05318525 cmd/evm, eth/ethconfig: regenerate struct codecs (#23140) 2021-07-02 11:08:53 +02:00
a182c76815 consensus/clique: avoid a copy in clique (#23149)
* consensus/clique:optimize to avoid a copy in clique

* consensus/clique: test for sealhash

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-07-02 09:18:50 +02:00
6ed812db13 les: avoid shutdown hang (#23139) 2021-07-01 14:01:19 +02:00
3212fb6838 go.mod: update UPNP dependency (#23116) 2021-07-01 14:21:54 +03:00
f5f906dd0d eth/tracers: improve tracing performance (#23016)
Improves the performance of debug.traceTransaction
2021-07-01 09:15:04 +02:00
bbbeb7d8ba crypto: gofuzz build directives (#23137) 2021-06-30 23:04:28 +02:00
c131e812ae eth/fetcher, trie: unit test reliability fixes (#23020)
Some tests take quite some time during exit, which I think causes
some AppVeyor failures like this one:

    https://ci.appveyor.com/project/ethereum/go-ethereum/builds/39511210/job/xhom84eg2e4uulq3

One of the things that seem to take time during exit is waiting
(up to 100ms) for the syncbloom to close. This PR changes it to use
a channel, instead of looping with a 100ms wait.

This also includes some unrelated changes improving the reliability of
eth/fetcher tests, which fail a lot because they are time-dependent.
2021-06-30 22:24:17 +02:00
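As a hedged illustration of the shutdown change described above (not the actual syncbloom code), waiters can block on a channel that is closed once, instead of re-checking a flag every 100ms:

    package main

    import (
        "fmt"
        "sync"
    )

    // closer demonstrates the pattern: Wait blocks on a channel that Close
    // closes exactly once, so no polling loop with a 100ms sleep is needed.
    type closer struct {
        quit chan struct{}
        once sync.Once
    }

    func newCloser() *closer { return &closer{quit: make(chan struct{})} }

    // Close signals all current and future waiters immediately.
    func (c *closer) Close() { c.once.Do(func() { close(c.quit) }) }

    // Wait returns as soon as Close has been called.
    func (c *closer) Wait() { <-c.quit }

    func main() {
        c := newCloser()
        go c.Close()
        c.Wait()
        fmt.Println("closed without polling")
    }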
686b2884ee all: removed blockhash from statedb (#23126)
This PR removes the blockhash from the statedb
2021-06-30 15:17:01 +02:00
e7c8693635 internal/ethapi: fix panic in access list creation (#23133)
Fixes test failure in the last commit.
2021-06-30 14:23:20 +02:00
ec88bd0cd0 cmd/geth: dont fail on deprecated toml config fields (#23118) 2021-06-30 12:57:32 +02:00
acdf9238fb ethclient/gethclient: RPC client wrapper for geth-specific API (#22977)
This commit adds the package gethclient which is similar to the ethclient
and implements some geth specific functionality.

Co-authored-by: Edgar Aroutiounian <edgar.factorial@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2021-06-30 11:03:01 +02:00
4fcc93d922 p2p/server: fix method name in comment (#23123) 2021-06-29 12:14:47 +03:00
61f4b5aa89 accounts/abi/bind: fix gas price suggestion with pre EIP-1559 clients (#23102)
This fixes transaction sending in the case where an app using go-ethereum v1.10.4
is talking to a pre-EIP-1559 RPC node. In this case, the eth_maxPriorityFeePerGas
endpoint is not available and we can only rely on eth_gasPrice.
2021-06-29 10:57:29 +02:00
35dbf7a8a3 eth/gasprice: implement feeHistory API (#23033)
* eth/gasprice: implement feeHistory API

* eth/gasprice: factored out resolveBlockRange

* eth/gasprice: add sanity check for missing block

* eth/gasprice: fetch actual gas used from receipts

* miner, eth/gasprice: add PendingBlockAndReceipts

* internal/ethapi: use hexutil.Big

* eth/gasprice: return error when requesting beyond head block

* eth/gasprice: fixed tests and return errors correctly

* eth/gasprice: rename receiver name

* eth/gasprice: return directly if blockCount == 0

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2021-06-28 16:16:32 +02:00
1b5582acf7 core, eth: fix precompile addresses for tracers (#23097)
* core,eth/tracers: make isPrecompiled dependent on HF

* eth/tracers: use keys when constructing chain config struct

* eth/tracers: dont initialize activePrecompiles with random value
2021-06-28 14:13:27 +02:00
dde6f1e92d p2p/enode: fix method doc (#23115)
This is an obvious spelling error

Co-authored-by: liuyaxiong <liuyaxiong@inspur.com>
2021-06-28 10:48:17 +03:00
2d4eff21ca eth/downloader: increase downloader block body allowance (#23074)
This change increases the cache size from 64 MB to 256 MB for block bodies.
Benchmarks have shown this to be one bottleneck when trying to achieve
higher download speeds.

The commit also includes a minor optimization for header inserts in package
core: previously, the presence of headers in the database was checked for
every header before writing it. With the change, if one header fails the
presence check, all subsequent headers are also assumed to be missing.
This is an improvement because in practice, the headers are almost always
missing during sync.
2021-06-25 14:53:22 +02:00
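A rough sketch of the header-insert shortcut mentioned above, with hypothetical hasHeader/writeHeader callbacks standing in for the real database calls:

    package main

    import "fmt"

    type header struct{ Number uint64 }

    // insertHeaders stops probing the database once a single header turns out
    // to be missing: during sync, all later headers are then assumed missing.
    func insertHeaders(headers []header, hasHeader func(header) bool, writeHeader func(header)) {
        checking := true
        for _, h := range headers {
            if checking {
                if hasHeader(h) {
                    continue // already stored, nothing to do
                }
                checking = false // first miss: skip further presence checks
            }
            writeHeader(h)
        }
    }

    func main() {
        stored := map[uint64]bool{1: true, 2: true}
        insertHeaders(
            []header{{1}, {2}, {3}, {4}},
            func(h header) bool { return stored[h.Number] },
            func(h header) { fmt.Println("writing header", h.Number) },
        )
    }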
bca8c03e57 core/state: remove unused methods ReturnGas, GetStorageProofByHash (#23092)
Co-authored-by: lidongwei <lidongwei@huobi.com>
2021-06-25 14:34:09 +02:00
c07918e7d8 eth/gasprice: fix typo in comment (#22998) 2021-06-25 12:48:06 +02:00
0e6961366a cmd/geth: fix IPC probe in les test (#23094)
Previously, the test waited a second and then failed if geth had not
started. This caused the test to fail intermittently. This change checks
whether the IPC is open 10 times over a 5 second period and then fails
if geth is still not available.
2021-06-25 12:40:37 +02:00
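A small sketch of the probing loop described above, assuming a hypothetical IPC path; it retries several times over a few seconds instead of failing after a single one-second wait:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForIPC polls the endpoint a fixed number of times instead of
    // failing after one attempt.
    func waitForIPC(path string, attempts int, interval time.Duration) bool {
        for i := 0; i < attempts; i++ {
            if conn, err := net.Dial("unix", path); err == nil {
                conn.Close()
                return true
            }
            time.Sleep(interval)
        }
        return false
    }

    func main() {
        // Hypothetical endpoint; in the test this would be geth's IPC socket.
        if waitForIPC("/tmp/geth-test.ipc", 10, 500*time.Millisecond) {
            fmt.Println("IPC is up")
        } else {
            fmt.Println("IPC never became available")
        }
    }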
948a600ed5 eth/tracers: convert int/hash values from context into js object (#23108)
* Convert int/hash values from context into js object

* Use js fixed buffer

Co-authored-by: William <william.berman@coinbase.com>
2021-06-25 09:02:15 +03:00
9e23610b0f Merge pull request #23104 from karalabe/tracer-context
eth/tracers: expose contextual infos (block hash, tx hash, tx index)
2021-06-24 13:50:07 +03:00
29905d86ae eth/tracers: expose contextual infos (block hash, tx hash, tx index) 2021-06-24 12:46:26 +03:00
10eb654f27 Merge pull request #23089 from holiman/fix_fuzzers
crypto: fix build directives
2021-06-23 07:34:35 +03:00
4dde0665c8 core: transaction journal should not be executable (#23090) 2021-06-23 07:29:20 +03:00
a750bf8686 crypto: fix build directives 2021-06-22 15:21:11 +02:00
bef78efb49 graphql: fix transaction API (#23052) 2021-06-22 12:13:48 +03:00
ddf10250c7 accounts/abi/bind: replace context.TODO with context.Background (#23088) 2021-06-22 12:06:34 +03:00
fcd7bdc2b7 Merge pull request #23062 from nfeignon/fix-abi-bind-ensure-context
accounts/abi/bind: call ensureContext on every context
2021-06-22 11:47:48 +03:00
1e44c3585f README: Discord server instead of gitter for communication with devs (#23080)
The `README.md` links the Gitter channel for discussions, but the
official docs and even the Gitter channel itself recommend using the
official Discord Server for such discussions.
This PR simply changes the Gitter link and provides Discord invite link.
2021-06-22 11:33:49 +03:00
5228b2a353 Merge pull request #23083 from karalabe/docker-fix-experimental
travis: enable experimental docker for manifest building
2021-06-21 19:44:23 +03:00
e0123026b6 travis: enable experimental docker for manifest building 2021-06-21 19:43:37 +03:00
653a30f4ca Merge pull request #23082 from karalabe/docker-flat-publish
travis, Dockerfile, build: docker build and multi-arch publish combo
2021-06-21 19:32:45 +03:00
0f2347d070 travis, Dockerfile, build: docker build and multi-arch publish combo 2021-06-21 19:17:59 +03:00
da000c8314 Merge pull request #23078 from karalabe/docker-post-publish
travis: move docker steps further to prevent hanging other builders
2021-06-21 13:50:24 +03:00
f915a4bf20 travis: move docker steps further to prevent hanging other builders 2021-06-21 13:01:24 +03:00
732a6a3666 trie: small optimization of delete in fullNode case (#22979)
When deleting in fullNode, and the new child node nn is not nil, there is no need
to check the number of non-nil entries in the node. This is because the fullNode 
must've contained at least two children before deletion, so there must be another
child node other than nn.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-06-20 15:59:00 +02:00
7b6c8363da core: copy CliqueConfig in DeveloperGenesisBlock (#23068)
Copy the CliqueConfig instead of reusing the pointer.
This makes DeveloperGenesisBlock thread safe and prevents it from
changing params.AllCliqueProtocolChanges.Clique.Epoch.
2021-06-20 15:52:04 +02:00
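A compact sketch of the pointer-vs-copy distinction this commit fixes; it uses the real params types, but the surrounding code is purely illustrative:

    package main

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/params"
    )

    func main() {
        shared := params.AllCliqueProtocolChanges.Clique

        // Reusing the pointer would leak local changes into the global value:
        //   cfg := shared
        //   cfg.Epoch = 123 // mutates params.AllCliqueProtocolChanges.Clique

        // Copying the struct keeps the global config intact.
        cfg := *shared
        cfg.Epoch = 123
        fmt.Println(shared.Epoch, cfg.Epoch) // global epoch unchanged
    }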
4695117f2e Merge pull request #23069 from karalabe/docker-multi-arch
travis, build: add support for multi-arch docker images
2021-06-18 15:35:09 +03:00
e9f99d1c91 travis, build: add support for multi-arch docker images 2021-06-18 15:30:00 +03:00
ef946a6c87 tests: fix eip1559 tx on non-eip1559 network (#23054) 2021-06-18 12:34:31 +02:00
58aeab77d2 tests: fix nil pointer panic on failure (#23053) 2021-06-18 12:21:34 +02:00
97ce6dfa6d internal/ethapi: fix typo in comment (#23057) 2021-06-18 12:16:34 +02:00
bbb2b30506 params: fix typo in gas cost comments (#23065) 2021-06-18 12:15:51 +02:00
15fe3050a1 core/types: add DynamicFeeTx to TxData implementation list in docs (#23063) 2021-06-17 19:54:37 +02:00
c63c2d855e accounts/abi/bind: call ensureContext on every context 2021-06-17 14:04:24 +02:00
87a11a87c2 params: begin v1.10.5 release cycle 2021-06-17 12:36:42 +02:00
aa637fd38a params: release go-ethereum v1.10.4 stable 2021-06-17 12:35:17 +02:00
e1f244a6e6 Merge pull request #23061 from karalabe/docker-noarm
travis: don't overwrite amd64 images with arm64
2021-06-17 12:23:16 +03:00
40a11d644c travis: don't overwrite amd64 images with arm64 2021-06-17 12:22:22 +03:00
b28f8c0c43 Merge pull request #23060 from karalabe/travis-docker
travis, build: own docker builder and hub pusher
2021-06-17 12:02:18 +03:00
90ffcfde89 travis, build: own docker builder and hub pusher 2021-06-17 11:04:57 +03:00
a675c89c75 core: readded state processor error tests (#23055) 2021-06-16 16:00:36 +03:00
080b6ebe91 core/vm: evm fix panic (#23047)
* core/vm: evm fix panic

* core/vm/runtime: default to params.initialbasefee
2021-06-16 09:53:27 +03:00
ae315ef7a1 Merge pull request #23050 from karalabe/1559-receipt-rpc
core, graphql, internal: expose effectiveGasPrice in receipts
2021-06-16 09:52:31 +03:00
aa69d36152 core, graphql, internal: expose effectiveGasPrice in receipts 2021-06-16 09:52:06 +03:00
0aadb49c86 Merge pull request #23051 from karalabe/cht-1.10.4
params: bump CHTs for Geth v1.10.4
2021-06-16 09:37:09 +03:00
cdb9fefc48 params: bump CHTs for Geth v1.10.4 2021-06-16 09:14:58 +03:00
7a7abe3de8 accounts/abi/bind: fix bounded contracts and sim backend for 1559 (#23038)
* accounts/abi/bind: fix bounded contracts and sim backend for 1559

* accounts/abi/bind, ethclient: don't rely on chain config for gas prices

* all: enable London for all internal tests

* les: get receipt type info in les tests

* les: fix weird test

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-06-15 13:56:14 +03:00
087ed9c92e params, core/forkid: add london testnet blocks (#23041)
* params: add london testnet blocks

* core/forkid: update fork hashes
2021-06-14 20:35:01 +03:00
7530803065 Merge pull request #23039 from holiman/basefeepergas
core: change baseFee into baseFeePerGas in genesis json
2021-06-14 15:54:20 +03:00
8a4460c47e core: change baseFee into baseFeePerGas in genesis json 2021-06-14 14:04:44 +02:00
1d57f22d58 accounts/abi/bind/backends: add simulated reorgs (#22624)
* accounts/abi/bind/backends: add blockByHashNoLock

Signed-off-by: Oliver Tale-Yazdi <oliver@perun.network>

* accounts/abi/bind/backends: add 'parent' arg to rollback

Signed-off-by: Oliver Tale-Yazdi <oliver@perun.network>

* accounts/abi/bind/backends: add simulated forks

Signed-off-by: Oliver Tale-Yazdi <oliver@perun.network>

* accounts/abi/bind/backends: minor nitpicks

* accounts/abi/bind/backends: don't add defensive panics

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-06-14 08:55:44 +03:00
ccf53daee1 Merge pull request #23013 from holiman/genesis_fix
core: make genesis parse baseFee correctly
2021-06-14 07:52:33 +03:00
eff998effb Merge pull request #23027 from karalabe/1559-call
core, internal: support various eth_call invocations post 1559
2021-06-11 12:27:30 +03:00
a2ea537a6f common: rename unused function with typo (#23025)
This function is not used in the codebase, so it is probably safe to rename it or remove it entirely; but assuming the original creator's logic still applies, a rename is probably the better choice.
2021-06-10 10:53:23 +03:00
1fc0eba50d Merge pull request #23028 from karalabe/1559-rpcgascap
eth/ethconfig: bump the RPC gas cap to 50M, since 1559 exceeds 25M
2021-06-10 10:52:31 +03:00
be1267ced5 eth/ethconfig: bump the RPC gas cap to 50M, since 1559 exceeds 25M 2021-06-10 09:07:03 +03:00
f68a68a313 core, internal: support various eth_call invocations post 1559 2021-06-10 08:02:51 +03:00
7a00378e2b cmd/clef, signer: support for eip-1559 txs in clef (#22966) 2021-06-09 13:48:47 +02:00
c503f98f6d all: rename internal 1559 gas fields, add support for graphql (#23010)
* all: rename internal 1559 gas fields, add support for graphql

* cmd/evm/testdata, core: use public 1559 gas names on API surfaces
2021-06-08 12:05:41 +02:00
f763846e6e core: make genesis parse baseFee correctly 2021-06-08 11:07:27 +02:00
248572ee54 core/rawdb: db inspect move 'config' and 'shutdown' into 'meta data' (#22978)
* core/rawdb: db inspect move 'config' and 'shutdown' into 'meta data'

* gofmt
2021-06-08 11:39:24 +03:00
ddeeb89c03 go.mod: upgrade to fastcache v1.6.0 (#22982) 2021-06-08 10:39:05 +02:00
0e9c7d564d tests: update for London (#22976)
This updates the tests submodule to the London fork tests, and
also updates the test runner to support the new EIP-1559 fields in
test JSON.
2021-06-07 14:37:56 +02:00
08379b5533 trie: remove the duplicate batch-write for 'preimage' (#23001) 2021-06-07 09:11:07 +02:00
92b8f28df3 Merge pull request #22995 from karalabe/enforce-miner-tip
core, eth, miner: enforce configured mining reward post 1559 too
2021-06-04 10:57:22 +03:00
71ff65b188 miner/stress: add stress test for eip 1559 (#22930)
* miner/stress/1559: add 1559 stress tests

* miner/stress: add 1559 stress test
2021-06-04 09:32:35 +02:00
7e915ee379 core, eth, miner: enforce configured mining reward post 1559 too 2021-06-04 10:18:37 +03:00
3094e7f3b8 catalyst: runs every transaction in a snapshot in assembleBlock handler (#7) (#22989)
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Mikhail Kalinin <noblesse.knight@gmail.com>
2021-06-03 17:12:47 +02:00
216ed05c6e cmd/faucet: disable flaky facebook test (#22988) 2021-06-03 14:35:40 +02:00
7760a60794 Merge pull request #22973 from karalabe/the-switch
eth/ethconfig: flip the default from fast to snap sync
2021-06-03 12:05:08 +03:00
5cff9754d7 core, eth, internal, les: RPC methods and fields for EIP 1559 (#22964)
* internal/ethapi: add baseFee to RPCMarshalHeader

* internal/ethapi: add FeeCap, Tip and correct GasPrice to EIP-1559 RPCTransaction results

* core,eth,les,internal: add support for tip estimation in gas price oracle

* internal/ethapi,eth/gasprice: don't suggest tip larger than fee cap

* core/types,internal: use correct eip1559 terminology for json marshalling

* eth, internal/ethapi: fix rebase problems

* internal/ethapi: fix rpc name of basefee

* internal/ethapi: address review concerns

* core, eth, internal, les: simplify gasprice oracle (#25)

* core, eth, internal, les: simplify gasprice oracle

* eth/gasprice: fix typo

* internal/ethapi: minor tweak in tx args

* internal/ethapi: calculate basefee for pending block

* internal/ethapi: fix panic

* internal/ethapi, eth/tracers: simplify txargs ToMessage

* internal/ethapi: remove unused param

* core, eth, internal: fix regressions wrt effective gas price in the evm

* eth/gasprice: drop weird debug println

* internal/jsre/deps: hack in 1559 gas conversions into embedded web3

* internal/jsre/deps: hack baseFee to decimal conversion

* internal/ethapi: init feecap and tipcap for legacy txs too

* eth, graphql, internal, les: fix gas price suggestion on all combos

* internal/jsre/deps: handle decimal tipcap and feecap

* eth, internal: minor review fixes

* graphql, internal: export max fee cap RPC endpoint

* internal/ethapi: fix crash in transaction_args

* internal/ethapi: minor refactor to make the code safer

Co-authored-by: Ryan Schneider <ryanleeschneider@gmail.com>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: gary rong <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-06-02 16:13:10 +03:00
2dee31930c metrics: use golang.org/x/sys/unix to support Solaris (#22584)
Fixes #11113

Co-authored-by: rene <41963722+renaynay@users.noreply.github.com>
2021-06-01 10:50:54 +02:00
2cde472650 core/state: fix typos in test error message (#22962) 2021-05-31 12:43:18 +02:00
9aaa4208a8 eth/ethconfig: flip the default from fast to snap sync 2021-05-31 10:21:48 +03:00
08ea52e77a cmd/geth, core, params: replace baikal with calaveras (#22972)
* cmd/geth, core, params: replace baikal with calaveras

* params: fix genesis hash for Calaveras

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-05-31 10:06:48 +03:00
2d716c4b01 core: add new eip-1559 tx constraints (#22970)
This PR adds the new consensus constraints for EIP-1559 transactions as specified in https://github.com/ethereum/EIPs#3594
2021-05-30 19:37:52 +02:00
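A simplified, hedged sketch of the kind of constraints involved (the real checks in core cover more cases, e.g. value bit lengths and sender balance): the fee cap must be at least the tip cap and at least the block base fee.

    package main

    import (
        "errors"
        "fmt"
        "math/big"
    )

    // checkDynamicFeeTx illustrates two 1559-style constraints; it is not the
    // exact validation code from the commit.
    func checkDynamicFeeTx(feeCap, tipCap, baseFee *big.Int) error {
        if feeCap.Cmp(tipCap) < 0 {
            return errors.New("max fee per gas below max priority fee per gas")
        }
        if feeCap.Cmp(baseFee) < 0 {
            return errors.New("max fee per gas below block base fee")
        }
        return nil
    }

    func main() {
        err := checkDynamicFeeTx(big.NewInt(100), big.NewInt(2), big.NewInt(150))
        fmt.Println(err) // max fee per gas below block base fee
    }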
966ee3ae6d all: EIP-1559 tx pool support (#22898)
This pull request implements an EIP-1559 compatible transaction pool with dual heap eviction ordering.
It is based on #22791
The eviction ordering scheme and the reasoning behind it is described here: https://gist.github.com/zsfelfoldi/9607ad248707a925b701f49787904fd6
2021-05-28 10:28:07 +02:00
ee35ddc8fd cmd/devp2p/internal/ethtest: ignore block announcement in tx test (#22957) 2021-05-27 20:53:33 +02:00
04cb5e2be3 cmd/puppeth: remove outdated mist support (#22940) 2021-05-27 19:45:13 +03:00
427175153c p2p/msgrate: return capacity as integer, clamp to max uint32 (#22943)
* p2p/msgrate: return capacity as integer

* eth/protocols/snap: remove conversions

* p2p/msgrate: add overflow test

* p2p/msgrate: make the capacity overflow test actually overflow

* p2p/msgrate: clamp capacity to max int32

* p2p/msgrate: fix min/max confusion
2021-05-27 19:43:55 +03:00
0703ef62d3 crypto/secp256k1: fix undefined behavior in BitCurve.Add (#22621)
This commit changes the behavior of BitCurve.Add to be more inline
with btcd. It fixes two different bugs:

1) When adding a point at infinity to another point, the other point
   should be returned. While this is undefined behavior, it is better
   to be more inline with the go standard library.
   Thus (0,0) + (a, b) = (a,b)

2) Adding the same point to itself produced the point at infinity.
   This is incorrect, now doubleJacobian is used to correctly calculate it.
   Thus (a,b) + (a,b) == 2* (a,b) and not (0,0) anymore.

The change also adds a differential fuzzer for Add, testing it against btcd.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-05-27 13:30:25 +02:00
d836ad141e cmd/devp2p/internal/ethtest: add block hash announcement test (#22535) 2021-05-27 11:57:49 +02:00
7194c847b6 p2p/rlpx: reduce allocation and syscalls (#22899)
This change significantly improves the performance of RLPx message reads
and writes. In the previous implementation, reading and writing of
message frames performed multiple reads and writes on the underlying
network connection, and allocated a new []byte buffer for every read.

In the new implementation, reads and writes re-use buffers, and perform
much fewer system calls on the underlying connection. This doubles the
theoretically achievable throughput on a single connection, as shown by
the benchmark result:

    name             old speed      new speed       delta
    Throughput-8     70.3MB/s ± 0%  155.4MB/s ± 0%  +121.11%  (p=0.000 n=9+8)

The change also removes support for the legacy, pre-EIP-8 handshake encoding.
As of May 2021, no actively maintained client sends this format.
2021-05-27 10:19:13 +02:00
2e7714f864 cmd/utils: avoid large alloc in --dev mode (#22949)
* cmd/utils: avoid 1Gb alloc in --dev mode

* cmd/geth: avoid 512Mb alloc in genesis query tests
2021-05-27 10:13:35 +02:00
5869789d75 ethstats: fix typo in comment (#22952)
Trivial but helpful to understanding.
2021-05-26 22:33:00 +02:00
c73652da0b core/state/snapshot: fix flaky tests (#22944)
* core/state/snapshot: fix flaky tests

* core/state/snapshot: fix tests
2021-05-26 10:58:09 +03:00
05dab7f6bd internal/ethapi: remove unused vm.Config parameter of DoCall (#22942) 2021-05-26 08:39:41 +02:00
10962b685e ethstats: fix URL parser for '@' or ':' in node name/password (#21640)
Fixes the case (example below) where the value passed
to --ethstats flag would be parsed wrongly because the
node name and/or password value contained the special
characters '@' or ':'

    --ethstats "ETC Labs Metrics @meowsbits":mypass@ws://mordor.dash.fault.dev:3000
2021-05-25 23:22:46 +02:00
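A hedged sketch of the parsing idea: splitting the --ethstats value at the last '@' keeps '@' and ':' characters inside the node name and password intact. The helper below is hypothetical; the actual parser works differently but follows the same principle.

    package main

    import (
        "fmt"
        "strings"
    )

    // splitStatsURL separates "name:pass@host" at the *last* '@', so names or
    // passwords containing '@' or ':' no longer confuse the parser.
    func splitStatsURL(url string) (credentials, host string) {
        i := strings.LastIndex(url, "@")
        if i < 0 {
            return "", url
        }
        return url[:i], url[i+1:]
    }

    func main() {
        creds, host := splitStatsURL("ETC Labs Metrics @meowsbits:mypass@ws://mordor.dash.fault.dev:3000")
        fmt.Println(creds) // ETC Labs Metrics @meowsbits:mypass
        fmt.Println(host)  // ws://mordor.dash.fault.dev:3000
    }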
49bde05a55 cmd/devp2p: refactor eth test suite (#22843)
This PR refactors the eth test suite to make it more readable and
easier to use. Some notable differences:

- A new file helpers.go stores all of the methods used between
  both eth66 and eth65 and below tests, as well as methods shared
  among many test functions.
- suite.go now contains all of the test functions for both eth65
  tests and eth66 tests.
- The utesting.T object doesn't get passed through to other helper methods,
  but is instead only used within the scope of the test function,
  whereas helper methods return errors, so only the test function
  itself can fatal out in the case of an error.
- The full test suite now only takes 13.5 seconds to run.
2021-05-25 23:09:11 +02:00
6c7d6cf886 tests: get test name from testing.T (#22941)
There were 2 TODOs about that fix after Golang 1.8 release.
It's here for 3 years already, so now should be the right time.
2021-05-25 22:47:14 +02:00
750115ff39 p2p/nat: skip TestUPNP in non-CI environments if discover fails (#22877)
Fixes #21476
2021-05-25 22:37:30 +02:00
51b32cc7e4 internal/ethapi: merge CallArgs and SendTxArgs (#22718)
There are two transaction parameter structures defined in
the codebase, although for different purposes. But most of
the parameters are shared. So it's nice to reduce the code
duplication by merging them together.

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-05-25 22:30:21 +02:00
836c647bdd eth: unregister peer only when handler exits (#22908)
This removes the error log message that says 

    Ethereum peer removal failed ... err="peer not registered"

The error happened because removePeer was called multiple
times: once to disconnect the peer, and another time when the
handler exited. With this change, removePeer now has the sole
purpose of disconnecting the peer. Unregistering happens exactly
once, when the handler exits.
2021-05-25 22:20:36 +02:00
4d33de9b49 rlp: optimize big.Int decoding for size <= 32 bytes (#22927)
This change grows the static integer buffer in Stream to 32 bytes,
making it possible to decode 256bit integers without allocating a
temporary buffer.

In the recent commit 088da24, Stream struct size decreased from 120
bytes down to 88 bytes. This commit grows the struct to 112 bytes again,
but the size change will not degrade performance because Stream
instances are internally cached in sync.Pool.

    name             old time/op    new time/op    delta
    DecodeBigInts-8    12.2µs ± 0%     8.6µs ± 4%  -29.58%  (p=0.000 n=9+10)

    name             old speed      new speed      delta
    DecodeBigInts-8   230MB/s ± 0%   326MB/s ± 4%  +42.04%  (p=0.000 n=9+10)
2021-05-25 21:56:25 +02:00
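A hedged sketch of the underlying idea (not the actual rlp.Stream code): a fixed 32-byte scratch buffer held in the decoder lets 256-bit integers be read from the stream without allocating a temporary slice per value.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "math/big"
    )

    // decoder keeps a 32-byte scratch buffer so that reading a <=256-bit
    // integer needs no per-value allocation; only wider integers fall back
    // to an allocated buffer.
    type decoder struct {
        scratch [32]byte
    }

    func (d *decoder) readBigInt(r io.Reader, size int, dst *big.Int) error {
        buf := d.scratch[:]
        if size > len(buf) {
            buf = make([]byte, size) // rare: integer wider than 256 bits
        }
        if _, err := io.ReadFull(r, buf[:size]); err != nil {
            return err
        }
        dst.SetBytes(buf[:size])
        return nil
    }

    func main() {
        var d decoder
        x := new(big.Int)
        _ = d.readBigInt(bytes.NewReader([]byte{0x01, 0x00}), 2, x)
        fmt.Println(x) // 256
    }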
017cf71fbd rlp, tests/fuzzers/bls12381: gofmt (#22937) 2021-05-25 10:14:39 +02:00
93407b14a6 core: make txpool free space calculation more accurate (#22933) 2021-05-24 14:34:38 +02:00
154ca32a8a rlp: optimize byte array handling (#22924)
This change improves the performance of encoding/decoding [N]byte.

    name                     old time/op    new time/op    delta
    DecodeByteArrayStruct-8     336ns ± 0%     246ns ± 0%  -26.98%  (p=0.000 n=9+10)
    EncodeByteArrayStruct-8     225ns ± 1%     148ns ± 1%  -34.12%  (p=0.000 n=10+10)

    name                     old alloc/op   new alloc/op   delta
    DecodeByteArrayStruct-8      120B ± 0%       48B ± 0%  -60.00%  (p=0.000 n=10+10)
    EncodeByteArrayStruct-8     0.00B          0.00B          ~     (all equal)
2021-05-22 15:10:16 +02:00
0d076d92db rlp: use atomic.Value for type cache (#22902)
All encoding/decoding operations read the type cache to find the
writer/decoder function responsible for a type. When analyzing CPU
profiles of geth during sync, I found that the use of sync.RWMutex in
cache lookups appears in the profiles. It seems we are running into
CPU cache contention problems when package rlp is heavily used
on all CPU cores during sync.

This change makes it use atomic.Value + a writer lock instead of
sync.RWMutex. In the common case where the typeinfo entry is present in
the cache, we simply fetch the map and lookup the type.
2021-05-22 13:34:29 +02:00
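A minimal sketch of the lookup scheme described above, assuming a toy cache type: readers load an immutable map through atomic.Value without locking, while writers take a mutex, copy the map, and publish the extended copy.

    package main

    import (
        "fmt"
        "reflect"
        "sync"
        "sync/atomic"
    )

    // typeCache illustrates the approach; it is not the real rlp type cache.
    type typeCache struct {
        cur atomic.Value // holds map[reflect.Type]string
        mu  sync.Mutex   // serializes writers only
    }

    func newTypeCache() *typeCache {
        c := new(typeCache)
        c.cur.Store(make(map[reflect.Type]string))
        return c
    }

    // get is the lock-free common case: just load the map and look up the type.
    func (c *typeCache) get(t reflect.Type) (string, bool) {
        v, ok := c.cur.Load().(map[reflect.Type]string)[t]
        return v, ok
    }

    // put copies the current map, adds the entry and atomically publishes it.
    func (c *typeCache) put(t reflect.Type, info string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        old := c.cur.Load().(map[reflect.Type]string)
        next := make(map[reflect.Type]string, len(old)+1)
        for k, v := range old {
            next[k] = v
        }
        next[t] = info
        c.cur.Store(next)
    }

    func main() {
        c := newTypeCache()
        c.put(reflect.TypeOf(uint64(0)), "uint writer")
        if info, ok := c.get(reflect.TypeOf(uint64(0))); ok {
            fmt.Println(info)
        }
    }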
59f259b058 miner/stress: update stress tests (#22919)
This PR updates the miner stress tests and moves them to standalone
packages, so that they can be run directly.
2021-05-21 20:52:51 +02:00
6bc72783f6 Merge pull request #22921 from karalabe/les-simplify-reqids
les: generate random nums directly, not via strange conversions
2021-05-21 12:51:56 +03:00
835fe06f1d les: generate random nums directly, not via strange conversions 2021-05-21 12:36:04 +03:00
81662fe827 core/rawdb: handle prefix in table.compact method (#22911) 2021-05-21 10:33:59 +02:00
a6c462781f EIP-1559: miner changes (#22896)
* core/types, miner: create TxWithMinerFee wrapper, add EIP-1559 support to TransactionsByMinerFeeAndNonce

miner: set base fee when creating a new header, handle gas limit, log miner fees

* all: rename to NewTransactionsByPriceAndNonce

* core/types, miner: rename to NewTransactionsByPriceAndNonce + EffectiveTip

miner: activate 1559 for testGenerateBlockAndImport tests

* core,miner: revert naming to TransactionsByPriceAndTime

* core/types/transaction: update effective tip calculation logic

* miner: update aleut to london

* core/types/transaction_test: use correct signer for 1559 txs + add back sender check

* miner/worker: calculate gas target from gas limit

* core, miner: fix block  gas limits for 1559

Co-authored-by: Ansgar Dietrichs <adietrichs@gmail.com>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
2021-05-21 09:59:26 +02:00
16bc57438b p2p/dnsdisc: fix crash when iterator closed before first call to Next (#22906) 2021-05-20 09:24:41 +02:00
3e795881ea eth, p2p/msgrate: move peer QoS tracking to its own package and use it for snap (#22876)
This change extracts the peer QoS tracking logic from eth/downloader, moving
it into the new package p2p/msgrate. The job of msgrate.Tracker is determining
suitable timeout values and request sizes per peer.

The snap sync scheduler now uses msgrate.Tracker instead of the hard-coded 15s
timeout. This should make the sync work better on network links with high latency.
2021-05-19 14:09:03 +02:00
b3a1fda650 cmd/utils: expand tilde in --jspath (#22900) 2021-05-18 19:54:10 +02:00
088da24ebf rlp: improve decoder stream implementation (#22858)
This commit makes various cleanup changes to rlp.Stream.

* rlp: shrink Stream struct

This removes a lot of unused padding space in Stream by reordering the
fields. The size of Stream changes from 120 bytes to 88 bytes. Stream
instances are internally cached and reused using sync.Pool, so this does
not improve performance.

* rlp: simplify list stack

The list stack kept track of the size of the current list context as
well as the current offset into it. The size had to be stored in the
stack in order to subtract it from the remaining bytes of any enclosing
list in ListEnd. It seems that this can be implemented in a simpler
way: just subtract the size from the enclosing list context in List instead.
2021-05-18 12:10:27 +02:00
3e6f46caec p2p/discover/v4wire: use optional RLP field for EIP-868 seq (#22842)
This changes the definitions of Ping and Pong, adding an optional field
for the sequence number. This field was previously encoded/decoded using
the "tail" struct tag, but using "optional" is much nicer.
2021-05-18 11:48:41 +02:00
32c1ed8a9c core/forkid: fix off-by-one bug (#22879)
* forkid: added failing test

* forkid: fixed off-by-one bug
2021-05-18 11:37:18 +03:00
b7a91663ab core/asm: fix the bug of "00" prefix number (#22883) 2021-05-18 10:22:58 +02:00
bb9f9ccf4f core/rawdb: wait for background freezing to exit when closing freezer (#22878) 2021-05-18 01:30:01 +02:00
67e7f61af7 core: fix failing tests (#22888)
This PR fixes two errors that regressed when EIP-1559 was merged.
2021-05-18 01:10:28 +02:00
94451c2788 all: implement EIP-1559 (#22837)
This is the initial implementation of EIP-1559 in packages core/types and core.
Mining, RPC, etc. will be added in subsequent commits.

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2021-05-17 15:13:22 +02:00
14bc6e5130 consensus/ethash: change eip3554 from 9.5M to 9.7M (#22870) 2021-05-17 10:49:23 +02:00
597ecb39cc cmd/evm: return json error if unmarshalling from stdin fails (#22871)
* cmd/evm: return json error if unmarshalling from stdin fails

* cmd/evm: make error capitalizations uniform (all lowercase starts)

* cmd/evm: capitalize error sent directly to stderror
2021-05-17 08:52:32 +02:00
addd8824cf cmd/geth, eth, core: snapshot dump + unify with trie dump (#22795)
* cmd/geth, eth, core: snapshot dump + unify with trie dump

* cmd/evm: dump API fixes

* cmd/geth, core, eth: fix some remaining errors

* cmd/evm: dump - add limit, support address startkey, address review concerns

* cmd, core/state, eth: minor polishes, fix snap dump crash, unify format

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-05-12 11:05:39 +03:00
1cca781a02 Merge pull request #22840 from holiman/eip_3554
consensus/ethash: implement EIP-3554 (bomb delay)
2021-05-12 10:19:08 +03:00
a2c456a526 core: ensure a broken trie invariant crashes genesis creation (#22780)
* Ensure state could be created in ToBlock

* Fix rebase errors

* use a panic instead
2021-05-11 18:12:10 +03:00
f34f749e81 Merge pull request #22857 from karalabe/tracer-stack-fix-2
eth/tracers: do the JSON serialization via .js to capture C faults
2021-05-11 17:21:04 +03:00
0524cede37 eth/tracers: do the JSON serialization via .js to capture C faults 2021-05-11 16:23:54 +03:00
ca98080798 cmd/geth, eth/gasprice: add configurable threshold to gas price oracle (#22752)
This adds a cmd line parameter `--gpo.ignoreprice`, to make the gas price oracle ignore transactions below the given threshold.
2021-05-11 11:25:51 +02:00
643fd0efc6 core/types: remove support for legacy receipt/log storage encoding (#22852)
* core/types: remove support for legacy receipt storage encoding

* core/types: remove support for legacy log storage encoding
2021-05-11 11:43:35 +03:00
e536bb52ff eth/protocols/snap: adapt to uint256 API changes (#22851) 2021-05-10 13:35:07 +02:00
c0e201b690 eth/protocols/eth, les: avoid Raw() when decoding HashOrNumber (#22841)
Getting the raw value is not necessary to decode this type, and
decoding it directly from the stream is faster.
2021-05-10 12:38:54 +02:00
ae5fcdc67f go.mod: upgrade to github.com/holiman/uint256 v1.2.0 (#22745) 2021-05-10 12:29:33 +02:00
f19a679b09 cmd/geth: remove reference to monitor command (#22844)
'geth monitor' subcommand is no longer supported.
2021-05-10 12:19:32 +02:00
7ab7acfded build: upgrade -dlgo version to Go 1.16.4 (#22848) 2021-05-10 12:18:42 +02:00
700df1442d rlp: add support for optional struct fields (#22832)
This adds support for a new struct tag "optional". Using this tag, structs used
for RLP encoding/decoding can be extended in a backwards-compatible way,
by adding new fields at the end.
2021-05-07 14:37:13 +02:00
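A short sketch of the "optional" tag in use, with hypothetical pingV1/pingV2 types: the newer struct adds a trailing optional field, and decoding an old encoding leaves that field at its zero value, giving backwards compatibility.

    package main

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/rlp"
    )

    // pingV1 is the "old" wire format; pingV2 extends it with a trailing field.
    type pingV1 struct {
        Version uint
    }

    type pingV2 struct {
        Version uint
        Seq     uint64 `rlp:"optional"`
    }

    func main() {
        old, _ := rlp.EncodeToBytes(&pingV1{Version: 4})

        var p pingV2
        if err := rlp.DecodeBytes(old, &p); err != nil {
            panic(err)
        }
        // The missing optional field simply stays zero.
        fmt.Println(p.Version, p.Seq) // 4 0
    }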
17b1be2661 consensus/ethash: implement EIP-3554 (bomb delay) 2021-05-07 14:04:54 +02:00
8a070e8f7d consensus/clique: add some missing checks (#22836) 2021-05-07 10:31:01 +02:00
a5669ae292 core, params: implement EIP-3529 (#22733)
* core, params: implement EIP-3529

* core/vm: add london instructionset

* core/vm: add method doc for EIP enabler

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-05-07 09:25:32 +03:00
e77ef8fa8a Merge pull request #22809 from holiman/alt_3541
core: implement EIP-3541
2021-05-07 08:19:21 +03:00
e69130d9f1 core/vm, params: implement EIP 3541 2021-05-06 11:28:46 +02:00
cc606be74c all: define London+baikal, undefine yolov3, add london override flag (#22822)
* all: define London+baikal, undefine yolov3, add london override flag

* cmd, core, params: add baikal genesis definition
2021-05-06 12:07:42 +03:00
df20b3b982 core/vm: avoid duplicate log in json logger (#22825) 2021-05-06 10:46:27 +02:00
37b5595456 params: begin v1.10.4 release cycle 2021-05-05 13:21:13 +02:00
991384a7f6 params: go-ethereum v1.10.3 stable 2021-05-05 13:20:06 +02:00
0f3a1e7f9b cmd/devp2p/internal/ethtest: send simultaneous requests on one connection (#22801)
This changes the SimultaneousRequests test to send the requests from the same
connection, as it doesn't really make sense to test whether a node can respond
to two requests with different request IDs from separate connections.
2021-05-05 12:27:27 +02:00
41671d449f build: fix windows installer build for NSIS v3.05 (#22821)
With the update to a newer AppVeyor build image, creating the Windows
installer no longer worked because of a string quoting error in EnvVarUpdate.nsh.

This applies the fix recommended in https://stackoverflow.com/questions/62081765.
2021-05-05 12:19:51 +02:00
3a2b29c1ed appveyor.yml: upgrade to VisualStudio 2019 image (#22811) 2021-05-04 23:39:09 +03:00
973ad66b49 build: fix iOS framework build (#22813)
This fixes a regression introduced in #22804.
2021-05-04 21:45:45 +02:00
d107f90d1c go.mod: go mod tidy (#22814)
This updates go.mod for the addition of golang.org/x/sync.
2021-05-04 21:45:21 +02:00
effaf18523 build: improve cross compilation setup (#22804)
This PR cleans up the CI build system and fixes a couple of issues.

- The go tool launcher code has been moved to internal/build. With the new
  toolchain functions, the environment of the host Go (i.e. the one that built
  ci.go) and the target Go (i.e. the toolchain downloaded by -dlgo) are isolated
  more strictly. This is important to make cross compilation and -dlgo work
  correctly in more cases.
- The -dlgo option now skips the download and uses the host Go if the running Go
  version matches dlgoVersion exactly.
- The 'test' command now supports -dlgo, -cc and -arch. Running unit tests with
  foreign GOARCH is occasionally useful. For example, it can be used to run
  32-bit tests on Windows. It can also be used to run darwin/amd64 tests on
  darwin/arm64 using Rosetta 2.
- The 'aar', 'xcode' and 'xgo' commands now use a slightly different method to
  install external tools. They previously used `go get`, but this comes with the
  annoying side effect of modifying go.mod. They now use `go install` instead,
  which is the recommended way of installing tools without modifying the local
  module.
- The old build warning about outdated Go version has been removed because we're
  much better at keeping backwards compatibility now.
2021-05-04 13:01:20 +02:00
b8040a430e cmd/utils: use eth DNS tree for snap discovery (#22808)
This removes auto-configuration of the snap.*.ethdisco.net DNS discovery tree.
Since measurements have shown that > 75% of nodes in all.*.ethdisco.net support
snap, we have decided to retire the dedicated index for snap and just use the eth
tree instead.

The dial iterators of eth and snap now use the same DNS tree in the default configuration,
so both iterators should use the same DNS discovery client instance. This ensures that
the record cache and rate limit are shared. Records will not be requested multiple times.

While testing the change, I noticed that duplicate DNS requests do happen even
when the client instance is shared. This is because the two iterators request the tree
root, link tree root, and first levels of the tree in lockstep. To avoid this problem, the
change also adds a singleflight.Group instance in the client. When one iterator
attempts to resolve an entry which is already being resolved, the singleflight object
waits for the existing resolve call to finish and returns the entry to both places.
2021-05-04 11:29:32 +02:00
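A hedged usage sketch of the singleflight deduplication mentioned above, using golang.org/x/sync/singleflight directly; the key and the fake lookup are illustrative, not the actual dnsdisc client code.

    package main

    import (
        "fmt"
        "sync"
        "time"

        "golang.org/x/sync/singleflight"
    )

    var group singleflight.Group

    // resolveEntry stands in for an expensive DNS TXT lookup. Concurrent
    // callers asking for the same key share one in-flight request instead of
    // issuing duplicate queries.
    func resolveEntry(key string) (string, error) {
        v, err, _ := group.Do(key, func() (interface{}, error) {
            time.Sleep(50 * time.Millisecond) // stand-in for the network round trip
            return "record for " + key, nil
        })
        return v.(string), err
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                rec, _ := resolveEntry("le.example-tree.org")
                fmt.Println(rec)
            }()
        }
        wg.Wait()
    }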
640d2c5e30 Merge pull request #22803 from karalabe/silence-scary-warning
eth: don't print db upgrade warning on db init
2021-05-04 10:54:25 +03:00
856c379626 eth: don't print db upgrade warning on db init 2021-05-03 15:42:43 +03:00
fc1c1cbea9 Merge pull request #22739 from holiman/remove_code
core: remove old conversion to shuffle leveldb blocks into ancients
2021-05-03 15:37:46 +03:00
8f94fc26e3 cmd/utils: don't crash on nonexistent datadir (#22738) 2021-05-03 15:29:05 +03:00
afb097eda8 params: remove dependency on crypto (#22788)
* params: remove dependency on crypto

Package params should not depend on package crypto because building
crypto requires cgo.

Since build/ci.go needs package params to get the go-ethereum version
number, C code must be compiled in order to run the build tool, which is
annoying for certain cross-compilation setups.

* params: add SectionHead
2021-05-03 15:28:02 +03:00
ca9c576e62 core/vm: fix interpreter comments (#22797)
* Fix interpreter comment

* Fix comment
2021-05-03 11:58:00 +03:00
0e00ee42ec core/vm: clean up contract creation error handling (#22766)
Do not keep separate flag for "max code size exceeded" case, but instead
assign appropriate error for it sooner.
2021-05-01 13:19:24 +02:00
8ff98108e5 cmd/devp2p: fix flakey tests in CI (#22757)
This PR fixes a couple of issues in the eth test suite that caused flakiness when run in the CI.
2021-04-30 22:47:36 +02:00
afc1abd878 Merge pull request #22789 from karalabe/snap-fix-batch
eth/protocols/snap: use storage batch, not account batch in st task
2021-04-30 21:12:09 +03:00
52b5d2d869 eth/protocols/snap: use storage batch, not account batch in st task 2021-04-30 18:24:34 +03:00
8681a2536c Merge pull request #22777 from karalabe/snapshots-abort-resume-on-sync
core, eth: abort snapshot generation on snap sync and resume later
2021-04-30 17:04:05 +03:00
745757ac6b core, eth: abort snapshot generation on snap sync and resume later 2021-04-30 17:03:10 +03:00
bbb57fd64b core/state: remove toAddr helper in tests (#22772) 2021-04-30 13:10:12 +02:00
f66f1a16b3 eth/filters: fix comment on PublicFilterAPI timeoutLoop (#22782) 2021-04-30 13:00:48 +02:00
ff75b21f25 README.md: update commands table, add note about web3.js version (#22748) 2021-04-30 12:52:25 +02:00
b778e37daa core: fix typo in comment (#22773) 2021-04-30 12:50:02 +02:00
dde6cb0b92 core/vm: replace repeated string with variable in tests (#22774) 2021-04-30 12:49:13 +02:00
1e57ab5de6 core: remove unused else branch in reorg (#22783) 2021-04-30 12:47:05 +02:00
8130dd5cef core/vm: fix typo in comment (#22785) 2021-04-30 12:46:34 +02:00
bb43cd7a79 core/types: add license header (#22781) 2021-04-29 22:14:57 +02:00
b50b17ac69 github: add note about screenshots in issue template (#22764) 2021-04-29 19:30:37 +02:00
63bad18c33 evm: remove unused errors left after EIP-2315 removal (#22767) 2021-04-29 19:30:16 +02:00
56f533d00c docs: fix docstring on read head block (#22776) 2021-04-29 19:23:07 +02:00
793c8f889f add myself as code owner for catalyst (#22778) 2021-04-29 18:36:22 +02:00
c7d07294a6 catalyst: check if block exists in assemble-block call with unknown parent-hash (#22770) 2021-04-29 16:42:21 +02:00
871f50b911 Merge pull request #22765 from karalabe/revert-eth-hashrate
eth: restore eth_hashrate API endpoint
2021-04-29 12:03:15 +03:00
06f44c0fd4 eth: restore eth_hashrate API endpoint 2021-04-29 12:02:30 +03:00
64b60c7995 Merge pull request #22762 from karalabe/snap-lower-complexity
core, eth, ethdb, trie: simplify range proofs
2021-04-29 11:29:28 +03:00
fae165a5de core, eth, ethdb, trie: simplify range proofs 2021-04-29 10:59:08 +03:00
a81cf0d2b3 trie: remove redundant returns + use stacktrie where applicable (#22760)
* trie: add benchmark for proofless range

* trie: remove unused returns + use stacktrie
2021-04-28 22:47:48 +03:00
abb6cfae6a Merge pull request #22761 from karalabe/snap-small-packets
eth/protocols/snap: lower the packet size to avoid overloading link
2021-04-28 22:40:38 +03:00
e4270cacf4 cmd/devp2p: fix flaky SameRequestID test (#22754) 2021-04-28 21:38:38 +02:00
558bff4008 eth/protocols/snap: lower the packet size to avoid overloading link 2021-04-28 21:40:06 +03:00
6d7c9566df les, tests: fix les clientpool (#22756)
* les, tests: fix les clientpool

* tests: disable debug mode

* les: polish code
2021-04-28 14:18:25 +02:00
9e5bb84c0e tests/fuzzers: crypto/bn256 and crypto/bls12381 tests against gnark-crypto (#22755)
Add more cross-fuzzers to fuzz bls with gnark versus geth's own bls12-381 library
2021-04-28 12:04:25 +02:00
256c5d68b2 eth/gasprice: improve stability of estimated price (#22722)
This PR makes the gas price oracle ignore transactions priced at `<=1 wei`.
2021-04-28 09:06:34 +02:00
0c99868416 cmd/devp2p, eth/protocols/eth: fix tests + make sanity checks earlier (#22749) 2021-04-28 08:48:07 +02:00
d9c9ee5ac9 Merge pull request #22753 from karalabe/p2p-tracker-stopfix
p2p/tracker: only reschedule wake if previous didn't run
2021-04-27 21:49:54 +03:00
ff3535e8e0 p2p/tracker: only reschedule wake if previous didn't run 2021-04-27 21:47:59 +03:00
55043eec45 Merge pull request #22751 from holiman/tracker_fix
p2p/tracker: properly clean up fulfilled requests
2021-04-27 21:43:07 +03:00
45fca44c24 p2p/tracker: properly clean up fulfilled requests 2021-04-27 18:09:34 +02:00
caea6c4661 eth/protocols/snap: generate storage trie from full dirty snap data (#22668)
* eth/protocols/snap: generate storage trie from full dirty snap data

* eth/protocols/snap: get rid of some more dead code

* eth/protocols/snap: less frequent logs, also log during trie generation

* eth/protocols/snap: implement dirty account range stack-hashing

* eth/protocols/snap: don't loop on account trie generation

* eth/protocols/snap: fix account format in trie

* core, eth, ethdb: glue snap packets together, but not chunks

* eth/protocols/snap: print completion log for snap phase

* eth/protocols/snap: extended tests

* eth/protocols/snap: make testcase pass

* eth/protocols/snap: fix account stacktrie commit without defer

* ethdb: fix key counts on reset

* eth/protocols: fix typos

* eth/protocols/snap: make better use of delivered data (#44)

* eth/protocols/snap: make better use of delivered data

* squashme

* eth/protocols/snap: reduce chunking

* squashme

* eth/protocols/snap: reduce chunking further

* eth/protocols/snap: break out hash range calculations

* eth/protocols/snap: use sort.Search instead of looping

* eth/protocols/snap: prevent crash on storage response with no keys

* eth/protocols/snap: nitpicks all around

* eth/protocols/snap: clear heal need on 1-chunk storage completion

* eth/protocols/snap: fix range chunker, add tests

Co-authored-by: Péter Szilágyi <peterke@gmail.com>

* trie: fix test API error

* eth/protocols/snap: fix some further linter issues

* eth/protocols/snap: fix accidental batch reuse

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-04-27 17:19:59 +03:00
65a1c2d829 core/vm: make gas cost reporting to tracers correct (#22702)
Previously, the makeCallVariantGasCallEIP2929 charged the cold account access cost directly, leading to an incorrect gas cost passed to the tracer from the main execution loop.
This change still temporarily charges the cost (to allow for an accurate calculation of the available gas for the call), but then afterwards refunds it and instead returns the correct total gas cost to be then properly charged in the main loop.
2021-04-27 13:21:41 +02:00
a0a99e610d build: upgrade -dlgo version to Go 1.16.3 (#22746) 2021-04-27 12:43:47 +02:00
ad983b300b cmd/puppeth: add support for authentication via ssh agent (#22634) 2021-04-27 11:36:57 +02:00
85a0bab6d7 Merge pull request #21467 from holiman/minor_ethashfix
consensus/ethash: less lookups of block data
2021-04-27 12:26:46 +03:00
a3f0da1ac4 build: upgrade to golangci-lint v1.39.0 (#22696)
* build: upgrade to golangci-lint v1.39.0

* consensus/ethash: fix go vet warning regarding reflect.SliceHeader

* eth/catalyst: fix lint issue

* consensus/ethash: fix bug in memoryMapFile
2021-04-27 11:49:06 +03:00
854f068ed6 les: polish code (#22625)
* les: polish code

* les/vflus/server: fixes

* les: fix lint
2021-04-27 09:44:59 +02:00
9b99e3dfe0 core/rawdb: fix datarace in freezer (#22728)
The Append / truncate operations were racy. When a datafile reaches 2GB, a new file is needed. For this operation, we require a writelock, which is not needed in the 99.99% of cases where the data fits in the current head file.

This transition from readlock to writelock was incorrect: as the readlock was released, a truncate operation could slip in between and truncate the data. That alone would have been fine; however, the Append operation continued writing as if no truncation had occurred, e.g. writing item 5 where item 0 should reside.

This PR changes the behaviour so that when we run into the situation that a new file is needed, the operation aborts and retries, this time with a writelock.

The outcome of the situation described above, running on this PR, would instead be that the Append operation exits with a failure.
2021-04-26 18:19:07 +02:00
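A hedged sketch of the abort-and-retry pattern described above (toy types, not the real freezer): the fast path runs under a read lock, and when the rare "need a new file" condition is hit, the operation gives up, takes the write lock to rotate the file, and retries.

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var errNeedNewFile = errors.New("head file full, retry with write lock")

    type freezer struct {
        lock     sync.RWMutex
        headSize int
        limit    int
    }

    // append retries when the read-locked fast path reports that a new head
    // file is required, instead of upgrading the lock mid-operation.
    func (f *freezer) append(size int) {
        for {
            if err := f.tryAppend(size); !errors.Is(err, errNeedNewFile) {
                return
            }
            f.lock.Lock()
            if f.headSize+size > f.limit {
                f.headSize = 0 // stand-in for rotating to a fresh data file
            }
            f.lock.Unlock()
        }
    }

    func (f *freezer) tryAppend(size int) error {
        f.lock.RLock()
        defer f.lock.RUnlock()
        if f.headSize+size > f.limit {
            return errNeedNewFile // abort: caller retries under the write lock
        }
        f.headSize += size
        return nil
    }

    func main() {
        f := &freezer{limit: 10, headSize: 9}
        f.append(4)
        fmt.Println("head size:", f.headSize) // 4, written after rotation
    }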
83375b0873 core: remove old conversion to shuffle leveldb blocks into ancients 2021-04-26 14:27:56 +02:00
34f3c9539b p2p/discover: improve discv5 handling of IPv4-in-IPv6 addresses (#22703)
When receiving PING from an IPv4 address over IPv6, the implementation sent
back an IPv4-in-IPv6 address. This change makes it reflect the IPv4 address.
2021-04-23 18:18:10 +02:00
cac1b21d39 cmd/devp2p/internal/ethtest: add more tx propagation tests (#22630)
This adds a test for large tx announcement messages, as well as a test to
check that announced tx hashes are requested by the node.
2021-04-23 18:14:39 +02:00
49281ab84f core/state/snapshot, trie: reuse dirty data instead of hitting disk when generating (#22667)
* core/state/snapshot: reuse memory data instead of hitting disk when generating

* trie: minor nitpicks wrt the resolver optimization

* core/state/snapshot, trie: use key/value store for resolver

* trie: fix linter

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-04-23 14:39:18 +03:00
ea54c58d4f cmd/devp2p/internal/ethtest: run test suite as Go unit test (#22698)
This change adds a Go unit test that runs the protocol test suite
against the go-ethereum implementation of the eth protocol.
2021-04-23 11:15:42 +02:00
1fb9a6dd32 eth/protocols, p2p/tracker: add support for req/rep rtt tracking (#22608)
* eth/protocols, p2p/tracker: add support for req/rep rtt tracking

* p2p/tracker: sanity cap the number of pending requests

* p2p/tracker: linter <3

* p2p/tracker: disable entire tracker if no metrics are enabled
2021-04-22 11:42:46 +03:00
9357280fce rpc: add HTTPError type for HTTP error responses (#22677)
The new error type is returned by client operations and contains details of
the response error code and response body.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-04-21 15:51:30 +02:00
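A short, hedged usage sketch for the new error type: callers can detect HTTP-level failures with errors.As and inspect the status code and body. The endpoint is hypothetical and the field names are assumed from the rpc package as of this release; treat the snippet as illustrative.

    package main

    import (
        "context"
        "errors"
        "fmt"

        "github.com/ethereum/go-ethereum/rpc"
    )

    func main() {
        client, err := rpc.Dial("https://example.invalid") // hypothetical endpoint
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        var result string
        err = client.CallContext(context.Background(), &result, "web3_clientVersion")

        var httpErr rpc.HTTPError
        if errors.As(err, &httpErr) {
            // Non-2xx response: status code and raw body are available.
            fmt.Println(httpErr.StatusCode, string(httpErr.Body))
        } else if err != nil {
            fmt.Println("call failed:", err)
        }
    }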
67da83aca5 accounts/external, signer/core: add support for EIP-2930 transactions (#22585)
This adds support for signing EIP-2930 with clef.
2021-04-21 13:03:33 +02:00
4b783c0064 trie: improve the node iterator seek operation (#22470)
This change improves the efficiency of the nodeIterator seek
operation. Previously, seek essentially ran the iterator forward
until it found the matching node. With this change, it skips
over fullnode children and avoids resolving them from the database.
2021-04-21 12:25:26 +02:00
3e68d627b1 les: fix goroutine leaks in tests (#22707) 2021-04-21 10:19:28 +02:00
96828c90f5 eth/tracers, internal/ethapi: fix typos causing lint issue (#22711) 2021-04-21 10:18:27 +02:00
dd9c3225cf eth, internal: extend the TraceCall API (#22245)
Adds an optional parameter `overrides *map[common.Address]account` to the `eth_call` API so that the caller can customize the state.
2021-04-21 09:21:22 +02:00
cc33398cef tests: disable blockchain tests based on general state tests (#22704) 2021-04-20 12:28:20 +02:00
beee6b77a0 go.mod: upgrade gopsutils to v3.21.4 (#22693)
This fixes the OpenBSD/arm64 build.
2021-04-20 10:54:41 +02:00
581539c6ee trie: make stacktrie support binary marshal/unmarshal (#22685) 2021-04-20 10:42:02 +02:00
d7bfb978ba ethash: no block reward in catalyst mode (#22697) 2021-04-20 10:29:36 +02:00
d6ffa14035 core: nuke legacy snapshot supporting (#22663) 2021-04-20 08:27:46 +03:00
653b7e959d cmd/devp2p: add dns nuke-route53 command (#22695) 2021-04-19 14:54:55 +02:00
424656519a cmd/devp2p: add support for -limit option in nodeset filter command (#22694)
The new -limit option makes the filter operate on top N nodes by score.
This also adds ENR attribute stats in the nodeset info command.
Node set commands are now documented in README.
2021-04-19 14:54:38 +02:00
e43ac53264 Merge pull request #22686 from holiman/minor_fixes
core/state/snapshot: avoid copybytes for stacktrie
2021-04-18 17:53:01 +03:00
f79cce5de9 eth/catalyst: add catalyst API prototype (#22641)
This change adds the --catalyst flag, enabling an RPC API for eth2 integration.
In this initial version, catalyst mode also disables all peer-to-peer networking.

Co-authored-by: Mikhail Kalinin <noblesse.knight@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2021-04-16 21:29:22 +02:00
09d44e9925 core/state/snapshot: avoid copybytes for stacktrie 2021-04-16 14:58:23 +02:00
4f3ba6742f trie: make stacktrie not mutate input values (#22673)
The stacktrie is a bit unintuitive, API-wise, since it mutates input values.
Such behaviour is dangerous, and easy to get wrong if the calling code 'forgets' this quirk. The behaviour is fixed by this PR, so that the input values are not modified by the stacktrie. 

Note: just as with the Trie, the stacktrie still references the live input objects, so it's still _not_ safe to mutate the values from the callsite.
2021-04-16 14:21:01 +02:00
65689e7fce les/vflux/server: fix priority cornercase causing fuzzer timeout (#22650)
* les/vflux/server: fix estimatePriority corner case

* les/vflux/server: simplify inactiveAllowance == 0 case
2021-04-16 09:52:33 +02:00
f8afb681dd Merge pull request #22678 from karalabe/snap-ephemeral-channels
eth/protocols/snap: use ephemeral channels to avoid cross-sync deliveries
2021-04-16 09:27:39 +03:00
fda93f643e log: fix formatting of big.Int (#22679)
* log: fix formatting of big.Int

The implementation of formatLogfmtBigInt had two issues: it crashed when
the number was actually large enough to hit the big integer case, and
modified the big.Int while formatting it.

* log: don't call FormatLogfmtInt64 for int16

* log: separate from decimals back, not front

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-04-16 09:27:16 +03:00
3cfd0fe7a8 core: add TestGenesisHashes and fix YoloV3 (#22559)
This adds simple unit test checking if the hard-coded genesis
hash values in package params match the actual genesis block hashes.
2021-04-16 00:32:16 +02:00
9553c98de8 eth/protocols/snap: use ephemeral channels to avoid cross-sync deliveries 2021-04-15 21:16:54 +03:00
1e207342b5 all: make logs a bit easier on the eye to digest (#22665)
* all: add thousands separators for big numbers in log messages

* p2p/sentry: drop accidental file

* common, log: add fast number formatter

* common, eth/protocols/snap: simplify fancy num types

* log: handle nil big ints
2021-04-15 20:35:00 +03:00
d8ff53dfb8 Merge pull request #22666 from karalabe/remove-stale-datatype
core/types: drop some relic data types
2021-04-15 02:14:09 +03:00
d5e57948d1 core/types: drop some relic data types 2021-04-14 23:39:42 +03:00
7088f1e814 core, eth: faster snapshot generation (#22504)
* eth/protocols: persist received state segments

* core: initial implementation

* core/state/snapshot: add tests

* core, eth: updates

* eth/protocols/snapshot: count flat state size

* core/state: add metrics

* core/state/snapshot: skip unnecessary deletion

* core/state/snapshot: rename

* core/state/snapshot: use the global batch

* core/state/snapshot: add logs and fix wiping

* core/state/snapshot: fix

* core/state/snapshot: save generation progress even if the batch is empty

* core/state/snapshot: fixes

* core/state/snapshot: fix initial account range length

* core/state/snapshot: fix initial account range

* eth/protocols/snap: store flat states during the healing

* eth/protocols/snap: print logs

* core/state/snapshot: refactor (#4)

* core/state/snapshot: refactor

* core/state/snapshot: tiny fix and polish

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>

* core, eth: fixes

* core, eth: fix healing writer

* core, trie, eth: fix paths

* eth/protocols/snap: fix encoding

* eth, core: add debug log

* core/state/generate: release iterator asap (#5)

core/state/snapshot: less copy

core/state/snapshot: revert split loop

core/state/snapshot: handle storage becoming empty, improve test robustness

core/state: test modified codehash

core/state/snapshot: polish

* core/state/snapshot: optimize stats counter

* core, eth: add metric

* core/state/snapshot: update comments

* core/state/snapshot: improve tests

* core/state/snapshot: replace secure trie with standard trie

* core/state/snapshot: wrap return as the struct

* core/state/snapshot: skip wiping correct states

* core/state/snapshot: updates

* core/state/snapshot: fixes

* core/state/snapshot: fix panic due to reference flaw in closure

* core/state/snapshot: fix errors in state generation logic + fix log output

* core/state/snapshot: remove an error case

* core/state/snapshot: fix condition-check for exhausted snap state

* core/state/snapshot: use stackTrie for small tries

* core/state/snapshot: don't resolve small storage tries in vain

* core/state/snapshot: properly clean up storage of deleted accounts

* core/state/snapshot: avoid RLP-encoding in some cases + minor nitpicks

* core/state/snapshot: fix error (+testcase)

* core/state/snapshot: clean up tests a bit

* core/state/snapshot: work in progress on better tests

* core/state/snapshot: polish code

* core/state/snapshot: fix trie iteration abortion trigger

* core/state/snapshot: fixes flaws

* core/state/snapshot: remove panic

* core/state/snapshot: fix abort

* core/state/snapshot: more tests (plus failing testcase)

* core/state/snapshot: more testcases + fix for failing test

* core/state/snapshot: testcase for malformed data

* core/state/snapshot: some test nitpicks

* core/state/snapshot: improvements to logging

* core/state/snapshot: testcase to demo error in abortion

* core/state/snapshot: fix abortion

* cmd/geth: make verify-state report the root

* trie: fix failing test

* core/state/snapshot: add timer metrics

* core/state/snapshot: fix metrics

* core/state/snapshot: update tests

* eth/protocols/snap: write snapshot account even if code or state is needed

* core/state/snapshot: fix diskmore check

* core/state/snapshot: review fixes

* core/state/snapshot: improve error message

* cmd/geth: rename 'error' to 'err' in logs

* core/state/snapshot: fix some review concerns

* core/state/snapshot, eth/protocols/snap: clear snapshot marker when starting/resuming snap sync

* core: add error log

* core/state/snapshot: use proper timers for metrics collection

* core/state/snapshot: address some review concerns

* eth/protocols/snap: improved log message

* eth/protocols/snap: fix heal logs to condense infos

* core/state/snapshot: wait for generator termination before restarting

* core/state/snapshot: revert timers to counters to track total time

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-04-14 23:23:11 +03:00
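One item in the list above, using a stack trie for small tries, leans on the fact that flat snapshot data is iterated in key order, so a storage root can be computed in a single streaming pass instead of building a full trie in memory. A hedged sketch of that pattern using the exported trie.NewStackTrie (`stackTrieRoot` is a made-up helper; keeping keys in ascending order is the caller's responsibility):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/trie"
)

// stackTrieRoot folds a pre-sorted sequence of key/value pairs into a trie
// root hash. Keys MUST arrive in ascending order, which is what the flat
// snapshot iterators provide.
func stackTrieRoot(keys, vals [][]byte) (common.Hash, error) {
	st := trie.NewStackTrie(nil) // nil writer: we only want the hash
	for i, key := range keys {
		if err := st.TryUpdate(key, vals[i]); err != nil {
			return common.Hash{}, err
		}
	}
	return st.Hash(), nil
}

func main() {
	keys := [][]byte{[]byte("aa"), []byte("ab"), []byte("ba")}
	vals := [][]byte{[]byte("1"), []byte("2"), []byte("3")}

	root, err := stackTrieRoot(keys, vals)
	fmt.Println(root.Hex(), err)
}
```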
a50251e6cb eth/fetcher: avoid spurious timer events at startup (#22652)
Co-authored-by: Felix Lange <fjl@twurst.com>
2021-04-14 12:44:32 +02:00
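Spurious timer events are a common Go pitfall: a freshly created time.Timer delivers a tick before the code is ready for it unless it is stopped and drained first. The sketch below shows the generic idiom only; it is not claimed to be the exact shape of the fetcher fix:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Create the timer up front, but disarm it immediately: Stop plus a
	// drain guarantees no stale tick is sitting in the channel.
	timer := time.NewTimer(0)
	if !timer.Stop() {
		<-timer.C
	}

	// Later, arm it only when there is actually work scheduled.
	timer.Reset(50 * time.Millisecond)

	select {
	case <-timer.C:
		fmt.Println("fired after being explicitly armed")
	case <-time.After(time.Second):
		fmt.Println("unexpected: timer never fired")
	}
}
```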
72e37942f3 cmd/faucet: support testnet flags in the faucet (#22545)
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-04-13 23:51:46 +02:00
271e5b7fc9 cmd/geth: add db-command to inspect freezer index (#22633)
This PR makes it easier to inspect the freezer index, which could be useful to investigate things like #22111
2021-04-13 15:45:30 +02:00
6c27d8f996 accounts: documentation fixes (#22645)
* replaces `an chance` with `a chance`
* replaces `SignHashWithPassphrase` with `SignTextWithPassphrase` as there was no SignHashWithPassphrase function in the file
2021-04-13 10:00:48 +02:00
9c653ff662 Merge pull request #22636 from karalabe/drop-eth64
eth, les: drop support for eth/64
2021-04-09 13:52:21 +03:00
fe1586b094 eth, les: drop support for eth/64, fix eth/66 tests 2021-04-09 10:39:45 +03:00
04dcc9378d params: begin v1.10.3 release cycle 2021-04-08 13:04:30 +02:00
493100ba4d consensus/ethash: less lookups of block data 2020-08-21 13:55:39 +02:00
383 changed files with 18017 additions and 7485 deletions

.github/CODEOWNERS vendored
View File

@ -9,6 +9,7 @@ cmd/puppeth @karalabe
consensus @karalabe
core/ @karalabe @holiman @rjl493456442
eth/ @karalabe @holiman @rjl493456442
eth/catalyst/ @gballet
graphql/ @gballet
les/ @zsfelfoldi @rjl493456442
light/ @zsfelfoldi @rjl493456442

View File

@ -26,3 +26,5 @@ Commit hash : (if `develop`)
````
[backtrace]
````
When submitting logs: please submit them as text and not screenshots.

View File

@ -24,6 +24,42 @@ jobs:
script:
- go run build/ci.go lint
# These builders create the Docker sub-images for multi-arch push and each
# will attempt to push the multi-arch image if they are the last builder
- stage: build
if: type = push
os: linux
arch: amd64
dist: bionic
go: 1.16.x
env:
- docker
services:
- docker
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- export DOCKER_CLI_EXPERIMENTAL=enabled
script:
- go run build/ci.go docker -image -manifest amd64,arm64 -upload karalabe/geth-docker-test
- stage: build
if: type = push
os: linux
arch: arm64
dist: bionic
go: 1.16.x
env:
- docker
services:
- docker
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- export DOCKER_CLI_EXPERIMENTAL=enabled
script:
- go run build/ci.go docker -image -manifest amd64,arm64 -upload karalabe/geth-docker-test
# This builder does the Ubuntu PPA upload
- stage: build
if: type = push

View File

@ -1,10 +1,15 @@
# Support setting various labels on the final image
ARG COMMIT=""
ARG VERSION=""
ARG BUILDNUM=""
# Build Geth in a stock Go builder container
FROM golang:1.16-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
RUN apk add --no-cache gcc musl-dev linux-headers git
ADD . /go-ethereum
RUN cd /go-ethereum && make geth
RUN cd /go-ethereum && go run build/ci.go install ./cmd/geth
# Pull Geth into a second stage deploy alpine container
FROM alpine:latest
@ -14,3 +19,10 @@ COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp
ENTRYPOINT ["geth"]
# Add some metadata labels to help programmatic image consumption
ARG COMMIT=""
ARG VERSION=""
ARG BUILDNUM=""
LABEL commit="$COMMIT" version="$VERSION" buildnum="$BUILDNUM"

View File

@ -1,10 +1,15 @@
# Support setting various labels on the final image
ARG COMMIT=""
ARG VERSION=""
ARG BUILDNUM=""
# Build Geth in a stock Go builder container
FROM golang:1.16-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
RUN apk add --no-cache gcc musl-dev linux-headers git
ADD . /go-ethereum
RUN cd /go-ethereum && make all
RUN cd /go-ethereum && go run build/ci.go install
# Pull all binaries into a second stage deploy alpine container
FROM alpine:latest
@ -13,3 +18,10 @@ RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/* /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp
# Add some metadata labels to help programmatic image consumption
ARG COMMIT=""
ARG VERSION=""
ARG BUILDNUM=""
LABEL commit="$COMMIT" version="$VERSION" buildnum="$BUILDNUM"

View File

@ -26,7 +26,7 @@ android:
@echo "Import \"$(GOBIN)/geth.aar\" to use the library."
@echo "Import \"$(GOBIN)/geth-sources.jar\" to add javadocs"
@echo "For more info see https://stackoverflow.com/questions/20994336/android-studio-how-to-attach-javadoc"
ios:
$(GORUN) build/ci.go xcode --local
@echo "Done building."
@ -46,12 +46,11 @@ clean:
# You need to put $GOBIN (or $GOPATH/bin) in your PATH to use 'go generate'.
devtools:
env GOBIN= go get -u golang.org/x/tools/cmd/stringer
env GOBIN= go get -u github.com/kevinburke/go-bindata/go-bindata
env GOBIN= go get -u github.com/fjl/gencodec
env GOBIN= go get -u github.com/golang/protobuf/protoc-gen-go
env GOBIN= go install golang.org/x/tools/cmd/stringer@latest
env GOBIN= go install github.com/kevinburke/go-bindata/go-bindata@latest
env GOBIN= go install github.com/fjl/gencodec@latest
env GOBIN= go install github.com/golang/protobuf/protoc-gen-go@latest
env GOBIN= go install ./cmd/abigen
@type "npm" 2> /dev/null || echo 'Please install node.js and npm'
@type "solc" 2> /dev/null || echo 'Please install solc'
@type "protoc" 2> /dev/null || echo 'Please install protoc'

View File

@ -16,7 +16,7 @@ archives are published at https://geth.ethereum.org/downloads/.
For prerequisites and detailed build instructions please read the [Installation Instructions](https://geth.ethereum.org/docs/install-and-build/installing-geth).
Building `geth` requires both a Go (version 1.13 or later) and a C compiler. You can install
Building `geth` requires both a Go (version 1.14 or later) and a C compiler. You can install
them using your favourite package manager. Once the dependencies are installed, run
```shell
@ -37,10 +37,11 @@ directory.
| Command | Description |
| :-----------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI page](https://geth.ethereum.org/docs/interface/command-line-options) for command line options. |
| `clef` | Stand-alone signing tool, which can be used as a backend signer for `geth`. |
| `devp2p` | Utilities to interact with nodes on the networking layer, without running a full blockchain. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://docs.soliditylang.org/en/develop/abi-spec.html) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://geth.ethereum.org/docs/dapp/native-bindings) page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://eth.wiki/json-rpc/API) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://eth.wiki/en/fundamentals/rlp)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
@ -67,7 +68,8 @@ This command will:
causing it to download more data in exchange for avoiding processing the entire history
of the Ethereum network, which is very CPU intensive.
* Start up `geth`'s built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://web3js.readthedocs.io/en/)
(via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://web3js.readthedocs.io/en/)
(note: the `web3` version bundled within `geth` is very old, and not up to date with official docs),
as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/rpc/server).
This tool is optional and if you leave it out you can always attach to an already running
`geth` instance with `geth attach`.
@ -228,7 +230,8 @@ aware of and agree upon. This consists of a small JSON file (e.g. call it `genes
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0
"istanbulBlock": 0,
"berlinBlock": 0
},
"alloc": {},
"coinbase": "0x0000000000000000000000000000000000000000",
@ -329,7 +332,7 @@ from anyone on the internet, and are grateful for even the smallest of fixes!
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull request
for the maintainers to review and merge into the main code base. If you wish to submit
more complex changes though, please check up with the core devs first on [our gitter channel](https://gitter.im/ethereum/go-ethereum)
more complex changes though, please check up with the core devs first on [our Discord Server](https://discord.gg/invite/nthXNEv)
to ensure those changes are in line with the general philosophy of the project and/or get
some early feedback which can make both your efforts much lighter as well as our review
and merge procedures quick and simple.

View File

@ -32,12 +32,12 @@ var (
// have any code associated with it (i.e. suicided).
ErrNoCode = errors.New("no contract code at given address")
// This error is raised when attempting to perform a pending state action
// ErrNoPendingState is raised when attempting to perform a pending state action
// on a backend that doesn't implement PendingContractCaller.
ErrNoPendingState = errors.New("backend does not support pending state")
// This error is returned by WaitDeployed if contract creation leaves an
// empty contract behind.
// ErrNoCodeAfterDeploy is returned by WaitDeployed if contract creation leaves
// an empty contract behind.
ErrNoCodeAfterDeploy = errors.New("no contract code after deployment")
)
@ -47,7 +47,8 @@ type ContractCaller interface {
// CodeAt returns the code of the given account. This is needed to differentiate
// between contract internal errors and the local chain being out of sync.
CodeAt(ctx context.Context, contract common.Address, blockNumber *big.Int) ([]byte, error)
// ContractCall executes an Ethereum contract call with the specified data as the
// CallContract executes an Ethereum contract call with the specified data as the
// input.
CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error)
}
@ -58,6 +59,7 @@ type ContractCaller interface {
type PendingContractCaller interface {
// PendingCodeAt returns the code of the given account in the pending state.
PendingCodeAt(ctx context.Context, contract common.Address) ([]byte, error)
// PendingCallContract executes an Ethereum contract call against the pending state.
PendingCallContract(ctx context.Context, call ethereum.CallMsg) ([]byte, error)
}
@ -67,19 +69,31 @@ type PendingContractCaller interface {
// used when the user does not provide some needed values, but rather leaves it up
// to the transactor to decide.
type ContractTransactor interface {
// HeaderByNumber returns a block header from the current canonical chain. If
// number is nil, the latest known header is returned.
HeaderByNumber(ctx context.Context, number *big.Int) (*types.Header, error)
// PendingCodeAt returns the code of the given account in the pending state.
PendingCodeAt(ctx context.Context, account common.Address) ([]byte, error)
// PendingNonceAt retrieves the current pending nonce associated with an account.
PendingNonceAt(ctx context.Context, account common.Address) (uint64, error)
// SuggestGasPrice retrieves the currently suggested gas price to allow a timely
// execution of a transaction.
SuggestGasPrice(ctx context.Context) (*big.Int, error)
// SuggestGasTipCap retrieves the currently suggested 1559 priority fee to allow
// a timely execution of a transaction.
SuggestGasTipCap(ctx context.Context) (*big.Int, error)
// EstimateGas tries to estimate the gas needed to execute a specific
// transaction based on the current pending state of the backend blockchain.
// There is no guarantee that this is the true gas limit requirement as other
// transactions may be added or removed by miners, but it should provide a basis
// for setting a reasonable default.
EstimateGas(ctx context.Context, call ethereum.CallMsg) (gas uint64, err error)
// SendTransaction injects the transaction into the pending pool for execution.
SendTransaction(ctx context.Context, tx *types.Transaction) error
}
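A consumer of this interface now has to choose between legacy and dynamic-fee pricing. Below is a hedged sketch of that decision using only the methods declared above; the 2x base-fee headroom mirrors the bind changes further down but is an assumption here, not a mandated formula (`suggestFees` is a made-up helper):

```go
package feesketch

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
)

// suggestFees picks gas price fields for a transaction: legacy gasPrice on
// pre-London chains (BaseFee == nil), tip + fee cap on post-London chains.
func suggestFees(ctx context.Context, t bind.ContractTransactor) (gasPrice, tipCap, feeCap *big.Int, err error) {
	head, err := t.HeaderByNumber(ctx, nil) // nil: latest header
	if err != nil {
		return nil, nil, nil, err
	}
	if head.BaseFee == nil {
		// Pre-London: a single legacy gas price is all we need.
		gasPrice, err = t.SuggestGasPrice(ctx)
		return gasPrice, nil, nil, err
	}
	// Post-London: ask for a priority fee and leave headroom for base-fee
	// growth; twice the current base fee is an assumed, common choice.
	if tipCap, err = t.SuggestGasTipCap(ctx); err != nil {
		return nil, nil, nil, err
	}
	feeCap = new(big.Int).Add(tipCap, new(big.Int).Mul(head.BaseFee, big.NewInt(2)))
	return nil, tipCap, feeCap, nil
}
```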

View File

@ -86,7 +86,7 @@ func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.Genesis
config: genesis.Config,
events: filters.NewEventSystem(&filterBackend{database, blockchain}, false),
}
backend.rollback()
backend.rollback(blockchain.CurrentBlock())
return backend
}
@ -112,7 +112,9 @@ func (b *SimulatedBackend) Commit() {
if _, err := b.blockchain.InsertChain([]*types.Block{b.pendingBlock}); err != nil {
panic(err) // This cannot happen unless the simulator is wrong, fail in that case
}
b.rollback()
// Using the last inserted block here makes it possible to build on a side
// chain after a fork.
b.rollback(b.pendingBlock)
}
// Rollback aborts all pending transactions, reverting to the last committed state.
@ -120,22 +122,49 @@ func (b *SimulatedBackend) Rollback() {
b.mu.Lock()
defer b.mu.Unlock()
b.rollback()
b.rollback(b.blockchain.CurrentBlock())
}
func (b *SimulatedBackend) rollback() {
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(int, *core.BlockGen) {})
func (b *SimulatedBackend) rollback(parent *types.Block) {
blocks, _ := core.GenerateChain(b.config, parent, ethash.NewFaker(), b.database, 1, func(int, *core.BlockGen) {})
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), b.blockchain.StateCache(), nil)
}
// Fork creates a side-chain that can be used to simulate reorgs.
//
// This function should be called with the ancestor block where the new side
// chain should be started. Transactions (old and new) can then be applied on
// top and Commit-ed.
//
// Note, the side-chain will only become canonical (and trigger the events) when
// it becomes longer. Until then CallContract will still operate on the current
// canonical chain.
//
// There is a % chance that the side chain becomes canonical at the same length
// to simulate live network behavior.
func (b *SimulatedBackend) Fork(ctx context.Context, parent common.Hash) error {
b.mu.Lock()
defer b.mu.Unlock()
if len(b.pendingBlock.Transactions()) != 0 {
return errors.New("pending block dirty")
}
block, err := b.blockByHash(ctx, parent)
if err != nil {
return err
}
b.rollback(block)
return nil
}
// stateByBlockNumber retrieves a state by a given blocknumber.
func (b *SimulatedBackend) stateByBlockNumber(ctx context.Context, blockNumber *big.Int) (*state.StateDB, error) {
if blockNumber == nil || blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) == 0 {
return b.blockchain.State()
}
block, err := b.blockByNumberNoLock(ctx, blockNumber)
block, err := b.blockByNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
@ -228,6 +257,11 @@ func (b *SimulatedBackend) BlockByHash(ctx context.Context, hash common.Hash) (*
b.mu.Lock()
defer b.mu.Unlock()
return b.blockByHash(ctx, hash)
}
// blockByHash retrieves a block based on the block hash without Locking.
func (b *SimulatedBackend) blockByHash(ctx context.Context, hash common.Hash) (*types.Block, error) {
if hash == b.pendingBlock.Hash() {
return b.pendingBlock, nil
}
@ -246,12 +280,12 @@ func (b *SimulatedBackend) BlockByNumber(ctx context.Context, number *big.Int) (
b.mu.Lock()
defer b.mu.Unlock()
return b.blockByNumberNoLock(ctx, number)
return b.blockByNumber(ctx, number)
}
// blockByNumberNoLock retrieves a block from the database by number, caching it
// blockByNumber retrieves a block from the database by number, caching it
// (associated with its hash) if found, without taking the lock.
func (b *SimulatedBackend) blockByNumberNoLock(ctx context.Context, number *big.Int) (*types.Block, error) {
func (b *SimulatedBackend) blockByNumber(ctx context.Context, number *big.Int) (*types.Block, error) {
if number == nil || number.Cmp(b.pendingBlock.Number()) == 0 {
return b.blockchain.CurrentBlock(), nil
}
@ -431,6 +465,12 @@ func (b *SimulatedBackend) SuggestGasPrice(ctx context.Context) (*big.Int, error
return big.NewInt(1), nil
}
// SuggestGasTipCap implements ContractTransactor.SuggestGasTipCap. Since the simulated
// chain doesn't have miners, we just return a gas tip of 1 for any call.
func (b *SimulatedBackend) SuggestGasTipCap(ctx context.Context) (*big.Int, error) {
return big.NewInt(1), nil
}
// EstimateGas executes the requested code against the currently pending block/state and
// returns the used amount of gas.
func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMsg) (uint64, error) {
@ -527,10 +567,38 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
// callContract implements common code between normal and pending contract calls.
// state is modified during execution, make sure to copy it if necessary.
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, stateDB *state.StateDB) (*core.ExecutionResult, error) {
// Ensure message is initialized properly.
if call.GasPrice == nil {
call.GasPrice = big.NewInt(1)
// Gas prices post 1559 need to be initialized
if call.GasPrice != nil && (call.GasFeeCap != nil || call.GasTipCap != nil) {
return nil, errors.New("both gasPrice and (maxFeePerGas or maxPriorityFeePerGas) specified")
}
head := b.blockchain.CurrentHeader()
if !b.blockchain.Config().IsLondon(head.Number) {
// If there's no basefee, then it must be a non-1559 execution
if call.GasPrice == nil {
call.GasPrice = new(big.Int)
}
call.GasFeeCap, call.GasTipCap = call.GasPrice, call.GasPrice
} else {
// A basefee is provided, necessitating 1559-type execution
if call.GasPrice != nil {
// User specified the legacy gas field, convert to 1559 gas typing
call.GasFeeCap, call.GasTipCap = call.GasPrice, call.GasPrice
} else {
// User specified 1559 gas fields (or none), use those
if call.GasFeeCap == nil {
call.GasFeeCap = new(big.Int)
}
if call.GasTipCap == nil {
call.GasTipCap = new(big.Int)
}
// Backfill the legacy gasPrice for EVM execution, unless we're all zeroes
call.GasPrice = new(big.Int)
if call.GasFeeCap.BitLen() > 0 || call.GasTipCap.BitLen() > 0 {
call.GasPrice = math.BigMin(new(big.Int).Add(call.GasTipCap, head.BaseFee), call.GasFeeCap)
}
}
}
// Ensure message is initialized properly.
if call.Gas == 0 {
call.Gas = 50000000
}
@ -547,7 +615,7 @@ func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallM
evmContext := core.NewEVMBlockContext(block.Header(), b.blockchain, nil)
// Create a new environment which holds all relevant information
// about the transaction and calling mechanisms.
vmEnv := vm.NewEVM(evmContext, txContext, stateDB, b.config, vm.Config{})
vmEnv := vm.NewEVM(evmContext, txContext, stateDB, b.config, vm.Config{NoBaseFee: true})
gasPool := new(core.GasPool).AddGas(math.MaxUint64)
return core.NewStateTransition(vmEnv, msg, gasPool).TransitionDb()
@ -559,8 +627,12 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
b.mu.Lock()
defer b.mu.Unlock()
// Check transaction validity.
block := b.blockchain.CurrentBlock()
// Get the last block
block, err := b.blockByHash(ctx, b.pendingBlock.ParentHash())
if err != nil {
panic("could not fetch parent")
}
// Check transaction validity
signer := types.MakeSigner(b.blockchain.Config(), block.Number())
sender, err := types.Sender(signer, tx)
if err != nil {
@ -570,8 +642,7 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
if tx.Nonce() != nonce {
panic(fmt.Errorf("invalid transaction nonce: got %d, want %d", tx.Nonce(), nonce))
}
// Include tx in chain.
// Include tx in chain
blocks, _ := core.GenerateChain(b.config, block, ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTxWithChain(b.blockchain, tx)
@ -716,6 +787,8 @@ func (m callMsg) Nonce() uint64 { return 0 }
func (m callMsg) CheckNonce() bool { return false }
func (m callMsg) To() *common.Address { return m.CallMsg.To }
func (m callMsg) GasPrice() *big.Int { return m.CallMsg.GasPrice }
func (m callMsg) GasFeeCap() *big.Int { return m.CallMsg.GasFeeCap }
func (m callMsg) GasTipCap() *big.Int { return m.CallMsg.GasTipCap }
func (m callMsg) Gas() uint64 { return m.CallMsg.Gas }
func (m callMsg) Value() *big.Int { return m.CallMsg.Value }
func (m callMsg) Data() []byte { return m.CallMsg.Data }

View File

@ -21,6 +21,7 @@ import (
"context"
"errors"
"math/big"
"math/rand"
"reflect"
"strings"
"testing"
@ -58,9 +59,12 @@ func TestSimulatedBackend(t *testing.T) {
}
// generate a transaction and confirm you can retrieve it
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
code := `6060604052600a8060106000396000f360606040526008565b00`
var gas uint64 = 3000000
tx := types.NewContractCreation(0, big.NewInt(0), gas, big.NewInt(1), common.FromHex(code))
tx := types.NewContractCreation(0, big.NewInt(0), gas, gasPrice, common.FromHex(code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, key)
err = sim.SendTransaction(context.Background(), tx)
@ -110,14 +114,14 @@ var expectedReturn = []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
func simTestBackend(testAddr common.Address) *SimulatedBackend {
return NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
testAddr: {Balance: big.NewInt(10000000000000000)},
}, 10000000,
)
}
func TestNewSimulatedBackend(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
expectedBal := big.NewInt(10000000000)
expectedBal := big.NewInt(10000000000000000)
sim := simTestBackend(testAddr)
defer sim.Close()
@ -136,7 +140,7 @@ func TestNewSimulatedBackend(t *testing.T) {
}
}
func TestSimulatedBackend_AdjustTime(t *testing.T) {
func TestAdjustTime(t *testing.T) {
sim := NewSimulatedBackend(
core.GenesisAlloc{}, 10000000,
)
@ -153,11 +157,15 @@ func TestSimulatedBackend_AdjustTime(t *testing.T) {
}
}
func TestNewSimulatedBackend_AdjustTimeFail(t *testing.T) {
func TestNewAdjustTimeFail(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
// Create tx and send
tx := types.NewTransaction(0, testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(0, testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -178,7 +186,7 @@ func TestNewSimulatedBackend_AdjustTimeFail(t *testing.T) {
t.Errorf("adjusted time not equal to a minute. prev: %v, new: %v", prevTime, newTime)
}
// Put a transaction after adjusting time
tx2 := types.NewTransaction(1, testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
tx2 := types.NewTransaction(1, testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx2, err := types.SignTx(tx2, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -191,9 +199,9 @@ func TestNewSimulatedBackend_AdjustTimeFail(t *testing.T) {
}
}
func TestSimulatedBackend_BalanceAt(t *testing.T) {
func TestBalanceAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
expectedBal := big.NewInt(10000000000)
expectedBal := big.NewInt(10000000000000000)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -208,7 +216,7 @@ func TestSimulatedBackend_BalanceAt(t *testing.T) {
}
}
func TestSimulatedBackend_BlockByHash(t *testing.T) {
func TestBlockByHash(t *testing.T) {
sim := NewSimulatedBackend(
core.GenesisAlloc{}, 10000000,
)
@ -229,7 +237,7 @@ func TestSimulatedBackend_BlockByHash(t *testing.T) {
}
}
func TestSimulatedBackend_BlockByNumber(t *testing.T) {
func TestBlockByNumber(t *testing.T) {
sim := NewSimulatedBackend(
core.GenesisAlloc{}, 10000000,
)
@ -264,7 +272,7 @@ func TestSimulatedBackend_BlockByNumber(t *testing.T) {
}
}
func TestSimulatedBackend_NonceAt(t *testing.T) {
func TestNonceAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -281,7 +289,10 @@ func TestSimulatedBackend_NonceAt(t *testing.T) {
}
// create a signed transaction to send
tx := types.NewTransaction(nonce, testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(nonce, testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -314,7 +325,7 @@ func TestSimulatedBackend_NonceAt(t *testing.T) {
}
}
func TestSimulatedBackend_SendTransaction(t *testing.T) {
func TestSendTransaction(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -322,7 +333,10 @@ func TestSimulatedBackend_SendTransaction(t *testing.T) {
bgCtx := context.Background()
// create a signed transaction to send
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -345,19 +359,22 @@ func TestSimulatedBackend_SendTransaction(t *testing.T) {
}
}
func TestSimulatedBackend_TransactionByHash(t *testing.T) {
func TestTransactionByHash(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
testAddr: {Balance: big.NewInt(10000000000000000)},
}, 10000000,
)
defer sim.Close()
bgCtx := context.Background()
// create a signed transaction to send
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -396,7 +413,7 @@ func TestSimulatedBackend_TransactionByHash(t *testing.T) {
}
}
func TestSimulatedBackend_EstimateGas(t *testing.T) {
func TestEstimateGas(t *testing.T) {
/*
pragma solidity ^0.6.4;
contract GasEstimation {
@ -514,7 +531,7 @@ func TestSimulatedBackend_EstimateGas(t *testing.T) {
}
}
func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
func TestEstimateGasWithPrice(t *testing.T) {
key, _ := crypto.GenerateKey()
addr := crypto.PubkeyToAddress(key.PublicKey)
@ -533,7 +550,7 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(0),
Value: big.NewInt(1000),
Value: big.NewInt(100000000000),
Data: nil,
}, 21000, nil},
@ -541,8 +558,8 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
From: addr,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(1000),
Value: big.NewInt(1000),
GasPrice: big.NewInt(100000000000),
Value: big.NewInt(100000000000),
Data: nil,
}, 21000, nil},
@ -560,28 +577,28 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(2e14), // gascost = 4.2ether
Value: big.NewInt(1000),
Value: big.NewInt(100000000000),
Data: nil,
}, 21000, errors.New("gas required exceeds allowance (10999)")}, // 10999=(2.2ether-1000wei)/(2e14)
}
for _, c := range cases {
for i, c := range cases {
got, err := sim.EstimateGas(context.Background(), c.message)
if c.expectError != nil {
if err == nil {
t.Fatalf("Expect error, got nil")
t.Fatalf("test %d: expect error, got nil", i)
}
if c.expectError.Error() != err.Error() {
t.Fatalf("Expect error, want %v, got %v", c.expectError, err)
t.Fatalf("test %d: expect error, want %v, got %v", i, c.expectError, err)
}
continue
}
if got != c.expect {
t.Fatalf("Gas estimation mismatch, want %d, got %d", c.expect, got)
t.Fatalf("test %d: gas estimation mismatch, want %d, got %d", i, c.expect, got)
}
}
}
func TestSimulatedBackend_HeaderByHash(t *testing.T) {
func TestHeaderByHash(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -602,7 +619,7 @@ func TestSimulatedBackend_HeaderByHash(t *testing.T) {
}
}
func TestSimulatedBackend_HeaderByNumber(t *testing.T) {
func TestHeaderByNumber(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -649,7 +666,7 @@ func TestSimulatedBackend_HeaderByNumber(t *testing.T) {
}
}
func TestSimulatedBackend_TransactionCount(t *testing.T) {
func TestTransactionCount(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -668,9 +685,11 @@ func TestSimulatedBackend_TransactionCount(t *testing.T) {
if count != 0 {
t.Errorf("expected transaction count of %v does not match actual count of %v", 0, count)
}
// create a signed transaction to send
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -699,7 +718,7 @@ func TestSimulatedBackend_TransactionCount(t *testing.T) {
}
}
func TestSimulatedBackend_TransactionInBlock(t *testing.T) {
func TestTransactionInBlock(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -723,9 +742,11 @@ func TestSimulatedBackend_TransactionInBlock(t *testing.T) {
if pendingNonce != uint64(0) {
t.Errorf("expected pending nonce of 0 got %v", pendingNonce)
}
// create a signed transaction to send
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -762,7 +783,7 @@ func TestSimulatedBackend_TransactionInBlock(t *testing.T) {
}
}
func TestSimulatedBackend_PendingNonceAt(t *testing.T) {
func TestPendingNonceAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -780,7 +801,10 @@ func TestSimulatedBackend_PendingNonceAt(t *testing.T) {
}
// create a signed transaction to send
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -803,7 +827,7 @@ func TestSimulatedBackend_PendingNonceAt(t *testing.T) {
}
// make a new transaction with a nonce of 1
tx = types.NewTransaction(uint64(1), testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
tx = types.NewTransaction(uint64(1), testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err = types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -824,7 +848,7 @@ func TestSimulatedBackend_PendingNonceAt(t *testing.T) {
}
}
func TestSimulatedBackend_TransactionReceipt(t *testing.T) {
func TestTransactionReceipt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
@ -832,7 +856,10 @@ func TestSimulatedBackend_TransactionReceipt(t *testing.T) {
bgCtx := context.Background()
// create a signed transaction to send
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewTransaction(uint64(0), testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
@ -855,7 +882,7 @@ func TestSimulatedBackend_TransactionReceipt(t *testing.T) {
}
}
func TestSimulatedBackend_SuggestGasPrice(t *testing.T) {
func TestSuggestGasPrice(t *testing.T) {
sim := NewSimulatedBackend(
core.GenesisAlloc{},
10000000,
@ -871,7 +898,7 @@ func TestSimulatedBackend_SuggestGasPrice(t *testing.T) {
}
}
func TestSimulatedBackend_PendingCodeAt(t *testing.T) {
func TestPendingCodeAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
@ -907,7 +934,7 @@ func TestSimulatedBackend_PendingCodeAt(t *testing.T) {
}
}
func TestSimulatedBackend_CodeAt(t *testing.T) {
func TestCodeAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
@ -946,7 +973,7 @@ func TestSimulatedBackend_CodeAt(t *testing.T) {
// When receive("X") is called with sender 0x00... and value 1, it produces this tx receipt:
// receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
func TestSimulatedBackend_PendingAndCallContract(t *testing.T) {
func TestPendingAndCallContract(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
@ -1030,7 +1057,7 @@ contract Reverter {
}
}
}*/
func TestSimulatedBackend_CallContractRevert(t *testing.T) {
func TestCallContractRevert(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
@ -1114,3 +1141,175 @@ func TestSimulatedBackend_CallContractRevert(t *testing.T) {
sim.Commit()
}
}
// TestFork checks that the chain length after a reorg is correct.
// Steps:
// 1. Save the current block which will serve as parent for the fork.
// 2. Mine n blocks with n ∈ [0, 20].
// 3. Assert that the chain length is n.
// 4. Fork by using the parent block as ancestor.
// 5. Mine n+1 blocks which should trigger a reorg.
// 6. Assert that the chain length is n+1.
// Since Commit() was called 2n+1 times in total,
// having a chain length of just n+1 means that a reorg occurred.
func TestFork(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
// 1.
parent := sim.blockchain.CurrentBlock()
// 2.
n := int(rand.Int31n(21))
for i := 0; i < n; i++ {
sim.Commit()
}
// 3.
if sim.blockchain.CurrentBlock().NumberU64() != uint64(n) {
t.Error("wrong chain length")
}
// 4.
sim.Fork(context.Background(), parent.Hash())
// 5.
for i := 0; i < n+1; i++ {
sim.Commit()
}
// 6.
if sim.blockchain.CurrentBlock().NumberU64() != uint64(n+1) {
t.Error("wrong chain length")
}
}
/*
Example contract to test event emission:
pragma solidity >=0.7.0 <0.9.0;
contract Callable {
event Called();
function Call() public { emit Called(); }
}
*/
const callableAbi = "[{\"anonymous\":false,\"inputs\":[],\"name\":\"Called\",\"type\":\"event\"},{\"inputs\":[],\"name\":\"Call\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}]"
const callableBin = "6080604052348015600f57600080fd5b5060998061001e6000396000f3fe6080604052348015600f57600080fd5b506004361060285760003560e01c806334e2292114602d575b600080fd5b60336035565b005b7f81fab7a4a0aa961db47eefc81f143a5220e8c8495260dd65b1356f1d19d3c7b860405160405180910390a156fea2646970667358221220029436d24f3ac598ceca41d4d712e13ced6d70727f4cdc580667de66d2f51d8b64736f6c63430008010033"
// TestForkLogsReborn checks that the simulated reorgs
// correctly remove and re-add (reborn) logs.
// Steps:
// 1. Deploy the Callable contract.
// 2. Set up an event subscription.
// 3. Save the current block which will serve as parent for the fork.
// 4. Send a transaction.
// 5. Check that the event was included.
// 6. Fork by using the parent block as ancestor.
// 7. Mine two blocks to trigger a reorg.
// 8. Check that the event was removed.
// 9. Re-send the transaction and mine a block.
// 10. Check that the event was reborn.
func TestForkLogsReborn(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
// 1.
parsed, _ := abi.JSON(strings.NewReader(callableAbi))
auth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
_, _, contract, err := bind.DeployContract(auth, parsed, common.FromHex(callableBin), sim)
if err != nil {
t.Errorf("deploying contract: %v", err)
}
sim.Commit()
// 2.
logs, sub, err := contract.WatchLogs(nil, "Called")
if err != nil {
t.Errorf("watching logs: %v", err)
}
defer sub.Unsubscribe()
// 3.
parent := sim.blockchain.CurrentBlock()
// 4.
tx, err := contract.Transact(auth, "Call")
if err != nil {
t.Errorf("transacting: %v", err)
}
sim.Commit()
// 5.
log := <-logs
if log.TxHash != tx.Hash() {
t.Error("wrong event tx hash")
}
if log.Removed {
t.Error("Event should be included")
}
// 6.
if err := sim.Fork(context.Background(), parent.Hash()); err != nil {
t.Errorf("forking: %v", err)
}
// 7.
sim.Commit()
sim.Commit()
// 8.
log = <-logs
if log.TxHash != tx.Hash() {
t.Error("wrong event tx hash")
}
if !log.Removed {
t.Error("Event should be removed")
}
// 9.
if err := sim.SendTransaction(context.Background(), tx); err != nil {
t.Errorf("sending transaction: %v", err)
}
sim.Commit()
// 10.
log = <-logs
if log.TxHash != tx.Hash() {
t.Error("wrong event tx hash")
}
if log.Removed {
t.Error("Event should be included")
}
}
// TestForkResendTx checks that re-sending a TX after a fork
// is possible and does not cause a "nonce mismatch" panic.
// Steps:
// 1. Save the current block which will serve as parent for the fork.
// 2. Send a transaction.
// 3. Check that the TX is included in block 1.
// 4. Fork by using the parent block as ancestor.
// 5. Mine a block, Re-send the transaction and mine another one.
// 6. Check that the TX is now included in block 2.
func TestForkResendTx(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
// 1.
parent := sim.blockchain.CurrentBlock()
// 2.
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
_tx := types.NewTransaction(0, testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
tx, _ := types.SignTx(_tx, types.HomesteadSigner{}, testKey)
sim.SendTransaction(context.Background(), tx)
sim.Commit()
// 3.
receipt, _ := sim.TransactionReceipt(context.Background(), tx.Hash())
if h := receipt.BlockNumber.Uint64(); h != 1 {
t.Errorf("TX included in wrong block: %d", h)
}
// 4.
if err := sim.Fork(context.Background(), parent.Hash()); err != nil {
t.Errorf("forking: %v", err)
}
// 5.
sim.Commit()
if err := sim.SendTransaction(context.Background(), tx); err != nil {
t.Errorf("sending transaction: %v", err)
}
sim.Commit()
// 6.
receipt, _ = sim.TransactionReceipt(context.Background(), tx.Hash())
if h := receipt.BlockNumber.Uint64(); h != 2 {
t.Errorf("TX included in wrong block: %d", h)
}
}

View File

@ -49,9 +49,11 @@ type TransactOpts struct {
Nonce *big.Int // Nonce to use for the transaction execution (nil = use pending state)
Signer SignerFn // Method to use for signing the transaction (mandatory)
Value *big.Int // Funds to transfer along the transaction (nil = 0 = no funds)
GasPrice *big.Int // Gas price to use for the transaction execution (nil = gas price oracle)
GasLimit uint64 // Gas limit to set for the transaction execution (0 = estimate)
Value *big.Int // Funds to transfer along the transaction (nil = 0 = no funds)
GasPrice *big.Int // Gas price to use for the transaction execution (nil = gas price oracle)
GasFeeCap *big.Int // Gas fee cap to use for the 1559 transaction execution (nil = gas price oracle)
GasTipCap *big.Int // Gas priority fee cap to use for the 1559 transaction execution (nil = gas price oracle)
GasLimit uint64 // Gas limit to set for the transaction execution (0 = estimate)
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
@ -223,12 +225,42 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
} else {
nonce = opts.Nonce.Uint64()
}
// Figure out the gas allowance and gas price values
gasPrice := opts.GasPrice
if gasPrice == nil {
gasPrice, err = c.transactor.SuggestGasPrice(ensureContext(opts.Context))
if err != nil {
return nil, fmt.Errorf("failed to suggest gas price: %v", err)
// Figure out reasonable gas price values
if opts.GasPrice != nil && (opts.GasFeeCap != nil || opts.GasTipCap != nil) {
return nil, errors.New("both gasPrice and (maxFeePerGas or maxPriorityFeePerGas) specified")
}
head, err := c.transactor.HeaderByNumber(ensureContext(opts.Context), nil)
if err != nil {
return nil, err
}
if head.BaseFee != nil && opts.GasPrice == nil {
if opts.GasTipCap == nil {
tip, err := c.transactor.SuggestGasTipCap(ensureContext(opts.Context))
if err != nil {
return nil, err
}
opts.GasTipCap = tip
}
if opts.GasFeeCap == nil {
gasFeeCap := new(big.Int).Add(
opts.GasTipCap,
new(big.Int).Mul(head.BaseFee, big.NewInt(2)),
)
opts.GasFeeCap = gasFeeCap
}
if opts.GasFeeCap.Cmp(opts.GasTipCap) < 0 {
return nil, fmt.Errorf("maxFeePerGas (%v) < maxPriorityFeePerGas (%v)", opts.GasFeeCap, opts.GasTipCap)
}
} else {
if opts.GasFeeCap != nil || opts.GasTipCap != nil {
return nil, errors.New("maxFeePerGas or maxPriorityFeePerGas specified but london is not active yet")
}
if opts.GasPrice == nil {
price, err := c.transactor.SuggestGasPrice(ensureContext(opts.Context))
if err != nil {
return nil, err
}
opts.GasPrice = price
}
}
gasLimit := opts.GasLimit
@ -242,7 +274,7 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
}
}
// If the contract surely has code (or code is not needed), estimate the transaction
msg := ethereum.CallMsg{From: opts.From, To: contract, GasPrice: gasPrice, Value: value, Data: input}
msg := ethereum.CallMsg{From: opts.From, To: contract, GasPrice: opts.GasPrice, GasTipCap: opts.GasTipCap, GasFeeCap: opts.GasFeeCap, Value: value, Data: input}
gasLimit, err = c.transactor.EstimateGas(ensureContext(opts.Context), msg)
if err != nil {
return nil, fmt.Errorf("failed to estimate gas needed: %v", err)
@ -250,10 +282,31 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
}
// Create the transaction, sign it and schedule it for execution
var rawTx *types.Transaction
if contract == nil {
rawTx = types.NewContractCreation(nonce, value, gasLimit, gasPrice, input)
if opts.GasFeeCap == nil {
baseTx := &types.LegacyTx{
Nonce: nonce,
GasPrice: opts.GasPrice,
Gas: gasLimit,
Value: value,
Data: input,
}
if contract != nil {
baseTx.To = &c.address
}
rawTx = types.NewTx(baseTx)
} else {
rawTx = types.NewTransaction(nonce, c.address, value, gasLimit, gasPrice, input)
baseTx := &types.DynamicFeeTx{
Nonce: nonce,
GasFeeCap: opts.GasFeeCap,
GasTipCap: opts.GasTipCap,
Gas: gasLimit,
Value: value,
Data: input,
}
if contract != nil {
baseTx.To = &c.address
}
rawTx = types.NewTx(baseTx)
}
if opts.Signer == nil {
return nil, errors.New("no signer to authorize the transaction with")
@ -387,7 +440,7 @@ func (c *BoundContract) UnpackLogIntoMap(out map[string]interface{}, event strin
// user specified it as such.
func ensureContext(ctx context.Context) context.Context {
if ctx == nil {
return context.TODO()
return context.Background()
}
return ctx
}
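For callers, the practical upshot is that TransactOpts can now carry the dynamic-fee fields directly. A minimal sketch of pinning them explicitly; the fee values are arbitrary examples, and leaving the fields nil lets transact() fall back to the oracle logic above (`newLondonOpts` is a made-up helper):

```go
package transactsketch

import (
	"crypto/ecdsa"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
)

// newLondonOpts builds TransactOpts with explicit dynamic-fee settings.
// GasPrice is deliberately left nil: mixing it with the 1559 fields is an error.
func newLondonOpts(key *ecdsa.PrivateKey, chainID *big.Int) (*bind.TransactOpts, error) {
	opts, err := bind.NewKeyedTransactorWithChainID(key, chainID)
	if err != nil {
		return nil, err
	}
	opts.GasTipCap = big.NewInt(2_000_000_000)   // 2 gwei priority fee
	opts.GasFeeCap = big.NewInt(100_000_000_000) // 100 gwei overall cap
	return opts, nil
}
```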

View File

@ -298,7 +298,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy an interaction tester contract and call a transaction on it
@ -353,7 +353,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a tuple tester contract and execute a structured call on it
@ -399,7 +399,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a tuple tester contract and execute a structured call on it
@ -457,7 +457,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a slice tester contract and execute a n array call on it
@ -505,7 +505,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a default method invoker contract and execute its default method
@ -571,7 +571,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a structs method invoker contract and execute its default method
@ -703,7 +703,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a funky gas pattern contract
@ -753,7 +753,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a sender tester contract and execute a structured call on it
@ -828,7 +828,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a underscorer tester contract and execute a structured call on it
@ -922,7 +922,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy an eventer contract
@ -1112,7 +1112,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
//deploy the test contract
@ -1247,7 +1247,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
_, _, contract, err := DeployTuple(auth, sim)
@ -1389,7 +1389,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
//deploy the test contract
@ -1454,7 +1454,7 @@ var bindTests = []struct {
// Initialize test accounts
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// deploy the test contract
@ -1544,7 +1544,7 @@ var bindTests = []struct {
addr := crypto.PubkeyToAddress(key.PublicKey)
// Deploy registrar contract
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
transactOpts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
@ -1606,7 +1606,7 @@ var bindTests = []struct {
addr := crypto.PubkeyToAddress(key.PublicKey)
// Deploy registrar contract
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
transactOpts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
@ -1668,7 +1668,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}}, 10000000)
defer sim.Close()
// Deploy a tester contract and execute a structured call on it
@ -1728,7 +1728,7 @@ var bindTests = []struct {
key, _ := crypto.GenerateKey()
addr := crypto.PubkeyToAddress(key.PublicKey)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 1000000)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(10000000000000000)}}, 1000000)
defer sim.Close()
opts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))

View File

@ -56,14 +56,17 @@ func TestWaitDeployed(t *testing.T) {
for name, test := range waitDeployedTests {
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000000000)},
},
10000000,
)
defer backend.Close()
// Create the transaction.
tx := types.NewContractCreation(0, big.NewInt(0), test.gas, big.NewInt(1), common.FromHex(test.code))
// Create the transaction
head, _ := backend.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
tx := types.NewContractCreation(0, big.NewInt(0), test.gas, gasPrice, common.FromHex(test.code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
// Wait for it to get mined in the background.
@ -99,15 +102,18 @@ func TestWaitDeployed(t *testing.T) {
func TestWaitDeployedCornerCases(t *testing.T) {
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000000000)},
},
10000000,
)
defer backend.Close()
head, _ := backend.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
// Create a transaction to an account.
code := "6060604052600a8060106000396000f360606040526008565b00"
tx := types.NewTransaction(0, common.HexToAddress("0x01"), big.NewInt(0), 3000000, big.NewInt(1), common.FromHex(code))
tx := types.NewTransaction(0, common.HexToAddress("0x01"), big.NewInt(0), 3000000, gasPrice, common.FromHex(code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -119,7 +125,7 @@ func TestWaitDeployedCornerCases(t *testing.T) {
}
// Create a transaction that is not mined.
tx = types.NewContractCreation(1, big.NewInt(0), 3000000, big.NewInt(1), common.FromHex(code))
tx = types.NewContractCreation(1, big.NewInt(0), 3000000, gasPrice, common.FromHex(code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
go func() {
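
Editor's note: the balance bumps and gas-price changes in the two files above share one cause — the simulated backend's headers now carry a base fee (see `head.BaseFee` in the hunk), so transactions must outbid it and the old 1-wei gas price and small 10000000000-wei balances no longer suffice. A minimal, self-contained sketch of the pattern these tests now follow (identifiers and values are illustrative, not part of the diff):

```
package main

import (
	"context"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))

	// Fund the test account generously: with a non-zero base fee, the old
	// 10000000000 wei balance is exhausted after only a handful of calls.
	sim := backends.NewSimulatedBackend(
		core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000000000)}},
		10000000,
	)
	defer sim.Close()

	// Price legacy transactions just above the pending block's base fee,
	// mirroring the util_test change above.
	head, _ := sim.HeaderByNumber(context.Background(), nil)
	gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
	fmt.Println("gas price to use:", gasPrice)
}
```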

View File

@ -113,7 +113,7 @@ type Wallet interface {
SignData(account Account, mimeType string, data []byte) ([]byte, error)
// SignDataWithPassphrase is identical to SignData, but also takes a password
// NOTE: there's an chance that an erroneous call might mistake the two strings, and
// NOTE: there's a chance that an erroneous call might mistake the two strings, and
// supply password in the mimetype field, or vice versa. Thus, an implementation
// should never echo the mimetype or return the mimetype in the error-response
SignDataWithPassphrase(account Account, passphrase, mimeType string, data []byte) ([]byte, error)
@ -127,7 +127,7 @@ type Wallet interface {
// a password to decrypt the account, or a PIN code to verify the transaction),
// an AuthNeededError instance will be returned, containing infos for the user
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignHashWithPassphrase, or by other means (e.g. unlock
// the needed details via SignTextWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
//
// This method should return the signature in 'canonical' format, with v 0 or 1
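
Editor's note: since the comment above warns that the passphrase and mimetype arguments are both plain strings and easy to swap, here is a hedged usage sketch against the keystore-backed wallet; the temp directory and passphrase are made up for illustration:

```
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/ethereum/go-ethereum/accounts/keystore"
)

func main() {
	dir, _ := ioutil.TempDir("", "keystore-demo")
	ks := keystore.NewKeyStore(dir, keystore.LightScryptN, keystore.LightScryptP)

	const passphrase = "demo-passphrase" // illustrative only
	acct, err := ks.NewAccount(passphrase)
	if err != nil {
		log.Fatal(err)
	}

	// Argument order matters: account, passphrase, mimeType, data.
	wallet := ks.Wallets()[0]
	sig, err := wallet.SignDataWithPassphrase(acct, passphrase, "text/plain", []byte("hello"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("signature: %x\n", sig)
}
```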

View File

@ -204,13 +204,32 @@ func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transactio
to = &t
}
args := &core.SendTxArgs{
Data: &data,
Nonce: hexutil.Uint64(tx.Nonce()),
Value: hexutil.Big(*tx.Value()),
Gas: hexutil.Uint64(tx.Gas()),
GasPrice: hexutil.Big(*tx.GasPrice()),
To: to,
From: common.NewMixedcaseAddress(account.Address),
Data: &data,
Nonce: hexutil.Uint64(tx.Nonce()),
Value: hexutil.Big(*tx.Value()),
Gas: hexutil.Uint64(tx.Gas()),
To: to,
From: common.NewMixedcaseAddress(account.Address),
}
if tx.GasFeeCap() != nil {
args.MaxFeePerGas = (*hexutil.Big)(tx.GasFeeCap())
args.MaxPriorityFeePerGas = (*hexutil.Big)(tx.GasTipCap())
} else {
args.GasPrice = (*hexutil.Big)(tx.GasPrice())
}
// We should request the default chain id that we're operating with
// (the chain we're executing on)
if chainID != nil {
args.ChainID = (*hexutil.Big)(chainID)
}
if tx.Type() != types.LegacyTxType {
// However, if the user asked for a particular chain id, then we should
// use that instead.
if tx.ChainId() != nil {
args.ChainID = (*hexutil.Big)(tx.ChainId())
}
accessList := tx.AccessList()
args.AccessList = &accessList
}
var res signTransactionResult
if err := api.client.Call(&res, "account_signTransaction", args); err != nil {
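
Editor's note: to make the branch above concrete, here is a hedged sketch of the two transaction shapes involved. A dynamic-fee (EIP-1559) transaction carries the fee cap and tip cap that end up in MaxFeePerGas/MaxPriorityFeePerGas (and its access list is forwarded), while a legacy transaction only carries a gas price. Values are illustrative:

```
package main

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

func main() {
	to := common.HexToAddress("0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192")

	// Legacy transaction: only a gas price, no fee cap / tip cap fields.
	_ = types.NewTx(&types.LegacyTx{
		Nonce:    0,
		GasPrice: big.NewInt(1_000_000_000),
		Gas:      21000,
		To:       &to,
		Value:    big.NewInt(1),
	})

	// Dynamic-fee transaction: GasFeeCap/GasTipCap map onto the
	// MaxFeePerGas/MaxPriorityFeePerGas fields populated above.
	_ = types.NewTx(&types.DynamicFeeTx{
		ChainID:   big.NewInt(1337),
		Nonce:     0,
		GasTipCap: big.NewInt(1_000_000_000),
		GasFeeCap: big.NewInt(2_000_000_000),
		Gas:       21000,
		To:        &to,
		Value:     big.NewInt(1),
	})
}
```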

View File

@ -64,7 +64,7 @@ func (u URL) String() string {
func (u URL) TerminalString() string {
url := u.String()
if len(url) > 32 {
return url[:31] + "…"
return url[:31] + ".."
}
return url
}

View File

@ -1,41 +1,29 @@
os: Visual Studio 2015
# Clone directly into GOPATH.
clone_folder: C:\gopath\src\github.com\ethereum\go-ethereum
os: Visual Studio 2019
clone_depth: 5
version: "{branch}.{build}"
environment:
global:
GO111MODULE: on
GOPATH: C:\gopath
CC: gcc.exe
matrix:
# We use gcc from MSYS2 because it is the most recent compiler version available on
# AppVeyor. Note: gcc.exe only works properly if the corresponding bin/ directory is
# contained in PATH.
- GETH_ARCH: amd64
MSYS2_ARCH: x86_64
MSYS2_BITS: 64
MSYSTEM: MINGW64
PATH: C:\msys64\mingw64\bin\;C:\Program Files (x86)\NSIS\;%PATH%
GETH_CC: C:\msys64\mingw64\bin\gcc.exe
PATH: C:\msys64\mingw64\bin;C:\Program Files (x86)\NSIS\;%PATH%
- GETH_ARCH: 386
MSYS2_ARCH: i686
MSYS2_BITS: 32
MSYSTEM: MINGW32
PATH: C:\msys64\mingw32\bin\;C:\Program Files (x86)\NSIS\;%PATH%
GETH_CC: C:\msys64\mingw32\bin\gcc.exe
PATH: C:\msys64\mingw32\bin;C:\Program Files (x86)\NSIS\;%PATH%
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://dl.google.com/go/go1.16.windows-%GETH_ARCH%.zip
- 7z x go1.16.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- git submodule update --init --depth 1
- go version
- gcc --version
- "%GETH_CC% --version"
build_script:
- go run build\ci.go install -dlgo
- go run build\ci.go install -dlgo -arch %GETH_ARCH% -cc %GETH_CC%
after_build:
- go run build\ci.go archive -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
- go run build\ci.go nsis -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
- go run build\ci.go archive -arch %GETH_ARCH% -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
- go run build\ci.go nsis -arch %GETH_ARCH% -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
test_script:
- set CGO_ENABLED=1
- go run build\ci.go test -coverage
- go run build\ci.go test -dlgo -arch %GETH_ARCH% -cc %GETH_CC% -coverage

View File

@ -1,31 +1,33 @@
# This file contains sha256 checksums of optional build dependencies.
7688063d55656105898f323d90a79a39c378d86fe89ae192eb3b7fc46347c95a go1.16.src.tar.gz
6000a9522975d116bf76044967d7e69e04e982e9625330d9a539a8b45395f9a8 go1.16.darwin-amd64.tar.gz
ea435a1ac6d497b03e367fdfb74b33e961d813883468080f6e239b3b03bea6aa go1.16.linux-386.tar.gz
013a489ebb3e24ef3d915abe5b94c3286c070dfe0818d5bca8108f1d6e8440d2 go1.16.linux-amd64.tar.gz
3770f7eb22d05e25fbee8fb53c2a4e897da043eb83c69b9a14f8d98562cd8098 go1.16.linux-arm64.tar.gz
d1d9404b1dbd77afa2bdc70934e10fbfcf7d785c372efc29462bb7d83d0a32fd go1.16.linux-armv6l.tar.gz
481492a17d42193d471b93b7a06da3555331bd833b76336afc87be820c48933f go1.16.windows-386.zip
5cc88fa506b3d5c453c54c3ea218fc8dd05d7362ae1de15bb67986b72089ce93 go1.16.windows-amd64.zip
d7d6c70b05a7c2f68b48aab5ab8cb5116b8444c9ddad131673b152e7cff7c726 go1.16.freebsd-386.tar.gz
40b03216f6945fb6883a50604fc7f409a83f62171607229a9c598e701e684f8a go1.16.freebsd-amd64.tar.gz
27a1aaa988e930b7932ce459c8a63ad5b3333b3a06b016d87ff289f2a11aacd6 go1.16.linux-ppc64le.tar.gz
be4c9e4e2cf058efc4e3eb013a760cb989ddc4362f111950c990d1c63b27ccbe go1.16.linux-s390x.tar.gz
ae4f6b6e2a1677d31817984655a762074b5356da50fb58722b99104870d43503 go1.16.4.src.tar.gz
18fe94775763db3878717393b6d41371b0b45206055e49b3838328120c977d13 go1.16.4.darwin-amd64.tar.gz
cb6b972cc42e669f3585c648198cd5b6f6d7a0811d413ad64b50c02ba06ccc3a go1.16.4.darwin-arm64.tar.gz
cd1b146ef6e9006f27dd99e9687773e7fef30e8c985b7d41bff33e955a3bb53a go1.16.4.linux-386.tar.gz
7154e88f5a8047aad4b80ebace58a059e36e7e2e4eb3b383127a28c711b4ff59 go1.16.4.linux-amd64.tar.gz
8b18eb05ddda2652d69ab1b1dd1f40dd731799f43c6a58b512ad01ae5b5bba21 go1.16.4.linux-arm64.tar.gz
a53391a800ddec749ee90d38992babb27b95cfb864027350c737b9aa8e069494 go1.16.4.linux-armv6l.tar.gz
e75c0b114a09eb5499874162b208931dc260de0fedaeedac8621bf263c974605 go1.16.4.windows-386.zip
d40139b7ade8a3008e3240a6f86fe8f899a9c465c917e11dac8758af216f5eb0 go1.16.4.windows-amd64.zip
7cf2bc8a175d6d656861165bfc554f92dc78d2abf5afe5631db3579555d97409 go1.16.4.freebsd-386.tar.gz
ccdd2b76de1941b60734408fda0d750aaa69330d8a07430eed4c56bdb3502f6f go1.16.4.freebsd-amd64.tar.gz
80cfac566e344096a8df8f37bbd21f89e76a6fbe601406565d71a87a665fc125 go1.16.4.linux-ppc64le.tar.gz
d6431881b3573dc29ecc24fbeab5e5ec25d8c9273aa543769c86a1a3bbac1ddf go1.16.4.linux-s390x.tar.gz
d998a84eea42f2271aca792a7b027ca5c1edfcba229e8e5a844c9ac3f336df35 golangci-lint-1.27.0-linux-armv7.tar.gz
bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint.exe-1.27.0-windows-amd64.zip
bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint-1.27.0-windows-amd64.zip
0e2a57d6ba709440d3ed018ef1037465fa010ed02595829092860e5cf863042e golangci-lint-1.27.0-freebsd-386.tar.gz
90205fc42ab5ed0096413e790d88ac9b4ed60f4c47e576d13dc0660f7ed4b013 golangci-lint-1.27.0-linux-arm64.tar.gz
8d345e4e88520e21c113d81978e89ad77fc5b13bfdf20e5bca86b83fc4261272 golangci-lint-1.27.0-linux-amd64.tar.gz
cc619634a77f18dc73df2a0725be13116d64328dc35131ca1737a850d6f76a59 golangci-lint-1.27.0-freebsd-armv7.tar.gz
fe683583cfc9eeec83e498c0d6159d87b5e1919dbe4b6c3b3913089642906069 golangci-lint-1.27.0-linux-s390x.tar.gz
058f5579bee75bdaacbaf75b75e1369f7ad877fd8b3b145aed17a17545de913e golangci-lint-1.27.0-freebsd-armv6.tar.gz
38e1e3dadbe3f56ab62b4de82ee0b88e8fad966d8dfd740a26ef94c2edef9818 golangci-lint-1.27.0-linux-armv6.tar.gz
071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 golangci-lint.exe-1.27.0-windows-386.zip
071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 golangci-lint-1.27.0-windows-386.zip
5f37e2b33914ecddb7cad38186ef4ec61d88172fc04f930fa0267c91151ff306 golangci-lint-1.27.0-linux-386.tar.gz
4d94cfb51fdebeb205f1d5a349ac2b683c30591c5150708073c1c329e15965f0 golangci-lint-1.27.0-freebsd-amd64.tar.gz
52572ba8ff07d5169c2365d3de3fec26dc55a97522094d13d1596199580fa281 golangci-lint-1.27.0-linux-ppc64le.tar.gz
3fb1a1683a29c6c0a8cd76135f62b606fbdd538d5a7aeab94af1af70ffdc2fd4 golangci-lint-1.27.0-darwin-amd64.tar.gz
7e9a47ab540aa3e8472fbf8120d28bed3b9d9cf625b955818e8bc69628d7187c golangci-lint-1.39.0-darwin-amd64.tar.gz
574daa2c9c299b01672a6daeb1873b5f12e413cdb6dc0e30f2ff163956778064 golangci-lint-1.39.0-darwin-arm64.tar.gz
6225f7014987324ab78e9b511f294e3f25be013728283c33918c67c8576d543e golangci-lint-1.39.0-freebsd-386.tar.gz
6b3e76e1e5eaf0159411c8e2727f8d533989d3bb19f10e9caa6e0b9619ee267d golangci-lint-1.39.0-freebsd-amd64.tar.gz
a301cacfff87ed9b00313d95278533c25a4527a06b040a17d969b4b7e1b8a90d golangci-lint-1.39.0-freebsd-armv7.tar.gz
25bfd96a29c3112f508d5e4fc860dbad7afce657233c343acfa20715717d51e7 golangci-lint-1.39.0-freebsd-armv6.tar.gz
9687e4ff15545cfc722b0e46107a94195166a505023b48a316579af25ad09505 golangci-lint-1.39.0-linux-armv7.tar.gz
a7fa7ab2bfc99cbe5e5bcbf5684f5a997f920afbbe2f253d2feb1001d5e3c8b3 golangci-lint-1.39.0-linux-armv6.tar.gz
c8f9634115beddb4ed9129c1f7ecd4c97c99d07aeef33e3707234097eeb51b7b golangci-lint-1.39.0-linux-mips64le.tar.gz
d1234c213b74751f1af413302dde0e9a6d4d29aecef034af7abb07dc1b6e887f golangci-lint-1.39.0-linux-arm64.tar.gz
df25d9267168323b163147acb823ab0215a8a3bb6898a4a9320afdfedde66817 golangci-lint-1.39.0-linux-386.tar.gz
1767e75fba357b7651b1a796d38453558f371c60af805505ec99e166908c04b5 golangci-lint-1.39.0-linux-ppc64le.tar.gz
25fd75bf3186b3d930ecae10185689968fd18fd8fa6f9f555d6beb04348c20f6 golangci-lint-1.39.0-linux-s390x.tar.gz
3a73aa7468087caa62673c8adea99b4e4dff846dc72707222db85f8679b40cbf golangci-lint-1.39.0-linux-amd64.tar.gz
578caceccf81739bda67dbfec52816709d03608c6878888ecdc0e186a094a41b golangci-lint-1.39.0-linux-mips64.tar.gz
494b66ba0e32c8ddf6c4f6b1d05729b110900f6017eda943057e43598c17d7a8 golangci-lint-1.39.0-windows-386.zip
52ec2e13a3cbb47147244dff8cfc35103563deb76e0459133058086fc35fb2c7 golangci-lint-1.39.0-windows-amd64.zip

View File

@ -54,6 +54,7 @@ import (
"path/filepath"
"regexp"
"runtime"
"strconv"
"strings"
"time"
@ -152,7 +153,7 @@ var (
// This is the version of go that will be downloaded by
//
// go run ci.go install -dlgo
dlgoVersion = "1.16"
dlgoVersion = "1.16.4"
)
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@ -182,6 +183,8 @@ func main() {
doLint(os.Args[2:])
case "archive":
doArchive(os.Args[2:])
case "docker":
doDocker(os.Args[2:])
case "debsrc":
doDebianSource(os.Args[2:])
case "nsis":
@ -208,58 +211,25 @@ func doInstall(cmdline []string) {
cc = flag.String("cc", "", "C compiler to cross build with")
)
flag.CommandLine.Parse(cmdline)
// Configure the toolchain.
tc := build.GoToolchain{GOARCH: *arch, CC: *cc}
if *dlgo {
csdb := build.MustLoadChecksums("build/checksums.txt")
tc.Root = build.DownloadGo(csdb, dlgoVersion)
}
// Configure the build.
env := build.Env()
// Check local Go version. People regularly open issues about compilation
// failure with outdated Go. This should save them the trouble.
if !strings.Contains(runtime.Version(), "devel") {
// Figure out the minor version number since we can't textually compare (1.10 < 1.9)
var minor int
fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
if minor < 13 {
log.Println("You have Go version", runtime.Version())
log.Println("go-ethereum requires at least Go version 1.13 and cannot")
log.Println("be compiled with an earlier version. Please upgrade your Go installation.")
os.Exit(1)
}
}
// Choose which go command we're going to use.
var gobuild *exec.Cmd
if !*dlgo {
// Default behavior: use the go version which runs ci.go right now.
gobuild = goTool("build")
} else {
// Download of Go requested. This is for build environments where the
// installed version is too old and cannot be upgraded easily.
cachedir := filepath.Join("build", "cache")
goroot := downloadGo(runtime.GOARCH, runtime.GOOS, cachedir)
gobuild = localGoTool(goroot, "build")
}
// Configure environment for cross build.
if *arch != "" || *arch != runtime.GOARCH {
gobuild.Env = append(gobuild.Env, "CGO_ENABLED=1")
gobuild.Env = append(gobuild.Env, "GOARCH="+*arch)
}
// Configure C compiler.
if *cc != "" {
gobuild.Env = append(gobuild.Env, "CC="+*cc)
} else if os.Getenv("CC") != "" {
gobuild.Env = append(gobuild.Env, "CC="+os.Getenv("CC"))
}
gobuild := tc.Go("build", buildFlags(env)...)
// arm64 CI builders are memory-constrained and can't handle concurrent builds,
// better disable it. This check isn't the best, it should probably
// check for something in env instead.
if runtime.GOARCH == "arm64" {
if env.CI && runtime.GOARCH == "arm64" {
gobuild.Args = append(gobuild.Args, "-p", "1")
}
// Put the default settings in.
gobuild.Args = append(gobuild.Args, buildFlags(env)...)
// We use -trimpath to avoid leaking local paths into the built executables.
gobuild.Args = append(gobuild.Args, "-trimpath")
@ -301,53 +271,30 @@ func buildFlags(env build.Environment) (flags []string) {
return flags
}
// goTool returns the go tool. This uses the Go version which runs ci.go.
func goTool(subcmd string, args ...string) *exec.Cmd {
cmd := build.GoTool(subcmd, args...)
goToolSetEnv(cmd)
return cmd
}
// localGoTool returns the go tool from the given GOROOT.
func localGoTool(goroot string, subcmd string, args ...string) *exec.Cmd {
gotool := filepath.Join(goroot, "bin", "go")
cmd := exec.Command(gotool, subcmd)
goToolSetEnv(cmd)
cmd.Env = append(cmd.Env, "GOROOT="+goroot)
cmd.Args = append(cmd.Args, args...)
return cmd
}
// goToolSetEnv forwards the build environment to the go tool.
func goToolSetEnv(cmd *exec.Cmd) {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOBIN=") || strings.HasPrefix(e, "CC=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
}
// Running The Tests
//
// "tests" also includes static analysis tools such as vet.
func doTest(cmdline []string) {
coverage := flag.Bool("coverage", false, "Whether to record code coverage")
verbose := flag.Bool("v", false, "Whether to log verbosely")
var (
dlgo = flag.Bool("dlgo", false, "Download Go and build with it")
arch = flag.String("arch", "", "Run tests for given architecture")
cc = flag.String("cc", "", "Sets C compiler binary")
coverage = flag.Bool("coverage", false, "Whether to record code coverage")
verbose = flag.Bool("v", false, "Whether to log verbosely")
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
packages := []string{"./..."}
if len(flag.CommandLine.Args()) > 0 {
packages = flag.CommandLine.Args()
// Configure the toolchain.
tc := build.GoToolchain{GOARCH: *arch, CC: *cc}
if *dlgo {
csdb := build.MustLoadChecksums("build/checksums.txt")
tc.Root = build.DownloadGo(csdb, dlgoVersion)
}
gotest := tc.Go("test")
// Run the actual tests.
// Test a single package at a time. CI builders are slow
// and some tests run into timeouts under load.
gotest := goTool("test", buildFlags(env)...)
gotest.Args = append(gotest.Args, "-p", "1")
if *coverage {
gotest.Args = append(gotest.Args, "-covermode=atomic", "-cover")
@ -356,6 +303,10 @@ func doTest(cmdline []string) {
gotest.Args = append(gotest.Args, "-v")
}
packages := []string{"./..."}
if len(flag.CommandLine.Args()) > 0 {
packages = flag.CommandLine.Args()
}
gotest.Args = append(gotest.Args, packages...)
build.MustRun(gotest)
}
@ -379,7 +330,7 @@ func doLint(cmdline []string) {
// downloadLinter downloads and unpacks golangci-lint.
func downloadLinter(cachedir string) string {
const version = "1.27.0"
const version = "1.39.0"
csdb := build.MustLoadChecksums("build/checksums.txt")
base := fmt.Sprintf("golangci-lint-%s-%s-%s", version, runtime.GOOS, runtime.GOARCH)
@ -415,8 +366,7 @@ func doArchive(cmdline []string) {
}
var (
env = build.Env()
env = build.Env()
basegeth = archiveBasename(*arch, params.ArchiveVersion(env.Commit))
geth = "geth-" + basegeth + ext
alltools = "geth-alltools-" + basegeth + ext
@ -492,19 +442,185 @@ func archiveUpload(archive string, blobstore string, signer string, signifyVar s
// skips archiving for some build configurations.
func maybeSkipArchive(env build.Environment) {
if env.IsPullRequest {
log.Printf("skipping because this is a PR build")
log.Printf("skipping archive creation because this is a PR build")
os.Exit(0)
}
if env.IsCronJob {
log.Printf("skipping because this is a cron job")
log.Printf("skipping archive creation because this is a cron job")
os.Exit(0)
}
if env.Branch != "master" && !strings.HasPrefix(env.Tag, "v1.") {
log.Printf("skipping because branch %q, tag %q is not on the whitelist", env.Branch, env.Tag)
log.Printf("skipping archive creation because branch %q, tag %q is not on the whitelist", env.Branch, env.Tag)
os.Exit(0)
}
}
// Builds the docker images and optionally uploads them to Docker Hub.
func doDocker(cmdline []string) {
var (
image = flag.Bool("image", false, `Whether to build and push an arch specific docker image`)
manifest = flag.String("manifest", "", `Push a multi-arch docker image for the specified architectures (usually "amd64,arm64")`)
upload = flag.String("upload", "", `Where to upload the docker image (usually "ethereum/client-go")`)
)
flag.CommandLine.Parse(cmdline)
// Skip building and pushing docker images for PR builds
env := build.Env()
maybeSkipArchive(env)
// Retrieve the upload credentials and authenticate
user := getenvBase64("DOCKER_HUB_USERNAME")
pass := getenvBase64("DOCKER_HUB_PASSWORD")
if len(user) > 0 && len(pass) > 0 {
auther := exec.Command("docker", "login", "-u", string(user), "--password-stdin")
auther.Stdin = bytes.NewReader(pass)
build.MustRun(auther)
}
// Retrieve the version infos to build and push to the following paths:
// - ethereum/client-go:latest - Pushes to the master branch, Geth only
// - ethereum/client-go:stable - Version tag publish on GitHub, Geth only
// - ethereum/client-go:alltools-latest - Pushes to the master branch, Geth & tools
// - ethereum/client-go:alltools-stable - Version tag publish on GitHub, Geth & tools
// - ethereum/client-go:release-<major>.<minor> - Version tag publish on GitHub, Geth only
// - ethereum/client-go:alltools-release-<major>.<minor> - Version tag publish on GitHub, Geth & tools
// - ethereum/client-go:v<major>.<minor>.<patch> - Version tag publish on GitHub, Geth only
// - ethereum/client-go:alltools-v<major>.<minor>.<patch> - Version tag publish on GitHub, Geth & tools
var tags []string
switch {
case env.Branch == "master":
tags = []string{"latest"}
case strings.HasPrefix(env.Tag, "v1."):
tags = []string{"stable", fmt.Sprintf("release-1.%d", params.VersionMinor), params.Version}
}
// If architecture specific image builds are requested, build and push them
if *image {
build.MustRunCommand("docker", "build", "--build-arg", "COMMIT="+env.Commit, "--build-arg", "VERSION="+params.VersionWithMeta, "--build-arg", "BUILDNUM="+env.Buildnum, "--tag", fmt.Sprintf("%s:TAG", *upload), ".")
build.MustRunCommand("docker", "build", "--build-arg", "COMMIT="+env.Commit, "--build-arg", "VERSION="+params.VersionWithMeta, "--build-arg", "BUILDNUM="+env.Buildnum, "--tag", fmt.Sprintf("%s:alltools-TAG", *upload), "-f", "Dockerfile.alltools", ".")
// Tag and upload the images to Docker Hub
for _, tag := range tags {
gethImage := fmt.Sprintf("%s:%s-%s", *upload, tag, runtime.GOARCH)
toolImage := fmt.Sprintf("%s:alltools-%s-%s", *upload, tag, runtime.GOARCH)
// If the image already exists (non version tag), check the build
// number to prevent overwriting a newer commit if concurrent builds
// are running. This is still a tiny bit racey if two publishes are
// done at the same time, but that's extremely unlikely even on the
// master branch.
for _, img := range []string{gethImage, toolImage} {
if exec.Command("docker", "pull", img).Run() != nil {
continue // Generally the only failure is a missing image, which is good
}
buildnum, err := exec.Command("docker", "inspect", "--format", "{{index .Config.Labels \"buildnum\"}}", img).CombinedOutput()
if err != nil {
log.Fatalf("Failed to inspect container: %v\nOutput: %s", err, string(buildnum))
}
buildnum = bytes.TrimSpace(buildnum)
if len(buildnum) > 0 && len(env.Buildnum) > 0 {
oldnum, err := strconv.Atoi(string(buildnum))
if err != nil {
log.Fatalf("Failed to parse old image build number: %v", err)
}
newnum, err := strconv.Atoi(env.Buildnum)
if err != nil {
log.Fatalf("Failed to parse current build number: %v", err)
}
if oldnum > newnum {
log.Fatalf("Current build number %d not newer than existing %d", newnum, oldnum)
} else {
log.Printf("Updating %s from build %d to %d", img, oldnum, newnum)
}
}
}
build.MustRunCommand("docker", "image", "tag", fmt.Sprintf("%s:TAG", *upload), gethImage)
build.MustRunCommand("docker", "image", "tag", fmt.Sprintf("%s:alltools-TAG", *upload), toolImage)
build.MustRunCommand("docker", "push", gethImage)
build.MustRunCommand("docker", "push", toolImage)
}
}
// If multi-arch image manifest push is requested, assemble it
if len(*manifest) != 0 {
// Since different architectures are pushed by different builders, wait
// until all required images are updated.
var mismatch bool
for i := 0; i < 2; i++ { // 2 attempts, second is race check
mismatch = false // hope there's no mismatch now
for _, tag := range tags {
for _, arch := range strings.Split(*manifest, ",") {
gethImage := fmt.Sprintf("%s:%s-%s", *upload, tag, arch)
toolImage := fmt.Sprintf("%s:alltools-%s-%s", *upload, tag, arch)
for _, img := range []string{gethImage, toolImage} {
if out, err := exec.Command("docker", "pull", img).CombinedOutput(); err != nil {
log.Printf("Required image %s unavailable: %v\nOutput: %s", img, err, out)
mismatch = true
break
}
buildnum, err := exec.Command("docker", "inspect", "--format", "{{index .Config.Labels \"buildnum\"}}", img).CombinedOutput()
if err != nil {
log.Fatalf("Failed to inspect container: %v\nOutput: %s", err, string(buildnum))
}
buildnum = bytes.TrimSpace(buildnum)
if string(buildnum) != env.Buildnum {
log.Printf("Build number mismatch on %s: want %s, have %s", img, env.Buildnum, buildnum)
mismatch = true
break
}
}
if mismatch {
break
}
}
if mismatch {
break
}
}
if mismatch {
// Build numbers mismatching, retry in a short time to
// avoid concurrent failures in both publisher images. If
// however the retry failed too, it means the concurrent
// builder is still crunching, let that do the publish.
if i == 0 {
time.Sleep(30 * time.Second)
}
continue
}
break
}
if mismatch {
log.Println("Relinquishing publish to other builder")
return
}
// Assemble and push the Geth manifest image
for _, tag := range tags {
gethImage := fmt.Sprintf("%s:%s", *upload, tag)
var gethSubImages []string
for _, arch := range strings.Split(*manifest, ",") {
gethSubImages = append(gethSubImages, gethImage+"-"+arch)
}
build.MustRunCommand("docker", append([]string{"manifest", "create", gethImage}, gethSubImages...)...)
build.MustRunCommand("docker", "manifest", "push", gethImage)
}
// Assemble and push the alltools manifest image
for _, tag := range tags {
toolImage := fmt.Sprintf("%s:alltools-%s", *upload, tag)
var toolSubImages []string
for _, arch := range strings.Split(*manifest, ",") {
toolSubImages = append(toolSubImages, toolImage+"-"+arch)
}
build.MustRunCommand("docker", append([]string{"manifest", "create", toolImage}, toolSubImages...)...)
build.MustRunCommand("docker", "manifest", "push", toolImage)
}
}
}
// Debian Packaging
func doDebianSource(cmdline []string) {
var (
@ -518,6 +634,7 @@ func doDebianSource(cmdline []string) {
flag.CommandLine.Parse(cmdline)
*workdir = makeWorkdir(*workdir)
env := build.Env()
tc := new(build.GoToolchain)
maybeSkipArchive(env)
// Import the signing key.
@ -531,12 +648,12 @@ func doDebianSource(cmdline []string) {
gobundle := downloadGoSources(*cachedir)
// Download all the dependencies needed to build the sources and run the ci script
srcdepfetch := goTool("mod", "download")
srcdepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath"))
srcdepfetch := tc.Go("mod", "download")
srcdepfetch.Env = append(srcdepfetch.Env, "GOPATH="+filepath.Join(*workdir, "modgopath"))
build.MustRun(srcdepfetch)
cidepfetch := goTool("run", "./build/ci.go")
cidepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath"))
cidepfetch := tc.Go("run", "./build/ci.go")
cidepfetch.Env = append(cidepfetch.Env, "GOPATH="+filepath.Join(*workdir, "modgopath"))
cidepfetch.Run() // Command fails, don't care, we only need the deps to start it
// Create Debian packages and upload them.
@ -592,41 +709,6 @@ func downloadGoSources(cachedir string) string {
return dst
}
// downloadGo downloads the Go binary distribution and unpacks it into a temporary
// directory. It returns the GOROOT of the unpacked toolchain.
func downloadGo(goarch, goos, cachedir string) string {
if goarch == "arm" {
goarch = "armv6l"
}
csdb := build.MustLoadChecksums("build/checksums.txt")
file := fmt.Sprintf("go%s.%s-%s", dlgoVersion, goos, goarch)
if goos == "windows" {
file += ".zip"
} else {
file += ".tar.gz"
}
url := "https://golang.org/dl/" + file
dst := filepath.Join(cachedir, file)
if err := csdb.DownloadFile(url, dst); err != nil {
log.Fatal(err)
}
ucache, err := os.UserCacheDir()
if err != nil {
log.Fatal(err)
}
godir := filepath.Join(ucache, fmt.Sprintf("geth-go-%s-%s-%s", dlgoVersion, goos, goarch))
if err := build.ExtractArchive(dst, godir); err != nil {
log.Fatal(err)
}
goroot, err := filepath.Abs(filepath.Join(godir, "go"))
if err != nil {
log.Fatal(err)
}
return goroot
}
func ppaUpload(workdir, ppa, sshUser string, files []string) {
p := strings.Split(ppa, "/")
if len(p) != 2 {
@ -901,13 +983,23 @@ func doAndroidArchive(cmdline []string) {
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
tc := new(build.GoToolchain)
// Sanity check that the SDK and NDK are installed and set
if os.Getenv("ANDROID_HOME") == "" {
log.Fatal("Please ensure ANDROID_HOME points to your Android SDK")
}
// Build gomobile.
install := tc.Install(GOBIN, "golang.org/x/mobile/cmd/gomobile@latest", "golang.org/x/mobile/cmd/gobind@latest")
install.Env = append(install.Env)
build.MustRun(install)
// Ensure all dependencies are available. This is required to make
// gomobile bind work because it expects go.sum to contain all checksums.
build.MustRun(tc.Go("mod", "download"))
// Build the Android archive and Maven resources
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("bind", "-ldflags", "-s -w", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
if *local {
@ -1027,10 +1119,16 @@ func doXCodeFramework(cmdline []string) {
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
tc := new(build.GoToolchain)
// Build gomobile.
build.MustRun(tc.Install(GOBIN, "golang.org/x/mobile/cmd/gomobile@latest", "golang.org/x/mobile/cmd/gobind@latest"))
// Ensure all dependencies are available. This is required to make
// gomobile bind work because it expects go.sum to contain all checksums.
build.MustRun(tc.Go("mod", "download"))
// Build the iOS XCode framework
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("init"))
bind := gomobileTool("bind", "-ldflags", "-s -w", "--target", "ios", "-v", "github.com/ethereum/go-ethereum/mobile")
if *local {
@ -1039,17 +1137,17 @@ func doXCodeFramework(cmdline []string) {
build.MustRun(bind)
return
}
// Create the archive.
maybeSkipArchive(env)
archive := "geth-" + archiveBasename("ios", params.ArchiveVersion(env.Commit))
if err := os.Mkdir(archive, os.ModePerm); err != nil {
if err := os.MkdirAll(archive, 0755); err != nil {
log.Fatal(err)
}
bind.Dir, _ = filepath.Abs(archive)
build.MustRun(bind)
build.MustRunCommand("tar", "-zcvf", archive+".tar.gz", archive)
// Skip CocoaPods deploy and Azure upload for PR builds
maybeSkipArchive(env)
// Sign and upload the framework to Azure
if err := archiveUpload(archive+".tar.gz", *upload, *signer, *signify); err != nil {
log.Fatal(err)
@ -1115,10 +1213,10 @@ func doXgo(cmdline []string) {
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
var tc build.GoToolchain
// Make sure xgo is available for cross compilation
gogetxgo := goTool("get", "github.com/karalabe/xgo")
build.MustRun(gogetxgo)
build.MustRun(tc.Install(GOBIN, "github.com/karalabe/xgo@latest"))
// If all tools building is requested, build everything the builder wants
args := append(buildFlags(env), flag.Args()...)
@ -1129,27 +1227,23 @@ func doXgo(cmdline []string) {
if strings.HasPrefix(res, GOBIN) {
// Binary tool found, cross build it explicitly
args = append(args, "./"+filepath.Join("cmd", filepath.Base(res)))
xgo := xgoTool(args)
build.MustRun(xgo)
build.MustRun(xgoTool(args))
args = args[:len(args)-1]
}
}
return
}
// Otherwise xxecute the explicit cross compilation
// Otherwise execute the explicit cross compilation
path := args[len(args)-1]
args = append(args[:len(args)-1], []string{"--dest", GOBIN, path}...)
xgo := xgoTool(args)
build.MustRun(xgo)
build.MustRun(xgoTool(args))
}
func xgoTool(args []string) *exec.Cmd {
cmd := exec.Command(filepath.Join(GOBIN, "xgo"), args...)
cmd.Env = os.Environ()
cmd.Env = append(cmd.Env, []string{
"GOBIN=" + GOBIN,
}...)
cmd.Env = append(cmd.Env, []string{"GOBIN=" + GOBIN}...)
return cmd
}

View File

@ -43,7 +43,7 @@
!ifndef Un${StrFuncName}_INCLUDED
${Un${StrFuncName}}
!endif
!define un.${StrFuncName} "${Un${StrFuncName}}"
!define un.${StrFuncName} '${Un${StrFuncName}}'
!macroend
!insertmacro _IncludeStrFunction StrTok

View File

@ -929,7 +929,7 @@ func testExternalUI(api *core.SignerAPI) {
Value: hexutil.Big(*big.NewInt(6)),
From: common.NewMixedcaseAddress(a),
To: &to,
GasPrice: hexutil.Big(*big.NewInt(5)),
GasPrice: (*hexutil.Big)(big.NewInt(5)),
Gas: 1000,
Input: nil,
}
@ -1065,7 +1065,7 @@ func GenDoc(ctx *cli.Context) {
Value: hexutil.Big(*big.NewInt(6)),
From: common.NewMixedcaseAddress(a),
To: nil,
GasPrice: hexutil.Big(*big.NewInt(5)),
GasPrice: (*hexutil.Big)(big.NewInt(5)),
Gas: 1000,
Input: nil,
}})
@ -1081,7 +1081,7 @@ func GenDoc(ctx *cli.Context) {
Value: hexutil.Big(*big.NewInt(6)),
From: common.NewMixedcaseAddress(a),
To: nil,
GasPrice: hexutil.Big(*big.NewInt(5)),
GasPrice: (*hexutil.Big)(big.NewInt(5)),
Gas: 1000,
Input: nil,
}})

View File

@ -0,0 +1,16 @@
{
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"to": "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"gas": "0x333",
"maxFeePerGas": "0x123",
"nonce": "0x0",
"value": "0x10",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
}
],
"id": 67
}

View File

@ -0,0 +1,16 @@
{
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"to": "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"gas": "0x333",
"maxPriorityFeePerGas": "0x123",
"nonce": "0x0",
"value": "0x10",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
}
],
"id": 67
}

cmd/clef/testdata/sign_1559_tx.json vendored Normal file
View File

@ -0,0 +1,17 @@
{
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"to": "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"gas": "0x333",
"maxPriorityFeePerGas": "0x123",
"maxFeePerGas": "0x123",
"nonce": "0x0",
"value": "0x10",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
}
],
"id": 67
}
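
Editor's note: these clef test fixtures exercise `account_signTransaction` with the new fee fields. As a hedged illustration, the 1559-style request above can be issued from Go via the rpc client; the HTTP endpoint and port are assumptions — point them at wherever your clef instance serves its external API:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Endpoint is an assumption: adjust to your clef HTTP address.
	client, err := rpc.Dial("http://localhost:8550")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	tx := map[string]string{
		"from":                 "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
		"to":                   "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
		"gas":                  "0x333",
		"maxPriorityFeePerGas": "0x123",
		"maxFeePerGas":         "0x123",
		"nonce":                "0x0",
		"value":                "0x10",
		"data":                 "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
	}
	var signed interface{}
	if err := client.CallContext(context.Background(), &signed, "account_signTransaction", tx); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", signed)
}
```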

View File

@ -0,0 +1,17 @@
{
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from":"0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192",
"to":"0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192",
"gas": "0x333",
"gasPrice": "0x123",
"nonce": "0x0",
"value": "0x10",
"data":
"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
}
],
"id": 67
}

View File

@ -0,0 +1,17 @@
{
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from":"0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"to":"0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"gas": "0x333",
"gasPrice": "0x123",
"nonce": "0x0",
"value": "0x10",
"data":
"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
}
],
"id": 67
}

View File

@ -30,6 +30,29 @@ Run `devp2p dns to-route53 <directory>` to publish a tree to Amazon Route53.
You can find more information about these commands in the [DNS Discovery Setup Guide][dns-tutorial].
### Node Set Utilities
There are several commands for working with JSON node set files. These files are generated
by the discovery crawlers and DNS client commands. Node sets are also used as the input of the
DNS deployer commands.
Run `devp2p nodeset info <nodes.json>` to display statistics of a node set.
Run `devp2p nodeset filter <nodes.json> <filter flags...>` to write a new, filtered node
set to standard output. The following filters are supported:
- `-limit <N>` limits the output set to N entries, taking the top N nodes by score
- `-ip <CIDR>` filters nodes by IP subnet
- `-min-age <duration>` filters nodes by 'first seen' time
- `-eth-network <mainnet/rinkeby/goerli/ropsten>` filters nodes by "eth" ENR entry
- `-les-server` filters nodes by LES server support
- `-snap` filters nodes by snap protocol support
For example, given a node set in `nodes.json`, you could create a filtered set containing
up to 20 eth mainnet nodes which also support snap sync using this command:
devp2p nodeset filter nodes.json -eth-network mainnet -snap -limit 20
### Discovery v4 Utilities
The `devp2p discv4 ...` command family deals with the [Node Discovery v4][discv4]
@ -94,7 +117,7 @@ To run the eth protocol test suite against your implementation, the node needs t
geth --datadir <datadir> --nodiscover --nat=none --networkid 19763 --verbosity 5
```
Then, run the following command, replacing `<enode>` with the enode of the geth node:
Then, run the following command, replacing `<enode>` with the enode of the geth node:
```
devp2p rlpx eth-test <enode> cmd/devp2p/internal/ethtest/testdata/chain.rlp cmd/devp2p/internal/ethtest/testdata/genesis.json
```
@ -103,7 +126,7 @@ Repeat the above process (re-initialising the node) in order to run the Eth Prot
#### Eth66 Test Suite
The Eth66 test suite is also a conformance test suite for the eth 66 protocol version specifically.
The Eth66 test suite is also a conformance test suite for the eth 66 protocol version specifically.
To run the eth66 protocol test suite, initialize a geth node as described above and run the following command,
replacing `<enode>` with the enode of the geth node:

View File

@ -107,22 +107,48 @@ func (c *route53Client) deploy(name string, t *dnsdisc.Tree) error {
return err
}
log.Info(fmt.Sprintf("Found %d TXT records", len(existing)))
records := t.ToTXT(name)
changes := c.computeChanges(name, records, existing)
// Submit to API.
comment := fmt.Sprintf("enrtree update of %s at seq %d", name, t.Seq())
return c.submitChanges(changes, comment)
}
// deleteDomain removes all TXT records of the given domain.
func (c *route53Client) deleteDomain(name string) error {
if err := c.checkZone(name); err != nil {
return err
}
// Compute DNS changes.
existing, err := c.collectRecords(name)
if err != nil {
return err
}
log.Info(fmt.Sprintf("Found %d TXT records", len(existing)))
changes := makeDeletionChanges(existing, nil)
// Submit to API.
comment := "enrtree delete of " + name
return c.submitChanges(changes, comment)
}
// submitChanges submits the given DNS changes to Route53.
func (c *route53Client) submitChanges(changes []types.Change, comment string) error {
if len(changes) == 0 {
log.Info("No DNS changes needed")
return nil
}
// Submit all change batches.
var err error
batches := splitChanges(changes, route53ChangeSizeLimit, route53ChangeCountLimit)
changesToCheck := make([]*route53.ChangeResourceRecordSetsOutput, len(batches))
for i, changes := range batches {
log.Info(fmt.Sprintf("Submitting %d changes to Route53", len(changes)))
batch := &types.ChangeBatch{
Changes: changes,
Comment: aws.String(fmt.Sprintf("enrtree update %d/%d of %s at seq %d", i+1, len(batches), name, t.Seq())),
Comment: aws.String(fmt.Sprintf("%s (%d/%d)", comment, i+1, len(batches))),
}
req := &route53.ChangeResourceRecordSetsInput{HostedZoneId: &c.zoneID, ChangeBatch: batch}
changesToCheck[i], err = c.api.ChangeResourceRecordSets(context.TODO(), req)
@ -151,7 +177,6 @@ func (c *route53Client) deploy(name string, t *dnsdisc.Tree) error {
time.Sleep(30 * time.Second)
}
}
return nil
}
@ -186,7 +211,8 @@ func (c *route53Client) findZoneID(name string) (string, error) {
return "", errors.New("can't find zone ID for " + name)
}
// computeChanges creates DNS changes for the given record.
// computeChanges creates DNS changes for the given set of DNS discovery records.
// The 'existing' arg is the set of records that already exist on Route53.
func (c *route53Client) computeChanges(name string, records map[string]string, existing map[string]recordSet) []types.Change {
// Convert all names to lowercase.
lrecords := make(map[string]string, len(records))
@ -223,16 +249,23 @@ func (c *route53Client) computeChanges(name string, records map[string]string, e
}
// Iterate over the old records and delete anything stale.
for path, set := range existing {
if _, ok := records[path]; ok {
changes = append(changes, makeDeletionChanges(existing, records)...)
// Ensure changes are in the correct order.
sortChanges(changes)
return changes
}
// makeDeletionChanges creates record changes which delete all records not contained in 'keep'.
func makeDeletionChanges(records map[string]recordSet, keep map[string]string) []types.Change {
var changes []types.Change
for path, set := range records {
if _, ok := keep[path]; ok {
continue
}
// Stale entry, nuke it.
log.Info(fmt.Sprintf("Deleting %s = %q", path, strings.Join(set.values, "")))
log.Info(fmt.Sprintf("Deleting %s = %s", path, strings.Join(set.values, "")))
changes = append(changes, newTXTChange("DELETE", path, set.ttl, set.values...))
}
sortChanges(changes)
return changes
}

View File

@ -43,6 +43,7 @@ var (
dnsTXTCommand,
dnsCloudflareCommand,
dnsRoute53Command,
dnsRoute53NukeCommand,
},
}
dnsSyncCommand = cli.Command{
@ -84,6 +85,18 @@ var (
route53RegionFlag,
},
}
dnsRoute53NukeCommand = cli.Command{
Name: "nuke-route53",
Usage: "Deletes DNS TXT records of a subdomain on Amazon Route53",
ArgsUsage: "<domain>",
Action: dnsNukeRoute53,
Flags: []cli.Flag{
route53AccessKeyFlag,
route53AccessSecretFlag,
route53ZoneIDFlag,
route53RegionFlag,
},
}
)
var (
@ -174,6 +187,9 @@ func dnsSign(ctx *cli.Context) error {
return nil
}
// directoryName returns the directory name of the given path.
// For example, when dir is "foo/bar", it returns "bar".
// When dir is ".", and the working directory is "example/foo", it returns "foo".
func directoryName(dir string) string {
abs, err := filepath.Abs(dir)
if err != nil {
@ -182,7 +198,7 @@ func directoryName(dir string) string {
return filepath.Base(abs)
}
// dnsToTXT peforms dnsTXTCommand.
// dnsToTXT performs dnsTXTCommand.
func dnsToTXT(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need tree definition directory as argument")
@ -199,9 +215,9 @@ func dnsToTXT(ctx *cli.Context) error {
return nil
}
// dnsToCloudflare peforms dnsCloudflareCommand.
// dnsToCloudflare performs dnsCloudflareCommand.
func dnsToCloudflare(ctx *cli.Context) error {
if ctx.NArg() < 1 {
if ctx.NArg() != 1 {
return fmt.Errorf("need tree definition directory as argument")
}
domain, t, err := loadTreeDefinitionForExport(ctx.Args().Get(0))
@ -212,9 +228,9 @@ func dnsToCloudflare(ctx *cli.Context) error {
return client.deploy(domain, t)
}
// dnsToRoute53 peforms dnsRoute53Command.
// dnsToRoute53 performs dnsRoute53Command.
func dnsToRoute53(ctx *cli.Context) error {
if ctx.NArg() < 1 {
if ctx.NArg() != 1 {
return fmt.Errorf("need tree definition directory as argument")
}
domain, t, err := loadTreeDefinitionForExport(ctx.Args().Get(0))
@ -225,6 +241,15 @@ func dnsToRoute53(ctx *cli.Context) error {
return client.deploy(domain, t)
}
// dnsNukeRoute53 performs dnsRoute53NukeCommand.
func dnsNukeRoute53(ctx *cli.Context) error {
if ctx.NArg() != 1 {
return fmt.Errorf("need domain name as argument")
}
client := newRoute53Client(ctx)
return client.deleteDomain(ctx.Args().First())
}
// loadSigningKey loads a private key in Ethereum keystore format.
func loadSigningKey(keyfile string) *ecdsa.PrivateKey {
keyjson, err := ioutil.ReadFile(keyfile)

View File

@ -34,6 +34,7 @@ import (
)
type Chain struct {
genesis core.Genesis
blocks []*types.Block
chainConfig *params.ChainConfig
}
@ -53,10 +54,24 @@ func (c *Chain) Len() int {
return len(c.blocks)
}
// TD calculates the total difficulty of the chain.
func (c *Chain) TD(height int) *big.Int { // TODO later on channge scheme so that the height is included in range
// TD calculates the total difficulty of the chain at the
// chain head.
func (c *Chain) TD() *big.Int {
sum := big.NewInt(0)
for _, block := range c.blocks[:height] {
for _, block := range c.blocks[:c.Len()] {
sum.Add(sum, block.Difficulty())
}
return sum
}
// TotalDifficultyAt calculates the total difficulty of the chain
// at the given block height.
func (c *Chain) TotalDifficultyAt(height int) *big.Int {
sum := big.NewInt(0)
if height >= c.Len() {
return sum
}
for _, block := range c.blocks[:height+1] {
sum.Add(sum, block.Difficulty())
}
return sum
@ -124,16 +139,34 @@ func (c *Chain) GetHeaders(req GetBlockHeaders) (BlockHeaders, error) {
// loadChain takes the given chain.rlp file, and decodes and returns
// the blocks from the file.
func loadChain(chainfile string, genesis string) (*Chain, error) {
chainConfig, err := ioutil.ReadFile(genesis)
gen, err := loadGenesis(genesis)
if err != nil {
return nil, err
}
var gen core.Genesis
if err := json.Unmarshal(chainConfig, &gen); err != nil {
return nil, err
}
gblock := gen.ToBlock(nil)
blocks, err := blocksFromFile(chainfile, gblock)
if err != nil {
return nil, err
}
c := &Chain{genesis: gen, blocks: blocks, chainConfig: gen.Config}
return c, nil
}
func loadGenesis(genesisFile string) (core.Genesis, error) {
chainConfig, err := ioutil.ReadFile(genesisFile)
if err != nil {
return core.Genesis{}, err
}
var gen core.Genesis
if err := json.Unmarshal(chainConfig, &gen); err != nil {
return core.Genesis{}, err
}
return gen, nil
}
func blocksFromFile(chainfile string, gblock *types.Block) ([]*types.Block, error) {
// Load chain.rlp.
fh, err := os.Open(chainfile)
if err != nil {
@ -161,7 +194,5 @@ func loadChain(chainfile string, genesis string) (*Chain, error) {
}
blocks = append(blocks, &b)
}
c := &Chain{blocks: blocks, chainConfig: gen.Config}
return c, nil
return blocks, nil
}

View File

@ -1,396 +0,0 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"time"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
)
// Is_66 checks if the node supports the eth66 protocol version,
// and if not, exists the test suite
func (s *Suite) Is_66(t *utesting.T) {
conn := s.dial66(t)
conn.handshake(t)
if conn.negotiatedProtoVersion < 66 {
t.Fail()
}
}
// TestStatus_66 attempts to connect to the given node and exchange
// a status message with it on the eth66 protocol, and then check to
// make sure the chain head is correct.
func (s *Suite) TestStatus_66(t *utesting.T) {
conn := s.dial66(t)
// get protoHandshake
conn.handshake(t)
// get status
switch msg := conn.statusExchange66(t, s.chain).(type) {
case *Status:
status := *msg
if status.ProtocolVersion != uint32(66) {
t.Fatalf("mismatch in version: wanted 66, got %d", status.ProtocolVersion)
}
t.Logf("got status message: %s", pretty.Sdump(msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestGetBlockHeaders_66 tests whether the given node can respond to
// an eth66 `GetBlockHeaders` request and that the response is accurate.
func (s *Suite) TestGetBlockHeaders_66(t *utesting.T) {
conn := s.setupConnection66(t)
// get block headers
req := &eth.GetBlockHeadersPacket66{
RequestId: 3,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 2,
Skip: 1,
Reverse: false,
},
}
// write message
headers := s.getBlockHeaders66(t, conn, req, req.RequestId)
// check for correct headers
headersMatch(t, s.chain, headers)
}
// TestSimultaneousRequests_66 sends two simultaneous `GetBlockHeader` requests
// with different request IDs and checks to make sure the node responds with the correct
// headers per request.
func (s *Suite) TestSimultaneousRequests_66(t *utesting.T) {
// create two connections
conn1, conn2 := s.setupConnection66(t), s.setupConnection66(t)
// create two requests
req1 := &eth.GetBlockHeadersPacket66{
RequestId: 111,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 2,
Skip: 1,
Reverse: false,
},
}
req2 := &eth.GetBlockHeadersPacket66{
RequestId: 222,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 4,
Skip: 1,
Reverse: false,
},
}
// wait for headers for first request
headerChan := make(chan BlockHeaders, 1)
go func(headers chan BlockHeaders) {
headers <- s.getBlockHeaders66(t, conn1, req1, req1.RequestId)
}(headerChan)
// check headers of second request
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn2, req2, req2.RequestId))
// check headers of first request
headersMatch(t, s.chain, <-headerChan)
}
// TestBroadcast_66 tests whether a block announcement is correctly
// propagated to the given node's peer(s) on the eth66 protocol.
func (s *Suite) TestBroadcast_66(t *utesting.T) {
sendConn, receiveConn := s.setupConnection66(t), s.setupConnection66(t)
nextBlock := len(s.chain.blocks)
blockAnnouncement := &NewBlock{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
}
s.testAnnounce66(t, sendConn, receiveConn, blockAnnouncement)
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock66(s.chain.Head()); err != nil {
t.Fatal(err)
}
}
// TestGetBlockBodies_66 tests whether the given node can respond to
// a `GetBlockBodies` request and that the response is accurate over
// the eth66 protocol.
func (s *Suite) TestGetBlockBodies_66(t *utesting.T) {
conn := s.setupConnection66(t)
// create block bodies request
id := uint64(55)
req := &eth.GetBlockBodiesPacket66{
RequestId: id,
GetBlockBodiesPacket: eth.GetBlockBodiesPacket{
s.chain.blocks[54].Hash(),
s.chain.blocks[75].Hash(),
},
}
if err := conn.write66(req, GetBlockBodies{}.Code()); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
reqID, msg := conn.readAndServe66(s.chain, timeout)
switch msg := msg.(type) {
case BlockBodies:
if reqID != req.RequestId {
t.Fatalf("request ID mismatch: wanted %d, got %d", req.RequestId, reqID)
}
t.Logf("received %d block bodies", len(msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestLargeAnnounce_66 tests the announcement mechanism with a large block.
func (s *Suite) TestLargeAnnounce_66(t *utesting.T) {
nextBlock := len(s.chain.blocks)
blocks := []*NewBlock{
{
Block: largeBlock(),
TD: s.fullChain.TD(nextBlock + 1),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: largeNumber(2),
},
{
Block: largeBlock(),
TD: largeNumber(2),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
},
}
for i, blockAnnouncement := range blocks[0:3] {
t.Logf("Testing malicious announcement: %v\n", i)
sendConn := s.setupConnection66(t)
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Invalid announcement, check that peer disconnected
switch msg := sendConn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
break
default:
t.Fatalf("unexpected: %s wanted disconnect", pretty.Sdump(msg))
}
}
// Test the last block as a valid block
sendConn := s.setupConnection66(t)
receiveConn := s.setupConnection66(t)
s.testAnnounce66(t, sendConn, receiveConn, blocks[3])
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock66(s.fullChain.blocks[nextBlock]); err != nil {
t.Fatal(err)
}
}
func (s *Suite) TestOldAnnounce_66(t *utesting.T) {
s.oldAnnounce(t, s.setupConnection66(t), s.setupConnection66(t))
}
// TestMaliciousHandshake_66 tries to send malicious data during the handshake.
func (s *Suite) TestMaliciousHandshake_66(t *utesting.T) {
conn := s.dial66(t)
// write hello to client
pub0 := crypto.FromECDSAPub(&conn.ourKey.PublicKey)[1:]
handshakes := []*Hello{
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 66},
},
ID: pub0,
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
{Name: "eth", Version: 66},
},
ID: append(pub0, byte(0)),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
{Name: "eth", Version: 66},
},
ID: append(pub0, pub0...),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
{Name: "eth", Version: 66},
},
ID: largeBuffer(2),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 66},
},
ID: largeBuffer(2),
},
}
for i, handshake := range handshakes {
t.Logf("Testing malicious handshake %v\n", i)
// Init the handshake
if err := conn.Write(handshake); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// check that the peer disconnected
timeout := 20 * time.Second
// Discard one hello
for i := 0; i < 2; i++ {
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
case *Hello:
// Hello's are sent concurrently, so ignore them
continue
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// Dial for the next round
conn = s.dial66(t)
}
}
// TestMaliciousStatus_66 sends a status package with a large total difficulty.
func (s *Suite) TestMaliciousStatus_66(t *utesting.T) {
conn := s.dial66(t)
// get protoHandshake
conn.handshake(t)
status := &Status{
ProtocolVersion: uint32(66),
NetworkID: s.chain.chainConfig.ChainID.Uint64(),
TD: largeNumber(2),
Head: s.chain.blocks[s.chain.Len()-1].Hash(),
Genesis: s.chain.blocks[0].Hash(),
ForkID: s.chain.ForkID(),
}
// get status
switch msg := conn.statusExchange(t, s.chain, status).(type) {
case *Status:
t.Logf("%+v\n", msg)
default:
t.Fatalf("expected status, got: %#v ", msg)
}
// wait for disconnect
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
return
default:
t.Fatalf("expected disconnect, got: %s", pretty.Sdump(msg))
}
}
func (s *Suite) TestTransaction_66(t *utesting.T) {
tests := []*types.Transaction{
getNextTxFromChain(t, s),
unknownTx(t, s),
}
for i, tx := range tests {
t.Logf("Testing tx propagation: %v\n", i)
sendSuccessfulTx66(t, s, tx)
}
}
func (s *Suite) TestMaliciousTx_66(t *utesting.T) {
tests := []*types.Transaction{
getOldTxFromChain(t, s),
invalidNonceTx(t, s),
hugeAmount(t, s),
hugeGasPrice(t, s),
hugeData(t, s),
}
for i, tx := range tests {
t.Logf("Testing malicious tx propagation: %v\n", i)
sendFailingTx66(t, s, tx)
}
}
// TestZeroRequestID_66 checks that a request ID of zero is still handled
// by the node.
func (s *Suite) TestZeroRequestID_66(t *utesting.T) {
conn := s.setupConnection66(t)
req := &eth.GetBlockHeadersPacket66{
RequestId: 0,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Number: 0,
},
Amount: 2,
},
}
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req, req.RequestId))
}
// TestSameRequestID_66 sends two requests with the same request ID
// concurrently to a single node.
func (s *Suite) TestSameRequestID_66(t *utesting.T) {
conn := s.setupConnection66(t)
// create two separate requests with same ID
reqID := uint64(1234)
req1 := &eth.GetBlockHeadersPacket66{
RequestId: reqID,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Number: 0,
},
Amount: 2,
},
}
req2 := &eth.GetBlockHeadersPacket66{
RequestId: reqID,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Number: 33,
},
Amount: 2,
},
}
// send requests concurrently
go func() {
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req2, reqID))
}()
// check response from first request
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req1, reqID))
}


@ -1,274 +0,0 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"fmt"
"time"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/rlp"
"github.com/stretchr/testify/assert"
)
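// statusExchange66 performs a `Status` message exchange with the node
// using protocol version 66.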
func (c *Conn) statusExchange66(t *utesting.T, chain *Chain) Message {
status := &Status{
ProtocolVersion: uint32(66),
NetworkID: chain.chainConfig.ChainID.Uint64(),
TD: chain.TD(chain.Len()),
Head: chain.blocks[chain.Len()-1].Hash(),
Genesis: chain.blocks[0].Hash(),
ForkID: chain.ForkID(),
}
return c.statusExchange(t, chain, status)
}
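// dial66 dials the node under test and adds the eth66 capability to the connection.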
func (s *Suite) dial66(t *utesting.T) *Conn {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
conn.caps = append(conn.caps, p2p.Cap{Name: "eth", Version: 66})
conn.ourHighestProtoVersion = 66
return conn
}
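// write66 RLP-encodes the given eth packet and writes it to the connection
// under the given message code.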
func (c *Conn) write66(req eth.Packet, code int) error {
payload, err := rlp.EncodeToBytes(req)
if err != nil {
return err
}
_, err = c.Conn.Write(uint64(code), payload)
return err
}
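// read66 reads a message from the connection, returning the eth66 request ID
// (if any) along with the decoded message.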
func (c *Conn) read66() (uint64, Message) {
code, rawData, _, err := c.Conn.Read()
if err != nil {
return 0, errorf("could not read from connection: %v", err)
}
var msg Message
switch int(code) {
case (Hello{}).Code():
msg = new(Hello)
case (Ping{}).Code():
msg = new(Ping)
case (Pong{}).Code():
msg = new(Pong)
case (Disconnect{}).Code():
msg = new(Disconnect)
case (Status{}).Code():
msg = new(Status)
case (GetBlockHeaders{}).Code():
ethMsg := new(eth.GetBlockHeadersPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, GetBlockHeaders(*ethMsg.GetBlockHeadersPacket)
case (BlockHeaders{}).Code():
ethMsg := new(eth.BlockHeadersPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, BlockHeaders(ethMsg.BlockHeadersPacket)
case (GetBlockBodies{}).Code():
ethMsg := new(eth.GetBlockBodiesPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, GetBlockBodies(ethMsg.GetBlockBodiesPacket)
case (BlockBodies{}).Code():
ethMsg := new(eth.BlockBodiesPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, BlockBodies(ethMsg.BlockBodiesPacket)
case (NewBlock{}).Code():
msg = new(NewBlock)
case (NewBlockHashes{}).Code():
msg = new(NewBlockHashes)
case (Transactions{}).Code():
msg = new(Transactions)
case (NewPooledTransactionHashes{}).Code():
msg = new(NewPooledTransactionHashes)
default:
msg = errorf("invalid message code: %d", code)
}
if msg != nil {
if err := rlp.DecodeBytes(rawData, msg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return 0, msg
}
return 0, errorf("invalid message: %s", string(rawData))
}
// readAndServe66 serves eth66 GetBlockHeaders requests while waiting
// on another message from the node.
func (c *Conn) readAndServe66(chain *Chain, timeout time.Duration) (uint64, Message) {
start := time.Now()
for time.Since(start) < timeout {
timeout := time.Now().Add(10 * time.Second)
c.SetReadDeadline(timeout)
reqID, msg := c.read66()
switch msg := msg.(type) {
case *Ping:
c.Write(&Pong{})
case *GetBlockHeaders:
headers, err := chain.GetHeaders(*msg)
if err != nil {
return 0, errorf("could not get headers for inbound header request: %v", err)
}
resp := &eth.BlockHeadersPacket66{
RequestId: reqID,
BlockHeadersPacket: eth.BlockHeadersPacket(headers),
}
if err := c.write66(resp, BlockHeaders{}.Code()); err != nil {
return 0, errorf("could not write to connection: %v", err)
}
default:
return reqID, msg
}
}
return 0, errorf("no message received within %v", timeout)
}
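// setupConnection66 dials the node with eth66 capabilities and performs the
// protocol handshake as well as the status exchange.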
func (s *Suite) setupConnection66(t *utesting.T) *Conn {
// create conn
sendConn := s.dial66(t)
sendConn.handshake(t)
sendConn.statusExchange66(t, s.chain)
return sendConn
}
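// testAnnounce66 writes a block announcement to the node and waits for the
// announcement to be propagated to the receiving connection.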
func (s *Suite) testAnnounce66(t *utesting.T, sendConn, receiveConn *Conn, blockAnnouncement *NewBlock) {
// Announce the block.
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
s.waitAnnounce66(t, receiveConn, blockAnnouncement)
}
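// waitAnnounce66 waits for a NewBlock or NewBlockHashes announcement from the
// node and checks that it matches the announced block.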
func (s *Suite) waitAnnounce66(t *utesting.T, conn *Conn, blockAnnouncement *NewBlock) {
timeout := 20 * time.Second
_, msg := conn.readAndServe66(s.chain, timeout)
switch msg := msg.(type) {
case *NewBlock:
t.Logf("received NewBlock message: %s", pretty.Sdump(msg.Block))
assert.Equal(t,
blockAnnouncement.Block.Header(), msg.Block.Header(),
"wrong block header in announcement",
)
assert.Equal(t,
blockAnnouncement.TD, msg.TD,
"wrong TD in announcement",
)
case *NewBlockHashes:
blockHashes := *msg
t.Logf("received NewBlockHashes message: %s", pretty.Sdump(blockHashes))
assert.Equal(t, blockAnnouncement.Block.Hash(), blockHashes[0].Hash,
"wrong block hash in announcement",
)
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// waitForBlock66 waits for confirmation from the client that it has
// imported the given block.
func (c *Conn) waitForBlock66(block *types.Block) error {
defer c.SetReadDeadline(time.Time{})
timeout := time.Now().Add(20 * time.Second)
c.SetReadDeadline(timeout)
for {
req := eth.GetBlockHeadersPacket66{
RequestId: 54,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: block.Hash(),
},
Amount: 1,
},
}
if err := c.write66(req, GetBlockHeaders{}.Code()); err != nil {
return err
}
reqID, msg := c.read66()
// check message
switch msg := msg.(type) {
case BlockHeaders:
// check request ID
if reqID != req.RequestId {
return fmt.Errorf("request ID mismatch: wanted %d, got %d", req.RequestId, reqID)
}
if len(msg) > 0 {
return nil
}
time.Sleep(100 * time.Millisecond)
default:
return fmt.Errorf("invalid message: %s", pretty.Sdump(msg))
}
}
}
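// sendSuccessfulTx66 sends the given transaction to the node over an eth66
// connection and checks that it is propagated.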
func sendSuccessfulTx66(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn := s.setupConnection66(t)
sendSuccessfulTxWithConn(t, s, tx, sendConn)
}
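// sendFailingTx66 sends the given invalid transaction to the node over eth66
// connections and checks that it is not propagated.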
func sendFailingTx66(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn, recvConn := s.setupConnection66(t), s.setupConnection66(t)
sendFailingTxWithConns(t, s, tx, sendConn, recvConn)
}
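// getBlockHeaders66 sends the given eth66 GetBlockHeaders request to the node
// and returns the BlockHeaders response, checking the returned request ID.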
func (s *Suite) getBlockHeaders66(t *utesting.T, conn *Conn, req eth.Packet, expectedID uint64) BlockHeaders {
if err := conn.write66(req, GetBlockHeaders{}.Code()); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// check block headers response
reqID, msg := conn.readAndServe66(s.chain, timeout)
switch msg := msg.(type) {
case BlockHeaders:
if reqID != expectedID {
t.Fatalf("request ID mismatch: wanted %d, got %d", expectedID, reqID)
}
return msg
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
return nil
}
}
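// headersMatch asserts that each received header matches the header of the
// corresponding block in the test chain.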
func headersMatch(t *utesting.T, chain *Chain, headers BlockHeaders) {
for _, header := range headers {
num := header.Number.Uint64()
t.Logf("received header (%d): %s", num, pretty.Sdump(header.Hash()))
assert.Equal(t, chain.blocks[int(num)].Header(), header)
}
}


@ -0,0 +1,749 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"fmt"
"net"
"reflect"
"strings"
"time"
"github.com/davecgh/go-spew/spew"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/rlpx"
)
var (
pretty = spew.ConfigState{
Indent: " ",
DisableCapacities: true,
DisablePointerAddresses: true,
SortKeys: true,
}
timeout = 20 * time.Second
)
// Is_66 checks if the node supports the eth66 protocol version,
// and if not, exits the test suite.
func (s *Suite) Is_66(t *utesting.T) {
conn, err := s.dial66()
if err != nil {
t.Fatalf("dial failed: %v", err)
}
if err := conn.handshake(); err != nil {
t.Fatalf("handshake failed: %v", err)
}
if conn.negotiatedProtoVersion < 66 {
t.Fail()
}
}
// dial attempts to dial the given node and perform a handshake,
// returning the created Conn if successful.
func (s *Suite) dial() (*Conn, error) {
// dial
fd, err := net.Dial("tcp", fmt.Sprintf("%v:%d", s.Dest.IP(), s.Dest.TCP()))
if err != nil {
return nil, err
}
conn := Conn{Conn: rlpx.NewConn(fd, s.Dest.Pubkey())}
// do encHandshake
conn.ourKey, _ = crypto.GenerateKey()
_, err = conn.Handshake(conn.ourKey)
if err != nil {
conn.Close()
return nil, err
}
// set default p2p capabilities
conn.caps = []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
}
conn.ourHighestProtoVersion = 65
return &conn, nil
}
// dial66 attempts to dial the given node and perform a handshake,
// returning the created Conn with additional eth66 capabilities if
// successful
func (s *Suite) dial66() (*Conn, error) {
conn, err := s.dial()
if err != nil {
return nil, fmt.Errorf("dial failed: %v", err)
}
conn.caps = append(conn.caps, p2p.Cap{Name: "eth", Version: 66})
conn.ourHighestProtoVersion = 66
return conn, nil
}
// peer performs both the protocol handshake and the status message
// exchange with the node in order to peer with it.
func (c *Conn) peer(chain *Chain, status *Status) error {
if err := c.handshake(); err != nil {
return fmt.Errorf("handshake failed: %v", err)
}
if _, err := c.statusExchange(chain, status); err != nil {
return fmt.Errorf("status exchange failed: %v", err)
}
return nil
}
// handshake performs a protocol handshake with the node.
func (c *Conn) handshake() error {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(10 * time.Second))
// write hello to client
pub0 := crypto.FromECDSAPub(&c.ourKey.PublicKey)[1:]
ourHandshake := &Hello{
Version: 5,
Caps: c.caps,
ID: pub0,
}
if err := c.Write(ourHandshake); err != nil {
return fmt.Errorf("write to connection failed: %v", err)
}
// read hello from client
switch msg := c.Read().(type) {
case *Hello:
// set snappy if version is at least 5
if msg.Version >= 5 {
c.SetSnappy(true)
}
c.negotiateEthProtocol(msg.Caps)
if c.negotiatedProtoVersion == 0 {
return fmt.Errorf("unexpected eth protocol version")
}
return nil
default:
return fmt.Errorf("bad handshake: %#v", msg)
}
}
// negotiateEthProtocol sets the Conn's eth protocol version to highest
// advertised capability from peer.
func (c *Conn) negotiateEthProtocol(caps []p2p.Cap) {
var highestEthVersion uint
for _, capability := range caps {
if capability.Name != "eth" {
continue
}
if capability.Version > highestEthVersion && capability.Version <= c.ourHighestProtoVersion {
highestEthVersion = capability.Version
}
}
c.negotiatedProtoVersion = highestEthVersion
}
// statusExchange performs a `Status` message exchange with the given node.
func (c *Conn) statusExchange(chain *Chain, status *Status) (Message, error) {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(20 * time.Second))
// read status message from client
var message Message
loop:
for {
switch msg := c.Read().(type) {
case *Status:
if have, want := msg.Head, chain.blocks[chain.Len()-1].Hash(); have != want {
return nil, fmt.Errorf("wrong head block in status, want: %#x (block %d) have %#x",
want, chain.blocks[chain.Len()-1].NumberU64(), have)
}
if have, want := msg.TD.Cmp(chain.TD()), 0; have != want {
return nil, fmt.Errorf("wrong TD in status: have %v want %v", have, want)
}
if have, want := msg.ForkID, chain.ForkID(); !reflect.DeepEqual(have, want) {
return nil, fmt.Errorf("wrong fork ID in status: have %v, want %v", have, want)
}
if have, want := msg.ProtocolVersion, c.ourHighestProtoVersion; have != uint32(want) {
return nil, fmt.Errorf("wrong protocol version: have %v, want %v", have, want)
}
message = msg
break loop
case *Disconnect:
return nil, fmt.Errorf("disconnect received: %v", msg.Reason)
case *Ping:
c.Write(&Pong{}) // TODO (renaynay): in the future, this should be an error
// (PINGs should not be a response upon fresh connection)
default:
return nil, fmt.Errorf("bad status message: %s", pretty.Sdump(msg))
}
}
// make sure eth protocol version is set for negotiation
if c.negotiatedProtoVersion == 0 {
return nil, fmt.Errorf("eth protocol version must be set in Conn")
}
if status == nil {
// default status message
status = &Status{
ProtocolVersion: uint32(c.negotiatedProtoVersion),
NetworkID: chain.chainConfig.ChainID.Uint64(),
TD: chain.TD(),
Head: chain.blocks[chain.Len()-1].Hash(),
Genesis: chain.blocks[0].Hash(),
ForkID: chain.ForkID(),
}
}
if err := c.Write(status); err != nil {
return nil, fmt.Errorf("write to connection failed: %v", err)
}
return message, nil
}
// createSendAndRecvConns creates two connections, one for sending messages to the
// node, and one for receiving messages from the node.
func (s *Suite) createSendAndRecvConns(isEth66 bool) (*Conn, *Conn, error) {
var (
sendConn *Conn
recvConn *Conn
err error
)
if isEth66 {
sendConn, err = s.dial66()
if err != nil {
return nil, nil, fmt.Errorf("dial failed: %v", err)
}
recvConn, err = s.dial66()
if err != nil {
sendConn.Close()
return nil, nil, fmt.Errorf("dial failed: %v", err)
}
} else {
sendConn, err = s.dial()
if err != nil {
return nil, nil, fmt.Errorf("dial failed: %v", err)
}
recvConn, err = s.dial()
if err != nil {
sendConn.Close()
return nil, nil, fmt.Errorf("dial failed: %v", err)
}
}
return sendConn, recvConn, nil
}
// readAndServe serves GetBlockHeaders requests while waiting
// on another message from the node.
func (c *Conn) readAndServe(chain *Chain, timeout time.Duration) Message {
start := time.Now()
for time.Since(start) < timeout {
c.SetReadDeadline(time.Now().Add(5 * time.Second))
switch msg := c.Read().(type) {
case *Ping:
c.Write(&Pong{})
case *GetBlockHeaders:
req := *msg
headers, err := chain.GetHeaders(req)
if err != nil {
return errorf("could not get headers for inbound header request: %v", err)
}
if err := c.Write(headers); err != nil {
return errorf("could not write to connection: %v", err)
}
default:
return msg
}
}
return errorf("no message received within %v", timeout)
}
// readAndServe66 serves eth66 GetBlockHeaders requests while waiting
// on another message from the node.
func (c *Conn) readAndServe66(chain *Chain, timeout time.Duration) (uint64, Message) {
start := time.Now()
for time.Since(start) < timeout {
c.SetReadDeadline(time.Now().Add(10 * time.Second))
reqID, msg := c.Read66()
switch msg := msg.(type) {
case *Ping:
c.Write(&Pong{})
case *GetBlockHeaders:
headers, err := chain.GetHeaders(*msg)
if err != nil {
return 0, errorf("could not get headers for inbound header request: %v", err)
}
resp := &eth.BlockHeadersPacket66{
RequestId: reqID,
BlockHeadersPacket: eth.BlockHeadersPacket(headers),
}
if err := c.Write66(resp, BlockHeaders{}.Code()); err != nil {
return 0, errorf("could not write to connection: %v", err)
}
default:
return reqID, msg
}
}
return 0, errorf("no message received within %v", timeout)
}
// headersRequest executes the given `GetBlockHeaders` request.
func (c *Conn) headersRequest(request *GetBlockHeaders, chain *Chain, isEth66 bool, reqID uint64) (BlockHeaders, error) {
defer c.SetReadDeadline(time.Time{})
c.SetReadDeadline(time.Now().Add(20 * time.Second))
// if on eth66 connection, perform eth66 GetBlockHeaders request
if isEth66 {
return getBlockHeaders66(chain, c, request, reqID)
}
if err := c.Write(request); err != nil {
return nil, err
}
switch msg := c.readAndServe(chain, timeout).(type) {
case *BlockHeaders:
return *msg, nil
default:
return nil, fmt.Errorf("invalid message: %s", pretty.Sdump(msg))
}
}
// getBlockHeaders66 executes the given `GetBlockHeaders` request over the eth66 protocol.
func getBlockHeaders66(chain *Chain, conn *Conn, request *GetBlockHeaders, id uint64) (BlockHeaders, error) {
// write request
packet := eth.GetBlockHeadersPacket(*request)
req := &eth.GetBlockHeadersPacket66{
RequestId: id,
GetBlockHeadersPacket: &packet,
}
if err := conn.Write66(req, GetBlockHeaders{}.Code()); err != nil {
return nil, fmt.Errorf("could not write to connection: %v", err)
}
// wait for response
msg := conn.waitForResponse(chain, timeout, req.RequestId)
headers, ok := msg.(BlockHeaders)
if !ok {
return nil, fmt.Errorf("unexpected message received: %s", pretty.Sdump(msg))
}
return headers, nil
}
// headersMatch returns whether the received headers match the expected headers.
func headersMatch(expected BlockHeaders, headers BlockHeaders) bool {
return reflect.DeepEqual(expected, headers)
}
// waitForResponse reads from the connection until a response with the expected
// request ID is received.
func (c *Conn) waitForResponse(chain *Chain, timeout time.Duration, requestID uint64) Message {
for {
id, msg := c.readAndServe66(chain, timeout)
if id == requestID {
return msg
}
}
}
// sendNextBlock broadcasts the next block in the chain and waits
// for the node to propagate the block and import it into its chain.
func (s *Suite) sendNextBlock(isEth66 bool) error {
// set up sending and receiving connections
sendConn, recvConn, err := s.createSendAndRecvConns(isEth66)
if err != nil {
return err
}
defer sendConn.Close()
defer recvConn.Close()
if err = sendConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
if err = recvConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
// create new block announcement
nextBlock := s.fullChain.blocks[s.chain.Len()]
blockAnnouncement := &NewBlock{
Block: nextBlock,
TD: s.fullChain.TotalDifficultyAt(s.chain.Len()),
}
// send announcement and wait for node to request the header
if err = s.testAnnounce(sendConn, recvConn, blockAnnouncement); err != nil {
return fmt.Errorf("failed to announce block: %v", err)
}
// wait for client to update its chain
if err = s.waitForBlockImport(recvConn, nextBlock, isEth66); err != nil {
return fmt.Errorf("failed to receive confirmation of block import: %v", err)
}
// update test suite chain
s.chain.blocks = append(s.chain.blocks, nextBlock)
return nil
}
// testAnnounce writes a block announcement to the node and waits for the node
// to propagate it.
func (s *Suite) testAnnounce(sendConn, receiveConn *Conn, blockAnnouncement *NewBlock) error {
if err := sendConn.Write(blockAnnouncement); err != nil {
return fmt.Errorf("could not write to connection: %v", err)
}
return s.waitAnnounce(receiveConn, blockAnnouncement)
}
// waitAnnounce waits for a NewBlock or NewBlockHashes announcement from the node.
func (s *Suite) waitAnnounce(conn *Conn, blockAnnouncement *NewBlock) error {
for {
switch msg := conn.readAndServe(s.chain, timeout).(type) {
case *NewBlock:
if !reflect.DeepEqual(blockAnnouncement.Block.Header(), msg.Block.Header()) {
return fmt.Errorf("wrong header in block announcement: \nexpected %v "+
"\ngot %v", blockAnnouncement.Block.Header(), msg.Block.Header())
}
if !reflect.DeepEqual(blockAnnouncement.TD, msg.TD) {
return fmt.Errorf("wrong TD in announcement: expected %v, got %v", blockAnnouncement.TD, msg.TD)
}
return nil
case *NewBlockHashes:
hashes := *msg
if blockAnnouncement.Block.Hash() != hashes[0].Hash {
return fmt.Errorf("wrong block hash in announcement: expected %v, got %v", blockAnnouncement.Block.Hash(), hashes[0].Hash)
}
return nil
case *NewPooledTransactionHashes:
// ignore tx announcements from previous tests
continue
default:
return fmt.Errorf("unexpected: %s", pretty.Sdump(msg))
}
}
}
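// waitForBlockImport repeatedly requests the header of the given block from
// the node until a non-empty response confirms the block was imported.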
func (s *Suite) waitForBlockImport(conn *Conn, block *types.Block, isEth66 bool) error {
defer conn.SetReadDeadline(time.Time{})
conn.SetReadDeadline(time.Now().Add(20 * time.Second))
// create request
req := &GetBlockHeaders{
Origin: eth.HashOrNumber{
Hash: block.Hash(),
},
Amount: 1,
}
// loop until BlockHeaders response contains desired block, confirming the
// node imported the block
for {
var (
headers BlockHeaders
err error
)
if isEth66 {
requestID := uint64(54)
headers, err = conn.headersRequest(req, s.chain, eth66, requestID)
} else {
headers, err = conn.headersRequest(req, s.chain, eth65, 0)
}
if err != nil {
return fmt.Errorf("GetBlockHeader request failed: %v", err)
}
// if headers response is empty, node hasn't imported block yet, try again
if len(headers) == 0 {
time.Sleep(100 * time.Millisecond)
continue
}
if !reflect.DeepEqual(block.Header(), headers[0]) {
return fmt.Errorf("wrong header returned: wanted %v, got %v", block.Header(), headers[0])
}
return nil
}
}
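// oldAnnounce announces a block that the node has already imported and checks
// that the stale announcement is not propagated.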
func (s *Suite) oldAnnounce(isEth66 bool) error {
sendConn, receiveConn, err := s.createSendAndRecvConns(isEth66)
if err != nil {
return err
}
defer sendConn.Close()
defer receiveConn.Close()
if err := sendConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
if err := receiveConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
// create old block announcement
oldBlockAnnounce := &NewBlock{
Block: s.chain.blocks[len(s.chain.blocks)/2],
TD: s.chain.blocks[len(s.chain.blocks)/2].Difficulty(),
}
if err := sendConn.Write(oldBlockAnnounce); err != nil {
return fmt.Errorf("could not write to connection: %v", err)
}
// wait to see if the announcement is propagated
switch msg := receiveConn.readAndServe(s.chain, time.Second*8).(type) {
case *NewBlock:
block := *msg
if block.Block.Hash() == oldBlockAnnounce.Block.Hash() {
return fmt.Errorf("unexpected: block propagated: %s", pretty.Sdump(msg))
}
case *NewBlockHashes:
hashes := *msg
for _, hash := range hashes {
if hash.Hash == oldBlockAnnounce.Block.Hash() {
return fmt.Errorf("unexpected: block announced: %s", pretty.Sdump(msg))
}
}
case *Error:
errMsg := *msg
// check to make sure error is timeout (propagation didn't come through == test successful)
if !strings.Contains(errMsg.String(), "timeout") {
return fmt.Errorf("unexpected error: %v", pretty.Sdump(msg))
}
default:
return fmt.Errorf("unexpected: %s", pretty.Sdump(msg))
}
return nil
}
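// maliciousHandshakes sends a series of malformed Hello messages to the node
// and checks that it disconnects after each one.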
func (s *Suite) maliciousHandshakes(t *utesting.T, isEth66 bool) error {
var (
conn *Conn
err error
)
if isEth66 {
conn, err = s.dial66()
if err != nil {
return fmt.Errorf("dial failed: %v", err)
}
} else {
conn, err = s.dial()
if err != nil {
return fmt.Errorf("dial failed: %v", err)
}
}
defer conn.Close()
// write hello to client
pub0 := crypto.FromECDSAPub(&conn.ourKey.PublicKey)[1:]
handshakes := []*Hello{
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 64},
},
ID: pub0,
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: append(pub0, byte(0)),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: append(pub0, pub0...),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: largeBuffer(2),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 64},
},
ID: largeBuffer(2),
},
}
for i, handshake := range handshakes {
t.Logf("Testing malicious handshake %v\n", i)
if err := conn.Write(handshake); err != nil {
return fmt.Errorf("could not write to connection: %v", err)
}
// check that the peer disconnected
for i := 0; i < 2; i++ {
switch msg := conn.readAndServe(s.chain, 20*time.Second).(type) {
case *Disconnect:
case *Error:
case *Hello:
// Discard one Hello as Hellos are sent concurrently
continue
default:
return fmt.Errorf("unexpected: %s", pretty.Sdump(msg))
}
}
// dial for the next round
if isEth66 {
conn, err = s.dial66()
if err != nil {
return fmt.Errorf("dial failed: %v", err)
}
} else {
conn, err = s.dial()
if err != nil {
return fmt.Errorf("dial failed: %v", err)
}
}
}
return nil
}
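// maliciousStatus completes the protocol handshake, sends a Status message with
// an oversized total difficulty and expects the node to disconnect.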
func (s *Suite) maliciousStatus(conn *Conn) error {
if err := conn.handshake(); err != nil {
return fmt.Errorf("handshake failed: %v", err)
}
status := &Status{
ProtocolVersion: uint32(conn.negotiatedProtoVersion),
NetworkID: s.chain.chainConfig.ChainID.Uint64(),
TD: largeNumber(2),
Head: s.chain.blocks[s.chain.Len()-1].Hash(),
Genesis: s.chain.blocks[0].Hash(),
ForkID: s.chain.ForkID(),
}
// get status
msg, err := conn.statusExchange(s.chain, status)
if err != nil {
return fmt.Errorf("status exchange failed: %v", err)
}
switch msg := msg.(type) {
case *Status:
default:
return fmt.Errorf("expected status, got: %#v ", msg)
}
// wait for disconnect
switch msg := conn.readAndServe(s.chain, timeout).(type) {
case *Disconnect:
return nil
case *Error:
return nil
default:
return fmt.Errorf("expected disconnect, got: %s", pretty.Sdump(msg))
}
}
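// hashAnnounce sends a NewBlockHashes announcement for the next block, serves
// the node's resulting header request and waits for the node to announce and
// import the block.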
func (s *Suite) hashAnnounce(isEth66 bool) error {
// create connections
sendConn, recvConn, err := s.createSendAndRecvConns(isEth66)
if err != nil {
return fmt.Errorf("failed to create connections: %v", err)
}
defer sendConn.Close()
defer recvConn.Close()
if err := sendConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
if err := recvConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
// create NewBlockHashes announcement
type anno struct {
Hash common.Hash // Hash of one particular block being announced
Number uint64 // Number of one particular block being announced
}
nextBlock := s.fullChain.blocks[s.chain.Len()]
announcement := anno{Hash: nextBlock.Hash(), Number: nextBlock.Number().Uint64()}
newBlockHash := &NewBlockHashes{announcement}
if err := sendConn.Write(newBlockHash); err != nil {
return fmt.Errorf("failed to write to connection: %v", err)
}
// Announcement sent, now wait for a header request
var (
id uint64
msg Message
blockHeaderReq GetBlockHeaders
)
if isEth66 {
id, msg = sendConn.Read66()
switch msg := msg.(type) {
case GetBlockHeaders:
blockHeaderReq = msg
default:
return fmt.Errorf("unexpected %s", pretty.Sdump(msg))
}
if blockHeaderReq.Amount != 1 {
return fmt.Errorf("unexpected number of block headers requested: %v", blockHeaderReq.Amount)
}
if blockHeaderReq.Origin.Hash != announcement.Hash {
return fmt.Errorf("unexpected block header requested. Announced:\n %v\n Remote request:\n%v",
pretty.Sdump(announcement),
pretty.Sdump(blockHeaderReq))
}
if err := sendConn.Write66(&eth.BlockHeadersPacket66{
RequestId: id,
BlockHeadersPacket: eth.BlockHeadersPacket{
nextBlock.Header(),
},
}, BlockHeaders{}.Code()); err != nil {
return fmt.Errorf("failed to write to connection: %v", err)
}
} else {
msg = sendConn.Read()
switch msg := msg.(type) {
case *GetBlockHeaders:
blockHeaderReq = *msg
default:
return fmt.Errorf("unexpected %s", pretty.Sdump(msg))
}
if blockHeaderReq.Amount != 1 {
return fmt.Errorf("unexpected number of block headers requested: %v", blockHeaderReq.Amount)
}
if blockHeaderReq.Origin.Hash != announcement.Hash {
return fmt.Errorf("unexpected block header requested. Announced:\n %v\n Remote request:\n%v",
pretty.Sdump(announcement),
pretty.Sdump(blockHeaderReq))
}
if err := sendConn.Write(&BlockHeaders{nextBlock.Header()}); err != nil {
return fmt.Errorf("failed to write to connection: %v", err)
}
}
// wait for block announcement
msg = recvConn.readAndServe(s.chain, timeout)
switch msg := msg.(type) {
case *NewBlockHashes:
hashes := *msg
if len(hashes) != 1 {
return fmt.Errorf("unexpected new block hash announcement: wanted 1 announcement, got %d", len(hashes))
}
if nextBlock.Hash() != hashes[0].Hash {
return fmt.Errorf("unexpected block hash announcement, wanted %v, got %v", nextBlock.Hash(),
hashes[0].Hash)
}
case *NewBlock:
// node should only propagate NewBlock without having requested the body if the body is empty
nextBlockBody := nextBlock.Body()
if len(nextBlockBody.Transactions) != 0 || len(nextBlockBody.Uncles) != 0 {
return fmt.Errorf("unexpected non-empty new block propagated: %s", pretty.Sdump(msg))
}
if msg.Block.Hash() != nextBlock.Hash() {
return fmt.Errorf("mismatched hash of propagated new block: wanted %v, got %v",
nextBlock.Hash(), msg.Block.Hash())
}
// check to make sure header matches header that was sent to the node
if !reflect.DeepEqual(nextBlock.Header(), msg.Block.Header()) {
return fmt.Errorf("incorrect header received: wanted %v, got %v", nextBlock.Header(), msg.Block.Header())
}
default:
return fmt.Errorf("unexpected: %s", pretty.Sdump(msg))
}
// confirm node imported block
if err := s.waitForBlockImport(recvConn, nextBlock, isEth66); err != nil {
return fmt.Errorf("error waiting for node to import new block: %v", err)
}
// update the chain
s.chain.blocks = append(s.chain.blocks, nextBlock)
return nil
}


@ -70,7 +70,7 @@ func largeHeader() *types.Header {
GasUsed: 0,
Coinbase: common.Address{},
GasLimit: 0,
UncleHash: randHash(),
UncleHash: types.EmptyUncleHash,
Time: 1337,
ParentHash: randHash(),
Root: randHash(),

File diff suppressed because it is too large.


@ -0,0 +1,107 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"os"
"testing"
"time"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/eth/ethconfig"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p"
)
var (
genesisFile = "./testdata/genesis.json"
halfchainFile = "./testdata/halfchain.rlp"
fullchainFile = "./testdata/chain.rlp"
)
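// TestEthSuite boots a local geth node and runs the full eth protocol test
// suite against it.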
func TestEthSuite(t *testing.T) {
geth, err := runGeth()
if err != nil {
t.Fatalf("could not run geth: %v", err)
}
defer geth.Close()
suite, err := NewSuite(geth.Server().Self(), fullchainFile, genesisFile)
if err != nil {
t.Fatalf("could not create new test suite: %v", err)
}
for _, test := range suite.AllEthTests() {
t.Run(test.Name, func(t *testing.T) {
result := utesting.RunTAP([]utesting.Test{{Name: test.Name, Fn: test.Fn}}, os.Stdout)
if result[0].Failed {
t.Fatal()
}
})
}
}
// runGeth creates and starts a geth node
func runGeth() (*node.Node, error) {
stack, err := node.New(&node.Config{
P2P: p2p.Config{
ListenAddr: "127.0.0.1:0",
NoDiscovery: true,
MaxPeers: 10, // in case a test requires multiple connections, can be changed in the future
NoDial: true,
},
})
if err != nil {
return nil, err
}
err = setupGeth(stack)
if err != nil {
stack.Close()
return nil, err
}
if err = stack.Start(); err != nil {
stack.Close()
return nil, err
}
return stack, nil
}
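// setupGeth registers an eth backend configured with the test genesis on the
// given node and imports the half chain into it.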
func setupGeth(stack *node.Node) error {
chain, err := loadChain(halfchainFile, genesisFile)
if err != nil {
return err
}
backend, err := eth.New(stack, &ethconfig.Config{
Genesis: &chain.genesis,
NetworkId: chain.genesis.Config.ChainID.Uint64(), // 19763
DatabaseCache: 10,
TrieCleanCache: 10,
TrieCleanCacheJournal: "",
TrieCleanCacheRejournal: 60 * time.Minute,
TrieDirtyCache: 16,
TrieTimeout: 60 * time.Minute,
SnapshotCache: 10,
})
if err != nil {
return err
}
_, err = backend.BlockChain().InsertChain(chain.blocks[1:])
return err
}


@ -17,173 +17,403 @@
package ethtest
import (
"fmt"
"math/big"
"strings"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/params"
)
//var faucetAddr = common.HexToAddress("0x71562b71999873DB5b286dF957af199Ec94617F7")
var faucetKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
func sendSuccessfulTx(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn := s.setupConnection(t)
sendSuccessfulTxWithConn(t, s, tx, sendConn)
func (s *Suite) sendSuccessfulTxs(t *utesting.T, isEth66 bool) error {
tests := []*types.Transaction{
getNextTxFromChain(s),
unknownTx(s),
}
for i, tx := range tests {
if tx == nil {
return fmt.Errorf("could not find tx to send")
}
t.Logf("Testing tx propagation %d: sending tx %v %v %v\n", i, tx.Hash().String(), tx.GasPrice(), tx.Gas())
// get previous tx if exists for reference in case of old tx propagation
var prevTx *types.Transaction
if i != 0 {
prevTx = tests[i-1]
}
// write tx to connection
if err := sendSuccessfulTx(s, tx, prevTx, isEth66); err != nil {
return fmt.Errorf("send successful tx test failed: %v", err)
}
}
return nil
}
func sendSuccessfulTxWithConn(t *utesting.T, s *Suite, tx *types.Transaction, sendConn *Conn) {
t.Logf("sending tx: %v %v %v\n", tx.Hash().String(), tx.GasPrice(), tx.Gas())
// Send the transaction
if err := sendConn.Write(&Transactions{tx}); err != nil {
t.Fatal(err)
func sendSuccessfulTx(s *Suite, tx *types.Transaction, prevTx *types.Transaction, isEth66 bool) error {
sendConn, recvConn, err := s.createSendAndRecvConns(isEth66)
if err != nil {
return err
}
time.Sleep(100 * time.Millisecond)
recvConn := s.setupConnection(t)
defer sendConn.Close()
defer recvConn.Close()
if err = sendConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
// Send the transaction
if err = sendConn.Write(&Transactions{tx}); err != nil {
return fmt.Errorf("failed to write to connection: %v", err)
}
// peer receiving connection to node
if err = recvConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
// update last nonce seen
nonce = tx.Nonce()
// Wait for the transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *Transactions:
recTxs := *msg
for _, gotTx := range recTxs {
if gotTx.Hash() == tx.Hash() {
// Ok
return
for {
switch msg := recvConn.readAndServe(s.chain, timeout).(type) {
case *Transactions:
recTxs := *msg
// if you receive an old tx propagation, read from connection again
if len(recTxs) == 1 && prevTx != nil {
if recTxs[0] == prevTx {
continue
}
}
}
t.Fatalf("missing transaction: got %v missing %v", recTxs, tx.Hash())
case *NewPooledTransactionHashes:
txHashes := *msg
for _, gotHash := range txHashes {
if gotHash == tx.Hash() {
return
for _, gotTx := range recTxs {
if gotTx.Hash() == tx.Hash() {
// Ok
return nil
}
}
return fmt.Errorf("missing transaction: got %v missing %v", recTxs, tx.Hash())
case *NewPooledTransactionHashes:
txHashes := *msg
// if you receive an old tx propagation, read from connection again
if len(txHashes) == 1 && prevTx != nil {
if txHashes[0] == prevTx.Hash() {
continue
}
}
for _, gotHash := range txHashes {
if gotHash == tx.Hash() {
// Ok
return nil
}
}
return fmt.Errorf("missing transaction announcement: got %v missing %v", txHashes, tx.Hash())
default:
return fmt.Errorf("unexpected message in sendSuccessfulTx: %s", pretty.Sdump(msg))
}
t.Fatalf("missing transaction announcement: got %v missing %v", txHashes, tx.Hash())
default:
t.Fatalf("unexpected message in sendSuccessfulTx: %s", pretty.Sdump(msg))
}
}
func sendFailingTx(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn, recvConn := s.setupConnection(t), s.setupConnection(t)
sendFailingTxWithConns(t, s, tx, sendConn, recvConn)
func (s *Suite) sendMaliciousTxs(t *utesting.T, isEth66 bool) error {
badTxs := []*types.Transaction{
getOldTxFromChain(s),
invalidNonceTx(s),
hugeAmount(s),
hugeGasPrice(s),
hugeData(s),
}
// setup receiving connection before sending malicious txs
var (
recvConn *Conn
err error
)
if isEth66 {
recvConn, err = s.dial66()
} else {
recvConn, err = s.dial()
}
if err != nil {
return fmt.Errorf("dial failed: %v", err)
}
defer recvConn.Close()
if err = recvConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
for i, tx := range badTxs {
t.Logf("Testing malicious tx propagation: %v\n", i)
if err = sendMaliciousTx(s, tx, isEth66); err != nil {
return fmt.Errorf("malicious tx test failed:\ntx: %v\nerror: %v", tx, err)
}
}
// check to make sure bad txs aren't propagated
return checkMaliciousTxPropagation(s, badTxs, recvConn)
}
func sendFailingTxWithConns(t *utesting.T, s *Suite, tx *types.Transaction, sendConn, recvConn *Conn) {
// Wait for a transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *NewPooledTransactionHashes:
break
default:
t.Logf("unexpected message, logging: %v", pretty.Sdump(msg))
func sendMaliciousTx(s *Suite, tx *types.Transaction, isEth66 bool) error {
// setup connection
var (
conn *Conn
err error
)
if isEth66 {
conn, err = s.dial66()
} else {
conn, err = s.dial()
}
// Send the transaction
if err := sendConn.Write(&Transactions{tx}); err != nil {
t.Fatal(err)
if err != nil {
return fmt.Errorf("dial failed: %v", err)
}
// Wait for another transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
defer conn.Close()
if err = conn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
// write malicious tx
if err = conn.Write(&Transactions{tx}); err != nil {
return fmt.Errorf("failed to write to connection: %v", err)
}
return nil
}
var nonce = uint64(99)
// sendMultipleSuccessfulTxs sends the given transactions to the node and
// expects the node to accept and propagate them.
func sendMultipleSuccessfulTxs(t *utesting.T, s *Suite, txs []*types.Transaction) error {
txMsg := Transactions(txs)
t.Logf("sending %d txs\n", len(txs))
sendConn, recvConn, err := s.createSendAndRecvConns(true)
if err != nil {
return err
}
defer sendConn.Close()
defer recvConn.Close()
if err = sendConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
if err = recvConn.peer(s.chain, nil); err != nil {
return fmt.Errorf("peering failed: %v", err)
}
// Send the transactions
if err = sendConn.Write(&txMsg); err != nil {
return fmt.Errorf("failed to write message to connection: %v", err)
}
// update nonce
nonce = txs[len(txs)-1].Nonce()
// Wait for the transaction announcement(s) and make sure all sent txs are being propagated
recvHashes := make([]common.Hash, 0)
// all txs should be announced within 3 announcements
for i := 0; i < 3; i++ {
switch msg := recvConn.readAndServe(s.chain, timeout).(type) {
case *Transactions:
for _, tx := range *msg {
recvHashes = append(recvHashes, tx.Hash())
}
case *NewPooledTransactionHashes:
recvHashes = append(recvHashes, *msg...)
default:
if !strings.Contains(pretty.Sdump(msg), "i/o timeout") {
return fmt.Errorf("unexpected message while waiting to receive txs: %s", pretty.Sdump(msg))
}
}
// break once all 2000 txs have been received
if len(recvHashes) == 2000 {
break
}
if len(recvHashes) > 0 {
_, missingTxs := compareReceivedTxs(recvHashes, txs)
if len(missingTxs) > 0 {
continue
} else {
t.Logf("successfully received all %d txs", len(txs))
return nil
}
}
}
_, missingTxs := compareReceivedTxs(recvHashes, txs)
if len(missingTxs) > 0 {
for _, missing := range missingTxs {
t.Logf("missing tx: %v", missing.Hash())
}
return fmt.Errorf("missing %d txs", len(missingTxs))
}
return nil
}
// checkMaliciousTxPropagation checks whether the given malicious transactions were
// propagated by the node.
func checkMaliciousTxPropagation(s *Suite, txs []*types.Transaction, conn *Conn) error {
switch msg := conn.readAndServe(s.chain, time.Second*8).(type) {
case *Transactions:
t.Fatalf("Received unexpected transaction announcement: %v", msg)
// check to see if any of the failing txs were in the announcement
recvTxs := make([]common.Hash, len(*msg))
for i, recvTx := range *msg {
recvTxs[i] = recvTx.Hash()
}
badTxs, _ := compareReceivedTxs(recvTxs, txs)
if len(badTxs) > 0 {
return fmt.Errorf("received %d bad txs: \n%v", len(badTxs), badTxs)
}
case *NewPooledTransactionHashes:
t.Fatalf("Received unexpected pooledTx announcement: %v", msg)
badTxs, _ := compareReceivedTxs(*msg, txs)
if len(badTxs) > 0 {
return fmt.Errorf("received %d bad txs: \n%v", len(badTxs), badTxs)
}
case *Error:
// Transaction should not be announced -> wait for timeout
return
return nil
default:
t.Fatalf("unexpected message in sendFailingTx: %s", pretty.Sdump(msg))
return fmt.Errorf("unexpected message in sendFailingTx: %s", pretty.Sdump(msg))
}
return nil
}
func unknownTx(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
// compareReceivedTxs compares the received set of tx hashes against the given set
// of txs, returning both the txs that were present in the received set and the
// txs that were missing from it.
func compareReceivedTxs(recvTxs []common.Hash, txs []*types.Transaction) (present []*types.Transaction, missing []*types.Transaction) {
// create a map of the hashes received from node
recvHashes := make(map[common.Hash]common.Hash)
for _, hash := range recvTxs {
recvHashes[hash] = hash
}
// collect present txs and missing txs separately
present = make([]*types.Transaction, 0)
missing = make([]*types.Transaction, 0)
for _, tx := range txs {
if _, exists := recvHashes[tx.Hash()]; exists {
present = append(present, tx)
} else {
missing = append(missing, tx)
}
}
return present, missing
}
func unknownTx(s *Suite) *types.Transaction {
tx := getNextTxFromChain(s)
if tx == nil {
return nil
}
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce()+1, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
return signWithFaucet(s.chain.chainConfig, txNew)
}
func getNextTxFromChain(t *utesting.T, s *Suite) *types.Transaction {
func getNextTxFromChain(s *Suite) *types.Transaction {
// Get a new transaction
var tx *types.Transaction
for _, blocks := range s.fullChain.blocks[s.chain.Len():] {
txs := blocks.Transactions()
if txs.Len() != 0 {
tx = txs[0]
break
return txs[0]
}
}
if tx == nil {
t.Fatal("could not find transaction")
}
return tx
return nil
}
func getOldTxFromChain(t *utesting.T, s *Suite) *types.Transaction {
var tx *types.Transaction
func generateTxs(s *Suite, numTxs int) (map[common.Hash]common.Hash, []*types.Transaction, error) {
txHashMap := make(map[common.Hash]common.Hash, numTxs)
txs := make([]*types.Transaction, numTxs)
nextTx := getNextTxFromChain(s)
if nextTx == nil {
return nil, nil, fmt.Errorf("failed to get the next transaction")
}
gas := nextTx.Gas()
nonce = nonce + 1
// generate txs
for i := 0; i < numTxs; i++ {
tx := generateTx(s.chain.chainConfig, nonce, gas)
if tx == nil {
return nil, nil, fmt.Errorf("failed to get the next transaction")
}
txHashMap[tx.Hash()] = tx.Hash()
txs[i] = tx
nonce = nonce + 1
}
return txHashMap, txs, nil
}
func generateTx(chainConfig *params.ChainConfig, nonce uint64, gas uint64) *types.Transaction {
var to common.Address
tx := types.NewTransaction(nonce, to, big.NewInt(1), gas, big.NewInt(1), []byte{})
return signWithFaucet(chainConfig, tx)
}
func getOldTxFromChain(s *Suite) *types.Transaction {
for _, blocks := range s.fullChain.blocks[:s.chain.Len()-1] {
txs := blocks.Transactions()
if txs.Len() != 0 {
tx = txs[0]
break
return txs[0]
}
}
if tx == nil {
t.Fatal("could not find transaction")
}
return tx
return nil
}
func invalidNonceTx(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
func invalidNonceTx(s *Suite) *types.Transaction {
tx := getNextTxFromChain(s)
if tx == nil {
return nil
}
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce()-2, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
return signWithFaucet(s.chain.chainConfig, txNew)
}
func hugeAmount(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
func hugeAmount(s *Suite) *types.Transaction {
tx := getNextTxFromChain(s)
if tx == nil {
return nil
}
amount := largeNumber(2)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, amount, tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
return signWithFaucet(s.chain.chainConfig, txNew)
}
func hugeGasPrice(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
func hugeGasPrice(s *Suite) *types.Transaction {
tx := getNextTxFromChain(s)
if tx == nil {
return nil
}
gasPrice := largeNumber(2)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), gasPrice, tx.Data())
return signWithFaucet(t, txNew)
return signWithFaucet(s.chain.chainConfig, txNew)
}
func hugeData(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
func hugeData(s *Suite) *types.Transaction {
tx := getNextTxFromChain(s)
if tx == nil {
return nil
}
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), tx.GasPrice(), largeBuffer(2))
return signWithFaucet(t, txNew)
return signWithFaucet(s.chain.chainConfig, txNew)
}
func signWithFaucet(t *utesting.T, tx *types.Transaction) *types.Transaction {
signer := types.HomesteadSigner{}
func signWithFaucet(chainConfig *params.ChainConfig, tx *types.Transaction) *types.Transaction {
signer := types.LatestSigner(chainConfig)
signedTx, err := types.SignTx(tx, signer, faucetKey)
if err != nil {
t.Fatalf("could not sign tx: %v\n", err)
return nil
}
return signedTx
}


@ -19,13 +19,8 @@ package ethtest
import (
"crypto/ecdsa"
"fmt"
"reflect"
"time"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/rlpx"
"github.com/ethereum/go-ethereum/rlp"
@ -120,6 +115,14 @@ type NewPooledTransactionHashes eth.NewPooledTransactionHashesPacket
func (nb NewPooledTransactionHashes) Code() int { return 24 }
type GetPooledTransactions eth.GetPooledTransactionsPacket
func (gpt GetPooledTransactions) Code() int { return 25 }
type PooledTransactions eth.PooledTransactionsPacket
func (pt PooledTransactions) Code() int { return 26 }
// Conn represents an individual connection with a peer
type Conn struct {
*rlpx.Conn
@ -129,6 +132,7 @@ type Conn struct {
caps []p2p.Cap
}
// Read reads an eth packet from the connection.
func (c *Conn) Read() Message {
code, rawData, _, err := c.Conn.Read()
if err != nil {
@ -163,6 +167,10 @@ func (c *Conn) Read() Message {
msg = new(Transactions)
case (NewPooledTransactionHashes{}).Code():
msg = new(NewPooledTransactionHashes)
case (GetPooledTransactions{}.Code()):
msg = new(GetPooledTransactions)
case (PooledTransactions{}.Code()):
msg = new(PooledTransactions)
default:
return errorf("invalid message code: %d", code)
}
@ -173,33 +181,83 @@ func (c *Conn) Read() Message {
return msg
}
// ReadAndServe serves GetBlockHeaders requests while waiting
// on another message from the node.
func (c *Conn) ReadAndServe(chain *Chain, timeout time.Duration) Message {
start := time.Now()
for time.Since(start) < timeout {
timeout := time.Now().Add(10 * time.Second)
c.SetReadDeadline(timeout)
switch msg := c.Read().(type) {
case *Ping:
c.Write(&Pong{})
case *GetBlockHeaders:
req := *msg
headers, err := chain.GetHeaders(req)
if err != nil {
return errorf("could not get headers for inbound header request: %v", err)
}
if err := c.Write(headers); err != nil {
return errorf("could not write to connection: %v", err)
}
default:
return msg
}
// Read66 reads an eth66 packet from the connection.
func (c *Conn) Read66() (uint64, Message) {
code, rawData, _, err := c.Conn.Read()
if err != nil {
return 0, errorf("could not read from connection: %v", err)
}
return errorf("no message received within %v", timeout)
var msg Message
switch int(code) {
case (Hello{}).Code():
msg = new(Hello)
case (Ping{}).Code():
msg = new(Ping)
case (Pong{}).Code():
msg = new(Pong)
case (Disconnect{}).Code():
msg = new(Disconnect)
case (Status{}).Code():
msg = new(Status)
case (GetBlockHeaders{}).Code():
ethMsg := new(eth.GetBlockHeadersPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, GetBlockHeaders(*ethMsg.GetBlockHeadersPacket)
case (BlockHeaders{}).Code():
ethMsg := new(eth.BlockHeadersPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, BlockHeaders(ethMsg.BlockHeadersPacket)
case (GetBlockBodies{}).Code():
ethMsg := new(eth.GetBlockBodiesPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, GetBlockBodies(ethMsg.GetBlockBodiesPacket)
case (BlockBodies{}).Code():
ethMsg := new(eth.BlockBodiesPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, BlockBodies(ethMsg.BlockBodiesPacket)
case (NewBlock{}).Code():
msg = new(NewBlock)
case (NewBlockHashes{}).Code():
msg = new(NewBlockHashes)
case (Transactions{}).Code():
msg = new(Transactions)
case (NewPooledTransactionHashes{}).Code():
msg = new(NewPooledTransactionHashes)
case (GetPooledTransactions{}.Code()):
ethMsg := new(eth.GetPooledTransactionsPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, GetPooledTransactions(ethMsg.GetPooledTransactionsPacket)
case (PooledTransactions{}.Code()):
ethMsg := new(eth.PooledTransactionsPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, PooledTransactions(ethMsg.PooledTransactionsPacket)
default:
msg = errorf("invalid message code: %d", code)
}
if msg != nil {
if err := rlp.DecodeBytes(rawData, msg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return 0, msg
}
return 0, errorf("invalid message: %s", string(rawData))
}
// Write writes a eth packet to the connection.
func (c *Conn) Write(msg Message) error {
// check if message is eth protocol message
var (
@ -214,130 +272,12 @@ func (c *Conn) Write(msg Message) error {
return err
}
// handshake checks to make sure a `HELLO` is received.
func (c *Conn) handshake(t *utesting.T) Message {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(10 * time.Second))
// write hello to client
pub0 := crypto.FromECDSAPub(&c.ourKey.PublicKey)[1:]
ourHandshake := &Hello{
Version: 5,
Caps: c.caps,
ID: pub0,
}
if err := c.Write(ourHandshake); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// read hello from client
switch msg := c.Read().(type) {
case *Hello:
// set snappy if version is at least 5
if msg.Version >= 5 {
c.SetSnappy(true)
}
c.negotiateEthProtocol(msg.Caps)
if c.negotiatedProtoVersion == 0 {
t.Fatalf("unexpected eth protocol version")
}
return msg
default:
t.Fatalf("bad handshake: %#v", msg)
return nil
}
}
// negotiateEthProtocol sets the Conn's eth protocol version
// to highest advertised capability from peer
func (c *Conn) negotiateEthProtocol(caps []p2p.Cap) {
var highestEthVersion uint
for _, capability := range caps {
if capability.Name != "eth" {
continue
}
if capability.Version > highestEthVersion && capability.Version <= c.ourHighestProtoVersion {
highestEthVersion = capability.Version
}
}
c.negotiatedProtoVersion = highestEthVersion
}
// statusExchange performs a `Status` message exchange with the given
// node.
func (c *Conn) statusExchange(t *utesting.T, chain *Chain, status *Status) Message {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(20 * time.Second))
// read status message from client
var message Message
loop:
for {
switch msg := c.Read().(type) {
case *Status:
if have, want := msg.Head, chain.blocks[chain.Len()-1].Hash(); have != want {
t.Fatalf("wrong head block in status, want: %#x (block %d) have %#x",
want, chain.blocks[chain.Len()-1].NumberU64(), have)
}
if have, want := msg.TD.Cmp(chain.TD(chain.Len())), 0; have != want {
t.Fatalf("wrong TD in status: have %v want %v", have, want)
}
if have, want := msg.ForkID, chain.ForkID(); !reflect.DeepEqual(have, want) {
t.Fatalf("wrong fork ID in status: have %v, want %v", have, want)
}
message = msg
break loop
case *Disconnect:
t.Fatalf("disconnect received: %v", msg.Reason)
case *Ping:
c.Write(&Pong{}) // TODO (renaynay): in the future, this should be an error
// (PINGs should not be a response upon fresh connection)
default:
t.Fatalf("bad status message: %s", pretty.Sdump(msg))
}
}
// make sure eth protocol version is set for negotiation
if c.negotiatedProtoVersion == 0 {
t.Fatalf("eth protocol version must be set in Conn")
}
if status == nil {
// write status message to client
status = &Status{
ProtocolVersion: uint32(c.negotiatedProtoVersion),
NetworkID: chain.chainConfig.ChainID.Uint64(),
TD: chain.TD(chain.Len()),
Head: chain.blocks[chain.Len()-1].Hash(),
Genesis: chain.blocks[0].Hash(),
ForkID: chain.ForkID(),
}
}
if err := c.Write(status); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
return message
}
// waitForBlock waits for confirmation from the client that it has
// imported the given block.
func (c *Conn) waitForBlock(block *types.Block) error {
defer c.SetReadDeadline(time.Time{})
timeout := time.Now().Add(20 * time.Second)
c.SetReadDeadline(timeout)
for {
req := &GetBlockHeaders{Origin: eth.HashOrNumber{Hash: block.Hash()}, Amount: 1}
if err := c.Write(req); err != nil {
return err
}
switch msg := c.Read().(type) {
case *BlockHeaders:
if len(*msg) > 0 {
return nil
}
time.Sleep(100 * time.Millisecond)
default:
return fmt.Errorf("invalid message: %s", pretty.Sdump(msg))
}
// Write66 writes an eth66 packet to the connection.
func (c *Conn) Write66(req eth.Packet, code int) error {
payload, err := rlp.EncodeToBytes(req)
if err != nil {
return err
}
_, err = c.Conn.Write(uint64(code), payload)
return err
}


@ -21,7 +21,6 @@ import (
"crypto/rand"
"fmt"
"net"
"reflect"
"time"
"github.com/ethereum/go-ethereum/crypto"
@ -89,16 +88,18 @@ func BasicPing(t *utesting.T) {
// checkPong verifies that reply is a valid PONG matching the given ping hash.
func (te *testenv) checkPong(reply v4wire.Packet, pingHash []byte) error {
if reply == nil || reply.Kind() != v4wire.PongPacket {
return fmt.Errorf("expected PONG reply, got %v", reply)
if reply == nil {
return fmt.Errorf("expected PONG reply, got nil")
}
if reply.Kind() != v4wire.PongPacket {
return fmt.Errorf("expected PONG reply, got %v %v", reply.Name(), reply)
}
pong := reply.(*v4wire.Pong)
if !bytes.Equal(pong.ReplyTok, pingHash) {
return fmt.Errorf("PONG reply token mismatch: got %x, want %x", pong.ReplyTok, pingHash)
}
wantEndpoint := te.localEndpoint(te.l1)
if !reflect.DeepEqual(pong.To, wantEndpoint) {
return fmt.Errorf("PONG 'to' endpoint mismatch: got %+v, want %+v", pong.To, wantEndpoint)
if want := te.localEndpoint(te.l1); !want.IP.Equal(pong.To.IP) || want.UDP != pong.To.UDP {
return fmt.Errorf("PONG 'to' endpoint mismatch: got %+v, want %+v", pong.To, want)
}
if v4wire.Expired(pong.Expiration) {
return fmt.Errorf("PONG is expired (%v)", pong.Expiration)


@ -71,6 +71,7 @@ func writeNodesJSON(file string, nodes nodeSet) {
}
}
// nodes returns the node records contained in the set.
func (ns nodeSet) nodes() []*enode.Node {
result := make([]*enode.Node, 0, len(ns))
for _, n := range ns {
@ -83,12 +84,37 @@ func (ns nodeSet) nodes() []*enode.Node {
return result
}
// add ensures the given nodes are present in the set.
func (ns nodeSet) add(nodes ...*enode.Node) {
for _, n := range nodes {
ns[n.ID()] = nodeJSON{Seq: n.Seq(), N: n}
v := ns[n.ID()]
v.N = n
v.Seq = n.Seq()
ns[n.ID()] = v
}
}
// topN returns the top n nodes by score as a new set.
func (ns nodeSet) topN(n int) nodeSet {
if n >= len(ns) {
return ns
}
byscore := make([]nodeJSON, 0, len(ns))
for _, v := range ns {
byscore = append(byscore, v)
}
sort.Slice(byscore, func(i, j int) bool {
return byscore[i].Score >= byscore[j].Score
})
result := make(nodeSet, n)
for _, v := range byscore[:n] {
result[v.N.ID()] = v
}
return result
}
// verify performs integrity checks on the node set.
func (ns nodeSet) verify() error {
for id, n := range ns {
if n.N.ID() != id {


@ -17,8 +17,12 @@
package main
import (
"errors"
"fmt"
"net"
"sort"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/core/forkid"
@ -60,25 +64,64 @@ func nodesetInfo(ctx *cli.Context) error {
ns := loadNodesJSON(ctx.Args().First())
fmt.Printf("Set contains %d nodes.\n", len(ns))
showAttributeCounts(ns)
return nil
}
// showAttributeCounts prints the distribution of ENR attributes in a node set.
func showAttributeCounts(ns nodeSet) {
attrcount := make(map[string]int)
var attrlist []interface{}
for _, n := range ns {
r := n.N.Record()
attrlist = r.AppendElements(attrlist[:0])[1:]
for i := 0; i < len(attrlist); i += 2 {
key := attrlist[i].(string)
attrcount[key]++
}
}
var keys []string
var maxlength int
for key := range attrcount {
keys = append(keys, key)
if len(key) > maxlength {
maxlength = len(key)
}
}
sort.Strings(keys)
fmt.Println("ENR attribute counts:")
for _, key := range keys {
fmt.Printf("%s%s: %d\n", strings.Repeat(" ", maxlength-len(key)+1), key, attrcount[key])
}
}
func nodesetFilter(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need nodes file as argument")
}
ns := loadNodesJSON(ctx.Args().First())
// Parse -limit.
limit, err := parseFilterLimit(ctx.Args().Tail())
if err != nil {
return err
}
// Parse the filters.
filter, err := andFilter(ctx.Args().Tail())
if err != nil {
return err
}
// Load nodes and apply filters.
ns := loadNodesJSON(ctx.Args().First())
result := make(nodeSet)
for id, n := range ns {
if filter(n) {
result[id] = n
}
}
if limit >= 0 {
result = result.topN(limit)
}
writeNodesJSON("-", result)
return nil
}
@ -91,6 +134,7 @@ type nodeFilterC struct {
}
var filterFlags = map[string]nodeFilterC{
"-limit": {1, trueFilter}, // needed to skip over -limit
"-ip": {1, ipFilter},
"-min-age": {1, minAgeFilter},
"-eth-network": {1, ethFilter},
@ -98,6 +142,7 @@ var filterFlags = map[string]nodeFilterC{
"-snap": {0, snapFilter},
}
// parseFilters parses nodeFilters from args.
func parseFilters(args []string) ([]nodeFilter, error) {
var filters []nodeFilter
for len(args) > 0 {
@ -118,6 +163,26 @@ func parseFilters(args []string) ([]nodeFilter, error) {
return filters, nil
}
// parseFilterLimit parses the -limit option in args. It returns -1 if there is no limit.
func parseFilterLimit(args []string) (int, error) {
limit := -1
for i, arg := range args {
if arg == "-limit" {
if i == len(args)-1 {
return -1, errors.New("-limit requires an argument")
}
n, err := strconv.Atoi(args[i+1])
if err != nil {
return -1, fmt.Errorf("invalid -limit %q", args[i+1])
}
limit = n
}
}
return limit, nil
}
// andFilter parses node filters in args and returns a single filter that requires all
// of them to match.
func andFilter(args []string) (nodeFilter, error) {
checks, err := parseFilters(args)
if err != nil {
@ -134,6 +199,10 @@ func andFilter(args []string) (nodeFilter, error) {
return f, nil
}
func trueFilter(args []string) (nodeFilter, error) {
return func(n nodeJSON) bool { return true }, nil
}
func ipFilter(args []string) (nodeFilter, error) {
_, cidr, err := net.ParseCIDR(args[0])
if err != nil {


@ -256,9 +256,9 @@ Error code: 4
Another thing that can be done, is to chain invocations:
```
./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json
INFO [01-21|22:41:22.963] rejected tx index=1 hash="0557ba18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [01-21|22:41:22.966] rejected tx index=0 hash="0557ba18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [01-21|22:41:22.967] rejected tx index=1 hash="0557ba18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [01-21|22:41:22.963] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [01-21|22:41:22.966] rejected tx index=0 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [01-21|22:41:22.967] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
```
What happened here is that we first applied two identical transactions, so the second one was rejected.
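For context, with the `rejectedTx` struct added further down in this changeset, the `rejected` field of `result.json` would presumably carry the index together with the error string rather than a bare index; a minimal sketch:
```json
"rejected": [
  {
    "index": 1,
    "error": "nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
  }
]
```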


@ -52,7 +52,7 @@ type ExecutionResult struct {
LogsHash common.Hash `json:"logsHash"`
Bloom types.Bloom `json:"logsBloom" gencodec:"required"`
Receipts types.Receipts `json:"receipts"`
Rejected []int `json:"rejected,omitempty"`
Rejected []*rejectedTx `json:"rejected,omitempty"`
}
type ommer struct {
@ -69,6 +69,7 @@ type stEnv struct {
Timestamp uint64 `json:"currentTimestamp" gencodec:"required"`
BlockHashes map[math.HexOrDecimal64]common.Hash `json:"blockHashes,omitempty"`
Ommers []ommer `json:"ommers,omitempty"`
BaseFee *big.Int `json:"currentBaseFee,omitempty"`
}
type stEnvMarshaling struct {
@ -77,6 +78,12 @@ type stEnvMarshaling struct {
GasLimit math.HexOrDecimal64
Number math.HexOrDecimal64
Timestamp math.HexOrDecimal64
BaseFee *math.HexOrDecimal256
}
type rejectedTx struct {
Index int `json:"index"`
Err string `json:"error"`
}
// Apply applies a set of transactions to a pre-state
@ -103,7 +110,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
signer = types.MakeSigner(chainConfig, new(big.Int).SetUint64(pre.Env.Number))
gaspool = new(core.GasPool)
blockHash = common.Hash{0x13, 0x37}
rejectedTxs []int
rejectedTxs []*rejectedTx
includedTxs types.Transactions
gasUsed = uint64(0)
receipts = make(types.Receipts, 0)
@ -120,6 +127,10 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
GasLimit: pre.Env.GasLimit,
GetHash: getHash,
}
// If currentBaseFee is defined, add it to the vmContext.
if pre.Env.BaseFee != nil {
vmContext.BaseFee = new(big.Int).Set(pre.Env.BaseFee)
}
// If DAO is supported/enabled, we need to handle it here. In geth 'proper', it's
// done in StateProcessor.Process(block, ...), right before transactions are applied.
if chainConfig.DAOForkSupport &&
@ -129,10 +140,10 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
}
for i, tx := range txs {
msg, err := tx.AsMessage(signer)
msg, err := tx.AsMessage(signer, pre.Env.BaseFee)
if err != nil {
log.Info("rejected tx", "index", i, "hash", tx.Hash(), "error", err)
rejectedTxs = append(rejectedTxs, i)
log.Warn("rejected tx", "index", i, "hash", tx.Hash(), "error", err)
rejectedTxs = append(rejectedTxs, &rejectedTx{i, err.Error()})
continue
}
tracer, err := getTracerFn(txIndex, tx.Hash())
@ -141,7 +152,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
}
vmConfig.Tracer = tracer
vmConfig.Debug = (tracer != nil)
statedb.Prepare(tx.Hash(), blockHash, txIndex)
statedb.Prepare(tx.Hash(), txIndex)
txContext := core.NewEVMTxContext(msg)
snapshot := statedb.Snapshot()
evm := vm.NewEVM(vmContext, txContext, statedb, chainConfig, vmConfig)
@ -151,7 +162,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
if err != nil {
statedb.RevertToSnapshot(snapshot)
log.Info("rejected tx", "index", i, "hash", tx.Hash(), "from", msg.From(), "error", err)
rejectedTxs = append(rejectedTxs, i)
rejectedTxs = append(rejectedTxs, &rejectedTx{i, err.Error()})
continue
}
includedTxs = append(includedTxs, tx)
@ -186,7 +197,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
}
// Set the receipt logs and create the bloom filter.
receipt.Logs = statedb.GetLogs(tx.Hash())
receipt.Logs = statedb.GetLogs(tx.Hash(), blockHash)
receipt.Bloom = types.CreateBloom(types.Receipts{receipt})
// These three are non-consensus fields:
//receipt.BlockHash


@ -23,6 +23,7 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
Timestamp math.HexOrDecimal64 `json:"currentTimestamp" gencodec:"required"`
BlockHashes map[math.HexOrDecimal64]common.Hash `json:"blockHashes,omitempty"`
Ommers []ommer `json:"ommers,omitempty"`
BaseFee *math.HexOrDecimal256 `json:"currentBaseFee,omitempty"`
}
var enc stEnv
enc.Coinbase = common.UnprefixedAddress(s.Coinbase)
@ -32,6 +33,7 @@ func (s stEnv) MarshalJSON() ([]byte, error) {
enc.Timestamp = math.HexOrDecimal64(s.Timestamp)
enc.BlockHashes = s.BlockHashes
enc.Ommers = s.Ommers
enc.BaseFee = (*math.HexOrDecimal256)(s.BaseFee)
return json.Marshal(&enc)
}
@ -45,6 +47,7 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
Timestamp *math.HexOrDecimal64 `json:"currentTimestamp" gencodec:"required"`
BlockHashes map[math.HexOrDecimal64]common.Hash `json:"blockHashes,omitempty"`
Ommers []ommer `json:"ommers,omitempty"`
BaseFee *math.HexOrDecimal256 `json:"currentBaseFee,omitempty"`
}
var dec stEnv
if err := json.Unmarshal(input, &dec); err != nil {
@ -76,5 +79,8 @@ func (s *stEnv) UnmarshalJSON(input []byte) error {
if dec.Ommers != nil {
s.Ommers = dec.Ommers
}
if dec.BaseFee != nil {
s.BaseFee = (*big.Int)(dec.BaseFee)
}
return nil
}


@ -19,6 +19,7 @@ package t8ntool
import (
"crypto/ecdsa"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"math/big"
@ -142,7 +143,9 @@ func Main(ctx *cli.Context) error {
// Figure out the prestate alloc
if allocStr == stdinSelector || envStr == stdinSelector || txStr == stdinSelector {
decoder := json.NewDecoder(os.Stdin)
decoder.Decode(inputData)
if err := decoder.Decode(inputData); err != nil {
return NewError(ErrorJson, fmt.Errorf("failed unmarshaling stdin: %v", err))
}
}
if allocStr != stdinSelector {
inFile, err := os.Open(allocStr)
@ -152,7 +155,7 @@ func Main(ctx *cli.Context) error {
defer inFile.Close()
decoder := json.NewDecoder(inFile)
if err := decoder.Decode(&inputData.Alloc); err != nil {
return NewError(ErrorJson, fmt.Errorf("Failed unmarshaling alloc-file: %v", err))
return NewError(ErrorJson, fmt.Errorf("failed unmarshaling alloc-file: %v", err))
}
}
prestate.Pre = inputData.Alloc
@ -167,7 +170,7 @@ func Main(ctx *cli.Context) error {
decoder := json.NewDecoder(inFile)
var env stEnv
if err := decoder.Decode(&env); err != nil {
return NewError(ErrorJson, fmt.Errorf("Failed unmarshaling env-file: %v", err))
return NewError(ErrorJson, fmt.Errorf("failed unmarshaling env-file: %v", err))
}
inputData.Env = &env
}
@ -180,7 +183,7 @@ func Main(ctx *cli.Context) error {
// Construct the chainconfig
var chainConfig *params.ChainConfig
if cConf, extraEips, err := tests.GetChainConfig(ctx.String(ForknameFlag.Name)); err != nil {
return NewError(ErrorVMConfig, fmt.Errorf("Failed constructing chain configuration: %v", err))
return NewError(ErrorVMConfig, fmt.Errorf("failed constructing chain configuration: %v", err))
} else {
chainConfig = cConf
vmConfig.ExtraEips = extraEips
@ -197,7 +200,7 @@ func Main(ctx *cli.Context) error {
defer inFile.Close()
decoder := json.NewDecoder(inFile)
if err := decoder.Decode(&txsWithKeys); err != nil {
return NewError(ErrorJson, fmt.Errorf("Failed unmarshaling txs-file: %v", err))
return NewError(ErrorJson, fmt.Errorf("failed unmarshaling txs-file: %v", err))
}
} else {
txsWithKeys = inputData.Txs
@ -206,22 +209,24 @@ func Main(ctx *cli.Context) error {
signer := types.MakeSigner(chainConfig, big.NewInt(int64(prestate.Env.Number)))
if txs, err = signUnsignedTransactions(txsWithKeys, signer); err != nil {
return NewError(ErrorJson, fmt.Errorf("Failed signing transactions: %v", err))
return NewError(ErrorJson, fmt.Errorf("failed signing transactions: %v", err))
}
// Sanity check, to not `panic` in state_transition
if chainConfig.IsLondon(big.NewInt(int64(prestate.Env.Number))) {
if prestate.Env.BaseFee == nil {
return NewError(ErrorVMConfig, errors.New("EIP-1559 config but missing 'currentBaseFee' in env section"))
}
}
// Iterate over all the tests, run them and aggregate the results
// Run the test and aggregate the result
state, result, err := prestate.Apply(vmConfig, chainConfig, txs, ctx.Int64(RewardFlag.Name), getTracer)
s, result, err := prestate.Apply(vmConfig, chainConfig, txs, ctx.Int64(RewardFlag.Name), getTracer)
if err != nil {
return err
}
body, _ := rlp.EncodeToBytes(txs)
// Dump the execution result
collector := make(Alloc)
state.DumpToCollector(collector, false, false, false, nil, -1)
s.DumpToCollector(collector, nil)
return dispatchOutput(ctx, baseDir, result, collector, body)
}
// txWithKey is a helper-struct, to allow us to use the types.Transaction along with
@ -278,7 +283,7 @@ func signUnsignedTransactions(txs []*txWithKey, signer types.Signer) (types.Tran
// This transaction needs to be signed
signed, err := types.SignTx(tx, signer, key)
if err != nil {
return nil, NewError(ErrorJson, fmt.Errorf("Tx %d: failed to sign tx: %v", i, err))
return nil, NewError(ErrorJson, fmt.Errorf("tx %d: failed to sign tx: %v", i, err))
}
signedTxs = append(signedTxs, signed)
} else {
@ -303,7 +308,7 @@ func (g Alloc) OnAccount(addr common.Address, dumpAccount state.DumpAccount) {
}
}
genesisAccount := core.GenesisAccount{
Code: common.FromHex(dumpAccount.Code),
Code: dumpAccount.Code,
Storage: storage,
Balance: balance,
Nonce: dumpAccount.Nonce,


@ -129,11 +129,6 @@ var (
Name: "noreturndata",
Usage: "disable return data output",
}
EVMInterpreterFlag = cli.StringFlag{
Name: "vm.evm",
Usage: "External EVM configuration (default = built-in interpreter)",
Value: "",
}
)
var stateTransitionCommand = cli.Command{
@ -185,7 +180,6 @@ func init() {
DisableStackFlag,
DisableStorageFlag,
DisableReturnDataFlag,
EVMInterpreterFlag,
}
app.Commands = []cli.Command{
compileCommand,


@ -1,23 +0,0 @@
{
"root": "f4157bb27bcb1d1a63001434a249a80948f2e9fe1f53d551244c1dae826b5b23",
"accounts": {
"0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": {
"balance": "4276951709",
"nonce": 1,
"root": "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"codeHash": "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "6916764286133345652",
"nonce": 172,
"root": "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"codeHash": "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
},
"0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "42500",
"nonce": 0,
"root": "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"codeHash": "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
}
}
}


@ -211,9 +211,8 @@ func runCmd(ctx *cli.Context) error {
Coinbase: genesisConfig.Coinbase,
BlockNumber: new(big.Int).SetUint64(genesisConfig.Number),
EVMConfig: vm.Config{
Tracer: tracer,
Debug: ctx.GlobalBool(DebugFlag.Name) || ctx.GlobalBool(MachineFlag.Name),
EVMInterpreter: ctx.GlobalString(EVMInterpreterFlag.Name),
Tracer: tracer,
Debug: ctx.GlobalBool(DebugFlag.Name) || ctx.GlobalBool(MachineFlag.Name),
},
}
@ -270,7 +269,7 @@ func runCmd(ctx *cli.Context) error {
if ctx.GlobalBool(DumpFlag.Name) {
statedb.Commit(true)
statedb.IntermediateRoot(true)
fmt.Println(string(statedb.Dump(false, false, true)))
fmt.Println(string(statedb.Dump(nil)))
}
if memProfilePath := ctx.GlobalString(MemProfileFlag.Name); memProfilePath != "" {


@ -98,16 +98,16 @@ func stateTestCmd(ctx *cli.Context) error {
for _, st := range test.Subtests() {
// Run the test and aggregate the result
result := &StatetestResult{Name: key, Fork: st.Fork, Pass: true}
_, state, err := test.Run(st, cfg, false)
_, s, err := test.Run(st, cfg, false)
// print state root for evmlab tracing
if ctx.GlobalBool(MachineFlag.Name) && state != nil {
fmt.Fprintf(os.Stderr, "{\"stateRoot\": \"%x\"}\n", state.IntermediateRoot(false))
if ctx.GlobalBool(MachineFlag.Name) && s != nil {
fmt.Fprintf(os.Stderr, "{\"stateRoot\": \"%x\"}\n", s.IntermediateRoot(false))
}
if err != nil {
// Test failed, mark as so and dump any state to aid debugging
result.Pass, result.Error = false, err.Error()
if ctx.GlobalBool(DumpFlag.Name) && state != nil {
dump := state.RawDump(false, false, true)
if ctx.GlobalBool(DumpFlag.Name) && s != nil {
dump := s.RawDump(nil)
result.State = &dump
}
}

cmd/evm/testdata/10/alloc.json (new file, 23 lines)

@ -0,0 +1,23 @@
{
"0x1111111111111111111111111111111111111111" : {
"balance" : "0x010000000000",
"code" : "0xfe",
"nonce" : "0x01",
"storage" : {
}
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b" : {
"balance" : "0x010000000000",
"code" : "0x",
"nonce" : "0x01",
"storage" : {
}
},
"0xd02d72e067e77158444ef2020ff2d325f929b363" : {
"balance" : "0x01000000000000",
"code" : "0x",
"nonce" : "0x01",
"storage" : {
}
}
}

cmd/evm/testdata/10/env.json (new file, 12 lines)

@ -0,0 +1,12 @@
{
"currentCoinbase" : "0x2adc25665018aa1fe0e6bc666dac8fc2697ff9ba",
"currentDifficulty" : "0x020000",
"currentNumber" : "0x01",
"currentTimestamp" : "0x079e",
"previousHash" : "0xcb23ee65a163121f640673b41788ee94633941405f95009999b502eedfbbfd4f",
"currentGasLimit" : "0x40000000",
"currentBaseFee" : "0x036b",
"blockHashes" : {
"0" : "0xcb23ee65a163121f640673b41788ee94633941405f95009999b502eedfbbfd4f"
}
}

cmd/evm/testdata/10/readme.md (new file, 79 lines)

@ -0,0 +1,79 @@
## EIP-1559 testing
This test contains testcases for EIP-1559, which were reported by Ori as misbehaving.
```
[user@work evm]$ dir=./testdata/10 && ./evm t8n --state.fork=London --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout --output.result=stdout 2>&1
INFO [05-09|22:11:59.436] rejected tx index=3 hash=db07bf..ede1e8 from=0xd02d72E067e77158444ef2020Ff2d325f929B363 error="gas limit reached"
```
Output:
```json
{
"alloc": {
"0x1111111111111111111111111111111111111111": {
"code": "0xfe",
"balance": "0x10000000000",
"nonce": "0x1"
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0x10000000000",
"nonce": "0x1"
},
"0xd02d72e067e77158444ef2020ff2d325f929b363": {
"balance": "0xff5beffffc95",
"nonce": "0x4"
}
},
"result": {
"stateRoot": "0xf91a7ec08e4bfea88719aab34deabb000c86902360532b52afa9599d41f2bb8b",
"txRoot": "0xda925f2306a52fa24c15d5cd212d736ee016415fd8dd0c45fd368de7917d64bb",
"receiptRoot": "0x439a25f7fc424c10fb1f89800e4aa1df74156b137239d9ac3eaa7c911c353cd5",
"logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"receipts": [
{
"type": "0x2",
"root": "0x",
"status": "0x0",
"cumulativeGasUsed": "0x10000001",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"transactionHash": "0x88980f6efcc5358d9c359663e7b9414722d430497637340ea056b076bc206701",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x10000001",
"blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"transactionIndex": "0x0"
},
{
"type": "0x2",
"root": "0x",
"status": "0x0",
"cumulativeGasUsed": "0x20000001",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"transactionHash": "0xd7bf3886f4e2aef74d525ae072c680f3846f550254401b67cbfda4a233757582",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x10000000",
"blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"transactionIndex": "0x1"
},
{
"type": "0x2",
"root": "0x",
"status": "0x0",
"cumulativeGasUsed": "0x30000001",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"transactionHash": "0x50308296760f01f1eeec7500e9e73cad67469249b1f59e9a9f55e6625a4923db",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x10000000",
"blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"transactionIndex": "0x2"
}
],
"rejected": [
3
]
}
}
```

cmd/evm/testdata/10/txs.json (new file, 70 lines)

@ -0,0 +1,70 @@
[
{
"input" : "0x",
"gas" : "0x10000001",
"nonce" : "0x1",
"to" : "0x1111111111111111111111111111111111111111",
"value" : "0x0",
"v" : "0x0",
"r" : "0x7a45f00bcde9036b026cdf1628b023cd8a31a95c62b5e4dbbee2fa7debe668fb",
"s" : "0x3cc9d6f2cd00a045b0263f2d6dad7d60938d5d13d061af4969f95928aa934d4a",
"secretKey" : "0x41f6e321b31e72173f8ff2e292359e1862f24fba42fe6f97efaf641980eff298",
"chainId" : "0x1",
"type" : "0x2",
"maxFeePerGas" : "0xfa0",
"maxPriorityFeePerGas" : "0x0",
"accessList" : [
]
},
{
"input" : "0x",
"gas" : "0x10000000",
"nonce" : "0x2",
"to" : "0x1111111111111111111111111111111111111111",
"value" : "0x0",
"v" : "0x0",
"r" : "0x4c564b94b0281a8210eeec2dd1fe2e16ff1c1903a8c3a1078d735d7f8208b2af",
"s" : "0x56432b2593e6de95db1cb997b7385217aca03f1615327e231734446b39f266d",
"secretKey" : "0x41f6e321b31e72173f8ff2e292359e1862f24fba42fe6f97efaf641980eff298",
"chainId" : "0x1",
"type" : "0x2",
"maxFeePerGas" : "0xfa0",
"maxPriorityFeePerGas" : "0x0",
"accessList" : [
]
},
{
"input" : "0x",
"gas" : "0x10000000",
"nonce" : "0x3",
"to" : "0x1111111111111111111111111111111111111111",
"value" : "0x0",
"v" : "0x0",
"r" : "0x2ed2ef52f924f59d4a21e1f2a50d3b1109303ce5e32334a7ece9b46f4fbc2a57",
"s" : "0x2980257129cbd3da987226f323d50ba3975a834d165e0681f991b75615605c44",
"secretKey" : "0x41f6e321b31e72173f8ff2e292359e1862f24fba42fe6f97efaf641980eff298",
"chainId" : "0x1",
"type" : "0x2",
"maxFeePerGas" : "0xfa0",
"maxPriorityFeePerGas" : "0x0",
"accessList" : [
]
},
{
"input" : "0x",
"gas" : "0x10000000",
"nonce" : "0x4",
"to" : "0x1111111111111111111111111111111111111111",
"value" : "0x0",
"v" : "0x0",
"r" : "0x5df7d7f8f8e15b36fc9f189cacb625040fad10398d08fc90812595922a2c49b2",
"s" : "0x565fc1803f77a84d754ffe3c5363ab54a8d93a06ea1bb9d4c73c73a282b35917",
"secretKey" : "0x41f6e321b31e72173f8ff2e292359e1862f24fba42fe6f97efaf641980eff298",
"chainId" : "0x1",
"type" : "0x2",
"maxFeePerGas" : "0xfa0",
"maxPriorityFeePerGas" : "0x0",
"accessList" : [
]
}
]

cmd/evm/testdata/11/alloc.json (new file, 25 lines)

@ -0,0 +1,25 @@
{
"0x0f572e5295c57f15886f9b263e2f6d2d6c7b5ec6" : {
"balance" : "0x0de0b6b3a7640000",
"code" : "0x61ffff5060046000f3",
"nonce" : "0x01",
"storage" : {
}
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b" : {
"balance" : "0x0de0b6b3a7640000",
"code" : "0x",
"nonce" : "0x00",
"storage" : {
"0x00" : "0x00"
}
},
"0xb94f5374fce5edbc8e2a8697c15331677e6ebf0b" : {
"balance" : "0x00",
"code" : "0x6001600055",
"nonce" : "0x00",
"storage" : {
}
}
}

cmd/evm/testdata/11/env.json (new file, 12 lines)

@ -0,0 +1,12 @@
{
"currentCoinbase" : "0x2adc25665018aa1fe0e6bc666dac8fc2697ff9ba",
"currentDifficulty" : "0x020000",
"currentNumber" : "0x01",
"currentTimestamp" : "0x03e8",
"previousHash" : "0xfda4419b3660e99f37e536dae1ab081c180136bb38c837a93e93d9aab58553b2",
"currentGasLimit" : "0x0f4240",
"blockHashes" : {
"0" : "0xfda4419b3660e99f37e536dae1ab081c180136bb38c837a93e93d9aab58553b2"
}
}

cmd/evm/testdata/11/readme.md (new file, 13 lines)

@ -0,0 +1,13 @@
## Test missing basefee
In this test, the `currentBaseFee` is missing from the env portion.
On a live blockchain, the basefee is present in the header, and verified as part of header validation.
In `evm t8n`, we don't have blocks, so it needs to be added in the `env` instead (a minimal sketch of such an entry follows the command below).
When it's missing, an error is expected.
```
dir=./testdata/11 && ./evm t8n --state.fork=London --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout --output.result=stdout 2>&1>/dev/null
ERROR(3): EIP-1559 config but missing 'currentBaseFee' in env section
```
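A minimal sketch of the fix (not part of this testcase): adding a `currentBaseFee` entry to the env, in the same hex form as the other testdata envs in this changeset, satisfies the London sanity check (the remaining fields stay as in the env.json above):
```json
{
  "currentCoinbase" : "0x2adc25665018aa1fe0e6bc666dac8fc2697ff9ba",
  "currentDifficulty" : "0x020000",
  "currentNumber" : "0x01",
  "currentTimestamp" : "0x03e8",
  "currentGasLimit" : "0x0f4240",
  "currentBaseFee" : "0x3b9aca00"
}
```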

cmd/evm/testdata/11/txs.json (new file, 14 lines)

@ -0,0 +1,14 @@
[
{
"input" : "0x38600060013960015160005560006000f3",
"gas" : "0x61a80",
"gasPrice" : "0x1",
"nonce" : "0x0",
"value" : "0x186a0",
"v" : "0x1c",
"r" : "0x2e1391fd903387f1cc2b51df083805fb4bbb0d4710a2cdf4a044d191ff7be63e",
"s" : "0x7f10a933c42ab74927db02b1db009e923d9d2ab24ac24d63c399f2fe5d9c9b22",
"secretKey" : "0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8"
}
]


@ -56,8 +56,8 @@ dir=./testdata/8 \
If we try to execute it on older rules:
```
dir=./testdata/8 && ./evm t8n --state.fork=Istanbul --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json
INFO [01-21|23:21:51.265] rejected tx index=0 hash="d2818d6ab3da" error="tx type not supported"
INFO [01-21|23:21:51.265] rejected tx index=1 hash="26ea0081c01b" from=0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B error="nonce too high: address 0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B, tx: 1 state: 0"
INFO [01-21|23:21:51.265] rejected tx index=2 hash="698d01369cee" error="tx type not supported"
INFO [01-21|23:21:51.265] rejected tx index=0 hash=d2818d..6ab3da error="tx type not supported"
INFO [01-21|23:21:51.265] rejected tx index=1 hash=26ea00..81c01b from=0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B error="nonce too high: address 0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B, tx: 1 state: 0"
INFO [01-21|23:21:51.265] rejected tx index=2 hash=698d01..369cee error="tx type not supported"
```
Numbers `1` and `3` are not applicable, and therefore number `2` has the wrong nonce.

cmd/evm/testdata/9/alloc.json (new file, 11 lines)

@ -0,0 +1,11 @@
{
"0x000000000000000000000000000000000000aaaa": {
"balance": "0x03",
"code": "0x58585454",
"nonce": "0x1"
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0x100000000000000",
"nonce": "0x00"
}
}

cmd/evm/testdata/9/env.json (new file, 8 lines)

@ -0,0 +1,8 @@
{
"currentCoinbase": "0x2adc25665018aa1fe0e6bc666dac8fc2697ff9ba",
"currentDifficulty": "0x20000",
"currentGasTarget": "0x1000000000",
"currentBaseFee": "0x3B9ACA00",
"currentNumber": "0x1000000",
"currentTimestamp": "0x04"
}

cmd/evm/testdata/9/readme.md (new file, 75 lines)

@ -0,0 +1,75 @@
## EIP-1559 testing
This test contains testcases for EIP-1559, which uses a new transaction type and has a new block parameter.
### Prestate
The alloc portion contains one contract (`0x000000000000000000000000000000000000aaaa`), containing the
following code: `0x58585454`: `PC; PC; SLOAD; SLOAD`.
Essentially, this contract does `SLOAD(0)` and `SLOAD(1)`.
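Read opcode by opcode, the bytecode works out roughly as follows (slot values assume the empty storage in the alloc above):
```
0x58  PC      pushes 0 (the program counter at offset 0)
0x58  PC      pushes 1
0x54  SLOAD   pops 1, loads storage slot 1 (empty, so 0)
0x54  SLOAD   pops that 0, loads storage slot 0
```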
The alloc also contains some funds on `0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b`.
## Transactions
There are two transactions, each invoking the contract above.
1. EIP-1559 ACL-transaction, which contains the `0x0` slot for `0xaaaa`
2. Legacy transaction
## Execution
Running it yields:
```
$ dir=./testdata/9 && ./evm t8n --state.fork=London --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --trace && cat trace-* | grep SLOAD
{"pc":2,"op":84,"gas":"0x48c28","gasCost":"0x834","memory":"0x","memSize":0,"stack":["0x0","0x1"],"returnStack":null,"returnD
ata":"0x","depth":1,"refund":0,"opName":"SLOAD","error":""}
{"pc":3,"op":84,"gas":"0x483f4","gasCost":"0x64","memory":"0x","memSize":0,"stack":["0x0","0x0"],"returnStack":null,"returnDa
ta":"0x","depth":1,"refund":0,"opName":"SLOAD","error":""}
{"pc":2,"op":84,"gas":"0x49cf4","gasCost":"0x834","memory":"0x","memSize":0,"stack":["0x0","0x1"],"returnStack":null,"returnD
ata":"0x","depth":1,"refund":0,"opName":"SLOAD","error":""}
{"pc":3,"op":84,"gas":"0x494c0","gasCost":"0x834","memory":"0x","memSize":0,"stack":["0x0","0x0"],"returnStack":null,"returnD
ata":"0x","depth":1,"refund":0,"opName":"SLOAD","error":""}
```
We can also get the post-alloc:
```
$ dir=./testdata/9 && ./evm t8n --state.fork=London --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout
{
"alloc": {
"0x000000000000000000000000000000000000aaaa": {
"code": "0x58585454",
"balance": "0x3",
"nonce": "0x1"
},
"0x2adc25665018aa1fe0e6bc666dac8fc2697ff9ba": {
"balance": "0xbfc02677a000"
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0xff104fcfea7800",
"nonce": "0x2"
}
}
}
```
If we try to execute it on older rules:
```
dir=./testdata/9 && ./evm t8n --state.fork=Berlin --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout
ERROR(10): Failed signing transactions: ERROR(10): Tx 0: failed to sign tx: transaction type not supported
```
It fails because `evm t8n` cannot sign them with the given signer. We can bypass that, however,
by feeding it presigned transactions, located in `txs_signed.json`.
```
dir=./testdata/9 && ./evm t8n --state.fork=Berlin --input.alloc=$dir/alloc.json --input.txs=$dir/txs_signed.json --input.env=$dir/env.json
INFO [05-07|12:28:42.072] rejected tx index=0 hash=b4821e..536819 error="transaction type not supported"
INFO [05-07|12:28:42.072] rejected tx index=1 hash=a9c6c6..fa4036 from=0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B error="nonce too high: address 0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B, tx: 1 state: 0"
INFO [05-07|12:28:42.073] Wrote file file=alloc.json
INFO [05-07|12:28:42.073] Wrote file file=result.json
```
Number `0` is not applicable, and therefore number `1` has the wrong nonce, so both are rejected.

cmd/evm/testdata/9/txs.json (new file, 37 lines)

@ -0,0 +1,37 @@
[
{
"gas": "0x4ef00",
"maxPriorityFeePerGas": "0x2",
"maxFeePerGas": "0x12A05F200",
"chainId": "0x1",
"input": "0x",
"nonce": "0x0",
"to": "0x000000000000000000000000000000000000aaaa",
"value": "0x0",
"type" : "0x2",
"accessList": [
{"address": "0x000000000000000000000000000000000000aaaa",
"storageKeys": [
"0x0000000000000000000000000000000000000000000000000000000000000000"
]
}
],
"v": "0x0",
"r": "0x0",
"s": "0x0",
"secretKey": "0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8"
},
{
"gas": "0x4ef00",
"gasPrice": "0x12A05F200",
"chainId": "0x1",
"input": "0x",
"nonce": "0x1",
"to": "0x000000000000000000000000000000000000aaaa",
"value": "0x0",
"v": "0x0",
"r": "0x0",
"s": "0x0",
"secretKey": "0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8"
}
]


@ -85,6 +85,9 @@ var (
twitterTokenFlag = flag.String("twitter.token", "", "Bearer token to authenticate with the v2 Twitter API")
twitterTokenV1Flag = flag.String("twitter.token.v1", "", "Bearer token to authenticate with the v1.1 Twitter API")
goerliFlag = flag.Bool("goerli", false, "Initializes the faucet with Görli network config")
rinkebyFlag = flag.Bool("rinkeby", false, "Initializes the faucet with Rinkeby network config")
)
var (
@ -144,13 +147,9 @@ func main() {
log.Crit("Failed to render the faucet template", "err", err)
}
// Load and parse the genesis block requested by the user
blob, err := ioutil.ReadFile(*genesisFlag)
genesis, err := getGenesis(genesisFlag, *goerliFlag, *rinkebyFlag)
if err != nil {
log.Crit("Failed to read genesis block contents", "genesis", *genesisFlag, "err", err)
}
genesis := new(core.Genesis)
if err = json.Unmarshal(blob, genesis); err != nil {
log.Crit("Failed to parse genesis block json", "err", err)
log.Crit("Failed to parse genesis config", "err", err)
}
// Convert the bootnodes to internal enode representations
var enodes []*enode.Node
@ -162,7 +161,8 @@ func main() {
}
}
// Load up the account key and decrypt its password
if blob, err = ioutil.ReadFile(*accPassFlag); err != nil {
blob, err := ioutil.ReadFile(*accPassFlag)
if err != nil {
log.Crit("Failed to read account password contents", "file", *accPassFlag, "err", err)
}
pass := strings.TrimSuffix(string(blob), "\n")
@ -884,3 +884,19 @@ func authNoAuth(url string) (string, string, common.Address, error) {
}
return address.Hex() + "@noauth", "", address, nil
}
// getGenesis returns a genesis based on input args
func getGenesis(genesisFlag *string, goerliFlag bool, rinkebyFlag bool) (*core.Genesis, error) {
switch {
case genesisFlag != nil:
var genesis core.Genesis
err := common.LoadJSON(*genesisFlag, &genesis)
return &genesis, err
case goerliFlag:
return core.DefaultGoerliGenesisBlock(), nil
case rinkebyFlag:
return core.DefaultRinkebyGenesisBlock(), nil
default:
return nil, fmt.Errorf("no genesis flag provided")
}
}


@ -23,6 +23,8 @@ import (
)
func TestFacebook(t *testing.T) {
// TODO: Remove facebook auth or implement facebook api, which seems to require an API key
t.Skipf("The facebook access is flaky, needs to be reimplemented or removed")
for _, tt := range []struct {
url string
want common.Address


@ -18,6 +18,7 @@ package main
import (
"encoding/json"
"errors"
"fmt"
"os"
"runtime"
@ -27,12 +28,16 @@ import (
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/node"
"gopkg.in/urfave/cli.v1"
)
@ -63,7 +68,7 @@ It expects the genesis file as argument.`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
},
Category: "BLOCKCHAIN COMMANDS",
Description: `
@ -152,20 +157,21 @@ The export-preimages command export hash preimages to an RLP encoded stream`,
Action: utils.MigrateFlags(dump),
Name: "dump",
Usage: "Dump a specific block from storage",
ArgsUsage: "[<blockHash> | <blockNum>]...",
ArgsUsage: "[? <blockHash> | <blockNum>]",
Flags: []cli.Flag{
utils.DataDirFlag,
utils.CacheFlag,
utils.SyncModeFlag,
utils.IterativeOutputFlag,
utils.ExcludeCodeFlag,
utils.ExcludeStorageFlag,
utils.IncludeIncompletesFlag,
utils.StartKeyFlag,
utils.DumpLimitFlag,
},
Category: "BLOCKCHAIN COMMANDS",
Description: `
The arguments are interpreted as block numbers or hashes.
Use "ethereum dump 0" to dump the genesis block.`,
This command dumps out the state for a given block (or latest, if none provided).
`,
}
)
@ -373,47 +379,85 @@ func exportPreimages(ctx *cli.Context) error {
return nil
}
func parseDumpConfig(ctx *cli.Context, stack *node.Node) (*state.DumpConfig, ethdb.Database, common.Hash, error) {
db := utils.MakeChainDatabase(ctx, stack, true)
var header *types.Header
if ctx.NArg() > 1 {
return nil, nil, common.Hash{}, fmt.Errorf("expected 1 argument (number or hash), got %d", ctx.NArg())
}
if ctx.NArg() == 1 {
arg := ctx.Args().First()
if hashish(arg) {
hash := common.HexToHash(arg)
if number := rawdb.ReadHeaderNumber(db, hash); number != nil {
header = rawdb.ReadHeader(db, hash, *number)
} else {
return nil, nil, common.Hash{}, fmt.Errorf("block %x not found", hash)
}
} else {
number, err := strconv.Atoi(arg)
if err != nil {
return nil, nil, common.Hash{}, err
}
if hash := rawdb.ReadCanonicalHash(db, uint64(number)); hash != (common.Hash{}) {
header = rawdb.ReadHeader(db, hash, uint64(number))
} else {
return nil, nil, common.Hash{}, fmt.Errorf("header for block %d not found", number)
}
}
} else {
// Use latest
header = rawdb.ReadHeadHeader(db)
}
if header == nil {
return nil, nil, common.Hash{}, errors.New("no head block found")
}
startArg := common.FromHex(ctx.String(utils.StartKeyFlag.Name))
var start common.Hash
switch len(startArg) {
case 0: // common.Hash
case 32:
start = common.BytesToHash(startArg)
case 20:
start = crypto.Keccak256Hash(startArg)
log.Info("Converting start-address to hash", "address", common.BytesToAddress(startArg), "hash", start.Hex())
default:
return nil, nil, common.Hash{}, fmt.Errorf("invalid start argument: %x. 20 or 32 hex-encoded bytes required", startArg)
}
var conf = &state.DumpConfig{
SkipCode: ctx.Bool(utils.ExcludeCodeFlag.Name),
SkipStorage: ctx.Bool(utils.ExcludeStorageFlag.Name),
OnlyWithAddresses: !ctx.Bool(utils.IncludeIncompletesFlag.Name),
Start: start.Bytes(),
Max: ctx.Uint64(utils.DumpLimitFlag.Name),
}
log.Info("State dump configured", "block", header.Number, "hash", header.Hash().Hex(),
"skipcode", conf.SkipCode, "skipstorage", conf.SkipStorage,
"start", hexutil.Encode(conf.Start), "limit", conf.Max)
return conf, db, header.Root, nil
}
func dump(ctx *cli.Context) error {
stack, _ := makeConfigNode(ctx)
defer stack.Close()
db := utils.MakeChainDatabase(ctx, stack, true)
for _, arg := range ctx.Args() {
var header *types.Header
if hashish(arg) {
hash := common.HexToHash(arg)
number := rawdb.ReadHeaderNumber(db, hash)
if number != nil {
header = rawdb.ReadHeader(db, hash, *number)
}
} else {
number, _ := strconv.Atoi(arg)
hash := rawdb.ReadCanonicalHash(db, uint64(number))
if hash != (common.Hash{}) {
header = rawdb.ReadHeader(db, hash, uint64(number))
}
}
if header == nil {
fmt.Println("{}")
utils.Fatalf("block not found")
} else {
state, err := state.New(header.Root, state.NewDatabase(db), nil)
if err != nil {
utils.Fatalf("could not create new state: %v", err)
}
excludeCode := ctx.Bool(utils.ExcludeCodeFlag.Name)
excludeStorage := ctx.Bool(utils.ExcludeStorageFlag.Name)
includeMissing := ctx.Bool(utils.IncludeIncompletesFlag.Name)
if ctx.Bool(utils.IterativeOutputFlag.Name) {
state.IterativeDump(excludeCode, excludeStorage, !includeMissing, json.NewEncoder(os.Stdout))
} else {
if includeMissing {
fmt.Printf("If you want to include accounts with missing preimages, you need iterative output, since" +
" otherwise the accounts will overwrite each other in the resulting mapping.")
}
fmt.Printf("%v %s\n", includeMissing, state.Dump(excludeCode, excludeStorage, false))
}
conf, db, root, err := parseDumpConfig(ctx, stack)
if err != nil {
return err
}
state, err := state.New(root, state.NewDatabase(db), nil)
if err != nil {
return err
}
if ctx.Bool(utils.IterativeOutputFlag.Name) {
state.IterativeDump(conf, json.NewEncoder(os.Stdout))
} else {
if conf.OnlyWithAddresses {
fmt.Fprintf(os.Stderr, "If you want to include accounts with missing preimages, you need iterative output, since"+
" otherwise the accounts will overwrite each other in the resulting mapping.")
return fmt.Errorf("incompatible options")
}
fmt.Println(string(state.Dump(conf)))
}
return nil
}


@ -28,8 +28,10 @@ import (
"gopkg.in/urfave/cli.v1"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/eth/catalyst"
"github.com/ethereum/go-ethereum/eth/ethconfig"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
@ -62,7 +64,12 @@ var tomlSettings = toml.Config{
return field
},
MissingField: func(rt reflect.Type, field string) error {
link := ""
id := fmt.Sprintf("%s.%s", rt.String(), field)
if deprecated(id) {
log.Warn("Config field is deprecated and won't have an effect", "name", id)
return nil
}
var link string
if unicode.IsUpper(rune(rt.Name()[0])) && rt.PkgPath() != "main" {
link = fmt.Sprintf(", see https://godoc.org/%s#%s for available fields", rt.PkgPath(), rt.Name())
}
@ -140,10 +147,20 @@ func makeConfigNode(ctx *cli.Context) (*node.Node, gethConfig) {
// makeFullNode loads geth configuration and creates the Ethereum backend.
func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) {
stack, cfg := makeConfigNode(ctx)
if ctx.GlobalIsSet(utils.OverrideBerlinFlag.Name) {
cfg.Eth.OverrideBerlin = new(big.Int).SetUint64(ctx.GlobalUint64(utils.OverrideBerlinFlag.Name))
if ctx.GlobalIsSet(utils.OverrideLondonFlag.Name) {
cfg.Eth.OverrideLondon = new(big.Int).SetUint64(ctx.GlobalUint64(utils.OverrideLondonFlag.Name))
}
backend, eth := utils.RegisterEthService(stack, &cfg.Eth)
// Configure catalyst.
if ctx.GlobalBool(utils.CatalystFlag.Name) {
if eth == nil {
utils.Fatalf("Catalyst does not work in light client mode.")
}
if err := catalyst.Register(stack, eth); err != nil {
utils.Fatalf("%v", err)
}
}
backend := utils.RegisterEthService(stack, &cfg.Eth)
// Configure GraphQL if requested
if ctx.GlobalIsSet(utils.GraphQLEnabledFlag.Name) {
@ -217,3 +234,14 @@ func applyMetricConfig(ctx *cli.Context, cfg *gethConfig) {
cfg.Metrics.InfluxDBTags = ctx.GlobalString(utils.MetricsInfluxDBTagsFlag.Name)
}
}
func deprecated(field string) bool {
switch field {
case "ethconfig.Config.EVMInterpreter":
return true
case "ethconfig.Config.EWASMInterpreter":
return true
default:
return false
}
}


@ -134,8 +134,8 @@ func remoteConsole(ctx *cli.Context) error {
path = filepath.Join(path, "rinkeby")
} else if ctx.GlobalBool(utils.GoerliFlag.Name) {
path = filepath.Join(path, "goerli")
} else if ctx.GlobalBool(utils.YoloV3Flag.Name) {
path = filepath.Join(path, "yolo-v3")
} else if ctx.GlobalBool(utils.CalaverasFlag.Name) {
path = filepath.Join(path, "calaveras")
}
}
endpoint = fmt.Sprintf("%s/geth.ipc", path)
@ -171,7 +171,7 @@ func remoteConsole(ctx *cli.Context) error {
// dialRPC returns a RPC client which connects to the given endpoint.
// The check for empty endpoint implements the defaulting logic
// for "geth attach" and "geth monitor" with no argument.
// for "geth attach" with no argument.
func dialRPC(endpoint string) (*rpc.Client, error) {
if endpoint == "" {
endpoint = node.DefaultIPCEndpoint(clientIdentifier)


@ -20,6 +20,7 @@ import (
"fmt"
"os"
"path/filepath"
"sort"
"strconv"
"time"
@ -60,6 +61,7 @@ Remove blockchain and state databases`,
dbDeleteCmd,
dbPutCmd,
dbGetSlotsCmd,
dbDumpFreezerIndex,
},
}
dbInspectCmd = cli.Command{
@ -73,7 +75,7 @@ Remove blockchain and state databases`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
},
Usage: "Inspect the storage size for each type of data in the database",
Description: `This commands iterates the entire database. If the optional 'prefix' and 'start' arguments are provided, then the iteration is limited to the given subset of data.`,
@ -89,7 +91,7 @@ Remove blockchain and state databases`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
},
}
dbCompactCmd = cli.Command{
@ -103,7 +105,7 @@ Remove blockchain and state databases`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
utils.CacheFlag,
utils.CacheDatabaseFlag,
},
@ -123,7 +125,7 @@ corruption if it is aborted during execution'!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
},
Description: "This command looks up the specified database key from the database.",
}
@ -139,7 +141,7 @@ corruption if it is aborted during execution'!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
},
Description: `This command deletes the specified database key from the database.
WARNING: This is a low-level operation which may cause database corruption!`,
@ -156,7 +158,7 @@ WARNING: This is a low-level operation which may cause database corruption!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
},
Description: `This command sets a given database key to the given value.
WARNING: This is a low-level operation which may cause database corruption!`,
@ -173,10 +175,26 @@ WARNING: This is a low-level operation which may cause database corruption!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
},
Description: "This command looks up the specified database key from the database.",
}
dbDumpFreezerIndex = cli.Command{
Action: utils.MigrateFlags(freezerInspect),
Name: "freezer-index",
Usage: "Dump out the index of a given freezer type",
ArgsUsage: "<type> <start (int)> <end (int)>",
Flags: []cli.Flag{
utils.DataDirFlag,
utils.SyncModeFlag,
utils.MainnetFlag,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Description: "This command displays information about the freezer index.",
}
)
func removeDB(ctx *cli.Context) error {
@ -449,3 +467,43 @@ func dbDumpTrie(ctx *cli.Context) error {
}
return it.Err
}
func freezerInspect(ctx *cli.Context) error {
var (
start, end int64
disableSnappy bool
err error
)
if ctx.NArg() < 3 {
return fmt.Errorf("required arguments: %v", ctx.Command.ArgsUsage)
}
kind := ctx.Args().Get(0)
if noSnap, ok := rawdb.FreezerNoSnappy[kind]; !ok {
var options []string
for opt := range rawdb.FreezerNoSnappy {
options = append(options, opt)
}
sort.Strings(options)
return fmt.Errorf("Could read freezer-type '%v'. Available options: %v", kind, options)
} else {
disableSnappy = noSnap
}
if start, err = strconv.ParseInt(ctx.Args().Get(1), 10, 64); err != nil {
log.Info("Could read start-param", "error", err)
return err
}
if end, err = strconv.ParseInt(ctx.Args().Get(2), 10, 64); err != nil {
log.Info("Could read count param", "error", err)
return err
}
stack, _ := makeConfigNode(ctx)
defer stack.Close()
path := filepath.Join(stack.ResolvePath("chaindata"), "ancient")
log.Info("Opening freezer", "location", path, "name", kind)
if f, err := rawdb.NewFreezerTable(path, kind, disableSnappy); err != nil {
return err
} else {
f.DumpIndex(start, end)
}
return nil
}


@ -84,7 +84,7 @@ func TestCustomGenesis(t *testing.T) {
runGeth(t, "--datadir", datadir, "init", json).WaitExit()
// Query the custom genesis block
geth := runGeth(t, "--networkid", "1337", "--syncmode=full",
geth := runGeth(t, "--networkid", "1337", "--syncmode=full", "--cache", "16",
"--datadir", datadir, "--maxpeers", "0", "--port", "0",
"--nodiscover", "--nat", "none", "--ipcdisable",
"--exec", tt.query, "console")


@ -137,14 +137,18 @@ func startGethWithIpc(t *testing.T, name string, args ...string) *gethrpc {
name: name,
geth: runGeth(t, args...),
}
// wait before we can attach to it. TODO: probe for it properly
time.Sleep(1 * time.Second)
var err error
ipcpath := ipcEndpoint(ipcName, g.geth.Datadir)
if g.rpc, err = rpc.Dial(ipcpath); err != nil {
t.Fatalf("%v rpc connect to %v: %v", name, ipcpath, err)
// We can't know exactly how long geth will take to start, so we try 10
// times over a 5 second period.
var err error
for i := 0; i < 10; i++ {
time.Sleep(500 * time.Millisecond)
if g.rpc, err = rpc.Dial(ipcpath); err == nil {
return g
}
}
return g
t.Fatalf("%v rpc connect to %v: %v", name, ipcpath, err)
return nil
}
func initGeth(t *testing.T) string {


@ -66,7 +66,7 @@ var (
utils.NoUSBFlag,
utils.USBFlag,
utils.SmartCardDaemonPathFlag,
utils.OverrideBerlinFlag,
utils.OverrideLondonFlag,
utils.EthashCacheDirFlag,
utils.EthashCachesInMemoryFlag,
utils.EthashCachesOnDiskFlag,
@ -138,7 +138,7 @@ var (
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
utils.VMEnableDebugFlag,
utils.NetworkIdFlag,
utils.EthStatsURLFlag,
@ -147,10 +147,10 @@ var (
utils.GpoBlocksFlag,
utils.GpoPercentileFlag,
utils.GpoMaxGasPriceFlag,
utils.EWASMInterpreterFlag,
utils.EVMInterpreterFlag,
utils.GpoIgnoreGasPriceFlag,
utils.MinerNotifyFullFlag,
configFileFlag,
utils.CatalystFlag,
}
rpcFlags = []cli.Flag{
@ -274,8 +274,8 @@ func prepare(ctx *cli.Context) {
case ctx.GlobalIsSet(utils.GoerliFlag.Name):
log.Info("Starting Geth on Görli testnet...")
case ctx.GlobalIsSet(utils.YoloV3Flag.Name):
log.Info("Starting Geth on YOLOv3 testnet...")
case ctx.GlobalIsSet(utils.CalaverasFlag.Name):
log.Info("Starting Geth on Calaveras testnet...")
case ctx.GlobalIsSet(utils.DeveloperFlag.Name):
log.Info("Starting Geth in ephemeral dev mode...")


@ -18,7 +18,9 @@ package main
import (
"bytes"
"encoding/json"
"errors"
"os"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
@ -142,6 +144,31 @@ verification. The default checking target is the HEAD state. It's basically iden
to traverse-state, but the check granularity is smaller.
It's also usable without snapshot enabled.
`,
},
{
Name: "dump",
Usage: "Dump a specific block from storage (same as 'geth dump' but using snapshots)",
ArgsUsage: "[? <blockHash> | <blockNum>]",
Action: utils.MigrateFlags(dumpState),
Category: "MISCELLANEOUS COMMANDS",
Flags: []cli.Flag{
utils.DataDirFlag,
utils.AncientFlag,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.ExcludeCodeFlag,
utils.ExcludeStorageFlag,
utils.StartKeyFlag,
utils.DumpLimitFlag,
},
Description: `
This command is semantically equivalent to 'geth dump', but uses the snapshots
as the backend data source, making this command a lot faster.
The argument is interpreted as block number or hash. If none is provided, the latest
block is used.
`,
},
},
@ -155,7 +182,7 @@ func pruneState(ctx *cli.Context) error {
chaindb := utils.MakeChainDatabase(ctx, stack, false)
pruner, err := pruner.NewPruner(chaindb, stack.ResolvePath(""), stack.ResolvePath(config.Eth.TrieCleanCacheJournal), ctx.GlobalUint64(utils.BloomFilterSizeFlag.Name))
if err != nil {
log.Error("Failed to open snapshot tree", "error", err)
log.Error("Failed to open snapshot tree", "err", err)
return err
}
if ctx.NArg() > 1 {
@ -166,12 +193,12 @@ func pruneState(ctx *cli.Context) error {
if ctx.NArg() == 1 {
targetRoot, err = parseRoot(ctx.Args()[0])
if err != nil {
log.Error("Failed to resolve state root", "error", err)
log.Error("Failed to resolve state root", "err", err)
return err
}
}
if err = pruner.Prune(targetRoot); err != nil {
log.Error("Failed to prune state", "error", err)
log.Error("Failed to prune state", "err", err)
return err
}
return nil
@ -189,7 +216,7 @@ func verifyState(ctx *cli.Context) error {
}
snaptree, err := snapshot.New(chaindb, trie.NewDatabase(chaindb), 256, headBlock.Root(), false, false, false)
if err != nil {
log.Error("Failed to open snapshot tree", "error", err)
log.Error("Failed to open snapshot tree", "err", err)
return err
}
if ctx.NArg() > 1 {
@ -200,15 +227,15 @@ func verifyState(ctx *cli.Context) error {
if ctx.NArg() == 1 {
root, err = parseRoot(ctx.Args()[0])
if err != nil {
log.Error("Failed to resolve state root", "error", err)
log.Error("Failed to resolve state root", "err", err)
return err
}
}
if err := snaptree.Verify(root); err != nil {
log.Error("Failed to verfiy state", "error", err)
log.Error("Failed to verfiy state", "root", root, "err", err)
return err
}
log.Info("Verified the state")
log.Info("Verified the state", "root", root)
return nil
}
@ -236,7 +263,7 @@ func traverseState(ctx *cli.Context) error {
if ctx.NArg() == 1 {
root, err = parseRoot(ctx.Args()[0])
if err != nil {
log.Error("Failed to resolve state root", "error", err)
log.Error("Failed to resolve state root", "err", err)
return err
}
log.Info("Start traversing the state", "root", root)
@ -247,7 +274,7 @@ func traverseState(ctx *cli.Context) error {
triedb := trie.NewDatabase(chaindb)
t, err := trie.NewSecure(root, triedb)
if err != nil {
log.Error("Failed to open trie", "root", root, "error", err)
log.Error("Failed to open trie", "root", root, "err", err)
return err
}
var (
@ -262,13 +289,13 @@ func traverseState(ctx *cli.Context) error {
accounts += 1
var acc state.Account
if err := rlp.DecodeBytes(accIter.Value, &acc); err != nil {
log.Error("Invalid account encountered during traversal", "error", err)
log.Error("Invalid account encountered during traversal", "err", err)
return err
}
if acc.Root != emptyRoot {
storageTrie, err := trie.NewSecure(acc.Root, triedb)
if err != nil {
log.Error("Failed to open storage trie", "root", acc.Root, "error", err)
log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
return err
}
storageIter := trie.NewIterator(storageTrie.NodeIterator(nil))
@ -276,7 +303,7 @@ func traverseState(ctx *cli.Context) error {
slots += 1
}
if storageIter.Err != nil {
log.Error("Failed to traverse storage trie", "root", acc.Root, "error", storageIter.Err)
log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Err)
return storageIter.Err
}
}
@ -294,7 +321,7 @@ func traverseState(ctx *cli.Context) error {
}
}
if accIter.Err != nil {
log.Error("Failed to traverse state trie", "root", root, "error", accIter.Err)
log.Error("Failed to traverse state trie", "root", root, "err", accIter.Err)
return accIter.Err
}
log.Info("State is complete", "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
@ -326,7 +353,7 @@ func traverseRawState(ctx *cli.Context) error {
if ctx.NArg() == 1 {
root, err = parseRoot(ctx.Args()[0])
if err != nil {
log.Error("Failed to resolve state root", "error", err)
log.Error("Failed to resolve state root", "err", err)
return err
}
log.Info("Start traversing the state", "root", root)
@ -337,7 +364,7 @@ func traverseRawState(ctx *cli.Context) error {
triedb := trie.NewDatabase(chaindb)
t, err := trie.NewSecure(root, triedb)
if err != nil {
log.Error("Failed to open trie", "root", root, "error", err)
log.Error("Failed to open trie", "root", root, "err", err)
return err
}
var (
@ -368,13 +395,13 @@ func traverseRawState(ctx *cli.Context) error {
accounts += 1
var acc state.Account
if err := rlp.DecodeBytes(accIter.LeafBlob(), &acc); err != nil {
log.Error("Invalid account encountered during traversal", "error", err)
log.Error("Invalid account encountered during traversal", "err", err)
return errors.New("invalid account")
}
if acc.Root != emptyRoot {
storageTrie, err := trie.NewSecure(acc.Root, triedb)
if err != nil {
log.Error("Failed to open storage trie", "root", acc.Root, "error", err)
log.Error("Failed to open storage trie", "root", acc.Root, "err", err)
return errors.New("missing storage trie")
}
storageIter := storageTrie.NodeIterator(nil)
@ -397,7 +424,7 @@ func traverseRawState(ctx *cli.Context) error {
}
}
if storageIter.Error() != nil {
log.Error("Failed to traverse storage trie", "root", acc.Root, "error", storageIter.Error())
log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Error())
return storageIter.Error()
}
}
@ -416,7 +443,7 @@ func traverseRawState(ctx *cli.Context) error {
}
}
if accIter.Error() != nil {
log.Error("Failed to traverse state trie", "root", root, "error", accIter.Error())
log.Error("Failed to traverse state trie", "root", root, "err", accIter.Error())
return accIter.Error()
}
log.Info("State is complete", "nodes", nodes, "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start)))
@ -430,3 +457,73 @@ func parseRoot(input string) (common.Hash, error) {
}
return h, nil
}
func dumpState(ctx *cli.Context) error {
stack, _ := makeConfigNode(ctx)
defer stack.Close()
conf, db, root, err := parseDumpConfig(ctx, stack)
if err != nil {
return err
}
snaptree, err := snapshot.New(db, trie.NewDatabase(db), 256, root, false, false, false)
if err != nil {
return err
}
accIt, err := snaptree.AccountIterator(root, common.BytesToHash(conf.Start))
if err != nil {
return err
}
defer accIt.Release()
log.Info("Snapshot dumping started", "root", root)
var (
start = time.Now()
logged = time.Now()
accounts uint64
)
enc := json.NewEncoder(os.Stdout)
enc.Encode(struct {
Root common.Hash `json:"root"`
}{root})
for accIt.Next() {
account, err := snapshot.FullAccount(accIt.Account())
if err != nil {
return err
}
da := &state.DumpAccount{
Balance: account.Balance.String(),
Nonce: account.Nonce,
Root: account.Root,
CodeHash: account.CodeHash,
SecureKey: accIt.Hash().Bytes(),
}
if !conf.SkipCode && !bytes.Equal(account.CodeHash, emptyCode) {
da.Code = rawdb.ReadCode(db, common.BytesToHash(account.CodeHash))
}
if !conf.SkipStorage {
da.Storage = make(map[common.Hash]string)
stIt, err := snaptree.StorageIterator(root, accIt.Hash(), common.Hash{})
if err != nil {
return err
}
for stIt.Next() {
da.Storage[stIt.Hash()] = common.Bytes2Hex(stIt.Slot())
}
}
enc.Encode(da)
accounts++
if time.Since(logged) > 8*time.Second {
log.Info("Snapshot dumping in progress", "at", accIt.Hash(), "accounts", accounts,
"elapsed", common.PrettyDuration(time.Since(start)))
logged = time.Now()
}
if conf.Max > 0 && accounts >= conf.Max {
break
}
}
log.Info("Snapshot dumping complete", "accounts", accounts,
"elapsed", common.PrettyDuration(time.Since(start)))
return nil
}
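Because dumpState streams its results to stdout as a plain sequence of JSON objects (the root object encoded above, followed by one object per account), a downstream tool can consume the dump incrementally with json.Decoder. A minimal consumer sketch, treating the per-account schema (which comes from state.DumpAccount) as opaque rather than assuming its field names:

// Minimal sketch of a consumer for the dumpState output stream.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

func main() {
	dec := json.NewDecoder(os.Stdin)

	// First object: {"root": "0x..."} as emitted by dumpState.
	var head struct {
		Root string `json:"root"`
	}
	if err := dec.Decode(&head); err != nil {
		panic(err)
	}
	fmt.Println("state root:", head.Root)

	// Remaining objects: one per dumped account, kept opaque here to avoid
	// assuming state.DumpAccount's exact field names.
	accounts := 0
	for {
		var acc map[string]json.RawMessage
		if err := dec.Decode(&acc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		accounts++
	}
	fmt.Println("accounts:", accounts)
}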



@ -44,7 +44,7 @@ var AppHelpFlagGroups = []flags.FlagGroup{
utils.MainnetFlag,
utils.GoerliFlag,
utils.RinkebyFlag,
utils.YoloV3Flag,
utils.CalaverasFlag,
utils.RopstenFlag,
utils.SyncModeFlag,
utils.ExitWhenSyncedFlag,
@ -196,14 +196,13 @@ var AppHelpFlagGroups = []flags.FlagGroup{
utils.GpoBlocksFlag,
utils.GpoPercentileFlag,
utils.GpoMaxGasPriceFlag,
utils.GpoIgnoreGasPriceFlag,
},
},
{
Name: "VIRTUAL MACHINE",
Flags: []cli.Flag{
utils.VMEnableDebugFlag,
utils.EVMInterpreterFlag,
utils.EWASMInterpreterFlag,
},
},
{
@ -235,6 +234,7 @@ var AppHelpFlagGroups = []flags.FlagGroup{
utils.SnapshotFlag,
utils.BloomFilterSizeFlag,
cli.HelpFlag,
utils.CatalystFlag,
},
},
}


@ -80,12 +80,10 @@ var dashboardContent = `
<ul class="nav side-menu">
{{if .EthstatsPage}}<li id="stats_menu"><a onclick="load('#stats')"><i class="fa fa-tachometer"></i> Network Stats</a></li>{{end}}
{{if .ExplorerPage}}<li id="explorer_menu"><a onclick="load('#explorer')"><i class="fa fa-database"></i> Block Explorer</a></li>{{end}}
{{if .WalletPage}}<li id="wallet_menu"><a onclick="load('#wallet')"><i class="fa fa-address-book-o"></i> Browser Wallet</a></li>{{end}}
{{if .FaucetPage}}<li id="faucet_menu"><a onclick="load('#faucet')"><i class="fa fa-bath"></i> Crypto Faucet</a></li>{{end}}
<li id="connect_menu"><a><i class="fa fa-plug"></i> Connect Yourself</a>
<ul id="connect_list" class="nav child_menu">
<li><a onclick="$('#connect_menu').removeClass('active'); $('#connect_list').toggle(); load('#geth')">Go Ethereum: Geth</a></li>
<li><a onclick="$('#connect_menu').removeClass('active'); $('#connect_list').toggle(); load('#mist')">Go Ethereum: Wallet & Mist</a></li>
<li><a onclick="$('#connect_menu').removeClass('active'); $('#connect_list').toggle(); load('#mobile')">Go Ethereum: Android & iOS</a></li>{{if .Ethash}}
<li><a onclick="$('#connect_menu').removeClass('active'); $('#connect_list').toggle(); load('#other')">Other Ethereum Clients</a></li>{{end}}
</ul>
@ -186,58 +184,6 @@ var dashboardContent = `
</div>
</div>
</div>
<div id="mist" hidden style="padding: 16px;">
<div class="page-title">
<div class="title_left">
<h3>Connect Yourself &ndash; Go Ethereum: Wallet &amp; Mist</h3>
</div>
</div>
<div class="clearfix"></div>
<div class="row">
<div class="col-md-6">
<div class="x_panel">
<div class="x_title">
<h2><i class="fa fa-credit-card" aria-hidden="true"></i> Desktop wallet <small>Interacts with accounts and contracts</small></h2>
<div class="clearfix"></div>
</div>
<div class="x_content">
<p>The Ethereum Wallet is an <a href="https://electron.atom.io/" target="about:blank">Electron</a> based desktop application to manage your Ethereum accounts and funds. Beside the usual account life-cycle operations you would expect to perform, the wallet also provides a means to send transactions from your accounts and to interact with smart contracts deployed on the network.</p>
<p>Under the hood the wallet is backed by a go-ethereum full node, meaning that a mid range machine is assumed. Similarly, synchronization is based on <strong>fast-sync</strong>, which will download all blockchain data from the network and make it available to the wallet. Light nodes cannot currently fully back the wallet, but it's a target actively pursued.</p>
<br/>
<p>To connect with the Ethereum Wallet, you'll need to initialize your private network first via Geth as the wallet does not currently support calling Geth directly. To initialize your local chain, download <a href="/{{.GethGenesis}}"><code>{{.GethGenesis}}</code></a> and run:
<pre>geth --datadir=$HOME/.{{.Network}} init {{.GethGenesis}}</pre>
</p>
<p>With your local chain initialized, you can start the Ethereum Wallet:
<pre>ethereumwallet --rpc $HOME/.{{.Network}}/geth.ipc --node-networkid={{.NetworkID}} --node-datadir=$HOME/.{{.Network}}{{if .Ethstats}} --node-ethstats='{{.Ethstats}}'{{end}} --node-bootnodes={{.BootnodesFlat}}</pre>
<p>
<br/>
<p>You can download the Ethereum Wallet from <a href="https://github.com/ethereum/mist/releases" target="about:blank">https://github.com/ethereum/mist/releases</a>.</p>
</div>
</div>
</div>
<div class="col-md-6">
<div class="x_panel">
<div class="x_title">
<h2><i class="fa fa-picture-o" aria-hidden="true"></i> Mist browser <small>Interacts with third party DApps</small></h2>
<div class="clearfix"></div>
</div>
<div class="x_content">
<p>The Mist browser is an <a href="https://electron.atom.io/" target="about:blank">Electron</a> based desktop application to load and interact with Ethereum enabled third party web DApps. Beside all the functionality provided by the Ethereum Wallet, Mist is an extended web-browser where loaded pages have access to the Ethereum network via a web3.js provider, and may also interact with users' own accounts (given proper authorization and confirmation of course).</p>
<p>Under the hood the browser is backed by a go-ethereum full node, meaning that a mid range machine is assumed. Similarly, synchronization is based on <strong>fast-sync</strong>, which will download all blockchain data from the network and make it available to the wallet. Light nodes cannot currently fully back the wallet, but it's a target actively pursued.</p>
<br/>
<p>To connect with the Mist browser, you'll need to initialize your private network first via Geth as Mist does not currently support calling Geth directly. To initialize your local chain, download <a href="/{{.GethGenesis}}"><code>{{.GethGenesis}}</code></a> and run:
<pre>geth --datadir=$HOME/.{{.Network}} init {{.GethGenesis}}</pre>
</p>
<p>With your local chain initialized, you can start Mist:
<pre>mist --rpc $HOME/.{{.Network}}/geth.ipc --node-networkid={{.NetworkID}} --node-datadir=$HOME/.{{.Network}}{{if .Ethstats}} --node-ethstats='{{.Ethstats}}'{{end}} --node-bootnodes={{.BootnodesFlat}}</pre>
<p>
<br/>
<p>You can download the Mist browser from <a href="https://github.com/ethereum/mist/releases" target="about:blank">https://github.com/ethereum/mist/releases</a>.</p>
</div>
</div>
</div>
</div>
</div>
<div id="mobile" hidden style="padding: 16px;">
<div class="page-title">
<div class="title_left">
@ -416,7 +362,7 @@ try! node?.start();
<div class="clearfix"></div>
</div>
<div style="display: inline-block; vertical-align: bottom; width: 623px; margin-top: 16px;">
<p>Puppeth is a tool to aid you in creating a new Ethereum network down to the genesis block, bootnodes, signers, ethstats server, crypto faucet, wallet browsers, block explorer, dashboard and more; without the hassle that it would normally entail to manually configure all these services one by one.</p>
<p>Puppeth is a tool to aid you in creating a new Ethereum network down to the genesis block, bootnodes, signers, ethstats server, crypto faucet, block explorer, dashboard and more; without the hassle that it would normally entail to manually configure all these services one by one.</p>
<p>Puppeth uses ssh to dial in to remote servers, and builds its network components out of docker containers using docker-compose. The user is guided through the process via a command line wizard that does the heavy lifting and topology configuration automatically behind the scenes.</p>
<br/>
<p>Puppeth is distributed as part of the <a href="https://geth.ethereum.org/downloads/" target="about:blank">Geth &amp; Tools</a> bundles, but can also be installed separately via:<pre>go get github.com/ethereum/go-ethereum/cmd/puppeth</pre></p>
@ -461,9 +407,6 @@ try! node?.start();
case "#explorer":
url = "//{{.ExplorerPage}}";
break;
case "#wallet":
url = "//{{.WalletPage}}";
break;
case "#faucet":
url = "//{{.FaucetPage}}";
break;
@ -539,7 +482,7 @@ ADD puppeth.png /dashboard/puppeth.png
EXPOSE 80
CMD ["node", "/server.js"]
CMD ["node", "./server.js"]
`
// dashboardComposefile is the docker-compose.yml file required to deploy and
@ -587,7 +530,6 @@ func deployDashboard(client *sshClient, network string, conf *config, config *da
"VHost": config.host,
"EthstatsPage": config.ethstats,
"ExplorerPage": config.explorer,
"WalletPage": config.wallet,
"FaucetPage": config.faucet,
})
files[filepath.Join(workdir, "docker-compose.yaml")] = composefile.Bytes()
@ -615,7 +557,6 @@ func deployDashboard(client *sshClient, network string, conf *config, config *da
"NetworkTitle": strings.Title(network),
"EthstatsPage": config.ethstats,
"ExplorerPage": config.explorer,
"WalletPage": config.wallet,
"FaucetPage": config.faucet,
"GethGenesis": network + ".json",
"Bootnodes": conf.bootnodes,
@ -695,7 +636,6 @@ type dashboardInfos struct {
ethstats string
explorer string
wallet string
faucet string
}
@ -707,7 +647,6 @@ func (info *dashboardInfos) Report() map[string]string {
"Website listener port": strconv.Itoa(info.port),
"Ethstats service": info.ethstats,
"Explorer service": info.explorer,
"Wallet service": info.wallet,
"Faucet service": info.faucet,
}
}
@ -748,7 +687,6 @@ func checkDashboard(client *sshClient, network string) (*dashboardInfos, error)
port: port,
ethstats: infos.envvars["ETHSTATS_PAGE"],
explorer: infos.envvars["EXPLORER_PAGE"],
wallet: infos.envvars["WALLET_PAGE"],
faucet: infos.envvars["FAUCET_PAGE"],
}, nil
}


@ -1,201 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"fmt"
"html/template"
"math/rand"
"path/filepath"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/log"
)
// walletDockerfile is the Dockerfile required to run a web wallet.
var walletDockerfile = `
FROM puppeth/wallet:latest
ADD genesis.json /genesis.json
RUN \
echo 'node server.js &' > wallet.sh && \
echo 'geth --cache 512 init /genesis.json' >> wallet.sh && \
echo $'exec geth --networkid {{.NetworkID}} --port {{.NodePort}} --bootnodes {{.Bootnodes}} --ethstats \'{{.Ethstats}}\' --cache=512 --http --http.addr=0.0.0.0 --http.corsdomain "*" --http.vhosts "*"' >> wallet.sh
RUN \
sed -i 's/PuppethNetworkID/{{.NetworkID}}/g' dist/js/etherwallet-master.js && \
sed -i 's/PuppethNetwork/{{.Network}}/g' dist/js/etherwallet-master.js && \
sed -i 's/PuppethDenom/{{.Denom}}/g' dist/js/etherwallet-master.js && \
sed -i 's/PuppethHost/{{.Host}}/g' dist/js/etherwallet-master.js && \
sed -i 's/PuppethRPCPort/{{.RPCPort}}/g' dist/js/etherwallet-master.js
ENTRYPOINT ["/bin/sh", "wallet.sh"]
`
// walletComposefile is the docker-compose.yml file required to deploy and
// maintain a web wallet.
var walletComposefile = `
version: '2'
services:
wallet:
build: .
image: {{.Network}}/wallet
container_name: {{.Network}}_wallet_1
ports:
- "{{.NodePort}}:{{.NodePort}}"
- "{{.NodePort}}:{{.NodePort}}/udp"
- "{{.RPCPort}}:8545"{{if not .VHost}}
- "{{.WebPort}}:80"{{end}}
volumes:
- {{.Datadir}}:/root/.ethereum
environment:
- NODE_PORT={{.NodePort}}/tcp
- STATS={{.Ethstats}}{{if .VHost}}
- VIRTUAL_HOST={{.VHost}}
- VIRTUAL_PORT=80{{end}}
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "10"
restart: always
`
// deployWallet deploys a new web wallet container to a remote machine via SSH,
// docker and docker-compose. If an instance with the specified network name
// already exists there, it will be overwritten!
func deployWallet(client *sshClient, network string, bootnodes []string, config *walletInfos, nocache bool) ([]byte, error) {
// Generate the content to upload to the server
workdir := fmt.Sprintf("%d", rand.Int63())
files := make(map[string][]byte)
dockerfile := new(bytes.Buffer)
template.Must(template.New("").Parse(walletDockerfile)).Execute(dockerfile, map[string]interface{}{
"Network": strings.ToTitle(network),
"Denom": strings.ToUpper(network),
"NetworkID": config.network,
"NodePort": config.nodePort,
"RPCPort": config.rpcPort,
"Bootnodes": strings.Join(bootnodes, ","),
"Ethstats": config.ethstats,
"Host": client.address,
})
files[filepath.Join(workdir, "Dockerfile")] = dockerfile.Bytes()
composefile := new(bytes.Buffer)
template.Must(template.New("").Parse(walletComposefile)).Execute(composefile, map[string]interface{}{
"Datadir": config.datadir,
"Network": network,
"NodePort": config.nodePort,
"RPCPort": config.rpcPort,
"VHost": config.webHost,
"WebPort": config.webPort,
"Ethstats": config.ethstats[:strings.Index(config.ethstats, ":")],
})
files[filepath.Join(workdir, "docker-compose.yaml")] = composefile.Bytes()
files[filepath.Join(workdir, "genesis.json")] = config.genesis
// Upload the deployment files to the remote server (and clean up afterwards)
if out, err := client.Upload(files); err != nil {
return out, err
}
defer client.Run("rm -rf " + workdir)
// Build and deploy the boot or seal node service
if nocache {
return nil, client.Stream(fmt.Sprintf("cd %s && docker-compose -p %s build --pull --no-cache && docker-compose -p %s up -d --force-recreate --timeout 60", workdir, network, network))
}
return nil, client.Stream(fmt.Sprintf("cd %s && docker-compose -p %s up -d --build --force-recreate --timeout 60", workdir, network))
}
// walletInfos is returned from a web wallet status check to allow reporting
// various configuration parameters.
type walletInfos struct {
genesis []byte
network int64
datadir string
ethstats string
nodePort int
rpcPort int
webHost string
webPort int
}
// Report converts the typed struct into a plain string->string map, containing
// most - but not all - fields for reporting to the user.
func (info *walletInfos) Report() map[string]string {
report := map[string]string{
"Data directory": info.datadir,
"Ethstats username": info.ethstats,
"Node listener port ": strconv.Itoa(info.nodePort),
"RPC listener port ": strconv.Itoa(info.rpcPort),
"Website address ": info.webHost,
"Website listener port ": strconv.Itoa(info.webPort),
}
return report
}
// checkWallet does a health-check against web wallet server to verify whether
// it's running, and if yes, whether it's responsive.
func checkWallet(client *sshClient, network string) (*walletInfos, error) {
// Inspect a possible web wallet container on the host
infos, err := inspectContainer(client, fmt.Sprintf("%s_wallet_1", network))
if err != nil {
return nil, err
}
if !infos.running {
return nil, ErrServiceOffline
}
// Resolve the port from the host, or the reverse proxy
webPort := infos.portmap["80/tcp"]
if webPort == 0 {
if proxy, _ := checkNginx(client, network); proxy != nil {
webPort = proxy.port
}
}
if webPort == 0 {
return nil, ErrNotExposed
}
// Resolve the host from the reverse-proxy and the config values
host := infos.envvars["VIRTUAL_HOST"]
if host == "" {
host = client.server
}
// Run a sanity check to see if the devp2p and RPC ports are reachable
nodePort := infos.portmap[infos.envvars["NODE_PORT"]]
if err = checkPort(client.server, nodePort); err != nil {
log.Warn("Wallet devp2p port seems unreachable", "server", client.server, "port", nodePort, "err", err)
}
rpcPort := infos.portmap["8545/tcp"]
if err = checkPort(client.server, rpcPort); err != nil {
log.Warn("Wallet RPC port seems unreachable", "server", client.server, "port", rpcPort, "err", err)
}
// Assemble and return the useful infos
stats := &walletInfos{
datadir: infos.volumes["/root/.ethereum"],
nodePort: nodePort,
rpcPort: rpcPort,
webHost: host,
webPort: webPort,
ethstats: infos.envvars["STATS"],
}
return stats, nil
}


@ -30,6 +30,7 @@ import (
"github.com/ethereum/go-ethereum/log"
"golang.org/x/crypto/ssh"
"golang.org/x/crypto/ssh/agent"
"golang.org/x/crypto/ssh/terminal"
)
@ -43,6 +44,8 @@ type sshClient struct {
logger log.Logger
}
const EnvSSHAuthSock = "SSH_AUTH_SOCK"
// dial establishes an SSH connection to a remote node using the current user and
// the user's configured private RSA key. If that fails, password authentication
// is fallen back to. server can be a string like user:identity@server:port.
@ -79,38 +82,49 @@ func dial(server string, pubkey []byte) (*sshClient, error) {
if username == "" {
username = user.Username
}
// Configure the supported authentication methods (private key and password)
var auths []ssh.AuthMethod
path := filepath.Join(user.HomeDir, ".ssh", identity)
if buf, err := ioutil.ReadFile(path); err != nil {
log.Warn("No SSH key, falling back to passwords", "path", path, "err", err)
// Configure the supported authentication methods (ssh agent, private key and password)
var (
auths []ssh.AuthMethod
conn net.Conn
)
if conn, err = net.Dial("unix", os.Getenv(EnvSSHAuthSock)); err != nil {
log.Warn("Unable to dial SSH agent, falling back to private keys", "err", err)
} else {
key, err := ssh.ParsePrivateKey(buf)
if err != nil {
fmt.Printf("What's the decryption password for %s? (won't be echoed)\n>", path)
blob, err := terminal.ReadPassword(int(os.Stdin.Fd()))
fmt.Println()
client := agent.NewClient(conn)
auths = append(auths, ssh.PublicKeysCallback(client.Signers))
}
if err != nil {
path := filepath.Join(user.HomeDir, ".ssh", identity)
if buf, err := ioutil.ReadFile(path); err != nil {
log.Warn("No SSH key, falling back to passwords", "path", path, "err", err)
} else {
key, err := ssh.ParsePrivateKey(buf)
if err != nil {
log.Warn("Couldn't read password", "err", err)
}
key, err := ssh.ParsePrivateKeyWithPassphrase(buf, blob)
if err != nil {
log.Warn("Failed to decrypt SSH key, falling back to passwords", "path", path, "err", err)
fmt.Printf("What's the decryption password for %s? (won't be echoed)\n>", path)
blob, err := terminal.ReadPassword(int(os.Stdin.Fd()))
fmt.Println()
if err != nil {
log.Warn("Couldn't read password", "err", err)
}
key, err := ssh.ParsePrivateKeyWithPassphrase(buf, blob)
if err != nil {
log.Warn("Failed to decrypt SSH key, falling back to passwords", "path", path, "err", err)
} else {
auths = append(auths, ssh.PublicKeys(key))
}
} else {
auths = append(auths, ssh.PublicKeys(key))
}
} else {
auths = append(auths, ssh.PublicKeys(key))
}
}
auths = append(auths, ssh.PasswordCallback(func() (string, error) {
fmt.Printf("What's the login password for %s at %s? (won't be echoed)\n> ", username, server)
blob, err := terminal.ReadPassword(int(os.Stdin.Fd()))
auths = append(auths, ssh.PasswordCallback(func() (string, error) {
fmt.Printf("What's the login password for %s at %s? (won't be echoed)\n> ", username, server)
blob, err := terminal.ReadPassword(int(os.Stdin.Fd()))
fmt.Println()
return string(blob), err
}))
fmt.Println()
return string(blob), err
}))
}
// Resolve the IP address of the remote server
addr, err := net.LookupHost(hostname)
if err != nil {
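Pieced together outside the diff, the reworked authentication order is: try an ssh-agent reachable via $SSH_AUTH_SOCK, and only fall back to the on-disk identity file and an interactive password when no agent answers. A condensed sketch of that flow using only golang.org/x/crypto/ssh and golang.org/x/crypto/ssh/agent; passphrase handling and the terminal prompts of the real code above are omitted for brevity:

// Condensed sketch of the agent-first authentication order.
package main

import (
	"fmt"
	"io/ioutil"
	"net"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/agent"
)

func authMethods(home, identity string) []ssh.AuthMethod {
	// 1. ssh-agent, if one is running.
	if conn, err := net.Dial("unix", os.Getenv("SSH_AUTH_SOCK")); err == nil {
		return []ssh.AuthMethod{ssh.PublicKeysCallback(agent.NewClient(conn).Signers)}
	}
	var auths []ssh.AuthMethod
	// 2. Unencrypted private key on disk.
	if buf, err := ioutil.ReadFile(filepath.Join(home, ".ssh", identity)); err == nil {
		if key, err := ssh.ParsePrivateKey(buf); err == nil {
			auths = append(auths, ssh.PublicKeys(key))
		}
	}
	// 3. Password auth (stubbed; the real code prompts on the terminal).
	auths = append(auths, ssh.PasswordCallback(func() (string, error) {
		return "", fmt.Errorf("no password configured in this sketch")
	}))
	return auths
}

func main() {
	fmt.Printf("%d auth method(s) configured\n", len(authMethods(os.Getenv("HOME"), "id_rsa")))
}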


@ -60,7 +60,7 @@ func (w *wizard) deployDashboard() {
available[service] = append(available[service], server)
}
}
for _, service := range []string{"ethstats", "explorer", "wallet", "faucet"} {
for _, service := range []string{"ethstats", "explorer", "faucet"} {
// Gather all the locally hosted pages of this type
var pages []string
for _, server := range available[service] {
@ -79,10 +79,6 @@ func (w *wizard) deployDashboard() {
if infos, err := checkExplorer(client, w.network); err == nil {
port = infos.port
}
case "wallet":
if infos, err := checkWallet(client, w.network); err == nil {
port = infos.webPort
}
case "faucet":
if infos, err := checkFaucet(client, w.network); err == nil {
port = infos.port
@ -127,8 +123,6 @@ func (w *wizard) deployDashboard() {
infos.ethstats = page
case "explorer":
infos.explorer = page
case "wallet":
infos.wallet = page
case "faucet":
infos.faucet = page
}


@ -240,8 +240,8 @@ func (w *wizard) manageGenesis() {
w.conf.Genesis.Config.BerlinBlock = w.readDefaultBigInt(w.conf.Genesis.Config.BerlinBlock)
fmt.Println()
fmt.Printf("Which block should YOLOv3 come into effect? (default = %v)\n", w.conf.Genesis.Config.YoloV3Block)
w.conf.Genesis.Config.YoloV3Block = w.readDefaultBigInt(w.conf.Genesis.Config.YoloV3Block)
fmt.Printf("Which block should London come into effect? (default = %v)\n", w.conf.Genesis.Config.LondonBlock)
w.conf.Genesis.Config.LondonBlock = w.readDefaultBigInt(w.conf.Genesis.Config.LondonBlock)
out, _ := json.MarshalIndent(w.conf.Genesis.Config, "", " ")
fmt.Printf("Chain configuration updated:\n\n%s\n", out)


@ -141,14 +141,6 @@ func (w *wizard) gatherStats(server string, pubkey []byte, client *sshClient) *s
} else {
stat.services["explorer"] = infos.Report()
}
logger.Debug("Checking for wallet availability")
if infos, err := checkWallet(client, w.network); err != nil {
if err != ErrServiceUnknown {
stat.services["wallet"] = map[string]string{"offline": err.Error()}
}
} else {
stat.services["wallet"] = infos.Report()
}
logger.Debug("Checking for faucet availability")
if infos, err := checkFaucet(client, w.network); err != nil {
if err != ErrServiceUnknown {


@ -175,9 +175,8 @@ func (w *wizard) deployComponent() {
fmt.Println(" 2. Bootnode - Entry point of the network")
fmt.Println(" 3. Sealer - Full node minting new blocks")
fmt.Println(" 4. Explorer - Chain analysis webservice")
fmt.Println(" 5. Wallet - Browser wallet for quick sends")
fmt.Println(" 6. Faucet - Crypto faucet to give away funds")
fmt.Println(" 7. Dashboard - Website listing above web-services")
fmt.Println(" 5. Faucet - Crypto faucet to give away funds")
fmt.Println(" 6. Dashboard - Website listing above web-services")
switch w.read() {
case "1":
@ -189,10 +188,8 @@ func (w *wizard) deployComponent() {
case "4":
w.deployExplorer()
case "5":
w.deployWallet()
case "6":
w.deployFaucet()
case "7":
case "6":
w.deployDashboard()
default:
log.Error("That's not something I can do")


@ -1,113 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"encoding/json"
"fmt"
"time"
"github.com/ethereum/go-ethereum/log"
)
// deployWallet creates a new web wallet based on some user input.
func (w *wizard) deployWallet() {
// Do some sanity check before the user wastes time on input
if w.conf.Genesis == nil {
log.Error("No genesis block configured")
return
}
if w.conf.ethstats == "" {
log.Error("No ethstats server configured")
return
}
// Select the server to interact with
server := w.selectServer()
if server == "" {
return
}
client := w.servers[server]
// Retrieve any active node configurations from the server
infos, err := checkWallet(client, w.network)
if err != nil {
infos = &walletInfos{
nodePort: 30303, rpcPort: 8545, webPort: 80, webHost: client.server,
}
}
existed := err == nil
infos.genesis, _ = json.MarshalIndent(w.conf.Genesis, "", " ")
infos.network = w.conf.Genesis.Config.ChainID.Int64()
// Figure out which port to listen on
fmt.Println()
fmt.Printf("Which port should the wallet listen on? (default = %d)\n", infos.webPort)
infos.webPort = w.readDefaultInt(infos.webPort)
// Figure which virtual-host to deploy ethstats on
if infos.webHost, err = w.ensureVirtualHost(client, infos.webPort, infos.webHost); err != nil {
log.Error("Failed to decide on wallet host", "err", err)
return
}
// Figure out where the user wants to store the persistent data
fmt.Println()
if infos.datadir == "" {
fmt.Printf("Where should data be stored on the remote machine?\n")
infos.datadir = w.readString()
} else {
fmt.Printf("Where should data be stored on the remote machine? (default = %s)\n", infos.datadir)
infos.datadir = w.readDefaultString(infos.datadir)
}
// Figure out which port to listen on
fmt.Println()
fmt.Printf("Which TCP/UDP port should the backing node listen on? (default = %d)\n", infos.nodePort)
infos.nodePort = w.readDefaultInt(infos.nodePort)
fmt.Println()
fmt.Printf("Which port should the backing RPC API listen on? (default = %d)\n", infos.rpcPort)
infos.rpcPort = w.readDefaultInt(infos.rpcPort)
// Set a proper name to report on the stats page
fmt.Println()
if infos.ethstats == "" {
fmt.Printf("What should the wallet be called on the stats page?\n")
infos.ethstats = w.readString() + ":" + w.conf.ethstats
} else {
fmt.Printf("What should the wallet be called on the stats page? (default = %s)\n", infos.ethstats)
infos.ethstats = w.readDefaultString(infos.ethstats) + ":" + w.conf.ethstats
}
// Try to deploy the wallet on the host
nocache := false
if existed {
fmt.Println()
fmt.Printf("Should the wallet be built from scratch (y/n)? (default = no)\n")
nocache = w.readDefaultYesNo(false)
}
if out, err := deployWallet(client, w.network, w.conf.bootnodes, infos, nocache); err != nil {
log.Error("Failed to deploy wallet container", "err", err)
if len(out) > 0 {
fmt.Printf("%s\n", out)
}
return
}
// All ok, run a network scan to pick any changes up
log.Info("Waiting for node to finish booting")
time.Sleep(3 * time.Second)
w.networkStats()
}


@ -151,9 +151,9 @@ var (
Name: "goerli",
Usage: "Görli network: pre-configured proof-of-authority test network",
}
YoloV3Flag = cli.BoolFlag{
Name: "yolov3",
Usage: "YOLOv3 network: pre-configured proof-of-authority shortlived test network.",
CalaverasFlag = cli.BoolFlag{
Name: "calaveras",
Usage: "Calaveras network: pre-configured proof-of-authority shortlived test network.",
}
RinkebyFlag = cli.BoolFlag{
Name: "rinkeby",
@ -184,7 +184,7 @@ var (
Name: "exitwhensynced",
Usage: "Exits after block synchronisation completes",
}
IterativeOutputFlag = cli.BoolFlag{
IterativeOutputFlag = cli.BoolTFlag{
Name: "iterative",
Usage: "Print streaming JSON iteratively, delimited by newlines",
}
@ -200,6 +200,16 @@ var (
Name: "nocode",
Usage: "Exclude contract code (save db lookups)",
}
StartKeyFlag = cli.StringFlag{
Name: "start",
Usage: "Start position. Either a hash or address",
Value: "0x0000000000000000000000000000000000000000000000000000000000000000",
}
DumpLimitFlag = cli.Uint64Flag{
Name: "limit",
Usage: "Max number of elements (0 = no limit)",
Value: 0,
}
defaultSyncMode = ethconfig.Defaults.SyncMode
SyncModeFlag = TextMarshalerFlag{
Name: "syncmode",
@ -233,9 +243,9 @@ var (
Usage: "Megabytes of memory allocated to bloom-filter for pruning",
Value: 2048,
}
OverrideBerlinFlag = cli.Uint64Flag{
Name: "override.berlin",
Usage: "Manually specify Berlin fork-block, overriding the bundled setting",
OverrideLondonFlag = cli.Uint64Flag{
Name: "override.london",
Usage: "Manually specify London fork-block, overriding the bundled setting",
}
// Light server and client settings
LightServeFlag = cli.IntFlag{
@ -665,10 +675,10 @@ var (
}
// ATM the url is left to the user and deployment to
JSpathFlag = cli.StringFlag{
JSpathFlag = DirectoryFlag{
Name: "jspath",
Usage: "JavaScript root path for `loadScript`",
Value: ".",
Value: DirectoryString("."),
}
// Gas price oracle settings
@ -687,6 +697,11 @@ var (
Usage: "Maximum gas price will be recommended by gpo",
Value: ethconfig.Defaults.GPO.MaxPrice.Int64(),
}
GpoIgnoreGasPriceFlag = cli.Int64Flag{
Name: "gpo.ignoreprice",
Usage: "Gas price below which gpo will ignore transactions",
Value: ethconfig.Defaults.GPO.IgnorePrice.Int64(),
}
// Metrics flags
MetricsEnabledFlag = cli.BoolFlag{
@ -745,15 +760,10 @@ var (
Usage: "Comma-separated InfluxDB tags (key/values) attached to all measurements",
Value: metrics.DefaultConfig.InfluxDBTags,
}
EWASMInterpreterFlag = cli.StringFlag{
Name: "vm.ewasm",
Usage: "External ewasm configuration (default = built-in interpreter)",
Value: "",
}
EVMInterpreterFlag = cli.StringFlag{
Name: "vm.evm",
Usage: "External EVM configuration (default = built-in interpreter)",
Value: "",
CatalystFlag = cli.BoolFlag{
Name: "catalyst",
Usage: "Catalyst mode (eth2 integration testing)",
}
)
@ -773,8 +783,8 @@ func MakeDataDir(ctx *cli.Context) string {
if ctx.GlobalBool(GoerliFlag.Name) {
return filepath.Join(path, "goerli")
}
if ctx.GlobalBool(YoloV3Flag.Name) {
return filepath.Join(path, "yolo-v3")
if ctx.GlobalBool(CalaverasFlag.Name) {
return filepath.Join(path, "calaveras")
}
return path
}
@ -828,8 +838,8 @@ func setBootstrapNodes(ctx *cli.Context, cfg *p2p.Config) {
urls = params.RinkebyBootnodes
case ctx.GlobalBool(GoerliFlag.Name):
urls = params.GoerliBootnodes
case ctx.GlobalBool(YoloV3Flag.Name):
urls = params.YoloV3Bootnodes
case ctx.GlobalBool(CalaverasFlag.Name):
urls = params.CalaverasBootnodes
case cfg.BootstrapNodes != nil:
return // already set, don't apply defaults.
}
@ -1186,10 +1196,11 @@ func SetP2PConfig(ctx *cli.Context, cfg *p2p.Config) {
cfg.NetRestrict = list
}
if ctx.GlobalBool(DeveloperFlag.Name) {
if ctx.GlobalBool(DeveloperFlag.Name) || ctx.GlobalBool(CatalystFlag.Name) {
// --dev mode can't use p2p networking.
cfg.MaxPeers = 0
cfg.ListenAddr = ":0"
cfg.ListenAddr = ""
cfg.NoDial = true
cfg.NoDiscovery = true
cfg.DiscoveryV5 = false
}
@ -1213,6 +1224,9 @@ func SetNodeConfig(ctx *cli.Context, cfg *node.Config) {
if ctx.GlobalIsSet(KeyStoreDirFlag.Name) {
cfg.KeyStoreDir = ctx.GlobalString(KeyStoreDirFlag.Name)
}
if ctx.GlobalIsSet(DeveloperFlag.Name) {
cfg.UseLightweightKDF = true
}
if ctx.GlobalIsSet(LightKDFFlag.Name) {
cfg.UseLightweightKDF = ctx.GlobalBool(LightKDFFlag.Name)
}
@ -1269,8 +1283,8 @@ func setDataDir(ctx *cli.Context, cfg *node.Config) {
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "rinkeby")
case ctx.GlobalBool(GoerliFlag.Name) && cfg.DataDir == node.DefaultDataDir():
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "goerli")
case ctx.GlobalBool(YoloV3Flag.Name) && cfg.DataDir == node.DefaultDataDir():
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "yolo-v3")
case ctx.GlobalBool(CalaverasFlag.Name) && cfg.DataDir == node.DefaultDataDir():
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "calaveras")
}
}
@ -1278,8 +1292,7 @@ func setGPO(ctx *cli.Context, cfg *gasprice.Config, light bool) {
// If we are running the light client, apply another group
// settings for gas oracle.
if light {
cfg.Blocks = ethconfig.LightClientGPO.Blocks
cfg.Percentile = ethconfig.LightClientGPO.Percentile
*cfg = ethconfig.LightClientGPO
}
if ctx.GlobalIsSet(GpoBlocksFlag.Name) {
cfg.Blocks = ctx.GlobalInt(GpoBlocksFlag.Name)
@ -1290,6 +1303,9 @@ func setGPO(ctx *cli.Context, cfg *gasprice.Config, light bool) {
if ctx.GlobalIsSet(GpoMaxGasPriceFlag.Name) {
cfg.MaxPrice = big.NewInt(ctx.GlobalInt64(GpoMaxGasPriceFlag.Name))
}
if ctx.GlobalIsSet(GpoIgnoreGasPriceFlag.Name) {
cfg.IgnorePrice = big.NewInt(ctx.GlobalInt64(GpoIgnoreGasPriceFlag.Name))
}
}
func setTxPool(ctx *cli.Context, cfg *core.TxPoolConfig) {
@ -1454,7 +1470,7 @@ func CheckExclusive(ctx *cli.Context, args ...interface{}) {
// SetEthConfig applies eth-related command line flags to the config.
func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
// Avoid conflicting network flags
CheckExclusive(ctx, MainnetFlag, DeveloperFlag, RopstenFlag, RinkebyFlag, GoerliFlag, YoloV3Flag)
CheckExclusive(ctx, MainnetFlag, DeveloperFlag, RopstenFlag, RinkebyFlag, GoerliFlag, CalaverasFlag)
CheckExclusive(ctx, LightServeFlag, SyncModeFlag, "light")
CheckExclusive(ctx, DeveloperFlag, ExternalSignerFlag) // Can't use both ephemeral unlocked and external signer
if ctx.GlobalString(GCModeFlag.Name) == "archive" && ctx.GlobalUint64(TxLookupLimitFlag.Name) != 0 {
@ -1560,13 +1576,6 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
cfg.EnablePreimageRecording = ctx.GlobalBool(VMEnableDebugFlag.Name)
}
if ctx.GlobalIsSet(EWASMInterpreterFlag.Name) {
cfg.EWASMInterpreter = ctx.GlobalString(EWASMInterpreterFlag.Name)
}
if ctx.GlobalIsSet(EVMInterpreterFlag.Name) {
cfg.EVMInterpreter = ctx.GlobalString(EVMInterpreterFlag.Name)
}
if ctx.GlobalIsSet(RPCGlobalGasCapFlag.Name) {
cfg.RPCGasCap = ctx.GlobalUint64(RPCGlobalGasCapFlag.Name)
}
@ -1614,15 +1623,16 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
}
cfg.Genesis = core.DefaultGoerliGenesisBlock()
SetDNSDiscoveryDefaults(cfg, params.GoerliGenesisHash)
case ctx.GlobalBool(YoloV3Flag.Name):
case ctx.GlobalBool(CalaverasFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = new(big.Int).SetBytes([]byte("yolov3x")).Uint64() // "yolov3x"
cfg.NetworkId = 123 // https://gist.github.com/holiman/c5697b041b3dc18c50a5cdd382cbdd16
}
cfg.Genesis = core.DefaultYoloV3GenesisBlock()
cfg.Genesis = core.DefaultCalaverasGenesisBlock()
case ctx.GlobalBool(DeveloperFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 1337
}
cfg.SyncMode = downloader.FullSync
// Create new developer account or reuse existing one
var (
developer accounts.Account
@ -1656,7 +1666,7 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
if ctx.GlobalIsSet(DataDirFlag.Name) {
// Check if we have an already initialized chain and fall back to
// that if so. Otherwise we need to generate a new genesis spec.
chaindb := MakeChainDatabase(ctx, stack, true)
chaindb := MakeChainDatabase(ctx, stack, false) // TODO (MariusVanDerWijden) make this read only
if rawdb.ReadCanonicalHash(chaindb, 0) != (common.Hash{}) {
cfg.Genesis = nil // fallback to db content
}
@ -1684,23 +1694,21 @@ func SetDNSDiscoveryDefaults(cfg *ethconfig.Config, genesis common.Hash) {
}
if url := params.KnownDNSNetwork(genesis, protocol); url != "" {
cfg.EthDiscoveryURLs = []string{url}
}
if cfg.SyncMode == downloader.SnapSync {
if url := params.KnownDNSNetwork(genesis, "snap"); url != "" {
cfg.SnapDiscoveryURLs = []string{url}
}
cfg.SnapDiscoveryURLs = cfg.EthDiscoveryURLs
}
}
// RegisterEthService adds an Ethereum client to the stack.
func RegisterEthService(stack *node.Node, cfg *ethconfig.Config) ethapi.Backend {
// The second return value is the full node instance, which may be nil if the
// node is running as a light client.
func RegisterEthService(stack *node.Node, cfg *ethconfig.Config) (ethapi.Backend, *eth.Ethereum) {
if cfg.SyncMode == downloader.LightSync {
backend, err := les.New(stack, cfg)
if err != nil {
Fatalf("Failed to register the Ethereum service: %v", err)
}
stack.RegisterAPIs(tracers.APIs(backend.ApiBackend))
return backend.ApiBackend
return backend.ApiBackend, nil
}
backend, err := eth.New(stack, cfg)
if err != nil {
@ -1713,7 +1721,7 @@ func RegisterEthService(stack *node.Node, cfg *ethconfig.Config) ethapi.Backend
}
}
stack.RegisterAPIs(tracers.APIs(backend.APIBackend))
return backend.APIBackend
return backend.APIBackend, backend
}
// RegisterEthStatsService configures the Ethereum Stats daemon and adds it to
@ -1809,8 +1817,8 @@ func MakeGenesis(ctx *cli.Context) *core.Genesis {
genesis = core.DefaultRinkebyGenesisBlock()
case ctx.GlobalBool(GoerliFlag.Name):
genesis = core.DefaultGoerliGenesisBlock()
case ctx.GlobalBool(YoloV3Flag.Name):
genesis = core.DefaultYoloV3GenesisBlock()
case ctx.GlobalBool(CalaverasFlag.Name):
genesis = core.DefaultCalaverasGenesisBlock()
case ctx.GlobalBool(DeveloperFlag.Name):
Fatalf("Developer chains are ephemeral")
}


@ -37,8 +37,8 @@ func Report(extra ...interface{}) {
fmt.Fprintln(os.Stderr, "#### BUG! PLEASE REPORT ####")
}
// PrintDepricationWarning prinst the given string in a box using fmt.Println.
func PrintDepricationWarning(str string) {
// PrintDeprecationWarning prints the given string in a box using fmt.Println.
func PrintDeprecationWarning(str string) {
line := strings.Repeat("#", len(str)+4)
emptyLine := strings.Repeat(" ", len(str))
fmt.Printf(`


@ -76,7 +76,7 @@ func (h Hash) Hex() string { return hexutil.Encode(h[:]) }
// TerminalString implements log.TerminalStringer, formatting a string for console
// output during logging.
func (h Hash) TerminalString() string {
return fmt.Sprintf("%x%x", h[:3], h[29:])
return fmt.Sprintf("%x..%x", h[:3], h[29:])
}
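A quick illustration of what the new format produces in console logs; the only change is the explicit ".." marking the truncated middle of the hash:

// Tiny illustration of the new TerminalString output.
package main

import "fmt"

func main() {
	var h [32]byte
	copy(h[:], []byte{0xde, 0xad, 0xbe, 0xef})
	h[29], h[30], h[31] = 0xca, 0xfe, 0xff
	// Old format printed "deadbecafeff"; the new one prints "deadbe..cafeff".
	fmt.Printf("%x..%x\n", h[:3], h[29:])
}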
// String implements the stringer interface and is used also by the logger when


@ -17,11 +17,14 @@
package clique
import (
"encoding/json"
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/rpc"
)
@ -175,3 +178,51 @@ func (api *API) Status() (*status, error) {
NumBlocks: numBlocks,
}, nil
}
type blockNumberOrHashOrRLP struct {
*rpc.BlockNumberOrHash
RLP hexutil.Bytes `json:"rlp,omitempty"`
}
func (sb *blockNumberOrHashOrRLP) UnmarshalJSON(data []byte) error {
bnOrHash := new(rpc.BlockNumberOrHash)
// Try to unmarshal bNrOrHash
if err := bnOrHash.UnmarshalJSON(data); err == nil {
sb.BlockNumberOrHash = bnOrHash
return nil
}
// Try to unmarshal RLP
var input string
if err := json.Unmarshal(data, &input); err != nil {
return err
}
sb.RLP = hexutil.MustDecode(input)
return nil
}
// GetSigner returns the signer for a specific clique block.
// Can be called with either a blocknumber, blockhash or an rlp encoded blob.
// The RLP encoded blob can either be a block or a header.
func (api *API) GetSigner(rlpOrBlockNr *blockNumberOrHashOrRLP) (common.Address, error) {
if len(rlpOrBlockNr.RLP) == 0 {
blockNrOrHash := rlpOrBlockNr.BlockNumberOrHash
var header *types.Header
if blockNrOrHash == nil {
header = api.chain.CurrentHeader()
} else if hash, ok := blockNrOrHash.Hash(); ok {
header = api.chain.GetHeaderByHash(hash)
} else if number, ok := blockNrOrHash.Number(); ok {
header = api.chain.GetHeaderByNumber(uint64(number.Int64()))
}
return api.clique.Author(header)
}
block := new(types.Block)
if err := rlp.DecodeBytes(rlpOrBlockNr.RLP, block); err == nil {
return api.clique.Author(block.Header())
}
header := new(types.Header)
if err := rlp.DecodeBytes(rlpOrBlockNr.RLP, header); err != nil {
return common.Address{}, err
}
return api.clique.Author(header)
}
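A hedged usage sketch for the new endpoint: assuming the node exposes the clique namespace over the transport being dialled (for HTTP that means including clique in --http.api), the method should be reachable as clique_getSigner and accepts a block number, a block hash or a hex-encoded RLP blob as its single parameter. Here it is called with a block hash through the rpc client:

// Sketch: query the signer of a clique-sealed block over JSON-RPC.
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Hash of a clique-sealed block (placeholder value).
	blockHash := common.HexToHash("0x1234567890123456789012345678901234567890123456789012345678901234")

	var signer common.Address
	if err := client.CallContext(context.Background(), &signer, "clique_getSigner", blockHash); err != nil {
		panic(err)
	}
	fmt.Println("block sealed by", signer)
}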


@ -20,6 +20,7 @@ package clique
import (
"bytes"
"errors"
"fmt"
"io"
"math/big"
"math/rand"
@ -293,6 +294,11 @@ func (c *Clique) verifyHeader(chain consensus.ChainHeaderReader, header *types.H
return errInvalidDifficulty
}
}
// Verify that the gas limit is <= 2^63-1
cap := uint64(0x7fffffffffffffff)
if header.GasLimit > cap {
return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, cap)
}
// If all checks passed, validate any special fields for hard forks
if err := misc.VerifyForkHashes(chain.Config(), header, false); err != nil {
return err
@ -324,6 +330,22 @@ func (c *Clique) verifyCascadingFields(chain consensus.ChainHeaderReader, header
if parent.Time+c.config.Period > header.Time {
return errInvalidTimestamp
}
// Verify that the gasUsed is <= gasLimit
if header.GasUsed > header.GasLimit {
return fmt.Errorf("invalid gasUsed: have %d, gasLimit %d", header.GasUsed, header.GasLimit)
}
if !chain.Config().IsLondon(header.Number) {
// Verify BaseFee not present before EIP-1559 fork.
if header.BaseFee != nil {
return fmt.Errorf("invalid baseFee before fork: have %d, want <nil>", header.BaseFee)
}
if err := misc.VerifyGaslimit(parent.GasLimit, header.GasLimit); err != nil {
return err
}
} else if err := misc.VerifyEip1559Header(chain.Config(), parent, header); err != nil {
// Verify the header's EIP-1559 attributes.
return err
}
// Retrieve the snapshot needed to verify this header and cache it
snap, err := c.snapshot(chain, number-1, header.ParentHash, parents)
if err != nil {
@ -688,7 +710,7 @@ func (c *Clique) APIs(chain consensus.ChainHeaderReader) []rpc.API {
func SealHash(header *types.Header) (hash common.Hash) {
hasher := sha3.NewLegacyKeccak256()
encodeSigHeader(hasher, header)
hasher.Sum(hash[:0])
hasher.(crypto.KeccakState).Read(hash[:])
return hash
}
@ -706,7 +728,7 @@ func CliqueRLP(header *types.Header) []byte {
}
func encodeSigHeader(w io.Writer, header *types.Header) {
err := rlp.Encode(w, []interface{}{
enc := []interface{}{
header.ParentHash,
header.UncleHash,
header.Coinbase,
@ -722,8 +744,11 @@ func encodeSigHeader(w io.Writer, header *types.Header) {
header.Extra[:len(header.Extra)-crypto.SignatureLength], // Yes, this will panic if extra is too short
header.MixDigest,
header.Nonce,
})
if err != nil {
}
if header.BaseFee != nil {
enc = append(enc, header.BaseFee)
}
if err := rlp.Encode(w, enc); err != nil {
panic("can't encode: " + err.Error())
}
}


@ -47,8 +47,9 @@ func TestReimportMirroredState(t *testing.T) {
genspec := &core.Genesis{
ExtraData: make([]byte, extraVanity+common.AddressLength+extraSeal),
Alloc: map[common.Address]core.GenesisAccount{
addr: {Balance: big.NewInt(1)},
addr: {Balance: big.NewInt(10000000000000000)},
},
BaseFee: big.NewInt(params.InitialBaseFee),
}
copy(genspec.ExtraData[extraVanity:], addr[:])
genesis := genspec.MustCommit(db)
@ -65,7 +66,7 @@ func TestReimportMirroredState(t *testing.T) {
// We want to simulate an empty middle block, having the same state as the
// first one. The last is needs a state change again to force a reorg.
if i != 1 {
tx, err := types.SignTx(types.NewTransaction(block.TxNonce(addr), common.Address{0x00}, new(big.Int), params.TxGas, nil, nil), signer, key)
tx, err := types.SignTx(types.NewTransaction(block.TxNonce(addr), common.Address{0x00}, new(big.Int), params.TxGas, block.BaseFee(), nil), signer, key)
if err != nil {
panic(err)
}
@ -111,3 +112,16 @@ func TestReimportMirroredState(t *testing.T) {
t.Fatalf("chain head mismatch: have %d, want %d", head, 3)
}
}
func TestSealHash(t *testing.T) {
have := SealHash(&types.Header{
Difficulty: new(big.Int),
Number: new(big.Int),
Extra: make([]byte, 32+65),
BaseFee: new(big.Int),
})
want := common.HexToHash("0xbd3d1fa43fbc4c5bfcc91b179ec92e2861df3654de60468beb908ff805359e8f")
if have != want {
t.Errorf("have %x, want %x", have, want)
}
}


@ -19,6 +19,7 @@ package clique
import (
"bytes"
"crypto/ecdsa"
"math/big"
"sort"
"testing"
@ -395,6 +396,7 @@ func TestClique(t *testing.T) {
// Create the genesis block with the initial set of signers
genesis := &core.Genesis{
ExtraData: make([]byte, extraVanity+common.AddressLength*len(signers)+extraSeal),
BaseFee: big.NewInt(params.InitialBaseFee),
}
for j, signer := range signers {
copy(genesis.ExtraData[extraVanity+j*common.AddressLength:], signer[:])


@ -45,6 +45,11 @@ var (
maxUncles = 2 // Maximum number of uncles allowed in a single block
allowedFutureBlockTimeSeconds = int64(15) // Max seconds from current time allowed for blocks, before they're considered future blocks
// calcDifficultyEip3554 is the difficulty adjustment algorithm as specified by EIP 3554.
// It offsets the bomb a total of 9.7M blocks.
// Specification EIP-3554: https://eips.ethereum.org/EIPS/eip-3554
calcDifficultyEip3554 = makeDifficultyCalculator(big.NewInt(9700000))
// calcDifficultyEip2384 is the difficulty adjustment algorithm as specified by EIP 2384.
// It offsets the bomb 4M blocks from Constantinople, so in total 9M blocks.
// Specification EIP-2384: https://eips.ethereum.org/EIPS/eip-2384
@ -203,15 +208,23 @@ func (ethash *Ethash) VerifyUncles(chain consensus.ChainReader, block *types.Blo
number, parent := block.NumberU64()-1, block.ParentHash()
for i := 0; i < 7; i++ {
ancestor := chain.GetBlock(parent, number)
if ancestor == nil {
ancestorHeader := chain.GetHeader(parent, number)
if ancestorHeader == nil {
break
}
ancestors[ancestor.Hash()] = ancestor.Header()
for _, uncle := range ancestor.Uncles() {
uncles.Add(uncle.Hash())
ancestors[parent] = ancestorHeader
// If the ancestor doesn't have any uncles, we don't have to iterate them
if ancestorHeader.UncleHash != types.EmptyUncleHash {
// Need to add those uncles to the blacklist too
ancestor := chain.GetBlock(parent, number)
if ancestor == nil {
break
}
for _, uncle := range ancestor.Uncles() {
uncles.Add(uncle.Hash())
}
}
parent, number = ancestor.ParentHash(), number-1
parent, number = ancestorHeader.ParentHash, number-1
}
ancestors[block.Hash()] = block.Header()
uncles.Add(block.Hash())
@ -271,16 +284,18 @@ func (ethash *Ethash) verifyHeader(chain consensus.ChainHeaderReader, header, pa
if header.GasUsed > header.GasLimit {
return fmt.Errorf("invalid gasUsed: have %d, gasLimit %d", header.GasUsed, header.GasLimit)
}
// Verify that the gas limit remains within allowed bounds
diff := int64(parent.GasLimit) - int64(header.GasLimit)
if diff < 0 {
diff *= -1
}
limit := parent.GasLimit / params.GasLimitBoundDivisor
if uint64(diff) >= limit || header.GasLimit < params.MinGasLimit {
return fmt.Errorf("invalid gas limit: have %d, want %d += %d", header.GasLimit, parent.GasLimit, limit)
// Verify the block's gas usage and (if applicable) verify the base fee.
if !chain.Config().IsLondon(header.Number) {
// Verify BaseFee not present before EIP-1559 fork.
if header.BaseFee != nil {
return fmt.Errorf("invalid baseFee before fork: have %d, expected 'nil'", header.BaseFee)
}
if err := misc.VerifyGaslimit(parent.GasLimit, header.GasLimit); err != nil {
return err
}
} else if err := misc.VerifyEip1559Header(chain.Config(), parent, header); err != nil {
// Verify the header's EIP-1559 attributes.
return err
}
// Verify that the block number is parent's +1
if diff := new(big.Int).Sub(header.Number, parent.Number); diff.Cmp(big.NewInt(1)) != 0 {
@ -315,6 +330,10 @@ func (ethash *Ethash) CalcDifficulty(chain consensus.ChainHeaderReader, time uin
func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int {
next := new(big.Int).Add(parent.Number, big1)
switch {
case config.IsCatalyst(next):
return big.NewInt(1)
case config.IsLondon(next):
return calcDifficultyEip3554(time, parent)
case config.IsMuirGlacier(next):
return calcDifficultyEip2384(time, parent)
case config.IsConstantinople(next):
@ -587,7 +606,7 @@ func (ethash *Ethash) FinalizeAndAssemble(chain consensus.ChainHeaderReader, hea
func (ethash *Ethash) SealHash(header *types.Header) (hash common.Hash) {
hasher := sha3.NewLegacyKeccak256()
rlp.Encode(hasher, []interface{}{
enc := []interface{}{
header.ParentHash,
header.UncleHash,
header.Coinbase,
@ -601,7 +620,11 @@ func (ethash *Ethash) SealHash(header *types.Header) (hash common.Hash) {
header.GasUsed,
header.Time,
header.Extra,
})
}
if header.BaseFee != nil {
enc = append(enc, header.BaseFee)
}
rlp.Encode(hasher, enc)
hasher.Sum(hash[:0])
return hash
}
@ -616,6 +639,10 @@ var (
// reward. The total reward consists of the static block reward and rewards for
// included uncles. The coinbase of each uncle block is also rewarded.
func accumulateRewards(config *params.ChainConfig, state *state.StateDB, header *types.Header, uncles []*types.Header) {
// Skip block reward in catalyst mode
if config.IsCatalyst(header.Number) {
return
}
// Select the correct block reward based on chain progression
blockReward := FrontierBlockReward
if config.IsByzantium(header.Number) {


@ -52,8 +52,7 @@ func CalcDifficultyFrontierU256(time uint64, parent *types.Header) *big.Int {
- num = block.number
*/
pDiff := uint256.NewInt()
pDiff.SetFromBig(parent.Difficulty) // pDiff: pdiff
pDiff, _ := uint256.FromBig(parent.Difficulty) // pDiff: pdiff
adjust := pDiff.Clone()
adjust.Rsh(adjust, difficultyBoundDivisor) // adjust: pDiff / 2048
@ -96,8 +95,7 @@ func CalcDifficultyHomesteadU256(time uint64, parent *types.Header) *big.Int {
- num = block.number
*/
pDiff := uint256.NewInt()
pDiff.SetFromBig(parent.Difficulty) // pDiff: pdiff
pDiff, _ := uint256.FromBig(parent.Difficulty) // pDiff: pdiff
adjust := pDiff.Clone()
adjust.Rsh(adjust, difficultyBoundDivisor) // adjust: pDiff / 2048


@ -112,12 +112,13 @@ func memoryMapFile(file *os.File, write bool) (mmap.MMap, []uint32, error) {
if err != nil {
return nil, nil, err
}
// Yay, we managed to memory map the file, here be dragons
header := *(*reflect.SliceHeader)(unsafe.Pointer(&mem))
header.Len /= 4
header.Cap /= 4
return mem, *(*[]uint32)(unsafe.Pointer(&header)), nil
// The file is now memory-mapped. Create a []uint32 view of the file.
var view []uint32
header := (*reflect.SliceHeader)(unsafe.Pointer(&view))
header.Data = (*reflect.SliceHeader)(unsafe.Pointer(&mem)).Data
header.Cap = len(mem) / 4
header.Len = header.Cap
return mem, view, nil
}
// memoryMapAndGenerate tries to memory map a temporary file of uint32s for write

consensus/misc/eip1559.go (new file)

@ -0,0 +1,93 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package misc
import (
"fmt"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
)
// VerifyEip1559Header verifies some header attributes which were changed in EIP-1559,
// - gas limit check
// - basefee check
func VerifyEip1559Header(config *params.ChainConfig, parent, header *types.Header) error {
// Verify that the gas limit remains within allowed bounds
parentGasLimit := parent.GasLimit
if !config.IsLondon(parent.Number) {
parentGasLimit = parent.GasLimit * params.ElasticityMultiplier
}
if err := VerifyGaslimit(parentGasLimit, header.GasLimit); err != nil {
return err
}
// Verify the header is not malformed
if header.BaseFee == nil {
return fmt.Errorf("header is missing baseFee")
}
// Verify the baseFee is correct based on the parent header.
expectedBaseFee := CalcBaseFee(config, parent)
if header.BaseFee.Cmp(expectedBaseFee) != 0 {
return fmt.Errorf("invalid baseFee: have %s, want %s, parentBaseFee %s, parentGasUsed %d",
expectedBaseFee, header.BaseFee, parent.BaseFee, parent.GasUsed)
}
return nil
}
// CalcBaseFee calculates the basefee of the header.
func CalcBaseFee(config *params.ChainConfig, parent *types.Header) *big.Int {
// If the current block is the first EIP-1559 block, return the InitialBaseFee.
if !config.IsLondon(parent.Number) {
return new(big.Int).SetUint64(params.InitialBaseFee)
}
var (
parentGasTarget = parent.GasLimit / params.ElasticityMultiplier
parentGasTargetBig = new(big.Int).SetUint64(parentGasTarget)
baseFeeChangeDenominator = new(big.Int).SetUint64(params.BaseFeeChangeDenominator)
)
// If the parent gasUsed is the same as the target, the baseFee remains unchanged.
if parent.GasUsed == parentGasTarget {
return new(big.Int).Set(parent.BaseFee)
}
if parent.GasUsed > parentGasTarget {
// If the parent block used more gas than its target, the baseFee should increase.
gasUsedDelta := new(big.Int).SetUint64(parent.GasUsed - parentGasTarget)
x := new(big.Int).Mul(parent.BaseFee, gasUsedDelta)
y := x.Div(x, parentGasTargetBig)
baseFeeDelta := math.BigMax(
x.Div(y, baseFeeChangeDenominator),
common.Big1,
)
return x.Add(parent.BaseFee, baseFeeDelta)
} else {
// Otherwise if the parent block used less gas than its target, the baseFee should decrease.
gasUsedDelta := new(big.Int).SetUint64(parentGasTarget - parent.GasUsed)
x := new(big.Int).Mul(parent.BaseFee, gasUsedDelta)
y := x.Div(x, parentGasTargetBig)
baseFeeDelta := x.Div(y, baseFeeChangeDenominator)
return math.BigMax(
x.Sub(parent.BaseFee, baseFeeDelta),
common.Big0,
)
}
}
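To make the update rule concrete: the delta is parentBaseFee * |parentGasUsed - target| / target / BaseFeeChangeDenominator, with target = parentGasLimit / ElasticityMultiplier and the constants being 8 and 2 respectively; the increase is floored at 1 wei and the decrease is clamped at a zero base fee. A standalone re-derivation of the expected values used in TestCalcBaseFee further down, using uint64 for brevity where the real code uses big.Int:

// Standalone sketch of the EIP-1559 base fee update arithmetic.
package main

import "fmt"

func nextBaseFee(parentBaseFee, parentGasLimit, parentGasUsed uint64) uint64 {
	target := parentGasLimit / 2 // params.ElasticityMultiplier
	switch {
	case parentGasUsed == target:
		return parentBaseFee
	case parentGasUsed > target:
		delta := parentBaseFee * (parentGasUsed - target) / target / 8 // params.BaseFeeChangeDenominator
		if delta < 1 {
			delta = 1
		}
		return parentBaseFee + delta
	default:
		delta := parentBaseFee * (target - parentGasUsed) / target / 8
		if delta > parentBaseFee {
			return 0
		}
		return parentBaseFee - delta
	}
}

func main() {
	// Mirrors the three cases in TestCalcBaseFee: 1000000000, 987500000, 1012500000.
	fmt.Println(nextBaseFee(1000000000, 20000000, 10000000))
	fmt.Println(nextBaseFee(1000000000, 20000000, 9000000))
	fmt.Println(nextBaseFee(1000000000, 20000000, 11000000))
}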


@ -0,0 +1,132 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package misc
import (
"math/big"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
)
// copyConfig does a _shallow_ copy of a given config. Safe to set new values, but
// do not use e.g. SetInt() on the numbers. For testing only
func copyConfig(original *params.ChainConfig) *params.ChainConfig {
return &params.ChainConfig{
ChainID: original.ChainID,
HomesteadBlock: original.HomesteadBlock,
DAOForkBlock: original.DAOForkBlock,
DAOForkSupport: original.DAOForkSupport,
EIP150Block: original.EIP150Block,
EIP150Hash: original.EIP150Hash,
EIP155Block: original.EIP155Block,
EIP158Block: original.EIP158Block,
ByzantiumBlock: original.ByzantiumBlock,
ConstantinopleBlock: original.ConstantinopleBlock,
PetersburgBlock: original.PetersburgBlock,
IstanbulBlock: original.IstanbulBlock,
MuirGlacierBlock: original.MuirGlacierBlock,
BerlinBlock: original.BerlinBlock,
LondonBlock: original.LondonBlock,
CatalystBlock: original.CatalystBlock,
Ethash: original.Ethash,
Clique: original.Clique,
}
}
func config() *params.ChainConfig {
config := copyConfig(params.TestChainConfig)
config.LondonBlock = big.NewInt(5)
return config
}
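
Since the shallow copy shares *big.Int pointers with params.TestChainConfig, the "do not use SetInt()" caveat is easy to trip over; the hypothetical snippet below (not part of the diff) spells out the safe and unsafe patterns inside this test package.

// hypotheticalMisuse is illustrative only and not part of this change.
func hypotheticalMisuse() {
	cfg := copyConfig(params.TestChainConfig)
	cfg.LondonBlock = big.NewInt(5) // safe: replaces the pointer, the original config is untouched
	// cfg.HomesteadBlock.SetInt64(5) // unsafe: would mutate the big.Int shared with
	//                                // params.TestChainConfig and leak into every other test
}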
// TestBlockGasLimits tests the gasLimit checks for blocks both across
// the EIP-1559 boundary and post-1559 blocks
func TestBlockGasLimits(t *testing.T) {
initial := new(big.Int).SetUint64(params.InitialBaseFee)
for i, tc := range []struct {
pGasLimit uint64
pNum int64
gasLimit uint64
ok bool
}{
// Transitions from non-london to london
{10000000, 4, 20000000, true}, // No change
{10000000, 4, 20019530, true}, // Upper limit
{10000000, 4, 20019531, false}, // Upper limit +1
{10000000, 4, 19980470, true}, // Lower limit
{10000000, 4, 19980469, false}, // Lower limit -1
// London to London
{20000000, 5, 20000000, true},
{20000000, 5, 20019530, true}, // Upper limit
{20000000, 5, 20019531, false}, // Upper limit +1
{20000000, 5, 19980470, true}, // Lower limit
{20000000, 5, 19980469, false}, // Lower limit -1
{40000000, 5, 40039061, true}, // Upper limit
{40000000, 5, 40039062, false}, // Upper limit +1
{40000000, 5, 39960939, true}, // Lower limit
{40000000, 5, 39960938, false}, // Lower limit -1
} {
parent := &types.Header{
GasUsed: tc.pGasLimit / 2,
GasLimit: tc.pGasLimit,
BaseFee: initial,
Number: big.NewInt(tc.pNum),
}
header := &types.Header{
GasUsed: tc.gasLimit / 2,
GasLimit: tc.gasLimit,
BaseFee: initial,
Number: big.NewInt(tc.pNum + 1),
}
err := VerifyEip1559Header(config(), parent, header)
if tc.ok && err != nil {
t.Errorf("test %d: Expected valid header: %s", i, err)
}
if !tc.ok && err == nil {
t.Errorf("test %d: Expected invalid header", i)
}
}
}
// TestCalcBaseFee assumes all blocks are 1559-blocks
func TestCalcBaseFee(t *testing.T) {
tests := []struct {
parentBaseFee int64
parentGasLimit uint64
parentGasUsed uint64
expectedBaseFee int64
}{
{params.InitialBaseFee, 20000000, 10000000, params.InitialBaseFee}, // usage == target
{params.InitialBaseFee, 20000000, 9000000, 987500000}, // usage below target
{params.InitialBaseFee, 20000000, 11000000, 1012500000}, // usage above target
}
for i, test := range tests {
parent := &types.Header{
Number: common.Big32,
GasLimit: test.parentGasLimit,
GasUsed: test.parentGasUsed,
BaseFee: big.NewInt(test.parentBaseFee),
}
if have, want := CalcBaseFee(config(), parent), big.NewInt(test.expectedBaseFee); have.Cmp(want) != 0 {
t.Errorf("test %d: have %d want %d, ", i, have, want)
}
}
}
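
The bound constants in TestBlockGasLimits follow from two rules: VerifyGaslimit (the next file in this diff) requires the header gas limit to differ from the parent's by strictly less than parentGasLimit/1024, and at the fork boundary VerifyEip1559Header first scales the pre-London parent limit by the elasticity multiplier, which is why a 10,000,000 parent admits a ~20,000,000 header. A minimal sketch of that arithmetic, with 1024 standing in for params.GasLimitBoundDivisor:

package main

import "fmt"

// gasLimitBounds returns the inclusive [min, max] gas limits a child block may
// declare under the strict '< parentGasLimit/1024' rule.
func gasLimitBounds(parentGasLimit uint64) (uint64, uint64) {
	step := parentGasLimit / 1024
	return parentGasLimit - step + 1, parentGasLimit + step - 1
}

func main() {
	fmt.Println(gasLimitBounds(20_000_000)) // 19980470 20019530 (also the fork-boundary case: 10M * 2)
	fmt.Println(gasLimitBounds(40_000_000)) // 39960939 40039061
}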


@ -0,0 +1,42 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package misc
import (
"fmt"
"github.com/ethereum/go-ethereum/params"
)
// VerifyGaslimit verifies the header gas limit according to the allowed
// increase/decrease relative to the parent gas limit.
func VerifyGaslimit(parentGasLimit, headerGasLimit uint64) error {
// Verify that the gas limit remains within allowed bounds
diff := int64(parentGasLimit) - int64(headerGasLimit)
if diff < 0 {
diff *= -1
}
limit := parentGasLimit / params.GasLimitBoundDivisor
if uint64(diff) >= limit {
return fmt.Errorf("invalid gas limit: have %d, want %d +-= %d", headerGasLimit, parentGasLimit, limit-1)
}
if headerGasLimit < params.MinGasLimit {
return fmt.Errorf("invalid gas limit below %d", params.MinGasLimit)
}
return nil
}


@ -175,7 +175,13 @@ func TestCheckpointRegister(t *testing.T) {
sort.Sort(accounts)
// Deploy registrar contract
contractBackend := backends.NewSimulatedBackend(core.GenesisAlloc{accounts[0].addr: {Balance: big.NewInt(1000000000)}, accounts[1].addr: {Balance: big.NewInt(1000000000)}, accounts[2].addr: {Balance: big.NewInt(1000000000)}}, 10000000)
contractBackend := backends.NewSimulatedBackend(
core.GenesisAlloc{
accounts[0].addr: {Balance: big.NewInt(10000000000000000)},
accounts[1].addr: {Balance: big.NewInt(10000000000000000)},
accounts[2].addr: {Balance: big.NewInt(10000000000000000)},
}, 10000000,
)
defer contractBackend.Close()
transactOpts, _ := bind.NewKeyedTransactorWithChainID(accounts[0].key, big.NewInt(1337))


@ -60,6 +60,10 @@ func TestLexer(t *testing.T) {
input: "0123abc",
tokens: []token{{typ: lineStart}, {typ: number, text: "0123"}, {typ: element, text: "abc"}, {typ: eof}},
},
{
input: "00123abc",
tokens: []token{{typ: lineStart}, {typ: number, text: "00123"}, {typ: element, text: "abc"}, {typ: eof}},
},
{
input: "@foo",
tokens: []token{{typ: lineStart}, {typ: label, text: "foo"}, {typ: eof}},


@ -254,7 +254,7 @@ func lexInsideString(l *lexer) stateFn {
func lexNumber(l *lexer) stateFn {
acceptance := Numbers
if l.accept("0") || l.accept("xX") {
if l.accept("xX") {
acceptance = HexadecimalNumbers
}
l.acceptRun(acceptance)
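
A note on this hunk: lexNumber is entered after the first digit has already been consumed (the dispatcher calls next() before choosing the state), so with the old l.accept("0") branch a second leading zero, as in the new "00123abc" test case above, flipped the lexer into hexadecimal acceptance and the trailing a–c characters, being valid hex digits, were folded into one oversized number token. Dropping that branch keeps decimal acceptance, producing the expected number "00123" followed by the element "abc", while genuine 0x-prefixed literals still work because their leading zero is consumed before lexNumber runs and l.accept("xX") then matches the x.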


@ -112,7 +112,7 @@ func genTxRing(naccounts int) func(int, *BlockGen) {
from := 0
return func(i int, gen *BlockGen) {
block := gen.PrevBlock(i - 1)
gas := CalcGasLimit(block, block.GasLimit(), block.GasLimit())
gas := block.GasLimit()
for {
gas -= params.TxGas
if gas < params.TxGas {


@ -106,12 +106,12 @@ func (v *BlockValidator) ValidateState(block *types.Block, statedb *state.StateD
// to keep the baseline gas above the provided floor, and increase it towards the
// ceil if the blocks are full. If the ceil is exceeded, it will always decrease
// the gas allowance.
func CalcGasLimit(parent *types.Block, gasFloor, gasCeil uint64) uint64 {
func CalcGasLimit(parentGasUsed, parentGasLimit, gasFloor, gasCeil uint64) uint64 {
// contrib = (parentGasUsed * 3 / 2) / 1024
contrib := (parent.GasUsed() + parent.GasUsed()/2) / params.GasLimitBoundDivisor
contrib := (parentGasUsed + parentGasUsed/2) / params.GasLimitBoundDivisor
// decay = parentGasLimit / 1024 -1
decay := parent.GasLimit()/params.GasLimitBoundDivisor - 1
decay := parentGasLimit/params.GasLimitBoundDivisor - 1
/*
strategy: gasLimit of block-to-mine is set based on parent's
@ -120,21 +120,45 @@ func CalcGasLimit(parent *types.Block, gasFloor, gasCeil uint64) uint64 {
at that usage) the amount increased/decreased depends on how far away
from parentGasLimit * (2/3) parentGasUsed is.
*/
limit := parent.GasLimit() - decay + contrib
limit := parentGasLimit - decay + contrib
if limit < params.MinGasLimit {
limit = params.MinGasLimit
}
// If we're outside our allowed gas range, we try to hone towards them
if limit < gasFloor {
limit = parent.GasLimit() + decay
limit = parentGasLimit + decay
if limit > gasFloor {
limit = gasFloor
}
} else if limit > gasCeil {
limit = parent.GasLimit() - decay
limit = parentGasLimit - decay
if limit < gasCeil {
limit = gasCeil
}
}
return limit
}
// CalcGasLimit1559 calculates the next block gas limit under 1559 rules.
func CalcGasLimit1559(parentGasLimit, desiredLimit uint64) uint64 {
delta := parentGasLimit/params.GasLimitBoundDivisor - 1
limit := parentGasLimit
if desiredLimit < params.MinGasLimit {
desiredLimit = params.MinGasLimit
}
// If we're outside our allowed gas range, we try to hone towards them
if limit < desiredLimit {
limit = parentGasLimit + delta
if limit > desiredLimit {
limit = desiredLimit
}
return limit
}
if limit > desiredLimit {
limit = parentGasLimit - delta
if limit < desiredLimit {
limit = desiredLimit
}
}
return limit
}
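
A minimal usage sketch of the new helper (assuming the go-ethereum module at this revision is on the import path; the 30M target is just an example): CalcGasLimit1559 moves the limit toward the desired value by at most parentGasLimit/1024 - 1 per block, so a miner raising its target ratchets up over many blocks.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core"
)

func main() {
	// A miner targeting a 30,000,000 gas limit ratchets up from a 20,000,000 parent.
	limit := uint64(20_000_000)
	for i := 0; i < 3; i++ {
		limit = core.CalcGasLimit1559(limit, 30_000_000)
		fmt.Println(limit) // 20019530, 20039079, 20058647
	}
}

Because each step is parentGasLimit/1024 - 1, it stays strictly inside the parentGasLimit/1024 bound enforced by VerifyGaslimit, so the produced limit always passes the consensus check; TestCalcGasLimit1559 below pins the single-step extremes (20019530/19980470 for a 20M parent, 40039061/39960939 for a 40M parent).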


@ -197,3 +197,36 @@ func testHeaderConcurrentAbortion(t *testing.T, threads int) {
t.Errorf("verification count too large: have %d, want below %d", verified, 2*threads)
}
}
func TestCalcGasLimit1559(t *testing.T) {
for i, tc := range []struct {
pGasLimit uint64
max uint64
min uint64
}{
{20000000, 20019530, 19980470},
{40000000, 40039061, 39960939},
} {
// Increase
if have, want := CalcGasLimit1559(tc.pGasLimit, 2*tc.pGasLimit), tc.max; have != want {
t.Errorf("test %d: have %d want <%d", i, have, want)
}
// Decrease
if have, want := CalcGasLimit1559(tc.pGasLimit, 0), tc.min; have != want {
t.Errorf("test %d: have %d want >%d", i, have, want)
}
// Small decrease
if have, want := CalcGasLimit1559(tc.pGasLimit, tc.pGasLimit-1), tc.pGasLimit-1; have != want {
t.Errorf("test %d: have %d want %d", i, have, want)
}
// Small increase
if have, want := CalcGasLimit1559(tc.pGasLimit, tc.pGasLimit+1), tc.pGasLimit+1; have != want {
t.Errorf("test %d: have %d want %d", i, have, want)
}
// No change
if have, want := CalcGasLimit1559(tc.pGasLimit, tc.pGasLimit), tc.pGasLimit; have != want {
t.Errorf("test %d: have %d want %d", i, have, want)
}
}
}

Some files were not shown because too many files have changed in this diff.