Compare commits


307 Commits

Author SHA1 Message Date
Felix Lange
e787272901 params: go-ethereum v1.9.25 stable 2020-12-11 09:03:16 +01:00
Felix Lange
1d1f5fea4a build: upgrade to Go 1.15.6 (#21986) 2020-12-11 09:02:55 +01:00
gary rong
004541098d les: introduce forkID (#21974)
* les: introduce forkID

* les: address comment
2020-12-10 17:20:55 +01:00
Martin Holst Swende
b44f24e3e6 core, trie: speed up some tests with quadratic processing flaw (#21987)
This commit fixes a flaw in two testcases, and brings down the exec-time from ~40s to ~8s for trie/TestIncompleteSync.

The checkConsistency was performed over and over again on the complete set of nodes, not just the recently added ones, turning the check into a quadratic-runtime operation.
2020-12-10 14:48:32 +01:00
gary rong
9f6bb492bb les, light: remove untrusted header retrieval in ODR (#21907)
* les, light: remove untrusted header retrieval in ODR

* les: polish

* light: check the hash equality in odr
2020-12-10 14:33:52 +01:00
Felix Lange
817a3fb562 p2p/enode: avoid crashing for invalid IP (#21981)
The database panicked for invalid IPs. This is usually no problem
because all code paths leading to node DB access verify the IP, but it's
dangerous because improper validation can turn this panic into a DoS
vulnerability. The quick fix here is to just turn database accesses
using invalid IP into a noop. This isn't great, but I'm planning to
remove the node DB for discv5 long-term, so it should be fine to have
this quick fix for half a year.

Fixes #21849
2020-12-09 20:21:31 +01:00
Felix Lange
f935b1d542 crypto/signify, build: fix archive signing with signify (#21977)
This fixes some issues in crypto/signify and makes release signing work.

The archive signing step in ci.go used getenvBase64, which decodes the key data.
This is incorrect here because crypto/signify already base64-decodes the key.
2020-12-09 15:43:36 +01:00
Martin Holst Swende
915643a3e5 cmd/geth: add test to verify regexps in version check (#21962) 2020-12-09 13:59:24 +01:00
Martin Holst Swende
40b6ccf383 core,les: headerchain import in batches (#21471)
* core: add test for headerchain inserts

* core, light: write headerchains in batches

* core: change to one callback per batch of inserted headers + review concerns

* core: error-check on batch write

* core: unexport writeHeaders

* core: remove callback parameter in InsertHeaderChain

The semantics of InsertHeaderChain are now much simpler: it is now an
all-or-nothing operation. The new WriteStatus return value allows
callers to check for the canonicality of the insertion. This change
simplifies use of HeaderChain in package les, where the callback was
previously used to post chain events.

* core: skip some hashing when writing headers

* core: less hashing in header validation

* core: fix headerchain flaw regarding blacklisted hashes

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-12-09 11:13:02 +01:00
Li, Cheng
bd848aad7c common: improve printing of Hash and Address (#21834)
Both Hash and Address have a String method, which returns the value as
hex with 0x prefix. They also had a Format method which tried to print
the value using printf of []byte. The way Format worked was at odds with
String though, leading to a situation where fmt.Sprintf("%v", hash)
returned the decimal notation and hash.String() returned a hex string.

This commit makes it consistent again. Both types now support the %v,
%s, %q format verbs for 0x-prefixed hex output. %x, %X creates
unprefixed hex output. %d is also supported and returns the decimal
notation "[1 2 3...]".

For Address, the case of hex characters in %v, %s, %q output is
determined using the EIP-55 checksum. Using %x, %X with Address
disables checksumming.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-12-08 19:19:09 +01:00
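Roughly, the formatting behaviour described in the commit above can be exercised like this (a minimal sketch; the hash and address values are arbitrary examples):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	hash := common.HexToHash("0x01")
	addr := common.HexToAddress("0x5aaeb6053f3e94c9b9a09f33669435e7ef1beaed")

	fmt.Printf("%v\n", hash) // 0x-prefixed hex, consistent with hash.String()
	fmt.Printf("%x\n", hash) // unprefixed hex
	fmt.Printf("%d\n", hash) // decimal notation "[0 0 ... 1]"

	fmt.Printf("%s\n", addr) // 0x-prefixed hex, EIP-55 checksummed
	fmt.Printf("%x\n", addr) // unprefixed hex, checksumming disabled
}
```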
Marius van der Wijden
ed0670cb17 accounts/abi/bind: allow specifying signer on transactOpts (#21356)
This commit enables users to specify which signer they want to use when creating their transactOpts.
Previously all contract interactions used the homestead signer. Now a user can choose to sign with
either the homestead or the EIP155 signer and specify the chainID, which adds another layer of security.

Closes #16484
2020-12-08 14:44:56 +01:00
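The difference between the two signers boils down to the signing step. A minimal sketch at the types.SignTx level (not the exact bind API touched by this change; the key and values are placeholders):

```go
package main

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	to := common.HexToAddress("0x0000000000000000000000000000000000000001")
	tx := types.NewTransaction(0, to, big.NewInt(1), 21000, big.NewInt(1), nil)

	// Previous default: homestead signing, no replay protection.
	_, _ = types.SignTx(tx, types.HomesteadSigner{}, key)

	// EIP155 signing bound to a chain ID, protecting against cross-chain replay.
	_, _ = types.SignTx(tx, types.NewEIP155Signer(big.NewInt(1)), key)
}
```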
Steve Ruckdashel
6a4e730003 crypto/secp256k1: add workaround for go mod vendor (#21735)
Go won't vendor C files if there are no Go files present in the directory.
The workaround is to add dummy Go files.

Fixes: #20232
2020-12-08 10:47:56 +01:00
Guillaume Ballet
581c028d18 les: cosmetic rewrite of the arm64 float bug workaround (#21960)
* les: revert arm float bug workaround to check go 1.15

* add traces to reproduce outside travis

* simpler workaround
2020-12-07 14:04:27 +01:00
Martin Holst Swende
15339cf1c9 cmd/geth: implement vulnerability check (#21859)
* cmd/geth: implement vulnerability check

* cmd/geth: use minisign to verify vulnerability feed

* cmd/geth: add the test too

* cmd/geth: more minisig/signify testing

* cmd/geth: support multiple pubfiles for signing

* cmd/geth: add @holiman minisig pubkey

* cmd/geth: polishes on vulnerability check

* cmd/geth: fix ineffassign linter nit

* cmd/geth: add CVE to version check struct

* cmd/geth/testdata: add missing testfile

* cmd/geth: add more keys to versionchecker

* cmd/geth: support file:// URLs in version check

* cmd/geth: improve key ID printing when signature check fails

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-12-04 15:01:47 +01:00
Martin Holst Swende
7770e41cb5 core: improve contextual information on core errors (#21869)
A lot of the time when we hit 'core' errors, for example an invalid tx, the information provided is
insufficient. We miss several pieces of information: which account has a nonce that is too high,
and which transaction in that block was the offending one?

This PR adds that information, using the new type of wrapped errors.
It also adds a testcase which (partly) verifies the output from the errors.

The first commit changes all usage of direct equality-checks on core errors, into
using errors.Is. The second commit adds contextual information. This wraps most
of the core errors with more information, and also wraps it one more time in
stateprocessor, to further provide tx index and tx hash, if such a tx is encountered in
a block. The third commit uses the chainmaker to try to generate chains with such
errors in them, thus triggering the errors and checking that the generated string meets
expectations.
2020-12-04 12:22:19 +01:00
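A minimal sketch of the wrapped-error pattern this change adopts (the tx index and hash in the message are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/core"
)

func main() {
	// Wrap a core error with contextual information (tx index and hash are placeholders here).
	err := fmt.Errorf("could not apply tx 3 [0xabc]: %w", core.ErrNonceTooHigh)

	// Callers use errors.Is instead of direct equality checks, since the
	// returned error is a wrapped value rather than the sentinel itself.
	if errors.Is(err, core.ErrNonceTooHigh) {
		fmt.Println("nonce too high:", err)
	}
}
```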
Chris Ziogas
62cedb3aab core/vm/runtime: remove duplicated line (#21956)
This line is duplicated, though it doesn't cause any issues.
2020-12-04 08:54:07 +01:00
Martin Holst Swende
d7a64dc02b cmd/devp2p: add node filter for snap + fix arg error (#21950) 2020-12-03 13:16:20 +01:00
Martin Holst Swende
0b2f1446bb go.mod: update github.com/golang/snappy(#21934)
This updates the snappy library dependency to include a fix for
a Go 1.16 incompatibility issue.
2020-12-02 16:42:38 +01:00
Martin Holst Swende
e9e86aeacb eth: fix error in tracing if reexec is set (#21830)
* eth: fix error in tracing if reexec is set

* eth: change pointer embedding to value-embedding
2020-12-02 12:49:20 +01:00
gary rong
908c18073a params: update CHTs (#21941) 2020-12-02 09:17:59 +01:00
Felföldi Zsolt
a2795c8055 les: fix nodiscover option (#21906) 2020-12-01 10:03:41 +01:00
Martin Holst Swende
e7db1dbc96 p2p/nodestate: fix deadlock during shutdown of les server (#21927)
This PR fixes a deadlock reported here: #21925

The cause is that many operations may be pending, but when the close happens only one of them is woken up and exits; the others remain waiting for a signal that never comes.
2020-11-30 18:58:47 +01:00
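A generic sketch of the underlying pattern, not the nodestate code itself: closing a shared quit channel wakes every pending waiter, whereas signalling wakes only one.

```go
package demo

import "errors"

// closed is shared by all pending operations. Closing a channel wakes every
// waiter at once; sending a single value would wake only one of them and
// leave the rest blocked forever, which is the deadlock described above.
type service struct {
	closed chan struct{}
}

func (s *service) wait(done <-chan struct{}) error {
	select {
	case <-done:
		return nil
	case <-s.closed: // every pending wait() observes the close
		return errors.New("service closed")
	}
}

func (s *service) Stop() {
	close(s.closed) // broadcast-style wake-up for all waiters
}
```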
Marius van der Wijden
a1ddd9e1d3 cmd/devp2p/internal/ethtest: add transaction tests (#21857) 2020-11-30 15:23:48 +01:00
Martin Holst Swende
aba0c234c2 cmd/geth: make tests run quicker + use less memory and disk (#21919) 2020-11-30 14:43:20 +01:00
Pascal Dierich
566cb4c5f0 accounts/keystore: add missing function doc for SignText (#21914)
Co-authored-by: Pascal Dierich <pascal@pascaldierich.com>
2020-11-30 09:03:24 +01:00
Kristofer Peterson
b71334ac3d accounts, signer: fix Ledger Live account derivation path (clef) (#21757)
* signer/core/api: fix derivation of ledger live accounts

For ledger hardware wallets, change account iteration from:

- ledger legacy: m/44'/60'/0'/X; for 0<=X<5
- ledger live: m/44'/60'/0'/0/X; for 0<=X<5

to:

- ledger legacy: m/44'/60'/0'/X; for 0<=X<10
- ledger live: m/44'/60'/X'/0/0; for 0<=X<10

Non-ledger derivation is unchanged and remains as:
- non-ledger: m/44'/60'/0'/0/X; for 0<=X<10

* signer/core/api: derive ten default paths for all hardware wallets, plus ten legacy and ten live paths for ledger wallets

* signer/core/api: as .../0'/0/0 already included by default paths, do not include it again with ledger live paths

* accounts, signer: implement path iterators for hd wallets

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-29 13:43:15 +01:00
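A hand-rolled sketch of the two ledger iteration schemes listed above (the PR itself adds proper path iterators; this only illustrates which path component is incremented in each scheme):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
)

func main() {
	// Ledger legacy: m/44'/60'/0'/X (the last, non-hardened component is iterated).
	legacy, _ := accounts.ParseDerivationPath("m/44'/60'/0'/0")
	// Ledger Live: m/44'/60'/X'/0/0 (the hardened account component is iterated).
	live, _ := accounts.ParseDerivationPath("m/44'/60'/0'/0/0")

	for i := uint32(0); i < 10; i++ {
		l := append(accounts.DerivationPath(nil), legacy...)
		l[3] = i // X in m/44'/60'/0'/X

		v := append(accounts.DerivationPath(nil), live...)
		v[2] = 0x80000000 + i // hardened X' in m/44'/60'/X'/0/0

		fmt.Println(l, v)
	}
}
```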
Guillaume Ballet
fa572cd297 crypto: signing builds with signify/minisign (#21798)
* internal/build: implement signify's signing func
* Add signify to the ci utility
* fix output file format
* Add unit test for signify
* holiman's + travis' feedback
* internal/build: verify signify's output
* crypto: move signify to common dir
* use go-minisign to verify binaries
* more holiman feedback
* crypto, ci: support minisign output
* only accept one-line trusted comments
* configurable untrusted comments
* code cleanup in tests
* revert to use ed25519 from the stdlib
* bug: fix for empty untrusted comments
* write timestamp as comment if trusted comment isn't present
* rename line checker to commentHasManyLines
* crypto: added signify fuzzer (#6)
* crypto: added signify fuzzer
* stuff
* crypto: updated signify fuzzer to fuzz comments
* crypto: repro signify crashes
* rebased fuzzer on build-signify branch
* hide fuzzer behind gofuzz build flag
* extract key data inside a single function
* don't treat \r as a newline
* travis: fix signing command line
* do not use an external binary in tests
* crypto: move signify to crypto/signify
* travis: fix formatting issue
* ci: fix linter build after package move

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-11-27 12:13:54 +01:00
Nishant Das
429e7141f2 p2p/discover: fix deadlock in discv5 message dispatch (#21858)
This fixes a deadlock that could occur when a response packet arrived
after a call had already received enough responses and was about to
signal completion to the dispatch loop.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-11-25 22:16:36 +01:00
Alex Prut
810f9e057d all: remove redundant conversions and import names (#21903) 2020-11-25 21:00:23 +01:00
Antoine Toulme
f59ed3565d graphql: always return 400 if errors are present in the response (#21882)
* Make sure to return 400 when errors are present in the response

* graphql: use less memory in chainconfig for tests

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-25 10:19:36 +01:00
Alex Prut
c92faee66e all: simplify nested complexity and if blocks ending with a return statement (#21854)
Changes:

    Simplify nested complexity
    If an if block ends with a return statement then remove the else nesting.

Most of the changes have also been reported by golint: https://goreportcard.com/report/github.com/ethereum/go-ethereum#golint
2020-11-25 09:24:50 +01:00
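A tiny illustration of the simplification rule described above:

```go
package demo

// Before: the else branch adds an unnecessary level of nesting.
func abs(x int) int {
	if x < 0 {
		return -x
	} else {
		return x
	}
}

// After: the if block ends with a return, so the else can be dropped.
func absSimplified(x int) int {
	if x < 0 {
		return -x
	}
	return x
}
```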
Marius van der Wijden
29efe1fc7e core/types: fixed typo (#21897) 2020-11-25 08:53:20 +01:00
Marius van der Wijden
59b480ab4b cmd/devp2p/internal/ethtest: add 'large announcement' tests (#21792)
* cmd/devp2p/internal/ethtest: added large announcement tests

* cmd/devp2p/internal/ethtest: added large announcement tests

* cmd/devp2p/internal/ethtest: refactored stuff a bit

* cmd/devp2p/internal/ethtest: added TestMaliciousStatus/Handshake

* cmd/devp2p/internal/ethtest: fixed rebasing issue

* happy linter, happy life

* cmd/devp2p/internal/ethtest: used readAndServe

* stuff

* cmd/devp2p/internal/ethtest: fixed test cases
2020-11-24 16:09:17 +01:00
ligi
7e7a3f0f71 github: Remove vulnerability.md (#21894)
This template is offered automatically by GitHub after changing to the new issue-template style, provided a security.md is present.
2020-11-24 16:02:53 +01:00
Felföldi Zsolt
bddd103a9f les: fix GetProofsV2 bug (#21896) 2020-11-24 10:55:17 +01:00
LieutenantRoger
6b58409614 cmd/faucet: improve handling of facebook post url (#21838)
Resolves #21532

Co-authored-by: roger <dengjun@huobi.com>
2020-11-24 10:33:58 +01:00
Péter Szilágyi
ead814616c Merge pull request #21890 from ligi/issue_templates
github: Add new style of issue-templates
2020-11-23 17:47:20 +02:00
Martin Holst Swende
6104ab6b6d tests/fuzzers/bls1381: add bls fuzzer (#21796)
* added bls fuzzer

* crypto/bls12381: revert bls-changes, fixup fuzzer tests

* fuzzers: split bls fuzzing into 8 different units

* fuzzers/bls: remove (now stale) corpus

* crypto/bls12381: added blsfuzz corpus

* fuzzers/bls12381: fix the bls corpus

* fuzzers: fix oss-fuzz script

* tests/fuzzers: fixups on bls corpus

* test/fuzzers: remove leftover corpus

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-11-23 15:49:16 +01:00
ligi
f6e1aed504 github: Add new style of issue-templates
closes #20024
2020-11-23 13:12:42 +01:00
Felföldi Zsolt
bddf5aaa2f les/utils: protect against WeightedRandomSelect overflow (#21839)
Also fixes a bug in les/flowcontrol that caused the overflow.
2020-11-23 10:18:33 +01:00
Martin Holst Swende
3ef52775c4 p2p: avoid spinning loop on out-of-handles (#21878)
* p2p: avoid busy-loop on temporary errors

* p2p: address review concerns
2020-11-20 15:14:25 +01:00
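The general idea, as a hypothetical accept loop rather than the actual p2p server code: back off on temporary errors (such as running out of file handles) instead of retrying in a tight loop.

```go
package demo

import (
	"net"
	"time"
)

// acceptLoop backs off on temporary accept errors instead of spinning.
func acceptLoop(ln net.Listener, handle func(net.Conn)) {
	for {
		conn, err := ln.Accept()
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Temporary() {
				time.Sleep(200 * time.Millisecond) // back off instead of busy-looping
				continue
			}
			return // permanent error: stop accepting
		}
		go handle(conn)
	}
}
```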
Martin Holst Swende
ebb9591c4d crypto/bn256: fix bn256Mul fuzzer to not hang on large input (#21872)
* crypto/bn256: fix bn256Mul fuzzer to not hang on large input

* Update crypto/bn256/bn256_fuzz.go

Co-authored-by: ligi <ligi@ligi.de>

Co-authored-by: ligi <ligi@ligi.de>
2020-11-20 08:53:10 +01:00
Martin Holst Swende
6f88d6530a trie, rpc, cmd/geth: fix tests on 32-bit and windows + minor rpc fixes (#21871)
* trie: fix tests to work on 32-bit systems

* les: make test work on 32-bit platform

* cmd/geth: fix windows-issues on tests

* trie: improve balance

* cmd/geth: make account tests less verbose + less mem intense

* rpc: make debug-level log output less verbose

* cmd/geth: lint
2020-11-19 22:50:47 +01:00
wbt
f1e1d9f874 node: support expressive origin rules in ws.origins (#21481)
* Only compare hostnames in ws.origins

Also using a helper function for ToLower consolidates all preparation steps in one function for more maintainable consistency.

Spaces => tabs

Remove a semicolon

Add space at start of comment

Remove parens around conditional

Handle case where parsed hostname is empty

When passing a single word like "localhost" the parsed hostname is an empty string. Handle this and the error-parsing case together as default, and the nonempty hostname case in the conditional.

Refactor with new originIsAllowed functions

Adds originIsAllowed() & ruleAllowsOrigin(); removes prepOriginForComparison

Remove blank line

Added tests for simple allowed-origin rule

which does not specify a protocol or port, just a hostname

Fix copy-paste: `:=` => `=`

Remove parens around conditional

Remove autoadded whitespace on blank lines

Compare scheme, hostname, and port with rule

if the rule specifies those portions.

Remove one autoadded trailing whitespace

Better handle case where only origin host is given

e.g. "localhost"

Remove parens around conditional

Refactor: attemptWebsocketConnectionFromOrigin DRY

Include return type on helper function

Provide srv obj in helper fn

Provide srv to helper fn

Remove stray underscore

Remove blank line

Refactor: drop err var for more concise test lines

Add several tests for new WebSocket origin checks

Remove autoadded whitespace on blank lines

Restore TestWebsocketOrigins originally-named test

and rename the others to be helpers rather than full tests

Remove autoadded whitespace on blank line

Temporarily comment out new test sets

Uncomment test around origin rule with scheme

Remove tests without scheme on browser origin

per https://github.com/ethereum/go-ethereum/pull/21481/files#r479371498

Uncomment tests with port; remove some blank lines

Handle when browser does not specify scheme/port

Uncomment test for including scheme & port in rule

Add IP tests

* node: more tests + table-driven, ws origin changes

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-19 14:54:49 +01:00
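A rough sketch of the matching idea, as a hypothetical helper rather than the actual node package code: only the parts a rule actually specifies (scheme, hostname, port) are compared against the browser origin.

```go
package demo

import (
	"net/url"
	"strings"
)

// ruleAllowsOrigin is a hypothetical helper illustrating the idea: only the
// parts a rule actually specifies are compared against the browser origin.
func ruleAllowsOrigin(rule, origin string) bool {
	// A bare hostname rule like "localhost" has no scheme; prefix "//" so that
	// url.Parse places it in the Host field instead of the Path field.
	if !strings.Contains(rule, "://") && !strings.HasPrefix(rule, "//") {
		rule = "//" + rule
	}
	r, err := url.Parse(strings.ToLower(rule))
	if err != nil {
		return false
	}
	o, err := url.Parse(strings.ToLower(origin))
	if err != nil {
		return false
	}
	if r.Scheme != "" && r.Scheme != o.Scheme {
		return false
	}
	if r.Hostname() != "" && r.Hostname() != o.Hostname() {
		return false
	}
	if r.Port() != "" && r.Port() != o.Port() {
		return false
	}
	return true
}
```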
Péter Szilágyi
28080463d2 Merge pull request #21861 from holiman/remove_retesteth
cmd/geth: remove retesteth
2020-11-19 07:49:40 +02:00
gary rong
b9ff57c59e metrics: fix the panic for reading empty cpu stats (#21864) 2020-11-18 21:50:11 +01:00
gary rong
23524f8900 all: disable recording preimage of trie keys (#21402)
* cmd, core, eth, light, trie: disable recording preimage by default

* core, eth: fix unit tests

* core: fix import

* all: change to nopreimage

* cmd, core, eth, trie: use cache.preimages flag

* cmd: enable preimages for archive node

* cmd/utils, trie: simplify preimage tracking a bit

* core: fix linter

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-11-18 11:51:33 +02:00
Martin Holst Swende
6b9858085f cmd/geth: improve les test on windows (#21860) 2020-11-17 12:01:19 +01:00
Abd ar-Rahman Hamidi
db87223269 crypto/secp256k1: add checking z sign in affineFromJacobian (#18419)
The z == 0 check is hit whenever we Add two points with the same x1/x2
coordinate. crypto/elliptic uses the same check in their affineFromJacobian
function. This change does not affect block processing or tx signature verification
in any way, because it does not use the Add or Double methods.
2020-11-17 11:47:17 +01:00
Martin Holst Swende
d513584e52 cmd/geth: remove retesteth 2020-11-17 11:44:38 +01:00
Preston Van Loon
844485ec6a consensus/ethash: fix usage of *reflect.SliceHeader (#21372)
* consensus/ethash: only use *reflect.SliceHeader, not reflect.SliceHeader. See comment here: https://github.com/golang/go/issues/40397#issuecomment-663748689

* consensus/ethash: pr feedback from @mdempsky, makes a copy of dest such that it is not mutated

* consensus/ethash: remove noop assign

* consensus/ethash: apply same fix to another location

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-17 11:35:58 +02:00
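A sketch of the safe pattern the fix moves to (a hypothetical helper, assuming the usual unsafe/reflect idiom): take the header of a live slice via *reflect.SliceHeader instead of constructing a reflect.SliceHeader value from scratch.

```go
package demo

import (
	"reflect"
	"unsafe"
)

// viewAsBytes reinterprets a []uint32 as raw bytes without copying, using the
// header of a live slice (*reflect.SliceHeader) rather than a freshly built
// reflect.SliceHeader value, so the backing array stays visible to the GC.
func viewAsBytes(src []uint32) []byte {
	if len(src) == 0 {
		return nil
	}
	var dst []byte
	hdr := (*reflect.SliceHeader)(unsafe.Pointer(&dst))
	hdr.Data = uintptr(unsafe.Pointer(&src[0]))
	hdr.Len = len(src) * 4
	hdr.Cap = len(src) * 4
	return dst
}
```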
Sad Pencil
1ea7537997 crypto/bn256: refine comments according to #19577, #21595, and #21836 (#21847) 2020-11-17 09:51:36 +01:00
Pascal Dierich
92c56eb820 common: fix documentation of Address.SetBytes (#21814) 2020-11-16 14:08:13 +01:00
Nicolas Feignon
cf856ea1ad accounts/abi: template: set events Raw field in Parse methods (#21807) 2020-11-13 13:43:15 +01:00
Marius van der Wijden
2045a2bba3 core, all: split vm.Context into BlockContext and TxContext (#21672)
* all: core: split vm.Config into BlockConfig and TxConfig

* core: core/vm: reset EVM between tx in block instead of creating new

* core/vm: added docs
2020-11-13 13:42:19 +01:00
Martin Holst Swende
6f4cccf8d2 core/vm, protocol_params: implement eip-2565 modexp repricing (#21607)
* core/vm, protocol_params: implement eip-2565 modexp repricing

* core/vm: fix review concerns
2020-11-13 13:39:59 +01:00
Martin Holst Swende
0703c91fba tests/fuzzers: improve the fuzzers (#21829)
* tests/fuzzers, common/bitutil: make fuzzers use correct returnvalues + remove output

* tests/fuzzers/stacktrie: fix duplicate-key insertion in stacktrie (false positive)

* tests/fuzzers/stacktrie: fix compilation error

* tests/fuzzers: linter nits
2020-11-13 12:36:38 +01:00
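For reference, a minimal go-fuzz style harness showing the return-value convention the fuzzers are being aligned with (the bitutil call is just an example target):

```go
package demo

import "github.com/ethereum/go-ethereum/common/bitutil"

// Fuzz follows the go-fuzz convention: return 1 to mark the input as
// interesting (kept and prioritised in the corpus), 0 as neutral, and -1 to
// tell go-fuzz to drop the input.
func Fuzz(data []byte) int {
	if len(data) == 0 {
		return -1 // useless input, don't add it to the corpus
	}
	bitutil.CompressBytes(data)
	return 1
}
```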
Marius van der Wijden
9ded4e33c5 crypto/bn256: better comments for u, P and Order (#21836) 2020-11-13 10:17:23 +01:00
Martin Holst Swende
a19b4235c7 crypto/bn256: improve bn256 fuzzer (#21815)
* crypto/cloudflare: fix nil deref in random G1/G2 reading

* crypto/bn256: improve fuzzer

* crypto/bn256: fix some flaws in fuzzer
2020-11-13 09:27:57 +01:00
Felix Lange
919229d63c params: begin v1.9.25 release cycle 2020-11-12 21:21:24 +01:00
Péter Szilágyi
cc05b050df params: release Geth v1.9.24 with Go 1.15.5 (#21842) 2020-11-12 21:10:15 +01:00
Felix Lange
920a287117 .travis.yml: move test builders after install builders (#21833) 2020-11-11 23:52:50 +01:00
Felix Lange
d49407427d build: fix regressions with the -dlgo change (#21831)
This fixes cross-build and mobile framework failures.
It also disables the mac test builder because it was failing
all the time in hard to understand ways and we can't afford
it anymore under Travis CI's new pricing.
2020-11-11 22:08:22 +01:00
Slava Karpenko
d990df909d consensus/ethash: use 64bit indexes for the DAG generation (#21793)
* Bit boundary fix for the DAG generation routine

* Fix unnecessary conversion warnings

Co-authored-by: Sergey Pavlov <spavlov@gmail.com>
2020-11-11 21:13:12 +01:00
Felix Lange
27d93c1848 build: add -dlgo flag in ci.go (#21824)
This new flag downloads a known version of Go and builds with it. This
is meant for environments where we can't easily upgrade the installed Go
version.

* .travis.yml: remove install step for PR test builders

We added this step originally to avoid re-building everything
for every test. go test has become much smarter in recent go
releases, so we no longer need to install anything here.
2020-11-11 14:34:43 +01:00
Marius van der Wijden
70868b1e4a fuzzers: removed fuzzbuzz configuration (#21813)
We decided to move our fuzzing efforts to oss-fuzz since fuzzbuzz is still in early access.
2020-11-10 21:54:59 +02:00
Martin Holst Swende
941d8b5c5c scripts: create oss-fuzz script in go-ethereum (#21808) 2020-11-10 15:21:41 +01:00
gary rong
c52dfd55fb p2p/simulations/adapters/exec: fix some issues (#21801)
- Remove the ws:// prefix from the status endpoint since
  the ws:// is already included in the stack.WSEndpoint().
- Don't register the services again in the node start.
  Registration is already done in the initialization stage.
- Expose admin namespace via websocket.
  This namespace is necessary for connecting the peers via websocket.
- Offer logging relevant options for exec adapter.
  It's really painful to mix all log output in a single console, so
  this PR offers two additional options for the exec adapter: testers
  can configure the log output (e.g. file output) and the log level
  for each p2p node.
2020-11-10 14:19:44 +01:00
Péter Szilágyi
0c34eae172 Merge pull request #21803 from holiman/ethash
consensus/ethash: fix the percentage progress report
2020-11-09 17:57:23 +02:00
Péter Szilágyi
7c30f4d085 Merge pull request #21804 from karalabe/snapshot-marker-sync
core/state/snapshot: update generator marker in sync with flushes
2020-11-09 17:50:26 +02:00
Péter Szilágyi
040928d8bb Merge pull request #21805 from karalabe/travis-drop-1.13
travis: drop Go 1.13 builders as it's not supported any more
2020-11-09 17:49:56 +02:00
Péter Szilágyi
9e688fb64c Merge pull request #21806 from karalabe/deprecate-eoan
build: stop building for Ubuntu Eoan, not supported any more
2020-11-09 17:49:21 +02:00
Péter Szilágyi
1143dc6e29 build: stop building for Ubuntu Eoan, not supported any more 2020-11-09 17:43:54 +02:00
Péter Szilágyi
eb694ea706 travis: drop Go 1.13 builders as it's not supported any more 2020-11-09 17:39:42 +02:00
Martin Holst Swende
81678971db trie, tests/fuzzers: implement a stacktrie fuzzer + stacktrie fixes (#21799)
* trie: fix error in stacktrie not committing small roots

* fuzzers: make trie-fuzzer use correct returnvalues

* trie: improved tests

* tests/fuzzers: fuzzer for stacktrie vs regular trie

* test/fuzzers: make stacktrie fuzzer use 32-byte keys

* trie: fix error in stacktrie with small nodes

* trie: add (skipped) testcase for stacktrie

* tests/fuzzers: address review comments for stacktrie fuzzer

* trie: fix docs in stacktrie
2020-11-09 15:08:12 +01:00
Péter Szilágyi
7b7b327ff2 core/state/snapshot: update generator marker in sync with flushes 2020-11-09 16:03:58 +02:00
Martin Holst Swende
81ff700077 consensus/ethash: fix the percentage progress report 2020-11-09 11:56:29 +01:00
Péter Szilágyi
97fc1c3b1d Merge pull request #21787 from karalabe/pod-non-verbose
build: stop verbose output to keep travis from overflowing
2020-11-05 11:55:50 +02:00
Péter Szilágyi
6cfe494276 build: stop verbose output to keep travis from overflowing 2020-11-05 11:52:35 +02:00
Martin Holst Swende
175506e7fd core/types, rlp: optimize derivesha (#21728)
This PR contains a minor optimization in derivesha, by exposing the RLP
int-encoding and making use of it to write integers directly to a
buffer (an RLP integer is known to never require more than 9 bytes
total). rlp.AppendUint64 might be useful in other places too.

The code assumes, just as before, that the hasher (a trie) will copy the
key internally, which it does when doing keybytesToHex(key).

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-11-04 19:29:24 +01:00
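A small usage sketch of the helper mentioned above, assuming rlp.AppendUint64 appends the canonical RLP encoding of the integer to the buffer:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// DeriveSha keys are transaction indexes; encode one straight into a
	// small buffer (an RLP integer never needs more than 9 bytes).
	buf := make([]byte, 0, 9)
	buf = rlp.AppendUint64(buf, 0x1234)
	fmt.Printf("%x\n", buf) // 821234
}
```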
rene
36bb7ac083 cmd/devp2p/internal/ethtest: add correct chain files and improve test output (#21782)
This PR replaces the old test genesis.json and chain.rlp files in the testdata
directory for the eth protocol test suite, and also adds documentation for
running the eth test suite locally.

It also improves the test output text and adds more timeouts.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-11-04 17:36:56 +01:00
Felix Lange
5d20fbbb6f cmd/devp2p, internal/utesting: implement TAP output (#21760)
TAP is a text format for test results. Parsers for it are available in many languages,
making it easy to consume. I want TAP output from our protocol tests because the
Hive wrapper around them needs to know about the test names and their individual
results and logs. It would also be possible to just write this info as JSON, but I don't
want to invent a new format.

This also improves the normal console output for tests (when running without --tap).
It now prints -- RUN lines before any output from the test, and indents the log output
by one space.
2020-11-04 15:02:58 +01:00
gary rong
e6402677c2 core/state/snapshot: fix journal recovery from generating old journal (#21775)
* core/state/snapshot: print warning if failed to resolve journal

* core/state/snapshot: fix snapshot recovery

When we encounter a snapshot journal consisting of:
- a disk layer generator in the new format
- a diff layer journal in the old format

the base layer should be returned without error.
The broken diff layer can be reconstructed later,
but we definitely don't want to reconstruct the
huge diff layer.

* core: add tests
2020-11-04 13:41:46 +02:00
Marius van der Wijden
3eebf34038 common: remove ToHex and ToHexArray (#21610)
ToHex was deprecated a couple years ago. The last remaining use
was in ToHexArray, which itself only had a single call site.

This just moves ToHexArray near its only remaining call site and
implements it using hexutil.Encode. This changes the default behaviour
of ToHexArray and with it the behaviour of eth_getProof. Previously we
encoded an empty slice as 0, now the empty slice is encoded as 0x.
2020-11-04 11:20:39 +01:00
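Concretely, the new behaviour described above looks like this:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	fmt.Println(hexutil.Encode([]byte{}))           // "0x" (the new empty-slice encoding)
	fmt.Println(hexutil.Encode([]byte{0xde, 0xad})) // "0xdead"
}
```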
gary rong
b63bffe820 les, p2p/simulations/adapters: fix issues found while simulating les (#21761)
This adds a few tiny fixes for les and the p2p simulation framework:

LES Parts

- Keep the LES-SERVER connection even it's non-synced

  We had this idea to reject the connections in LES protocol if the les-server itself is
  not synced. However, in LES protocol we will also receive the connection from another
  les-server. In this case even the local node is not synced yet, we should keep the tcp
  connection for other protocols(e.g. eth protocol).

- Don't count "invalid message" for non-existing GetBlockHeadersMsg request

  The eth syncing mechanisms (full sync, fast sync, light sync) will try to fetch
  some non-existent blocks or headers (to ensure we indeed download all of the missing chain).
  In this case, it's possible that the les-server will receive requests for
  non-existent headers. So don't count these as "invalid messages" when scheduling
  dropping.

- Copy the announce object in the closure

  Before the les-server pushes the latest headers to all connected clients, it will create
  a closure and queue it in the underlying request scheduler. In some scenarios it's
  problematic. E.g. in private networks, blocks can be mined very fast, so before the
  first closure is executed we may already have updated the latest_announce object; the
  "announce" object we actually want to send has then been replaced.

  The downside is that the client will receive two announces with the same td and then drop the
  server.

P2P Simulation Framework

- Don't double register the protocol services in p2p-simulation "Start".

  The protocols on top of devp2p are registered in the "new node" stage, so don't register
  them again when starting a node in the p2p simulation framework.

- Add one more new config field "ExternalSigner", in order to use clef service in the
  framework.
2020-10-30 18:04:38 +01:00
gary rong
b63e3c37a6 core: improve snapshot journal recovery (#21594)
* core/state/snapshot: introduce snapshot journal version

* core: update the disk layer in an atomic way

* core: persist the disk layer generator periodically

* core/state/snapshot: improve logging

* core/state/snapshot: forcibly ensure the legacy snapshot is matched

* core/state/snapshot: add debug logs

* core, tests: fix tests and special recovery case

* core: polish

* core: add more blockchain tests for snapshot recovery

* core/state: fix comment

* core: add recovery flag for snapshot

* core: add restart after start-after-crash tests

* core/rawdb: fix imports

* core: fix tests

* core: remove log

* core/state/snapshot: fix snapshot

* core: avoid callbacks in SetHead

* core: fix setHead cornercase where the threshold root has state

* core: small docs for the test cases

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-10-29 21:01:58 +02:00
gary rong
43c278cdf9 core/state: disable snapshot iteration if it's not fully constructed (#21682)
* core/state/snapshot: add diskRoot function

* core/state/snapshot: disable iteration if the snapshot is generating

* core/state/snapshot: simplify the function

* core/state: panic for undefined layer
2020-10-28 14:27:37 +02:00
gary rong
18145adf08 core/state: maintain one more diff layer (#21730)
* core/state: maintain one more diff layer

* core/state: address comment
2020-10-28 14:00:22 +02:00
Marius van der Wijden
296a27d106 accounts/abi/bind: restore error functionality (#21743)
* accounts/abi/bind: restore error functionality

* Update accounts/abi/bind/base.go

Co-authored-by: Guillaume Ballet <gballet@gmail.com>

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-10-27 17:22:44 +01:00
James Prestwich
1a55e20d35 cmd/geth: fix dir path in geth attach for yolov2 network (#21749) 2020-10-26 14:45:08 +02:00
Péter Szilágyi
7b748e550a Merge pull request #21747 from holiman/yolov2update
params: update yolov2 bootnode with elastic ip
2020-10-23 17:48:43 +03:00
Martin Holst Swende
68ac4eb796 params: update yolov2 bootnode with elastic ip 2020-10-23 16:47:26 +02:00
Péter Szilágyi
8a94aa91fb Merge pull request #21745 from holiman/yolov2_bootnodes
utils, params: add yolov2 bootnode
2020-10-23 16:42:08 +03:00
Martin Holst Swende
f5182c7b9c utils, params: add yolov2 bootnode 2020-10-23 15:40:48 +02:00
Felix Lange
95f720fffc cmd/devp2p/internal/ethtest: update test chain (#21742)
The old one was wrong in two ways: the first block in chain.rlp was the
genesis block, and the genesis difficulty was below minimum difficulty.

This also contains some other fixes to the test.
2020-10-23 13:34:44 +02:00
Martin Holst Swende
6487c002f6 all: implement EIP-2929 (gas cost increases for state access opcodes) + yolo-v2 (#21509)
* core/vm, core/state: implement EIP-2929 + YOLOv2

* core/state, core/vm: fix some review concerns

* core/state, core/vm: address review concerns

* core/vm: address review concerns

* core/vm: better documentation

* core/vm: unify sload cost as fully dynamic

* core/vm: fix typo

* core/vm/runtime: fix compilation flaw

* core/vm/runtime: fix renaming-err leftovers

* core/vm: renaming

* params/config: use correct yolov2 chainid for config

* core, params: use a proper new genesis for yolov2

* core/state/tests: golinter nitpicks
2020-10-23 08:26:57 +02:00
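For reference, the headline gas numbers introduced by EIP-2929 (the identifiers below are illustrative, not the exact names used in the params package):

```go
package demo

const (
	coldAccountAccessCost = 2600 // first access to an account in a transaction
	coldSloadCost         = 2100 // first SLOAD of a given storage slot
	warmStorageReadCost   = 100  // any subsequent (warm) access
)
```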
Kristofer Peterson
fb2c79df19 accounts/usbwallet: fix ledger version check (#21733)
The version check logic did not take into account the second digit (i.e. the '4' in v1.4.0) - this one line patch corrects this.
2020-10-21 16:56:45 +02:00
hwanjo
91c4607979 core: fix blockchain insert report time interval calculation (#21723) 2020-10-21 16:53:30 +02:00
Felföldi Zsolt
85d81b2cdd les: remove clientPeerSet and serverSet (#21566)
* les: move NodeStateMachine from clientPool to LesServer

* les: new header broadcaster

* les: peerCommons.headInfo always contains last announced head

* les: remove clientPeerSet and serverSet

* les: fixed panic

* les: fixed --nodiscover option

* les: disconnect all peers at ns.Stop()

* les: added comments and fixed signed broadcasts

* les: removed unused parameter, fixed tests
2020-10-21 10:56:33 +02:00
aaronbuchwald
3e82c9ef67 eth/api: fix potential nil deref in AccountRange (#21710)
* Fix potential nil pointer error when neither block number nor hash is specified to accountRange

* Update error description
2020-10-20 20:19:21 +02:00
gary rong
9d25f34263 core: track and improve tx indexing/unindexing (#21331)
* core: add background indexer to waitgroup

* core: make indexer stopable

* core/rawdb: add unit tests

* core/rawdb: fix lint

* core/rawdb: fix tests

* core/rawdb: fix linter
2020-10-20 16:34:50 +02:00
Marius van der Wijden
6e7137103c miner: fixed race condition in tests (#21664) 2020-10-20 10:58:26 +02:00
rene
cef3e2dc5a console: don't exit on ctrl-c, only on ctrl-d (#21660)
* add interrupt counter

* remove interrupt counter, allow ctrl-C to clear ONLY, ctrl-D will terminate console, stop node

* format

* add instructions to exit

* fix tests
2020-10-20 10:56:51 +02:00
Marius van der Wijden
b305591e14 core/vm: marshal returnData as hexstring in trace logs (#21715)
* core/vm: marshal returnData as hexstring in trace logs

* core/vm: marshal returnData as hexstring in trace logs
2020-10-16 11:28:03 +02:00
Felix Lange
51d026ca85 params: begin v1.9.24 release cycle 2020-10-15 12:30:41 +02:00
Felix Lange
8c2f271528 params: go-ethereum v1.9.23 stable 2020-10-15 12:29:42 +02:00
Felix Lange
524aaf5ec6 p2p/discover: implement v5.1 wire protocol (#21647)
This change implements the Discovery v5.1 wire protocol and
also adds an interactive test suite for this protocol.
2020-10-14 12:28:17 +02:00
Martin Holst Swende
4eb01b21c8 miner: set etherbase even if mining isn't possible at the moment (#21707) 2020-10-14 11:59:11 +02:00
gary rong
bdc7554918 params: update CHTs (#21706) 2020-10-14 11:57:37 +02:00
Marius van der Wijden
1fed223483 accounts/keystore: fix flaky test (#21703)
* accounts/keystore: add timeout to test to prevent failure on travis

The TestWalletNotifications test sporadically fails on travis.
This is because we shut down the event collection before all events are received.
Adding a small timeout (10 milliseconds) allows the collector to be scheduled
and to consume all pending events before we shut it down.

* accounts/keystore: added newlines back in

* accounts/keystore: properly fix the walletNotifications test
2020-10-13 19:46:43 +02:00
Martin Holst Swende
1e10489196 miner: don't interrupt mining after successful sync (#21701)
* miner: exit loop when downloader Done or Failed

Following the logic of the comment at the method,
this fixes a regression introduced in 7cf56d6f06,
which would allow external parties to DoS with
blocks, preventing mining progress.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: remove ineff assign (lint)

Signed-off-by: meows <b5c6@protonmail.com>

* miner: update test re downloader events

Signed-off-by: meows <b5c6@protonmail.com>

* Revert "miner: remove ineff assign (lint)"

This reverts commit eaefcd34ab4862ebc936fb8a07578aa2744bc058.

* Revert "miner: exit loop when downloader Done or Failed"

This reverts commit 23abd34265aa246c38fc390bb72572ad6ae9fe3b.

* miner: add test showing imprecise TestMiner

Signed-off-by: meows <b5c6@protonmail.com>

* miner: fix waitForMiningState precision

This helper function would return an affirmation
on the first positive match on a desired bool.

This was imprecise; it returned false positives
by not waiting initially for an 'updated' value.

This fix causes TestMiner_2 to fail, which is
expected.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: remove TestMiner_2 demonstrating broken test

This test demonstrated the imprecision of the test
helper function waitForMiningState. This function
has been fixed with 6d365c2851, and this test
may now be removed.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: fix test regarding downloader event/mining expectations

See comment for logic.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: add test describing expectations for downloader/mining events

We expect that once the downloader emits a DoneEvent,
signaling a successful sync, subsequent StartEvents
are no longer permitted to stop the miner.

This prevents a security vulnerability where forced syncs via
fake high blocks would stall mining operation.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: use 'canStop' state to fix downloader event handling

- Break downloader event handling into event
separating Done and Failed events. We need to
treat these cases differently since a DoneEvent
should prevent the miner from being stopped on
subsequent downloader Start events.

- Use canStop state to handle the one-off
case when a downloader first succeeds.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: improve comment wording

Signed-off-by: meows <b5c6@protonmail.com>

* miner: start mining on downloader events iff not already mining

Signed-off-by: meows <b5c6@protonmail.com>

* miner: refactor miner update logic w/r/t downloader events

This makes mining pause/start logic regarding downloader
events more explicit. Instead of eternally handling downloader
events after the first done event, the subscription is closed
when downloader events are no longer actionable.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: fix handling downloader events on subcription closed

Signed-off-by: meows <b5c6@protonmail.com>

* miner: (lint:gosimple) use range over chan instead of for/select

Signed-off-by: meows <b5c6@protonmail.com>

* miner: refactor update loop to remove race condition

The goroutine handling the downloader events accessed
vars in parallel with the parent routine, causing a
race condition.

This change, though ugly, removes the race condition while
still allowing the downloader event subscription to be
closed when the miner has no further use for it (i.e. after a DoneEvent).

* miner: alternate fix for miner-flaw

Co-authored-by: meows <b5c6@protonmail.com>
2020-10-13 15:12:06 +03:00
Giuseppe Bertone
2a9ea6be87 cmd/geth, cmd/utils: fixed flags name (#21700) 2020-10-13 13:33:10 +02:00
Martin Holst Swende
7a5a822905 eth, p2p: use truncated names (#21698)
* peer: return localAddr instead of name to prevent spam

We currently use the name (which can be freely set by the peer) in several log messages.
This enables malicious actors to write spam into your geth log.
This commit returns the localAddr instead of the freely settable name.

* p2p: reduce usage of peer.Name in warn messages

* eth, p2p: use truncated names

* Update peer.go

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-10-13 13:28:24 +02:00
mr_franklin
5c6155f9f4 internal/web3ext: improve some web3 apis (#21639)
* improve some web3-ext APIs

* Update web3ext.go

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-10-13 13:24:08 +02:00
Martin Holst Swende
348c3bc47d trie: fix flaw in stacktrie pool reuse (#21699) 2020-10-13 13:21:25 +02:00
mr_franklin
94d1f5888a consensus/clique: unexport calcDifficulty and improve comment (#21619) 2020-10-13 11:00:42 +02:00
mr_franklin
c37e68e7c1 all: replace RWMutex with Mutex in places where RLock is not used (#21622) 2020-10-13 10:58:41 +02:00
Hanjiang Yu
32341f88e3 console: fix admin.sleepBlocks (#21629) 2020-10-13 10:55:57 +02:00
mr_franklin
66c3eb2f1a accounts, consensus, core: fix some comments (#21617) 2020-10-12 15:02:38 +02:00
gary rong
86dd005544 trie: polish commit function (#21692)
* trie: polish commit function

* trie: fix typo
2020-10-12 12:08:04 +02:00
Martin Holst Swende
706f5e3b98 core: fix txpool off-by-one error (#21683) 2020-10-09 12:23:46 +03:00
Marius van der Wijden
19a1c95046 eth/downloader: cache parent hash instead of recomputing (#21678) 2020-10-09 09:09:10 +02:00
gary rong
905ed109ed eth/downloader: fix data race around the ancientlimit (#21681)
* eth/downloader: fix data race around the ancientlimit

* eth/downloader: initialize the ancientlimit as 0
2020-10-09 09:58:30 +03:00
Guillaume Ballet
43cd31ea9f core/vm: dedup config check in markdown logger (#21655)
* core/vm: dedup config check

* review feedback: reuse buffer
2020-10-08 14:03:24 +02:00
Felix Lange
5e86e4ed29 p2p/discover: remove use of shared hash instance for key derivation (#21673)
For some reason, using the shared hash causes a cryptographic incompatibility
when using Go 1.15. I noticed this during the development of Discovery v5.1
when I added test vector verification.

The go library commit that broke this is golang/go@97240d5, but the
way we used HKDF is slightly dodgy anyway and it's not a regression.
2020-10-08 11:19:54 +02:00
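A sketch of the safer pattern, using x/crypto/hkdf with a hash constructor so each derivation gets a fresh, un-shared hash instance (illustrative values, not the discv5 code itself):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/hkdf"
)

func main() {
	secret := []byte("shared-secret")
	salt := []byte("salt")
	info := []byte("key derivation example")

	// sha256.New hands HKDF a brand-new hash.Hash on every invocation,
	// avoiding any state shared with other users of the hash.
	kdf := hkdf.New(sha256.New, secret, salt, info)

	key := make([]byte, 32)
	if _, err := io.ReadFull(kdf, key); err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", key)
}
```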
Martin Holst Swende
6d29e192e9 signer/core: don't mismatch reject and no accounts (#21677)
* signer/core: don't mismatch reject and zero accounts, fixes #21674

* signer/core: docs
2020-10-08 11:10:58 +03:00
Felix Lange
015e78928a node: relax websocket connection header check (#21646)
This makes it accept the "upgrade,keep-alive" header value, which
apparently is a thing.
2020-10-07 20:05:14 +02:00
rene
716864deba cmd/devp2p/internal/ethtest: improve eth test suite (#21615)
This fixes issues with the protocol handshake and status exchange
and adds support for responding to GetBlockHeaders requests.
2020-10-07 17:22:44 +02:00
Martin Holst Swende
e43d827a19 core/types: optimize bloom filters (#21624)
* core/types: tests for bloom

* core/types: refactored bloom filter for receipts, added tests

core/types: replaced old bloom implementation

core/types: change interface of bloom add+test

* core/types: refactor bloom

* core/types: minor tweak on LogsBloom

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-10-06 16:57:00 +03:00
Martin Holst Swende
eb87121300 core/bloombits: faster generator (#21625)
* core/bloombits: add benchmark

* core/bloombits: optimize inserts
2020-10-06 16:34:29 +03:00
Raw Pong Ghmoa
2b2fd74158 params: update goerli testnet bootnodes (#21659)
* params: update pegasys besu bootnode

* params: update goerli initiative bootnodes
2020-10-06 08:35:21 +03:00
Felix Lange
d9890a6a8f cmd/faucet: enable DNS discovery for known networks (#21636) 2020-10-05 13:50:26 +03:00
Péter Szilágyi
a15d71a255 core/state/snapshot: stop generator if it hits missing trie nodes (#21649)
* core/state/snapshot: exit Geth if generator hits missing trie nodes

* core/state/snapshot: error instead of hard die on generator fault

* core/state/snapshot: don't enable logging on the tests
2020-10-05 11:52:36 +03:00
Martin Holst Swende
9d1e2027a0 trie: add Commit-sequence tests for stacktrie commit (#21643) 2020-09-30 19:49:20 +02:00
gary rong
053ed9cc84 trie: polishes to trie committer (#21351)
* trie: update tests to check commit integrity

* trie: polish committer

* trie: fix typo

* trie: remove hasvalue notion

According to the benchmarks, type assertion between the pointer and
interface is extremely fast.

BenchmarkIntmethod-12           1000000000               1.91 ns/op
BenchmarkInterface-12           1000000000               2.13 ns/op
BenchmarkTypeSwitch-12          1000000000               1.81 ns/op
BenchmarkTypeAssertion-12       2000000000               1.78 ns/op

So the overhead of asserting whether the shortnode has a "valuenode"
child is super tiny. It's not necessary to have another field.

* trie: linter nitpicks

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-30 13:45:56 +02:00
Martin Holst Swende
dad26582b6 accounts, signer: implement gnosis safe support (#21593)
* accounts, signer: implement gnosis safe support

* common/math: add type for marshalling big to dec

* accounts, signer: properly sign gnosis requests

* signer, clef: implement account_signGnosisTx

* signer: fix auditlog print, change rpc-name (signGnosisTx to signGnosisSafeTx)

* signer: pass validation-messages/warnings to the UI for gnosis-safe txs

* signer/core: minor change to validationmessages of typed data
2020-09-29 17:40:08 +02:00
Guillaume Ballet
6c8310ebb4 trie: use stacktrie for Derivesha operation (#21407)
core/types: use stacktrie for derivesha

trie: add stacktrie file

trie: fix linter

core/types: use stacktrie for derivesha

rebased: adapt stacktrie to the newer version of DeriveSha

Co-authored-by: Martin Holst Swende <martin@swende.se>

More linter fixes

review feedback: no key offset for nodes converted to hashes

trie: use EncodeRLP for full nodes

core/types: insert txs in order in derivesha

trie: tests for derivesha with stacktrie

trie: make stacktrie use pooled hashers

trie: make stacktrie reuse tmp slice space

trie: minor polishes on stacktrie

trie/stacktrie: less rlp dancing

core/types: explain the contortions in DeriveSha

ci: fix goimport errors

trie: clear mem on subtrie hashing

squashme: linter fix

stacktrie: use pooling, less allocs (#3)

trie: in-place hex prefix, reduce allocs and add rawNode.EncodeRLP

Reintroduce the `[]node` method, add the missing `EncodeRLP` implementation for `rawNode` and calculate the hex prefix in place.

Co-authored-by: Martin Holst Swende <martin@swende.se>

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-29 17:38:13 +02:00
mr_franklin
4ee11b072e cmd/bootnode,internal/debug: fix some comments (#21623) 2020-09-29 11:31:14 +02:00
Marius van der Wijden
901471f733 build: keep geth-sources.jar build result for JavaDoc (#21596)
* ci: tooltips for javadoc for mobile app

* f space
2020-09-28 20:11:30 +02:00
mr_franklin
666092936c p2p/enode: remove unused code (#21612) 2020-09-28 20:10:11 +02:00
shigeyuki azuchi
b007df89dd light: fix wrong description in a comment (#21573) 2020-09-28 14:30:10 +02:00
mr_franklin
a04294d160 internal/web3ext: improve eth_getBlockByNumber and eth_getBlockByHash console api (#21608) 2020-09-28 14:28:38 +02:00
aaronbuchwald
eebfb13053 core: free pointer from slice after popping element from price heap (#21572)
* Fix potential memory leak in price heap

* core: nil free pointer slice (alternative version)

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-28 14:24:01 +02:00
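A sketch of the fix pattern on a hypothetical price heap (not the exact txpool type): nil out the vacated slot in Pop so the popped transaction can be garbage collected.

```go
package demo

import "github.com/ethereum/go-ethereum/core/types"

type priceHeap []*types.Transaction

func (h priceHeap) Len() int            { return len(h) }
func (h priceHeap) Less(i, j int) bool  { return h[i].GasPrice().Cmp(h[j].GasPrice()) < 0 }
func (h priceHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *priceHeap) Push(x interface{}) { *h = append(*h, x.(*types.Transaction)) }

func (h *priceHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	old[n-1] = nil // free the pointer: otherwise the shrunk slice keeps the tx alive
	*h = old[:n-1]
	return x
}
```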
Martin Holst Swende
0ddd4612b7 core/vm, params: make 2200 in line with spec (#21605) 2020-09-28 14:14:45 +02:00
Marius van der Wijden
a90e645ccd mobile: added constructor for big int (#21597)
* mobile: added constructor for big int

* mobile: tiny nitpick
2020-09-28 14:12:08 +02:00
Marius van der Wijden
420b78659b accounts/abi: ABI explicit difference between Unpack and UnpackIntoInterface (#21091)
* accounts/abi: refactored abi.Unpack

* accounts/abi/bind: fixed error

* accounts/abi/bind: modified template

* accounts/abi/bind: added ToStruct for conversion

* accounts/abi: reenabled tests

* accounts/abi: fixed tests

* accounts/abi: fixed tests for packing/unpacking

* accounts/abi: fixed tests

* accounts/abi: added more logic to ToStruct

* accounts/abi/bind: fixed template

* accounts/abi/bind: fixed ToStruct conversion

* accounts/abi/: removed unused code

* accounts/abi: updated template

* accounts/abi: refactored unused code

* contracts/checkpointoracle: updated contracts to sol ^0.6.0

* accounts/abi: refactored reflection logic

* accounts/abi: less code duplication in Unpack*

* accounts/abi: fixed rebasing bug

* fix a few typos in comments

* rebase on master

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-09-28 14:10:26 +02:00
Péter Szilágyi
c9959145a9 params: begin v1.9.23 release cycle 2020-09-28 11:23:02 +03:00
Péter Szilágyi
c71a7e26a8 params: release Geth v1.9.22 2020-09-28 11:21:47 +03:00
Péter Szilágyi
7ddb44b80e Merge pull request #21635 from karalabe/cht-1.9.22
params: update CHTs for Geth v1.9.22
2020-09-28 11:21:17 +03:00
Péter Szilágyi
b5d362b2bf params: update CHTs for Geth v1.9.22 2020-09-28 11:19:22 +03:00
rene
fdd42d425b cmd/devp2p/internal/ethtest: lower protocol version to 64 (#21604) 2020-09-24 10:46:43 +02:00
rene
39f8268147 cmd/devp2p/internal/ethtest: update version in handshake (#21603) 2020-09-23 17:48:47 +02:00
rene
a25899f3dc cmd/devp2p: add eth protocol test suite (#21598)
This change adds a test framework for the "eth" protocol and some basic
tests. The tests can be run using the './devp2p rlpx eth-test' command.
2020-09-23 15:18:17 +02:00
Marius van der Wijden
c1544423d6 internal/ethapi: fix nil deref + fix estimateGas console bindings (#21601)
* tried to fix

* fix for js api

* fix for nil pointer ex

* rev space

* rev space

* input call formatter
2020-09-23 13:08:40 +02:00
gary rong
e5defccd58 trie: extend range proof (#21250)
* trie: support non-existent right proof

* trie: improve test

* trie: minor linter fix

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-09-23 12:44:09 +03:00
Marius van der Wijden
0921f8a74f internal/ethapi: add optional parameter blockNrOrHash to estimateGas (#21545)
This allows users to estimate gas on top of arbitrary blocks as well as pending and latest.
Estimating on pending is useful for most users as it takes into account the current txpool, while
estimating on latest might be useful for users that have little to no knowledge of the current
transactions in the network.

If blockNrOrHash is not specified, estimateGas defaults to pending.
2020-09-23 10:29:48 +02:00
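A minimal sketch of calling the extended API over raw JSON-RPC (the endpoint URL and call parameters are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // any Geth HTTP endpoint
	if err != nil {
		log.Fatal(err)
	}
	call := map[string]interface{}{
		"from": "0x0000000000000000000000000000000000000001",
		"to":   "0x0000000000000000000000000000000000000002",
	}
	var gas hexutil.Uint64
	// The trailing argument is the new optional blockNrOrHash; omitting it
	// makes the estimate default to "pending".
	if err := client.CallContext(context.Background(), &gas, "eth_estimateGas", call, "latest"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("estimated gas:", uint64(gas))
}
```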
gary rong
25b16085da trie: support empty range proof (#21199) 2020-09-23 11:03:21 +03:00
gary rong
e1365b2464 trie: fix gaped range proof test case (#21484) 2020-09-23 10:59:11 +03:00
Binacs
fdb742419e cmd/clef, cmd/geth: use SplitAndTrim from cmd/utils (#21579) 2020-09-22 23:22:54 +02:00
rene
129cf075e9 p2p: move rlpx into separate package (#21464)
This change moves the RLPx protocol implementation into a separate package,
p2p/rlpx. The new package can be used to establish RLPx connections for
protocol testing purposes.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-09-22 10:17:39 +02:00
Marius van der Wijden
2c097bb7a2 mobile: better api for java users (#21580)
* (mobile): Adds string representations for types

* mobile: better interfaces add stringer to types

Co-authored-by: sarath <sarath@melvault.com>
2020-09-21 16:33:35 +02:00
Osoro Bironga
9a39c6bcb1 accounts/abi: improve documentation and names (#21540)
* accounts: abi/bind/backends; cleaned doc errors, camelCase refactors and anonymous variable assignments

* accounts/abi/bind: doc errors, anonymous parameter assignments

* accounts/abi: doc edits, camelCase refactors

* accounts/abi/bind: review fix

* reverted name changes

* name revert

Co-authored-by: Osoro Bironga <osoro@doctaroo.com>
2020-09-20 10:43:57 +02:00
Guillaume Ballet
f354c622ca core: fix a typo in comment (#21439) 2020-09-18 14:26:19 +02:00
Péter Szilágyi
2482ba016e Merge pull request #21529 from karalabe/dynamic-pivot
eth/downloader: dynamically move pivot even during chain sync
2020-09-18 12:29:33 +03:00
Péter Szilágyi
fb835c024c eth/downloader: dynamically move pivot even during chain sync 2020-09-18 11:37:42 +03:00
Giuseppe Bertone
07751c3d26 cmd/geth: added counters to the geth inspect report (#21495)
* database: added counters

* Improved stats for ancient db

* Small improvement

* Better message and added percentage while counting receipts

* Fast counting for receipts

* added info message

* Show both the receipts item count from the ancient db and the counted receipts

* Fixed default case

* Removed counter for receipts in ancient store

* Removed counting of receipts present in leveldb
2020-09-17 10:23:56 +02:00
Marius van der Wijden
faba018b29 cmd/utils: use preconfigured testnet flags instead of networkid (#21561)
* cmd/utils: use preconfigured testnet flags instead of networkid

* cmd/utils: shorter description

Co-authored-by: Martin Holst Swende <martin@swende.se>

* Update flags.go

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-16 13:17:50 +02:00
Marius van der Wijden
89884dc353 tests/fuzzers/abi: add fuzzer for fuzzing package accounts/abi (#21217)
* tests/fuzzers/abi: added abi fuzzer

* accounts/abi: fixed issues found by fuzzing

* tests/fuzzers/abi: update fuzzers, added repro test

* tests/fuzzers/abi: renamed abi_fuzzer to abifuzzer

* tests/fuzzers/abi: updated abi fuzzer

* tests/fuzzers/abi: updated abi fuzzer

* accounts/abi: minor style fix

* go.mod: added go-fuzz dependency

* tests/fuzzers/abi: updated abi fuzzer

* tests/fuzzers/abi: make linter happy

* tests/fuzzers/abi: make linter happy

* tests/fuzzers/abi: comment out false positives
2020-09-16 13:15:22 +02:00
gary rong
93f047023f les/lespay/server: bump database version (#21571) 2020-09-16 11:51:16 +02:00
Vinod Damle
8696dd39cb params: allow setting Petersburg block before chain head (#21473)
* Allow setting PetersburgBlock before chainhead

if it is at the same block as ConstantinopleBlock

* Add a negative test
2020-09-16 09:39:35 +03:00
Mason Fischer
cf2a77af28 ethclient: fix BlockNumber (#21565)
It didn't actually work because it called a method that doesn't
exist. This fixes it and also adds a test.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-09-15 11:29:51 +02:00
Giuseppe Bertone
0185ee0993 core/rawdb: single point of maintenance for writing and deleting tx lookup indexes (#21480) 2020-09-15 10:37:01 +02:00
Kirill Elagin
4764b2f0be COPYING: restore the full text of the GPL (#21568)
When the license was added to the repository, its text was changed (some
sections at the end removed) and, worse, the authors of go-ethereum
tried to claim copyright on the license text.

The correct way to apply GPL to a project is to copy it verbatim.
This change reverts the text of the GPL to the original.
2020-09-15 08:27:17 +02:00
Martin Holst Swende
b65c384181 eth/tracers: regenerate assets from #21549 (#21564) 2020-09-15 08:22:47 +02:00
Felföldi Zsolt
4996fce25a les, les/lespay/server: refactor client pool (#21236)
* les, les/lespay/server: refactor client pool

* les: use ns.Operation and sub calls where needed

* les: fixed tests

* les: removed active/inactive logic from peerSet

* les: removed active/inactive peer logic

* les: fixed linter warnings

* les: fixed more linter errors and added missing metrics

* les: addressed comments

* cmd/geth: fixed TestPriorityClient

* les: simplified clientPool state machine

* les/lespay/server: do not use goroutine for balance callbacks

* internal/web3ext: fix addBalance required parameters

* les: removed freeCapacity, always connect at minCapacity initially

* les: only allow capacity change with priority status

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2020-09-14 22:44:20 +02:00
Felix Lange
f7112cc182 rlp: add SplitUint64 (#21563)
This can be useful when working with raw RLP data.
2020-09-14 19:23:01 +02:00
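A minimal usage sketch, assuming SplitUint64 follows the (value, rest, err) shape of the other Split helpers in the rlp package:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	raw, _ := rlp.EncodeToBytes(uint64(1024))
	v, rest, err := rlp.SplitUint64(raw)
	fmt.Println(v, len(rest), err) // 1024 0 <nil>
}
```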
Julian Koh
71c37d82ad js/tracers: make calltracer report value in selfdestructs (#21549) 2020-09-14 14:57:28 +02:00
Felföldi Zsolt
4eb9296910 p2p/nodestate: ensure correct callback order (#21436)
This PR adds an extra guarantee to NodeStateMachine: it ensures that all
immediate effects of a certain change are processed before any subsequent
effects of any of the immediate effects on the same node. In the original
version, if a cascaded change caused a subscription callback to be called
multiple times for the same node then these calls might have happened in a
wrong chronological order.

For example:

- a subscription to flag0 changes flag1 and flag2
- a subscription to flag1 changes flag3
- a subscription to flag1, flag2 and flag3 was called in the following order:

   [flag1] -> [flag1, flag3]
   [] -> [flag1]
   [flag1, flag3] -> [flag1, flag2, flag3]

This happened because the tree of changes was traversed in a "depth-first
order". Now it is traversed in a "breadth-first order"; each node has a
FIFO queue for pending callbacks and each triggered subscription callback
is added to the end of the list. The already existing guarantees are
retained; no SetState or SetField returns until the callback queue of the
node is empty again. Just like before, it is the responsibility of the
state machine design to ensure that infinite state loops are not possible.
Multiple changes affecting the same node can still happen simultaneously;
in this case the changes can be interleaved in the FIFO of the node but the
correct order is still guaranteed.

A new unit test is also added to verify callback order in the above scenario.
2020-09-14 14:01:18 +02:00
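An illustrative sketch of the per-node FIFO queueing described above, using made-up types rather than the real nodestate API: callbacks triggered by callbacks are appended to the back of the queue, so cascaded effects run breadth-first instead of depth-first.

```go
package main

import "fmt"

// node keeps a FIFO queue of pending callbacks; names are illustrative.
type node struct {
	pending []func()
	running bool
}

// enqueue adds a callback and, if no drain is already in progress, drains the
// queue in order. Nested enqueue calls land at the back of the queue.
func (n *node) enqueue(cb func()) {
	n.pending = append(n.pending, cb)
	if n.running {
		return
	}
	n.running = true
	for len(n.pending) > 0 {
		next := n.pending[0]
		n.pending = n.pending[1:]
		next()
	}
	n.running = false
}

func main() {
	n := &node{}
	n.enqueue(func() {
		fmt.Println("flag0 handled")
		n.enqueue(func() { fmt.Println("flag1 reaction") })
		n.enqueue(func() { fmt.Println("flag2 reaction") })
	})
	// Prints: flag0 handled, flag1 reaction, flag2 reaction (FIFO order).
}
```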
Shude Li
a99ac5335c Dockerfile: unexpose port 8547 as GraphQL was merged into HTTP endpoint (#21556) 2020-09-13 23:25:15 +03:00
Guillaume Ballet
4e2641319b p2p/discover: fix typo in comments (#21554) 2020-09-11 20:35:38 +02:00
Marius van der Wijden
df219e23df miner: fix regression, add test for starting while download (#21547)
Fixes a regression introduced in #21536
2020-09-11 18:17:09 +02:00
Marius van der Wijden
7cf56d6f06 miner: use channels instead of atomics in update loop (#21536)
This PR changes several different things:

- Adds test cases for the miner loop
- Stops the worker if it wasn't already stopped in worker.Close()
- Uses channels instead of atomics in the miner.update() loop

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-09-10 19:27:42 +02:00
Guillaume Ballet
d7f02b448a cmd/geth: print warning when whisper config is present in toml (#21544)
* cmd/geth: print warning when whisper config is present in toml

* Update cmd/geth/config.go

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-10 13:14:19 +00:00
Dan Sosedoff
1167639524 ethclient: add BlockNumber method (#21500)
This adds a new client method BlockNumber to fetch the most recent
block number of the chain.
2020-09-10 14:24:21 +02:00
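A minimal usage sketch of the new method; the endpoint URL is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Point this at any JSON-RPC endpoint; the URL here is a placeholder.
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// BlockNumber wraps the eth_blockNumber RPC call.
	head, err := client.BlockNumber(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest block:", head)
}
```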
Shude Li
4ea9737de6 go.mod: remove golang.org/x/sync (#21541) 2020-09-10 14:21:51 +02:00
Martin Holst Swende
a3cd8a040a core/vm: fix benchmark overflow + prep for precompile repricings (#21530)
* core/vm/testdata: add gascost expectations to testcases

* core/vm: verify expected gas in tests for precompiles

* core/vm: fix overflow flaw in gas/s calculation
2020-09-10 09:19:30 +02:00
gary rong
328901c24c cmd, eth: offer maxprice flag for overwriting price cap (#21531)
* cmd, eth: offer maxprice flag for overwriting price cap

* eth: rename default price cap
2020-09-09 18:38:47 +03:00
Péter Szilágyi
3a98c6f6e6 Merge pull request #21537 from karalabe/les-reorg-fix
eth/downloader: only roll back light sync if not fully validating
2020-09-09 18:37:37 +03:00
Marius van der Wijden
d81c9d9b76 accounts/abi/bind/backends: reverted some stylistic changes (#21535) 2020-09-09 13:21:20 +00:00
Péter Szilágyi
367f12f734 eth/downloader: only roll back light sync if not fully validating 2020-09-09 14:13:26 +03:00
Péter Szilágyi
8d35b1eb2b params: begin v1.9.22 release cycle 2020-09-09 11:23:37 +03:00
Péter Szilágyi
0287d54847 params: release Geth v1.9.21 2020-09-09 11:22:11 +03:00
Péter Szilágyi
24562d9b0c Merge pull request #21534 from karalabe/cht-1.9.21
params: update CHTs for v1.9.21 release
2020-09-09 10:34:53 +03:00
Péter Szilágyi
dc681fc1f6 params: update CHTs for v1.9.21 release 2020-09-09 10:33:20 +03:00
Guillaume Ballet
86bcbb0d79 .github: remove whisper from CODEOWNERS (#21527) 2020-09-08 23:02:14 +03:00
Guillaume Ballet
066c75531d build: remove wnode from the list of packages binaries (#21526) 2020-09-08 16:13:48 +02:00
Martin Holst Swende
8327d1fdfc accounts/usbwallet, signer/core: show accounts from ledger legacy derivation paths (#21517)
* accounts/usbwallet, signer/core: un-hide accounts from ledger legacy derivation paths

* Update accounts/usbwallet/wallet.go

* Update signer/core/api.go

* Update signer/core/api.go
2020-09-08 14:07:55 +03:00
Guillaume Ballet
d54f2f2e5e whisper: remove whisper (#21487)
* whisper: remove whisper

* Update cmd/geth/config.go

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>

* cmd/geth: warn on enabling whisper + remove more whisper deps

* mobile: remove all whisper references

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-08 11:47:48 +03:00
Osoro Bironga
c5d28f0b27 accounts: abi/bind/backends; cleaned doc errors, camelCase refactors and anonymous variable assignments (#21514)
Co-authored-by: Osoro Bironga <osoro@doctaroo.com>
2020-09-07 13:07:15 +02:00
Marius van der Wijden
de971cc845 eth: added trace_call to trace on top of arbitrary blocks (#21338)
* eth: Added TraceTransactionPending

* eth: Implement Trace_Call, remove traceTxPending

* eth: debug_call -> debug_traceCall, recompute tx environment if pruned

* eth: fix nil panic

* eth: improve block retrieving logic in tracers

* internal/web3ext: add debug_traceCall to console
2020-09-07 10:52:01 +02:00
Péter Szilágyi
f86324edb7 Merge pull request #21504 from karalabe/trie-path-sync
core, eth, trie: prepare trie sync for path based operation
2020-09-02 13:52:51 +03:00
Péter Szilágyi
eeaf191633 core, eth, trie: prepare trie sync for path based operation 2020-09-02 13:21:32 +03:00
Martin Holst Swende
3010f9fc75 eth/downloader: change initial download size (#21366)
This changes how the downloader works a little bit. Previously, when block sync started,
we immediately started filling up to 8192 blocks. Usually this is fine, since blocks are small
at the early block numbers. The threshold is then lowered as we measure the size of the blocks
that are filled.

However, if the node is shut down and restarts syncing while we're in a heavy segment,
that might be bad. This PR introduces a more conservative initial threshold of 2K blocks
instead.
2020-09-02 11:01:46 +02:00
ucwong
d90bbce954 go.mod : update goja dependency (#21432) 2020-09-01 12:56:22 +02:00
Giuseppe Bertone
5cdb476dd1 "Downloader queue stats" is now provided once per minute (#21455)
* "Downloader queue stats" is now a DEBUG information

I think this info is more a DEBUG related information then an INFO. If it must remains an INFO, maybe it can be slow down to one time every 5 minutes or so.

* Update queue.go

"Downloader queue stats" information is now provided once every minute instead of once every 10 seconds.
2020-09-01 11:02:12 +02:00
Hanjiang Yu
ff23e265cd internal: fix personal.sign() (#21503) 2020-09-01 10:23:04 +02:00
Fuyang Deng
12d8570322 accounts/abi: fix a bug in getTypeSize method (#21501)
* accounts/abi: fix a bug in getTypeSize method

e.g. for a "Tuple[2]" type, the element of the array is a tuple type and the size of the tuple may not be 32 bytes.

* accounts/abi: add unit test of getTypeSize method
2020-09-01 07:29:54 +00:00
Felix Lange
5883afb3ef rpc: fix issue with null JSON-RPC messages (#21497) 2020-08-28 16:27:58 +02:00
libotony
05280a7ae3 eth/tracers: revert reason in call_tracer + error for failed internal calls (#21387)
* tests: add testdata of call tracer

* eth/tracers: return revert reason in call_tracer

* eth/tracers: regenerate assets

* eth/tracers: add error message even if no exec occurs, fixes #21438

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-08-27 11:33:45 +02:00
Péter Szilágyi
d97e0063d5 Merge pull request #21491 from karalabe/state-sync-leak-fix
core/state, eth, trie: stabilize memory use, fix memory leak
2020-08-27 11:24:28 +03:00
ucwong
856307d8bb go.mod | goleveldb latest update (#21448)
* go.mod | goleveldb latest update

* go.mod update

* leveldb options

* go.mod: double check

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-08-26 15:53:12 +02:00
Marius van der Wijden
16d7eae1c8 eth: updated comments (#21490) 2020-08-26 13:20:12 +03:00
Péter Szilágyi
d8da0b3d81 core/state, eth, trie: stabilize memory use, fix memory leak 2020-08-26 13:05:06 +03:00
Marius van der Wijden
92b12ee6c6 accounts/abi/bind/backends: Disallow AdjustTime for non-empty blocks (#21334)
* accounts/abi/bind/backends: Disallow timeshift for non-empty blocks

* accounts/abi/bind/backends: added tests for adjust time

* accounts/abi/bind/simulated: added comments, fixed test for AdjustTime

* accounts/abi/bind/backends: updated comment
2020-08-26 09:37:00 +02:00
Felix Lange
fc20680b95 params: begin v1.9.21 release cycle 2020-08-25 16:21:41 +02:00
Felix Lange
979fc96899 params: release Geth v1.9.20 2020-08-25 16:20:37 +02:00
Péter Szilágyi
63a9d4b2ae Merge pull request #21486 from karalabe/cht-1.9.20
params: update CHTs for v1.9.20 release
2020-08-25 13:25:23 +03:00
Péter Szilágyi
ce5f94920d params: update CHTs for v1.9.20 release 2020-08-25 13:02:51 +03:00
Shude Li
341f451083 graphql: add support for retrieving the chain id (#21451) 2020-08-25 11:38:56 +03:00
Péter Szilágyi
d13b8e5570 Merge pull request #21483 from karalabe/freezer-truncate-silent
core/rawdb: only complain loudly if truncating many items
2020-08-25 09:21:44 +03:00
Péter Szilágyi
5655dce3b8 core/rawdb: only complain loudly if truncating many items 2020-08-25 09:03:14 +03:00
timcooijmans
7b5107b73f p2p/discover: avoid dropping unverified nodes when table is almost empty (#21396)
This change improves discovery behavior in small networks. Very small
networks would often fail to bootstrap because all member nodes were
dropping table content due to findnode failure. The check is now changed
to avoid dropping nodes on findnode failure when their bucket is almost
empty. It also relaxes the liveness check requirement for FINDNODE/v4
response nodes, returning unverified nodes as results when there aren't
any verified nodes yet.

The "findnode failed" log now reports whether the node was dropped
instead of the number of results. The value of the "results" was
always zero by definition.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-24 14:42:39 +02:00
Péter Szilágyi
bdde616f23 Merge pull request #21477 from karalabe/snapshotter-shallow-generator
core/state/snapshot: reduce disk layer depth during generation
2020-08-24 14:00:57 +03:00
Péter Szilágyi
3ee91b9f2e core/state/snapshot: reduce disk layer depth during generation 2020-08-24 13:22:36 +03:00
Martin Holst Swende
0f4e7c9b0d eth: utilize sync bloom for getNodeData (#21445)
* eth/downloader, eth/handler: utilize sync bloom for getNodeData

* trie: handle if bloom is nil

* trie, downloader: check bloom nilness externally
2020-08-24 11:32:12 +03:00
Martin Holst Swende
1b5a867eec core: do less lookups when writing fast-sync block bodies (#21468) 2020-08-22 18:12:04 +02:00
gary rong
87c0ba9213 core, eth, les, trie: add a prefix to contract code (#21080) 2020-08-21 15:10:40 +03:00
Péter Szilágyi
b68929caee Merge pull request #21472 from holiman/fix_dltest_fail
eth/downloader: fix rollback issue on short chains
2020-08-21 14:43:14 +03:00
Martin Holst Swende
9f7b79af00 eth/downloader: fix rollback issue on short chains 2020-08-21 13:36:08 +02:00
Marius van der Wijden
4e54b1a45e metrics: zero temp variable in updateMeter (#21470)
* metrics: zero temp variable in  updateMeter

Previously the temp variable was not updated properly after summing it to count.
This meant we had astronomically high metrics, now we zero out the temp whenever we
sum it onto the snapshot count

* metrics: move temp variable to be aligned, unit tests

Moves the temp variable in MeterSnapshot to be 64-bit aligned because of the atomic bug.
Adds a unit test, that catches the previous bug.
2020-08-21 11:04:36 +03:00
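A hedged sketch of the two points above, with illustrative field names rather than the actual metrics types: the 64-bit field updated atomically is kept first in the struct for alignment, and the pending delta is swapped to zero when it is folded into the count.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// meterSketch is illustrative. Keeping the atomically updated int64 first
// guarantees 64-bit alignment on 32-bit platforms.
type meterSketch struct {
	temp  int64 // pending delta, updated with atomics
	count int64 // running total folded in on snapshot
}

func (m *meterSketch) mark(n int64) {
	atomic.AddInt64(&m.temp, n)
}

func (m *meterSketch) snapshot() int64 {
	// Swap rather than Load, so the delta is zeroed once it has been added
	// to the count and cannot be double-counted later.
	m.count += atomic.SwapInt64(&m.temp, 0)
	return m.count
}

func main() {
	m := &meterSketch{}
	m.mark(5)
	m.mark(7)
	fmt.Println(m.snapshot()) // 12
	fmt.Println(m.snapshot()) // still 12: temp was zeroed, not re-added
}
```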
Péter Szilágyi
a70a79b285 Merge pull request #21466 from karalabe/go1.15
travis, dockerfile, appveyor, build: bump to Go 1.15
2020-08-20 17:41:26 +03:00
Péter Szilágyi
15fdaf2005 travis, dockerfile, appveyor, build: bump to Go 1.15 2020-08-20 16:41:37 +03:00
Péter Szilágyi
8cbdc8638f core: define and test chain rewind corner cases (#21409)
* core: define and test chain repair corner cases

* core: write up a variety of set-head tests

* core, eth: unify chain rollbacks, handle all the cases

* core: make linter smile

* core: remove commented out legacy code

* core, eth/downloader: fix review comments

* core: revert a removed recovery mechanism
2020-08-20 13:01:24 +03:00
Marius van der Wijden
0bdd295cc0 core: more detailed metering for reorgs (#21420) 2020-08-20 09:49:35 +02:00
Martin Holst Swende
7ebc6c43ff cmd/evm: statet8n output folder + tx hashes on trace filenames (#21406)
* t8ntool: add output basedir

* t8ntool: add txhash to trace filename

* t8ntool: don't default to '.' basedir, allow absolute paths
2020-08-19 11:31:13 +02:00
Péter Szilágyi
560d44479c Merge pull request #21461 from karalabe/ppa-drop-disco
build: drop disco, enable groovy on Ubuntu PPAs
2020-08-19 10:29:54 +03:00
Péter Szilágyi
32b078d418 build: drop disco, enable groovy on Ubuntu PPAs 2020-08-19 10:28:08 +03:00
Giuseppe Bertone
2ff464b29d core/state: fixed some comments (#21450) 2020-08-19 09:54:21 +03:00
Marius van der Wijden
f3bafecef7 metrics: make meter updates lock-free (#21446) 2020-08-18 11:27:04 +02:00
Martin Holst Swende
54add42550 cmd/geth/tests: try to fix spurious travis failure in les tests (#21410)
* cmd/geth/tests: try to fix spurious travis failure in les tests

* cmd/geth: les_test - remove extraneous option during boot
2020-08-14 14:18:12 +02:00
Péter Szilágyi
04926db204 params: begin v1.9.20 release cycle 2020-08-11 14:11:16 +03:00
Péter Szilágyi
3e0641923d params: release Geth v1.9.19 2020-08-11 14:10:21 +03:00
Péter Szilágyi
74925e547f Merge pull request #21437 from karalabe/cht-1.9.19
params: update CHTs for v1.9.19
2020-08-11 10:34:08 +03:00
Péter Szilágyi
7afdf792ab params: update CHTs for v1.9.19 2020-08-11 10:20:03 +03:00
Martin Holst Swende
c28fd9c079 tests: add Berlin-definition identical to YOLOv1 (#21435) 2020-08-10 21:06:14 +02:00
Péter Szilágyi
4baa574410 Merge pull request #21434 from karalabe/ethstats-split-rwlock
ethstats: split read and write lock, otherwise they'll lock up
2020-08-10 15:13:31 +03:00
Péter Szilágyi
9f45d6efae ethstats: split read and write lock, otherwise they'll lock up 2020-08-10 14:33:22 +03:00
Péter Szilágyi
cbbc54c495 Merge pull request #21433 from holiman/statsync_exiter
eth/downloader: allow all timers to exit
2020-08-10 12:08:46 +03:00
Martin Holst Swende
7cee2509c0 eth/downloader: allow all timers to exit 2020-08-10 10:42:33 +02:00
Péter Szilágyi
48b484c5ac Merge pull request #21428 from holiman/ethstats_moar
ethstats: overwrite old errors
2020-08-07 19:40:28 +03:00
Péter Szilágyi
06125bff89 Merge pull request #21429 from holiman/timerfix
eth/downloader: set deliverytime on drops and timeouts too
2020-08-07 16:36:33 +03:00
Martin Holst Swende
9fea1a5cf5 eth/downloader: set deliverytime on drops and timeouts too 2020-08-07 15:34:58 +02:00
gary rong
e401f5ff10 les: close all connected les-server when shutdown (#21426)
* les: close all connected les-server when shutdown

* les: linter nitpick

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-08-07 15:33:00 +02:00
Martin Holst Swende
6a53ce29a4 ethstats: overwrite old errors 2020-08-07 14:44:44 +02:00
Péter Szilágyi
8f24097836 Merge pull request #21427 from karalabe/fix-statesync-delivery-time
eth/downloader: save the correct delivery time for state sync
2020-08-07 15:27:00 +03:00
Péter Szilágyi
4b9c0ea76d eth/downloader: save the correct delivery time for state sync 2020-08-07 15:17:13 +03:00
Péter Szilágyi
3bb8a4ed3f Merge pull request #21425 from holiman/leslock
les: update checktime even if check fails
2020-08-07 12:32:01 +03:00
Martin Holst Swende
983cb25a07 les: update checktime even if check fails 2020-08-07 10:57:02 +02:00
Péter Szilágyi
68754f3931 cmd/utils: grant snapshot cache to trie if disabled (#21416)
* cmd/utils: grant snapshot cache to trie if disabled

* eth: fix up default non-mainnet cache distribution
2020-08-06 15:28:31 +03:00
timcooijmans
5d4512b113 eth: use maxQueuedTxAnns to limit the number of transactions announced (#21419) 2020-08-06 15:19:00 +03:00
rene
d21303f9dd cmd/geth: fixes db unavailability for chain commands (#21415)
* chaincmd should make config nodes instead of full nodes

* add documentation for using makeConfigNode instead of makeFullNode;

* add documentation to functions

* code style
2020-08-06 10:24:36 +03:00
Péter Szilágyi
4fde0cabc1 Merge pull request #21411 from holiman/fix_codelookup
core/vm: avoid map lookups for accessing jumpdest analysis
2020-08-06 08:09:15 +03:00
rene
4a04127ce3 cmd/geth: fix import / export issues related to DB unavailability (#21414)
* should fix import / export issues related to DB unavailability

* document reason for makeConfigNode

* fix comment

* comment consistency

* remove comments

* lint
2020-08-06 08:02:05 +03:00
rene
2de37f28e0 downloader: add eth65 tests (#21383)
* eth65 tests

linted

* remove non-latest eth light tests
2020-08-05 12:22:29 +03:00
Robert Zaremba
5a88a7cf5b core: use errors.Is for consensus errors check (#21095) 2020-08-05 09:52:54 +02:00
Felix Lange
1d25039ff5 p2p/nat: limit UPNP request concurrency (#21390)
This adds a lock around requests because some routers can't handle
concurrent requests. Requests are also rate-limited.
 
The Map function requests a new mapping exactly when the map timeout
occurs instead of 5 minutes earlier. This should prevent duplicate mappings.
2020-08-05 09:51:37 +02:00
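An illustrative sketch of the serialization and rate limiting described above, with made-up names and timings rather than the actual p2p/nat code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// upnpSketch serializes requests and enforces a minimum gap between them.
type upnpSketch struct {
	mu      sync.Mutex
	lastReq time.Time
}

func (u *upnpSketch) request(name string) {
	u.mu.Lock() // only one request in flight at a time
	defer u.mu.Unlock()

	// Rate-limit: keep a minimum gap between consecutive requests.
	if wait := 200*time.Millisecond - time.Since(u.lastReq); wait > 0 {
		time.Sleep(wait)
	}
	u.lastReq = time.Now()
	fmt.Println("sending", name, "request")
}

func main() {
	u := &upnpSketch{}
	var wg sync.WaitGroup
	for _, op := range []string{"AddPortMapping", "GetExternalIPAddress"} {
		wg.Add(1)
		go func(op string) { defer wg.Done(); u.request(op) }(op)
	}
	wg.Wait()
}
```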
Martin Holst Swende
8ead45c20b core/vm: avoid map lookups for accessing jumpdest analysis 2020-08-04 15:45:35 +02:00
Martin Holst Swende
82a9e11058 ethstats: avoid concurrent write on websocket (#21404)
Fixes #21403
2020-08-04 12:21:51 +02:00
Hao Duan
b35e4fce99 core: avoid modification of accountSet cache in tx_pool (#21159)
* core: avoid modification of accountSet cache in tx_pool

When runReorg runs, we may copy the dirtyAccounts' accountSet cache into promoteAddrs,
the list of accounts to be promoted. However, if a reset request arrives at the
same time, we may reuse promoteAddrs and modify the cached content, which defeats
the original intention of the accountSet cache. So we need to make a new
slice here to avoid modifying the accountSet cache.

* core: fix flatten condition + comment

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-04 11:51:53 +02:00
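A generic sketch of the aliasing hazard and the copy-before-modify fix described above; the slice contents are illustrative, not the tx_pool types.

```go
package main

import "fmt"

func main() {
	// cache stands in for the accountSet's flattened address cache.
	cache := []string{"0xaa", "0xbb"}

	// Hazard: reusing the cached slice directly means an in-place edit
	// rewrites the cache as well.
	reused := cache
	reused[0], reused[1] = reused[1], reused[0]
	fmt.Println(cache) // [0xbb 0xaa]: the cache was mutated

	// Fix: copy into a fresh slice before modifying, as the change above
	// does when building promoteAddrs during a reset.
	fresh := append([]string(nil), cache...)
	fresh[0], fresh[1] = fresh[1], fresh[0]
	fmt.Println(cache, fresh) // the cache is left untouched this time
}
```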
Adam Schmideg
e24e05dd01 cmd/devp2p: print enode:// URL in enrdump (#21270)
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-04 11:33:07 +02:00
Natsu Kagami
90dedea40f signer: EIP 712, parse bytes and bytesX as hex strings + correct padding (#21307)
* Handle hex strings for bytesX types

* Add tests for parseBytes

* Improve tests

* Return nil bytes if error is non-nil

* Right-pad instead of left-pad bytes

* More tests
2020-08-03 21:53:12 +02:00
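A small sketch of the right-padding rule, assuming the common and hexutil helpers; the bytes3 value is illustrative.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	// A bytes3 value supplied as a hex string in typed data: decode it and
	// right-pad to a full 32-byte word, matching the padding rule above.
	raw, err := hexutil.Decode("0xdeadbe")
	if err != nil {
		panic(err)
	}
	word := common.RightPadBytes(raw, 32)
	fmt.Printf("%x\n", word) // deadbe00...00 (right-padded, not left-padded)
}
```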
rene
c0c01612e9 node: refactor package node (#21105)
This PR significantly changes the APIs for instantiating Ethereum nodes in
a Go program. The new APIs are not backwards-compatible, but we feel that
this is made up for by the much simpler way of registering services on
node.Node. You can find more information and rationale in the design
document: https://gist.github.com/renaynay/5bec2de19fde66f4d04c535fd24f0775.

There is also a new feature in Node's Go API: it is now possible to
register arbitrary handlers on the user-facing HTTP server. In geth, this
facility is used to enable GraphQL.

There is a single minor change relevant for geth users in this PR: The
GraphQL API is no longer available separately from the JSON-RPC HTTP
server. If you want GraphQL, you need to enable it using the
./geth --http --graphql flag combination.

The --graphql.port and --graphql.addr flags are no longer available.
2020-08-03 19:40:46 +02:00
Natsu Kagami
b2b14e6ce3 signer/core: EIP-712 encoded data should not reject a Domain without a ChainId (#21306)
* Do not check for a non-nil ChainId

* Add encoding test
2020-08-03 15:30:32 +02:00
rene
290d6bd903 rpc: add SetHeader method to Client (#21392)
Resolves #20163

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-03 14:08:42 +02:00
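A minimal usage sketch; the endpoint and header value are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Placeholder HTTP endpoint; SetHeader applies to HTTP-based clients.
	client, err := rpc.Dial("https://example.org/rpc")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// SetHeader adds a custom HTTP header to every subsequent request.
	client.SetHeader("Authorization", "Bearer <token>")

	var head string
	if err := client.CallContext(context.Background(), &head, "eth_blockNumber"); err != nil {
		log.Fatal(err)
	}
	log.Println("latest block:", head)
}
```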
Felix Lange
9c2ac6fbd5 rpc: remove silly use of ReadVarint in subscription ID generator (#21391)
Found by @protolambda
2020-07-31 16:20:31 +02:00
Péter Szilágyi
a00dc5095b Merge pull request #21358 from hendrikhofstadt/fix/tx-sort-time
core: sort txs at the same gas price by received time
2020-07-30 10:23:36 +03:00
meowsbits
ff90894636 core/rawdb: convert some comments to godoc convention (#21384) 2020-07-29 21:53:59 +02:00
ucwong
9e04c5ec83 core/bloombits: use single channel for shutdown (#20878)
This replaces the two-stage shutdown scheme with the one we
use almost everywhere else: a single quit channel signalling
termination.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-29 13:49:12 +02:00
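A generic sketch of the single quit-channel pattern referred to above; the names and ticker-driven work loop are illustrative, not the bloombits code itself.

```go
package main

import (
	"fmt"
	"time"
)

// pump runs a background loop until its quit channel is closed.
type pump struct {
	quit chan struct{} // closed to request termination
	done chan struct{} // closed by the loop once it has stopped
}

func newPump() *pump {
	p := &pump{quit: make(chan struct{}), done: make(chan struct{})}
	go p.loop()
	return p
}

func (p *pump) loop() {
	defer close(p.done)
	tick := time.NewTicker(10 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-tick.C:
			// ... do one unit of work ...
		case <-p.quit:
			return // the single closed channel signals shutdown
		}
	}
}

func (p *pump) Close() {
	close(p.quit)
	<-p.done // wait until the loop has actually exited
}

func main() {
	p := newPump()
	time.Sleep(30 * time.Millisecond)
	p.Close()
	fmt.Println("stopped")
}
```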
Julian Y
abf2d7d74f build: use -trimpath flag when building executables (#21374)
* Disable symbol table and DWARF generation by default.
Trimpath if compiling with Go >= 1.13

* Set Go to minimum version 1.13. Revert debug symbol changes.
2020-07-29 13:48:03 +03:00
rene
1976bb3df0 eth/downloader: remove eth62 (#21378)
* init

notes

removed some mentions of eth62, bumped protocol err too old to >=63

* remove sanity checks and bump supported protocol version up to 63

* remove 62 tests, still need to add 65

* remove 65 tests
2020-07-29 13:47:19 +03:00
gary rong
350a0490ab les: fix unittest (#21382) 2020-07-29 13:44:14 +03:00
Robert Zaremba
37564ceda6 miner: refactor helper functions in worker.go (#21044)
This reduces complexity of some lengthy functions in worker.go,
making the code easier to read.
2020-07-28 18:16:49 +02:00
gary rong
28c5a8a54b les: implement new les fetcher (#20692)
* cmd, consensus, eth, les: implement light fetcher

* les: address comment

* les: address comment

* les: address comments

* les: check td after delivery

* les: add linearExpiredValue for error counter

* les: fix import

* les: fix dead lock

* les: order announces by td

* les: encapsulate invalid counter

* les: address comment

* les: add more checks during the delivery

* les: fix log

* eth, les: fix lint

* eth/fetcher: address comment
2020-07-28 18:02:35 +03:00
Péter Szilágyi
298a19bbc6 core: API-less transaction time sorting 2020-07-28 17:13:40 +03:00
Hendrik Hofstadt
c47052a580 core: sort txs at the same gas price by received time 2020-07-28 17:13:39 +03:00
gary rong
93da0cf8a1 cmd, core, eth, light, trie: dump clean cache periodically (#20391)
* cmd, core, eth, light, trie: dump clean cache periodically

* eth: update config

* trie: minor fix

* core, trie: address comments

* eth: remove useless

* trie: print clean cache dump start too

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-07-28 16:30:31 +03:00
6543
79ce5537ab signer/storage: fix a badly ordered error check (#21379) 2020-07-28 12:47:05 +03:00
Péter Szilágyi
8e7bee9b56 params: begin v1.9.19 release cycle 2020-07-27 14:58:45 +03:00
Péter Szilágyi
f538259187 params: release Geth v1.9.18 2020-07-27 14:53:53 +03:00
gary rong
b1be979443 params: upgrade CHTs (#21376) 2020-07-27 12:57:15 +03:00
Péter Szilágyi
e997f92caf Merge pull request #21368 from holiman/update_uint256
deps: update uint256 to v1.1.1
2020-07-24 15:02:52 +03:00
Martin Holst Swende
56434bfa89 deps: update uint256 to v1.1.1 2020-07-24 14:00:08 +02:00
Péter Szilágyi
6793ffa12b Merge pull request #21300 from rjl493456442/txpool-fix-queued-evictions
core: fix queued transaction eviction
2020-07-24 11:14:42 +03:00
rjl493456442
5413df1dfa core: fix heartbeat in txpool
core: address comment
2020-07-24 11:12:59 +03:00
villanuevawill
c374447401 core: fix queued transaction eviction
Solves issue #20582. Non-executable transactions should not be evicted on each tick if there are no promoted transactions or if a pending reset empties the pending list. Tests and logging were expanded to handle these cases in the future.

core/tx_pool: use a ts for each tx in the queue, but only update the heartbeat on promotion or pending replaced

queuedTs proper naming
2020-07-24 11:11:57 +03:00
Martin Holst Swende
105922180f eth/downloader: refactor downloader + queue (#21263)
* eth/downloader: refactor downloader + queue

downloader, fetcher: throttle-metrics, fetcher filter improvements, standalone resultcache

downloader: more accurate deliverytime calculation, less mem overhead in state requests

downloader/queue: increase underlying buffer of results, new throttle mechanism

eth/downloader: updates to tests

eth/downloader: fix up some review concerns

eth/downloader/queue: minor fixes

eth/downloader: minor fixes after review call

eth/downloader: testcases for queue.go

eth/downloader: minor change, don't set progress unless progress...

eth/downloader: fix flaw which prevented useless peers from being dropped

eth/downloader: try to fix tests

eth/downloader: verify non-deliveries against advertised remote head

eth/downloader: fix flaw with checking closed-status causing hang

eth/downloader: hashing avoidance

eth/downloader: review concerns + simplify resultcache and queue

eth/downloader: add back some locks, address review concerns

downloader/queue: fix remaining lock flaw

* eth/downloader: nitpick fixes

* eth/downloader: remove the *2*3/4 throttling threshold dance

* eth/downloader: print correct throttle threshold in stats

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-07-24 10:46:26 +03:00
Felix Lange
3a57eecc69 mobile: fix build on iOS (#21362)
This fixes the iOS framework build by naming the second parameter of the
Signer interface method. The name is important because it becomes part
of the objc method signature.

Fixes #21340
2020-07-23 19:15:40 +02:00
Felix Lange
997b55236e build: fix GOBIN for gomobile commands (#21361) 2020-07-23 12:34:08 +02:00
meowsbits
4c268e65a0 cmd/utils: implement configurable developer (--dev) account options (#21301)
* geth,utils: implement configurable developer account options

Prior to this change --dev (developer) mode
generated one account with an empty password,
irrespective of existing --password and --miner.etherbase
options.

This change makes --dev mode compatible with these
existing flags.

--dev mode may now be used in conjunction with
--password and --miner.etherbase flags to configure
the developer faucet using an existing keystore or
in creating a new account.

Signed-off-by: meows <b5c6@protonmail.com>

* main: remove key/pass flags from usage developer section

These flags are included already in other sections,
and it is not desired to duplicate them.

They were originally included in this section
along with added support for these flags in the
developer mode.

Signed-off-by: meows <b5c6@protonmail.com>
2020-07-23 06:47:34 +03:00
Péter Szilágyi
0b53e485d8 Merge pull request #21352 from karalabe/dev-noinit-genesis
cmd/utils: reuse existing genesis in persistent dev mode
2020-07-22 17:39:08 +03:00
Péter Szilágyi
9e22e912e3 cmd/utils: reuse existing genesis in persistent dev mode 2020-07-21 15:58:29 +03:00
rene
123864fc05 whisper/whisperv6: improve test error messages (#21348) 2020-07-21 10:53:06 +02:00
Sammy Libre
7163a6664e ethclient: serialize negative block number as "pending" (#21177)
Fixes #21175

Co-authored-by: sammy007 <sammy007@users.noreply.github.com>
Co-authored-by: Adam Schmideg <adamschmideg@users.noreply.github.com>
2020-07-21 10:51:15 +02:00
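A small sketch of the resulting client-side convention, assuming a block number of -1 is serialized as "pending" as this change describes; the endpoint is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := context.Background()
	// nil still means "latest"; with this change, -1 is sent as "pending".
	latest, err := client.HeaderByNumber(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	pending, err := client.HeaderByNumber(ctx, big.NewInt(-1))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest:", latest.Number, "pending:", pending.Number)
}
```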
Binacs
4366c45e4e les: make clientPool.connectedBias configurable (#21305) 2020-07-21 10:23:40 +02:00
Péter Szilágyi
3a52c4dcf2 Merge pull request #21336 from karalabe/tiny-ref-optimization
core/vm: use pointers to operations vs. copy by value
2020-07-21 10:53:12 +03:00
Péter Szilágyi
722b742780 params: begin v1.9.18 release cycle 2020-07-20 15:58:33 +03:00
Péter Szilágyi
508891e64b core/vm: use pointers to operations vs. copy by value 2020-07-16 15:32:01 +03:00
541 changed files with 33825 additions and 21265 deletions

1
.github/CODEOWNERS vendored
View File

@@ -20,4 +20,3 @@ p2p/simulations @fjl
p2p/protocols @fjl
p2p/testing @fjl
signer/ @holiman
whisper/ @gballet

View File

@@ -1,8 +1,10 @@
Hi there,
Please note that this is an issue tracker reserved for bug reports and feature requests.
For general questions please use [discord](https://discord.gg/nthXNEv) or the Ethereum stack exchange at https://ethereum.stackexchange.com.
---
name: Report a bug
about: Something with go-ethereum is not working as expected
title: ''
labels: 'type:bug'
assignees: ''
---
#### System information

17
.github/ISSUE_TEMPLATE/feature.md vendored Normal file
View File

@@ -0,0 +1,17 @@
---
name: Request a feature
about: Report a missing feature - e.g. as a step before submitting a PR
title: ''
labels: 'type:feature'
assignees: ''
---
# Rationale
Why should this feature exist?
What are the use-cases?
# Implementation
Do you have ideas regarding the implementation of this feature?
Are you willing to implement this feature?

9
.github/ISSUE_TEMPLATE/question.md vendored Normal file
View File

@@ -0,0 +1,9 @@
---
name: Ask a question
about: Something is unclear
title: ''
labels: 'type:docs'
assignees: ''
---
This should only be used in very rare cases e.g. if you are not 100% sure if something is a bug or asking a question that leads to improving the documentation. For general questions please use [discord](https://discord.gg/nthXNEv) or the Ethereum stack exchange at https://ethereum.stackexchange.com.

View File

@@ -16,7 +16,7 @@ jobs:
- stage: lint
os: linux
dist: xenial
go: 1.14.x
go: 1.15.x
env:
- lint
git:
@@ -24,65 +24,12 @@ jobs:
script:
- go run build/ci.go lint
- stage: build
os: linux
dist: xenial
go: 1.13.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# These are the latest Go versions.
- stage: build
os: linux
arch: amd64
dist: xenial
go: 1.14.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
if: type = pull_request
os: linux
arch: arm64
dist: xenial
go: 1.14.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: osx
osx_image: xcode11.3
go: 1.14.x
env:
- GO111MODULE=on
script:
- echo "Increase the maximum number of open file descriptors on macOS"
- NOFILE=20480
- sudo sysctl -w kern.maxfiles=$NOFILE
- sudo sysctl -w kern.maxfilesperproc=$NOFILE
- sudo launchctl limit maxfiles $NOFILE $NOFILE
- sudo launchctl limit maxfiles
- ulimit -S -n $NOFILE
- ulimit -n
- unset -f cd # workaround for https://github.com/travis-ci/travis-ci/issues/8703
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# This builder does the Ubuntu PPA upload
- stage: build
if: type = push
os: linux
dist: xenial
go: 1.14.x
go: 1.15.x
env:
- ubuntu-ppa
- GO111MODULE=on
@@ -99,7 +46,7 @@ jobs:
- python-paramiko
script:
- echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
- go run build/ci.go debsrc -goversion 1.14.2 -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
- go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
# This builder does the Linux Azure uploads
- stage: build
@@ -107,7 +54,7 @@ jobs:
os: linux
dist: xenial
sudo: required
go: 1.14.x
go: 1.15.x
env:
- azure-linux
- GO111MODULE=on
@@ -119,23 +66,23 @@ jobs:
- gcc-multilib
script:
# Build for the primary platforms that Trusty can manage
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -arch 386
- go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo -arch 386
- go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# Switch over GCC to cross compilation (breaks 386, hence why do it here only)
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-arm-linux-gnueabihf libc6-dev-armhf-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
- sudo ln -s /usr/include/asm-generic /usr/include/asm
- GOARM=5 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=6 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=7 go run build/ci.go install -arch arm -cc arm-linux-gnueabihf-gcc
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -arch arm64 -cc aarch64-linux-gnu-gcc
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=5 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- GOARM=6 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- GOARM=7 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabihf-gcc
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo -arch arm64 -cc aarch64-linux-gnu-gcc
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# This builder does the Linux Azure MIPS xgo uploads
- stage: build
@@ -144,7 +91,7 @@ jobs:
dist: xenial
services:
- docker
go: 1.14.x
go: 1.15.x
env:
- azure-linux-mips
- GO111MODULE=on
@@ -153,19 +100,19 @@ jobs:
script:
- go run build/ci.go xgo --alltools -- --targets=linux/mips --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips; do mv -f "${bin}" "${bin/-linux-mips/}"; done
- go run build/ci.go archive -arch mips -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mipsle --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mipsle; do mv -f "${bin}" "${bin/-linux-mipsle/}"; done
- go run build/ci.go archive -arch mipsle -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mipsle -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mips64 --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips64; do mv -f "${bin}" "${bin/-linux-mips64/}"; done
- go run build/ci.go archive -arch mips64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips64 -type tar -signer LINUX_SIGNING_KEY signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mips64le --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips64le; do mv -f "${bin}" "${bin/-linux-mips64le/}"; done
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# This builder does the Android Maven and Azure uploads
- stage: build
@@ -192,7 +139,7 @@ jobs:
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- curl https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz | tar -xz
- curl https://dl.google.com/go/go1.15.5.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go
@@ -204,13 +151,13 @@ jobs:
- mkdir -p $GOPATH/src/github.com/ethereum
- ln -s `pwd` $GOPATH/src/github.com/ethereum/go-ethereum
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -signify SIGNIFY_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
# This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
- stage: build
if: type = push
os: osx
go: 1.14.x
go: 1.15.x
env:
- azure-osx
- azure-ios
@@ -219,8 +166,8 @@ jobs:
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# Build the iOS framework and upload it to CocoaPods and Azure
- gem uninstall cocoapods -a -x
@@ -235,14 +182,45 @@ jobs:
# Workaround for https://github.com/golang/go/issues/23749
- export CGO_CFLAGS_ALLOW='-fmodules|-fblocks|-fobjc-arc'
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -signify SIGNIFY_KEY -deploy trunk -upload gethstore/builds
# These builders run the tests
- stage: build
os: linux
arch: amd64
dist: xenial
go: 1.15.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
if: type = pull_request
os: linux
arch: arm64
dist: xenial
go: 1.15.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: linux
dist: xenial
go: 1.14.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test -coverage $TEST_PACKAGES
# This builder does the Azure archive purges to avoid accumulating junk
- stage: build
if: type = cron
os: linux
dist: xenial
go: 1.14.x
go: 1.15.x
env:
- azure-purge
- GO111MODULE=on

59
COPYING
View File

@@ -1,7 +1,7 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2014 The go-ethereum Authors.
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@@ -616,4 +616,59 @@ above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

View File

@@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.14-alpine as builder
FROM golang:1.15-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
@@ -12,5 +12,5 @@ FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
EXPOSE 8545 8546 8547 30303 30303/udp
EXPOSE 8545 8546 30303 30303/udp
ENTRYPOINT ["geth"]

View File

@@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.14-alpine as builder
FROM golang:1.15-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
@@ -12,4 +12,4 @@ FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/* /usr/local/bin/
EXPOSE 8545 8546 8547 30303 30303/udp
EXPOSE 8545 8546 30303 30303/udp

View File

@@ -24,7 +24,9 @@ android:
$(GORUN) build/ci.go aar --local
@echo "Done building."
@echo "Import \"$(GOBIN)/geth.aar\" to use the library."
@echo "Import \"$(GOBIN)/geth-sources.jar\" to add javadocs"
@echo "For more info see https://stackoverflow.com/questions/20994336/android-studio-how-to-attach-javadoc"
ios:
$(GORUN) build/ci.go xcode --local
@echo "Done building."

View File

@@ -80,39 +80,59 @@ func (abi ABI) Pack(name string, args ...interface{}) ([]byte, error) {
return append(method.ID, arguments...), nil
}
// Unpack output in v according to the abi specification
func (abi ABI) Unpack(v interface{}, name string, data []byte) (err error) {
func (abi ABI) getArguments(name string, data []byte) (Arguments, error) {
// since there can't be naming collisions with contracts and events,
// we need to decide whether we're calling a method or an event
var args Arguments
if method, ok := abi.Methods[name]; ok {
if len(data)%32 != 0 {
return fmt.Errorf("abi: improperly formatted output: %s - Bytes: [%+v]", string(data), data)
return nil, fmt.Errorf("abi: improperly formatted output: %s - Bytes: [%+v]", string(data), data)
}
return method.Outputs.Unpack(v, data)
args = method.Outputs
}
if event, ok := abi.Events[name]; ok {
return event.Inputs.Unpack(v, data)
args = event.Inputs
}
return fmt.Errorf("abi: could not locate named method or event")
if args == nil {
return nil, errors.New("abi: could not locate named method or event")
}
return args, nil
}
// UnpackIntoMap unpacks a log into the provided map[string]interface{}
// Unpack unpacks the output according to the abi specification.
func (abi ABI) Unpack(name string, data []byte) ([]interface{}, error) {
args, err := abi.getArguments(name, data)
if err != nil {
return nil, err
}
return args.Unpack(data)
}
// UnpackIntoInterface unpacks the output in v according to the abi specification.
// It performs an additional copy. Please only use, if you want to unpack into a
// structure that does not strictly conform to the abi structure (e.g. has additional arguments)
func (abi ABI) UnpackIntoInterface(v interface{}, name string, data []byte) error {
args, err := abi.getArguments(name, data)
if err != nil {
return err
}
unpacked, err := args.Unpack(data)
if err != nil {
return err
}
return args.Copy(v, unpacked)
}
// UnpackIntoMap unpacks a log into the provided map[string]interface{}.
func (abi ABI) UnpackIntoMap(v map[string]interface{}, name string, data []byte) (err error) {
// since there can't be naming collisions with contracts and events,
// we need to decide whether we're calling a method or an event
if method, ok := abi.Methods[name]; ok {
if len(data)%32 != 0 {
return fmt.Errorf("abi: improperly formatted output")
}
return method.Outputs.UnpackIntoMap(v, data)
args, err := abi.getArguments(name, data)
if err != nil {
return err
}
if event, ok := abi.Events[name]; ok {
return event.Inputs.UnpackIntoMap(v, data)
}
return fmt.Errorf("abi: could not locate named method or event")
return args.UnpackIntoMap(v, data)
}
// UnmarshalJSON implements json.Unmarshaler interface
// UnmarshalJSON implements json.Unmarshaler interface.
func (abi *ABI) UnmarshalJSON(data []byte) error {
var fields []struct {
Type string
@@ -201,8 +221,8 @@ func (abi *ABI) overloadedEventName(rawName string) string {
return name
}
// MethodById looks up a method by the 4-byte id
// returns nil if none found
// MethodById looks up a method by the 4-byte id,
// returns nil if none found.
func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
if len(sigdata) < 4 {
return nil, fmt.Errorf("data too short (%d bytes) for abi method lookup", len(sigdata))
@@ -250,10 +270,10 @@ func UnpackRevert(data []byte) (string, error) {
if !bytes.Equal(data[:4], revertSelector) {
return "", errors.New("invalid data for unpacking")
}
var reason string
typ, _ := NewType("string", "", nil)
if err := (Arguments{{Type: typ}}).Unpack(&reason, data[4:]); err != nil {
unpacked, err := (Arguments{{Type: typ}}).Unpack(data[4:])
if err != nil {
return "", err
}
return reason, nil
return unpacked[0].(string), nil
}

View File

@@ -33,7 +33,7 @@ import (
const jsondata = `
[
{ "type" : "function", "name" : "", "stateMutability" : "view" },
{ "type" : "function", "name" : ""},
{ "type" : "function", "name" : "balance", "stateMutability" : "view" },
{ "type" : "function", "name" : "send", "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "type" : "function", "name" : "test", "inputs" : [ { "name" : "number", "type" : "uint32" } ] },
@@ -88,7 +88,7 @@ var (
)
var methods = map[string]Method{
"": NewMethod("", "", Function, "view", false, false, nil, nil),
"": NewMethod("", "", Function, "", false, false, nil, nil),
"balance": NewMethod("balance", "balance", Function, "view", false, false, nil, nil),
"send": NewMethod("send", "send", Function, "", false, false, []Argument{{"amount", Uint256, false}}, nil),
"test": NewMethod("test", "test", Function, "", false, false, []Argument{{"number", Uint32, false}}, nil),
@@ -181,18 +181,15 @@ func TestConstructor(t *testing.T) {
if err != nil {
t.Error(err)
}
v := struct {
A *big.Int
B *big.Int
}{new(big.Int), new(big.Int)}
//abi.Unpack(&v, "", packed)
if err := abi.Constructor.Inputs.Unpack(&v, packed); err != nil {
unpacked, err := abi.Constructor.Inputs.Unpack(packed)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(v.A, big.NewInt(1)) {
if !reflect.DeepEqual(unpacked[0], big.NewInt(1)) {
t.Error("Unable to pack/unpack from constructor")
}
if !reflect.DeepEqual(v.B, big.NewInt(2)) {
if !reflect.DeepEqual(unpacked[1], big.NewInt(2)) {
t.Error("Unable to pack/unpack from constructor")
}
}
@@ -743,7 +740,7 @@ func TestUnpackEvent(t *testing.T) {
}
var ev ReceivedEvent
err = abi.Unpack(&ev, "received", data)
err = abi.UnpackIntoInterface(&ev, "received", data)
if err != nil {
t.Error(err)
}
@@ -752,7 +749,7 @@ func TestUnpackEvent(t *testing.T) {
Sender common.Address
}
var receivedAddrEv ReceivedAddrEvent
err = abi.Unpack(&receivedAddrEv, "receivedAddr", data)
err = abi.UnpackIntoInterface(&receivedAddrEv, "receivedAddr", data)
if err != nil {
t.Error(err)
}
@@ -1092,7 +1089,7 @@ func TestDoubleDuplicateEventNames(t *testing.T) {
}
// TestUnnamedEventParam checks that an event with unnamed parameters is
// correctly handled
// correctly handled.
// The test runs the abi of the following contract.
// contract TestEvent {
// event send(uint256, uint256);

View File

@@ -41,7 +41,7 @@ type ArgumentMarshaling struct {
Indexed bool
}
// UnmarshalJSON implements json.Unmarshaler interface
// UnmarshalJSON implements json.Unmarshaler interface.
func (argument *Argument) UnmarshalJSON(data []byte) error {
var arg ArgumentMarshaling
err := json.Unmarshal(data, &arg)
@@ -59,7 +59,7 @@ func (argument *Argument) UnmarshalJSON(data []byte) error {
return nil
}
// NonIndexed returns the arguments with indexed arguments filtered out
// NonIndexed returns the arguments with indexed arguments filtered out.
func (arguments Arguments) NonIndexed() Arguments {
var ret []Argument
for _, arg := range arguments {
@@ -70,37 +70,29 @@ func (arguments Arguments) NonIndexed() Arguments {
return ret
}
// isTuple returns true for non-atomic constructs, like (uint,uint) or uint[]
// isTuple returns true for non-atomic constructs, like (uint,uint) or uint[].
func (arguments Arguments) isTuple() bool {
return len(arguments) > 1
}
// Unpack performs the operation hexdata -> Go format
func (arguments Arguments) Unpack(v interface{}, data []byte) error {
// Unpack performs the operation hexdata -> Go format.
func (arguments Arguments) Unpack(data []byte) ([]interface{}, error) {
if len(data) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
return nil, fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
}
return nil // Nothing to unmarshal, return
// Nothing to unmarshal, return default variables
nonIndexedArgs := arguments.NonIndexed()
defaultVars := make([]interface{}, len(nonIndexedArgs))
for index, arg := range nonIndexedArgs {
defaultVars[index] = reflect.New(arg.Type.GetType())
}
return defaultVars, nil
}
// make sure the passed value is arguments pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
}
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
if len(marshalledValues) == 0 {
return fmt.Errorf("abi: Unpack(no-values unmarshalled %T)", v)
}
if arguments.isTuple() {
return arguments.unpackTuple(v, marshalledValues)
}
return arguments.unpackAtomic(v, marshalledValues[0])
return arguments.UnpackValues(data)
}
// UnpackIntoMap performs the operation hexdata -> mapping of argument name to argument value
// UnpackIntoMap performs the operation hexdata -> mapping of argument name to argument value.
func (arguments Arguments) UnpackIntoMap(v map[string]interface{}, data []byte) error {
// Make sure map is not nil
if v == nil {
@@ -122,8 +114,26 @@ func (arguments Arguments) UnpackIntoMap(v map[string]interface{}, data []byte)
return nil
}
// Copy performs the operation go format -> provided struct.
func (arguments Arguments) Copy(v interface{}, values []interface{}) error {
// make sure the passed value is arguments pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
}
if len(values) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to copy no values while %d arguments are expected", len(arguments))
}
return nil // Nothing to copy, return
}
if arguments.isTuple() {
return arguments.copyTuple(v, values)
}
return arguments.copyAtomic(v, values[0])
}
// unpackAtomic unpacks ( hexdata -> go ) a single value
func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues interface{}) error {
func (arguments Arguments) copyAtomic(v interface{}, marshalledValues interface{}) error {
dst := reflect.ValueOf(v).Elem()
src := reflect.ValueOf(marshalledValues)
@@ -133,8 +143,8 @@ func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues interfac
return set(dst, src)
}
// unpackTuple unpacks ( hexdata -> go ) a batch of values.
func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interface{}) error {
// copyTuple copies a batch of values from marshalledValues to v.
func (arguments Arguments) copyTuple(v interface{}, marshalledValues []interface{}) error {
value := reflect.ValueOf(v).Elem()
nonIndexedArgs := arguments.NonIndexed()
@@ -207,13 +217,13 @@ func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
return retval, nil
}
// PackValues performs the operation Go format -> Hexdata
// It is the semantic opposite of UnpackValues
// PackValues performs the operation Go format -> Hexdata.
// It is the semantic opposite of UnpackValues.
func (arguments Arguments) PackValues(args []interface{}) ([]byte, error) {
return arguments.Pack(args...)
}
// Pack performs the operation Go format -> Hexdata
// Pack performs the operation Go format -> Hexdata.
func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
// Make sure arguments match up and pack them
abiArgs := arguments

View File

@@ -21,6 +21,7 @@ import (
"errors"
"io"
"io/ioutil"
"math/big"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/external"
@@ -28,11 +29,21 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
)
// ErrNoChainID is returned whenever the user failed to specify a chain id.
var ErrNoChainID = errors.New("no chain id specified")
// ErrNotAuthorized is returned when an account is not properly unlocked.
var ErrNotAuthorized = errors.New("not authorized to sign this account")
// NewTransactor is a utility method to easily create a transaction signer from
// an encrypted json key stream and the associated passphrase.
//
// Deprecated: Use NewTransactorWithChainID instead.
func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
log.Warn("WARNING: NewTransactor has been deprecated in favour of NewTransactorWithChainID")
json, err := ioutil.ReadAll(keyin)
if err != nil {
return nil, err
@@ -45,13 +56,17 @@ func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
}
// NewKeyStoreTransactor is a utility method to easily create a transaction signer from
// an decrypted key from a keystore
// an decrypted key from a keystore.
//
// Deprecated: Use NewKeyStoreTransactorWithChainID instead.
func NewKeyStoreTransactor(keystore *keystore.KeyStore, account accounts.Account) (*TransactOpts, error) {
log.Warn("WARNING: NewKeyStoreTransactor has been deprecated in favour of NewTransactorWithChainID")
signer := types.HomesteadSigner{}
return &TransactOpts{
From: account.Address,
Signer: func(signer types.Signer, address common.Address, tx *types.Transaction) (*types.Transaction, error) {
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, errors.New("not authorized to sign this account")
return nil, ErrNotAuthorized
}
signature, err := keystore.SignHash(account, signer.Hash(tx).Bytes())
if err != nil {
@@ -64,13 +79,17 @@ func NewKeyStoreTransactor(keystore *keystore.KeyStore, account accounts.Account
// NewKeyedTransactor is a utility method to easily create a transaction signer
// from a single private key.
//
// Deprecated: Use NewKeyedTransactorWithChainID instead.
func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
log.Warn("WARNING: NewKeyedTransactor has been deprecated in favour of NewKeyedTransactorWithChainID")
keyAddr := crypto.PubkeyToAddress(key.PublicKey)
signer := types.HomesteadSigner{}
return &TransactOpts{
From: keyAddr,
Signer: func(signer types.Signer, address common.Address, tx *types.Transaction) (*types.Transaction, error) {
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != keyAddr {
return nil, errors.New("not authorized to sign this account")
return nil, ErrNotAuthorized
}
signature, err := crypto.Sign(signer.Hash(tx).Bytes(), key)
if err != nil {
@@ -81,14 +100,73 @@ func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
}
}
// NewTransactorWithChainID is a utility method to easily create a transaction signer from
// an encrypted json key stream and the associated passphrase.
func NewTransactorWithChainID(keyin io.Reader, passphrase string, chainID *big.Int) (*TransactOpts, error) {
json, err := ioutil.ReadAll(keyin)
if err != nil {
return nil, err
}
key, err := keystore.DecryptKey(json, passphrase)
if err != nil {
return nil, err
}
return NewKeyedTransactorWithChainID(key.PrivateKey, chainID)
}
// NewKeyStoreTransactorWithChainID is a utility method to easily create a transaction signer from
// a decrypted key from a keystore.
func NewKeyStoreTransactorWithChainID(keystore *keystore.KeyStore, account accounts.Account, chainID *big.Int) (*TransactOpts, error) {
if chainID == nil {
return nil, ErrNoChainID
}
signer := types.NewEIP155Signer(chainID)
return &TransactOpts{
From: account.Address,
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, ErrNotAuthorized
}
signature, err := keystore.SignHash(account, signer.Hash(tx).Bytes())
if err != nil {
return nil, err
}
return tx.WithSignature(signer, signature)
},
}, nil
}
// NewKeyedTransactorWithChainID is a utility method to easily create a transaction signer
// from a single private key.
func NewKeyedTransactorWithChainID(key *ecdsa.PrivateKey, chainID *big.Int) (*TransactOpts, error) {
keyAddr := crypto.PubkeyToAddress(key.PublicKey)
if chainID == nil {
return nil, ErrNoChainID
}
signer := types.NewEIP155Signer(chainID)
return &TransactOpts{
From: keyAddr,
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != keyAddr {
return nil, ErrNotAuthorized
}
signature, err := crypto.Sign(signer.Hash(tx).Bytes(), key)
if err != nil {
return nil, err
}
return tx.WithSignature(signer, signature)
},
}, nil
}
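Callers migrating off the deprecated NewKeyedTransactor now supply the chain id explicitly; a minimal sketch (1337 matches the simulated backend used in the tests further down):

key, _ := crypto.GenerateKey()
auth, err := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
if err != nil {
	log.Fatal(err) // bind.ErrNoChainID when chainID is nil
}
_ = auth // auth.Signer now has the func(common.Address, *types.Transaction) shape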
// NewClefTransactor is a utility method to easily create a transaction signer
// with a clef backend.
func NewClefTransactor(clef *external.ExternalSigner, account accounts.Account) *TransactOpts {
return &TransactOpts{
From: account.Address,
Signer: func(signer types.Signer, address common.Address, transaction *types.Transaction) (*types.Transaction, error) {
Signer: func(address common.Address, transaction *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, errors.New("not authorized to sign this account")
return nil, ErrNotAuthorized
}
return clef.SignTx(account, transaction, nil) // Clef enforces its own chain id
},

View File

@@ -41,7 +41,7 @@ var (
ErrNoCodeAfterDeploy = errors.New("no contract code after deployment")
)
// ContractCaller defines the methods needed to allow operating with contract on a read
// ContractCaller defines the methods needed to allow operating with a contract on a read
// only basis.
type ContractCaller interface {
// CodeAt returns the code of the given account. This is needed to differentiate
@@ -62,8 +62,8 @@ type PendingContractCaller interface {
PendingCallContract(ctx context.Context, call ethereum.CallMsg) ([]byte, error)
}
// ContractTransactor defines the methods needed to allow operating with contract
// on a write only basis. Beside the transacting method, the remainder are helpers
// ContractTransactor defines the methods needed to allow operating with a contract
// on a write only basis. Besides the transacting method, the remainder are helpers
// used when the user does not provide some needed values, but rather leaves it up
// to the transactor to decide.
type ContractTransactor interface {

View File

@@ -45,7 +45,7 @@ import (
"github.com/ethereum/go-ethereum/rpc"
)
// This nil assignment ensures compile time that SimulatedBackend implements bind.ContractBackend.
// This nil assignment ensures at compile time that SimulatedBackend implements bind.ContractBackend.
var _ bind.ContractBackend = (*SimulatedBackend)(nil)
var (
@@ -55,7 +55,7 @@ var (
)
// SimulatedBackend implements bind.ContractBackend, simulating a blockchain in
// the background. Its main purpose is to allow easily testing contract bindings.
// the background. Its main purpose is to allow for easy testing of contract bindings.
// Simulated backend implements the following interfaces:
// ChainReader, ChainStateReader, ContractBackend, ContractCaller, ContractFilterer, ContractTransactor,
// DeployBackend, GasEstimator, GasPricer, LogFilterer, PendingContractCaller, TransactionReader, and TransactionSender
@@ -74,6 +74,7 @@ type SimulatedBackend struct {
// NewSimulatedBackendWithDatabase creates a new binding backend based on the given database
// and uses a simulated blockchain for testing purposes.
// A simulated backend always uses chainID 1337.
func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
genesis := core.Genesis{Config: params.AllEthashProtocolChanges, GasLimit: gasLimit, Alloc: alloc}
genesis.MustCommit(database)
@@ -91,6 +92,7 @@ func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.Genesis
// NewSimulatedBackend creates a new binding backend using a simulated blockchain
// for testing purposes.
// A simulated backend always uses chainID 1337.
func NewSimulatedBackend(alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
return NewSimulatedBackendWithDatabase(rawdb.NewMemoryDatabase(), alloc, gasLimit)
}
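Because the backend pins its chain id to 1337, the transactor has to be built with the same value; a minimal sketch mirroring the updated tests (balance and gas limit are illustrative):

key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()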
@@ -123,10 +125,10 @@ func (b *SimulatedBackend) Rollback() {
func (b *SimulatedBackend) rollback() {
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(int, *core.BlockGen) {})
statedb, _ := b.blockchain.State()
stateDB, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
b.pendingState, _ = state.New(b.pendingBlock.Root(), stateDB.Database(), nil)
}
// stateByBlockNumber retrieves a state by a given blocknumber.
@@ -146,12 +148,12 @@ func (b *SimulatedBackend) CodeAt(ctx context.Context, contract common.Address,
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
return statedb.GetCode(contract), nil
return stateDB.GetCode(contract), nil
}
// BalanceAt returns the wei balance of a certain account in the blockchain.
@@ -159,12 +161,12 @@ func (b *SimulatedBackend) BalanceAt(ctx context.Context, contract common.Addres
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
return statedb.GetBalance(contract), nil
return stateDB.GetBalance(contract), nil
}
// NonceAt returns the nonce of a certain account in the blockchain.
@@ -172,12 +174,12 @@ func (b *SimulatedBackend) NonceAt(ctx context.Context, contract common.Address,
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return 0, err
}
return statedb.GetNonce(contract), nil
return stateDB.GetNonce(contract), nil
}
// StorageAt returns the value of key in the storage of an account in the blockchain.
@@ -185,12 +187,12 @@ func (b *SimulatedBackend) StorageAt(ctx context.Context, contract common.Addres
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
val := statedb.GetState(contract, key)
val := stateDB.GetState(contract, key)
return val[:], nil
}
@@ -222,7 +224,7 @@ func (b *SimulatedBackend) TransactionByHash(ctx context.Context, txHash common.
return nil, false, ethereum.NotFound
}
// BlockByHash retrieves a block based on the block hash
// BlockByHash retrieves a block based on the block hash.
func (b *SimulatedBackend) BlockByHash(ctx context.Context, hash common.Hash) (*types.Block, error) {
b.mu.Lock()
defer b.mu.Unlock()
@@ -293,7 +295,7 @@ func (b *SimulatedBackend) HeaderByNumber(ctx context.Context, block *big.Int) (
return b.blockchain.GetHeaderByNumber(uint64(block.Int64())), nil
}
// TransactionCount returns the number of transactions in a given block
// TransactionCount returns the number of transactions in a given block.
func (b *SimulatedBackend) TransactionCount(ctx context.Context, blockHash common.Hash) (uint, error) {
b.mu.Lock()
defer b.mu.Unlock()
@@ -310,7 +312,7 @@ func (b *SimulatedBackend) TransactionCount(ctx context.Context, blockHash commo
return uint(block.Transactions().Len()), nil
}
// TransactionInBlock returns the transaction for a specific block at a specific index
// TransactionInBlock returns the transaction for a specific block at a specific index.
func (b *SimulatedBackend) TransactionInBlock(ctx context.Context, blockHash common.Hash, index uint) (*types.Transaction, error) {
b.mu.Lock()
defer b.mu.Unlock()
@@ -357,14 +359,14 @@ func newRevertError(result *core.ExecutionResult) *revertError {
}
}
// revertError is an API error that encompassas an EVM revertal with JSON error
// revertError is an API error that encompasses an EVM revert with JSON error
// code and a binary data blob.
type revertError struct {
error
reason string // revert reason hex encoded
}
// ErrorCode returns the JSON error code for a revertal.
// ErrorCode returns the JSON error code for a revert.
// See: https://github.com/ethereum/wiki/wiki/JSON-RPC-Error-Codes-Improvement-Proposal
func (e *revertError) ErrorCode() int {
return 3
@@ -383,11 +385,11 @@ func (b *SimulatedBackend) CallContract(ctx context.Context, call ethereum.CallM
if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) != 0 {
return nil, errBlockNumberUnsupported
}
state, err := b.blockchain.State()
stateDB, err := b.blockchain.State()
if err != nil {
return nil, err
}
res, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), state)
res, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), stateDB)
if err != nil {
return nil, err
}
@@ -479,7 +481,7 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
b.pendingState.RevertToSnapshot(snapshot)
if err != nil {
if err == core.ErrIntrinsicGas {
if errors.Is(err, core.ErrIntrinsicGas) {
return true, nil, nil // Special case, raise gas limit
}
return true, nil, err // Bail out
@@ -525,7 +527,7 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
// callContract implements common code between normal and pending contract calls.
// state is modified during execution, make sure to copy it if necessary.
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, statedb *state.StateDB) (*core.ExecutionResult, error) {
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, stateDB *state.StateDB) (*core.ExecutionResult, error) {
// Ensure message is initialized properly.
if call.GasPrice == nil {
call.GasPrice = big.NewInt(1)
@@ -537,18 +539,19 @@ func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallM
call.Value = new(big.Int)
}
// Set infinite balance to the fake caller account.
from := statedb.GetOrNewStateObject(call.From)
from := stateDB.GetOrNewStateObject(call.From)
from.SetBalance(math.MaxBig256)
// Execute the call.
msg := callmsg{call}
msg := callMsg{call}
evmContext := core.NewEVMContext(msg, block.Header(), b.blockchain, nil)
txContext := core.NewEVMTxContext(msg)
evmContext := core.NewEVMBlockContext(block.Header(), b.blockchain, nil)
// Create a new environment which holds all relevant information
// about the transaction and calling mechanisms.
vmenv := vm.NewEVM(evmContext, statedb, b.config, vm.Config{})
gaspool := new(core.GasPool).AddGas(math.MaxUint64)
vmEnv := vm.NewEVM(evmContext, txContext, stateDB, b.config, vm.Config{})
gasPool := new(core.GasPool).AddGas(math.MaxUint64)
return core.NewStateTransition(vmenv, msg, gaspool).TransitionDb()
return core.NewStateTransition(vmEnv, msg, gasPool).TransitionDb()
}
// SendTransaction updates the pending block to include the given transaction.
@@ -572,10 +575,10 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
}
block.AddTxWithChain(b.blockchain, tx)
})
statedb, _ := b.blockchain.State()
stateDB, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
b.pendingState, _ = state.New(b.pendingBlock.Root(), stateDB.Database(), nil)
return nil
}
@@ -589,7 +592,7 @@ func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.Filter
// Block filter requested, construct a single-shot filter
filter = filters.NewBlockFilter(&filterBackend{b.database, b.blockchain}, *query.BlockHash, query.Addresses, query.Topics)
} else {
// Initialize unset filter boundaried to run from genesis to chain head
// Initialize unset filter boundaries to run from genesis to chain head
from := int64(0)
if query.FromBlock != nil {
from = query.FromBlock.Int64()
@@ -607,8 +610,8 @@ func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.Filter
return nil, err
}
res := make([]types.Log, len(logs))
for i, log := range logs {
res[i] = *log
for i, nLog := range logs {
res[i] = *nLog
}
return res, nil
}
@@ -629,9 +632,9 @@ func (b *SimulatedBackend) SubscribeFilterLogs(ctx context.Context, query ethere
for {
select {
case logs := <-sink:
for _, log := range logs {
for _, nlog := range logs {
select {
case ch <- *log:
case ch <- *nlog:
case err := <-sub.Err():
return err
case <-quit:
@@ -647,7 +650,7 @@ func (b *SimulatedBackend) SubscribeFilterLogs(ctx context.Context, query ethere
}), nil
}
// SubscribeNewHead returns an event subscription for a new header
// SubscribeNewHead returns an event subscription for a new header.
func (b *SimulatedBackend) SubscribeNewHead(ctx context.Context, ch chan<- *types.Header) (ethereum.Subscription, error) {
// subscribe to a new head
sink := make(chan *types.Header)
@@ -675,20 +678,22 @@ func (b *SimulatedBackend) SubscribeNewHead(ctx context.Context, ch chan<- *type
}
// AdjustTime adds a time shift to the simulated clock.
// It can only be called on empty blocks.
func (b *SimulatedBackend) AdjustTime(adjustment time.Duration) error {
b.mu.Lock()
defer b.mu.Unlock()
if len(b.pendingBlock.Transactions()) != 0 {
return errors.New("Could not adjust time on non-empty block")
}
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTx(tx)
}
block.OffsetTime(int64(adjustment.Seconds()))
})
statedb, _ := b.blockchain.State()
stateDB, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
b.pendingState, _ = state.New(b.pendingBlock.Root(), stateDB.Database(), nil)
return nil
}
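A short sketch of the restriction documented above (sim is an illustrative *SimulatedBackend): once the pending block carries transactions, commit first and then shift the clock.

if err := sim.AdjustTime(time.Minute); err != nil {
	sim.Commit() // flush the pending transactions
	_ = sim.AdjustTime(time.Minute)
}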
@@ -698,19 +703,19 @@ func (b *SimulatedBackend) Blockchain() *core.BlockChain {
return b.blockchain
}
// callmsg implements core.Message to allow passing it as a transaction simulator.
type callmsg struct {
// callMsg implements core.Message to allow passing it as a transaction simulator.
type callMsg struct {
ethereum.CallMsg
}
func (m callmsg) From() common.Address { return m.CallMsg.From }
func (m callmsg) Nonce() uint64 { return 0 }
func (m callmsg) CheckNonce() bool { return false }
func (m callmsg) To() *common.Address { return m.CallMsg.To }
func (m callmsg) GasPrice() *big.Int { return m.CallMsg.GasPrice }
func (m callmsg) Gas() uint64 { return m.CallMsg.Gas }
func (m callmsg) Value() *big.Int { return m.CallMsg.Value }
func (m callmsg) Data() []byte { return m.CallMsg.Data }
func (m callMsg) From() common.Address { return m.CallMsg.From }
func (m callMsg) Nonce() uint64 { return 0 }
func (m callMsg) CheckNonce() bool { return false }
func (m callMsg) To() *common.Address { return m.CallMsg.To }
func (m callMsg) GasPrice() *big.Int { return m.CallMsg.GasPrice }
func (m callMsg) Gas() uint64 { return m.CallMsg.Gas }
func (m callMsg) Value() *big.Int { return m.CallMsg.Value }
func (m callMsg) Data() []byte { return m.CallMsg.Data }
// filterBackend implements filters.Backend to support filtering for logs without
// taking bloom-bits acceleration structures into account.

View File

@@ -39,7 +39,7 @@ import (
func TestSimulatedBackend(t *testing.T) {
var gasLimit uint64 = 8000029
key, _ := crypto.GenerateKey() // nolint: gosec
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
genAlloc := make(core.GenesisAlloc)
genAlloc[auth.From] = core.GenesisAccount{Balance: big.NewInt(9223372036854775807)}
@@ -129,8 +129,8 @@ func TestNewSimulatedBackend(t *testing.T) {
t.Errorf("expected sim blockchain config to equal params.AllEthashProtocolChanges, got %v", sim.config)
}
statedb, _ := sim.blockchain.State()
bal := statedb.GetBalance(testAddr)
stateDB, _ := sim.blockchain.State()
bal := stateDB.GetBalance(testAddr)
if bal.Cmp(expectedBal) != 0 {
t.Errorf("expected balance for test address not received. expected: %v actual: %v", expectedBal, bal)
}
@@ -143,8 +143,7 @@ func TestSimulatedBackend_AdjustTime(t *testing.T) {
defer sim.Close()
prevTime := sim.pendingBlock.Time()
err := sim.AdjustTime(time.Second)
if err != nil {
if err := sim.AdjustTime(time.Second); err != nil {
t.Error(err)
}
newTime := sim.pendingBlock.Time()
@@ -154,6 +153,44 @@ func TestSimulatedBackend_AdjustTime(t *testing.T) {
}
}
func TestNewSimulatedBackend_AdjustTimeFail(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
// Create tx and send
tx := types.NewTransaction(0, testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
}
sim.SendTransaction(context.Background(), signedTx)
// AdjustTime should fail on non-empty block
if err := sim.AdjustTime(time.Second); err == nil {
t.Error("Expected adjust time to error on non-empty block")
}
sim.Commit()
prevTime := sim.pendingBlock.Time()
if err := sim.AdjustTime(time.Minute); err != nil {
t.Error(err)
}
newTime := sim.pendingBlock.Time()
if newTime-prevTime != uint64(time.Minute.Seconds()) {
t.Errorf("adjusted time not equal to a minute. prev: %v, new: %v", prevTime, newTime)
}
// Put a transaction after adjusting time
tx2 := types.NewTransaction(1, testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
signedTx2, err := types.SignTx(tx2, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
}
sim.SendTransaction(context.Background(), signedTx2)
sim.Commit()
newTime = sim.pendingBlock.Time()
if newTime-prevTime >= uint64(time.Minute.Seconds()) {
t.Errorf("time adjusted, but shouldn't be: prev: %v, new: %v", prevTime, newTime)
}
}
func TestSimulatedBackend_BalanceAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
expectedBal := big.NewInt(10000000000)
@@ -374,7 +411,7 @@ func TestSimulatedBackend_EstimateGas(t *testing.T) {
key, _ := crypto.GenerateKey()
addr := crypto.PubkeyToAddress(key.PublicKey)
opts := bind.NewKeyedTransactor(key)
opts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(params.Ether)}}, 10000000)
defer sim.Close()
@@ -484,7 +521,7 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
sim := NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(params.Ether*2 + 2e17)}}, 10000000)
defer sim.Close()
receipant := common.HexToAddress("deadbeef")
recipient := common.HexToAddress("deadbeef")
var cases = []struct {
name string
message ethereum.CallMsg
@@ -493,7 +530,7 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
}{
{"EstimateWithoutPrice", ethereum.CallMsg{
From: addr,
To: &receipant,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(0),
Value: big.NewInt(1000),
@@ -502,7 +539,7 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
{"EstimateWithPrice", ethereum.CallMsg{
From: addr,
To: &receipant,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(1000),
Value: big.NewInt(1000),
@@ -511,7 +548,7 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
{"EstimateWithVeryHighPrice", ethereum.CallMsg{
From: addr,
To: &receipant,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(1e14), // gascost = 2.1ether
Value: big.NewInt(1e17), // the remaining balance for fee is 2.1ether
@@ -520,7 +557,7 @@ func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
{"EstimateWithSuperhighPrice", ethereum.CallMsg{
From: addr,
To: &receipant,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(2e14), // gascost = 4.2ether
Value: big.NewInt(1000),
@@ -851,7 +888,7 @@ func TestSimulatedBackend_PendingCodeAt(t *testing.T) {
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
auth := bind.NewKeyedTransactor(testKey)
auth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
contractAddr, tx, contract, err := bind.DeployContract(auth, parsed, common.FromHex(abiBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v tx: %v contract: %v", err, tx, contract)
@@ -887,7 +924,7 @@ func TestSimulatedBackend_CodeAt(t *testing.T) {
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
auth := bind.NewKeyedTransactor(testKey)
auth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
contractAddr, tx, contract, err := bind.DeployContract(auth, parsed, common.FromHex(abiBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v tx: %v contract: %v", err, tx, contract)
@@ -919,7 +956,7 @@ func TestSimulatedBackend_PendingAndCallContract(t *testing.T) {
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
contractAuth := bind.NewKeyedTransactor(testKey)
contractAuth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
addr, _, _, err := bind.DeployContract(contractAuth, parsed, common.FromHex(abiBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v", err)
@@ -1006,7 +1043,7 @@ func TestSimulatedBackend_CallContractRevert(t *testing.T) {
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
contractAuth := bind.NewKeyedTransactor(testKey)
contractAuth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
addr, _, _, err := bind.DeployContract(contractAuth, parsed, common.FromHex(reverterBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v", err)

View File

@@ -32,7 +32,7 @@ import (
// SignerFn is a signer function callback when a contract requires a method to
// sign the transaction before submission.
type SignerFn func(types.Signer, common.Address, *types.Transaction) (*types.Transaction, error)
type SignerFn func(common.Address, *types.Transaction) (*types.Transaction, error)
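Custom signer callbacks lose the types.Signer parameter and pick their signer internally; a hedged sketch mirroring the chain-id-aware transactors above (key, keyAddr and the 1337 chain id are illustrative):

signer := types.NewEIP155Signer(big.NewInt(1337))
opts := &bind.TransactOpts{
	From: keyAddr,
	Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
		if address != keyAddr {
			return nil, bind.ErrNotAuthorized
		}
		sig, err := crypto.Sign(signer.Hash(tx).Bytes(), key)
		if err != nil {
			return nil, err
		}
		return tx.WithSignature(signer, sig)
	},
}
_ = opts // ready to hand to a generated binding's transact methods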
// CallOpts is the collection of options to fine tune a contract call request.
type CallOpts struct {
@@ -117,11 +117,14 @@ func DeployContract(opts *TransactOpts, abi abi.ABI, bytecode []byte, backend Co
// sets the output to result. The result type might be a single field for simple
// returns, a slice of interfaces for anonymous returns and a struct for named
// returns.
func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string, params ...interface{}) error {
func (c *BoundContract) Call(opts *CallOpts, results *[]interface{}, method string, params ...interface{}) error {
// Don't crash on a lazy user
if opts == nil {
opts = new(CallOpts)
}
if results == nil {
results = new([]interface{})
}
// Pack the input, call and unpack the results
input, err := c.abi.Pack(method, params...)
if err != nil {
@@ -149,7 +152,10 @@ func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string,
}
} else {
output, err = c.caller.CallContract(ctx, msg, opts.BlockNumber)
if err == nil && len(output) == 0 {
if err != nil {
return err
}
if len(output) == 0 {
// Make sure we have a contract to operate on, and bail out otherwise.
if code, err = c.caller.CodeAt(ctx, c.address, opts.BlockNumber); err != nil {
return err
@@ -158,10 +164,14 @@ func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string,
}
}
}
if err != nil {
if len(*results) == 0 {
res, err := c.abi.Unpack(method, output)
*results = res
return err
}
return c.abi.Unpack(result, method, output)
res := *results
return c.abi.UnpackIntoInterface(res[0], method, output)
}
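From the caller's side, results now travel through a *[]interface{} and nil is accepted when the outputs do not matter; a minimal sketch with an illustrative bound contract and method name:

var out []interface{}
if err := contract.Call(&bind.CallOpts{}, &out, "something"); err != nil {
	return err
}
// out[0], out[1], ... hold the unpacked return values; passing nil skips unpacking:
_ = contract.Call(&bind.CallOpts{Pending: true}, nil, "something")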
// Transact invokes the (paid) contract method with params as input values.
@@ -177,7 +187,7 @@ func (c *BoundContract) Transact(opts *TransactOpts, method string, params ...in
}
// RawTransact initiates a transaction with the given raw calldata as the input.
// It's usually used to initiates transaction for invoking **Fallback** function.
// It's usually used to initiate transactions for invoking **Fallback** function.
func (c *BoundContract) RawTransact(opts *TransactOpts, calldata []byte) (*types.Transaction, error) {
// todo(rjl493456442) check the method is payable or not,
// reject invalid transaction at the first place
@@ -246,7 +256,7 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
if opts.Signer == nil {
return nil, errors.New("no signer to authorize the transaction with")
}
signedTx, err := opts.Signer(types.HomesteadSigner{}, opts.From, rawTx)
signedTx, err := opts.Signer(opts.From, rawTx)
if err != nil {
return nil, err
}
@@ -339,7 +349,7 @@ func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]inter
// UnpackLog unpacks a retrieved log into the provided output structure.
func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log) error {
if len(log.Data) > 0 {
if err := c.abi.Unpack(out, event, log.Data); err != nil {
if err := c.abi.UnpackIntoInterface(out, event, log.Data); err != nil {
return err
}
}

View File

@@ -71,11 +71,10 @@ func TestPassingBlockNumber(t *testing.T) {
},
},
}, mc, nil, nil)
var ret string
blockNumber := big.NewInt(42)
bc.Call(&bind.CallOpts{BlockNumber: blockNumber}, &ret, "something")
bc.Call(&bind.CallOpts{BlockNumber: blockNumber}, nil, "something")
if mc.callContractBlockNumber != blockNumber {
t.Fatalf("CallContract() was not passed the block number")
@@ -85,7 +84,7 @@ func TestPassingBlockNumber(t *testing.T) {
t.Fatalf("CodeAt() was not passed the block number")
}
bc.Call(&bind.CallOpts{}, &ret, "something")
bc.Call(&bind.CallOpts{}, nil, "something")
if mc.callContractBlockNumber != nil {
t.Fatalf("CallContract() was passed a block number when it should not have been")
@@ -95,7 +94,7 @@ func TestPassingBlockNumber(t *testing.T) {
t.Fatalf("CodeAt() was passed a block number when it should not have been")
}
bc.Call(&bind.CallOpts{BlockNumber: blockNumber, Pending: true}, &ret, "something")
bc.Call(&bind.CallOpts{BlockNumber: blockNumber, Pending: true}, nil, "something")
if !mc.pendingCallContractCalled {
t.Fatalf("CallContract() was not passed the block number")

View File

@@ -52,7 +52,7 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
// contracts is the map of each individual contract requested binding
contracts = make(map[string]*tmplContract)
// structs is the map of all reclared structs shared by passed contracts.
// structs is the map of all redeclared structs shared by passed contracts.
structs = make(map[string]*tmplStruct)
// isLib is the map used to flag each encountered library as such
@@ -80,10 +80,10 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
fallback *tmplMethod
receive *tmplMethod
// identifiers are used to detect duplicated identifier of function
// and event. For all calls, transacts and events, abigen will generate
// identifiers are used to detect duplicated identifiers of functions
// and events. For all calls, transacts and events, abigen will generate
// corresponding bindings. However we have to ensure there is no
// identifier coliision in the bindings of these categories.
// identifier collisions in the bindings of these categories.
callIdentifiers = make(map[string]bool)
transactIdentifiers = make(map[string]bool)
eventIdentifiers = make(map[string]bool)
@@ -246,7 +246,7 @@ var bindType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) stri
LangJava: bindTypeJava,
}
// bindBasicTypeGo converts basic solidity types(except array, slice and tuple) to Go one.
// bindBasicTypeGo converts basic solidity types(except array, slice and tuple) to Go ones.
func bindBasicTypeGo(kind abi.Type) string {
switch kind.T {
case abi.AddressTy:
@@ -286,7 +286,7 @@ func bindTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
}
}
// bindBasicTypeJava converts basic solidity types(except array, slice and tuple) to Java one.
// bindBasicTypeJava converts basic solidity types(except array, slice and tuple) to Java ones.
func bindBasicTypeJava(kind abi.Type) string {
switch kind.T {
case abi.AddressTy:
@@ -330,7 +330,7 @@ func bindBasicTypeJava(kind abi.Type) string {
}
// pluralizeJavaType explicitly converts multidimensional types to predefined
// type in go side.
// types in go side.
func pluralizeJavaType(typ string) string {
switch typ {
case "boolean":
@@ -369,7 +369,7 @@ var bindTopicType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct)
}
// bindTopicTypeGo converts a Solidity topic type to a Go one. It is almost the same
// funcionality as for simple types, but dynamic types get converted to hashes.
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeGo(kind, structs)
@@ -386,7 +386,7 @@ func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
}
// bindTopicTypeJava converts a Solidity topic type to a Java one. It is almost the same
// funcionality as for simple types, but dynamic types get converted to hashes.
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeJava(kind, structs)
@@ -394,7 +394,7 @@ func bindTopicTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
// parameters that are not value types i.e. arrays and structs are not
// stored directly but instead a keccak256-hash of an encoding is stored.
//
// We only convert stringS and bytes to hash, still need to deal with
// We only convert strings and bytes to hash, still need to deal with
// array(both fixed-size and dynamic-size) and struct.
if bound == "String" || bound == "byte[]" {
bound = "Hash"
@@ -415,7 +415,7 @@ var bindStructType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct
func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
// We compose raw struct name and canonical parameter expression
// We compose a raw struct name and a canonical parameter expression
// together here. The reason is before solidity v0.5.11, kind.TupleRawName
// is empty, so we use canonical parameter expression to distinguish
// different struct definition. From the consideration of backward
@@ -454,7 +454,7 @@ func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
// We compose raw struct name and canonical parameter expression
// We compose a raw struct name and a canonical parameter expression
// together here. The reason is before solidity v0.5.11, kind.TupleRawName
// is empty, so we use canonical parameter expression to distinguish
// different struct definition. From the consideration of backward
@@ -486,7 +486,7 @@ func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
}
// namedType is a set of functions that transform language specific types to
// named versions that my be used inside method names.
// named versions that may be used inside method names.
var namedType = map[Lang]func(string, abi.Type) string{
LangGo: func(string, abi.Type) string { panic("this shouldn't be needed") },
LangJava: namedTypeJava,
@@ -528,7 +528,7 @@ func alias(aliases map[string]string, n string) string {
}
// methodNormalizer is a name transformer that modifies Solidity method names to
// conform to target language naming concentions.
// conform to target language naming conventions.
var methodNormalizer = map[Lang]func(string) string{
LangGo: abi.ToCamelCase,
LangJava: decapitalise,

View File

@@ -296,7 +296,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -351,7 +351,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -397,7 +397,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -455,7 +455,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -503,7 +503,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -598,7 +598,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -648,7 +648,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -723,7 +723,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -817,7 +817,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -1007,7 +1007,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -1142,7 +1142,7 @@ var bindTests = []struct {
`
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -1284,7 +1284,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -1350,7 +1350,7 @@ var bindTests = []struct {
`
// Initialize test accounts
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -1444,7 +1444,7 @@ var bindTests = []struct {
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 10000000)
defer sim.Close()
transactOpts := bind.NewKeyedTransactor(key)
transactOpts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
_, _, _, err := DeployIdentifierCollision(transactOpts, sim)
if err != nil {
t.Fatalf("failed to deploy contract: %v", err)
@@ -1506,7 +1506,7 @@ var bindTests = []struct {
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 10000000)
defer sim.Close()
transactOpts := bind.NewKeyedTransactor(key)
transactOpts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
_, _, c1, err := DeployContractOne(transactOpts, sim)
if err != nil {
t.Fatal("Failed to deploy contract")
@@ -1563,7 +1563,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@@ -1632,7 +1632,7 @@ var bindTests = []struct {
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 1000000)
defer sim.Close()
opts := bind.NewKeyedTransactor(key)
opts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
_, _, c, err := DeployNewFallbacks(opts, sim)
if err != nil {
t.Fatalf("Failed to deploy contract: %v", err)
@@ -1696,11 +1696,11 @@ func TestGolangBindings(t *testing.T) {
t.Skip("go sdk not found for testing")
}
// Create a temporary workspace for the test suite
ws, err := ioutil.TempDir("", "")
ws, err := ioutil.TempDir("", "binding-test")
if err != nil {
t.Fatalf("failed to create temporary workspace: %v", err)
}
defer os.RemoveAll(ws)
//defer os.RemoveAll(ws)
pkg := filepath.Join(ws, "bindtest")
if err = os.MkdirAll(pkg, 0700); err != nil {

View File

@@ -30,7 +30,7 @@ type tmplData struct {
type tmplContract struct {
Type string // Type name of the main contract binding
InputABI string // JSON ABI used as the input to generate the binding from
InputBin string // Optional EVM bytecode used to denetare deploy code from
InputBin string // Optional EVM bytecode used to generate deploy code from
FuncSigs map[string]string // Optional map: string signature -> 4-byte signature
Constructor abi.Method // Contract constructor for deploy parametrization
Calls map[string]*tmplMethod // Contract calls that only read state data
@@ -50,7 +50,8 @@ type tmplMethod struct {
Structured bool // Whether the returns should be accumulated into a struct
}
// tmplEvent is a wrapper around an a
// tmplEvent is a wrapper around an abi.Event that contains a few preprocessed
// and cached data fields.
type tmplEvent struct {
Original abi.Event // Original event as parsed by the abi package
Normalized abi.Event // Normalized version of the parsed fields
@@ -64,7 +65,7 @@ type tmplField struct {
SolKind abi.Type // Raw abi type information
}
// tmplStruct is a wrapper around an abi.tuple contains an auto-generated
// tmplStruct is a wrapper around an abi.tuple and contains an auto-generated
// struct name.
type tmplStruct struct {
Name string // Auto-generated struct name(before solidity v0.5.11) or raw name.
@@ -78,8 +79,8 @@ var tmplSource = map[Lang]string{
LangJava: tmplSourceJava,
}
// tmplSourceGo is the Go source template use to generate the contract binding
// based on.
// tmplSourceGo is the Go source template that the generated Go contract binding
// is based on.
const tmplSourceGo = `
// Code generated - DO NOT EDIT.
// This file is a generated binding and any manual changes will be lost.
@@ -260,7 +261,7 @@ var (
// sets the output to result. The result type might be a single field for simple
// returns, a slice of interfaces for anonymous returns and a struct for named
// returns.
func (_{{$contract.Type}} *{{$contract.Type}}Raw) Call(opts *bind.CallOpts, result interface{}, method string, params ...interface{}) error {
func (_{{$contract.Type}} *{{$contract.Type}}Raw) Call(opts *bind.CallOpts, result *[]interface{}, method string, params ...interface{}) error {
return _{{$contract.Type}}.Contract.{{$contract.Type}}Caller.contract.Call(opts, result, method, params...)
}
@@ -279,7 +280,7 @@ var (
// sets the output to result. The result type might be a single field for simple
// returns, a slice of interfaces for anonymous returns and a struct for named
// returns.
func (_{{$contract.Type}} *{{$contract.Type}}CallerRaw) Call(opts *bind.CallOpts, result interface{}, method string, params ...interface{}) error {
func (_{{$contract.Type}} *{{$contract.Type}}CallerRaw) Call(opts *bind.CallOpts, result *[]interface{}, method string, params ...interface{}) error {
return _{{$contract.Type}}.Contract.contract.Call(opts, result, method, params...)
}
@@ -299,19 +300,23 @@ var (
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Caller) {{.Normalized.Name}}(opts *bind.CallOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} },{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}}{{end}} error) {
{{if .Structured}}ret := new(struct{
{{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}}
{{end}}
}){{else}}var (
{{range $i, $_ := .Normalized.Outputs}}ret{{$i}} = new({{bindtype .Type $structs}})
{{end}}
){{end}}
out := {{if .Structured}}ret{{else}}{{if eq (len .Normalized.Outputs) 1}}ret0{{else}}&[]interface{}{
{{range $i, $_ := .Normalized.Outputs}}ret{{$i}},
{{end}}
}{{end}}{{end}}
err := _{{$contract.Type}}.contract.Call(opts, out, "{{.Original.Name}}" {{range .Normalized.Inputs}}, {{.Name}}{{end}})
return {{if .Structured}}*ret,{{else}}{{range $i, $_ := .Normalized.Outputs}}*ret{{$i}},{{end}}{{end}} err
var out []interface{}
err := _{{$contract.Type}}.contract.Call(opts, &out, "{{.Original.Name}}" {{range .Normalized.Inputs}}, {{.Name}}{{end}})
{{if .Structured}}
outstruct := new(struct{ {{range .Normalized.Outputs}} {{.Name}} {{bindtype .Type $structs}}; {{end}} })
{{range $i, $t := .Normalized.Outputs}}
outstruct.{{.Name}} = out[{{$i}}].({{bindtype .Type $structs}}){{end}}
return *outstruct, err
{{else}}
if err != nil {
return {{range $i, $_ := .Normalized.Outputs}}*new({{bindtype .Type $structs}}), {{end}} err
}
{{range $i, $t := .Normalized.Outputs}}
out{{$i}} := *abi.ConvertType(out[{{$i}}], new({{bindtype .Type $structs}})).(*{{bindtype .Type $structs}}){{end}}
return {{range $i, $t := .Normalized.Outputs}}out{{$i}}, {{end}} err
{{end}}
}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
@@ -536,6 +541,7 @@ var (
if err := _{{$contract.Type}}.contract.UnpackLog(event, "{{.Original.Name}}", log); err != nil {
return nil, err
}
event.Raw = log
return event, nil
}
@@ -543,8 +549,8 @@ var (
{{end}}
`
// tmplSourceJava is the Java source template use to generate the contract binding
// based on.
// tmplSourceJava is the Java source template that the generated Java contract binding
// is based on.
const tmplSourceJava = `
// This file is an automatically generated Java binding. Do not modify as any
// change will likely be lost upon the next re-generation!

View File

@@ -52,7 +52,7 @@ func sliceTypeCheck(t Type, val reflect.Value) error {
}
}
if elemKind := val.Type().Elem().Kind(); elemKind != t.Elem.GetType().Kind() {
if val.Type().Elem().Kind() != t.Elem.GetType().Kind() {
return typeErr(formatSliceString(t.Elem.GetType().Kind(), t.Size), val.Type())
}
return nil

View File

@@ -32,7 +32,7 @@ type Event struct {
// the raw name and a suffix will be added in the case of a event overload.
//
// e.g.
// There are two events have same name:
// These are two events that have the same name:
// * foo(int,int)
// * foo(uint,uint)
// The event name of the first one will be resolved as foo while the second one

View File

@@ -147,10 +147,6 @@ func TestEventString(t *testing.T) {
// TestEventMultiValueWithArrayUnpack verifies that array fields will be counted after parsing array.
func TestEventMultiValueWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": false, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 [2]uint8
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
@@ -158,10 +154,10 @@ func TestEventMultiValueWithArrayUnpack(t *testing.T) {
for ; i <= 3; i++ {
b.Write(packNum(reflect.ValueOf(i)))
}
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{1, 2}, rst.Value1)
require.Equal(t, uint8(3), rst.Value2)
unpacked, err := abi.Unpack("test", b.Bytes())
require.NoError(t, err)
require.Equal(t, [2]uint8{1, 2}, unpacked[0])
require.Equal(t, uint8(3), unpacked[1])
}
func TestEventTupleUnpack(t *testing.T) {
@@ -351,14 +347,14 @@ func unpackTestEventData(dest interface{}, hexData string, jsonEvent []byte, ass
var e Event
assert.NoError(json.Unmarshal(jsonEvent, &e), "Should be able to unmarshal event ABI")
a := ABI{Events: map[string]Event{"e": e}}
return a.Unpack(dest, "e", data)
return a.UnpackIntoInterface(dest, "e", data)
}
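The split API in one place: Unpack now hands back the decoded values directly, while UnpackIntoInterface keeps the old struct-filling behaviour; parsedABI and data are illustrative:

values, err := parsedABI.Unpack("test", data) // ([]interface{}, error)
if err == nil {
	fmt.Println(values[0], values[1])
}
var rst struct{ Value1, Value2 uint8 }
if err := parsedABI.UnpackIntoInterface(&rst, "test", data); err == nil {
	fmt.Println(rst.Value1, rst.Value2)
}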
// TestEventUnpackIndexed verifies that indexed field will be skipped by event decoder.
func TestEventUnpackIndexed(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 uint8
Value1 uint8 // indexed
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
@@ -366,16 +362,16 @@ func TestEventUnpackIndexed(t *testing.T) {
var b bytes.Buffer
b.Write(packNum(reflect.ValueOf(uint8(8))))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.NoError(t, abi.UnpackIntoInterface(&rst, "test", b.Bytes()))
require.Equal(t, uint8(0), rst.Value1)
require.Equal(t, uint8(8), rst.Value2)
}
// TestEventIndexedWithArrayUnpack verifies that decoder will not overlow when static array is indexed input.
// TestEventIndexedWithArrayUnpack verifies that decoder will not overflow when static array is indexed input.
func TestEventIndexedWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"string"}]}]`
type testStruct struct {
Value1 [2]uint8
Value1 [2]uint8 // indexed
Value2 string
}
abi, err := JSON(strings.NewReader(definition))
@@ -388,7 +384,7 @@ func TestEventIndexedWithArrayUnpack(t *testing.T) {
b.Write(common.RightPadBytes([]byte(stringOut), 32))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.NoError(t, abi.UnpackIntoInterface(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{0, 0}, rst.Value1)
require.Equal(t, stringOut, rst.Value2)
}

View File

@@ -45,7 +45,7 @@ const (
// If the method is `Const` no transaction needs to be created for this
// particular Method call. It can easily be simulated using a local VM.
// For example a `Balance()` method only needs to retrieve something
// from the storage and therefore requires no Tx to be send to the
// from the storage and therefore requires no Tx to be sent to the
// network. A method such as `Transact` does require a Tx and thus will
// be flagged `false`.
// Input specifies the required input parameters for this gives method.
@@ -54,7 +54,7 @@ type Method struct {
// the raw name and a suffix will be added in the case of a function overload.
//
// e.g.
// There are two functions have same name:
// These are two functions that have the same name:
// * foo(int,int)
// * foo(uint,uint)
// The method name of the first one will be resolved as foo while the second one

View File

@@ -17,6 +17,8 @@
package abi
import (
"errors"
"fmt"
"math/big"
"reflect"
@@ -25,7 +27,7 @@ import (
)
// packBytesSlice packs the given bytes as [L, V] as the canonical representation
// bytes slice
// bytes slice.
func packBytesSlice(bytes []byte, l int) []byte {
len := packNum(reflect.ValueOf(l))
return append(len, common.RightPadBytes(bytes, (l+31)/32*32)...)
@@ -33,39 +35,42 @@ func packBytesSlice(bytes []byte, l int) []byte {
// packElement packs the given reflect value according to the abi specification in
// t.
func packElement(t Type, reflectValue reflect.Value) []byte {
func packElement(t Type, reflectValue reflect.Value) ([]byte, error) {
switch t.T {
case IntTy, UintTy:
return packNum(reflectValue)
return packNum(reflectValue), nil
case StringTy:
return packBytesSlice([]byte(reflectValue.String()), reflectValue.Len())
return packBytesSlice([]byte(reflectValue.String()), reflectValue.Len()), nil
case AddressTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.LeftPadBytes(reflectValue.Bytes(), 32)
return common.LeftPadBytes(reflectValue.Bytes(), 32), nil
case BoolTy:
if reflectValue.Bool() {
return math.PaddedBigBytes(common.Big1, 32)
return math.PaddedBigBytes(common.Big1, 32), nil
}
return math.PaddedBigBytes(common.Big0, 32)
return math.PaddedBigBytes(common.Big0, 32), nil
case BytesTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return packBytesSlice(reflectValue.Bytes(), reflectValue.Len())
if reflectValue.Type() != reflect.TypeOf([]byte{}) {
return []byte{}, errors.New("Bytes type is neither slice nor array")
}
return packBytesSlice(reflectValue.Bytes(), reflectValue.Len()), nil
case FixedBytesTy, FunctionTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.RightPadBytes(reflectValue.Bytes(), 32)
return common.RightPadBytes(reflectValue.Bytes(), 32), nil
default:
panic("abi: fatal error")
return []byte{}, fmt.Errorf("Could not pack element, unknown type: %v", t.T)
}
}
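With the panic replaced by an error return, in-package callers now propagate the failure instead; a hedged sketch of the resulting call-site shape (t, arg and packed are illustrative):

val, err := packElement(t, reflect.ValueOf(arg))
if err != nil {
	return nil, err
}
packed = append(packed, val...)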
// packNum packs the given number (using the reflect value) and will cast it to appropriate number representation
// packNum packs the given number (using the reflect value) and will cast it to appropriate number representation.
func packNum(value reflect.Value) []byte {
switch kind := value.Kind(); kind {
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
@@ -77,5 +82,4 @@ func packNum(value reflect.Value) []byte {
default:
panic("abi: fatal error")
}
}

View File

@@ -44,18 +44,7 @@ func TestPack(t *testing.T) {
t.Fatalf("invalid ABI definition %s, %v", inDef, err)
}
var packed []byte
if reflect.TypeOf(test.unpacked).Kind() != reflect.Struct {
packed, err = inAbi.Pack("method", test.unpacked)
} else {
// if want is a struct we need to use the components.
elem := reflect.ValueOf(test.unpacked)
var values []interface{}
for i := 0; i < elem.NumField(); i++ {
field := elem.Field(i)
values = append(values, field.Interface())
}
packed, err = inAbi.Pack("method", values...)
}
packed, err = inAbi.Pack("method", test.unpacked)
if err != nil {
t.Fatalf("test %d (%v) failed: %v", i, test.def, err)

View File

@@ -620,7 +620,7 @@ var packUnpackTests = []packUnpackTest{
{
def: `[{"type": "bytes32[]"}]`,
unpacked: []common.Hash{{1}, {2}},
unpacked: [][32]byte{{1}, {2}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0100000000000000000000000000000000000000000000000000000000000000" +
@@ -722,7 +722,7 @@ var packUnpackTests = []packUnpackTest{
},
// struct outputs
{
def: `[{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}]`,
def: `[{"components": [{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: struct {
@@ -731,28 +731,28 @@ var packUnpackTests = []packUnpackTest{
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"int_one","type":"int256"}]`,
def: `[{"components": [{"name":"int_one","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int__one","type":"int256"}]`,
def: `[{"components": [{"name":"int__one","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one_","type":"int256"}]`,
def: `[{"components": [{"name":"int_one_","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one","type":"int256"}, {"name":"intone","type":"int256"}]`,
def: `[{"components": [{"name":"int_one","type":"int256"}, {"name":"intone","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: struct {
@@ -831,11 +831,11 @@ var packUnpackTests = []packUnpackTest{
},
{
// static tuple
def: `[{"name":"a","type":"int64"},
def: `[{"components": [{"name":"a","type":"int64"},
{"name":"b","type":"int256"},
{"name":"c","type":"int256"},
{"name":"d","type":"bool"},
{"name":"e","type":"bytes32[3][2]"}]`,
{"name":"e","type":"bytes32[3][2]"}], "type":"tuple"}]`,
unpacked: struct {
A int64
B *big.Int
@@ -855,21 +855,22 @@ var packUnpackTests = []packUnpackTest{
"0500000000000000000000000000000000000000000000000000000000000000", // struct[e] array[1][2]
},
{
def: `[{"name":"a","type":"string"},
def: `[{"components": [{"name":"a","type":"string"},
{"name":"b","type":"int64"},
{"name":"c","type":"bytes"},
{"name":"d","type":"string[]"},
{"name":"e","type":"int256[]"},
{"name":"f","type":"address[]"}]`,
{"name":"f","type":"address[]"}], "type":"tuple"}]`,
unpacked: struct {
FieldA string `abi:"a"` // Test whether abi tag works
FieldB int64 `abi:"b"`
C []byte
D []string
E []*big.Int
F []common.Address
A string
B int64
C []byte
D []string
E []*big.Int
F []common.Address
}{"foobar", 1, []byte{1}, []string{"foo", "bar"}, []*big.Int{big.NewInt(1), big.NewInt(-1)}, []common.Address{{1}, {2}}},
packed: "00000000000000000000000000000000000000000000000000000000000000c0" + // struct[a] offset
packed: "0000000000000000000000000000000000000000000000000000000000000020" + // struct a
"00000000000000000000000000000000000000000000000000000000000000c0" + // struct[a] offset
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"0000000000000000000000000000000000000000000000000000000000000100" + // struct[c] offset
"0000000000000000000000000000000000000000000000000000000000000140" + // struct[d] offset
@@ -894,23 +895,24 @@ var packUnpackTests = []packUnpackTest{
"0000000000000000000000000200000000000000000000000000000000000000", // common.Address{2}
},
{
def: `[{"components": [{"name": "a","type": "uint256"},
def: `[{"components": [{ "type": "tuple","components": [{"name": "a","type": "uint256"},
{"name": "b","type": "uint256[]"}],
"name": "a","type": "tuple"},
{"name": "b","type": "uint256[]"}]`,
{"name": "b","type": "uint256[]"}], "type": "tuple"}]`,
unpacked: struct {
A struct {
FieldA *big.Int `abi:"a"`
B []*big.Int
A *big.Int
B []*big.Int
}
B []*big.Int
}{
A: struct {
FieldA *big.Int `abi:"a"` // Test whether abi tag works for nested tuple
B []*big.Int
A *big.Int
B []*big.Int
}{big.NewInt(1), []*big.Int{big.NewInt(1), big.NewInt(2)}},
B: []*big.Int{big.NewInt(1), big.NewInt(2)}},
packed: "0000000000000000000000000000000000000000000000000000000000000040" + // a offset
packed: "0000000000000000000000000000000000000000000000000000000000000020" + // struct a
"0000000000000000000000000000000000000000000000000000000000000040" + // a offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // b offset
"0000000000000000000000000000000000000000000000000000000000000001" + // a.a value
"0000000000000000000000000000000000000000000000000000000000000040" + // a.b offset

View File

@@ -24,6 +24,29 @@ import (
"strings"
)
// ConvertType converts an interface of a runtime type into an interface of the
// given type
// e.g. turn
// var fields []reflect.StructField
// fields = append(fields, reflect.StructField{
// Name: "X",
// Type: reflect.TypeOf(new(big.Int)),
// Tag: reflect.StructTag("json:\"" + "x" + "\""),
// }
// into
// type TupleT struct { X *big.Int }
func ConvertType(in interface{}, proto interface{}) interface{} {
protoType := reflect.TypeOf(proto)
if reflect.TypeOf(in).ConvertibleTo(protoType) {
return reflect.ValueOf(in).Convert(protoType).Interface()
}
// Use set as a last ditch effort
if err := set(reflect.ValueOf(proto), reflect.ValueOf(in)); err != nil {
panic(err)
}
return proto
}
// indirect recursively dereferences the value until it either gets the value
// or finds a big.Int
func indirect(v reflect.Value) reflect.Value {
@@ -61,7 +84,7 @@ func reflectIntType(unsigned bool, size int) reflect.Type {
return reflect.TypeOf(&big.Int{})
}
// mustArrayToBytesSlice creates a new byte slice with the exact same size as value
// mustArrayToByteSlice creates a new byte slice with the exact same size as value
// and copies the bytes in value to the new slice.
func mustArrayToByteSlice(value reflect.Value) reflect.Value {
slice := reflect.MakeSlice(reflect.TypeOf([]byte{}), value.Len(), value.Len())
@@ -119,6 +142,9 @@ func setSlice(dst, src reflect.Value) error {
}
func setArray(dst, src reflect.Value) error {
if src.Kind() == reflect.Ptr {
return set(dst, indirect(src))
}
array := reflect.New(dst.Type()).Elem()
min := src.Len()
if src.Len() > dst.Len() {

View File

@@ -17,6 +17,7 @@
package abi
import (
"math/big"
"reflect"
"testing"
)
@@ -189,3 +190,72 @@ func TestReflectNameToStruct(t *testing.T) {
})
}
}
func TestConvertType(t *testing.T) {
// Test Basic Struct
type T struct {
X *big.Int
Y *big.Int
}
// Create on-the-fly structure
var fields []reflect.StructField
fields = append(fields, reflect.StructField{
Name: "X",
Type: reflect.TypeOf(new(big.Int)),
Tag: "json:\"" + "x" + "\"",
})
fields = append(fields, reflect.StructField{
Name: "Y",
Type: reflect.TypeOf(new(big.Int)),
Tag: "json:\"" + "y" + "\"",
})
val := reflect.New(reflect.StructOf(fields))
val.Elem().Field(0).Set(reflect.ValueOf(big.NewInt(1)))
val.Elem().Field(1).Set(reflect.ValueOf(big.NewInt(2)))
// ConvertType
out := *ConvertType(val.Interface(), new(T)).(*T)
if out.X.Cmp(big.NewInt(1)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out.X, big.NewInt(1))
}
if out.Y.Cmp(big.NewInt(2)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out.Y, big.NewInt(2))
}
// Slice Type
val2 := reflect.MakeSlice(reflect.SliceOf(reflect.StructOf(fields)), 2, 2)
val2.Index(0).Field(0).Set(reflect.ValueOf(big.NewInt(1)))
val2.Index(0).Field(1).Set(reflect.ValueOf(big.NewInt(2)))
val2.Index(1).Field(0).Set(reflect.ValueOf(big.NewInt(3)))
val2.Index(1).Field(1).Set(reflect.ValueOf(big.NewInt(4)))
out2 := *ConvertType(val2.Interface(), new([]T)).(*[]T)
if out2[0].X.Cmp(big.NewInt(1)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[0].X, big.NewInt(1))
}
if out2[0].Y.Cmp(big.NewInt(2)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[1].Y, big.NewInt(2))
}
if out2[1].X.Cmp(big.NewInt(3)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[0].X, big.NewInt(1))
}
if out2[1].Y.Cmp(big.NewInt(4)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[1].Y, big.NewInt(2))
}
// Array Type
val3 := reflect.New(reflect.ArrayOf(2, reflect.StructOf(fields)))
val3.Elem().Index(0).Field(0).Set(reflect.ValueOf(big.NewInt(1)))
val3.Elem().Index(0).Field(1).Set(reflect.ValueOf(big.NewInt(2)))
val3.Elem().Index(1).Field(0).Set(reflect.ValueOf(big.NewInt(3)))
val3.Elem().Index(1).Field(1).Set(reflect.ValueOf(big.NewInt(4)))
out3 := *ConvertType(val3.Interface(), new([2]T)).(*[2]T)
if out3[0].X.Cmp(big.NewInt(1)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[0].X, big.NewInt(1))
}
if out3[0].Y.Cmp(big.NewInt(2)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[1].Y, big.NewInt(2))
}
if out3[1].X.Cmp(big.NewInt(3)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[0].X, big.NewInt(1))
}
if out3[1].Y.Cmp(big.NewInt(4)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[1].Y, big.NewInt(2))
}
}

View File

@@ -102,7 +102,7 @@ func genIntType(rule int64, size uint) []byte {
var topic [common.HashLength]byte
if rule < 0 {
// if a rule is negative, we need to put it into two's complement.
// extended to common.Hashlength bytes.
// extended to common.HashLength bytes.
topic = [common.HashLength]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}
}
for i := uint(0); i < size; i++ {
@@ -120,7 +120,7 @@ func ParseTopics(out interface{}, fields Arguments, topics []common.Hash) error
})
}
// ParseTopicsIntoMap converts the indexed topic field-value pairs into map key-value pairs
// ParseTopicsIntoMap converts the indexed topic field-value pairs into map key-value pairs.
func ParseTopicsIntoMap(out map[string]interface{}, fields Arguments, topics []common.Hash) error {
return parseTopicWithSetter(fields, topics,
func(arg Argument, reconstr interface{}) {

View File

@@ -44,7 +44,7 @@ const (
FunctionTy
)
// Type is the reflection of the supported argument type
// Type is the reflection of the supported argument type.
type Type struct {
Elem *Type
Size int
@@ -264,7 +264,7 @@ func overloadedArgName(rawName string, names map[string]string) (string, error)
return fieldName, nil
}
// String implements Stringer
// String implements Stringer.
func (t Type) String() (out string) {
return t.stringKind
}
@@ -346,7 +346,7 @@ func (t Type) pack(v reflect.Value) ([]byte, error) {
return append(ret, tail...), nil
default:
return packElement(t, v), nil
return packElement(t, v)
}
}
@@ -386,7 +386,7 @@ func isDynamicType(t Type) bool {
func getTypeSize(t Type) int {
if t.T == ArrayTy && !isDynamicType(*t.Elem) {
// Recursively calculate type size if it is a nested array
if t.Elem.T == ArrayTy {
if t.Elem.T == ArrayTy || t.Elem.T == TupleTy {
return t.Size * getTypeSize(*t.Elem)
}
return t.Size * 32

View File

@@ -255,7 +255,7 @@ func TestTypeCheck(t *testing.T) {
{"bytes", nil, [2]byte{0, 1}, "abi: cannot use array as type slice as argument"},
{"bytes", nil, common.Hash{1}, "abi: cannot use array as type slice as argument"},
{"string", nil, "hello world", ""},
{"string", nil, string(""), ""},
{"string", nil, "", ""},
{"string", nil, []byte{}, "abi: cannot use slice as type string as argument"},
{"bytes32[]", nil, [][32]byte{{}}, ""},
{"function", nil, [24]byte{}, ""},
@@ -330,3 +330,39 @@ func TestInternalType(t *testing.T) {
t.Errorf("type %q: parsed type mismatch:\nGOT %s\nWANT %s ", blob, spew.Sdump(typeWithoutStringer(typ)), spew.Sdump(typeWithoutStringer(kind)))
}
}
func TestGetTypeSize(t *testing.T) {
var testCases = []struct {
typ string
components []ArgumentMarshaling
typSize int
}{
// simple array
{"uint256[2]", nil, 32 * 2},
{"address[3]", nil, 32 * 3},
{"bytes32[4]", nil, 32 * 4},
// array array
{"uint256[2][3][4]", nil, 32 * (2 * 3 * 4)},
// array tuple
{"tuple[2]", []ArgumentMarshaling{{Name: "x", Type: "bytes32"}, {Name: "y", Type: "bytes32"}}, (32 * 2) * 2},
// simple tuple
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "uint256"}, {Name: "y", Type: "uint256"}}, 32 * 2},
// tuple array
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "bytes32[2]"}}, 32 * 2},
// tuple tuple
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "tuple", Components: []ArgumentMarshaling{{Name: "x", Type: "bytes32"}}}}, 32},
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "tuple", Components: []ArgumentMarshaling{{Name: "x", Type: "bytes32[2]"}, {Name: "y", Type: "uint256"}}}}, 32 * (2 + 1)},
}
for i, data := range testCases {
typ, err := NewType(data.typ, "", data.components)
if err != nil {
t.Errorf("type %q: failed to parse type string: %v", data.typ, err)
}
result := getTypeSize(typ)
if result != data.typSize {
t.Errorf("case %d type %q: get type size error: actual: %d expected: %d", i, data.typ, result, data.typSize)
}
}
}

View File

@@ -26,13 +26,13 @@ import (
)
var (
// MaxUint256 is the maximum value that can be represented by a uint256
// MaxUint256 is the maximum value that can be represented by a uint256.
MaxUint256 = new(big.Int).Sub(new(big.Int).Lsh(common.Big1, 256), common.Big1)
// MaxInt256 is the maximum value that can be represented by a int256
// MaxInt256 is the maximum value that can be represented by a int256.
MaxInt256 = new(big.Int).Sub(new(big.Int).Lsh(common.Big1, 255), common.Big1)
)
// ReadInteger reads the integer based on its kind and returns the appropriate value
// ReadInteger reads the integer based on its kind and returns the appropriate value.
func ReadInteger(typ Type, b []byte) interface{} {
if typ.T == UintTy {
switch typ.Size {
@@ -73,7 +73,7 @@ func ReadInteger(typ Type, b []byte) interface{} {
}
}
// reads a bool
// readBool reads a bool.
func readBool(word []byte) (bool, error) {
for _, b := range word[:31] {
if b != 0 {
@@ -91,7 +91,8 @@ func readBool(word []byte) (bool, error) {
}
// A function type is simply the address with the function selection signature at the end.
// This enforces that standard by always presenting it as a 24-array (address + sig = 24 bytes)
//
// readFunctionType enforces that standard by always presenting it as a 24-array (address + sig = 24 bytes)
func readFunctionType(t Type, word []byte) (funcTy [24]byte, err error) {
if t.T != FunctionTy {
return [24]byte{}, fmt.Errorf("abi: invalid type in call to make function type byte array")
@@ -104,7 +105,7 @@ func readFunctionType(t Type, word []byte) (funcTy [24]byte, err error) {
return
}
// ReadFixedBytes uses reflection to create a fixed array to be read from
// ReadFixedBytes uses reflection to create a fixed array to be read from.
func ReadFixedBytes(t Type, word []byte) (interface{}, error) {
if t.T != FixedBytesTy {
return nil, fmt.Errorf("abi: invalid type in call to make fixed byte array")
@@ -117,7 +118,7 @@ func ReadFixedBytes(t Type, word []byte) (interface{}, error) {
}
// iteratively unpack elements
// forEachUnpack iteratively unpacks elements.
func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error) {
if size < 0 {
return nil, fmt.Errorf("cannot marshal input to array, size is negative (%d)", size)
@@ -224,7 +225,10 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
return forEachUnpack(t, output[begin:], 0, length)
case ArrayTy:
if isDynamicType(*t.Elem) {
offset := int64(binary.BigEndian.Uint64(returnOutput[len(returnOutput)-8:]))
offset := binary.BigEndian.Uint64(returnOutput[len(returnOutput)-8:])
if offset > uint64(len(output)) {
return nil, fmt.Errorf("abi: toGoType offset greater than output length: offset: %d, len(output): %d", offset, len(output))
}
return forEachUnpack(t, output[offset:], 0, t.Size)
}
return forEachUnpack(t, output[index:], 0, t.Size)
@@ -249,7 +253,7 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
}
}
// interprets a 32 byte slice as an offset and then determines which indice to look to decode the type.
// lengthPrefixPointsTo interprets a 32 byte slice as an offset and then determines which indices to look to decode the type.
func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err error) {
bigOffsetEnd := big.NewInt(0).SetBytes(output[index : index+32])
bigOffsetEnd.Add(bigOffsetEnd, common.Big32)

View File

@@ -44,15 +44,13 @@ func TestUnpack(t *testing.T) {
if err != nil {
t.Fatalf("invalid hex %s: %v", test.packed, err)
}
outptr := reflect.New(reflect.TypeOf(test.unpacked))
err = abi.Unpack(outptr.Interface(), "method", encb)
out, err := abi.Unpack("method", encb)
if err != nil {
t.Errorf("test %d (%v) failed: %v", i, test.def, err)
return
}
out := outptr.Elem().Interface()
if !reflect.DeepEqual(test.unpacked, out) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, test.unpacked, out)
if !reflect.DeepEqual(test.unpacked, ConvertType(out[0], test.unpacked)) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, test.unpacked, out[0])
}
})
}
@@ -221,7 +219,7 @@ func TestLocalUnpackTests(t *testing.T) {
t.Fatalf("invalid hex %s: %v", test.enc, err)
}
outptr := reflect.New(reflect.TypeOf(test.want))
err = abi.Unpack(outptr.Interface(), "method", encb)
err = abi.UnpackIntoInterface(outptr.Interface(), "method", encb)
if err := test.checkError(err); err != nil {
t.Errorf("test %d (%v) failed: %v", i, test.def, err)
return
@@ -234,7 +232,7 @@ func TestLocalUnpackTests(t *testing.T) {
}
}
func TestUnpackSetDynamicArrayOutput(t *testing.T) {
func TestUnpackIntoInterfaceSetDynamicArrayOutput(t *testing.T) {
abi, err := JSON(strings.NewReader(`[{"constant":true,"inputs":[],"name":"testDynamicFixedBytes15","outputs":[{"name":"","type":"bytes15[]"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"testDynamicFixedBytes32","outputs":[{"name":"","type":"bytes32[]"}],"payable":false,"stateMutability":"view","type":"function"}]`))
if err != nil {
t.Fatal(err)
@@ -249,7 +247,7 @@ func TestUnpackSetDynamicArrayOutput(t *testing.T) {
)
// test 32
err = abi.Unpack(&out32, "testDynamicFixedBytes32", marshalledReturn32)
err = abi.UnpackIntoInterface(&out32, "testDynamicFixedBytes32", marshalledReturn32)
if err != nil {
t.Fatal(err)
}
@@ -266,7 +264,7 @@ func TestUnpackSetDynamicArrayOutput(t *testing.T) {
}
// test 15
err = abi.Unpack(&out15, "testDynamicFixedBytes32", marshalledReturn15)
err = abi.UnpackIntoInterface(&out15, "testDynamicFixedBytes32", marshalledReturn15)
if err != nil {
t.Fatal(err)
}
@@ -367,7 +365,7 @@ func TestMethodMultiReturn(t *testing.T) {
tc := tc
t.Run(tc.name, func(t *testing.T) {
require := require.New(t)
err := abi.Unpack(tc.dest, "multi", data)
err := abi.UnpackIntoInterface(tc.dest, "multi", data)
if tc.error == "" {
require.Nil(err, "Should be able to unpack method outputs.")
require.Equal(tc.expected, tc.dest)
@@ -390,7 +388,7 @@ func TestMultiReturnWithArray(t *testing.T) {
ret1, ret1Exp := new([3]uint64), [3]uint64{9, 9, 9}
ret2, ret2Exp := new(uint64), uint64(8)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@@ -414,7 +412,7 @@ func TestMultiReturnWithStringArray(t *testing.T) {
ret2, ret2Exp := new(common.Address), common.HexToAddress("ab1257528b3782fb40d7ed5f72e624b744dffb2f")
ret3, ret3Exp := new([2]string), [2]string{"Ethereum", "Hello, Ethereum!"}
ret4, ret4Exp := new(bool), false
if err := abi.Unpack(&[]interface{}{ret1, ret2, ret3, ret4}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2, ret3, ret4}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@@ -452,7 +450,7 @@ func TestMultiReturnWithStringSlice(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000065")) // output[1][1] value
ret1, ret1Exp := new([]string), []string{"ethereum", "go-ethereum"}
ret2, ret2Exp := new([]*big.Int), []*big.Int{big.NewInt(100), big.NewInt(101)}
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@@ -492,7 +490,7 @@ func TestMultiReturnWithDeeplyNestedArray(t *testing.T) {
{{0x411, 0x412, 0x413}, {0x421, 0x422, 0x423}},
}
ret2, ret2Exp := new(uint64), uint64(0x9876)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@@ -531,7 +529,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000a"))
buff.Write(common.Hex2Bytes("0102000000000000000000000000000000000000000000000000000000000000"))
err = abi.Unpack(&mixedBytes, "mixedBytes", buff.Bytes())
err = abi.UnpackIntoInterface(&mixedBytes, "mixedBytes", buff.Bytes())
if err != nil {
t.Error(err)
} else {
@@ -546,7 +544,7 @@ func TestUnmarshal(t *testing.T) {
// marshal int
var Int *big.Int
err = abi.Unpack(&Int, "int", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
err = abi.UnpackIntoInterface(&Int, "int", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
@@ -557,7 +555,7 @@ func TestUnmarshal(t *testing.T) {
// marshal bool
var Bool bool
err = abi.Unpack(&Bool, "bool", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
err = abi.UnpackIntoInterface(&Bool, "bool", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
@@ -574,7 +572,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(bytesOut)
var Bytes []byte
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@@ -590,7 +588,7 @@ func TestUnmarshal(t *testing.T) {
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@@ -606,7 +604,7 @@ func TestUnmarshal(t *testing.T) {
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@@ -616,7 +614,7 @@ func TestUnmarshal(t *testing.T) {
}
// marshal dynamic bytes output empty
err = abi.Unpack(&Bytes, "bytes", nil)
err = abi.UnpackIntoInterface(&Bytes, "bytes", nil)
if err == nil {
t.Error("expected error")
}
@@ -627,7 +625,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000005"))
buff.Write(common.RightPadBytes([]byte("hello"), 32))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@@ -641,7 +639,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.RightPadBytes([]byte("hello"), 32))
var hash common.Hash
err = abi.Unpack(&hash, "fixed", buff.Bytes())
err = abi.UnpackIntoInterface(&hash, "fixed", buff.Bytes())
if err != nil {
t.Error(err)
}
@@ -654,12 +652,12 @@ func TestUnmarshal(t *testing.T) {
// marshal error
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err == nil {
t.Error("expected error")
}
err = abi.Unpack(&Bytes, "multi", make([]byte, 64))
err = abi.UnpackIntoInterface(&Bytes, "multi", make([]byte, 64))
if err == nil {
t.Error("expected error")
}
@@ -670,7 +668,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000003"))
// marshal int array
var intArray [3]*big.Int
err = abi.Unpack(&intArray, "intArraySingle", buff.Bytes())
err = abi.UnpackIntoInterface(&intArray, "intArraySingle", buff.Bytes())
if err != nil {
t.Error(err)
}
@@ -691,7 +689,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000100000000000000000000000000000000000000"))
var outAddr []common.Address
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
err = abi.UnpackIntoInterface(&outAddr, "addressSliceSingle", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
@@ -718,7 +716,7 @@ func TestUnmarshal(t *testing.T) {
A []common.Address
B []common.Address
}
err = abi.Unpack(&outAddrStruct, "addressSliceDouble", buff.Bytes())
err = abi.UnpackIntoInterface(&outAddrStruct, "addressSliceDouble", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
@@ -746,7 +744,7 @@ func TestUnmarshal(t *testing.T) {
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000100"))
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
err = abi.UnpackIntoInterface(&outAddr, "addressSliceSingle", buff.Bytes())
if err == nil {
t.Fatal("expected error:", err)
}
@@ -769,7 +767,7 @@ func TestUnpackTuple(t *testing.T) {
B *big.Int
}{new(big.Int), new(big.Int)}
err = abi.Unpack(&v, "tuple", buff.Bytes())
err = abi.UnpackIntoInterface(&v, "tuple", buff.Bytes())
if err != nil {
t.Error(err)
} else {
@@ -841,7 +839,7 @@ func TestUnpackTuple(t *testing.T) {
A: big.NewInt(1),
}
err = abi.Unpack(&ret, "tuple", buff.Bytes())
err = abi.UnpackIntoInterface(&ret, "tuple", buff.Bytes())
if err != nil {
t.Error(err)
}

View File

@@ -21,7 +21,7 @@ import (
"fmt"
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/event"
@@ -88,7 +88,7 @@ type Wallet interface {
// to discover non zero accounts and automatically add them to list of tracked
// accounts.
//
// Note, self derivaton will increment the last component of the specified path
// Note, self derivation will increment the last component of the specified path
// opposed to descending into a child path to allow discovering accounts starting
// from non zero components.
//

View File

@@ -150,3 +150,31 @@ func (path *DerivationPath) UnmarshalJSON(b []byte) error {
*path, err = ParseDerivationPath(dp)
return err
}
// DefaultIterator creates a BIP-32 path iterator, which progresses by increasing the last component:
// i.e. m/44'/60'/0'/0/0, m/44'/60'/0'/0/1, m/44'/60'/0'/0/2, ... m/44'/60'/0'/0/N.
func DefaultIterator(base DerivationPath) func() DerivationPath {
path := make(DerivationPath, len(base))
copy(path[:], base[:])
// Set it back by one, so the first call gives the first result
path[len(path)-1]--
return func() DerivationPath {
path[len(path)-1]++
return path
}
}
// LedgerLiveIterator creates a bip44 path iterator for Ledger Live.
// Ledger Live increments the third component rather than the fifth component
// i.e. m/44'/60'/0'/0/0, m/44'/60'/1'/0/0, m/44'/60'/2'/0/0, ... m/44'/60'/N'/0/0.
func LedgerLiveIterator(base DerivationPath) func() DerivationPath {
path := make(DerivationPath, len(base))
copy(path[:], base[:])
// Set it back by one, so the first call gives the first result
path[2]--
return func() DerivationPath {
// ledgerLivePathIterator iterates on the third component
path[2]++
return path
}
}

View File

@@ -17,6 +17,7 @@
package accounts
import (
"fmt"
"reflect"
"testing"
)
@@ -77,3 +78,41 @@ func TestHDPathParsing(t *testing.T) {
}
}
}
func testDerive(t *testing.T, next func() DerivationPath, expected []string) {
t.Helper()
for i, want := range expected {
if have := next(); fmt.Sprintf("%v", have) != want {
t.Errorf("step %d, have %v, want %v", i, have, want)
}
}
}
func TestHdPathIteration(t *testing.T) {
testDerive(t, DefaultIterator(DefaultBaseDerivationPath),
[]string{
"m/44'/60'/0'/0/0", "m/44'/60'/0'/0/1",
"m/44'/60'/0'/0/2", "m/44'/60'/0'/0/3",
"m/44'/60'/0'/0/4", "m/44'/60'/0'/0/5",
"m/44'/60'/0'/0/6", "m/44'/60'/0'/0/7",
"m/44'/60'/0'/0/8", "m/44'/60'/0'/0/9",
})
testDerive(t, DefaultIterator(LegacyLedgerBaseDerivationPath),
[]string{
"m/44'/60'/0'/0", "m/44'/60'/0'/1",
"m/44'/60'/0'/2", "m/44'/60'/0'/3",
"m/44'/60'/0'/4", "m/44'/60'/0'/5",
"m/44'/60'/0'/6", "m/44'/60'/0'/7",
"m/44'/60'/0'/8", "m/44'/60'/0'/9",
})
testDerive(t, LedgerLiveIterator(DefaultBaseDerivationPath),
[]string{
"m/44'/60'/0'/0/0", "m/44'/60'/1'/0/0",
"m/44'/60'/2'/0/0", "m/44'/60'/3'/0/0",
"m/44'/60'/4'/0/0", "m/44'/60'/5'/0/0",
"m/44'/60'/6'/0/0", "m/44'/60'/7'/0/0",
"m/44'/60'/8'/0/0", "m/44'/60'/9'/0/0",
})
}

View File

@@ -32,7 +32,7 @@ import (
type fileCache struct {
all mapset.Set // Set of all files from the keystore folder
lastMod time.Time // Last time instance when a file was modified
mu sync.RWMutex
mu sync.Mutex
}
// scan performs a new scan on the given directory, compares against the already

View File

@@ -336,7 +336,9 @@ func TestWalletNotifications(t *testing.T) {
// Shut down the event collector and check events.
sub.Unsubscribe()
<-updates
for ev := range updates {
events = append(events, walletEvent{ev, ev.Wallet.Accounts()[0]})
}
checkAccounts(t, live, ks.Wallets())
checkEvents(t, wantEvents, events)
}

View File

@@ -230,7 +230,7 @@ func DecryptKey(keyjson []byte, auth string) (*Key, error) {
key := crypto.ToECDSAUnsafe(keyBytes)
return &Key{
Id: uuid.UUID(keyId),
Id: keyId,
Address: crypto.PubkeyToAddress(key.PublicKey),
PrivateKey: key,
}, nil

View File

@@ -19,7 +19,7 @@ package keystore
import (
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
@@ -58,7 +58,7 @@ func (w *keystoreWallet) Open(passphrase string) error { return nil }
func (w *keystoreWallet) Close() error { return nil }
// Accounts implements accounts.Wallet, returning an account list consisting of
// a single account that the plain kestore wallet contains.
// a single account that the plain keystore wallet contains.
func (w *keystoreWallet) Accounts() []accounts.Account {
return []accounts.Account{w.account}
}
@@ -93,12 +93,12 @@ func (w *keystoreWallet) signHash(account accounts.Account, hash []byte) ([]byte
return w.keystore.SignHash(account, hash)
}
// SignData signs keccak256(data). The mimetype parameter describes the type of data being signed
// SignData signs keccak256(data). The mimetype parameter describes the type of data being signed.
func (w *keystoreWallet) SignData(account accounts.Account, mimeType string, data []byte) ([]byte, error) {
return w.signHash(account, crypto.Keccak256(data))
}
// SignDataWithPassphrase signs keccak256(data). The mimetype parameter describes the type of data being signed
// SignDataWithPassphrase signs keccak256(data). The mimetype parameter describes the type of data being signed.
func (w *keystoreWallet) SignDataWithPassphrase(account accounts.Account, passphrase, mimeType string, data []byte) ([]byte, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {
@@ -108,12 +108,14 @@ func (w *keystoreWallet) SignDataWithPassphrase(account accounts.Account, passph
return w.keystore.SignHashWithPassphrase(account, passphrase, crypto.Keccak256(data))
}
// SignText implements accounts.Wallet, attempting to sign the hash of
// the given text with the given account.
func (w *keystoreWallet) SignText(account accounts.Account, text []byte) ([]byte, error) {
return w.signHash(account, accounts.TextHash(text))
}
// SignTextWithPassphrase implements accounts.Wallet, attempting to sign the
// given hash with the given account using passphrase as extra authentication.
// hash of the given text with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignTextWithPassphrase(account accounts.Account, passphrase string, text []byte) ([]byte, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {

View File

@@ -33,7 +33,7 @@ import (
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
@@ -637,7 +637,7 @@ func (w *Wallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Accoun
// to discover non zero accounts and automatically add them to list of tracked
// accounts.
//
// Note, self derivaton will increment the last component of the specified path
// Note, self derivation will increment the last component of the specified path
// opposed to descending into a child path to allow discovering accounts starting
// from non zero components.
//

View File

@@ -162,7 +162,7 @@ func (w *ledgerDriver) SignTx(path accounts.DerivationPath, tx *types.Transactio
return common.Address{}, nil, accounts.ErrWalletClosed
}
// Ensure the wallet is capable of signing the given transaction
if chainID != nil && w.version[0] <= 1 && w.version[2] <= 2 {
if chainID != nil && w.version[0] <= 1 && w.version[1] <= 0 && w.version[2] <= 2 {
//lint:ignore ST1005 brand name displayed on the console
return common.Address{}, nil, fmt.Errorf("Ledger v%d.%d.%d doesn't support signing this transaction, please update to v1.0.3 at least", w.version[0], w.version[1], w.version[2])
}

View File

@@ -25,7 +25,7 @@ import (
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
@@ -368,18 +368,22 @@ func (w *wallet) selfDerive() {
w.log.Warn("USB wallet nonce retrieval failed", "err", err)
break
}
// If the next account is empty, stop self-derivation, but add for the last base path
// We've just self-derived a new account, start tracking it locally
// unless the account was empty.
path := make(accounts.DerivationPath, len(nextPaths[i]))
copy(path[:], nextPaths[i][:])
if balance.Sign() == 0 && nonce == 0 {
empty = true
// If it indeed was empty, make a log output for it anyway. In the case
// of legacy-ledger, the first account on the legacy-path will
// be shown to the user, even if we don't actively track it
if i < len(nextAddrs)-1 {
w.log.Info("Skipping trakcking first account on legacy path, use personal.deriveAccount(<url>,<path>, false) to track",
"path", path, "address", nextAddrs[i])
break
}
}
// We've just self-derived a new account, start tracking it locally
path := make(accounts.DerivationPath, len(nextPaths[i]))
copy(path[:], nextPaths[i][:])
paths = append(paths, path)
account := accounts.Account{
Address: nextAddrs[i],
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
@@ -489,7 +493,7 @@ func (w *wallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Accoun
// to discover non zero accounts and automatically add them to list of tracked
// accounts.
//
// Note, self derivaton will increment the last component of the specified path
// Note, self derivation will increment the last component of the specified path
// opposed to descending into a child path to allow discovering accounts starting
// from non zero components.
//

View File

@@ -24,13 +24,13 @@ environment:
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://dl.google.com/go/go1.14.2.windows-%GETH_ARCH%.zip
- 7z x go1.14.2.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- appveyor DownloadFile https://dl.google.com/go/go1.15.5.windows-%GETH_ARCH%.zip
- 7z x go1.15.5.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version
build_script:
- go run build\ci.go install
- go run build\ci.go install -dlgo
after_build:
- go run build\ci.go archive -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds

View File

@@ -1,6 +1,17 @@
# This file contains sha256 checksums of optional build dependencies.
98de84e69726a66da7b4e58eac41b99cbe274d7e8906eeb8a5b7eb0aadee7f7c go1.14.2.src.tar.gz
890bba73c5e2b19ffb1180e385ea225059eb008eb91b694875dd86ea48675817 go1.15.6.src.tar.gz
940a73b45993a3bae5792cf324140dded34af97c548af4864d22fd6d49f3bd9f go1.15.6.darwin-amd64.tar.gz
ad187f02158b9a9013ef03f41d14aa69c402477f178825a3940280814bcbb755 go1.15.6.linux-386.tar.gz
3918e6cc85e7eaaa6f859f1bdbaac772e7a825b0eb423c63d3ae68b21f84b844 go1.15.6.linux-amd64.tar.gz
f87515b9744154ffe31182da9341d0a61eb0795551173d242c8cad209239e492 go1.15.6.linux-arm64.tar.gz
40ba9a57764e374195018ef37c38a5fbac9bbce908eab436370631a84bfc5788 go1.15.6.linux-armv6l.tar.gz
5872eff6746a0a5f304272b27cbe9ce186f468454e95749cce01e903fbfc0e17 go1.15.6.windows-386.zip
b7b3808bb072c2bab73175009187fd5a7f20ffe0a31739937003a14c5c4d9006 go1.15.6.windows-amd64.zip
9d9dd5c217c1392f1b2ed5e03e1c71bf4cf8553884e57a38e68fd37fdcfe31a8 go1.15.6.freebsd-386.tar.gz
609f065d855aed5a0b40ef0245aacbcc0b4b7882dc3b1e75ae50576cf25265ee go1.15.6.freebsd-amd64.tar.gz
d4174fc217e749ac049eacc8827df879689f2246ac230d04991ae7df336f7cb2 go1.15.6.linux-ppc64le.tar.gz
839cc6b67687d8bb7cb044e4a9a2eac0c090765cc8ec55ffe714dfb7cd51cf3a go1.15.6.linux-s390x.tar.gz
d998a84eea42f2271aca792a7b027ca5c1edfcba229e8e5a844c9ac3f336df35 golangci-lint-1.27.0-linux-armv7.tar.gz
bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint.exe-1.27.0-windows-amd64.zip

View File

@@ -26,7 +26,7 @@ Available commands are:
install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables
test [ -coverage ] [ packages... ] -- runs the tests
lint -- runs certain pre-selected linters
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artifacts
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts
importkeys -- imports signing keys from env
debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package
nsis -- creates a Windows NSIS installer
@@ -46,12 +46,11 @@ import (
"encoding/base64"
"flag"
"fmt"
"go/parser"
"go/token"
"io/ioutil"
"log"
"os"
"os/exec"
"path"
"path/filepath"
"regexp"
"runtime"
@@ -59,6 +58,7 @@ import (
"time"
"github.com/cespare/cp"
"github.com/ethereum/go-ethereum/crypto/signify"
"github.com/ethereum/go-ethereum/internal/build"
"github.com/ethereum/go-ethereum/params"
)
@@ -79,7 +79,6 @@ var (
executablePath("geth"),
executablePath("puppeth"),
executablePath("rlpdump"),
executablePath("wnode"),
executablePath("clef"),
}
@@ -109,10 +108,6 @@ var (
BinaryName: "rlpdump",
Description: "Developer utility tool that prints RLP structures.",
},
{
BinaryName: "wnode",
Description: "Ethereum Whisper diagnostic tool",
},
{
BinaryName: "clef",
Description: "Ethereum account management tool.",
@@ -139,19 +134,25 @@ var (
// Note: zesty is unsupported because it was officially deprecated on Launchpad.
// Note: artful is unsupported because it was officially deprecated on Launchpad.
// Note: cosmic is unsupported because it was officially deprecated on Launchpad.
// Note: disco is unsupported because it was officially deprecated on Launchpad.
// Note: eoan is unsupported because it was officially deprecated on Launchpad.
debDistroGoBoots = map[string]string{
"trusty": "golang-1.11",
"xenial": "golang-go",
"bionic": "golang-go",
"disco": "golang-go",
"eoan": "golang-go",
"focal": "golang-go",
"groovy": "golang-go",
}
debGoBootPaths = map[string]string{
"golang-1.11": "/usr/lib/go-1.11",
"golang-go": "/usr/lib/go",
}
// This is the version of go that will be downloaded by
//
// go run ci.go install -dlgo
dlgoVersion = "1.15.6"
)
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@@ -202,108 +203,130 @@ func main() {
func doInstall(cmdline []string) {
var (
dlgo = flag.Bool("dlgo", false, "Download Go and build with it")
arch = flag.String("arch", "", "Architecture to cross build for")
cc = flag.String("cc", "", "C compiler to cross build with")
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
// Check Go version. People regularly open issues about compilation
// Check local Go version. People regularly open issues about compilation
// failure with outdated Go. This should save them the trouble.
if !strings.Contains(runtime.Version(), "devel") {
// Figure out the minor version number since we can't textually compare (1.10 < 1.9)
var minor int
fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
if minor < 11 {
if minor < 13 {
log.Println("You have Go version", runtime.Version())
log.Println("go-ethereum requires at least Go version 1.11 and cannot")
log.Println("go-ethereum requires at least Go version 1.13 and cannot")
log.Println("be compiled with an earlier version. Please upgrade your Go installation.")
os.Exit(1)
}
}
// Compile packages given as arguments, or everything if there are no arguments.
packages := []string{"./..."}
if flag.NArg() > 0 {
packages = flag.Args()
// Choose which go command we're going to use.
var gobuild *exec.Cmd
if !*dlgo {
// Default behavior: use the go version which runs ci.go right now.
gobuild = goTool("build")
} else {
// Download of Go requested. This is for build environments where the
// installed version is too old and cannot be upgraded easily.
cachedir := filepath.Join("build", "cache")
goroot := downloadGo(runtime.GOARCH, runtime.GOOS, cachedir)
gobuild = localGoTool(goroot, "build")
}
if *arch == "" || *arch == runtime.GOARCH {
goinstall := goTool("install", buildFlags(env)...)
if runtime.GOARCH == "arm64" {
goinstall.Args = append(goinstall.Args, "-p", "1")
}
goinstall.Args = append(goinstall.Args, "-v")
goinstall.Args = append(goinstall.Args, packages...)
build.MustRun(goinstall)
return
// Configure environment for cross build.
if *arch != "" || *arch != runtime.GOARCH {
gobuild.Env = append(gobuild.Env, "CGO_ENABLED=1")
gobuild.Env = append(gobuild.Env, "GOARCH="+*arch)
}
// Seems we are cross compiling, work around forbidden GOBIN
goinstall := goToolArch(*arch, *cc, "install", buildFlags(env)...)
goinstall.Args = append(goinstall.Args, "-v")
goinstall.Args = append(goinstall.Args, []string{"-buildmode", "archive"}...)
goinstall.Args = append(goinstall.Args, packages...)
build.MustRun(goinstall)
// Configure C compiler.
if *cc != "" {
gobuild.Env = append(gobuild.Env, "CC="+*cc)
} else if os.Getenv("CC") != "" {
gobuild.Env = append(gobuild.Env, "CC="+os.Getenv("CC"))
}
if cmds, err := ioutil.ReadDir("cmd"); err == nil {
for _, cmd := range cmds {
pkgs, err := parser.ParseDir(token.NewFileSet(), filepath.Join(".", "cmd", cmd.Name()), nil, parser.PackageClauseOnly)
if err != nil {
log.Fatal(err)
}
for name := range pkgs {
if name == "main" {
gobuild := goToolArch(*arch, *cc, "build", buildFlags(env)...)
gobuild.Args = append(gobuild.Args, "-v")
gobuild.Args = append(gobuild.Args, []string{"-o", executablePath(cmd.Name())}...)
gobuild.Args = append(gobuild.Args, "."+string(filepath.Separator)+filepath.Join("cmd", cmd.Name()))
build.MustRun(gobuild)
break
}
}
}
// arm64 CI builders are memory-constrained and can't handle concurrent builds,
// better disable it. This check isn't the best, it should probably
// check for something in env instead.
if runtime.GOARCH == "arm64" {
gobuild.Args = append(gobuild.Args, "-p", "1")
}
// Put the default settings in.
gobuild.Args = append(gobuild.Args, buildFlags(env)...)
// We use -trimpath to avoid leaking local paths into the built executables.
gobuild.Args = append(gobuild.Args, "-trimpath")
// Show packages during build.
gobuild.Args = append(gobuild.Args, "-v")
// Now we choose what we're even building.
// Default: collect all 'main' packages in cmd/ and build those.
packages := flag.Args()
if len(packages) == 0 {
packages = build.FindMainPackages("./cmd")
}
// Do the build!
for _, pkg := range packages {
args := make([]string, len(gobuild.Args))
copy(args, gobuild.Args)
args = append(args, "-o", executablePath(path.Base(pkg)))
args = append(args, pkg)
build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env})
}
}
// buildFlags returns the go tool flags for building.
func buildFlags(env build.Environment) (flags []string) {
var ld []string
if env.Commit != "" {
ld = append(ld, "-X", "main.gitCommit="+env.Commit)
ld = append(ld, "-X", "main.gitDate="+env.Date)
}
// Strip DWARF on darwin. This used to be required for certain things,
// and there is no downside to this, so we just keep doing it.
if runtime.GOOS == "darwin" {
ld = append(ld, "-s")
}
if len(ld) > 0 {
flags = append(flags, "-ldflags", strings.Join(ld, " "))
}
return flags
}
// goTool returns the go tool. This uses the Go version which runs ci.go.
func goTool(subcmd string, args ...string) *exec.Cmd {
return goToolArch(runtime.GOARCH, os.Getenv("CC"), subcmd, args...)
cmd := build.GoTool(subcmd, args...)
goToolSetEnv(cmd)
return cmd
}
func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd {
cmd := build.GoTool(subcmd, args...)
if arch == "" || arch == runtime.GOARCH {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
} else {
cmd.Env = append(cmd.Env, "CGO_ENABLED=1")
cmd.Env = append(cmd.Env, "GOARCH="+arch)
}
if cc != "" {
cmd.Env = append(cmd.Env, "CC="+cc)
}
// localGoTool returns the go tool from the given GOROOT.
func localGoTool(goroot string, subcmd string, args ...string) *exec.Cmd {
gotool := filepath.Join(goroot, "bin", "go")
cmd := exec.Command(gotool, subcmd)
goToolSetEnv(cmd)
cmd.Env = append(cmd.Env, "GOROOT="+goroot)
cmd.Args = append(cmd.Args, args...)
return cmd
}
// goToolSetEnv forwards the build environment to the go tool.
func goToolSetEnv(cmd *exec.Cmd) {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOBIN=") {
if strings.HasPrefix(e, "GOBIN=") || strings.HasPrefix(e, "CC=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
return cmd
}
// Running The Tests
@@ -365,7 +388,7 @@ func downloadLinter(cachedir string) string {
if err := csdb.DownloadFile(url, archivePath); err != nil {
log.Fatal(err)
}
if err := build.ExtractTarballArchive(archivePath, cachedir); err != nil {
if err := build.ExtractArchive(archivePath, cachedir); err != nil {
log.Fatal(err)
}
return filepath.Join(cachedir, base, "golangci-lint")
@@ -374,11 +397,12 @@ func downloadLinter(cachedir string) string {
// Release Packaging
func doArchive(cmdline []string) {
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture cross packaging")
atype = flag.String("type", "zip", "Type of archive to write (zip|tar)")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. LINUX_SIGNING_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
ext string
arch = flag.String("arch", runtime.GOARCH, "Architecture cross packaging")
atype = flag.String("type", "zip", "Type of archive to write (zip|tar)")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. LINUX_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify key (e.g. LINUX_SIGNIFY_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
ext string
)
flag.CommandLine.Parse(cmdline)
switch *atype {
@@ -405,7 +429,7 @@ func doArchive(cmdline []string) {
log.Fatal(err)
}
for _, archive := range []string{geth, alltools} {
if err := archiveUpload(archive, *upload, *signer); err != nil {
if err := archiveUpload(archive, *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
}
@@ -425,7 +449,7 @@ func archiveBasename(arch string, archiveVersion string) string {
return platform + "-" + archiveVersion
}
func archiveUpload(archive string, blobstore string, signer string) error {
func archiveUpload(archive string, blobstore string, signer string, signifyVar string) error {
// If signing was requested, generate the signature files
if signer != "" {
key := getenvBase64(signer)
@@ -433,6 +457,14 @@ func archiveUpload(archive string, blobstore string, signer string) error {
return err
}
}
if signifyVar != "" {
key := os.Getenv(signifyVar)
untrustedComment := "verify with geth-release.pub"
trustedComment := fmt.Sprintf("%s (%s)", archive, time.Now().UTC().Format(time.RFC1123))
if err := signify.SignFile(archive, archive+".sig", key, untrustedComment, trustedComment); err != nil {
return err
}
}
// If uploading to Azure was requested, push the archive possibly with its signature
if blobstore != "" {
auth := build.AzureBlobstoreConfig{
@@ -448,6 +480,11 @@ func archiveUpload(archive string, blobstore string, signer string) error {
return err
}
}
if signifyVar != "" {
if err := build.AzureBlobstoreUpload(archive+".sig", filepath.Base(archive+".sig"), auth); err != nil {
return err
}
}
}
return nil
}
@@ -471,13 +508,12 @@ func maybeSkipArchive(env build.Environment) {
// Debian Packaging
func doDebianSource(cmdline []string) {
var (
goversion = flag.String("goversion", "", `Go version to build with (will be included in the source package)`)
cachedir = flag.String("cachedir", "./build/cache", `Filesystem path to cache the downloaded Go bundles at`)
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
cachedir = flag.String("cachedir", "./build/cache", `Filesystem path to cache the downloaded Go bundles at`)
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
)
flag.CommandLine.Parse(cmdline)
*workdir = makeWorkdir(*workdir)
@@ -492,10 +528,10 @@ func doDebianSource(cmdline []string) {
}
// Download and verify the Go source package.
gobundle := downloadGoSources(*goversion, *cachedir)
gobundle := downloadGoSources(*cachedir)
// Download all the dependencies needed to build the sources and run the ci script
srcdepfetch := goTool("install", "-n", "./...")
srcdepfetch := goTool("mod", "download")
srcdepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath"))
build.MustRun(srcdepfetch)
@@ -511,7 +547,7 @@ func doDebianSource(cmdline []string) {
pkgdir := stageDebianSource(*workdir, meta)
// Add Go source code
if err := build.ExtractTarballArchive(gobundle, pkgdir); err != nil {
if err := build.ExtractArchive(gobundle, pkgdir); err != nil {
log.Fatalf("Failed to extract Go sources: %v", err)
}
if err := os.Rename(filepath.Join(pkgdir, "go"), filepath.Join(pkgdir, ".go")); err != nil {
@@ -543,9 +579,10 @@ func doDebianSource(cmdline []string) {
}
}
func downloadGoSources(version string, cachedir string) string {
// downloadGoSources downloads the Go source tarball.
func downloadGoSources(cachedir string) string {
csdb := build.MustLoadChecksums("build/checksums.txt")
file := fmt.Sprintf("go%s.src.tar.gz", version)
file := fmt.Sprintf("go%s.src.tar.gz", dlgoVersion)
url := "https://dl.google.com/go/" + file
dst := filepath.Join(cachedir, file)
if err := csdb.DownloadFile(url, dst); err != nil {
@@ -554,6 +591,41 @@ func downloadGoSources(version string, cachedir string) string {
return dst
}
// downloadGo downloads the Go binary distribution and unpacks it into a temporary
// directory. It returns the GOROOT of the unpacked toolchain.
func downloadGo(goarch, goos, cachedir string) string {
if goarch == "arm" {
goarch = "armv6l"
}
csdb := build.MustLoadChecksums("build/checksums.txt")
file := fmt.Sprintf("go%s.%s-%s", dlgoVersion, goos, goarch)
if goos == "windows" {
file += ".zip"
} else {
file += ".tar.gz"
}
url := "https://golang.org/dl/" + file
dst := filepath.Join(cachedir, file)
if err := csdb.DownloadFile(url, dst); err != nil {
log.Fatal(err)
}
ucache, err := os.UserCacheDir()
if err != nil {
log.Fatal(err)
}
godir := filepath.Join(ucache, fmt.Sprintf("geth-go-%s-%s-%s", dlgoVersion, goos, goarch))
if err := build.ExtractArchive(dst, godir); err != nil {
log.Fatal(err)
}
goroot, err := filepath.Abs(filepath.Join(godir, "go"))
if err != nil {
log.Fatal(err)
}
return goroot
}
func ppaUpload(workdir, ppa, sshUser string, files []string) {
p := strings.Split(ppa, "/")
if len(p) != 2 {
@@ -749,6 +821,7 @@ func doWindowsInstaller(cmdline []string) {
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture for cross build packaging")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. WINDOWS_SIGNING_KEY)`)
signify = flag.String("signify key", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
)
@@ -810,7 +883,7 @@ func doWindowsInstaller(cmdline []string) {
filepath.Join(*workdir, "geth.nsi"),
)
// Sign and publish installer.
if err := archiveUpload(installer, *upload, *signer); err != nil {
if err := archiveUpload(installer, *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
}
@@ -819,10 +892,11 @@ func doWindowsInstaller(cmdline []string) {
func doAndroidArchive(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. ANDROID_SIGNING_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "https://oss.sonatype.org")`)
upload = flag.String("upload", "", `Destination to upload the archive (usually "gethstore/builds")`)
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. ANDROID_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. ANDROID_SIGNIFY_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "https://oss.sonatype.org")`)
upload = flag.String("upload", "", `Destination to upload the archive (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
@@ -838,6 +912,7 @@ func doAndroidArchive(cmdline []string) {
if *local {
// If we're building locally, copy bundle to build dir and skip Maven
os.Rename("geth.aar", filepath.Join(GOBIN, "geth.aar"))
os.Rename("geth-sources.jar", filepath.Join(GOBIN, "geth-sources.jar"))
return
}
meta := newMavenMetadata(env)
@@ -850,7 +925,7 @@ func doAndroidArchive(cmdline []string) {
archive := "geth-" + archiveBasename("android", params.ArchiveVersion(env.Commit)) + ".aar"
os.Rename("geth.aar", archive)
if err := archiveUpload(archive, *upload, *signer); err != nil {
if err := archiveUpload(archive, *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
// Sign and upload all the artifacts to Maven Central
@@ -884,11 +959,12 @@ func gomobileTool(subcmd string, args ...string) *exec.Cmd {
"PATH=" + GOBIN + string(os.PathListSeparator) + os.Getenv("PATH"),
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") || strings.HasPrefix(e, "GOBIN=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
return cmd
}
@@ -942,10 +1018,11 @@ func newMavenMetadata(env build.Environment) mavenMetadata {
func doXCodeFramework(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. IOS_SIGNING_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "trunk")`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. IOS_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. IOS_SIGNIFY_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "trunk")`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
@@ -957,7 +1034,7 @@ func doXCodeFramework(cmdline []string) {
if *local {
// If we're building locally, use the build folder and stop afterwards
bind.Dir, _ = filepath.Abs(GOBIN)
bind.Dir = GOBIN
build.MustRun(bind)
return
}
@@ -973,14 +1050,14 @@ func doXCodeFramework(cmdline []string) {
maybeSkipArchive(env)
// Sign and upload the framework to Azure
if err := archiveUpload(archive+".tar.gz", *upload, *signer); err != nil {
if err := archiveUpload(archive+".tar.gz", *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
// Prepare and upload a PodSpec to CocoaPods
if *deploy != "" {
meta := newPodMetadata(env, archive)
build.Render("build/pod.podspec", "Geth.podspec", 0755, meta)
build.MustRunCommand("pod", *deploy, "push", "Geth.podspec", "--allow-warnings", "--verbose")
build.MustRunCommand("pod", *deploy, "push", "Geth.podspec", "--allow-warnings")
}
}

View File

@@ -44,7 +44,7 @@ func main() {
natdesc = flag.String("nat", "none", "port mapping mechanism (any|none|upnp|pmp|extip:<IP>)")
netrestrict = flag.String("netrestrict", "", "restrict network communication to the given IP networks (CIDR masks)")
runv5 = flag.Bool("v5", false, "run a v5 topic discovery bootnode")
verbosity = flag.Int("verbosity", int(log.LvlInfo), "log verbosity (0-9)")
verbosity = flag.Int("verbosity", int(log.LvlInfo), "log verbosity (0-5)")
vmodule = flag.String("vmodule", "", "log verbosity pattern")
nodeKey *ecdsa.PrivateKey

View File

@@ -94,7 +94,7 @@ with minimal requirements.
On the `client` qube, we need to create a listener which will receive the request from the Dapp, and proxy it.
[qubes-client.py](qubes/client/qubes-client.py):
[qubes-client.py](qubes/qubes-client.py):
```python

View File

@@ -10,6 +10,64 @@ TL;DR: Given a version number MAJOR.MINOR.PATCH, increment the:
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
### 6.1.0
The API-method `account_signGnosisSafeTx` was added. This method takes two parameters,
`[address, safeTx]`. The latter, `safeTx`, can be copy-pasted from the gnosis relay. For example:
```
{
"jsonrpc": "2.0",
"method": "account_signGnosisSafeTx",
"params": ["0xfd1c4226bfD1c436672092F4eCbfC270145b7256",
{
"safe": "0x25a6c4BBd32B2424A9c99aEB0584Ad12045382B3",
"to": "0xB372a646f7F05Cc1785018dBDA7EBc734a2A20E2",
"value": "20000000000000000",
"data": null,
"operation": 0,
"gasToken": "0x0000000000000000000000000000000000000000",
"safeTxGas": 27845,
"baseGas": 0,
"gasPrice": "0",
"refundReceiver": "0x0000000000000000000000000000000000000000",
"nonce": 2,
"executionDate": null,
"submissionDate": "2020-09-15T21:54:49.617634Z",
"modified": "2020-09-15T21:54:49.617634Z",
"blockNumber": null,
"transactionHash": null,
"safeTxHash": "0x2edfbd5bc113ff18c0631595db32eb17182872d88d9bf8ee4d8c2dd5db6d95e2",
"executor": null,
"isExecuted": false,
"isSuccessful": null,
"ethGasPrice": null,
"gasUsed": null,
"fee": null,
"origin": null,
"dataDecoded": null,
"confirmationsRequired": null,
"confirmations": [
{
"owner": "0xAd2e180019FCa9e55CADe76E4487F126Fd08DA34",
"submissionDate": "2020-09-15T21:54:49.663299Z",
"transactionHash": null,
"confirmationType": "CONFIRMATION",
"signature": "0x95a7250bb645f831c86defc847350e7faff815b2fb586282568e96cc859e39315876db20a2eed5f7a0412906ec5ab57652a6f645ad4833f345bda059b9da2b821c",
"signatureType": "EOA"
}
],
"signatures": null
}
],
"id": 67
}
```
Not all fields are required, though. This method is really just a UX helper that massages the
input to conform to the `EIP-712` [specification](https://docs.gnosis.io/safe/docs/contracts_tx_execution/#transaction-hash)
for the Gnosis Safe, and makes the output directly importable by a relay service.
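For illustration, here is a minimal sketch of driving the method from Go over JSON-RPC. It assumes Clef is running with its HTTP API and the `account` namespace enabled on the default `localhost:8550` endpoint, and it reuses a trimmed-down version of the payload above; treat it as an example, not as part of Clef's documented API surface.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

// A trimmed-down version of the request shown above; the safeTx object would
// normally be copy-pasted from the Gnosis relay verbatim.
const request = `{
  "jsonrpc": "2.0",
  "id": 67,
  "method": "account_signGnosisSafeTx",
  "params": ["0xfd1c4226bfD1c436672092F4eCbfC270145b7256", {
    "safe": "0x25a6c4BBd32B2424A9c99aEB0584Ad12045382B3",
    "to": "0xB372a646f7F05Cc1785018dBDA7EBc734a2A20E2",
    "value": "20000000000000000",
    "operation": 0,
    "gasToken": "0x0000000000000000000000000000000000000000",
    "safeTxGas": 27845,
    "baseGas": 0,
    "gasPrice": "0",
    "refundReceiver": "0x0000000000000000000000000000000000000000",
    "nonce": 2
  }]
}`

func main() {
	// Assumes Clef's HTTP API is enabled on the default localhost:8550.
	resp, err := http.Post("http://localhost:8550", "application/json", bytes.NewBufferString(request))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// The response carries the EIP-712 typed data together with the signature.
	fmt.Println(string(body))
}
```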
### 6.0.0

View File

@@ -29,7 +29,6 @@ import (
"math/big"
"os"
"os/signal"
"os/user"
"path/filepath"
"runtime"
"sort"
@@ -55,7 +54,7 @@ import (
"github.com/ethereum/go-ethereum/signer/rules"
"github.com/ethereum/go-ethereum/signer/storage"
colorable "github.com/mattn/go-colorable"
"github.com/mattn/go-colorable"
"github.com/mattn/go-isatty"
"gopkg.in/urfave/cli.v1"
)
@@ -666,8 +665,8 @@ func signer(c *cli.Context) error {
Version: "1.0"},
}
if c.GlobalBool(utils.HTTPEnabledFlag.Name) {
vhosts := splitAndTrim(c.GlobalString(utils.HTTPVirtualHostsFlag.Name))
cors := splitAndTrim(c.GlobalString(utils.HTTPCORSDomainFlag.Name))
vhosts := utils.SplitAndTrim(c.GlobalString(utils.HTTPVirtualHostsFlag.Name))
cors := utils.SplitAndTrim(c.GlobalString(utils.HTTPCORSDomainFlag.Name))
srv := rpc.NewServer()
err := node.RegisterApisFromWhitelist(rpcAPI, []string{"account"}, srv, false)
@@ -736,21 +735,11 @@ func signer(c *cli.Context) error {
return nil
}
// splitAndTrim splits input separated by a comma
// and trims excessive white space from the substrings.
func splitAndTrim(input string) []string {
result := strings.Split(input, ",")
for i, r := range result {
result[i] = strings.TrimSpace(r)
}
return result
}
// DefaultConfigDir is the default config directory to use for the vaults and other
// persistence requirements.
func DefaultConfigDir() string {
// Try to place the data folder in the user's home dir
home := homeDir()
home := utils.HomeDir()
if home != "" {
if runtime.GOOS == "darwin" {
return filepath.Join(home, "Library", "Signer")
@@ -758,26 +747,15 @@ func DefaultConfigDir() string {
appdata := os.Getenv("APPDATA")
if appdata != "" {
return filepath.Join(appdata, "Signer")
} else {
return filepath.Join(home, "AppData", "Roaming", "Signer")
}
} else {
return filepath.Join(home, ".clef")
return filepath.Join(home, "AppData", "Roaming", "Signer")
}
return filepath.Join(home, ".clef")
}
// As we cannot guess a stable location, return empty and handle later
return ""
}
func homeDir() string {
if home := os.Getenv("HOME"); home != "" {
return home
}
if usr, err := user.Current(); err == nil {
return usr.HomeDir
}
return ""
}
func readMasterKey(ctx *cli.Context, ui core.UIClientAPI) ([]byte, error) {
var (
file string

cmd/devp2p/README.md Normal file
View File

@@ -0,0 +1,105 @@
# The devp2p command
The devp2p command line tool is a utility for low-level peer-to-peer debugging and protocol
development. Its sub-commands cover ENR decoding, node key management, DNS discovery node
lists, the v4 and v5 discovery protocols, and interactive protocol test suites, as described below.
### ENR Decoding
Use `devp2p enrdump <base64>` to verify and display an Ethereum Node Record.
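Decoding can also be done programmatically with the go-ethereum library; the following is a rough, minimal sketch of what `enrdump` does (error handling and key/value dumping omitted):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// Accepts either an "enr:..." record or an enode:// URL as the argument.
	if len(os.Args) < 2 {
		log.Fatal("usage: enrdump-lite <ENR or enode URL>")
	}
	n, err := enode.Parse(enode.ValidSchemes, os.Args[1])
	if err != nil {
		log.Fatalf("INVALID: %v", err)
	}
	fmt.Println("Node ID:", n.ID())
	fmt.Println("URLv4:  ", n.URLv4())
}
```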
### Node Key Management
The `devp2p key ...` command family deals with node key files.
Run `devp2p key generate mynode.key` to create a new node key in the `mynode.key` file.
Run `devp2p key to-enode mynode.key -ip 127.0.0.1 -tcp 30303` to create an enode:// URL
corresponding to the given node key and address information.
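The same can be done with the library directly; here is a minimal sketch (the file name and address values are only examples):

```go
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// Roughly what `devp2p key generate mynode.key` does.
	key, err := crypto.GenerateKey()
	if err != nil {
		log.Fatal(err)
	}
	if err := crypto.SaveECDSA("mynode.key", key); err != nil {
		log.Fatal(err)
	}
	// Roughly what `devp2p key to-enode mynode.key -ip 127.0.0.1 -tcp 30303` does.
	n := enode.NewV4(&key.PublicKey, net.ParseIP("127.0.0.1"), 30303, 0)
	fmt.Println(n.URLv4())
}
```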
### Maintaining DNS Discovery Node Lists
The devp2p command can create and publish DNS discovery node lists.
Run `devp2p dns sign <directory>` to update the signature of a DNS discovery tree.
Run `devp2p dns sync <enrtree-URL>` to download a complete DNS discovery tree.
Run `devp2p dns to-cloudflare <directory>` to publish a tree to CloudFlare DNS.
Run `devp2p dns to-route53 <directory>` to publish a tree to Amazon Route53.
You can find more information about these commands in the [DNS Discovery Setup Guide][dns-tutorial].
### Discovery v4 Utilities
The `devp2p discv4 ...` command family deals with the [Node Discovery v4][discv4]
protocol.
Run `devp2p discv4 ping <enode/ENR>` to ping a node.
Run `devp2p discv4 resolve <enode/ENR>` to find the most recent node record of a node in
the DHT.
Run `devp2p discv4 crawl <nodes.json path>` to create or update a JSON node set.
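At the library level, the ping/resolve operations map onto the `p2p/discover` package. The sketch below is an approximation of what the CLI does under the hood, assuming the `discover.ListenV4` API of this release; the fallback-IP and NAT handling of the real command is omitted:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"os"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/discover"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// Target node given as enode:// URL or "enr:..." string.
	target, err := enode.Parse(enode.ValidSchemes, os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	// Ephemeral discovery v4 endpoint with an in-memory node database.
	key, _ := crypto.GenerateKey()
	db, _ := enode.OpenDB("")
	ln := enode.NewLocalNode(db, key)
	socket, err := net.ListenUDP("udp", &net.UDPAddr{})
	if err != nil {
		log.Fatal(err)
	}
	disc, err := discover.ListenV4(socket, ln, discover.Config{PrivateKey: key})
	if err != nil {
		log.Fatal(err)
	}
	defer disc.Close()

	// Equivalent to `devp2p discv4 ping` and `devp2p discv4 resolve`.
	if err := disc.Ping(target); err != nil {
		log.Fatal("ping failed: ", err)
	}
	fmt.Println(disc.Resolve(target))
}
```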
### Discovery v5 Utilities
The `devp2p discv5 ...` command family deals with the [Node Discovery v5][discv5]
protocol. This protocol is currently under active development.
Run `devp2p discv5 ping <ENR>` to ping a node.
Run `devp2p discv5 resolve <ENR>` to find the most recent node record of a node in
the discv5 DHT.
Run `devp2p discv5 listen` to run a Discovery v5 node.
Run `devp2p discv5 crawl <nodes.json path>` to create or update a JSON node set containing
discv5 nodes.
### Discovery Test Suites
The devp2p command also contains interactive test suites for Discovery v4 and Discovery
v5.
To run these tests against your implementation, you need to set up a networking
environment where two separate UDP listening addresses are available on the same machine.
The two listening addresses must also be routed such that they are able to reach the node
you want to test.
For example, if you want to run the test on your local host, and the node under test is
also on the local host, you need to assign two IP addresses (or a larger range) to your
loopback interface. On macOS, this can be done by executing the following command:
sudo ifconfig lo0 add 127.0.0.2
You can now run either test suite as follows: Start the node under test first, ensuring
that it won't talk to the Internet (i.e. disable bootstrapping). An easy way to prevent
unintended connections to the global DHT is to listen on `127.0.0.1`.
Now get the ENR of your node and store it in the `NODE` environment variable.
Start the test by running `devp2p discv5 test -listen1 127.0.0.1 -listen2 127.0.0.2 $NODE`.
### Eth Protocol Test Suite
The Eth Protocol test suite is a conformance test suite for the [eth protocol][eth].
To run the eth protocol test suite against your implementation, the node needs to be initialized as follows:
1. initialize the geth node with the `genesis.json` file contained in the `testdata` directory
2. import the `halfchain.rlp` file in the `testdata` directory
3. run geth with the following flags:
```
geth --datadir <datadir> --nodiscover --nat=none --networkid 19763 --verbosity 5
```
Then, run the following command, replacing `<enode ID>` with the enode of the geth node:
```
devp2p rlpx eth-test <enode ID> cmd/devp2p/internal/ethtest/testdata/fullchain.rlp cmd/devp2p/internal/ethtest/testdata/genesis.json
```
[eth]: https://github.com/ethereum/devp2p/blob/master/caps/eth.md
[dns-tutorial]: https://geth.ethereum.org/docs/developers/dns-discovery-setup
[discv4]: https://github.com/ethereum/devp2p/tree/master/discv4.md
[discv5]: https://github.com/ethereum/devp2p/tree/master/discv5/discv5.md

View File

@@ -19,14 +19,12 @@ package main
import (
"fmt"
"net"
"os"
"strings"
"time"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v4test"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/params"
@@ -82,7 +80,13 @@ var (
Name: "test",
Usage: "Runs tests against a node",
Action: discv4Test,
Flags: []cli.Flag{remoteEnodeFlag, testPatternFlag, testListen1Flag, testListen2Flag},
Flags: []cli.Flag{
remoteEnodeFlag,
testPatternFlag,
testTAPFlag,
testListen1Flag,
testListen2Flag,
},
}
)
@@ -113,20 +117,6 @@ var (
Usage: "Enode of the remote node under test",
EnvVar: "REMOTE_ENODE",
}
testPatternFlag = cli.StringFlag{
Name: "run",
Usage: "Pattern of test suite(s) to run",
}
testListen1Flag = cli.StringFlag{
Name: "listen1",
Usage: "IP address of the first tester",
Value: v4test.Listen1,
}
testListen2Flag = cli.StringFlag{
Name: "listen2",
Usage: "IP address of the second tester",
Value: v4test.Listen2,
}
)
func discv4Ping(ctx *cli.Context) error {
@@ -213,6 +203,7 @@ func discv4Crawl(ctx *cli.Context) error {
return nil
}
// discv4Test runs the protocol test suite.
func discv4Test(ctx *cli.Context) error {
// Configure test package globals.
if !ctx.IsSet(remoteEnodeFlag.Name) {
@@ -221,18 +212,7 @@ func discv4Test(ctx *cli.Context) error {
v4test.Remote = ctx.String(remoteEnodeFlag.Name)
v4test.Listen1 = ctx.String(testListen1Flag.Name)
v4test.Listen2 = ctx.String(testListen2Flag.Name)
// Filter and run test cases.
tests := v4test.AllTests
if ctx.IsSet(testPatternFlag.Name) {
tests = utesting.MatchTests(tests, ctx.String(testPatternFlag.Name))
}
results := utesting.RunTests(tests, os.Stdout)
if fails := utesting.CountFailures(results); fails > 0 {
return fmt.Errorf("%v/%v tests passed.", len(tests)-fails, len(tests))
}
fmt.Printf("%v/%v passed\n", len(tests), len(tests))
return nil
return runTests(ctx, v4test.AllTests)
}
// startV4 starts an ephemeral discovery V4 node.
@@ -286,7 +266,11 @@ func listen(ln *enode.LocalNode, addr string) *net.UDPConn {
}
usocket := socket.(*net.UDPConn)
uaddr := socket.LocalAddr().(*net.UDPAddr)
ln.SetFallbackIP(net.IP{127, 0, 0, 1})
if uaddr.IP.IsUnspecified() {
ln.SetFallbackIP(net.IP{127, 0, 0, 1})
} else {
ln.SetFallbackIP(uaddr.IP)
}
ln.SetFallbackUDP(uaddr.Port)
return usocket
}
@@ -294,7 +278,11 @@ func listen(ln *enode.LocalNode, addr string) *net.UDPConn {
func parseBootnodes(ctx *cli.Context) ([]*enode.Node, error) {
s := params.RinkebyBootnodes
if ctx.IsSet(bootnodesFlag.Name) {
s = strings.Split(ctx.String(bootnodesFlag.Name), ",")
input := ctx.String(bootnodesFlag.Name)
if input == "" {
return nil, nil
}
s = strings.Split(input, ",")
}
nodes := make([]*enode.Node, len(s))
var err error

View File

@@ -20,6 +20,7 @@ import (
"fmt"
"time"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v5test"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/p2p/discover"
"gopkg.in/urfave/cli.v1"
@@ -33,6 +34,7 @@ var (
discv5PingCommand,
discv5ResolveCommand,
discv5CrawlCommand,
discv5TestCommand,
discv5ListenCommand,
},
}
@@ -53,6 +55,17 @@ var (
Action: discv5Crawl,
Flags: []cli.Flag{bootnodesFlag, crawlTimeoutFlag},
}
discv5TestCommand = cli.Command{
Name: "test",
Usage: "Runs protocol tests against a node",
Action: discv5Test,
Flags: []cli.Flag{
testPatternFlag,
testTAPFlag,
testListen1Flag,
testListen2Flag,
},
}
discv5ListenCommand = cli.Command{
Name: "listen",
Usage: "Runs a node",
@@ -103,6 +116,16 @@ func discv5Crawl(ctx *cli.Context) error {
return nil
}
// discv5Test runs the protocol test suite.
func discv5Test(ctx *cli.Context) error {
suite := &v5test.Suite{
Dest: getNodeArg(ctx),
Listen1: ctx.String(testListen1Flag.Name),
Listen2: ctx.String(testListen2Flag.Name),
}
return runTests(ctx, suite.AllTests())
}
func discv5Listen(ctx *cli.Context) error {
disc := startV5(ctx)
defer disc.Close()

View File

@@ -30,7 +30,7 @@ import (
"github.com/ethereum/go-ethereum/console/prompt"
"github.com/ethereum/go-ethereum/p2p/dnsdisc"
"github.com/ethereum/go-ethereum/p2p/enode"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
)
var (

View File

@@ -21,6 +21,7 @@ import (
"encoding/base64"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"net"
"os"
@@ -69,22 +70,30 @@ func enrdump(ctx *cli.Context) error {
if err != nil {
return fmt.Errorf("INVALID: %v", err)
}
fmt.Print(dumpRecord(r))
dumpRecord(os.Stdout, r)
return nil
}
// dumpRecord creates a human-readable description of the given node record.
func dumpRecord(r *enr.Record) string {
out := new(bytes.Buffer)
if n, err := enode.New(enode.ValidSchemes, r); err != nil {
func dumpRecord(out io.Writer, r *enr.Record) {
n, err := enode.New(enode.ValidSchemes, r)
if err != nil {
fmt.Fprintf(out, "INVALID: %v\n", err)
} else {
fmt.Fprintf(out, "Node ID: %v\n", n.ID())
dumpNodeURL(out, n)
}
kv := r.AppendElements(nil)[1:]
fmt.Fprintf(out, "Record has sequence number %d and %d key/value pairs.\n", r.Seq(), len(kv)/2)
fmt.Fprint(out, dumpRecordKV(kv, 2))
return out.String()
}
func dumpNodeURL(out io.Writer, n *enode.Node) {
var key enode.Secp256k1
if n.Load(&key) != nil {
return // no secp256k1 public key
}
fmt.Fprintf(out, "URLv4: %s\n", n.URLv4())
}
func dumpRecordKV(kv []interface{}, indent int) string {

View File

@@ -0,0 +1,167 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"compress/gzip"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"math/big"
"os"
"strings"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/forkid"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
)
type Chain struct {
blocks []*types.Block
chainConfig *params.ChainConfig
}
func (c *Chain) WriteTo(writer io.Writer) error {
for _, block := range c.blocks {
if err := rlp.Encode(writer, block); err != nil {
return err
}
}
return nil
}
// Len returns the length of the chain.
func (c *Chain) Len() int {
return len(c.blocks)
}
// TD calculates the total difficulty of the chain up to the given block height.
func (c *Chain) TD(height int) *big.Int { // TODO later on change scheme so that the height is included in range
sum := big.NewInt(0)
for _, block := range c.blocks[:height] {
sum.Add(sum, block.Difficulty())
}
return sum
}
// ForkID gets the fork id of the chain.
func (c *Chain) ForkID() forkid.ID {
return forkid.NewID(c.chainConfig, c.blocks[0].Hash(), uint64(c.Len()))
}
// Shorten returns a copy of the chain truncated to the desired height.
func (c *Chain) Shorten(height int) *Chain {
blocks := make([]*types.Block, height)
copy(blocks, c.blocks[:height])
config := *c.chainConfig
return &Chain{
blocks: blocks,
chainConfig: &config,
}
}
// Head returns the chain head.
func (c *Chain) Head() *types.Block {
return c.blocks[c.Len()-1]
}
func (c *Chain) GetHeaders(req GetBlockHeaders) (BlockHeaders, error) {
if req.Amount < 1 {
return nil, fmt.Errorf("no block headers requested")
}
headers := make(BlockHeaders, req.Amount)
var blockNumber uint64
// range over blocks to check if our chain has the requested header
for _, block := range c.blocks {
if block.Hash() == req.Origin.Hash || block.Number().Uint64() == req.Origin.Number {
headers[0] = block.Header()
blockNumber = block.Number().Uint64()
}
}
if headers[0] == nil {
return nil, fmt.Errorf("no headers found for given origin number %v, hash %v", req.Origin.Number, req.Origin.Hash)
}
if req.Reverse {
for i := 1; i < int(req.Amount); i++ {
blockNumber -= (1 - req.Skip)
headers[i] = c.blocks[blockNumber].Header()
}
return headers, nil
}
for i := 1; i < int(req.Amount); i++ {
blockNumber += (1 + req.Skip)
headers[i] = c.blocks[blockNumber].Header()
}
return headers, nil
}
// loadChain takes the given chain.rlp file, and decodes and returns
// the blocks from the file.
func loadChain(chainfile string, genesis string) (*Chain, error) {
chainConfig, err := ioutil.ReadFile(genesis)
if err != nil {
return nil, err
}
var gen core.Genesis
if err := json.Unmarshal(chainConfig, &gen); err != nil {
return nil, err
}
gblock := gen.ToBlock(nil)
// Load chain.rlp.
fh, err := os.Open(chainfile)
if err != nil {
return nil, err
}
defer fh.Close()
var reader io.Reader = fh
if strings.HasSuffix(chainfile, ".gz") {
if reader, err = gzip.NewReader(reader); err != nil {
return nil, err
}
}
stream := rlp.NewStream(reader, 0)
var blocks = make([]*types.Block, 1)
blocks[0] = gblock
for i := 0; ; i++ {
var b types.Block
if err := stream.Decode(&b); err == io.EOF {
break
} else if err != nil {
return nil, fmt.Errorf("at block index %d: %v", i, err)
}
if b.NumberU64() != uint64(i+1) {
return nil, fmt.Errorf("block at index %d has wrong number %d", i, b.NumberU64())
}
blocks = append(blocks, &b)
}
c := &Chain{blocks: blocks, chainConfig: gen.Config}
return c, nil
}

View File

@@ -0,0 +1,150 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"path/filepath"
"strconv"
"testing"
"github.com/ethereum/go-ethereum/p2p"
"github.com/stretchr/testify/assert"
)
// TestEthProtocolNegotiation tests whether the test suite
// can negotiate the highest eth protocol version from a set of advertised capabilities
func TestEthProtocolNegotiation(t *testing.T) {
var tests = []struct {
conn *Conn
caps []p2p.Cap
expected uint32
}{
{
conn: &Conn{},
caps: []p2p.Cap{
{Name: "eth", Version: 63},
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
expected: uint32(65),
},
{
conn: &Conn{},
caps: []p2p.Cap{
{Name: "eth", Version: 0},
{Name: "eth", Version: 89},
{Name: "eth", Version: 65},
},
expected: uint32(65),
},
{
conn: &Conn{},
caps: []p2p.Cap{
{Name: "eth", Version: 63},
{Name: "eth", Version: 64},
{Name: "wrongProto", Version: 65},
},
expected: uint32(64),
},
}
for i, tt := range tests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
tt.conn.negotiateEthProtocol(tt.caps)
assert.Equal(t, tt.expected, uint32(tt.conn.ethProtocolVersion))
})
}
}
// TestChain_GetHeaders tests whether the test suite can correctly
// respond to a GetBlockHeaders request from a node.
func TestChain_GetHeaders(t *testing.T) {
chainFile, err := filepath.Abs("./testdata/chain.rlp")
if err != nil {
t.Fatal(err)
}
genesisFile, err := filepath.Abs("./testdata/genesis.json")
if err != nil {
t.Fatal(err)
}
chain, err := loadChain(chainFile, genesisFile)
if err != nil {
t.Fatal(err)
}
var tests = []struct {
req GetBlockHeaders
expected BlockHeaders
}{
{
req: GetBlockHeaders{
Origin: hashOrNumber{
Number: uint64(2),
},
Amount: uint64(5),
Skip: 1,
Reverse: false,
},
expected: BlockHeaders{
chain.blocks[2].Header(),
chain.blocks[4].Header(),
chain.blocks[6].Header(),
chain.blocks[8].Header(),
chain.blocks[10].Header(),
},
},
{
req: GetBlockHeaders{
Origin: hashOrNumber{
Number: uint64(chain.Len() - 1),
},
Amount: uint64(3),
Skip: 0,
Reverse: true,
},
expected: BlockHeaders{
chain.blocks[chain.Len()-1].Header(),
chain.blocks[chain.Len()-2].Header(),
chain.blocks[chain.Len()-3].Header(),
},
},
{
req: GetBlockHeaders{
Origin: hashOrNumber{
Hash: chain.Head().Hash(),
},
Amount: uint64(1),
Skip: 0,
Reverse: false,
},
expected: BlockHeaders{
chain.Head().Header(),
},
},
}
for i, tt := range tests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
headers, err := chain.GetHeaders(tt.req)
if err != nil {
t.Fatal(err)
}
assert.Equal(t, headers, tt.expected)
})
}
}

View File

@@ -0,0 +1,80 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"crypto/rand"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
)
// largeNumber returns a very large big.Int.
func largeNumber(megabytes int) *big.Int {
buf := make([]byte, megabytes*1024*1024)
rand.Read(buf)
bigint := new(big.Int)
bigint.SetBytes(buf)
return bigint
}
// largeBuffer returns a very large buffer.
func largeBuffer(megabytes int) []byte {
buf := make([]byte, megabytes*1024*1024)
rand.Read(buf)
return buf
}
// largeString returns a very large string.
func largeString(megabytes int) string {
buf := make([]byte, megabytes*1024*1024)
rand.Read(buf)
return hexutil.Encode(buf)
}
func largeBlock() *types.Block {
return types.NewBlockWithHeader(largeHeader())
}
// Returns a random hash
func randHash() common.Hash {
var h common.Hash
rand.Read(h[:])
return h
}
func largeHeader() *types.Header {
return &types.Header{
MixDigest: randHash(),
ReceiptHash: randHash(),
TxHash: randHash(),
Nonce: types.BlockNonce{},
Extra: []byte{},
Bloom: types.Bloom{},
GasUsed: 0,
Coinbase: common.Address{},
GasLimit: 0,
UncleHash: randHash(),
Time: 1337,
ParentHash: randHash(),
Root: randHash(),
Number: largeNumber(2),
Difficulty: largeNumber(2),
}
}

View File

@@ -0,0 +1,426 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"fmt"
"net"
"time"
"github.com/davecgh/go-spew/spew"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/rlpx"
"github.com/stretchr/testify/assert"
)
var pretty = spew.ConfigState{
Indent: " ",
DisableCapacities: true,
DisablePointerAddresses: true,
SortKeys: true,
}
var timeout = 20 * time.Second
// Suite represents a structure used to test the eth
// protocol of a node.
type Suite struct {
Dest *enode.Node
chain *Chain
fullChain *Chain
}
// NewSuite creates and returns a new eth-test suite that can
// be used to test the given node against the given blockchain
// data.
func NewSuite(dest *enode.Node, chainfile string, genesisfile string) *Suite {
chain, err := loadChain(chainfile, genesisfile)
if err != nil {
panic(err)
}
return &Suite{
Dest: dest,
chain: chain.Shorten(1000),
fullChain: chain,
}
}
func (s *Suite) AllTests() []utesting.Test {
return []utesting.Test{
{Name: "Status", Fn: s.TestStatus},
{Name: "GetBlockHeaders", Fn: s.TestGetBlockHeaders},
{Name: "Broadcast", Fn: s.TestBroadcast},
{Name: "GetBlockBodies", Fn: s.TestGetBlockBodies},
{Name: "TestLargeAnnounce", Fn: s.TestLargeAnnounce},
{Name: "TestMaliciousHandshake", Fn: s.TestMaliciousHandshake},
{Name: "TestMaliciousStatus", Fn: s.TestMaliciousStatus},
{Name: "TestTransactions", Fn: s.TestTransaction},
{Name: "TestMaliciousTransactions", Fn: s.TestMaliciousTx},
}
}
// TestStatus attempts to connect to the given node and exchange
// a status message with it, and then check to make sure
// the chain head is correct.
func (s *Suite) TestStatus(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
// get protoHandshake
conn.handshake(t)
// get status
switch msg := conn.statusExchange(t, s.chain, nil).(type) {
case *Status:
t.Logf("got status message: %s", pretty.Sdump(msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestMaliciousStatus sends a status message with a large total difficulty.
func (s *Suite) TestMaliciousStatus(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
// get protoHandshake
conn.handshake(t)
status := &Status{
ProtocolVersion: uint32(conn.ethProtocolVersion),
NetworkID: s.chain.chainConfig.ChainID.Uint64(),
TD: largeNumber(2),
Head: s.chain.blocks[s.chain.Len()-1].Hash(),
Genesis: s.chain.blocks[0].Hash(),
ForkID: s.chain.ForkID(),
}
// get status
switch msg := conn.statusExchange(t, s.chain, status).(type) {
case *Status:
t.Logf("%+v\n", msg)
default:
t.Fatalf("expected status, got: %#v ", msg)
}
// wait for disconnect
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
return
default:
t.Fatalf("expected disconnect, got: %s", pretty.Sdump(msg))
}
}
// TestGetBlockHeaders tests whether the given node can respond to
// a `GetBlockHeaders` request and that the response is accurate.
func (s *Suite) TestGetBlockHeaders(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
conn.handshake(t)
conn.statusExchange(t, s.chain, nil)
// get block headers
req := &GetBlockHeaders{
Origin: hashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 2,
Skip: 1,
Reverse: false,
}
if err := conn.Write(req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *BlockHeaders:
headers := msg
for _, header := range *headers {
num := header.Number.Uint64()
t.Logf("received header (%d): %s", num, pretty.Sdump(header))
assert.Equal(t, s.chain.blocks[int(num)].Header(), header)
}
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestGetBlockBodies tests whether the given node can respond to
// a `GetBlockBodies` request and that the response is accurate.
func (s *Suite) TestGetBlockBodies(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
conn.handshake(t)
conn.statusExchange(t, s.chain, nil)
// create block bodies request
req := &GetBlockBodies{s.chain.blocks[54].Hash(), s.chain.blocks[75].Hash()}
if err := conn.Write(req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *BlockBodies:
t.Logf("received %d block bodies", len(*msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestBroadcast tests whether a block announcement is correctly
// propagated to the given node's peer(s).
func (s *Suite) TestBroadcast(t *utesting.T) {
sendConn, receiveConn := s.setupConnection(t), s.setupConnection(t)
nextBlock := len(s.chain.blocks)
blockAnnouncement := &NewBlock{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
}
s.testAnnounce(t, sendConn, receiveConn, blockAnnouncement)
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock(s.chain.Head()); err != nil {
t.Fatal(err)
}
}
// TestMaliciousHandshake tries to send malicious data during the handshake.
func (s *Suite) TestMaliciousHandshake(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
// write hello to client
pub0 := crypto.FromECDSAPub(&conn.ourKey.PublicKey)[1:]
handshakes := []*Hello{
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 64},
},
ID: pub0,
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: append(pub0, byte(0)),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: append(pub0, pub0...),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: largeBuffer(2),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 64},
},
ID: largeBuffer(2),
},
}
for i, handshake := range handshakes {
t.Logf("Testing malicious handshake %v\n", i)
// Init the handshake
if err := conn.Write(handshake); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// check that the peer disconnected
timeout := 20 * time.Second
// Discard one hello
for i := 0; i < 2; i++ {
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
case *Hello:
// Hellos are sent concurrently, so ignore them
continue
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// Dial for the next round
conn, err = s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
}
}
// TestLargeAnnounce tests the announcement mechanism with a large block.
func (s *Suite) TestLargeAnnounce(t *utesting.T) {
nextBlock := len(s.chain.blocks)
blocks := []*NewBlock{
{
Block: largeBlock(),
TD: s.fullChain.TD(nextBlock + 1),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: largeNumber(2),
},
{
Block: largeBlock(),
TD: largeNumber(2),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
},
}
for i, blockAnnouncement := range blocks[0:3] {
t.Logf("Testing malicious announcement: %v\n", i)
sendConn := s.setupConnection(t)
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Invalid announcement, check that peer disconnected
switch msg := sendConn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
break
default:
t.Fatalf("unexpected: %s wanted disconnect", pretty.Sdump(msg))
}
}
// Test the last block as a valid block
sendConn := s.setupConnection(t)
receiveConn := s.setupConnection(t)
s.testAnnounce(t, sendConn, receiveConn, blocks[3])
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock(s.fullChain.blocks[nextBlock]); err != nil {
t.Fatal(err)
}
}
func (s *Suite) testAnnounce(t *utesting.T, sendConn, receiveConn *Conn, blockAnnouncement *NewBlock) {
// Announce the block.
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
s.waitAnnounce(t, receiveConn, blockAnnouncement)
}
func (s *Suite) waitAnnounce(t *utesting.T, conn *Conn, blockAnnouncement *NewBlock) {
timeout := 20 * time.Second
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *NewBlock:
t.Logf("received NewBlock message: %s", pretty.Sdump(msg.Block))
assert.Equal(t,
blockAnnouncement.Block.Header(), msg.Block.Header(),
"wrong block header in announcement",
)
assert.Equal(t,
blockAnnouncement.TD, msg.TD,
"wrong TD in announcement",
)
case *NewBlockHashes:
hashes := *msg
t.Logf("received NewBlockHashes message: %s", pretty.Sdump(hashes))
assert.Equal(t,
blockAnnouncement.Block.Hash(), hashes[0].Hash,
"wrong block hash in announcement",
)
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
func (s *Suite) setupConnection(t *utesting.T) *Conn {
// create conn
sendConn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
sendConn.handshake(t)
sendConn.statusExchange(t, s.chain, nil)
return sendConn
}
// dial attempts to dial the given node and perform a handshake,
// returning the created Conn if successful.
func (s *Suite) dial() (*Conn, error) {
var conn Conn
fd, err := net.Dial("tcp", fmt.Sprintf("%v:%d", s.Dest.IP(), s.Dest.TCP()))
if err != nil {
return nil, err
}
conn.Conn = rlpx.NewConn(fd, s.Dest.Pubkey())
// do encHandshake
conn.ourKey, _ = crypto.GenerateKey()
_, err = conn.Handshake(conn.ourKey)
if err != nil {
return nil, err
}
return &conn, nil
}
func (s *Suite) TestTransaction(t *utesting.T) {
tests := []*types.Transaction{
getNextTxFromChain(t, s),
unknownTx(t, s),
}
for i, tx := range tests {
t.Logf("Testing tx propagation: %v\n", i)
sendSuccessfulTx(t, s, tx)
}
}
func (s *Suite) TestMaliciousTx(t *utesting.T) {
tests := []*types.Transaction{
getOldTxFromChain(t, s),
invalidNonceTx(t, s),
hugeAmount(t, s),
hugeGasPrice(t, s),
hugeData(t, s),
}
for i, tx := range tests {
t.Logf("Testing malicious tx propagation: %v\n", i)
sendFailingTx(t, s, tx)
}
}
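For orientation, this is roughly how the suite above is driven by the devp2p command (via the `runTests` helper shown earlier). Note that `ethtest` and `utesting` live under `internal/` and are therefore only importable from within go-ethereum itself; the snippet is a conceptual sketch with placeholder paths, not a standalone program you can build out of tree.

```go
package main

import (
	"fmt"
	"os"

	"github.com/ethereum/go-ethereum/cmd/devp2p/internal/ethtest"
	"github.com/ethereum/go-ethereum/internal/utesting"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// Enode of the geth node under test, passed on the command line.
	dest, err := enode.Parse(enode.ValidSchemes, os.Args[1])
	if err != nil {
		panic(err)
	}
	// Chain and genesis files from cmd/devp2p/internal/ethtest/testdata.
	suite := ethtest.NewSuite(dest, "testdata/fullchain.rlp", "testdata/genesis.json")
	results := utesting.RunTests(suite.AllTests(), os.Stdout)
	if fails := utesting.CountFailures(results); fails > 0 {
		fmt.Printf("%d/%d tests failed\n", fails, len(results))
		os.Exit(1)
	}
}
```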

Binary file not shown.

View File

@@ -0,0 +1,26 @@
{
"config": {
"chainId": 19763,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"ethash": {}
},
"nonce": "0xdeadbeefdeadbeef",
"timestamp": "0x0",
"extraData": "0x0000000000000000000000000000000000000000000000000000000000000000",
"gasLimit": "0x80000000",
"difficulty": "0x20000",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
"alloc": {
"71562b71999873db5b286df957af199ec94617f7": {
"balance": "0xffffffffffffffffffffffffff"
}
},
"number": "0x0",
"gasUsed": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}

Binary file not shown.

View File

@@ -0,0 +1,180 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
)
//var faucetAddr = common.HexToAddress("0x71562b71999873DB5b286dF957af199Ec94617F7")
var faucetKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
func sendSuccessfulTx(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn := s.setupConnection(t)
t.Logf("sending tx: %v %v %v\n", tx.Hash().String(), tx.GasPrice(), tx.Gas())
// Send the transaction
if err := sendConn.Write(Transactions([]*types.Transaction{tx})); err != nil {
t.Fatal(err)
}
time.Sleep(100 * time.Millisecond)
recvConn := s.setupConnection(t)
// Wait for the transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *Transactions:
recTxs := *msg
if len(recTxs) < 1 {
t.Fatalf("received transactions do not match the sent transaction: %v", recTxs)
}
if tx.Hash() != recTxs[len(recTxs)-1].Hash() {
t.Fatalf("received transactions do not match the sent transaction: got %v want %v", recTxs, tx)
}
case *NewPooledTransactionHashes:
txHashes := *msg
if len(txHashes) < 1 {
t.Fatalf("received transactions do not match the sent transaction: %v", txHashes)
}
if tx.Hash() != txHashes[len(txHashes)-1] {
t.Fatalf("wrong announcement received, wanted %v got %v", tx, txHashes)
}
default:
t.Fatalf("unexpected message in sendSuccessfulTx: %s", pretty.Sdump(msg))
}
}
func sendFailingTx(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn, recvConn := s.setupConnection(t), s.setupConnection(t)
// Wait for a transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *NewPooledTransactionHashes:
break
default:
t.Logf("unexpected message, logging: %v", pretty.Sdump(msg))
}
// Send the transaction
if err := sendConn.Write(Transactions([]*types.Transaction{tx})); err != nil {
t.Fatal(err)
}
// Wait for another transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *Transactions:
t.Fatalf("Received unexpected transaction announcement: %v", msg)
case *NewPooledTransactionHashes:
t.Fatalf("Received unexpected pooledTx announcement: %v", msg)
case *Error:
// Transaction should not be announced -> wait for timeout
return
default:
t.Fatalf("unexpected message in sendFailingTx: %s", pretty.Sdump(msg))
}
}
func unknownTx(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce()+1, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
}
func getNextTxFromChain(t *utesting.T, s *Suite) *types.Transaction {
// Get a new transaction
var tx *types.Transaction
for _, blocks := range s.fullChain.blocks[s.chain.Len():] {
txs := blocks.Transactions()
if txs.Len() != 0 {
tx = txs[0]
break
}
}
if tx == nil {
t.Fatal("could not find transaction")
}
return tx
}
func getOldTxFromChain(t *utesting.T, s *Suite) *types.Transaction {
var tx *types.Transaction
for _, blocks := range s.fullChain.blocks[:s.chain.Len()-1] {
txs := blocks.Transactions()
if txs.Len() != 0 {
tx = txs[0]
break
}
}
if tx == nil {
t.Fatal("could not find transaction")
}
return tx
}
func invalidNonceTx(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce()-2, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
}
func hugeAmount(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
amount := largeNumber(2)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, amount, tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
}
func hugeGasPrice(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
gasPrice := largeNumber(2)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), gasPrice, tx.Data())
return signWithFaucet(t, txNew)
}
func hugeData(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), tx.GasPrice(), largeBuffer(2))
return signWithFaucet(t, txNew)
}
func signWithFaucet(t *utesting.T, tx *types.Transaction) *types.Transaction {
signer := types.HomesteadSigner{}
signedTx, err := types.SignTx(tx, signer, faucetKey)
if err != nil {
t.Fatalf("could not sign tx: %v\n", err)
}
return signedTx
}

View File

@@ -0,0 +1,395 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"crypto/ecdsa"
"fmt"
"io"
"math/big"
"reflect"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/forkid"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/rlpx"
"github.com/ethereum/go-ethereum/rlp"
)
type Message interface {
Code() int
}
type Error struct {
err error
}
func (e *Error) Unwrap() error { return e.err }
func (e *Error) Error() string { return e.err.Error() }
func (e *Error) Code() int { return -1 }
func (e *Error) String() string { return e.Error() }
func errorf(format string, args ...interface{}) *Error {
return &Error{fmt.Errorf(format, args...)}
}
// Hello is the RLP structure of the protocol handshake.
type Hello struct {
Version uint64
Name string
Caps []p2p.Cap
ListenPort uint64
ID []byte // secp256k1 public key
// Ignore additional fields (for forward compatibility).
Rest []rlp.RawValue `rlp:"tail"`
}
func (h Hello) Code() int { return 0x00 }
// Disconnect is the RLP structure for a disconnect message.
type Disconnect struct {
Reason p2p.DiscReason
}
func (d Disconnect) Code() int { return 0x01 }
type Ping struct{}
func (p Ping) Code() int { return 0x02 }
type Pong struct{}
func (p Pong) Code() int { return 0x03 }
// Status is the network packet for the status message for eth/64 and later.
type Status struct {
ProtocolVersion uint32
NetworkID uint64
TD *big.Int
Head common.Hash
Genesis common.Hash
ForkID forkid.ID
}
func (s Status) Code() int { return 16 }
// NewBlockHashes is the network packet for the block announcements.
type NewBlockHashes []struct {
Hash common.Hash // Hash of one particular block being announced
Number uint64 // Number of one particular block being announced
}
func (nbh NewBlockHashes) Code() int { return 17 }
type Transactions []*types.Transaction
func (t Transactions) Code() int { return 18 }
// GetBlockHeaders represents a block header query.
type GetBlockHeaders struct {
Origin hashOrNumber // Block from which to retrieve headers
Amount uint64 // Maximum number of headers to retrieve
Skip uint64 // Blocks to skip between consecutive headers
Reverse bool // Query direction (false = rising towards latest, true = falling towards genesis)
}
func (g GetBlockHeaders) Code() int { return 19 }
type BlockHeaders []*types.Header
func (bh BlockHeaders) Code() int { return 20 }
// GetBlockBodies represents a GetBlockBodies request
type GetBlockBodies []common.Hash
func (gbb GetBlockBodies) Code() int { return 21 }
// BlockBodies is the network packet for block content distribution.
type BlockBodies []*types.Body
func (bb BlockBodies) Code() int { return 22 }
// NewBlock is the network packet for the block propagation message.
type NewBlock struct {
Block *types.Block
TD *big.Int
}
func (nb NewBlock) Code() int { return 23 }
// NewPooledTransactionHashes is the network packet for the tx hash propagation message.
type NewPooledTransactionHashes [][32]byte
func (nb NewPooledTransactionHashes) Code() int { return 24 }
// HashOrNumber is a combined field for specifying an origin block.
type hashOrNumber struct {
Hash common.Hash // Block hash from which to retrieve headers (excludes Number)
Number uint64 // Block number from which to retrieve headers (excludes Hash)
}
// EncodeRLP is a specialized encoder for hashOrNumber to encode only one of the
// two contained union fields.
func (hn *hashOrNumber) EncodeRLP(w io.Writer) error {
if hn.Hash == (common.Hash{}) {
return rlp.Encode(w, hn.Number)
}
if hn.Number != 0 {
return fmt.Errorf("both origin hash (%x) and number (%d) provided", hn.Hash, hn.Number)
}
return rlp.Encode(w, hn.Hash)
}
// DecodeRLP is a specialized decoder for hashOrNumber to decode the contents
// into either a block hash or a block number.
func (hn *hashOrNumber) DecodeRLP(s *rlp.Stream) error {
_, size, _ := s.Kind()
origin, err := s.Raw()
if err == nil {
switch {
case size == 32:
err = rlp.DecodeBytes(origin, &hn.Hash)
case size <= 8:
err = rlp.DecodeBytes(origin, &hn.Number)
default:
err = fmt.Errorf("invalid input size %d for origin", size)
}
}
return err
}
// Conn represents an individual connection with a peer
type Conn struct {
*rlpx.Conn
ourKey *ecdsa.PrivateKey
ethProtocolVersion uint
}
func (c *Conn) Read() Message {
code, rawData, _, err := c.Conn.Read()
if err != nil {
return errorf("could not read from connection: %v", err)
}
var msg Message
switch int(code) {
case (Hello{}).Code():
msg = new(Hello)
case (Ping{}).Code():
msg = new(Ping)
case (Pong{}).Code():
msg = new(Pong)
case (Disconnect{}).Code():
msg = new(Disconnect)
case (Status{}).Code():
msg = new(Status)
case (GetBlockHeaders{}).Code():
msg = new(GetBlockHeaders)
case (BlockHeaders{}).Code():
msg = new(BlockHeaders)
case (GetBlockBodies{}).Code():
msg = new(GetBlockBodies)
case (BlockBodies{}).Code():
msg = new(BlockBodies)
case (NewBlock{}).Code():
msg = new(NewBlock)
case (NewBlockHashes{}).Code():
msg = new(NewBlockHashes)
case (Transactions{}).Code():
msg = new(Transactions)
case (NewPooledTransactionHashes{}).Code():
msg = new(NewPooledTransactionHashes)
default:
return errorf("invalid message code: %d", code)
}
if err := rlp.DecodeBytes(rawData, msg); err != nil {
return errorf("could not rlp decode message: %v", err)
}
return msg
}
// ReadAndServe serves GetBlockHeaders requests while waiting
// on another message from the node.
func (c *Conn) ReadAndServe(chain *Chain, timeout time.Duration) Message {
start := time.Now()
for time.Since(start) < timeout {
timeout := time.Now().Add(10 * time.Second)
c.SetReadDeadline(timeout)
switch msg := c.Read().(type) {
case *Ping:
c.Write(&Pong{})
case *GetBlockHeaders:
req := *msg
headers, err := chain.GetHeaders(req)
if err != nil {
return errorf("could not get headers for inbound header request: %v", err)
}
if err := c.Write(headers); err != nil {
return errorf("could not write to connection: %v", err)
}
default:
return msg
}
}
return errorf("no message received within %v", timeout)
}
func (c *Conn) Write(msg Message) error {
payload, err := rlp.EncodeToBytes(msg)
if err != nil {
return err
}
_, err = c.Conn.Write(uint64(msg.Code()), payload)
return err
}
// handshake checks to make sure a `HELLO` is received.
func (c *Conn) handshake(t *utesting.T) Message {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(10 * time.Second))
// write hello to client
pub0 := crypto.FromECDSAPub(&c.ourKey.PublicKey)[1:]
ourHandshake := &Hello{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: pub0,
}
if err := c.Write(ourHandshake); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// read hello from client
switch msg := c.Read().(type) {
case *Hello:
// set snappy if version is at least 5
if msg.Version >= 5 {
c.SetSnappy(true)
}
c.negotiateEthProtocol(msg.Caps)
if c.ethProtocolVersion == 0 {
t.Fatalf("unexpected eth protocol version")
}
return msg
default:
t.Fatalf("bad handshake: %#v", msg)
return nil
}
}
// negotiateEthProtocol sets the Conn's eth protocol version
// to highest advertised capability from peer
func (c *Conn) negotiateEthProtocol(caps []p2p.Cap) {
var highestEthVersion uint
for _, capability := range caps {
if capability.Name != "eth" {
continue
}
if capability.Version > highestEthVersion && capability.Version <= 65 {
highestEthVersion = capability.Version
}
}
c.ethProtocolVersion = highestEthVersion
}
// statusExchange performs a `Status` message exchange with the given
// node.
func (c *Conn) statusExchange(t *utesting.T, chain *Chain, status *Status) Message {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(20 * time.Second))
// read status message from client
var message Message
loop:
for {
switch msg := c.Read().(type) {
case *Status:
if msg.Head != chain.blocks[chain.Len()-1].Hash() {
t.Fatalf("wrong head block in status: %s", msg.Head.String())
}
if msg.TD.Cmp(chain.TD(chain.Len())) != 0 {
t.Fatalf("wrong TD in status: %v", msg.TD)
}
if !reflect.DeepEqual(msg.ForkID, chain.ForkID()) {
t.Fatalf("wrong fork ID in status: %v", msg.ForkID)
}
message = msg
break loop
case *Disconnect:
t.Fatalf("disconnect received: %v", msg.Reason)
case *Ping:
c.Write(&Pong{}) // TODO (renaynay): in the future, this should be an error
// (PINGs should not be a response upon fresh connection)
default:
t.Fatalf("bad status message: %s", pretty.Sdump(msg))
}
}
// make sure eth protocol version is set for negotiation
if c.ethProtocolVersion == 0 {
t.Fatalf("eth protocol version must be set in Conn")
}
if status == nil {
// write status message to client
status = &Status{
ProtocolVersion: uint32(c.ethProtocolVersion),
NetworkID: chain.chainConfig.ChainID.Uint64(),
TD: chain.TD(chain.Len()),
Head: chain.blocks[chain.Len()-1].Hash(),
Genesis: chain.blocks[0].Hash(),
ForkID: chain.ForkID(),
}
}
if err := c.Write(*status); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
return message
}
// waitForBlock waits for confirmation from the client that it has
// imported the given block.
func (c *Conn) waitForBlock(block *types.Block) error {
defer c.SetReadDeadline(time.Time{})
timeout := time.Now().Add(20 * time.Second)
c.SetReadDeadline(timeout)
for {
req := &GetBlockHeaders{Origin: hashOrNumber{Hash: block.Hash()}, Amount: 1}
if err := c.Write(req); err != nil {
return err
}
switch msg := c.Read().(type) {
case *BlockHeaders:
if len(*msg) > 0 {
return nil
}
time.Sleep(100 * time.Millisecond)
default:
return fmt.Errorf("invalid message: %s", pretty.Sdump(msg))
}
}
}

View File

@@ -0,0 +1,377 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v5test
import (
"bytes"
"net"
"sync"
"time"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p/discover/v5wire"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/netutil"
)
// Suite is the discv5 test suite.
type Suite struct {
Dest *enode.Node
Listen1, Listen2 string // listening addresses
}
func (s *Suite) listen1(log logger) (*conn, net.PacketConn) {
c := newConn(s.Dest, log)
l := c.listen(s.Listen1)
return c, l
}
func (s *Suite) listen2(log logger) (*conn, net.PacketConn, net.PacketConn) {
c := newConn(s.Dest, log)
l1, l2 := c.listen(s.Listen1), c.listen(s.Listen2)
return c, l1, l2
}
func (s *Suite) AllTests() []utesting.Test {
return []utesting.Test{
{Name: "Ping", Fn: s.TestPing},
{Name: "PingLargeRequestID", Fn: s.TestPingLargeRequestID},
{Name: "PingMultiIP", Fn: s.TestPingMultiIP},
{Name: "PingHandshakeInterrupted", Fn: s.TestPingHandshakeInterrupted},
{Name: "TalkRequest", Fn: s.TestTalkRequest},
{Name: "FindnodeZeroDistance", Fn: s.TestFindnodeZeroDistance},
{Name: "FindnodeResults", Fn: s.TestFindnodeResults},
}
}
// This test sends PING and expects a PONG response.
func (s *Suite) TestPing(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
switch resp := conn.reqresp(l1, ping).(type) {
case *v5wire.Pong:
checkPong(t, resp, ping, l1)
default:
t.Fatal("expected PONG, got", resp.Name())
}
}
func checkPong(t *utesting.T, pong *v5wire.Pong, ping *v5wire.Ping, c net.PacketConn) {
if !bytes.Equal(pong.ReqID, ping.ReqID) {
t.Fatalf("wrong request ID %x in PONG, want %x", pong.ReqID, ping.ReqID)
}
if !pong.ToIP.Equal(laddr(c).IP) {
t.Fatalf("wrong destination IP %v in PONG, want %v", pong.ToIP, laddr(c).IP)
}
if int(pong.ToPort) != laddr(c).Port {
t.Fatalf("wrong destination port %v in PONG, want %v", pong.ToPort, laddr(c).Port)
}
}
// This test sends PING with a 9-byte request ID, which isn't allowed by the spec.
// The remote node should not respond.
func (s *Suite) TestPingLargeRequestID(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
ping := &v5wire.Ping{ReqID: make([]byte, 9)}
switch resp := conn.reqresp(l1, ping).(type) {
case *v5wire.Pong:
t.Errorf("PONG response with unknown request ID %x", resp.ReqID)
case *readError:
if resp.err == v5wire.ErrInvalidReqID {
t.Error("response with oversized request ID")
} else if !netutil.IsTimeout(resp.err) {
t.Error(resp)
}
}
}
// In this test, a session is established from one IP as usual. The session is then reused
// on another IP, which shouldn't work. The remote node should respond with WHOAREYOU for
// the attempt from a different IP.
func (s *Suite) TestPingMultiIP(t *utesting.T) {
conn, l1, l2 := s.listen2(t)
defer conn.close()
// Create the session on l1.
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
resp := conn.reqresp(l1, ping)
if resp.Kind() != v5wire.PongMsg {
t.Fatal("expected PONG, got", resp)
}
checkPong(t, resp.(*v5wire.Pong), ping, l1)
// Send on l2. This reuses the session because there is only one codec.
ping2 := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l2, ping2, nil)
switch resp := conn.read(l2).(type) {
case *v5wire.Pong:
t.Fatalf("remote responded to PING from %v for session on IP %v", laddr(l2).IP, laddr(l1).IP)
case *v5wire.Whoareyou:
t.Logf("got WHOAREYOU for new session as expected")
resp.Node = s.Dest
conn.write(l2, ping2, resp)
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
// Catch the PONG on l2.
switch resp := conn.read(l2).(type) {
case *v5wire.Pong:
checkPong(t, resp, ping2, l2)
default:
t.Fatal("expected PONG, got", resp)
}
// Try on l1 again.
ping3 := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l1, ping3, nil)
switch resp := conn.read(l1).(type) {
case *v5wire.Pong:
t.Fatalf("remote responded to PING from %v for session on IP %v", laddr(l1).IP, laddr(l2).IP)
case *v5wire.Whoareyou:
t.Logf("got WHOAREYOU for new session as expected")
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
}
// This test starts a handshake, but doesn't finish it and sends a second ordinary message
// packet instead of a handshake message packet. The remote node should respond with
// another WHOAREYOU challenge for the second packet.
func (s *Suite) TestPingHandshakeInterrupted(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
// First PING triggers challenge.
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l1, ping, nil)
switch resp := conn.read(l1).(type) {
case *v5wire.Whoareyou:
t.Logf("got WHOAREYOU for PING")
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
// Send second PING.
ping2 := &v5wire.Ping{ReqID: conn.nextReqID()}
switch resp := conn.reqresp(l1, ping2).(type) {
case *v5wire.Pong:
checkPong(t, resp, ping2, l1)
default:
t.Fatal("expected PONG, got", resp)
}
}
// This test sends TALKREQ and expects an empty TALKRESP response.
func (s *Suite) TestTalkRequest(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
// Non-empty request ID.
id := conn.nextReqID()
resp := conn.reqresp(l1, &v5wire.TalkRequest{ReqID: id, Protocol: "test-protocol"})
switch resp := resp.(type) {
case *v5wire.TalkResponse:
if !bytes.Equal(resp.ReqID, id) {
t.Fatalf("wrong request ID %x in TALKRESP, want %x", resp.ReqID, id)
}
if len(resp.Message) > 0 {
t.Fatalf("non-empty message %x in TALKRESP", resp.Message)
}
default:
t.Fatal("expected TALKRESP, got", resp.Name())
}
// Empty request ID.
resp = conn.reqresp(l1, &v5wire.TalkRequest{Protocol: "test-protocol"})
switch resp := resp.(type) {
case *v5wire.TalkResponse:
if len(resp.ReqID) > 0 {
t.Fatalf("wrong request ID %x in TALKRESP, want empty byte array", resp.ReqID)
}
if len(resp.Message) > 0 {
t.Fatalf("non-empty message %x in TALKRESP", resp.Message)
}
default:
t.Fatal("expected TALKRESP, got", resp.Name())
}
}
// This test checks that the remote node returns itself for FINDNODE with distance zero.
func (s *Suite) TestFindnodeZeroDistance(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
nodes, err := conn.findnode(l1, []uint{0})
if err != nil {
t.Fatal(err)
}
if len(nodes) != 1 {
t.Fatalf("remote returned more than one node for FINDNODE [0]")
}
if nodes[0].ID() != conn.remote.ID() {
t.Errorf("ID of response node is %v, want %v", nodes[0].ID(), conn.remote.ID())
}
}
// In this test, multiple nodes ping the node under test. After waiting for them to be
// accepted into the remote table, the test checks that they are returned by FINDNODE.
func (s *Suite) TestFindnodeResults(t *utesting.T) {
// Create bystanders.
nodes := make([]*bystander, 5)
added := make(chan enode.ID, len(nodes))
for i := range nodes {
nodes[i] = newBystander(t, s, added)
defer nodes[i].close()
}
// Get them added to the remote table.
timeout := 60 * time.Second
timeoutCh := time.After(timeout)
for count := 0; count < len(nodes); {
select {
case id := <-added:
t.Logf("bystander node %v added to remote table", id)
count++
case <-timeoutCh:
t.Errorf("remote added %d bystander nodes in %v, need %d to continue", count, timeout, len(nodes))
t.Logf("this can happen if the node has a non-empty table from previous runs")
return
}
}
t.Logf("all %d bystander nodes were added", len(nodes))
// Collect our nodes by distance.
var dists []uint
expect := make(map[enode.ID]*enode.Node)
for _, bn := range nodes {
n := bn.conn.localNode.Node()
expect[n.ID()] = n
d := uint(enode.LogDist(n.ID(), s.Dest.ID()))
if !containsUint(dists, d) {
dists = append(dists, d)
}
}
// Send FINDNODE for all distances.
conn, l1 := s.listen1(t)
defer conn.close()
foundNodes, err := conn.findnode(l1, dists)
if err != nil {
t.Fatal(err)
}
t.Logf("remote returned %d nodes for distance list %v", len(foundNodes), dists)
for _, n := range foundNodes {
delete(expect, n.ID())
}
if len(expect) > 0 {
t.Errorf("missing %d nodes in FINDNODE result", len(expect))
t.Logf("this can happen if the test is run multiple times in quick succession")
t.Logf("and the remote node hasn't removed dead nodes from previous runs yet")
} else {
t.Logf("all %d expected nodes were returned", len(nodes))
}
}
// A bystander is a node whose only purpose is filling a spot in the remote table.
type bystander struct {
dest *enode.Node
conn *conn
l net.PacketConn
addedCh chan enode.ID
done sync.WaitGroup
}
func newBystander(t *utesting.T, s *Suite, added chan enode.ID) *bystander {
conn, l := s.listen1(t)
conn.setEndpoint(l) // bystander nodes need IP/port to get pinged
bn := &bystander{
conn: conn,
l: l,
dest: s.Dest,
addedCh: added,
}
bn.done.Add(1)
go bn.loop()
return bn
}
// id returns the node ID of the bystander.
func (bn *bystander) id() enode.ID {
return bn.conn.localNode.ID()
}
// close shuts down loop.
func (bn *bystander) close() {
bn.conn.close()
bn.done.Wait()
}
// loop answers packets from the remote node until quit.
func (bn *bystander) loop() {
defer bn.done.Done()
var (
lastPing time.Time
wasAdded bool
)
for {
// Ping the remote node.
if !wasAdded && time.Since(lastPing) > 10*time.Second {
bn.conn.reqresp(bn.l, &v5wire.Ping{
ReqID: bn.conn.nextReqID(),
ENRSeq: bn.dest.Seq(),
})
lastPing = time.Now()
}
// Answer packets.
switch p := bn.conn.read(bn.l).(type) {
case *v5wire.Ping:
bn.conn.write(bn.l, &v5wire.Pong{
ReqID: p.ReqID,
ENRSeq: bn.conn.localNode.Seq(),
ToIP: bn.dest.IP(),
ToPort: uint16(bn.dest.UDP()),
}, nil)
wasAdded = true
bn.notifyAdded()
case *v5wire.Findnode:
bn.conn.write(bn.l, &v5wire.Nodes{ReqID: p.ReqID, Total: 1}, nil)
wasAdded = true
bn.notifyAdded()
case *v5wire.TalkRequest:
bn.conn.write(bn.l, &v5wire.TalkResponse{ReqID: p.ReqID}, nil)
case *readError:
if !netutil.IsTemporaryError(p.err) {
bn.conn.logf("shutting down: %v", p.err)
return
}
}
}
}
func (bn *bystander) notifyAdded() {
if bn.addedCh != nil {
bn.addedCh <- bn.id()
bn.addedCh = nil
}
}
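
The suite methods above all share the func(*utesting.T) signature consumed by the devp2p test runner added later in this change set (cmd/devp2p/runtest.go). A minimal sketch of how such a suite might be exposed to that runner; the AllTests method name is an assumption, only the test functions are from the diff:

```go
// Hedged sketch, not part of the commit: collecting the discv5 tests above into
// the []utesting.Test slice that utesting.RunTests/RunTAP expect.
func (s *Suite) AllTests() []utesting.Test {
	return []utesting.Test{
		{Name: "Ping/MultiIP", Fn: s.TestPingMultiIP},
		{Name: "Ping/HandshakeInterrupted", Fn: s.TestPingHandshakeInterrupted},
		{Name: "TalkRequest", Fn: s.TestTalkRequest},
		{Name: "Findnode/ZeroDistance", Fn: s.TestFindnodeZeroDistance},
		{Name: "Findnode/Results", Fn: s.TestFindnodeResults},
	}
}
```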


@@ -0,0 +1,263 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v5test
import (
"bytes"
"crypto/ecdsa"
"encoding/binary"
"fmt"
"net"
"time"
"github.com/ethereum/go-ethereum/common/mclock"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/discover/v5wire"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
)
// readError represents an error during packet reading.
// This exists to facilitate type-switching on the result of conn.read.
type readError struct {
err error
}
func (p *readError) Kind() byte { return 99 }
func (p *readError) Name() string { return fmt.Sprintf("error: %v", p.err) }
func (p *readError) Error() string { return p.err.Error() }
func (p *readError) Unwrap() error { return p.err }
func (p *readError) RequestID() []byte { return nil }
func (p *readError) SetRequestID([]byte) {}
// readErrorf creates a readError with the given text.
func readErrorf(format string, args ...interface{}) *readError {
return &readError{fmt.Errorf(format, args...)}
}
// This is the response timeout used in tests.
const waitTime = 300 * time.Millisecond
// conn is a connection to the node under test.
type conn struct {
localNode *enode.LocalNode
localKey *ecdsa.PrivateKey
remote *enode.Node
remoteAddr *net.UDPAddr
listeners []net.PacketConn
log logger
codec *v5wire.Codec
lastRequest v5wire.Packet
lastChallenge *v5wire.Whoareyou
idCounter uint32
}
type logger interface {
Logf(string, ...interface{})
}
// newConn sets up a connection to the given node.
func newConn(dest *enode.Node, log logger) *conn {
key, err := crypto.GenerateKey()
if err != nil {
panic(err)
}
db, err := enode.OpenDB("")
if err != nil {
panic(err)
}
ln := enode.NewLocalNode(db, key)
return &conn{
localKey: key,
localNode: ln,
remote: dest,
remoteAddr: &net.UDPAddr{IP: dest.IP(), Port: dest.UDP()},
codec: v5wire.NewCodec(ln, key, mclock.System{}),
log: log,
}
}
func (tc *conn) setEndpoint(c net.PacketConn) {
tc.localNode.SetStaticIP(laddr(c).IP)
tc.localNode.SetFallbackUDP(laddr(c).Port)
}
func (tc *conn) listen(ip string) net.PacketConn {
l, err := net.ListenPacket("udp", fmt.Sprintf("%v:0", ip))
if err != nil {
panic(err)
}
tc.listeners = append(tc.listeners, l)
return l
}
// close shuts down all listeners and the local node.
func (tc *conn) close() {
for _, l := range tc.listeners {
l.Close()
}
tc.localNode.Database().Close()
}
// nextReqID creates a request id.
func (tc *conn) nextReqID() []byte {
id := make([]byte, 4)
tc.idCounter++
binary.BigEndian.PutUint32(id, tc.idCounter)
return id
}
// reqresp performs a request/response interaction on the given connection.
// The request is retried if a handshake is requested.
func (tc *conn) reqresp(c net.PacketConn, req v5wire.Packet) v5wire.Packet {
reqnonce := tc.write(c, req, nil)
switch resp := tc.read(c).(type) {
case *v5wire.Whoareyou:
if resp.Nonce != reqnonce {
return readErrorf("wrong nonce %x in WHOAREYOU (want %x)", resp.Nonce[:], reqnonce[:])
}
resp.Node = tc.remote
tc.write(c, req, resp)
return tc.read(c)
default:
return resp
}
}
// findnode sends a FINDNODE request and waits for its responses.
func (tc *conn) findnode(c net.PacketConn, dists []uint) ([]*enode.Node, error) {
var (
findnode = &v5wire.Findnode{ReqID: tc.nextReqID(), Distances: dists}
reqnonce = tc.write(c, findnode, nil)
first = true
total uint8
results []*enode.Node
)
for n := 1; n > 0; {
switch resp := tc.read(c).(type) {
case *v5wire.Whoareyou:
// Handle handshake.
if resp.Nonce == reqnonce {
resp.Node = tc.remote
tc.write(c, findnode, resp)
} else {
return nil, fmt.Errorf("unexpected WHOAREYOU (nonce %x), waiting for NODES", resp.Nonce[:])
}
case *v5wire.Ping:
// Handle ping from remote.
tc.write(c, &v5wire.Pong{
ReqID: resp.ReqID,
ENRSeq: tc.localNode.Seq(),
}, nil)
case *v5wire.Nodes:
// Got NODES! Check request ID.
if !bytes.Equal(resp.ReqID, findnode.ReqID) {
return nil, fmt.Errorf("NODES response has wrong request id %x", resp.ReqID)
}
// Check total count. It should be greater than zero
// and needs to be the same across all responses.
if first {
if resp.Total == 0 || resp.Total > 6 {
return nil, fmt.Errorf("invalid NODES response 'total' %d (not in (0,7))", resp.Total)
}
total = resp.Total
n = int(total) - 1
first = false
} else {
n--
if resp.Total != total {
return nil, fmt.Errorf("invalid NODES response 'total' %d (!= %d)", resp.Total, total)
}
}
// Check nodes.
nodes, err := checkRecords(resp.Nodes)
if err != nil {
return nil, fmt.Errorf("invalid node in NODES response: %v", err)
}
results = append(results, nodes...)
default:
return nil, fmt.Errorf("expected NODES, got %v", resp)
}
}
return results, nil
}
// write sends a packet on the given connection.
func (tc *conn) write(c net.PacketConn, p v5wire.Packet, challenge *v5wire.Whoareyou) v5wire.Nonce {
packet, nonce, err := tc.codec.Encode(tc.remote.ID(), tc.remoteAddr.String(), p, challenge)
if err != nil {
panic(fmt.Errorf("can't encode %v packet: %v", p.Name(), err))
}
if _, err := c.WriteTo(packet, tc.remoteAddr); err != nil {
tc.logf("Can't send %s: %v", p.Name(), err)
} else {
tc.logf(">> %s", p.Name())
}
return nonce
}
// read waits for an incoming packet on the given connection.
func (tc *conn) read(c net.PacketConn) v5wire.Packet {
buf := make([]byte, 1280)
if err := c.SetReadDeadline(time.Now().Add(waitTime)); err != nil {
return &readError{err}
}
n, fromAddr, err := c.ReadFrom(buf)
if err != nil {
return &readError{err}
}
_, _, p, err := tc.codec.Decode(buf[:n], fromAddr.String())
if err != nil {
return &readError{err}
}
tc.logf("<< %s", p.Name())
return p
}
// logf prints to the test log.
func (tc *conn) logf(format string, args ...interface{}) {
if tc.log != nil {
tc.log.Logf("(%s) %s", tc.localNode.ID().TerminalString(), fmt.Sprintf(format, args...))
}
}
func laddr(c net.PacketConn) *net.UDPAddr {
return c.LocalAddr().(*net.UDPAddr)
}
func checkRecords(records []*enr.Record) ([]*enode.Node, error) {
nodes := make([]*enode.Node, len(records))
for i := range records {
n, err := enode.New(enode.ValidSchemes, records[i])
if err != nil {
return nil, err
}
nodes[i] = n
}
return nodes, nil
}
func containsUint(ints []uint, x uint) bool {
for i := range ints {
if ints[i] == x {
return true
}
}
return false
}
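
The tests earlier in this suite call s.listen1 and s.listen2, which are defined outside the hunks shown here. A plausible sketch of those helpers, built only from the conn API above; the Listen1/Listen2 fields on Suite are assumptions:

```go
// Hedged sketch of the listen helpers used by the test functions; only newConn,
// conn.listen and the logger interface come from the code above.
func (s *Suite) listen1(log logger) (*conn, net.PacketConn) {
	c := newConn(s.Dest, log)
	return c, c.listen(s.Listen1)
}

func (s *Suite) listen2(log logger) (*conn, net.PacketConn, net.PacketConn) {
	c := newConn(s.Dest, log)
	return c, c.listen(s.Listen1), c.listen(s.Listen2)
}
```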


@@ -63,6 +63,7 @@ func init() {
discv5Command,
dnsCommand,
nodesetCommand,
rlpxCommand,
}
}
@@ -80,7 +81,7 @@ func commandHasFlag(ctx *cli.Context, flag cli.Flag) bool {
// getNodeArg handles the common case of a single node descriptor argument.
func getNodeArg(ctx *cli.Context) *enode.Node {
if ctx.NArg() != 1 {
if ctx.NArg() < 1 {
exit("missing node as command-line argument")
}
n, err := parseNode(ctx.Args()[0])


@@ -95,6 +95,7 @@ var filterFlags = map[string]nodeFilterC{
"-min-age": {1, minAgeFilter},
"-eth-network": {1, ethFilter},
"-les-server": {0, lesFilter},
"-snap": {0, snapFilter},
}
func parseFilters(args []string) ([]nodeFilter, error) {
@@ -104,15 +105,15 @@ func parseFilters(args []string) ([]nodeFilter, error) {
if !ok {
return nil, fmt.Errorf("invalid filter %q", args[0])
}
if len(args) < fc.narg {
return nil, fmt.Errorf("filter %q wants %d arguments, have %d", args[0], fc.narg, len(args))
if len(args)-1 < fc.narg {
return nil, fmt.Errorf("filter %q wants %d arguments, have %d", args[0], fc.narg, len(args)-1)
}
filter, err := fc.fn(args[1:])
filter, err := fc.fn(args[1 : 1+fc.narg])
if err != nil {
return nil, fmt.Errorf("%s: %v", args[0], err)
}
filters = append(filters, filter)
args = args[fc.narg+1:]
args = args[1+fc.narg:]
}
return filters, nil
}
@@ -191,3 +192,13 @@ func lesFilter(args []string) (nodeFilter, error) {
}
return f, nil
}
func snapFilter(args []string) (nodeFilter, error) {
f := func(n nodeJSON) bool {
var snap struct {
_ []rlp.RawValue `rlp:"tail"`
}
return n.N.Load(enr.WithEntry("snap", &snap)) == nil
}
return f, nil
}
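
The parseFilters change above fixes an off-by-one: the filter name itself used to be counted against narg, so argument lists could be mis-sliced. A hypothetical walk-through of the corrected consumption, using filters from the table above:

```go
// Hypothetical illustration of the corrected slicing in parseFilters.
func exampleFilterSlicing() {
	args := []string{"-min-age", "24h", "-snap"}

	narg := 1                      // -min-age takes one argument
	minAgeArgs := args[1 : 1+narg] // ["24h"] handed to minAgeFilter
	args = args[1+narg:]           // ["-snap"] left for the next iteration

	narg = 0                     // -snap takes no arguments
	snapArgs := args[1 : 1+narg] // [] handed to snapFilter
	args = args[1+narg:]         // [] -- parsing terminates

	_, _, _ = minAgeArgs, snapArgs, args
}
```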

cmd/devp2p/rlpxcmd.go (new file, 99 lines)

@@ -0,0 +1,99 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"net"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/ethtest"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/rlpx"
"github.com/ethereum/go-ethereum/rlp"
"gopkg.in/urfave/cli.v1"
)
var (
rlpxCommand = cli.Command{
Name: "rlpx",
Usage: "RLPx Commands",
Subcommands: []cli.Command{
rlpxPingCommand,
rlpxEthTestCommand,
},
}
rlpxPingCommand = cli.Command{
Name: "ping",
Usage: "ping <node>",
Action: rlpxPing,
}
rlpxEthTestCommand = cli.Command{
Name: "eth-test",
Usage: "Runs tests against a node",
ArgsUsage: "<node> <chain.rlp> <genesis.json>",
Action: rlpxEthTest,
Flags: []cli.Flag{
testPatternFlag,
testTAPFlag,
},
}
)
func rlpxPing(ctx *cli.Context) error {
n := getNodeArg(ctx)
fd, err := net.Dial("tcp", fmt.Sprintf("%v:%d", n.IP(), n.TCP()))
if err != nil {
return err
}
conn := rlpx.NewConn(fd, n.Pubkey())
ourKey, _ := crypto.GenerateKey()
_, err = conn.Handshake(ourKey)
if err != nil {
return err
}
code, data, _, err := conn.Read()
if err != nil {
return err
}
switch code {
case 0:
var h ethtest.Hello
if err := rlp.DecodeBytes(data, &h); err != nil {
return fmt.Errorf("invalid handshake: %v", err)
}
fmt.Printf("%+v\n", h)
case 1:
var msg []p2p.DiscReason
if rlp.DecodeBytes(data, &msg); len(msg) == 0 {
return fmt.Errorf("invalid disconnect message")
}
return fmt.Errorf("received disconnect message: %v", msg[0])
default:
return fmt.Errorf("invalid message code %d, expected handshake (code zero)", code)
}
return nil
}
// rlpxEthTest runs the eth protocol test suite.
func rlpxEthTest(ctx *cli.Context) error {
if ctx.NArg() < 3 {
exit("missing path to chain.rlp as command-line argument")
}
suite := ethtest.NewSuite(getNodeArg(ctx), ctx.Args()[1], ctx.Args()[2])
return runTests(ctx, suite.AllTests())
}

cmd/devp2p/runtest.go (new file, 69 lines)

@@ -0,0 +1,69 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"os"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v4test"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
var (
testPatternFlag = cli.StringFlag{
Name: "run",
Usage: "Pattern of test suite(s) to run",
}
testTAPFlag = cli.BoolFlag{
Name: "tap",
Usage: "Output TAP",
}
// These two are specific to the discovery tests.
testListen1Flag = cli.StringFlag{
Name: "listen1",
Usage: "IP address of the first tester",
Value: v4test.Listen1,
}
testListen2Flag = cli.StringFlag{
Name: "listen2",
Usage: "IP address of the second tester",
Value: v4test.Listen2,
}
)
func runTests(ctx *cli.Context, tests []utesting.Test) error {
// Filter test cases.
if ctx.IsSet(testPatternFlag.Name) {
tests = utesting.MatchTests(tests, ctx.String(testPatternFlag.Name))
}
// Disable logging unless explicitly enabled.
if !ctx.GlobalIsSet("verbosity") && !ctx.GlobalIsSet("vmodule") {
log.Root().SetHandler(log.DiscardHandler())
}
// Run the tests.
var run = utesting.RunTests
if ctx.Bool(testTAPFlag.Name) {
run = utesting.RunTAP
}
results := run(tests, os.Stdout)
if utesting.CountFailures(results) > 0 {
os.Exit(1)
}
return nil
}


@@ -29,6 +29,8 @@ Command line params that has to be supported are
--trace Output full trace logs to files <txhash>.jsonl
--trace.nomemory Disable full memory dump in traces
--trace.nostack Disable stack output in traces
--trace.noreturndata Disable return data output in traces
--output.basedir value Specifies where output files are placed. Will be created if it does not exist. (default: ".")
--output.alloc alloc Determines where to put the alloc of the post-state.
`stdout` - into the stdout output
`stderr` - into the stderr output
@@ -232,13 +234,13 @@ Example where blockhashes are provided:
./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace
```
```
cat trace-0.jsonl | grep BLOCKHASH -C2
cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2
```
```
{"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memory":"0x","memSize":0,"stack":[],"returnStack":[],"depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memory":"0x","memSize":0,"stack":["0x1"],"returnStack":[],"depth":1,"refund":0,"opName":"BLOCKHASH","error":""}
{"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memory":"0x","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"returnStack":[],"depth":1,"refund":0,"opName":"STOP","error":""}
{"output":"","gasUsed":"0x17","time":155861}
{"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memory":"0x","memSize":0,"stack":[],"returnStack":[],"returnData":null,"depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memory":"0x","memSize":0,"stack":["0x1"],"returnStack":[],"returnData":null,"depth":1,"refund":0,"opName":"BLOCKHASH","error":""}
{"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memory":"0x","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"returnStack":[],"returnData":null,"depth":1,"refund":0,"opName":"STOP","error":""}
{"output":"","gasUsed":"0x17","time":112885}
```
In this example, the caller has not provided the required blockhash:
@@ -254,9 +256,9 @@ Error code: 4
Another thing that can be done is to chain invocations:
```
./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json
INFO [06-29|11:52:04.934] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
INFO [06-29|11:52:04.936] rejected tx index=0 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
INFO [06-29|11:52:04.936] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
INFO [08-03|15:25:15.168] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
INFO [08-03|15:25:15.169] rejected tx index=0 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
INFO [08-03|15:25:15.169] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
```
What happened here is that we first applied two identical transactions, so the second one was rejected.


@@ -23,7 +23,7 @@ import (
"github.com/ethereum/go-ethereum/cmd/evm/internal/compiler"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
)
var compileCommand = cli.Command{


@@ -23,7 +23,7 @@ import (
"strings"
"github.com/ethereum/go-ethereum/core/asm"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
)
var disasmCommand = cli.Command{


@@ -34,6 +34,7 @@ import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
"golang.org/x/crypto/sha3"
)
@@ -81,7 +82,7 @@ type stEnvMarshaling struct {
// Apply applies a set of transactions to a pre-state
func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
txs types.Transactions, miningReward int64,
getTracerFn func(txIndex int) (tracer vm.Tracer, err error)) (*state.StateDB, *ExecutionResult, error) {
getTracerFn func(txIndex int, txHash common.Hash) (tracer vm.Tracer, err error)) (*state.StateDB, *ExecutionResult, error) {
// Capture errors for BLOCKHASH operation, if we haven't been supplied the
// required blockhashes
@@ -109,7 +110,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
txIndex = 0
)
gaspool.AddGas(pre.Env.GasLimit)
vmContext := vm.Context{
vmContext := vm.BlockContext{
CanTransfer: core.CanTransfer,
Transfer: core.Transfer,
Coinbase: pre.Env.Coinbase,
@@ -118,7 +119,6 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
Difficulty: pre.Env.Difficulty,
GasLimit: pre.Env.GasLimit,
GetHash: getHash,
// GasPrice and Origin needs to be set per transaction
}
// If DAO is supported/enabled, we need to handle it here. In geth 'proper', it's
// done in StateProcessor.Process(block, ...), right before transactions are applied.
@@ -135,17 +135,26 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
rejectedTxs = append(rejectedTxs, i)
continue
}
tracer, err := getTracerFn(txIndex)
tracer, err := getTracerFn(txIndex, tx.Hash())
if err != nil {
return nil, nil, err
}
vmConfig.Tracer = tracer
vmConfig.Debug = (tracer != nil)
statedb.Prepare(tx.Hash(), blockHash, txIndex)
vmContext.GasPrice = msg.GasPrice()
vmContext.Origin = msg.From()
txContext := core.NewEVMTxContext(msg)
evm := vm.NewEVM(vmContext, statedb, chainConfig, vmConfig)
evm := vm.NewEVM(vmContext, txContext, statedb, chainConfig, vmConfig)
if chainConfig.IsYoloV2(vmContext.BlockNumber) {
statedb.AddAddressToAccessList(msg.From())
if dst := msg.To(); dst != nil {
statedb.AddAddressToAccessList(*dst)
// If it's a create-tx, the destination will be added inside evm.create
}
for _, addr := range evm.ActivePrecompiles() {
statedb.AddAddressToAccessList(addr)
}
}
snapshot := statedb.Snapshot()
// (ret []byte, usedGas uint64, failed bool, err error)
msgResult, err := core.ApplyMessage(evm, msg, gaspool)
@@ -174,7 +183,7 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
receipt.GasUsed = msgResult.UsedGas
// if the transaction created a contract, store the creation address in the receipt.
if msg.To() == nil {
receipt.ContractAddress = crypto.CreateAddress(evm.Context.Origin, tx.Nonce())
receipt.ContractAddress = crypto.CreateAddress(evm.TxContext.Origin, tx.Nonce())
}
// Set the receipt logs and create a bloom for filtering
receipt.Logs = statedb.GetLogs(tx.Hash())
@@ -220,8 +229,8 @@ func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
}
execRs := &ExecutionResult{
StateRoot: root,
TxRoot: types.DeriveSha(includedTxs),
ReceiptRoot: types.DeriveSha(receipts),
TxRoot: types.DeriveSha(includedTxs, new(trie.Trie)),
ReceiptRoot: types.DeriveSha(receipts, new(trie.Trie)),
Bloom: types.CreateBloom(receipts),
LogsHash: rlpHash(statedb.Logs()),
Receipts: receipts,
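
The hunks above follow the upstream split of the old vm.Context into a block-level vm.BlockContext and a per-transaction vm.TxContext. A minimal sketch of the new construction pattern; only the names appearing in the diff are real, everything else is a placeholder:

```go
// Hedged sketch of the BlockContext/TxContext split; coinbase, blockNumber,
// difficulty, gasLimit, getHash, messages, statedb, chainConfig, vmConfig and
// gaspool are placeholders.
blockCtx := vm.BlockContext{
	CanTransfer: core.CanTransfer,
	Transfer:    core.Transfer,
	Coinbase:    coinbase,
	BlockNumber: blockNumber,
	Difficulty:  difficulty,
	GasLimit:    gasLimit,
	GetHash:     getHash,
}
for _, msg := range messages {
	txCtx := core.NewEVMTxContext(msg) // carries the per-tx Origin and GasPrice
	evm := vm.NewEVM(blockCtx, txCtx, statedb, chainConfig, vmConfig)
	if _, err := core.ApplyMessage(evm, msg, gaspool); err != nil {
		// rejected transaction
	}
}
```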


@@ -42,6 +42,11 @@ var (
Name: "trace.noreturndata",
Usage: "Disable return data output in traces",
}
OutputBasedir = cli.StringFlag{
Name: "output.basedir",
Usage: "Specifies where output files are placed. Will be created if it does not exist.",
Value: "",
}
OutputAllocFlag = cli.StringFlag{
Name: "output.alloc",
Usage: "Determines where to put the `alloc` of the post-state.\n" +


@@ -22,6 +22,7 @@ import (
"io/ioutil"
"math/big"
"os"
"path"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
@@ -75,11 +76,22 @@ func Main(ctx *cli.Context) error {
log.Root().SetHandler(glogger)
var (
err error
tracer vm.Tracer
err error
tracer vm.Tracer
baseDir = ""
)
var getTracer func(txIndex int) (vm.Tracer, error)
var getTracer func(txIndex int, txHash common.Hash) (vm.Tracer, error)
// If user specified a basedir, make sure it exists
if ctx.IsSet(OutputBasedir.Name) {
if base := ctx.String(OutputBasedir.Name); len(base) > 0 {
err := os.MkdirAll(base, 0755) // //rw-r--r--
if err != nil {
return NewError(ErrorIO, fmt.Errorf("failed creating output basedir: %v", err))
}
baseDir = base
}
}
if ctx.Bool(TraceFlag.Name) {
// Configure the EVM logger
logConfig := &vm.LogConfig{
@@ -95,11 +107,11 @@ func Main(ctx *cli.Context) error {
prevFile.Close()
}
}()
getTracer = func(txIndex int) (vm.Tracer, error) {
getTracer = func(txIndex int, txHash common.Hash) (vm.Tracer, error) {
if prevFile != nil {
prevFile.Close()
}
traceFile, err := os.Create(fmt.Sprintf("trace-%d.jsonl", txIndex))
traceFile, err := os.Create(path.Join(baseDir, fmt.Sprintf("trace-%d-%v.jsonl", txIndex, txHash.String())))
if err != nil {
return nil, NewError(ErrorIO, fmt.Errorf("failed creating trace-file: %v", err))
}
@@ -107,7 +119,7 @@ func Main(ctx *cli.Context) error {
return vm.NewJSONLogger(logConfig, traceFile), nil
}
} else {
getTracer = func(txIndex int) (tracer vm.Tracer, err error) {
getTracer = func(txIndex int, txHash common.Hash) (tracer vm.Tracer, err error) {
return nil, nil
}
}
@@ -197,7 +209,7 @@ func Main(ctx *cli.Context) error {
//postAlloc := state.DumpGenesisFormat(false, false, false)
collector := make(Alloc)
state.DumpToCollector(collector, false, false, false, nil, -1)
return dispatchOutput(ctx, result, collector)
return dispatchOutput(ctx, baseDir, result, collector)
}
@@ -224,12 +236,12 @@ func (g Alloc) OnAccount(addr common.Address, dumpAccount state.DumpAccount) {
}
// saveFile marshalls the object to the given file
func saveFile(filename string, data interface{}) error {
func saveFile(baseDir, filename string, data interface{}) error {
b, err := json.MarshalIndent(data, "", " ")
if err != nil {
return NewError(ErrorJson, fmt.Errorf("failed marshalling output: %v", err))
}
if err = ioutil.WriteFile(filename, b, 0644); err != nil {
if err = ioutil.WriteFile(path.Join(baseDir, filename), b, 0644); err != nil {
return NewError(ErrorIO, fmt.Errorf("failed writing output: %v", err))
}
return nil
@@ -237,26 +249,26 @@ func saveFile(filename string, data interface{}) error {
// dispatchOutput writes the output data to either stderr or stdout, or to the specified
// files
func dispatchOutput(ctx *cli.Context, result *ExecutionResult, alloc Alloc) error {
func dispatchOutput(ctx *cli.Context, baseDir string, result *ExecutionResult, alloc Alloc) error {
stdOutObject := make(map[string]interface{})
stdErrObject := make(map[string]interface{})
dispatch := func(fName, name string, obj interface{}) error {
dispatch := func(baseDir, fName, name string, obj interface{}) error {
switch fName {
case "stdout":
stdOutObject[name] = obj
case "stderr":
stdErrObject[name] = obj
default: // save to file
if err := saveFile(fName, obj); err != nil {
if err := saveFile(baseDir, fName, obj); err != nil {
return err
}
}
return nil
}
if err := dispatch(ctx.String(OutputAllocFlag.Name), "alloc", alloc); err != nil {
if err := dispatch(baseDir, ctx.String(OutputAllocFlag.Name), "alloc", alloc); err != nil {
return err
}
if err := dispatch(ctx.String(OutputResultFlag.Name), "result", result); err != nil {
if err := dispatch(baseDir, ctx.String(OutputResultFlag.Name), "result", result); err != nil {
return err
}
if len(stdOutObject) > 0 {


@@ -146,6 +146,7 @@ var stateTransitionCommand = cli.Command{
t8ntool.TraceDisableMemoryFlag,
t8ntool.TraceDisableStackFlag,
t8ntool.TraceDisableReturnDataFlag,
t8ntool.OutputBasedir,
t8ntool.OutputAllocFlag,
t8ntool.OutputResultFlag,
t8ntool.InputAllocFlag,


@@ -38,7 +38,7 @@ import (
"github.com/ethereum/go-ethereum/core/vm/runtime"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
)
var runCommand = cli.Command{


@@ -28,7 +28,7 @@ import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/tests"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
)
var stateTestCommand = cli.Command{


@@ -155,10 +155,10 @@ echo "Example where blockhashes are provided: "
cmd="./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace"
tick && echo $cmd && tick
$cmd 2>&1 >/dev/null
cmd="cat trace-0.jsonl | grep BLOCKHASH -C2"
cmd="cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2"
tick && echo $cmd && tick
echo "$ticks"
cat trace-0.jsonl | grep BLOCKHASH -C2
cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2
echo "$ticks"
echo ""


@@ -43,6 +43,7 @@ import (
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
@@ -235,23 +236,21 @@ func newFaucet(genesis *core.Genesis, port int, enodes []*discv5.Node, network u
if err != nil {
return nil, err
}
// Assemble the Ethereum light client protocol
if err := stack.Register(func(ctx *node.ServiceContext) (node.Service, error) {
cfg := eth.DefaultConfig
cfg.SyncMode = downloader.LightSync
cfg.NetworkId = network
cfg.Genesis = genesis
return les.New(ctx, &cfg)
}); err != nil {
return nil, err
cfg := eth.DefaultConfig
cfg.SyncMode = downloader.LightSync
cfg.NetworkId = network
cfg.Genesis = genesis
utils.SetDNSDiscoveryDefaults(&cfg, genesis.ToBlock(nil).Hash())
lesBackend, err := les.New(stack, &cfg)
if err != nil {
return nil, fmt.Errorf("Failed to register the Ethereum service: %w", err)
}
// Assemble the ethstats monitoring and reporting service
if stats != "" {
if err := stack.Register(func(ctx *node.ServiceContext) (node.Service, error) {
var serv *les.LightEthereum
ctx.Service(&serv)
return ethstats.New(stats, nil, serv)
}); err != nil {
if err := ethstats.New(stack, lesBackend.ApiBackend, lesBackend.Engine(), stats); err != nil {
return nil, err
}
}
@@ -268,7 +267,7 @@ func newFaucet(genesis *core.Genesis, port int, enodes []*discv5.Node, network u
// Attach to the client and retrieve any interesting metadata
api, err := stack.Attach()
if err != nil {
stack.Stop()
stack.Close()
return nil, err
}
client := ethclient.NewClient(api)
@@ -733,7 +732,10 @@ func authTwitter(url string) (string, string, common.Address, error) {
// returning the username, avatar URL and Ethereum address to fund on success.
func authFacebook(url string) (string, string, common.Address, error) {
// Ensure the user specified a meaningful URL, no fancy nonsense
parts := strings.Split(url, "/")
parts := strings.Split(strings.Split(url, "?")[0], "/")
if parts[len(parts)-1] == "" {
parts = parts[0 : len(parts)-1]
}
if len(parts) < 4 || parts[len(parts)-2] != "posts" {
//lint:ignore ST1005 This error is to be displayed in the browser
return "", "", common.Address{}, errors.New("Invalid Facebook post URL")


@@ -49,7 +49,7 @@
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="input-group">
<input id="url" name="url" type="text" class="form-control" placeholder="Social network URL containing your Ethereum address...">
<input id="url" name="url" type="text" class="form-control" placeholder="Social network URL containing your Ethereum address..."/>
<span class="input-group-btn">
<button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">Give me Ether <i class="fa fa-caret-down" aria-hidden="true"></i></button>
<ul class="dropdown-menu dropdown-menu-right">{{range $idx, $amount := .Amounts}}


@@ -43,13 +43,13 @@ func tmpDatadirWithKeystore(t *testing.T) string {
}
func TestAccountListEmpty(t *testing.T) {
geth := runGeth(t, "account", "list")
geth := runGeth(t, "--nousb", "account", "list")
geth.ExpectExit()
}
func TestAccountList(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t, "account", "list", "--datadir", datadir)
geth := runGeth(t, "--nousb", "account", "list", "--datadir", datadir)
defer geth.ExpectExit()
if runtime.GOOS == "windows" {
geth.Expect(`
@@ -138,7 +138,7 @@ Fatal: Passwords do not match
func TestAccountUpdate(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t, "account", "update",
geth := runGeth(t, "--nousb", "account", "update",
"--datadir", datadir, "--lightkdf",
"f466859ead1932d743d622cb74fc058882e8648a")
defer geth.ExpectExit()
@@ -153,7 +153,7 @@ Repeat password: {{.InputLine "foobar2"}}
}
func TestWalletImport(t *testing.T) {
geth := runGeth(t, "wallet", "import", "--lightkdf", "testdata/guswallet.json")
geth := runGeth(t, "--nousb", "wallet", "import", "--lightkdf", "testdata/guswallet.json")
defer geth.ExpectExit()
geth.Expect(`
!! Unsupported terminal, password will be echoed.
@@ -168,7 +168,7 @@ Address: {d4584b5f6229b7be90727b0fc8c6b91bb427821f}
}
func TestWalletImportBadPassword(t *testing.T) {
geth := runGeth(t, "wallet", "import", "--lightkdf", "testdata/guswallet.json")
geth := runGeth(t, "--nousb", "wallet", "import", "--lightkdf", "testdata/guswallet.json")
defer geth.ExpectExit()
geth.Expect(`
!! Unsupported terminal, password will be echoed.
@@ -178,11 +178,8 @@ Fatal: could not decrypt key with given password
}
func TestUnlockFlag(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t,
"--datadir", datadir, "--nat", "none", "--nodiscover", "--maxpeers", "0", "--port", "0",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a",
"js", "testdata/empty.js")
geth := runMinimalGeth(t, "--port", "0", "--ipcdisable", "--datadir", tmpDatadirWithKeystore(t),
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a", "js", "testdata/empty.js")
geth.Expect(`
Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
@@ -202,10 +199,9 @@ Password: {{.InputLine "foobar"}}
}
func TestUnlockFlagWrongPassword(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t,
"--datadir", datadir, "--nat", "none", "--nodiscover", "--maxpeers", "0", "--port", "0",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a")
geth := runMinimalGeth(t, "--port", "0", "--ipcdisable", "--datadir", tmpDatadirWithKeystore(t),
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a", "js", "testdata/empty.js")
defer geth.ExpectExit()
geth.Expect(`
Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
@@ -221,11 +217,9 @@ Fatal: Failed to unlock account f466859ead1932d743d622cb74fc058882e8648a (could
// https://github.com/ethereum/go-ethereum/issues/1785
func TestUnlockFlagMultiIndex(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t,
"--datadir", datadir, "--nat", "none", "--nodiscover", "--maxpeers", "0", "--port", "0",
"--unlock", "0,2",
"js", "testdata/empty.js")
geth := runMinimalGeth(t, "--port", "0", "--ipcdisable", "--datadir", tmpDatadirWithKeystore(t),
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a", "--unlock", "0,2", "js", "testdata/empty.js")
geth.Expect(`
Unlocking account 0 | Attempt 1/3
!! Unsupported terminal, password will be echoed.
@@ -248,11 +242,9 @@ Password: {{.InputLine "foobar"}}
}
func TestUnlockFlagPasswordFile(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t,
"--datadir", datadir, "--nat", "none", "--nodiscover", "--maxpeers", "0", "--port", "0",
"--password", "testdata/passwords.txt", "--unlock", "0,2",
"js", "testdata/empty.js")
geth := runMinimalGeth(t, "--port", "0", "--ipcdisable", "--datadir", tmpDatadirWithKeystore(t),
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a", "--password", "testdata/passwords.txt", "--unlock", "0,2", "js", "testdata/empty.js")
geth.ExpectExit()
wantMessages := []string{
@@ -268,10 +260,9 @@ func TestUnlockFlagPasswordFile(t *testing.T) {
}
func TestUnlockFlagPasswordFileWrongPassword(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t,
"--datadir", datadir, "--nat", "none", "--nodiscover", "--maxpeers", "0", "--port", "0",
"--password", "testdata/wrong-passwords.txt", "--unlock", "0,2")
geth := runMinimalGeth(t, "--port", "0", "--ipcdisable", "--datadir", tmpDatadirWithKeystore(t),
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a", "--password",
"testdata/wrong-passwords.txt", "--unlock", "0,2")
defer geth.ExpectExit()
geth.Expect(`
Fatal: Failed to unlock account 0 (could not decrypt key with given password)
@@ -280,9 +271,9 @@ Fatal: Failed to unlock account 0 (could not decrypt key with given password)
func TestUnlockFlagAmbiguous(t *testing.T) {
store := filepath.Join("..", "..", "accounts", "keystore", "testdata", "dupes")
geth := runGeth(t,
"--keystore", store, "--nat", "none", "--nodiscover", "--maxpeers", "0", "--port", "0",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a",
geth := runMinimalGeth(t, "--port", "0", "--ipcdisable", "--datadir", tmpDatadirWithKeystore(t),
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a", "--keystore",
store, "--unlock", "f466859ead1932d743d622cb74fc058882e8648a",
"js", "testdata/empty.js")
defer geth.ExpectExit()
@@ -318,9 +309,10 @@ In order to avoid this warning, you need to remove the following duplicate key f
func TestUnlockFlagAmbiguousWrongPassword(t *testing.T) {
store := filepath.Join("..", "..", "accounts", "keystore", "testdata", "dupes")
geth := runGeth(t,
"--keystore", store, "--nat", "none", "--nodiscover", "--maxpeers", "0", "--port", "0",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a")
geth := runMinimalGeth(t, "--port", "0", "--ipcdisable", "--datadir", tmpDatadirWithKeystore(t),
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a", "--keystore",
store, "--unlock", "f466859ead1932d743d622cb74fc058882e8648a")
defer geth.ExpectExit()
// Helper for the expect template, returns absolute keystore path.


@@ -163,7 +163,7 @@ The export-preimages command export hash preimages to an RLP encoded stream`,
utils.RinkebyFlag,
utils.TxLookupLimitFlag,
utils.GoerliFlag,
utils.YoloV1Flag,
utils.YoloV2Flag,
utils.LegacyTestnetFlag,
},
Category: "BLOCKCHAIN COMMANDS",
@@ -213,7 +213,7 @@ Use "ethereum dump 0" to dump the genesis block.`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV1Flag,
utils.YoloV2Flag,
utils.LegacyTestnetFlag,
utils.SyncModeFlag,
},
@@ -239,8 +239,8 @@ func initGenesis(ctx *cli.Context) error {
if err := json.NewDecoder(file).Decode(genesis); err != nil {
utils.Fatalf("invalid genesis file: %v", err)
}
// Open an initialise both full and light databases
stack := makeFullNode(ctx)
// Open and initialise both full and light databases
stack, _ := makeConfigNode(ctx)
defer stack.Close()
for _, name := range []string{"chaindata", "lightchaindata"} {
@@ -277,7 +277,8 @@ func importChain(ctx *cli.Context) error {
utils.SetupMetrics(ctx)
// Start system runtime metrics collection
go metrics.CollectProcessMetrics(3 * time.Second)
stack := makeFullNode(ctx)
stack, _ := makeConfigNode(ctx)
defer stack.Close()
chain, db := utils.MakeChain(ctx, stack, false)
@@ -371,7 +372,8 @@ func exportChain(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires an argument.")
}
stack := makeFullNode(ctx)
stack, _ := makeConfigNode(ctx)
defer stack.Close()
chain, _ := utils.MakeChain(ctx, stack, true)
@@ -406,7 +408,8 @@ func importPreimages(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires an argument.")
}
stack := makeFullNode(ctx)
stack, _ := makeConfigNode(ctx)
defer stack.Close()
db := utils.MakeChainDatabase(ctx, stack)
@@ -424,7 +427,8 @@ func exportPreimages(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires an argument.")
}
stack := makeFullNode(ctx)
stack, _ := makeConfigNode(ctx)
defer stack.Close()
db := utils.MakeChainDatabase(ctx, stack)
@@ -446,7 +450,7 @@ func copyDb(ctx *cli.Context) error {
utils.Fatalf("Source ancient chain directory path argument missing")
}
// Initialize a new chain for the running node to sync into
stack := makeFullNode(ctx)
stack, _ := makeConfigNode(ctx)
defer stack.Close()
chain, chainDb := utils.MakeChain(ctx, stack, false)
@@ -554,7 +558,7 @@ func confirmAndRemoveDB(database string, kind string) {
}
func dump(ctx *cli.Context) error {
stack := makeFullNode(ctx)
stack, _ := makeConfigNode(ctx)
defer stack.Close()
chain, chainDb := utils.MakeChain(ctx, stack, true)


@@ -24,13 +24,14 @@ import (
"reflect"
"unicode"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
whisper "github.com/ethereum/go-ethereum/whisper/whisperv6"
"github.com/naoina/toml"
)
@@ -72,9 +73,19 @@ type ethstatsConfig struct {
URL string `toml:",omitempty"`
}
// whisper has been deprecated, but clients out there might still have [Shh]
// in their config, which will crash. Cut them some slack by keeping the
// config, and displaying a message that those config switches are ineffectual.
// To be removed circa Q1 2021 -- @gballet.
type whisperDeprecatedConfig struct {
MaxMessageSize uint32 `toml:",omitempty"`
MinimumAcceptedPOW float64 `toml:",omitempty"`
RestrictConnectionBetweenLightClients bool `toml:",omitempty"`
}
type gethConfig struct {
Eth eth.Config
Shh whisper.Config
Shh whisperDeprecatedConfig
Node node.Config
Ethstats ethstatsConfig
}
@@ -104,11 +115,11 @@ func defaultNodeConfig() node.Config {
return cfg
}
// makeConfigNode loads geth configuration and creates a blank node instance.
func makeConfigNode(ctx *cli.Context) (*node.Node, gethConfig) {
// Load defaults.
cfg := gethConfig{
Eth: eth.DefaultConfig,
Shh: whisper.DefaultConfig,
Node: defaultNodeConfig(),
}
@@ -117,6 +128,10 @@ func makeConfigNode(ctx *cli.Context) (*node.Node, gethConfig) {
if err := loadConfig(file, &cfg); err != nil {
utils.Fatalf("%v", err)
}
if cfg.Shh != (whisperDeprecatedConfig{}) {
log.Warn("Deprecated whisper config detected. Whisper has been moved to github.com/ethereum/whisper")
}
}
// Apply flags.
@@ -129,49 +144,36 @@ func makeConfigNode(ctx *cli.Context) (*node.Node, gethConfig) {
if ctx.GlobalIsSet(utils.EthStatsURLFlag.Name) {
cfg.Ethstats.URL = ctx.GlobalString(utils.EthStatsURLFlag.Name)
}
utils.SetShhConfig(ctx, stack, &cfg.Shh)
utils.SetShhConfig(ctx, stack)
return stack, cfg
}
// enableWhisper returns true in case one of the whisper flags is set.
func enableWhisper(ctx *cli.Context) bool {
func checkWhisper(ctx *cli.Context) {
for _, flag := range whisperFlags {
if ctx.GlobalIsSet(flag.GetName()) {
return true
log.Warn("deprecated whisper flag detected. Whisper has been moved to github.com/ethereum/whisper")
}
}
return false
}
func makeFullNode(ctx *cli.Context) *node.Node {
// makeFullNode loads geth configuration and creates the Ethereum backend.
func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) {
stack, cfg := makeConfigNode(ctx)
utils.RegisterEthService(stack, &cfg.Eth)
// Whisper must be explicitly enabled by specifying at least 1 whisper flag or in dev mode
shhEnabled := enableWhisper(ctx)
shhAutoEnabled := !ctx.GlobalIsSet(utils.WhisperEnabledFlag.Name) && ctx.GlobalIsSet(utils.DeveloperFlag.Name)
if shhEnabled || shhAutoEnabled {
if ctx.GlobalIsSet(utils.WhisperMaxMessageSizeFlag.Name) {
cfg.Shh.MaxMessageSize = uint32(ctx.Int(utils.WhisperMaxMessageSizeFlag.Name))
}
if ctx.GlobalIsSet(utils.WhisperMinPOWFlag.Name) {
cfg.Shh.MinimumAcceptedPOW = ctx.Float64(utils.WhisperMinPOWFlag.Name)
}
if ctx.GlobalIsSet(utils.WhisperRestrictConnectionBetweenLightClientsFlag.Name) {
cfg.Shh.RestrictConnectionBetweenLightClients = true
}
utils.RegisterShhService(stack, &cfg.Shh)
}
backend := utils.RegisterEthService(stack, &cfg.Eth)
checkWhisper(ctx)
// Configure GraphQL if requested
if ctx.GlobalIsSet(utils.GraphQLEnabledFlag.Name) {
utils.RegisterGraphQLService(stack, cfg.Node.GraphQLEndpoint(), cfg.Node.GraphQLCors, cfg.Node.GraphQLVirtualHosts, cfg.Node.HTTPTimeouts)
utils.RegisterGraphQLService(stack, backend, cfg.Node)
}
// Add the Ethereum Stats daemon if requested.
if cfg.Ethstats.URL != "" {
utils.RegisterEthStatsService(stack, cfg.Ethstats.URL)
utils.RegisterEthStatsService(stack, backend, cfg.Ethstats.URL)
}
return stack
return stack, backend
}
// dumpConfig is the dumpconfig command.


@@ -78,12 +78,12 @@ JavaScript API. See https://github.com/ethereum/go-ethereum/wiki/JavaScript-Cons
func localConsole(ctx *cli.Context) error {
// Create and start the node based on the CLI flags
prepare(ctx)
node := makeFullNode(ctx)
startNode(ctx, node)
defer node.Close()
stack, backend := makeFullNode(ctx)
startNode(ctx, stack, backend)
defer stack.Close()
// Attach to the newly started node and start the JavaScript console
client, err := node.Attach()
client, err := stack.Attach()
if err != nil {
utils.Fatalf("Failed to attach to the inproc geth: %v", err)
}
@@ -136,8 +136,8 @@ func remoteConsole(ctx *cli.Context) error {
path = filepath.Join(path, "rinkeby")
} else if ctx.GlobalBool(utils.GoerliFlag.Name) {
path = filepath.Join(path, "goerli")
} else if ctx.GlobalBool(utils.YoloV1Flag.Name) {
path = filepath.Join(path, "yolo-v1")
} else if ctx.GlobalBool(utils.YoloV2Flag.Name) {
path = filepath.Join(path, "yolo-v2")
}
}
endpoint = fmt.Sprintf("%s/geth.ipc", path)
@@ -190,12 +190,12 @@ func dialRPC(endpoint string) (*rpc.Client, error) {
// everything down.
func ephemeralConsole(ctx *cli.Context) error {
// Create and start the node based on the CLI flags
node := makeFullNode(ctx)
startNode(ctx, node)
defer node.Close()
stack, backend := makeFullNode(ctx)
startNode(ctx, stack, backend)
defer stack.Close()
// Attach to the newly started node and start the JavaScript console
client, err := node.Attach()
client, err := stack.Attach()
if err != nil {
utils.Fatalf("Failed to attach to the inproc geth: %v", err)
}


@@ -31,20 +31,29 @@ import (
)
const (
ipcAPIs = "admin:1.0 debug:1.0 eth:1.0 ethash:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 shh:1.0 txpool:1.0 web3:1.0"
ipcAPIs = "admin:1.0 debug:1.0 eth:1.0 ethash:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0"
httpAPIs = "eth:1.0 net:1.0 rpc:1.0 web3:1.0"
)
// spawns geth with the given command line args, using a set of flags to minimise
// memory and disk IO. If the args don't set --datadir, the
// child geth gets a temporary data directory.
func runMinimalGeth(t *testing.T, args ...string) *testgeth {
// --ropsten to make the 'writing genesis to disk' faster (no accounts)
// --networkid=1337 to avoid cache bump
// --syncmode=full to avoid allocating fast sync bloom
allArgs := []string{"--ropsten", "--nousb", "--networkid", "1337", "--syncmode=full", "--port", "0",
"--nat", "none", "--nodiscover", "--maxpeers", "0", "--cache", "64"}
return runGeth(t, append(allArgs, args...)...)
}
// Tests that a node embedded within a console can be started up properly and
// then terminated by closing the input stream.
func TestConsoleWelcome(t *testing.T) {
coinbase := "0x8605cdbbdb6d264aa742e77020dcbc58fcdce182"
// Start a geth console, make sure it's cleaned up and terminate the console
geth := runGeth(t,
"--port", "0", "--maxpeers", "0", "--nodiscover", "--nat", "none",
"--etherbase", coinbase, "--shh",
"console")
geth := runMinimalGeth(t, "--etherbase", coinbase, "console")
// Gather all the infos the welcome message needs to contain
geth.SetTemplateFunc("goos", func() string { return runtime.GOOS })
@@ -66,16 +75,20 @@ at block: 0 ({{niltime}})
datadir: {{.Datadir}}
modules: {{apis}}
To exit, press ctrl-d
> {{.InputLine "exit"}}
`)
geth.ExpectExit()
}
// Tests that a console can be attached to a running node via various means.
func TestIPCAttachWelcome(t *testing.T) {
func TestAttachWelcome(t *testing.T) {
var (
ipc string
httpPort string
wsPort string
)
// Configure the instance for IPC attachment
coinbase := "0x8605cdbbdb6d264aa742e77020dcbc58fcdce182"
var ipc string
if runtime.GOOS == "windows" {
ipc = `\\.\pipe\geth` + strconv.Itoa(trulyRandInt(100000, 999999))
} else {
@@ -83,53 +96,28 @@ func TestIPCAttachWelcome(t *testing.T) {
defer os.RemoveAll(ws)
ipc = filepath.Join(ws, "geth.ipc")
}
// Note: we need --shh because testAttachWelcome checks for default
// list of ipc modules and shh is included there.
geth := runGeth(t,
"--port", "0", "--maxpeers", "0", "--nodiscover", "--nat", "none",
"--etherbase", coinbase, "--shh", "--ipcpath", ipc)
defer func() {
geth.Interrupt()
geth.ExpectExit()
}()
waitForEndpoint(t, ipc, 3*time.Second)
testAttachWelcome(t, geth, "ipc:"+ipc, ipcAPIs)
}
func TestHTTPAttachWelcome(t *testing.T) {
coinbase := "0x8605cdbbdb6d264aa742e77020dcbc58fcdce182"
port := strconv.Itoa(trulyRandInt(1024, 65536)) // Yeah, sometimes this will fail, sorry :P
geth := runGeth(t,
"--port", "0", "--maxpeers", "0", "--nodiscover", "--nat", "none",
"--etherbase", coinbase, "--http", "--http.port", port)
defer func() {
geth.Interrupt()
geth.ExpectExit()
}()
endpoint := "http://127.0.0.1:" + port
waitForEndpoint(t, endpoint, 3*time.Second)
testAttachWelcome(t, geth, endpoint, httpAPIs)
}
func TestWSAttachWelcome(t *testing.T) {
coinbase := "0x8605cdbbdb6d264aa742e77020dcbc58fcdce182"
port := strconv.Itoa(trulyRandInt(1024, 65536)) // Yeah, sometimes this will fail, sorry :P
geth := runGeth(t,
"--port", "0", "--maxpeers", "0", "--nodiscover", "--nat", "none",
"--etherbase", coinbase, "--ws", "--ws.port", port)
defer func() {
geth.Interrupt()
geth.ExpectExit()
}()
endpoint := "ws://127.0.0.1:" + port
waitForEndpoint(t, endpoint, 3*time.Second)
testAttachWelcome(t, geth, endpoint, httpAPIs)
// And HTTP + WS attachment
p := trulyRandInt(1024, 65533) // Yeah, sometimes this will fail, sorry :P
httpPort = strconv.Itoa(p)
wsPort = strconv.Itoa(p + 1)
geth := runMinimalGeth(t, "--etherbase", "0x8605cdbbdb6d264aa742e77020dcbc58fcdce182",
"--ipcpath", ipc,
"--http", "--http.port", httpPort,
"--ws", "--ws.port", wsPort)
t.Run("ipc", func(t *testing.T) {
waitForEndpoint(t, ipc, 3*time.Second)
testAttachWelcome(t, geth, "ipc:"+ipc, ipcAPIs)
})
t.Run("http", func(t *testing.T) {
endpoint := "http://127.0.0.1:" + httpPort
waitForEndpoint(t, endpoint, 3*time.Second)
testAttachWelcome(t, geth, endpoint, httpAPIs)
})
t.Run("ws", func(t *testing.T) {
endpoint := "ws://127.0.0.1:" + wsPort
waitForEndpoint(t, endpoint, 3*time.Second)
testAttachWelcome(t, geth, endpoint, httpAPIs)
})
}
func testAttachWelcome(t *testing.T, geth *testgeth, endpoint, apis string) {
@@ -161,6 +149,7 @@ at block: 0 ({{niltime}}){{if ipc}}
datadir: {{datadir}}{{end}}
modules: {{apis}}
To exit, press ctrl-d
> {{.InputLine "exit" }}
`)
attach.ExpectExit()


@@ -115,12 +115,11 @@ func testDAOForkBlockNewChain(t *testing.T, test int, genesis string, expectBloc
if err := ioutil.WriteFile(json, []byte(genesis), 0600); err != nil {
t.Fatalf("test %d: failed to write genesis file: %v", test, err)
}
runGeth(t, "--datadir", datadir, "init", json).WaitExit()
runGeth(t, "--datadir", datadir, "--nousb", "--networkid", "1337", "init", json).WaitExit()
} else {
// Force chain initialization
args := []string{"--port", "0", "--maxpeers", "0", "--nodiscover", "--nat", "none", "--ipcdisable", "--datadir", datadir}
geth := runGeth(t, append(args, []string{"--exec", "2+2", "console"}...)...)
geth.WaitExit()
args := []string{"--port", "0", "--nousb", "--networkid", "1337", "--maxpeers", "0", "--nodiscover", "--nat", "none", "--ipcdisable", "--datadir", datadir}
runGeth(t, append(args, []string{"--exec", "2+2", "console"}...)...).WaitExit()
}
// Retrieve the DAO config flag from the database
path := filepath.Join(datadir, "geth", "chaindata")


@@ -84,7 +84,7 @@ func TestCustomGenesis(t *testing.T) {
runGeth(t, "--nousb", "--datadir", datadir, "init", json).WaitExit()
// Query the custom genesis block
geth := runGeth(t, "--nousb",
geth := runGeth(t, "--nousb", "--networkid", "1337", "--syncmode=full",
"--datadir", datadir, "--maxpeers", "0", "--port", "0",
"--nodiscover", "--nat", "none", "--ipcdisable",
"--exec", tt.query, "console")


@@ -2,7 +2,12 @@ package main
import (
"context"
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
"sync/atomic"
"testing"
"time"
@@ -95,24 +100,57 @@ func (g *gethrpc) waitSynced() {
}
}
func startGethWithRpc(t *testing.T, name string, args ...string) *gethrpc {
g := &gethrpc{name: name}
args = append([]string{"--networkid=42", "--port=0", "--nousb", "--http", "--http.port=0", "--http.api=admin,eth,les"}, args...)
// ipcEndpoint resolves an IPC endpoint based on a configured value, taking into
// account the set data folders as well as the designated platform we're currently
// running on.
func ipcEndpoint(ipcPath, datadir string) string {
// On windows we can only use plain top-level pipes
if runtime.GOOS == "windows" {
if strings.HasPrefix(ipcPath, `\\.\pipe\`) {
return ipcPath
}
return `\\.\pipe\` + ipcPath
}
// Resolve names into the data directory full paths otherwise
if filepath.Base(ipcPath) == ipcPath {
if datadir == "" {
return filepath.Join(os.TempDir(), ipcPath)
}
return filepath.Join(datadir, ipcPath)
}
return ipcPath
}
// nextIPC ensures that each ipc pipe gets a unique name.
// On linux, it works well to use ipc pipes all over the filesystem (in datadirs),
// but Windows requires pipes to sit in "\\.\pipe\". Therefore, to run several
// nodes simultaneously, we need to distinguish between them, which we do by
// the pipe filename instead of folder.
var nextIPC = uint32(0)
func startGethWithIpc(t *testing.T, name string, args ...string) *gethrpc {
ipcName := fmt.Sprintf("geth-%d.ipc", atomic.AddUint32(&nextIPC, 1))
args = append([]string{"--networkid=42", "--port=0", "--nousb", "--ipcpath", ipcName}, args...)
t.Logf("Starting %v with rpc: %v", name, args)
g.geth = runGeth(t, args...)
g := &gethrpc{
name: name,
geth: runGeth(t, args...),
}
// wait before we can attach to it. TODO: probe for it properly
time.Sleep(1 * time.Second)
var err error
ipcpath := filepath.Join(g.geth.Datadir, "geth.ipc")
g.rpc, err = rpc.Dial(ipcpath)
if err != nil {
t.Fatalf("%v rpc connect: %v", name, err)
ipcpath := ipcEndpoint(ipcName, g.geth.Datadir)
if g.rpc, err = rpc.Dial(ipcpath); err != nil {
t.Fatalf("%v rpc connect to %v: %v", name, ipcpath, err)
}
return g
}
func initGeth(t *testing.T) string {
g := runGeth(t, "--networkid=42", "init", "./testdata/clique.json")
args := []string{"--nousb", "--networkid=42", "init", "./testdata/clique.json"}
t.Logf("Initializing geth: %v ", args)
g := runGeth(t, args...)
datadir := g.Datadir
g.WaitExit()
return datadir
@@ -120,15 +158,16 @@ func initGeth(t *testing.T) string {
func startLightServer(t *testing.T) *gethrpc {
datadir := initGeth(t)
runGeth(t, "--datadir", datadir, "--password", "./testdata/password.txt", "account", "import", "./testdata/key.prv").WaitExit()
t.Logf("Importing keys to geth")
runGeth(t, "--nousb", "--datadir", datadir, "--password", "./testdata/password.txt", "account", "import", "./testdata/key.prv", "--lightkdf").WaitExit()
account := "0x02f0d131f1f97aef08aec6e3291b957d9efe7105"
server := startGethWithRpc(t, "lightserver", "--allow-insecure-unlock", "--datadir", datadir, "--password", "./testdata/password.txt", "--unlock", account, "--mine", "--light.serve=100", "--light.maxpeers=1", "--nodiscover", "--nat=extip:127.0.0.1")
server := startGethWithIpc(t, "lightserver", "--allow-insecure-unlock", "--datadir", datadir, "--password", "./testdata/password.txt", "--unlock", account, "--mine", "--light.serve=100", "--light.maxpeers=1", "--nodiscover", "--nat=extip:127.0.0.1", "--verbosity=4")
return server
}
func startClient(t *testing.T, name string) *gethrpc {
datadir := initGeth(t)
return startGethWithRpc(t, name, "--datadir", datadir, "--nodiscover", "--syncmode=light", "--nat=extip:127.0.0.1")
return startGethWithIpc(t, name, "--datadir", datadir, "--nodiscover", "--syncmode=light", "--nat=extip:127.0.0.1", "--verbosity=4")
}
func TestPriorityClient(t *testing.T) {
@@ -151,8 +190,8 @@ func TestPriorityClient(t *testing.T) {
prioCli := startClient(t, "prioCli")
defer prioCli.killAndWait()
// 3_000_000_000 once we move to Go 1.13
tokens := 3000000000
lightServer.callRPC(nil, "les_addBalance", prioCli.getNodeInfo().ID, tokens, "foobar")
tokens := uint64(3000000000)
lightServer.callRPC(nil, "les_addBalance", prioCli.getNodeInfo().ID, tokens)
prioCli.addPeer(lightServer)
// Check if priority client is actually syncing and the regular client got kicked out
@@ -166,6 +205,7 @@ func TestPriorityClient(t *testing.T) {
freeCli.getNodeInfo().ID: freeCli,
prioCli.getNodeInfo().ID: prioCli,
}
time.Sleep(1 * time.Second)
lightServer.callRPC(&peers, "admin_peers")
peersWithNames := make(map[string]string)
for _, p := range peers {


@@ -36,13 +36,13 @@ import (
"github.com/ethereum/go-ethereum/eth/downloader"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/les"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/node"
gopsutil "github.com/shirou/gopsutil/mem"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
)
const (
@@ -108,9 +108,12 @@ var (
utils.CacheFlag,
utils.CacheDatabaseFlag,
utils.CacheTrieFlag,
utils.CacheTrieJournalFlag,
utils.CacheTrieRejournalFlag,
utils.CacheGCFlag,
utils.CacheSnapshotFlag,
utils.CacheNoPrefetchFlag,
utils.CachePreimagesFlag,
utils.ListenPortFlag,
utils.MaxPeersFlag,
utils.MaxPendingPeersFlag,
@@ -142,7 +145,7 @@ var (
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.YoloV1Flag,
utils.YoloV2Flag,
utils.VMEnableDebugFlag,
utils.NetworkIdFlag,
utils.EthStatsURLFlag,
@@ -152,6 +155,7 @@ var (
utils.LegacyGpoBlocksFlag,
utils.GpoPercentileFlag,
utils.LegacyGpoPercentileFlag,
utils.GpoMaxGasPriceFlag,
utils.EWASMInterpreterFlag,
utils.EVMInterpreterFlag,
configFileFlag,
@@ -169,8 +173,6 @@ var (
utils.LegacyRPCCORSDomainFlag,
utils.LegacyRPCVirtualHostsFlag,
utils.GraphQLEnabledFlag,
utils.GraphQLListenAddrFlag,
utils.GraphQLPortFlag,
utils.GraphQLCORSDomainFlag,
utils.GraphQLVirtualHostsFlag,
utils.HTTPApiFlag,
@@ -187,8 +189,8 @@ var (
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.InsecureUnlockAllowedFlag,
utils.RPCGlobalGasCap,
utils.RPCGlobalTxFeeCap,
utils.RPCGlobalGasCapFlag,
utils.RPCGlobalTxFeeCapFlag,
}
whisperFlags = []cli.Flag{
@@ -240,11 +242,10 @@ func init() {
makecacheCommand,
makedagCommand,
versionCommand,
versionCheckCommand,
licenseCommand,
// See config.go
dumpConfigCommand,
// See retesteth.go
retestethCommand,
// See cmd/utils/flags_legacy.go
utils.ShowDeprecated,
}
@@ -348,18 +349,20 @@ func geth(ctx *cli.Context) error {
if args := ctx.Args(); len(args) > 0 {
return fmt.Errorf("invalid command: %q", args[0])
}
prepare(ctx)
node := makeFullNode(ctx)
defer node.Close()
startNode(ctx, node)
node.Wait()
stack, backend := makeFullNode(ctx)
defer stack.Close()
startNode(ctx, stack, backend)
stack.Wait()
return nil
}
// startNode boots up the system node and all registered protocols, after which
// it unlocks any requested accounts, and starts the RPC/IPC interfaces and the
// miner.
func startNode(ctx *cli.Context, stack *node.Node) {
func startNode(ctx *cli.Context, stack *node.Node, backend ethapi.Backend) {
debug.Memsize.Add("node", stack)
// Start up the node itself
@@ -379,25 +382,6 @@ func startNode(ctx *cli.Context, stack *node.Node) {
}
ethClient := ethclient.NewClient(rpcClient)
// Set contract backend for ethereum service if local node
// is serving LES requests.
if ctx.GlobalInt(utils.LegacyLightServFlag.Name) > 0 || ctx.GlobalInt(utils.LightServeFlag.Name) > 0 {
var ethService *eth.Ethereum
if err := stack.Service(&ethService); err != nil {
utils.Fatalf("Failed to retrieve ethereum service: %v", err)
}
ethService.SetContractBackend(ethClient)
}
// Set contract backend for les service if local node is
// running as a light client.
if ctx.GlobalString(utils.SyncModeFlag.Name) == "light" {
var lesService *les.LightEthereum
if err := stack.Service(&lesService); err != nil {
utils.Fatalf("Failed to retrieve light ethereum service: %v", err)
}
lesService.SetContractBackend(ethClient)
}
go func() {
// Open any wallets already attached
for _, wallet := range stack.AccountManager().Wallets() {
@@ -449,7 +433,7 @@ func startNode(ctx *cli.Context, stack *node.Node) {
if timestamp := time.Unix(int64(done.Latest.Time), 0); time.Since(timestamp) < 10*time.Minute {
log.Info("Synchronisation completed", "latestnum", done.Latest.Number, "latesthash", done.Latest.Hash(),
"age", common.PrettyAge(timestamp))
stack.Stop()
stack.Close()
}
}
}()
@@ -461,24 +445,24 @@ func startNode(ctx *cli.Context, stack *node.Node) {
if ctx.GlobalString(utils.SyncModeFlag.Name) == "light" {
utils.Fatalf("Light clients do not support mining")
}
var ethereum *eth.Ethereum
if err := stack.Service(&ethereum); err != nil {
ethBackend, ok := backend.(*eth.EthAPIBackend)
if !ok {
utils.Fatalf("Ethereum service not running: %v", err)
}
// Set the gas price to the limits from the CLI and start mining
gasprice := utils.GlobalBig(ctx, utils.MinerGasPriceFlag.Name)
if ctx.GlobalIsSet(utils.LegacyMinerGasPriceFlag.Name) && !ctx.GlobalIsSet(utils.MinerGasPriceFlag.Name) {
gasprice = utils.GlobalBig(ctx, utils.LegacyMinerGasPriceFlag.Name)
}
ethereum.TxPool().SetGasPrice(gasprice)
ethBackend.TxPool().SetGasPrice(gasprice)
// start mining
threads := ctx.GlobalInt(utils.MinerThreadsFlag.Name)
if ctx.GlobalIsSet(utils.LegacyMinerThreadsFlag.Name) && !ctx.GlobalIsSet(utils.MinerThreadsFlag.Name) {
threads = ctx.GlobalInt(utils.LegacyMinerThreadsFlag.Name)
log.Warn("The flag --minerthreads is deprecated and will be removed in the future, please use --miner.threads")
}
if err := ethereum.StartMining(threads); err != nil {
if err := ethBackend.StartMining(threads); err != nil {
utils.Fatalf("Failed to start mining: %v", err)
}
}


@@ -31,6 +31,18 @@ import (
)
var (
VersionCheckUrlFlag = cli.StringFlag{
Name: "check.url",
Usage: "URL to use when checking vulnerabilities",
Value: "https://geth.ethereum.org/docs/vulnerabilities/vulnerabilities.json",
}
VersionCheckVersionFlag = cli.StringFlag{
Name: "check.version",
Usage: "Version to check",
Value: fmt.Sprintf("Geth/v%v/%v-%v/%v",
params.VersionWithCommit(gitCommit, gitDate),
runtime.GOOS, runtime.GOARCH, runtime.Version()),
}
makecacheCommand = cli.Command{
Action: utils.MigrateFlags(makecache),
Name: "makecache",
@@ -65,6 +77,21 @@ Regular users do not need to execute it.
Category: "MISCELLANEOUS COMMANDS",
Description: `
The output of this command is supposed to be machine-readable.
`,
}
versionCheckCommand = cli.Command{
Action: utils.MigrateFlags(versionCheck),
Flags: []cli.Flag{
VersionCheckUrlFlag,
VersionCheckVersionFlag,
},
Name: "version-check",
Usage: "Checks (online) whether the current version suffers from any known security vulnerabilities",
ArgsUsage: "<versionstring (optional)>",
Category: "MISCELLANEOUS COMMANDS",
Description: `
The version-check command fetches vulnerability-information from https://geth.ethereum.org/docs/vulnerabilities/vulnerabilities.json,
and displays information about any security vulnerabilities that affect the currently executing version.
`,
}
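
For orientation, a minimal standalone sketch of the flow this description implies (not the command's actual implementation): fetch the JSON feed, decode a subset of its fields, and match the running version against each entry's check regexp. The real command also verifies the feed's minisign signature, which this sketch skips; the URL and version string below mirror the flag defaults but are purely illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"regexp"
)

// vuln is a minimal subset of the fields published in vulnerabilities.json
// (compare testdata/vcheck/data.json further down in this diff).
type vuln struct {
	Name     string `json:"name"`
	Uid      string `json:"uid"`
	Summary  string `json:"summary"`
	Severity string `json:"severity"`
	Check    string `json:"check"`
}

func main() {
	url := "https://geth.ethereum.org/docs/vulnerabilities/vulnerabilities.json"
	version := "Geth/v1.9.23-stable/linux-amd64/go1.15.4" // illustrative --check.version value

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var vulns []vuln
	if err := json.NewDecoder(resp.Body).Decode(&vulns); err != nil {
		panic(err)
	}
	// Report every entry whose check pattern matches the running version.
	for _, v := range vulns {
		if regexp.MustCompile(v.Check).MatchString(version) {
			fmt.Printf("%s (%s, %s): %s\n", v.Uid, v.Name, v.Severity, v.Summary)
		}
	}
}
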
licenseCommand = cli.Command{


@@ -1,927 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"context"
"fmt"
"math/big"
"os"
"os/signal"
"strings"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/trie"
cli "gopkg.in/urfave/cli.v1"
)
var (
rpcPortFlag = cli.IntFlag{
Name: "rpcport",
Usage: "HTTP-RPC server listening port",
Value: node.DefaultHTTPPort,
}
retestethCommand = cli.Command{
Action: utils.MigrateFlags(retesteth),
Name: "retesteth",
Usage: "Launches geth in retesteth mode",
ArgsUsage: "",
Flags: []cli.Flag{rpcPortFlag},
Category: "MISCELLANEOUS COMMANDS",
Description: `Launches geth in retesteth mode (no database, no network, only retesteth RPC interface)`,
}
)
type RetestethTestAPI interface {
SetChainParams(ctx context.Context, chainParams ChainParams) (bool, error)
MineBlocks(ctx context.Context, number uint64) (bool, error)
ModifyTimestamp(ctx context.Context, interval uint64) (bool, error)
ImportRawBlock(ctx context.Context, rawBlock hexutil.Bytes) (common.Hash, error)
RewindToBlock(ctx context.Context, number uint64) (bool, error)
GetLogHash(ctx context.Context, txHash common.Hash) (common.Hash, error)
}
type RetestethEthAPI interface {
SendRawTransaction(ctx context.Context, rawTx hexutil.Bytes) (common.Hash, error)
BlockNumber(ctx context.Context) (uint64, error)
GetBlockByNumber(ctx context.Context, blockNr math.HexOrDecimal64, fullTx bool) (map[string]interface{}, error)
GetBlockByHash(ctx context.Context, blockHash common.Hash, fullTx bool) (map[string]interface{}, error)
GetBalance(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (*math.HexOrDecimal256, error)
GetCode(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (hexutil.Bytes, error)
GetTransactionCount(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (uint64, error)
}
type RetestethDebugAPI interface {
AccountRange(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
addressHash *math.HexOrDecimal256, maxResults uint64,
) (AccountRangeResult, error)
StorageRangeAt(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
address common.Address,
begin *math.HexOrDecimal256, maxResults uint64,
) (StorageRangeResult, error)
}
type RetestWeb3API interface {
ClientVersion(ctx context.Context) (string, error)
}
type RetestethAPI struct {
ethDb ethdb.Database
db state.Database
chainConfig *params.ChainConfig
author common.Address
extraData []byte
genesisHash common.Hash
engine *NoRewardEngine
blockchain *core.BlockChain
txMap map[common.Address]map[uint64]*types.Transaction // Sender -> Nonce -> Transaction
txSenders map[common.Address]struct{} // Set of transaction senders
blockInterval uint64
}
type ChainParams struct {
SealEngine string `json:"sealEngine"`
Params CParamsParams `json:"params"`
Genesis CParamsGenesis `json:"genesis"`
Accounts map[common.Address]CParamsAccount `json:"accounts"`
}
type CParamsParams struct {
AccountStartNonce math.HexOrDecimal64 `json:"accountStartNonce"`
HomesteadForkBlock *math.HexOrDecimal64 `json:"homesteadForkBlock"`
EIP150ForkBlock *math.HexOrDecimal64 `json:"EIP150ForkBlock"`
EIP158ForkBlock *math.HexOrDecimal64 `json:"EIP158ForkBlock"`
DaoHardforkBlock *math.HexOrDecimal64 `json:"daoHardforkBlock"`
ByzantiumForkBlock *math.HexOrDecimal64 `json:"byzantiumForkBlock"`
ConstantinopleForkBlock *math.HexOrDecimal64 `json:"constantinopleForkBlock"`
ConstantinopleFixForkBlock *math.HexOrDecimal64 `json:"constantinopleFixForkBlock"`
IstanbulBlock *math.HexOrDecimal64 `json:"istanbulForkBlock"`
ChainID *math.HexOrDecimal256 `json:"chainID"`
MaximumExtraDataSize math.HexOrDecimal64 `json:"maximumExtraDataSize"`
TieBreakingGas bool `json:"tieBreakingGas"`
MinGasLimit math.HexOrDecimal64 `json:"minGasLimit"`
MaxGasLimit math.HexOrDecimal64 `json:"maxGasLimit"`
GasLimitBoundDivisor math.HexOrDecimal64 `json:"gasLimitBoundDivisor"`
MinimumDifficulty math.HexOrDecimal256 `json:"minimumDifficulty"`
DifficultyBoundDivisor math.HexOrDecimal256 `json:"difficultyBoundDivisor"`
DurationLimit math.HexOrDecimal256 `json:"durationLimit"`
BlockReward math.HexOrDecimal256 `json:"blockReward"`
NetworkID math.HexOrDecimal256 `json:"networkID"`
}
type CParamsGenesis struct {
Nonce math.HexOrDecimal64 `json:"nonce"`
Difficulty *math.HexOrDecimal256 `json:"difficulty"`
MixHash *math.HexOrDecimal256 `json:"mixHash"`
Author common.Address `json:"author"`
Timestamp math.HexOrDecimal64 `json:"timestamp"`
ParentHash common.Hash `json:"parentHash"`
ExtraData hexutil.Bytes `json:"extraData"`
GasLimit math.HexOrDecimal64 `json:"gasLimit"`
}
type CParamsAccount struct {
Balance *math.HexOrDecimal256 `json:"balance"`
Precompiled *CPAccountPrecompiled `json:"precompiled"`
Code hexutil.Bytes `json:"code"`
Storage map[string]string `json:"storage"`
Nonce *math.HexOrDecimal64 `json:"nonce"`
}
type CPAccountPrecompiled struct {
Name string `json:"name"`
StartingBlock math.HexOrDecimal64 `json:"startingBlock"`
Linear *CPAPrecompiledLinear `json:"linear"`
}
type CPAPrecompiledLinear struct {
Base uint64 `json:"base"`
Word uint64 `json:"word"`
}
type AccountRangeResult struct {
AddressMap map[common.Hash]common.Address `json:"addressMap"`
NextKey common.Hash `json:"nextKey"`
}
type StorageRangeResult struct {
Complete bool `json:"complete"`
Storage map[common.Hash]SRItem `json:"storage"`
}
type SRItem struct {
Key string `json:"key"`
Value string `json:"value"`
}
type NoRewardEngine struct {
inner consensus.Engine
rewardsOn bool
}
func (e *NoRewardEngine) Author(header *types.Header) (common.Address, error) {
return e.inner.Author(header)
}
func (e *NoRewardEngine) VerifyHeader(chain consensus.ChainReader, header *types.Header, seal bool) error {
return e.inner.VerifyHeader(chain, header, seal)
}
func (e *NoRewardEngine) VerifyHeaders(chain consensus.ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
return e.inner.VerifyHeaders(chain, headers, seals)
}
func (e *NoRewardEngine) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
return e.inner.VerifyUncles(chain, block)
}
func (e *NoRewardEngine) VerifySeal(chain consensus.ChainReader, header *types.Header) error {
return e.inner.VerifySeal(chain, header)
}
func (e *NoRewardEngine) Prepare(chain consensus.ChainReader, header *types.Header) error {
return e.inner.Prepare(chain, header)
}
func (e *NoRewardEngine) accumulateRewards(config *params.ChainConfig, state *state.StateDB, header *types.Header, uncles []*types.Header) {
// Simply touch miner and uncle coinbase accounts
reward := big.NewInt(0)
for _, uncle := range uncles {
state.AddBalance(uncle.Coinbase, reward)
}
state.AddBalance(header.Coinbase, reward)
}
func (e *NoRewardEngine) Finalize(chain consensus.ChainReader, header *types.Header, statedb *state.StateDB, txs []*types.Transaction,
uncles []*types.Header) {
if e.rewardsOn {
e.inner.Finalize(chain, header, statedb, txs, uncles)
} else {
e.accumulateRewards(chain.Config(), statedb, header, uncles)
header.Root = statedb.IntermediateRoot(chain.Config().IsEIP158(header.Number))
}
}
func (e *NoRewardEngine) FinalizeAndAssemble(chain consensus.ChainReader, header *types.Header, statedb *state.StateDB, txs []*types.Transaction,
uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
if e.rewardsOn {
return e.inner.FinalizeAndAssemble(chain, header, statedb, txs, uncles, receipts)
} else {
e.accumulateRewards(chain.Config(), statedb, header, uncles)
header.Root = statedb.IntermediateRoot(chain.Config().IsEIP158(header.Number))
// Header seems complete, assemble into a block and return
return types.NewBlock(header, txs, uncles, receipts), nil
}
}
func (e *NoRewardEngine) Seal(chain consensus.ChainReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error {
return e.inner.Seal(chain, block, results, stop)
}
func (e *NoRewardEngine) SealHash(header *types.Header) common.Hash {
return e.inner.SealHash(header)
}
func (e *NoRewardEngine) CalcDifficulty(chain consensus.ChainReader, time uint64, parent *types.Header) *big.Int {
return e.inner.CalcDifficulty(chain, time, parent)
}
func (e *NoRewardEngine) APIs(chain consensus.ChainReader) []rpc.API {
return e.inner.APIs(chain)
}
func (e *NoRewardEngine) Close() error {
return e.inner.Close()
}
func (api *RetestethAPI) SetChainParams(ctx context.Context, chainParams ChainParams) (bool, error) {
// Clean up
if api.blockchain != nil {
api.blockchain.Stop()
}
if api.engine != nil {
api.engine.Close()
}
if api.ethDb != nil {
api.ethDb.Close()
}
ethDb := rawdb.NewMemoryDatabase()
accounts := make(core.GenesisAlloc)
for address, account := range chainParams.Accounts {
balance := big.NewInt(0)
if account.Balance != nil {
balance.Set((*big.Int)(account.Balance))
}
var nonce uint64
if account.Nonce != nil {
nonce = uint64(*account.Nonce)
}
if account.Precompiled == nil || account.Balance != nil {
storage := make(map[common.Hash]common.Hash)
for k, v := range account.Storage {
storage[common.HexToHash(k)] = common.HexToHash(v)
}
accounts[address] = core.GenesisAccount{
Balance: balance,
Code: account.Code,
Nonce: nonce,
Storage: storage,
}
}
}
chainId := big.NewInt(1)
if chainParams.Params.ChainID != nil {
chainId.Set((*big.Int)(chainParams.Params.ChainID))
}
var (
homesteadBlock *big.Int
daoForkBlock *big.Int
eip150Block *big.Int
eip155Block *big.Int
eip158Block *big.Int
byzantiumBlock *big.Int
constantinopleBlock *big.Int
petersburgBlock *big.Int
istanbulBlock *big.Int
)
if chainParams.Params.HomesteadForkBlock != nil {
homesteadBlock = big.NewInt(int64(*chainParams.Params.HomesteadForkBlock))
}
if chainParams.Params.DaoHardforkBlock != nil {
daoForkBlock = big.NewInt(int64(*chainParams.Params.DaoHardforkBlock))
}
if chainParams.Params.EIP150ForkBlock != nil {
eip150Block = big.NewInt(int64(*chainParams.Params.EIP150ForkBlock))
}
if chainParams.Params.EIP158ForkBlock != nil {
eip158Block = big.NewInt(int64(*chainParams.Params.EIP158ForkBlock))
eip155Block = eip158Block
}
if chainParams.Params.ByzantiumForkBlock != nil {
byzantiumBlock = big.NewInt(int64(*chainParams.Params.ByzantiumForkBlock))
}
if chainParams.Params.ConstantinopleForkBlock != nil {
constantinopleBlock = big.NewInt(int64(*chainParams.Params.ConstantinopleForkBlock))
}
if chainParams.Params.ConstantinopleFixForkBlock != nil {
petersburgBlock = big.NewInt(int64(*chainParams.Params.ConstantinopleFixForkBlock))
}
if constantinopleBlock != nil && petersburgBlock == nil {
petersburgBlock = big.NewInt(100000000000)
}
if chainParams.Params.IstanbulBlock != nil {
istanbulBlock = big.NewInt(int64(*chainParams.Params.IstanbulBlock))
}
genesis := &core.Genesis{
Config: &params.ChainConfig{
ChainID: chainId,
HomesteadBlock: homesteadBlock,
DAOForkBlock: daoForkBlock,
DAOForkSupport: true,
EIP150Block: eip150Block,
EIP155Block: eip155Block,
EIP158Block: eip158Block,
ByzantiumBlock: byzantiumBlock,
ConstantinopleBlock: constantinopleBlock,
PetersburgBlock: petersburgBlock,
IstanbulBlock: istanbulBlock,
},
Nonce: uint64(chainParams.Genesis.Nonce),
Timestamp: uint64(chainParams.Genesis.Timestamp),
ExtraData: chainParams.Genesis.ExtraData,
GasLimit: uint64(chainParams.Genesis.GasLimit),
Difficulty: big.NewInt(0).Set((*big.Int)(chainParams.Genesis.Difficulty)),
Mixhash: common.BigToHash((*big.Int)(chainParams.Genesis.MixHash)),
Coinbase: chainParams.Genesis.Author,
ParentHash: chainParams.Genesis.ParentHash,
Alloc: accounts,
}
chainConfig, genesisHash, err := core.SetupGenesisBlock(ethDb, genesis)
if err != nil {
return false, err
}
fmt.Printf("Chain config: %v\n", chainConfig)
var inner consensus.Engine
switch chainParams.SealEngine {
case "NoProof", "NoReward":
inner = ethash.NewFaker()
case "Ethash":
inner = ethash.New(ethash.Config{
CacheDir: "ethash",
CachesInMem: 2,
CachesOnDisk: 3,
CachesLockMmap: false,
DatasetsInMem: 1,
DatasetsOnDisk: 2,
DatasetsLockMmap: false,
}, nil, false)
default:
return false, fmt.Errorf("unrecognised seal engine: %s", chainParams.SealEngine)
}
engine := &NoRewardEngine{inner: inner, rewardsOn: chainParams.SealEngine != "NoReward"}
blockchain, err := core.NewBlockChain(ethDb, nil, chainConfig, engine, vm.Config{}, nil, nil)
if err != nil {
return false, err
}
api.chainConfig = chainConfig
api.genesisHash = genesisHash
api.author = chainParams.Genesis.Author
api.extraData = chainParams.Genesis.ExtraData
api.ethDb = ethDb
api.engine = engine
api.blockchain = blockchain
api.db = state.NewDatabase(api.ethDb)
api.txMap = make(map[common.Address]map[uint64]*types.Transaction)
api.txSenders = make(map[common.Address]struct{})
api.blockInterval = 0
return true, nil
}
func (api *RetestethAPI) SendRawTransaction(ctx context.Context, rawTx hexutil.Bytes) (common.Hash, error) {
tx := new(types.Transaction)
if err := rlp.DecodeBytes(rawTx, tx); err != nil {
// Returning nil here is intentional - some tests send transactions whose gasLimit overflows uint64
return common.Hash{}, nil
}
signer := types.MakeSigner(api.chainConfig, big.NewInt(int64(api.currentNumber())))
sender, err := types.Sender(signer, tx)
if err != nil {
return common.Hash{}, err
}
if nonceMap, ok := api.txMap[sender]; ok {
nonceMap[tx.Nonce()] = tx
} else {
nonceMap = make(map[uint64]*types.Transaction)
nonceMap[tx.Nonce()] = tx
api.txMap[sender] = nonceMap
}
api.txSenders[sender] = struct{}{}
return tx.Hash(), nil
}
func (api *RetestethAPI) MineBlocks(ctx context.Context, number uint64) (bool, error) {
for i := 0; i < int(number); i++ {
if err := api.mineBlock(); err != nil {
return false, err
}
}
fmt.Printf("Mined %d blocks\n", number)
return true, nil
}
func (api *RetestethAPI) currentNumber() uint64 {
if current := api.blockchain.CurrentBlock(); current != nil {
return current.NumberU64()
}
return 0
}
func (api *RetestethAPI) mineBlock() error {
number := api.currentNumber()
parentHash := rawdb.ReadCanonicalHash(api.ethDb, number)
parent := rawdb.ReadBlock(api.ethDb, parentHash, number)
var timestamp uint64
if api.blockInterval == 0 {
timestamp = uint64(time.Now().Unix())
} else {
timestamp = parent.Time() + api.blockInterval
}
gasLimit := core.CalcGasLimit(parent, 9223372036854775807, 9223372036854775807)
header := &types.Header{
ParentHash: parent.Hash(),
Number: big.NewInt(int64(number + 1)),
GasLimit: gasLimit,
Extra: api.extraData,
Time: timestamp,
}
header.Coinbase = api.author
if api.engine != nil {
api.engine.Prepare(api.blockchain, header)
}
// If we care about TheDAO hard-fork, check whether to override the extra-data or not
if daoBlock := api.chainConfig.DAOForkBlock; daoBlock != nil {
// Check whether the block is among the fork extra-override range
limit := new(big.Int).Add(daoBlock, params.DAOForkExtraRange)
if header.Number.Cmp(daoBlock) >= 0 && header.Number.Cmp(limit) < 0 {
// Depending whether we support or oppose the fork, override differently
if api.chainConfig.DAOForkSupport {
header.Extra = common.CopyBytes(params.DAOForkBlockExtra)
} else if bytes.Equal(header.Extra, params.DAOForkBlockExtra) {
header.Extra = []byte{} // If miner opposes, don't let it use the reserved extra-data
}
}
}
statedb, err := api.blockchain.StateAt(parent.Root())
if err != nil {
return err
}
if api.chainConfig.DAOForkSupport && api.chainConfig.DAOForkBlock != nil && api.chainConfig.DAOForkBlock.Cmp(header.Number) == 0 {
misc.ApplyDAOHardFork(statedb)
}
gasPool := new(core.GasPool).AddGas(header.GasLimit)
txCount := 0
var txs []*types.Transaction
var receipts []*types.Receipt
var blockFull = gasPool.Gas() < params.TxGas
for address := range api.txSenders {
if blockFull {
break
}
m := api.txMap[address]
for nonce := statedb.GetNonce(address); ; nonce++ {
if tx, ok := m[nonce]; ok {
// Try to apply transactions to the state
statedb.Prepare(tx.Hash(), common.Hash{}, txCount)
snap := statedb.Snapshot()
receipt, err := core.ApplyTransaction(
api.chainConfig,
api.blockchain,
&api.author,
gasPool,
statedb,
header, tx, &header.GasUsed, *api.blockchain.GetVMConfig(),
)
if err != nil {
statedb.RevertToSnapshot(snap)
break
}
txs = append(txs, tx)
receipts = append(receipts, receipt)
delete(m, nonce)
if len(m) == 0 {
// Last tx for the sender
delete(api.txMap, address)
delete(api.txSenders, address)
}
txCount++
if gasPool.Gas() < params.TxGas {
blockFull = true
break
}
} else {
break // Gap in the nonces
}
}
}
block, err := api.engine.FinalizeAndAssemble(api.blockchain, header, statedb, txs, []*types.Header{}, receipts)
if err != nil {
return err
}
return api.importBlock(block)
}
func (api *RetestethAPI) importBlock(block *types.Block) error {
if _, err := api.blockchain.InsertChain([]*types.Block{block}); err != nil {
return err
}
fmt.Printf("Imported block %d, head is %d\n", block.NumberU64(), api.currentNumber())
return nil
}
func (api *RetestethAPI) ModifyTimestamp(ctx context.Context, interval uint64) (bool, error) {
api.blockInterval = interval
return true, nil
}
func (api *RetestethAPI) ImportRawBlock(ctx context.Context, rawBlock hexutil.Bytes) (common.Hash, error) {
block := new(types.Block)
if err := rlp.DecodeBytes(rawBlock, block); err != nil {
return common.Hash{}, err
}
fmt.Printf("Importing block %d with parent hash: %x, genesisHash: %x\n", block.NumberU64(), block.ParentHash(), api.genesisHash)
if err := api.importBlock(block); err != nil {
return common.Hash{}, err
}
return block.Hash(), nil
}
func (api *RetestethAPI) RewindToBlock(ctx context.Context, newHead uint64) (bool, error) {
if err := api.blockchain.SetHead(newHead); err != nil {
return false, err
}
// When we rewind, the transaction pool should be cleaned out.
api.txMap = make(map[common.Address]map[uint64]*types.Transaction)
api.txSenders = make(map[common.Address]struct{})
return true, nil
}
var emptyListHash common.Hash = common.HexToHash("0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347")
func (api *RetestethAPI) GetLogHash(ctx context.Context, txHash common.Hash) (common.Hash, error) {
receipt, _, _, _ := rawdb.ReadReceipt(api.ethDb, txHash, api.chainConfig)
if receipt == nil {
return emptyListHash, nil
} else {
if logListRlp, err := rlp.EncodeToBytes(receipt.Logs); err != nil {
return common.Hash{}, err
} else {
return common.BytesToHash(crypto.Keccak256(logListRlp)), nil
}
}
}
func (api *RetestethAPI) BlockNumber(ctx context.Context) (uint64, error) {
return api.currentNumber(), nil
}
func (api *RetestethAPI) GetBlockByNumber(ctx context.Context, blockNr math.HexOrDecimal64, fullTx bool) (map[string]interface{}, error) {
block := api.blockchain.GetBlockByNumber(uint64(blockNr))
if block != nil {
response, err := RPCMarshalBlock(block, true, fullTx)
if err != nil {
return nil, err
}
response["author"] = response["miner"]
response["totalDifficulty"] = (*hexutil.Big)(api.blockchain.GetTd(block.Hash(), uint64(blockNr)))
return response, err
}
return nil, fmt.Errorf("block %d not found", blockNr)
}
func (api *RetestethAPI) GetBlockByHash(ctx context.Context, blockHash common.Hash, fullTx bool) (map[string]interface{}, error) {
block := api.blockchain.GetBlockByHash(blockHash)
if block != nil {
response, err := RPCMarshalBlock(block, true, fullTx)
if err != nil {
return nil, err
}
response["author"] = response["miner"]
response["totalDifficulty"] = (*hexutil.Big)(api.blockchain.GetTd(block.Hash(), block.Number().Uint64()))
return response, err
}
return nil, fmt.Errorf("block 0x%x not found", blockHash)
}
func (api *RetestethAPI) AccountRange(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
addressHash *math.HexOrDecimal256, maxResults uint64,
) (AccountRangeResult, error) {
var (
header *types.Header
block *types.Block
)
if (*big.Int)(blockHashOrNumber).Cmp(big.NewInt(math.MaxInt64)) > 0 {
blockHash := common.BigToHash((*big.Int)(blockHashOrNumber))
header = api.blockchain.GetHeaderByHash(blockHash)
block = api.blockchain.GetBlockByHash(blockHash)
//fmt.Printf("Account range: %x, txIndex %d, start: %x, maxResults: %d\n", blockHash, txIndex, common.BigToHash((*big.Int)(addressHash)), maxResults)
} else {
blockNumber := (*big.Int)(blockHashOrNumber).Uint64()
header = api.blockchain.GetHeaderByNumber(blockNumber)
block = api.blockchain.GetBlockByNumber(blockNumber)
//fmt.Printf("Account range: %d, txIndex %d, start: %x, maxResults: %d\n", blockNumber, txIndex, common.BigToHash((*big.Int)(addressHash)), maxResults)
}
parentHeader := api.blockchain.GetHeaderByHash(header.ParentHash)
var root common.Hash
var statedb *state.StateDB
var err error
if parentHeader == nil || int(txIndex) >= len(block.Transactions()) {
root = header.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return AccountRangeResult{}, err
}
} else {
root = parentHeader.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return AccountRangeResult{}, err
}
// Recompute transactions up to the target index.
signer := types.MakeSigner(api.blockchain.Config(), block.Number())
for idx, tx := range block.Transactions() {
// Assemble the transaction call message and return if the requested offset
msg, _ := tx.AsMessage(signer)
context := core.NewEVMContext(msg, block.Header(), api.blockchain, nil)
// Not yet the searched for transaction, execute on top of the current state
vmenv := vm.NewEVM(context, statedb, api.blockchain.Config(), vm.Config{})
if _, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(tx.Gas())); err != nil {
return AccountRangeResult{}, fmt.Errorf("transaction %#x failed: %v", tx.Hash(), err)
}
// Ensure any modifications are committed to the state
// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
root = statedb.IntermediateRoot(vmenv.ChainConfig().IsEIP158(block.Number()))
if idx == int(txIndex) {
// This is to make sure root can be opened by OpenTrie
root, err = statedb.Commit(api.chainConfig.IsEIP158(block.Number()))
if err != nil {
return AccountRangeResult{}, err
}
break
}
}
}
accountTrie, err := statedb.Database().OpenTrie(root)
if err != nil {
return AccountRangeResult{}, err
}
it := trie.NewIterator(accountTrie.NodeIterator(common.BigToHash((*big.Int)(addressHash)).Bytes()))
result := AccountRangeResult{AddressMap: make(map[common.Hash]common.Address)}
for i := 0; i < int(maxResults) && it.Next(); i++ {
if preimage := accountTrie.GetKey(it.Key); preimage != nil {
result.AddressMap[common.BytesToHash(it.Key)] = common.BytesToAddress(preimage)
}
}
//fmt.Printf("Number of entries returned: %d\n", len(result.AddressMap))
// Add the 'next key' so clients can continue downloading.
if it.Next() {
next := common.BytesToHash(it.Key)
result.NextKey = next
}
return result, nil
}
func (api *RetestethAPI) GetBalance(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (*math.HexOrDecimal256, error) {
//fmt.Printf("GetBalance %x, block %d\n", address, blockNr)
header := api.blockchain.GetHeaderByNumber(uint64(blockNr))
statedb, err := api.blockchain.StateAt(header.Root)
if err != nil {
return nil, err
}
return (*math.HexOrDecimal256)(statedb.GetBalance(address)), nil
}
func (api *RetestethAPI) GetCode(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (hexutil.Bytes, error) {
header := api.blockchain.GetHeaderByNumber(uint64(blockNr))
statedb, err := api.blockchain.StateAt(header.Root)
if err != nil {
return nil, err
}
return statedb.GetCode(address), nil
}
func (api *RetestethAPI) GetTransactionCount(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (uint64, error) {
header := api.blockchain.GetHeaderByNumber(uint64(blockNr))
statedb, err := api.blockchain.StateAt(header.Root)
if err != nil {
return 0, err
}
return statedb.GetNonce(address), nil
}
func (api *RetestethAPI) StorageRangeAt(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
address common.Address,
begin *math.HexOrDecimal256, maxResults uint64,
) (StorageRangeResult, error) {
var (
header *types.Header
block *types.Block
)
if (*big.Int)(blockHashOrNumber).Cmp(big.NewInt(math.MaxInt64)) > 0 {
blockHash := common.BigToHash((*big.Int)(blockHashOrNumber))
header = api.blockchain.GetHeaderByHash(blockHash)
block = api.blockchain.GetBlockByHash(blockHash)
//fmt.Printf("Storage range: %x, txIndex %d, addr: %x, start: %x, maxResults: %d\n",
// blockHash, txIndex, address, common.BigToHash((*big.Int)(begin)), maxResults)
} else {
blockNumber := (*big.Int)(blockHashOrNumber).Uint64()
header = api.blockchain.GetHeaderByNumber(blockNumber)
block = api.blockchain.GetBlockByNumber(blockNumber)
//fmt.Printf("Storage range: %d, txIndex %d, addr: %x, start: %x, maxResults: %d\n",
// blockNumber, txIndex, address, common.BigToHash((*big.Int)(begin)), maxResults)
}
parentHeader := api.blockchain.GetHeaderByHash(header.ParentHash)
var root common.Hash
var statedb *state.StateDB
var err error
if parentHeader == nil || int(txIndex) >= len(block.Transactions()) {
root = header.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return StorageRangeResult{}, err
}
} else {
root = parentHeader.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return StorageRangeResult{}, err
}
// Recompute transactions up to the target index.
signer := types.MakeSigner(api.blockchain.Config(), block.Number())
for idx, tx := range block.Transactions() {
// Assemble the transaction call message and return if the requested offset
msg, _ := tx.AsMessage(signer)
context := core.NewEVMContext(msg, block.Header(), api.blockchain, nil)
// Not yet the searched for transaction, execute on top of the current state
vmenv := vm.NewEVM(context, statedb, api.blockchain.Config(), vm.Config{})
if _, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(tx.Gas())); err != nil {
return StorageRangeResult{}, fmt.Errorf("transaction %#x failed: %v", tx.Hash(), err)
}
// Ensure any modifications are committed to the state
// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
_ = statedb.IntermediateRoot(vmenv.ChainConfig().IsEIP158(block.Number()))
if idx == int(txIndex) {
// This is to make sure root can be opened by OpenTrie
_, err = statedb.Commit(vmenv.ChainConfig().IsEIP158(block.Number()))
if err != nil {
return StorageRangeResult{}, err
}
}
}
}
storageTrie := statedb.StorageTrie(address)
it := trie.NewIterator(storageTrie.NodeIterator(common.BigToHash((*big.Int)(begin)).Bytes()))
result := StorageRangeResult{Storage: make(map[common.Hash]SRItem)}
for i := 0; /*i < int(maxResults) && */ it.Next(); i++ {
if preimage := storageTrie.GetKey(it.Key); preimage != nil {
key := (*math.HexOrDecimal256)(big.NewInt(0).SetBytes(preimage))
v, _, err := rlp.SplitString(it.Value)
if err != nil {
return StorageRangeResult{}, err
}
value := (*math.HexOrDecimal256)(big.NewInt(0).SetBytes(v))
ks, _ := key.MarshalText()
vs, _ := value.MarshalText()
if len(ks)%2 != 0 {
ks = append(append(append([]byte{}, ks[:2]...), byte('0')), ks[2:]...)
}
if len(vs)%2 != 0 {
vs = append(append(append([]byte{}, vs[:2]...), byte('0')), vs[2:]...)
}
result.Storage[common.BytesToHash(it.Key)] = SRItem{
Key: string(ks),
Value: string(vs),
}
}
}
if it.Next() {
result.Complete = false
} else {
result.Complete = true
}
return result, nil
}
func (api *RetestethAPI) ClientVersion(ctx context.Context) (string, error) {
return "Geth-" + params.VersionWithCommit(gitCommit, gitDate), nil
}
// splitAndTrim splits input separated by a comma
// and trims excessive white space from the substrings.
func splitAndTrim(input string) []string {
result := strings.Split(input, ",")
for i, r := range result {
result[i] = strings.TrimSpace(r)
}
return result
}
func retesteth(ctx *cli.Context) error {
log.Info("Welcome to retesteth!")
// register signer API with server
var (
extapiURL string
)
apiImpl := &RetestethAPI{}
var testApi RetestethTestAPI = apiImpl
var ethApi RetestethEthAPI = apiImpl
var debugApi RetestethDebugAPI = apiImpl
var web3Api RetestWeb3API = apiImpl
rpcAPI := []rpc.API{
{
Namespace: "test",
Public: true,
Service: testApi,
Version: "1.0",
},
{
Namespace: "eth",
Public: true,
Service: ethApi,
Version: "1.0",
},
{
Namespace: "debug",
Public: true,
Service: debugApi,
Version: "1.0",
},
{
Namespace: "web3",
Public: true,
Service: web3Api,
Version: "1.0",
},
}
vhosts := splitAndTrim(ctx.GlobalString(utils.HTTPVirtualHostsFlag.Name))
cors := splitAndTrim(ctx.GlobalString(utils.HTTPCORSDomainFlag.Name))
// register apis and create handler stack
srv := rpc.NewServer()
err := node.RegisterApisFromWhitelist(rpcAPI, []string{"test", "eth", "debug", "web3"}, srv, false)
if err != nil {
utils.Fatalf("Could not register RPC apis: %w", err)
}
handler := node.NewHTTPHandlerStack(srv, cors, vhosts)
// start http server
var RetestethHTTPTimeouts = rpc.HTTPTimeouts{
ReadTimeout: 120 * time.Second,
WriteTimeout: 120 * time.Second,
IdleTimeout: 120 * time.Second,
}
httpEndpoint := fmt.Sprintf("%s:%d", ctx.GlobalString(utils.HTTPListenAddrFlag.Name), ctx.Int(rpcPortFlag.Name))
httpServer, _, err := node.StartHTTPEndpoint(httpEndpoint, RetestethHTTPTimeouts, handler)
if err != nil {
utils.Fatalf("Could not start RPC api: %v", err)
}
extapiURL = fmt.Sprintf("http://%s", httpEndpoint)
log.Info("HTTP endpoint opened", "url", extapiURL)
defer func() {
// Don't bother imposing a timeout here.
httpServer.Shutdown(context.Background())
log.Info("HTTP endpoint closed", "url", httpEndpoint)
}()
abortChan := make(chan os.Signal, 11)
signal.Notify(abortChan, os.Interrupt)
sig := <-abortChan
log.Info("Exiting...", "signal", sig)
return nil
}


@@ -1,148 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
)
// RPCTransaction represents a transaction that will serialize to the RPC representation of a transaction
type RPCTransaction struct {
BlockHash common.Hash `json:"blockHash"`
BlockNumber *hexutil.Big `json:"blockNumber"`
From common.Address `json:"from"`
Gas hexutil.Uint64 `json:"gas"`
GasPrice *hexutil.Big `json:"gasPrice"`
Hash common.Hash `json:"hash"`
Input hexutil.Bytes `json:"input"`
Nonce hexutil.Uint64 `json:"nonce"`
To *common.Address `json:"to"`
TransactionIndex hexutil.Uint `json:"transactionIndex"`
Value *hexutil.Big `json:"value"`
V *hexutil.Big `json:"v"`
R *hexutil.Big `json:"r"`
S *hexutil.Big `json:"s"`
}
// newRPCTransaction returns a transaction that will serialize to the RPC
// representation, with the given location metadata set (if available).
func newRPCTransaction(tx *types.Transaction, blockHash common.Hash, blockNumber uint64, index uint64) *RPCTransaction {
var signer types.Signer = types.FrontierSigner{}
if tx.Protected() {
signer = types.NewEIP155Signer(tx.ChainId())
}
from, _ := types.Sender(signer, tx)
v, r, s := tx.RawSignatureValues()
result := &RPCTransaction{
From: from,
Gas: hexutil.Uint64(tx.Gas()),
GasPrice: (*hexutil.Big)(tx.GasPrice()),
Hash: tx.Hash(),
Input: hexutil.Bytes(tx.Data()),
Nonce: hexutil.Uint64(tx.Nonce()),
To: tx.To(),
Value: (*hexutil.Big)(tx.Value()),
V: (*hexutil.Big)(v),
R: (*hexutil.Big)(r),
S: (*hexutil.Big)(s),
}
if blockHash != (common.Hash{}) {
result.BlockHash = blockHash
result.BlockNumber = (*hexutil.Big)(new(big.Int).SetUint64(blockNumber))
result.TransactionIndex = hexutil.Uint(index)
}
return result
}
// newRPCTransactionFromBlockIndex returns a transaction that will serialize to the RPC representation.
func newRPCTransactionFromBlockIndex(b *types.Block, index uint64) *RPCTransaction {
txs := b.Transactions()
if index >= uint64(len(txs)) {
return nil
}
return newRPCTransaction(txs[index], b.Hash(), b.NumberU64(), index)
}
// newRPCTransactionFromBlockHash returns a transaction that will serialize to the RPC representation.
func newRPCTransactionFromBlockHash(b *types.Block, hash common.Hash) *RPCTransaction {
for idx, tx := range b.Transactions() {
if tx.Hash() == hash {
return newRPCTransactionFromBlockIndex(b, uint64(idx))
}
}
return nil
}
// RPCMarshalBlock converts the given block to the RPC output which depends on fullTx. If inclTx is true transactions are
// returned. When fullTx is true the returned block contains full transaction details, otherwise it will only contain
// transaction hashes.
func RPCMarshalBlock(b *types.Block, inclTx bool, fullTx bool) (map[string]interface{}, error) {
head := b.Header() // copies the header once
fields := map[string]interface{}{
"number": (*hexutil.Big)(head.Number),
"hash": b.Hash(),
"parentHash": head.ParentHash,
"nonce": head.Nonce,
"mixHash": head.MixDigest,
"sha3Uncles": head.UncleHash,
"logsBloom": head.Bloom,
"stateRoot": head.Root,
"miner": head.Coinbase,
"difficulty": (*hexutil.Big)(head.Difficulty),
"extraData": hexutil.Bytes(head.Extra),
"size": hexutil.Uint64(b.Size()),
"gasLimit": hexutil.Uint64(head.GasLimit),
"gasUsed": hexutil.Uint64(head.GasUsed),
"timestamp": hexutil.Uint64(head.Time),
"transactionsRoot": head.TxHash,
"receiptsRoot": head.ReceiptHash,
}
if inclTx {
formatTx := func(tx *types.Transaction) (interface{}, error) {
return tx.Hash(), nil
}
if fullTx {
formatTx = func(tx *types.Transaction) (interface{}, error) {
return newRPCTransactionFromBlockHash(b, tx.Hash()), nil
}
}
txs := b.Transactions()
transactions := make([]interface{}, len(txs))
var err error
for i, tx := range txs {
if transactions[i], err = formatTx(tx); err != nil {
return nil, err
}
}
fields["transactions"] = transactions
}
uncles := b.Uncles()
uncleHashes := make([]common.Hash, len(uncles))
for i, uncle := range uncles {
uncleHashes[i] = uncle.Hash()
}
fields["uncles"] = uncleHashes
return fields, nil
}


@@ -70,12 +70,12 @@ func runGeth(t *testing.T, args ...string) *testgeth {
tt := &testgeth{}
tt.TestCmd = cmdtest.NewTestCmd(t, tt)
for i, arg := range args {
switch {
case arg == "-datadir" || arg == "--datadir":
switch arg {
case "--datadir":
if i < len(args)-1 {
tt.Datadir = args[i+1]
}
case arg == "-etherbase" || arg == "--etherbase":
case "--etherbase":
if i < len(args)-1 {
tt.Etherbase = args[i+1]
}
@@ -84,7 +84,7 @@ func runGeth(t *testing.T, args ...string) *testgeth {
if tt.Datadir == "" {
tt.Datadir = tmpdir(t)
tt.Cleanup = func() { os.RemoveAll(tt.Datadir) }
args = append([]string{"-datadir", tt.Datadir}, args...)
args = append([]string{"--datadir", tt.Datadir}, args...)
// Remove the temporary datadir if something fails below.
defer func() {
if t.Failed() {

cmd/geth/testdata/vcheck/data.json vendored Normal file

@@ -0,0 +1,61 @@
[
{
"name": "CorruptedDAG",
"uid": "GETH-2020-01",
"summary": "Mining nodes will generate erroneous PoW on epochs > `385`.",
"description": "A mining flaw could cause miners to erroneously calculate PoW, due to an index overflow, if DAG size is exceeding the maximum 32 bit unsigned value.\n\nThis occurred on the ETC chain on 2020-11-06. This is likely to trigger for ETH mainnet around block `11550000`/epoch `385`, slated to occur early January 2021.\n\nThis issue is relevant only for miners, non-mining nodes are unaffected, since non-mining nodes use a smaller verification cache instead of a full DAG.",
"links": [
"https://github.com/ethereum/go-ethereum/pull/21793",
"https://blog.ethereum.org/2020/11/12/geth_security_release/",
"https://github.com/ethereum/go-ethereum/commit/567d41d9363706b4b13ce0903804e8acf214af49"
],
"introduced": "v1.6.0",
"fixed": "v1.9.24",
"published": "2020-11-12",
"severity": "Medium",
"check": "Geth\\/v1\\.(6|7|8)\\..*|Geth\\/v1\\.9\\.2(1|2|3)-.*"
},
{
"name": "GoCrash",
"uid": "GETH-2020-02",
"summary": "A denial-of-service issue can be used to crash Geth nodes during block processing, due to an underlying bug in Go (CVE-2020-28362) versions < `1.15.5`, or `<1.14.12`",
"description": "The DoS issue can be used to crash all Geth nodes during block processing, the effects of which would be that a major part of the Ethereum network went offline.\n\nOutside of Go-Ethereum, the issue is most likely relevant for all forks of Geth (such as TurboGeth or ETCs core-geth) which is built with versions of Go which contains the vulnerability.",
"links": [
"https://blog.ethereum.org/2020/11/12/geth_security_release/",
"https://groups.google.com/g/golang-announce/c/NpBGTTmKzpM",
"https://github.com/golang/go/issues/42552"
],
"fixed": "v1.9.24",
"published": "2020-11-12",
"severity": "Critical",
"check": "Geth.*\\/go1\\.(11(.*)|12(.*)|13(.*)|14|14\\.(\\d|10|11|)|15|15\\.[0-4])$"
},
{
"name": "ShallowCopy",
"uid": "GETH-2020-03",
"summary": "A consensus flaw in Geth, related to `datacopy` precompile",
"description": "Geth erroneously performed a 'shallow' copy when the precompiled `datacopy` (at `0x00...04`) was invoked. An attacker could deploy a contract that uses the shallow copy to corrupt the contents of the `RETURNDATA`, thus causing a consensus failure.",
"links": [
"https://blog.ethereum.org/2020/11/12/geth_security_release/"
],
"introduced": "v1.9.7",
"fixed": "v1.9.17",
"published": "2020-11-12",
"severity": "Critical",
"check": "Geth\\/v1\\.9\\.(7|8|9|10|11|12|13|14|15|16).*$"
},
{
"name": "GethCrash",
"uid": "GETH-2020-04",
"summary": "A denial-of-service issue can be used to crash Geth nodes during block processing",
"description": "Full details to be disclosed at a later date",
"links": [
"https://blog.ethereum.org/2020/11/12/geth_security_release/"
],
"introduced": "v1.9.16",
"fixed": "v1.9.18",
"published": "2020-11-12",
"severity": "Critical",
"check": "Geth\\/v1\\.9.(16|17).*$"
}
]
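
To make the check patterns above concrete, a small standalone sketch (not part of the diff) showing which hypothetical version strings the CorruptedDAG pattern flags:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// "check" field of the CorruptedDAG (GETH-2020-01) entry above.
	check := regexp.MustCompile(`Geth\/v1\.(6|7|8)\..*|Geth\/v1\.9\.2(1|2|3)-.*`)

	for _, v := range []string{
		"Geth/v1.9.22-stable/linux-amd64/go1.15",   // matches: affected release
		"Geth/v1.9.24-stable/linux-amd64/go1.15.5", // no match: fixed in v1.9.24
	} {
		fmt.Println(v, "affected:", check.MatchString(v))
	}
}
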


@@ -0,0 +1,4 @@
untrusted comment: signature from minisign secret key
RWQkliYstQBOKFQFQTjmCd6TPw07VZyWFSB3v4+1BM1kv8eHLE5FDy2OkPEqtdaL53xftlrHoJQie0uCcovdlSV8kpyxiLrxEQ0=
trusted comment: timestamp:1605618622 file:vulnerabilities.json
osAPs4QPdDkmiWQxqeMIzYv/b+ZGxJ+19Sbrk1Cpq4t2gHBT+lqFtwL3OCzKWWyjGRTmHfsVGBYpzEdPRQ0/BQ==


@@ -0,0 +1,4 @@
untrusted comment: Here's a comment
RWQkliYstQBOKFQFQTjmCd6TPw07VZyWFSB3v4+1BM1kv8eHLE5FDy2OkPEqtdaL53xftlrHoJQie0uCcovdlSV8kpyxiLrxEQ0=
trusted comment: Here's a trusted comment
3CnkIuz9MEDa7uNyGZAbKZhuirwfiqm7E1uQHrd2SiO4Y8+Akw9vs052AyKw0s5nhbYHCZE2IMQdHNjKwxEGAQ==
