Compare commits

...

718 Commits

Author SHA1 Message Date
56dec25ae2 params: release geth 1.10.0 stable 2021-03-03 17:44:17 +01:00
cd316d7c71 tests: update to latest tests (#22290)
This updates the consensus tests to commit 31d6630 and
adds support for access list transactions in the test runner.

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-03-03 15:50:07 +01:00
5a81dd97d5 cmd: retire whisper flags (#22421)
* cmd: retire whisper flags

* cmd/geth: remove whisper configs
2021-03-03 16:08:14 +02:00
b24804d88c les: fix nodiscover option on the client side (#22422) 2021-03-03 15:05:24 +01:00
ba999105ef core/forkid, params: unset Berlin fork number (#22413) 2021-03-03 12:05:27 +02:00
07e907c7d4 cmd/utils: fix txlookuplimit for archive node (#22419)
* cmd/utils: fix exclusive check for archive node

* cmd/utils: set the txlookuplimit to 0
2021-03-03 12:04:50 +02:00
c539a052bd params: update chts (#22418) 2021-03-03 12:04:25 +02:00
0540d3c6f6 cmd/geth: put allowUnsecureTx flag in RPC section (#22412) 2021-03-03 09:42:59 +02:00
19d7a37abb core/rawdb: fix the transaction indexer (#22395) 2021-03-01 11:26:10 +02:00
d96870428f les: UDP pre-negotiation of available server capacity (#22183)
This PR implements the first of the "lespay" UDP queries, which is already useful
in itself: the capacity query. The server pool makes use of this query, doing a
cheap UDP query to determine whether it is worth starting the more expensive TCP
connection process.
2021-03-01 10:24:20 +01:00
498458b410 core/state: fix eta calculation on pruning (#22386) 2021-02-26 16:33:37 +01:00
3822b09904 accounts/keystore: use github.com/google/uuid (#22217)
This replaces the github.com/pborman/uuid dependency with
github.com/google/uuid because the former is only a wrapper for
the latter (since v1.0.0).

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-02-26 15:28:34 +01:00
744707a490 Merge pull request #22380 from karalabe/berlin
all: define and enable the Berlin hard fork on all networks
2021-02-26 15:04:56 +02:00
27b31371d4 rpc: add separate size limit for websocket (#22385)
This makes the WebSocket message size limit independent of the
limit used for HTTP requests. The new limit for WebSocket messages 
is 15MB.
2021-02-26 13:40:35 +01:00
0928562670 all: define Berlin hard fork spec 2021-02-26 14:24:07 +02:00
dc109cce26 les: move server pool to les/vflux/client (#22377)
* les: move serverPool to les/vflux/client

* les: add metrics

* les: moved ValueTracker inside ServerPool

* les: protect against node registration before server pool is started

* les/vflux/client: fixed tests

* les: make peer registration safe
2021-02-25 21:08:34 +01:00
de9465f991 cmd/devp2p: add eth66 test suite (#22363)
Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-02-25 18:36:01 +01:00
bbfb1e4008 all: add support for EIP-2718, EIP-2930 transactions (#21502)
This adds support for EIP-2718 typed transactions as well as EIP-2930
access list transactions (tx type 1). These EIPs are scheduled for the
Berlin fork.

There are only a few changes to existing APIs in core/types, and several new APIs
to deal with access list transactions. In particular, there are two new
constructor functions for transactions: types.NewTx and types.SignNewTx.
Since the canonical encoding of typed transactions is not RLP-compatible,
Transaction now has new methods for encoding and decoding: MarshalBinary
and UnmarshalBinary.

The existing EIP-155 signer does not support the new transaction types.
All code dealing with transaction signatures should be updated to use the
newer EIP-2930 signer. To make this easier for future updates, we have
added new constructor functions for types.Signer: types.LatestSigner and
types.LatestSignerForChainID. 

This change also adds support for the YoloV3 testnet.

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Ryan Schneider <ryanleeschneider@gmail.com>
2021-02-25 15:26:57 +01:00
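
As a rough illustration of the APIs named in the entry above, the following Go sketch builds, signs and encodes an access list transaction using types.SignNewTx, types.LatestSignerForChainID and MarshalBinary. The key, chain ID and field values are invented for the example; treat this as a hedged sketch of the intended usage, not canonical code.

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey() // throwaway key for the example
	chainID := big.NewInt(1)       // assumed chain ID
	to := common.HexToAddress("0x000000000000000000000000000000000000dead")

	// An EIP-2930 access list transaction (tx type 1), expressed as typed
	// transaction data for the new generic constructors.
	txdata := &types.AccessListTx{
		ChainID:  chainID,
		Nonce:    0,
		GasPrice: big.NewInt(1000000000),
		Gas:      21000,
		To:       &to,
		Value:    big.NewInt(1),
		AccessList: types.AccessList{
			{Address: to, StorageKeys: []common.Hash{{}}},
		},
	}

	// LatestSignerForChainID supports all known transaction types;
	// SignNewTx constructs and signs the transaction in one step.
	signer := types.LatestSignerForChainID(chainID)
	tx, err := types.SignNewTx(key, signer, txdata)
	if err != nil {
		panic(err)
	}

	// Typed transactions are not plain RLP, so the canonical encoding goes
	// through MarshalBinary / UnmarshalBinary.
	enc, _ := tx.MarshalBinary()
	fmt.Printf("type=%d size=%d bytes\n", tx.Type(), len(enc))
}
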
7a3c890009 les, light: improve txstatus retrieval (#22349)
Transaction unindexing will be enabled by default as of 1.10, which would break tx status retrieval without this PR.

This PR introduces a retry mechanism in TxStatus retrieval.
2021-02-25 14:24:04 +01:00
378e961d85 cmd, eth, les: enable serving light clients when non-synced (#22250)
This PR adds a new CLI flag so that the les server can serve light clients even when the local node is not yet synced.

This functionality is needed in some testing environments (e.g. hive). After launching the les server, no more blocks will be imported, so the node is always marked as "non-synced".
2021-02-25 13:55:07 +01:00
96d9306413 Merge pull request #22381 from karalabe/lower-error-log
eth/protocols/snap: lower abortion and resumption logs to debug
2021-02-25 13:03:07 +02:00
b2b5c82aca eth/protocols/snap: lower abortion and resumption logs to debug 2021-02-25 12:56:18 +02:00
8e547eecd5 cmd/utils: remove deprecated command line flags (#22263)
This removes support for all deprecated flags except --rpc*.
2021-02-24 14:07:58 +01:00
f54dc4ab3d travis: manually install Android since Travis is stale (#22373) 2021-02-24 11:36:08 +02:00
bf5b379b13 Merge pull request #22369 from karalabe/android-bionic-builder
travis: bump builders to Bionic
2021-02-23 20:52:40 +02:00
70afe15f68 travis: bump builders to Bionic 2021-02-23 20:31:09 +02:00
b502c86662 Merge pull request #22368 from karalabe/ndk-bump
travis: bump Android NDK version
2021-02-23 19:58:37 +02:00
c9aa267049 travis: bump Android NDK version 2021-02-23 19:57:39 +02:00
cdb6a84339 Merge pull request #22350 from karalabe/disable-preimage-collection
cmd/utils: disable caching preimages by default
2021-02-23 19:29:36 +02:00
4ee8d2d305 travis, appveyor, build, Dockerfile: bump Go to 1.16 (#22351)
* travis, appveyor, build: bump Go to 1.16

* accounts/abi/bind: fix up Go mod files for Go 1.16
2021-02-23 18:42:43 +02:00
2743fb0429 Dockerfile: bump to Go 1.16 base images 2021-02-23 18:28:24 +02:00
2d1a0e9b03 accounts/abi/bind: fix up Go mod files for Go 1.16 2021-02-23 18:12:25 +02:00
142fbcfd6f internal/ethapi: reject non-replay-protected txs over RPC (#22339)
This PR prevents users from submitting transactions without EIP-155 enabled. This behaviour can be overridden by specifying the flag --rpc.allow-unprotected-txs=true.
2021-02-23 13:09:19 +01:00
c4a2b682ff cmd/geth: add db commands stats, compact, put, get, delete (#22014)
This PR introduces:

- db.put to put a value into the database
- db.get to read a value from the database
- db.delete to delete a value from the database
- db.stats to check compaction info from the database
- db.compact to trigger a db compaction

It also moves inspectdb to db.inspect.
2021-02-23 11:27:32 +01:00
3ecfdccd9a les: clean up server handler (#22357) 2021-02-22 14:33:11 +01:00
8f03e3b107 tests/fuzzers/les: add fuzzer for les server handler (#22282)
* les: refactored server handler

* tests/fuzzers/les: add fuzzer for les server handler

* tests, les: update les fuzzer

tests: update les fuzzer

tests/fuzzer/les: release resources

tests/fuzzer/les: pre-initialize all resources

* les: refactored server handler and fuzzer

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2021-02-20 10:40:38 +01:00
8647233a8e les: fix balance expiration (#22343)
* les/lespay/server: fix balance expiration and add test

* les: move client balances to a new db

* les: rename lespayDb to lesDb
2021-02-19 15:53:12 +01:00
c5023e1dc5 travis, appveyor, build: bump Go to 1.16 2021-02-19 16:03:17 +02:00
ca76db6116 cmd/utils: disable caching preimages by default 2021-02-19 15:56:41 +02:00
c027507e03 les: renamed lespay to vflux (#22347) 2021-02-19 14:44:16 +01:00
d36276d85e p2p/dnsdisc: fix hot-spin when all trees are empty (#22313)
In the random sync algorithm used by the DNS node iterator, we first pick a random
tree and then perform one sync action on that tree. This happens in a loop until any
node is found. If no trees contain any nodes, the iterator will enter a hot loop spinning
at 100% CPU.

The fix is complicated. The iterator now checks if a meaningful sync action can
be performed on any tree. If there is nothing to do, it waits for the next root record
recheck time to arrive and then tries again.

Fixes #22306
2021-02-19 09:54:46 +01:00
6ec1561044 eth: implement eth66 (#22241)
* eth/protocols/eth: split up the eth protocol handlers

* eth/protocols/eth: define eth-66 protocol messages

* eth/protocols/eth: poc implement getblockheaders on eth/66

* eth/protocols/eth: implement remaining eth-66 handlers

* eth/protocols: define handler map for eth 66

* eth/downloader: use protocol constants from eth package

* eth/protocols/eth: add ETH66 capability

* eth/downloader: tests for eth66

* eth/downloader: fix error in tests

* eth/protocols/eth: use eth66 for outgoing requests

* eth/protocols/eth: remove unused error type

* eth/protocols/eth: define protocol length

* eth/protocols/eth: fix pooled tx over eth66

* protocols/eth/handlers: revert behavioural change which caused tests to fail

* eth/downloader: fix failing test

* eth/protocols/eth: add testcases + fix flaw with header requests

* eth/protocols: change comments

* eth/protocols/eth: review fixes + fixed flaw in RequestOneHeader

* eth/protocols: documentation

* eth/protocols/eth: review concerns about types
2021-02-18 18:54:29 +02:00
b1835b3855 node: always show websocket url in logs (#22307) 2021-02-18 10:40:19 +01:00
9ec32a9e7b rlp: handle case of normal EOF in Stream.readFull (#22336)
io.Reader may return n > 0 and io.EOF at the end of the input stream.
readFull did not handle this correctly, looking only at the error. This fixes
it to check for n == len(buf) as well.
2021-02-18 10:19:49 +01:00
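
For context, the io.Reader contract allows a read to return n > 0 together with io.EOF. The sketch below illustrates the pattern the fix describes (an illustration under that assumption, not the actual rlp.Stream.readFull code).

package main

import (
	"bytes"
	"fmt"
	"io"
)

// readFull keeps reading until the buffer is full. A trailing io.EOF only
// counts as an error if the buffer was not completely filled.
func readFull(r io.Reader, buf []byte) error {
	var n int
	for n < len(buf) {
		nn, err := r.Read(buf[n:])
		n += nn
		if err != nil {
			if err == io.EOF && n == len(buf) {
				return nil // normal EOF exactly at the end of the input
			}
			return err
		}
	}
	return nil
}

func main() {
	buf := make([]byte, 4)
	err := readFull(bytes.NewReader([]byte{1, 2, 3, 4}), buf)
	fmt.Println(buf, err)
}
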
52e5c38aa5 core/state: copy the snap when copying the state (#22340)
* core/state: copy the snap when copying the state

* core/state: deep-copy snap stuff during state Copy
2021-02-18 10:05:47 +02:00
e01096f531 eth/handler, broadcast: optimize tx broadcast mechanism (#22176)
This PR optimizes the broadcast loop. Instead of iterating twice through a given set of transactions to work out which peers already have a tx and which do not before sending/announcing it, we now do it in a single pass.
2021-02-17 14:59:00 +01:00
1489c3f494 Merge pull request #22334 from karalabe/fix-snap-cancel
eth: fix snap sync cancellation
2021-02-16 16:30:07 +02:00
f9445e93bb cmd/devp2p/internal/ethtest: use shared message types (#22315)
This updates the eth protocol test suite to use the message type
definitions of the 'production' protocol implementation in eth/protocols/eth.
2021-02-16 15:23:03 +01:00
bfdff4c5b8 eth: fix snap sync cancellation 2021-02-16 16:11:33 +02:00
6291fc9230 Merge pull request #22331 from karalabe/enforce-min-snap-difflayers
core/state/snapshot: ensure Cap retains a min number of layers
2021-02-16 15:26:37 +02:00
9ec3329899 core/state/snapshot: ensure Cap retains a min number of layers 2021-02-16 15:25:20 +02:00
915c614959 Merge pull request #22332 from karalabe/fix-fastsync-restart-bloom-crash
trie: fix bloom crash on fast sync restart
2021-02-16 13:30:15 +02:00
f4fcd4f506 rpc: increase the number of subscriptions in storm test (#22316) 2021-02-16 11:40:59 +02:00
e991bdae24 trie: fix bloom crash on fast sync restart 2021-02-16 10:44:38 +02:00
77787802fe cmd/geth: fix js unclean shutdown (#22302) 2021-02-15 19:47:47 +01:00
08c878acd2 cmd/utils: add workaround for FreeBSD statfs quirk (#22310)
Make geth build on FreeBSD, fixes #22309.
2021-02-15 19:37:09 +01:00
7d1b711c7d les: enable les/4 and add tests (#22321) 2021-02-12 20:48:18 +01:00
2fc465a7be Merge pull request #22319 from karalabe/fix-defer-leak
core: fix temp memory blowup caused by defers holding on to state
2021-02-12 15:34:35 +02:00
ef227c5f42 core: fix temp memory blowup caused by defers holding on to state 2021-02-12 12:45:34 +02:00
111abdcfbd cmd/devp2p: fix documentation for eth-test (#22298) 2021-02-11 12:09:13 +01:00
1bbc8a1944 Merge pull request #22293 from karalabe/txunindex-1year
cmd/utils, eth/ethconfig: unindex txs older than ~1 year
2021-02-10 16:02:35 +02:00
409b16e5ab cmd/utils, eth/ethconfig: unindex txs older than ~1 year 2021-02-10 16:01:37 +02:00
cb3c7e4319 accounts/abi/bind: fixed unpacking error (#22230)
There was a dormant error with structured inputs that failed unpacking.
This commit fixes the error by switching the cast to the more suitable abi.ConvertType function.
It also adds a test for calling a view function that returns a struct.
2021-02-10 13:12:13 +01:00
27786671d2 internal/debug: add switch to format logs with json (#22207)
Adds a flag --log.json which, if enabled, makes the client format logs as JSON.
2021-02-09 10:42:55 +01:00
2fdba3aacb Merge pull request #22294 from holiman/pruner_compact_fix
core/state/pruner: fix compaction range error
2021-02-09 11:01:21 +02:00
74dbc20260 core/state/pruner: fix compaction range error 2021-02-08 20:31:52 +01:00
944d901436 Merge pull request #22291 from karalabe/fix-pruner-compaction
core/state/pruner: fix compaction after pruning
2021-02-08 19:19:38 +02:00
2728672c28 core/state/pruner: fix compaction after pruning 2021-02-08 19:18:40 +02:00
123e934e72 Merge pull request #22288 from karalabe/1.10.unstable
params: just to make snapshots a bit more official
2021-02-08 13:16:50 +02:00
f566dd305e all: bloom-filter based pruning mechanism (#21724)
* cmd, core, tests: initial state pruner

core: fix db inspector

cmd/geth: add verify-state

cmd/geth: add verification tool

core/rawdb: implement flatdb

cmd, core: fix rebase

core/state: use new contract code layout

core/state/pruner: avoid deleting genesis state

cmd/geth: add helper function

core, cmd: fix extract genesis

core: minor fixes

contracts: remove useless

core/state/snapshot: plugin stacktrie

core: polish

core/state/snapshot: iterate storage concurrently

core/state/snapshot: fix iteration

core: add comments

core/state/snapshot: polish code

core/state: polish

core/state/snapshot: rebase

core/rawdb: add comments

core/rawdb: fix tests

core/rawdb: improve tests

core/state/snapshot: fix concurrent iteration

core/state: run pruning during the recovery

core, trie: implement martin's idea

core, eth: delete flatdb and polish pruner

trie: fix import

core/state/pruner: add log

core/state/pruner: fix issues

core/state/pruner: don't read back

core/state/pruner: fix contract code write

core/state/pruner: check root node presence

cmd, core: polish log

core/state: use HEAD-127 as the target

core/state/snapshot: improve tests

cmd/geth: fix verification tool

cmd/geth: use HEAD as the verification default target

all: replace the bloomfilter with martin's fork

cmd, core: polish code

core, cmd: forcibly delete state root

core/state/pruner: add hash64

core/state/pruner: fix blacklist

core/state: remove blacklist

cmd, core: delete trie clean cache before pruning

cmd, core: fix lint

cmd, core: fix rebase

core/state: fix the special case for clique networks

core/state/snapshot: remove useless code

core/state/pruner: capping the snapshot after pruning

cmd, core, eth: fixes

core/rawdb: update db inspector

cmd/geth: polish code

core/state/pruner: fsync bloom filter

cmd, core: print warning log

core/state/pruner: adjust the parameters for bloom filter

cmd, core: create the bloom filter by size

core: polish

core/state/pruner: sanitize invalid bloomfilter size

cmd: address comments

cmd/geth: address comments

cmd/geth: address comment

core/state/pruner: address comments

core/state/pruner: rename homedir to datadir

cmd, core: address comments

core/state/pruner: address comment

core/state: address comments

core, cmd, tests: address comments

core: address comments

core/state/pruner: release the iterator after each commit

core/state/pruner: improve pruner

cmd, core: adjust bloom parameters

core/state/pruner: fix lint

core/state/pruner: fix tests

core: fix rebase

core/state/pruner: remove atomic rename

core/state/pruner: address comments

all: run go mod tidy

core/state/pruner: avoid false-positive for the middle state roots

core/state/pruner: add checks for middle roots

cmd/geth: replace crit with error

* core/state/pruner: fix lint

* core: drop legacy bloom filter

* core/state/snapshot: improve pruner

* core/state/snapshot: polish concurrent logs to report ETA vs. hashes

* core/state/pruner: add progress report for pruning and compaction too

* core: fix snapshot test API

* core/state: fix some pruning logs

* core/state/pruner: support recovering from bloom flush fail

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-02-08 13:16:30 +02:00
d86906f1e6 params: just to make snapshots a bit more official 2021-02-08 13:03:06 +02:00
bbe694fc52 Merge pull request #22280 from karalabe/snapshot-default
cmd/utils: enable snapshots by default
2021-02-08 12:58:54 +02:00
477fd420b3 metrics: fix cast omission in cpu_syscall.go (#22262)
Fixes a regression which caused build failures on certain platforms.
2021-02-08 11:36:49 +01:00
994cdc69c8 cmd/utils: enable snapshots by default 2021-02-07 20:13:59 +02:00
e74bd587f7 consensus: remove seal verification from the consensus engine interface (#22274) 2021-02-05 20:44:34 +02:00
7ed860d4f1 eth: don't wait for snap registration if we're not running snap (#22272)
Prevents a situation where we (not running snap) connect with a peer running snap and get stalled waiting for snap registration to succeed (which will never happen), causing a waitgroup wait to halt shutdown.
2021-02-05 14:15:22 +01:00
fba5a63afe internal/ethapi: fix typo in comment (#22271) 2021-02-05 13:51:53 +01:00
098a2b6e26 eth: move eth.Config to a common package (#22205)
This moves the eth config definition into a separate package, eth/ethconfig. 
Packages eth and les can now import this common package instead of
importing eth from les, reducing dependencies.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-02-05 13:51:15 +01:00
28121324ac internal/ethapi: comment nitpick (#22270) 2021-02-05 12:35:55 +02:00
54735a6723 fuzzers: added consensys/gurvy library to bn256 differential fuzzer (#21812)
This PR adds Consensys' gurvy bn256 variant to the code for differential fuzzing.
2021-02-03 15:04:28 +01:00
3512b41c5c core: reset txpool state on sethead (#22247)
Fixes an issue where local transactions that were included in the chain before a SetHead were rejected if resubmitted, since the txpool had not reset its state to the current (older) state.
2021-02-03 11:02:35 +01:00
83e4c49e2b trie : use trie.NewStackTrie instead of new(trie.Trie) (#22246)
The PR makes use of the stacktrie, which is more lenient on resource consumption than the regular trie, in cases where we only need it for DeriveSha.
2021-02-02 13:09:23 +01:00
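
A small Go sketch of the pattern described above, assuming the types.DeriveSha/TrieHasher interface of this geth version; the transaction values are invented for the example.

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	// A throwaway transaction list, just to have something to derive a root for.
	txs := types.Transactions{
		types.NewTransaction(0, common.Address{}, big.NewInt(0), 21000, big.NewInt(1), nil),
	}

	// The stack trie only keeps the current insertion path in memory, making
	// it much cheaper than new(trie.Trie) when all we need is the root hash.
	root := types.DeriveSha(txs, trie.NewStackTrie(nil))
	fmt.Println("tx root:", root.Hex())
}
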
ef84da8481 all: remove unneeded parentheses (#21921)
* remove unneeded conversion type

* remove redundant type in composite literal

* omit explicit type where implicit

* remove unused redundant parentheses

* remove redundant import alias duktape
2021-02-02 11:32:44 +02:00
4eae0c6b6f cmd/geth, node: allow configuring JSON-RPC on custom path prefix (#22184)
This change allows users to set a custom path prefix on which to mount the http-rpc
or ws-rpc handlers via the new flags --http.rpcprefix and --ws.rpcprefix.

Fixes #21826

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-02-02 10:05:46 +01:00
e3430ac7df eth: check snap satelliteness, delegate drop to eth (#22235)
* eth: check snap satelliteness, delegate drop to eth

* eth: better handle eth/snap satellite relation, merge reg/unreg paths
2021-02-02 10:44:36 +02:00
3c728fb129 eth/tracers: fix unigram tracer (#22248) 2021-02-01 14:41:43 +01:00
f25b437b70 cmd/clef: don't check file permissions on windows (#22251)
Fixes #20123
2021-01-29 16:53:44 +01:00
7a800f98f6 les/utils: UDP rate limiter (#21930)
* les/utils: Limiter

* les/utils: dropped prior weight vs variable cost logic, using fixed weights

* les/utils: always create node selector in addressGroup

* les/utils: renamed request weight to request cost

* les/utils: simplified and improved the DoS penalty mechanism

* les/utils: minor fixes

* les/utils: made selection weight calculation nicer

* les/utils: fixed linter warning

* les/utils: more precise and reliable probabilistic test

* les/utils: fixed linter warning
2021-01-28 22:47:15 +01:00
eb21c652c0 cmd,core,eth,params,tests: define yolov3 + enable EIP-2565 (#22213)
Removes the yolov2 definition, adds yolov3, including EIP-2565. This PR also disables some of the erroneously generated blockchain and statetests, and adds the new genesis hash + alloc for yolov3. 
This PR disables the CLI switches for yolo, since it's not complete until we merge support for 2930.
2021-01-28 21:19:07 +01:00
2e5d141708 rpc: deprecate Client.ShhSubscribe (#22239)
It never worked, whisper uses polling.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-01-27 10:20:34 +01:00
a72fa88a0d les: switch to new discv5 (#21940)
This PR enables running the new discv5 protocol in both LES client
and server mode. In client mode it mixes discv5 and dnsdisc iterators
(if both are enabled) and filters incoming ENRs for "les" tag and fork ID.
The old p2p/discv5 package and all references to it are removed.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-01-26 21:41:35 +01:00
9c5729311e accounts/scwallet: update documentation (#22242) 2021-01-26 16:43:12 +01:00
ad038b6289 accounts/scwallet: use go-ethereum crypto instead of go-ecdh (#22212)
* accounts/scwallet: use go-ethereum crypto instead of go-ecdh

github.com/wsddn/go-ecdh is a wrapper package for ECDH functionality
with any elliptic curve.

Since 'generic' ECDH is not required in accounts/scwallet (the curve is
always secp256k1), we can just use the standard library functionality
and our own crypto libraries to perform ECDH and save a dependency.

* Update accounts/scwallet/securechannel.go

Co-authored-by: Guillaume Ballet <gballet@gmail.com>

* Use the correct key

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2021-01-26 16:01:13 +01:00
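
To illustrate the idea (not the actual securechannel code), the sketch below performs an ECDH exchange on secp256k1 using only geth's crypto package and the standard elliptic.Curve interface; the two throwaway keys stand in for the card and the host.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	cardKey, _ := crypto.GenerateKey() // throwaway secp256k1 key pairs
	hostKey, _ := crypto.GenerateKey()

	// ECDH: multiply the remote public point by the local private scalar.
	// Both sides arrive at the same point, whose X coordinate is the secret.
	x1, _ := crypto.S256().ScalarMult(cardKey.PublicKey.X, cardKey.PublicKey.Y, hostKey.D.Bytes())
	x2, _ := crypto.S256().ScalarMult(hostKey.PublicKey.X, hostKey.PublicKey.Y, cardKey.D.Bytes())

	fmt.Println("shared secrets match:", x1.Cmp(x2) == 0)
}
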
681618275c core: speed up header import (#21967)
This PR implements the following modifications

- Don't shortcut check if block is present, thus avoid disk lookup
- Don't check hash ancestry in early-check (it's still done in parallel checker)
- Don't check time.Now for every single header

Charts and background info can be found here: https://github.com/holiman/headerimport/blob/main/README.md
With these changes, writing 1M headers goes down from 80s to 62s.
2021-01-26 12:17:11 +01:00
14d495491d core/state: fix panic in state dumping (#22225) 2021-01-26 12:15:31 +01:00
573f373d2b internal/ethapi: print tx details when submitting (#22170)
This adds more info about submitted transactions in log messages.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-01-26 12:13:55 +01:00
7202b410b0 tests/fuzzers/abi: fixed one-off panic with int.Min64 value (#22233)
* tests/fuzzers/abi: fixed one-off panic with int.Min64 value

* tests/fuzzers/abi: fixed one-off panic with int.Min64 value
2021-01-25 21:40:14 +01:00
d2779ed7ac eth, p2p: reserve half peer slots for snap peers during snap sync (#22171)
* eth, p2p: reserve half peer slots for snap peers during snap sync

* eth: less logging

* eth: rework the eth/snap peer reservation logic

* eth: rework the eth/snap peer reservation logic (again)
2021-01-25 20:06:52 +02:00
adf130def8 eth/tracers: move tracing APIs into eth/tracers (#22161)
This moves the tracing RPC API implementation to package eth/tracers.
By doing so, package eth no longer depends on tracing and the duktape JS engine.

The change also enables tracing using the light client. All tracing methods work with the
light client, but it's a lot slower compared to using a full node.
2021-01-25 14:36:39 +01:00
49cdcf5c70 core: reset to genesis when middle block is missing (#22135)
When a sethead/rewind finds that the targeted block is missing, it resets to genesis instead of crashing. Closes #22129
2021-01-25 14:29:45 +01:00
04a72260c5 snapshot: merge loops for better performance (#22160) 2021-01-25 14:25:55 +01:00
59a79137b9 go.mod: upgrade github.com/huin/goupnp (#22227)
This updates the goupnp dependency, fixing huin/goupnp#33
2021-01-25 12:46:09 +01:00
c0862f4f4c graphql: change receipt status to decimal instead of hex (#22187)
This PR fixes the receipt status field to be decimal instead of a hex string,
as called for by the spec.

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-01-25 11:31:18 +01:00
1770fe718a go.mod: update dependencies (#22216)
This updates go module dependencies as discussed in #22050.
2021-01-25 10:42:07 +01:00
797b0812ab eth/protocols/snap: snap sync testing (#22179)
* eth/protocols/snap: make timeout configurable

* eth/protocols/snap: snap sync testing

* eth/protocols/snap: test to trigger panic

* eth/protocols/snap: fix race condition on timeouts

* eth/protocols/snap: return error on cancelled sync

* squashme: updates + test causing panic + properly serve accounts in order

* eth/protocols/snap: revert failing storage response

* eth/protocols/snap: revert on bad responses (storage, code)

* eth/protocols/snap: fix account handling stall

* eth/protocols/snap: fix remaining revertal-issues

* eth/protocols/snap: timeouthandler for bytecode requests

* eth/protocols/snap: debugging + fix log message

* eth/protocols/snap: fix misspelliings in docs

* eth/protocols/snap: fix race in bytecode handling

* eth/protocols/snap: undo deduplication of storage roots

* synctests: refactor + minify panic testcase

* eth/protocols/snap: minor polishes

* eth: minor polishes to make logs more useful

* eth/protocols/snap: remove excessive logs from the test runs

* eth/protocols/snap: stress tests with concurrency

* eth/protocols/snap: further fixes to test cancel channel handling

* eth/protocols/snap: extend test timeouts on CI

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-01-25 08:17:05 +02:00
3708454f58 cmd, geth: CLI help fixes (#22220)
* cmd, geth: Reflect command being optional - closes 22218

* cmd, geth: Set current year to 2021
2021-01-24 11:37:39 +01:00
db35d77b63 cmd, geth: CLI help fixes (#22220)
* cmd, geth: Reflect command being optional - closes 22218

* cmd, geth: Set current year to 2021
2021-01-24 11:37:08 +01:00
f26c19cbcd common/mclock: remove dependency on github.com/aristanetworks/goarista (#22211)
It takes three lines of code to get to runtime.nanotime, no need to
pull a dependency for that.
2021-01-22 20:15:27 +01:00
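
The public API of common/mclock is unchanged by this; typical use looks like the sketch below. Internally the package now reads runtime.nanotime directly (presumably via a small go:linkname declaration, per the note above) instead of importing goarista for it.

package main

import (
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/common/mclock"
)

func main() {
	start := mclock.Now() // monotonic reading, immune to wall-clock jumps
	time.Sleep(10 * time.Millisecond)

	elapsed := time.Duration(mclock.Now() - start)
	fmt.Println("elapsed:", elapsed)
}
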
9e1bd0f367 trie: fix range prover (#22210)
Fixes a special case when the trie only has a single trie node and the range proof only contains a single element.
2021-01-22 10:11:24 +01:00
231040c633 event: add ResubscribeErr (#22191)
This adds a way to get the error of the failing subscription
for logging/debugging purposes.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-01-21 13:47:38 +01:00
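
A minimal sketch of how the new API might be used; establishSub is a hypothetical stand-in and the backoff value is arbitrary. ResubscribeErr behaves like Resubscribe, except the resubscription function also receives the error that ended the previous subscription.

package main

import (
	"context"
	"errors"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/event"
)

func main() {
	// establishSub stands in for whatever creates the real subscription.
	establishSub := func(ctx context.Context, lastErr error) (event.Subscription, error) {
		if lastErr != nil {
			log.Printf("resubscribing, previous subscription failed: %v", lastErr)
		}
		return nil, errors.New("not implemented in this sketch")
	}

	// Retry with at most 5s of backoff between attempts, logging why the
	// previous attempt ended each time.
	sub := event.ResubscribeErr(5*time.Second, establishSub)
	defer sub.Unsubscribe()

	time.Sleep(50 * time.Millisecond)
}
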
c4307a9339 eth/filters: fix potential deadlock in filter timeout loop (#22178)
This fixes #22131 and adds a test reproducing the issue.
2021-01-21 12:17:10 +01:00
ddadc3d273 Merge pull request #21047 from holiman/improve_updates_2
core: improve trie updates (part 2)
2021-01-21 01:48:08 +02:00
42f9f1f073 core/state: convert prefetcher to concurrent per-trie loader 2021-01-21 01:47:14 +02:00
1e1865b73f core: implement background trie prefetcher
Squashed from the following commits:

core/state: lazily init snapshot storage map
core/state: fix flawed meter on storage reads
core/state: make statedb/stateobjects reuse a hasher
core/blockchain, core/state: implement new trie prefetcher
core: make trie prefetcher deliver tries to statedb
core/state: refactor trie_prefetcher, export storage tries
blockchain: re-enable the next-block-prefetcher
state: remove panics in trie prefetcher
core/state/trie_prefetcher: address some review concerns

sq
2021-01-21 01:46:38 +02:00
81bf9f97c9 downloader: extract findAncestor search functions (#21744)
This is a simple refactoring, extracting common ancestor
negotiation logic into a named function.
2021-01-20 22:45:01 +01:00
7da8f75d5b go.mod: upgrade golang-lru (#22134) 2021-01-20 19:34:21 +01:00
d1301eb0df oss-fuzz: fix abi fuzzer (#22199) 2021-01-20 18:21:13 +01:00
45cb1a580a eth, les: add new config field SyncFromCheckpoint (#22123)
This PR introduces a new config field SyncFromCheckpoint for light client.

In some special scenarios, it's required to start synchronization from some
arbitrary checkpoint or even from scratch. So this PR offers this
flexibility to users so that the synchronization start point can be configured.

There are two relevant configs: SyncFromCheckpoint and Checkpoint.

- If the SyncFromCheckpoint is true, the light client will try to sync from the
  specified checkpoint.

- If the Checkpoint is not configured, then the light client will sync from
  scratch (from the latest header if the database is not empty).

Additional notes: these two configs are not visible in the CLI flags but only
accessible in the config file.

Example Usage:

[Eth]
SyncFromCheckpoint = true

[Eth.Checkpoint]
SectionIndex = 100
SectionHead = "0xabc"
CHTRoot = "0xabc"
BloomRoot = "0xabc"

PS. Historical checkpoint can be retrieved from the synced full node or light
client via les_getCheckpoint API.
2021-01-19 10:52:45 +01:00
24c1e3053b cmd/geth: graceful shutdown if disk is full (#22103)
Adds warnings about low free disk space and performs a graceful shutdown when there is not enough space left.
This also adds a flag datadir.minfreedisk which can be used to set the trigger for low disk space; setting it to zero disables the check.

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
2021-01-19 09:26:42 +01:00
5e9f5ca5d3 core/state/snapshot: write snapshot generator in batch (#22163)
* core/state/snapshot: write snapshot generator in batch

* core: refactor the tests

* core: update tests

* core: update tests
2021-01-18 14:39:43 +01:00
10555d4684 cmd/geth: dump config for metrics (#22083)
* cmd/geth: dump config

* cmd/geth: dump config

* cmd/geth: properly read config again

* cmd/geth: override metrics if flags are set

* cmd/geth: write metrics regardless if enabled

* cmd/geth: renamed to metricsfromcliargs

* metrics: add default configuration
2021-01-18 14:36:05 +01:00
398182284c tests/fuzzers/abi: better test generation (#22158)
* tests/fuzzers/abi: better test generation

* tests/fuzzers/abi: fixed packing issue

* oss-fuzz: enable abi fuzzer
2021-01-18 14:33:15 +01:00
034ecc3210 les: remove useless protocol defines (#22115)
This PR has two changes in the les protocol:

- the auxRoot is not supported. See ethereum/devp2p#171 for more information
- an empty response will be returned for a GetHelperTrieProofsMsg request if the merkle
   proving fails. Note that, for backward compatibility, the empty merkle proof as well as
   the requested auxiliary data will still be returned in the les2/3 protocols regardless of
   whether the proving succeeds. A proving failure can happen e.g. when requesting a proof
   for an entry not included in the helper trie (unstable header).
2021-01-16 19:06:18 +01:00
c76573a97b eth/protocols/eth: fix slice resize flaw (#22181) 2021-01-16 18:15:18 +01:00
8d62ee65b2 les: don't drop sentTo for normal cases (#22048) 2021-01-15 23:04:38 +01:00
3944976a9a Merge pull request #22177 from karalabe/snapshot-storage-logs
core/state/snapshot: add generation logs to storage too
2021-01-15 12:54:34 +02:00
c4deebbf1e core/state/snapshot: add generation logs to storage too 2021-01-15 12:26:46 +02:00
d13c59fef0 Merge pull request #22169 from karalabe/faucet-regen
cmd/faucet: update the embedded website asset
2021-01-14 12:19:57 +02:00
12969084d1 cmd/faucet: update the embedded website asset 2021-01-14 12:10:52 +02:00
96157a897b graphql: fix spurious travis failure (#22166)
The tests sometimes failed with certain go versions because
the behavior of http.Server.Shutdown changed over time. A bug
that was fixed in Go 1.15 could cause active connections on unrelated
servers to close unexpectedly. This is fixed by avoiding use of the
same port number in all tests.
2021-01-13 22:43:07 +01:00
2aaff0ad76 consensus/ethash: increase seal timeout for tests (#22162)
It seems that the 2 second timeout is not enough for Travis CI:

   --- FAIL: TestTestMode (2.00s)
       ethash_test.go:53: sealing result timeout
2021-01-13 11:44:20 +01:00
6296211a3e graphql: fix spurious error in test (#22164)
This solves an issue in graphql tests:

    graphql_test.go:38: could not create new node: datadir already used by another process
2021-01-13 11:42:26 +01:00
c94081774f tests: update the reference tests (#22009) 2021-01-13 11:29:28 +01:00
c7a6be163f cmd/utils: don't enumerate USB unless --usb is set (#22130)
USB enumeration still occurred. Make sure it will only occur if --usb is set.
This also deprecates the 'NoUSB' config file option in favor of a new option 'USB'.
2021-01-13 11:14:36 +01:00
93a89b2681 go.mod: use github.com/holiman/bloomfilter/v2 (#22044)
* deps: use improved bloom filter implementation

* eth/handler, trie: use 4 keys for syncbloom + minor fixes

* eth/protocols, trie: revert change on syncbloom method signature
2021-01-12 17:39:31 +01:00
23f837c388 cmd/utils: avoid making console preloads absolute (#22109)
Resolves https://github.com/etclabscore/core-geth/issues/273

jsre.JSRE already handles establishing preload
file paths relative to the 'assets' path (aka docroot),
where it joins the assets dir and the file path if relative,
or uses the file path only if absolute.

The duplication of this logic by MakeConsolePreloads
caused preloaded files to have paths which contained
duplicate references to the assets dir path.

Date: 2020-12-30 08:25:01-06:00
Signed-off-by: meows <b5c6@protonmail.com>
2021-01-12 15:50:11 +01:00
984e752ce5 eth: return error from eth_chainID during sync before EIP-155 activates (#21686)
This changes the chainID RPC method to return an error when EIP-155 is not yet
active at the current block height. It used to simply return zero in this case, but
that's confusing.
2021-01-12 10:52:13 +01:00
39b3b8ffb4 graphql: fix issue with unmarshalling int32 into Long type #22153 2021-01-11 14:55:42 +01:00
49c2816d54 eth: improve log message (#22146)
* eth: fixed typos

* eth: fixed log message
2021-01-11 12:53:13 +01:00
79e2174e4d Merge pull request #22157 from karalabe/prque-tests
common/prque: pull in tests and benchmarks from upstream
2021-01-11 10:52:46 +02:00
ab5e3f400f common/prque: pull in tests and benchmarks from upstream 2021-01-11 10:31:03 +02:00
5a1b384352 core: persist bad blocks (#21827)
* core: persist bad blocks

* core, eth, internal: address comments

* core/rawdb: add badblocks to inspector

* core, eth: update

* internal: revert

* core, eth: only save 10 bad blocks

* core/rawdb: address comments

* core/rawdb: fix

* core: address comments
2021-01-10 12:54:15 +01:00
89030ec0b4 eth/downloader: fix race condition in tests (#22140)
* downloader: fix race condition in tests

* eth/downloader: fix race condition in tests

* Revert "downloader: fix race condition in tests"

This reverts commit 108033ebc6.
2021-01-09 17:29:19 +01:00
889f5645b5 ethclient: better test suite for ethclient package (#22127)
This commit extends the ethclient test suite and increases code coverage of the ethclient
package from ~15% to >55%. These tests act as early smoke tests to signal issues in the
RPC-interface. E.g. if a functionality like eth_chainId or eth_call breaks, the test
will break.
2021-01-08 21:29:25 +01:00
6b88ab75bc cmd/faucet: fix nonce-gap problem (#22145)
* cmd/faucet: avoid encoding for each client

* cmd/faucet: fix flaw in clearing of txs, avoid sending more than necessary

* cmd/faucet: fix flaw in tx cropping

* cmd/faucet: revert change to not always send tx info

* cmd/faucet: review fixes

* cmd/faucet: revert #22018, fix order in UI

* cmd/faucet: fix lock error

* cmd/faucet: revert json changes

* squashme
2021-01-08 12:17:15 +02:00
165f53fc6e les: remove transaction propagation limits (#22125) 2021-01-07 23:39:35 +01:00
d3952898c3 Merge pull request #22137 from karalabe/faucet-fb-fix
cmd/faucet: switch Facebook auth over to mobile site
2021-01-07 18:45:27 +02:00
3c6665e7d6 cmd/faucet: switch Facebook auth over to mobile site 2021-01-07 18:14:44 +02:00
4bb5c6ca7a eth/protocols/snap: speed up hash checks (#22023)
* eth/protocols/snap: speed up hash checks

* eth/protocols/snap: nit fix

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-01-07 18:12:41 +02:00
38310f9022 Merge pull request #22136 from karalabe/faucet-websocket-fix
cmd/faucet: fix websocket race regression after switching to gorilla
2021-01-07 12:58:42 +02:00
58b9db5f7c eth/protocols/snap: track reverts when peer rejects request (#22016)
* eth/protocols/snap: reschedule missed deliveries

* eth/protocols/snap: clarify log message

* eth/protocols/snap: revert failures async and update runloop

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-01-07 12:58:07 +02:00
44208d9258 cmd/faucet: fix websocket race regression after switching to gorilla 2021-01-07 10:23:50 +02:00
8bd8e1b24a Merge pull request #22122 from karalabe/snapshot-polishes
cmd/utils, eth/downloader: minor snap nitpicks
2021-01-07 09:12:20 +02:00
d2e1b17f18 snapshot, trie: fixed typos, mostly in snapshot pkg (#22133) 2021-01-07 08:36:21 +02:00
072fd96254 graphql: return decimal for estimateGas and cumulativeGas queries (#22126)
* estimateGas, cumulativeGas
* linted
* add test for estimateGas
2021-01-06 17:19:16 +01:00
d667ee2d10 crypto: fix ineffectual assignments (#22124)
* crypto/bls12381: fixed ineffectual assignment

* crypto/signify: fix ineffectual assignment
2021-01-06 13:06:44 +02:00
83d317cff9 cmd/utils, eth/downloader: minor snap nitpicks 2021-01-06 08:37:45 +02:00
618454214b eth/downloader: enhanced test cases for downloader queue (#22114) 2021-01-05 14:56:01 +01:00
9ba306d47e common/compiler: fix parsing of solc output with solidity v.0.8.0 (#22092)
Solidity 0.8.0 changes the way that output is marshalled. This patch allows parsing both
the legacy format used previously and the new format.

See also https://docs.soliditylang.org/en/breaking/080-breaking-changes.html#interface-changes
2021-01-05 14:48:22 +01:00
4714ce9430 cmd/geth: added --mainnet flag (#21932)
* cmd/geth: added --mainnet flag

* cmd/utils: set default genesis if --mainnet is specified

* cmd/utils: addressed comments
2021-01-05 14:31:23 +01:00
eb2a1dfdd2 graphql: use a decimal representation for gas limit and gas used (#21883)
This changes the JSON encoding of blocks returned by the API
to have decimal instead of hexadecimal numbers. The spec wants
it this way.

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-01-05 11:22:32 +01:00
664903dc88 cmd/geth: usb is off by default (#21984) 2021-01-05 11:18:22 +01:00
9584f56b9d miner: avoid sleeping in miner (#22108)
This PR removes logic in the miner which was originally intended to keep temporary ethash-based testnets from "running off into the future". If the difficulty was low, and a few computers started mining several blocks per second, the ethash rules (which demand a 1s delay between blocks) would push the block times further and further away.
The solution was to make the miner sleep while this happened.

Nowadays, this problem is solved instead by PoA chains, and it's recommended to let testnets and devnets be based on clique instead. The existing logic is problematic, since it can cause stalls within the miner making it difficult for remote workers to submit work if the channel is blocked on a sleep.

Credits to Saar Tochner for reporting this via the bug bounty
2021-01-05 10:44:33 +01:00
6ada9f0f38 Merge pull request #22107 from karalabe/faucet-twitter
cmd: support v1.1 Twitter API in faucet, fix puppeth
2021-01-05 10:27:33 +02:00
e4571d8c12 cmd: support v1.1 Twitter API in faucet, fix puppeth 2021-01-04 14:13:21 +02:00
1951e20d10 SECURITY.md: link to release page (#22067)
Add links to go-ethereum's GitHub release page.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-01-04 12:42:47 +01:00
5c2a7ce2cc node: rename startNetworking to openEndpoints (#22105) 2021-01-04 12:39:25 +01:00
47820ef726 .github: Replace wiki links with new doc pages (#22065) (#22068) 2021-01-04 11:58:51 +01:00
f83fc302a5 cmd/geth: update copyright year (#22099) 2021-01-04 11:52:23 +01:00
167ff563d1 core/state/snapshot: gethring -> gathering typo (#22104) 2021-01-04 10:07:43 +02:00
0a3993c558 accounts/abi/bind: fix erroneous test (#22053)
closes #22049
2020-12-30 13:10:11 +01:00
a425a47ddc core/rawdb, eth/protocols : Method name typo fix (#22026) 2020-12-27 22:38:16 +01:00
c17a7733df docs: replace wiki links with new doc pages in readme.md (#22065) (#22066) 2020-12-27 22:28:08 +01:00
653e8b9dd9 eth/downloader: remove unnecessary condition (#22052) 2020-12-27 22:26:42 +01:00
ab0979f930 signer: docs - replace wiki links with new doc pages (#22069) 2020-12-27 22:18:57 +01:00
0a09a39325 eth/filters: replace wiki links with new doc pages (#22070) 2020-12-27 22:09:05 +01:00
2f8100615a cmd/geth: replace wiki links with new doc pages (#22071) 2020-12-27 22:01:28 +01:00
b13e9c4e3d tests/fuzzers: fix false positive in bitutil fuzzer (#22076) 2020-12-27 21:58:39 +01:00
9c6b5b904a eth, eth/tracers: include intrinsic gas in calltracer, expose for all tracers (#22038)
* eth/tracers: share tx gas price with js tracer

* eth/tracers: use `go generate`

* eth/tracers: try with another version of go-bindata

* eth/tracers: export txGas

* eth, eth/tracers: pass intrinsic gas to js tracers

eth/tracers: include tx gas in tracers usedGas

eth/tracers: fix prestate tracer's sender balance

eth/tracers: rm unnecessary import

eth/tracers: pass intrinsicGas separately to tracer

eth/tracers: fix tests broken by lack of txdata

eth, eth/tracers: minor fix

* eth/tracers: regenerate assets + unexport test-struct + add testcase

* eth/tracers: simplify tests + make table-driven

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-12-27 21:57:19 +01:00
25c0bd9b43 README.md: update Travis badge (#22079)
The legacy dot-org URL was displaying a message about the repository
having migrated to the dot-com service, which now covers open-source
projects as well.
2020-12-27 18:56:50 +01:00
b9012a039b common,crypto: move fuzzers out of core (#22029)
* common,crypto: move fuzzers out of core

* fuzzers: move vm fuzzer out from core

* fuzzing: rework cover package logic

* fuzzers: lint
2020-12-23 17:44:45 +01:00
158f72cc0c internal/ethapi: restore net_version RPC method (#22061)
During the snap and eth refactor, the net_version rpc call was mistakenly deprecated.
This restores the net_version RPC handler as most eth2 nodes and other software
depend on it.
2020-12-23 13:43:22 +01:00
61469cfeaf eth/downloader: fix typo in comment (#22019) 2020-12-21 15:39:58 +01:00
c5a3ffa363 eth/download/statesync : optimize to avoid a copy in state sync hashing (#22035)
* eth/download/statesync : state hash sum optimized

* go fmt with blank in imports

* keccak read arg fix
2020-12-21 11:54:39 +01:00
3c46f5570b cmd/faucet: sort requests by newest first (#22018) 2020-12-17 01:20:20 +01:00
c7f2536735 les: les/4 minimalistic version (#21909)
* les: allow tx unindexing in les/4 light server mode

* les: minor fixes

* les: more small fixes

* les: add meaningful constants for recentTxIndex handshake field
2020-12-15 20:12:14 +01:00
8cde2966af eth, core: speed up some tests (#22000) 2020-12-15 18:52:51 +01:00
0fe66f8ae4 eth/protocols/eth: remove magic numbers in test (#21999) 2020-12-14 14:31:23 +01:00
4859929798 cmd/geth: fixed parallelization flaw in account import test (#22002) 2020-12-14 14:08:53 +01:00
017831dd5b core, eth: split eth package, implement snap protocol (#21482)
This commit splits the eth package, separating the handling of eth and snap protocols. It also includes the capability to run snap sync (https://github.com/ethereum/devp2p/blob/master/caps/snap.md) , but does not enable it by default. 

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-12-14 10:27:15 +01:00
00d10e610f cmd/abigen: clarify abigen alias flag usage (#21875)
* doc: clarify abigen alias flag usage

update the `abigen --alias` flag help info, give an example to make it more clear

related issue: https://github.com/ethereum/go-ethereum/issues/21846

* Update cmd/abigen/main.go

Co-authored-by: ligi <ligi@ligi.de>

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: ligi <ligi@ligi.de>
2020-12-12 17:36:32 +01:00
38c1d592b7 abi/bind: fix error-handling in generated wrappers for functions returning structs (#22005)
Fixes the template used when generating code, which in some scenarios would lead to panic instead of returning an error.
2020-12-12 10:16:34 +01:00
4d48980e74 core, eth, les: implement unclean-shutdown marker (#21893)
This PR implements unclean shutdown marker. Every time geth boots, it adds a timestamp to a list of timestamps in the database. This list is capped at 10. At a clean shutdown, the timestamp is removed again. 
Thus, when geth exits unclean, the marker remains, and at boot up we show the most recent unclean shutdowns to the user, which makes it easier to diagnose root-causes to certain problems. 

Co-authored-by: Nagy Salem <me@muhnagy.com>
2020-12-11 15:56:00 +01:00
c49aae9870 consensus: refactor FinalizeAndAssemble to use Finalize (#21993) 2020-12-11 15:49:44 +01:00
efe6dd2904 consensus/ethash: implement faster difficulty calculators (#21976)
This PR adds re-written difficulty calculators, which are based on uint256. It also adds a fuzzer + oss-fuzz integration for the new fuzzer. It does differential fuzzing between the new and old calculators.

Note: this PR does not actually enable the new calculators.
2020-12-11 11:06:44 +01:00
88c696240d core/txpool: remove "local" notion from the txpool price heap (#21478)
* core: separate the local notion from the pricedHeap

* core: add benchmarks

* core: improve tests

* core: address comments

* core: degrade the panic to error message

* core: fix typo

* core: address comments

* core: address comment

* core: use PEAK instead of POP

* core: address comments
2020-12-11 10:44:57 +01:00
b47f4ca5cf cmd/faucet: use Twitter API instead of scraping webpage (#21850)
This PR adds support for using Twitter API to query the tweet and author details. There are two reasons behind this change:

- Twitter will be deprecating the legacy website on 15th December. The current method is expected to stop working then.
- More importantly, the current system uses Twitter handle for spam protection but the Twitter handle can be changed via automated calls. This allows bots to use the same tweet to withdraw funds infinite times as long as they keep changing their handle between every request. The Rinkeby as well as the Goerli faucet are being actively drained via this method. This PR changes the spam protection to be based on Twitter IDs instead of usernames. A user can not change their Twitter ID.
2020-12-11 10:35:39 +01:00
62dc59c2bd miner, test: fix potential goroutine leak (#21989)
In miner/worker.go, there are two goroutines using channel w.newWorkCh: newWorkerLoop() sends to this channel, and mainLoop() receives from this channel. Only the receive operation is in a select.

However, w.exitCh may be closed by another goroutine. This is fine for the receive since receive is in select, but if the send operation is blocking, then it will block forever. This commit puts the send in a select, so it won't block even if w.exitCh is closed.

Similarly, there are two goroutines using channel errc: the parent that runs the test receives from it, and the child created at line 573 sends to it. If the parent goroutine exits too early by calling t.Fatalf() at line 614, then the child goroutine will be blocked at line 574 forever. This commit adds 1 buffer to errc. Now send will not block, and receive is not influenced because receive still needs to wait for the send.
2020-12-11 10:29:42 +01:00
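
The core of the fix is the classic pattern of pairing the channel send with the exit channel in a select. The sketch below demonstrates it with the names from the description above (the work type and surrounding scaffolding are illustrative, not the actual worker code).

package main

import (
	"fmt"
	"time"
)

// sendWork sketches the fixed pattern: the send also watches the exit
// channel, so the sender cannot block forever once exitCh is closed.
func sendWork(newWorkCh chan<- int, exitCh <-chan struct{}, work int) bool {
	select {
	case newWorkCh <- work:
		return true
	case <-exitCh:
		return false // receiver is gone, give up instead of leaking
	}
}

func main() {
	newWorkCh := make(chan int) // unbuffered: a bare send would block forever
	exitCh := make(chan struct{})
	close(exitCh) // simulate shutdown before anyone receives

	done := make(chan bool)
	go func() { done <- sendWork(newWorkCh, exitCh, 42) }()

	select {
	case ok := <-done:
		fmt.Println("send delivered:", ok)
	case <-time.After(time.Second):
		fmt.Println("goroutine leaked")
	}
}
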
1a715d7db5 les: rework float conversion on arm64 and other architectures (#21994)
The previous fix #21960 converted the float to an intermediate signed int, before attempting the uint conversion. Although this works, this doesn't guarantee that other architectures will work the same.
2020-12-11 10:28:01 +01:00
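
For background, the sketch below shows why the conversion is architecture-sensitive: a non-constant float-to-uint conversion whose value is out of range is implementation-dependent in Go, so amd64 and arm64 can disagree. The values are invented for illustration.

package main

import "fmt"

func main() {
	f := -1.0 // a float that can momentarily fall outside the uint64 range

	// Out-of-range float-to-uint conversion: result is implementation-dependent,
	// so different architectures may produce different values here.
	direct := uint64(f)

	// The earlier workaround went through a signed integer first, which is
	// well-defined while the value still fits in int64 -- but not beyond,
	// which is why the conversion was reworked again in this change.
	viaSigned := uint64(int64(f))

	fmt.Println(direct, viaSigned)
}
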
fc0662bb23 params: begin v1.9.26 release cycle 2020-12-11 09:03:16 +01:00
e787272901 params: go-ethereum v1.9.25 stable 2020-12-11 09:03:16 +01:00
1d1f5fea4a build: upgrade to Go 1.15.6 (#21986) 2020-12-11 09:02:55 +01:00
004541098d les: introduce forkID (#21974)
* les: introduce forkID

* les: address comment
2020-12-10 17:20:55 +01:00
b44f24e3e6 core, trie: speed up some tests with quadratic processing flaw (#21987)
This commit fixes a flaw in two testcases, and brings down the exec-time from ~40s to ~8s for trie/TestIncompleteSync.

The checkConsistency was performed over and over again on the complete set of nodes, not just the recently added ones, turning it into a quadratic runtime.
2020-12-10 14:48:32 +01:00
9f6bb492bb les, light: remove untrusted header retrieval in ODR (#21907)
* les, light: remove untrusted header retrieval in ODR

* les: polish

* light: check the hash equality in odr
2020-12-10 14:33:52 +01:00
817a3fb562 p2p/enode: avoid crashing for invalid IP (#21981)
The database panicked for invalid IPs. This is usually no problem
because all code paths leading to node DB access verify the IP, but it's
dangerous because improper validation can turn this panic into a DoS
vulnerability. The quick fix here is to just turn database accesses
using invalid IP into a noop. This isn't great, but I'm planning to
remove the node DB for discv5 long-term, so it should be fine to have
this quick fix for half a year.

Fixes #21849
2020-12-09 20:21:31 +01:00
f935b1d542 crypto/signify, build: fix archive signing with signify (#21977)
This fixes some issues in crypto/signify and makes release signing work.

The archive signing step in ci.go used getenvBase64, which decodes the key data.
This is incorrect here because crypto/signify already base64-decodes the key.
2020-12-09 15:43:36 +01:00
915643a3e5 cmd/geth: add test to verify regexps in version check (#21962) 2020-12-09 13:59:24 +01:00
40b6ccf383 core,les: headerchain import in batches (#21471)
* core: add test for headerchain inserts

* core, light: write headerchains in batches

* core: change to one callback per batch of inserted headers + review concerns

* core: error-check on batch write

* core: unexport writeHeaders

* core: remove callback parameter in InsertHeaderChain

The semantics of InsertHeaderChain are now much simpler: it is now an
all-or-nothing operation. The new WriteStatus return value allows
callers to check for the canonicality of the insertion. This change
simplifies use of HeaderChain in package les, where the callback was
previously used to post chain events.

* core: skip some hashing when writing headers

* core: less hashing in header validation

* core: fix headerchain flaw regarding blacklisted hashes

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-12-09 11:13:02 +01:00
bd848aad7c common: improve printing of Hash and Address (#21834)
Both Hash and Address have a String method, which returns the value as
hex with 0x prefix. They also had a Format method which tried to print
the value using printf of []byte. The way Format worked was at odds with
String though, leading to a situation where fmt.Sprintf("%v", hash)
returned the decimal notation and hash.String() returned a hex string.

This commit makes it consistent again. Both types now support the %v,
%s, %q format verbs for 0x-prefixed hex output. %x, %X creates
unprefixed hex output. %d is also supported and returns the decimal
notation "[1 2 3...]".

For Address, the case of hex characters in %v, %s, %q output is
determined using the EIP-55 checksum. Using %x, %X with Address
disables checksumming.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-12-08 19:19:09 +01:00
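
A quick sketch of the resulting behaviour; the output comments paraphrase the description above, and the address is the usual EIP-55 example value.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	addr := common.HexToAddress("0x5aaeb6053f3e94c9b9a09f33669435e7ef1beaed")
	hash := common.HexToHash("0x0123")

	fmt.Printf("%v\n", addr) // 0x-prefixed hex, EIP-55 checksummed
	fmt.Printf("%s\n", hash) // 0x-prefixed hex, same as String()
	fmt.Printf("%q\n", addr) // quoted 0x-prefixed hex
	fmt.Printf("%x\n", addr) // unprefixed lowercase hex, no checksum
	fmt.Printf("%d\n", hash) // decimal byte notation "[0 0 ... 1 35]"
}
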
ed0670cb17 accounts/abi/bind: allow specifying signer on transactOpts (#21356)
This commit enables users to specify which signer they want to use while creating their transactOpts.
Previously all contract interactions used the homestead signer. Now a user can specify whether they
want to sign with homestead or EIP155 and specify the chainID which adds another layer of security.

Closes #16484
2020-12-08 14:44:56 +01:00
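
As a sketch of the new style, assuming the chain-ID-aware constructor bind.NewKeyedTransactorWithChainID is available in this version; the key is throwaway and the chain ID arbitrary.

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	chainID := big.NewInt(5) // e.g. Goerli

	// Builds TransactOpts whose Signer applies EIP-155 replay protection for
	// the given chain ID, instead of the old implicit Homestead signing.
	opts, err := bind.NewKeyedTransactorWithChainID(key, chainID)
	if err != nil {
		panic(err)
	}
	fmt.Println("from:", opts.From.Hex())
}
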
6a4e730003 crypto/secp256k1: add workaround for go mod vendor (#21735)
Go won't vendor C files if there are no Go files present in the directory.
Workaround is to add dummy Go files.

Fixes: #20232
2020-12-08 10:47:56 +01:00
581c028d18 les: cosmetic rewrite of the arm64 float bug workaround (#21960)
* les: revert arm float bug workaround to check go 1.15

* add traces to reproduce outside travis

* simpler workaround
2020-12-07 14:04:27 +01:00
15339cf1c9 cmd/geth: implement vulnerability check (#21859)
* cmd/geth: implement vulnerability check

* cmd/geth: use minisign to verify vulnerability feed

* cmd/geth: add the test too

* cmd/geth: more minisig/signify testing

* cmd/geth: support multiple pubfiles for signing

* cmd/geth: add @holiman minisig pubkey

* cmd/geth: polishes on vulnerability check

* cmd/geth: fix ineffassign linter nit

* cmd/geth: add CVE to version check struct

* cmd/geth/testdata: add missing testfile

* cmd/geth: add more keys to versionchecker

* cmd/geth: support file:// URLs in version check

* cmd/geth: improve key ID printing when signature check fails

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-12-04 15:01:47 +01:00
7770e41cb5 core: improve contextual information on core errors (#21869)
A lot of times when we hit 'core' errors (for example: invalid tx), the information provided is
insufficient. We miss several pieces of information: which account has a nonce that is too high,
and which transaction in that block was offending?

This PR adds that information, using the new type of wrapped errors.
It also adds a testcase which (partly) verifies the output from the errors.

The first commit changes all usage of direct equality-checks on core errors, into
using errors.Is. The second commit adds contextual information. This wraps most
of the core errors with more information, and also wraps it one more time in
stateprocessor, to further provide tx index and tx hash, if such a tx is encountered in
a block. The third commit uses the chainmaker to try to generate chains with such
errors in them, thus triggering the errors and checking that the generated string meets
expectations.
2020-12-04 12:22:19 +01:00
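
A small sketch of the consumer-side implication; the wrapping text is invented here, and only the errors.Is matching against a core sentinel error is the point.

package main

import (
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/core"
)

func main() {
	// Wrapping keeps the sentinel error matchable while adding context such
	// as the offending account, nonce and transaction (text invented here).
	err := fmt.Errorf("could not apply tx 3: %w: address 0x00..01, tx nonce 7, state nonce 2",
		core.ErrNonceTooHigh)

	// Direct equality checks no longer work on wrapped errors; errors.Is does.
	fmt.Println(errors.Is(err, core.ErrNonceTooHigh)) // true
	fmt.Println(err == core.ErrNonceTooHigh)          // false
}
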
62cedb3aab core/vm/runtime: remove duplicated line (#21956)
This line is duplicated, though it doesn't cause any issues.
2020-12-04 08:54:07 +01:00
d7a64dc02b cmd/devp2p: add node filter for snap + fix arg error (#21950) 2020-12-03 13:16:20 +01:00
0b2f1446bb go.mod: update github.com/golang/snappy (#21934)
This updates the snappy library dependency to include a fix for
a Go 1.16 incompatibility issue.
2020-12-02 16:42:38 +01:00
e9e86aeacb eth: fix error in tracing if reexec is set (#21830)
* eth: fix error in tracing if reexec is set

* eth: change pointer embedding to value-embedding
2020-12-02 12:49:20 +01:00
908c18073a params: update CHTs (#21941) 2020-12-02 09:17:59 +01:00
a2795c8055 les: fix nodiscover option (#21906) 2020-12-01 10:03:41 +01:00
e7db1dbc96 p2p/nodestate: fix deadlock during shutdown of les server (#21927)
This PR fixes a deadlock reported here: #21925

The cause is that many operations may be pending, but if the close happens, only one of them gets woken up and exits; the others remain waiting for a signal that never comes.
2020-11-30 18:58:47 +01:00
a1ddd9e1d3 cmd/devp2p/internal/ethtest: add transaction tests (#21857) 2020-11-30 15:23:48 +01:00
aba0c234c2 cmd/geth: make tests run quicker + use less memory and disk (#21919) 2020-11-30 14:43:20 +01:00
566cb4c5f0 accounts/keystore: add missing function doc for SignText (#21914)
Co-authored-by: Pascal Dierich <pascal@pascaldierich.com>
2020-11-30 09:03:24 +01:00
b71334ac3d accounts, signer: fix Ledger Live account derivation path (clef) (#21757)
* signer/core/api: fix derivation of ledger live accounts

For ledger hardware wallets, change account iteration as follows:

- ledger legacy: m/44'/60'/0'/X; for 0<=X<5
- ledger live: m/44'/60'/0'/0/X; for 0<=X<5

- ledger legacy: m/44'/60'/0'/X; for 0<=X<10
- ledger live: m/44'/60'/X'/0/0; for 0<=X<10

Non-ledger derivation is unchanged and remains as:
- non-ledger: m/44'/60'/0'/0/X; for 0<=X<10

* signer/core/api: derive ten default paths for all hardware wallets, plus ten legacy and ten live paths for ledger wallets

* signer/core/api: as .../0'/0/0 already included by default paths, do not include it again with ledger live paths

* accounts, signer: implement path iterators for hd wallets

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-29 13:43:15 +01:00
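A rough sketch of how the ledger-live iteration (m/44'/60'/X'/0/0) differs from the legacy one (m/44'/60'/0'/X); the path representation below is illustrative only and is not the derivation-path iterator added by this change.

```go
package main

import "fmt"

const hardened = 0x80000000 // BIP-32 hardened-key offset

// ledgerLegacyPath yields m/44'/60'/0'/x.
func ledgerLegacyPath(x uint32) []uint32 {
	return []uint32{44 | hardened, 60 | hardened, 0 | hardened, x}
}

// ledgerLivePath yields m/44'/60'/x'/0/0.
func ledgerLivePath(x uint32) []uint32 {
	return []uint32{44 | hardened, 60 | hardened, x | hardened, 0, 0}
}

func main() {
	for x := uint32(0); x < 3; x++ {
		fmt.Println("legacy:", ledgerLegacyPath(x), "live:", ledgerLivePath(x))
	}
}
```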
fa572cd297 crypto: signing builds with signify/minisign (#21798)
* internal/build: implement signify's signing func
* Add signify to the ci utility
* fix output file format
* Add unit test for signify
* holiman's + travis' feedback
* internal/build: verify signify's output
* crypto: move signify to common dir
* use go-minisign to verify binaries
* more holiman feedback
* crypto, ci: support minisign output
* only accept one-line trusted comments
* configurable untrusted comments
* code cleanup in tests
* revert to use ed25519 from the stdlib
* bug: fix for empty untrusted comments
* write timestamp as comment if trusted comment isn't present
* rename line checker to commentHasManyLines
* crypto: added signify fuzzer (#6)
* crypto: added signify fuzzer
* stuff
* crypto: updated signify fuzzer to fuzz comments
* crypto: repro signify crashes
* rebased fuzzer on build-signify branch
* hide fuzzer behind gofuzz build flag
* extract key data inside a single function
* don't treat \r as a newline
* travis: fix signing command line
* do not use an external binary in tests
* crypto: move signify to crypto/signify
* travis: fix formatting issue
* ci: fix linter build after package move

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-11-27 12:13:54 +01:00
429e7141f2 p2p/discover: fix deadlock in discv5 message dispatch (#21858)
This fixes a deadlock that could occur when a response packet arrived
after a call had already received enough responses and was about to
signal completion to the dispatch loop.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-11-25 22:16:36 +01:00
810f9e057d all: remove redundant conversions and import names (#21903) 2020-11-25 21:00:23 +01:00
f59ed3565d graphql: always return 400 if errors are present in the response (#21882)
* Make sure to return 400 when errors are present in the response

* graphql: use less memory in chainconfig for tests

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-25 10:19:36 +01:00
c92faee66e all: simplify nested complexity and if blocks ending with a return statement (#21854)
Changes:

    Simplify nested complexity
    If an if blocks ends with a return statement then remove the else nesting.

Most of the changes have also been reported in golint https://goreportcard.com/report/github.com/ethereum/go-ethereum#golint
2020-11-25 09:24:50 +01:00
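For illustration, a toy example of the kind of simplification this change applies; the function below is hypothetical and not taken from the codebase.

```go
package example

// Before: the else branch is redundant because the if block ends with a return.
func clampBefore(v, limit int) int {
	if v > limit {
		return limit
	} else {
		return v
	}
}

// After: the else nesting is removed, which is what the golint report flags.
func clampAfter(v, limit int) int {
	if v > limit {
		return limit
	}
	return v
}
```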
29efe1fc7e core/types: fixed typo (#21897) 2020-11-25 08:53:20 +01:00
59b480ab4b cmd/devp2p/internal/ethtest: add 'large announcement' tests (#21792)
* cmd/devp2p/internal/ethtest: added large announcement tests

* cmd/devp2p/internal/ethtest: added large announcement tests

* cmd/devp2p/internal/ethtest: refactored stuff a bit

* cmd/devp2p/internal/ethtest: added TestMaliciousStatus/Handshake

* cmd/devp2p/internal/ethtest: fixed rebasing issue

* happy linter, happy life

* cmd/devp2p/internal/ethtest: used readAndServe

* stuff

* cmd/devp2p/internal/ethtest: fixed test cases
2020-11-24 16:09:17 +01:00
7e7a3f0f71 github: Remove vulnerability.md (#21894)
This option is offered automatically by GitHub after switching to the new issue-template style, provided a SECURITY.md file is present
2020-11-24 16:02:53 +01:00
bddd103a9f les: fix GetProofsV2 bug (#21896) 2020-11-24 10:55:17 +01:00
6b58409614 cmd/faucet: improve handling of facebook post url (#21838)
Resolves #21532

Co-authored-by: roger <dengjun@huobi.com>
2020-11-24 10:33:58 +01:00
ead814616c Merge pull request #21890 from ligi/issue_templates
github: Add new style of issue-templates
2020-11-23 17:47:20 +02:00
6104ab6b6d tests/fuzzers/bls1381: add bls fuzzer (#21796)
* added bls fuzzer

* crypto/bls12381: revert bls-changes, fixup fuzzer tests

* fuzzers: split bls fuzzing into 8 different units

* fuzzers/bls: remove (now stale) corpus

* crypto/bls12381: added blsfuzz corpus

* fuzzers/bls12381: fix the bls corpus

* fuzzers: fix oss-fuzz script

* tests/fuzzers: fixups on bls corpus

* test/fuzzers: remove leftover corpus

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-11-23 15:49:16 +01:00
f6e1aed504 github: Add new style of issue-templates
closes #20024
2020-11-23 13:12:42 +01:00
bddf5aaa2f les/utils: protect against WeightedRandomSelect overflow (#21839)
Also fixes a bug in les/flowcontrol that caused the overflow.
2020-11-23 10:18:33 +01:00
3ef52775c4 p2p: avoid spinning loop on out-of-handles (#21878)
* p2p: avoid busy-loop on temporary errors

* p2p: address review concerns
2020-11-20 15:14:25 +01:00
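A minimal sketch of the backoff pattern the commit title refers to, assuming a plain net.Listener accept loop; this is not the actual p2p server code.

```go
package p2pexample

import (
	"net"
	"time"
)

// acceptLoop backs off briefly on temporary errors (such as running out of
// file descriptors) instead of spinning in a tight retry loop.
func acceptLoop(ln net.Listener, handle func(net.Conn)) {
	for {
		conn, err := ln.Accept()
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Temporary() {
				time.Sleep(200 * time.Millisecond)
				continue
			}
			return // permanent error: stop accepting
		}
		go handle(conn)
	}
}
```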
ebb9591c4d crypto/bn256: fix bn256Mul fuzzer to not hang on large input (#21872)
* crypto/bn256: fix bn256Mul fuzzer to not hang on large input

* Update crypto/bn256/bn256_fuzz.go

Co-authored-by: ligi <ligi@ligi.de>

Co-authored-by: ligi <ligi@ligi.de>
2020-11-20 08:53:10 +01:00
6f88d6530a trie, rpc, cmd/geth: fix tests on 32-bit and windows + minor rpc fixes (#21871)
* trie: fix tests to work on 32-bit systems

* les: make test work on 32-bit platform

* cmd/geth: fix windows-issues on tests

* trie: improve balance

* cmd/geth: make account tests less verbose + less mem intense

* rpc: make debug-level log output less verbose

* cmd/geth: lint
2020-11-19 22:50:47 +01:00
f1e1d9f874 node: support expressive origin rules in ws.origins (#21481)
* Only compare hostnames in ws.origins

Also using a helper function for ToLower consolidates all preparation steps in one function for more maintainable consistency.

Spaces => tabs

Remove a semicolon

Add space at start of comment

Remove parens around conditional

Handle case where parsed hostname is empty

When passing a single word like "localhost" the parsed hostname is an empty string. Handle this and the error-parsing case together as default, and the nonempty hostname case in the conditional.

Refactor with new originIsAllowed functions

Adds originIsAllowed() & ruleAllowsOrigin(); removes prepOriginForComparison

Remove blank line

Added tests for simple allowed-orign rule

which does not specify a protocol or port, just a hostname

Fix copy-paste: `:=` => `=`

Remove parens around conditional

Remove autoadded whitespace on blank lines

Compare scheme, hostname, and port with rule

if the rule specifies those portions.

Remove one autoadded trailing whitespace

Better handle case where only origin host is given

e.g. "localhost"

Remove parens around conditional

Refactor: attemptWebsocketConnectionFromOrigin DRY

Include return type on helper function

Provide srv obj in helper fn

Provide srv to helper fn

Remove stray underscore

Remove blank line

Refactor: drop err var for more concise test lines

Add several tests for new WebSocket origin checks

Remove autoadded whitespace on blank lines

Restore TestWebsocketOrigins originally-named test

and rename the others to be helpers rather than full tests

Remove autoadded whitespace on blank line

Temporarily comment out new test sets

Uncomment test around origin rule with scheme

Remove tests without scheme on browser origin

per https://github.com/ethereum/go-ethereum/pull/21481/files#r479371498

Uncomment tests with port; remove some blank lines

Handle when browser does not specify scheme/port

Uncomment test for including scheme & port in rule

Add IP tests

* node: more tests + table-driven, ws origin changes

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-19 14:54:49 +01:00
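A condensed sketch of the origin-matching idea: only the parts a rule specifies (scheme, hostname, port) are compared against the browser origin. Bare-hostname rules like "localhost" need the extra handling discussed above; this is not the exact node package implementation.

```go
package originexample

import (
	"net/url"
	"strings"
)

// originIsAllowed compares the origin against the rule, but only for the
// components (scheme, host, port) that the rule actually specifies.
func originIsAllowed(rule, origin string) bool {
	r, err := url.Parse(strings.ToLower(rule))
	if err != nil {
		return false
	}
	o, err := url.Parse(strings.ToLower(origin))
	if err != nil {
		return false
	}
	if r.Scheme != "" && r.Scheme != o.Scheme {
		return false
	}
	if r.Hostname() != "" && r.Hostname() != o.Hostname() {
		return false
	}
	if r.Port() != "" && r.Port() != o.Port() {
		return false
	}
	return true
}
```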
28080463d2 Merge pull request #21861 from holiman/remove_retesteth
cmd/geth: remove retesteth
2020-11-19 07:49:40 +02:00
b9ff57c59e metrics: fix the panic for reading empty cpu stats (#21864) 2020-11-18 21:50:11 +01:00
23524f8900 all: disable recording preimage of trie keys (#21402)
* cmd, core, eth, light, trie: disable recording preimage by default

* core, eth: fix unit tests

* core: fix import

* all: change to nopreimage

* cmd, core, eth, trie: use cache.preimages flag

* cmd: enable preimages for archive node

* cmd/utils, trie: simplify preimage tracking a bit

* core: fix linter

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-11-18 11:51:33 +02:00
6b9858085f cmd/geth: improve les test on windows (#21860) 2020-11-17 12:01:19 +01:00
db87223269 crypto/secp256k1: add checking z sign in affineFromJacobian (#18419)
The z == 0 check is hit whenever we Add two points with the same x1/x2
coordinate. crypto/elliptic uses the same check in their affineFromJacobian
function. This change does not affect block processing or tx signature verification
in any way, because it does not use the Add or Double methods.
2020-11-17 11:47:17 +01:00
d513584e52 cmd/geth: remove retesteth 2020-11-17 11:44:38 +01:00
844485ec6a consensus/ethash: fix usage of *reflect.SliceHeader (#21372)
* consensus/ethash: only use *reflect.SliceHeader, not reflect.SliceHeader. See comment here: https://github.com/golang/go/issues/40397#issuecomment-663748689

* consensus/ethash: pr feedback from @mdempsky, makes a copy of dest such that is not mutated

* consensus/ethash: remove noop assign

* consensus/ethash: apply same fix to another location

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-11-17 11:35:58 +02:00
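The rule referenced above (take a *reflect.SliceHeader by pointer from a live slice, never construct a standalone reflect.SliceHeader value) looks roughly like the hypothetical helper below; this is not the ethash code itself.

```go
package unsafeexample

import (
	"reflect"
	"unsafe"
)

// bytesToUint32s reinterprets b as a []uint32 without copying. The header is
// obtained by pointer from a live slice variable, which is the only pattern
// the reflect documentation allows for SliceHeader.
func bytesToUint32s(b []byte) []uint32 {
	var out []uint32
	hdr := (*reflect.SliceHeader)(unsafe.Pointer(&out))
	hdr.Data = (*reflect.SliceHeader)(unsafe.Pointer(&b)).Data
	hdr.Len = len(b) / 4
	hdr.Cap = len(b) / 4
	return out
}
```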
1ea7537997 crypto/bn256: refine comments according to #19577, #21595, and #21836 (#21847) 2020-11-17 09:51:36 +01:00
92c56eb820 common: fix documentation of Address.SetBytes (#21814) 2020-11-16 14:08:13 +01:00
cf856ea1ad accounts/abi: template: set events Raw field in Parse methods (#21807) 2020-11-13 13:43:15 +01:00
2045a2bba3 core, all: split vm.Context into BlockContext and TxContext (#21672)
* all: core: split vm.Config into BlockConfig and TxConfig

* core: core/vm: reset EVM between tx in block instead of creating new

* core/vm: added docs
2020-11-13 13:42:19 +01:00
6f4cccf8d2 core/vm, protocol_params: implement eip-2565 modexp repricing (#21607)
* core/vm, protocol_params: implement eip-2565 modexp repricing

* core/vm: fix review concerns
2020-11-13 13:39:59 +01:00
0703c91fba tests/fuzzers: improve the fuzzers (#21829)
* tests/fuzzers, common/bitutil: make fuzzers use correct returnvalues + remove output

* tests/fuzzers/stacktrie: fix duplicate-key insertion in stacktrie (false positive)

* tests/fuzzers/stacktrie: fix compilation error

* tests/fuzzers: linter nits
2020-11-13 12:36:38 +01:00
9ded4e33c5 crypto/bn256: better comments for u, P and Order (#21836) 2020-11-13 10:17:23 +01:00
a19b4235c7 crypto/bn256: improve bn256 fuzzer (#21815)
* crypto/cloudflare: fix nil deref in random G1/G2 reading

* crypto/bn256: improve fuzzer

* crypto/bn256: fix some flaws in fuzzer
2020-11-13 09:27:57 +01:00
919229d63c params: begin v1.9.25 release cycle 2020-11-12 21:21:24 +01:00
cc05b050df params: release Geth v1.9.24 with Go 1.15.5 (#21842) 2020-11-12 21:10:15 +01:00
920a287117 .travis.yml: move test builders after install builders (#21833) 2020-11-11 23:52:50 +01:00
d49407427d build: fix regressions with the -dlgo change (#21831)
This fixes cross-build and mobile framework failures.
It also disables the mac test builder because it was failing
all the time in hard to understand ways and we can't afford
it anymore under Travis CI's new pricing.
2020-11-11 22:08:22 +01:00
d990df909d consensus/ethash: use 64bit indexes for the DAG generation (#21793)
* Bit boundary fix for the DAG generation routine

* Fix unnecessary conversion warnings

Co-authored-by: Sergey Pavlov <spavlov@gmail.com>
2020-11-11 21:13:12 +01:00
27d93c1848 build: add -dlgo flag in ci.go (#21824)
This new flag downloads a known version of Go and builds with it. This
is meant for environments where we can't easily upgrade the installed Go
version.

* .travis.yml: remove install step for PR test builders

We added this step originally to avoid re-building everything
for every test. go test has become much smarter in recent go
releases, so we no longer need to install anything here.
2020-11-11 14:34:43 +01:00
70868b1e4a fuzzers: removed fuzzbuzz configuration (#21813)
We decided to move our fuzzing efforts to oss-fuzz since fuzzbuzz is still early access.
2020-11-10 21:54:59 +02:00
941d8b5c5c scripts: create oss-fuzz script in go-ethereum (#21808) 2020-11-10 15:21:41 +01:00
c52dfd55fb p2p/simulations/adapters/exec: fix some issues (#21801)
- Remove the ws:// prefix from the status endpoint since
  the ws:// is already included in the stack.WSEndpoint().
- Don't register the services again in the node start.
  Registration is already done in the initialization stage.
- Expose admin namespace via websocket.
  This namespace is necessary for connecting the peers via websocket.
- Offer logging-related options for the exec adapter.
  It's really painful to mix all log output in a single console, so this PR
  offers two additional options for the exec adapter: testers can configure
  the log output (e.g. file output) and the log level for each p2p node.
2020-11-10 14:19:44 +01:00
0c34eae172 Merge pull request #21803 from holiman/ethash
consensus/ethash: fix the percentage progress report
2020-11-09 17:57:23 +02:00
7c30f4d085 Merge pull request #21804 from karalabe/snapshot-marker-sync
core/state/snapshot: update generator marker in sync with flushes
2020-11-09 17:50:26 +02:00
040928d8bb Merge pull request #21805 from karalabe/travis-drop-1.13
travis: drop Go 1.13 builders as it's not supported any more
2020-11-09 17:49:56 +02:00
9e688fb64c Merge pull request #21806 from karalabe/deprecate-eoan
build: stop building for Ubuntu Eoan, not supported any more
2020-11-09 17:49:21 +02:00
1143dc6e29 build: stop building for Ubuntu Eoan, not supported any more 2020-11-09 17:43:54 +02:00
eb694ea706 travis: drop Go 1.13 builders as it's not supported any more 2020-11-09 17:39:42 +02:00
81678971db trie, tests/fuzzers: implement a stacktrie fuzzer + stacktrie fixes (#21799)
* trie: fix error in stacktrie not committing small roots

* fuzzers: make trie-fuzzer use correct returnvalues

* trie: improved tests

* tests/fuzzers: fuzzer for stacktrie vs regular trie

* test/fuzzers: make stacktrie fuzzer use 32-byte keys

* trie: fix error in stacktrie with small nodes

* trie: add (skipped) testcase for stacktrie

* tests/fuzzers: address review comments for stacktrie fuzzer

* trie: fix docs in stacktrie
2020-11-09 15:08:12 +01:00
7b7b327ff2 core/state/snapshot: update generator marker in sync with flushes 2020-11-09 16:03:58 +02:00
81ff700077 consensus/ethash: fix the percentage progress report 2020-11-09 11:56:29 +01:00
97fc1c3b1d Merge pull request #21787 from karalabe/pod-non-verbose
build: stop verbose output to keep travis from overflowing
2020-11-05 11:55:50 +02:00
6cfe494276 build: stop verbose output to keep travis from overflowing 2020-11-05 11:52:35 +02:00
175506e7fd core/types, rlp: optimize derivesha (#21728)
This PR contains a minor optimization in derivesha, by exposing the RLP
int-encoding and making use of it to write integers directly to a
buffer (an RLP integer is known to never require more than 9 bytes
total). rlp.AppendUint64 might be useful in other places too.

The code assumes, just as before, that the hasher (a trie) will copy the
key internally, which it does when doing keybytesToHex(key).

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-11-04 19:29:24 +01:00
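A small usage sketch of the exposed integer encoding, assuming rlp.AppendUint64 appends the canonical RLP encoding of a uint64 to a byte slice.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// An RLP-encoded integer needs at most 9 bytes, so a small buffer suffices.
	buf := make([]byte, 0, 9)
	buf = rlp.AppendUint64(buf, 1024)
	fmt.Printf("%x\n", buf) // canonical RLP encoding of 1024
}
```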
36bb7ac083 cmd/devp2p/internal/ethtest: add correct chain files and improve test output (#21782)
This PR replaces the old test genesis.json and chain.rlp files in the testdata
directory for the eth protocol test suite, and also adds documentation for
running the eth test suite locally.

It also improves the test output text and adds more timeouts.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-11-04 17:36:56 +01:00
5d20fbbb6f cmd/devp2p, internal/utesting: implement TAP output (#21760)
TAP is a text format for test results. Parsers for it are available in many languages,
making it easy to consume. I want TAP output from our protocol tests because the
Hive wrapper around them needs to know about the test names and their individual
results and logs. It would also be possible to just write this info as JSON, but I don't
want to invent a new format.

This also improves the normal console output for tests (when running without --tap).
It now prints -- RUN lines before any output from the test, and indents the log output
by one space.
2020-11-04 15:02:58 +01:00
e6402677c2 core/state/snapshot: fix journal recovery from generating old journal (#21775)
* core/state/snapshot: print warning if failed to resolve journal

* core/state/snapshot: fix snapshot recovery

When we meet the snapshot journal consisted with:
- disk layer generator with new-format
- diff layer journal with old-format

The base layer should be returned without error.
The broken diff layer can be reconstructed later
but we definitely don't want to reconstruct the
huge diff layer.

* core: add tests
2020-11-04 13:41:46 +02:00
3eebf34038 common: remove ToHex and ToHexArray (#21610)
ToHex was deprecated a couple years ago. The last remaining use
was in ToHexArray, which itself only had a single call site.

This just moves ToHexArray near its only remaining call site and
implements it using hexutil.Encode. This changes the default behaviour
of ToHexArray and with it the behaviour of eth_getProof. Previously we
encoded an empty slice as 0, now the empty slice is encoded as 0x.
2020-11-04 11:20:39 +01:00
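The behavioural difference is easy to see with hexutil.Encode, which ToHexArray is now implemented on top of:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	fmt.Println(hexutil.Encode([]byte{}))           // "0x" (previously ToHex returned "0")
	fmt.Println(hexutil.Encode([]byte{0xde, 0xad})) // "0xdead"
}
```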
b63bffe820 les, p2p/simulations/adapters: fix issues found while simulating les (#21761)
This adds a few tiny fixes for les and the p2p simulation framework:

LES Parts

- Keep the LES-SERVER connection even it's non-synced

  We had this idea to reject connections in the LES protocol if the les-server itself is
  not synced. However, in the LES protocol we may also receive a connection from another
  les-server. In that case, even if the local node is not synced yet, we should keep the
  TCP connection alive for the other protocols (e.g. the eth protocol).

- Don't count "invalid message" for non-existing GetBlockHeadersMsg request

  In the eth syncing mechanism (full sync, fast sync, light sync), it will try to fetch
  some non-existent blocks or headers(to ensure we indeed download all the missing chain).
  In this case, it's possible that the les-server will receive the request for
  non-existent headers. So don't count it as the "invalid message" for scheduling
  dropping.

- Copy the announce object in the closure

  Before the les-server pushes the latest headers to all connected clients, it will create
  a closure and queue it in the underlying request scheduler. In some scenarios it's
  problematic. E.g, in private networks, the block can be mined very fast. So before the
  first closure is executed, we may already update the latest_announce object. So actually
  the "announce" object we want to send is replaced.

  The downside is that the client will receive two announces with the same td and then drop
  the server.

P2P Simulation Framework

- Don't double register the protocol services in p2p-simulation "Start".

  The protocols on top of devp2p are registered in the "new node" stage, so don't register
  them again when starting a node in the p2p simulation framework.

- Add one more new config field "ExternalSigner", in order to use clef service in the
  framework.
2020-10-30 18:04:38 +01:00
b63e3c37a6 core: improve snapshot journal recovery (#21594)
* core/state/snapshot: introduce snapshot journal version

* core: update the disk layer in an atomic way

* core: persist the disk layer generator periodically

* core/state/snapshot: improve logging

* core/state/snapshot: forcibly ensure the legacy snapshot is matched

* core/state/snapshot: add debug logs

* core, tests: fix tests and special recovery case

* core: polish

* core: add more blockchain tests for snapshot recovery

* core/state: fix comment

* core: add recovery flag for snapshot

* core: add restart after start-after-crash tests

* core/rawdb: fix imports

* core: fix tests

* core: remove log

* core/state/snapshot: fix snapshot

* core: avoid callbacks in SetHead

* core: fix setHead cornercase where the threshold root has state

* core: small docs for the test cases

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-10-29 21:01:58 +02:00
43c278cdf9 core/state: disable snapshot iteration if it's not fully constructed (#21682)
* core/state/snapshot: add diskRoot function

* core/state/snapshot: disable iteration if the snapshot is generating

* core/state/snapshot: simplify the function

* core/state: panic for undefined layer
2020-10-28 14:27:37 +02:00
18145adf08 core/state: maintain one more diff layer (#21730)
* core/state: maintain one more diff layer

* core/state: address comment
2020-10-28 14:00:22 +02:00
296a27d106 accounts/abi/bind: restore error functionality (#21743)
* accounts/abi/bind: restore error functionality

* Update accounts/abi/bind/base.go

Co-authored-by: Guillaume Ballet <gballet@gmail.com>

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-10-27 17:22:44 +01:00
1a55e20d35 cmd/geth: fix dir path in geth attach for yolov2 network (#21749) 2020-10-26 14:45:08 +02:00
7b748e550a Merge pull request #21747 from holiman/yolov2update
params: update yolov2 bootnode with elastic ip
2020-10-23 17:48:43 +03:00
68ac4eb796 params: update yolov2 bootnode with elastic ip 2020-10-23 16:47:26 +02:00
8a94aa91fb Merge pull request #21745 from holiman/yolov2_bootnodes
utils, params: add yolov2 bootnode
2020-10-23 16:42:08 +03:00
f5182c7b9c utils, params: add yolov2 bootnode 2020-10-23 15:40:48 +02:00
95f720fffc cmd/devp2p/internal/ethtest: update test chain (#21742)
The old one was wrong in two ways: the first block in chain.rlp was the
genesis block, and the genesis difficulty was below minimum difficulty.

This also contains some other fixes to the test.
2020-10-23 13:34:44 +02:00
6487c002f6 all: implement EIP-2929 (gas cost increases for state access opcodes) + yolo-v2 (#21509)
* core/vm, core/state: implement EIP-2929 + YOLOv2

* core/state, core/vm: fix some review concerns

* core/state, core/vm: address review concerns

* core/vm: address review concerns

* core/vm: better documentation

* core/vm: unify sload cost as fully dynamic

* core/vm: fix typo

* core/vm/runtime: fix compilation flaw

* core/vm/runtime: fix renaming-err leftovers

* core/vm: renaming

* params/config: use correct yolov2 chainid for config

* core, params: use a proper new genesis for yolov2

* core/state/tests: golinter nitpicks
2020-10-23 08:26:57 +02:00
fb2c79df19 accounts/usbwallet: fix ledger version check (#21733)
The version check logic did not take into account the second digit (i.e. the '4' in v1.4.0); this one-line patch corrects that.
2020-10-21 16:56:45 +02:00
91c4607979 core: fix blockchain insert report time interval calculation (#21723) 2020-10-21 16:53:30 +02:00
85d81b2cdd les: remove clientPeerSet and serverSet (#21566)
* les: move NodeStateMachine from clientPool to LesServer

* les: new header broadcaster

* les: peerCommons.headInfo always contains last announced head

* les: remove clientPeerSet and serverSet

* les: fixed panic

* les: fixed --nodiscover option

* les: disconnect all peers at ns.Stop()

* les: added comments and fixed signed broadcasts

* les: removed unused parameter, fixed tests
2020-10-21 10:56:33 +02:00
3e82c9ef67 eth/api: fix potential nil deref in AccountRange (#21710)
* Fix potential nil pointer error when neither block number nor hash is specified to accountRange

* Update error description
2020-10-20 20:19:21 +02:00
9d25f34263 core: track and improve tx indexing/unindexing (#21331)
* core: add background indexer to waitgroup

* core: make indexer stopable

* core/rawdb: add unit tests

* core/rawdb: fix lint

* core/rawdb: fix tests

* core/rawdb: fix linter
2020-10-20 16:34:50 +02:00
6e7137103c miner: fixed race condition in tests (#21664) 2020-10-20 10:58:26 +02:00
cef3e2dc5a console: don't exit on ctrl-c, only on ctrl-d (#21660)
* add interrupt counter

* remove interrupt counter, allow ctrl-C to clear ONLY, ctrl-D will terminate console, stop node

* format

* add instructions to exit

* fix tests
2020-10-20 10:56:51 +02:00
b305591e14 core/vm: marshall returnData as hexstring in trace logs (#21715)
* core/vm: marshall returnData as hexstring in trace logs

* core/vm: marshall returnData as hexstring in trace logs
2020-10-16 11:28:03 +02:00
51d026ca85 params: begin v1.9.24 release cycle 2020-10-15 12:30:41 +02:00
8c2f271528 params: go-ethereum v1.9.23 stable 2020-10-15 12:29:42 +02:00
524aaf5ec6 p2p/discover: implement v5.1 wire protocol (#21647)
This change implements the Discovery v5.1 wire protocol and
also adds an interactive test suite for this protocol.
2020-10-14 12:28:17 +02:00
4eb01b21c8 miner: set etherbase even if mining isn't possible at the moment (#21707) 2020-10-14 11:59:11 +02:00
bdc7554918 params: update CHTs (#21706) 2020-10-14 11:57:37 +02:00
1fed223483 accounts/keystore: fix flaky test (#21703)
* accounts/keystore: add timeout to test to prevent failure on travis

The TestWalletNotifications test sporadically fails on travis.
This is because we shut down the event collection before all events are received.
Adding a small timeout (10 milliseconds) allows the collector to be scheduled
and to consume all pending events before we shut it down.

* accounts/keystore: added newlines back in

* accounts/keystore: properly fix the walletNotifications test
2020-10-13 19:46:43 +02:00
1e10489196 miner: don't interrupt mining after successful sync (#21701)
* miner: exit loop when downloader Done or Failed

Following the logic of the comment at the method,
this fixes a regression introduced at 7cf56d6f06
, which would allow external parties to DoS with
blocks, preventing mining progress.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: remove ineff assign (lint)

Signed-off-by: meows <b5c6@protonmail.com>

* miner: update test re downloader events

Signed-off-by: meows <b5c6@protonmail.com>

* Revert "miner: remove ineff assign (lint)"

This reverts commit eaefcd34ab4862ebc936fb8a07578aa2744bc058.

* Revert "miner: exit loop when downloader Done or Failed"

This reverts commit 23abd34265aa246c38fc390bb72572ad6ae9fe3b.

* miner: add test showing imprecise TestMiner

Signed-off-by: meows <b5c6@protonmail.com>

* miner: fix waitForMiningState precision

This helper function would return an affirmation
on the first positive match on a desired bool.

This was imprecise; it returned false positives
by not waiting initially for an 'updated' value.

This fix causes TestMiner_2 to fail, which is
expected.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: remove TestMiner_2 demonstrating broken test

This test demonstrated the imprecision of the test
helper function waitForMiningState. This function
has been fixed with 6d365c2851, and this test
may now be removed.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: fix test regarding downloader event/mining expectations

See comment for logic.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: add test describing expectations for downloader/mining events

We expect that once the downloader emits a DoneEvent,
signaling a successful sync, that subsequent StartEvents
are no longer permitted to stop the miner.

This prevents a security vulnerability where forced syncs via
fake high blocks would stall mining operation.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: use 'canStop' state to fix downloader event handling

- Break downloader event handling into event
separating Done and Failed events. We need to
treat these cases differently since a DoneEvent
should prevent the miner from being stopped on
subsequent downloader Start events.

- Use canStop state to handle the one-off
case when a downloader first succeeds.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: improve comment wording

Signed-off-by: meows <b5c6@protonmail.com>

* miner: start mining on downloader events iff not already mining

Signed-off-by: meows <b5c6@protonmail.com>

* miner: refactor miner update logic w/r/t downloader events

This makes mining pause/start logic regarding downloader
events more explicit. Instead of eternally handling downloader
events after the first done event, the subscription is closed
when downloader events are no longer actionable.

Signed-off-by: meows <b5c6@protonmail.com>

* miner: fix handling downloader events on subcription closed

Signed-off-by: meows <b5c6@protonmail.com>

* miner: (lint:gosimple) use range over chan instead of for/select

Signed-off-by: meows <b5c6@protonmail.com>

* miner: refactor update loop to remove race condition

The goroutine handling the downloader events was accessing
variables in parallel with the parent routine, causing a
race condition.

This change, though ugly, removes the race condition while
still allowing the downloader event subscription to be
closed when the miner has no further use for it (ie DoneEvent).

* miner: alternate fix for miner-flaw

Co-authored-by: meows <b5c6@protonmail.com>
2020-10-13 15:12:06 +03:00
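A toy illustration of the "canStop" idea described above, with local event values standing in for the downloader events; after the first successful sync, start events can no longer pause mining.

```go
package main

import "fmt"

func main() {
	events := []string{"start", "failed", "start", "done", "start"}
	mining, canStop := true, true
	for _, ev := range events {
		switch ev {
		case "start":
			if canStop {
				mining = false // pause mining only before the first successful sync
			}
		case "done":
			mining = true
			canStop = false // subsequent start events are no longer actionable
		case "failed":
			mining = true
		}
		fmt.Printf("%-6s mining=%v\n", ev, mining)
	}
}
```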
2a9ea6be87 cmd/geth, cmd/utils: fixed flags name (#21700) 2020-10-13 13:33:10 +02:00
7a5a822905 eth, p2p: use truncated names (#21698)
* peer: return localAddr instead of name to prevent spam

We currently use the name (which can be freely set by the peer) in several log messages.
This enables malicious actors to write spam into your geth log.
This commit returns the localAddr instead of the freely settable name.

* p2p: reduce usage of peer.Name in warn messages

* eth, p2p: use truncated names

* Update peer.go

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-10-13 13:28:24 +02:00
5c6155f9f4 internal/web3ext: improve some web3 apis (#21639)
* improve some web3-ext apis

* Update web3ext.go

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-10-13 13:24:08 +02:00
348c3bc47d trie: fix flaw in stacktrie pool reuse (#21699) 2020-10-13 13:21:25 +02:00
94d1f5888a consensus/clique: unexport calcDifficulty and improve comment (#21619) 2020-10-13 11:00:42 +02:00
c37e68e7c1 all: replace RWMutex with Mutex in places where RLock is not used (#21622) 2020-10-13 10:58:41 +02:00
32341f88e3 console: fix admin.sleepBlocks (#21629) 2020-10-13 10:55:57 +02:00
66c3eb2f1a accouts, consensus, core: fix some comments (#21617) 2020-10-12 15:02:38 +02:00
86dd005544 trie: polish commit function (#21692)
* trie: polish commit function

* trie: fix typo
2020-10-12 12:08:04 +02:00
706f5e3b98 core: fix txpool off-by-one error (#21683) 2020-10-09 12:23:46 +03:00
19a1c95046 eth/downloader: cache parent hash instead of recomputing (#21678) 2020-10-09 09:09:10 +02:00
905ed109ed eth/downloader: fix data race around the ancientlimit (#21681)
* eth/downloader: fix data race around the ancientlimit

* eth/downloader: initialize the ancientlimit as 0
2020-10-09 09:58:30 +03:00
43cd31ea9f core/vm: dedup config check in markdown logger (#21655)
* core/vm: dedup config check

* review feedback: reuse buffer
2020-10-08 14:03:24 +02:00
5e86e4ed29 p2p/discover: remove use of shared hash instance for key derivation (#21673)
For some reason, using the shared hash causes a cryptographic incompatibility
when using Go 1.15. I noticed this during the development of Discovery v5.1
when I added test vector verification.

The go library commit that broke this is golang/go@97240d5, but the
way we used HKDF is slightly dodgy anyway and it's not a regression.
2020-10-08 11:19:54 +02:00
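The fix amounts to handing the KDF a hash constructor rather than a shared hash instance; roughly as below (using x/crypto/hkdf for illustration, not the exact discv5 code).

```go
package kdfexample

import (
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveKey derives keyLen bytes from the shared secret. Passing sha256.New
// (a constructor) means hkdf gets its own hash state instead of reusing a
// hash instance shared with other code.
func deriveKey(secret, salt, info []byte, keyLen int) ([]byte, error) {
	key := make([]byte, keyLen)
	if _, err := io.ReadFull(hkdf.New(sha256.New, secret, salt, info), key); err != nil {
		return nil, err
	}
	return key, nil
}
```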
6d29e192e9 signer/core: don't mismatch reject and no accounts (#21677)
* signer/core: don't mismatch reject and zero accounts, fixes #21674

* signer/core: docs
2020-10-08 11:10:58 +03:00
015e78928a node: relax websocket connection header check (#21646)
This makes it accept the "upgrade,keep-alive" header value, which
apparently is a thing.
2020-10-07 20:05:14 +02:00
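A sketch of a tokenised check that accepts values such as "upgrade,keep-alive"; the helper name is hypothetical and not the exact node package code.

```go
package wsexample

import "strings"

// connectionHeaderHasUpgrade treats the Connection header as a comma-separated
// token list and accepts it if any token equals "upgrade" (case-insensitive).
func connectionHeaderHasUpgrade(value string) bool {
	for _, token := range strings.Split(value, ",") {
		if strings.EqualFold(strings.TrimSpace(token), "upgrade") {
			return true
		}
	}
	return false
}
```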
716864deba cmd/devp2p/internal/ethtest: improve eth test suite (#21615)
This fixes issues with the protocol handshake and status exchange
and adds support for responding to GetBlockHeaders requests.
2020-10-07 17:22:44 +02:00
e43d827a19 core/types: optimize bloom filters (#21624)
* core/types: tests for bloom

* core/types: refactored bloom filter for receipts, added tests

core/types: replaced old bloom implementation

core/types: change interface of bloom add+test

* core/types: refactor bloom

* core/types: minor tweak on LogsBloom

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-10-06 16:57:00 +03:00
eb87121300 core/bloombits: faster generator (#21625)
* core/bloombits: add benchmark

* core/bloombits: optimize inserts
2020-10-06 16:34:29 +03:00
2b2fd74158 params: update goerli testnet bootnodes (#21659)
* params: update pegasys besu bootnode

* params: update goerli initiative bootnodes
2020-10-06 08:35:21 +03:00
d9890a6a8f cmd/faucet: enable DNS discovery for known networks (#21636) 2020-10-05 13:50:26 +03:00
a15d71a255 core/state/snapshot: stop generator if it hits missing trie nodes (#21649)
* core/state/snapshot: exit Geth if generator hits missing trie nodes

* core/state/snapshot: error instead of hard die on generator fault

* core/state/snapshot: don't enable logging on the tests
2020-10-05 11:52:36 +03:00
9d1e2027a0 trie: add Commit-sequence tests for stacktrie commit (#21643) 2020-09-30 19:49:20 +02:00
053ed9cc84 trie: polishes to trie committer (#21351)
* trie: update tests to check commit integrity

* trie: polish committer

* trie: fix typo

* trie: remove hasvalue notion

According to the benchmarks, type assertion between the pointer and
interface is extremely fast.

BenchmarkIntmethod-12           1000000000               1.91 ns/op
BenchmarkInterface-12           1000000000               2.13 ns/op
BenchmarkTypeSwitch-12          1000000000               1.81 ns/op
BenchmarkTypeAssertion-12       2000000000               1.78 ns/op

So the overhead for asserting whether the shortnode has "valuenode"
child is super tiny. It's not necessary to have another field.

* trie: linter nitpicks

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-30 13:45:56 +02:00
dad26582b6 accounts, signer: implement gnosis safe support (#21593)
* accounts, signer: implement gnosis safe support

* common/math: add type for marshalling big to dec

* accounts, signer: properly sign gnosis requests

* signer, clef: implement account_signGnosisTx

* signer: fix auditlog print, change rpc-name (signGnosisTx to signGnosisSafeTx)

* signer: pass validation-messages/warnings to the UI for gnonsis-safe txs

* signer/core: minor change to validationmessages of typed data
2020-09-29 17:40:08 +02:00
6c8310ebb4 trie: use stacktrie for Derivesha operation (#21407)
core/types: use stacktrie for derivesha

trie: add stacktrie file

trie: fix linter

core/types: use stacktrie for derivesha

rebased: adapt stacktrie to the newer version of DeriveSha

Co-authored-by: Martin Holst Swende <martin@swende.se>

More linter fixes

review feedback: no key offset for nodes converted to hashes

trie: use EncodeRLP for full nodes

core/types: insert txs in order in derivesha

trie: tests for derivesha with stacktrie

trie: make stacktrie use pooled hashers

trie: make stacktrie reuse tmp slice space

trie: minor polishes on stacktrie

trie/stacktrie: less rlp dancing

core/types: explain the contortions in DeriveSha

ci: fix goimport errors

trie: clear mem on subtrie hashing

squashme: linter fix

stracktrie: use pooling, less allocs (#3)

trie: in-place hex prefix, reduce allocs and add rawNode.EncodeRLP

Reintroduce the `[]node` method, add the missing `EncodeRLP` implementation for `rawNode` and calculate the hex prefix in place.

Co-authored-by: Martin Holst Swende <martin@swende.se>

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-29 17:38:13 +02:00
4ee11b072e cmd/bootnode,internal/debug: fix some comments (#21623) 2020-09-29 11:31:14 +02:00
901471f733 build: keep geth-sources.jar build result for JavaDoc (#21596)
* ci: tooltips for javadoc for mobile app

* f space
2020-09-28 20:11:30 +02:00
666092936c p2p/enode: remove unused code (#21612) 2020-09-28 20:10:11 +02:00
b007df89dd light: fix wrong description in a comment (#21573) 2020-09-28 14:30:10 +02:00
a04294d160 internal/web3ext: improve eth_getBlockByNumber and eth_getBlockByHash console api (#21608) 2020-09-28 14:28:38 +02:00
eebfb13053 core: free pointer from slice after popping element from price heap (#21572)
* Fix potential memory leak in price heap

* core: nil free pointer slice (alternative version)

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-28 14:24:01 +02:00
0ddd4612b7 core/vm, params: make 2200 in line with spec (#21605) 2020-09-28 14:14:45 +02:00
a90e645ccd mobile: added constructor for big int (#21597)
* mobile: added constructor for big int

* mobile: tiny nitpick
2020-09-28 14:12:08 +02:00
420b78659b accounts/abi: ABI explicit difference between Unpack and UnpackIntoInterface (#21091)
* accounts/abi: refactored abi.Unpack

* accounts/abi/bind: fixed error

* accounts/abi/bind: modified template

* accounts/abi/bind: added ToStruct for conversion

* accounts/abi: reenabled tests

* accounts/abi: fixed tests

* accounts/abi: fixed tests for packing/unpacking

* accounts/abi: fixed tests

* accounts/abi: added more logic to ToStruct

* accounts/abi/bind: fixed template

* accounts/abi/bind: fixed ToStruct conversion

* accounts/abi/: removed unused code

* accounts/abi: updated template

* accounts/abi: refactored unused code

* contracts/checkpointoracle: updated contracts to sol ^0.6.0

* accounts/abi: refactored reflection logic

* accounts/abi: less code duplication in Unpack*

* accounts/abi: fixed rebasing bug

* fix a few typos in comments

* rebase on master

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-09-28 14:10:26 +02:00
c9959145a9 params: begin v1.9.23 release cycle 2020-09-28 11:23:02 +03:00
c71a7e26a8 params: release Geth v1.9.22 2020-09-28 11:21:47 +03:00
7ddb44b80e Merge pull request #21635 from karalabe/cht-1.9.22
params: update CHTs for Geth v1.9.22
2020-09-28 11:21:17 +03:00
b5d362b2bf params: update CHTs for Geth v1.9.22 2020-09-28 11:19:22 +03:00
fdd42d425b cmd/devp2p/internal/ethtest: lower protocol version to 64 (#21604) 2020-09-24 10:46:43 +02:00
39f8268147 cmd/devp2p/internal/ethtest: update version in handshake (#21603) 2020-09-23 17:48:47 +02:00
a25899f3dc cmd/devp2p: add eth protocol test suite (#21598)
This change adds a test framework for the "eth" protocol and some basic
tests. The tests can be run using the './devp2p rlpx eth-test' command.
2020-09-23 15:18:17 +02:00
c1544423d6 internal/ethapi: fix nil deref + fix estimateGas console bindings (#21601)
* tried to fix

* fix for js api

* fix for nil pointer ex

* rev space

* rev space

* input call formatter
2020-09-23 13:08:40 +02:00
e5defccd58 trie: extend range proof (#21250)
* trie: support non-existent right proof

* trie: improve test

* trie: minor linter fix

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-09-23 12:44:09 +03:00
0921f8a74f internal/ethapi: add optional parameter blockNrOrHash to estimateGas (#21545)
This allows users to estimate gas on top of arbitrary blocks as well as pending and latest.
Tracing on pending is useful for most users as it takes into account the current txpool while
tracing on latest might be useful for users that have little to no knowledge of the current
transactions in the network.

If blockNrOrHash is not specified, estimateGas defaults to pending
2020-09-23 10:29:48 +02:00
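A hedged usage sketch via the raw RPC client: the optional block parameter follows the call object. The endpoint and addresses below are placeholders, not values from the change itself.

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // placeholder endpoint
	if err != nil {
		panic(err)
	}
	call := map[string]interface{}{
		"from":  "0x0000000000000000000000000000000000000001",
		"to":    "0x0000000000000000000000000000000000000002",
		"value": "0x1",
	}
	var gasHex string
	// "pending" is the default; "latest" or a specific block also work.
	if err := client.CallContext(context.Background(), &gasHex, "eth_estimateGas", call, "pending"); err != nil {
		panic(err)
	}
	fmt.Println("estimated gas:", gasHex)
}
```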
25b16085da trie: support empty range proof (#21199) 2020-09-23 11:03:21 +03:00
e1365b2464 trie: fix gaped range proof test case (#21484) 2020-09-23 10:59:11 +03:00
fdb742419e cmd/clef, cmd/geth: use SplitAndTrim from cmd/utils (#21579) 2020-09-22 23:22:54 +02:00
129cf075e9 p2p: move rlpx into separate package (#21464)
This change moves the RLPx protocol implementation into a separate package,
p2p/rlpx. The new package can be used to establish RLPx connections for
protocol testing purposes.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-09-22 10:17:39 +02:00
2c097bb7a2 mobile: better api for java users (#21580)
* (mobile): Adds string representations for types

* mobile: better interfaces add stringer to types

Co-authored-by: sarath <sarath@melvault.com>
2020-09-21 16:33:35 +02:00
9a39c6bcb1 accounts/abi: improve documentation and names (#21540)
* accounts/abi/bind/backends: cleaned doc errors, camelCase refactors and anonymous variable assignments

* accounts/abi/bind: doc errors, anonymous parameter assignments

* accounts/abi: doc edits, camelCase refactors

* accounts/abi/bind: review fix

* reverted name changes

* name revert

Co-authored-by: Osoro Bironga <osoro@doctaroo.com>
2020-09-20 10:43:57 +02:00
f354c622ca core: fix a typo in comment (#21439) 2020-09-18 14:26:19 +02:00
2482ba016e Merge pull request #21529 from karalabe/dynamic-pivot
eth/downloader: dynamically move pivot even during chain sync
2020-09-18 12:29:33 +03:00
fb835c024c eth/downloader: dynamically move pivot even during chain sync 2020-09-18 11:37:42 +03:00
07751c3d26 cmd/geth: added counters to the geth inspect report (#21495)
* database: added counters

* Improved stats for ancient db

* Small improvement

* Better message and added percentage while counting receipts

* Fast counting for receipts

* added info message

* Show both the receipts item count from ancient db and the counted receipts

* Fixed default case

* Removed counter for receipts in ancient store

* Removed counting of receipts present in leveldb
2020-09-17 10:23:56 +02:00
faba018b29 cmd/utils: use preconfigured testnet flags instead of networkid (#21561)
* cmd/utils: use preconfigured testnet flags instead of networkid

* cmd/utils: shorter description

Co-authored-by: Martin Holst Swende <martin@swende.se>

* Update flags.go

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-16 13:17:50 +02:00
89884dc353 tests/fuzzers/abi: add fuzzer for fuzzing package accounts/abi (#21217)
* tests/fuzzers/abi: added abi fuzzer

* accounts/abi: fixed issues found by fuzzing

* tests/fuzzers/abi: update fuzzers, added repro test

* tests/fuzzers/abi: renamed abi_fuzzer to abifuzzer

* tests/fuzzers/abi: updated abi fuzzer

* tests/fuzzers/abi: updated abi fuzzer

* accounts/abi: minor style fix

* go.mod: added go-fuzz dependency

* tests/fuzzers/abi: updated abi fuzzer

* tests/fuzzers/abi: make linter happy

* tests/fuzzers/abi: make linter happy

* tests/fuzzers/abi: comment out false positives
2020-09-16 13:15:22 +02:00
93f047023f les/lespay/server: bump database version (#21571) 2020-09-16 11:51:16 +02:00
8696dd39cb params: allow setting Petersburg block before chain head (#21473)
* Allow setting PetersburgBlock before chainhead

if it is at the same block as ConstantinopleBlock

* Add a negative test
2020-09-16 09:39:35 +03:00
cf2a77af28 ethclient: fix BlockNumber (#21565)
It didn't actually work because it called a method that doesn't
exist. This fixes it and also adds a test.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-09-15 11:29:51 +02:00
0185ee0993 core/rawdb: single point of maintenance for writing and deleting tx lookup indexes (#21480) 2020-09-15 10:37:01 +02:00
4764b2f0be COPYING: restore the full text of GPL (#21568)
When the license was added to the repository, its text was changed (some
sections at the end removed) and, worse, the authors of go-ethereum
tried to claim copyright on the license text.

The correct way to apply GPL to a project is to copy it verbatim.
This change reverts the text of the GPL to the original.
2020-09-15 08:27:17 +02:00
b65c384181 eth/tracers: regenerate assets from #21549 (#21564) 2020-09-15 08:22:47 +02:00
4996fce25a les, les/lespay/server: refactor client pool (#21236)
* les, les/lespay/server: refactor client pool

* les: use ns.Operation and sub calls where needed

* les: fixed tests

* les: removed active/inactive logic from peerSet

* les: removed active/inactive peer logic

* les: fixed linter warnings

* les: fixed more linter errors and added missing metrics

* les: addressed comments

* cmd/geth: fixed TestPriorityClient

* les: simplified clientPool state machine

* les/lespay/server: do not use goroutine for balance callbacks

* internal/web3ext: fix addBalance required parameters

* les: removed freeCapacity, always connect at minCapacity initially

* les: only allow capacity change with priority status

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2020-09-14 22:44:20 +02:00
f7112cc182 rlp: add SplitUint64 (#21563)
This can be useful when working with raw RLP data.
2020-09-14 19:23:01 +02:00
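A small usage sketch, assuming SplitUint64 returns the decoded integer plus the remaining bytes of the input.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	raw, _ := rlp.EncodeToBytes(uint64(1024))
	x, rest, err := rlp.SplitUint64(raw)
	fmt.Println(x, len(rest), err) // 1024 0 <nil>
}
```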
71c37d82ad js/tracers: make calltracer report value in selfdestructs (#21549) 2020-09-14 14:57:28 +02:00
4eb9296910 p2p/nodestate: ensure correct callback order (#21436)
This PR adds an extra guarantee to NodeStateMachine: it ensures that all
immediate effects of a certain change are processed before any subsequent
effects of any of the immediate effects on the same node. In the original
version, if a cascaded change caused a subscription callback to be called
multiple times for the same node then these calls might have happened in a
wrong chronological order.

For example:

- a subscription to flag0 changes flag1 and flag2
- a subscription to flag1 changes flag3
- a subscription to flag1, flag2 and flag3 was called in the following order:

   [flag1] -> [flag1, flag3]
   [] -> [flag1]
   [flag1, flag3] -> [flag1, flag2, flag3]

This happened because the tree of changes was traversed in a "depth-first
order". Now it is traversed in a "breadth-first order"; each node has a
FIFO queue for pending callbacks and each triggered subscription callback
is added to the end of the list. The already existing guarantees are
retained; no SetState or SetField returns until the callback queue of the
node is empty again. Just like before, it is the responsibility of the
state machine design to ensure that infinite state loops are not possible.
Multiple changes affecting the same node can still happen simultaneously;
in this case the changes can be interleaved in the FIFO of the node but the
correct order is still guaranteed.

A new unit test is also added to verify callback order in the above scenario.
2020-09-14 14:01:18 +02:00
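A toy illustration of the breadth-first dispatch described above: each triggered callback is appended to a FIFO queue and processed in order, so immediate effects run before effects-of-effects. The flag names mirror the example in the commit message; none of this is the NodeStateMachine API itself.

```go
package main

import "fmt"

func main() {
	subs := map[string][]string{ // flag -> flags its callback sets when triggered
		"flag0": {"flag1", "flag2"},
		"flag1": {"flag3"},
	}
	queue := []string{"flag0"}
	for len(queue) > 0 {
		f := queue[0]
		queue = queue[1:]
		fmt.Println("callback for", f)
		queue = append(queue, subs[f]...) // enqueue cascaded changes (FIFO)
	}
	// Callback order: flag0, flag1, flag2, flag3
}
```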
a99ac5335c Dockerfile: unexpose port 8547 as GraphQL was merged into HTTP endpoint (#21556) 2020-09-13 23:25:15 +03:00
4e2641319b p2p/discover: fix typo in comments (#21554) 2020-09-11 20:35:38 +02:00
df219e23df miner: fix regression, add test for starting while download (#21547)
Fixes a regression introduced in #21536
2020-09-11 18:17:09 +02:00
7cf56d6f06 miner: use channels instead of atomics in update loop (#21536)
This PR changes several different things:

- Adds test cases for the miner loop
- Stops the worker if it wasn't already stopped in worker.Close()
- Uses channels instead of atomics in the miner.update() loop

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-09-10 19:27:42 +02:00
d7f02b448a cmd/geth: print warning when whisper config is present in toml (#21544)
* cmd/geth: print warning when whisper config is present in toml

* Update cmd/geth/config.go

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-10 13:14:19 +00:00
1167639524 ethclient: add BlockNumber method (#21500)
This adds a new client method BlockNumber to fetch the most recent
block number of the chain.
2020-09-10 14:24:21 +02:00
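A usage sketch, assuming the method signature is BlockNumber(ctx) (uint64, error); the endpoint is a placeholder.

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545") // placeholder endpoint
	if err != nil {
		panic(err)
	}
	number, err := client.BlockNumber(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("latest block:", number)
}
```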
4ea9737de6 go.mod: remove golang.org/x/sync (#21541) 2020-09-10 14:21:51 +02:00
a3cd8a040a core/vm: fix benchmark overflow + prep for precompile repricings (#21530)
* core/vm/testdata: add gascost expectations to testcases

* core/vm: verify expected gas in tests for precompiles

* core/vm: fix overflow flaw in gas/s calculation
2020-09-10 09:19:30 +02:00
328901c24c cmd, eth: offer maxprice flag for overwriting price cap (#21531)
* cmd, eth: offer maxprice flag for overwriting price cap

* eth: rename default price cap
2020-09-09 18:38:47 +03:00
3a98c6f6e6 Merge pull request #21537 from karalabe/les-reorg-fix
eth/downloader: only roll back light sync if not fully validating
2020-09-09 18:37:37 +03:00
d81c9d9b76 accounts/abi/bind/backends: reverted some stylistic changes (#21535) 2020-09-09 13:21:20 +00:00
367f12f734 eth/downloader: only roll back light sync if not fully validating 2020-09-09 14:13:26 +03:00
8d35b1eb2b params: begin v1.9.22 release cycle 2020-09-09 11:23:37 +03:00
0287d54847 params: release Geth v1.9.21 2020-09-09 11:22:11 +03:00
24562d9b0c Merge pull request #21534 from karalabe/cht-1.9.21
params: update CHTs for v1.9.21 release
2020-09-09 10:34:53 +03:00
dc681fc1f6 params: update CHTs for v1.9.21 release 2020-09-09 10:33:20 +03:00
86bcbb0d79 .github: remove whisper from CODEOWNERS (#21527) 2020-09-08 23:02:14 +03:00
066c75531d build: remove wnode from the list of packages binaries (#21526) 2020-09-08 16:13:48 +02:00
8327d1fdfc accounts/usbwallet, signer/core: show accounts from ledger legacy derivation paths (#21517)
* accounts/usbwallet, signer/core: un-hide accounts from ledger legacy derivation paths

* Update accounts/usbwallet/wallet.go

* Update signer/core/api.go

* Update signer/core/api.go
2020-09-08 14:07:55 +03:00
d54f2f2e5e whisper: remove whisper (#21487)
* whisper: remove whisper

* Update cmd/geth/config.go

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>

* cmd/geth: warn on enabling whisper + remove more whisper deps

* mobile: remove all whisper references

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-09-08 11:47:48 +03:00
c5d28f0b27 accounts/abi/bind/backends: cleaned doc errors, camelCase refactors and anonymous variable assignments (#21514)
Co-authored-by: Osoro Bironga <osoro@doctaroo.com>
2020-09-07 13:07:15 +02:00
de971cc845 eth: added trace_call to trace on top of arbitrary blocks (#21338)
* eth: Added TraceTransactionPending

* eth: Implement Trace_Call, remove traceTxPending

* eth: debug_call -> debug_traceCall, recompute tx environment if pruned

* eth: fix nil panic

* eth: improve block retrieving logic in tracers

* internal/web3ext: add debug_traceCall to console
2020-09-07 10:52:01 +02:00
f86324edb7 Merge pull request #21504 from karalabe/trie-path-sync
core, eth, trie: prepare trie sync for path based operation
2020-09-02 13:52:51 +03:00
eeaf191633 core, eth, trie: prepare trie sync for path based operation 2020-09-02 13:21:32 +03:00
3010f9fc75 eth/downloader: change initial download size (#21366)
This changes how the downloader works, a little bit. Previously, when block sync started,
we immediately started filling up to 8192 blocks. Usually this is fine, blocks are small
in the early numbers. The threshold then is lowered as we measure the size of the blocks
that are filled.

However, if the node is shut down and restarts syncing while we're in a heavy segment,
that might be bad. This PR introduces a more conservative initial threshold of 2K blocks
instead.
2020-09-02 11:01:46 +02:00
d90bbce954 go.mod : update goja dependency (#21432) 2020-09-01 12:56:22 +02:00
5cdb476dd1 "Downloader queue stats" is now provided once per minute (#21455)
* "Downloader queue stats" is now a DEBUG information

I think this info is more a DEBUG related information then an INFO. If it must remains an INFO, maybe it can be slow down to one time every 5 minutes or so.

* Update queue.go

"Downloader queue stats" information is now provided once every minute instead of once every 10 seconds.
2020-09-01 11:02:12 +02:00
ff23e265cd internal: fix personal.sign() (#21503) 2020-09-01 10:23:04 +02:00
12d8570322 accounts/abi: fix a bug in getTypeSize method (#21501)
* accounts/abi: fix a bug in getTypeSize method

e.g. for "Tuple[2]" type, the element of the array is a tuple type and the size of the tuple may not be 32.

* accounts/abi: add unit test of getTypeSize method
2020-09-01 07:29:54 +00:00
5883afb3ef rpc: fix issue with null JSON-RPC messages (#21497) 2020-08-28 16:27:58 +02:00
05280a7ae3 eth/tracers: revert reason in call_tracer + error for failed internal calls (#21387)
* tests: add testdata of call tracer

* eth/tracers: return revert reason in call_tracer

* eth/tracers: regenerate assets

* eth/tracers: add error message even if no exec occurs, fixes #21438

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-08-27 11:33:45 +02:00
d97e0063d5 Merge pull request #21491 from karalabe/state-sync-leak-fix
core/state, eth, trie: stabilize memory use, fix memory leak
2020-08-27 11:24:28 +03:00
856307d8bb go.mod | goleveldb latest update (#21448)
* go.mod | goleveldb latest update

* go.mod update

* leveldb options

* go.mod: double check

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-08-26 15:53:12 +02:00
16d7eae1c8 eth: updated comments (#21490) 2020-08-26 13:20:12 +03:00
d8da0b3d81 core/state, eth, trie: stabilize memory use, fix memory leak 2020-08-26 13:05:06 +03:00
92b12ee6c6 accounts/abi/bind/backends: Disallow AdjustTime for non-empty blocks (#21334)
* accounts/abi/bind/backends: Disallow timeshift for non-empty blocks

* accounts/abi/bind/backends: added tests for adjust time

* accounts/abi/bind/simulated: added comments, fixed test for AdjustTime

* accounts/abi/bind/backends: updated comment
2020-08-26 09:37:00 +02:00
fc20680b95 params: begin v1.9.21 release cycle 2020-08-25 16:21:41 +02:00
979fc96899 params: release Geth v1.9.20 2020-08-25 16:20:37 +02:00
63a9d4b2ae Merge pull request #21486 from karalabe/cht-1.9.20
params: update CHTs for v1.9.20 release
2020-08-25 13:25:23 +03:00
ce5f94920d params: update CHTs for v1.9.20 release 2020-08-25 13:02:51 +03:00
341f451083 graphql: add support for retrieving the chain id (#21451) 2020-08-25 11:38:56 +03:00
d13b8e5570 Merge pull request #21483 from karalabe/freezer-truncate-silent
core/rawdb: only complain loudly if truncating many items
2020-08-25 09:21:44 +03:00
5655dce3b8 core/rawdb: only complain loudly if truncating many items 2020-08-25 09:03:14 +03:00
7b5107b73f p2p/discover: avoid dropping unverified nodes when table is almost empty (#21396)
This change improves discovery behavior in small networks. Very small
networks would often fail to bootstrap because all member nodes were
dropping table content due to findnode failure. The check is now changed
to avoid dropping nodes on findnode failure when their bucket is almost
empty. It also relaxes the liveness check requirement for FINDNODE/v4
response nodes, returning unverified nodes as results when there aren't
any verified nodes yet.

The "findnode failed" log now reports whether the node was dropped
instead of the number of results. The value of the "results" was
always zero by definition.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-24 14:42:39 +02:00
bdde616f23 Merge pull request #21477 from karalabe/snapshotter-shallow-generator
core/state/snapshot: reduce disk layer depth during generation
2020-08-24 14:00:57 +03:00
3ee91b9f2e core/state/snapshot: reduce disk layer depth during generation 2020-08-24 13:22:36 +03:00
0f4e7c9b0d eth: utilize sync bloom for getNodeData (#21445)
* eth/downloader, eth/handler: utilize sync bloom for getNodeData

* trie: handle if bloom is nil

* trie, downloader: check bloom nilness externally
2020-08-24 11:32:12 +03:00
1b5a867eec core: do less lookups when writing fast-sync block bodies (#21468) 2020-08-22 18:12:04 +02:00
87c0ba9213 core, eth, les, trie: add a prefix to contract code (#21080) 2020-08-21 15:10:40 +03:00
b68929caee Merge pull request #21472 from holiman/fix_dltest_fail
eth/downloader: fix rollback issue on short chains
2020-08-21 14:43:14 +03:00
9f7b79af00 eth/downloader: fix rollback issue on short chains 2020-08-21 13:36:08 +02:00
4e54b1a45e metrics: zero temp variable in updateMeter (#21470)
* metrics: zero temp variable in  updateMeter

Previously the temp variable was not updated properly after summing it to count.
This meant we had astronomically high metrics; now we zero out the temp whenever we
sum it onto the snapshot count

* metrics: move temp variable to be aligned, unit tests

Moves the temp variable in MeterSnapshot to be 64-bit aligned because of the atomic bug.
Adds a unit test, that catches the previous bug.
2020-08-21 11:04:36 +03:00
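A minimal sketch of the pattern described in that commit message (illustrative names, not the actual go-ethereum code): the accumulated temp is swapped out atomically and folded into the count exactly once, and the 64-bit fields lead the struct so atomic access stays aligned on 32-bit platforms.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// meterSnapshot is illustrative; the 64-bit fields come first so they stay
// 8-byte aligned for atomic access on 32-bit platforms.
type meterSnapshot struct {
	temp  int64 // events accumulated since the last update
	count int64 // total events folded into the snapshot
}

// Mark records n new events.
func (m *meterSnapshot) Mark(n int64) { atomic.AddInt64(&m.temp, n) }

// updateMeter folds temp into count and zeroes it, so the same events are
// never summed twice.
func (m *meterSnapshot) updateMeter() {
	n := atomic.SwapInt64(&m.temp, 0)
	m.count += n
}

func main() {
	m := &meterSnapshot{}
	m.Mark(5)
	m.updateMeter()
	m.updateMeter() // second update adds nothing: temp was zeroed
	fmt.Println(m.count) // 5
}
```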
a70a79b285 Merge pull request #21466 from karalabe/go1.15
travis, dockerfile, appveyor, build: bump to Go 1.15
2020-08-20 17:41:26 +03:00
15fdaf2005 travis, dockerfile, appveyor, build: bump to Go 1.15 2020-08-20 16:41:37 +03:00
8cbdc8638f core: define and test chain rewind corner cases (#21409)
* core: define and test chain repair corner cases

* core: write up a variety of set-head tests

* core, eth: unify chain rollbacks, handle all the cases

* core: make linter smile

* core: remove commented out legacy code

* core, eth/downloader: fix review comments

* core: revert a removed recovery mechanism
2020-08-20 13:01:24 +03:00
0bdd295cc0 core: more detailed metering for reorgs (#21420) 2020-08-20 09:49:35 +02:00
7ebc6c43ff cmd/evm: statet8n output folder + tx hashes on trace filenames (#21406)
* t8ntool: add output basedir

* t8ntool: add txhash to trace filename

* t8ntool: don't default to '.' basedir, allow absolute paths
2020-08-19 11:31:13 +02:00
560d44479c Merge pull request #21461 from karalabe/ppa-drop-disco
build: drop disco, enable groovy on Ubuntu PPAs
2020-08-19 10:29:54 +03:00
32b078d418 build: drop disco, enable groovy on Ubuntu PPAs 2020-08-19 10:28:08 +03:00
2ff464b29d core/state: fixed some comments (#21450) 2020-08-19 09:54:21 +03:00
f3bafecef7 metrics: make meter updates lock-free (#21446) 2020-08-18 11:27:04 +02:00
54add42550 cmd/geth/tests: try to fix spurious travis failure in les tests (#21410)
* cmd/geth/tests: try to fix spurious travis failure in les tests

* cmd/geth: les_test - remove extraneous option during boot
2020-08-14 14:18:12 +02:00
04926db204 params: begin v1.9.20 release cycle 2020-08-11 14:11:16 +03:00
3e0641923d params: release Geth v1.9.19 2020-08-11 14:10:21 +03:00
74925e547f Merge pull request #21437 from karalabe/cht-1.9.19
params: update CHTs for v1.9.19
2020-08-11 10:34:08 +03:00
7afdf792ab params: update CHTs for v1.9.19 2020-08-11 10:20:03 +03:00
c28fd9c079 tests: add Berlin-definition identical to YOLOv1 (#21435) 2020-08-10 21:06:14 +02:00
4baa574410 Merge pull request #21434 from karalabe/ethstats-split-rwlock
ethstats: split read and write lock, otherwise they'll lock up
2020-08-10 15:13:31 +03:00
9f45d6efae ethstats: split read and write lock, otherwise they'll lock up 2020-08-10 14:33:22 +03:00
cbbc54c495 Merge pull request #21433 from holiman/statsync_exiter
eth/downloader: allow all timers to exit
2020-08-10 12:08:46 +03:00
7cee2509c0 eth/downloader: allow all timers to exit 2020-08-10 10:42:33 +02:00
48b484c5ac Merge pull request #21428 from holiman/ethstats_moar
ethstats: overwrite old errors
2020-08-07 19:40:28 +03:00
06125bff89 Merge pull request #21429 from holiman/timerfix
eth/downloader: set deliverytime on drops and timeouts too
2020-08-07 16:36:33 +03:00
9fea1a5cf5 eth/downloader: set deliverytime on drops and timeouts too 2020-08-07 15:34:58 +02:00
e401f5ff10 les: close all connected les-server when shutdown (#21426)
* les: close all connected les-server when shutdown

* les: linter nitpick

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-08-07 15:33:00 +02:00
6a53ce29a4 ethstats: overwrite old errors 2020-08-07 14:44:44 +02:00
8f24097836 Merge pull request #21427 from karalabe/fix-statesync-delivery-time
eth/downloader: save the correct delivery time for state sync
2020-08-07 15:27:00 +03:00
4b9c0ea76d eth/downloader: save the correct delivery time for state sync 2020-08-07 15:17:13 +03:00
3bb8a4ed3f Merge pull request #21425 from holiman/leslock
les: update checktime even if check fails
2020-08-07 12:32:01 +03:00
983cb25a07 les: update checktime even if check fails 2020-08-07 10:57:02 +02:00
68754f3931 cmd/utils: grant snapshot cache to trie if disabled (#21416)
* cmd/utils: grant snapshot cache to trie if disabled

* eth: fix up default non-mainnet cache distribution
2020-08-06 15:28:31 +03:00
5d4512b113 eth: use maxQueuedTxAnns for to limit the number of transactions announced (#21419) 2020-08-06 15:19:00 +03:00
d21303f9dd cmd/geth: fixes db unavailability for chain commands (#21415)
* chaincmd should make config nodes instead of full nodes

* add documentation for using makeConfigNode instead of makeFullNode;

* add documentation to functions

* code style
2020-08-06 10:24:36 +03:00
4fde0cabc1 Merge pull request #21411 from holiman/fix_codelookup
core/vm: avoid map lookups for accessing jumpdest analysis
2020-08-06 08:09:15 +03:00
4a04127ce3 cmd/geth: fix import / export issues related to DB unavailability (#21414)
* should fix import / export issues related to DB unavailability

* document reason for makeConfigNode

* fix comment

* comment consistency

* remove comments

* lint
2020-08-06 08:02:05 +03:00
2de37f28e0 downloader: add eth65 tests (#21383)
* eth65 tests

linted

* remove non-latest eth light tests
2020-08-05 12:22:29 +03:00
5a88a7cf5b core: use errors.Is for consensus errors check (#21095) 2020-08-05 09:52:54 +02:00
1d25039ff5 p2p/nat: limit UPNP request concurrency (#21390)
This adds a lock around requests because some routers can't handle
concurrent requests. Requests are also rate-limited.
 
The Map function requests a new mapping exactly when the map timeout
occurs instead of 5 minutes earlier. This should prevent duplicate mappings.
2020-08-05 09:51:37 +02:00
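A rough sketch of the approach described above, with made-up names: a mutex serializes router requests and a minimum gap between requests provides the rate limit.

```go
package main

import (
	"sync"
	"time"
)

// natClient is illustrative; it serializes router requests and enforces a
// minimum delay between them, since some routers can't handle concurrency.
type natClient struct {
	mu      sync.Mutex
	lastReq time.Time
	minGap  time.Duration
}

func (c *natClient) doRequest(send func() error) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if wait := c.minGap - time.Since(c.lastReq); wait > 0 {
		time.Sleep(wait) // rate limit: space requests out
	}
	c.lastReq = time.Now()
	return send()
}

func main() {
	c := &natClient{minGap: 200 * time.Millisecond}
	_ = c.doRequest(func() error { return nil }) // e.g. an AddPortMapping call
}
```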
8ead45c20b core/vm: avoid map lookups for accessing jumpdest analysis 2020-08-04 15:45:35 +02:00
82a9e11058 ethstats: avoid concurrent write on websocket (#21404)
Fixes #21403
2020-08-04 12:21:51 +02:00
b35e4fce99 core: avoid modification of accountSet cache in tx_pool (#21159)
* core: avoid modification of accountSet cache in tx_pool

When runReorg runs, we may copy the dirtyAccounts' accountSet cache into promoteAddrs,
the set of accounts to be promoted. However, if a reset request arrives at the
same time, we may reuse promoteAddrs and modify the cached content, which goes
against the original intention of the accountSet cache. So we need to make a new
slice here to avoid modifying the cache.

* core: fix flatten condition + comment

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-04 11:51:53 +02:00
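The gist of the fix, as a hedged sketch with hypothetical names: flatten the cached set into a fresh slice before handing it to code that may append to or reorder it, so the cache itself stays untouched.

```go
package main

import "fmt"

func main() {
	// cached stands in for the flattened accountSet cache.
	cached := []string{"0xaaa", "0xbbb"}

	// Wrong: aliasing the cache lets later modifications leak back into it.
	// promoteAddrs := cached

	// Right: make a new slice so later modifications don't touch the cache.
	promoteAddrs := make([]string, len(cached))
	copy(promoteAddrs, cached)
	promoteAddrs = append(promoteAddrs, "0xccc")

	fmt.Println(cached)       // [0xaaa 0xbbb] – unchanged
	fmt.Println(promoteAddrs) // [0xaaa 0xbbb 0xccc]
}
```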
e24e05dd01 cmd/devp2p: print enode:// URL in enrdump (#21270)
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-04 11:33:07 +02:00
90dedea40f signer: EIP 712, parse bytes and bytesX as hex strings + correct padding (#21307)
* Handle hex strings for bytesX types

* Add tests for parseBytes

* Improve tests

* Return nil bytes if error is non-nil

* Right-pad instead of left-pad bytes

* More tests
2020-08-03 21:53:12 +02:00
c0c01612e9 node: refactor package node (#21105)
This PR significantly changes the APIs for instantiating Ethereum nodes in
a Go program. The new APIs are not backwards-compatible, but we feel that
this is made up for by the much simpler way of registering services on
node.Node. You can find more information and rationale in the design
document: https://gist.github.com/renaynay/5bec2de19fde66f4d04c535fd24f0775.

There is also a new feature in Node's Go API: it is now possible to
register arbitrary handlers on the user-facing HTTP server. In geth, this
facility is used to enable GraphQL.

There is a single minor change relevant for geth users in this PR: The
GraphQL API is no longer available separately from the JSON-RPC HTTP
server. If you want GraphQL, you need to enable it using the
./geth --http --graphql flag combination.

The --graphql.port and --graphql.addr flags are no longer available.
2020-08-03 19:40:46 +02:00
b2b14e6ce3 signer/core: EIP-712 encoded data should not reject a Domain without a ChainId (#21306)
* Do not check for a non-nil ChainId

* Add encoding test
2020-08-03 15:30:32 +02:00
290d6bd903 rpc: add SetHeader method to Client (#21392)
Resolves #20163

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-08-03 14:08:42 +02:00
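A small usage sketch of the new SetHeader method; the endpoint and header values are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("https://example.com/rpc") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	// Attach a custom header to every HTTP request made by this client.
	client.SetHeader("X-Api-Key", "placeholder-token")

	var blockNumber string
	if err := client.CallContext(context.Background(), &blockNumber, "eth_blockNumber"); err != nil {
		log.Fatal(err)
	}
	log.Println("latest block:", blockNumber)
}
```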
9c2ac6fbd5 rpc: remove silly use of ReadVarint in subscription ID generator (#21391)
Found by @protolambda
2020-07-31 16:20:31 +02:00
a00dc5095b Merge pull request #21358 from hendrikhofstadt/fix/tx-sort-time
core: sort txs at the same gas price by received time
2020-07-30 10:23:36 +03:00
ff90894636 core/rawdb: convert some comments to godoc convention (#21384) 2020-07-29 21:53:59 +02:00
9e04c5ec83 core/bloombits: use single channel for shutdown (#20878)
This replaces the two-stage shutdown scheme with the one we
use almost everywhere else: a single quit channel signalling
termination.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-29 13:49:12 +02:00
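The single-quit-channel pattern referred to above, as a generic sketch (names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type service struct {
	quit chan struct{} // closed exactly once to signal shutdown
	wg   sync.WaitGroup
}

func (s *service) start() {
	s.quit = make(chan struct{})
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		for {
			select {
			case <-s.quit:
				return // one signal replaces a two-stage handshake
			case <-time.After(10 * time.Millisecond):
				// ... one unit of periodic work ...
			}
		}
	}()
}

func (s *service) stop() {
	close(s.quit)
	s.wg.Wait() // don't return until the loop has exited
}

func main() {
	s := &service{}
	s.start()
	s.stop()
	fmt.Println("terminated")
}
```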
abf2d7d74f build: use -trimpath flag when building executables (#21374)
* Disable symbol table and DWARF generation by default.
Trimpath if compiling with Go >= 1.13

* Set Go to minimum version 1.13. Revert debug symbol changes.
2020-07-29 13:48:03 +03:00
1976bb3df0 eth/downloader: remove eth62 (#21378)
* init

notes

removed some mentions of eth62, bumped protocol err too old to >=63

* remove sanity checks and bump supported protocol version up to 63

* remove 62 tests, still need to add 65

* remove 65 tests
2020-07-29 13:47:19 +03:00
350a0490ab les: fix unittest (#21382) 2020-07-29 13:44:14 +03:00
37564ceda6 miner: refactor helper functions in worker.go (#21044)
This reduces complexity of some lengthy functions in worker.go,
making the code easier to read.
2020-07-28 18:16:49 +02:00
28c5a8a54b les: implement new les fetcher (#20692)
* cmd, consensus, eth, les: implement light fetcher

* les: address comment

* les: address comment

* les: address comments

* les: check td after delivery

* les: add linearExpiredValue for error counter

* les: fix import

* les: fix dead lock

* les: order announces by td

* les: encapsulate invalid counter

* les: address comment

* les: add more checks during the delivery

* les: fix log

* eth, les: fix lint

* eth/fetcher: address comment
2020-07-28 18:02:35 +03:00
298a19bbc6 core: API-less transaction time sorting 2020-07-28 17:13:40 +03:00
c47052a580 core: sort txs at the same gas price by received time 2020-07-28 17:13:39 +03:00
93da0cf8a1 cmd, core, eth, light, trie: dump clean cache periodically (#20391)
* cmd, core, eth, light, trie: dump clean cache periodically

* eth: update config

* trie: minor fix

* core, trie: address comments

* eth: remove useless

* trie: print clean cache dump start too

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-07-28 16:30:31 +03:00
79ce5537ab signer/storage: fix a badly ordered error check (#21379) 2020-07-28 12:47:05 +03:00
8e7bee9b56 params: begin v1.9.19 release cycle 2020-07-27 14:58:45 +03:00
f538259187 params: release Geth v1.9.18 2020-07-27 14:53:53 +03:00
b1be979443 params: upgrade CHTs (#21376) 2020-07-27 12:57:15 +03:00
e997f92caf Merge pull request #21368 from holiman/update_uint256
deps: update uint256 to v1.1.1
2020-07-24 15:02:52 +03:00
56434bfa89 deps: update uint256 to v1.1.1 2020-07-24 14:00:08 +02:00
6793ffa12b Merge pull request #21300 from rjl493456442/txpool-fix-queued-evictions
core: fix queued transaction eviction
2020-07-24 11:14:42 +03:00
5413df1dfa core: fix heartbeat in txpool
core: address comment
2020-07-24 11:12:59 +03:00
c374447401 core: fix queued transaction eviction
Solves issue #20582. Non-executable transactions should not be evicted on every tick if there are no transactions to promote or if a pending reset empties the pending list. Tests and logging were expanded to handle these cases in the future.

core/tx_pool: use a timestamp for each tx in the queue, but only update the heartbeat on promotion or when a pending transaction is replaced

queuedTs: proper naming
2020-07-24 11:11:57 +03:00
105922180f eth/downloader: refactor downloader + queue (#21263)
* eth/downloader: refactor downloader + queue

downloader, fetcher: throttle-metrics, fetcher filter improvements, standalone resultcache

downloader: more accurate deliverytime calculation, less mem overhead in state requests

downloader/queue: increase underlying buffer of results, new throttle mechanism

eth/downloader: updates to tests

eth/downloader: fix up some review concerns

eth/downloader/queue: minor fixes

eth/downloader: minor fixes after review call

eth/downloader: testcases for queue.go

eth/downloader: minor change, don't set progress unless progress...

eth/downloader: fix flaw which prevented useless peers from being dropped

eth/downloader: try to fix tests

eth/downloader: verify non-deliveries against advertised remote head

eth/downloader: fix flaw with checking closed-status causing hang

eth/downloader: hashing avoidance

eth/downloader: review concerns + simplify resultcache and queue

eth/downloader: add back some locks, address review concerns

downloader/queue: fix remaining lock flaw

* eth/downloader: nitpick fixes

* eth/downloader: remove the *2*3/4 throttling threshold dance

* eth/downloader: print correct throttle threshold in stats

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-07-24 10:46:26 +03:00
3a57eecc69 mobile: fix build on iOS (#21362)
This fixes the iOS framework build by naming the second parameter of the
Signer interface method. The name is important because it becomes part
of the objc method signature.

Fixes #21340
2020-07-23 19:15:40 +02:00
997b55236e build: fix GOBIN for gomobile commands (#21361) 2020-07-23 12:34:08 +02:00
4c268e65a0 cmd/utils: implement configurable developer (--dev) account options (#21301)
* geth,utils: implement configurable developer account options

Prior to this change --dev (developer) mode
generated one account with an empty password,
irrespective of existing --password and --miner.etherbase
options.

This change makes --dev mode compatible with these
existing flags.

--dev mode may now be used in conjunction with
--password and --miner.etherbase flags to configure
the developer faucet using an existing keystore or
in creating a new account.

Signed-off-by: meows <b5c6@protonmail.com>

* main: remove key/pass flags from usage developer section

These flags are included already in other sections,
and it is not desired to duplicate them.

They were originally included in this section
along with added support for these flags in the
developer mode.

Signed-off-by: meows <b5c6@protonmail.com>
2020-07-23 06:47:34 +03:00
0b53e485d8 Merge pull request #21352 from karalabe/dev-noinit-genesis
cmd/utils: reuse existing genesis in persistent dev mode
2020-07-22 17:39:08 +03:00
9e22e912e3 cmd/utils: reuse existing genesis in persistent dev mode 2020-07-21 15:58:29 +03:00
123864fc05 whisper/whisperv6: improve test error messages (#21348) 2020-07-21 10:53:06 +02:00
7163a6664e ethclient: serialize negative block number as "pending" (#21177)
Fixes #21175

Co-authored-by: sammy007 <sammy007@users.noreply.github.com>
Co-authored-by: Adam Schmideg <adamschmideg@users.noreply.github.com>
2020-07-21 10:51:15 +02:00
4366c45e4e les: make clientPool.connectedBias configurable (#21305) 2020-07-21 10:23:40 +02:00
3a52c4dcf2 Merge pull request #21336 from karalabe/tiny-ref-optimization
core/vm: use pointers to operations vs. copy by value
2020-07-21 10:53:12 +03:00
722b742780 params: begin v1.9.18 release cycle 2020-07-20 15:58:33 +03:00
748f22c192 params: release Geth v1.9.17 2020-07-20 15:56:42 +03:00
43e2e58cbd accounts, internal: fix funding check when estimating gas (#21346)
* internal, accounts: fix funding check when estimate gas

* accounts, internal: address comments
2020-07-20 15:52:42 +03:00
35ddf36229 Merge pull request #21347 from karalabe/ethstats-fixes
ethstats: fix reconnection issue, implement primus pings
2020-07-20 11:47:13 +03:00
0fef66c739 ethstats: fix reconnection issue, implement primus pings 2020-07-20 11:46:41 +03:00
508891e64b core/vm: use pointers to operations vs. copy by value 2020-07-16 15:32:01 +03:00
9e88224eb8 core: raise gas limit in --dev mode, seed blake precompile (#21323)
* Set gasLimit in --dev mode to be 9m.

* core: Set gasLimit to 11.5 million and add 1 wei allocation for BLAKE2b
2020-07-16 15:08:38 +03:00
295693759e core/vm: less allocations for various call variants (#21222)
* core/vm/runtime/tests: add more benchmarks

* core/vm: initial work on improving alloc count for calls to precompiles

name                                  old time/op    new time/op    delta
SimpleLoop/identity-precompile-10M-6     117ms ±75%      43ms ± 1%  -63.09%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   79.6ms ± 4%    70.5ms ± 1%  -11.42%  (p=0.008 n=5+5)

name                                  old alloc/op   new alloc/op   delta
SimpleLoop/identity-precompile-10M-6    24.4MB ± 0%     4.9MB ± 0%  -79.94%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   13.2kB ± 0%    13.2kB ± 0%     ~     (p=0.357 n=5+5)

name                                  old allocs/op  new allocs/op  delta
SimpleLoop/identity-precompile-10M-6      382k ± 0%      153k ± 0%  -59.99%  (p=0.000 n=5+4)
SimpleLoop/loop-10M-6                     40.0 ± 0%      40.0 ± 0%     ~     (all equal)

* core/vm: don't allocate big.int for touch

name                                  old time/op    new time/op    delta
SimpleLoop/identity-precompile-10M-6    43.3ms ± 1%    42.4ms ± 7%     ~     (p=0.151 n=5+5)
SimpleLoop/loop-10M-6                   70.5ms ± 1%    76.7ms ± 1%   +8.67%  (p=0.008 n=5+5)

name                                  old alloc/op   new alloc/op   delta
SimpleLoop/identity-precompile-10M-6    4.90MB ± 0%    2.46MB ± 0%  -49.83%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   13.2kB ± 0%    13.2kB ± 1%     ~     (p=0.571 n=5+5)

name                                  old allocs/op  new allocs/op  delta
SimpleLoop/identity-precompile-10M-6      153k ± 0%       76k ± 0%  -49.98%  (p=0.029 n=4+4)
SimpleLoop/loop-10M-6                     40.0 ± 0%      40.0 ± 0%     ~     (all equal)

* core/vm: reduce allocs in staticcall

name                                  old time/op    new time/op    delta
SimpleLoop/identity-precompile-10M-6    42.4ms ± 7%    37.5ms ± 6%  -11.68%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   76.7ms ± 1%    69.1ms ± 1%   -9.82%  (p=0.008 n=5+5)

name                                  old alloc/op   new alloc/op   delta
SimpleLoop/identity-precompile-10M-6    2.46MB ± 0%    0.02MB ± 0%  -99.35%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   13.2kB ± 1%    13.2kB ± 0%     ~     (p=0.143 n=5+5)

name                                  old allocs/op  new allocs/op  delta
SimpleLoop/identity-precompile-10M-6     76.4k ± 0%      0.1k ± 0%     ~     (p=0.079 n=4+5)
SimpleLoop/loop-10M-6                     40.0 ± 0%      40.0 ± 0%     ~     (all equal)

* trie: better use of hasher keccakState

* core/state/statedb: reduce allocations in getDeletedStateObject

* core/vm: reduce allocations in all call derivates

* core/vm: reduce allocations in call variants

- Make returnstack `uint32`
- Use a `sync.Pool` of `stack`s

* core/vm: fix tests

* core/vm: goimports

* core/vm: tracer fix + staticcall gas fix

* core/vm: add back snapshot to staticcall

* core/vm: review concerns + make returnstack pooled + enable returndata in traces

* core/vm: fix some test tracer method signatures

* core/vm: run gencodec, minor comment polish

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-07-16 15:06:19 +03:00
240d1851db trie: quell linter in commiter.go (#21329) 2020-07-15 11:00:04 +03:00
6c9f040ebe core: transaction pool optimizations (#21328)
* core: added local tx pool test case

* core, crypto: various allocation savings regarding tx handling

* core/txlist, txpool: save a reheap operation, avoid some bigint allocs

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-07-14 21:42:32 +02:00
5b081ab214 cmd/clef: change --rpcport to --http.port and update flags in docs (#21318) 2020-07-14 10:35:32 +02:00
6ef4495a8f p2p/discover: require table nodes to have an IP (#21330)
This fixes a corner case in discv5. The issue cannot happen in discv4
because it performs IP checks on all incoming node information.
2020-07-13 22:25:45 +02:00
79addac698 core/rawdb: better log messages for ancient failure (#21327) 2020-07-13 20:43:30 +02:00
7d5267e3a2 .github: Change Code Owners (#21326)
* modify code owners

* add marius
2020-07-13 16:20:20 +02:00
4edbc1f2bb internal/ethapi: cap txfee for SignTransaction and Resend (#21231) 2020-07-13 12:45:39 +02:00
6cf6e1d753 README.md: point Go API reference link to pkg.go.dev (#21321) 2020-07-13 11:22:30 +02:00
2e08dad9e6 p2p/discv5: unset pingEcho on pong timeout (#21324) 2020-07-13 11:20:47 +02:00
af258efdb9 light: goimports -w (#21325) 2020-07-13 11:17:49 +02:00
6eef141aef les: historical data garbage collection (#19570)
This change introduces garbage collection for the light client. Historical
chain data is deleted periodically. If you want to disable the GC, use
the --light.nopruning flag.
2020-07-13 11:02:54 +02:00
b8dd0890b3 params: begin v1.9.17 release cycle 2020-07-10 12:40:31 +02:00
ea3b00ad75 params: go-ethereum v1.9.16 stable 2020-07-10 12:38:48 +02:00
feb40e3a4d accounts/external: remove dependency on internal/ethapi (#21319)
Fixes #20535

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-10 11:33:31 +02:00
beabf95ad7 cmd/geth, cmd/puppeth: replace deprecated rpc and ws flags in tests and docs (#21317) 2020-07-09 17:48:40 +02:00
6ccce0906a common/math: use math/bits intrinsics for Safe* (#21316)
This is a resubmit of ledgerwatch/turbo-geth#556. The performance
benefit of this change is negligible, but it does remove a TODO.
2020-07-09 17:45:49 +02:00
bcb3087450 Revert "core, txpool: less allocations when handling transactions (#21232)"
Reverting because this change started handling account balances as
uint64 in the transaction pool, which is incorrect.

This reverts commit af5c97aebe.
2020-07-09 14:02:03 +02:00
967d8de77a eth/downloader: fix peer idleness tracking when restarting state sync (#21260)
This fixes two issues with state sync restarts:

When sync restarts with a new root, some peers can have in-flight requests.
Since all peers with active requests were marked idle when exiting sync,
the new sync would schedule more requests for those peers. When the
response for the earlier request arrived, the new sync would reject it and
mark the peer idle again, rendering the peer useless until it disconnected.

The other issue was that peers would not be marked idle when they had
delivered a response, but the response hadn't been processed before
restarting the state sync. This also made the peer useless because it
would be permanently marked busy.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-08 23:08:08 +02:00
7a556abe15 go.mod: upgrade to github.com/golang/snappy with arm64 asm (#21304) 2020-07-08 14:06:53 +02:00
5b1cfdef89 eth: increase timeout in TestBroadcastBlock (#21299) 2020-07-08 11:50:26 +02:00
c16967c267 cmd/clef: Fix broken link in README and other minor fixes (#21303) 2020-07-07 22:23:23 +02:00
6a48ae37b2 cmd/devp2p: add discv4 test suite (#21163)
This adds a test suite for discovery v4. The test suite is a port of the Hive suite for
discovery, and will replace the current suite on Hive soon-ish. The tests can be
run locally with this command:

    devp2p discv4 test -remote enode://...

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-07 14:37:33 +02:00
e5871b928f cmd/clef: Update README with external v6.0.0 & internal v7.0.1 APIs (#21298)
Changes include:
* Updates response docs for `account_new`, `account_list`, `account_signTransaction`
* Removes `account_import`, `account_export` docs
* Adds `account_version` docs
* Updates request docs for `ui_approveListing`, `ui_approveSignData`, `ui_showInfo`, `ui_showError`, `ui_onApprovedTx`
* Adds `ui_approveNewAccount`, `ui_onInputRequired` docs
2020-07-07 11:12:38 +02:00
6d8e51ab88 cmd, node: dump empty value config (#21296) 2020-07-06 22:09:30 +02:00
6315b6fcc0 rlp: reduce allocations for big.Int and byte array encoding (#21291)
This change further improves the performance of RLP encoding by removing
allocations for big.Int and [...]byte types. I have added a new benchmark
that measures RLP encoding of types.Block to verify that performance is
improved.
2020-07-06 11:17:09 +02:00
fa01117498 build/ci: handle split up listing (#21293) 2020-07-04 20:10:48 +02:00
490b380a04 cmd/geth: allow configuring metrics HTTP server on separate endpoint (#21290)
Exposing /debug/metrics and /debug/metrics/prometheus was dependent
on --pprof, which also exposes other HTTP APIs. This change makes it possible
to run the metrics server on an independent endpoint without enabling pprof.
2020-07-03 19:12:22 +02:00
61270e5e1c eth/gasprice: lighter gas price oracle for light client (#20409)
This PR reduces the bandwidth used by the light client to compute the
recommended gas price. The current mechanism for suggesting the price is:

- retrieve the 20 most recent blocks
- collect the lowest gas price of each of these blocks
- sort the price array and return the value at the 60% position

This works for full nodes, which have all blocks available locally.
However, this is very expensive for the light client because the light
client needs to retrieve block bodies from the network.

The PR changes the default options for light client. With the new config,
the light client only retrieves the two latest blocks, but in order to
collect more sample transactions, the 3 lowest prices are collected from
each block.

This PR also changes the behavior for empty blocks. If a block is empty,
the latest price is reused for sampling.
2020-07-03 14:50:35 +02:00
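A sketch of the sampling idea described above (take the cheapest few prices from each of the most recent blocks, then pick a percentile); the function, names and numbers are illustrative, not the oracle's actual code.

```go
package main

import (
	"fmt"
	"math/big"
	"sort"
)

// suggestPrice picks a percentile of the cheapest prices sampled from the
// most recent blocks. blocks holds, per block, the gas prices of its txs.
func suggestPrice(blocks [][]*big.Int, samplesPerBlock, percentile int) *big.Int {
	var prices []*big.Int
	for _, blockPrices := range blocks {
		sorted := append([]*big.Int(nil), blockPrices...)
		sort.Slice(sorted, func(i, j int) bool { return sorted[i].Cmp(sorted[j]) < 0 })
		if len(sorted) > samplesPerBlock {
			sorted = sorted[:samplesPerBlock] // keep only the cheapest few
		}
		prices = append(prices, sorted...)
	}
	if len(prices) == 0 {
		return big.NewInt(0)
	}
	sort.Slice(prices, func(i, j int) bool { return prices[i].Cmp(prices[j]) < 0 })
	return prices[(len(prices)-1)*percentile/100]
}

func main() {
	blocks := [][]*big.Int{
		{big.NewInt(5), big.NewInt(9), big.NewInt(2)},
		{big.NewInt(7), big.NewInt(3)},
	}
	fmt.Println(suggestPrice(blocks, 3, 60)) // 5
}
```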
07a95ce571 les/checkpointoracle: don't lookup checkpoint more than once per minute (#21285)
* les/checkpointoracle: don't lookup checkpoint more than once per second

* les/checkpoint/oracle: change oracle checktime to 1 minute
2020-07-02 10:15:11 +02:00
04c4e50d72 ethapi: don't crash when keystore-specific methods are called but external signer used (#21279)
* console: prevent importRawKey from getting into CLI history

* internal/ethapi: error on keystore-methods when no keystore is present
2020-07-02 10:00:18 +02:00
7451fc637d internal/ethapi: default gas to maxgascap, not max int64 (#21284) 2020-07-02 09:43:42 +02:00
12867d152c rpc, internal/ethapi: default rpc gascap at 25M + better error message (#21229)
* rpc, internal/ethapi: default rpc gascap at 50M + better error message

* eth,internal: make globalgascap uint64

* core/tests: fix compilation failure

* eth/config: gascap at 25M + minor review concerns
2020-07-01 19:54:21 +02:00
af5c97aebe core, txpool: less allocations when handling transactions (#21232)
* core: use uint64 for total tx costs instead of big.Int

* core: added local tx pool test case

* core, crypto: various allocation savings regarding tx handling

* Update core/tx_list.go

* core: added tx.GasPriceIntCmp for comparison without allocation

adds a method to remove unneeded allocation in comparison to tx.gasPrice

* core: handle pools full of locals better

* core/tests: benchmark for tx_list

* core/txlist, txpool: save a reheap operation, avoid some bigint allocs

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-07-01 19:35:26 +02:00
8dfd66f701 rlp: avoid list header allocation in encoder (#21274)
List headers made up 11% of all allocations during sync. This change
removes most of those allocations by keeping the list header values
cached in the encoder buffer instead. Since encoder buffers are pooled,
list headers are no longer allocated in the common case where an
encoder buffer is available for reuse.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-01 13:49:19 +02:00
ec51cbb5fb cmd/geth: LES priority client test (#20719)
This adds a regression test for the LES priority client API.
2020-07-01 10:31:11 +02:00
d671dbd5b7 eth/downloader: fixes data race between synchronize and other methods (#21201)
* eth/downloader: fixed data race between synchronize and Progress

There was a race condition between `downloader.synchronize()` and `Progress`, `syncWithPeer`, `fetchHeight`, `findAncestors` and `processHeaders`.
This PR changes the behavior of the downloader a bit.
Previously the functions `Progress`, `syncWithPeer`, `fetchHeight`, `findAncestors` and `processHeaders` read the syncMode anew within their loops. Now they read the syncMode at the start of their function and don't change it during their runtime.

* eth/downloaded: comment

* eth/downloader: added comment
2020-06-30 19:43:29 +02:00
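A compressed illustration of that "read the mode once" pattern, assuming a simplified Downloader type; the real code stores the mode behind an atomic and exposes a getter, but the structure here is hypothetical.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type SyncMode uint32

const (
	FullSync SyncMode = iota
	FastSync
)

type Downloader struct {
	mode uint32 // accessed atomically; may be changed concurrently by synchronise()
}

func (d *Downloader) getMode() SyncMode { return SyncMode(atomic.LoadUint32(&d.mode)) }

// processHeaders reads the mode once up front instead of re-reading it inside
// the loop, so a concurrent mode change can't flip behaviour mid-run.
func (d *Downloader) processHeaders(batches [][]string) {
	mode := d.getMode()
	for _, batch := range batches {
		if mode == FastSync {
			fmt.Println("fast-sync handling of", len(batch), "headers")
		} else {
			fmt.Println("full-sync handling of", len(batch), "headers")
		}
	}
}

func main() {
	d := &Downloader{}
	atomic.StoreUint32(&d.mode, uint32(FastSync))
	d.processHeaders([][]string{{"h1", "h2"}})
}
```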
1e635bd0bd go.mod: updated crypto deps causing build failure (#21276) 2020-06-30 16:05:59 +02:00
b86b1e6d43 go.mod: bump gopsutil version (#21275) 2020-06-30 15:22:21 +02:00
ddeea1e0c6 core: types: less allocations when hashing and tx handling (#21265)
* core, crypto: various allocation savings regarding tx handling

* core: reduce allocs for gas price comparison

This change reduces the allocations needed for comparing different transactions to each other.
A call to `tx.GasPrice()` copies the gas price, as it has to be safe against modifications and
also needs to be threadsafe. For comparing and ordering different transactions we don't need
these guarantees.

* core: added tx.GasPriceIntCmp for comparison without allocation

adds a method to remove unneeded allocation in comparison to tx.gasPrice

* core/types: pool legacykeccak256 objects in rlpHash

rlpHash is by far the most used function in core that allocates a legacyKeccak256 object on each call.
Since it is so widely used it makes sense to add pooling here so we relieve the GC.
On my machine these changes result in over 100 million fewer allocations and over 30 GB less allocated memory.

* reverted some changes

* reverted some changes

* trie: use crypto.KeccakState instead of replicating code

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-06-30 11:59:06 +02:00
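A hedged sketch of the pooling idea mentioned for rlpHash, built directly on x/crypto keccak rather than the go-ethereum internals; the actual code additionally avoids the Sum allocation by reading the hash state.

```go
package main

import (
	"fmt"
	"hash"
	"sync"

	"golang.org/x/crypto/sha3"
)

// hasherPool reuses keccak256 states so hot hashing paths don't allocate a
// fresh hasher on every call.
var hasherPool = sync.Pool{
	New: func() interface{} { return sha3.NewLegacyKeccak256() },
}

func keccak256(data []byte) [32]byte {
	h := hasherPool.Get().(hash.Hash)
	defer hasherPool.Put(h)
	h.Reset()
	h.Write(data)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	fmt.Printf("%x\n", keccak256([]byte("hello")))
}
```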
e376d2fb31 cmd/evm: add state transition tool for testing (#20958)
This PR implements the EVM state transition tool, which is intended
to be the replacement for our retesteth client implementation.
Documentation is present in the cmd/evm/README.md file.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-06-30 10:12:51 +02:00
dd91c7ce6a cmd: abstract getPassPhrase functions into one (#21219)
* [cmd] Abstract `getPassPhrase` functions into one.

* cmd/ethkey: fix compilation failure

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2020-06-30 09:56:40 +02:00
c13df14581 utils: fix ineffectual miner config flags (#21271)
Without the use of the global scope, these flags didn't actually modify
the miner configuration, since we weren't reading from the
proper context scope, which should be global (vs. the subcommand scope).

Signed-off-by: meows <b5c6@protonmail.com>
2020-06-30 09:05:25 +02:00
02cea2330d eth: returned revert reason in traceTx (#21195)
* eth: returned revert reason in traceTx

* eth: return result data
2020-06-26 12:19:31 +02:00
413358abb9 cmd/geth: make import cmd exit with 1 if import errors occurred (#21244)
The import command should not return a 0 status
code if the import finishes prematurely because
of an import error.

Returning the error causes the program to exit with 1
if the err is non-nil.

Signed-off-by: meows <b5c6@protonmail.com>
2020-06-24 22:01:58 +02:00
0c82928981 core/vm: fix incorrect computation of BLS discount (#21253)
* core/vm: fix incorrect computation of discount

During testing on YOLOv1 we found that the way geth calculates the discount
is not in line with the specification. We calculated
128 * Bls12381GXMulGas * discount / 1000 whenever we received more than 128 pairs
of values, whereas the correct computation is k * Bls12381... for k > 128.

* core/vm: better logic for discount calculation

* core/vm: better calculation logic, added worstcase benchmarks

* core/vm: better benchmarking logic
2020-06-24 21:58:28 +02:00
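The gist of that pricing fix as an illustrative computation; the constants and discount table below are placeholders, not the EIP-2537 values.

```go
package main

import "fmt"

// multiExpGas prices a multi-exponentiation of k pairs. The discount table
// only has entries up to some maximum k; beyond that the last (maximum)
// discount applies, but the multiplier must remain k, not the table length.
func multiExpGas(k int, mulGas uint64, discount []uint64) uint64 {
	if k == 0 {
		return 0
	}
	d := discount[len(discount)-1]
	if k <= len(discount) {
		d = discount[k-1]
	}
	return uint64(k) * mulGas * d / 1000 // k * gas * discount / 1000, even for k > len(discount)
}

func main() {
	discount := []uint64{1000, 900, 800} // placeholder 3-entry table
	fmt.Println(multiExpGas(2, 12000, discount)) // 2 * 12000 * 900 / 1000 = 21600
	fmt.Println(multiExpGas(5, 12000, discount)) // 5 * 12000 * 800 / 1000 = 48000, not capped at 3 pairs
}
```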
b482423e61 trie: reduce allocs in insertPreimage (#21261) 2020-06-24 21:56:27 +02:00
93142e50c3 eth: don't block if transaction broadcast loop fails (#21255)
* eth: don't block if transaction broadcast loop is returned

* eth: kick out peer if we failed to send message

* eth: address comment
2020-06-24 13:54:13 +03:00
23f1a0b783 crypto/secp256k1: enable 128-bit int code and endomorphism optimization (#21203)
* crypto/secp256k1: enable use of __int128

This speeds up scalar & field calculations a lot.

* crypto/secp256k1: enable endomorphism optimization
2020-06-24 13:51:32 +03:00
da180ba097 cmd/devp2p: add commands for node key management (#21202)
These commands mirror the key/URL generation functions of cmd/bootnode.

    $ devp2p key generate mynode.key
    $ devp2p key to-enode mynode.key -ip 203.0.113.21 -tcp 30304
    enode://78a7746089baf4b8615f54a5f0b67b22b1...
2020-06-24 10:41:53 +02:00
c42d1390d3 Merge pull request #21256 from karalabe/p2p-packet-metrics
p2p: measure packet throughput too, not just bandwidth
2020-06-24 09:44:23 +03:00
42ccb2fdbd p2p: measure packet throughput too, not just bandwidth 2020-06-24 09:36:20 +03:00
dce533c246 whisper: fix time.sleep by time.ticker in whisper_test (#21251) 2020-06-23 10:46:59 +02:00
9a188c975d common/fdlimit: build on DragonflyBSD (#21241)
* common/fdlimit: build on DragonflyBSD

* review feedback
2020-06-19 15:43:52 +02:00
3ebfeb09fe core/rawdb: fix high memory usage in freezer (#21243)
The ancients variable in the freezer is a list of hashes, which
identifies all of the hashes to be frozen. The slice is being allocated
with a capacity of `limit`, which is the number of the last block
this batch will attempt to add to the freezer. That means we are
allocating memory for all of the blocks in the freezer, not just
the ones to be added.

If instead we allocate `limit - f.frozen`, we will only allocate
enough space for the blocks we're about to add to the freezer. On
mainnet this reduces usage by about 320 MB.
2020-06-19 10:51:37 +03:00
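In slice terms the fix is roughly the following (with small illustrative numbers; on mainnet the difference was about 320 MB):

```go
package main

import "fmt"

func main() {
	var (
		frozen uint64 = 9000 // blocks already in the freezer
		limit  uint64 = 9100 // last block this batch will try to freeze
	)
	// Before: capacity sized for every block in the freezer (~limit hashes).
	before := make([][32]byte, 0, limit)
	// After: capacity sized only for the blocks about to be added.
	after := make([][32]byte, 0, limit-frozen)

	fmt.Println(cap(before), cap(after)) // 9100 vs 100
}
```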
5435e0d1a1 whisper : use timer.Ticker instead of sleep (#21240)
* whisper : use timer.Ticker instead of sleep

* lint: Fix linter error

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-06-18 17:58:49 +02:00
e029cc6616 go.mod: update snappy dependency (#21237) 2020-06-18 14:01:49 +03:00
56a319b9da cmd, eth, internal, les: add txfee cap (#21212)
* cmd, eth, internal, les: add gasprice cap

* cmd/utils, eth: add default value for gasprice cap

* all: use txfee cap

* cmd, eth: add fix

* cmd, internal: address comments
2020-06-17 10:46:31 +03:00
bcf19bc4be core/rawdb: swap tailId and itemOffset for deleted items in freezer (#21220)
* fix(freezer): tailId filenum offset were misplaced

* core/rawdb: assume first item in freezer always start from zero
2020-06-17 09:41:07 +02:00
eb9d7d15ec Merge pull request #21170 from duanhao0814/fix-dup-ecrecover
core: filter out txs with invalid signatures as soon as possible
2020-06-16 11:35:16 +03:00
a981b60c25 eth/downloader: don't use defer for unlock before return (#21227)
Co-authored-by: linjing <linjingjing@baidu.com>
2020-06-15 15:46:27 +03:00
9371b2f70c internal/web3ext: add missing params to debug.accountRange (#21208) 2020-06-11 15:41:43 +02:00
c85fdb76ee go.mod: update uint256 to 1.1.0 (#21206) 2020-06-11 07:27:43 +03:00
e30c0af861 build, internal/ethapi, crypto/bls12381: fix typos (#21210)
speicifc -> specific
assigened -> assigned
frobenious -> frobenius
2020-06-10 23:25:32 +03:00
4a19c0e7b8 core, eth, internal: include read storage entries in structlog output (#21204)
* core, eth, internal: extend structLog tracer

* core/vm, internal: add storage view

* core, internal: add slots to storage directly

* core: remove useless

* core: address martin's comment

* core/vm: fix tests
2020-06-10 11:46:13 +02:00
e9ba536d85 eth/downloader: fix spuriously failing tests (#21149)
* eth/downloader tests: fix spurious failing test due to race between receipts/headers

* miner tests: fix travis failure on arm64

* eth/downloader: tests - store td in ancients too
2020-06-09 11:39:19 +02:00
89043cba75 accounts/abi: make GetType public again (#21157) 2020-06-09 10:26:56 +02:00
d5c267fd30 accounts/keystore: fix typo in error message (#21200) 2020-06-09 10:23:42 +02:00
a0797e37f8 Merge pull request #21192 from karalabe/fix-escape-analysis
core/state: avoid escape analysis fault when accessing cached state
2020-06-08 16:38:05 +03:00
80e887d7bf core/state: avoid escape analysis fault when accessing cached state 2020-06-08 16:11:37 +03:00
cf6674539c core/vm: use uint256 in EVM implementation (#20787)
* core/vm: use fixed uint256 library instead of big

* core/vm: remove intpools

* core/vm: upgrade uint256, fixes uint256.NewFromBig

* core/vm: use uint256.Int by value in Stack

* core/vm: upgrade uint256 to v1.0.0

* core/vm: don't preallocate space for 1024 stack items (only 16)

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-06-08 15:24:40 +03:00
39abd92ca8 ethstats: use timer instead of time.sleep (#20924) 2020-06-08 14:27:08 +03:00
45b7535137 cmd/ethkey: support --passwordfile in generate command (#21183) 2020-06-08 11:55:51 +02:00
da06519347 params: begin v1.9.16 release cycle 2020-06-08 11:00:17 +02:00
0f77f34bb6 params: go-ethereum v1.9.15 stable 2020-06-08 10:56:48 +02:00
651233454e Merge pull request #21188 from karalabe/cht-1.9.15
params: update CHTs for 1.9.15 release
2020-06-08 11:16:23 +03:00
a5c827af86 params: update CHTs for 1.9.15 release 2020-06-08 11:14:33 +03:00
0b3f3be2b5 internal/ethapi: return revert reason for eth_call (#21083)
* internal/ethapi: return revert reason for eth_call

* internal/ethapi: moved revert reason logic to doCall

* accounts/abi/bind/backends: added revert reason logic to simulated backend

* internal/ethapi: fixed linting error

* internal/ethapi: check if require reason can be unpacked

* internal/ethapi: better error logic

* internal/ethapi: simplify logic

* internal/ethapi: return vmError()

* internal/ethapi: move handling of revert out of docall

* graphql: removed revert logic until spec change

* rpc: internal/ethapi: added custom error types

* graphql: use returndata instead of return

Return() checks if there is an error. If an error is found, we return nil.
For most use cases it can be beneficial to return the output even if there
was an error. This code should be changed anyway once the spec supports
error reasons in graphql responses

* accounts/abi/bind/backends: added tests for revert reason

* internal/ethapi: add errorCode to revert error

* internal/ethapi: add errorCode of 3 to revertError

* internal/ethapi: unified estimateGasErrors, simplified logic

* internal/ethapi: unified handling of errors in DoEstimateGas

* rpc: print error data field

* accounts/abi/bind/backends: unify simulatedBackend and RPC

* internal/ethapi: added binary data to revertError data

* internal/ethapi: refactored unpacking logic into newRevertError

* accounts/abi/bind/backends: fix EstimateGas

* accounts, console, internal, rpc: minor error interface cleanups

* Revert "accounts, console, internal, rpc: minor error interface cleanups"

This reverts commit 2d3ef53c53.

* re-apply the good parts of 2d3ef53c53

* rpc: add test for returning server error data from client

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-06-08 11:09:49 +03:00
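For context on what "unpacking the revert reason" means, here is a standalone decoder for the standard Solidity `Error(string)` revert encoding; this is a hedged sketch, not the code added in this PR.

```go
package main

import (
	"bytes"
	"fmt"
	"math/big"
)

// revertSelector is the 4-byte selector of Error(string), keccak256("Error(string)")[:4].
var revertSelector = []byte{0x08, 0xc3, 0x79, 0xa0}

// unpackRevert extracts the human-readable reason from ABI-encoded revert data.
func unpackRevert(ret []byte) (string, bool) {
	if len(ret) < 4+64 || !bytes.Equal(ret[:4], revertSelector) {
		return "", false
	}
	data := ret[4:]
	offset := new(big.Int).SetBytes(data[:32]).Uint64()
	if offset+32 > uint64(len(data)) {
		return "", false
	}
	size := new(big.Int).SetBytes(data[offset : offset+32]).Uint64()
	if offset+32+size > uint64(len(data)) {
		return "", false
	}
	return string(data[offset+32 : offset+32+size]), true
}

func main() {
	// Hand-built encoding of revert("nope"): selector, offset 0x20, length 4, padded string.
	data := append([]byte{}, revertSelector...)
	data = append(data, make([]byte, 31)...)
	data = append(data, 0x20)
	data = append(data, make([]byte, 31)...)
	data = append(data, 0x04)
	data = append(data, []byte("nope")...)
	data = append(data, make([]byte, 28)...)

	reason, ok := unpackRevert(data)
	fmt.Println(ok, reason) // true nope
}
```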
88125d8bd0 core: fix typo in comments (#21181) 2020-06-08 10:53:56 +03:00
55f30db0ae core/vm, crypt/bls12381: fixed comments in bls (#21182)
* core/vm: crypto/bls12381: minor code comments

* crypto/bls12381: fix comment
2020-06-08 10:53:19 +03:00
9d93535674 node: missing comma on toml tags (#21187) 2020-06-08 10:52:18 +03:00
4b2ff1457a go.mod: upgrade go-duktape to hide unused function warning (#21168) 2020-06-04 17:42:05 +02:00
cefa2ab1bd Merge pull request #21173 from karalabe/faucet-delete-oldaccs
cmd/faucet: delete old keystore when importing new faucet key
2020-06-04 11:58:40 +03:00
b1b75f0089 accounts/keystore, cmd/faucet: return old account to allow unlock 2020-06-04 10:57:21 +03:00
201e345c65 Merge pull request #21172 from karalabe/faucet-twitter-mobile
acounts/keystore, cmd/faucet: fix faucet double import, fix twitter url
2020-06-04 09:30:40 +03:00
469b8739eb acounts/keystore, cmd/faucet: fix faucet double import, fix twitter url 2020-06-04 08:59:26 +03:00
8523ad450d core: filter out txs with invalid signatures as soon as possible
Once we detect an invalid transaction while recovering signatures, we should
exclude that transaction directly to avoid validating its signature again later.

This reduces the number of signature validations for transactions with invalid
signatures to a single one.
2020-06-04 10:37:21 +08:00
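A sketch of the early-exclusion idea using the public types.Sender helper, which caches the recovered sender on the transaction; the filter function and surrounding plumbing are illustrative.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// filterValid drops transactions whose signature cannot be recovered, so the
// expensive ecrecover is only ever attempted once per bad transaction.
func filterValid(signer types.Signer, txs []*types.Transaction) []*types.Transaction {
	valid := txs[:0]
	for _, tx := range txs {
		if _, err := types.Sender(signer, tx); err != nil {
			continue // invalid signature: exclude immediately
		}
		valid = append(valid, tx) // sender is now cached on the tx
	}
	return valid
}

func main() {
	signer := types.NewEIP155Signer(big.NewInt(1))
	unsigned := types.NewTransaction(0, common.Address{}, big.NewInt(0), 21000, big.NewInt(1), nil)
	fmt.Println(len(filterValid(signer, []*types.Transaction{unsigned}))) // 0: no valid signature
}
```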
8b83125739 Merge pull request #21162 from karalabe/daofork-order-check-fix
cmd/geth: fix the fork orders for DAO tests
2020-06-03 12:20:57 +03:00
f52ff0f1e9 cmd/geth: fix the fork orders for DAO tests 2020-06-03 12:17:54 +03:00
890757f03a cmd, core, params: initial support for yolo-v1 testnet (#21154)
* core,params,puppeth: initial support for yolo-v1 testnet

* cmd/geth, core: add yolov1 console flag

* cmd, core, params: YoloV1 bakein fixups

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-06-03 12:05:15 +03:00
4fc678542d core/vm, crypto/bls12381, params: add bls12-381 elliptic curve precompiles (#21018)
* crypto: add bls12-381 elliptic curve wrapper

* params: add bls12-381 precompile gas parameters

* core/vm: add bls12-381 precompiles

* core/vm: add bls12-381 precompile tests

* go.mod, go.sum: use latest bls12381 lib

* core/vm: move point encode/decode functions to base library

* crypto/bls12381: introduce bls12-381 library init function

* crypto/bls12381: import bls12381 elliptic curve implementation

* go.mod, go.sum: remove bls12-381 library

* remove unused frobenius coeffs

suppress warning for inp that is used in asm

* add mappings tests for zero inputs

fix swu g2 minus z inverse constant

* crypto/bls12381: fix typo

* crypto/bls12381: better comments for bls12381 constants

* crypto/bls12381: swu, use single conditional for e2

* crypto/bls12381: utils, delete empty line

* crypto/bls12381: utils, use FromHex for string to big

* crypto/bls12381: g1, g2, strict length check for FromBytes

* crypto/bls12381: field_element, comparison changes

* crypto/bls12381: change swu, isogeny constants with hex values

* core/vm: fix point multiplication comments

* core/vm: fix multiexp gas calculation and lookup for g1 and g2

* core/vm: simpler input length check for multiexp and pairing precompiles

* core/vm: rm empty multiexp result declarations

* crypto/bls12381: remove modulus type definition

* crypto/bls12381: use proper init function

* crypto/bls12381: get rid of newlines in fatal descriptions

* crypto/bls12-381: fix no-adx assembly multiplication

* crypto/bls12-381: remove old config function

* crypto/bls12381: update multiplication backend

this commit changes the mul backend to the 6-limb eip1962 backend

mul assign operations are dropped

* core/vm/contracts_tests: externalize test vectors for precompiles

* core/vm/contracts_test: externalize failure-cases for precompiles

* core/vm: linting

* go.mod: tiny up sum file

* core/vm: fix goimports linter issues

* crypto/bls12381: build tags for plain ASM or ADX implementation

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-06-03 09:44:32 +03:00
3f649d4852 core: collect NewTxsEvent items without holding reorg lock (#21145) 2020-06-02 18:52:20 +02:00
5f6f5e345e console: handle undefined + null in console funcs (#21160) 2020-06-02 18:06:22 +02:00
d98c42c0e3 rpc: send websocket ping when connection is idle (#21142)
* rpc: send websocket ping when connection is idle

* rpc: use non-blocking send for websocket pingReset
2020-06-02 15:04:44 +03:00
723bd8c17f p2p/discover: move discv4 encoding to new 'v4wire' package (#21147)
This moves all v4 protocol definitions to a new package, p2p/discover/v4wire.
The new package will be used for low-level protocol tests.
2020-06-02 13:20:19 +02:00
cd57d5cd38 core/vm: EIP-2315, JUMPSUB for the EVM (#20619)
* core/vm: implement EIP 2315, subroutines for the EVM

* core/vm: eip 2315 - lintfix + check jump dest validity + check ret stack size constraints

  logger: markdown-friendly traces, validate jumpdest, more testcase, correct opcodes

* core/vm: update subroutines acc to eip: disallow walk-into

* core/vm/eips: gas cost changes for subroutines

* core/vm: update opcodes for EIP-2315

* core/vm: define RETURNSUB as a 'jumping' operation + review concerns

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-06-02 13:30:16 +03:00
a35382de94 metrics: replace gosigar with gopsutil (#21041)
* replace gosigar with gopsutil

* removed check for whether GOOS is openbsd

* removed accidental import of runtime

* potential fix for difference in units between gosig and gopsutil

* fixed lint error

* remove multiplication factor

* uses cpu.ClocksPerSec as the multiplication factor

* changed dependency from shirou to renaynay (#20)

* updated dep

* switching back from using renaynay fork to using upstream as PRs were merged on upstream

* removed empty line

* optimized imports

* tidied go mod
2020-06-02 12:08:33 +03:00
a5eee8d1dc eth/downloader: more context in errors (#21067)
This PR makes use of go 1.13 error handling, wrapping errors and using
errors.Is to check a wrapped root-cause. It also removes the travis
builders for go 1.11 and go 1.12.
2020-05-29 11:12:43 +02:00
389da6aa48 trie: enforce monotonic range in prover and return end marker (#21130)
* trie: add hasRightElement indicator

* trie: ensure the range is monotonic increasing

* trie: address comment and fix lint

* trie: address comment

* trie: make linter happy

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-27 17:37:37 +03:00
b2c59e297b consensus/clique: make internal error private (#21132)
Co-authored-by: linjing <linjingjing@baidu.com>
2020-05-27 17:12:13 +03:00
9219e0fba4 eth: interrupt chain insertion on shutdown (#21114)
This adds a new API method on core.BlockChain to allow interrupting
running data inserts, and calls the method before shutting down the
downloader.

The BlockChain interrupt checks are now done through a method instead
of inlining the atomic load everywhere. There is no loss of efficiency from
this and it makes the interrupt protocol a lot clearer because the check is
defined next to the method that sets the flag.
2020-05-26 21:37:37 +02:00
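The interrupt pattern described above, roughly; field and method names here illustrate the approach rather than quote the code.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type BlockChain struct {
	procInterrupt int32 // set to 1 to abort in-flight inserts
}

// StopInsert signals any running data insert to bail out at the next check.
func (bc *BlockChain) StopInsert() { atomic.StoreInt32(&bc.procInterrupt, 1) }

// insertStopped is the single place the flag is read, keeping the interrupt
// protocol next to the method that sets it.
func (bc *BlockChain) insertStopped() bool { return atomic.LoadInt32(&bc.procInterrupt) == 1 }

func (bc *BlockChain) insertChain(blocks []string) int {
	for i := range blocks {
		if bc.insertStopped() {
			return i // abort cleanly instead of finishing the batch
		}
		// ... process blocks[i] ...
	}
	return len(blocks)
}

func main() {
	bc := &BlockChain{}
	bc.StopInsert()
	fmt.Println(bc.insertChain([]string{"b1", "b2"})) // 0: aborted before any work
}
```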
4873a9d3c3 build: upgrade to golangci lint v1.27.0 (#21127)
* build: upgrade to golangci-lint v1.27.0

* build: raise lint timeout to 3 minutes
2020-05-26 14:24:22 +03:00
070a5e1252 trie: fix for range proof (#21107)
* trie: fix for range proof

* trie: fix typo
2020-05-26 13:11:29 +03:00
81e9caed7d ethstats: avoid blocking chan when received invalid stats request (#21073)
* ethstats: avoid blocking chan when received invalid stats request

* ethstats: minor code polishes

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-26 13:09:00 +03:00
7ddb40239b ethdb/leveldb: use timer instead of time.After (#21066) 2020-05-26 11:03:37 +02:00
2f66a8d614 metrics/prometheus: define TYPE once, add tests (#21068)
* metrics/prometheus: define type once for histograms

* metrics/prometheus: test collector
2020-05-26 12:00:09 +03:00
dbf6b8a797 cmd/utils: fix default DNS discovery configuration (#21124) 2020-05-25 19:50:36 +02:00
befecc9fdf consensus/ethash: fix flaky test by reading seal results (#21085) 2020-05-25 18:01:03 +02:00
e868adde30 core/vm: improve jumpdest lookup (#21123) 2020-05-25 16:12:48 +02:00
25a661e0c2 consensus/clique: remove redundant pair of parentheses (#21104) 2020-05-25 12:00:18 +02:00
4f2784b38f all: fix typos in comments (#21118) 2020-05-25 10:21:28 +02:00
48e3b95e77 miner: replace use of 'self' as receiver name (#21113) 2020-05-25 10:20:09 +02:00
b4a2681120 les, les/lespay: implement new server pool (#20758)
This PR reimplements the light client server pool. It is also a first step
to move certain logic into a new lespay package. This package will contain
the implementation of the lespay token sale functions, the token buying and
selling logic and other components related to peer selection/prioritization
and service quality evaluation. Over the long term this package will be
reusable for incentivizing future protocols.

Since the LES peer logic is now based on enode.Iterator, it can now use
DNS-based fallback discovery to find servers.

This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4
2020-05-22 13:46:34 +02:00
65ce550b37 trie: extend range proofs with non-existence (#21000)
* trie: implement range proof with non-existent edge proof

* trie: fix cornercase

* trie: consider empty range

* trie: add singleSide test

* trie: support all-elements range proof

* trie: fix typo

* trie: tiny typos and formulations

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-20 15:45:38 +03:00
0a99efa61f whisper: use canonical import name of package go-ethereum (#21099) 2020-05-20 10:32:54 +03:00
d5b7d1cc34 accounts: add blockByNumberNoLock() to avoid double-lock (#20983)
* abi/bind/backends: testcase for double-lock

* accounts: add blockByNumberNoLock to avoid double-lock

* backend/simulated: use stateroot, not blockhash for retrieveing state

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-05-19 12:48:27 +02:00
e0987f67e0 cmd/clef, signer/core: password input fixes (#20960)
* cmd/clef, signer/core: use better terminal input for passwords, make it possible to avoid boot-up warning

* all: move commonly used prompter to isolated (small) package

* cmd/clef: Add new --acceptWarn to clef README

* cmd/clef: rename flag 'acceptWarn' to 'suppress-bootwarn'

Co-authored-by: ligi <ligi@ligi.de>
2020-05-19 10:44:46 +02:00
3666da8a4b console: fix unlockAccount argument count check (#21081) 2020-05-14 14:12:52 +03:00
f3f1e59eea accounts/abi: simplify reflection logic (#21058)
* accounts/abi: simplified reflection logic

* accounts/abi: simplified reflection logic

* accounts/abi: removed unpack

* accounts/abi: removed comments

* accounts/abi: removed unnecessary complications

* accounts/abi: minor changes in error messages

* accounts/abi: removed unused code

* accounts/abi: fixed indexed argument unpacking

* accounts/abi: removed superfluous test cases

This commit removes two test cases. The first one is trivially invalid, as we have the same
test case passing in packing_test.go L375. The second one passes now,
because we don't need the mapArgNamesToStructFields in unpack_atomic anymore.
Checking for purely underscored arg names generally should not be something we do,
as the abi/contract is generally out of the control of the user.

* accounts/abi: removed comments, debug println

* accounts/abi: added commented out code

* accounts/abi: addressed comments

* accounts/abi: remove unnecessary dst.CanSet check

* accounts/abi: added dst.CanSet checks
2020-05-13 17:50:18 +02:00
677724af0c cmd: fix log contexts (#21077) 2020-05-13 18:34:24 +03:00
46698d7931 params: begin v1.9.15 release cycle 2020-05-13 12:33:58 +03:00
6d74d1e5f7 params: release go-ethereum v1.9.14 2020-05-13 12:31:35 +03:00
a188a1e150 ethstats: stop report ticker in each loop cycle #21070 (#21071)
Co-authored-by: Hao Duan <duan.hao@hyperchain.cn>
2020-05-13 12:06:19 +03:00
d02301f758 core: fix missing receipt on Clique crashes (#21045)
* core: fix missing receipt

* core: address comment
2020-05-13 11:33:48 +03:00
0b63915430 accounts/abi: allow overloaded argument names (#21060)
* accounts/abi: allow overloaded argument names

In solidity it is possible to create the following contract:
```
contract Overloader {
    struct F { uint _f; uint __f; uint f; }
    function f(F memory f) public {}
}
```
This however resulted in a panic in the abi package.

* accounts/abi fixed error handling
2020-05-12 13:02:23 +02:00
b8ea9042e5 accounts/abi: accounts/abi/bind: Move topics to abi package (#21057)
* accounts/abi/bind: added test cases for waitDeployed

* accounts/abi/bind: added test case for boundContract

* accounts/abi/bind: removed unnecessary resolve methods

* accounts/abi: moved topics from /bind to /abi

* accounts/abi/bind: cleaned up format... functions

* accounts/abi: improved log message

* accounts/abi: added type tests

* accounts/abi/bind: remove superfluous template methods
2020-05-12 12:21:40 +02:00
7b7e5921a4 miner: support disabling empty block precommits from the Go API (#20736)
* cmd, miner: add noempty-precommit flag

* cmd, miner: get rid of external flag

* miner: change bool to atomic int

* miner: fix tiny typo

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-12 13:11:34 +03:00
7540c53e72 core/rawdb: remove unused math (#21065) 2020-05-12 12:19:15 +03:00
aaede53738 core/rawdb : log format fix for Unindexing transaction (#21064)
* core/rawdb : log format fix for Unindexing transaction

* core/rawdb: tiny fixup

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-12 11:46:35 +03:00
53cac027d0 les: drop the message if the entire p2p connection is stuck (#21033)
* les: drop the message if the entire p2p connection is stuck

* les: fix lint
2020-05-12 11:02:15 +03:00
7ace5a3a8b core: fixup blockchain tests (#21062)
core: fixup blockchain tests
2020-05-12 08:46:10 +03:00
40859a2441 Merge pull request #21061 from karalabe/cht-1.9.14
params: bump CHTs for the v1.9.14 release
2020-05-11 19:05:47 +03:00
4535230059 cmd, core, eth: background transaction indexing (#20302)
* cmd, core, eth: init tx lookup in background

* core/rawdb: tiny log fixes to make it clearer what's happening

* core, eth: fix rebase errors

* core/rawdb: make reindexing less generic, but more optimal

* rlp: implement rlp list iterator

* core/rawdb: new implementation of tx indexing/unindex using generic tx iterator and hashing rlp-data

* core/rawdb, cmd/utils: fix review concerns

* cmd/utils: fix merge issue

* core/rawdb: add some log formatting polishes

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-11 18:58:43 +03:00
126ac94f36 params: bump CHTs for the v1.9.14 release 2020-05-11 18:56:09 +03:00
6f54ae24cd p2p: add 0 port check in dialer (#21008)
* p2p: add low port check in dialer

We already have a check like this for UDP ports, add a similar one in
the dialer. This prevents dials to port zero and it's also an extra
layer of protection against spamming HTTP servers.

* p2p/discover: use errLowPort in v4 code

* p2p: change port check

* p2p: add comment

* p2p/simulations/adapters: ensure assigned port is in all node records
2020-05-11 18:11:17 +03:00
069a7e1f8a core/rawdb: stop freezer process as part of freezer.Close() (#21010)
* core/rawdb: Stop freezer process as part of freezer.Close()

When you call db.Close(), it was closing the leveldb database first,
then closing the freezer, but never stopping the freezer process.
This could cause the freezer to attempt to write to leveldb after
leveldb had been closed, leading to a crash with a non-zero exit code.

This change adds a quit channel to the freezer, and freezer.Close()
will not return until the freezer process has stopped.

Additionally, when you call freezerdb.Close(), it will close the
AncientStore before closing leveldb, to ensure that the freezer goroutine
will be stopped before leveldb is closed.

* core/rawdb: Fix formatting for golint

* core/rawdb: Use backoff flag to avoid repeating select

* core/rawdb: Include accidentally omitted backoff
2020-05-11 15:11:17 +03:00
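A compact sketch of the shutdown ordering this change describes, with illustrative types: the background goroutine owns a quit channel, Close does not return until the goroutine has exited, and the ancient store shuts down before the key-value store.

```go
package main

import (
	"fmt"
	"time"
)

type freezer struct {
	quit chan struct{}
	done chan struct{}
}

func newFreezer() *freezer {
	f := &freezer{quit: make(chan struct{}), done: make(chan struct{})}
	go f.freeze()
	return f
}

func (f *freezer) freeze() {
	defer close(f.done)
	for {
		select {
		case <-f.quit:
			return
		case <-time.After(10 * time.Millisecond):
			// ... move another batch of old blocks into the ancient store ...
		}
	}
}

// Close stops the freeze loop and only returns once it has exited, so the
// goroutine can never write after the underlying database is closed.
func (f *freezer) Close() {
	close(f.quit)
	<-f.done
}

func main() {
	f := newFreezer()
	f.Close()                                                  // 1) stop the ancient store first ...
	fmt.Println("freezer stopped; now safe to close leveldb") // 2) ... then the key-value store
}
```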
bd60295de5 console: fix some crashes/errors in the bridge (#21050)
Fixes #21046
2020-05-11 11:59:21 +02:00
930e82d7f4 params, cmd/utils: remove outdated discv5 bootnodes, deprecate flags (#20949)
* params: remove outdated discv5 bootnodes

* cmd/utils: deprecated bootnodesv4/v5 flags
2020-05-11 11:16:32 +03:00
37877e86ed Merge pull request #21056 from karalabe/statedb-simpler-code
core/state: make GetCodeSize mirror GetCode implementation wise
2020-05-11 11:09:25 +03:00
263622f44f accounts/abi/bind/backend, internal/ethapi: recap gas limit with balance (#21043)
* accounts/abi/bind/backend, internal/ethapi: recap gas limit with balance

* accounts, internal: address comment and fix lint

* accounts, internal: extend log message

* tiny nits to format hexutil.Big and nil properly

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-11 11:08:20 +03:00
0b2edf05bb core/state: make GetCodeSize mirror GetCode implementation wise 2020-05-11 10:28:56 +03:00
e29e4c2376 build: fix CLI params for windows LNK files (#21055)
* build: Fix CLI params for windows LNK files

closes #21054

* Remove parameters
2020-05-11 10:05:37 +03:00
82f9ed49fa core/state: avoid statedb.dbErr due to emptyCode (#21051)
* core/state: more verbose statedb errors

* core/state: fix flaw

* core/state: fixed lint

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-05-08 21:52:20 +03:00
b0b65d017f core/state: abort commit if read errors have occurred (#21039)
This finally adds the error check that the documentation of StateDB.dbErr
promises to do. dbErr was added in 9e5f03b6c (June 2017), and the check was
already missing in that commit. We somehow survived without it for three years.
2020-05-07 15:13:34 +02:00
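A hedged sketch of the dbErr pattern referenced here; the field and method names are illustrative, not the exact StateDB API:

```go
// Record the first database read error and surface it when committing,
// rather than writing a state that was built on top of failed reads.
package main

import (
	"errors"
	"fmt"
)

type stateDB struct {
	dbErr error // memoized first read error
}

func (s *stateDB) setError(err error) {
	if s.dbErr == nil {
		s.dbErr = err
	}
}

func (s *stateDB) Commit() error {
	if s.dbErr != nil {
		return fmt.Errorf("commit aborted due to earlier error: %w", s.dbErr)
	}
	// ... write tries to disk ...
	return nil
}

func main() {
	s := &stateDB{}
	s.setError(errors.New("missing trie node"))
	fmt.Println(s.Commit())
}
```
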
1152f45849 core/state: include zero-address in state dump if present (#21038)
* Include 0x0000 address into the dump if it is present

* core/state: go fmt

Co-authored-by: Alexey Akhunov <akhounov@gmail.com>
2020-05-07 16:06:31 +03:00
dd88bd82c9 accounts/abi/bind: add void if no return args specified (#21002)
* accounts/abi/bind: add void if no return args specified

Currently the Java generator produces invalid code for pure/view functions
that have no return type, e.g. `function f(uint u) view public {}`.
This is not a problem in practice, as people rarely write functions like this.

* accounts/abi/bind: use elseif instead of nested if
2020-05-07 10:24:10 +03:00
85944c2561 core/state/snapshot: fix typo (#21037) 2020-05-07 10:07:59 +03:00
87c463c47a Merge pull request #21036 from karalabe/snapshot-storage-leak
core/state/snapshot: don't create storage list for non-existing accounts
2020-05-06 18:11:15 +03:00
90af6dae6e core/state/snapshot: don't create storage list for non-existing accounts 2020-05-06 17:22:38 +03:00
39c64d85a2 core: avoid double-lock in tx_pool_test (#20984) 2020-05-06 15:47:59 +02:00
234cc8e77f eth/downloader: minor typo fixes in comments (#21035) 2020-05-06 15:35:04 +02:00
5cdc2dffda trie: fix TestBadRangeProof unit test (#21034) 2020-05-06 15:33:57 +02:00
c2147ee154 eth: don't inadvertently enable snapshots in archive nodes (#21025)
* eth: don't reassign more cache than is being made available

* eth: don't inadvertently enable snapshot in a case where --snapshot wasn't given
2020-05-06 13:01:01 +03:00
b98259868b Merge pull request #21032 from karalabe/skip-announce-goroutine-eth64
eth: skip transaction announcer goroutine on eth<65
2020-05-06 12:59:33 +03:00
292570ad6c Merge pull request #21028 from karalabe/memfix-32bit-arch
cmd/geth: handle memfixes on 32bit arch with large RAM
2020-05-06 12:58:51 +03:00
34ed2d834a eth: skip transaction announcer goroutine on eth<65 2020-05-06 12:47:27 +03:00
933acf3389 account/abi: remove superfluous type checking (#21022)
* accounts/abi: added getType func to Type struct

* accounts/abi: fixed tuple unpack

* accounts/abi: removed type.Type

* accounts/abi: added comment

* accounts/abi: removed unused types

* accounts/abi: removed superfluous declarations

* accounts/abi: typo
2020-05-05 16:30:43 +02:00
44a3b8c04c Merge pull request #21026 from karalabe/disable-highmem-test
tests: skip consensus test using 1GB RAM
2020-05-05 15:11:40 +03:00
a52511e692 build: raise test timeout back to 10 mins (#21027) 2020-05-05 15:11:00 +03:00
8f8ff8d601 cmd/geth: handle memfixes on 32bit arch with large RAM 2020-05-05 14:22:51 +03:00
4515772993 tests: skip consensus test using 1GB RAM 2020-05-05 12:27:09 +03:00
c989bca173 cmd/utils: renames flags related to http-rpc server (#20935)
* rpc flags related to starting http server renamed to http

* old rpc flags aliased and still functional

* pprof flags fixed

* renames gpo related flags

* linted

* renamed rpc flags for consistency and clarity

* added warn logs

* added more warn logs for all deprecated flags for consistency

* moves legacy flags to separate file, hides older flags under show-deprecated-flags command

* legacy prefix and moved some more legacy flags to legacy file

* fixed circular import

* added docs

* fixed imports lint error

* added notes about when flags were deprecated

* cmd/utils: group flags by deprecation date + reorder by date,

* modified deprecated comments for consistency, added warn log for --rpc

* making sure deprecated flags are still functional

* show-deprecated-flags command cleaned up

* fixed lint errors

* corrected merge conflict

* IsSet --> GlobalIsSet

* uncategorized flags, if not deprecated, displayed under misc

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-05-05 11:19:17 +03:00
587656619d Merge pull request #21023 from karalabe/snapshot-verify-iterator-release
core/state/snapshot: release iterator after verification
2020-05-04 17:10:17 +03:00
da59147014 core/state/snapshot: release iterator after verification 2020-05-04 15:14:08 +03:00
5e45db7610 Merge pull request #21021 from karalabe/tests-snapshot-gen-cleanup
tests: cleanup snapshot generator goroutine leak
2020-05-04 15:12:35 +03:00
ab72803e6f accounts/abi: move U256Bytes to common/math (#21020) 2020-05-04 14:09:14 +02:00
e872083d44 accounts/abi: removed Kind from Type struct (#21009)
* accounts/abi: removed Kind from Type struct

* accounts/abi: removed unused code
2020-05-04 13:20:20 +02:00
65cd28aa0e tests: cleanup snapshot generator goroutine leak 2020-05-04 12:10:02 +03:00
510b6f90db accounts/external: convert signature v value to 0/1 (#20997)
This fixes an issue with clef, which already transforms the signature
to use the legacy 27/28 encoding.

Fixes #20994
2020-05-01 13:52:41 +02:00
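A small sketch of the v-value normalization described above, assuming a 65-byte [R || S || V] signature with the recovery id in the last byte:

```go
// Convert a legacy 27/28 recovery id into the 0/1 form.
package main

import "fmt"

func normalizeV(sig []byte) []byte {
	if len(sig) == 65 && sig[64] >= 27 {
		sig[64] -= 27
	}
	return sig
}

func main() {
	sig := make([]byte, 65)
	sig[64] = 28 // legacy encoding
	fmt.Println("v after normalization:", normalizeV(sig)[64])
}
```
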
c43be6cf87 les: remove invalid use of t.Fatal in TestHandshake (#21012) 2020-05-01 13:48:52 +02:00
7e4d1925f0 go.sum: run go mod tidy (#21014) 2020-05-01 12:51:04 +02:00
d2d3166f35 accounts/external: fill account-cache if that hasn't already been done, fixes #20995 (#20998) 2020-04-30 18:57:06 +02:00
2337aa64eb core/state/snapshot: fix trie generator reporter (#21004) 2020-04-30 10:43:50 +03:00
3cebfb6664 Merge pull request #21003 from karalabe/snapshot-journal-nilfix
core/state/snapshot: fix journal nil deserialization
2020-04-30 10:26:25 +03:00
4b6f6ffe23 core/state/snapshot: fix journal nil deserialization 2020-04-29 18:00:29 +03:00
26d271dfbb core/state/snapshot: implement storage iterator (#20971)
* core/state/snapshot: implement storage iterator

* core/state/snapshot, tests: implement helper function

* core/state/snapshot: fix storage issue

If an account is deleted in tx_1 but recreated in tx_2, it can happen
that both destructedSet and storageData record this account in the same
diff layer. In this case, the storage iterator should iterate the slots
belonging to the new account but disable further iteration into deeper
layers (which belong to the old account).

* core/state/snapshot: address peter and martin's comment

* core/state: address comments

* core/state/snapshot: fix test
2020-04-29 12:53:08 +03:00
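A toy sketch of the iteration rule described above, with simplified stand-in types rather than the real snapshot layers:

```go
// Walk diff layers from newest to oldest collecting storage slots; if a layer
// marks the account destructed, its own slots (the recreated account) are
// still included, but iteration stops before older layers.
package main

import "fmt"

type diffLayer struct {
	destructed bool              // account destroyed (and possibly recreated) in this layer
	storage    map[string]string // slot -> value written in this layer
	parent     *diffLayer
}

func collectSlots(layer *diffLayer) map[string]string {
	out := make(map[string]string)
	for l := layer; l != nil; l = l.parent {
		for slot, val := range l.storage {
			if _, seen := out[slot]; !seen { // newer layers win
				out[slot] = val
			}
		}
		if l.destructed {
			break // older layers belong to the old, destroyed account
		}
	}
	return out
}

func main() {
	old := &diffLayer{storage: map[string]string{"0x01": "old"}}
	recreated := &diffLayer{destructed: true, storage: map[string]string{"0x02": "new"}, parent: old}
	fmt.Println(collectSlots(recreated)) // only the recreated account's slots are visible
}
```
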
1264c19f11 go.mod : goupnp v1.0.0 upgrade (#20996) 2020-04-28 14:57:07 +03:00
7f95a85fd4 signer, log: properly escape character sequences (#20987)
* signer: properly handle terminal escape characters

* log: use strconv conversion instead of custom escape function

* log: remove reflection tests for nil
2020-04-28 14:28:38 +03:00
0708b573bc event, whisper/whisperv6: use defer where possible (#20940) 2020-04-28 10:53:08 +02:00
9887edd580 rpc: add explicit 200 response for empty HTTP GET (#20952) 2020-04-28 10:43:21 +02:00
1893266c59 go.mod: upgrade to golang-lru v0.5.4 (#20992)
golang-lru is now a go module, and the upgrade corrects a couple
of minor issues. In particular, the library could crash if you inserted
nil into an LRU cache.
2020-04-28 10:22:23 +02:00
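A quick check of the behaviour mentioned above, assuming github.com/hashicorp/golang-lru v0.5.4 is available on the module path:

```go
// Storing a nil value in the cache should not crash with the upgraded library.
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

func main() {
	cache, err := lru.New(16)
	if err != nil {
		panic(err)
	}
	cache.Add("key", nil) // previously this could crash the library
	v, ok := cache.Get("key")
	fmt.Println(v, ok) // <nil> true
}
```
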
92a7538ed3 core: improve TestLogRebirth (#20961)
This is a resubmit of #20668 which rewrites the problematic test
without any additional goroutines. It also documents the test better.

The purpose of this test is checking whether log events are sent
correctly when importing blocks. The test was written at a time when
blockchain events were delivered asynchronously, making the check hard
to pull off. Now that core.BlockChain delivers events synchronously
during the call to InsertChain, the test can be simplified.

Co-authored-by: BurtonQin <bobbqqin@gmail.com>
2020-04-28 10:06:49 +02:00
5c3993444d rpc: make ExampleClientSubscription work with the geth API (#19483)
This corrects the call to eth_getBlockByNumber, which previously
returned this error:

  can't get latest block: missing value for required argument 1

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-27 17:25:24 +02:00
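A hedged usage sketch of the corrected call shape: eth_getBlockByNumber takes a block tag plus a fullTx boolean, and omitting the second argument yields the error quoted above. The endpoint URL is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // placeholder endpoint
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var head map[string]interface{}
	// Both arguments are required: the block tag and the fullTx flag.
	if err := client.CallContext(context.Background(), &head, "eth_getBlockByNumber", "latest", false); err != nil {
		panic(err)
	}
	fmt.Println("latest block number:", head["number"])
}
```
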
ba068d40dd core: add check in AddChildIndexer to avoid double lock (#20982)
This fixes a theoretical double lock condition which could occur in

    indexer.AddChildIndexer(indexer)

Nobody would ever do that though.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-27 15:16:30 +02:00
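An illustrative sketch of the guard; the types here are hypothetical, not the chain indexer API:

```go
// If an indexer were registered as its own child while holding its lock,
// the nested Lock() would deadlock; a simple identity check avoids it.
package main

import (
	"errors"
	"fmt"
	"sync"
)

type indexer struct {
	mu       sync.Mutex
	children []*indexer
}

func (c *indexer) AddChildIndexer(child *indexer) error {
	if child == c {
		return errors.New("can't add indexer as a child of itself")
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.children = append(c.children, child)
	return nil
}

func main() {
	i := &indexer{}
	fmt.Println(i.AddChildIndexer(i)) // rejected instead of deadlocking
}
```
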
e32ee6ac05 accounts/abi: added abi test cases, minor bug fixes (#20903)
* accounts/abi: added documentation

* accounts/abi: reduced usage of arguments.LengthNonIndexed

* accounts/abi: simplified reflection logic

* accounts/abi: moved testjson data into global declaration

* accounts/abi: removed duplicate test cases

* accounts/abi: reworked abi tests

* accounts/abi: added more tests for abi packing

* accounts/abi/bind: refactored base tests

* accounts/abi: run pack tests as subtests

* accounts/abi: removed duplicate tests

* accounts/abi: removed unused arguments.LengthNonIndexed

Due to refactors to the code, we do not need the arguments.LengthNonIndexed function anymore.
You can still get the length by calling len(arguments.NonIndexed())

* accounts/abi: added type test

* accounts/abi: modified unpack test to pack test

* accounts/abi: length check on arrayTy

* accounts/abi: test invalid abi

* accounts/abi: fixed rebase error

* accounts/abi: fixed rebase errors

* accounts/abi: removed unused definition

* accounts/abi: merged packing/unpacking tests

* accounts/abi: fixed [][][32]bytes encoding

* accounts/abi: added tuple test cases

* accounts/abi: renamed getMockLog -> newMockLog

* accounts/abi: removed duplicate test

* accounts/abi: bools -> booleans
2020-04-27 15:07:33 +02:00
40283d0522 node: shut down all node-related HTTP servers gracefully (#20956)
Rather than just closing the underlying network listener to stop our
HTTP servers, use the graceful shutdown procedure, waiting for any
in-process requests to finish.
2020-04-27 11:16:00 +02:00
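A sketch of the graceful-shutdown idea using only the standard library, not the node package internals:

```go
// Instead of closing the listener, Server.Shutdown stops accepting new
// connections and waits for in-flight requests (up to a timeout) to finish.
package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:8545"}
	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Println("server error:", err)
		}
	}()

	time.Sleep(100 * time.Millisecond) // let the server start (demo only)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Println("forced shutdown:", err)
	}
	log.Println("server stopped cleanly")
}
```
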
a070e23178 Merge pull request #20988 from karalabe/catchup-shutdown
eth: fix shutdown regression to abort downloads, not just cancel
2020-04-27 12:02:13 +03:00
b0bbd47185 eth: fix shutdown regression to abort downloads, not just cancel 2020-04-27 11:22:15 +03:00
1aa83290f5 p2p/enode: update code comment (#20972)
It is possible to specify enode URLs using a domain name since
commit b90cdbaa79, but the code comment still said that only
IP addresses were allowed.

Co-authored-by: admin@komgo.io <KomgoRocks2018!>
2020-04-24 16:50:03 +02:00
8a2e8faadd core/state/snapshot: fix binary iterator (#20970) 2020-04-24 14:43:49 +03:00
44ff3f3dc9 trie: initial implementation for range proof (#20908)
* trie: initial implementation for range proof

* trie: add benchmark

* trie: fix lint

* trie: fix minor issue

* trie: unset the edge valuenode as well

* trie: unset the edge valuenode as nilValuenode
2020-04-24 14:37:56 +03:00
38aab0aa83 accounts/keystore: fix double import race (#20915)
* accounts/keystore: fix race in Import/ImportECDSA

* accounts/keystore: added import/export tests

* cmd/geth: improved TestAccountImport test

* accounts/keystore: added import/export tests

* accounts/keystore: fixed naming

* accounts/keystore: fixed typo

* accounts/keystore: use mutex instead of rwmutex

* accounts: use errors instead of fmt
2020-04-22 12:52:29 +03:00
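A rough sketch of the race being fixed, with illustrative names rather than the keystore API: without a store-wide mutex, two concurrent imports of the same key could both pass the presence check and write twice:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type keyStore struct {
	mu       sync.Mutex // plain mutex, as in the fix
	accounts map[string]bool
}

func (ks *keyStore) Import(address string) error {
	ks.mu.Lock()
	defer ks.mu.Unlock()
	if ks.accounts[address] {
		return errors.New("account already exists")
	}
	ks.accounts[address] = true
	return nil
}

func main() {
	ks := &keyStore{accounts: make(map[string]bool)}
	fmt.Println(ks.Import("0xabc"), ks.Import("0xabc")) // second import is rejected
}
```
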
2ec7232191 core: mirror full node reorg logic in light client too (#20931)
* core: fix the condition of reorg

* core: fix nitpick to only retrieve head once

* core: don't reorg if received chain is longer at same diff

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-04-22 11:27:47 +03:00
b9df7ecdc3 all: separate consensus error and evm internal error (#20830)
* all: separate consensus error and evm internal error

There are actually two types of error that can be returned when
a transaction/message call is executed: (a) a consensus error and
(b) an evm internal error. The former should be treated as
a consensus issue, e.g. the sender doesn't have enough assets to
purchase the gas it specifies. The latter is allowed, since the
evm itself is a black box and internal errors are allowed to happen.

This PR emphasizes the difference by introducing an executionResult
structure. The evm error is embedded inside, so if any error is
returned, it indicates that a consensus issue happened.

This PR also improves the `EstimateGas` API to return the concrete
revert reason if the transaction always fails.

* all: polish

* accounts/abi/bind/backends: add tests

* accounts/abi/bind/backends, internal: cleanup error message

* all: address comments

* core: fix lint

* accounts, core, eth, internal: address comments

* accounts, internal: resolve revert reason if possible

* accounts, internal: address comments
2020-04-22 11:25:36 +03:00
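A minimal sketch of the split this commit describes; the struct name mirrors the commit message, while the fields and helper are illustrative:

```go
// The outer error return is reserved for consensus-level failures (e.g.
// insufficient balance for gas), while EVM-level failures such as reverts
// travel inside the result.
package main

import (
	"errors"
	"fmt"
)

type executionResult struct {
	UsedGas    uint64
	Err        error  // EVM error (revert, out of gas, ...), not a consensus error
	ReturnData []byte // revert reason payload if Err is a revert
}

func applyMessage(balance, gasCost uint64) (*executionResult, error) {
	if balance < gasCost {
		// Consensus error: the message is invalid and must be rejected.
		return nil, errors.New("insufficient balance to pay for gas")
	}
	// The EVM ran but the contract reverted: report it inside the result.
	return &executionResult{UsedGas: gasCost, Err: errors.New("execution reverted")}, nil
}

func main() {
	res, err := applyMessage(100, 21000)
	fmt.Println(res, err)
	res, err = applyMessage(30000, 21000)
	fmt.Println(res.Err, err)
}
```
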
c60c0c97e7 go.mod : update fastcache to 1.5.7 (#20936) 2020-04-22 10:31:32 +03:00
870d4c4970 Merge pull request #20953 from holiman/fixdlseek
core/state/snapshot: make difflayer account iterator seek operation inclusive
2020-04-22 09:50:01 +03:00
6c458f32f8 p2p: defer wait group done in protocol start (#20951) 2020-04-21 16:42:18 +02:00
c036fe35a8 core/state/snapshot: make difflayer account iterator seek operation inclusive 2020-04-21 16:26:02 +02:00
7599999dcd snapshot: add Unlock before return (#20948)
* Forget Unlock in snapshot

* Remove Unlock before panic
2020-04-21 11:11:38 +03:00
79b68dd78d Merge pull request #20923 from holiman/fix_seckeybuf
trie: fix concurrent usage of secKeyBuf, ref #20920
2020-04-20 16:52:04 +03:00
648b0cb714 cmd, core: remove override muir glacier and override istanbul (#20942) 2020-04-20 12:46:38 +03:00
ac9c03f910 accounts/abi: Prevent recalculation of internal fields (#20895)
* accounts/abi: prevent recalculation of ID, Sig and String

* accounts/abi: fixed unpacking of no values

* accounts/abi: multiple fixes to arguments

* accounts/abi: refactored methodName and eventName

This commit moves the complicated logic of how we assign method names
and event names if they already exist into their own functions for
better readability.

* accounts/abi: prevent recalculation of internal

In this commit, I changed the way we calculate the string
representations, sig representations and the IDs of methods. Before,
these fields would be recalculated every time someone called .Sig(),
.String() or .ID() on a method or an event.

Additionally this commit fixes issue #20856 as we assign names to inputs
with no name (input with name "" becomes "arg0")

* accounts/abi: added unnamed event params test

* accounts/abi: fixed rebasing errors in method sig

* accounts/abi: fixed rebasing errors in method sig

* accounts/abi: addressed comments

* accounts/abi: added FunctionType enumeration

* accounts/abi/bind: added test for unnamed arguments

* accounts/abi: improved readability in NewMethod, nitpicks

* accounts/abi: method/eventName -> overloadedMethodName
2020-04-20 09:01:04 +02:00
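A minimal sketch of the compute-once idea: the 4-byte selector is derived when the method value is constructed instead of on every accessor call. The types are illustrative, not the accounts/abi API:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

type method struct {
	Sig string // canonical signature, e.g. "transfer(address,uint256)"
	ID  []byte // first 4 bytes of keccak256(Sig), computed once
}

func newMethod(sig string) method {
	return method{Sig: sig, ID: crypto.Keccak256([]byte(sig))[:4]}
}

func main() {
	m := newMethod("transfer(address,uint256)")
	fmt.Printf("selector: %x\n", m.ID) // a9059cbb
}
```
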
ca22d0761b event: fix inconsistency in Lock and Unlock (#20933)
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-17 14:51:38 +02:00
7a63faf734 p2p/discover: add helper methods to UDPv5 (#20918)
This adds two new methods to UDPv5, AllNodes and LocalNode.

AllNodes returns all the nodes stored in the local table; this is
useful for the purposes of metrics collection and also debugging any
potential issues with other discovery v5 implementations.

LocalNode returns the local node object. The reason for exposing this
is so that users can modify and set/delete new key-value entries in
the local record.
2020-04-16 15:58:37 +02:00
3bf1054a13 params: begin v1.9.14 release cycle 2020-04-16 10:41:32 +03:00
af4080b4b7 trie: fix concurrent usage of secKeyBuf, ref #20920 2020-04-15 11:07:29 +02:00
887 changed files with 91530 additions and 40776 deletions

.github/CODEOWNERS vendored

@ -3,21 +3,20 @@
accounts/usbwallet @karalabe
accounts/scwallet @gballet
accounts/abi @gballet
accounts/abi @gballet @MariusVanDerWijden
cmd/clef @holiman
cmd/puppeth @karalabe
consensus @karalabe
core/ @karalabe @holiman @rjl493456442
dashboard/ @kurkomisi
eth/ @karalabe @holiman @rjl493456442
graphql/ @gballet
les/ @zsfelfoldi @rjl493456442
light/ @zsfelfoldi @rjl493456442
mobile/ @karalabe @ligi
node/ @fjl @renaynay
p2p/ @fjl @zsfelfoldi
rpc/ @fjl @holiman
p2p/simulations @zelig @janos @justelad
p2p/protocols @zelig @janos @justelad
p2p/testing @zelig @janos @justelad
p2p/simulations @fjl
p2p/protocols @fjl
p2p/testing @fjl
signer/ @holiman
whisper/ @gballet @gluk256


@ -30,11 +30,11 @@ Please make sure your contributions adhere to our coding guidelines:
Before you submit a feature request, please check and make sure that it isn't
possible through some other means. The JavaScript-enabled console is a powerful
feature in the right hands. Please check our
[Wiki page](https://github.com/ethereum/go-ethereum/wiki) for more info
[Geth documentation page](https://geth.ethereum.org/docs/) for more info
and help.
## Configuration, dependencies, and tests
Please see the [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/devguide)
for more details on configuring your environment, managing project dependencies
and testing procedures.


@ -1,8 +1,10 @@
Hi there,
Please note that this is an issue tracker reserved for bug reports and feature requests.
For general questions please use [discord](https://discord.gg/nthXNEv) or the Ethereum stack exchange at https://ethereum.stackexchange.com.
---
name: Report a bug
about: Something with go-ethereum is not working as expected
title: ''
labels: 'type:bug'
assignees: ''
---
#### System information

.github/ISSUE_TEMPLATE/feature.md vendored Normal file

@ -0,0 +1,17 @@
---
name: Request a feature
about: Report a missing feature - e.g. as a step before submitting a PR
title: ''
labels: 'type:feature'
assignees: ''
---
# Rationale
Why should this feature exist?
What are the use-cases?
# Implementation
Do you have ideas regarding the implementation of this feature?
Are you willing to implement this feature?

.github/ISSUE_TEMPLATE/question.md vendored Normal file

@ -0,0 +1,9 @@
---
name: Ask a question
about: Something is unclear
title: ''
labels: 'type:docs'
assignees: ''
---
This should only be used in very rare cases e.g. if you are not 100% sure if something is a bug or asking a question that leads to improving the documentation. For general questions please use [discord](https://discord.gg/nthXNEv) or the Ethereum stack exchange at https://ethereum.stackexchange.com.


@ -1,7 +1,7 @@
# This file configures github.com/golangci/golangci-lint.
run:
timeout: 2m
timeout: 3m
tests: true
# default is true. Enables skipping of directories:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$


@ -5,7 +5,7 @@ jobs:
allow_failures:
- stage: build
os: osx
go: 1.14.x
go: 1.15.x
env:
- azure-osx
- azure-ios
@ -15,8 +15,8 @@ jobs:
# This builder only tests code linters on latest version of Go
- stage: lint
os: linux
dist: xenial
go: 1.14.x
dist: bionic
go: 1.16.x
env:
- lint
git:
@ -24,85 +24,12 @@ jobs:
script:
- go run build/ci.go lint
- stage: build
os: linux
dist: xenial
go: 1.11.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: linux
dist: xenial
go: 1.12.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: linux
dist: xenial
go: 1.13.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# These are the latest Go versions.
- stage: build
os: linux
arch: amd64
dist: xenial
go: 1.14.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
if: type = pull_request
os: linux
arch: arm64
dist: xenial
go: 1.14.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: osx
osx_image: xcode11.3
go: 1.14.x
env:
- GO111MODULE=on
script:
- echo "Increase the maximum number of open file descriptors on macOS"
- NOFILE=20480
- sudo sysctl -w kern.maxfiles=$NOFILE
- sudo sysctl -w kern.maxfilesperproc=$NOFILE
- sudo launchctl limit maxfiles $NOFILE $NOFILE
- sudo launchctl limit maxfiles
- ulimit -S -n $NOFILE
- ulimit -n
- unset -f cd # workaround for https://github.com/travis-ci/travis-ci/issues/8703
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# This builder does the Ubuntu PPA upload
- stage: build
if: type = push
os: linux
dist: xenial
go: 1.14.x
dist: bionic
go: 1.16.x
env:
- ubuntu-ppa
- GO111MODULE=on
@ -119,15 +46,15 @@ jobs:
- python-paramiko
script:
- echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
- go run build/ci.go debsrc -goversion 1.14.2 -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
- go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
# This builder does the Linux Azure uploads
- stage: build
if: type = push
os: linux
dist: xenial
dist: bionic
sudo: required
go: 1.14.x
go: 1.16.x
env:
- azure-linux
- GO111MODULE=on
@ -139,32 +66,32 @@ jobs:
- gcc-multilib
script:
# Build for the primary platforms that Trusty can manage
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -arch 386
- go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo -arch 386
- go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# Switch over GCC to cross compilation (breaks 386, hence why do it here only)
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-arm-linux-gnueabihf libc6-dev-armhf-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
- sudo ln -s /usr/include/asm-generic /usr/include/asm
- GOARM=5 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=6 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=7 go run build/ci.go install -arch arm -cc arm-linux-gnueabihf-gcc
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -arch arm64 -cc aarch64-linux-gnu-gcc
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=5 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- GOARM=6 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- GOARM=7 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabihf-gcc
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo -arch arm64 -cc aarch64-linux-gnu-gcc
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# This builder does the Linux Azure MIPS xgo uploads
- stage: build
if: type = push
os: linux
dist: xenial
dist: bionic
services:
- docker
go: 1.14.x
go: 1.16.x
env:
- azure-linux-mips
- GO111MODULE=on
@ -173,38 +100,29 @@ jobs:
script:
- go run build/ci.go xgo --alltools -- --targets=linux/mips --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips; do mv -f "${bin}" "${bin/-linux-mips/}"; done
- go run build/ci.go archive -arch mips -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mipsle --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mipsle; do mv -f "${bin}" "${bin/-linux-mipsle/}"; done
- go run build/ci.go archive -arch mipsle -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mipsle -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mips64 --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips64; do mv -f "${bin}" "${bin/-linux-mips64/}"; done
- go run build/ci.go archive -arch mips64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips64 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mips64le --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips64le; do mv -f "${bin}" "${bin/-linux-mips64le/}"; done
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# This builder does the Android Maven and Azure uploads
- stage: build
if: type = push
os: linux
dist: xenial
dist: bionic
addons:
apt:
packages:
- oracle-java8-installer
- oracle-java8-set-default
language: android
android:
components:
- platform-tools
- tools
- android-15
- android-19
- android-24
- openjdk-8-jdk
env:
- azure-android
- maven-android
@ -212,25 +130,33 @@ jobs:
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- curl https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz | tar -xz
# Install Android and its dependencies manually, Travis is stale
- export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
- curl https://dl.google.com/android/repository/commandlinetools-linux-6858069_latest.zip -o android.zip
- unzip -q android.zip -d $HOME/sdk && rm android.zip
- mv $HOME/sdk/cmdline-tools $HOME/sdk/latest && mkdir $HOME/sdk/cmdline-tools && mv $HOME/sdk/latest $HOME/sdk/cmdline-tools
- export PATH=$PATH:$HOME/sdk/cmdline-tools/latest/bin
- export ANDROID_HOME=$HOME/sdk
- yes | sdkmanager --licenses >/dev/null
- sdkmanager "platform-tools" "platforms;android-15" "platforms;android-19" "platforms;android-24" "ndk-bundle"
# Install Go to allow building with
- curl https://dl.google.com/go/go1.16.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go
script:
# Build the Android archive and upload it to Maven Central and Azure
- curl https://dl.google.com/android/repository/android-ndk-r19b-linux-x86_64.zip -o android-ndk-r19b.zip
- unzip -q android-ndk-r19b.zip && rm android-ndk-r19b.zip
- mv android-ndk-r19b $ANDROID_HOME/ndk-bundle
- mkdir -p $GOPATH/src/github.com/ethereum
- ln -s `pwd` $GOPATH/src/github.com/ethereum/go-ethereum
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -signify SIGNIFY_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
# This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
- stage: build
if: type = push
os: osx
go: 1.14.x
go: 1.16.x
env:
- azure-osx
- azure-ios
@ -239,8 +165,8 @@ jobs:
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# Build the iOS framework and upload it to CocoaPods and Azure
- gem uninstall cocoapods -a -x
@ -255,14 +181,45 @@ jobs:
# Workaround for https://github.com/golang/go/issues/23749
- export CGO_CFLAGS_ALLOW='-fmodules|-fblocks|-fobjc-arc'
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -signify SIGNIFY_KEY -deploy trunk -upload gethstore/builds
# These builders run the tests
- stage: build
os: linux
arch: amd64
dist: bionic
go: 1.16.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
if: type = pull_request
os: linux
arch: arm64
dist: bionic
go: 1.16.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: linux
dist: bionic
go: 1.15.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test -coverage $TEST_PACKAGES
# This builder does the Azure archive purges to avoid accumulating junk
- stage: build
if: type = cron
os: linux
dist: xenial
go: 1.14.x
dist: bionic
go: 1.16.x
env:
- azure-purge
- GO111MODULE=on

COPYING

@ -1,7 +1,7 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2014 The go-ethereum Authors.
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@ -616,4 +616,59 @@ above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.14-alpine as builder
FROM golang:1.16-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
@ -12,5 +12,5 @@ FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
EXPOSE 8545 8546 8547 30303 30303/udp
EXPOSE 8545 8546 30303 30303/udp
ENTRYPOINT ["geth"]


@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.14-alpine as builder
FROM golang:1.16-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
@ -12,4 +12,4 @@ FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/* /usr/local/bin/
EXPOSE 8545 8546 8547 30303 30303/udp
EXPOSE 8545 8546 30303 30303/udp


@ -24,7 +24,9 @@ android:
$(GORUN) build/ci.go aar --local
@echo "Done building."
@echo "Import \"$(GOBIN)/geth.aar\" to use the library."
@echo "Import \"$(GOBIN)/geth-sources.jar\" to add javadocs"
@echo "For more info see https://stackoverflow.com/questions/20994336/android-studio-how-to-attach-javadoc"
ios:
$(GORUN) build/ci.go xcode --local
@echo "Done building."


@ -4,9 +4,9 @@ Official Golang implementation of the Ethereum protocol.
[![API Reference](
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
)](https://godoc.org/github.com/ethereum/go-ethereum)
)](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc)
[![Go Report Card](https://goreportcard.com/badge/github.com/ethereum/go-ethereum)](https://goreportcard.com/report/github.com/ethereum/go-ethereum)
[![Travis](https://travis-ci.org/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.org/ethereum/go-ethereum)
[![Travis](https://travis-ci.com/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.com/ethereum/go-ethereum)
[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/nthXNEv)
Automated builds are available for stable releases and the unstable master branch. Binary
@ -14,7 +14,7 @@ archives are published at https://geth.ethereum.org/downloads/.
## Building the source
For prerequisites and detailed build instructions please read the [Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum) on the wiki.
For prerequisites and detailed build instructions please read the [Installation Instructions](https://geth.ethereum.org/docs/install-and-build/installing-geth).
Building `geth` requires both a Go (version 1.13 or later) and a C compiler. You can install
them using your favourite package manager. Once the dependencies are installed, run
@ -36,18 +36,18 @@ directory.
| Command | Description |
| :-----------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI page](https://geth.ethereum.org/docs/interface/command-line-options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://docs.soliditylang.org/en/develop/abi-spec.html) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://geth.ethereum.org/docs/dapp/native-bindings) page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://eth.wiki/json-rpc/API) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://eth.wiki/en/fundamentals/rlp)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
## Running `geth`
Going through all the possible command line flags is out of scope here (please consult our
[CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options)),
[CLI Wiki page](https://geth.ethereum.org/docs/interface/command-line-options)),
but we've enumerated a few common parameter combos to get you up to speed quickly
on how you can run your own `geth` instance.
@ -66,9 +66,9 @@ This command will:
* Start `geth` in fast sync mode (default, can be changed with the `--syncmode` flag),
causing it to download more data in exchange for avoiding processing the entire history
of the Ethereum network, which is very CPU intensive.
* Start up `geth`'s built-in interactive [JavaScript console](https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://github.com/ethereum/wiki/wiki/JavaScript-API)
as well as `geth`'s own [management APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs).
* Start up `geth`'s built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://web3js.readthedocs.io/en/)
as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/rpc/server).
This tool is optional and if you leave it out you can always attach to an already running
`geth` instance with `geth attach`.
@ -108,7 +108,7 @@ accounts available between them.*
### Full node on the Rinkeby test network
Go Ethereum also supports connecting to the older proof-of-authority based test network
called [*Rinkeby*](https://www.rinkeby.io) which is operated by members of the community.
```shell
@ -117,10 +117,10 @@ $ geth --rinkeby console
### Full node on the Ropsten test network
In addition to Görli and Rinkeby, Geth also supports the ancient Ropsten testnet. The
Ropsten test network is based on the Ethash proof-of-work consensus algorithm. As such,
it has certain extra overhead and is more susceptible to reorganization attacks due to the
network's low difficulty/security.
```shell
$ geth --ropsten console
@ -162,7 +162,7 @@ above command does. It will also create a persistent volume in your home direct
saving your blockchain as well as map the default ports. There is also an `alpine` tag
available for a slim version of the image.
Do not forget `--rpcaddr 0.0.0.0`, if you want to access RPC from other containers
Do not forget `--http.addr 0.0.0.0`, if you want to access RPC from other containers
and/or hosts. By default, `geth` binds to the local interface and RPC endpoints is not
accessible from the outside.
@ -170,8 +170,8 @@ accessible from the outside.
As a developer, sooner rather than later you'll want to start interacting with `geth` and the
Ethereum network via your own programs and not manually through the console. To aid
this, `geth` has built-in support for a JSON-RPC based APIs ([standard APIs](https://github.com/ethereum/wiki/wiki/JSON-RPC)
and [`geth` specific APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs)).
this, `geth` has built-in support for a JSON-RPC based APIs ([standard APIs](https://eth.wiki/json-rpc/API)
and [`geth` specific APIs](https://geth.ethereum.org/docs/rpc/server)).
These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based
platforms, and named pipes on Windows).
@ -182,16 +182,16 @@ you'd expect.
HTTP based JSON-RPC API options:
* `--rpc` Enable the HTTP-RPC server
* `--rpcaddr` HTTP-RPC server listening interface (default: `localhost`)
* `--rpcport` HTTP-RPC server listening port (default: `8545`)
* `--rpcapi` API's offered over the HTTP-RPC interface (default: `eth,net,web3`)
* `--rpccorsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--http` Enable the HTTP-RPC server
* `--http.addr` HTTP-RPC server listening interface (default: `localhost`)
* `--http.port` HTTP-RPC server listening port (default: `8545`)
* `--http.api` API's offered over the HTTP-RPC interface (default: `eth,net,web3`)
* `--http.corsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--ws` Enable the WS-RPC server
* `--wsaddr` WS-RPC server listening interface (default: `localhost`)
* `--wsport` WS-RPC server listening port (default: `8546`)
* `--wsapi` API's offered over the WS-RPC interface (default: `eth,net,web3`)
* `--wsorigins` Origins from which to accept websockets requests
* `--ws.addr` WS-RPC server listening interface (default: `localhost`)
* `--ws.port` WS-RPC server listening port (default: `8546`)
* `--ws.api` API's offered over the WS-RPC interface (default: `eth,net,web3`)
* `--ws.origins` Origins from which to accept websockets requests
* `--ipcdisable` Disable the IPC-RPC server
* `--ipcapi` API's offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,shh,txpool,web3`)
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
@ -277,7 +277,7 @@ $ bootnode --genkey=boot.key
$ bootnode --nodekey=boot.key
```
With the bootnode online, it will display an [`enode` URL](https://github.com/ethereum/wiki/wiki/enode-url-format)
With the bootnode online, it will display an [`enode` URL](https://eth.wiki/en/fundamentals/enode-url-format)
that other nodes can use to connect to it and exchange peer information. Make sure to
replace the displayed IP address information (most probably `[::]`) with your externally
accessible IP to get the actual `enode` URL.
@ -314,13 +314,13 @@ ones either). To start a `geth` instance for mining, run it with all your usual
by:
```shell
$ geth <usual-flags> --mine --miner.threads=1 --etherbase=0x0000000000000000000000000000000000000000
$ geth <usual-flags> --mine --miner.threads=1 --miner.etherbase=0x0000000000000000000000000000000000000000
```
Which will start mining blocks and transactions on a single CPU thread, crediting all
proceedings to the account specified by `--etherbase`. You can further tune the mining
by changing the default gas limit blocks converge to (`--targetgaslimit`) and the price
transactions are accepted at (`--gasprice`).
proceedings to the account specified by `--miner.etherbase`. You can further tune the mining
by changing the default gas limit blocks converge to (`--miner.targetgaslimit`) and the price
transactions are accepted at (`--miner.gasprice`).
## Contribution
@ -344,7 +344,7 @@ Please make sure your contributions adhere to our coding guidelines:
* Commit messages should be prefixed with the package(s) they modify.
* E.g. "eth, rpc: make trace configs optional"
Please see the [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/devguide)
for more details on configuring your environment, managing project dependencies, and
testing procedures.


@ -2,31 +2,29 @@
## Supported Versions
Please see Releases. We recommend to use the most recent released version.
Please see [Releases](https://github.com/ethereum/go-ethereum/releases). We recommend using the [most recently released version](https://github.com/ethereum/go-ethereum/releases/latest).
## Audit reports
Audit reports are published in the `docs` folder: https://github.com/ethereum/go-ethereum/tree/master/docs/audits
| Scope | Date | Report Link |
| ------- | ------- | ----------- |
| `geth` | 20170425 | [pdf](https://github.com/ethereum/go-ethereum/blob/master/docs/audits/2017-04-25_Geth-audit_Truesec.pdf) |
| `clef` | 20180914 | [pdf](https://github.com/ethereum/go-ethereum/blob/master/docs/audits/2018-09-14_Clef-audit_NCC.pdf) |
## Reporting a Vulnerability
**Please do not file a public ticket** mentioning the vulnerability.
To find out how to disclose a vulnerability in Ethereum visit [https://bounty.ethereum.org](https://bounty.ethereum.org) or email bounty@ethereum.org.
To find out how to disclose a vulnerability in Ethereum visit [https://bounty.ethereum.org](https://bounty.ethereum.org) or email bounty@ethereum.org. Please read the [disclosure page](https://github.com/ethereum/go-ethereum/security/advisories?state=published) for more information about publically disclosed security vulnerabilities.
Use the built-in `geth version-check` feature to check whether the software is affected by any known vulnerability. This command will fetch the latest [`vulnerabilities.json`](https://geth.ethereum.org/docs/vulnerabilities/vulnerabilities.json) file which contains known security vulnerabilities concerning `geth`, and cross-check the data against its own version number.
The following key may be used to communicate sensitive information to developers.
Fingerprint: `AE96 ED96 9E47 9B00 84F3 E17F E88D 3334 FA5F 6A0A`
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1


@ -24,6 +24,7 @@ import (
"io"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
// The ABI holds information about a contract's context and available
@ -76,42 +77,62 @@ func (abi ABI) Pack(name string, args ...interface{}) ([]byte, error) {
return nil, err
}
// Pack up the method ID too if not a constructor and return
return append(method.ID(), arguments...), nil
return append(method.ID, arguments...), nil
}
// Unpack output in v according to the abi specification
func (abi ABI) Unpack(v interface{}, name string, data []byte) (err error) {
func (abi ABI) getArguments(name string, data []byte) (Arguments, error) {
// since there can't be naming collisions with contracts and events,
// we need to decide whether we're calling a method or an event
var args Arguments
if method, ok := abi.Methods[name]; ok {
if len(data)%32 != 0 {
return fmt.Errorf("abi: improperly formatted output: %s - Bytes: [%+v]", string(data), data)
return nil, fmt.Errorf("abi: improperly formatted output: %s - Bytes: [%+v]", string(data), data)
}
return method.Outputs.Unpack(v, data)
args = method.Outputs
}
if event, ok := abi.Events[name]; ok {
return event.Inputs.Unpack(v, data)
args = event.Inputs
}
return fmt.Errorf("abi: could not locate named method or event")
if args == nil {
return nil, errors.New("abi: could not locate named method or event")
}
return args, nil
}
// UnpackIntoMap unpacks a log into the provided map[string]interface{}
// Unpack unpacks the output according to the abi specification.
func (abi ABI) Unpack(name string, data []byte) ([]interface{}, error) {
args, err := abi.getArguments(name, data)
if err != nil {
return nil, err
}
return args.Unpack(data)
}
// UnpackIntoInterface unpacks the output in v according to the abi specification.
// It performs an additional copy. Please only use, if you want to unpack into a
// structure that does not strictly conform to the abi structure (e.g. has additional arguments)
func (abi ABI) UnpackIntoInterface(v interface{}, name string, data []byte) error {
args, err := abi.getArguments(name, data)
if err != nil {
return err
}
unpacked, err := args.Unpack(data)
if err != nil {
return err
}
return args.Copy(v, unpacked)
}
// UnpackIntoMap unpacks a log into the provided map[string]interface{}.
func (abi ABI) UnpackIntoMap(v map[string]interface{}, name string, data []byte) (err error) {
// since there can't be naming collisions with contracts and events,
// we need to decide whether we're calling a method or an event
if method, ok := abi.Methods[name]; ok {
if len(data)%32 != 0 {
return fmt.Errorf("abi: improperly formatted output")
}
return method.Outputs.UnpackIntoMap(v, data)
args, err := abi.getArguments(name, data)
if err != nil {
return err
}
if event, ok := abi.Events[name]; ok {
return event.Inputs.UnpackIntoMap(v, data)
}
return fmt.Errorf("abi: could not locate named method or event")
return args.UnpackIntoMap(v, data)
}
// UnmarshalJSON implements json.Unmarshaler interface
// UnmarshalJSON implements json.Unmarshaler interface.
func (abi *ABI) UnmarshalJSON(data []byte) error {
var fields []struct {
Type string
@ -139,59 +160,17 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
for _, field := range fields {
switch field.Type {
case "constructor":
abi.Constructor = Method{
Inputs: field.Inputs,
// Note for constructor the `StateMutability` can only
// be payable or nonpayable according to the output of
// compiler. So constant is always false.
StateMutability: field.StateMutability,
// Legacy fields, keep them for backward compatibility
Constant: field.Constant,
Payable: field.Payable,
}
abi.Constructor = NewMethod("", "", Constructor, field.StateMutability, field.Constant, field.Payable, field.Inputs, nil)
case "function":
name := field.Name
_, ok := abi.Methods[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", field.Name, idx)
_, ok = abi.Methods[name]
}
abi.Methods[name] = Method{
Name: name,
RawName: field.Name,
StateMutability: field.StateMutability,
Inputs: field.Inputs,
Outputs: field.Outputs,
// Legacy fields, keep them for backward compatibility
Constant: field.Constant,
Payable: field.Payable,
}
name := abi.overloadedMethodName(field.Name)
abi.Methods[name] = NewMethod(name, field.Name, Function, field.StateMutability, field.Constant, field.Payable, field.Inputs, field.Outputs)
case "fallback":
// New introduced function type in v0.6.0, check more detail
// here https://solidity.readthedocs.io/en/v0.6.0/contracts.html#fallback-function
if abi.HasFallback() {
return errors.New("only single fallback is allowed")
}
abi.Fallback = Method{
Name: "",
RawName: "",
// The `StateMutability` can only be payable or nonpayable,
// so the constant is always false.
StateMutability: field.StateMutability,
IsFallback: true,
// Fallback doesn't have any input or output
Inputs: nil,
Outputs: nil,
// Legacy fields, keep them for backward compatibility
Constant: field.Constant,
Payable: field.Payable,
}
abi.Fallback = NewMethod("", "", Fallback, field.StateMutability, field.Constant, field.Payable, nil, nil)
case "receive":
// New introduced function type in v0.6.0, check more detail
// here https://solidity.readthedocs.io/en/v0.6.0/contracts.html#fallback-function
@ -201,49 +180,55 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
if field.StateMutability != "payable" {
return errors.New("the statemutability of receive can only be payable")
}
abi.Receive = Method{
Name: "",
RawName: "",
// The `StateMutability` can only be payable, so constant
// is always true while payable is always false.
StateMutability: field.StateMutability,
IsReceive: true,
// Receive doesn't have any input or output
Inputs: nil,
Outputs: nil,
// Legacy fields, keep them for backward compatibility
Constant: field.Constant,
Payable: field.Payable,
}
abi.Receive = NewMethod("", "", Receive, field.StateMutability, field.Constant, field.Payable, nil, nil)
case "event":
name := field.Name
_, ok := abi.Events[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", field.Name, idx)
_, ok = abi.Events[name]
}
abi.Events[name] = Event{
Name: name,
RawName: field.Name,
Anonymous: field.Anonymous,
Inputs: field.Inputs,
}
name := abi.overloadedEventName(field.Name)
abi.Events[name] = NewEvent(name, field.Name, field.Anonymous, field.Inputs)
default:
return fmt.Errorf("abi: could not recognize type %v of field %v", field.Type, field.Name)
}
}
return nil
}
// MethodById looks up a method by the 4-byte id
// returns nil if none found
// overloadedMethodName returns the next available name for a given function.
// Needed since solidity allows for function overload.
//
// e.g. if the abi contains Methods send, send1
// overloadedMethodName would return send2 for input send.
func (abi *ABI) overloadedMethodName(rawName string) string {
name := rawName
_, ok := abi.Methods[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", rawName, idx)
_, ok = abi.Methods[name]
}
return name
}
// overloadedEventName returns the next available name for a given event.
// Needed since solidity allows for event overload.
//
// e.g. if the abi contains events received, received1
// overloadedEventName would return received2 for input received.
func (abi *ABI) overloadedEventName(rawName string) string {
name := rawName
_, ok := abi.Events[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", rawName, idx)
_, ok = abi.Events[name]
}
return name
}
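
For instance, a hypothetical ABI carrying two transfer overloads (a sketch only; the definition below is made up) ends up with distinct map keys:

package main

import (
    "fmt"
    "strings"

    "github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
    // Two Solidity overloads of transfer in one made-up ABI definition.
    const def = `[
        {"type":"function","name":"transfer","inputs":[{"name":"to","type":"address"}]},
        {"type":"function","name":"transfer","inputs":[{"name":"to","type":"address"},{"name":"data","type":"bytes"}]}
    ]`
    parsed, err := abi.JSON(strings.NewReader(def))
    if err != nil {
        panic(err)
    }
    fmt.Println(parsed.Methods["transfer"].Sig)  // transfer(address)
    fmt.Println(parsed.Methods["transfer0"].Sig) // transfer(address,bytes)
}
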
// MethodById looks up a method by the 4-byte id,
// returns nil if none found.
func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
if len(sigdata) < 4 {
return nil, fmt.Errorf("data too short (%d bytes) for abi method lookup", len(sigdata))
}
for _, method := range abi.Methods {
if bytes.Equal(method.ID(), sigdata[:4]) {
if bytes.Equal(method.ID, sigdata[:4]) {
return &method, nil
}
}
@ -254,7 +239,7 @@ func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
// ABI and returns nil if none found.
func (abi *ABI) EventByID(topic common.Hash) (*Event, error) {
for _, event := range abi.Events {
if bytes.Equal(event.ID().Bytes(), topic.Bytes()) {
if bytes.Equal(event.ID.Bytes(), topic.Bytes()) {
return &event, nil
}
}
@ -263,10 +248,32 @@ func (abi *ABI) EventByID(topic common.Hash) (*Event, error) {
// HasFallback reports whether a fallback function is included.
func (abi *ABI) HasFallback() bool {
return abi.Fallback.IsFallback
return abi.Fallback.Type == Fallback
}
// HasReceive reports whether a receive function is included.
func (abi *ABI) HasReceive() bool {
return abi.Receive.IsReceive
return abi.Receive.Type == Receive
}
// revertSelector is a special function selector for revert reason unpacking.
var revertSelector = crypto.Keccak256([]byte("Error(string)"))[:4]
// UnpackRevert resolves the abi-encoded revert reason. According to the Solidity
// spec https://solidity.readthedocs.io/en/latest/control-structures.html#revert,
// the provided revert reason is abi-encoded as if it were a call to the function
// `Error(string)`.
func UnpackRevert(data []byte) (string, error) {
if len(data) < 4 {
return "", errors.New("invalid data for unpacking")
}
if !bytes.Equal(data[:4], revertSelector) {
return "", errors.New("invalid data for unpacking")
}
typ, _ := NewType("string", "", nil)
unpacked, err := (Arguments{{Type: typ}}).Unpack(data[4:])
if err != nil {
return "", err
}
return unpacked[0].(string), nil
}
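
A short sketch of UnpackRevert in use; the hex blob is the Error("revert reason") encoding also exercised by TestUnpackRevert further down:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/accounts/abi"
    "github.com/ethereum/go-ethereum/common"
)

func main() {
    // ABI encoding of Error("revert reason"): the 0x08c379a0 selector followed by a string.
    ret := common.Hex2Bytes("08c379a00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000d72657665727420726561736f6e00000000000000000000000000000000000000")
    reason, err := abi.UnpackRevert(ret)
    if err != nil {
        panic(err)
    }
    fmt.Println(reason) // revert reason
}
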


@ -19,6 +19,7 @@ package abi
import (
"bytes"
"encoding/hex"
"errors"
"fmt"
"math/big"
"reflect"
@ -26,17 +27,13 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/crypto"
)
const jsondata = `
[
{ "type" : "function", "name" : "balance", "stateMutability" : "view" },
{ "type" : "function", "name" : "send", "inputs" : [ { "name" : "amount", "type" : "uint256" } ] }
]`
const jsondata2 = `
[
{ "type" : "function", "name" : ""},
{ "type" : "function", "name" : "balance", "stateMutability" : "view" },
{ "type" : "function", "name" : "send", "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "type" : "function", "name" : "test", "inputs" : [ { "name" : "number", "type" : "uint32" } ] },
@ -45,6 +42,8 @@ const jsondata2 = `
{ "type" : "function", "name" : "address", "inputs" : [ { "name" : "inputs", "type" : "address" } ] },
{ "type" : "function", "name" : "uint64[2]", "inputs" : [ { "name" : "inputs", "type" : "uint64[2]" } ] },
{ "type" : "function", "name" : "uint64[]", "inputs" : [ { "name" : "inputs", "type" : "uint64[]" } ] },
{ "type" : "function", "name" : "int8", "inputs" : [ { "name" : "inputs", "type" : "int8" } ] },
{ "type" : "function", "name" : "bytes32", "inputs" : [ { "name" : "inputs", "type" : "bytes32" } ] },
{ "type" : "function", "name" : "foo", "inputs" : [ { "name" : "inputs", "type" : "uint32" } ] },
{ "type" : "function", "name" : "bar", "inputs" : [ { "name" : "inputs", "type" : "uint32" }, { "name" : "string", "type" : "uint16" } ] },
{ "type" : "function", "name" : "slice", "inputs" : [ { "name" : "inputs", "type" : "uint32[2]" } ] },
@ -53,30 +52,83 @@ const jsondata2 = `
{ "type" : "function", "name" : "sliceMultiAddress", "inputs" : [ { "name" : "a", "type" : "address[]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray", "inputs" : [ { "name" : "a", "type" : "uint256[2][2]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray2", "inputs" : [ { "name" : "a", "type" : "uint8[][2]" } ] },
{ "type" : "function", "name" : "nestedSlice", "inputs" : [ { "name" : "a", "type" : "uint8[][]" } ] }
{ "type" : "function", "name" : "nestedSlice", "inputs" : [ { "name" : "a", "type" : "uint8[][]" } ] },
{ "type" : "function", "name" : "receive", "inputs" : [ { "name" : "memo", "type" : "bytes" }], "outputs" : [], "payable" : true, "stateMutability" : "payable" },
{ "type" : "function", "name" : "fixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "fixedArrBytes", "stateMutability" : "view", "inputs" : [ { "name" : "bytes", "type" : "bytes" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "mixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type" : "uint256[2]" }, { "name" : "dynArr", "type" : "uint256[]" } ] },
{ "type" : "function", "name" : "doubleFixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type" : "uint256[2]" }, { "name" : "fixedArr2", "type" : "uint256[3]" } ] },
{ "type" : "function", "name" : "multipleMixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type" : "uint256[2]" }, { "name" : "dynArr", "type" : "uint256[]" }, { "name" : "fixedArr2", "type" : "uint256[3]" } ] },
{ "type" : "function", "name" : "overloadedNames", "stateMutability" : "view", "inputs": [ { "components": [ { "internalType": "uint256", "name": "_f", "type": "uint256" }, { "internalType": "uint256", "name": "__f", "type": "uint256"}, { "internalType": "uint256", "name": "f", "type": "uint256"}],"internalType": "struct Overloader.F", "name": "f","type": "tuple"}]}
]`
var (
Uint256, _ = NewType("uint256", "", nil)
Uint32, _ = NewType("uint32", "", nil)
Uint16, _ = NewType("uint16", "", nil)
String, _ = NewType("string", "", nil)
Bool, _ = NewType("bool", "", nil)
Bytes, _ = NewType("bytes", "", nil)
Bytes32, _ = NewType("bytes32", "", nil)
Address, _ = NewType("address", "", nil)
Uint64Arr, _ = NewType("uint64[]", "", nil)
AddressArr, _ = NewType("address[]", "", nil)
Int8, _ = NewType("int8", "", nil)
// Special types for testing
Uint32Arr2, _ = NewType("uint32[2]", "", nil)
Uint64Arr2, _ = NewType("uint64[2]", "", nil)
Uint256Arr, _ = NewType("uint256[]", "", nil)
Uint256Arr2, _ = NewType("uint256[2]", "", nil)
Uint256Arr3, _ = NewType("uint256[3]", "", nil)
Uint256ArrNested, _ = NewType("uint256[2][2]", "", nil)
Uint8ArrNested, _ = NewType("uint8[][2]", "", nil)
Uint8SliceNested, _ = NewType("uint8[][]", "", nil)
TupleF, _ = NewType("tuple", "struct Overloader.F", []ArgumentMarshaling{
{Name: "_f", Type: "uint256"},
{Name: "__f", Type: "uint256"},
{Name: "f", Type: "uint256"}})
)
var methods = map[string]Method{
"": NewMethod("", "", Function, "", false, false, nil, nil),
"balance": NewMethod("balance", "balance", Function, "view", false, false, nil, nil),
"send": NewMethod("send", "send", Function, "", false, false, []Argument{{"amount", Uint256, false}}, nil),
"test": NewMethod("test", "test", Function, "", false, false, []Argument{{"number", Uint32, false}}, nil),
"string": NewMethod("string", "string", Function, "", false, false, []Argument{{"inputs", String, false}}, nil),
"bool": NewMethod("bool", "bool", Function, "", false, false, []Argument{{"inputs", Bool, false}}, nil),
"address": NewMethod("address", "address", Function, "", false, false, []Argument{{"inputs", Address, false}}, nil),
"uint64[]": NewMethod("uint64[]", "uint64[]", Function, "", false, false, []Argument{{"inputs", Uint64Arr, false}}, nil),
"uint64[2]": NewMethod("uint64[2]", "uint64[2]", Function, "", false, false, []Argument{{"inputs", Uint64Arr2, false}}, nil),
"int8": NewMethod("int8", "int8", Function, "", false, false, []Argument{{"inputs", Int8, false}}, nil),
"bytes32": NewMethod("bytes32", "bytes32", Function, "", false, false, []Argument{{"inputs", Bytes32, false}}, nil),
"foo": NewMethod("foo", "foo", Function, "", false, false, []Argument{{"inputs", Uint32, false}}, nil),
"bar": NewMethod("bar", "bar", Function, "", false, false, []Argument{{"inputs", Uint32, false}, {"string", Uint16, false}}, nil),
"slice": NewMethod("slice", "slice", Function, "", false, false, []Argument{{"inputs", Uint32Arr2, false}}, nil),
"slice256": NewMethod("slice256", "slice256", Function, "", false, false, []Argument{{"inputs", Uint256Arr2, false}}, nil),
"sliceAddress": NewMethod("sliceAddress", "sliceAddress", Function, "", false, false, []Argument{{"inputs", AddressArr, false}}, nil),
"sliceMultiAddress": NewMethod("sliceMultiAddress", "sliceMultiAddress", Function, "", false, false, []Argument{{"a", AddressArr, false}, {"b", AddressArr, false}}, nil),
"nestedArray": NewMethod("nestedArray", "nestedArray", Function, "", false, false, []Argument{{"a", Uint256ArrNested, false}, {"b", AddressArr, false}}, nil),
"nestedArray2": NewMethod("nestedArray2", "nestedArray2", Function, "", false, false, []Argument{{"a", Uint8ArrNested, false}}, nil),
"nestedSlice": NewMethod("nestedSlice", "nestedSlice", Function, "", false, false, []Argument{{"a", Uint8SliceNested, false}}, nil),
"receive": NewMethod("receive", "receive", Function, "payable", false, true, []Argument{{"memo", Bytes, false}}, []Argument{}),
"fixedArrStr": NewMethod("fixedArrStr", "fixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr", Uint256Arr2, false}}, nil),
"fixedArrBytes": NewMethod("fixedArrBytes", "fixedArrBytes", Function, "view", false, false, []Argument{{"bytes", Bytes, false}, {"fixedArr", Uint256Arr2, false}}, nil),
"mixedArrStr": NewMethod("mixedArrStr", "mixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr", Uint256Arr2, false}, {"dynArr", Uint256Arr, false}}, nil),
"doubleFixedArrStr": NewMethod("doubleFixedArrStr", "doubleFixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr1", Uint256Arr2, false}, {"fixedArr2", Uint256Arr3, false}}, nil),
"multipleMixedArrStr": NewMethod("multipleMixedArrStr", "multipleMixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr1", Uint256Arr2, false}, {"dynArr", Uint256Arr, false}, {"fixedArr2", Uint256Arr3, false}}, nil),
"overloadedNames": NewMethod("overloadedNames", "overloadedNames", Function, "view", false, false, []Argument{{"f", TupleF, false}}, nil),
}
func TestReader(t *testing.T) {
Uint256, _ := NewType("uint256", "", nil)
exp := ABI{
Methods: map[string]Method{
"balance": {
"balance", "balance", "view", false, false, false, false, nil, nil,
},
"send": {
"send", "send", "", false, false, false, false, []Argument{
{"amount", Uint256, false},
}, nil,
},
},
abi := ABI{
Methods: methods,
}
abi, err := JSON(strings.NewReader(jsondata))
exp, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Error(err)
t.Fatal(err)
}
// deep equal fails for some reason
for name, expM := range exp.Methods {
gotM, exist := abi.Methods[name]
if !exist {
@ -98,8 +150,55 @@ func TestReader(t *testing.T) {
}
}
func TestInvalidABI(t *testing.T) {
json := `[{ "type" : "function", "name" : "", "constant" : fals }]`
_, err := JSON(strings.NewReader(json))
if err == nil {
t.Fatal("invalid json should produce error")
}
json2 := `[{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "typ" : "uint256" } ] }]`
_, err = JSON(strings.NewReader(json2))
if err == nil {
t.Fatal("invalid json should produce error")
}
}
// TestConstructor tests a constructor function.
// The test is based on the following contract:
// contract TestConstructor {
// constructor(uint256 a, uint256 b) public{}
// }
func TestConstructor(t *testing.T) {
json := `[{ "inputs": [{"internalType": "uint256","name": "a","type": "uint256" },{ "internalType": "uint256","name": "b","type": "uint256"}],"stateMutability": "nonpayable","type": "constructor"}]`
method := NewMethod("", "", Constructor, "nonpayable", false, false, []Argument{{"a", Uint256, false}, {"b", Uint256, false}}, nil)
// Test from JSON
abi, err := JSON(strings.NewReader(json))
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(abi.Constructor, method) {
t.Error("Missing expected constructor")
}
// Test pack/unpack
packed, err := abi.Pack("", big.NewInt(1), big.NewInt(2))
if err != nil {
t.Error(err)
}
unpacked, err := abi.Constructor.Inputs.Unpack(packed)
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(unpacked[0], big.NewInt(1)) {
t.Error("Unable to pack/unpack from constructor")
}
if !reflect.DeepEqual(unpacked[1], big.NewInt(2)) {
t.Error("Unable to pack/unpack from constructor")
}
}
func TestTestNumbers(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
@ -135,60 +234,22 @@ func TestTestNumbers(t *testing.T) {
}
}
func TestTestString(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
if _, err := abi.Pack("string", "hello world"); err != nil {
t.Error(err)
}
}
func TestTestBool(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
if _, err := abi.Pack("bool", true); err != nil {
t.Error(err)
}
}
func TestTestSlice(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
slice := make([]uint64, 2)
if _, err := abi.Pack("uint64[2]", slice); err != nil {
t.Error(err)
}
if _, err := abi.Pack("uint64[]", slice); err != nil {
t.Error(err)
}
}
func TestMethodSignature(t *testing.T) {
String, _ := NewType("string", "", nil)
m := Method{"foo", "foo", "", false, false, false, false, []Argument{{"bar", String, false}, {"baz", String, false}}, nil}
m := NewMethod("foo", "foo", Function, "", false, false, []Argument{{"bar", String, false}, {"baz", String, false}}, nil)
exp := "foo(string,string)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
if m.Sig != exp {
t.Error("signature mismatch", exp, "!=", m.Sig)
}
idexp := crypto.Keccak256([]byte(exp))[:4]
if !bytes.Equal(m.ID(), idexp) {
t.Errorf("expected ids to match %x != %x", m.ID(), idexp)
if !bytes.Equal(m.ID, idexp) {
t.Errorf("expected ids to match %x != %x", m.ID, idexp)
}
uintt, _ := NewType("uint256", "", nil)
m = Method{"foo", "foo", "", false, false, false, false, []Argument{{"bar", uintt, false}}, nil}
m = NewMethod("foo", "foo", Function, "", false, false, []Argument{{"bar", Uint256, false}}, nil)
exp = "foo(uint256)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
if m.Sig != exp {
t.Error("signature mismatch", exp, "!=", m.Sig)
}
// Method with tuple arguments
@ -204,10 +265,10 @@ func TestMethodSignature(t *testing.T) {
{Name: "y", Type: "int256"},
}},
})
m = Method{"foo", "foo", "", false, false, false, false, []Argument{{"s", s, false}, {"bar", String, false}}, nil}
m = NewMethod("foo", "foo", Function, "", false, false, []Argument{{"s", s, false}, {"bar", String, false}}, nil)
exp = "foo((int256,int256[],(int256,int256)[],(int256,int256)[2]),string)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
if m.Sig != exp {
t.Error("signature mismatch", exp, "!=", m.Sig)
}
}
@ -219,12 +280,12 @@ func TestOverloadedMethodSignature(t *testing.T) {
}
check := func(name string, expect string, method bool) {
if method {
if abi.Methods[name].Sig() != expect {
t.Fatalf("The signature of overloaded method mismatch, want %s, have %s", expect, abi.Methods[name].Sig())
if abi.Methods[name].Sig != expect {
t.Fatalf("The signature of overloaded method mismatch, want %s, have %s", expect, abi.Methods[name].Sig)
}
} else {
if abi.Events[name].Sig() != expect {
t.Fatalf("The signature of overloaded event mismatch, want %s, have %s", expect, abi.Events[name].Sig())
if abi.Events[name].Sig != expect {
t.Fatalf("The signature of overloaded event mismatch, want %s, have %s", expect, abi.Events[name].Sig)
}
}
}
@ -235,7 +296,7 @@ func TestOverloadedMethodSignature(t *testing.T) {
}
func TestMultiPack(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
@ -400,15 +461,7 @@ func TestInputVariableInputLength(t *testing.T) {
}
func TestInputFixedArrayAndVariableInputLength(t *testing.T) {
const definition = `[
{ "type" : "function", "name" : "fixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "fixedArrBytes", "constant" : true, "inputs" : [ { "name" : "str", "type" : "bytes" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "mixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type": "uint256[2]" }, { "name" : "dynArr", "type": "uint256[]" } ] },
{ "type" : "function", "name" : "doubleFixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type": "uint256[2]" }, { "name" : "fixedArr2", "type": "uint256[3]" } ] },
{ "type" : "function", "name" : "multipleMixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type": "uint256[2]" }, { "name" : "dynArr", "type" : "uint256[]" }, { "name" : "fixedArr2", "type" : "uint256[3]" } ] }
]`
abi, err := JSON(strings.NewReader(definition))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Error(err)
}
@ -555,7 +608,7 @@ func TestInputFixedArrayAndVariableInputLength(t *testing.T) {
strvalue = common.RightPadBytes([]byte(strin), 32)
fixedarrin1value1 = common.LeftPadBytes(fixedarrin1[0].Bytes(), 32)
fixedarrin1value2 = common.LeftPadBytes(fixedarrin1[1].Bytes(), 32)
dynarroffset = U256(big.NewInt(int64(256 + ((len(strin)/32)+1)*32)))
dynarroffset = math.U256Bytes(big.NewInt(int64(256 + ((len(strin)/32)+1)*32)))
dynarrlength = make([]byte, 32)
dynarrlength[31] = byte(len(dynarrin))
dynarrinvalue1 = common.LeftPadBytes(dynarrin[0].Bytes(), 32)
@ -602,8 +655,6 @@ func TestBareEvents(t *testing.T) {
{ "type" : "event", "name" : "tuple", "inputs" : [{ "indexed":false, "name":"t", "type":"tuple", "components":[{"name":"a", "type":"uint256"}] }, { "indexed":true, "name":"arg1", "type":"address" }] }
]`
arg0, _ := NewType("uint256", "", nil)
arg1, _ := NewType("address", "", nil)
tuple, _ := NewType("tuple", "", []ArgumentMarshaling{{Name: "a", Type: "uint256"}})
expectedEvents := map[string]struct {
@ -613,12 +664,12 @@ func TestBareEvents(t *testing.T) {
"balance": {false, nil},
"anon": {true, nil},
"args": {false, []Argument{
{Name: "arg0", Type: arg0, Indexed: false},
{Name: "arg1", Type: arg1, Indexed: true},
{Name: "arg0", Type: Uint256, Indexed: false},
{Name: "arg1", Type: Address, Indexed: true},
}},
"tuple": {false, []Argument{
{Name: "t", Type: tuple, Indexed: false},
{Name: "arg1", Type: arg1, Indexed: true},
{Name: "arg1", Type: Address, Indexed: true},
}},
}
@ -692,7 +743,7 @@ func TestUnpackEvent(t *testing.T) {
}
var ev ReceivedEvent
err = abi.Unpack(&ev, "received", data)
err = abi.UnpackIntoInterface(&ev, "received", data)
if err != nil {
t.Error(err)
}
@ -701,7 +752,7 @@ func TestUnpackEvent(t *testing.T) {
Sender common.Address
}
var receivedAddrEv ReceivedAddrEvent
err = abi.Unpack(&receivedAddrEv, "receivedAddr", data)
err = abi.UnpackIntoInterface(&receivedAddrEv, "receivedAddr", data)
if err != nil {
t.Error(err)
}
@ -891,45 +942,25 @@ func TestUnpackIntoMapNamingConflict(t *testing.T) {
}
func TestABI_MethodById(t *testing.T) {
const abiJSON = `[
{"type":"function","name":"receive","constant":false,"inputs":[{"name":"memo","type":"bytes"}],"outputs":[],"payable":true,"stateMutability":"payable"},
{"type":"event","name":"received","anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}]},
{"type":"function","name":"fixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr","type":"uint256[2]"}]},
{"type":"function","name":"fixedArrBytes","constant":true,"inputs":[{"name":"str","type":"bytes"},{"name":"fixedArr","type":"uint256[2]"}]},
{"type":"function","name":"mixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr","type":"uint256[2]"},{"name":"dynArr","type":"uint256[]"}]},
{"type":"function","name":"doubleFixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr1","type":"uint256[2]"},{"name":"fixedArr2","type":"uint256[3]"}]},
{"type":"function","name":"multipleMixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr1","type":"uint256[2]"},{"name":"dynArr","type":"uint256[]"},{"name":"fixedArr2","type":"uint256[3]"}]},
{"type":"function","name":"balance","constant":true},
{"type":"function","name":"send","constant":false,"inputs":[{"name":"amount","type":"uint256"}]},
{"type":"function","name":"test","constant":false,"inputs":[{"name":"number","type":"uint32"}]},
{"type":"function","name":"string","constant":false,"inputs":[{"name":"inputs","type":"string"}]},
{"type":"function","name":"bool","constant":false,"inputs":[{"name":"inputs","type":"bool"}]},
{"type":"function","name":"address","constant":false,"inputs":[{"name":"inputs","type":"address"}]},
{"type":"function","name":"uint64[2]","constant":false,"inputs":[{"name":"inputs","type":"uint64[2]"}]},
{"type":"function","name":"uint64[]","constant":false,"inputs":[{"name":"inputs","type":"uint64[]"}]},
{"type":"function","name":"foo","constant":false,"inputs":[{"name":"inputs","type":"uint32"}]},
{"type":"function","name":"bar","constant":false,"inputs":[{"name":"inputs","type":"uint32"},{"name":"string","type":"uint16"}]},
{"type":"function","name":"_slice","constant":false,"inputs":[{"name":"inputs","type":"uint32[2]"}]},
{"type":"function","name":"__slice256","constant":false,"inputs":[{"name":"inputs","type":"uint256[2]"}]},
{"type":"function","name":"sliceAddress","constant":false,"inputs":[{"name":"inputs","type":"address[]"}]},
{"type":"function","name":"sliceMultiAddress","constant":false,"inputs":[{"name":"a","type":"address[]"},{"name":"b","type":"address[]"}]}
]
`
abi, err := JSON(strings.NewReader(abiJSON))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
for name, m := range abi.Methods {
a := fmt.Sprintf("%v", m)
m2, err := abi.MethodById(m.ID())
m2, err := abi.MethodById(m.ID)
if err != nil {
t.Fatalf("Failed to look up ABI method: %v", err)
}
b := fmt.Sprintf("%v", m2)
if a != b {
t.Errorf("Method %v (id %x) not 'findable' by id in ABI", name, m.ID())
t.Errorf("Method %v (id %x) not 'findable' by id in ABI", name, m.ID)
}
}
// test unsuccessful lookups
if _, err = abi.MethodById(crypto.Keccak256()); err == nil {
t.Error("Expected error: no method with this id")
}
// Also test empty
if _, err := abi.MethodById([]byte{0x00}); err == nil {
t.Errorf("Expected error, too short to decode data")
@ -995,8 +1026,8 @@ func TestABI_EventById(t *testing.T) {
t.Errorf("We should find a event for topic %s, test #%d", topicID.Hex(), testnum)
}
if event.ID() != topicID {
t.Errorf("Event id %s does not match topic %s, test #%d", event.ID().Hex(), topicID.Hex(), testnum)
if event.ID != topicID {
t.Errorf("Event id %s does not match topic %s, test #%d", event.ID.Hex(), topicID.Hex(), testnum)
}
unknowntopicID := crypto.Keccak256Hash([]byte("unknownEvent"))
@ -1010,26 +1041,6 @@ func TestABI_EventById(t *testing.T) {
}
}
func TestDuplicateMethodNames(t *testing.T) {
abiJSON := `[{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"},{"name":"customFallback","type":"string"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
if _, ok := contractAbi.Methods["transfer"]; !ok {
t.Fatalf("Could not find original method")
}
if _, ok := contractAbi.Methods["transfer0"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer1"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer2"]; ok {
t.Fatalf("Should not have found extra method")
}
}
// TestDoubleDuplicateMethodNames checks that if transfer0 already exists, there won't be a name
// conflict and that the second transfer method will be renamed transfer1.
func TestDoubleDuplicateMethodNames(t *testing.T) {
@ -1051,3 +1062,87 @@ func TestDoubleDuplicateMethodNames(t *testing.T) {
t.Fatalf("Should not have found extra method")
}
}
// TestDoubleDuplicateEventNames checks that if send0 already exists, there won't be a name
// conflict and that the second send event will be renamed send1.
// The test runs the abi of the following contract.
// contract DuplicateEvent {
// event send(uint256 a);
// event send0();
// event send();
// }
func TestDoubleDuplicateEventNames(t *testing.T) {
abiJSON := `[{"anonymous": false,"inputs": [{"indexed": false,"internalType": "uint256","name": "a","type": "uint256"}],"name": "send","type": "event"},{"anonymous": false,"inputs": [],"name": "send0","type": "event"},{ "anonymous": false, "inputs": [],"name": "send","type": "event"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
if _, ok := contractAbi.Events["send"]; !ok {
t.Fatalf("Could not find original event")
}
if _, ok := contractAbi.Events["send0"]; !ok {
t.Fatalf("Could not find duplicate event")
}
if _, ok := contractAbi.Events["send1"]; !ok {
t.Fatalf("Could not find duplicate event")
}
if _, ok := contractAbi.Events["send2"]; ok {
t.Fatalf("Should not have found extra event")
}
}
// TestUnnamedEventParam checks that an event with unnamed parameters is
// correctly handled.
// The test runs the abi of the following contract.
// contract TestEvent {
// event send(uint256, uint256);
// }
func TestUnnamedEventParam(t *testing.T) {
abiJSON := `[{ "anonymous": false, "inputs": [{ "indexed": false,"internalType": "uint256", "name": "","type": "uint256"},{"indexed": false,"internalType": "uint256","name": "","type": "uint256"}],"name": "send","type": "event"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
event, ok := contractAbi.Events["send"]
if !ok {
t.Fatalf("Could not find event")
}
if event.Inputs[0].Name != "arg0" {
t.Fatalf("Could not find input")
}
if event.Inputs[1].Name != "arg1" {
t.Fatalf("Could not find input")
}
}
func TestUnpackRevert(t *testing.T) {
t.Parallel()
var cases = []struct {
input string
expect string
expectErr error
}{
{"", "", errors.New("invalid data for unpacking")},
{"08c379a1", "", errors.New("invalid data for unpacking")},
{"08c379a00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000d72657665727420726561736f6e00000000000000000000000000000000000000", "revert reason", nil},
}
for index, c := range cases {
t.Run(fmt.Sprintf("case %d", index), func(t *testing.T) {
got, err := UnpackRevert(common.Hex2Bytes(c.input))
if c.expectErr != nil {
if err == nil {
t.Fatalf("Expected non-nil error")
}
if err.Error() != c.expectErr.Error() {
t.Fatalf("Expected error mismatch, want %v, got %v", c.expectErr, err)
}
return
}
if c.expect != got {
t.Fatalf("Output mismatch, want %v, got %v", c.expect, got)
}
})
}
}


@ -41,7 +41,7 @@ type ArgumentMarshaling struct {
Indexed bool
}
// UnmarshalJSON implements json.Unmarshaler interface
// UnmarshalJSON implements json.Unmarshaler interface.
func (argument *Argument) UnmarshalJSON(data []byte) error {
var arg ArgumentMarshaling
err := json.Unmarshal(data, &arg)
@ -59,19 +59,7 @@ func (argument *Argument) UnmarshalJSON(data []byte) error {
return nil
}
// LengthNonIndexed returns the number of arguments when not counting 'indexed' ones. Only events
// can ever have 'indexed' arguments, it should always be false on arguments for method input/output
func (arguments Arguments) LengthNonIndexed() int {
out := 0
for _, arg := range arguments {
if !arg.Indexed {
out++
}
}
return out
}
// NonIndexed returns the arguments with indexed arguments filtered out
// NonIndexed returns the arguments with indexed arguments filtered out.
func (arguments Arguments) NonIndexed() Arguments {
var ret []Argument
for _, arg := range arguments {
@ -82,217 +70,128 @@ func (arguments Arguments) NonIndexed() Arguments {
return ret
}
// isTuple returns true for non-atomic constructs, like (uint,uint) or uint[]
// isTuple returns true for non-atomic constructs, like (uint,uint) or uint[].
func (arguments Arguments) isTuple() bool {
return len(arguments) > 1
}
// Unpack performs the operation hexdata -> Go format
func (arguments Arguments) Unpack(v interface{}, data []byte) error {
// Unpack performs the operation hexdata -> Go format.
func (arguments Arguments) Unpack(data []byte) ([]interface{}, error) {
if len(data) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
} else {
return nil // Nothing to unmarshal, return
return nil, fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
}
// Nothing to unmarshal, return default variables
nonIndexedArgs := arguments.NonIndexed()
defaultVars := make([]interface{}, len(nonIndexedArgs))
for index, arg := range nonIndexedArgs {
defaultVars[index] = reflect.New(arg.Type.GetType())
}
return defaultVars, nil
}
// make sure the passed value is arguments pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
}
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
if arguments.isTuple() {
return arguments.unpackTuple(v, marshalledValues)
}
return arguments.unpackAtomic(v, marshalledValues[0])
return arguments.UnpackValues(data)
}
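
With the new signature, Unpack simply hands back the decoded value list. A minimal sketch (the argument name and payload are made up):

package main

import (
    "fmt"
    "math/big"

    "github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
    uint256T, _ := abi.NewType("uint256", "", nil)
    args := abi.Arguments{{Name: "amount", Type: uint256T}}

    // Round-trip: ABI-encode 42, then decode it with the new Unpack signature.
    data, err := args.Pack(big.NewInt(42))
    if err != nil {
        panic(err)
    }
    values, err := args.Unpack(data)
    if err != nil {
        panic(err)
    }
    fmt.Println(values[0].(*big.Int)) // 42
}
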
// UnpackIntoMap performs the operation hexdata -> mapping of argument name to argument value
// UnpackIntoMap performs the operation hexdata -> mapping of argument name to argument value.
func (arguments Arguments) UnpackIntoMap(v map[string]interface{}, data []byte) error {
if len(data) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
} else {
return nil // Nothing to unmarshal, return
}
}
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
return arguments.unpackIntoMap(v, marshalledValues)
}
// unpack sets the unmarshalled value to go format.
// Note the dst here must be settable.
func unpack(t *Type, dst interface{}, src interface{}) error {
var (
dstVal = reflect.ValueOf(dst).Elem()
srcVal = reflect.ValueOf(src)
)
tuple, typ := false, t
for {
if typ.T == SliceTy || typ.T == ArrayTy {
typ = typ.Elem
continue
}
tuple = typ.T == TupleTy
break
}
if !tuple {
return set(dstVal, srcVal)
}
// Dereferences interface or pointer wrapper
dstVal = indirectInterfaceOrPtr(dstVal)
switch t.T {
case TupleTy:
if dstVal.Kind() != reflect.Struct {
return fmt.Errorf("abi: invalid dst value for unpack, want struct, got %s", dstVal.Kind())
}
fieldmap, err := mapArgNamesToStructFields(t.TupleRawNames, dstVal)
if err != nil {
return err
}
for i, elem := range t.TupleElems {
fname := fieldmap[t.TupleRawNames[i]]
field := dstVal.FieldByName(fname)
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't found in the given value", t.TupleRawNames[i])
}
if err := unpack(elem, field.Addr().Interface(), srcVal.Field(i).Interface()); err != nil {
return err
}
}
return nil
case SliceTy:
if dstVal.Kind() != reflect.Slice {
return fmt.Errorf("abi: invalid dst value for unpack, want slice, got %s", dstVal.Kind())
}
slice := reflect.MakeSlice(dstVal.Type(), srcVal.Len(), srcVal.Len())
for i := 0; i < slice.Len(); i++ {
if err := unpack(t.Elem, slice.Index(i).Addr().Interface(), srcVal.Index(i).Interface()); err != nil {
return err
}
}
dstVal.Set(slice)
case ArrayTy:
if dstVal.Kind() != reflect.Array {
return fmt.Errorf("abi: invalid dst value for unpack, want array, got %s", dstVal.Kind())
}
array := reflect.New(dstVal.Type()).Elem()
for i := 0; i < array.Len(); i++ {
if err := unpack(t.Elem, array.Index(i).Addr().Interface(), srcVal.Index(i).Interface()); err != nil {
return err
}
}
dstVal.Set(array)
}
return nil
}
// unpackIntoMap unpacks marshalledValues into the provided map[string]interface{}
func (arguments Arguments) unpackIntoMap(v map[string]interface{}, marshalledValues []interface{}) error {
// Make sure map is not nil
if v == nil {
return fmt.Errorf("abi: cannot unpack into a nil map")
}
if len(data) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
}
return nil // Nothing to unmarshal, return
}
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
for i, arg := range arguments.NonIndexed() {
v[arg.Name] = marshalledValues[i]
}
return nil
}
// unpackAtomic unpacks ( hexdata -> go ) a single value
func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues interface{}) error {
if arguments.LengthNonIndexed() == 0 {
return nil
// Copy performs the operation go format -> provided struct.
func (arguments Arguments) Copy(v interface{}, values []interface{}) error {
// make sure the passed value is arguments pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
}
argument := arguments.NonIndexed()[0]
elem := reflect.ValueOf(v).Elem()
if elem.Kind() == reflect.Struct && argument.Type.T != TupleTy {
fieldmap, err := mapArgNamesToStructFields([]string{argument.Name}, elem)
if err != nil {
return err
if len(values) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to copy no values while %d arguments are expected", len(arguments))
}
field := elem.FieldByName(fieldmap[argument.Name])
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't be found in the given value", argument.Name)
}
return unpack(&argument.Type, field.Addr().Interface(), marshalledValues)
return nil // Nothing to copy, return
}
return unpack(&argument.Type, elem.Addr().Interface(), marshalledValues)
if arguments.isTuple() {
return arguments.copyTuple(v, values)
}
return arguments.copyAtomic(v, values[0])
}
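
Copy is what restores the old struct-filling behaviour on top of the value list returned by Unpack. A sketch with illustrative argument and field names:

package main

import (
    "fmt"
    "math/big"

    "github.com/ethereum/go-ethereum/accounts/abi"
    "github.com/ethereum/go-ethereum/common"
)

func main() {
    uint256T, _ := abi.NewType("uint256", "", nil)
    addressT, _ := abi.NewType("address", "", nil)
    args := abi.Arguments{
        {Name: "amount", Type: uint256T},
        {Name: "to", Type: addressT},
    }
    data, err := args.Pack(big.NewInt(7), common.HexToAddress("0x01"))
    if err != nil {
        panic(err)
    }
    // Decode into a plain value list first, then copy the values into a struct.
    values, err := args.Unpack(data)
    if err != nil {
        panic(err)
    }
    var out struct {
        Amount *big.Int
        To     common.Address
    }
    if err := args.Copy(&out, values); err != nil {
        panic(err)
    }
    fmt.Println(out.Amount, out.To.Hex()) // 7 0x0000000000000000000000000000000000000001
}
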
// unpackTuple unpacks ( hexdata -> go ) a batch of values.
func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interface{}) error {
var (
value = reflect.ValueOf(v).Elem()
typ = value.Type()
kind = value.Kind()
)
if err := requireUnpackKind(value, typ, kind, arguments); err != nil {
return err
}
// copyAtomic copies ( go format -> provided value ) a single value
func (arguments Arguments) copyAtomic(v interface{}, marshalledValues interface{}) error {
dst := reflect.ValueOf(v).Elem()
src := reflect.ValueOf(marshalledValues)
// If the interface is a struct, get of abi->struct_field mapping
var abi2struct map[string]string
if kind == reflect.Struct {
var (
argNames []string
err error
)
for _, arg := range arguments.NonIndexed() {
argNames = append(argNames, arg.Name)
if dst.Kind() == reflect.Struct && src.Kind() != reflect.Struct {
return set(dst.Field(0), src)
}
return set(dst, src)
}
// copyTuple copies a batch of values from marshalledValues to v.
func (arguments Arguments) copyTuple(v interface{}, marshalledValues []interface{}) error {
value := reflect.ValueOf(v).Elem()
nonIndexedArgs := arguments.NonIndexed()
switch value.Kind() {
case reflect.Struct:
argNames := make([]string, len(nonIndexedArgs))
for i, arg := range nonIndexedArgs {
argNames[i] = arg.Name
}
abi2struct, err = mapArgNamesToStructFields(argNames, value)
var err error
abi2struct, err := mapArgNamesToStructFields(argNames, value)
if err != nil {
return err
}
}
for i, arg := range arguments.NonIndexed() {
switch kind {
case reflect.Struct:
for i, arg := range nonIndexedArgs {
field := value.FieldByName(abi2struct[arg.Name])
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't be found in the given value", arg.Name)
}
if err := unpack(&arg.Type, field.Addr().Interface(), marshalledValues[i]); err != nil {
if err := set(field, reflect.ValueOf(marshalledValues[i])); err != nil {
return err
}
case reflect.Slice, reflect.Array:
if value.Len() < i {
return fmt.Errorf("abi: insufficient number of arguments for unpack, want %d, got %d", len(arguments), value.Len())
}
v := value.Index(i)
if err := requireAssignable(v, reflect.ValueOf(marshalledValues[i])); err != nil {
return err
}
if err := unpack(&arg.Type, v.Addr().Interface(), marshalledValues[i]); err != nil {
return err
}
default:
return fmt.Errorf("abi:[2] cannot unmarshal tuple in to %v", typ)
}
case reflect.Slice, reflect.Array:
if value.Len() < len(marshalledValues) {
return fmt.Errorf("abi: insufficient number of arguments for unpack, want %d, got %d", len(arguments), value.Len())
}
for i := range nonIndexedArgs {
if err := set(value.Index(i), reflect.ValueOf(marshalledValues[i])); err != nil {
return err
}
}
default:
return fmt.Errorf("abi:[2] cannot unmarshal tuple in to %v", value.Type())
}
return nil
}
// UnpackValues can be used to unpack ABI-encoded hexdata according to the ABI-specification,
// without supplying a struct to unpack into. Instead, this method returns a list containing the
// values. An atomic argument will be a list with one element.
func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
retval := make([]interface{}, 0, arguments.LengthNonIndexed())
nonIndexedArgs := arguments.NonIndexed()
retval := make([]interface{}, 0, len(nonIndexedArgs))
virtualArgs := 0
for index, arg := range arguments.NonIndexed() {
marshalledValue, err := ToGoType((index+virtualArgs)*32, arg.Type, data)
for index, arg := range nonIndexedArgs {
marshalledValue, err := toGoType((index+virtualArgs)*32, arg.Type, data)
if arg.Type.T == ArrayTy && !isDynamicType(arg.Type) {
// If we have a static array, like [3]uint256, these are coded as
// just like uint256,uint256,uint256.
@ -318,18 +217,18 @@ func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
return retval, nil
}
// PackValues performs the operation Go format -> Hexdata
// It is the semantic opposite of UnpackValues
// PackValues performs the operation Go format -> Hexdata.
// It is the semantic opposite of UnpackValues.
func (arguments Arguments) PackValues(args []interface{}) ([]byte, error) {
return arguments.Pack(args...)
}
// Pack performs the operation Go format -> Hexdata
// Pack performs the operation Go format -> Hexdata.
func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
// Make sure arguments match up and pack them
abiArgs := arguments
if len(args) != len(abiArgs) {
return nil, fmt.Errorf("argument count mismatch: %d for %d", len(args), len(abiArgs))
return nil, fmt.Errorf("argument count mismatch: got %d for %d", len(args), len(abiArgs))
}
// variable input is the output appended at the end of packed
// output. This is used for strings and bytes types input.


@ -21,6 +21,7 @@ import (
"errors"
"io"
"io/ioutil"
"math/big"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/external"
@ -28,11 +29,21 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
)
// ErrNoChainID is returned whenever the user fails to specify a chain id.
var ErrNoChainID = errors.New("no chain id specified")
// ErrNotAuthorized is returned when an account is not properly unlocked.
var ErrNotAuthorized = errors.New("not authorized to sign this account")
// NewTransactor is a utility method to easily create a transaction signer from
// an encrypted json key stream and the associated passphrase.
//
// Deprecated: Use NewTransactorWithChainID instead.
func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
log.Warn("WARNING: NewTransactor has been deprecated in favour of NewTransactorWithChainID")
json, err := ioutil.ReadAll(keyin)
if err != nil {
return nil, err
@ -45,13 +56,17 @@ func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
}
// NewKeyStoreTransactor is a utility method to easily create a transaction signer from
// an decrypted key from a keystore
// a decrypted key from a keystore.
//
// Deprecated: Use NewKeyStoreTransactorWithChainID instead.
func NewKeyStoreTransactor(keystore *keystore.KeyStore, account accounts.Account) (*TransactOpts, error) {
log.Warn("WARNING: NewKeyStoreTransactor has been deprecated in favour of NewTransactorWithChainID")
signer := types.HomesteadSigner{}
return &TransactOpts{
From: account.Address,
Signer: func(signer types.Signer, address common.Address, tx *types.Transaction) (*types.Transaction, error) {
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, errors.New("not authorized to sign this account")
return nil, ErrNotAuthorized
}
signature, err := keystore.SignHash(account, signer.Hash(tx).Bytes())
if err != nil {
@ -64,13 +79,17 @@ func NewKeyStoreTransactor(keystore *keystore.KeyStore, account accounts.Account
// NewKeyedTransactor is a utility method to easily create a transaction signer
// from a single private key.
//
// Deprecated: Use NewKeyedTransactorWithChainID instead.
func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
log.Warn("WARNING: NewKeyedTransactor has been deprecated in favour of NewKeyedTransactorWithChainID")
keyAddr := crypto.PubkeyToAddress(key.PublicKey)
signer := types.HomesteadSigner{}
return &TransactOpts{
From: keyAddr,
Signer: func(signer types.Signer, address common.Address, tx *types.Transaction) (*types.Transaction, error) {
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != keyAddr {
return nil, errors.New("not authorized to sign this account")
return nil, ErrNotAuthorized
}
signature, err := crypto.Sign(signer.Hash(tx).Bytes(), key)
if err != nil {
@ -81,14 +100,73 @@ func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
}
}
// NewTransactorWithChainID is a utility method to easily create a transaction signer from
// an encrypted json key stream and the associated passphrase.
func NewTransactorWithChainID(keyin io.Reader, passphrase string, chainID *big.Int) (*TransactOpts, error) {
json, err := ioutil.ReadAll(keyin)
if err != nil {
return nil, err
}
key, err := keystore.DecryptKey(json, passphrase)
if err != nil {
return nil, err
}
return NewKeyedTransactorWithChainID(key.PrivateKey, chainID)
}
// NewKeyStoreTransactorWithChainID is a utility method to easily create a transaction signer from
// a decrypted key from a keystore.
func NewKeyStoreTransactorWithChainID(keystore *keystore.KeyStore, account accounts.Account, chainID *big.Int) (*TransactOpts, error) {
if chainID == nil {
return nil, ErrNoChainID
}
signer := types.LatestSignerForChainID(chainID)
return &TransactOpts{
From: account.Address,
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, ErrNotAuthorized
}
signature, err := keystore.SignHash(account, signer.Hash(tx).Bytes())
if err != nil {
return nil, err
}
return tx.WithSignature(signer, signature)
},
}, nil
}
// NewKeyedTransactorWithChainID is a utility method to easily create a transaction signer
// from a single private key.
func NewKeyedTransactorWithChainID(key *ecdsa.PrivateKey, chainID *big.Int) (*TransactOpts, error) {
keyAddr := crypto.PubkeyToAddress(key.PublicKey)
if chainID == nil {
return nil, ErrNoChainID
}
signer := types.LatestSignerForChainID(chainID)
return &TransactOpts{
From: keyAddr,
Signer: func(address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != keyAddr {
return nil, ErrNotAuthorized
}
signature, err := crypto.Sign(signer.Hash(tx).Bytes(), key)
if err != nil {
return nil, err
}
return tx.WithSignature(signer, signature)
},
}, nil
}
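
A sketch of the chain-id-aware constructor in use; the key is freshly generated and 1337 is the chain id used by the simulated backend further down:

package main

import (
    "fmt"
    "math/big"

    "github.com/ethereum/go-ethereum/accounts/abi/bind"
    "github.com/ethereum/go-ethereum/crypto"
)

func main() {
    key, err := crypto.GenerateKey()
    if err != nil {
        panic(err)
    }
    // The chain id is now required up front so the signer can apply EIP-155 replay protection.
    opts, err := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
    if err != nil {
        panic(err)
    }
    fmt.Println(opts.From.Hex()) // address derived from the generated key
}
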
// NewClefTransactor is a utility method to easily create a transaction signer
// with a clef backend.
func NewClefTransactor(clef *external.ExternalSigner, account accounts.Account) *TransactOpts {
return &TransactOpts{
From: account.Address,
Signer: func(signer types.Signer, address common.Address, transaction *types.Transaction) (*types.Transaction, error) {
Signer: func(address common.Address, transaction *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, errors.New("not authorized to sign this account")
return nil, ErrNotAuthorized
}
return clef.SignTx(account, transaction, nil) // Clef enforces its own chain id
},


@ -41,7 +41,7 @@ var (
ErrNoCodeAfterDeploy = errors.New("no contract code after deployment")
)
// ContractCaller defines the methods needed to allow operating with contract on a read
// ContractCaller defines the methods needed to allow operating with a contract on a read
// only basis.
type ContractCaller interface {
// CodeAt returns the code of the given account. This is needed to differentiate
@ -62,8 +62,8 @@ type PendingContractCaller interface {
PendingCallContract(ctx context.Context, call ethereum.CallMsg) ([]byte, error)
}
// ContractTransactor defines the methods needed to allow operating with contract
// on a write only basis. Beside the transacting method, the remainder are helpers
// ContractTransactor defines the methods needed to allow operating with a contract
// on a write only basis. Besides the transacting method, the remainder are helpers
// used when the user does not provide some needed values, but rather leaves it up
// to the transactor to decide.
type ContractTransactor interface {


@ -25,8 +25,10 @@ import (
"time"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core"
@ -38,22 +40,22 @@ import (
"github.com/ethereum/go-ethereum/eth/filters"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
)
// This nil assignment ensures compile time that SimulatedBackend implements bind.ContractBackend.
// This nil assignment ensures at compile time that SimulatedBackend implements bind.ContractBackend.
var _ bind.ContractBackend = (*SimulatedBackend)(nil)
var (
errBlockNumberUnsupported = errors.New("simulatedBackend cannot access blocks other than the latest block")
errBlockDoesNotExist = errors.New("block does not exist in blockchain")
errTransactionDoesNotExist = errors.New("transaction does not exist")
errGasEstimationFailed = errors.New("gas required exceeds allowance or always failing transaction")
)
// SimulatedBackend implements bind.ContractBackend, simulating a blockchain in
// the background. Its main purpose is to allow easily testing contract bindings.
// the background. Its main purpose is to allow for easy testing of contract bindings.
// Simulated backend implements the following interfaces:
// ChainReader, ChainStateReader, ContractBackend, ContractCaller, ContractFilterer, ContractTransactor,
// DeployBackend, GasEstimator, GasPricer, LogFilterer, PendingContractCaller, TransactionReader, and TransactionSender
@ -63,7 +65,7 @@ type SimulatedBackend struct {
mu sync.Mutex
pendingBlock *types.Block // Currently pending block that will be imported on request
pendingState *state.StateDB // Currently pending state that will be the active on on request
pendingState *state.StateDB // Currently pending state that will be the active one on request
events *filters.EventSystem // Event system for filtering log events live
@ -72,10 +74,11 @@ type SimulatedBackend struct {
// NewSimulatedBackendWithDatabase creates a new binding backend based on the given database
// and uses a simulated blockchain for testing purposes.
// A simulated backend always uses chainID 1337.
func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
genesis := core.Genesis{Config: params.AllEthashProtocolChanges, GasLimit: gasLimit, Alloc: alloc}
genesis.MustCommit(database)
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{}, nil)
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{}, nil, nil)
backend := &SimulatedBackend{
database: database,
@ -89,6 +92,7 @@ func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.Genesis
// NewSimulatedBackend creates a new binding backend using a simulated blockchain
// for testing purposes.
// A simulated backend always uses chainID 1337.
func NewSimulatedBackend(alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
return NewSimulatedBackendWithDatabase(rawdb.NewMemoryDatabase(), alloc, gasLimit)
}
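
For completeness, a minimal simulated-backend setup built on the constructors above (a sketch; the funding amount and gas limit are arbitrary):

package main

import (
    "math/big"

    "github.com/ethereum/go-ethereum/accounts/abi/bind"
    "github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
    "github.com/ethereum/go-ethereum/core"
    "github.com/ethereum/go-ethereum/crypto"
)

func main() {
    key, _ := crypto.GenerateKey()
    auth, err := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337)) // chain id of the simulated backend
    if err != nil {
        panic(err)
    }
    // Pre-fund the transactor in the genesis allocation and start the in-memory chain.
    alloc := core.GenesisAlloc{auth.From: {Balance: big.NewInt(1000000000000000000)}}
    backend := backends.NewSimulatedBackend(alloc, 8000000)
    defer backend.Close()

    backend.Commit() // mine the pending block
}
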
@ -121,10 +125,9 @@ func (b *SimulatedBackend) Rollback() {
func (b *SimulatedBackend) rollback() {
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(int, *core.BlockGen) {})
statedb, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
b.pendingState, _ = state.New(b.pendingBlock.Root(), b.blockchain.StateCache(), nil)
}
// stateByBlockNumber retrieves a state by a given block number.
@ -132,11 +135,11 @@ func (b *SimulatedBackend) stateByBlockNumber(ctx context.Context, blockNumber *
if blockNumber == nil || blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) == 0 {
return b.blockchain.State()
}
block, err := b.BlockByNumber(ctx, blockNumber)
block, err := b.blockByNumberNoLock(ctx, blockNumber)
if err != nil {
return nil, err
}
return b.blockchain.StateAt(block.Hash())
return b.blockchain.StateAt(block.Root())
}
// CodeAt returns the code associated with a certain account in the blockchain.
@ -144,12 +147,12 @@ func (b *SimulatedBackend) CodeAt(ctx context.Context, contract common.Address,
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
return statedb.GetCode(contract), nil
return stateDB.GetCode(contract), nil
}
// BalanceAt returns the wei balance of a certain account in the blockchain.
@ -157,12 +160,12 @@ func (b *SimulatedBackend) BalanceAt(ctx context.Context, contract common.Addres
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
return statedb.GetBalance(contract), nil
return stateDB.GetBalance(contract), nil
}
// NonceAt returns the nonce of a certain account in the blockchain.
@ -170,12 +173,12 @@ func (b *SimulatedBackend) NonceAt(ctx context.Context, contract common.Address,
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return 0, err
}
return statedb.GetNonce(contract), nil
return stateDB.GetNonce(contract), nil
}
// StorageAt returns the value of key in the storage of an account in the blockchain.
@ -183,12 +186,12 @@ func (b *SimulatedBackend) StorageAt(ctx context.Context, contract common.Addres
b.mu.Lock()
defer b.mu.Unlock()
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
stateDB, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
val := statedb.GetState(contract, key)
val := stateDB.GetState(contract, key)
return val[:], nil
}
@ -220,7 +223,7 @@ func (b *SimulatedBackend) TransactionByHash(ctx context.Context, txHash common.
return nil, false, ethereum.NotFound
}
// BlockByHash retrieves a block based on the block hash
// BlockByHash retrieves a block based on the block hash.
func (b *SimulatedBackend) BlockByHash(ctx context.Context, hash common.Hash) (*types.Block, error) {
b.mu.Lock()
defer b.mu.Unlock()
@ -243,6 +246,12 @@ func (b *SimulatedBackend) BlockByNumber(ctx context.Context, number *big.Int) (
b.mu.Lock()
defer b.mu.Unlock()
return b.blockByNumberNoLock(ctx, number)
}
// blockByNumberNoLock retrieves a block from the database by number, caching it
// (associated with its hash) if found, without taking the lock.
func (b *SimulatedBackend) blockByNumberNoLock(ctx context.Context, number *big.Int) (*types.Block, error) {
if number == nil || number.Cmp(b.pendingBlock.Number()) == 0 {
return b.blockchain.CurrentBlock(), nil
}
@ -285,7 +294,7 @@ func (b *SimulatedBackend) HeaderByNumber(ctx context.Context, block *big.Int) (
return b.blockchain.GetHeaderByNumber(uint64(block.Int64())), nil
}
// TransactionCount returns the number of transactions in a given block
// TransactionCount returns the number of transactions in a given block.
func (b *SimulatedBackend) TransactionCount(ctx context.Context, blockHash common.Hash) (uint, error) {
b.mu.Lock()
defer b.mu.Unlock()
@ -302,7 +311,7 @@ func (b *SimulatedBackend) TransactionCount(ctx context.Context, blockHash commo
return uint(block.Transactions().Len()), nil
}
// TransactionInBlock returns the transaction for a specific block at a specific index
// TransactionInBlock returns the transaction for a specific block at a specific index.
func (b *SimulatedBackend) TransactionInBlock(ctx context.Context, blockHash common.Hash, index uint) (*types.Transaction, error) {
b.mu.Lock()
defer b.mu.Unlock()
@ -337,6 +346,36 @@ func (b *SimulatedBackend) PendingCodeAt(ctx context.Context, contract common.Ad
return b.pendingState.GetCode(contract), nil
}
func newRevertError(result *core.ExecutionResult) *revertError {
reason, errUnpack := abi.UnpackRevert(result.Revert())
err := errors.New("execution reverted")
if errUnpack == nil {
err = fmt.Errorf("execution reverted: %v", reason)
}
return &revertError{
error: err,
reason: hexutil.Encode(result.Revert()),
}
}
// revertError is an API error that encompasses an EVM revert with JSON error
// code and a binary data blob.
type revertError struct {
error
reason string // revert reason hex encoded
}
// ErrorCode returns the JSON error code for a revert.
// See: https://github.com/ethereum/wiki/wiki/JSON-RPC-Error-Codes-Improvement-Proposal
func (e *revertError) ErrorCode() int {
return 3
}
// ErrorData returns the hex encoded revert reason.
func (e *revertError) ErrorData() interface{} {
return e.reason
}
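
A caller can recover the reason by probing the returned error for the ErrorData hook; the helper below is hypothetical (not part of the changeset) and relies only on the method defined above:

package revertcheck

import (
    "github.com/ethereum/go-ethereum/accounts/abi"
    "github.com/ethereum/go-ethereum/common"
)

// RevertReason is a hypothetical helper that tries to extract a Solidity revert
// reason from an error returned by CallContract or EstimateGas. The anonymous
// interface mirrors the ErrorData method of revertError above.
func RevertReason(err error) (string, bool) {
    dataErr, ok := err.(interface{ ErrorData() interface{} })
    if !ok {
        return "", false
    }
    hexData, ok := dataErr.ErrorData().(string)
    if !ok {
        return "", false
    }
    reason, uerr := abi.UnpackRevert(common.FromHex(hexData))
    return reason, uerr == nil
}
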
// CallContract executes a contract call.
func (b *SimulatedBackend) CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) {
b.mu.Lock()
@ -345,12 +384,19 @@ func (b *SimulatedBackend) CallContract(ctx context.Context, call ethereum.CallM
if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) != 0 {
return nil, errBlockNumberUnsupported
}
state, err := b.blockchain.State()
stateDB, err := b.blockchain.State()
if err != nil {
return nil, err
}
rval, _, _, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), state)
return rval, err
res, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), stateDB)
if err != nil {
return nil, err
}
// If the result contains a revert reason, try to unpack and return it.
if len(res.Revert()) > 0 {
return nil, newRevertError(res)
}
return res.Return(), res.Err
}
// PendingCallContract executes a contract call on the pending state.
@ -359,8 +405,15 @@ func (b *SimulatedBackend) PendingCallContract(ctx context.Context, call ethereu
defer b.mu.Unlock()
defer b.pendingState.RevertToSnapshot(b.pendingState.Snapshot())
rval, _, _, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
return rval, err
res, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
if err != nil {
return nil, err
}
// If the result contains a revert reason, try to unpack and return it.
if len(res.Revert()) > 0 {
return nil, newRevertError(res)
}
return res.Return(), res.Err
}
// PendingNonceAt implements PendingStateReader.PendingNonceAt, retrieving
@ -395,25 +448,57 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
} else {
hi = b.pendingBlock.GasLimit()
}
// Recap the highest gas allowance with the account's balance.
if call.GasPrice != nil && call.GasPrice.BitLen() != 0 {
balance := b.pendingState.GetBalance(call.From) // from can't be nil
available := new(big.Int).Set(balance)
if call.Value != nil {
if call.Value.Cmp(available) >= 0 {
return 0, errors.New("insufficient funds for transfer")
}
available.Sub(available, call.Value)
}
allowance := new(big.Int).Div(available, call.GasPrice)
if allowance.IsUint64() && hi > allowance.Uint64() {
transfer := call.Value
if transfer == nil {
transfer = new(big.Int)
}
log.Warn("Gas estimation capped by limited funds", "original", hi, "balance", balance,
"sent", transfer, "gasprice", call.GasPrice, "fundable", allowance)
hi = allowance.Uint64()
}
}
cap = hi
// Create a helper to check if a gas allowance results in an executable transaction
executable := func(gas uint64) bool {
executable := func(gas uint64) (bool, *core.ExecutionResult, error) {
call.Gas = gas
snapshot := b.pendingState.Snapshot()
_, _, failed, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
res, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
b.pendingState.RevertToSnapshot(snapshot)
if err != nil || failed {
return false
if err != nil {
if errors.Is(err, core.ErrIntrinsicGas) {
return true, nil, nil // Special case, raise gas limit
}
return true, nil, err // Bail out
}
return true
return res.Failed(), res, nil
}
// Execute the binary search and home in on an executable gas limit
for lo+1 < hi {
mid := (hi + lo) / 2
if !executable(mid) {
failed, _, err := executable(mid)
// If the error is not nil (consensus error), it means the provided message
// call or transaction will never be accepted no matter how much gas is
// assigned. Return the error directly instead of searching any further.
if err != nil {
return 0, err
}
if failed {
lo = mid
} else {
hi = mid
@ -421,8 +506,19 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
}
// Reject the transaction as invalid if it still fails at the highest allowance
if hi == cap {
if !executable(hi) {
return 0, errGasEstimationFailed
failed, result, err := executable(hi)
if err != nil {
return 0, err
}
if failed {
if result != nil && result.Err != vm.ErrOutOfGas {
if len(result.Revert()) > 0 {
return 0, newRevertError(result)
}
return 0, result.Err
}
// Otherwise, the specified gas cap is too low
return 0, fmt.Errorf("gas required exceeds allowance (%d)", cap)
}
}
return hi, nil
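Taken together, these hunks implement fee-capped binary-search gas estimation: the upper bound is first capped by what the sender can afford at the given gas price, then the smallest passing gas value is searched for. A standalone restatement of that strategy with the execution step abstracted into a callback; all names here are illustrative, not part of the change:

package example

import (
	"errors"
	"math/big"
)

// estimate binary-searches the lowest gas value in (lo, hi] for which the
// executable callback reports success, after capping hi by the sender's
// spendable balance: (balance - value) / gasPrice.
func estimate(lo, hi uint64, balance, value, gasPrice *big.Int,
	executable func(gas uint64) (failed bool, err error)) (uint64, error) {
	if gasPrice != nil && gasPrice.Sign() != 0 {
		available := new(big.Int).Set(balance)
		if value != nil {
			available.Sub(available, value)
		}
		allowance := new(big.Int).Div(available, gasPrice)
		if allowance.IsUint64() && hi > allowance.Uint64() {
			hi = allowance.Uint64()
		}
	}
	cap := hi
	for lo+1 < hi {
		mid := (hi + lo) / 2
		failed, err := executable(mid)
		if err != nil {
			return 0, err // consensus error: no amount of gas can help
		}
		if failed {
			lo = mid
		} else {
			hi = mid
		}
	}
	// If even the cap fails, the call cannot succeed within the allowance.
	if hi == cap {
		failed, err := executable(hi)
		if err != nil {
			return 0, err
		}
		if failed {
			return 0, errors.New("gas required exceeds allowance")
		}
	}
	return hi, nil
}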
@ -430,7 +526,7 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
// callContract implements common code between normal and pending contract calls.
// The state is modified during execution; make sure to copy it if necessary.
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, statedb *state.StateDB) ([]byte, uint64, bool, error) {
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, stateDB *state.StateDB) (*core.ExecutionResult, error) {
// Ensure message is initialized properly.
if call.GasPrice == nil {
call.GasPrice = big.NewInt(1)
@ -442,18 +538,19 @@ func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallM
call.Value = new(big.Int)
}
// Set infinite balance to the fake caller account.
from := statedb.GetOrNewStateObject(call.From)
from := stateDB.GetOrNewStateObject(call.From)
from.SetBalance(math.MaxBig256)
// Execute the call.
msg := callmsg{call}
msg := callMsg{call}
evmContext := core.NewEVMContext(msg, block.Header(), b.blockchain, nil)
txContext := core.NewEVMTxContext(msg)
evmContext := core.NewEVMBlockContext(block.Header(), b.blockchain, nil)
// Create a new environment which holds all relevant information
// about the transaction and calling mechanisms.
vmenv := vm.NewEVM(evmContext, statedb, b.config, vm.Config{})
gaspool := new(core.GasPool).AddGas(math.MaxUint64)
vmEnv := vm.NewEVM(evmContext, txContext, stateDB, b.config, vm.Config{})
gasPool := new(core.GasPool).AddGas(math.MaxUint64)
return core.NewStateTransition(vmenv, msg, gaspool).TransitionDb()
return core.NewStateTransition(vmEnv, msg, gasPool).TransitionDb()
}
// SendTransaction updates the pending block to include the given transaction.
@ -462,7 +559,10 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
b.mu.Lock()
defer b.mu.Unlock()
sender, err := types.Sender(types.NewEIP155Signer(b.config.ChainID), tx)
// Check transaction validity.
block := b.blockchain.CurrentBlock()
signer := types.MakeSigner(b.blockchain.Config(), block.Number())
sender, err := types.Sender(signer, tx)
if err != nil {
panic(fmt.Errorf("invalid transaction: %v", err))
}
@ -471,16 +571,17 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
panic(fmt.Errorf("invalid transaction nonce: got %d, want %d", tx.Nonce(), nonce))
}
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
// Include tx in chain.
blocks, _ := core.GenerateChain(b.config, block, ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTxWithChain(b.blockchain, tx)
}
block.AddTxWithChain(b.blockchain, tx)
})
statedb, _ := b.blockchain.State()
stateDB, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
b.pendingState, _ = state.New(b.pendingBlock.Root(), stateDB.Database(), nil)
return nil
}
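SendTransaction now derives the signer from the chain config at the current block instead of hard-coding the EIP-155 signer, so submitted transactions must be signed with a matching, chain-aware signer. A small sketch of producing such a transaction; the values are placeholders, and the simulated backend's config corresponds to params.AllEthashProtocolChanges:

package example

import (
	"crypto/ecdsa"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/params"
)

// signForSim signs a plain value transfer with the signer that MakeSigner
// selects for the given chain config at block 0.
func signForSim(key *ecdsa.PrivateKey, to common.Address) (*types.Transaction, error) {
	tx := types.NewTransaction(0, to, big.NewInt(1), params.TxGas, big.NewInt(1), nil)
	signer := types.MakeSigner(params.AllEthashProtocolChanges, common.Big0)
	return types.SignTx(tx, signer, key)
}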
@ -494,7 +595,7 @@ func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.Filter
// Block filter requested, construct a single-shot filter
filter = filters.NewBlockFilter(&filterBackend{b.database, b.blockchain}, *query.BlockHash, query.Addresses, query.Topics)
} else {
// Initialize unset filter boundaried to run from genesis to chain head
// Initialize unset filter boundaries to run from genesis to chain head
from := int64(0)
if query.FromBlock != nil {
from = query.FromBlock.Int64()
@ -512,8 +613,8 @@ func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.Filter
return nil, err
}
res := make([]types.Log, len(logs))
for i, log := range logs {
res[i] = *log
for i, nLog := range logs {
res[i] = *nLog
}
return res, nil
}
@ -534,9 +635,9 @@ func (b *SimulatedBackend) SubscribeFilterLogs(ctx context.Context, query ethere
for {
select {
case logs := <-sink:
for _, log := range logs {
for _, nlog := range logs {
select {
case ch <- *log:
case ch <- *nlog:
case err := <-sub.Err():
return err
case <-quit:
@ -552,7 +653,7 @@ func (b *SimulatedBackend) SubscribeFilterLogs(ctx context.Context, query ethere
}), nil
}
// SubscribeNewHead returns an event subscription for a new header
// SubscribeNewHead returns an event subscription for a new header.
func (b *SimulatedBackend) SubscribeNewHead(ctx context.Context, ch chan<- *types.Header) (ethereum.Subscription, error) {
// subscribe to a new head
sink := make(chan *types.Header)
@ -580,20 +681,22 @@ func (b *SimulatedBackend) SubscribeNewHead(ctx context.Context, ch chan<- *type
}
// AdjustTime adds a time shift to the simulated clock.
// It can only be called on empty blocks.
func (b *SimulatedBackend) AdjustTime(adjustment time.Duration) error {
b.mu.Lock()
defer b.mu.Unlock()
if len(b.pendingBlock.Transactions()) != 0 {
return errors.New("Could not adjust time on non-empty block")
}
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTx(tx)
}
block.OffsetTime(int64(adjustment.Seconds()))
})
statedb, _ := b.blockchain.State()
stateDB, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
b.pendingState, _ = state.New(b.pendingBlock.Root(), stateDB.Database(), nil)
return nil
}
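As the doc comment notes, AdjustTime only works while the pending block is empty; the usual pattern is to shift the clock and then seal a block so later blocks inherit the offset. A small usage sketch with illustrative names:

package example

import (
	"time"

	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
)

// fastForward shifts the simulated clock and commits a block carrying the
// adjusted timestamp. It fails if the pending block already holds transactions.
func fastForward(sim *backends.SimulatedBackend, d time.Duration) error {
	if err := sim.AdjustTime(d); err != nil {
		return err
	}
	sim.Commit()
	return nil
}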
@ -603,19 +706,20 @@ func (b *SimulatedBackend) Blockchain() *core.BlockChain {
return b.blockchain
}
// callmsg implements core.Message to allow passing it as a transaction simulator.
type callmsg struct {
// callMsg implements core.Message to allow passing it as a transaction simulator.
type callMsg struct {
ethereum.CallMsg
}
func (m callmsg) From() common.Address { return m.CallMsg.From }
func (m callmsg) Nonce() uint64 { return 0 }
func (m callmsg) CheckNonce() bool { return false }
func (m callmsg) To() *common.Address { return m.CallMsg.To }
func (m callmsg) GasPrice() *big.Int { return m.CallMsg.GasPrice }
func (m callmsg) Gas() uint64 { return m.CallMsg.Gas }
func (m callmsg) Value() *big.Int { return m.CallMsg.Value }
func (m callmsg) Data() []byte { return m.CallMsg.Data }
func (m callMsg) From() common.Address { return m.CallMsg.From }
func (m callMsg) Nonce() uint64 { return 0 }
func (m callMsg) CheckNonce() bool { return false }
func (m callMsg) To() *common.Address { return m.CallMsg.To }
func (m callMsg) GasPrice() *big.Int { return m.CallMsg.GasPrice }
func (m callMsg) Gas() uint64 { return m.CallMsg.Gas }
func (m callMsg) Value() *big.Int { return m.CallMsg.Value }
func (m callMsg) Data() []byte { return m.CallMsg.Data }
func (m callMsg) AccessList() types.AccessList { return m.CallMsg.AccessList }
// filterBackend implements filters.Backend to support filtering for logs without
// taking bloom-bits acceleration structures into account.

View File

@ -19,7 +19,9 @@ package backends
import (
"bytes"
"context"
"errors"
"math/big"
"reflect"
"strings"
"testing"
"time"
@ -37,7 +39,7 @@ import (
func TestSimulatedBackend(t *testing.T) {
var gasLimit uint64 = 8000029
key, _ := crypto.GenerateKey() // nolint: gosec
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
genAlloc := make(core.GenesisAlloc)
genAlloc[auth.From] = core.GenesisAccount{Balance: big.NewInt(9223372036854775807)}
@ -105,14 +107,18 @@ const deployedCode = `60806040526004361061003b576000357c010000000000000000000000
// expected return value contains "hello world"
var expectedReturn = []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 11, 104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
func simTestBackend(testAddr common.Address) *SimulatedBackend {
return NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
}
func TestNewSimulatedBackend(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
expectedBal := big.NewInt(10000000000)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: expectedBal},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
if sim.config != params.AllEthashProtocolChanges {
@ -123,8 +129,8 @@ func TestNewSimulatedBackend(t *testing.T) {
t.Errorf("expected sim blockchain config to equal params.AllEthashProtocolChanges, got %v", sim.config)
}
statedb, _ := sim.blockchain.State()
bal := statedb.GetBalance(testAddr)
stateDB, _ := sim.blockchain.State()
bal := stateDB.GetBalance(testAddr)
if bal.Cmp(expectedBal) != 0 {
t.Errorf("expected balance for test address not received. expected: %v actual: %v", expectedBal, bal)
}
@ -137,8 +143,7 @@ func TestSimulatedBackend_AdjustTime(t *testing.T) {
defer sim.Close()
prevTime := sim.pendingBlock.Time()
err := sim.AdjustTime(time.Second)
if err != nil {
if err := sim.AdjustTime(time.Second); err != nil {
t.Error(err)
}
newTime := sim.pendingBlock.Time()
@ -148,14 +153,48 @@ func TestSimulatedBackend_AdjustTime(t *testing.T) {
}
}
func TestNewSimulatedBackend_AdjustTimeFail(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
// Create tx and send
tx := types.NewTransaction(0, testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
signedTx, err := types.SignTx(tx, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
}
sim.SendTransaction(context.Background(), signedTx)
// AdjustTime should fail on non-empty block
if err := sim.AdjustTime(time.Second); err == nil {
t.Error("Expected adjust time to error on non-empty block")
}
sim.Commit()
prevTime := sim.pendingBlock.Time()
if err := sim.AdjustTime(time.Minute); err != nil {
t.Error(err)
}
newTime := sim.pendingBlock.Time()
if newTime-prevTime != uint64(time.Minute.Seconds()) {
t.Errorf("adjusted time not equal to a minute. prev: %v, new: %v", prevTime, newTime)
}
// Put a transaction after adjusting time
tx2 := types.NewTransaction(1, testAddr, big.NewInt(1000), params.TxGas, big.NewInt(1), nil)
signedTx2, err := types.SignTx(tx2, types.HomesteadSigner{}, testKey)
if err != nil {
t.Errorf("could not sign tx: %v", err)
}
sim.SendTransaction(context.Background(), signedTx2)
sim.Commit()
newTime = sim.pendingBlock.Time()
if newTime-prevTime >= uint64(time.Minute.Seconds()) {
t.Errorf("time adjusted, but shouldn't be: prev: %v, new: %v", prevTime, newTime)
}
}
func TestSimulatedBackend_BalanceAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
expectedBal := big.NewInt(10000000000)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: expectedBal},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -228,11 +267,7 @@ func TestSimulatedBackend_BlockByNumber(t *testing.T) {
func TestSimulatedBackend_NonceAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -267,16 +302,22 @@ func TestSimulatedBackend_NonceAt(t *testing.T) {
if newNonce != nonce+uint64(1) {
t.Errorf("received incorrect nonce. expected 1, got %v", nonce)
}
// Commit one more block
sim.Commit()
// Check that we can get data for an older block/state
newNonce, err = sim.NonceAt(bgCtx, testAddr, big.NewInt(1))
if err != nil {
t.Fatalf("could not get nonce for test addr: %v", err)
}
if newNonce != nonce+uint64(1) {
t.Fatalf("received incorrect nonce. expected 1, got %v", nonce)
}
}
func TestSimulatedBackend_SendTransaction(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -356,36 +397,194 @@ func TestSimulatedBackend_TransactionByHash(t *testing.T) {
}
func TestSimulatedBackend_EstimateGas(t *testing.T) {
sim := NewSimulatedBackend(
core.GenesisAlloc{}, 10000000,
)
/*
pragma solidity ^0.6.4;
contract GasEstimation {
function PureRevert() public { revert(); }
function Revert() public { revert("revert reason");}
function OOG() public { for (uint i = 0; ; i++) {}}
function Assert() public { assert(false);}
function Valid() public {}
}*/
const contractAbi = "[{\"inputs\":[],\"name\":\"Assert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"OOG\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"PureRevert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"Revert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"Valid\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}]"
const contractBin = "0x60806040523480156100115760006000fd5b50610017565b61016e806100266000396000f3fe60806040523480156100115760006000fd5b506004361061005c5760003560e01c806350f6fe3414610062578063aa8b1d301461006c578063b9b046f914610076578063d8b9839114610080578063e09fface1461008a5761005c565b60006000fd5b61006a610094565b005b6100746100ad565b005b61007e6100b5565b005b6100886100c2565b005b610092610135565b005b6000600090505b5b808060010191505061009b565b505b565b60006000fd5b565b600015156100bf57fe5b5b565b6040517f08c379a000000000000000000000000000000000000000000000000000000000815260040180806020018281038252600d8152602001807f72657665727420726561736f6e0000000000000000000000000000000000000081526020015060200191505060405180910390fd5b565b5b56fea2646970667358221220345bbcbb1a5ecf22b53a78eaebf95f8ee0eceff6d10d4b9643495084d2ec934a64736f6c63430006040033"
key, _ := crypto.GenerateKey()
addr := crypto.PubkeyToAddress(key.PublicKey)
opts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(params.Ether)}}, 10000000)
defer sim.Close()
bgCtx := context.Background()
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
gas, err := sim.EstimateGas(bgCtx, ethereum.CallMsg{
From: testAddr,
To: &testAddr,
Value: big.NewInt(1000),
Data: []byte{},
})
if err != nil {
t.Errorf("could not estimate gas: %v", err)
parsed, _ := abi.JSON(strings.NewReader(contractAbi))
contractAddr, _, _, _ := bind.DeployContract(opts, parsed, common.FromHex(contractBin), sim)
sim.Commit()
var cases = []struct {
name string
message ethereum.CallMsg
expect uint64
expectError error
expectData interface{}
}{
{"plain transfer(valid)", ethereum.CallMsg{
From: addr,
To: &addr,
Gas: 0,
GasPrice: big.NewInt(0),
Value: big.NewInt(1),
Data: nil,
}, params.TxGas, nil, nil},
{"plain transfer(invalid)", ethereum.CallMsg{
From: addr,
To: &contractAddr,
Gas: 0,
GasPrice: big.NewInt(0),
Value: big.NewInt(1),
Data: nil,
}, 0, errors.New("execution reverted"), nil},
{"Revert", ethereum.CallMsg{
From: addr,
To: &contractAddr,
Gas: 0,
GasPrice: big.NewInt(0),
Value: nil,
Data: common.Hex2Bytes("d8b98391"),
}, 0, errors.New("execution reverted: revert reason"), "0x08c379a00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000d72657665727420726561736f6e00000000000000000000000000000000000000"},
{"PureRevert", ethereum.CallMsg{
From: addr,
To: &contractAddr,
Gas: 0,
GasPrice: big.NewInt(0),
Value: nil,
Data: common.Hex2Bytes("aa8b1d30"),
}, 0, errors.New("execution reverted"), nil},
{"OOG", ethereum.CallMsg{
From: addr,
To: &contractAddr,
Gas: 100000,
GasPrice: big.NewInt(0),
Value: nil,
Data: common.Hex2Bytes("50f6fe34"),
}, 0, errors.New("gas required exceeds allowance (100000)"), nil},
{"Assert", ethereum.CallMsg{
From: addr,
To: &contractAddr,
Gas: 100000,
GasPrice: big.NewInt(0),
Value: nil,
Data: common.Hex2Bytes("b9b046f9"),
}, 0, errors.New("invalid opcode: opcode 0xfe not defined"), nil},
{"Valid", ethereum.CallMsg{
From: addr,
To: &contractAddr,
Gas: 100000,
GasPrice: big.NewInt(0),
Value: nil,
Data: common.Hex2Bytes("e09fface"),
}, 21275, nil, nil},
}
for _, c := range cases {
got, err := sim.EstimateGas(context.Background(), c.message)
if c.expectError != nil {
if err == nil {
t.Fatalf("Expect error, got nil")
}
if c.expectError.Error() != err.Error() {
t.Fatalf("Expect error, want %v, got %v", c.expectError, err)
}
if c.expectData != nil {
if err, ok := err.(*revertError); !ok {
t.Fatalf("Expect revert error, got %T", err)
} else if !reflect.DeepEqual(err.ErrorData(), c.expectData) {
t.Fatalf("Error data mismatch, want %v, got %v", c.expectData, err.ErrorData())
}
}
continue
}
if got != c.expect {
t.Fatalf("Gas estimation mismatch, want %d, got %d", c.expect, got)
}
}
}
if gas != params.TxGas {
t.Errorf("expected 21000 gas cost for a transaction got %v", gas)
func TestSimulatedBackend_EstimateGasWithPrice(t *testing.T) {
key, _ := crypto.GenerateKey()
addr := crypto.PubkeyToAddress(key.PublicKey)
sim := NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(params.Ether*2 + 2e17)}}, 10000000)
defer sim.Close()
recipient := common.HexToAddress("deadbeef")
var cases = []struct {
name string
message ethereum.CallMsg
expect uint64
expectError error
}{
{"EstimateWithoutPrice", ethereum.CallMsg{
From: addr,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(0),
Value: big.NewInt(1000),
Data: nil,
}, 21000, nil},
{"EstimateWithPrice", ethereum.CallMsg{
From: addr,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(1000),
Value: big.NewInt(1000),
Data: nil,
}, 21000, nil},
{"EstimateWithVeryHighPrice", ethereum.CallMsg{
From: addr,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(1e14), // gascost = 2.1ether
Value: big.NewInt(1e17), // the remaining balance for fee is 2.1ether
Data: nil,
}, 21000, nil},
{"EstimateWithSuperhighPrice", ethereum.CallMsg{
From: addr,
To: &recipient,
Gas: 0,
GasPrice: big.NewInt(2e14), // gascost = 4.2ether
Value: big.NewInt(1000),
Data: nil,
}, 21000, errors.New("gas required exceeds allowance (10999)")}, // 10999=(2.2ether-1000wei)/(2e14)
}
for _, c := range cases {
got, err := sim.EstimateGas(context.Background(), c.message)
if c.expectError != nil {
if err == nil {
t.Fatalf("Expect error, got nil")
}
if c.expectError.Error() != err.Error() {
t.Fatalf("Expect error, want %v, got %v", c.expectError, err)
}
continue
}
if got != c.expect {
t.Fatalf("Gas estimation mismatch, want %d, got %d", c.expect, got)
}
}
}
func TestSimulatedBackend_HeaderByHash(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -406,11 +605,7 @@ func TestSimulatedBackend_HeaderByHash(t *testing.T) {
func TestSimulatedBackend_HeaderByNumber(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -457,11 +652,7 @@ func TestSimulatedBackend_HeaderByNumber(t *testing.T) {
func TestSimulatedBackend_TransactionCount(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
currentBlock, err := sim.BlockByNumber(bgCtx, nil)
@ -511,11 +702,7 @@ func TestSimulatedBackend_TransactionCount(t *testing.T) {
func TestSimulatedBackend_TransactionInBlock(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -578,11 +765,7 @@ func TestSimulatedBackend_TransactionInBlock(t *testing.T) {
func TestSimulatedBackend_PendingNonceAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -644,11 +827,7 @@ func TestSimulatedBackend_PendingNonceAt(t *testing.T) {
func TestSimulatedBackend_TransactionReceipt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
}, 10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -694,12 +873,7 @@ func TestSimulatedBackend_SuggestGasPrice(t *testing.T) {
func TestSimulatedBackend_PendingCodeAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
},
10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
code, err := sim.CodeAt(bgCtx, testAddr, nil)
@ -714,7 +888,7 @@ func TestSimulatedBackend_PendingCodeAt(t *testing.T) {
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
auth := bind.NewKeyedTransactor(testKey)
auth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
contractAddr, tx, contract, err := bind.DeployContract(auth, parsed, common.FromHex(abiBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v tx: %v contract: %v", err, tx, contract)
@ -735,12 +909,7 @@ func TestSimulatedBackend_PendingCodeAt(t *testing.T) {
func TestSimulatedBackend_CodeAt(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
},
10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
code, err := sim.CodeAt(bgCtx, testAddr, nil)
@ -755,7 +924,7 @@ func TestSimulatedBackend_CodeAt(t *testing.T) {
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
auth := bind.NewKeyedTransactor(testKey)
auth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
contractAddr, tx, contract, err := bind.DeployContract(auth, parsed, common.FromHex(abiBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v tx: %v contract: %v", err, tx, contract)
@ -779,12 +948,7 @@ func TestSimulatedBackend_CodeAt(t *testing.T) {
// receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
func TestSimulatedBackend_PendingAndCallContract(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := NewSimulatedBackend(
core.GenesisAlloc{
testAddr: {Balance: big.NewInt(10000000000)},
},
10000000,
)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
@ -792,7 +956,7 @@ func TestSimulatedBackend_PendingAndCallContract(t *testing.T) {
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
contractAuth := bind.NewKeyedTransactor(testKey)
contractAuth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
addr, _, _, err := bind.DeployContract(contractAuth, parsed, common.FromHex(abiBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v", err)
@ -800,7 +964,7 @@ func TestSimulatedBackend_PendingAndCallContract(t *testing.T) {
input, err := parsed.Pack("receive", []byte("X"))
if err != nil {
t.Errorf("could pack receive function on contract: %v", err)
t.Errorf("could not pack receive function on contract: %v", err)
}
// make sure you can call the contract in pending state
@ -840,3 +1004,113 @@ func TestSimulatedBackend_PendingAndCallContract(t *testing.T) {
t.Errorf("response from calling contract was expected to be 'hello world' instead received %v", string(res))
}
}
// This test is based on the following contract:
/*
contract Reverter {
function revertString() public pure{
require(false, "some error");
}
function revertNoString() public pure {
require(false, "");
}
function revertASM() public pure {
assembly {
revert(0x0, 0x0)
}
}
function noRevert() public pure {
assembly {
// Assembles something that looks like require(false, "some error") but is not reverted
mstore(0x0, 0x08c379a000000000000000000000000000000000000000000000000000000000)
mstore(0x4, 0x0000000000000000000000000000000000000000000000000000000000000020)
mstore(0x24, 0x000000000000000000000000000000000000000000000000000000000000000a)
mstore(0x44, 0x736f6d65206572726f7200000000000000000000000000000000000000000000)
return(0x0, 0x64)
}
}
}*/
func TestSimulatedBackend_CallContractRevert(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
bgCtx := context.Background()
reverterABI := `[{"inputs": [],"name": "noRevert","outputs": [],"stateMutability": "pure","type": "function"},{"inputs": [],"name": "revertASM","outputs": [],"stateMutability": "pure","type": "function"},{"inputs": [],"name": "revertNoString","outputs": [],"stateMutability": "pure","type": "function"},{"inputs": [],"name": "revertString","outputs": [],"stateMutability": "pure","type": "function"}]`
reverterBin := "608060405234801561001057600080fd5b506101d3806100206000396000f3fe608060405234801561001057600080fd5b506004361061004c5760003560e01c80634b409e01146100515780639b340e361461005b5780639bd6103714610065578063b7246fc11461006f575b600080fd5b610059610079565b005b6100636100ca565b005b61006d6100cf565b005b610077610145565b005b60006100c8576040517f08c379a0000000000000000000000000000000000000000000000000000000008152600401808060200182810382526000815260200160200191505060405180910390fd5b565b600080fd5b6000610143576040517f08c379a000000000000000000000000000000000000000000000000000000000815260040180806020018281038252600a8152602001807f736f6d65206572726f720000000000000000000000000000000000000000000081525060200191505060405180910390fd5b565b7f08c379a0000000000000000000000000000000000000000000000000000000006000526020600452600a6024527f736f6d65206572726f720000000000000000000000000000000000000000000060445260646000f3fea2646970667358221220cdd8af0609ec4996b7360c7c780bad5c735740c64b1fffc3445aa12d37f07cb164736f6c63430006070033"
parsed, err := abi.JSON(strings.NewReader(reverterABI))
if err != nil {
t.Errorf("could not get code at test addr: %v", err)
}
contractAuth, _ := bind.NewKeyedTransactorWithChainID(testKey, big.NewInt(1337))
addr, _, _, err := bind.DeployContract(contractAuth, parsed, common.FromHex(reverterBin), sim)
if err != nil {
t.Errorf("could not deploy contract: %v", err)
}
inputs := make(map[string]interface{}, 3)
inputs["revertASM"] = nil
inputs["revertNoString"] = ""
inputs["revertString"] = "some error"
call := make([]func([]byte) ([]byte, error), 2)
call[0] = func(input []byte) ([]byte, error) {
return sim.PendingCallContract(bgCtx, ethereum.CallMsg{
From: testAddr,
To: &addr,
Data: input,
})
}
call[1] = func(input []byte) ([]byte, error) {
return sim.CallContract(bgCtx, ethereum.CallMsg{
From: testAddr,
To: &addr,
Data: input,
}, nil)
}
// Run pending calls then commit
for _, cl := range call {
for key, val := range inputs {
input, err := parsed.Pack(key)
if err != nil {
t.Errorf("could not pack %v function on contract: %v", key, err)
}
res, err := cl(input)
if err == nil {
t.Errorf("call to %v was not reverted", key)
}
if res != nil {
t.Errorf("result from %v was not nil: %v", key, res)
}
if val != nil {
rerr, ok := err.(*revertError)
if !ok {
t.Errorf("expect revert error")
}
if rerr.Error() != "execution reverted: "+val.(string) {
t.Errorf("error was malformed: got %v want %v", rerr.Error(), val)
}
} else {
// revert(0x0,0x0)
if err.Error() != "execution reverted" {
t.Errorf("error was malformed: got %v want %v", err, "execution reverted")
}
}
}
input, err := parsed.Pack("noRevert")
if err != nil {
t.Errorf("could not pack noRevert function on contract: %v", err)
}
res, err := cl(input)
if err != nil {
t.Error("call to noRevert was reverted")
}
if res == nil {
t.Errorf("result from noRevert was nil")
}
sim.Commit()
}
}

View File

@ -32,7 +32,7 @@ import (
// SignerFn is a signer function callback when a contract requires a method to
// sign the transaction before submission.
type SignerFn func(types.Signer, common.Address, *types.Transaction) (*types.Transaction, error)
type SignerFn func(common.Address, *types.Transaction) (*types.Transaction, error)
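With types.Signer dropped from the callback, each SignerFn is now expected to close over a signer of its own; bind.NewKeyedTransactorWithChainID produces an equivalent callback for the common case. A hand-rolled sketch under that assumption, with placeholder key and chain ID handling:

package example

import (
	"crypto/ecdsa"
	"errors"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

// signerFor builds a SignerFn around a private key; the signer is chosen once
// for the given chain ID and captured by the closure.
func signerFor(key *ecdsa.PrivateKey, chainID *big.Int) (common.Address, bind.SignerFn) {
	from := crypto.PubkeyToAddress(key.PublicKey)
	signer := types.LatestSignerForChainID(chainID)
	return from, func(addr common.Address, tx *types.Transaction) (*types.Transaction, error) {
		if addr != from {
			return nil, errors.New("not authorized to sign this account")
		}
		return types.SignTx(tx, signer, key)
	}
}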
// CallOpts is the collection of options to fine tune a contract call request.
type CallOpts struct {
@ -49,7 +49,7 @@ type TransactOpts struct {
Nonce *big.Int // Nonce to use for the transaction execution (nil = use pending state)
Signer SignerFn // Method to use for signing the transaction (mandatory)
Value *big.Int // Funds to transfer along along the transaction (nil = 0 = no funds)
Value *big.Int // Funds to transfer along the transaction (nil = 0 = no funds)
GasPrice *big.Int // Gas price to use for the transaction execution (nil = gas price oracle)
GasLimit uint64 // Gas limit to set for the transaction execution (0 = estimate)
@ -117,11 +117,14 @@ func DeployContract(opts *TransactOpts, abi abi.ABI, bytecode []byte, backend Co
// sets the output to result. The result type might be a single field for simple
// returns, a slice of interfaces for anonymous returns and a struct for named
// returns.
func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string, params ...interface{}) error {
func (c *BoundContract) Call(opts *CallOpts, results *[]interface{}, method string, params ...interface{}) error {
// Don't crash on a lazy user
if opts == nil {
opts = new(CallOpts)
}
if results == nil {
results = new([]interface{})
}
// Pack the input, call and unpack the results
input, err := c.abi.Pack(method, params...)
if err != nil {
@ -149,7 +152,10 @@ func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string,
}
} else {
output, err = c.caller.CallContract(ctx, msg, opts.BlockNumber)
if err == nil && len(output) == 0 {
if err != nil {
return err
}
if len(output) == 0 {
// Make sure we have a contract to operate on, and bail out otherwise.
if code, err = c.caller.CodeAt(ctx, c.address, opts.BlockNumber); err != nil {
return err
@ -158,10 +164,14 @@ func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string,
}
}
}
if err != nil {
if len(*results) == 0 {
res, err := c.abi.Unpack(method, output)
*results = res
return err
}
return c.abi.Unpack(result, method, output)
res := *results
return c.abi.UnpackIntoInterface(res[0], method, output)
}
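The new Call signature takes a *[]interface{}: pass an empty slice to have every output unpacked into it, or pre-populate it with a single pointer to unpack into an existing value. A minimal sketch of the first style; the contract handle and the "get" method name are assumptions:

package example

import (
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
)

// readFirstOutput calls a hypothetical "get" method and returns its first
// output; Call fills the supplied slice with every unpacked return value.
func readFirstOutput(c *bind.BoundContract) (interface{}, error) {
	var out []interface{}
	if err := c.Call(&bind.CallOpts{}, &out, "get"); err != nil {
		return nil, err
	}
	if len(out) == 0 {
		return nil, nil // method has no outputs
	}
	return out[0], nil
}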
// Transact invokes the (paid) contract method with params as input values.
@ -177,7 +187,7 @@ func (c *BoundContract) Transact(opts *TransactOpts, method string, params ...in
}
// RawTransact initiates a transaction with the given raw calldata as the input.
// It's usually used to initiates transaction for invoking **Fallback** function.
// It's usually used to initiate transactions that invoke the **Fallback** function.
func (c *BoundContract) RawTransact(opts *TransactOpts, calldata []byte) (*types.Transaction, error) {
// todo(rjl493456442) check the method is payable or not,
// reject invalid transaction at the first place
@ -246,7 +256,7 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
if opts.Signer == nil {
return nil, errors.New("no signer to authorize the transaction with")
}
signedTx, err := opts.Signer(types.HomesteadSigner{}, opts.From, rawTx)
signedTx, err := opts.Signer(opts.From, rawTx)
if err != nil {
return nil, err
}
@ -264,9 +274,9 @@ func (c *BoundContract) FilterLogs(opts *FilterOpts, name string, query ...[]int
opts = new(FilterOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].ID()}}, query...)
query = append([][]interface{}{{c.abi.Events[name].ID}}, query...)
topics, err := makeTopics(query...)
topics, err := abi.MakeTopics(query...)
if err != nil {
return nil, nil, err
}
@ -313,9 +323,9 @@ func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]inter
opts = new(WatchOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].ID()}}, query...)
query = append([][]interface{}{{c.abi.Events[name].ID}}, query...)
topics, err := makeTopics(query...)
topics, err := abi.MakeTopics(query...)
if err != nil {
return nil, nil, err
}
@ -339,7 +349,7 @@ func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]inter
// UnpackLog unpacks a retrieved log into the provided output structure.
func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log) error {
if len(log.Data) > 0 {
if err := c.abi.Unpack(out, event, log.Data); err != nil {
if err := c.abi.UnpackIntoInterface(out, event, log.Data); err != nil {
return err
}
}
@ -349,7 +359,7 @@ func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log)
indexed = append(indexed, arg)
}
}
return parseTopics(out, indexed, log.Topics[1:])
return abi.ParseTopics(out, indexed, log.Topics[1:])
}
// UnpackLogIntoMap unpacks a retrieved log into the provided map.
@ -365,7 +375,7 @@ func (c *BoundContract) UnpackLogIntoMap(out map[string]interface{}, event strin
indexed = append(indexed, arg)
}
}
return parseTopicsIntoMap(out, indexed, log.Topics[1:])
return abi.ParseTopicsIntoMap(out, indexed, log.Topics[1:])
}
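Both unpack helpers now lean on the exported abi.UnpackIntoInterface and abi.ParseTopics* functions. For typed decoding, UnpackLog still fills a struct whose fields mirror the event; a sketch against a hypothetical Received(address sender, uint256 amount) event with no indexed arguments:

package example

import (
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// receivedEvent mirrors the hypothetical event Received(address sender, uint256 amount).
type receivedEvent struct {
	Sender common.Address
	Amount *big.Int
}

// decodeReceived unpacks one log into the typed struct; non-indexed fields come
// from the data blob, indexed ones (none here) from the topics.
func decodeReceived(c *bind.BoundContract, l types.Log) (*receivedEvent, error) {
	ev := new(receivedEvent)
	if err := c.UnpackLog(ev, "Received", l); err != nil {
		return nil, err
	}
	return ev, nil
}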
// ensureContext is a helper method to ensure a context is not nil, even if the

View File

@ -17,9 +17,9 @@
package bind_test
import (
"bytes"
"context"
"math/big"
"reflect"
"strings"
"testing"
@ -34,8 +34,10 @@ import (
)
type mockCaller struct {
codeAtBlockNumber *big.Int
callContractBlockNumber *big.Int
codeAtBlockNumber *big.Int
callContractBlockNumber *big.Int
pendingCodeAtCalled bool
pendingCallContractCalled bool
}
func (mc *mockCaller) CodeAt(ctx context.Context, contract common.Address, blockNumber *big.Int) ([]byte, error) {
@ -47,6 +49,16 @@ func (mc *mockCaller) CallContract(ctx context.Context, call ethereum.CallMsg, b
mc.callContractBlockNumber = blockNumber
return nil, nil
}
func (mc *mockCaller) PendingCodeAt(ctx context.Context, contract common.Address) ([]byte, error) {
mc.pendingCodeAtCalled = true
return nil, nil
}
func (mc *mockCaller) PendingCallContract(ctx context.Context, call ethereum.CallMsg) ([]byte, error) {
mc.pendingCallContractCalled = true
return nil, nil
}
func TestPassingBlockNumber(t *testing.T) {
mc := &mockCaller{}
@ -59,11 +71,10 @@ func TestPassingBlockNumber(t *testing.T) {
},
},
}, mc, nil, nil)
var ret string
blockNumber := big.NewInt(42)
bc.Call(&bind.CallOpts{BlockNumber: blockNumber}, &ret, "something")
bc.Call(&bind.CallOpts{BlockNumber: blockNumber}, nil, "something")
if mc.callContractBlockNumber != blockNumber {
t.Fatalf("CallContract() was not passed the block number")
@ -73,7 +84,7 @@ func TestPassingBlockNumber(t *testing.T) {
t.Fatalf("CodeAt() was not passed the block number")
}
bc.Call(&bind.CallOpts{}, &ret, "something")
bc.Call(&bind.CallOpts{}, nil, "something")
if mc.callContractBlockNumber != nil {
t.Fatalf("CallContract() was passed a block number when it should not have been")
@ -82,57 +93,39 @@ func TestPassingBlockNumber(t *testing.T) {
if mc.codeAtBlockNumber != nil {
t.Fatalf("CodeAt() was passed a block number when it should not have been")
}
bc.Call(&bind.CallOpts{BlockNumber: blockNumber, Pending: true}, nil, "something")
if !mc.pendingCallContractCalled {
t.Fatalf("CallContract() was not passed the block number")
}
if !mc.pendingCodeAtCalled {
t.Fatalf("CodeAt() was not passed the block number")
}
}
const hexData = "0x000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158"
func TestUnpackIndexedStringTyLogIntoMap(t *testing.T) {
hash := crypto.Keccak256Hash([]byte("testName"))
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x0"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x0"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x0"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x0"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"name","type":"string"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"name": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["name"] != expectedReceivedMap["name"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedSliceTyLogIntoMap(t *testing.T) {
@ -141,51 +134,23 @@ func TestUnpackIndexedSliceTyLogIntoMap(t *testing.T) {
t.Fatal(err)
}
hash := crypto.Keccak256Hash(sliceBytes)
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x0"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x0"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x0"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x0"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"names","type":"string[]"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"names": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["names"] != expectedReceivedMap["names"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedArrayTyLogIntoMap(t *testing.T) {
@ -194,51 +159,23 @@ func TestUnpackIndexedArrayTyLogIntoMap(t *testing.T) {
t.Fatal(err)
}
hash := crypto.Keccak256Hash(arrBytes)
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x0"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x0"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x0"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x0"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"addresses","type":"address[2]"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"addresses": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["addresses"] != expectedReceivedMap["addresses"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedFuncTyLogIntoMap(t *testing.T) {
@ -249,99 +186,72 @@ func TestUnpackIndexedFuncTyLogIntoMap(t *testing.T) {
functionTyBytes := append(addrBytes, functionSelector...)
var functionTy [24]byte
copy(functionTy[:], functionTyBytes[0:24])
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
common.BytesToHash(functionTyBytes),
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
common.BytesToHash(functionTyBytes),
}
mockLog := newMockLog(topics, common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"function","type":"function"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"function": functionTy,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["function"] != expectedReceivedMap["function"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedBytesTyLogIntoMap(t *testing.T) {
byts := []byte{1, 2, 3, 4, 5}
hash := crypto.Keccak256Hash(byts)
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
bytes := []byte{1, 2, 3, 4, 5}
hash := crypto.Keccak256Hash(bytes)
topics := []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"content","type":"bytes"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"content": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func unpackAndCheck(t *testing.T, bc *bind.BoundContract, expected map[string]interface{}, mockLog types.Log) {
received := make(map[string]interface{})
if err := bc.UnpackLogIntoMap(received, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
if len(received) != len(expected) {
t.Fatalf("unpacked map length %v not equal expected length of %v", len(received), len(expected))
}
if receivedMap["content"] != expectedReceivedMap["content"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
for name, elem := range expected {
if !reflect.DeepEqual(elem, received[name]) {
t.Errorf("field %v does not match expected, want %v, got %v", name, elem, received[name])
}
}
}
func newMockLog(topics []common.Hash, txHash common.Hash) types.Log {
return types.Log{
Address: common.HexToAddress("0x0"),
Topics: topics,
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: txHash,
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
}
}

View File

@ -52,7 +52,7 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
// contracts is the map of each individual contract requested binding
contracts = make(map[string]*tmplContract)
// structs is the map of all reclared structs shared by passed contracts.
// structs is the map of all redeclared structs shared by passed contracts.
structs = make(map[string]*tmplStruct)
// isLib is the map used to flag each encountered library as such
@ -80,10 +80,10 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
fallback *tmplMethod
receive *tmplMethod
// identifiers are used to detect duplicated identifier of function
// and event. For all calls, transacts and events, abigen will generate
// identifiers are used to detect duplicated identifiers of functions
// and events. For all calls, transacts and events, abigen will generate
// corresponding bindings. However we have to ensure there is no
// identifier coliision in the bindings of these categories.
// identifier collisions in the bindings of these categories.
callIdentifiers = make(map[string]bool)
transactIdentifiers = make(map[string]bool)
eventIdentifiers = make(map[string]bool)
@ -220,8 +220,6 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
"bindtype": bindType[lang],
"bindtopictype": bindTopicType[lang],
"namedtype": namedType[lang],
"formatmethod": formatMethod,
"formatevent": formatEvent,
"capitalise": capitalise,
"decapitalise": decapitalise,
}
@ -248,7 +246,7 @@ var bindType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) stri
LangJava: bindTypeJava,
}
// bindBasicTypeGo converts basic solidity types(except array, slice and tuple) to Go one.
// bindBasicTypeGo converts basic solidity types (except array, slice and tuple) to Go ones.
func bindBasicTypeGo(kind abi.Type) string {
switch kind.T {
case abi.AddressTy:
@ -288,7 +286,7 @@ func bindTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
}
}
// bindBasicTypeJava converts basic solidity types(except array, slice and tuple) to Java one.
// bindBasicTypeJava converts basic solidity types (except array, slice and tuple) to Java ones.
func bindBasicTypeJava(kind abi.Type) string {
switch kind.T {
case abi.AddressTy:
@ -332,7 +330,7 @@ func bindBasicTypeJava(kind abi.Type) string {
}
// pluralizeJavaType explicitly converts multidimensional types to predefined
// type in go side.
// types on the Go side.
func pluralizeJavaType(typ string) string {
switch typ {
case "boolean":
@ -371,7 +369,7 @@ var bindTopicType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct)
}
// bindTopicTypeGo converts a Solidity topic type to a Go one. It is almost the same
// funcionality as for simple types, but dynamic types get converted to hashes.
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeGo(kind, structs)
@ -388,7 +386,7 @@ func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
}
// bindTopicTypeJava converts a Solidity topic type to a Java one. It is almost the same
// funcionality as for simple types, but dynamic types get converted to hashes.
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeJava(kind, structs)
@ -396,7 +394,7 @@ func bindTopicTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
// parameters that are not value types i.e. arrays and structs are not
// stored directly but instead a keccak256-hash of an encoding is stored.
//
// We only convert stringS and bytes to hash, still need to deal with
// We only convert strings and bytes to hash, still need to deal with
// array(both fixed-size and dynamic-size) and struct.
if bound == "String" || bound == "byte[]" {
bound = "Hash"
@ -417,7 +415,7 @@ var bindStructType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct
func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
// We compose raw struct name and canonical parameter expression
// We compose a raw struct name and a canonical parameter expression
// together here. The reason is before solidity v0.5.11, kind.TupleRawName
// is empty, so we use canonical parameter expression to distinguish
// different struct definition. From the consideration of backward
@ -456,7 +454,7 @@ func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
// We compose raw struct name and canonical parameter expression
// We compose a raw struct name and a canonical parameter expression
// together here. The reason is before solidity v0.5.11, kind.TupleRawName
// is empty, so we use canonical parameter expression to distinguish
// different struct definition. From the consideration of backward
@ -488,7 +486,7 @@ func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
}
// namedType is a set of functions that transform language specific types to
// named versions that my be used inside method names.
// named versions that may be used inside method names.
var namedType = map[Lang]func(string, abi.Type) string{
LangGo: func(string, abi.Type) string { panic("this shouldn't be needed") },
LangJava: namedTypeJava,
@ -530,16 +528,14 @@ func alias(aliases map[string]string, n string) string {
}
// methodNormalizer is a name transformer that modifies Solidity method names to
// conform to target language naming concentions.
// conform to target language naming conventions.
var methodNormalizer = map[Lang]func(string) string{
LangGo: abi.ToCamelCase,
LangJava: decapitalise,
}
// capitalise makes a camel-case string which starts with an upper case character.
func capitalise(input string) string {
return abi.ToCamelCase(input)
}
var capitalise = abi.ToCamelCase
// decapitalise makes a camel-case string which starts with a lower case character.
func decapitalise(input string) string {
@ -588,74 +584,3 @@ func hasStruct(t abi.Type) bool {
return false
}
}
// resolveArgName converts a raw argument representation into a user friendly format.
func resolveArgName(arg abi.Argument, structs map[string]*tmplStruct) string {
var (
prefix string
embedded string
typ = &arg.Type
)
loop:
for {
switch typ.T {
case abi.SliceTy:
prefix += "[]"
case abi.ArrayTy:
prefix += fmt.Sprintf("[%d]", typ.Size)
default:
embedded = typ.TupleRawName + typ.String()
break loop
}
typ = typ.Elem
}
if s, exist := structs[embedded]; exist {
return prefix + s.Name
} else {
return arg.Type.String()
}
}
// formatMethod transforms raw method representation into a user friendly one.
func formatMethod(method abi.Method, structs map[string]*tmplStruct) string {
inputs := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
inputs[i] = fmt.Sprintf("%v %v", resolveArgName(input, structs), input.Name)
}
outputs := make([]string, len(method.Outputs))
for i, output := range method.Outputs {
outputs[i] = resolveArgName(output, structs)
if len(output.Name) > 0 {
outputs[i] += fmt.Sprintf(" %v", output.Name)
}
}
// Extract meaningful state mutability of solidity method.
// If it's default value, never print it.
state := method.StateMutability
if state == "nonpayable" {
state = ""
}
if state != "" {
state = state + " "
}
identity := fmt.Sprintf("function %v", method.RawName)
if method.IsFallback {
identity = "fallback"
} else if method.IsReceive {
identity = "receive"
}
return fmt.Sprintf("%s(%v) %sreturns(%v)", identity, strings.Join(inputs, ", "), state, strings.Join(outputs, ", "))
}
// formatEvent transforms raw event representation into a user friendly one.
func formatEvent(event abi.Event, structs map[string]*tmplStruct) string {
inputs := make([]string, len(event.Inputs))
for i, input := range event.Inputs {
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", resolveArgName(input, structs), input.Name)
} else {
inputs[i] = fmt.Sprintf("%v %v", resolveArgName(input, structs), input.Name)
}
}
return fmt.Sprintf("event %v(%v)", event.RawName, strings.Join(inputs, ", "))
}


@ -199,7 +199,8 @@ var bindTests = []struct {
{"type":"event","name":"indexed","inputs":[{"name":"addr","type":"address","indexed":true},{"name":"num","type":"int256","indexed":true}]},
{"type":"event","name":"mixed","inputs":[{"name":"addr","type":"address","indexed":true},{"name":"num","type":"int256"}]},
{"type":"event","name":"anonymous","anonymous":true,"inputs":[]},
{"type":"event","name":"dynamic","inputs":[{"name":"idxStr","type":"string","indexed":true},{"name":"idxDat","type":"bytes","indexed":true},{"name":"str","type":"string"},{"name":"dat","type":"bytes"}]}
{"type":"event","name":"dynamic","inputs":[{"name":"idxStr","type":"string","indexed":true},{"name":"idxDat","type":"bytes","indexed":true},{"name":"str","type":"string"},{"name":"dat","type":"bytes"}]},
{"type":"event","name":"unnamed","inputs":[{"name":"","type":"uint256","indexed": true},{"name":"","type":"uint256","indexed":true}]}
]
`},
`
@ -249,6 +250,12 @@ var bindTests = []struct {
fmt.Println(event.Addr) // Make sure the reconstructed indexed fields are present
fmt.Println(res, str, dat, hash, err)
oit, err := e.FilterUnnamed(nil, []*big.Int{}, []*big.Int{})
arg0 := oit.Event.Arg0 // Make sure unnamed arguments are handled correctly
arg1 := oit.Event.Arg1 // Make sure unnamed arguments are handled correctly
fmt.Println(arg0, arg1)
}
// Run a tiny reflection test to ensure disallowed methods don't appear
if _, ok := reflect.TypeOf(&EventChecker{}).MethodByName("FilterAnonymous"); ok {
@ -289,7 +296,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
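The repeated change from bind.NewKeyedTransactor to bind.NewKeyedTransactorWithChainID throughout these tests reflects that signing is now replay-protected per EIP-155: the transactor must know the chain ID up front (1337 is the chain ID of the simulated backend) and returns an error instead of producing a transactor that only fails later at signing time. A minimal sketch of the new construction, assuming the same imports as the surrounding tests (crypto, bind, math/big):

// Build a transactor bound to a specific chain ID; problems such as a missing
// chain ID surface here rather than when a transaction is signed.
key, err := crypto.GenerateKey()
if err != nil {
    t.Fatalf("failed to generate key: %v", err)
}
auth, err := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337)) // must match the backend's chain ID
if err != nil {
    t.Fatalf("failed to create transactor: %v", err)
}
_ = auth // pass as *bind.TransactOpts to DeployXxx and transact calls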
@ -344,7 +351,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -390,7 +397,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -448,7 +455,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -496,7 +503,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -522,6 +529,70 @@ var bindTests = []struct {
nil,
nil,
},
// Tests that structs are correctly unpacked
{
`Structs`,
`
pragma solidity ^0.6.5;
pragma experimental ABIEncoderV2;
contract Structs {
struct A {
bytes32 B;
}
function F() public view returns (A[] memory a, uint256[] memory c, bool[] memory d) {
A[] memory a = new A[](2);
a[0].B = bytes32(uint256(1234) << 96);
uint256[] memory c;
bool[] memory d;
return (a, c, d);
}
function G() public view returns (A[] memory a) {
A[] memory a = new A[](2);
a[0].B = bytes32(uint256(1234) << 96);
return a;
}
}
`,
[]string{`608060405234801561001057600080fd5b50610278806100206000396000f3fe608060405234801561001057600080fd5b50600436106100365760003560e01c806328811f591461003b5780636fecb6231461005b575b600080fd5b610043610070565b604051610052939291906101a0565b60405180910390f35b6100636100d6565b6040516100529190610186565b604080516002808252606082810190935282918291829190816020015b610095610131565b81526020019060019003908161008d575050805190915061026960611b9082906000906100be57fe5b60209081029190910101515293606093508392509050565b6040805160028082526060828101909352829190816020015b6100f7610131565b8152602001906001900390816100ef575050805190915061026960611b90829060009061012057fe5b602090810291909101015152905090565b60408051602081019091526000815290565b815260200190565b6000815180845260208085019450808401835b8381101561017b578151518752958201959082019060010161015e565b509495945050505050565b600060208252610199602083018461014b565b9392505050565b6000606082526101b3606083018661014b565b6020838203818501528186516101c98185610239565b91508288019350845b818110156101f3576101e5838651610143565b9484019492506001016101d2565b505084810360408601528551808252908201925081860190845b8181101561022b57825115158552938301939183019160010161020d565b509298975050505050505050565b9081526020019056fea2646970667358221220eb85327e285def14230424c52893aebecec1e387a50bb6b75fc4fdbed647f45f64736f6c63430006050033`},
[]string{`[{"inputs":[],"name":"F","outputs":[{"components":[{"internalType":"bytes32","name":"B","type":"bytes32"}],"internalType":"structStructs.A[]","name":"a","type":"tuple[]"},{"internalType":"uint256[]","name":"c","type":"uint256[]"},{"internalType":"bool[]","name":"d","type":"bool[]"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"G","outputs":[{"components":[{"internalType":"bytes32","name":"B","type":"bytes32"}],"internalType":"structStructs.A[]","name":"a","type":"tuple[]"}],"stateMutability":"view","type":"function"}]`},
`
"math/big"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/crypto"
`,
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
// Deploy the structs contract and call its view methods
_, _, structs, err := DeployStructs(auth, sim)
if err != nil {
t.Fatalf("Failed to deploy structs contract: %v", err)
}
sim.Commit()
opts := bind.CallOpts{}
if _, err := structs.F(&opts); err != nil {
t.Fatalf("Failed to invoke F method: %v", err)
}
if _, err := structs.G(&opts); err != nil {
t.Fatalf("Failed to invoke G method: %v", err)
}
`,
nil,
nil,
nil,
nil,
},
// Tests that non-existent contracts are reported as such (though only simulator test)
{
`NonExistent`,
@ -562,6 +633,45 @@ var bindTests = []struct {
nil,
nil,
},
{
`NonExistentStruct`,
`
contract NonExistentStruct {
function Struct() public view returns(uint256 a, uint256 b) {
return (10, 10);
}
}
`,
[]string{`6080604052348015600f57600080fd5b5060888061001e6000396000f3fe6080604052348015600f57600080fd5b506004361060285760003560e01c8063d5f6622514602d575b600080fd5b6033604c565b6040805192835260208301919091528051918290030190f35b600a809156fea264697066735822beefbeefbeefbeefbeefbeefbeefbeefbeefbeefbeefbeefbeefbeefbeefbeefbeef64736f6c6343decafe0033`},
[]string{`[{"inputs":[],"name":"Struct","outputs":[{"internalType":"uint256","name":"a","type":"uint256"},{"internalType":"uint256","name":"b","type":"uint256"}],"stateMutability":"pure","type":"function"}]`},
`
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
`,
`
// Create a simulator and wrap a non-deployed contract
sim := backends.NewSimulatedBackend(core.GenesisAlloc{}, uint64(10000000000))
defer sim.Close()
nonexistent, err := NewNonExistentStruct(common.Address{}, sim)
if err != nil {
t.Fatalf("Failed to access non-existent contract: %v", err)
}
// Ensure that contract calls fail with the appropriate error
if res, err := nonexistent.Struct(nil); err == nil {
t.Fatalf("Call succeeded on non-existent contract: %v", res)
} else if (err != bind.ErrNoCode) {
t.Fatalf("Error mismatch: have %v, want %v", err, bind.ErrNoCode)
}
`,
nil,
nil,
nil,
nil,
},
// Tests that gas estimation works for contracts with weird gas mechanics too.
{
`FunkyGasPattern`,
@ -591,7 +701,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -641,7 +751,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -716,7 +826,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -810,7 +920,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -1000,7 +1110,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -1135,7 +1245,7 @@ var bindTests = []struct {
`
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -1277,7 +1387,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -1343,7 +1453,7 @@ var bindTests = []struct {
`
// Initialize test accounts
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -1437,7 +1547,7 @@ var bindTests = []struct {
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 10000000)
defer sim.Close()
transactOpts := bind.NewKeyedTransactor(key)
transactOpts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
_, _, _, err := DeployIdentifierCollision(transactOpts, sim)
if err != nil {
t.Fatalf("failed to deploy contract: %v", err)
@ -1499,7 +1609,7 @@ var bindTests = []struct {
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 10000000)
defer sim.Close()
transactOpts := bind.NewKeyedTransactor(key)
transactOpts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
_, _, c1, err := DeployContractOne(transactOpts, sim)
if err != nil {
t.Fatal("Failed to deploy contract")
@ -1556,7 +1666,7 @@ var bindTests = []struct {
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
auth, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
defer sim.Close()
@ -1594,11 +1704,7 @@ var bindTests = []struct {
contract NewFallbacks {
event Fallback(bytes data);
fallback() external {
bytes memory data;
assembly {
calldatacopy(data, 0, calldatasize())
}
emit Fallback(data);
emit Fallback(msg.data);
}
event Received(address addr, uint value);
@ -1607,7 +1713,7 @@ var bindTests = []struct {
}
}
`,
[]string{"60806040523480156100115760006000fd5b50610017565b61016e806100266000396000f3fe60806040526004361061000d575b36610081575b7f88a5966d370b9919b20f3e2c13ff65706f196a4e32cc2c12bf57088f885258743334604051808373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020018281526020019250505060405180910390a15b005b34801561008e5760006000fd5b505b606036600082377f9043988963722edecc2099c75b0af0ff76af14ffca42ed6bce059a20a2a9f986816040518080602001828103825283818151815260200191508051906020019080838360005b838110156100fa5780820151818401525b6020810190506100de565b50505050905090810190601f1680156101275780820380516001836020036101000a031916815260200191505b509250505060405180910390a1505b00fea26469706673582212205643ca37f40c2b352dc541f42e9e6720de065de756324b7fcc9fb1d67eda4a7d64736f6c63430006040033"},
[]string{"6080604052348015600f57600080fd5b506101078061001f6000396000f3fe608060405236605f577f88a5966d370b9919b20f3e2c13ff65706f196a4e32cc2c12bf57088f885258743334604051808373ffffffffffffffffffffffffffffffffffffffff1681526020018281526020019250505060405180910390a1005b348015606a57600080fd5b507f9043988963722edecc2099c75b0af0ff76af14ffca42ed6bce059a20a2a9f98660003660405180806020018281038252848482818152602001925080828437600081840152601f19601f820116905080830192505050935050505060405180910390a100fea26469706673582212201f994dcfbc53bf610b19176f9a361eafa77b447fd9c796fa2c615dfd0aaf3b8b64736f6c634300060c0033"},
[]string{`[{"anonymous":false,"inputs":[{"indexed":false,"internalType":"bytes","name":"data","type":"bytes"}],"name":"Fallback","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"addr","type":"address"},{"indexed":false,"internalType":"uint256","name":"value","type":"uint256"}],"name":"Received","type":"event"},{"stateMutability":"nonpayable","type":"fallback"},{"stateMutability":"payable","type":"receive"}]`},
`
"bytes"
@ -1625,7 +1731,7 @@ var bindTests = []struct {
sim := backends.NewSimulatedBackend(core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000)}}, 1000000)
defer sim.Close()
opts := bind.NewKeyedTransactor(key)
opts, _ := bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
_, _, c, err := DeployNewFallbacks(opts, sim)
if err != nil {
t.Fatalf("Failed to deploy contract: %v", err)
@ -1655,6 +1761,7 @@ var bindTests = []struct {
}
// Test fallback function
gotEvent = false
opts.Value = nil
calldata := []byte{0x01, 0x02, 0x03}
c.Fallback(opts, calldata)
@ -1689,11 +1796,11 @@ func TestGolangBindings(t *testing.T) {
t.Skip("go sdk not found for testing")
}
// Create a temporary workspace for the test suite
ws, err := ioutil.TempDir("", "")
ws, err := ioutil.TempDir("", "binding-test")
if err != nil {
t.Fatalf("failed to create temporary workspace: %v", err)
}
defer os.RemoveAll(ws)
//defer os.RemoveAll(ws)
pkg := filepath.Join(ws, "bindtest")
if err = os.MkdirAll(pkg, 0700); err != nil {
@ -1739,11 +1846,16 @@ func TestGolangBindings(t *testing.T) {
t.Fatalf("failed to convert binding test to modules: %v\n%s", err, out)
}
pwd, _ := os.Getwd()
replacer := exec.Command(gocmd, "mod", "edit", "-replace", "github.com/ethereum/go-ethereum="+filepath.Join(pwd, "..", "..", "..")) // Repo root
replacer := exec.Command(gocmd, "mod", "edit", "-x", "-require", "github.com/ethereum/go-ethereum@v0.0.0", "-replace", "github.com/ethereum/go-ethereum="+filepath.Join(pwd, "..", "..", "..")) // Repo root
replacer.Dir = pkg
if out, err := replacer.CombinedOutput(); err != nil {
t.Fatalf("failed to replace binding test dependency to current source tree: %v\n%s", err, out)
}
tidier := exec.Command(gocmd, "mod", "tidy")
tidier.Dir = pkg
if out, err := tidier.CombinedOutput(); err != nil {
t.Fatalf("failed to tidy Go module file: %v\n%s", err, out)
}
// Test the entire package and report any failures
cmd := exec.Command(gocmd, "test", "-v", "-count", "1")
cmd.Dir = pkg


@ -30,7 +30,7 @@ type tmplData struct {
type tmplContract struct {
Type string // Type name of the main contract binding
InputABI string // JSON ABI used as the input to generate the binding from
InputBin string // Optional EVM bytecode used to denetare deploy code from
InputBin string // Optional EVM bytecode used to generate deploy code from
FuncSigs map[string]string // Optional map: string signature -> 4-byte signature
Constructor abi.Method // Contract constructor for deploy parametrization
Calls map[string]*tmplMethod // Contract calls that only read state data
@ -50,7 +50,8 @@ type tmplMethod struct {
Structured bool // Whether the returns should be accumulated into a struct
}
// tmplEvent is a wrapper around an a
// tmplEvent is a wrapper around an abi.Event that contains a few preprocessed
// and cached data fields.
type tmplEvent struct {
Original abi.Event // Original event as parsed by the abi package
Normalized abi.Event // Normalized version of the parsed fields
@ -64,7 +65,7 @@ type tmplField struct {
SolKind abi.Type // Raw abi type information
}
// tmplStruct is a wrapper around an abi.tuple contains a auto-generated
// tmplStruct is a wrapper around an abi.tuple and contains an auto-generated
// struct name.
type tmplStruct struct {
Name string // Auto-generated struct name(before solidity v0.5.11) or raw name.
@ -78,8 +79,8 @@ var tmplSource = map[Lang]string{
LangJava: tmplSourceJava,
}
// tmplSourceGo is the Go source template use to generate the contract binding
// based on.
// tmplSourceGo is the Go source template that the generated Go contract binding
// is based on.
const tmplSourceGo = `
// Code generated - DO NOT EDIT.
// This file is a generated binding and any manual changes will be lost.
@ -103,7 +104,6 @@ var (
_ = big.NewInt
_ = strings.NewReader
_ = ethereum.NotFound
_ = abi.U256
_ = bind.Bind
_ = common.Big1
_ = types.BloomLookup
@ -261,7 +261,7 @@ var (
// sets the output to result. The result type might be a single field for simple
// returns, a slice of interfaces for anonymous returns and a struct for named
// returns.
func (_{{$contract.Type}} *{{$contract.Type}}Raw) Call(opts *bind.CallOpts, result interface{}, method string, params ...interface{}) error {
func (_{{$contract.Type}} *{{$contract.Type}}Raw) Call(opts *bind.CallOpts, result *[]interface{}, method string, params ...interface{}) error {
return _{{$contract.Type}}.Contract.{{$contract.Type}}Caller.contract.Call(opts, result, method, params...)
}
@ -280,7 +280,7 @@ var (
// sets the output to result. The result type might be a single field for simple
// returns, a slice of interfaces for anonymous returns and a struct for named
// returns.
func (_{{$contract.Type}} *{{$contract.Type}}CallerRaw) Call(opts *bind.CallOpts, result interface{}, method string, params ...interface{}) error {
func (_{{$contract.Type}} *{{$contract.Type}}CallerRaw) Call(opts *bind.CallOpts, result *[]interface{}, method string, params ...interface{}) error {
return _{{$contract.Type}}.Contract.contract.Call(opts, result, method, params...)
}
@ -298,33 +298,40 @@ var (
{{range .Calls}}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Caller) {{.Normalized.Name}}(opts *bind.CallOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} },{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}}{{end}} error) {
{{if .Structured}}ret := new(struct{
{{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}}
{{end}}
}){{else}}var (
{{range $i, $_ := .Normalized.Outputs}}ret{{$i}} = new({{bindtype .Type $structs}})
{{end}}
){{end}}
out := {{if .Structured}}ret{{else}}{{if eq (len .Normalized.Outputs) 1}}ret0{{else}}&[]interface{}{
{{range $i, $_ := .Normalized.Outputs}}ret{{$i}},
{{end}}
}{{end}}{{end}}
err := _{{$contract.Type}}.contract.Call(opts, out, "{{.Original.Name}}" {{range .Normalized.Inputs}}, {{.Name}}{{end}})
return {{if .Structured}}*ret,{{else}}{{range $i, $_ := .Normalized.Outputs}}*ret{{$i}},{{end}}{{end}} err
var out []interface{}
err := _{{$contract.Type}}.contract.Call(opts, &out, "{{.Original.Name}}" {{range .Normalized.Inputs}}, {{.Name}}{{end}})
{{if .Structured}}
outstruct := new(struct{ {{range .Normalized.Outputs}} {{.Name}} {{bindtype .Type $structs}}; {{end}} })
if err != nil {
return *outstruct, err
}
{{range $i, $t := .Normalized.Outputs}}
outstruct.{{.Name}} = *abi.ConvertType(out[{{$i}}], new({{bindtype .Type $structs}})).(*{{bindtype .Type $structs}}){{end}}
return *outstruct, err
{{else}}
if err != nil {
return {{range $i, $_ := .Normalized.Outputs}}*new({{bindtype .Type $structs}}), {{end}} err
}
{{range $i, $t := .Normalized.Outputs}}
out{{$i}} := *abi.ConvertType(out[{{$i}}], new({{bindtype .Type $structs}})).(*{{bindtype .Type $structs}}){{end}}
return {{range $i, $t := .Normalized.Outputs}}out{{$i}}, {{end}} err
{{end}}
}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}} {{end}} error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.CallOpts {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}CallerSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}} {{end}} error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.CallOpts {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
@ -333,21 +340,21 @@ var (
{{range .Transacts}}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) {{.Normalized.Name}}(opts *bind.TransactOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.contract.Transact(opts, "{{.Original.Name}}" {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.TransactOpts {{range $i, $_ := .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.TransactOpts {{range $i, $_ := .Normalized.Inputs}}, {{.Name}}{{end}})
}
@ -356,21 +363,21 @@ var (
{{if .Fallback}}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{formatmethod .Fallback.Original $structs}}
// Solidity: {{.Fallback.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) Fallback(opts *bind.TransactOpts, calldata []byte) (*types.Transaction, error) {
return _{{$contract.Type}}.contract.RawTransact(opts, calldata)
}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{formatmethod .Fallback.Original $structs}}
// Solidity: {{.Fallback.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) Fallback(calldata []byte) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Fallback(&_{{$contract.Type}}.TransactOpts, calldata)
}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{formatmethod .Fallback.Original $structs}}
// Solidity: {{.Fallback.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) Fallback(calldata []byte) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Fallback(&_{{$contract.Type}}.TransactOpts, calldata)
}
@ -379,21 +386,21 @@ var (
{{if .Receive}}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{formatmethod .Receive.Original $structs}}
// Solidity: {{.Receive.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) Receive(opts *bind.TransactOpts) (*types.Transaction, error) {
return _{{$contract.Type}}.contract.RawTransact(opts, nil) // calldata is disallowed for receive function
}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{formatmethod .Receive.Original $structs}}
// Solidity: {{.Receive.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) Receive() (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Receive(&_{{$contract.Type}}.TransactOpts)
}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{formatmethod .Receive.Original $structs}}
// Solidity: {{.Receive.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) Receive() (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Receive(&_{{$contract.Type}}.TransactOpts)
}
@ -472,7 +479,7 @@ var (
// Filter{{.Normalized.Name}} is a free log retrieval operation binding the contract event 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatevent .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Filter{{.Normalized.Name}}(opts *bind.FilterOpts{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type $structs}}{{end}}{{end}}) (*{{$contract.Type}}{{.Normalized.Name}}Iterator, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
@ -489,7 +496,7 @@ var (
// Watch{{.Normalized.Name}} is a free log subscription operation binding the contract event 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatevent .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Watch{{.Normalized.Name}}(opts *bind.WatchOpts, sink chan<- *{{$contract.Type}}{{.Normalized.Name}}{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type $structs}}{{end}}{{end}}) (event.Subscription, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
@ -531,12 +538,13 @@ var (
// Parse{{.Normalized.Name}} is a log parse operation binding the contract event 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatevent .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Parse{{.Normalized.Name}}(log types.Log) (*{{$contract.Type}}{{.Normalized.Name}}, error) {
event := new({{$contract.Type}}{{.Normalized.Name}})
if err := _{{$contract.Type}}.contract.UnpackLog(event, "{{.Original.Name}}", log); err != nil {
return nil, err
}
event.Raw = log
return event, nil
}
@ -544,8 +552,8 @@ var (
{{end}}
`
// tmplSourceJava is the Java source template use to generate the contract binding
// based on.
// tmplSourceJava is the Java source template that the generated Java contract binding
// is based on.
const tmplSourceJava = `
// This file is an automatically generated Java binding. Do not modify as any
// change will likely be lost upon the next re-generation!
@ -625,7 +633,7 @@ import java.util.*;
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{.Original.String}}
public {{if gt (len .Normalized.Outputs) 1}}{{capitalise .Normalized.Name}}Results{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}}{{end}}{{end}} {{.Normalized.Name}}(CallOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
public {{if gt (len .Normalized.Outputs) 1}}{{capitalise .Normalized.Name}}Results{{else if eq (len .Normalized.Outputs) 0}}void{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}}{{end}}{{end}} {{.Normalized.Name}}(CallOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
@ -663,7 +671,7 @@ import java.util.*;
{{if .Fallback}}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{formatmethod .Fallback.Original $structs}}
// Solidity: {{.Fallback.Original.String}}
public Transaction Fallback(TransactOpts opts, byte[] calldata) throws Exception {
return this.Contract.rawTransact(opts, calldata);
}
@ -672,7 +680,7 @@ import java.util.*;
{{if .Receive}}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{formatmethod .Receive.Original $structs}}
// Solidity: {{.Receive.Original.String}}
public Transaction Receive(TransactOpts opts) throws Exception {
return this.Contract.rawTransact(opts, null);
}


@ -18,7 +18,7 @@ package bind
import (
"context"
"fmt"
"errors"
"time"
"github.com/ethereum/go-ethereum/common"
@ -56,14 +56,14 @@ func WaitMined(ctx context.Context, b DeployBackend, tx *types.Transaction) (*ty
// contract address when it is mined. It stops waiting when ctx is canceled.
func WaitDeployed(ctx context.Context, b DeployBackend, tx *types.Transaction) (common.Address, error) {
if tx.To() != nil {
return common.Address{}, fmt.Errorf("tx is not contract creation")
return common.Address{}, errors.New("tx is not contract creation")
}
receipt, err := WaitMined(ctx, b, tx)
if err != nil {
return common.Address{}, err
}
if receipt.ContractAddress == (common.Address{}) {
return common.Address{}, fmt.Errorf("zero address")
return common.Address{}, errors.New("zero address")
}
// Check that code has indeed been deployed at the address.
// This matters on pre-Homestead chains: OOG in the constructor
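WaitDeployed builds on WaitMined: it rejects transactions that are not contract creations, waits for the receipt, and then verifies that code actually exists at the receipt's contract address. A minimal usage sketch, assuming it runs inside a function with a bind.DeployBackend (backend) and a creation transaction (tx) already in scope:

// Wait for a deployment with a bounded context so the call cannot block forever.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

addr, err := bind.WaitDeployed(ctx, backend, tx) // tx.To() must be nil (contract creation)
if err != nil {
    // non-nil if tx is not a creation, the context is cancelled or expires,
    // or no code is present at the computed address after mining
    return
}
_ = addr // address the contract code now lives at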


@ -18,6 +18,7 @@ package bind_test
import (
"context"
"errors"
"math/big"
"testing"
"time"
@ -84,7 +85,7 @@ func TestWaitDeployed(t *testing.T) {
select {
case <-mined:
if err != test.wantErr {
t.Errorf("test %q: error mismatch: got %q, want %q", name, err, test.wantErr)
t.Errorf("test %q: error mismatch: want %q, got %q", name, test.wantErr, err)
}
if address != test.wantAddress {
t.Errorf("test %q: unexpected contract address %s", name, address.Hex())
@ -94,3 +95,40 @@ func TestWaitDeployed(t *testing.T) {
}
}
}
func TestWaitDeployedCornerCases(t *testing.T) {
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
},
10000000,
)
defer backend.Close()
// Create a transaction to an account.
code := "6060604052600a8060106000396000f360606040526008565b00"
tx := types.NewTransaction(0, common.HexToAddress("0x01"), big.NewInt(0), 3000000, big.NewInt(1), common.FromHex(code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
backend.SendTransaction(ctx, tx)
backend.Commit()
notContractCreation := errors.New("tx is not contract creation")
if _, err := bind.WaitDeployed(ctx, backend, tx); err.Error() != notContractCreation.Error() {
t.Errorf("error mismatch: want %q, got %q", notContractCreation, err)
}
// Create a transaction that is not mined.
tx = types.NewContractCreation(1, big.NewInt(0), 3000000, big.NewInt(1), common.FromHex(code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
go func() {
contextCanceled := errors.New("context canceled")
if _, err := bind.WaitDeployed(ctx, backend, tx); err.Error() != contextCanceled.Error() {
t.Errorf("error missmatch: want %q, got %q, ", contextCanceled, err)
}
}()
backend.SendTransaction(ctx, tx)
cancel()
}


@ -39,23 +39,21 @@ func formatSliceString(kind reflect.Kind, sliceSize int) string {
// type in t.
func sliceTypeCheck(t Type, val reflect.Value) error {
if val.Kind() != reflect.Slice && val.Kind() != reflect.Array {
return typeErr(formatSliceString(t.Kind, t.Size), val.Type())
return typeErr(formatSliceString(t.GetType().Kind(), t.Size), val.Type())
}
if t.T == ArrayTy && val.Len() != t.Size {
return typeErr(formatSliceString(t.Elem.Kind, t.Size), formatSliceString(val.Type().Elem().Kind(), val.Len()))
return typeErr(formatSliceString(t.Elem.GetType().Kind(), t.Size), formatSliceString(val.Type().Elem().Kind(), val.Len()))
}
if t.Elem.T == SliceTy {
if t.Elem.T == SliceTy || t.Elem.T == ArrayTy {
if val.Len() > 0 {
return sliceTypeCheck(*t.Elem, val.Index(0))
}
} else if t.Elem.T == ArrayTy {
return sliceTypeCheck(*t.Elem, val.Index(0))
}
if elemKind := val.Type().Elem().Kind(); elemKind != t.Elem.Kind {
return typeErr(formatSliceString(t.Elem.Kind, t.Size), val.Type())
if val.Type().Elem().Kind() != t.Elem.GetType().Kind() {
return typeErr(formatSliceString(t.Elem.GetType().Kind(), t.Size), val.Type())
}
return nil
}
@ -68,10 +66,10 @@ func typeCheck(t Type, value reflect.Value) error {
}
// Check base type validity. Element types will be checked later on.
if t.Kind != value.Kind() {
return typeErr(t.Kind, value.Kind())
if t.GetType().Kind() != value.Kind() {
return typeErr(t.GetType().Kind(), value.Kind())
} else if t.T == FixedBytesTy && t.Size != value.Len() {
return typeErr(t.Type, value.Type())
return typeErr(t.GetType(), value.Type())
} else {
return nil
}


@ -32,7 +32,7 @@ type Event struct {
// the raw name and a suffix will be added in the case of an event overload.
//
// e.g.
// There are two events have same name:
// These are two events that have the same name:
// * foo(int,int)
// * foo(uint,uint)
// The event name of the first one will be resolved as foo while the second one
@ -42,36 +42,59 @@ type Event struct {
RawName string
Anonymous bool
Inputs Arguments
str string
// Sig contains the string signature according to the ABI spec.
// e.g. event foo(uint32 a, int b) = "foo(uint32,int256)"
// Please note that "int" is substitute for its canonical representation "int256"
Sig string
// ID returns the canonical representation of the event's signature used by the
// abi definition to identify event names and types.
ID common.Hash
}
// NewEvent creates a new Event.
// It sanitizes the input arguments by assigning generated names (arg0, arg1, ...) to unnamed arguments.
// It also precomputes the id, signature and string representation
// of the event.
func NewEvent(name, rawName string, anonymous bool, inputs Arguments) Event {
// sanitize inputs by assigning generated names to inputs without names
// and precompute string and sig representation.
names := make([]string, len(inputs))
types := make([]string, len(inputs))
for i, input := range inputs {
if input.Name == "" {
inputs[i] = Argument{
Name: fmt.Sprintf("arg%d", i),
Indexed: input.Indexed,
Type: input.Type,
}
} else {
inputs[i] = input
}
// string representation
names[i] = fmt.Sprintf("%v %v", input.Type, inputs[i].Name)
if input.Indexed {
names[i] = fmt.Sprintf("%v indexed %v", input.Type, inputs[i].Name)
}
// sig representation
types[i] = input.Type.String()
}
str := fmt.Sprintf("event %v(%v)", rawName, strings.Join(names, ", "))
sig := fmt.Sprintf("%v(%v)", rawName, strings.Join(types, ","))
id := common.BytesToHash(crypto.Keccak256([]byte(sig)))
return Event{
Name: name,
RawName: rawName,
Anonymous: anonymous,
Inputs: inputs,
str: str,
Sig: sig,
ID: id,
}
}
func (e Event) String() string {
inputs := make([]string, len(e.Inputs))
for i, input := range e.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", input.Type, input.Name)
}
}
return fmt.Sprintf("event %v(%v)", e.RawName, strings.Join(inputs, ", "))
}
// Sig returns the event string signature according to the ABI spec.
//
// Example
//
// event foo(uint32 a, int b) = "foo(uint32,int256)"
//
// Please note that "int" is substitute for its canonical representation "int256"
func (e Event) Sig() string {
types := make([]string, len(e.Inputs))
for i, input := range e.Inputs {
types[i] = input.Type.String()
}
return fmt.Sprintf("%v(%v)", e.RawName, strings.Join(types, ","))
}
// ID returns the canonical representation of the event's signature used by the
// abi definition to identify event names and types.
func (e Event) ID() common.Hash {
return common.BytesToHash(crypto.Keccak256([]byte(e.Sig())))
return e.str
}
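NewEvent now precomputes the string form, the canonical signature and the ID once, and gives unnamed inputs generated names (arg0, arg1, ...). The ID itself is simply the Keccak-256 hash of the canonical signature, which is also topic[0] of every non-anonymous log for that event. A short sketch assuming the crypto, common and fmt packages used in this file; the ERC-20 Transfer event is an illustrative example, not part of this diff:

// topic[0] of a non-anonymous event is keccak256 of its canonical signature.
// Note the canonical types: "uint" would be expanded to "uint256" first.
sig := "Transfer(address,address,uint256)"
id := common.BytesToHash(crypto.Keccak256([]byte(sig)))
fmt.Println(id.Hex()) // 0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef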


@ -104,8 +104,8 @@ func TestEventId(t *testing.T) {
}
for name, event := range abi.Events {
if event.ID() != test.expectations[name] {
t.Errorf("expected id to be %x, got %x", test.expectations[name], event.ID())
if event.ID != test.expectations[name] {
t.Errorf("expected id to be %x, got %x", test.expectations[name], event.ID)
}
}
}
@ -147,10 +147,6 @@ func TestEventString(t *testing.T) {
// TestEventMultiValueWithArrayUnpack verifies that array fields will be counted after parsing array.
func TestEventMultiValueWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": false, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 [2]uint8
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
@ -158,10 +154,10 @@ func TestEventMultiValueWithArrayUnpack(t *testing.T) {
for ; i <= 3; i++ {
b.Write(packNum(reflect.ValueOf(i)))
}
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{1, 2}, rst.Value1)
require.Equal(t, uint8(3), rst.Value2)
unpacked, err := abi.Unpack("test", b.Bytes())
require.NoError(t, err)
require.Equal(t, [2]uint8{1, 2}, unpacked[0])
require.Equal(t, uint8(3), unpacked[1])
}
func TestEventTupleUnpack(t *testing.T) {
@ -312,14 +308,14 @@ func TestEventTupleUnpack(t *testing.T) {
&[]interface{}{common.Address{}, new(big.Int)},
&[]interface{}{},
jsonEventPledge,
"abi: insufficient number of elements in the list/array for unpack, want 3, got 2",
"abi: insufficient number of arguments for unpack, want 3, got 2",
"Can not unpack Pledge event into too short slice",
}, {
pledgeData1,
new(map[string]interface{}),
&[]interface{}{},
jsonEventPledge,
"abi: cannot unmarshal tuple into map[string]interface {}",
"abi:[2] cannot unmarshal tuple in to map[string]interface {}",
"Can not unpack Pledge event into map",
}, {
mixedCaseData1,
@ -351,14 +347,14 @@ func unpackTestEventData(dest interface{}, hexData string, jsonEvent []byte, ass
var e Event
assert.NoError(json.Unmarshal(jsonEvent, &e), "Should be able to unmarshal event ABI")
a := ABI{Events: map[string]Event{"e": e}}
return a.Unpack(dest, "e", data)
return a.UnpackIntoInterface(dest, "e", data)
}
// TestEventUnpackIndexed verifies that indexed field will be skipped by event decoder.
func TestEventUnpackIndexed(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 uint8
Value1 uint8 // indexed
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
@ -366,16 +362,16 @@ func TestEventUnpackIndexed(t *testing.T) {
var b bytes.Buffer
b.Write(packNum(reflect.ValueOf(uint8(8))))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.NoError(t, abi.UnpackIntoInterface(&rst, "test", b.Bytes()))
require.Equal(t, uint8(0), rst.Value1)
require.Equal(t, uint8(8), rst.Value2)
}
// TestEventIndexedWithArrayUnpack verifies that decoder will not overlow when static array is indexed input.
// TestEventIndexedWithArrayUnpack verifies that decoder will not overflow when static array is indexed input.
func TestEventIndexedWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"string"}]}]`
type testStruct struct {
Value1 [2]uint8
Value1 [2]uint8 // indexed
Value2 string
}
abi, err := JSON(strings.NewReader(definition))
@ -388,7 +384,7 @@ func TestEventIndexedWithArrayUnpack(t *testing.T) {
b.Write(common.RightPadBytes([]byte(stringOut), 32))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.NoError(t, abi.UnpackIntoInterface(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{0, 0}, rst.Value1)
require.Equal(t, stringOut, rst.Value2)
}
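The test updates above follow the split of the old Unpack API: ABI.Unpack(name, data) now returns the decoded values as a []interface{} in declaration order (indexed event inputs are skipped by the event decoder), while UnpackIntoInterface keeps the old behaviour of filling a struct. A minimal sketch under those assumptions, written in-package like the tests above and reusing their "test" event definition and packed data:

parsedABI, _ := JSON(strings.NewReader(definition))

// New positional API: one interface{} per non-indexed input.
values, err := parsedABI.Unpack("test", data)
if err == nil {
    value1 := values[0].([2]uint8) // uint8[2] decodes as [2]uint8
    value2 := values[1].(uint8)
    _, _ = value1, value2
}

// Struct-filling API: field names must match the capitalised input names.
var out struct {
    Value1 [2]uint8
    Value2 uint8
}
if err := parsedABI.UnpackIntoInterface(&out, "test", data); err == nil {
    _ = out.Value1
}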


@ -23,11 +23,29 @@ import (
"github.com/ethereum/go-ethereum/crypto"
)
// FunctionType represents different types of functions a contract might have.
type FunctionType int
const (
// Constructor represents the constructor of the contract.
// The constructor function is called while deploying a contract.
Constructor FunctionType = iota
// Fallback represents the fallback function.
// This function is executed if no other function matches the given function
// signature and no receive function is specified.
Fallback
// Receive represents the receive function.
// This function is executed on plain Ether transfers.
Receive
// Function represents a normal function.
Function
)
// Method represents a callable given a `Name` and whether the method is a constant.
// If the method is `Const` no transaction needs to be created for this
// particular Method call. It can easily be simulated using a local VM.
// For example a `Balance()` method only needs to retrieve something
// from the storage and therefore requires no Tx to be send to the
// from the storage and therefore requires no Tx to be sent to the
// network. A method such as `Transact` does require a Tx and thus will
// be flagged `false`.
// Input specifies the required input parameters for this gives method.
@ -36,7 +54,7 @@ type Method struct {
// the raw name and a suffix will be added in the case of a function overload.
//
// e.g.
// There are two functions have same name:
// These are two functions that have the same name:
// * foo(int,int)
// * foo(uint,uint)
// The method name of the first one will be resolved as foo while the second one
@ -44,6 +62,10 @@ type Method struct {
Name string
RawName string // RawName is the raw method name parsed from ABI
// Type indicates whether the method is a regular function, a constructor, or one
// of the special fallback/receive functions introduced in solidity v0.6.0
Type FunctionType
// StateMutability indicates the mutability state of method,
// the default value is nonpayable. It can be empty if the abi
// is generated by legacy compiler.
@ -53,69 +75,84 @@ type Method struct {
Constant bool
Payable bool
// The following two flags indicates whether the method is a
// special fallback introduced in solidity v0.6.0
IsFallback bool
IsReceive bool
Inputs Arguments
Outputs Arguments
str string
// Sig contains the method's string signature according to the ABI spec.
// e.g. function foo(uint32 a, int b) = "foo(uint32,int256)"
// Please note that "int" is substituted by its canonical representation "int256"
Sig string
// ID returns the canonical representation of the method's signature used by the
// abi definition to identify method names and types.
ID []byte
}
// Sig returns the methods string signature according to the ABI spec.
//
// Example
//
// function foo(uint32 a, int b) = "foo(uint32,int256)"
//
// Please note that "int" is substitute for its canonical representation "int256"
func (method Method) Sig() string {
// Short circuit if the method is special. Fallback
// and Receive don't have signature at all.
if method.IsFallback || method.IsReceive {
return ""
}
types := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
// NewMethod creates a new Method.
// A method should always be created using NewMethod.
// It also precomputes the sig representation and the string representation
// of the method.
func NewMethod(name string, rawName string, funType FunctionType, mutability string, isConst, isPayable bool, inputs Arguments, outputs Arguments) Method {
var (
types = make([]string, len(inputs))
inputNames = make([]string, len(inputs))
outputNames = make([]string, len(outputs))
)
for i, input := range inputs {
inputNames[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
types[i] = input.Type.String()
}
return fmt.Sprintf("%v(%v)", method.RawName, strings.Join(types, ","))
}
func (method Method) String() string {
inputs := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
}
outputs := make([]string, len(method.Outputs))
for i, output := range method.Outputs {
outputs[i] = output.Type.String()
for i, output := range outputs {
outputNames[i] = output.Type.String()
if len(output.Name) > 0 {
outputs[i] += fmt.Sprintf(" %v", output.Name)
outputNames[i] += fmt.Sprintf(" %v", output.Name)
}
}
// calculate the signature and method id. Note only function
// has meaningful signature and id.
var (
sig string
id []byte
)
if funType == Function {
sig = fmt.Sprintf("%v(%v)", rawName, strings.Join(types, ","))
id = crypto.Keccak256([]byte(sig))[:4]
}
// Extract meaningful state mutability of solidity method.
// If it's default value, never print it.
state := method.StateMutability
state := mutability
if state == "nonpayable" {
state = ""
}
if state != "" {
state = state + " "
}
identity := fmt.Sprintf("function %v", method.RawName)
if method.IsFallback {
identity := fmt.Sprintf("function %v", rawName)
if funType == Fallback {
identity = "fallback"
} else if method.IsReceive {
} else if funType == Receive {
identity = "receive"
} else if funType == Constructor {
identity = "constructor"
}
str := fmt.Sprintf("%v(%v) %sreturns(%v)", identity, strings.Join(inputNames, ", "), state, strings.Join(outputNames, ", "))
return Method{
Name: name,
RawName: rawName,
Type: funType,
StateMutability: mutability,
Constant: isConst,
Payable: isPayable,
Inputs: inputs,
Outputs: outputs,
str: str,
Sig: sig,
ID: id,
}
return fmt.Sprintf("%v(%v) %sreturns(%v)", identity, strings.Join(inputs, ", "), state, strings.Join(outputs, ", "))
}
// ID returns the canonical representation of the method's signature used by the
// abi definition to identify method names and types.
func (method Method) ID() []byte {
return crypto.Keccak256([]byte(method.Sig()))[:4]
func (method Method) String() string {
return method.str
}
// IsConstant returns the indicator whether the method is read-only.


@ -137,7 +137,7 @@ func TestMethodSig(t *testing.T) {
}
for _, test := range cases {
got := abi.Methods[test.method].Sig()
got := abi.Methods[test.method].Sig
if got != test.expect {
t.Errorf("expected string to be %s, got %s", test.expect, got)
}
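With Sig and ID now plain fields populated by NewMethod, the test simply reads .Sig. The 4-byte selector is unchanged in substance: the first four bytes of the Keccak-256 hash of the canonical signature. A small sketch assuming the crypto and fmt packages; transfer(address,uint256) is an illustrative example rather than part of this diff:

// A function selector is keccak256(canonical signature)[:4].
// Aliases are canonicalised first, e.g. "uint" becomes "uint256".
sig := "transfer(address,uint256)"
selector := crypto.Keccak256([]byte(sig))[:4]
fmt.Printf("0x%x\n", selector) // 0xa9059cbb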


@ -17,6 +17,8 @@
package abi
import (
"errors"
"fmt"
"math/big"
"reflect"
@ -25,7 +27,7 @@ import (
)
// packBytesSlice packs the given bytes as [L, V] as the canonical representation
// bytes slice
// bytes slice.
func packBytesSlice(bytes []byte, l int) []byte {
len := packNum(reflect.ValueOf(l))
return append(len, common.RightPadBytes(bytes, (l+31)/32*32)...)
@ -33,49 +35,51 @@ func packBytesSlice(bytes []byte, l int) []byte {
// packElement packs the given reflect value according to the abi specification in
// t.
func packElement(t Type, reflectValue reflect.Value) []byte {
func packElement(t Type, reflectValue reflect.Value) ([]byte, error) {
switch t.T {
case IntTy, UintTy:
return packNum(reflectValue)
return packNum(reflectValue), nil
case StringTy:
return packBytesSlice([]byte(reflectValue.String()), reflectValue.Len())
return packBytesSlice([]byte(reflectValue.String()), reflectValue.Len()), nil
case AddressTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.LeftPadBytes(reflectValue.Bytes(), 32)
return common.LeftPadBytes(reflectValue.Bytes(), 32), nil
case BoolTy:
if reflectValue.Bool() {
return math.PaddedBigBytes(common.Big1, 32)
return math.PaddedBigBytes(common.Big1, 32), nil
}
return math.PaddedBigBytes(common.Big0, 32)
return math.PaddedBigBytes(common.Big0, 32), nil
case BytesTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return packBytesSlice(reflectValue.Bytes(), reflectValue.Len())
if reflectValue.Type() != reflect.TypeOf([]byte{}) {
return []byte{}, errors.New("Bytes type is neither slice nor array")
}
return packBytesSlice(reflectValue.Bytes(), reflectValue.Len()), nil
case FixedBytesTy, FunctionTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.RightPadBytes(reflectValue.Bytes(), 32)
return common.RightPadBytes(reflectValue.Bytes(), 32), nil
default:
panic("abi: fatal error")
return []byte{}, fmt.Errorf("Could not pack element, unknown type: %v", t.T)
}
}
// packNum packs the given number (using the reflect value) and will cast it to appropriate number representation
// packNum packs the given number (using the reflect value) and will cast it to appropriate number representation.
func packNum(value reflect.Value) []byte {
switch kind := value.Kind(); kind {
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return U256(new(big.Int).SetUint64(value.Uint()))
return math.U256Bytes(new(big.Int).SetUint64(value.Uint()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return U256(big.NewInt(value.Int()))
return math.U256Bytes(big.NewInt(value.Int()))
case reflect.Ptr:
return U256(new(big.Int).Set(value.Interface().(*big.Int)))
return math.U256Bytes(new(big.Int).Set(value.Interface().(*big.Int)))
default:
panic("abi: fatal error")
}
}
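packNum and packBytesSlice together produce the two basic ABI encodings: fixed-size values become a single 32-byte word (numbers left-padded big-endian, via math.U256Bytes after this change), and dynamic bytes/strings become a length word followed by the content right-padded to a 32-byte boundary. A standard-library-only sketch of that layout, assuming non-negative values that fit in 256 bits (the real helpers also handle negative numbers via two's complement):

import "math/big"

// packWord left-pads a non-negative integer into a single 32-byte ABI word.
func packWord(n *big.Int) []byte {
    word := make([]byte, 32)
    b := n.Bytes()
    copy(word[32-len(b):], b)
    return word
}

// packDynamicBytes encodes [length word][content right-padded to a 32-byte multiple],
// mirroring packBytesSlice above.
func packDynamicBytes(data []byte) []byte {
    padded := (len(data) + 31) / 32 * 32
    out := packWord(big.NewInt(int64(len(data)))) // length word
    out = append(out, data...)
    return append(out, make([]byte, padded-len(data))...)
}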


@ -18,623 +18,51 @@ package abi
import (
"bytes"
"encoding/hex"
"fmt"
"math"
"math/big"
"reflect"
"strconv"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
)
// TestPack tests the general pack/unpack tests in packing_test.go
func TestPack(t *testing.T) {
for i, test := range []struct {
typ string
components []ArgumentMarshaling
input interface{}
output []byte
}{
{
"uint8",
nil,
uint8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint8[]",
nil,
[]uint8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16",
nil,
uint16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16[]",
nil,
[]uint16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32",
nil,
uint32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32[]",
nil,
[]uint32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64",
nil,
uint64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64[]",
nil,
[]uint64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256",
nil,
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256[]",
nil,
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8",
nil,
int8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8[]",
nil,
[]int8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16",
nil,
int16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16[]",
nil,
[]int16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32",
nil,
int32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32[]",
nil,
[]int32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64",
nil,
int64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64[]",
nil,
[]int64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256",
nil,
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256[]",
nil,
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"bytes1",
nil,
[1]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes2",
nil,
[2]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes3",
nil,
[3]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes4",
nil,
[4]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes5",
nil,
[5]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes6",
nil,
[6]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes7",
nil,
[7]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes8",
nil,
[8]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes9",
nil,
[9]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes10",
nil,
[10]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes11",
nil,
[11]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes12",
nil,
[12]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes13",
nil,
[13]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes14",
nil,
[14]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes15",
nil,
[15]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes16",
nil,
[16]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes17",
nil,
[17]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes18",
nil,
[18]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes19",
nil,
[19]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes20",
nil,
[20]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes21",
nil,
[21]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes22",
nil,
[22]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes23",
nil,
[23]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes24",
nil,
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes25",
nil,
[25]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes26",
nil,
[26]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes27",
nil,
[27]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes28",
nil,
[28]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes29",
nil,
[29]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes30",
nil,
[30]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes31",
nil,
[31]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes32",
nil,
[32]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"uint32[2][3][4]",
nil,
[4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018"),
},
{
"address[]",
nil,
[]common.Address{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000"),
},
{
"bytes32[]",
nil,
[]common.Hash{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000201000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000"),
},
{
"function",
nil,
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"string",
nil,
"foobar",
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000006666f6f6261720000000000000000000000000000000000000000000000000000"),
},
{
"string[]",
nil,
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset 128 to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"string[2]",
nil,
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"bytes32[][]",
nil,
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
for i, test := range packUnpackTests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
encb, err := hex.DecodeString(test.packed)
if err != nil {
t.Fatalf("invalid hex %s: %v", test.packed, err)
}
inDef := fmt.Sprintf(`[{ "name" : "method", "type": "function", "inputs": %s}]`, test.def)
inAbi, err := JSON(strings.NewReader(inDef))
if err != nil {
t.Fatalf("invalid ABI definition %s, %v", inDef, err)
}
var packed []byte
packed, err = inAbi.Pack("method", test.unpacked)
{
"bytes32[][2]",
nil,
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
"bytes32[3][2]",
nil,
[][]common.Hash{{{1}, {2}, {3}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
// static tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "int64"},
{Name: "b", Type: "int256"},
{Name: "c", Type: "int256"},
{Name: "d", Type: "bool"},
{Name: "e", Type: "bytes32[3][2]"},
},
struct {
A int64
B *big.Int
C *big.Int
D bool
E [][]common.Hash
}{1, big.NewInt(1), big.NewInt(-1), true, [][]common.Hash{{{1}, {2}, {3}}, {{3}, {4}, {5}}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001" + // struct[a]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // struct[c]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[d]
"0100000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // struct[e] array[1][2]
},
{
// dynamic tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "string"},
{Name: "b", Type: "int64"},
{Name: "c", Type: "bytes"},
{Name: "d", Type: "string[]"},
{Name: "e", Type: "int256[]"},
{Name: "f", Type: "address[]"},
},
struct {
FieldA string `abi:"a"` // Test whether abi tag works
FieldB int64 `abi:"b"`
C []byte
D []string
E []*big.Int
F []common.Address
}{"foobar", 1, []byte{1}, []string{"foo", "bar"}, []*big.Int{big.NewInt(1), big.NewInt(-1)}, []common.Address{{1}, {2}}},
common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000000000c0" + // struct[a] offset
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"0000000000000000000000000000000000000000000000000000000000000100" + // struct[c] offset
"0000000000000000000000000000000000000000000000000000000000000140" + // struct[d] offset
"0000000000000000000000000000000000000000000000000000000000000220" + // struct[e] offset
"0000000000000000000000000000000000000000000000000000000000000280" + // struct[f] offset
"0000000000000000000000000000000000000000000000000000000000000006" + // struct[a] length
"666f6f6261720000000000000000000000000000000000000000000000000000" + // struct[a] "foobar"
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[c] length
"0100000000000000000000000000000000000000000000000000000000000000" + // []byte{1}
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[d] length
"0000000000000000000000000000000000000000000000000000000000000040" + // foo offset
"0000000000000000000000000000000000000000000000000000000000000080" + // bar offset
"0000000000000000000000000000000000000000000000000000000000000003" + // foo length
"666f6f0000000000000000000000000000000000000000000000000000000000" + // foo
"0000000000000000000000000000000000000000000000000000000000000003" + // bar offset
"6261720000000000000000000000000000000000000000000000000000000000" + // bar
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[e] length
"0000000000000000000000000000000000000000000000000000000000000001" + // 1
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // -1
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[f] length
"0000000000000000000000000100000000000000000000000000000000000000" + // common.Address{1}
"0000000000000000000000000200000000000000000000000000000000000000"), // common.Address{2}
},
{
// nested tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "tuple", Components: []ArgumentMarshaling{{Name: "a", Type: "uint256"}, {Name: "b", Type: "uint256[]"}}},
{Name: "b", Type: "int256[]"},
},
struct {
A struct {
FieldA *big.Int `abi:"a"`
B []*big.Int
}
B []*big.Int
}{
A: struct {
FieldA *big.Int `abi:"a"` // Test whether abi tag works for nested tuple
B []*big.Int
}{big.NewInt(1), []*big.Int{big.NewInt(1), big.NewInt(0)}},
B: []*big.Int{big.NewInt(1), big.NewInt(0)}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // a offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // b offset
"0000000000000000000000000000000000000000000000000000000000000001" + // a.a value
"0000000000000000000000000000000000000000000000000000000000000040" + // a.b offset
"0000000000000000000000000000000000000000000000000000000000000002" + // a.b length
"0000000000000000000000000000000000000000000000000000000000000001" + // a.b[0] value
"0000000000000000000000000000000000000000000000000000000000000000" + // a.b[1] value
"0000000000000000000000000000000000000000000000000000000000000002" + // b length
"0000000000000000000000000000000000000000000000000000000000000001" + // b[0] value
"0000000000000000000000000000000000000000000000000000000000000000"), // b[1] value
},
{
// tuple slice
"tuple[]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256[]"},
},
[]struct {
A *big.Int
B []*big.Int
}{
{big.NewInt(-1), []*big.Int{big.NewInt(1), big.NewInt(0)}},
{big.NewInt(1), []*big.Int{big.NewInt(2), big.NewInt(-1)}},
},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // tuple length
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // tuple[1] offset
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].B length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].B[0] value
"0000000000000000000000000000000000000000000000000000000000000000" + // tuple[0].B[1] value
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[1].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B length
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B[0] value
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].B[1] value
},
{
// static tuple array
"tuple[2]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256"},
},
[2]struct {
A *big.Int
B *big.Int
}{
{big.NewInt(-1), big.NewInt(1)},
{big.NewInt(1), big.NewInt(-1)},
},
common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].a
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].b
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].a
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].b
},
{
// dynamic tuple array
"tuple[2]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256[]"},
},
[2]struct {
A []*big.Int
}{
{[]*big.Int{big.NewInt(-1), big.NewInt(1)}},
{[]*big.Int{big.NewInt(1), big.NewInt(-1)}},
},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000c0" + // tuple[1] offset
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[0].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].A length
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A[0]
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].A[1]
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[1].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].A length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A[0]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].A[1]
},
} {
typ, err := NewType(test.typ, "", test.components)
if err != nil {
t.Fatalf("%v failed. Unexpected parse error: %v", i, err)
}
output, err := typ.pack(reflect.ValueOf(test.input))
if err != nil {
t.Fatalf("%v failed. Unexpected pack error: %v", i, err)
}
if !bytes.Equal(output, test.output) {
t.Errorf("input %d for typ: %v failed. Expected bytes: '%x' Got: '%x'", i, typ.String(), test.output, output)
}
if err != nil {
t.Fatalf("test %d (%v) failed: %v", i, test.def, err)
}
if !reflect.DeepEqual(packed[4:], encb) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, encb, packed[4:])
}
})
}
}
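The rewritten loop above replaces the inline test table with the shared packUnpackTests cases and goes through the public JSON/Pack API instead of calling typ.pack directly. A hedged sketch of what a single iteration amounts to, written as its own test in the same package (the function name and the uint256 case are illustrative, not part of the diff):

func TestPackUint256Example(t *testing.T) {
	// One packUnpackTests-style case: argument definition, Go value, expected words.
	def := `[{ "type": "uint256" }]`
	want, err := hex.DecodeString("0000000000000000000000000000000000000000000000000000000000000002")
	if err != nil {
		t.Fatal(err)
	}
	// Wrap the argument list in a method so the ABI can be parsed and packed.
	inDef := fmt.Sprintf(`[{ "name" : "method", "type": "function", "inputs": %s}]`, def)
	inAbi, err := JSON(strings.NewReader(inDef))
	if err != nil {
		t.Fatalf("invalid ABI definition %s: %v", inDef, err)
	}
	packed, err := inAbi.Pack("method", big.NewInt(2))
	if err != nil {
		t.Fatalf("pack failed: %v", err)
	}
	// The first four bytes are the method selector; the argument words follow.
	if !bytes.Equal(packed[4:], want) {
		t.Errorf("expected %x, got %x", want, packed[4:])
	}
}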
func TestMethodPack(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
sig := abi.Methods["slice"].ID()
sig := abi.Methods["slice"].ID
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -648,7 +76,7 @@ func TestMethodPack(t *testing.T) {
}
var addrA, addrB = common.Address{1}, common.Address{2}
sig = abi.Methods["sliceAddress"].ID()
sig = abi.Methods["sliceAddress"].ID
sig = append(sig, common.LeftPadBytes([]byte{32}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrA[:], 32)...)
@@ -663,7 +91,7 @@ func TestMethodPack(t *testing.T) {
}
var addrC, addrD = common.Address{3}, common.Address{4}
sig = abi.Methods["sliceMultiAddress"].ID()
sig = abi.Methods["sliceMultiAddress"].ID
sig = append(sig, common.LeftPadBytes([]byte{64}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{160}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -681,7 +109,7 @@ func TestMethodPack(t *testing.T) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["slice256"].ID()
sig = abi.Methods["slice256"].ID
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -695,7 +123,7 @@ func TestMethodPack(t *testing.T) {
}
a := [2][2]*big.Int{{big.NewInt(1), big.NewInt(1)}, {big.NewInt(2), big.NewInt(0)}}
sig = abi.Methods["nestedArray"].ID()
sig = abi.Methods["nestedArray"].ID
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -712,7 +140,7 @@ func TestMethodPack(t *testing.T) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["nestedArray2"].ID()
sig = abi.Methods["nestedArray2"].ID
sig = append(sig, common.LeftPadBytes([]byte{0x20}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x40}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x80}, 32)...)
@@ -728,7 +156,7 @@ func TestMethodPack(t *testing.T) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["nestedSlice"].ID()
sig = abi.Methods["nestedSlice"].ID
sig = append(sig, common.LeftPadBytes([]byte{0x20}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x02}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x40}, 32)...)


@@ -0,0 +1,990 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"math/big"
"github.com/ethereum/go-ethereum/common"
)
type packUnpackTest struct {
def string
unpacked interface{}
packed string
}
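Each case ties a JSON argument definition to its hex-encoded ABI words and the corresponding Go value; TestPack (above) encodes unpacked and compares against packed, and per its comment the unpack tests consume the same entries, so a new type only needs one row here. A hypothetical annotated entry, not part of the file, showing how the three fields relate:

{
	def:      `[{ "type": "uint64" }]`, // one unnamed uint64 argument
	unpacked: uint64(3),                // Go-side value handed to Pack / expected back from Unpack
	packed:   "0000000000000000000000000000000000000000000000000000000000000003", // 32-byte big-endian word, hex without 0x
},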
var packUnpackTests = []packUnpackTest{
// Booleans
{
def: `[{ "type": "bool" }]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: true,
},
{
def: `[{ "type": "bool" }]`,
packed: "0000000000000000000000000000000000000000000000000000000000000000",
unpacked: false,
},
// Integers
{
def: `[{ "type": "uint8" }]`,
unpacked: uint8(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{ "type": "uint8[]" }]`,
unpacked: []uint8{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{ "type": "uint16" }]`,
unpacked: uint16(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{ "type": "uint16[]" }]`,
unpacked: []uint16{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint17"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: big.NewInt(1),
},
{
def: `[{"type": "uint32"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: uint32(1),
},
{
def: `[{"type": "uint32[]"}]`,
unpacked: []uint32{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint64"}]`,
unpacked: uint64(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint64[]"}]`,
unpacked: []uint64{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint256"}]`,
unpacked: big.NewInt(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint256[]"}]`,
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int8"}]`,
unpacked: int8(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int8[]"}]`,
unpacked: []int8{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int16"}]`,
unpacked: int16(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int16[]"}]`,
unpacked: []int16{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int17"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: big.NewInt(1),
},
{
def: `[{"type": "int32"}]`,
unpacked: int32(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int32"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: int32(1),
},
{
def: `[{"type": "int32[]"}]`,
unpacked: []int32{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int64"}]`,
unpacked: int64(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int64[]"}]`,
unpacked: []int64{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int256"}]`,
unpacked: big.NewInt(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int256"}]`,
packed: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
unpacked: big.NewInt(-1),
},
{
def: `[{"type": "int256[]"}]`,
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
// Address
{
def: `[{"type": "address"}]`,
packed: "0000000000000000000000000100000000000000000000000000000000000000",
unpacked: common.Address{1},
},
{
def: `[{"type": "address[]"}]`,
unpacked: []common.Address{{1}, {2}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000100000000000000000000000000000000000000" +
"0000000000000000000000000200000000000000000000000000000000000000",
},
// Bytes
{
def: `[{"type": "bytes1"}]`,
unpacked: [1]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes2"}]`,
unpacked: [2]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes3"}]`,
unpacked: [3]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes4"}]`,
unpacked: [4]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes5"}]`,
unpacked: [5]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes6"}]`,
unpacked: [6]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes7"}]`,
unpacked: [7]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes8"}]`,
unpacked: [8]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes9"}]`,
unpacked: [9]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes10"}]`,
unpacked: [10]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes11"}]`,
unpacked: [11]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes12"}]`,
unpacked: [12]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes13"}]`,
unpacked: [13]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes14"}]`,
unpacked: [14]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes15"}]`,
unpacked: [15]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes16"}]`,
unpacked: [16]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes17"}]`,
unpacked: [17]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes18"}]`,
unpacked: [18]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes19"}]`,
unpacked: [19]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes20"}]`,
unpacked: [20]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes21"}]`,
unpacked: [21]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes22"}]`,
unpacked: [22]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes23"}]`,
unpacked: [23]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes24"}]`,
unpacked: [24]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes25"}]`,
unpacked: [25]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes26"}]`,
unpacked: [26]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes27"}]`,
unpacked: [27]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes28"}]`,
unpacked: [28]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes29"}]`,
unpacked: [29]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes30"}]`,
unpacked: [30]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes31"}]`,
unpacked: [31]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes32"}]`,
unpacked: [32]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes32"}]`,
packed: "0100000000000000000000000000000000000000000000000000000000000000",
unpacked: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "bytes"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000020" +
"0100000000000000000000000000000000000000000000000000000000000000",
unpacked: common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
def: `[{"type": "bytes32"}]`,
packed: "0100000000000000000000000000000000000000000000000000000000000000",
unpacked: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
// Functions
{
def: `[{"type": "function"}]`,
packed: "0100000000000000000000000000000000000000000000000000000000000000",
unpacked: [24]byte{1},
},
// Slice and Array
{
def: `[{"type": "uint8[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint8{1, 2},
},
{
def: `[{"type": "uint8[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: []uint8{},
},
{
def: `[{"type": "uint256[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: []*big.Int{},
},
{
def: `[{"type": "uint8[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint8{1, 2},
},
{
def: `[{"type": "int8[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int8{1, 2},
},
{
def: `[{"type": "int16[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []int16{1, 2},
},
{
def: `[{"type": "int16[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int16{1, 2},
},
{
def: `[{"type": "int32[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []int32{1, 2},
},
{
def: `[{"type": "int32[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int32{1, 2},
},
{
def: `[{"type": "int64[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []int64{1, 2},
},
{
def: `[{"type": "int64[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int64{1, 2},
},
{
def: `[{"type": "int256[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "int256[3]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003",
unpacked: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
// multi dimensional, if these pass, all types that don't require length prefix should pass
{
def: `[{"type": "uint8[][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: [][]uint8{},
},
{
def: `[{"type": "uint8[][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"00000000000000000000000000000000000000000000000000000000000000a0" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [][]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"00000000000000000000000000000000000000000000000000000000000000a0" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003",
unpacked: [][]uint8{{1, 2}, {1, 2, 3}},
},
{
def: `[{"type": "uint8[2][2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000060" +
"0000000000000000000000000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: [2][]uint8{{}, {}},
},
{
def: `[{"type": "uint8[][2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001",
unpacked: [2][]uint8{{1}, {1}},
},
{
def: `[{"type": "uint8[2][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: [][2]uint8{},
},
{
def: `[{"type": "uint8[2][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [][2]uint8{{1, 2}},
},
{
def: `[{"type": "uint8[2][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint16[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint16{1, 2},
},
{
def: `[{"type": "uint16[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint16{1, 2},
},
{
def: `[{"type": "uint32[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint32{1, 2},
},
{
def: `[{"type": "uint32[2][3][4]"}]`,
unpacked: [4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"0000000000000000000000000000000000000000000000000000000000000004" +
"0000000000000000000000000000000000000000000000000000000000000005" +
"0000000000000000000000000000000000000000000000000000000000000006" +
"0000000000000000000000000000000000000000000000000000000000000007" +
"0000000000000000000000000000000000000000000000000000000000000008" +
"0000000000000000000000000000000000000000000000000000000000000009" +
"000000000000000000000000000000000000000000000000000000000000000a" +
"000000000000000000000000000000000000000000000000000000000000000b" +
"000000000000000000000000000000000000000000000000000000000000000c" +
"000000000000000000000000000000000000000000000000000000000000000d" +
"000000000000000000000000000000000000000000000000000000000000000e" +
"000000000000000000000000000000000000000000000000000000000000000f" +
"0000000000000000000000000000000000000000000000000000000000000010" +
"0000000000000000000000000000000000000000000000000000000000000011" +
"0000000000000000000000000000000000000000000000000000000000000012" +
"0000000000000000000000000000000000000000000000000000000000000013" +
"0000000000000000000000000000000000000000000000000000000000000014" +
"0000000000000000000000000000000000000000000000000000000000000015" +
"0000000000000000000000000000000000000000000000000000000000000016" +
"0000000000000000000000000000000000000000000000000000000000000017" +
"0000000000000000000000000000000000000000000000000000000000000018",
},
{
def: `[{"type": "bytes32[]"}]`,
unpacked: [][32]byte{{1}, {2}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0100000000000000000000000000000000000000000000000000000000000000" +
"0200000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "uint32[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint32{1, 2},
},
{
def: `[{"type": "uint64[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint64{1, 2},
},
{
def: `[{"type": "uint64[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint64{1, 2},
},
{
def: `[{"type": "uint256[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "uint256[3]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003",
unpacked: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
{
def: `[{"type": "string[4]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"00000000000000000000000000000000000000000000000000000000000000c0" +
"0000000000000000000000000000000000000000000000000000000000000100" +
"0000000000000000000000000000000000000000000000000000000000000140" +
"0000000000000000000000000000000000000000000000000000000000000005" +
"48656c6c6f000000000000000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000005" +
"576f726c64000000000000000000000000000000000000000000000000000000" +
"000000000000000000000000000000000000000000000000000000000000000b" +
"476f2d657468657265756d000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000008" +
"457468657265756d000000000000000000000000000000000000000000000000",
unpacked: [4]string{"Hello", "World", "Go-ethereum", "Ethereum"},
},
{
def: `[{"type": "string[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"0000000000000000000000000000000000000000000000000000000000000008" +
"457468657265756d000000000000000000000000000000000000000000000000" +
"000000000000000000000000000000000000000000000000000000000000000b" +
"676f2d657468657265756d000000000000000000000000000000000000000000",
unpacked: []string{"Ethereum", "go-ethereum"},
},
{
def: `[{"type": "bytes[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"f0f0f00000000000000000000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"f0f0f00000000000000000000000000000000000000000000000000000000000",
unpacked: [][]byte{{0xf0, 0xf0, 0xf0}, {0xf0, 0xf0, 0xf0}},
},
{
def: `[{"type": "uint256[2][][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"00000000000000000000000000000000000000000000000000000000000000e0" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000000c8" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000003e8" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000000c8" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000003e8",
unpacked: [][][2]*big.Int{{{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}, {{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}},
},
// struct outputs
{
def: `[{"components": [{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: struct {
Int1 *big.Int
Int2 *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"components": [{"name":"int_one","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"components": [{"name":"int__one","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"components": [{"name":"int_one_","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"components": [{"name":"int_one","type":"int256"}, {"name":"intone","type":"int256"}], "type":"tuple"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: struct {
IntOne *big.Int
Intone *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "string"}]`,
unpacked: "foobar",
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000006" +
"666f6f6261720000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "string[]"}]`,
unpacked: []string{"hello", "foobar"},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset 128 to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000", // str[1]
},
{
def: `[{"type": "string[2]"}]`,
unpacked: [2]string{"hello", "foobar"},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" + // offset to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000", // str[1]
},
{
def: `[{"type": "bytes32[][]"}]`,
unpacked: [][][32]byte{{{1}, {2}}, {{3}, {4}, {5}}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // array[1][2]
},
{
def: `[{"type": "bytes32[][2]"}]`,
unpacked: [2][][32]byte{{{1}, {2}}, {{3}, {4}, {5}}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // array[1][2]
},
{
def: `[{"type": "bytes32[3][2]"}]`,
unpacked: [2][3][32]byte{{{1}, {2}, {3}}, {{3}, {4}, {5}}},
packed: "0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // array[1][2]
},
{
// static tuple
def: `[{"components": [{"name":"a","type":"int64"},
{"name":"b","type":"int256"},
{"name":"c","type":"int256"},
{"name":"d","type":"bool"},
{"name":"e","type":"bytes32[3][2]"}], "type":"tuple"}]`,
unpacked: struct {
A int64
B *big.Int
C *big.Int
D bool
E [2][3][32]byte
}{1, big.NewInt(1), big.NewInt(-1), true, [2][3][32]byte{{{1}, {2}, {3}}, {{3}, {4}, {5}}}},
packed: "0000000000000000000000000000000000000000000000000000000000000001" + // struct[a]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // struct[c]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[d]
"0100000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // struct[e] array[1][2]
},
{
def: `[{"components": [{"name":"a","type":"string"},
{"name":"b","type":"int64"},
{"name":"c","type":"bytes"},
{"name":"d","type":"string[]"},
{"name":"e","type":"int256[]"},
{"name":"f","type":"address[]"}], "type":"tuple"}]`,
unpacked: struct {
A string
B int64
C []byte
D []string
E []*big.Int
F []common.Address
}{"foobar", 1, []byte{1}, []string{"foo", "bar"}, []*big.Int{big.NewInt(1), big.NewInt(-1)}, []common.Address{{1}, {2}}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" + // struct a
"00000000000000000000000000000000000000000000000000000000000000c0" + // struct[a] offset
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"0000000000000000000000000000000000000000000000000000000000000100" + // struct[c] offset
"0000000000000000000000000000000000000000000000000000000000000140" + // struct[d] offset
"0000000000000000000000000000000000000000000000000000000000000220" + // struct[e] offset
"0000000000000000000000000000000000000000000000000000000000000280" + // struct[f] offset
"0000000000000000000000000000000000000000000000000000000000000006" + // struct[a] length
"666f6f6261720000000000000000000000000000000000000000000000000000" + // struct[a] "foobar"
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[c] length
"0100000000000000000000000000000000000000000000000000000000000000" + // []byte{1}
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[d] length
"0000000000000000000000000000000000000000000000000000000000000040" + // foo offset
"0000000000000000000000000000000000000000000000000000000000000080" + // bar offset
"0000000000000000000000000000000000000000000000000000000000000003" + // foo length
"666f6f0000000000000000000000000000000000000000000000000000000000" + // foo
"0000000000000000000000000000000000000000000000000000000000000003" + // bar offset
"6261720000000000000000000000000000000000000000000000000000000000" + // bar
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[e] length
"0000000000000000000000000000000000000000000000000000000000000001" + // 1
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // -1
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[f] length
"0000000000000000000000000100000000000000000000000000000000000000" + // common.Address{1}
"0000000000000000000000000200000000000000000000000000000000000000", // common.Address{2}
},
{
def: `[{"components": [{ "type": "tuple","components": [{"name": "a","type": "uint256"},
{"name": "b","type": "uint256[]"}],
"name": "a","type": "tuple"},
{"name": "b","type": "uint256[]"}], "type": "tuple"}]`,
unpacked: struct {
A struct {
A *big.Int
B []*big.Int
}
B []*big.Int
}{
A: struct {
A *big.Int
B []*big.Int
}{big.NewInt(1), []*big.Int{big.NewInt(1), big.NewInt(2)}},
B: []*big.Int{big.NewInt(1), big.NewInt(2)}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" + // struct a
"0000000000000000000000000000000000000000000000000000000000000040" + // a offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // b offset
"0000000000000000000000000000000000000000000000000000000000000001" + // a.a value
"0000000000000000000000000000000000000000000000000000000000000040" + // a.b offset
"0000000000000000000000000000000000000000000000000000000000000002" + // a.b length
"0000000000000000000000000000000000000000000000000000000000000001" + // a.b[0] value
"0000000000000000000000000000000000000000000000000000000000000002" + // a.b[1] value
"0000000000000000000000000000000000000000000000000000000000000002" + // b length
"0000000000000000000000000000000000000000000000000000000000000001" + // b[0] value
"0000000000000000000000000000000000000000000000000000000000000002", // b[1] value
},
{
def: `[{"components": [{"name": "a","type": "int256"},
{"name": "b","type": "int256[]"}],
"name": "a","type": "tuple[]"}]`,
unpacked: []struct {
A *big.Int
B []*big.Int
}{
{big.NewInt(-1), []*big.Int{big.NewInt(1), big.NewInt(3)}},
{big.NewInt(1), []*big.Int{big.NewInt(2), big.NewInt(-1)}},
},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple length
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // tuple[1] offset
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].B length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].B[0] value
"0000000000000000000000000000000000000000000000000000000000000003" + // tuple[0].B[1] value
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[1].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B length
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B[0] value
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // tuple[1].B[1] value
},
{
def: `[{"components": [{"name": "a","type": "int256"},
{"name": "b","type": "int256"}],
"name": "a","type": "tuple[2]"}]`,
unpacked: [2]struct {
A *big.Int
B *big.Int
}{
{big.NewInt(-1), big.NewInt(1)},
{big.NewInt(1), big.NewInt(-1)},
},
packed: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].a
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].b
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].a
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // tuple[1].b
},
{
def: `[{"components": [{"name": "a","type": "int256[]"}],
"name": "a","type": "tuple[2]"}]`,
unpacked: [2]struct {
A []*big.Int
}{
{[]*big.Int{big.NewInt(-1), big.NewInt(1)}},
{[]*big.Int{big.NewInt(1), big.NewInt(-1)}},
},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000c0" + // tuple[1] offset
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[0].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].A length
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A[0]
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].A[1]
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[1].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].A length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A[0]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // tuple[1].A[1]
},
}
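
Note: the hex vectors above follow the ABI head/tail layout for dynamic types: each dynamic argument contributes a 32-byte offset word to the head, and its length word plus right-padded payload to the tail. A minimal, illustrative sketch (not part of this diff) that reproduces the [{"type": "string"}] vector with the accounts/abi package:

package main

import (
    "encoding/hex"
    "fmt"
    "log"

    "github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
    strType, err := abi.NewType("string", "", nil)
    if err != nil {
        log.Fatal(err)
    }
    args := abi.Arguments{{Type: strType}}
    // Pack "foobar": one head word (offset 0x20), then the tail holding the
    // length word (6) and the right-padded UTF-8 bytes.
    packed, err := args.Pack("foobar")
    if err != nil {
        log.Fatal(err)
    }
    for i := 0; i < len(packed); i += 32 {
        fmt.Println(hex.EncodeToString(packed[i : i+32]))
    }
    // Prints the three words of the "string" test vector above.
}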


@ -17,57 +17,74 @@
package abi
import (
"errors"
"fmt"
"math/big"
"reflect"
"strings"
)
// ConvertType converts an interface of a runtime type into an interface of the
// given type
// e.g. turn
// var fields []reflect.StructField
// fields = append(fields, reflect.StructField{
// Name: "X",
// Type: reflect.TypeOf(new(big.Int)),
// Tag: reflect.StructTag("json:\"" + "x" + "\""),
// }
// into
// type TupleT struct { X *big.Int }
func ConvertType(in interface{}, proto interface{}) interface{} {
protoType := reflect.TypeOf(proto)
if reflect.TypeOf(in).ConvertibleTo(protoType) {
return reflect.ValueOf(in).Convert(protoType).Interface()
}
// Use set as a last ditch effort
if err := set(reflect.ValueOf(proto), reflect.ValueOf(in)); err != nil {
panic(err)
}
return proto
}
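
Note: a hedged usage sketch of ConvertType (the Point type below is illustrative only). When the runtime type is directly convertible the reflect conversion path is taken; otherwise set falls back to a field-by-field copy:

package main

import (
    "fmt"
    "math/big"

    "github.com/ethereum/go-ethereum/accounts/abi"
)

// Point is an illustrative caller-side type, not part of the library.
type Point struct {
    X *big.Int
    Y *big.Int
}

func main() {
    // A value of a structurally identical (e.g. runtime-built) type converts
    // straight through ConvertType's reflect conversion path.
    in := struct {
        X *big.Int
        Y *big.Int
    }{big.NewInt(1), big.NewInt(2)}
    p := abi.ConvertType(in, Point{}).(Point)
    fmt.Println(p.X, p.Y) // 1 2
}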
// indirect recursively dereferences the value until it either gets the value
// or finds a big.Int
func indirect(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Ptr && v.Elem().Type() != derefbigT {
if v.Kind() == reflect.Ptr && v.Elem().Type() != reflect.TypeOf(big.Int{}) {
return indirect(v.Elem())
}
return v
}
// indirectInterfaceOrPtr recursively dereferences the value until value is not interface.
func indirectInterfaceOrPtr(v reflect.Value) reflect.Value {
if (v.Kind() == reflect.Interface || v.Kind() == reflect.Ptr) && v.Elem().IsValid() {
return indirect(v.Elem())
}
return v
}
// reflectIntKind returns the reflect using the given size and
// reflectIntType returns the reflect using the given size and
// unsignedness.
func reflectIntKindAndType(unsigned bool, size int) (reflect.Kind, reflect.Type) {
func reflectIntType(unsigned bool, size int) reflect.Type {
if unsigned {
switch size {
case 8:
return reflect.TypeOf(uint8(0))
case 16:
return reflect.TypeOf(uint16(0))
case 32:
return reflect.TypeOf(uint32(0))
case 64:
return reflect.TypeOf(uint64(0))
}
}
switch size {
case 8:
if unsigned {
return reflect.Uint8, uint8T
}
return reflect.Int8, int8T
return reflect.TypeOf(int8(0))
case 16:
if unsigned {
return reflect.Uint16, uint16T
}
return reflect.Int16, int16T
return reflect.TypeOf(int16(0))
case 32:
if unsigned {
return reflect.Uint32, uint32T
}
return reflect.Int32, int32T
return reflect.TypeOf(int32(0))
case 64:
if unsigned {
return reflect.Uint64, uint64T
}
return reflect.Int64, int64T
return reflect.TypeOf(int64(0))
}
return reflect.Ptr, bigT
return reflect.TypeOf(&big.Int{})
}
// mustArrayToBytesSlice creates a new byte slice with the exact same size as value
// mustArrayToByteSlice creates a new byte slice with the exact same size as value
// and copies the bytes in value to the new slice.
func mustArrayToByteSlice(value reflect.Value) reflect.Value {
slice := reflect.MakeSlice(reflect.TypeOf([]byte{}), value.Len(), value.Len())
@ -84,12 +101,16 @@ func set(dst, src reflect.Value) error {
switch {
case dstType.Kind() == reflect.Interface && dst.Elem().IsValid():
return set(dst.Elem(), src)
case dstType.Kind() == reflect.Ptr && dstType.Elem() != derefbigT:
case dstType.Kind() == reflect.Ptr && dstType.Elem() != reflect.TypeOf(big.Int{}):
return set(dst.Elem(), src)
case srcType.AssignableTo(dstType) && dst.CanSet():
dst.Set(src)
case dstType.Kind() == reflect.Slice && srcType.Kind() == reflect.Slice:
case dstType.Kind() == reflect.Slice && srcType.Kind() == reflect.Slice && dst.CanSet():
return setSlice(dst, src)
case dstType.Kind() == reflect.Array:
return setArray(dst, src)
case dstType.Kind() == reflect.Struct:
return setStruct(dst, src)
default:
return fmt.Errorf("abi: cannot unmarshal %v in to %v", src.Type(), dst.Type())
}
@ -98,38 +119,59 @@ func set(dst, src reflect.Value) error {
// setSlice attempts to assign src to dst when slices are not assignable by default
// e.g. src: [][]byte -> dst: [][15]byte
// setSlice ignores if we cannot copy all of src's elements.
func setSlice(dst, src reflect.Value) error {
slice := reflect.MakeSlice(dst.Type(), src.Len(), src.Len())
for i := 0; i < src.Len(); i++ {
v := src.Index(i)
reflect.Copy(slice.Index(i), v)
}
dst.Set(slice)
return nil
}
// requireAssignable assures that `dest` is a pointer and it's not an interface.
func requireAssignable(dst, src reflect.Value) error {
if dst.Kind() != reflect.Ptr && dst.Kind() != reflect.Interface {
return fmt.Errorf("abi: cannot unmarshal %v into %v", src.Type(), dst.Type())
}
return nil
}
// requireUnpackKind verifies preconditions for unpacking `args` into `kind`
func requireUnpackKind(v reflect.Value, t reflect.Type, k reflect.Kind,
args Arguments) error {
switch k {
case reflect.Struct:
case reflect.Slice, reflect.Array:
if minLen := args.LengthNonIndexed(); v.Len() < minLen {
return fmt.Errorf("abi: insufficient number of elements in the list/array for unpack, want %d, got %d",
minLen, v.Len())
if src.Index(i).Kind() == reflect.Struct {
if err := set(slice.Index(i), src.Index(i)); err != nil {
return err
}
} else {
// e.g. [][32]uint8 to []common.Hash
if err := set(slice.Index(i), src.Index(i)); err != nil {
return err
}
}
}
if dst.CanSet() {
dst.Set(slice)
return nil
}
return errors.New("Cannot set slice, destination not settable")
}
func setArray(dst, src reflect.Value) error {
if src.Kind() == reflect.Ptr {
return set(dst, indirect(src))
}
array := reflect.New(dst.Type()).Elem()
min := src.Len()
if src.Len() > dst.Len() {
min = dst.Len()
}
for i := 0; i < min; i++ {
if err := set(array.Index(i), src.Index(i)); err != nil {
return err
}
}
if dst.CanSet() {
dst.Set(array)
return nil
}
return errors.New("Cannot set array, destination not settable")
}
func setStruct(dst, src reflect.Value) error {
for i := 0; i < src.NumField(); i++ {
srcField := src.Field(i)
dstField := dst.Field(i)
if !dstField.IsValid() || !srcField.IsValid() {
return fmt.Errorf("Could not find src field: %v value: %v in destination", srcField.Type().Name(), srcField)
}
if err := set(dstField, srcField); err != nil {
return err
}
default:
return fmt.Errorf("abi: cannot unmarshal tuple into %v", t)
}
return nil
}
@ -156,9 +198,8 @@ func mapArgNamesToStructFields(argNames []string, value reflect.Value) (map[stri
continue
}
// skip fields that have no abi:"" tag.
var ok bool
var tagName string
if tagName, ok = typ.Field(i).Tag.Lookup("abi"); !ok {
tagName, ok := typ.Field(i).Tag.Lookup("abi")
if !ok {
continue
}
// check if tag is empty.


@ -17,6 +17,7 @@
package abi
import (
"math/big"
"reflect"
"testing"
)
@ -189,3 +190,72 @@ func TestReflectNameToStruct(t *testing.T) {
})
}
}
func TestConvertType(t *testing.T) {
// Test Basic Struct
type T struct {
X *big.Int
Y *big.Int
}
// Create on-the-fly structure
var fields []reflect.StructField
fields = append(fields, reflect.StructField{
Name: "X",
Type: reflect.TypeOf(new(big.Int)),
Tag: "json:\"" + "x" + "\"",
})
fields = append(fields, reflect.StructField{
Name: "Y",
Type: reflect.TypeOf(new(big.Int)),
Tag: "json:\"" + "y" + "\"",
})
val := reflect.New(reflect.StructOf(fields))
val.Elem().Field(0).Set(reflect.ValueOf(big.NewInt(1)))
val.Elem().Field(1).Set(reflect.ValueOf(big.NewInt(2)))
// ConvertType
out := *ConvertType(val.Interface(), new(T)).(*T)
if out.X.Cmp(big.NewInt(1)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out.X, big.NewInt(1))
}
if out.Y.Cmp(big.NewInt(2)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out.Y, big.NewInt(2))
}
// Slice Type
val2 := reflect.MakeSlice(reflect.SliceOf(reflect.StructOf(fields)), 2, 2)
val2.Index(0).Field(0).Set(reflect.ValueOf(big.NewInt(1)))
val2.Index(0).Field(1).Set(reflect.ValueOf(big.NewInt(2)))
val2.Index(1).Field(0).Set(reflect.ValueOf(big.NewInt(3)))
val2.Index(1).Field(1).Set(reflect.ValueOf(big.NewInt(4)))
out2 := *ConvertType(val2.Interface(), new([]T)).(*[]T)
if out2[0].X.Cmp(big.NewInt(1)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[0].X, big.NewInt(1))
}
if out2[0].Y.Cmp(big.NewInt(2)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[0].Y, big.NewInt(2))
}
if out2[1].X.Cmp(big.NewInt(3)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[1].X, big.NewInt(3))
}
if out2[1].Y.Cmp(big.NewInt(4)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out2[1].Y, big.NewInt(4))
}
// Array Type
val3 := reflect.New(reflect.ArrayOf(2, reflect.StructOf(fields)))
val3.Elem().Index(0).Field(0).Set(reflect.ValueOf(big.NewInt(1)))
val3.Elem().Index(0).Field(1).Set(reflect.ValueOf(big.NewInt(2)))
val3.Elem().Index(1).Field(0).Set(reflect.ValueOf(big.NewInt(3)))
val3.Elem().Index(1).Field(1).Set(reflect.ValueOf(big.NewInt(4)))
out3 := *ConvertType(val3.Interface(), new([2]T)).(*[2]T)
if out3[0].X.Cmp(big.NewInt(1)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[0].X, big.NewInt(1))
}
if out3[0].Y.Cmp(big.NewInt(2)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[0].Y, big.NewInt(2))
}
if out3[1].X.Cmp(big.NewInt(3)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[1].X, big.NewInt(3))
}
if out3[1].Y.Cmp(big.NewInt(4)) != 0 {
t.Errorf("ConvertType failed, got %v want %v", out3[1].Y, big.NewInt(4))
}
}


@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bind
package abi
import (
"encoding/binary"
@ -23,13 +23,12 @@ import (
"math/big"
"reflect"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
// makeTopics converts a filter query argument list into a filter topic set.
func makeTopics(query ...[]interface{}) ([][]common.Hash, error) {
// MakeTopics converts a filter query argument list into a filter topic set.
func MakeTopics(query ...[]interface{}) ([][]common.Hash, error) {
topics := make([][]common.Hash, len(query))
for i, filter := range query {
for _, rule := range filter {
@ -103,7 +102,7 @@ func genIntType(rule int64, size uint) []byte {
var topic [common.HashLength]byte
if rule < 0 {
// if a rule is negative, we need to put it into two's complement.
// extended to common.Hashlength bytes.
// extended to common.HashLength bytes.
topic = [common.HashLength]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}
}
for i := uint(0); i < size; i++ {
@ -112,19 +111,19 @@ func genIntType(rule int64, size uint) []byte {
return topic[:]
}
// parseTopics converts the indexed topic fields into actual log field values.
func parseTopics(out interface{}, fields abi.Arguments, topics []common.Hash) error {
// ParseTopics converts the indexed topic fields into actual log field values.
func ParseTopics(out interface{}, fields Arguments, topics []common.Hash) error {
return parseTopicWithSetter(fields, topics,
func(arg abi.Argument, reconstr interface{}) {
field := reflect.ValueOf(out).Elem().FieldByName(capitalise(arg.Name))
func(arg Argument, reconstr interface{}) {
field := reflect.ValueOf(out).Elem().FieldByName(ToCamelCase(arg.Name))
field.Set(reflect.ValueOf(reconstr))
})
}
// parseTopicsIntoMap converts the indexed topic field-value pairs into map key-value pairs
func parseTopicsIntoMap(out map[string]interface{}, fields abi.Arguments, topics []common.Hash) error {
// ParseTopicsIntoMap converts the indexed topic field-value pairs into map key-value pairs.
func ParseTopicsIntoMap(out map[string]interface{}, fields Arguments, topics []common.Hash) error {
return parseTopicWithSetter(fields, topics,
func(arg abi.Argument, reconstr interface{}) {
func(arg Argument, reconstr interface{}) {
out[arg.Name] = reconstr
})
}
@ -134,7 +133,7 @@ func parseTopicsIntoMap(out map[string]interface{}, fields abi.Arguments, topics
//
// Note, dynamic types cannot be reconstructed since they get mapped to Keccak256
// hashes as the topic value!
func parseTopicWithSetter(fields abi.Arguments, topics []common.Hash, setter func(abi.Argument, interface{})) error {
func parseTopicWithSetter(fields Arguments, topics []common.Hash, setter func(Argument, interface{})) error {
// Sanity check that the fields and topics match up
if len(fields) != len(topics) {
return errors.New("topic/field count mismatch")
@ -146,13 +145,13 @@ func parseTopicWithSetter(fields abi.Arguments, topics []common.Hash, setter fun
}
var reconstr interface{}
switch arg.Type.T {
case abi.TupleTy:
case TupleTy:
return errors.New("tuple type in topic reconstruction")
case abi.StringTy, abi.BytesTy, abi.SliceTy, abi.ArrayTy:
case StringTy, BytesTy, SliceTy, ArrayTy:
// Array types (including strings and bytes) have their keccak256 hashes stored in the topic, not a hash
// whose bytes can be decoded to the actual value, so the best we can do is retrieve that hash
reconstr = topics[i]
case abi.FunctionTy:
case FunctionTy:
if garbage := binary.BigEndian.Uint64(topics[i][0:8]); garbage != 0 {
return fmt.Errorf("bind: got improperly encoded function type, got %v", topics[i].Bytes())
}
@ -161,7 +160,7 @@ func parseTopicWithSetter(fields abi.Arguments, topics []common.Hash, setter fun
reconstr = tmp
default:
var err error
reconstr, err = abi.ToGoType(0, arg.Type, topics[i].Bytes())
reconstr, err = toGoType(0, arg.Type, topics[i].Bytes())
if err != nil {
return err
}
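
Note: with the move from bind into abi, topic construction and reconstruction are exported. An illustrative roundtrip for a single indexed int8 argument, mirroring the fixtures in the test file that follows:

package main

import (
    "fmt"
    "log"

    "github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
    int8Type, err := abi.NewType("int8", "", nil)
    if err != nil {
        log.Fatal(err)
    }
    fields := abi.Arguments{{Name: "int8Value", Type: int8Type, Indexed: true}}

    // Encode the indexed value into its topic representation...
    topics, err := abi.MakeTopics([]interface{}{int8(-1)})
    if err != nil {
        log.Fatal(err)
    }
    // ...and decode it back into a map keyed by the argument name.
    out := make(map[string]interface{})
    if err := abi.ParseTopicsIntoMap(out, fields, topics[0]); err != nil {
        log.Fatal(err)
    }
    fmt.Println(out["int8Value"]) // -1
}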


@ -14,14 +14,13 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bind
package abi
import (
"math/big"
"reflect"
"testing"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
@ -119,7 +118,7 @@ func TestMakeTopics(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := makeTopics(tt.args.query...)
got, err := MakeTopics(tt.args.query...)
if (err != nil) != tt.wantErr {
t.Errorf("makeTopics() error = %v, wantErr %v", err, tt.wantErr)
return
@ -135,7 +134,7 @@ type args struct {
createObj func() interface{}
resultObj func() interface{}
resultMap func() map[string]interface{}
fields abi.Arguments
fields Arguments
topics []common.Hash
}
@ -149,6 +148,14 @@ type int256Struct struct {
Int256Value *big.Int
}
type hashStruct struct {
HashValue common.Hash
}
type funcStruct struct {
FuncValue [24]byte
}
type topicTest struct {
name string
args args
@ -156,10 +163,12 @@ type topicTest struct {
}
func setupTopicsTests() []topicTest {
bytesType, _ := abi.NewType("bytes5", "", nil)
int8Type, _ := abi.NewType("int8", "", nil)
int256Type, _ := abi.NewType("int256", "", nil)
tupleType, _ := abi.NewType("tuple(int256,int8)", "", nil)
bytesType, _ := NewType("bytes5", "", nil)
int8Type, _ := NewType("int8", "", nil)
int256Type, _ := NewType("int256", "", nil)
tupleType, _ := NewType("tuple(int256,int8)", "", nil)
stringType, _ := NewType("string", "", nil)
funcType, _ := NewType("function", "", nil)
tests := []topicTest{
{
@ -170,7 +179,7 @@ func setupTopicsTests() []topicTest {
resultMap: func() map[string]interface{} {
return map[string]interface{}{"staticBytes": [5]byte{1, 2, 3, 4, 5}}
},
fields: abi.Arguments{abi.Argument{
fields: Arguments{Argument{
Name: "staticBytes",
Type: bytesType,
Indexed: true,
@ -189,7 +198,7 @@ func setupTopicsTests() []topicTest {
resultMap: func() map[string]interface{} {
return map[string]interface{}{"int8Value": int8(-1)}
},
fields: abi.Arguments{abi.Argument{
fields: Arguments{Argument{
Name: "int8Value",
Type: int8Type,
Indexed: true,
@ -209,7 +218,7 @@ func setupTopicsTests() []topicTest {
resultMap: func() map[string]interface{} {
return map[string]interface{}{"int256Value": big.NewInt(-1)}
},
fields: abi.Arguments{abi.Argument{
fields: Arguments{Argument{
Name: "int256Value",
Type: int256Type,
Indexed: true,
@ -222,12 +231,55 @@ func setupTopicsTests() []topicTest {
wantErr: false,
},
{
name: "tuple(int256, int8)",
name: "hash type",
args: args{
createObj: func() interface{} { return &hashStruct{} },
resultObj: func() interface{} { return &hashStruct{crypto.Keccak256Hash([]byte("stringtopic"))} },
resultMap: func() map[string]interface{} {
return map[string]interface{}{"hashValue": crypto.Keccak256Hash([]byte("stringtopic"))}
},
fields: Arguments{Argument{
Name: "hashValue",
Type: stringType,
Indexed: true,
}},
topics: []common.Hash{
crypto.Keccak256Hash([]byte("stringtopic")),
},
},
wantErr: false,
},
{
name: "function type",
args: args{
createObj: func() interface{} { return &funcStruct{} },
resultObj: func() interface{} {
return &funcStruct{[24]byte{255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}}
},
resultMap: func() map[string]interface{} {
return map[string]interface{}{"funcValue": [24]byte{255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}}
},
fields: Arguments{Argument{
Name: "funcValue",
Type: funcType,
Indexed: true,
}},
topics: []common.Hash{
{0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: false,
},
{
name: "error on topic/field count mismatch",
args: args{
createObj: func() interface{} { return nil },
resultObj: func() interface{} { return nil },
resultMap: func() map[string]interface{} { return make(map[string]interface{}) },
fields: abi.Arguments{abi.Argument{
fields: Arguments{Argument{
Name: "tupletype",
Type: tupleType,
Indexed: true,
@ -236,6 +288,59 @@ func setupTopicsTests() []topicTest {
},
wantErr: true,
},
{
name: "error on unindexed arguments",
args: args{
createObj: func() interface{} { return &int256Struct{} },
resultObj: func() interface{} { return &int256Struct{} },
resultMap: func() map[string]interface{} { return make(map[string]interface{}) },
fields: Arguments{Argument{
Name: "int256Value",
Type: int256Type,
Indexed: false,
}},
topics: []common.Hash{
{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: true,
},
{
name: "error on tuple in topic reconstruction",
args: args{
createObj: func() interface{} { return &tupleType },
resultObj: func() interface{} { return &tupleType },
resultMap: func() map[string]interface{} { return make(map[string]interface{}) },
fields: Arguments{Argument{
Name: "tupletype",
Type: tupleType,
Indexed: true,
}},
topics: []common.Hash{{0}},
},
wantErr: true,
},
{
name: "error on improper encoded function",
args: args{
createObj: func() interface{} { return &funcStruct{} },
resultObj: func() interface{} { return &funcStruct{} },
resultMap: func() map[string]interface{} {
return make(map[string]interface{})
},
fields: Arguments{Argument{
Name: "funcValue",
Type: funcType,
Indexed: true,
}},
topics: []common.Hash{
{0, 0, 0, 0, 0, 0, 0, 128, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: true,
},
}
return tests
@ -247,7 +352,7 @@ func TestParseTopics(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
createObj := tt.args.createObj()
if err := parseTopics(createObj, tt.args.fields, tt.args.topics); (err != nil) != tt.wantErr {
if err := ParseTopics(createObj, tt.args.fields, tt.args.topics); (err != nil) != tt.wantErr {
t.Errorf("parseTopics() error = %v, wantErr %v", err, tt.wantErr)
}
resultObj := tt.args.resultObj()
@ -264,7 +369,7 @@ func TestParseTopicsIntoMap(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
outMap := make(map[string]interface{})
if err := parseTopicsIntoMap(outMap, tt.args.fields, tt.args.topics); (err != nil) != tt.wantErr {
if err := ParseTopicsIntoMap(outMap, tt.args.fields, tt.args.topics); (err != nil) != tt.wantErr {
t.Errorf("parseTopicsIntoMap() error = %v, wantErr %v", err, tt.wantErr)
}
resultMap := tt.args.resultMap()


@ -23,6 +23,8 @@ import (
"regexp"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/common"
)
// Type enumerator
@ -42,20 +44,19 @@ const (
FunctionTy
)
// Type is the reflection of the supported argument type
// Type is the reflection of the supported argument type.
type Type struct {
Elem *Type
Kind reflect.Kind
Type reflect.Type
Size int
T byte // Our own type checking
stringKind string // holds the unparsed string for deriving signatures
// Tuple relative fields
TupleRawName string // Raw struct name defined in source code, may be empty.
TupleElems []*Type // Type information of all tuple fields
TupleRawNames []string // Raw field name of all tuple fields
TupleRawName string // Raw struct name defined in source code, may be empty.
TupleElems []*Type // Type information of all tuple fields
TupleRawNames []string // Raw field name of all tuple fields
TupleType reflect.Type // Underlying struct of the tuple
}
var (
@ -94,20 +95,16 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
if len(intz) == 0 {
// is a slice
typ.T = SliceTy
typ.Kind = reflect.Slice
typ.Elem = &embeddedType
typ.Type = reflect.SliceOf(embeddedType.Type)
typ.stringKind = embeddedType.stringKind + sliced
} else if len(intz) == 1 {
// is a array
// is an array
typ.T = ArrayTy
typ.Kind = reflect.Array
typ.Elem = &embeddedType
typ.Size, err = strconv.Atoi(intz[0])
if err != nil {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
typ.Type = reflect.ArrayOf(typ.Size, embeddedType.Type)
typ.stringKind = embeddedType.stringKind + sliced
} else {
return Type{}, fmt.Errorf("invalid formatting of array type")
@ -139,36 +136,24 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
// varType is the parsed abi type
switch varType := parsedType[1]; varType {
case "int":
typ.Kind, typ.Type = reflectIntKindAndType(false, varSize)
typ.Size = varSize
typ.T = IntTy
case "uint":
typ.Kind, typ.Type = reflectIntKindAndType(true, varSize)
typ.Size = varSize
typ.T = UintTy
case "bool":
typ.Kind = reflect.Bool
typ.T = BoolTy
typ.Type = reflect.TypeOf(bool(false))
case "address":
typ.Kind = reflect.Array
typ.Type = addressT
typ.Size = 20
typ.T = AddressTy
case "string":
typ.Kind = reflect.String
typ.Type = reflect.TypeOf("")
typ.T = StringTy
case "bytes":
if varSize == 0 {
typ.T = BytesTy
typ.Kind = reflect.Slice
typ.Type = reflect.SliceOf(reflect.TypeOf(byte(0)))
} else {
typ.T = FixedBytesTy
typ.Kind = reflect.Array
typ.Size = varSize
typ.Type = reflect.ArrayOf(varSize, reflect.TypeOf(byte(0)))
}
case "tuple":
var (
@ -178,17 +163,20 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
expression string // canonical parameter expression
)
expression += "("
overloadedNames := make(map[string]string)
for idx, c := range components {
cType, err := NewType(c.Type, c.InternalType, c.Components)
if err != nil {
return Type{}, err
}
if ToCamelCase(c.Name) == "" {
return Type{}, errors.New("abi: purely anonymous or underscored field is not supported")
fieldName, err := overloadedArgName(c.Name, overloadedNames)
if err != nil {
return Type{}, err
}
overloadedNames[fieldName] = fieldName
fields = append(fields, reflect.StructField{
Name: ToCamelCase(c.Name), // reflect.StructOf will panic for any exported field.
Type: cType.Type,
Name: fieldName, // reflect.StructOf will panic for any exported field.
Type: cType.GetType(),
Tag: reflect.StructTag("json:\"" + c.Name + "\""),
})
elems = append(elems, &cType)
@ -199,8 +187,8 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
}
}
expression += ")"
typ.Kind = reflect.Struct
typ.Type = reflect.StructOf(fields)
typ.TupleType = reflect.StructOf(fields)
typ.TupleElems = elems
typ.TupleRawNames = names
typ.T = TupleTy
@ -217,10 +205,8 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
}
case "function":
typ.Kind = reflect.Array
typ.T = FunctionTy
typ.Size = 24
typ.Type = reflect.ArrayOf(24, reflect.TypeOf(byte(0)))
default:
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
@ -228,7 +214,57 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
return
}
// String implements Stringer
// GetType returns the reflection type of the ABI type.
func (t Type) GetType() reflect.Type {
switch t.T {
case IntTy:
return reflectIntType(false, t.Size)
case UintTy:
return reflectIntType(true, t.Size)
case BoolTy:
return reflect.TypeOf(false)
case StringTy:
return reflect.TypeOf("")
case SliceTy:
return reflect.SliceOf(t.Elem.GetType())
case ArrayTy:
return reflect.ArrayOf(t.Size, t.Elem.GetType())
case TupleTy:
return t.TupleType
case AddressTy:
return reflect.TypeOf(common.Address{})
case FixedBytesTy:
return reflect.ArrayOf(t.Size, reflect.TypeOf(byte(0)))
case BytesTy:
return reflect.SliceOf(reflect.TypeOf(byte(0)))
case HashTy:
// hashtype currently not used
return reflect.ArrayOf(32, reflect.TypeOf(byte(0)))
case FixedPointTy:
// fixedpoint type currently not used
return reflect.ArrayOf(32, reflect.TypeOf(byte(0)))
case FunctionTy:
return reflect.ArrayOf(24, reflect.TypeOf(byte(0)))
default:
panic("Invalid type")
}
}
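
Note: since the reflect.Type is no longer cached on Type, GetType derives it on demand. An illustrative mapping, with the expected output in comments:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
    uint64Type, _ := abi.NewType("uint64", "", nil)
    nestedType, _ := abi.NewType("bytes32[2][]", "", nil)
    addrType, _ := abi.NewType("address", "", nil)

    fmt.Println(uint64Type.GetType()) // uint64
    fmt.Println(nestedType.GetType()) // [][2][32]uint8
    fmt.Println(addrType.GetType())   // common.Address
}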
func overloadedArgName(rawName string, names map[string]string) (string, error) {
fieldName := ToCamelCase(rawName)
if fieldName == "" {
return "", errors.New("abi: purely anonymous or underscored field is not supported")
}
// Handle overloaded fieldNames
_, ok := names[fieldName]
for idx := 0; ok; idx++ {
fieldName = fmt.Sprintf("%s%d", ToCamelCase(rawName), idx)
_, ok = names[fieldName]
}
return fieldName, nil
}
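
Note: when two tuple components camel-case to the same exported field name, overloadedArgName suffixes an index instead of letting reflect.StructOf panic on a duplicate field. An illustrative sketch:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
    // Both "a" and "A" map to the exported field name "A"; the second one is
    // therefore emitted as "A0".
    typ, err := abi.NewType("tuple", "", []abi.ArgumentMarshaling{
        {Name: "a", Type: "uint256"},
        {Name: "A", Type: "uint256"},
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(typ.GetType())
    // Roughly: struct { A *big.Int "json:\"a\""; A0 *big.Int "json:\"A\"" }
}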
// String implements Stringer.
func (t Type) String() (out string) {
return t.stringKind
}
@ -310,7 +346,7 @@ func (t Type) pack(v reflect.Value) ([]byte, error) {
return append(ret, tail...), nil
default:
return packElement(t, v), nil
return packElement(t, v)
}
}
@ -350,7 +386,7 @@ func isDynamicType(t Type) bool {
func getTypeSize(t Type) int {
if t.T == ArrayTy && !isDynamicType(*t.Elem) {
// Recursively calculate type size if it is a nested array
if t.Elem.T == ArrayTy {
if t.Elem.T == ArrayTy || t.Elem.T == TupleTy {
return t.Size * getTypeSize(*t.Elem)
}
return t.Size * 32


@ -36,58 +36,58 @@ func TestTypeRegexp(t *testing.T) {
components []ArgumentMarshaling
kind Type
}{
{"bool", nil, Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}},
{"bool[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool(nil)), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", nil, Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][][2]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][2][]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", nil, Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", nil, Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", nil, Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", nil, Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", nil, Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", nil, Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", nil, Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", nil, Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", nil, Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", nil, Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", nil, Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]*big.Int{}), Size: 2, Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", nil, Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}},
{"bytes[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]byte{}), Elem: &Type{Kind: reflect.Slice, Type: reflect.TypeOf([]byte{}), T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]byte{}), Elem: &Type{T: BytesTy, Type: reflect.TypeOf([]byte{}), Kind: reflect.Slice, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][32]byte{}), Elem: &Type{Kind: reflect.Array, Type: reflect.TypeOf([32]byte{}), T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][32]byte{}), Elem: &Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", nil, Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}},
{"string[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]string{}), Elem: &Type{Kind: reflect.String, Type: reflect.TypeOf(""), T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]string{}), Elem: &Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}, stringKind: "string[2]"}},
{"address", nil, Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
{"bool", nil, Type{T: BoolTy, stringKind: "bool"}},
{"bool[]", nil, Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", nil, Type{Size: 2, T: ArrayTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", nil, Type{T: SliceTy, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", nil, Type{T: SliceTy, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: SliceTy, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", nil, Type{T: SliceTy, Elem: &Type{T: SliceTy, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", nil, Type{T: SliceTy, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", nil, Type{Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", nil, Type{Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", nil, Type{Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", nil, Type{Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", nil, Type{Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", nil, Type{T: SliceTy, Elem: &Type{Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", nil, Type{T: SliceTy, Elem: &Type{Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", nil, Type{Size: 2, T: ArrayTy, Elem: &Type{Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", nil, Type{T: SliceTy, Elem: &Type{Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", nil, Type{T: SliceTy, Elem: &Type{Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", nil, Type{T: SliceTy, Elem: &Type{Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", nil, Type{Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", nil, Type{Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", nil, Type{Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", nil, Type{Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", nil, Type{Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", nil, Type{T: SliceTy, Elem: &Type{Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", nil, Type{T: SliceTy, Elem: &Type{Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", nil, Type{T: SliceTy, Elem: &Type{Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", nil, Type{T: SliceTy, Elem: &Type{Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", nil, Type{T: SliceTy, Elem: &Type{Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", nil, Type{T: FixedBytesTy, Size: 32, stringKind: "bytes32"}},
{"bytes[]", nil, Type{T: SliceTy, Elem: &Type{T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", nil, Type{T: SliceTy, Elem: &Type{T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", nil, Type{T: StringTy, stringKind: "string"}},
{"string[]", nil, Type{T: SliceTy, Elem: &Type{T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: StringTy, stringKind: "string"}, stringKind: "string[2]"}},
{"address", nil, Type{Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", nil, Type{T: SliceTy, Elem: &Type{Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
// TODO when fixed types are implemented properly
// {"fixed", nil, Type{}},
// {"fixed128x128", nil, Type{}},
@ -95,14 +95,14 @@ func TestTypeRegexp(t *testing.T) {
// {"fixed[2]", nil, Type{}},
// {"fixed128x128[]", nil, Type{}},
// {"fixed128x128[2]", nil, Type{}},
{"tuple", []ArgumentMarshaling{{Name: "a", Type: "int64"}}, Type{Kind: reflect.Struct, T: TupleTy, Type: reflect.TypeOf(struct {
{"tuple", []ArgumentMarshaling{{Name: "a", Type: "int64"}}, Type{T: TupleTy, TupleType: reflect.TypeOf(struct {
A int64 `json:"a"`
}{}), stringKind: "(int64)",
TupleElems: []*Type{{Kind: reflect.Int64, T: IntTy, Type: reflect.TypeOf(int64(0)), Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"a"}}},
{"tuple with long name", []ArgumentMarshaling{{Name: "aTypicalParamName", Type: "int64"}}, Type{Kind: reflect.Struct, T: TupleTy, Type: reflect.TypeOf(struct {
TupleElems: []*Type{{T: IntTy, Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"a"}}},
{"tuple with long name", []ArgumentMarshaling{{Name: "aTypicalParamName", Type: "int64"}}, Type{T: TupleTy, TupleType: reflect.TypeOf(struct {
ATypicalParamName int64 `json:"aTypicalParamName"`
}{}), stringKind: "(int64)",
TupleElems: []*Type{{Kind: reflect.Int64, T: IntTy, Type: reflect.TypeOf(int64(0)), Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"aTypicalParamName"}}},
TupleElems: []*Type{{T: IntTy, Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"aTypicalParamName"}}},
}
for _, tt := range tests {
@ -255,7 +255,7 @@ func TestTypeCheck(t *testing.T) {
{"bytes", nil, [2]byte{0, 1}, "abi: cannot use array as type slice as argument"},
{"bytes", nil, common.Hash{1}, "abi: cannot use array as type slice as argument"},
{"string", nil, "hello world", ""},
{"string", nil, string(""), ""},
{"string", nil, "", ""},
{"string", nil, []byte{}, "abi: cannot use slice as type string as argument"},
{"bytes32[]", nil, [][32]byte{{}}, ""},
{"function", nil, [24]byte{}, ""},
@ -306,3 +306,63 @@ func TestTypeCheck(t *testing.T) {
}
}
}
func TestInternalType(t *testing.T) {
components := []ArgumentMarshaling{{Name: "a", Type: "int64"}}
internalType := "struct a.b[]"
kind := Type{
T: TupleTy,
TupleType: reflect.TypeOf(struct {
A int64 `json:"a"`
}{}),
stringKind: "(int64)",
TupleRawName: "ab[]",
TupleElems: []*Type{{T: IntTy, Size: 64, stringKind: "int64"}},
TupleRawNames: []string{"a"},
}
blob := "tuple"
typ, err := NewType(blob, internalType, components)
if err != nil {
t.Errorf("type %q: failed to parse type string: %v", blob, err)
}
if !reflect.DeepEqual(typ, kind) {
t.Errorf("type %q: parsed type mismatch:\nGOT %s\nWANT %s ", blob, spew.Sdump(typeWithoutStringer(typ)), spew.Sdump(typeWithoutStringer(kind)))
}
}
func TestGetTypeSize(t *testing.T) {
var testCases = []struct {
typ string
components []ArgumentMarshaling
typSize int
}{
// simple array
{"uint256[2]", nil, 32 * 2},
{"address[3]", nil, 32 * 3},
{"bytes32[4]", nil, 32 * 4},
// array array
{"uint256[2][3][4]", nil, 32 * (2 * 3 * 4)},
// array tuple
{"tuple[2]", []ArgumentMarshaling{{Name: "x", Type: "bytes32"}, {Name: "y", Type: "bytes32"}}, (32 * 2) * 2},
// simple tuple
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "uint256"}, {Name: "y", Type: "uint256"}}, 32 * 2},
// tuple array
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "bytes32[2]"}}, 32 * 2},
// tuple tuple
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "tuple", Components: []ArgumentMarshaling{{Name: "x", Type: "bytes32"}}}}, 32},
{"tuple", []ArgumentMarshaling{{Name: "x", Type: "tuple", Components: []ArgumentMarshaling{{Name: "x", Type: "bytes32[2]"}, {Name: "y", Type: "uint256"}}}}, 32 * (2 + 1)},
}
for i, data := range testCases {
typ, err := NewType(data.typ, "", data.components)
if err != nil {
t.Errorf("type %q: failed to parse type string: %v", data.typ, err)
}
result := getTypeSize(typ)
if result != data.typSize {
t.Errorf("case %d type %q: get type size error: actual: %d expected: %d", i, data.typ, result, data.typSize)
}
}
}


@ -26,40 +26,44 @@ import (
)
var (
// MaxUint256 is the maximum value that can be represented by a uint256
// MaxUint256 is the maximum value that can be represented by a uint256.
MaxUint256 = new(big.Int).Sub(new(big.Int).Lsh(common.Big1, 256), common.Big1)
// MaxInt256 is the maximum value that can be represented by a int256
// MaxInt256 is the maximum value that can be represented by a int256.
MaxInt256 = new(big.Int).Sub(new(big.Int).Lsh(common.Big1, 255), common.Big1)
)
// ReadInteger reads the integer based on its kind and returns the appropriate value
func ReadInteger(typ byte, kind reflect.Kind, b []byte) interface{} {
switch kind {
case reflect.Uint8:
return b[len(b)-1]
case reflect.Uint16:
return binary.BigEndian.Uint16(b[len(b)-2:])
case reflect.Uint32:
return binary.BigEndian.Uint32(b[len(b)-4:])
case reflect.Uint64:
return binary.BigEndian.Uint64(b[len(b)-8:])
case reflect.Int8:
// ReadInteger reads the integer based on its kind and returns the appropriate value.
func ReadInteger(typ Type, b []byte) interface{} {
if typ.T == UintTy {
switch typ.Size {
case 8:
return b[len(b)-1]
case 16:
return binary.BigEndian.Uint16(b[len(b)-2:])
case 32:
return binary.BigEndian.Uint32(b[len(b)-4:])
case 64:
return binary.BigEndian.Uint64(b[len(b)-8:])
default:
// the only case left for unsigned integer is uint256.
return new(big.Int).SetBytes(b)
}
}
switch typ.Size {
case 8:
return int8(b[len(b)-1])
case reflect.Int16:
case 16:
return int16(binary.BigEndian.Uint16(b[len(b)-2:]))
case reflect.Int32:
case 32:
return int32(binary.BigEndian.Uint32(b[len(b)-4:]))
case reflect.Int64:
case 64:
return int64(binary.BigEndian.Uint64(b[len(b)-8:]))
default:
// the only case left for integer is int256/uint256.
ret := new(big.Int).SetBytes(b)
if typ == UintTy {
return ret
}
// the only case left for integer is int256
// big.SetBytes can't tell if a number is negative or positive in itself.
// On EVM, if the returned number > max int256, it is negative.
// A number is > max int256 if the bit at position 255 is set.
ret := new(big.Int).SetBytes(b)
if ret.Bit(255) == 1 {
ret.Add(MaxUint256, new(big.Int).Neg(ret))
ret.Add(ret, common.Big1)
@ -69,7 +73,7 @@ func ReadInteger(typ byte, kind reflect.Kind, b []byte) interface{} {
}
}
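The comment block above describes the two's-complement reinterpretation: big.Int.SetBytes always yields a non-negative value, so a word with bit 255 set has to be wrapped back into the negative int256 range. A standalone sketch of that adjustment, mirroring the MaxUint256 arithmetic shown in the diff (illustrative, not the package code itself):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	maxUint256 := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(1))

	// 32 bytes of 0xff encode -1 as a two's-complement int256.
	word := make([]byte, 32)
	for i := range word {
		word[i] = 0xff
	}

	ret := new(big.Int).SetBytes(word) // SetBytes can only produce a non-negative value
	if ret.Bit(255) == 1 {
		// value > max int256, so wrap it into the negative range:
		// ret = -(maxUint256 - ret + 1)
		ret.Add(maxUint256, new(big.Int).Neg(ret))
		ret.Add(ret, big.NewInt(1))
		ret.Neg(ret)
	}
	fmt.Println(ret) // -1
}
```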
// reads a bool
// readBool reads a bool.
func readBool(word []byte) (bool, error) {
for _, b := range word[:31] {
if b != 0 {
@ -87,7 +91,8 @@ func readBool(word []byte) (bool, error) {
}
// A function type is simply the address with the function selection signature at the end.
// This enforces that standard by always presenting it as a 24-array (address + sig = 24 bytes)
//
// readFunctionType enforces that standard by always presenting it as a 24-array (address + sig = 24 bytes)
func readFunctionType(t Type, word []byte) (funcTy [24]byte, err error) {
if t.T != FunctionTy {
return [24]byte{}, fmt.Errorf("abi: invalid type in call to make function type byte array")
@ -100,20 +105,20 @@ func readFunctionType(t Type, word []byte) (funcTy [24]byte, err error) {
return
}
// ReadFixedBytes uses reflection to create a fixed array to be read from
// ReadFixedBytes uses reflection to create a fixed array to be read from.
func ReadFixedBytes(t Type, word []byte) (interface{}, error) {
if t.T != FixedBytesTy {
return nil, fmt.Errorf("abi: invalid type in call to make fixed byte array")
}
// convert
array := reflect.New(t.Type).Elem()
array := reflect.New(t.GetType()).Elem()
reflect.Copy(array, reflect.ValueOf(word[0:t.Size]))
return array.Interface(), nil
}
// iteratively unpack elements
// forEachUnpack iteratively unpacks elements.
func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error) {
if size < 0 {
return nil, fmt.Errorf("cannot marshal input to array, size is negative (%d)", size)
@ -127,10 +132,10 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
if t.T == SliceTy {
// declare our slice
refSlice = reflect.MakeSlice(t.Type, size, size)
refSlice = reflect.MakeSlice(t.GetType(), size, size)
} else if t.T == ArrayTy {
// declare our array
refSlice = reflect.New(t.Type).Elem()
refSlice = reflect.New(t.GetType()).Elem()
} else {
return nil, fmt.Errorf("abi: invalid type in array/slice unpacking stage")
}
@ -140,7 +145,7 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
elemSize := getTypeSize(*t.Elem)
for i, j := start, 0; j < size; i, j = i+elemSize, j+1 {
inter, err := ToGoType(i, *t.Elem, output)
inter, err := toGoType(i, *t.Elem, output)
if err != nil {
return nil, err
}
@ -154,10 +159,10 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
}
func forTupleUnpack(t Type, output []byte) (interface{}, error) {
retval := reflect.New(t.Type).Elem()
retval := reflect.New(t.GetType()).Elem()
virtualArgs := 0
for index, elem := range t.TupleElems {
marshalledValue, err := ToGoType((index+virtualArgs)*32, *elem, output)
marshalledValue, err := toGoType((index+virtualArgs)*32, *elem, output)
if elem.T == ArrayTy && !isDynamicType(*elem) {
// If we have a static array, like [3]uint256, these are coded as
// just like uint256,uint256,uint256.
@ -183,9 +188,9 @@ func forTupleUnpack(t Type, output []byte) (interface{}, error) {
return retval.Interface(), nil
}
// ToGoType parses the output bytes and recursively assigns the value of these bytes
// toGoType parses the output bytes and recursively assigns the value of these bytes
// into a go type with accordance with the ABI spec.
func ToGoType(index int, t Type, output []byte) (interface{}, error) {
func toGoType(index int, t Type, output []byte) (interface{}, error) {
if index+32 > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %d require %d", len(output), index+32)
}
@ -214,21 +219,23 @@ func ToGoType(index int, t Type, output []byte) (interface{}, error) {
return nil, err
}
return forTupleUnpack(t, output[begin:])
} else {
return forTupleUnpack(t, output[index:])
}
return forTupleUnpack(t, output[index:])
case SliceTy:
return forEachUnpack(t, output[begin:], 0, length)
case ArrayTy:
if isDynamicType(*t.Elem) {
offset := int64(binary.BigEndian.Uint64(returnOutput[len(returnOutput)-8:]))
offset := binary.BigEndian.Uint64(returnOutput[len(returnOutput)-8:])
if offset > uint64(len(output)) {
return nil, fmt.Errorf("abi: toGoType offset greater than output length: offset: %d, len(output): %d", offset, len(output))
}
return forEachUnpack(t, output[offset:], 0, t.Size)
}
return forEachUnpack(t, output[index:], 0, t.Size)
case StringTy: // variable arrays are written at the end of the return bytes
return string(output[begin : begin+length]), nil
case IntTy, UintTy:
return ReadInteger(t.T, t.Kind, returnOutput), nil
return ReadInteger(t, returnOutput), nil
case BoolTy:
return readBool(returnOutput)
case AddressTy:
@ -246,7 +253,7 @@ func ToGoType(index int, t Type, output []byte) (interface{}, error) {
}
}
// interprets a 32 byte slice as an offset and then determines which indice to look to decode the type.
// lengthPrefixPointsTo interprets a 32 byte slice as an offset and then determines which indices to look to decode the type.
func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err error) {
bigOffsetEnd := big.NewInt(0).SetBytes(output[index : index+32])
bigOffsetEnd.Add(bigOffsetEnd, common.Big32)
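lengthPrefixPointsTo implements the head/tail layout for dynamic types: the 32-byte head word is an offset into the output, and the word at that offset is the element count or byte length. A simplified sketch of that two-step lookup, with the overflow and bounds checks of the real function omitted:

```go
package main

import (
	"fmt"
	"math/big"
)

// lengthPrefix is a simplified illustration of the lookup described above: the
// head word at index is an offset, and the word at that offset is the length.
// The real lengthPrefixPointsTo also guards against overflow and offsets that
// point past the end of output.
func lengthPrefix(index int, output []byte) (start, length int) {
	offset := int(new(big.Int).SetBytes(output[index : index+32]).Int64())
	length = int(new(big.Int).SetBytes(output[offset : offset+32]).Int64())
	start = offset + 32 // the element data begins right after the length word
	return start, length
}

func main() {
	// "hello" encoded as a dynamic string: offset 0x20, length 5, padded data.
	out := make([]byte, 96)
	out[31] = 0x20 // offset word
	out[63] = 0x05 // length word
	copy(out[64:], "hello")

	start, length := lengthPrefix(0, out)
	fmt.Println(string(out[start : start+length])) // hello
}
```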

View File

@ -30,6 +30,32 @@ import (
"github.com/stretchr/testify/require"
)
// TestUnpack tests the general pack/unpack tests in packing_test.go
func TestUnpack(t *testing.T) {
for i, test := range packUnpackTests {
t.Run(strconv.Itoa(i)+" "+test.def, func(t *testing.T) {
//Unpack
def := fmt.Sprintf(`[{ "name" : "method", "type": "function", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)
}
encb, err := hex.DecodeString(test.packed)
if err != nil {
t.Fatalf("invalid hex %s: %v", test.packed, err)
}
out, err := abi.Unpack("method", encb)
if err != nil {
t.Errorf("test %d (%v) failed: %v", i, test.def, err)
return
}
if !reflect.DeepEqual(test.unpacked, ConvertType(out[0], test.unpacked)) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, test.unpacked, out[0])
}
})
}
}
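The new TestUnpack relies on the reworked API: Unpack returns the decoded outputs as a []interface{} (instead of writing into a pointer the way UnpackIntoInterface does), and ConvertType coerces an element into a concrete target type. A minimal usage sketch against a single uint256 output; the ABI definition and values are illustrative:

```go
package main

import (
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// One view function returning a single uint256 (illustrative definition).
	def := `[{"name":"get","type":"function","outputs":[{"type":"uint256"}]}]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}

	ret := common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")

	// Unpack returns the decoded outputs as a []interface{} ...
	out, err := parsed.Unpack("get", ret)
	if err != nil {
		panic(err)
	}
	// ... and ConvertType coerces an element into a concrete target type.
	value := abi.ConvertType(out[0], new(big.Int)).(*big.Int)
	fmt.Println(value) // 1
}
```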
type unpackTest struct {
def string // ABI definition JSON
enc string // evm return data
@ -52,16 +78,6 @@ func (test unpackTest) checkError(err error) error {
var unpackTests = []unpackTest{
// Bools
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: true,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000000",
want: false,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000001000000000001",
@ -75,11 +91,6 @@ var unpackTests = []unpackTest{
err: "abi: improperly encoded boolean value",
},
// Integers
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint32(1),
},
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
@ -92,16 +103,6 @@ var unpackTests = []unpackTest{
want: uint16(0),
err: "abi: cannot unmarshal *big.Int in to uint16",
},
{
def: `[{"type": "uint17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int32(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
@ -114,38 +115,10 @@ var unpackTests = []unpackTest{
want: int16(0),
err: "abi: cannot unmarshal *big.Int in to int16",
},
{
def: `[{"type": "int17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int256"}]`,
enc: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
want: big.NewInt(-1),
},
// Address
{
def: `[{"type": "address"}]`,
enc: "0000000000000000000000000100000000000000000000000000000000000000",
want: common.Address{1},
},
// Bytes
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{},
err: "abi: cannot unmarshal []uint8 in to [32]uint8",
want: [32]byte{1},
},
{
def: `[{"type": "bytes32"}]`,
@ -153,245 +126,13 @@ var unpackTests = []unpackTest{
want: []byte(nil),
err: "abi: cannot unmarshal [32]uint8 in to []uint8",
},
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
// Functions
{
def: `[{"type": "function"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [24]byte{1},
},
// Slice and Array
{
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint8{1, 2},
},
{
def: `[{"type": "uint8[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000",
want: []uint8{},
},
{
def: `[{"type": "uint256[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000",
want: []*big.Int{},
},
{
def: `[{"type": "uint8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint8{1, 2},
},
// multi dimensional, if these pass, all types that don't require length prefix should pass
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000",
want: [][]uint8{},
},
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [][]uint8{{1, 2}, {1, 2, 3}},
},
{
def: `[{"type": "uint8[2][2]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
want: [2][]uint8{{}, {}},
},
{
def: `[{"type": "uint8[][2]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001",
want: [2][]uint8{{1}, {1}},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000000",
want: [][2]uint8{},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint16{1, 2},
},
{
def: `[{"type": "uint16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint16{1, 2},
},
{
def: `[{"type": "uint32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint32{1, 2},
},
{
def: `[{"type": "uint32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint32{1, 2},
},
{
def: `[{"type": "uint32[2][3][4]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018",
want: [4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
},
{
def: `[{"type": "uint64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint64{1, 2},
},
{
def: `[{"type": "uint64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint64{1, 2},
},
{
def: `[{"type": "uint256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "uint256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
{
def: `[{"type": "string[4]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000000548656c6c6f0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005576f726c64000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b476f2d657468657265756d0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000",
want: [4]string{"Hello", "World", "Go-ethereum", "Ethereum"},
},
{
def: `[{"type": "string[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b676f2d657468657265756d000000000000000000000000000000000000000000",
want: []string{"Ethereum", "go-ethereum"},
},
{
def: `[{"type": "bytes[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000003f0f0f000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003f0f0f00000000000000000000000000000000000000000000000000000000000",
want: [][]byte{{0xf0, 0xf0, 0xf0}, {0xf0, 0xf0, 0xf0}},
},
{
def: `[{"type": "uint256[2][][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000c8000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000003e80000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000c8000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000003e8",
want: [][][2]*big.Int{{{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}, {{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}},
},
{
def: `[{"type": "int8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int8{1, 2},
},
{
def: `[{"type": "int8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int8{1, 2},
},
{
def: `[{"type": "int16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int16{1, 2},
},
{
def: `[{"type": "int16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int16{1, 2},
},
{
def: `[{"type": "int32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int32{1, 2},
},
{
def: `[{"type": "int32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int32{1, 2},
},
{
def: `[{"type": "int64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int64{1, 2},
},
{
def: `[{"type": "int64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int64{1, 2},
},
{
def: `[{"type": "int256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "int256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
// struct outputs
{
def: `[{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"int_one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int__one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one_","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one","type":"int256"}, {"name":"intone","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
Intone *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"___","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
Intone *big.Int
}{},
err: "abi: purely underscored output cannot unpack to struct",
}{IntOne: big.NewInt(1)},
},
{
def: `[{"name":"int_one","type":"int256"},{"name":"IntOne","type":"int256"}]`,
@ -438,11 +179,36 @@ var unpackTests = []unpackTest{
}{},
err: "abi: purely underscored output cannot unpack to struct",
},
// Make sure only the first argument is consumed
{
def: `[{"name":"int_one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int__one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one_","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
}
func TestUnpack(t *testing.T) {
// TestLocalUnpackTests runs tests specially designed only for unpacking.
// All test cases that can be used to test packing and unpacking should move to packing_test.go
func TestLocalUnpackTests(t *testing.T) {
for i, test := range unpackTests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
//Unpack
def := fmt.Sprintf(`[{ "name" : "method", "type": "function", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
@ -453,7 +219,7 @@ func TestUnpack(t *testing.T) {
t.Fatalf("invalid hex %s: %v", test.enc, err)
}
outptr := reflect.New(reflect.TypeOf(test.want))
err = abi.Unpack(outptr.Interface(), "method", encb)
err = abi.UnpackIntoInterface(outptr.Interface(), "method", encb)
if err := test.checkError(err); err != nil {
t.Errorf("test %d (%v) failed: %v", i, test.def, err)
return
@ -466,7 +232,7 @@ func TestUnpack(t *testing.T) {
}
}
func TestUnpackSetDynamicArrayOutput(t *testing.T) {
func TestUnpackIntoInterfaceSetDynamicArrayOutput(t *testing.T) {
abi, err := JSON(strings.NewReader(`[{"constant":true,"inputs":[],"name":"testDynamicFixedBytes15","outputs":[{"name":"","type":"bytes15[]"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"testDynamicFixedBytes32","outputs":[{"name":"","type":"bytes32[]"}],"payable":false,"stateMutability":"view","type":"function"}]`))
if err != nil {
t.Fatal(err)
@ -481,7 +247,7 @@ func TestUnpackSetDynamicArrayOutput(t *testing.T) {
)
// test 32
err = abi.Unpack(&out32, "testDynamicFixedBytes32", marshalledReturn32)
err = abi.UnpackIntoInterface(&out32, "testDynamicFixedBytes32", marshalledReturn32)
if err != nil {
t.Fatal(err)
}
@ -498,7 +264,7 @@ func TestUnpackSetDynamicArrayOutput(t *testing.T) {
}
// test 15
err = abi.Unpack(&out15, "testDynamicFixedBytes32", marshalledReturn15)
err = abi.UnpackIntoInterface(&out15, "testDynamicFixedBytes32", marshalledReturn15)
if err != nil {
t.Fatal(err)
}
@ -592,14 +358,14 @@ func TestMethodMultiReturn(t *testing.T) {
}, {
&[]interface{}{new(int)},
&[]interface{}{},
"abi: insufficient number of elements in the list/array for unpack, want 2, got 1",
"abi: insufficient number of arguments for unpack, want 2, got 1",
"Can not unpack into a slice with wrong types",
}}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
require := require.New(t)
err := abi.Unpack(tc.dest, "multi", data)
err := abi.UnpackIntoInterface(tc.dest, "multi", data)
if tc.error == "" {
require.Nil(err, "Should be able to unpack method outputs.")
require.Equal(tc.expected, tc.dest)
@ -622,7 +388,7 @@ func TestMultiReturnWithArray(t *testing.T) {
ret1, ret1Exp := new([3]uint64), [3]uint64{9, 9, 9}
ret2, ret2Exp := new(uint64), uint64(8)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@ -646,7 +412,7 @@ func TestMultiReturnWithStringArray(t *testing.T) {
ret2, ret2Exp := new(common.Address), common.HexToAddress("ab1257528b3782fb40d7ed5f72e624b744dffb2f")
ret3, ret3Exp := new([2]string), [2]string{"Ethereum", "Hello, Ethereum!"}
ret4, ret4Exp := new(bool), false
if err := abi.Unpack(&[]interface{}{ret1, ret2, ret3, ret4}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2, ret3, ret4}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@ -684,7 +450,7 @@ func TestMultiReturnWithStringSlice(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000065")) // output[1][1] value
ret1, ret1Exp := new([]string), []string{"ethereum", "go-ethereum"}
ret2, ret2Exp := new([]*big.Int), []*big.Int{big.NewInt(100), big.NewInt(101)}
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@ -724,7 +490,7 @@ func TestMultiReturnWithDeeplyNestedArray(t *testing.T) {
{{0x411, 0x412, 0x413}, {0x421, 0x422, 0x423}},
}
ret2, ret2Exp := new(uint64), uint64(0x9876)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
if err := abi.UnpackIntoInterface(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
@ -763,7 +529,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000a"))
buff.Write(common.Hex2Bytes("0102000000000000000000000000000000000000000000000000000000000000"))
err = abi.Unpack(&mixedBytes, "mixedBytes", buff.Bytes())
err = abi.UnpackIntoInterface(&mixedBytes, "mixedBytes", buff.Bytes())
if err != nil {
t.Error(err)
} else {
@ -778,7 +544,7 @@ func TestUnmarshal(t *testing.T) {
// marshal int
var Int *big.Int
err = abi.Unpack(&Int, "int", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
err = abi.UnpackIntoInterface(&Int, "int", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
@ -789,7 +555,7 @@ func TestUnmarshal(t *testing.T) {
// marshal bool
var Bool bool
err = abi.Unpack(&Bool, "bool", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
err = abi.UnpackIntoInterface(&Bool, "bool", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
@ -806,7 +572,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(bytesOut)
var Bytes []byte
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@ -822,7 +588,7 @@ func TestUnmarshal(t *testing.T) {
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@ -838,7 +604,7 @@ func TestUnmarshal(t *testing.T) {
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@ -848,7 +614,7 @@ func TestUnmarshal(t *testing.T) {
}
// marshal dynamic bytes output empty
err = abi.Unpack(&Bytes, "bytes", nil)
err = abi.UnpackIntoInterface(&Bytes, "bytes", nil)
if err == nil {
t.Error("expected error")
}
@ -859,7 +625,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000005"))
buff.Write(common.RightPadBytes([]byte("hello"), 32))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
@ -873,7 +639,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.RightPadBytes([]byte("hello"), 32))
var hash common.Hash
err = abi.Unpack(&hash, "fixed", buff.Bytes())
err = abi.UnpackIntoInterface(&hash, "fixed", buff.Bytes())
if err != nil {
t.Error(err)
}
@ -886,12 +652,12 @@ func TestUnmarshal(t *testing.T) {
// marshal error
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
err = abi.UnpackIntoInterface(&Bytes, "bytes", buff.Bytes())
if err == nil {
t.Error("expected error")
}
err = abi.Unpack(&Bytes, "multi", make([]byte, 64))
err = abi.UnpackIntoInterface(&Bytes, "multi", make([]byte, 64))
if err == nil {
t.Error("expected error")
}
@ -902,7 +668,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000003"))
// marshal int array
var intArray [3]*big.Int
err = abi.Unpack(&intArray, "intArraySingle", buff.Bytes())
err = abi.UnpackIntoInterface(&intArray, "intArraySingle", buff.Bytes())
if err != nil {
t.Error(err)
}
@ -923,7 +689,7 @@ func TestUnmarshal(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000100000000000000000000000000000000000000"))
var outAddr []common.Address
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
err = abi.UnpackIntoInterface(&outAddr, "addressSliceSingle", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
@ -950,7 +716,7 @@ func TestUnmarshal(t *testing.T) {
A []common.Address
B []common.Address
}
err = abi.Unpack(&outAddrStruct, "addressSliceDouble", buff.Bytes())
err = abi.UnpackIntoInterface(&outAddrStruct, "addressSliceDouble", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
@ -978,7 +744,7 @@ func TestUnmarshal(t *testing.T) {
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000100"))
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
err = abi.UnpackIntoInterface(&outAddr, "addressSliceSingle", buff.Bytes())
if err == nil {
t.Fatal("expected error:", err)
}
@ -1001,7 +767,7 @@ func TestUnpackTuple(t *testing.T) {
B *big.Int
}{new(big.Int), new(big.Int)}
err = abi.Unpack(&v, "tuple", buff.Bytes())
err = abi.UnpackIntoInterface(&v, "tuple", buff.Bytes())
if err != nil {
t.Error(err)
} else {
@ -1073,7 +839,7 @@ func TestUnpackTuple(t *testing.T) {
A: big.NewInt(1),
}
err = abi.Unpack(&ret, "tuple", buff.Bytes())
err = abi.UnpackIntoInterface(&ret, "tuple", buff.Bytes())
if err != nil {
t.Error(err)
}

View File

@ -21,7 +21,7 @@ import (
"fmt"
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/event"
@ -88,7 +88,7 @@ type Wallet interface {
// to discover non zero accounts and automatically add them to list of tracked
// accounts.
//
// Note, self derivaton will increment the last component of the specified path
// Note, self derivation will increment the last component of the specified path
// opposed to descending into a child path to allow discovering accounts starting
// from non zero components.
//
@ -129,6 +129,8 @@ type Wallet interface {
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignHashWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
//
// This method should return the signature in 'canonical' format, with v 0 or 1
SignText(account Account, text []byte) ([]byte, error)
// SignTextWithPassphrase is identical to Signtext, but also takes a password

View File

@ -27,7 +27,6 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/signer/core"
@ -131,6 +130,12 @@ func (api *ExternalSigner) Accounts() []accounts.Account {
func (api *ExternalSigner) Contains(account accounts.Account) bool {
api.cacheMu.RLock()
defer api.cacheMu.RUnlock()
if api.cache == nil {
// If we haven't already fetched the accounts, it's time to do so now
api.cacheMu.RUnlock()
api.Accounts()
api.cacheMu.RLock()
}
for _, a := range api.cache {
if a.Address == account.Address && (account.URL == (accounts.URL{}) || account.URL == api.URL()) {
return true
@ -161,7 +166,7 @@ func (api *ExternalSigner) SignData(account accounts.Account, mimeType string, d
hexutil.Encode(data)); err != nil {
return nil, err
}
// If V is on 27/28-form, convert to to 0/1 for Clique
// If V is on 27/28-form, convert to 0/1 for Clique
if mimeType == accounts.MimetypeClique && (res[64] == 27 || res[64] == 28) {
res[64] -= 27 // Transform V from 27/28 to 0/1 for Clique use
}
@ -169,19 +174,29 @@ func (api *ExternalSigner) SignData(account accounts.Account, mimeType string, d
}
func (api *ExternalSigner) SignText(account accounts.Account, text []byte) ([]byte, error) {
var res hexutil.Bytes
var signature hexutil.Bytes
var signAddress = common.NewMixedcaseAddress(account.Address)
if err := api.client.Call(&res, "account_signData",
if err := api.client.Call(&signature, "account_signData",
accounts.MimetypeTextPlain,
&signAddress, // Need to use the pointer here, because of how MarshalJSON is defined
hexutil.Encode(text)); err != nil {
return nil, err
}
return res, nil
if signature[64] == 27 || signature[64] == 28 {
// If clef is used as a backend, it may already have transformed
// the signature to ethereum-type signature.
signature[64] -= 27 // Transform V from Ethereum-legacy to 0/1
}
return signature, nil
}
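The added branch normalizes the recovery byte: clef may hand back V as 27/28 (Ethereum-legacy), while the wallet contract now expects the canonical 0/1 form. The same adjustment restated on its own as a small sketch:

```go
package main

import "fmt"

// normalizeV restates the adjustment above: given a 65-byte [R || S || V]
// signature, fold a legacy 27/28 recovery id back to canonical 0/1.
func normalizeV(sig []byte) []byte {
	if len(sig) == 65 && (sig[64] == 27 || sig[64] == 28) {
		sig[64] -= 27
	}
	return sig
}

func main() {
	sig := make([]byte, 65)
	sig[64] = 28
	fmt.Println(normalizeV(sig)[64]) // 1
}
```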
// signTransactionResult represents the signing result returned by clef.
type signTransactionResult struct {
Raw hexutil.Bytes `json:"raw"`
Tx *types.Transaction `json:"tx"`
}
func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
res := ethapi.SignTransactionResult{}
data := hexutil.Bytes(tx.Data())
var to *common.MixedcaseAddress
if tx.To() != nil {
@ -197,6 +212,7 @@ func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transactio
To: to,
From: common.NewMixedcaseAddress(account.Address),
}
var res signTransactionResult
if err := api.client.Call(&res, "account_signTransaction", args); err != nil {
return nil, err
}

View File

@ -150,3 +150,31 @@ func (path *DerivationPath) UnmarshalJSON(b []byte) error {
*path, err = ParseDerivationPath(dp)
return err
}
// DefaultIterator creates a BIP-32 path iterator, which progresses by increasing the last component:
// i.e. m/44'/60'/0'/0/0, m/44'/60'/0'/0/1, m/44'/60'/0'/0/2, ... m/44'/60'/0'/0/N.
func DefaultIterator(base DerivationPath) func() DerivationPath {
path := make(DerivationPath, len(base))
copy(path[:], base[:])
// Set it back by one, so the first call gives the first result
path[len(path)-1]--
return func() DerivationPath {
path[len(path)-1]++
return path
}
}
// LedgerLiveIterator creates a bip44 path iterator for Ledger Live.
// Ledger Live increments the third component rather than the fifth component
// i.e. m/44'/60'/0'/0/0, m/44'/60'/1'/0/0, m/44'/60'/2'/0/0, ... m/44'/60'/N'/0/0.
func LedgerLiveIterator(base DerivationPath) func() DerivationPath {
path := make(DerivationPath, len(base))
copy(path[:], base[:])
// Set it back by one, so the first call gives the first result
path[2]--
return func() DerivationPath {
// Ledger Live paths iterate on the third component
path[2]++
return path
}
}
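Note that both iterators mutate and return the same backing slice on every call, so a caller that wants to retain earlier paths has to copy them. A hedged usage sketch of the iterator API added above:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
)

func main() {
	next := accounts.DefaultIterator(accounts.DefaultBaseDerivationPath)

	var paths []accounts.DerivationPath
	for i := 0; i < 3; i++ {
		p := next()
		// The iterator reuses its backing array, so copy before storing.
		paths = append(paths, append(accounts.DerivationPath(nil), p...))
	}
	for _, p := range paths {
		fmt.Println(p) // m/44'/60'/0'/0/0, .../1, .../2
	}
}
```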

View File

@ -17,6 +17,7 @@
package accounts
import (
"fmt"
"reflect"
"testing"
)
@ -61,7 +62,7 @@ func TestHDPathParsing(t *testing.T) {
// Weird inputs just to ensure they work
{" m / 44 '\n/\n 60 \n\n\t' /\n0 ' /\t\t 0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
// Invaid derivation paths
// Invalid derivation paths
{"", nil}, // Empty relative derivation path
{"m", nil}, // Empty absolute derivation path
{"m/", nil}, // Missing last derivation component
@ -77,3 +78,41 @@ func TestHDPathParsing(t *testing.T) {
}
}
}
func testDerive(t *testing.T, next func() DerivationPath, expected []string) {
t.Helper()
for i, want := range expected {
if have := next(); fmt.Sprintf("%v", have) != want {
t.Errorf("step %d, have %v, want %v", i, have, want)
}
}
}
func TestHdPathIteration(t *testing.T) {
testDerive(t, DefaultIterator(DefaultBaseDerivationPath),
[]string{
"m/44'/60'/0'/0/0", "m/44'/60'/0'/0/1",
"m/44'/60'/0'/0/2", "m/44'/60'/0'/0/3",
"m/44'/60'/0'/0/4", "m/44'/60'/0'/0/5",
"m/44'/60'/0'/0/6", "m/44'/60'/0'/0/7",
"m/44'/60'/0'/0/8", "m/44'/60'/0'/0/9",
})
testDerive(t, DefaultIterator(LegacyLedgerBaseDerivationPath),
[]string{
"m/44'/60'/0'/0", "m/44'/60'/0'/1",
"m/44'/60'/0'/2", "m/44'/60'/0'/3",
"m/44'/60'/0'/4", "m/44'/60'/0'/5",
"m/44'/60'/0'/6", "m/44'/60'/0'/7",
"m/44'/60'/0'/8", "m/44'/60'/0'/9",
})
testDerive(t, LedgerLiveIterator(DefaultBaseDerivationPath),
[]string{
"m/44'/60'/0'/0/0", "m/44'/60'/1'/0/0",
"m/44'/60'/2'/0/0", "m/44'/60'/3'/0/0",
"m/44'/60'/4'/0/0", "m/44'/60'/5'/0/0",
"m/44'/60'/6'/0/0", "m/44'/60'/7'/0/0",
"m/44'/60'/8'/0/0", "m/44'/60'/9'/0/0",
})
}

View File

@ -262,7 +262,7 @@ func (ac *accountCache) scanAccounts() error {
switch {
case err != nil:
log.Debug("Failed to decode keystore key", "path", path, "err", err)
case (addr == common.Address{}):
case addr == common.Address{}:
log.Debug("Failed to decode keystore key", "path", path, "err", "missing or zero address")
default:
return &accounts.Account{

View File

@ -32,7 +32,7 @@ import (
type fileCache struct {
all mapset.Set // Set of all files from the keystore folder
lastMod time.Time // Last time instance when a file was modified
mu sync.RWMutex
mu sync.Mutex
}
// scan performs a new scan on the given directory, compares against the already

View File

@ -32,7 +32,7 @@ import (
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/pborman/uuid"
"github.com/google/uuid"
)
const (
@ -110,7 +110,10 @@ func (k *Key) UnmarshalJSON(j []byte) (err error) {
}
u := new(uuid.UUID)
*u = uuid.Parse(keyJSON.Id)
*u, err = uuid.Parse(keyJSON.Id)
if err != nil {
return err
}
k.Id = *u
addr, err := hex.DecodeString(keyJSON.Address)
if err != nil {
@ -128,7 +131,10 @@ func (k *Key) UnmarshalJSON(j []byte) (err error) {
}
func newKeyFromECDSA(privateKeyECDSA *ecdsa.PrivateKey) *Key {
id := uuid.NewRandom()
id, err := uuid.NewRandom()
if err != nil {
panic(fmt.Sprintf("Could not create random uuid: %v", err))
}
key := &Key{
Id: id,
Address: crypto.PubkeyToAddress(privateKeyECDSA.PublicKey),
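The switch from github.com/pborman/uuid to github.com/google/uuid changes the call shapes used throughout this file: Parse and NewRandom now return an error alongside the value, and UUID is a [16]byte value type rather than a byte slice. A short sketch of the new pattern:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	// NewRandom now reports failures instead of returning a nil slice.
	id, err := uuid.NewRandom()
	if err != nil {
		panic(err)
	}

	// Parse also returns an error; UUID is a comparable [16]byte value type.
	parsed, err := uuid.Parse(id.String())
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed == id) // true
}
```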

View File

@ -24,7 +24,6 @@ import (
"crypto/ecdsa"
crand "crypto/rand"
"errors"
"fmt"
"math/big"
"os"
"path/filepath"
@ -44,6 +43,10 @@ var (
ErrLocked = accounts.NewAuthNeededError("password or unlock")
ErrNoMatch = errors.New("no key for given address or file")
ErrDecrypt = errors.New("could not decrypt key with given password")
// ErrAccountAlreadyExists is returned if an account attempted to import is
// already present in the keystore.
ErrAccountAlreadyExists = errors.New("account already exists")
)
// KeyStoreType is the reflect type of a keystore backend.
@ -67,7 +70,8 @@ type KeyStore struct {
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
mu sync.RWMutex
mu sync.RWMutex
importMu sync.Mutex // Import Mutex locks the import to prevent two insertions from racing
}
type unlocked struct {
@ -279,11 +283,9 @@ func (ks *KeyStore) SignTx(a accounts.Account, tx *types.Transaction, chainID *b
if !found {
return nil, ErrLocked
}
// Depending on the presence of the chain ID, sign with EIP155 or homestead
if chainID != nil {
return types.SignTx(tx, types.NewEIP155Signer(chainID), unlockedKey.PrivateKey)
}
return types.SignTx(tx, types.HomesteadSigner{}, unlockedKey.PrivateKey)
// Depending on the presence of the chain ID, sign with 2718 or homestead
signer := types.LatestSignerForChainID(chainID)
return types.SignTx(tx, signer, unlockedKey.PrivateKey)
}
// SignHashWithPassphrase signs hash if the private key matching the given address
@ -306,12 +308,9 @@ func (ks *KeyStore) SignTxWithPassphrase(a accounts.Account, passphrase string,
return nil, err
}
defer zeroKey(key.PrivateKey)
// Depending on the presence of the chain ID, sign with EIP155 or homestead
if chainID != nil {
return types.SignTx(tx, types.NewEIP155Signer(chainID), key.PrivateKey)
}
return types.SignTx(tx, types.HomesteadSigner{}, key.PrivateKey)
// Depending on the presence of the chain ID, sign with or without replay protection.
signer := types.LatestSignerForChainID(chainID)
return types.SignTx(tx, signer, key.PrivateKey)
}
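Both signing paths now obtain their signer from types.LatestSignerForChainID, which selects the newest replay-protected signer the chain ID allows and falls back to the Homestead signer when the chain ID is nil. A minimal sketch of the same pattern outside the keystore; the key and transaction values are illustrative:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	to := common.HexToAddress("0x0000000000000000000000000000000000000001")
	tx := types.NewTransaction(0, to, big.NewInt(1), 21000, big.NewInt(1), nil)

	// A nil chain ID would fall back to the unprotected Homestead signer; a
	// non-nil one selects the newest replay-protected signer for that chain.
	signer := types.LatestSignerForChainID(big.NewInt(1))
	signed, err := types.SignTx(tx, signer, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(signed.Hash().Hex())
}
```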
// Unlock unlocks the given account indefinitely.
@ -443,14 +442,27 @@ func (ks *KeyStore) Import(keyJSON []byte, passphrase, newPassphrase string) (ac
if err != nil {
return accounts.Account{}, err
}
ks.importMu.Lock()
defer ks.importMu.Unlock()
if ks.cache.hasAddress(key.Address) {
return accounts.Account{
Address: key.Address,
}, ErrAccountAlreadyExists
}
return ks.importKey(key, newPassphrase)
}
// ImportECDSA stores the given key into the key directory, encrypting it with the passphrase.
func (ks *KeyStore) ImportECDSA(priv *ecdsa.PrivateKey, passphrase string) (accounts.Account, error) {
ks.importMu.Lock()
defer ks.importMu.Unlock()
key := newKeyFromECDSA(priv)
if ks.cache.hasAddress(key.Address) {
return accounts.Account{}, fmt.Errorf("account already exists")
return accounts.Account{
Address: key.Address,
}, ErrAccountAlreadyExists
}
return ks.importKey(key, passphrase)
}

View File

@ -23,11 +23,14 @@ import (
"runtime"
"sort"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/event"
)
@ -333,11 +336,95 @@ func TestWalletNotifications(t *testing.T) {
// Shut down the event collector and check events.
sub.Unsubscribe()
<-updates
for ev := range updates {
events = append(events, walletEvent{ev, ev.Wallet.Accounts()[0]})
}
checkAccounts(t, live, ks.Wallets())
checkEvents(t, wantEvents, events)
}
// TestImportECDSA tests the import functionality of a keystore.
func TestImportECDSA(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
key, err := crypto.GenerateKey()
if err != nil {
t.Fatalf("failed to generate key: %v", key)
}
if _, err = ks.ImportECDSA(key, "old"); err != nil {
t.Errorf("importing failed: %v", err)
}
if _, err = ks.ImportECDSA(key, "old"); err == nil {
t.Errorf("importing same key twice succeeded")
}
if _, err = ks.ImportECDSA(key, "new"); err == nil {
t.Errorf("importing same key twice succeeded")
}
}
// TestImportExport tests the import and export functionality of a keystore.
func TestImportExport(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
acc, err := ks.NewAccount("old")
if err != nil {
t.Fatalf("failed to create account: %v", acc)
}
json, err := ks.Export(acc, "old", "new")
if err != nil {
t.Fatalf("failed to export account: %v", acc)
}
dir2, ks2 := tmpKeyStore(t, true)
defer os.RemoveAll(dir2)
if _, err = ks2.Import(json, "old", "old"); err == nil {
t.Errorf("importing with invalid password succeeded")
}
acc2, err := ks2.Import(json, "new", "new")
if err != nil {
t.Errorf("importing failed: %v", err)
}
if acc.Address != acc2.Address {
t.Error("imported account does not match exported account")
}
if _, err = ks2.Import(json, "new", "new"); err == nil {
t.Errorf("importing a key twice succeeded")
}
}
// TestImportRace tests the keystore on races.
// This test should fail under -race if importing races.
func TestImportRace(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
acc, err := ks.NewAccount("old")
if err != nil {
t.Fatalf("failed to create account: %v", acc)
}
json, err := ks.Export(acc, "old", "new")
if err != nil {
t.Fatalf("failed to export account: %v", acc)
}
dir2, ks2 := tmpKeyStore(t, true)
defer os.RemoveAll(dir2)
var atom uint32
var wg sync.WaitGroup
wg.Add(2)
for i := 0; i < 2; i++ {
go func() {
defer wg.Done()
if _, err := ks2.Import(json, "new", "new"); err != nil {
atomic.AddUint32(&atom, 1)
}
}()
}
wg.Wait()
if atom != 1 {
t.Errorf("Import is racy")
}
}
// checkAccounts checks that all known live accounts are present in the wallet list.
func checkAccounts(t *testing.T, live map[common.Address]accounts.Account, wallets []accounts.Wallet) {
if len(live) != len(wallets) {

View File

@ -42,7 +42,7 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/crypto"
"github.com/pborman/uuid"
"github.com/google/uuid"
"golang.org/x/crypto/pbkdf2"
"golang.org/x/crypto/scrypt"
)
@ -228,9 +228,12 @@ func DecryptKey(keyjson []byte, auth string) (*Key, error) {
return nil, err
}
key := crypto.ToECDSAUnsafe(keyBytes)
id, err := uuid.FromBytes(keyId)
if err != nil {
return nil, err
}
return &Key{
Id: uuid.UUID(keyId),
Id: id,
Address: crypto.PubkeyToAddress(key.PublicKey),
PrivateKey: key,
}, nil
@ -276,7 +279,11 @@ func decryptKeyV3(keyProtected *encryptedKeyJSONV3, auth string) (keyBytes []byt
if keyProtected.Version != version {
return nil, nil, fmt.Errorf("version not supported: %v", keyProtected.Version)
}
keyId = uuid.Parse(keyProtected.Id)
keyUUID, err := uuid.Parse(keyProtected.Id)
if err != nil {
return nil, nil, err
}
keyId = keyUUID[:]
plainText, err := DecryptDataV3(keyProtected.Crypto, auth)
if err != nil {
return nil, nil, err
@ -285,7 +292,11 @@ func decryptKeyV3(keyProtected *encryptedKeyJSONV3, auth string) (keyBytes []byt
}
func decryptKeyV1(keyProtected *encryptedKeyJSONV1, auth string) (keyBytes []byte, keyId []byte, err error) {
keyId = uuid.Parse(keyProtected.Id)
keyUUID, err := uuid.Parse(keyProtected.Id)
if err != nil {
return nil, nil, err
}
keyId = keyUUID[:]
mac, err := hex.DecodeString(keyProtected.Crypto.MAC)
if err != nil {
return nil, nil, err

View File

@ -27,7 +27,7 @@ import (
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/crypto"
"github.com/pborman/uuid"
"github.com/google/uuid"
"golang.org/x/crypto/pbkdf2"
)
@ -37,7 +37,10 @@ func importPreSaleKey(keyStore keyStore, keyJSON []byte, password string) (accou
if err != nil {
return accounts.Account{}, nil, err
}
key.Id = uuid.NewRandom()
key.Id, err = uuid.NewRandom()
if err != nil {
return accounts.Account{}, nil, err
}
a := accounts.Account{
Address: key.Address,
URL: accounts.URL{
@ -86,7 +89,7 @@ func decryptPreSaleKey(fileContent []byte, password string) (key *Key, err error
ecKey := crypto.ToECDSAUnsafe(ethPriv)
key = &Key{
Id: nil,
Id: uuid.UUID{},
Address: crypto.PubkeyToAddress(ecKey.PublicKey),
PrivateKey: ecKey,
}

View File

@ -19,7 +19,7 @@ package keystore
import (
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
@ -58,7 +58,7 @@ func (w *keystoreWallet) Open(passphrase string) error { return nil }
func (w *keystoreWallet) Close() error { return nil }
// Accounts implements accounts.Wallet, returning an account list consisting of
// a single account that the plain kestore wallet contains.
// a single account that the plain keystore wallet contains.
func (w *keystoreWallet) Accounts() []accounts.Account {
return []accounts.Account{w.account}
}
@ -93,12 +93,12 @@ func (w *keystoreWallet) signHash(account accounts.Account, hash []byte) ([]byte
return w.keystore.SignHash(account, hash)
}
// SignData signs keccak256(data). The mimetype parameter describes the type of data being signed
// SignData signs keccak256(data). The mimetype parameter describes the type of data being signed.
func (w *keystoreWallet) SignData(account accounts.Account, mimeType string, data []byte) ([]byte, error) {
return w.signHash(account, crypto.Keccak256(data))
}
// SignDataWithPassphrase signs keccak256(data). The mimetype parameter describes the type of data being signed
// SignDataWithPassphrase signs keccak256(data). The mimetype parameter describes the type of data being signed.
func (w *keystoreWallet) SignDataWithPassphrase(account accounts.Account, passphrase, mimeType string, data []byte) ([]byte, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {
@ -108,12 +108,14 @@ func (w *keystoreWallet) SignDataWithPassphrase(account accounts.Account, passph
return w.keystore.SignHashWithPassphrase(account, passphrase, crypto.Keccak256(data))
}
// SignText implements accounts.Wallet, attempting to sign the hash of
// the given text with the given account.
func (w *keystoreWallet) SignText(account accounts.Account, text []byte) ([]byte, error) {
return w.signHash(account, accounts.TextHash(text))
}
// SignTextWithPassphrase implements accounts.Wallet, attempting to sign the
// given hash with the given account using passphrase as extra authentication.
// hash of the given text with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignTextWithPassphrase(account accounts.Account, passphrase string, text []byte) ([]byte, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {
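As a side note on the SignText comments above (not part of the diff): `accounts.TextHash` is understood to hash the text under the `"\x19Ethereum Signed Message:\n"` prefix convention that also appears later in this document. A rough sketch of the equivalent computation, for illustration only:

```go
// Hedged sketch of the prefixed hash that accounts.TextHash computes;
// the message content is an arbitrary example.
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	msg := []byte("abcd")
	prefixed := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(msg), msg)
	hash := crypto.Keccak256([]byte(prefixed))
	fmt.Printf("0x%x\n", hash)
}
```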

View File

@ -31,12 +31,16 @@
Write down the URL (`keycard://044def09` in this example). Then ask `geth` to open the wallet:
```
> personal.openWallet("keycard://044def09")
Please enter the pairing password:
> personal.openWallet("keycard://044def09", "pairing password")
```
Enter the pairing password that you have received during card initialization. Same with the PIN that you will subsequently be
asked for.
The pairing password has been generated during the card initialization process.
The process needs to be repeated once more with the PIN:
```
> personal.openWallet("keycard://044def09", "PIN number")
```
If everything goes well, you should see your new account when typing `personal` on the console:

View File

@ -220,7 +220,7 @@ func (hub *Hub) refreshWallets() {
// Mark the reader as present
seen[reader] = struct{}{}
// If we alreay know about this card, skip to the next reader, otherwise clean up
// If we already know about this card, skip to the next reader, otherwise clean up
if wallet, ok := hub.wallets[reader]; ok {
if err := wallet.ping(); err == nil {
continue

View File

@ -20,6 +20,7 @@ import (
"bytes"
"crypto/aes"
"crypto/cipher"
"crypto/elliptic"
"crypto/rand"
"crypto/sha256"
"crypto/sha512"
@ -27,7 +28,6 @@ import (
"github.com/ethereum/go-ethereum/crypto"
pcsc "github.com/gballet/go-libpcsclite"
"github.com/wsddn/go-ecdh"
"golang.org/x/crypto/pbkdf2"
"golang.org/x/text/unicode/norm"
)
@ -63,26 +63,19 @@ type SecureChannelSession struct {
// NewSecureChannelSession creates a new secure channel for the given card and public key.
func NewSecureChannelSession(card *pcsc.Card, keyData []byte) (*SecureChannelSession, error) {
// Generate an ECDSA keypair for ourselves
gen := ecdh.NewEllipticECDH(crypto.S256())
private, public, err := gen.GenerateKey(rand.Reader)
key, err := crypto.GenerateKey()
if err != nil {
return nil, err
}
cardPublic, ok := gen.Unmarshal(keyData)
if !ok {
return nil, fmt.Errorf("could not unmarshal public key from card")
}
secret, err := gen.GenerateSharedSecret(private, cardPublic)
cardPublic, err := crypto.UnmarshalPubkey(keyData)
if err != nil {
return nil, err
return nil, fmt.Errorf("could not unmarshal public key from card: %v", err)
}
secret, _ := key.Curve.ScalarMult(cardPublic.X, cardPublic.Y, key.D.Bytes())
return &SecureChannelSession{
card: card,
secret: secret,
publicKey: gen.Marshal(public),
secret: secret.Bytes(),
publicKey: elliptic.Marshal(crypto.S256(), key.PublicKey.X, key.PublicKey.Y),
}, nil
}
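A hedged illustration of the ECDH pattern adopted above: the same `ScalarMult` shared-secret derivation, but with both keypairs generated locally (the real code receives the card's public key as `keyData`).

```go
// Sketch only — demonstrates that both sides derive the same secret with
// ScalarMult on secp256k1. Error handling is elided for brevity.
package main

import (
	"bytes"
	"crypto/elliptic"
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	alice, _ := crypto.GenerateKey()
	bob, _ := crypto.GenerateKey()

	// Each side multiplies the other's public point by its own private scalar.
	s1, _ := alice.Curve.ScalarMult(bob.PublicKey.X, bob.PublicKey.Y, alice.D.Bytes())
	s2, _ := bob.Curve.ScalarMult(alice.PublicKey.X, alice.PublicKey.Y, bob.D.Bytes())
	fmt.Println("shared secrets match:", bytes.Equal(s1.Bytes(), s2.Bytes()))

	// The uncompressed public key (65 bytes) is what gets exchanged with the peer.
	pub := elliptic.Marshal(crypto.S256(), alice.PublicKey.X, alice.PublicKey.Y)
	fmt.Println("public key bytes:", len(pub))
}
```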

View File

@ -33,7 +33,7 @@ import (
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
@ -362,7 +362,7 @@ func (w *Wallet) Open(passphrase string) error {
return err
}
// Pairing succeeded, fall through to PIN checks. This will of course fail,
// but we can't return ErrPINNeeded directly here becase we don't know whether
// but we can't return ErrPINNeeded directly here because we don't know whether
// a PIN check or a PIN reset is needed.
passphrase = ""
}
@ -637,7 +637,7 @@ func (w *Wallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Accoun
// to discover non zero accounts and automatically add them to list of tracked
// accounts.
//
// Note, self derivaton will increment the last component of the specified path
// Note, self derivation will increment the last component of the specified path
// opposed to decending into a child path to allow discovering accounts starting
// from non zero components.
//
@ -699,7 +699,7 @@ func (w *Wallet) signHash(account accounts.Account, hash []byte) ([]byte, error)
// the needed details via SignTxWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
func (w *Wallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
signer := types.NewEIP155Signer(chainID)
signer := types.LatestSignerForChainID(chainID)
hash := signer.Hash(tx)
sig, err := w.signHash(account, hash[:])
if err != nil {
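As context for the signer swap in this hunk (hedged, not part of the change): `types.LatestSignerForChainID` selects a signer that also understands the new typed transactions, which `NewEIP155Signer` does not. A rough standalone sketch of signing with it, using made-up key and transaction values:

```go
// Illustrative only — all values are examples.
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	chainID := big.NewInt(1)

	// With a nil chainID, LatestSignerForChainID falls back to pre-EIP-155 rules.
	signer := types.LatestSignerForChainID(chainID)

	to := common.HexToAddress("0x07a565b7ed7d7a678680a4c162885bedbb695fe0")
	tx := types.NewTx(&types.LegacyTx{
		Nonce:    0,
		To:       &to,
		Value:    big.NewInt(0),
		Gas:      21000,
		GasPrice: big.NewInt(1),
	})
	signed, err := types.SignTx(tx, signer, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("signed tx hash:", signed.Hash().Hex())
}
```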

View File

@ -162,7 +162,7 @@ func (w *ledgerDriver) SignTx(path accounts.DerivationPath, tx *types.Transactio
return common.Address{}, nil, accounts.ErrWalletClosed
}
// Ensure the wallet is capable of signing the given transaction
if chainID != nil && w.version[0] <= 1 && w.version[2] <= 2 {
if chainID != nil && w.version[0] <= 1 && w.version[1] <= 0 && w.version[2] <= 2 {
//lint:ignore ST1005 brand name displayed on the console
return common.Address{}, nil, fmt.Errorf("Ledger v%d.%d.%d doesn't support signing this transaction, please update to v1.0.3 at least", w.version[0], w.version[1], w.version[2])
}

View File

@ -255,9 +255,11 @@ func (w *trezorDriver) trezorSign(derivationPath []uint32, tx *types.Transaction
if chainID == nil {
signer = new(types.HomesteadSigner)
} else {
// Trezor backend does not support typed transactions yet.
signer = types.NewEIP155Signer(chainID)
signature[64] -= byte(chainID.Uint64()*2 + 35)
}
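A worked note on the `signature[64]` adjustment above (hedged; it assumes the device reports v with the EIP-155 offset already applied):

```go
// Illustrative arithmetic only: EIP-155 encodes v = recoveryID + chainID*2 + 35,
// while tx.WithSignature expects the raw recovery id (0 or 1) in signature[64].
package main

import "fmt"

func main() {
	chainID := uint64(1)
	vFromDevice := byte(0 + chainID*2 + 35) // 37 on mainnet for recovery id 0
	recoveryID := vFromDevice - byte(chainID*2+35)
	fmt.Println("recovery id:", recoveryID) // 0
}
```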
// Inject the final signature into the transaction and sanity check the sender
signed, err := tx.WithSignature(signer, signature)
if err != nil {

View File

@ -25,7 +25,7 @@ import (
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
@ -368,18 +368,22 @@ func (w *wallet) selfDerive() {
w.log.Warn("USB wallet nonce retrieval failed", "err", err)
break
}
// If the next account is empty, stop self-derivation, but add for the last base path
// We've just self-derived a new account, start tracking it locally
// unless the account was empty.
path := make(accounts.DerivationPath, len(nextPaths[i]))
copy(path[:], nextPaths[i][:])
if balance.Sign() == 0 && nonce == 0 {
empty = true
// If it indeed was empty, make a log output for it anyway. In the case
// of legacy-ledger, the first account on the legacy-path will
// be shown to the user, even if we don't actively track it
if i < len(nextAddrs)-1 {
w.log.Info("Skipping trakcking first account on legacy path, use personal.deriveAccount(<url>,<path>, false) to track",
"path", path, "address", nextAddrs[i])
break
}
}
// We've just self-derived a new account, start tracking it locally
path := make(accounts.DerivationPath, len(nextPaths[i]))
copy(path[:], nextPaths[i][:])
paths = append(paths, path)
account := accounts.Account{
Address: nextAddrs[i],
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
@ -489,7 +493,7 @@ func (w *wallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Accoun
// to discover non zero accounts and automatically add them to list of tracked
// accounts.
//
// Note, self derivaton will increment the last component of the specified path
// Note, self derivation will increment the last component of the specified path
// opposed to decending into a child path to allow discovering accounts starting
// from non zero components.
//

View File

@ -24,13 +24,13 @@ environment:
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://dl.google.com/go/go1.14.2.windows-%GETH_ARCH%.zip
- 7z x go1.14.2.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- appveyor DownloadFile https://dl.google.com/go/go1.16.windows-%GETH_ARCH%.zip
- 7z x go1.16.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version
build_script:
- go run build\ci.go install
- go run build\ci.go install -dlgo
after_build:
- go run build\ci.go archive -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds

View File

@ -1,21 +1,31 @@
# This file contains sha256 checksums of optional build dependencies.
98de84e69726a66da7b4e58eac41b99cbe274d7e8906eeb8a5b7eb0aadee7f7c go1.14.2.src.tar.gz
7688063d55656105898f323d90a79a39c378d86fe89ae192eb3b7fc46347c95a go1.16.src.tar.gz
6000a9522975d116bf76044967d7e69e04e982e9625330d9a539a8b45395f9a8 go1.16.darwin-amd64.tar.gz
ea435a1ac6d497b03e367fdfb74b33e961d813883468080f6e239b3b03bea6aa go1.16.linux-386.tar.gz
013a489ebb3e24ef3d915abe5b94c3286c070dfe0818d5bca8108f1d6e8440d2 go1.16.linux-amd64.tar.gz
3770f7eb22d05e25fbee8fb53c2a4e897da043eb83c69b9a14f8d98562cd8098 go1.16.linux-arm64.tar.gz
d1d9404b1dbd77afa2bdc70934e10fbfcf7d785c372efc29462bb7d83d0a32fd go1.16.linux-armv6l.tar.gz
481492a17d42193d471b93b7a06da3555331bd833b76336afc87be820c48933f go1.16.windows-386.zip
5cc88fa506b3d5c453c54c3ea218fc8dd05d7362ae1de15bb67986b72089ce93 go1.16.windows-amd64.zip
d7d6c70b05a7c2f68b48aab5ab8cb5116b8444c9ddad131673b152e7cff7c726 go1.16.freebsd-386.tar.gz
40b03216f6945fb6883a50604fc7f409a83f62171607229a9c598e701e684f8a go1.16.freebsd-amd64.tar.gz
27a1aaa988e930b7932ce459c8a63ad5b3333b3a06b016d87ff289f2a11aacd6 go1.16.linux-ppc64le.tar.gz
be4c9e4e2cf058efc4e3eb013a760cb989ddc4362f111950c990d1c63b27ccbe go1.16.linux-s390x.tar.gz
aeaa5498682246b87d0b77ece283897348ea03d98e816760a074058bfca60b2a golangci-lint-1.24.0-windows-amd64.zip
7e854a70d449fe77b7a91583ec88c8603eb3bf96c45d52797dc4ba3f2f278dbe golangci-lint-1.24.0-darwin-386.tar.gz
835101fae192c3a2e7a51cb19d5ac3e1a40b0e311955e89bc21d61de78635979 golangci-lint-1.24.0-linux-armv6.tar.gz
a041a6e6a61c9ff3dbe58673af13ea00c76bcd462abede0ade645808e97cdd6d golangci-lint-1.24.0-windows-386.zip
7cc73eb9ca02b7a766c72b913f8080401862b10e7bb90c09b085415a81f21609 golangci-lint-1.24.0-freebsd-armv6.tar.gz
537bb2186987b5e68ad4e8829230557f26087c3028eb736dea1662a851bad73d golangci-lint-1.24.0-linux-armv7.tar.gz
8cb1bc1e63d8f0d9b71fcb10b38887e1646a6b8a120ded2e0cd7c3284528f633 golangci-lint-1.24.0-linux-mips64.tar.gz
095d3f8bf7fc431739861574d0b58d411a617df2ed5698ce5ae5ecc66d23d44d golangci-lint-1.24.0-freebsd-armv7.tar.gz
e245df27cec3827aef9e7afbac59e92816978ee3b64f84f7b88562ff4b2ac225 golangci-lint-1.24.0-linux-arm64.tar.gz
35d6d5927e19f0577cf527f0e4441dbb37701d87e8cf729c98a510fce397fbf7 golangci-lint-1.24.0-linux-ppc64le.tar.gz
a1ed66353b8ceb575d78db3051491bce3ac1560e469a9bc87e8554486fec7dfe golangci-lint-1.24.0-freebsd-386.tar.gz
241ca454102e909de04957ff8a5754c757cefa255758b3e1fba8a4533d19d179 golangci-lint-1.24.0-linux-amd64.tar.gz
ff488423db01a0ec8ffbe4e1d65ef1be6a8d5e6d7930cf380ce8aaf714125470 golangci-lint-1.24.0-linux-386.tar.gz
f05af56f15ebbcf77663a8955d1e39009b584ce8ea4c5583669369d80353a113 golangci-lint-1.24.0-darwin-amd64.tar.gz
b0096796c0ffcd6c350a2ec006100e7ef5f0597b43a204349d4f997273fb32a7 golangci-lint-1.24.0-freebsd-amd64.tar.gz
c9c2867380e85628813f1f7d1c3cfc6c6f7931e89bea86f567ff451b8cdb6654 golangci-lint-1.24.0-linux-mips64le.tar.gz
2feb97fa61c934aa3eba9bc104ab5dd8fb946791d58e64060e8857e800eeae0b golangci-lint-1.24.0-linux-s390x.tar.gz
d998a84eea42f2271aca792a7b027ca5c1edfcba229e8e5a844c9ac3f336df35 golangci-lint-1.27.0-linux-armv7.tar.gz
bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint.exe-1.27.0-windows-amd64.zip
bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint-1.27.0-windows-amd64.zip
0e2a57d6ba709440d3ed018ef1037465fa010ed02595829092860e5cf863042e golangci-lint-1.27.0-freebsd-386.tar.gz
90205fc42ab5ed0096413e790d88ac9b4ed60f4c47e576d13dc0660f7ed4b013 golangci-lint-1.27.0-linux-arm64.tar.gz
8d345e4e88520e21c113d81978e89ad77fc5b13bfdf20e5bca86b83fc4261272 golangci-lint-1.27.0-linux-amd64.tar.gz
cc619634a77f18dc73df2a0725be13116d64328dc35131ca1737a850d6f76a59 golangci-lint-1.27.0-freebsd-armv7.tar.gz
fe683583cfc9eeec83e498c0d6159d87b5e1919dbe4b6c3b3913089642906069 golangci-lint-1.27.0-linux-s390x.tar.gz
058f5579bee75bdaacbaf75b75e1369f7ad877fd8b3b145aed17a17545de913e golangci-lint-1.27.0-freebsd-armv6.tar.gz
38e1e3dadbe3f56ab62b4de82ee0b88e8fad966d8dfd740a26ef94c2edef9818 golangci-lint-1.27.0-linux-armv6.tar.gz
071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 golangci-lint.exe-1.27.0-windows-386.zip
071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 golangci-lint-1.27.0-windows-386.zip
5f37e2b33914ecddb7cad38186ef4ec61d88172fc04f930fa0267c91151ff306 golangci-lint-1.27.0-linux-386.tar.gz
4d94cfb51fdebeb205f1d5a349ac2b683c30591c5150708073c1c329e15965f0 golangci-lint-1.27.0-freebsd-amd64.tar.gz
52572ba8ff07d5169c2365d3de3fec26dc55a97522094d13d1596199580fa281 golangci-lint-1.27.0-linux-ppc64le.tar.gz
3fb1a1683a29c6c0a8cd76135f62b606fbdd538d5a7aeab94af1af70ffdc2fd4 golangci-lint-1.27.0-darwin-amd64.tar.gz

View File

@ -26,7 +26,7 @@ Available commands are:
install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables
test [ -coverage ] [ packages... ] -- runs the tests
lint -- runs certain pre-selected linters
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artifacts
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts
importkeys -- imports signing keys from env
debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package
nsis -- creates a Windows NSIS installer
@ -46,12 +46,11 @@ import (
"encoding/base64"
"flag"
"fmt"
"go/parser"
"go/token"
"io/ioutil"
"log"
"os"
"os/exec"
"path"
"path/filepath"
"regexp"
"runtime"
@ -59,6 +58,7 @@ import (
"time"
"github.com/cespare/cp"
"github.com/ethereum/go-ethereum/crypto/signify"
"github.com/ethereum/go-ethereum/internal/build"
"github.com/ethereum/go-ethereum/params"
)
@ -79,7 +79,6 @@ var (
executablePath("geth"),
executablePath("puppeth"),
executablePath("rlpdump"),
executablePath("wnode"),
executablePath("clef"),
}
@ -109,10 +108,6 @@ var (
BinaryName: "rlpdump",
Description: "Developer utility tool that prints RLP structures.",
},
{
BinaryName: "wnode",
Description: "Ethereum Whisper diagnostic tool",
},
{
BinaryName: "clef",
Description: "Ethereum account management tool.",
@ -139,19 +134,25 @@ var (
// Note: zesty is unsupported because it was officially deprecated on Launchpad.
// Note: artful is unsupported because it was officially deprecated on Launchpad.
// Note: cosmic is unsupported because it was officially deprecated on Launchpad.
// Note: disco is unsupported because it was officially deprecated on Launchpad.
// Note: eoan is unsupported because it was officially deprecated on Launchpad.
debDistroGoBoots = map[string]string{
"trusty": "golang-1.11",
"xenial": "golang-go",
"bionic": "golang-go",
"disco": "golang-go",
"eoan": "golang-go",
"focal": "golang-go",
"groovy": "golang-go",
}
debGoBootPaths = map[string]string{
"golang-1.11": "/usr/lib/go-1.11",
"golang-go": "/usr/lib/go",
}
// This is the version of go that will be downloaded by
//
// go run ci.go install -dlgo
dlgoVersion = "1.16"
)
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@ -202,108 +203,130 @@ func main() {
func doInstall(cmdline []string) {
var (
dlgo = flag.Bool("dlgo", false, "Download Go and build with it")
arch = flag.String("arch", "", "Architecture to cross build for")
cc = flag.String("cc", "", "C compiler to cross build with")
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
// Check Go version. People regularly open issues about compilation
// Check local Go version. People regularly open issues about compilation
// failure with outdated Go. This should save them the trouble.
if !strings.Contains(runtime.Version(), "devel") {
// Figure out the minor version number since we can't textually compare (1.10 < 1.9)
var minor int
fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
if minor < 11 {
if minor < 13 {
log.Println("You have Go version", runtime.Version())
log.Println("go-ethereum requires at least Go version 1.11 and cannot")
log.Println("go-ethereum requires at least Go version 1.13 and cannot")
log.Println("be compiled with an earlier version. Please upgrade your Go installation.")
os.Exit(1)
}
}
// Compile packages given as arguments, or everything if there are no arguments.
packages := []string{"./..."}
if flag.NArg() > 0 {
packages = flag.Args()
// Choose which go command we're going to use.
var gobuild *exec.Cmd
if !*dlgo {
// Default behavior: use the go version which runs ci.go right now.
gobuild = goTool("build")
} else {
// Download of Go requested. This is for build environments where the
// installed version is too old and cannot be upgraded easily.
cachedir := filepath.Join("build", "cache")
goroot := downloadGo(runtime.GOARCH, runtime.GOOS, cachedir)
gobuild = localGoTool(goroot, "build")
}
if *arch == "" || *arch == runtime.GOARCH {
goinstall := goTool("install", buildFlags(env)...)
if runtime.GOARCH == "arm64" {
goinstall.Args = append(goinstall.Args, "-p", "1")
}
goinstall.Args = append(goinstall.Args, "-v")
goinstall.Args = append(goinstall.Args, packages...)
build.MustRun(goinstall)
return
// Configure environment for cross build.
if *arch != "" || *arch != runtime.GOARCH {
gobuild.Env = append(gobuild.Env, "CGO_ENABLED=1")
gobuild.Env = append(gobuild.Env, "GOARCH="+*arch)
}
// Seems we are cross compiling, work around forbidden GOBIN
goinstall := goToolArch(*arch, *cc, "install", buildFlags(env)...)
goinstall.Args = append(goinstall.Args, "-v")
goinstall.Args = append(goinstall.Args, []string{"-buildmode", "archive"}...)
goinstall.Args = append(goinstall.Args, packages...)
build.MustRun(goinstall)
// Configure C compiler.
if *cc != "" {
gobuild.Env = append(gobuild.Env, "CC="+*cc)
} else if os.Getenv("CC") != "" {
gobuild.Env = append(gobuild.Env, "CC="+os.Getenv("CC"))
}
if cmds, err := ioutil.ReadDir("cmd"); err == nil {
for _, cmd := range cmds {
pkgs, err := parser.ParseDir(token.NewFileSet(), filepath.Join(".", "cmd", cmd.Name()), nil, parser.PackageClauseOnly)
if err != nil {
log.Fatal(err)
}
for name := range pkgs {
if name == "main" {
gobuild := goToolArch(*arch, *cc, "build", buildFlags(env)...)
gobuild.Args = append(gobuild.Args, "-v")
gobuild.Args = append(gobuild.Args, []string{"-o", executablePath(cmd.Name())}...)
gobuild.Args = append(gobuild.Args, "."+string(filepath.Separator)+filepath.Join("cmd", cmd.Name()))
build.MustRun(gobuild)
break
}
}
}
// arm64 CI builders are memory-constrained and can't handle concurrent builds,
// better disable it. This check isn't the best, it should probably
// check for something in env instead.
if runtime.GOARCH == "arm64" {
gobuild.Args = append(gobuild.Args, "-p", "1")
}
// Put the default settings in.
gobuild.Args = append(gobuild.Args, buildFlags(env)...)
// We use -trimpath to avoid leaking local paths into the built executables.
gobuild.Args = append(gobuild.Args, "-trimpath")
// Show packages during build.
gobuild.Args = append(gobuild.Args, "-v")
// Now we choose what we're even building.
// Default: collect all 'main' packages in cmd/ and build those.
packages := flag.Args()
if len(packages) == 0 {
packages = build.FindMainPackages("./cmd")
}
// Do the build!
for _, pkg := range packages {
args := make([]string, len(gobuild.Args))
copy(args, gobuild.Args)
args = append(args, "-o", executablePath(path.Base(pkg)))
args = append(args, pkg)
build.MustRun(&exec.Cmd{Path: gobuild.Path, Args: args, Env: gobuild.Env})
}
}
// buildFlags returns the go tool flags for building.
func buildFlags(env build.Environment) (flags []string) {
var ld []string
if env.Commit != "" {
ld = append(ld, "-X", "main.gitCommit="+env.Commit)
ld = append(ld, "-X", "main.gitDate="+env.Date)
}
// Strip DWARF on darwin. This used to be required for certain things,
// and there is no downside to this, so we just keep doing it.
if runtime.GOOS == "darwin" {
ld = append(ld, "-s")
}
if len(ld) > 0 {
flags = append(flags, "-ldflags", strings.Join(ld, " "))
}
return flags
}
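For readers unfamiliar with the mechanism, the `-X` flags assembled above overwrite package-level string variables at link time. A minimal, hedged illustration (the variable name matches the one set above; the commit value is invented):

```go
// main.go — e.g. built with: go build -ldflags "-X main.gitCommit=abc1234"
package main

import "fmt"

// gitCommit is left empty in source and injected by the linker via -X.
var gitCommit string

func main() {
	fmt.Println("built from commit:", gitCommit)
}
```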
// goTool returns the go tool. This uses the Go version which runs ci.go.
func goTool(subcmd string, args ...string) *exec.Cmd {
return goToolArch(runtime.GOARCH, os.Getenv("CC"), subcmd, args...)
cmd := build.GoTool(subcmd, args...)
goToolSetEnv(cmd)
return cmd
}
func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd {
cmd := build.GoTool(subcmd, args...)
if arch == "" || arch == runtime.GOARCH {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
} else {
cmd.Env = append(cmd.Env, "CGO_ENABLED=1")
cmd.Env = append(cmd.Env, "GOARCH="+arch)
}
if cc != "" {
cmd.Env = append(cmd.Env, "CC="+cc)
}
// localGoTool returns the go tool from the given GOROOT.
func localGoTool(goroot string, subcmd string, args ...string) *exec.Cmd {
gotool := filepath.Join(goroot, "bin", "go")
cmd := exec.Command(gotool, subcmd)
goToolSetEnv(cmd)
cmd.Env = append(cmd.Env, "GOROOT="+goroot)
cmd.Args = append(cmd.Args, args...)
return cmd
}
// goToolSetEnv forwards the build environment to the go tool.
func goToolSetEnv(cmd *exec.Cmd) {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOBIN=") {
if strings.HasPrefix(e, "GOBIN=") || strings.HasPrefix(e, "CC=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
return cmd
}
// Running The Tests
@ -325,7 +348,7 @@ func doTest(cmdline []string) {
// Test a single package at a time. CI builders are slow
// and some tests run into timeouts under load.
gotest := goTool("test", buildFlags(env)...)
gotest.Args = append(gotest.Args, "-p", "1", "-timeout", "5m")
gotest.Args = append(gotest.Args, "-p", "1")
if *coverage {
gotest.Args = append(gotest.Args, "-covermode=atomic", "-cover")
}
@ -356,7 +379,7 @@ func doLint(cmdline []string) {
// downloadLinter downloads and unpacks golangci-lint.
func downloadLinter(cachedir string) string {
const version = "1.24.0"
const version = "1.27.0"
csdb := build.MustLoadChecksums("build/checksums.txt")
base := fmt.Sprintf("golangci-lint-%s-%s-%s", version, runtime.GOOS, runtime.GOARCH)
@ -365,7 +388,7 @@ func downloadLinter(cachedir string) string {
if err := csdb.DownloadFile(url, archivePath); err != nil {
log.Fatal(err)
}
if err := build.ExtractTarballArchive(archivePath, cachedir); err != nil {
if err := build.ExtractArchive(archivePath, cachedir); err != nil {
log.Fatal(err)
}
return filepath.Join(cachedir, base, "golangci-lint")
@ -374,11 +397,12 @@ func downloadLinter(cachedir string) string {
// Release Packaging
func doArchive(cmdline []string) {
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture cross packaging")
atype = flag.String("type", "zip", "Type of archive to write (zip|tar)")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. LINUX_SIGNING_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
ext string
arch = flag.String("arch", runtime.GOARCH, "Architecture cross packaging")
atype = flag.String("type", "zip", "Type of archive to write (zip|tar)")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. LINUX_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify key (e.g. LINUX_SIGNIFY_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
ext string
)
flag.CommandLine.Parse(cmdline)
switch *atype {
@ -405,7 +429,7 @@ func doArchive(cmdline []string) {
log.Fatal(err)
}
for _, archive := range []string{geth, alltools} {
if err := archiveUpload(archive, *upload, *signer); err != nil {
if err := archiveUpload(archive, *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
}
@ -425,7 +449,7 @@ func archiveBasename(arch string, archiveVersion string) string {
return platform + "-" + archiveVersion
}
func archiveUpload(archive string, blobstore string, signer string) error {
func archiveUpload(archive string, blobstore string, signer string, signifyVar string) error {
// If signing was requested, generate the signature files
if signer != "" {
key := getenvBase64(signer)
@ -433,6 +457,14 @@ func archiveUpload(archive string, blobstore string, signer string) error {
return err
}
}
if signifyVar != "" {
key := os.Getenv(signifyVar)
untrustedComment := "verify with geth-release.pub"
trustedComment := fmt.Sprintf("%s (%s)", archive, time.Now().UTC().Format(time.RFC1123))
if err := signify.SignFile(archive, archive+".sig", key, untrustedComment, trustedComment); err != nil {
return err
}
}
// If uploading to Azure was requested, push the archive possibly with its signature
if blobstore != "" {
auth := build.AzureBlobstoreConfig{
@ -448,6 +480,11 @@ func archiveUpload(archive string, blobstore string, signer string) error {
return err
}
}
if signifyVar != "" {
if err := build.AzureBlobstoreUpload(archive+".sig", filepath.Base(archive+".sig"), auth); err != nil {
return err
}
}
}
return nil
}
@ -471,13 +508,12 @@ func maybeSkipArchive(env build.Environment) {
// Debian Packaging
func doDebianSource(cmdline []string) {
var (
goversion = flag.String("goversion", "", `Go version to build with (will be included in the source package)`)
cachedir = flag.String("cachedir", "./build/cache", `Filesystem path to cache the downloaded Go bundles at`)
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
cachedir = flag.String("cachedir", "./build/cache", `Filesystem path to cache the downloaded Go bundles at`)
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
)
flag.CommandLine.Parse(cmdline)
*workdir = makeWorkdir(*workdir)
@ -492,10 +528,10 @@ func doDebianSource(cmdline []string) {
}
// Download and verify the Go source package.
gobundle := downloadGoSources(*goversion, *cachedir)
gobundle := downloadGoSources(*cachedir)
// Download all the dependencies needed to build the sources and run the ci script
srcdepfetch := goTool("install", "-n", "./...")
srcdepfetch := goTool("mod", "download")
srcdepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath"))
build.MustRun(srcdepfetch)
@ -511,7 +547,7 @@ func doDebianSource(cmdline []string) {
pkgdir := stageDebianSource(*workdir, meta)
// Add Go source code
if err := build.ExtractTarballArchive(gobundle, pkgdir); err != nil {
if err := build.ExtractArchive(gobundle, pkgdir); err != nil {
log.Fatalf("Failed to extract Go sources: %v", err)
}
if err := os.Rename(filepath.Join(pkgdir, "go"), filepath.Join(pkgdir, ".go")); err != nil {
@ -543,9 +579,10 @@ func doDebianSource(cmdline []string) {
}
}
func downloadGoSources(version string, cachedir string) string {
// downloadGoSources downloads the Go source tarball.
func downloadGoSources(cachedir string) string {
csdb := build.MustLoadChecksums("build/checksums.txt")
file := fmt.Sprintf("go%s.src.tar.gz", version)
file := fmt.Sprintf("go%s.src.tar.gz", dlgoVersion)
url := "https://dl.google.com/go/" + file
dst := filepath.Join(cachedir, file)
if err := csdb.DownloadFile(url, dst); err != nil {
@ -554,6 +591,41 @@ func downloadGoSources(version string, cachedir string) string {
return dst
}
// downloadGo downloads the Go binary distribution and unpacks it into a temporary
// directory. It returns the GOROOT of the unpacked toolchain.
func downloadGo(goarch, goos, cachedir string) string {
if goarch == "arm" {
goarch = "armv6l"
}
csdb := build.MustLoadChecksums("build/checksums.txt")
file := fmt.Sprintf("go%s.%s-%s", dlgoVersion, goos, goarch)
if goos == "windows" {
file += ".zip"
} else {
file += ".tar.gz"
}
url := "https://golang.org/dl/" + file
dst := filepath.Join(cachedir, file)
if err := csdb.DownloadFile(url, dst); err != nil {
log.Fatal(err)
}
ucache, err := os.UserCacheDir()
if err != nil {
log.Fatal(err)
}
godir := filepath.Join(ucache, fmt.Sprintf("geth-go-%s-%s-%s", dlgoVersion, goos, goarch))
if err := build.ExtractArchive(dst, godir); err != nil {
log.Fatal(err)
}
goroot, err := filepath.Abs(filepath.Join(godir, "go"))
if err != nil {
log.Fatal(err)
}
return goroot
}
func ppaUpload(workdir, ppa, sshUser string, files []string) {
p := strings.Split(ppa, "/")
if len(p) != 2 {
@ -749,6 +821,7 @@ func doWindowsInstaller(cmdline []string) {
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture for cross build packaging")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. WINDOWS_SIGNING_KEY)`)
signify = flag.String("signify key", "", `Environment variable holding the signify signing key (e.g. WINDOWS_SIGNIFY_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
)
@ -810,7 +883,7 @@ func doWindowsInstaller(cmdline []string) {
filepath.Join(*workdir, "geth.nsi"),
)
// Sign and publish installer.
if err := archiveUpload(installer, *upload, *signer); err != nil {
if err := archiveUpload(installer, *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
}
@ -819,10 +892,11 @@ func doWindowsInstaller(cmdline []string) {
func doAndroidArchive(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. ANDROID_SIGNING_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "https://oss.sonatype.org")`)
upload = flag.String("upload", "", `Destination to upload the archive (usually "gethstore/builds")`)
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. ANDROID_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. ANDROID_SIGNIFY_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "https://oss.sonatype.org")`)
upload = flag.String("upload", "", `Destination to upload the archive (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
@ -838,6 +912,7 @@ func doAndroidArchive(cmdline []string) {
if *local {
// If we're building locally, copy bundle to build dir and skip Maven
os.Rename("geth.aar", filepath.Join(GOBIN, "geth.aar"))
os.Rename("geth-sources.jar", filepath.Join(GOBIN, "geth-sources.jar"))
return
}
meta := newMavenMetadata(env)
@ -850,7 +925,7 @@ func doAndroidArchive(cmdline []string) {
archive := "geth-" + archiveBasename("android", params.ArchiveVersion(env.Commit)) + ".aar"
os.Rename("geth.aar", archive)
if err := archiveUpload(archive, *upload, *signer); err != nil {
if err := archiveUpload(archive, *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
// Sign and upload all the artifacts to Maven Central
@ -884,11 +959,12 @@ func gomobileTool(subcmd string, args ...string) *exec.Cmd {
"PATH=" + GOBIN + string(os.PathListSeparator) + os.Getenv("PATH"),
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") || strings.HasPrefix(e, "GOBIN=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
return cmd
}
@ -942,10 +1018,11 @@ func newMavenMetadata(env build.Environment) mavenMetadata {
func doXCodeFramework(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. IOS_SIGNING_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "trunk")`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. IOS_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. IOS_SIGNIFY_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "trunk")`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
@ -957,7 +1034,7 @@ func doXCodeFramework(cmdline []string) {
if *local {
// If we're building locally, use the build folder and stop afterwards
bind.Dir, _ = filepath.Abs(GOBIN)
bind.Dir = GOBIN
build.MustRun(bind)
return
}
@ -973,14 +1050,14 @@ func doXCodeFramework(cmdline []string) {
maybeSkipArchive(env)
// Sign and upload the framework to Azure
if err := archiveUpload(archive+".tar.gz", *upload, *signer); err != nil {
if err := archiveUpload(archive+".tar.gz", *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
// Prepare and upload a PodSpec to CocoaPods
if *deploy != "" {
meta := newPodMetadata(env, archive)
build.Render("build/pod.podspec", "Geth.podspec", 0755, meta)
build.MustRunCommand("pod", *deploy, "push", "Geth.podspec", "--allow-warnings", "--verbose")
build.MustRunCommand("pod", *deploy, "push", "Geth.podspec", "--allow-warnings")
}
}
@ -1098,6 +1175,8 @@ func doPurge(cmdline []string) {
if err != nil {
log.Fatal(err)
}
fmt.Printf("Found %d blobs\n", len(blobs))
// Iterate over the blobs, collect and sort all unstable builds
for i := 0; i < len(blobs); i++ {
if !strings.Contains(blobs[i].Name, "unstable") {
@ -1119,6 +1198,7 @@ func doPurge(cmdline []string) {
break
}
}
fmt.Printf("Deleting %d blobs\n", len(blobs))
// Delete all marked as such and return
if err := build.AzureBlobstoreDelete(auth, blobs); err != nil {
log.Fatal(err)


View File

@ -19,9 +19,9 @@ Section "Geth" GETH_IDX
# Create start menu launcher
createDirectory "$SMPROGRAMS\${APPNAME}"
createShortCut "$SMPROGRAMS\${APPNAME}\${APPNAME}.lnk" "$INSTDIR\geth.exe" "--fast" "--cache=512"
createShortCut "$SMPROGRAMS\${APPNAME}\Attach.lnk" "$INSTDIR\geth.exe" "attach" "" ""
createShortCut "$SMPROGRAMS\${APPNAME}\Uninstall.lnk" "$INSTDIR\uninstall.exe" "" "" ""
createShortCut "$SMPROGRAMS\${APPNAME}\${APPNAME}.lnk" "$INSTDIR\geth.exe"
createShortCut "$SMPROGRAMS\${APPNAME}\Attach.lnk" "$INSTDIR\geth.exe" "attach"
createShortCut "$SMPROGRAMS\${APPNAME}\Uninstall.lnk" "$INSTDIR\uninstall.exe"
# Firewall - remove rules (if exists)
SimpleFC::AdvRemoveRule "Geth incoming peers (TCP:30303)"

View File

@ -30,6 +30,7 @@ import (
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common/compiler"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
@ -95,12 +96,12 @@ var (
}
aliasFlag = cli.StringFlag{
Name: "alias",
Usage: "Comma separated aliases for function and event renaming, e.g. foo=bar",
Usage: "Comma separated aliases for function and event renaming, e.g. original1=alias1, original2=alias2",
}
)
func init() {
app = utils.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app = flags.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app.Flags = []cli.Flag{
abiFlag,
binFlag,
@ -117,7 +118,7 @@ func init() {
aliasFlag,
}
app.Action = utils.MigrateFlags(abigen)
cli.CommandHelpTemplate = utils.OriginCommandHelpTemplate
cli.CommandHelpTemplate = flags.OriginCommandHelpTemplate
}
func abigen(c *cli.Context) error {

View File

@ -28,7 +28,6 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/nat"
"github.com/ethereum/go-ethereum/p2p/netutil"
@ -44,7 +43,7 @@ func main() {
natdesc = flag.String("nat", "none", "port mapping mechanism (any|none|upnp|pmp|extip:<IP>)")
netrestrict = flag.String("netrestrict", "", "restrict network communication to the given IP networks (CIDR masks)")
runv5 = flag.Bool("v5", false, "run a v5 topic discovery bootnode")
verbosity = flag.Int("verbosity", int(log.LvlInfo), "log verbosity (0-9)")
verbosity = flag.Int("verbosity", int(log.LvlInfo), "log verbosity (0-5)")
vmodule = flag.String("vmodule", "", "log verbosity pattern")
nodeKey *ecdsa.PrivateKey
@ -121,17 +120,17 @@ func main() {
printNotice(&nodeKey.PublicKey, *realaddr)
db, _ := enode.OpenDB("")
ln := enode.NewLocalNode(db, nodeKey)
cfg := discover.Config{
PrivateKey: nodeKey,
NetRestrict: restrictList,
}
if *runv5 {
if _, err := discv5.ListenUDP(nodeKey, conn, "", restrictList); err != nil {
if _, err := discover.ListenV5(conn, ln, cfg); err != nil {
utils.Fatalf("%v", err)
}
} else {
db, _ := enode.OpenDB("")
ln := enode.NewLocalNode(db, nodeKey)
cfg := discover.Config{
PrivateKey: nodeKey,
NetRestrict: restrictList,
}
if _, err := discover.ListenUDP(conn, ln, cfg); err != nil {
utils.Fatalf("%v", err)
}

View File

@ -22,8 +22,8 @@ import (
"fmt"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common/fdlimit"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
@ -37,7 +37,7 @@ var (
var app *cli.App
func init() {
app = utils.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app = flags.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app.Commands = []cli.Command{
commandStatus,
commandDeploy,
@ -48,7 +48,7 @@ func init() {
oracleFlag,
nodeURLFlag,
}
cli.CommandHelpTemplate = utils.OriginCommandHelpTemplate
cli.CommandHelpTemplate = flags.OriginCommandHelpTemplate
}
// Commonly used command line flags.

View File

@ -9,7 +9,7 @@ Clef can run as a daemon on the same machine, off a usb-stick like [USB armory](
Check out the
* [CLI tutorial](tutorial.md) for some concrete examples on how Clef works.
* [Setup docs](docs/setup.md) for infos on how to configure Clef on QubesOS or USB Armory.
* [Setup docs](docs/setup.md) for information on how to configure Clef on QubesOS or USB Armory.
* [Data types](datatypes.md) for details on the communication messages between Clef and an external UI.
## Command line flags
@ -33,12 +33,12 @@ GLOBAL OPTIONS:
--lightkdf Reduce key-derivation RAM & CPU usage at some expense of KDF strength
--nousb Disables monitoring for and managing USB hardware wallets
--pcscdpath value Path to the smartcard daemon (pcscd) socket file (default: "/run/pcscd/pcscd.comm")
--rpcaddr value HTTP-RPC server listening interface (default: "localhost")
--rpcvhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--http.addr value HTTP-RPC server listening interface (default: "localhost")
--http.vhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--ipcdisable Disable the IPC-RPC server
--ipcpath Filename for IPC socket/pipe within the datadir (explicit paths escape it)
--rpc Enable the HTTP-RPC server
--rpcport value HTTP-RPC server listening port (default: 8550)
--http Enable the HTTP-RPC server
--http.port value HTTP-RPC server listening port (default: 8550)
--signersecret value A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash
--4bytedb-custom value File used for writing new 4byte-identifiers submitted via API (default: "./4byte-custom.json")
--auditlog value File used to emit audit logs. Set to "" to disable (default: "audit.log")
@ -46,6 +46,7 @@ GLOBAL OPTIONS:
--stdio-ui Use STDIN/STDOUT as a channel for an external UI. This means that an STDIN/STDOUT is used for RPC-communication with a e.g. a graphical user interface, and can be used when Clef is started by an external process.
--stdio-ui-test Mechanism to test interface between Clef and UI. Requires 'stdio-ui'.
--advanced If enabled, issues warnings instead of rejections for suspicious requests. Default off
--suppress-bootwarn If set, does not show the warning during boot
--help, -h show help
--version, -v print the version
```
@ -112,11 +113,11 @@ Some snags and todos
### External API
Clef listens to HTTP requests on `rpcaddr`:`rpcport` (or to IPC on `ipcpath`), with the same JSON-RPC standard as Geth. The messages are expected to be [JSON-RPC 2.0 standard](https://www.jsonrpc.org/specification).
Clef listens to HTTP requests on `http.addr`:`http.port` (or to IPC on `ipcpath`), with the same JSON-RPC standard as Geth. The messages are expected to be [JSON-RPC 2.0 standard](https://www.jsonrpc.org/specification).
Some of these call can require user interaction. Clients must be aware that responses may be delayed significantly or may never be received if a users decides to ignore the confirmation request.
Some of these calls can require user interaction. Clients must be aware that responses may be delayed significantly or may never be received if a user decides to ignore the confirmation request.
The External API is **untrusted**: it does not accept credentials over this API, nor does it expect that requests have any authority.
The External API is **untrusted**: it does not accept credentials, nor does it expect that requests have any authority.
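For illustration only (not part of the upstream docs), a minimal Go client for this HTTP endpoint could look like the sketch below. It assumes Clef is listening on the default `localhost:8550` from the flag list above and uses `account_list`, one of the external API methods, as the example call:

```go
// Hedged sketch of a JSON-RPC 2.0 request to Clef's external HTTP API;
// the endpoint and method name are assumptions based on the defaults
// and methods described in this document.
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	req := []byte(`{"id": 1, "jsonrpc": "2.0", "method": "account_list", "params": []}`)
	resp, err := http.Post("http://localhost:8550/", "application/json", bytes.NewReader(req))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```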
### Internal UI API
@ -145,13 +146,11 @@ See the [external API changelog](extapi_changelog.md) for information about chan
All hex encoded values must be prefixed with `0x`.
## Methods
### account_new
#### Create new password protected account
The signer will generate a new private key, encrypts it according to [web3 keystore spec](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) and stores it in the keystore directory.
The signer will generate a new private key, encrypt it according to [web3 keystore spec](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) and store it in the keystore directory.
The client is responsible for creating a backup of the keystore. If the keystore is lost there is no method of retrieving lost accounts.
#### Arguments
@ -160,7 +159,6 @@ None
#### Result
- address [string]: account address that is derived from the generated key
- url [string]: location of the keyfile
#### Sample call
```json
@ -172,14 +170,11 @@ None
}
```
Response
```
```json
{
"id": 0,
"jsonrpc": "2.0",
"result": {
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"url": "keystore:///my/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
"result": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133"
}
```
@ -195,8 +190,6 @@ None
#### Result
- array with account records:
- account.address [string]: account address that is derived from the generated key
- account.type [string]: type of the
- account.url [string]: location of the account
#### Sample call
```json
@ -207,21 +200,13 @@ None
}
```
Response
```
```json
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"address": "0xafb2f771f58513609765698f65d3f2f0224a956f",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T07-26-47.162109726Z--afb2f771f58513609765698f65d3f2f0224a956f"
},
{
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
"0xafb2f771f58513609765698f65d3f2f0224a956f",
"0xbea9183f8f4f03d427f6bcea17388bdff1cab133"
]
}
```
@ -229,10 +214,10 @@ Response
### account_signTransaction
#### Sign transactions
Signs a transactions and responds with the signed transaction in RLP encoded form.
Signs a transaction and responds with the signed transaction in RLP-encoded and JSON forms.
#### Arguments
2. transaction object:
1. transaction object:
- `from` [address]: account to send the transaction from
- `to` [address]: receiver account. If omitted or `0x`, will cause contract creation.
- `gas` [number]: maximum amount of gas to burn
@ -240,12 +225,13 @@ Response
- `value` [number:optional]: amount of Wei to send with the transaction
- `data` [data:optional]: input data
- `nonce` [number]: account nonce
3. method signature [string:optional]
1. method signature [string:optional]
- The method signature, if present, is to aid decoding the calldata. Should consist of `methodname(paramtype,...)`, e.g. `transfer(uint256,address)`. The signer may use this data to parse the supplied calldata, and show the user. The data, however, is considered totally untrusted, and reliability is not expected.
#### Result
- signed transaction in RLP encoded form [data]
- raw [data]: signed transaction in RLP encoded form
- tx [json]: signed transaction in JSON form
#### Sample call
```json
@ -270,11 +256,22 @@ Response
```json
{
"id": 2,
"jsonrpc": "2.0",
"error": {
"code": -32000,
"message": "Request denied"
"id": 2,
"result": {
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1234",
"gas": "0x55555",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x1234",
"input": "0xabcd",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
}
```
@ -326,7 +323,7 @@ Response
Bash example:
```bash
#curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
> curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
{"jsonrpc":"2.0","id":67,"result":{"raw":"0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","tx":{"nonce":"0x0","gasPrice":"0x1","gas":"0x333","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0","value":"0x0","input":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012","v":"0x26","r":"0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e","s":"0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","hash":"0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"}}}
```
@ -373,7 +370,7 @@ Response
### account_signTypedData
#### Sign data
Signs a chunk of structured data conformant to [EIP712]([EIP-712](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-712.md)) and returns the calculated signature.
Signs a chunk of structured data conformant to [EIP-712](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-712.md) and returns the calculated signature.
#### Arguments
- account [address]: account to sign with
@ -469,7 +466,7 @@ Response
### account_ecRecover
#### Sign data
#### Recover the signing address
Derive the address from the account that was used to sign data with content type `text/plain` and the signature.
@ -487,7 +484,6 @@ Derive the address from the account that was used to sign data with content type
"jsonrpc": "2.0",
"method": "account_ecRecover",
"params": [
"data/plain",
"0xaabbccdd",
"0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
]
@ -503,117 +499,36 @@ Response
}
```
### account_import
### account_version
#### Import account
Import a private key into the keystore. The imported key is expected to be encrypted according to the web3 keystore
format.
#### Get external API version
Get the version of the external API used by Clef.
#### Arguments
- account [object]: key in [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) (retrieved with account_export)
None
#### Result
- imported key [object]:
- key.address [address]: address of the imported key
- key.type [string]: type of the account
- key.url [string]: key URL
* external API version [string]
#### Sample call
```json
{
"id": 6,
"id": 0,
"jsonrpc": "2.0",
"method": "account_import",
"params": [
{
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
}
]
"method": "account_version",
"params": []
}
```
Response
```json
{
"id": 6,
"jsonrpc": "2.0",
"result": {
"address": "0xc7412fc59930fd90099c917a50e5f11d0934b2f5",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T11-00-42.032024108Z--c7412fc59930fd90099c917a50e5f11d0934b2f5"
}
}
```
### account_export
#### Export account from keystore
Export a private key from the keystore. The exported private key is encrypted with the original password. When the
key is imported later this password is required.
#### Arguments
- account [address]: export private key that is associated with this account
#### Result
- exported key, see [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) for
more information
#### Sample call
```json
{
"id": 5,
"jsonrpc": "2.0",
"method": "account_export",
"params": [
"0xc7412fc59930fd90099c917a50e5f11d0934b2f5"
]
}
```
Response
```json
{
"id": 5,
"jsonrpc": "2.0",
"result": {
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
}
"id": 0,
"jsonrpc": "2.0",
"result": "6.0.0"
}
```
@ -625,7 +540,7 @@ By starting the signer with the switch `--stdio-ui-test`, the signer will invoke
denials. This can be used during development to ensure that the API is (at least somewhat) correctly implemented.
See `pythonsigner`, which can be invoked via `python3 pythonsigner.py test` to perform the 'denial-handshake-test'.
All methods in this API uses object-based parameters, so that there can be no mixups of parameters: each piece of data is accessed by key.
All methods in this API use object-based parameters, so that there can be no mixup of parameters: each piece of data is accessed by key.
See the [ui API changelog](intapi_changelog.md) for information about changes to this API.
@ -784,12 +699,10 @@ Invoked when a request for account listing has been made.
{
"accounts": [
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-20T14-44-54.089682944Z--123409812340981234098123409812deadbeef42",
"address": "0x123409812340981234098123409812deadbeef42"
},
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-23T21-59-03.199240693Z--cafebabedeadbeef34098123409812deadbeef42",
"address": "0xcafebabedeadbeef34098123409812deadbeef42"
}
@ -819,7 +732,13 @@ Invoked when a request for account listing has been made.
{
"address": "0x123409812340981234098123409812deadbeef42",
"raw_data": "0x01020304",
"message": "\u0019Ethereum Signed Message:\n4\u0001\u0002\u0003\u0004",
"messages": [
{
"name": "message",
"value": "\u0019Ethereum Signed Message:\n4\u0001\u0002\u0003\u0004",
"type": "text/plain"
}
],
"hash": "0x7e3a4e7a9d1744bc5c675c25e1234ca8ed9162bd17f78b9085e48047c15ac310",
"meta": {
"remote": "signer binary",
@ -829,12 +748,34 @@ Invoked when a request for account listing has been made.
}
]
}
```
### ApproveNewAccount / `ui_approveNewAccount`
Invoked when a request for creating a new account has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 4,
"method": "ui_approveNewAccount",
"params": [
{
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ShowInfo / `ui_showInfo`
The UI should show the info to the user. Does not expect response.
The UI should show the info (a single message) to the user. Does not expect response.
#### Sample call
@ -844,9 +785,7 @@ The UI should show the info to the user. Does not expect response.
"id": 9,
"method": "ui_showInfo",
"params": [
{
"text": "Tests completed"
}
"Tests completed"
]
}
@ -854,18 +793,16 @@ The UI should show the info to the user. Does not expect response.
### ShowError / `ui_showError`
The UI should show the info to the user. Does not expect response.
The UI should show the error (a single message) to the user. Does not expect response.
```json
{
"jsonrpc": "2.0",
"id": 2,
"method": "ShowError",
"method": "ui_showError",
"params": [
{
"text": "Testing 'ShowError'"
}
"Something bad happened!"
]
}
@ -879,9 +816,36 @@ When implementing rate-limited rules, this callback should be used.
TLDR; Use this method to keep track of signed transactions, instead of using the data in `ApproveTx`.
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_onApprovedTx",
"params": [
{
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1",
"gas": "0x333",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"input": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
]
}
```
### OnSignerStartup / `ui_onSignerStartup`
This method provide the UI with information about what API version the signer uses (both internal and external) aswell as build-info and external API,
This method provides the UI with information about what API version the signer uses (both internal and external) as well as build-info and external API,
in k/v-form.
Example call:
@ -905,6 +869,27 @@ Example call:
```
### OnInputRequired / `ui_onInputRequired`
Invoked when Clef requires user input (e.g. a password).
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_onInputRequired",
"params": [
{
"title": "Account password",
"prompt": "Please enter the password for account 0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"isPassword": true
}
]
}
```
### Rules for UI apis

View File

@ -3,7 +3,7 @@
These data types are defined in the channel between clef and the UI
### SignDataRequest
SignDataRequest contains information about a pending request to sign some data. The data to be signed can be of various types, defined by content-type. Clef has done most of the work in canonicalizing and making sense of the data, and it's up to the UI to presentthe user with the contents of the `message`
SignDataRequest contains information about a pending request to sign some data. The data to be signed can be of various types, defined by content-type. Clef has done most of the work in canonicalizing and making sense of the data, and it's up to the UI to present the user with the contents of the `message`
Example:
```json

View File

@ -94,7 +94,7 @@ with minimal requirements.
On the `client` qube, we need to create a listener which will receive the request from the Dapp, and proxy it.
[qubes-client.py](qubes/client/qubes-client.py):
[qubes-client.py](qubes/qubes-client.py):
```python
@ -186,7 +186,7 @@ from other qubes.
## USBArmory
The [USB armory](https://inversepath.com/usbarmory) is an open source hardware design with an 800 Mhz ARM processor. It is a pocket-size
The [USB armory](https://inversepath.com/usbarmory) is an open source hardware design with an 800 MHz ARM processor. It is a pocket-size
computer. When inserted into a laptop, it identifies itself as a USB network interface, basically adding another network
to your computer. Over this new network interface, you can SSH into the device.

View File

@ -10,6 +10,64 @@ TL;DR: Given a version number MAJOR.MINOR.PATCH, increment the:
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
### 6.1.0
The API-method `account_signGnosisSafeTx` was added. This method takes two parameters,
`[address, safeTx]`. The latter, `safeTx`, can be copy-pasted from the gnosis relay. For example:
```
{
"jsonrpc": "2.0",
"method": "account_signGnosisSafeTx",
"params": ["0xfd1c4226bfD1c436672092F4eCbfC270145b7256",
{
"safe": "0x25a6c4BBd32B2424A9c99aEB0584Ad12045382B3",
"to": "0xB372a646f7F05Cc1785018dBDA7EBc734a2A20E2",
"value": "20000000000000000",
"data": null,
"operation": 0,
"gasToken": "0x0000000000000000000000000000000000000000",
"safeTxGas": 27845,
"baseGas": 0,
"gasPrice": "0",
"refundReceiver": "0x0000000000000000000000000000000000000000",
"nonce": 2,
"executionDate": null,
"submissionDate": "2020-09-15T21:54:49.617634Z",
"modified": "2020-09-15T21:54:49.617634Z",
"blockNumber": null,
"transactionHash": null,
"safeTxHash": "0x2edfbd5bc113ff18c0631595db32eb17182872d88d9bf8ee4d8c2dd5db6d95e2",
"executor": null,
"isExecuted": false,
"isSuccessful": null,
"ethGasPrice": null,
"gasUsed": null,
"fee": null,
"origin": null,
"dataDecoded": null,
"confirmationsRequired": null,
"confirmations": [
{
"owner": "0xAd2e180019FCa9e55CADe76E4487F126Fd08DA34",
"submissionDate": "2020-09-15T21:54:49.663299Z",
"transactionHash": null,
"confirmationType": "CONFIRMATION",
"signature": "0x95a7250bb645f831c86defc847350e7faff815b2fb586282568e96cc859e39315876db20a2eed5f7a0412906ec5ab57652a6f645ad4833f345bda059b9da2b821c",
"signatureType": "EOA"
}
],
"signatures": null
}
],
"id": 67
}
```
Not all fields are required, though. This method is really just a UX helper, which massages the
input to conform to the `EIP-712` [specification](https://docs.gnosis.io/safe/docs/contracts_tx_execution/#transaction-hash)
for the Gnosis Safe, and makes the output directly importable by a relay service.
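As a rough illustration (not part of the changelog itself), a client could invoke the method through Clef's external HTTP API with go-ethereum's `rpc` package. The endpoint below assumes Clef's default HTTP port (geth's default plus 5) and passes an abbreviated version of the relay payload; as noted above, not all fields are required.
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Assumes Clef serves its external API over HTTP on the default port
	// (node.DefaultHTTPPort + 5, i.e. 8550 with stock geth defaults).
	client, err := rpc.Dial("http://localhost:8550")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Abbreviated safeTx; the full relay JSON from the example above can be
	// passed as-is, since not all fields are required.
	safeTx := map[string]interface{}{
		"safe":           "0x25a6c4BBd32B2424A9c99aEB0584Ad12045382B3",
		"to":             "0xB372a646f7F05Cc1785018dBDA7EBc734a2A20E2",
		"value":          "20000000000000000",
		"operation":      0,
		"gasToken":       "0x0000000000000000000000000000000000000000",
		"safeTxGas":      27845,
		"baseGas":        0,
		"gasPrice":       "0",
		"refundReceiver": "0x0000000000000000000000000000000000000000",
		"nonce":          2,
	}

	var signed interface{}
	err = client.CallContext(context.Background(), &signed,
		"account_signGnosisSafeTx",
		"0xfd1c4226bfD1c436672092F4eCbfC270145b7256", safeTx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("signed safe tx: %v\n", signed)
}
```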
### 6.0.0


@ -12,7 +12,7 @@ Additional labels for pre-release and build metadata are available as extensions
### 7.0.1
Added `clef_New` to the internal API calleable from a UI.
Added `clef_New` to the internal API callable from a UI.
> `New` creates a new password protected Account. The private key is protected with
> the given password. Users are responsible to backup the private key that is stored
@ -161,7 +161,7 @@ UserInputResponse struct {
#### 1.2.0
* Add `OnStartup` method, to provide the UI with information about what API version
the signer uses (both internal and external) aswell as build-info and external api.
the signer uses (both internal and external) as well as build-info and external api.
Example call:
```json


@ -29,9 +29,9 @@ import (
"math/big"
"os"
"os/signal"
"os/user"
"path/filepath"
"runtime"
"sort"
"strings"
"time"
@ -40,10 +40,10 @@ import (
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
@ -53,7 +53,8 @@ import (
"github.com/ethereum/go-ethereum/signer/fourbyte"
"github.com/ethereum/go-ethereum/signer/rules"
"github.com/ethereum/go-ethereum/signer/storage"
colorable "github.com/mattn/go-colorable"
"github.com/mattn/go-colorable"
"github.com/mattn/go-isatty"
"gopkg.in/urfave/cli.v1"
)
@ -82,6 +83,10 @@ var (
Name: "advanced",
Usage: "If enabled, issues warnings instead of rejections for suspicious requests. Default off",
}
acceptFlag = cli.BoolFlag{
Name: "suppress-bootwarn",
Usage: "If set, does not show the warning during boot",
}
keystoreFlag = cli.StringFlag{
Name: "keystore",
Value: filepath.Join(node.DefaultDataDir(), "keystore"),
@ -98,7 +103,7 @@ var (
Usage: "Chain id to use for signing (1=mainnet, 3=Ropsten, 4=Rinkeby, 5=Goerli)",
}
rpcPortFlag = cli.IntFlag{
Name: "rpcport",
Name: "http.port",
Usage: "HTTP-RPC server listening port",
Value: node.DefaultHTTPPort + 5,
}
@ -196,6 +201,7 @@ The delpw command removes a password for a given address (keyfile).
logLevelFlag,
keystoreFlag,
utils.LightKDFFlag,
acceptFlag,
},
Description: `
The newaccount command creates a new keystore-backed account. It is a convenience-method
@ -211,6 +217,36 @@ The gendoc generates example structures of the json-rpc communication types.
`}
)
// AppHelpFlagGroups is the application flags, grouped by functionality.
var AppHelpFlagGroups = []flags.FlagGroup{
{
Name: "FLAGS",
Flags: []cli.Flag{
logLevelFlag,
keystoreFlag,
configdirFlag,
chainIdFlag,
utils.LightKDFFlag,
utils.NoUSBFlag,
utils.SmartCardDaemonPathFlag,
utils.HTTPListenAddrFlag,
utils.HTTPVirtualHostsFlag,
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.HTTPEnabledFlag,
rpcPortFlag,
signerSecretFlag,
customDBFlag,
auditLogFlag,
ruleFlag,
stdiouiFlag,
testFlag,
advancedMode,
acceptFlag,
},
},
}
func init() {
app.Name = "Clef"
app.Usage = "Manage Ethereum account operations"
@ -222,11 +258,11 @@ func init() {
utils.LightKDFFlag,
utils.NoUSBFlag,
utils.SmartCardDaemonPathFlag,
utils.RPCListenAddrFlag,
utils.RPCVirtualHostsFlag,
utils.HTTPListenAddrFlag,
utils.HTTPVirtualHostsFlag,
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.RPCEnabledFlag,
utils.HTTPEnabledFlag,
rpcPortFlag,
signerSecretFlag,
customDBFlag,
@ -235,6 +271,7 @@ func init() {
stdiouiFlag,
testFlag,
advancedMode,
acceptFlag,
}
app.Action = signer
app.Commands = []cli.Command{initCommand,
@ -243,7 +280,41 @@ func init() {
delCredentialCommand,
newAccountCommand,
gendocCommand}
cli.CommandHelpTemplate = utils.OriginCommandHelpTemplate
cli.CommandHelpTemplate = flags.CommandHelpTemplate
// Override the default app help template
cli.AppHelpTemplate = flags.ClefAppHelpTemplate
// Override the default app help printer, but only for the global app help
originalHelpPrinter := cli.HelpPrinter
cli.HelpPrinter = func(w io.Writer, tmpl string, data interface{}) {
if tmpl == flags.ClefAppHelpTemplate {
// Render out custom usage screen
originalHelpPrinter(w, tmpl, flags.HelpData{App: data, FlagGroups: AppHelpFlagGroups})
} else if tmpl == flags.CommandHelpTemplate {
// Iterate over all command specific flags and categorize them
categorized := make(map[string][]cli.Flag)
for _, flag := range data.(cli.Command).Flags {
if _, ok := categorized[flag.String()]; !ok {
categorized[flags.FlagCategory(flag, AppHelpFlagGroups)] = append(categorized[flags.FlagCategory(flag, AppHelpFlagGroups)], flag)
}
}
// sort to get a stable ordering
sorted := make([]flags.FlagGroup, 0, len(categorized))
for cat, flgs := range categorized {
sorted = append(sorted, flags.FlagGroup{Name: cat, Flags: flgs})
}
sort.Sort(flags.ByCategory(sorted))
// add sorted array to data and render with default printer
originalHelpPrinter(w, tmpl, map[string]interface{}{
"cmd": data,
"categorizedFlags": sorted,
})
} else {
originalHelpPrinter(w, tmpl, data)
}
}
}
func main() {
@ -283,7 +354,7 @@ func initializeSecrets(c *cli.Context) error {
text := "The master seed of clef will be locked with a password.\nPlease specify a password. Do not forget this password!"
var password string
for {
password = getPassPhrase(text, true)
password = utils.GetPassPhrase(text, true)
if err := core.ValidatePasswordFormat(password); err != nil {
fmt.Printf("invalid password: %v\n", err)
} else {
@ -356,7 +427,7 @@ func setCredential(ctx *cli.Context) error {
utils.Fatalf("Invalid address specified: %s", addr)
}
address := common.HexToAddress(addr)
password := getPassPhrase("Please enter a password to store for this address:", true)
password := utils.GetPassPhrase("Please enter a password to store for this address:", true)
fmt.Println()
stretchedKey, err := readMasterKey(ctx, nil)
@ -433,8 +504,10 @@ func initialize(c *cli.Context) error {
if c.GlobalBool(stdiouiFlag.Name) {
logOutput = os.Stderr
// If using the stdioui, we can't do the 'confirm'-flow
fmt.Fprint(logOutput, legalWarning)
} else {
if !c.GlobalBool(acceptFlag.Name) {
fmt.Fprint(logOutput, legalWarning)
}
} else if !c.GlobalBool(acceptFlag.Name) {
if !confirm(legalWarning) {
return fmt.Errorf("aborted by user")
}
@ -579,9 +652,9 @@ func signer(c *cli.Context) error {
Service: api,
Version: "1.0"},
}
if c.GlobalBool(utils.RPCEnabledFlag.Name) {
vhosts := splitAndTrim(c.GlobalString(utils.RPCVirtualHostsFlag.Name))
cors := splitAndTrim(c.GlobalString(utils.RPCCORSDomainFlag.Name))
if c.GlobalBool(utils.HTTPEnabledFlag.Name) {
vhosts := utils.SplitAndTrim(c.GlobalString(utils.HTTPVirtualHostsFlag.Name))
cors := utils.SplitAndTrim(c.GlobalString(utils.HTTPCORSDomainFlag.Name))
srv := rpc.NewServer()
err := node.RegisterApisFromWhitelist(rpcAPI, []string{"account"}, srv, false)
@ -590,17 +663,21 @@ func signer(c *cli.Context) error {
}
handler := node.NewHTTPHandlerStack(srv, cors, vhosts)
// set port
port := c.Int(rpcPortFlag.Name)
// start http server
httpEndpoint := fmt.Sprintf("%s:%d", c.GlobalString(utils.RPCListenAddrFlag.Name), c.Int(rpcPortFlag.Name))
listener, err := node.StartHTTPEndpoint(httpEndpoint, rpc.DefaultHTTPTimeouts, handler)
httpEndpoint := fmt.Sprintf("%s:%d", c.GlobalString(utils.HTTPListenAddrFlag.Name), port)
httpServer, addr, err := node.StartHTTPEndpoint(httpEndpoint, rpc.DefaultHTTPTimeouts, handler)
if err != nil {
utils.Fatalf("Could not start RPC api: %v", err)
}
extapiURL = fmt.Sprintf("http://%v/", listener.Addr())
extapiURL = fmt.Sprintf("http://%v/", addr)
log.Info("HTTP endpoint opened", "url", extapiURL)
defer func() {
listener.Close()
// Don't bother imposing a timeout here.
httpServer.Shutdown(context.Background())
log.Info("HTTP endpoint closed", "url", extapiURL)
}()
}
@ -640,21 +717,11 @@ func signer(c *cli.Context) error {
return nil
}
// splitAndTrim splits input separated by a comma
// and trims excessive white space from the substrings.
func splitAndTrim(input string) []string {
result := strings.Split(input, ",")
for i, r := range result {
result[i] = strings.TrimSpace(r)
}
return result
}
// DefaultConfigDir is the default config directory to use for the vaults and other
// persistence requirements.
func DefaultConfigDir() string {
// Try to place the data folder in the user's home dir
home := homeDir()
home := utils.HomeDir()
if home != "" {
if runtime.GOOS == "darwin" {
return filepath.Join(home, "Library", "Signer")
@ -662,26 +729,15 @@ func DefaultConfigDir() string {
appdata := os.Getenv("APPDATA")
if appdata != "" {
return filepath.Join(appdata, "Signer")
} else {
return filepath.Join(home, "AppData", "Roaming", "Signer")
}
} else {
return filepath.Join(home, ".clef")
return filepath.Join(home, "AppData", "Roaming", "Signer")
}
return filepath.Join(home, ".clef")
}
// As we cannot guess a stable location, return empty and handle later
return ""
}
func homeDir() string {
if home := os.Getenv("HOME"); home != "" {
return home
}
if usr, err := user.Current(); err == nil {
return usr.HomeDir
}
return ""
}
func readMasterKey(ctx *cli.Context, ui core.UIClientAPI) ([]byte, error) {
var (
file string
@ -711,7 +767,7 @@ func readMasterKey(ctx *cli.Context, ui core.UIClientAPI) ([]byte, error) {
}
password = resp.Text
} else {
password = getPassPhrase("Decrypt master seed of clef", false)
password = utils.GetPassPhrase("Decrypt master seed of clef", false)
}
masterSeed, err := decryptSeed(cipherKey, password)
if err != nil {
@ -731,14 +787,16 @@ func readMasterKey(ctx *cli.Context, ui core.UIClientAPI) ([]byte, error) {
// checkFile is a convenience function to check if a file
// * exists
// * is mode 0400
// * is mode 0400 (unix only)
func checkFile(filename string) error {
info, err := os.Stat(filename)
if err != nil {
return fmt.Errorf("failed stat on %s: %v", filename, err)
}
// Check the unix permission bits
if info.Mode().Perm()&0377 != 0 {
// However, on windows, we cannot use the unix perm-bits, see
// https://github.com/ethereum/go-ethereum/issues/20123
if runtime.GOOS != "windows" && info.Mode().Perm()&0377 != 0 {
return fmt.Errorf("file (%v) has insecure file permissions (%v)", filename, info.Mode().String())
}
return nil
@ -906,27 +964,6 @@ func testExternalUI(api *core.SignerAPI) {
}
// getPassPhrase retrieves the password associated with clef, either fetched
// from a list of preloaded passphrases, or requested interactively from the user.
// TODO: there are many `getPassPhrase` functions, it will be better to abstract them into one.
func getPassPhrase(prompt string, confirmation bool) string {
fmt.Println(prompt)
password, err := console.Stdin.PromptPassword("Password: ")
if err != nil {
utils.Fatalf("Failed to read password: %v", err)
}
if confirmation {
confirm, err := console.Stdin.PromptPassword("Repeat password: ")
if err != nil {
utils.Fatalf("Failed to read password confirmation: %v", err)
}
if password != confirm {
utils.Fatalf("Passwords do not match")
}
}
return password
}
type encryptedSeedStorage struct {
Description string `json:"description"`
Version int `json:"version"`
@ -977,7 +1014,7 @@ func GenDoc(ctx *cli.Context) {
if data, err := json.MarshalIndent(v, "", " "); err == nil {
output = append(output, fmt.Sprintf("### %s\n\n%s\n\nExample:\n```json\n%s\n```", name, desc, data))
} else {
log.Error("Error generating output", err)
log.Error("Error generating output", "err", err)
}
}
)
@ -1069,7 +1106,7 @@ func GenDoc(ctx *cli.Context) {
rlpdata := common.FromHex("0xf85d640101948a8eafb1cf62bfbeb1741769dae1a9dd47996192018026a0716bd90515acb1e68e5ac5867aa11a1e65399c3349d479f5fb698554ebc6f293a04e8a4ebfff434e971e0ef12c5bf3a881b06fd04fc3f8b8a7291fb67a26a1d4ed")
var tx types.Transaction
rlp.DecodeBytes(rlpdata, &tx)
tx.UnmarshalBinary(rlpdata)
add("OnApproved - SignTransactionResult", desc, &ethapi.SignTransactionResult{Raw: rlpdata, Tx: &tx})
}

cmd/devp2p/README.md

@ -0,0 +1,117 @@
# The devp2p command
The devp2p command line tool is a utility for low-level peer-to-peer debugging and
protocol development. Its sub-commands are described below.
### ENR Decoding
Use `devp2p enrdump <base64>` to verify and display an Ethereum Node Record.
### Node Key Management
The `devp2p key ...` command family deals with node key files.
Run `devp2p key generate mynode.key` to create a new node key in the `mynode.key` file.
Run `devp2p key to-enode mynode.key -ip 127.0.0.1 -tcp 30303` to create an enode:// URL
corresponding to the given node key and address information.
### Maintaining DNS Discovery Node Lists
The devp2p command can create and publish DNS discovery node lists.
Run `devp2p dns sign <directory>` to update the signature of a DNS discovery tree.
Run `devp2p dns sync <enrtree-URL>` to download a complete DNS discovery tree.
Run `devp2p dns to-cloudflare <directory>` to publish a tree to CloudFlare DNS.
Run `devp2p dns to-route53 <directory>` to publish a tree to Amazon Route53.
You can find more information about these commands in the [DNS Discovery Setup Guide][dns-tutorial].
### Discovery v4 Utilities
The `devp2p discv4 ...` command family deals with the [Node Discovery v4][discv4]
protocol.
Run `devp2p discv4 ping <enode/ENR>` to ping a node.
Run `devp2p discv4 resolve <enode/ENR>` to find the most recent node record of a node in
the DHT.
Run `devp2p discv4 crawl <nodes.json path>` to create or update a JSON node set.
### Discovery v5 Utilities
The `devp2p discv5 ...` command family deals with the [Node Discovery v5][discv5]
protocol. This protocol is currently under active development.
Run `devp2p discv5 ping <ENR>` to ping a node.
Run `devp2p discv5 resolve <ENR>` to find the most recent node record of a node in
the discv5 DHT.
Run `devp2p discv5 listen` to run a Discovery v5 node.
Run `devp2p discv5 crawl <nodes.json path>` to create or update a JSON node set containing
discv5 nodes.
### Discovery Test Suites
The devp2p command also contains interactive test suites for Discovery v4 and Discovery
v5.
To run these tests against your implementation, you need to set up a networking
environment where two separate UDP listening addresses are available on the same machine.
The two listening addresses must also be routed such that they are able to reach the node
you want to test.
For example, if you want to run the test on your local host, and the node under test is
also on the local host, you need to assign two IP addresses (or a larger range) to your
loopback interface. On macOS, this can be done by executing the following command:
`sudo ifconfig lo0 add 127.0.0.2`
You can now run either test suite as follows: Start the node under test first, ensuring
that it won't talk to the Internet (i.e. disable bootstrapping). An easy way to prevent
unintended connections to the global DHT is listening on `127.0.0.1`.
Now get the ENR of your node and store it in the `NODE` environment variable.
Start the test by running `devp2p discv5 test -listen1 127.0.0.1 -listen2 127.0.0.2 $NODE`.
### Eth Protocol Test Suite
The Eth Protocol test suite is a conformance test suite for the [eth protocol][eth].
To run the eth protocol test suite against your implementation, the node needs to be initialized as follows:
1. initialize the geth node with the `genesis.json` file contained in the `testdata` directory
2. import the `halfchain.rlp` file in the `testdata` directory
3. run geth with the following flags:
```
geth --datadir <datadir> --nodiscover --nat=none --networkid 19763 --verbosity 5
```
Then, run the following command, replacing `<enode>` with the enode of the geth node:
```
devp2p rlpx eth-test <enode> cmd/devp2p/internal/ethtest/testdata/chain.rlp cmd/devp2p/internal/ethtest/testdata/genesis.json
```
Repeat the above process (re-initialising the node) in order to run the Eth Protocol test suite again.
#### Eth66 Test Suite
The Eth66 test suite is a conformance test suite specifically for version 66 of the eth protocol.
To run the eth66 protocol test suite, initialize a geth node as described above and run the following command,
replacing `<enode>` with the enode of the geth node:
```
devp2p rlpx eth66-test <enode> cmd/devp2p/internal/ethtest/testdata/chain.rlp cmd/devp2p/internal/ethtest/testdata/genesis.json
```
[eth]: https://github.com/ethereum/devp2p/blob/master/caps/eth.md
[dns-tutorial]: https://geth.ethereum.org/docs/developers/dns-discovery-setup
[discv4]: https://github.com/ethereum/devp2p/tree/master/discv4.md
[discv5]: https://github.com/ethereum/devp2p/tree/master/discv5/discv5.md


@ -22,6 +22,7 @@ import (
"strings"
"time"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v4test"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/discover"
@ -40,6 +41,7 @@ var (
discv4ResolveCommand,
discv4ResolveJSONCommand,
discv4CrawlCommand,
discv4TestCommand,
},
}
discv4PingCommand = cli.Command{
@ -74,6 +76,18 @@ var (
Action: discv4Crawl,
Flags: []cli.Flag{bootnodesFlag, crawlTimeoutFlag},
}
discv4TestCommand = cli.Command{
Name: "test",
Usage: "Runs tests against a node",
Action: discv4Test,
Flags: []cli.Flag{
remoteEnodeFlag,
testPatternFlag,
testTAPFlag,
testListen1Flag,
testListen2Flag,
},
}
)
var (
@ -98,6 +112,11 @@ var (
Usage: "Time limit for the crawl.",
Value: 30 * time.Minute,
}
remoteEnodeFlag = cli.StringFlag{
Name: "remote",
Usage: "Enode of the remote node under test",
EnvVar: "REMOTE_ENODE",
}
)
func discv4Ping(ctx *cli.Context) error {
@ -184,6 +203,18 @@ func discv4Crawl(ctx *cli.Context) error {
return nil
}
// discv4Test runs the protocol test suite.
func discv4Test(ctx *cli.Context) error {
// Configure test package globals.
if !ctx.IsSet(remoteEnodeFlag.Name) {
return fmt.Errorf("Missing -%v", remoteEnodeFlag.Name)
}
v4test.Remote = ctx.String(remoteEnodeFlag.Name)
v4test.Listen1 = ctx.String(testListen1Flag.Name)
v4test.Listen2 = ctx.String(testListen2Flag.Name)
return runTests(ctx, v4test.AllTests)
}
// startV4 starts an ephemeral discovery V4 node.
func startV4(ctx *cli.Context) *discover.UDPv4 {
ln, config := makeDiscoveryConfig(ctx)
@ -235,7 +266,11 @@ func listen(ln *enode.LocalNode, addr string) *net.UDPConn {
}
usocket := socket.(*net.UDPConn)
uaddr := socket.LocalAddr().(*net.UDPAddr)
ln.SetFallbackIP(net.IP{127, 0, 0, 1})
if uaddr.IP.IsUnspecified() {
ln.SetFallbackIP(net.IP{127, 0, 0, 1})
} else {
ln.SetFallbackIP(uaddr.IP)
}
ln.SetFallbackUDP(uaddr.Port)
return usocket
}
@ -243,7 +278,11 @@ func listen(ln *enode.LocalNode, addr string) *net.UDPConn {
func parseBootnodes(ctx *cli.Context) ([]*enode.Node, error) {
s := params.RinkebyBootnodes
if ctx.IsSet(bootnodesFlag.Name) {
s = strings.Split(ctx.String(bootnodesFlag.Name), ",")
input := ctx.String(bootnodesFlag.Name)
if input == "" {
return nil, nil
}
s = strings.Split(input, ",")
}
nodes := make([]*enode.Node, len(s))
var err error


@ -20,6 +20,7 @@ import (
"fmt"
"time"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v5test"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/p2p/discover"
"gopkg.in/urfave/cli.v1"
@ -33,6 +34,7 @@ var (
discv5PingCommand,
discv5ResolveCommand,
discv5CrawlCommand,
discv5TestCommand,
discv5ListenCommand,
},
}
@ -53,6 +55,17 @@ var (
Action: discv5Crawl,
Flags: []cli.Flag{bootnodesFlag, crawlTimeoutFlag},
}
discv5TestCommand = cli.Command{
Name: "test",
Usage: "Runs protocol tests against a node",
Action: discv5Test,
Flags: []cli.Flag{
testPatternFlag,
testTAPFlag,
testListen1Flag,
testListen2Flag,
},
}
discv5ListenCommand = cli.Command{
Name: "listen",
Usage: "Runs a node",
@ -103,6 +116,16 @@ func discv5Crawl(ctx *cli.Context) error {
return nil
}
// discv5Test runs the protocol test suite.
func discv5Test(ctx *cli.Context) error {
suite := &v5test.Suite{
Dest: getNodeArg(ctx),
Listen1: ctx.String(testListen1Flag.Name),
Listen2: ctx.String(testListen2Flag.Name),
}
return runTests(ctx, suite.AllTests())
}
func discv5Listen(ctx *cli.Context) error {
disc := startV5(ctx)
defer disc.Close()


@ -27,10 +27,10 @@ import (
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/console/prompt"
"github.com/ethereum/go-ethereum/p2p/dnsdisc"
"github.com/ethereum/go-ethereum/p2p/enode"
cli "gopkg.in/urfave/cli.v1"
"gopkg.in/urfave/cli.v1"
)
var (
@ -226,7 +226,7 @@ func loadSigningKey(keyfile string) *ecdsa.PrivateKey {
if err != nil {
exit(fmt.Errorf("failed to read the keyfile at '%s': %v", keyfile, err))
}
password, _ := console.Stdin.PromptPassword("Please enter the password for '" + keyfile + "': ")
password, _ := prompt.Stdin.PromptPassword("Please enter the password for '" + keyfile + "': ")
key, err := keystore.DecryptKey(keyjson, password)
if err != nil {
exit(fmt.Errorf("error decrypting key: %v", err))


@ -21,6 +21,7 @@ import (
"encoding/base64"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"net"
"os"
@ -69,22 +70,30 @@ func enrdump(ctx *cli.Context) error {
if err != nil {
return fmt.Errorf("INVALID: %v", err)
}
fmt.Print(dumpRecord(r))
dumpRecord(os.Stdout, r)
return nil
}
// dumpRecord creates a human-readable description of the given node record.
func dumpRecord(r *enr.Record) string {
out := new(bytes.Buffer)
if n, err := enode.New(enode.ValidSchemes, r); err != nil {
func dumpRecord(out io.Writer, r *enr.Record) {
n, err := enode.New(enode.ValidSchemes, r)
if err != nil {
fmt.Fprintf(out, "INVALID: %v\n", err)
} else {
fmt.Fprintf(out, "Node ID: %v\n", n.ID())
dumpNodeURL(out, n)
}
kv := r.AppendElements(nil)[1:]
fmt.Fprintf(out, "Record has sequence number %d and %d key/value pairs.\n", r.Seq(), len(kv)/2)
fmt.Fprint(out, dumpRecordKV(kv, 2))
return out.String()
}
func dumpNodeURL(out io.Writer, n *enode.Node) {
var key enode.Secp256k1
if n.Load(&key) != nil {
return // no secp256k1 public key
}
fmt.Fprintf(out, "URLv4: %s\n", n.URLv4())
}
func dumpRecordKV(kv []interface{}, indent int) string {


@ -0,0 +1,167 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"compress/gzip"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"math/big"
"os"
"strings"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/forkid"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
)
type Chain struct {
blocks []*types.Block
chainConfig *params.ChainConfig
}
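// WriteTo writes the chain's blocks to the given writer, RLP-encoded one after another.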
func (c *Chain) WriteTo(writer io.Writer) error {
for _, block := range c.blocks {
if err := rlp.Encode(writer, block); err != nil {
return err
}
}
return nil
}
// Len returns the length of the chain.
func (c *Chain) Len() int {
return len(c.blocks)
}
// TD calculates the total difficulty of the chain.
func (c *Chain) TD(height int) *big.Int { // TODO later on change scheme so that the height is included in range
sum := big.NewInt(0)
for _, block := range c.blocks[:height] {
sum.Add(sum, block.Difficulty())
}
return sum
}
// ForkID gets the fork id of the chain.
func (c *Chain) ForkID() forkid.ID {
return forkid.NewID(c.chainConfig, c.blocks[0].Hash(), uint64(c.Len()))
}
// Shorten returns a copy of the chain truncated to the desired height.
func (c *Chain) Shorten(height int) *Chain {
blocks := make([]*types.Block, height)
copy(blocks, c.blocks[:height])
config := *c.chainConfig
return &Chain{
blocks: blocks,
chainConfig: &config,
}
}
// Head returns the chain head.
func (c *Chain) Head() *types.Block {
return c.blocks[c.Len()-1]
}
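// GetHeaders returns the headers matching the given GetBlockHeaders request, assembled from the local copy of the chain.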
func (c *Chain) GetHeaders(req GetBlockHeaders) (BlockHeaders, error) {
if req.Amount < 1 {
return nil, fmt.Errorf("no block headers requested")
}
headers := make(BlockHeaders, req.Amount)
var blockNumber uint64
// range over blocks to check if our chain has the requested header
for _, block := range c.blocks {
if block.Hash() == req.Origin.Hash || block.Number().Uint64() == req.Origin.Number {
headers[0] = block.Header()
blockNumber = block.Number().Uint64()
}
}
if headers[0] == nil {
return nil, fmt.Errorf("no headers found for given origin number %v, hash %v", req.Origin.Number, req.Origin.Hash)
}
if req.Reverse {
for i := 1; i < int(req.Amount); i++ {
blockNumber -= (1 + req.Skip)
headers[i] = c.blocks[blockNumber].Header()
}
return headers, nil
}
for i := 1; i < int(req.Amount); i++ {
blockNumber += (1 + req.Skip)
headers[i] = c.blocks[blockNumber].Header()
}
return headers, nil
}
// loadChain takes the given chain.rlp file, and decodes and returns
// the blocks from the file.
func loadChain(chainfile string, genesis string) (*Chain, error) {
chainConfig, err := ioutil.ReadFile(genesis)
if err != nil {
return nil, err
}
var gen core.Genesis
if err := json.Unmarshal(chainConfig, &gen); err != nil {
return nil, err
}
gblock := gen.ToBlock(nil)
// Load chain.rlp.
fh, err := os.Open(chainfile)
if err != nil {
return nil, err
}
defer fh.Close()
var reader io.Reader = fh
if strings.HasSuffix(chainfile, ".gz") {
if reader, err = gzip.NewReader(reader); err != nil {
return nil, err
}
}
stream := rlp.NewStream(reader, 0)
var blocks = make([]*types.Block, 1)
blocks[0] = gblock
for i := 0; ; i++ {
var b types.Block
if err := stream.Decode(&b); err == io.EOF {
break
} else if err != nil {
return nil, fmt.Errorf("at block index %d: %v", i, err)
}
if b.NumberU64() != uint64(i+1) {
return nil, fmt.Errorf("block at index %d has wrong number %d", i, b.NumberU64())
}
blocks = append(blocks, &b)
}
c := &Chain{blocks: blocks, chainConfig: gen.Config}
return c, nil
}


@ -0,0 +1,151 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"path/filepath"
"strconv"
"testing"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/p2p"
"github.com/stretchr/testify/assert"
)
// TestEthProtocolNegotiation tests whether the test suite
// can negotiate the highest eth protocol in a status message exchange
func TestEthProtocolNegotiation(t *testing.T) {
var tests = []struct {
conn *Conn
caps []p2p.Cap
expected uint32
}{
{
conn: &Conn{},
caps: []p2p.Cap{
{Name: "eth", Version: 63},
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
expected: uint32(65),
},
{
conn: &Conn{},
caps: []p2p.Cap{
{Name: "eth", Version: 0},
{Name: "eth", Version: 89},
{Name: "eth", Version: 65},
},
expected: uint32(65),
},
{
conn: &Conn{},
caps: []p2p.Cap{
{Name: "eth", Version: 63},
{Name: "eth", Version: 64},
{Name: "wrongProto", Version: 65},
},
expected: uint32(64),
},
}
for i, tt := range tests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
tt.conn.negotiateEthProtocol(tt.caps)
assert.Equal(t, tt.expected, uint32(tt.conn.ethProtocolVersion))
})
}
}
// TestChain_GetHeaders tests whether the test suite can correctly
// respond to a GetBlockHeaders request from a node.
func TestChain_GetHeaders(t *testing.T) {
chainFile, err := filepath.Abs("./testdata/chain.rlp")
if err != nil {
t.Fatal(err)
}
genesisFile, err := filepath.Abs("./testdata/genesis.json")
if err != nil {
t.Fatal(err)
}
chain, err := loadChain(chainFile, genesisFile)
if err != nil {
t.Fatal(err)
}
var tests = []struct {
req GetBlockHeaders
expected BlockHeaders
}{
{
req: GetBlockHeaders{
Origin: eth.HashOrNumber{
Number: uint64(2),
},
Amount: uint64(5),
Skip: 1,
Reverse: false,
},
expected: BlockHeaders{
chain.blocks[2].Header(),
chain.blocks[4].Header(),
chain.blocks[6].Header(),
chain.blocks[8].Header(),
chain.blocks[10].Header(),
},
},
{
req: GetBlockHeaders{
Origin: eth.HashOrNumber{
Number: uint64(chain.Len() - 1),
},
Amount: uint64(3),
Skip: 0,
Reverse: true,
},
expected: BlockHeaders{
chain.blocks[chain.Len()-1].Header(),
chain.blocks[chain.Len()-2].Header(),
chain.blocks[chain.Len()-3].Header(),
},
},
{
req: GetBlockHeaders{
Origin: eth.HashOrNumber{
Hash: chain.Head().Hash(),
},
Amount: uint64(1),
Skip: 0,
Reverse: false,
},
expected: BlockHeaders{
chain.Head().Header(),
},
},
}
for i, tt := range tests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
headers, err := chain.GetHeaders(tt.req)
if err != nil {
t.Fatal(err)
}
assert.Equal(t, headers, tt.expected)
})
}
}


@ -0,0 +1,382 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"time"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
)
// TestStatus_66 attempts to connect to the given node and exchange
// a status message with it on the eth66 protocol, and then check to
// make sure the chain head is correct.
func (s *Suite) TestStatus_66(t *utesting.T) {
conn := s.dial66(t)
// get protoHandshake
conn.handshake(t)
// get status
switch msg := conn.statusExchange66(t, s.chain).(type) {
case *Status:
status := *msg
if status.ProtocolVersion != uint32(66) {
t.Fatalf("mismatch in version: wanted 66, got %d", status.ProtocolVersion)
}
t.Logf("got status message: %s", pretty.Sdump(msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestGetBlockHeaders_66 tests whether the given node can respond to
// an eth66 `GetBlockHeaders` request and that the response is accurate.
func (s *Suite) TestGetBlockHeaders_66(t *utesting.T) {
conn := s.setupConnection66(t)
// get block headers
req := &eth.GetBlockHeadersPacket66{
RequestId: 3,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 2,
Skip: 1,
Reverse: false,
},
}
// write message
headers := s.getBlockHeaders66(t, conn, req, req.RequestId)
// check for correct headers
headersMatch(t, s.chain, headers)
}
// TestSimultaneousRequests_66 sends two simultaneous `GetBlockHeaders` requests
// with different request IDs and checks to make sure the node responds with the correct
// headers per request.
func (s *Suite) TestSimultaneousRequests_66(t *utesting.T) {
// create two connections
conn1, conn2 := s.setupConnection66(t), s.setupConnection66(t)
// create two requests
req1 := &eth.GetBlockHeadersPacket66{
RequestId: 111,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 2,
Skip: 1,
Reverse: false,
},
}
req2 := &eth.GetBlockHeadersPacket66{
RequestId: 222,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 4,
Skip: 1,
Reverse: false,
},
}
// wait for headers for first request
headerChan := make(chan BlockHeaders, 1)
go func(headers chan BlockHeaders) {
headers <- s.getBlockHeaders66(t, conn1, req1, req1.RequestId)
}(headerChan)
// check headers of second request
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn2, req2, req2.RequestId))
// check headers of first request
headersMatch(t, s.chain, <-headerChan)
}
// TestBroadcast_66 tests whether a block announcement is correctly
// propagated to the given node's peer(s) on the eth66 protocol.
func (s *Suite) TestBroadcast_66(t *utesting.T) {
sendConn, receiveConn := s.setupConnection66(t), s.setupConnection66(t)
nextBlock := len(s.chain.blocks)
blockAnnouncement := &NewBlock{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
}
s.testAnnounce66(t, sendConn, receiveConn, blockAnnouncement)
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock66(s.chain.Head()); err != nil {
t.Fatal(err)
}
}
// TestGetBlockBodies_66 tests whether the given node can respond to
// a `GetBlockBodies` request and that the response is accurate over
// the eth66 protocol.
func (s *Suite) TestGetBlockBodies_66(t *utesting.T) {
conn := s.setupConnection66(t)
// create block bodies request
id := uint64(55)
req := &eth.GetBlockBodiesPacket66{
RequestId: id,
GetBlockBodiesPacket: eth.GetBlockBodiesPacket{
s.chain.blocks[54].Hash(),
s.chain.blocks[75].Hash(),
},
}
if err := conn.write66(req, GetBlockBodies{}.Code()); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
reqID, msg := conn.readAndServe66(s.chain, timeout)
switch msg := msg.(type) {
case BlockBodies:
if reqID != req.RequestId {
t.Fatalf("request ID mismatch: wanted %d, got %d", req.RequestId, reqID)
}
t.Logf("received %d block bodies", len(msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestLargeAnnounce_66 tests the announcement mechanism with a large block.
func (s *Suite) TestLargeAnnounce_66(t *utesting.T) {
nextBlock := len(s.chain.blocks)
blocks := []*NewBlock{
{
Block: largeBlock(),
TD: s.fullChain.TD(nextBlock + 1),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: largeNumber(2),
},
{
Block: largeBlock(),
TD: largeNumber(2),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
},
}
for i, blockAnnouncement := range blocks[0:3] {
t.Logf("Testing malicious announcement: %v\n", i)
sendConn := s.setupConnection66(t)
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Invalid announcement, check that peer disconnected
switch msg := sendConn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
break
default:
t.Fatalf("unexpected: %s wanted disconnect", pretty.Sdump(msg))
}
}
// Test the last block as a valid block
sendConn := s.setupConnection66(t)
receiveConn := s.setupConnection66(t)
s.testAnnounce66(t, sendConn, receiveConn, blocks[3])
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock66(s.fullChain.blocks[nextBlock]); err != nil {
t.Fatal(err)
}
}
// TestMaliciousHandshake_66 tries to send malicious data during the handshake.
func (s *Suite) TestMaliciousHandshake_66(t *utesting.T) {
conn := s.dial66(t)
// write hello to client
pub0 := crypto.FromECDSAPub(&conn.ourKey.PublicKey)[1:]
handshakes := []*Hello{
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 66},
},
ID: pub0,
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
{Name: "eth", Version: 66},
},
ID: append(pub0, byte(0)),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
{Name: "eth", Version: 66},
},
ID: append(pub0, pub0...),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
{Name: "eth", Version: 66},
},
ID: largeBuffer(2),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 66},
},
ID: largeBuffer(2),
},
}
for i, handshake := range handshakes {
t.Logf("Testing malicious handshake %v\n", i)
// Init the handshake
if err := conn.Write(handshake); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// check that the peer disconnected
timeout := 20 * time.Second
// Discard one hello
for i := 0; i < 2; i++ {
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
case *Hello:
// Hello's are sent concurrently, so ignore them
continue
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// Dial for the next round
conn = s.dial66(t)
}
}
// TestMaliciousStatus_66 sends a status package with a large total difficulty.
func (s *Suite) TestMaliciousStatus_66(t *utesting.T) {
conn := s.dial66(t)
// get protoHandshake
conn.handshake(t)
status := &Status{
ProtocolVersion: uint32(66),
NetworkID: s.chain.chainConfig.ChainID.Uint64(),
TD: largeNumber(2),
Head: s.chain.blocks[s.chain.Len()-1].Hash(),
Genesis: s.chain.blocks[0].Hash(),
ForkID: s.chain.ForkID(),
}
// get status
switch msg := conn.statusExchange(t, s.chain, status).(type) {
case *Status:
t.Logf("%+v\n", msg)
default:
t.Fatalf("expected status, got: %#v ", msg)
}
// wait for disconnect
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
return
default:
t.Fatalf("expected disconnect, got: %s", pretty.Sdump(msg))
}
}
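// TestTransaction_66 sends transactions that the node should accept and propagate over the eth66 protocol.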
func (s *Suite) TestTransaction_66(t *utesting.T) {
tests := []*types.Transaction{
getNextTxFromChain(t, s),
unknownTx(t, s),
}
for i, tx := range tests {
t.Logf("Testing tx propagation: %v\n", i)
sendSuccessfulTx66(t, s, tx)
}
}
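// TestMaliciousTx_66 sends invalid transactions (stale, bad nonce, huge values) that the node is expected to reject.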
func (s *Suite) TestMaliciousTx_66(t *utesting.T) {
tests := []*types.Transaction{
getOldTxFromChain(t, s),
invalidNonceTx(t, s),
hugeAmount(t, s),
hugeGasPrice(t, s),
hugeData(t, s),
}
for i, tx := range tests {
t.Logf("Testing malicious tx propagation: %v\n", i)
sendFailingTx66(t, s, tx)
}
}
// TestZeroRequestID_66 checks that a request ID of zero is still handled
// by the node.
func (s *Suite) TestZeroRequestID_66(t *utesting.T) {
conn := s.setupConnection66(t)
req := &eth.GetBlockHeadersPacket66{
RequestId: 0,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Number: 0,
},
Amount: 2,
},
}
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req, req.RequestId))
}
// TestSameRequestID_66 sends two requests with the same request ID
// concurrently to a single node.
func (s *Suite) TestSameRequestID_66(t *utesting.T) {
conn := s.setupConnection66(t)
// create two separate requests with same ID
reqID := uint64(1234)
req1 := &eth.GetBlockHeadersPacket66{
RequestId: reqID,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Number: 0,
},
Amount: 2,
},
}
req2 := &eth.GetBlockHeadersPacket66{
RequestId: reqID,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Number: 33,
},
Amount: 2,
},
}
// send requests concurrently
go func() {
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req2, reqID))
}()
// check response from first request
headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req1, reqID))
}


@ -0,0 +1,270 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"fmt"
"time"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/rlp"
"github.com/stretchr/testify/assert"
)
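// statusExchange66 performs a status handshake on the eth66 protocol, deriving the status message from the given chain.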
func (c *Conn) statusExchange66(t *utesting.T, chain *Chain) Message {
status := &Status{
ProtocolVersion: uint32(66),
NetworkID: chain.chainConfig.ChainID.Uint64(),
TD: chain.TD(chain.Len()),
Head: chain.blocks[chain.Len()-1].Hash(),
Genesis: chain.blocks[0].Hash(),
ForkID: chain.ForkID(),
}
return c.statusExchange(t, chain, status)
}
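// dial66 dials the destination node and adds eth/66 to the connection's advertised capabilities.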
func (s *Suite) dial66(t *utesting.T) *Conn {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
conn.caps = append(conn.caps, p2p.Cap{Name: "eth", Version: 66})
return conn
}
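// write66 RLP-encodes the given request and writes it to the connection under the given message code.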
func (c *Conn) write66(req eth.Packet, code int) error {
payload, err := rlp.EncodeToBytes(req)
if err != nil {
return err
}
_, err = c.Conn.Write(uint64(code), payload)
return err
}
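// read66 reads a message from the connection and, for eth66 request/response messages, also returns the request ID (zero otherwise).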
func (c *Conn) read66() (uint64, Message) {
code, rawData, _, err := c.Conn.Read()
if err != nil {
return 0, errorf("could not read from connection: %v", err)
}
var msg Message
switch int(code) {
case (Hello{}).Code():
msg = new(Hello)
case (Ping{}).Code():
msg = new(Ping)
case (Pong{}).Code():
msg = new(Pong)
case (Disconnect{}).Code():
msg = new(Disconnect)
case (Status{}).Code():
msg = new(Status)
case (GetBlockHeaders{}).Code():
ethMsg := new(eth.GetBlockHeadersPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, GetBlockHeaders(*ethMsg.GetBlockHeadersPacket)
case (BlockHeaders{}).Code():
ethMsg := new(eth.BlockHeadersPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, BlockHeaders(ethMsg.BlockHeadersPacket)
case (GetBlockBodies{}).Code():
ethMsg := new(eth.GetBlockBodiesPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, GetBlockBodies(ethMsg.GetBlockBodiesPacket)
case (BlockBodies{}).Code():
ethMsg := new(eth.BlockBodiesPacket66)
if err := rlp.DecodeBytes(rawData, ethMsg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return ethMsg.RequestId, BlockBodies(ethMsg.BlockBodiesPacket)
case (NewBlock{}).Code():
msg = new(NewBlock)
case (NewBlockHashes{}).Code():
msg = new(NewBlockHashes)
case (Transactions{}).Code():
msg = new(Transactions)
case (NewPooledTransactionHashes{}).Code():
msg = new(NewPooledTransactionHashes)
default:
msg = errorf("invalid message code: %d", code)
}
if msg != nil {
if err := rlp.DecodeBytes(rawData, msg); err != nil {
return 0, errorf("could not rlp decode message: %v", err)
}
return 0, msg
}
return 0, errorf("invalid message: %s", string(rawData))
}
// readAndServe66 serves GetBlockHeaders requests while waiting
// on another message from the node.
func (c *Conn) readAndServe66(chain *Chain, timeout time.Duration) (uint64, Message) {
start := time.Now()
for time.Since(start) < timeout {
timeout := time.Now().Add(10 * time.Second)
c.SetReadDeadline(timeout)
reqID, msg := c.read66()
switch msg := msg.(type) {
case *Ping:
c.Write(&Pong{})
case *GetBlockHeaders:
headers, err := chain.GetHeaders(*msg)
if err != nil {
return 0, errorf("could not get headers for inbound header request: %v", err)
}
if err := c.Write(headers); err != nil {
return 0, errorf("could not write to connection: %v", err)
}
default:
return reqID, msg
}
}
return 0, errorf("no message received within %v", timeout)
}
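// setupConnection66 dials the node and completes the protocol and status handshakes on eth66.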
func (s *Suite) setupConnection66(t *utesting.T) *Conn {
// create conn
sendConn := s.dial66(t)
sendConn.handshake(t)
sendConn.statusExchange66(t, s.chain)
return sendConn
}
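// testAnnounce66 sends the block announcement on sendConn and waits for it to show up on receiveConn.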
func (s *Suite) testAnnounce66(t *utesting.T, sendConn, receiveConn *Conn, blockAnnouncement *NewBlock) {
// Announce the block.
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
s.waitAnnounce66(t, receiveConn, blockAnnouncement)
}
func (s *Suite) waitAnnounce66(t *utesting.T, conn *Conn, blockAnnouncement *NewBlock) {
timeout := 20 * time.Second
_, msg := conn.readAndServe66(s.chain, timeout)
switch msg := msg.(type) {
case *NewBlock:
t.Logf("received NewBlock message: %s", pretty.Sdump(msg.Block))
assert.Equal(t,
blockAnnouncement.Block.Header(), msg.Block.Header(),
"wrong block header in announcement",
)
assert.Equal(t,
blockAnnouncement.TD, msg.TD,
"wrong TD in announcement",
)
case *NewBlockHashes:
blockHashes := *msg
t.Logf("received NewBlockHashes message: %s", pretty.Sdump(blockHashes))
assert.Equal(t, blockAnnouncement.Block.Hash(), blockHashes[0].Hash,
"wrong block hash in announcement",
)
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// waitForBlock66 waits for confirmation from the client that it has
// imported the given block.
func (c *Conn) waitForBlock66(block *types.Block) error {
defer c.SetReadDeadline(time.Time{})
timeout := time.Now().Add(20 * time.Second)
c.SetReadDeadline(timeout)
for {
req := eth.GetBlockHeadersPacket66{
RequestId: 54,
GetBlockHeadersPacket: &eth.GetBlockHeadersPacket{
Origin: eth.HashOrNumber{
Hash: block.Hash(),
},
Amount: 1,
},
}
if err := c.write66(req, GetBlockHeaders{}.Code()); err != nil {
return err
}
reqID, msg := c.read66()
// check message
switch msg := msg.(type) {
case BlockHeaders:
// check request ID
if reqID != req.RequestId {
return fmt.Errorf("request ID mismatch: wanted %d, got %d", req.RequestId, reqID)
}
if len(msg) > 0 {
return nil
}
time.Sleep(100 * time.Millisecond)
default:
return fmt.Errorf("invalid message: %s", pretty.Sdump(msg))
}
}
}
func sendSuccessfulTx66(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn := s.setupConnection66(t)
sendSuccessfulTxWithConn(t, s, tx, sendConn)
}
func sendFailingTx66(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn, recvConn := s.setupConnection66(t), s.setupConnection66(t)
sendFailingTxWithConns(t, s, tx, sendConn, recvConn)
}
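// getBlockHeaders66 writes the given GetBlockHeaders request and returns the BlockHeaders response, checking that the request ID matches expectedID.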
func (s *Suite) getBlockHeaders66(t *utesting.T, conn *Conn, req eth.Packet, expectedID uint64) BlockHeaders {
if err := conn.write66(req, GetBlockHeaders{}.Code()); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// check block headers response
reqID, msg := conn.readAndServe66(s.chain, timeout)
switch msg := msg.(type) {
case BlockHeaders:
if reqID != expectedID {
t.Fatalf("request ID mismatch: wanted %d, got %d", expectedID, reqID)
}
return msg
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
return nil
}
}
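// headersMatch asserts that each received header equals the corresponding header in the reference chain.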
func headersMatch(t *utesting.T, chain *Chain, headers BlockHeaders) {
for _, header := range headers {
num := header.Number.Uint64()
t.Logf("received header (%d): %s", num, pretty.Sdump(header.Hash()))
assert.Equal(t, chain.blocks[int(num)].Header(), header)
}
}


@ -0,0 +1,80 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"crypto/rand"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
)
// largeNumber returns a very large big.Int.
func largeNumber(megabytes int) *big.Int {
buf := make([]byte, megabytes*1024*1024)
rand.Read(buf)
bigint := new(big.Int)
bigint.SetBytes(buf)
return bigint
}
// largeBuffer returns a very large buffer.
func largeBuffer(megabytes int) []byte {
buf := make([]byte, megabytes*1024*1024)
rand.Read(buf)
return buf
}
// largeString returns a very large string.
func largeString(megabytes int) string {
buf := make([]byte, megabytes*1024*1024)
rand.Read(buf)
return hexutil.Encode(buf)
}
func largeBlock() *types.Block {
return types.NewBlockWithHeader(largeHeader())
}
// Returns a random hash
func randHash() common.Hash {
var h common.Hash
rand.Read(h[:])
return h
}
func largeHeader() *types.Header {
return &types.Header{
MixDigest: randHash(),
ReceiptHash: randHash(),
TxHash: randHash(),
Nonce: types.BlockNonce{},
Extra: []byte{},
Bloom: types.Bloom{},
GasUsed: 0,
Coinbase: common.Address{},
GasLimit: 0,
UncleHash: randHash(),
Time: 1337,
ParentHash: randHash(),
Root: randHash(),
Number: largeNumber(2),
Difficulty: largeNumber(2),
}
}


@ -0,0 +1,450 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"fmt"
"net"
"time"
"github.com/davecgh/go-spew/spew"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/rlpx"
"github.com/stretchr/testify/assert"
)
var pretty = spew.ConfigState{
Indent: " ",
DisableCapacities: true,
DisablePointerAddresses: true,
SortKeys: true,
}
var timeout = 20 * time.Second
// Suite represents a structure used to test the eth
// protocol of a node(s).
type Suite struct {
Dest *enode.Node
chain *Chain
fullChain *Chain
}
// NewSuite creates and returns a new eth-test suite that can
// be used to test the given node against the given blockchain
// data.
func NewSuite(dest *enode.Node, chainfile string, genesisfile string) (*Suite, error) {
chain, err := loadChain(chainfile, genesisfile)
if err != nil {
return nil, err
}
return &Suite{
Dest: dest,
chain: chain.Shorten(1000),
fullChain: chain,
}, nil
}
func (s *Suite) EthTests() []utesting.Test {
return []utesting.Test{
// status
{Name: "Status", Fn: s.TestStatus},
{Name: "Status_66", Fn: s.TestStatus_66},
// get block headers
{Name: "GetBlockHeaders", Fn: s.TestGetBlockHeaders},
{Name: "GetBlockHeaders_66", Fn: s.TestGetBlockHeaders_66},
{Name: "TestSimultaneousRequests_66", Fn: s.TestSimultaneousRequests_66},
{Name: "TestSameRequestID_66", Fn: s.TestSameRequestID_66},
{Name: "TestZeroRequestID_66", Fn: s.TestZeroRequestID_66},
// get block bodies
{Name: "GetBlockBodies", Fn: s.TestGetBlockBodies},
{Name: "GetBlockBodies_66", Fn: s.TestGetBlockBodies_66},
// broadcast
{Name: "Broadcast", Fn: s.TestBroadcast},
{Name: "Broadcast_66", Fn: s.TestBroadcast_66},
{Name: "TestLargeAnnounce", Fn: s.TestLargeAnnounce},
{Name: "TestLargeAnnounce_66", Fn: s.TestLargeAnnounce_66},
// malicious handshakes + status
{Name: "TestMaliciousHandshake", Fn: s.TestMaliciousHandshake},
{Name: "TestMaliciousStatus", Fn: s.TestMaliciousStatus},
{Name: "TestMaliciousHandshake_66", Fn: s.TestMaliciousHandshake_66},
{Name: "TestMaliciousStatus_66", Fn: s.TestMaliciousStatus},
// test transactions
{Name: "TestTransactions", Fn: s.TestTransaction},
{Name: "TestTransactions_66", Fn: s.TestTransaction_66},
{Name: "TestMaliciousTransactions", Fn: s.TestMaliciousTx},
{Name: "TestMaliciousTransactions_66", Fn: s.TestMaliciousTx_66},
}
}
// TestStatus attempts to connect to the given node and exchange
// a status message with it, and then check to make sure
// the chain head is correct.
func (s *Suite) TestStatus(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
// get protoHandshake
conn.handshake(t)
// get status
switch msg := conn.statusExchange(t, s.chain, nil).(type) {
case *Status:
t.Logf("got status message: %s", pretty.Sdump(msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestMaliciousStatus sends a status package with a large total difficulty.
func (s *Suite) TestMaliciousStatus(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
// get protoHandshake
conn.handshake(t)
status := &Status{
ProtocolVersion: uint32(conn.ethProtocolVersion),
NetworkID: s.chain.chainConfig.ChainID.Uint64(),
TD: largeNumber(2),
Head: s.chain.blocks[s.chain.Len()-1].Hash(),
Genesis: s.chain.blocks[0].Hash(),
ForkID: s.chain.ForkID(),
}
// get status
switch msg := conn.statusExchange(t, s.chain, status).(type) {
case *Status:
t.Logf("%+v\n", msg)
default:
t.Fatalf("expected status, got: %#v ", msg)
}
// wait for disconnect
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
return
default:
t.Fatalf("expected disconnect, got: %s", pretty.Sdump(msg))
}
}
// TestGetBlockHeaders tests whether the given node can respond to
// a `GetBlockHeaders` request and whether the response is accurate.
func (s *Suite) TestGetBlockHeaders(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
conn.handshake(t)
conn.statusExchange(t, s.chain, nil)
// get block headers
req := &GetBlockHeaders{
Origin: eth.HashOrNumber{
Hash: s.chain.blocks[1].Hash(),
},
Amount: 2,
Skip: 1,
Reverse: false,
}
if err := conn.Write(req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *BlockHeaders:
headers := *msg
for _, header := range headers {
num := header.Number.Uint64()
t.Logf("received header (%d): %s", num, pretty.Sdump(header.Hash()))
assert.Equal(t, s.chain.blocks[int(num)].Header(), header)
}
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestGetBlockBodies tests whether the given node can respond to
// a `GetBlockBodies` request and whether the response is accurate.
func (s *Suite) TestGetBlockBodies(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
conn.handshake(t)
conn.statusExchange(t, s.chain, nil)
// create block bodies request
req := &GetBlockBodies{
s.chain.blocks[54].Hash(),
s.chain.blocks[75].Hash(),
}
if err := conn.Write(req); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *BlockBodies:
t.Logf("received %d block bodies", len(*msg))
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// TestBroadcast tests whether a block announcement is correctly
// propagated to the given node's peer(s).
func (s *Suite) TestBroadcast(t *utesting.T) {
sendConn, receiveConn := s.setupConnection(t), s.setupConnection(t)
nextBlock := len(s.chain.blocks)
blockAnnouncement := &NewBlock{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
}
s.testAnnounce(t, sendConn, receiveConn, blockAnnouncement)
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock(s.chain.Head()); err != nil {
t.Fatal(err)
}
}
// TestMaliciousHandshake tries to send malicious data during the handshake.
func (s *Suite) TestMaliciousHandshake(t *utesting.T) {
conn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
// write hello to client
pub0 := crypto.FromECDSAPub(&conn.ourKey.PublicKey)[1:]
handshakes := []*Hello{
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 64},
},
ID: pub0,
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: append(pub0, byte(0)),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: append(pub0, pub0...),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
},
ID: largeBuffer(2),
},
{
Version: 5,
Caps: []p2p.Cap{
{Name: largeString(2), Version: 64},
},
ID: largeBuffer(2),
},
}
for i, handshake := range handshakes {
t.Logf("Testing malicious handshake %v\n", i)
// Init the handshake
if err := conn.Write(handshake); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// check that the peer disconnected
timeout := 20 * time.Second
// Read up to two messages: the peer's own hello may arrive before the disconnect
for i := 0; i < 2; i++ {
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
case *Hello:
// Hellos are sent concurrently, so ignore them
continue
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
// Dial for the next round
conn, err = s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
}
}
// TestLargeAnnounce tests the announcement mechanism with a large block.
func (s *Suite) TestLargeAnnounce(t *utesting.T) {
nextBlock := len(s.chain.blocks)
blocks := []*NewBlock{
{
Block: largeBlock(),
TD: s.fullChain.TD(nextBlock + 1),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: largeNumber(2),
},
{
Block: largeBlock(),
TD: largeNumber(2),
},
{
Block: s.fullChain.blocks[nextBlock],
TD: s.fullChain.TD(nextBlock + 1),
},
}
for i, blockAnnouncement := range blocks[0:3] {
t.Logf("Testing malicious announcement: %v\n", i)
sendConn := s.setupConnection(t)
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// Invalid announcement, check that peer disconnected
switch msg := sendConn.ReadAndServe(s.chain, timeout).(type) {
case *Disconnect:
case *Error:
break
default:
t.Fatalf("unexpected: %s wanted disconnect", pretty.Sdump(msg))
}
}
// Test the last block as a valid block
sendConn := s.setupConnection(t)
receiveConn := s.setupConnection(t)
s.testAnnounce(t, sendConn, receiveConn, blocks[3])
// update test suite chain
s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock])
// wait for client to update its chain
if err := receiveConn.waitForBlock(s.fullChain.blocks[nextBlock]); err != nil {
t.Fatal(err)
}
}
func (s *Suite) testAnnounce(t *utesting.T, sendConn, receiveConn *Conn, blockAnnouncement *NewBlock) {
// Announce the block.
if err := sendConn.Write(blockAnnouncement); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
s.waitAnnounce(t, receiveConn, blockAnnouncement)
}
func (s *Suite) waitAnnounce(t *utesting.T, conn *Conn, blockAnnouncement *NewBlock) {
timeout := 20 * time.Second
switch msg := conn.ReadAndServe(s.chain, timeout).(type) {
case *NewBlock:
t.Logf("received NewBlock message: %s", pretty.Sdump(msg.Block))
assert.Equal(t,
blockAnnouncement.Block.Header(), msg.Block.Header(),
"wrong block header in announcement",
)
assert.Equal(t,
blockAnnouncement.TD, msg.TD,
"wrong TD in announcement",
)
case *NewBlockHashes:
message := *msg
t.Logf("received NewBlockHashes message: %s", pretty.Sdump(message))
assert.Equal(t, blockAnnouncement.Block.Hash(), message[0].Hash,
"wrong block hash in announcement",
)
default:
t.Fatalf("unexpected: %s", pretty.Sdump(msg))
}
}
func (s *Suite) setupConnection(t *utesting.T) *Conn {
// create conn
sendConn, err := s.dial()
if err != nil {
t.Fatalf("could not dial: %v", err)
}
sendConn.handshake(t)
sendConn.statusExchange(t, s.chain, nil)
return sendConn
}
// dial attempts to dial the given node and perform a handshake,
// returning the created Conn if successful.
func (s *Suite) dial() (*Conn, error) {
var conn Conn
// dial
fd, err := net.Dial("tcp", fmt.Sprintf("%v:%d", s.Dest.IP(), s.Dest.TCP()))
if err != nil {
return nil, err
}
conn.Conn = rlpx.NewConn(fd, s.Dest.Pubkey())
// do encHandshake
conn.ourKey, _ = crypto.GenerateKey()
_, err = conn.Handshake(conn.ourKey)
if err != nil {
return nil, err
}
// set default p2p capabilities
conn.caps = []p2p.Cap{
{Name: "eth", Version: 64},
{Name: "eth", Version: 65},
}
return &conn, nil
}
func (s *Suite) TestTransaction(t *utesting.T) {
tests := []*types.Transaction{
getNextTxFromChain(t, s),
unknownTx(t, s),
}
for i, tx := range tests {
t.Logf("Testing tx propagation: %v\n", i)
sendSuccessfulTx(t, s, tx)
}
}
func (s *Suite) TestMaliciousTx(t *utesting.T) {
tests := []*types.Transaction{
getOldTxFromChain(t, s),
invalidNonceTx(t, s),
hugeAmount(t, s),
hugeGasPrice(t, s),
hugeData(t, s),
}
for i, tx := range tests {
t.Logf("Testing malicious tx propagation: %v\n", i)
sendFailingTx(t, s, tx)
}
}
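The suite above only declares the tests; it is driven by the devp2p tool, which feeds the returned list into utesting. As a rough illustration of that wiring, the sketch below (not part of this change) shows a hypothetical runner living next to the suite — the package is internal, so an external import would not work — and it assumes chain.rlp and genesis.json files are available on disk.

package ethtest

import (
	"fmt"
	"os"

	"github.com/ethereum/go-ethereum/internal/utesting"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

// runEthSuite is a hypothetical helper: parse the target enode URL, build the
// suite from the chain/genesis files and hand the tests to utesting.
func runEthSuite(nodeURL, chainFile, genesisFile string) error {
	node, err := enode.Parse(enode.ValidSchemes, nodeURL)
	if err != nil {
		return err
	}
	suite, err := NewSuite(node, chainFile, genesisFile)
	if err != nil {
		return err
	}
	results := utesting.RunTests(suite.EthTests(), os.Stdout)
	if fails := utesting.CountFailures(results); fails > 0 {
		return fmt.Errorf("%d of %d tests failed", fails, len(results))
	}
	return nil
}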

Binary file not shown.


@ -0,0 +1,26 @@
{
"config": {
"chainId": 19763,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"ethash": {}
},
"nonce": "0xdeadbeefdeadbeef",
"timestamp": "0x0",
"extraData": "0x0000000000000000000000000000000000000000000000000000000000000000",
"gasLimit": "0x80000000",
"difficulty": "0x20000",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
"alloc": {
"71562b71999873db5b286df957af199ec94617f7": {
"balance": "0xffffffffffffffffffffffffff"
}
},
"number": "0x0",
"gasUsed": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
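The genesis spec above is plain JSON and decodes directly into core.Genesis. The following standalone sketch (not part of this change; the file path is an assumption) shows how it can be loaded and inspected:

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"

	"github.com/ethereum/go-ethereum/core"
)

func main() {
	// Load the genesis spec shown above (genesis.json is an assumed path).
	raw, err := ioutil.ReadFile("genesis.json")
	if err != nil {
		panic(err)
	}
	var gen core.Genesis
	if err := json.Unmarshal(raw, &gen); err != nil {
		panic(err)
	}
	fmt.Println("chain id:   ", gen.Config.ChainID)
	fmt.Println("difficulty: ", gen.Difficulty)
	fmt.Println("allocations:", len(gen.Alloc))
}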

Binary file not shown.


@ -0,0 +1,189 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
)
//var faucetAddr = common.HexToAddress("0x71562b71999873DB5b286dF957af199Ec94617F7")
var faucetKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
func sendSuccessfulTx(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn := s.setupConnection(t)
sendSuccessfulTxWithConn(t, s, tx, sendConn)
}
func sendSuccessfulTxWithConn(t *utesting.T, s *Suite, tx *types.Transaction, sendConn *Conn) {
t.Logf("sending tx: %v %v %v\n", tx.Hash().String(), tx.GasPrice(), tx.Gas())
// Send the transaction
if err := sendConn.Write(&Transactions{tx}); err != nil {
t.Fatal(err)
}
time.Sleep(100 * time.Millisecond)
recvConn := s.setupConnection(t)
// Wait for the transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *Transactions:
recTxs := *msg
for _, gotTx := range recTxs {
if gotTx.Hash() == tx.Hash() {
// Ok
return
}
}
t.Fatalf("missing transaction: got %v missing %v", recTxs, tx.Hash())
case *NewPooledTransactionHashes:
txHashes := *msg
for _, gotHash := range txHashes {
if gotHash == tx.Hash() {
return
}
}
t.Fatalf("missing transaction announcement: got %v missing %v", txHashes, tx.Hash())
default:
t.Fatalf("unexpected message in sendSuccessfulTx: %s", pretty.Sdump(msg))
}
}
func sendFailingTx(t *utesting.T, s *Suite, tx *types.Transaction) {
sendConn, recvConn := s.setupConnection(t), s.setupConnection(t)
sendFailingTxWithConns(t, s, tx, sendConn, recvConn)
}
func sendFailingTxWithConns(t *utesting.T, s *Suite, tx *types.Transaction, sendConn, recvConn *Conn) {
// Wait for a transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *NewPooledTransactionHashes:
break
default:
t.Logf("unexpected message, logging: %v", pretty.Sdump(msg))
}
// Send the transaction
if err := sendConn.Write(&Transactions{tx}); err != nil {
t.Fatal(err)
}
// Wait for another transaction announcement
switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) {
case *Transactions:
t.Fatalf("Received unexpected transaction announcement: %v", msg)
case *NewPooledTransactionHashes:
t.Fatalf("Received unexpected pooledTx announcement: %v", msg)
case *Error:
// Transaction should not be announced -> wait for timeout
return
default:
t.Fatalf("unexpected message in sendFailingTx: %s", pretty.Sdump(msg))
}
}
func unknownTx(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce()+1, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
}
func getNextTxFromChain(t *utesting.T, s *Suite) *types.Transaction {
// Get a new transaction
var tx *types.Transaction
for _, blocks := range s.fullChain.blocks[s.chain.Len():] {
txs := blocks.Transactions()
if txs.Len() != 0 {
tx = txs[0]
break
}
}
if tx == nil {
t.Fatal("could not find transaction")
}
return tx
}
func getOldTxFromChain(t *utesting.T, s *Suite) *types.Transaction {
var tx *types.Transaction
for _, blocks := range s.fullChain.blocks[:s.chain.Len()-1] {
txs := blocks.Transactions()
if txs.Len() != 0 {
tx = txs[0]
break
}
}
if tx == nil {
t.Fatal("could not find transaction")
}
return tx
}
func invalidNonceTx(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce()-2, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
}
func hugeAmount(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
amount := largeNumber(2)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, amount, tx.Gas(), tx.GasPrice(), tx.Data())
return signWithFaucet(t, txNew)
}
func hugeGasPrice(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
gasPrice := largeNumber(2)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), gasPrice, tx.Data())
return signWithFaucet(t, txNew)
}
func hugeData(t *utesting.T, s *Suite) *types.Transaction {
tx := getNextTxFromChain(t, s)
var to common.Address
if tx.To() != nil {
to = *tx.To()
}
txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), tx.GasPrice(), largeBuffer(2))
return signWithFaucet(t, txNew)
}
func signWithFaucet(t *utesting.T, tx *types.Transaction) *types.Transaction {
signer := types.HomesteadSigner{}
signedTx, err := types.SignTx(tx, signer, faucetKey)
if err != nil {
t.Fatalf("could not sign tx: %v\n", err)
}
return signedTx
}
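signWithFaucet uses the pre-EIP-155 HomesteadSigner, which the test chain accepts because the signatures carry no replay protection. For comparison, a hypothetical variant using the EIP-155 signer would look like the sketch below (chain ID 19763 comes from the genesis file above; the helper name is made up and math/big would need to be imported):

// signWithFaucetEIP155 is a hypothetical alternative using the
// replay-protected EIP-155 signer instead of HomesteadSigner.
func signWithFaucetEIP155(t *utesting.T, tx *types.Transaction) *types.Transaction {
	signer := types.NewEIP155Signer(big.NewInt(19763)) // chain ID from genesis.json
	signedTx, err := types.SignTx(tx, signer, faucetKey)
	if err != nil {
		t.Fatalf("could not sign tx: %v", err)
	}
	return signedTx
}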


@ -0,0 +1,341 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethtest
import (
"crypto/ecdsa"
"fmt"
"reflect"
"time"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/rlpx"
"github.com/ethereum/go-ethereum/rlp"
)
type Message interface {
Code() int
}
type Error struct {
err error
}
func (e *Error) Unwrap() error { return e.err }
func (e *Error) Error() string { return e.err.Error() }
func (e *Error) Code() int { return -1 }
func (e *Error) String() string { return e.Error() }
func errorf(format string, args ...interface{}) *Error {
return &Error{fmt.Errorf(format, args...)}
}
// Hello is the RLP structure of the protocol handshake.
type Hello struct {
Version uint64
Name string
Caps []p2p.Cap
ListenPort uint64
ID []byte // secp256k1 public key
// Ignore additional fields (for forward compatibility).
Rest []rlp.RawValue `rlp:"tail"`
}
func (h Hello) Code() int { return 0x00 }
// Disconnect is the RLP structure for a disconnect message.
type Disconnect struct {
Reason p2p.DiscReason
}
func (d Disconnect) Code() int { return 0x01 }
type Ping struct{}
func (p Ping) Code() int { return 0x02 }
type Pong struct{}
func (p Pong) Code() int { return 0x03 }
// Status is the network packet for the status message for eth/64 and later.
type Status eth.StatusPacket
func (s Status) Code() int { return 16 }
// NewBlockHashes is the network packet for the block announcements.
type NewBlockHashes eth.NewBlockHashesPacket
func (nbh NewBlockHashes) Code() int { return 17 }
type Transactions eth.TransactionsPacket
func (t Transactions) Code() int { return 18 }
// GetBlockHeaders represents a block header query.
type GetBlockHeaders eth.GetBlockHeadersPacket
func (g GetBlockHeaders) Code() int { return 19 }
type BlockHeaders eth.BlockHeadersPacket
func (bh BlockHeaders) Code() int { return 20 }
// GetBlockBodies represents a GetBlockBodies request
type GetBlockBodies eth.GetBlockBodiesPacket
func (gbb GetBlockBodies) Code() int { return 21 }
// BlockBodies is the network packet for block content distribution.
type BlockBodies eth.BlockBodiesPacket
func (bb BlockBodies) Code() int { return 22 }
// NewBlock is the network packet for the block propagation message.
type NewBlock eth.NewBlockPacket
func (nb NewBlock) Code() int { return 23 }
// NewPooledTransactionHashes is the network packet for the tx hash propagation message.
type NewPooledTransactionHashes eth.NewPooledTransactionHashesPacket
func (nb NewPooledTransactionHashes) Code() int { return 24 }
// Conn represents an individual connection with a peer
type Conn struct {
*rlpx.Conn
ourKey *ecdsa.PrivateKey
ethProtocolVersion uint
caps []p2p.Cap
}
func (c *Conn) Read() Message {
code, rawData, _, err := c.Conn.Read()
if err != nil {
return errorf("could not read from connection: %v", err)
}
var msg Message
switch int(code) {
case (Hello{}).Code():
msg = new(Hello)
case (Ping{}).Code():
msg = new(Ping)
case (Pong{}).Code():
msg = new(Pong)
case (Disconnect{}).Code():
msg = new(Disconnect)
case (Status{}).Code():
msg = new(Status)
case (GetBlockHeaders{}).Code():
msg = new(GetBlockHeaders)
case (BlockHeaders{}).Code():
msg = new(BlockHeaders)
case (GetBlockBodies{}).Code():
msg = new(GetBlockBodies)
case (BlockBodies{}).Code():
msg = new(BlockBodies)
case (NewBlock{}).Code():
msg = new(NewBlock)
case (NewBlockHashes{}).Code():
msg = new(NewBlockHashes)
case (Transactions{}).Code():
msg = new(Transactions)
case (NewPooledTransactionHashes{}).Code():
msg = new(NewPooledTransactionHashes)
default:
return errorf("invalid message code: %d", code)
}
// if message is devp2p, decode here
if err := rlp.DecodeBytes(rawData, msg); err != nil {
return errorf("could not rlp decode message: %v", err)
}
return msg
}
// ReadAndServe serves GetBlockHeaders requests while waiting
// on another message from the node.
func (c *Conn) ReadAndServe(chain *Chain, timeout time.Duration) Message {
start := time.Now()
for time.Since(start) < timeout {
timeout := time.Now().Add(10 * time.Second)
c.SetReadDeadline(timeout)
switch msg := c.Read().(type) {
case *Ping:
c.Write(&Pong{})
case *GetBlockHeaders:
req := *msg
headers, err := chain.GetHeaders(req)
if err != nil {
return errorf("could not get headers for inbound header request: %v", err)
}
if err := c.Write(headers); err != nil {
return errorf("could not write to connection: %v", err)
}
default:
return msg
}
}
return errorf("no message received within %v", timeout)
}
func (c *Conn) Write(msg Message) error {
// encode the message payload as RLP and send it with its message code
var (
payload []byte
err error
)
payload, err = rlp.EncodeToBytes(msg)
if err != nil {
return err
}
_, err = c.Conn.Write(uint64(msg.Code()), payload)
return err
}
// handshake checks to make sure a `HELLO` is received.
func (c *Conn) handshake(t *utesting.T) Message {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(10 * time.Second))
// write hello to client
pub0 := crypto.FromECDSAPub(&c.ourKey.PublicKey)[1:]
ourHandshake := &Hello{
Version: 5,
Caps: c.caps,
ID: pub0,
}
if err := c.Write(ourHandshake); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
// read hello from client
switch msg := c.Read().(type) {
case *Hello:
// set snappy if version is at least 5
if msg.Version >= 5 {
c.SetSnappy(true)
}
c.negotiateEthProtocol(msg.Caps)
if c.ethProtocolVersion == 0 {
t.Fatalf("unexpected eth protocol version")
}
return msg
default:
t.Fatalf("bad handshake: %#v", msg)
return nil
}
}
// negotiateEthProtocol sets the Conn's eth protocol version to the
// highest eth protocol version advertised by the peer (at most eth/65).
func (c *Conn) negotiateEthProtocol(caps []p2p.Cap) {
var highestEthVersion uint
for _, capability := range caps {
if capability.Name != "eth" {
continue
}
if capability.Version > highestEthVersion && capability.Version <= 65 {
highestEthVersion = capability.Version
}
}
c.ethProtocolVersion = highestEthVersion
}
// statusExchange performs a `Status` message exchange with the given
// node.
func (c *Conn) statusExchange(t *utesting.T, chain *Chain, status *Status) Message {
defer c.SetDeadline(time.Time{})
c.SetDeadline(time.Now().Add(20 * time.Second))
// read status message from client
var message Message
loop:
for {
switch msg := c.Read().(type) {
case *Status:
if msg.Head != chain.blocks[chain.Len()-1].Hash() {
t.Fatalf("wrong head block in status: %s", msg.Head.String())
}
if msg.TD.Cmp(chain.TD(chain.Len())) != 0 {
t.Fatalf("wrong TD in status: %v", msg.TD)
}
if !reflect.DeepEqual(msg.ForkID, chain.ForkID()) {
t.Fatalf("wrong fork ID in status: %v", msg.ForkID)
}
message = msg
break loop
case *Disconnect:
t.Fatalf("disconnect received: %v", msg.Reason)
case *Ping:
c.Write(&Pong{}) // TODO (renaynay): in the future, this should be an error
// (PINGs should not be a response upon fresh connection)
default:
t.Fatalf("bad status message: %s", pretty.Sdump(msg))
}
}
// make sure eth protocol version is set for negotiation
if c.ethProtocolVersion == 0 {
t.Fatalf("eth protocol version must be set in Conn")
}
if status == nil {
// write status message to client
status = &Status{
ProtocolVersion: uint32(c.ethProtocolVersion),
NetworkID: chain.chainConfig.ChainID.Uint64(),
TD: chain.TD(chain.Len()),
Head: chain.blocks[chain.Len()-1].Hash(),
Genesis: chain.blocks[0].Hash(),
ForkID: chain.ForkID(),
}
}
if err := c.Write(status); err != nil {
t.Fatalf("could not write to connection: %v", err)
}
return message
}
// waitForBlock waits for confirmation from the client that it has
// imported the given block.
func (c *Conn) waitForBlock(block *types.Block) error {
defer c.SetReadDeadline(time.Time{})
timeout := time.Now().Add(20 * time.Second)
c.SetReadDeadline(timeout)
for {
req := &GetBlockHeaders{Origin: eth.HashOrNumber{Hash: block.Hash()}, Amount: 1}
if err := c.Write(req); err != nil {
return err
}
switch msg := c.Read().(type) {
case *BlockHeaders:
if len(*msg) > 0 {
return nil
}
time.Sleep(100 * time.Millisecond)
default:
return fmt.Errorf("invalid message: %s", pretty.Sdump(msg))
}
}
}
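The Conn type gives the suite raw, message-level access to a peer. As a usage sketch (not part of this change, helper name hypothetical), the loop below keeps a freshly dialed connection alive by answering PINGs and hands back the first other message — essentially ReadAndServe without the header serving and the timeout:

// keepAlive is a hypothetical helper: answer PINGs until any other message
// (or an *Error from a failed read) arrives, then return it.
func keepAlive(c *Conn) Message {
	for {
		switch msg := c.Read().(type) {
		case *Ping:
			c.Write(&Pong{}) // reply so the peer keeps the session open
		default:
			return msg
		}
	}
}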


@ -0,0 +1,467 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v4test
import (
"bytes"
"crypto/rand"
"fmt"
"net"
"reflect"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p/discover/v4wire"
)
const (
expiration = 20 * time.Second
wrongPacket = 66
macSize = 256 / 8
)
var (
// Remote node under test
Remote string
// IP where the first tester is listening, port will be assigned
Listen1 string = "127.0.0.1"
// IP where the second tester is listening, port will be assigned
// Before running the test, you may have to `sudo ifconfig lo0 add 127.0.0.2` (on MacOS at least)
Listen2 string = "127.0.0.2"
)
type pingWithJunk struct {
Version uint
From, To v4wire.Endpoint
Expiration uint64
JunkData1 uint
JunkData2 []byte
}
func (req *pingWithJunk) Name() string { return "PING/v4" }
func (req *pingWithJunk) Kind() byte { return v4wire.PingPacket }
type pingWrongType struct {
Version uint
From, To v4wire.Endpoint
Expiration uint64
}
func (req *pingWrongType) Name() string { return "WRONG/v4" }
func (req *pingWrongType) Kind() byte { return wrongPacket }
func futureExpiration() uint64 {
return uint64(time.Now().Add(expiration).Unix())
}
// This test just sends a PING packet and expects a response.
func BasicPing(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// checkPong verifies that reply is a valid PONG matching the given ping hash.
func (te *testenv) checkPong(reply v4wire.Packet, pingHash []byte) error {
if reply == nil || reply.Kind() != v4wire.PongPacket {
return fmt.Errorf("expected PONG reply, got %v", reply)
}
pong := reply.(*v4wire.Pong)
if !bytes.Equal(pong.ReplyTok, pingHash) {
return fmt.Errorf("PONG reply token mismatch: got %x, want %x", pong.ReplyTok, pingHash)
}
wantEndpoint := te.localEndpoint(te.l1)
if !reflect.DeepEqual(pong.To, wantEndpoint) {
return fmt.Errorf("PONG 'to' endpoint mismatch: got %+v, want %+v", pong.To, wantEndpoint)
}
if v4wire.Expired(pong.Expiration) {
return fmt.Errorf("PONG is expired (%v)", pong.Expiration)
}
return nil
}
// This test sends a PING packet with wrong 'to' field and expects a PONG response.
func PingWrongTo(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: wrongEndpoint,
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with wrong 'from' field and expects a PONG response.
func PingWrongFrom(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: wrongEndpoint,
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with additional data at the end and expects a PONG
// response. The remote node should respond because EIP-8 mandates ignoring additional
// trailing data.
func PingExtraData(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
pingHash := te.send(te.l1, &pingWithJunk{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
JunkData1: 42,
JunkData2: []byte{9, 8, 7, 6, 5, 4, 3, 2, 1},
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with additional data and wrong 'from' field
// and expects a PONG response.
func PingExtraDataWrongFrom(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
req := pingWithJunk{
Version: 4,
From: wrongEndpoint,
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
JunkData1: 42,
JunkData2: []byte{9, 8, 7, 6, 5, 4, 3, 2, 1},
}
pingHash := te.send(te.l1, &req)
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with an expiration in the past.
// The remote node should not respond.
func PingPastExpiration(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: -futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if reply != nil {
t.Fatal("Expected no reply, got", reply)
}
}
// This test sends an invalid packet. The remote node should not respond.
func WrongPacketType(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
te.send(te.l1, &pingWrongType{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if reply != nil {
t.Fatal("Expected no reply, got", reply)
}
}
// This test verifies that the default behaviour of ignoring 'from' fields is unaffected by
// the bonding process. After bonding, it pings the target with a different from endpoint.
func BondThenPingWithWrongFrom(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: wrongEndpoint,
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test just sends FINDNODE. The remote node should not reply
// because the endpoint proof has not completed.
func FindnodeWithoutEndpointProof(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
req := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(req.Target[:])
te.send(te.l1, &req)
reply, _, _ := te.read(te.l1)
if reply != nil {
t.Fatal("Expected no response, got", reply)
}
}
// BasicFindnode sends a FINDNODE request after performing the endpoint
// proof. The remote node should respond.
func BasicFindnode(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
findnode := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l1, &findnode)
reply, _, err := te.read(te.l1)
if err != nil {
t.Fatal("read find nodes", err)
}
if reply.Kind() != v4wire.NeighborsPacket {
t.Fatal("Expected neighbors, got", reply.Name())
}
}
// This test sends an unsolicited NEIGHBORS packet after the endpoint proof, then sends
// FINDNODE to read the remote table. The remote node should not return the node contained
// in the unsolicited NEIGHBORS packet.
func UnsolicitedNeighbors(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
// Send unsolicited NEIGHBORS response.
fakeKey, _ := crypto.GenerateKey()
encFakeKey := v4wire.EncodePubkey(&fakeKey.PublicKey)
neighbors := v4wire.Neighbors{
Expiration: futureExpiration(),
Nodes: []v4wire.Node{{
ID: encFakeKey,
IP: net.IP{1, 2, 3, 4},
UDP: 30303,
TCP: 30303,
}},
}
te.send(te.l1, &neighbors)
// Check if the remote node included the fake node.
te.send(te.l1, &v4wire.Findnode{
Expiration: futureExpiration(),
Target: encFakeKey,
})
reply, _, err := te.read(te.l1)
if err != nil {
t.Fatal("read find nodes", err)
}
if reply.Kind() != v4wire.NeighborsPacket {
t.Fatal("Expected neighbors, got", reply.Name())
}
nodes := reply.(*v4wire.Neighbors).Nodes
if contains(nodes, encFakeKey) {
t.Fatal("neighbors response contains node from earlier unsolicited neighbors response")
}
}
// This test sends FINDNODE with an expiration timestamp in the past.
// The remote node should not respond.
func FindnodePastExpiration(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
findnode := v4wire.Findnode{Expiration: -futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l1, &findnode)
for {
reply, _, _ := te.read(te.l1)
if reply == nil {
return
} else if reply.Kind() == v4wire.NeighborsPacket {
t.Fatal("Unexpected NEIGHBORS response for expired FINDNODE request")
}
}
}
// bond performs the endpoint proof with the remote node.
func bond(t *utesting.T, te *testenv) {
te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
var gotPing, gotPong bool
for !gotPing || !gotPong {
req, hash, err := te.read(te.l1)
if err != nil {
t.Fatal(err)
}
switch req.(type) {
case *v4wire.Ping:
te.send(te.l1, &v4wire.Pong{
To: te.remoteEndpoint(),
ReplyTok: hash,
Expiration: futureExpiration(),
})
gotPing = true
case *v4wire.Pong:
// TODO: maybe verify pong data here
gotPong = true
}
}
}
// This test attempts to perform a traffic amplification attack against a
// 'victim' endpoint using FINDNODE. In this attack scenario, the attacker
// attempts to complete the endpoint proof non-interactively by sending a PONG
// with mismatching reply token from the 'victim' endpoint. The attack works if
// the remote node does not verify the PONG reply token field correctly. The
// attacker could then perform traffic amplification by sending many FINDNODE
// requests to the discovery node, which would reply to the 'victim' address.
func FindnodeAmplificationInvalidPongHash(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
// Send PING to start endpoint verification.
te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
var gotPing, gotPong bool
for !gotPing || !gotPong {
req, _, err := te.read(te.l1)
if err != nil {
t.Fatal(err)
}
switch req.(type) {
case *v4wire.Ping:
// Send PONG from this node ID, but with invalid ReplyTok.
te.send(te.l1, &v4wire.Pong{
To: te.remoteEndpoint(),
ReplyTok: make([]byte, macSize),
Expiration: futureExpiration(),
})
gotPing = true
case *v4wire.Pong:
gotPong = true
}
}
// Now send FINDNODE. The remote node should not respond because our
// PONG did not reference the PING hash.
findnode := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l1, &findnode)
// If we receive a NEIGHBORS response, the attack worked and the test fails.
reply, _, _ := te.read(te.l1)
if reply != nil && reply.Kind() == v4wire.NeighborsPacket {
t.Error("Got neighbors")
}
}
// This test attempts to perform a traffic amplification attack using FINDNODE.
// The attack works if the remote node does not verify the IP address of FINDNODE
// against the endpoint verification proof done by PING/PONG.
func FindnodeAmplificationWrongIP(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
// Do the endpoint proof from the l1 IP.
bond(t, te)
// Now send FINDNODE from the same node ID, but different IP address.
// The remote node should not respond.
findnode := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l2, &findnode)
// If we receive a NEIGHBORS response, the attack worked and the test fails.
reply, _, _ := te.read(te.l2)
if reply != nil {
t.Error("Got NEIGHORS response for FINDNODE from wrong IP")
}
}
var AllTests = []utesting.Test{
{Name: "Ping/Basic", Fn: BasicPing},
{Name: "Ping/WrongTo", Fn: PingWrongTo},
{Name: "Ping/WrongFrom", Fn: PingWrongFrom},
{Name: "Ping/ExtraData", Fn: PingExtraData},
{Name: "Ping/ExtraDataWrongFrom", Fn: PingExtraDataWrongFrom},
{Name: "Ping/PastExpiration", Fn: PingPastExpiration},
{Name: "Ping/WrongPacketType", Fn: WrongPacketType},
{Name: "Ping/BondThenPingWithWrongFrom", Fn: BondThenPingWithWrongFrom},
{Name: "Findnode/WithoutEndpointProof", Fn: FindnodeWithoutEndpointProof},
{Name: "Findnode/BasicFindnode", Fn: BasicFindnode},
{Name: "Findnode/UnsolicitedNeighbors", Fn: UnsolicitedNeighbors},
{Name: "Findnode/PastExpiration", Fn: FindnodePastExpiration},
{Name: "Amplification/InvalidPongHash", Fn: FindnodeAmplificationInvalidPongHash},
{Name: "Amplification/WrongIP", Fn: FindnodeAmplificationWrongIP},
}
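AllTests is normally consumed by the devp2p command. A minimal sketch of driving it directly (not part of this change; it must live inside go-ethereum since the package is internal, and the enode URL is a placeholder):

package main

import (
	"os"

	"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v4test"
	"github.com/ethereum/go-ethereum/internal/utesting"
)

func main() {
	// Point the suite at the node under test; all three values are placeholders.
	v4test.Remote = "enode://<node-pubkey>@127.0.0.1:30303"
	v4test.Listen1 = "127.0.0.1"
	v4test.Listen2 = "127.0.0.2"
	utesting.RunTests(v4test.AllTests, os.Stdout)
}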


@ -0,0 +1,123 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v4test
import (
"crypto/ecdsa"
"fmt"
"net"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/discover/v4wire"
"github.com/ethereum/go-ethereum/p2p/enode"
)
const waitTime = 300 * time.Millisecond
type testenv struct {
l1, l2 net.PacketConn
key *ecdsa.PrivateKey
remote *enode.Node
remoteAddr *net.UDPAddr
}
func newTestEnv(remote string, listen1, listen2 string) *testenv {
l1, err := net.ListenPacket("udp", fmt.Sprintf("%v:0", listen1))
if err != nil {
panic(err)
}
l2, err := net.ListenPacket("udp", fmt.Sprintf("%v:0", listen2))
if err != nil {
panic(err)
}
key, err := crypto.GenerateKey()
if err != nil {
panic(err)
}
node, err := enode.Parse(enode.ValidSchemes, remote)
if err != nil {
panic(err)
}
if node.IP() == nil || node.UDP() == 0 {
var ip net.IP
var tcpPort, udpPort int
if ip = node.IP(); ip == nil {
ip = net.ParseIP("127.0.0.1")
}
if tcpPort = node.TCP(); tcpPort == 0 {
tcpPort = 30303
}
if udpPort = node.UDP(); udpPort == 0 {
udpPort = 30303
}
node = enode.NewV4(node.Pubkey(), ip, tcpPort, udpPort)
}
addr := &net.UDPAddr{IP: node.IP(), Port: node.UDP()}
return &testenv{l1, l2, key, node, addr}
}
func (te *testenv) close() {
te.l1.Close()
te.l2.Close()
}
func (te *testenv) send(c net.PacketConn, req v4wire.Packet) []byte {
packet, hash, err := v4wire.Encode(te.key, req)
if err != nil {
panic(fmt.Errorf("can't encode %v packet: %v", req.Name(), err))
}
if _, err := c.WriteTo(packet, te.remoteAddr); err != nil {
panic(fmt.Errorf("can't send %v: %v", req.Name(), err))
}
return hash
}
func (te *testenv) read(c net.PacketConn) (v4wire.Packet, []byte, error) {
buf := make([]byte, 2048)
if err := c.SetReadDeadline(time.Now().Add(waitTime)); err != nil {
return nil, nil, err
}
n, _, err := c.ReadFrom(buf)
if err != nil {
return nil, nil, err
}
p, _, hash, err := v4wire.Decode(buf[:n])
return p, hash, err
}
func (te *testenv) localEndpoint(c net.PacketConn) v4wire.Endpoint {
addr := c.LocalAddr().(*net.UDPAddr)
return v4wire.Endpoint{
IP: addr.IP.To4(),
UDP: uint16(addr.Port),
TCP: 0,
}
}
func (te *testenv) remoteEndpoint() v4wire.Endpoint {
return v4wire.NewEndpoint(te.remoteAddr, 0)
}
func contains(ns []v4wire.Node, key v4wire.Pubkey) bool {
for _, n := range ns {
if n.ID == key {
return true
}
}
return false
}
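These helpers are also usable outside the suite: create a testenv, send a packet, read the reply. A small hedged sketch (function name hypothetical) of a single PING round trip:

// pingOnce is a hypothetical helper performing one PING round trip against
// the node given by remoteURL.
func pingOnce(remoteURL string) (v4wire.Packet, error) {
	te := newTestEnv(remoteURL, "127.0.0.1", "127.0.0.2")
	defer te.close()
	te.send(te.l1, &v4wire.Ping{
		Version:    4,
		From:       te.localEndpoint(te.l1),
		To:         te.remoteEndpoint(),
		Expiration: futureExpiration(),
	})
	reply, _, err := te.read(te.l1)
	return reply, err
}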


@ -0,0 +1,377 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v5test
import (
"bytes"
"net"
"sync"
"time"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p/discover/v5wire"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/netutil"
)
// Suite is the discv5 test suite.
type Suite struct {
Dest *enode.Node
Listen1, Listen2 string // listening addresses
}
func (s *Suite) listen1(log logger) (*conn, net.PacketConn) {
c := newConn(s.Dest, log)
l := c.listen(s.Listen1)
return c, l
}
func (s *Suite) listen2(log logger) (*conn, net.PacketConn, net.PacketConn) {
c := newConn(s.Dest, log)
l1, l2 := c.listen(s.Listen1), c.listen(s.Listen2)
return c, l1, l2
}
func (s *Suite) AllTests() []utesting.Test {
return []utesting.Test{
{Name: "Ping", Fn: s.TestPing},
{Name: "PingLargeRequestID", Fn: s.TestPingLargeRequestID},
{Name: "PingMultiIP", Fn: s.TestPingMultiIP},
{Name: "PingHandshakeInterrupted", Fn: s.TestPingHandshakeInterrupted},
{Name: "TalkRequest", Fn: s.TestTalkRequest},
{Name: "FindnodeZeroDistance", Fn: s.TestFindnodeZeroDistance},
{Name: "FindnodeResults", Fn: s.TestFindnodeResults},
}
}
// This test sends PING and expects a PONG response.
func (s *Suite) TestPing(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
switch resp := conn.reqresp(l1, ping).(type) {
case *v5wire.Pong:
checkPong(t, resp, ping, l1)
default:
t.Fatal("expected PONG, got", resp.Name())
}
}
func checkPong(t *utesting.T, pong *v5wire.Pong, ping *v5wire.Ping, c net.PacketConn) {
if !bytes.Equal(pong.ReqID, ping.ReqID) {
t.Fatalf("wrong request ID %x in PONG, want %x", pong.ReqID, ping.ReqID)
}
if !pong.ToIP.Equal(laddr(c).IP) {
t.Fatalf("wrong destination IP %v in PONG, want %v", pong.ToIP, laddr(c).IP)
}
if int(pong.ToPort) != laddr(c).Port {
t.Fatalf("wrong destination port %v in PONG, want %v", pong.ToPort, laddr(c).Port)
}
}
// This test sends PING with a 9-byte request ID, which isn't allowed by the spec.
// The remote node should not respond.
func (s *Suite) TestPingLargeRequestID(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
ping := &v5wire.Ping{ReqID: make([]byte, 9)}
switch resp := conn.reqresp(l1, ping).(type) {
case *v5wire.Pong:
t.Errorf("PONG response with unknown request ID %x", resp.ReqID)
case *readError:
if resp.err == v5wire.ErrInvalidReqID {
t.Error("response with oversized request ID")
} else if !netutil.IsTimeout(resp.err) {
t.Error(resp)
}
}
}
// In this test, a session is established from one IP as usual. The session is then reused
// on another IP, which shouldn't work. The remote node should respond with WHOAREYOU for
// the attempt from a different IP.
func (s *Suite) TestPingMultiIP(t *utesting.T) {
conn, l1, l2 := s.listen2(t)
defer conn.close()
// Create the session on l1.
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
resp := conn.reqresp(l1, ping)
if resp.Kind() != v5wire.PongMsg {
t.Fatal("expected PONG, got", resp)
}
checkPong(t, resp.(*v5wire.Pong), ping, l1)
// Send on l2. This reuses the session because there is only one codec.
ping2 := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l2, ping2, nil)
switch resp := conn.read(l2).(type) {
case *v5wire.Pong:
t.Fatalf("remote responded to PING from %v for session on IP %v", laddr(l2).IP, laddr(l1).IP)
case *v5wire.Whoareyou:
t.Logf("got WHOAREYOU for new session as expected")
resp.Node = s.Dest
conn.write(l2, ping2, resp)
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
// Catch the PONG on l2.
switch resp := conn.read(l2).(type) {
case *v5wire.Pong:
checkPong(t, resp, ping2, l2)
default:
t.Fatal("expected PONG, got", resp)
}
// Try on l1 again.
ping3 := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l1, ping3, nil)
switch resp := conn.read(l1).(type) {
case *v5wire.Pong:
t.Fatalf("remote responded to PING from %v for session on IP %v", laddr(l1).IP, laddr(l2).IP)
case *v5wire.Whoareyou:
t.Logf("got WHOAREYOU for new session as expected")
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
}
// This test starts a handshake, but doesn't finish it and sends a second ordinary message
// packet instead of a handshake message packet. The remote node should respond with
// another WHOAREYOU challenge for the second packet.
func (s *Suite) TestPingHandshakeInterrupted(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
// First PING triggers challenge.
ping := &v5wire.Ping{ReqID: conn.nextReqID()}
conn.write(l1, ping, nil)
switch resp := conn.read(l1).(type) {
case *v5wire.Whoareyou:
t.Logf("got WHOAREYOU for PING")
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
// Send second PING.
ping2 := &v5wire.Ping{ReqID: conn.nextReqID()}
switch resp := conn.reqresp(l1, ping2).(type) {
case *v5wire.Pong:
checkPong(t, resp, ping2, l1)
default:
t.Fatal("expected WHOAREYOU, got", resp)
}
}
// This test sends TALKREQ and expects an empty TALKRESP response.
func (s *Suite) TestTalkRequest(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
// Non-empty request ID.
id := conn.nextReqID()
resp := conn.reqresp(l1, &v5wire.TalkRequest{ReqID: id, Protocol: "test-protocol"})
switch resp := resp.(type) {
case *v5wire.TalkResponse:
if !bytes.Equal(resp.ReqID, id) {
t.Fatalf("wrong request ID %x in TALKRESP, want %x", resp.ReqID, id)
}
if len(resp.Message) > 0 {
t.Fatalf("non-empty message %x in TALKRESP", resp.Message)
}
default:
t.Fatal("expected TALKRESP, got", resp.Name())
}
// Empty request ID.
resp = conn.reqresp(l1, &v5wire.TalkRequest{Protocol: "test-protocol"})
switch resp := resp.(type) {
case *v5wire.TalkResponse:
if len(resp.ReqID) > 0 {
t.Fatalf("wrong request ID %x in TALKRESP, want empty byte array", resp.ReqID)
}
if len(resp.Message) > 0 {
t.Fatalf("non-empty message %x in TALKRESP", resp.Message)
}
default:
t.Fatal("expected TALKRESP, got", resp.Name())
}
}
// This test checks that the remote node returns itself for FINDNODE with distance zero.
func (s *Suite) TestFindnodeZeroDistance(t *utesting.T) {
conn, l1 := s.listen1(t)
defer conn.close()
nodes, err := conn.findnode(l1, []uint{0})
if err != nil {
t.Fatal(err)
}
if len(nodes) != 1 {
t.Fatalf("remote returned more than one node for FINDNODE [0]")
}
if nodes[0].ID() != conn.remote.ID() {
t.Errorf("ID of response node is %v, want %v", nodes[0].ID(), conn.remote.ID())
}
}
// In this test, multiple nodes ping the node under test. After waiting for them to be
// accepted into the remote table, the test checks that they are returned by FINDNODE.
func (s *Suite) TestFindnodeResults(t *utesting.T) {
// Create bystanders.
nodes := make([]*bystander, 5)
added := make(chan enode.ID, len(nodes))
for i := range nodes {
nodes[i] = newBystander(t, s, added)
defer nodes[i].close()
}
// Get them added to the remote table.
timeout := 60 * time.Second
timeoutCh := time.After(timeout)
for count := 0; count < len(nodes); {
select {
case id := <-added:
t.Logf("bystander node %v added to remote table", id)
count++
case <-timeoutCh:
t.Errorf("remote added %d bystander nodes in %v, need %d to continue", count, timeout, len(nodes))
t.Logf("this can happen if the node has a non-empty table from previous runs")
return
}
}
t.Logf("all %d bystander nodes were added", len(nodes))
// Collect our nodes by distance.
var dists []uint
expect := make(map[enode.ID]*enode.Node)
for _, bn := range nodes {
n := bn.conn.localNode.Node()
expect[n.ID()] = n
d := uint(enode.LogDist(n.ID(), s.Dest.ID()))
if !containsUint(dists, d) {
dists = append(dists, d)
}
}
// Send FINDNODE for all distances.
conn, l1 := s.listen1(t)
defer conn.close()
foundNodes, err := conn.findnode(l1, dists)
if err != nil {
t.Fatal(err)
}
t.Logf("remote returned %d nodes for distance list %v", len(foundNodes), dists)
for _, n := range foundNodes {
delete(expect, n.ID())
}
if len(expect) > 0 {
t.Errorf("missing %d nodes in FINDNODE result", len(expect))
t.Logf("this can happen if the test is run multiple times in quick succession")
t.Logf("and the remote node hasn't removed dead nodes from previous runs yet")
} else {
t.Logf("all %d expected nodes were returned", len(nodes))
}
}
// A bystander is a node whose only purpose is filling a spot in the remote table.
type bystander struct {
dest *enode.Node
conn *conn
l net.PacketConn
addedCh chan enode.ID
done sync.WaitGroup
}
func newBystander(t *utesting.T, s *Suite, added chan enode.ID) *bystander {
conn, l := s.listen1(t)
conn.setEndpoint(l) // bystander nodes need IP/port to get pinged
bn := &bystander{
conn: conn,
l: l,
dest: s.Dest,
addedCh: added,
}
bn.done.Add(1)
go bn.loop()
return bn
}
// id returns the node ID of the bystander.
func (bn *bystander) id() enode.ID {
return bn.conn.localNode.ID()
}
// close shuts down loop.
func (bn *bystander) close() {
bn.conn.close()
bn.done.Wait()
}
// loop answers packets from the remote node until quit.
func (bn *bystander) loop() {
defer bn.done.Done()
var (
lastPing time.Time
wasAdded bool
)
for {
// Ping the remote node.
if !wasAdded && time.Since(lastPing) > 10*time.Second {
bn.conn.reqresp(bn.l, &v5wire.Ping{
ReqID: bn.conn.nextReqID(),
ENRSeq: bn.dest.Seq(),
})
lastPing = time.Now()
}
// Answer packets.
switch p := bn.conn.read(bn.l).(type) {
case *v5wire.Ping:
bn.conn.write(bn.l, &v5wire.Pong{
ReqID: p.ReqID,
ENRSeq: bn.conn.localNode.Seq(),
ToIP: bn.dest.IP(),
ToPort: uint16(bn.dest.UDP()),
}, nil)
wasAdded = true
bn.notifyAdded()
case *v5wire.Findnode:
bn.conn.write(bn.l, &v5wire.Nodes{ReqID: p.ReqID, Total: 1}, nil)
wasAdded = true
bn.notifyAdded()
case *v5wire.TalkRequest:
bn.conn.write(bn.l, &v5wire.TalkResponse{ReqID: p.ReqID}, nil)
case *readError:
if !netutil.IsTemporaryError(p.err) {
bn.conn.logf("shutting down: %v", p.err)
return
}
}
}
}
func (bn *bystander) notifyAdded() {
if bn.addedCh != nil {
bn.addedCh <- bn.id()
bn.addedCh = nil
}
}
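Like the discv4 tests, this suite is launched from the devp2p tool. A hedged sketch of invoking it directly, assuming dest is an already parsed *enode.Node and that os and utesting are imported:

// runV5Suite is a hypothetical driver for the discv5 suite above.
func runV5Suite(dest *enode.Node) {
	suite := &Suite{
		Dest:    dest,
		Listen1: "127.0.0.1",
		Listen2: "127.0.0.2", // second loopback address, see the discv4 note above
	}
	utesting.RunTests(suite.AllTests(), os.Stdout)
}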


@ -0,0 +1,263 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v5test
import (
"bytes"
"crypto/ecdsa"
"encoding/binary"
"fmt"
"net"
"time"
"github.com/ethereum/go-ethereum/common/mclock"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/discover/v5wire"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
)
// readError represents an error during packet reading.
// This exists to facilitate type-switching on the result of conn.read.
type readError struct {
err error
}
func (p *readError) Kind() byte { return 99 }
func (p *readError) Name() string { return fmt.Sprintf("error: %v", p.err) }
func (p *readError) Error() string { return p.err.Error() }
func (p *readError) Unwrap() error { return p.err }
func (p *readError) RequestID() []byte { return nil }
func (p *readError) SetRequestID([]byte) {}
// readErrorf creates a readError with the given text.
func readErrorf(format string, args ...interface{}) *readError {
return &readError{fmt.Errorf(format, args...)}
}
// This is the response timeout used in tests.
const waitTime = 300 * time.Millisecond
// conn is a connection to the node under test.
type conn struct {
localNode *enode.LocalNode
localKey *ecdsa.PrivateKey
remote *enode.Node
remoteAddr *net.UDPAddr
listeners []net.PacketConn
log logger
codec *v5wire.Codec
lastRequest v5wire.Packet
lastChallenge *v5wire.Whoareyou
idCounter uint32
}
type logger interface {
Logf(string, ...interface{})
}
// newConn sets up a connection to the given node.
func newConn(dest *enode.Node, log logger) *conn {
key, err := crypto.GenerateKey()
if err != nil {
panic(err)
}
db, err := enode.OpenDB("")
if err != nil {
panic(err)
}
ln := enode.NewLocalNode(db, key)
return &conn{
localKey: key,
localNode: ln,
remote: dest,
remoteAddr: &net.UDPAddr{IP: dest.IP(), Port: dest.UDP()},
codec: v5wire.NewCodec(ln, key, mclock.System{}),
log: log,
}
}
func (tc *conn) setEndpoint(c net.PacketConn) {
tc.localNode.SetStaticIP(laddr(c).IP)
tc.localNode.SetFallbackUDP(laddr(c).Port)
}
func (tc *conn) listen(ip string) net.PacketConn {
l, err := net.ListenPacket("udp", fmt.Sprintf("%v:0", ip))
if err != nil {
panic(err)
}
tc.listeners = append(tc.listeners, l)
return l
}
// close shuts down all listeners and the local node.
func (tc *conn) close() {
for _, l := range tc.listeners {
l.Close()
}
tc.localNode.Database().Close()
}
// nextReqID creates a request id.
func (tc *conn) nextReqID() []byte {
id := make([]byte, 4)
tc.idCounter++
binary.BigEndian.PutUint32(id, tc.idCounter)
return id
}
// reqresp performs a request/response interaction on the given connection.
// The request is retried if a handshake is requested.
func (tc *conn) reqresp(c net.PacketConn, req v5wire.Packet) v5wire.Packet {
reqnonce := tc.write(c, req, nil)
switch resp := tc.read(c).(type) {
case *v5wire.Whoareyou:
if resp.Nonce != reqnonce {
return readErrorf("wrong nonce %x in WHOAREYOU (want %x)", resp.Nonce[:], reqnonce[:])
}
resp.Node = tc.remote
tc.write(c, req, resp)
return tc.read(c)
default:
return resp
}
}
// findnode sends a FINDNODE request and waits for its responses.
func (tc *conn) findnode(c net.PacketConn, dists []uint) ([]*enode.Node, error) {
var (
findnode = &v5wire.Findnode{ReqID: tc.nextReqID(), Distances: dists}
reqnonce = tc.write(c, findnode, nil)
first = true
total uint8
results []*enode.Node
)
for n := 1; n > 0; {
switch resp := tc.read(c).(type) {
case *v5wire.Whoareyou:
// Handle handshake.
if resp.Nonce == reqnonce {
resp.Node = tc.remote
tc.write(c, findnode, resp)
} else {
return nil, fmt.Errorf("unexpected WHOAREYOU (nonce %x), waiting for NODES", resp.Nonce[:])
}
case *v5wire.Ping:
// Handle ping from remote.
tc.write(c, &v5wire.Pong{
ReqID: resp.ReqID,
ENRSeq: tc.localNode.Seq(),
}, nil)
case *v5wire.Nodes:
// Got NODES! Check request ID.
if !bytes.Equal(resp.ReqID, findnode.ReqID) {
return nil, fmt.Errorf("NODES response has wrong request id %x", resp.ReqID)
}
// Check the total count. It should be at least one, at most six,
// and must be the same across all responses.
if first {
if resp.Total == 0 || resp.Total > 6 {
return nil, fmt.Errorf("invalid NODES response 'total' %d (not in (0,7))", resp.Total)
}
total = resp.Total
n = int(total) - 1
first = false
} else {
n--
if resp.Total != total {
return nil, fmt.Errorf("invalid NODES response 'total' %d (!= %d)", resp.Total, total)
}
}
// Check nodes.
nodes, err := checkRecords(resp.Nodes)
if err != nil {
return nil, fmt.Errorf("invalid node in NODES response: %v", err)
}
results = append(results, nodes...)
default:
return nil, fmt.Errorf("expected NODES, got %v", resp)
}
}
return results, nil
}
// write sends a packet on the given connection.
func (tc *conn) write(c net.PacketConn, p v5wire.Packet, challenge *v5wire.Whoareyou) v5wire.Nonce {
packet, nonce, err := tc.codec.Encode(tc.remote.ID(), tc.remoteAddr.String(), p, challenge)
if err != nil {
panic(fmt.Errorf("can't encode %v packet: %v", p.Name(), err))
}
if _, err := c.WriteTo(packet, tc.remoteAddr); err != nil {
tc.logf("Can't send %s: %v", p.Name(), err)
} else {
tc.logf(">> %s", p.Name())
}
return nonce
}
// read waits for an incoming packet on the given connection.
func (tc *conn) read(c net.PacketConn) v5wire.Packet {
buf := make([]byte, 1280)
if err := c.SetReadDeadline(time.Now().Add(waitTime)); err != nil {
return &readError{err}
}
n, fromAddr, err := c.ReadFrom(buf)
if err != nil {
return &readError{err}
}
_, _, p, err := tc.codec.Decode(buf[:n], fromAddr.String())
if err != nil {
return &readError{err}
}
tc.logf("<< %s", p.Name())
return p
}
// logf prints to the test log.
func (tc *conn) logf(format string, args ...interface{}) {
if tc.log != nil {
tc.log.Logf("(%s) %s", tc.localNode.ID().TerminalString(), fmt.Sprintf(format, args...))
}
}
func laddr(c net.PacketConn) *net.UDPAddr {
return c.LocalAddr().(*net.UDPAddr)
}
func checkRecords(records []*enr.Record) ([]*enode.Node, error) {
nodes := make([]*enode.Node, len(records))
for i := range records {
n, err := enode.New(enode.ValidSchemes, records[i])
if err != nil {
return nil, err
}
nodes[i] = n
}
return nodes, nil
}
func containsUint(ints []uint, x uint) bool {
for i := range ints {
if ints[i] == x {
return true
}
}
return false
}
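The conn helper is sufficient for one-off queries as well. The sketch below (hypothetical function, not part of this change) asks the remote node for its own record via a distance-zero FINDNODE, mirroring what TestFindnodeZeroDistance verifies:

// selfRecord is a hypothetical helper returning the remote node's own record.
func selfRecord(dest *enode.Node, log logger) (*enode.Node, error) {
	c := newConn(dest, log)
	defer c.close()
	l := c.listen("127.0.0.1")
	nodes, err := c.findnode(l, []uint{0})
	if err != nil {
		return nil, err
	}
	if len(nodes) != 1 {
		return nil, fmt.Errorf("expected exactly one node, got %d", len(nodes))
	}
	return nodes[0], nil
}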

cmd/devp2p/keycmd.go Normal file

@ -0,0 +1,105 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"net"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"gopkg.in/urfave/cli.v1"
)
var (
keyCommand = cli.Command{
Name: "key",
Usage: "Operations on node keys",
Subcommands: []cli.Command{
keyGenerateCommand,
keyToNodeCommand,
},
}
keyGenerateCommand = cli.Command{
Name: "generate",
Usage: "Generates node key files",
ArgsUsage: "keyfile",
Action: genkey,
}
keyToNodeCommand = cli.Command{
Name: "to-enode",
Usage: "Creates an enode URL from a node key file",
ArgsUsage: "keyfile",
Action: keyToURL,
Flags: []cli.Flag{hostFlag, tcpPortFlag, udpPortFlag},
}
)
var (
hostFlag = cli.StringFlag{
Name: "ip",
Usage: "IP address of the node",
Value: "127.0.0.1",
}
tcpPortFlag = cli.IntFlag{
Name: "tcp",
Usage: "TCP port of the node",
Value: 30303,
}
udpPortFlag = cli.IntFlag{
Name: "udp",
Usage: "UDP port of the node",
Value: 30303,
}
)
func genkey(ctx *cli.Context) error {
if ctx.NArg() != 1 {
return fmt.Errorf("need key file as argument")
}
file := ctx.Args().Get(0)
key, err := crypto.GenerateKey()
if err != nil {
return fmt.Errorf("could not generate key: %v", err)
}
return crypto.SaveECDSA(file, key)
}
func keyToURL(ctx *cli.Context) error {
if ctx.NArg() != 1 {
return fmt.Errorf("need key file as argument")
}
var (
file = ctx.Args().Get(0)
host = ctx.String(hostFlag.Name)
tcp = ctx.Int(tcpPortFlag.Name)
udp = ctx.Int(udpPortFlag.Name)
)
key, err := crypto.LoadECDSA(file)
if err != nil {
return err
}
ip := net.ParseIP(host)
if ip == nil {
return fmt.Errorf("invalid IP address %q", host)
}
node := enode.NewV4(&key.PublicKey, ip, tcp, udp)
fmt.Println(node.URLv4())
return nil
}
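Taken together, the two subcommands amount to the following standalone Go program (a sketch for illustration; the IP address and ports are arbitrary example values, and error handling is reduced to a panic):

package main

import (
	"fmt"
	"net"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// Generate a fresh secp256k1 node key, as "key generate" does.
	key, err := crypto.GenerateKey()
	if err != nil {
		panic(err)
	}
	// Build the enode URL for an example endpoint, as "key to-enode" does.
	node := enode.NewV4(&key.PublicKey, net.ParseIP("203.0.113.5"), 30303, 30303)
	fmt.Println(node.URLv4())
}

From the command line the equivalent would be something like: devp2p key generate node.key, followed by devp2p key to-enode node.key --ip 203.0.113.5 --tcp 30303 --udp 30303 (the key file name and endpoint are only examples).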

cmd/devp2p/main.go
@@ -58,10 +58,12 @@ func init() {
// Add subcommands.
app.Commands = []cli.Command{
enrdumpCommand,
keyCommand,
discv4Command,
discv5Command,
dnsCommand,
nodesetCommand,
rlpxCommand,
}
}
@@ -79,7 +81,7 @@ func commandHasFlag(ctx *cli.Context, flag cli.Flag) bool {
// getNodeArg handles the common case of a single node descriptor argument.
func getNodeArg(ctx *cli.Context) *enode.Node {
-if ctx.NArg() != 1 {
+if ctx.NArg() < 1 {
exit("missing node as command-line argument")
}
n, err := parseNode(ctx.Args()[0])

cmd/devp2p/nodesetcmd.go
@@ -95,6 +95,7 @@ var filterFlags = map[string]nodeFilterC{
"-min-age": {1, minAgeFilter},
"-eth-network": {1, ethFilter},
"-les-server": {0, lesFilter},
"-snap": {0, snapFilter},
}
func parseFilters(args []string) ([]nodeFilter, error) {
@@ -104,15 +105,15 @@ func parseFilters(args []string) ([]nodeFilter, error) {
if !ok {
return nil, fmt.Errorf("invalid filter %q", args[0])
}
-if len(args) < fc.narg {
-return nil, fmt.Errorf("filter %q wants %d arguments, have %d", args[0], fc.narg, len(args))
+if len(args)-1 < fc.narg {
+return nil, fmt.Errorf("filter %q wants %d arguments, have %d", args[0], fc.narg, len(args)-1)
}
-filter, err := fc.fn(args[1:])
+filter, err := fc.fn(args[1 : 1+fc.narg])
if err != nil {
return nil, fmt.Errorf("%s: %v", args[0], err)
}
filters = append(filters, filter)
-args = args[fc.narg+1:]
+args = args[1+fc.narg:]
}
return filters, nil
}
@@ -191,3 +192,13 @@ func lesFilter(args []string) (nodeFilter, error) {
}
return f, nil
}
func snapFilter(args []string) (nodeFilter, error) {
f := func(n nodeJSON) bool {
var snap struct {
_ []rlp.RawValue `rlp:"tail"`
}
return n.N.Load(enr.WithEntry("snap", &snap)) == nil
}
return f, nil
}
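To see why the off-by-one fix in parseFilters matters, here is a self-contained walk-through (illustration only, not part of the diff) of the corrected slicing with a mixed argument list: "-eth-network" consumes one value, "-snap" consumes none, and the loop advances past exactly the tokens each filter owns.

package main

import "fmt"

func main() {
	narg := map[string]int{"-eth-network": 1, "-snap": 0} // arities taken from filterFlags above
	args := []string{"-eth-network", "mainnet", "-snap"}
	for len(args) > 0 {
		n := narg[args[0]] // the real code rejects unknown filter names first
		if len(args)-1 < n { // args[0] is the filter name, not one of its arguments
			panic(fmt.Sprintf("filter %q wants %d arguments", args[0], n))
		}
		fmt.Printf("filter %s args %v\n", args[0], args[1:1+n])
		args = args[1+n:] // skip the name plus exactly its arguments
	}
}

Under the old check, a filter that needs one value but sits last in the list would pass the length test with no value at all and then fail inside the filter constructor; counting args[0] separately closes that gap.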

cmd/devp2p/rlpxcmd.go (new file, 102 lines)

@@ -0,0 +1,102 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"net"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/ethtest"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/rlpx"
"github.com/ethereum/go-ethereum/rlp"
"gopkg.in/urfave/cli.v1"
)
var (
rlpxCommand = cli.Command{
Name: "rlpx",
Usage: "RLPx Commands",
Subcommands: []cli.Command{
rlpxPingCommand,
rlpxEthTestCommand,
},
}
rlpxPingCommand = cli.Command{
Name: "ping",
Usage: "ping <node>",
Action: rlpxPing,
}
rlpxEthTestCommand = cli.Command{
Name: "eth-test",
Usage: "Runs tests against a node",
ArgsUsage: "<node> <chain.rlp> <genesis.json>",
Action: rlpxEthTest,
Flags: []cli.Flag{
testPatternFlag,
testTAPFlag,
},
}
)
func rlpxPing(ctx *cli.Context) error {
n := getNodeArg(ctx)
fd, err := net.Dial("tcp", fmt.Sprintf("%v:%d", n.IP(), n.TCP()))
if err != nil {
return err
}
conn := rlpx.NewConn(fd, n.Pubkey())
ourKey, _ := crypto.GenerateKey()
_, err = conn.Handshake(ourKey)
if err != nil {
return err
}
code, data, _, err := conn.Read()
if err != nil {
return err
}
switch code {
case 0:
var h ethtest.Hello
if err := rlp.DecodeBytes(data, &h); err != nil {
return fmt.Errorf("invalid handshake: %v", err)
}
fmt.Printf("%+v\n", h)
case 1:
var msg []p2p.DiscReason
// A decode error is not checked separately: it leaves msg empty, which the length check below treats as an invalid disconnect.
if rlp.DecodeBytes(data, &msg); len(msg) == 0 {
return fmt.Errorf("invalid disconnect message")
}
return fmt.Errorf("received disconnect message: %v", msg[0])
default:
return fmt.Errorf("invalid message code %d, expected handshake (code zero)", code)
}
return nil
}
// rlpxEthTest runs the eth protocol test suite.
func rlpxEthTest(ctx *cli.Context) error {
if ctx.NArg() < 3 {
exit("missing path to chain.rlp as command-line argument")
}
suite, err := ethtest.NewSuite(getNodeArg(ctx), ctx.Args()[1], ctx.Args()[2])
if err != nil {
exit(err)
}
return runTests(ctx, suite.EthTests())
}
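Typical invocations of these commands look like the following (the enode URL and file paths are placeholders): devp2p rlpx ping enode://<pubkey>@10.0.0.5:30303 performs the RLPx handshake and prints the remote node's Hello message, while devp2p rlpx eth-test enode://<pubkey>@10.0.0.5:30303 chain.rlp genesis.json --run <pattern> --tap runs the eth protocol suite against the node, restricted to tests matching <pattern> and with TAP output enabled.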

cmd/devp2p/runtest.go (new file, 69 lines)

@@ -0,0 +1,69 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"os"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v4test"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
var (
testPatternFlag = cli.StringFlag{
Name: "run",
Usage: "Pattern of test suite(s) to run",
}
testTAPFlag = cli.BoolFlag{
Name: "tap",
Usage: "Output TAP",
}
// These two are specific to the discovery tests.
testListen1Flag = cli.StringFlag{
Name: "listen1",
Usage: "IP address of the first tester",
Value: v4test.Listen1,
}
testListen2Flag = cli.StringFlag{
Name: "listen2",
Usage: "IP address of the second tester",
Value: v4test.Listen2,
}
)
func runTests(ctx *cli.Context, tests []utesting.Test) error {
// Filter test cases.
if ctx.IsSet(testPatternFlag.Name) {
tests = utesting.MatchTests(tests, ctx.String(testPatternFlag.Name))
}
// Disable logging unless explicitly enabled.
if !ctx.GlobalIsSet("verbosity") && !ctx.GlobalIsSet("vmodule") {
log.Root().SetHandler(log.DiscardHandler())
}
// Run the tests.
var run = utesting.RunTests
if ctx.Bool(testTAPFlag.Name) {
run = utesting.RunTAP
}
results := run(tests, os.Stdout)
if utesting.CountFailures(results) > 0 {
os.Exit(1)
}
return nil
}
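For reference, a minimal sketch of the utesting flow that runTests wraps. The test names and bodies below are placeholders, and the Test struct fields are assumed to be Name and Fn; note that internal/utesting is an internal package, so this only compiles from inside the go-ethereum repository.

package main

import (
	"fmt"
	"os"

	"github.com/ethereum/go-ethereum/internal/utesting"
)

func main() {
	tests := []utesting.Test{
		{Name: "Ping", Fn: func(t *utesting.T) { /* exercise the target node */ }},
		{Name: "Findnode", Fn: func(t *utesting.T) { t.Error("example failure") }},
	}
	tests = utesting.MatchTests(tests, "Find")   // what --run Find would select
	results := utesting.RunTAP(tests, os.Stdout) // what --tap switches to
	fmt.Println("failures:", utesting.CountFailures(results))
}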

Some files were not shown because too many files have changed in this diff.