Compare commits


649 Commits

Author SHA1 Message Date
Péter Szilágyi
f538259187 params: release Geth v1.9.18 2020-07-27 14:53:53 +03:00
gary rong
b1be979443 params: upgrade CHTs (#21376) 2020-07-27 12:57:15 +03:00
Péter Szilágyi
e997f92caf Merge pull request #21368 from holiman/update_uint256
deps: update uint256 to v1.1.1
2020-07-24 15:02:52 +03:00
Martin Holst Swende
56434bfa89 deps: update uint256 to v1.1.1 2020-07-24 14:00:08 +02:00
Péter Szilágyi
6793ffa12b Merge pull request #21300 from rjl493456442/txpool-fix-queued-evictions
core: fix queued transaction eviction
2020-07-24 11:14:42 +03:00
rjl493456442
5413df1dfa core: fix heartbeat in txpool
core: address comment
2020-07-24 11:12:59 +03:00
villanuevawill
c374447401 core: fix queued transaction eviction
Solves issue #20582. Non-executable transactions should not be evicted on each tick if there are no transactions to promote or if a pending reset empties the pending list. Tests and logging expanded to handle these cases in the future.

core/tx_pool: use a timestamp for each tx in the queue, but only update the heartbeat on promotion or when a pending tx is replaced

queuedTs proper naming
2020-07-24 11:11:57 +03:00
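
For illustration, a minimal sketch of the eviction idea described above: keep a per-transaction heartbeat (the commit calls it queuedTs) and only evict queued entries whose heartbeat is older than the lifetime. The txLifetime value and the surrounding types are assumptions, not the actual tx_pool code.

```go
package main

import (
	"fmt"
	"time"
)

// queuedTx pairs a queued (non-executable) transaction with the time it last
// saw activity (queued, promoted, or its pending slot replaced).
type queuedTx struct {
	hash     string
	queuedTs time.Time // heartbeat; only refreshed on promotion or pending replacement
}

const txLifetime = 3 * time.Hour // assumed eviction threshold

// evictStale drops queued transactions whose heartbeat is older than the
// lifetime, instead of evicting everything on every tick.
func evictStale(queue []queuedTx, now time.Time) []queuedTx {
	kept := queue[:0]
	for _, tx := range queue {
		if now.Sub(tx.queuedTs) > txLifetime {
			fmt.Println("evicting stale queued tx", tx.hash)
			continue
		}
		kept = append(kept, tx)
	}
	return kept
}

func main() {
	now := time.Now()
	queue := []queuedTx{
		{hash: "0xaa", queuedTs: now.Add(-4 * time.Hour)},    // stale, evicted
		{hash: "0xbb", queuedTs: now.Add(-10 * time.Minute)}, // still fresh
	}
	queue = evictStale(queue, now)
	fmt.Println("remaining queued txs:", len(queue))
}
```
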
Martin Holst Swende
105922180f eth/downloader: refactor downloader + queue (#21263)
* eth/downloader: refactor downloader + queue

downloader, fetcher: throttle-metrics, fetcher filter improvements, standalone resultcache

downloader: more accurate deliverytime calculation, less mem overhead in state requests

downloader/queue: increase underlying buffer of results, new throttle mechanism

eth/downloader: updates to tests

eth/downloader: fix up some review concerns

eth/downloader/queue: minor fixes

eth/downloader: minor fixes after review call

eth/downloader: testcases for queue.go

eth/downloader: minor change, don't set progress unless progress...

eth/downloader: fix flaw which prevented useless peers from being dropped

eth/downloader: try to fix tests

eth/downloader: verify non-deliveries against advertised remote head

eth/downloader: fix flaw with checking closed-status causing hang

eth/downloader: hashing avoidance

eth/downloader: review concerns + simplify resultcache and queue

eth/downloader: add back some locks, address review concerns

downloader/queue: fix remaining lock flaw

* eth/downloader: nitpick fixes

* eth/downloader: remove the *2*3/4 throttling threshold dance

* eth/downloader: print correct throttle threshold in stats

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-07-24 10:46:26 +03:00
Felix Lange
3a57eecc69 mobile: fix build on iOS (#21362)
This fixes the iOS framework build by naming the second parameter of the
Signer interface method. The name is important because it becomes part
of the objc method signature.

Fixes #21340
2020-07-23 19:15:40 +02:00
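
A hedged sketch of the point about parameter names in gomobile bindings: the parameter name feeds into the generated Objective-C method signature, so leaving it unnamed broke the iOS framework build. The Signer and Address types below are simplified stand-ins, not the actual mobile package API.

```go
package mobile

// Address is a placeholder standing in for the mobile address wrapper type.
type Address struct {
	hex string
}

// Signer is an illustrative interface in the style of the mobile bindings.
// Naming the second parameter matters because gomobile derives the generated
// Objective-C selector from parameter names; an unnamed parameter here broke
// the iOS framework build.
type Signer interface {
	Sign(addr *Address, unsignedTx []byte) ([]byte, error)
}
```
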
Felix Lange
997b55236e build: fix GOBIN for gomobile commands (#21361) 2020-07-23 12:34:08 +02:00
meowsbits
4c268e65a0 cmd/utils: implement configurable developer (--dev) account options (#21301)
* geth,utils: implement configurable developer account options

Prior to this change --dev (developer) mode
generated one account with an empty password,
irrespective of existing --password and --miner.etherbase
options.

This change makes --dev mode compatible with these
existing flags.

--dev mode may now be used in conjunction with
--password and --miner.etherbase flags to configure
the developer faucet using an existing keystore or
in creating a new account.

Signed-off-by: meows <b5c6@protonmail.com>

* main: remove key/pass flags from usage developer section

These flags are included already in other sections,
and it is not desired to duplicate them.

They were originally included in this section
along with added support for these flags in the
developer mode.

Signed-off-by: meows <b5c6@protonmail.com>
2020-07-23 06:47:34 +03:00
Péter Szilágyi
0b53e485d8 Merge pull request #21352 from karalabe/dev-noinit-genesis
cmd/utils: reuse existing genesis in persistent dev mode
2020-07-22 17:39:08 +03:00
Péter Szilágyi
9e22e912e3 cmd/utils: reuse existing genesis in persistent dev mode 2020-07-21 15:58:29 +03:00
rene
123864fc05 whisper/whisperv6: improve test error messages (#21348) 2020-07-21 10:53:06 +02:00
Sammy Libre
7163a6664e ethclient: serialize negative block number as "pending" (#21177)
Fixes #21175

Co-authored-by: sammy007 <sammy007@users.noreply.github.com>
Co-authored-by: Adam Schmideg <adamschmideg@users.noreply.github.com>
2020-07-21 10:51:15 +02:00
Binacs
4366c45e4e les: make clientPool.connectedBias configurable (#21305) 2020-07-21 10:23:40 +02:00
Péter Szilágyi
3a52c4dcf2 Merge pull request #21336 from karalabe/tiny-ref-optimization
core/vm: use pointers to operations vs. copy by value
2020-07-21 10:53:12 +03:00
Péter Szilágyi
722b742780 params: begin v1.9.18 release cycle 2020-07-20 15:58:33 +03:00
Péter Szilágyi
748f22c192 params: release Geth v1.9.17 2020-07-20 15:56:42 +03:00
gary rong
43e2e58cbd accounts, internal: fix funding check when estimating gas (#21346)
* internal, accounts: fix funding check when estimating gas

* accounts, internal: address comments
2020-07-20 15:52:42 +03:00
Péter Szilágyi
35ddf36229 Merge pull request #21347 from karalabe/ethstats-fixes
ethstats: fix reconnection issue, implement primus pings
2020-07-20 11:47:13 +03:00
Péter Szilágyi
0fef66c739 ethstats: fix reconnection issue, implement primus pings 2020-07-20 11:46:41 +03:00
Péter Szilágyi
508891e64b core/vm: use pointers to operations vs. copy by value 2020-07-16 15:32:01 +03:00
Nikola Madjarevic
9e88224eb8 core: raise gas limit in --dev mode, seed blake precompile (#21323)
* Set gasLimit in --dev mode to be 9m.

* core: Set gasLimit to 11.5 million and add 1 wei allocation for BLAKE2b
2020-07-16 15:08:38 +03:00
Martin Holst Swende
295693759e core/vm: less allocations for various call variants (#21222)
* core/vm/runtime/tests: add more benchmarks

* core/vm: initial work on improving alloc count for calls to precompiles

name                                  old time/op    new time/op    delta
SimpleLoop/identity-precompile-10M-6     117ms ±75%      43ms ± 1%  -63.09%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   79.6ms ± 4%    70.5ms ± 1%  -11.42%  (p=0.008 n=5+5)

name                                  old alloc/op   new alloc/op   delta
SimpleLoop/identity-precompile-10M-6    24.4MB ± 0%     4.9MB ± 0%  -79.94%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   13.2kB ± 0%    13.2kB ± 0%     ~     (p=0.357 n=5+5)

name                                  old allocs/op  new allocs/op  delta
SimpleLoop/identity-precompile-10M-6      382k ± 0%      153k ± 0%  -59.99%  (p=0.000 n=5+4)
SimpleLoop/loop-10M-6                     40.0 ± 0%      40.0 ± 0%     ~     (all equal)

* core/vm: don't allocate big.int for touch

name                                  old time/op    new time/op    delta
SimpleLoop/identity-precompile-10M-6    43.3ms ± 1%    42.4ms ± 7%     ~     (p=0.151 n=5+5)
SimpleLoop/loop-10M-6                   70.5ms ± 1%    76.7ms ± 1%   +8.67%  (p=0.008 n=5+5)

name                                  old alloc/op   new alloc/op   delta
SimpleLoop/identity-precompile-10M-6    4.90MB ± 0%    2.46MB ± 0%  -49.83%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   13.2kB ± 0%    13.2kB ± 1%     ~     (p=0.571 n=5+5)

name                                  old allocs/op  new allocs/op  delta
SimpleLoop/identity-precompile-10M-6      153k ± 0%       76k ± 0%  -49.98%  (p=0.029 n=4+4)
SimpleLoop/loop-10M-6                     40.0 ± 0%      40.0 ± 0%     ~     (all equal)

* core/vm: reduce allocs in staticcall

name                                  old time/op    new time/op    delta
SimpleLoop/identity-precompile-10M-6    42.4ms ± 7%    37.5ms ± 6%  -11.68%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   76.7ms ± 1%    69.1ms ± 1%   -9.82%  (p=0.008 n=5+5)

name                                  old alloc/op   new alloc/op   delta
SimpleLoop/identity-precompile-10M-6    2.46MB ± 0%    0.02MB ± 0%  -99.35%  (p=0.008 n=5+5)
SimpleLoop/loop-10M-6                   13.2kB ± 1%    13.2kB ± 0%     ~     (p=0.143 n=5+5)

name                                  old allocs/op  new allocs/op  delta
SimpleLoop/identity-precompile-10M-6     76.4k ± 0%      0.1k ± 0%     ~     (p=0.079 n=4+5)
SimpleLoop/loop-10M-6                     40.0 ± 0%      40.0 ± 0%     ~     (all equal)

* trie: better use of hasher keccakState

* core/state/statedb: reduce allocations in getDeletedStateObject

* core/vm: reduce allocations in all call derivates

* core/vm: reduce allocations in call variants

- Make returnstack `uint32`
- Use a `sync.Pool` of `stack`s

* core/vm: fix tests

* core/vm: goimports

* core/vm: tracer fix + staticcall gas fix

* core/vm: add back snapshot to staticcall

* core/vm: review concerns + make returnstack pooled + enable returndata in traces

* core/vm: fix some test tracer method signatures

* core/vm: run gencodec, minor comment polish

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-07-16 15:06:19 +03:00
Guillaume Ballet
240d1851db trie: quell linter in commiter.go (#21329) 2020-07-15 11:00:04 +03:00
Martin Holst Swende
6c9f040ebe core: transaction pool optimizations (#21328)
* core: added local tx pool test case

* core, crypto: various allocation savings regarding tx handling

* core/txlist, txpool: save a reheap operation, avoid some bigint allocs

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-07-14 21:42:32 +02:00
rene
5b081ab214 cmd/clef: change --rpcport to --http.port and update flags in docs (#21318) 2020-07-14 10:35:32 +02:00
Felix Lange
6ef4495a8f p2p/discover: require table nodes to have an IP (#21330)
This fixes a corner case in discv5. The issue cannot happen in discv4
because it performs IP checks on all incoming node information.
2020-07-13 22:25:45 +02:00
gary rong
79addac698 core/rawdb: better log messages for ancient failure (#21327) 2020-07-13 20:43:30 +02:00
rene
7d5267e3a2 .github: Change Code Owners (#21326)
* modify code owners

* add marius
2020-07-13 16:20:20 +02:00
gary rong
4edbc1f2bb internal/ethapi: cap txfee for SignTransaction and Resend (#21231) 2020-07-13 12:45:39 +02:00
Tien
6cf6e1d753 README.md: point Go API reference link to pkg.go.dev (#21321) 2020-07-13 11:22:30 +02:00
libotony
2e08dad9e6 p2p/discv5: unset pingEcho on pong timeout (#21324) 2020-07-13 11:20:47 +02:00
Felix Lange
af258efdb9 light: goimports -w (#21325) 2020-07-13 11:17:49 +02:00
gary rong
6eef141aef les: historical data garbage collection (#19570)
This change introduces garbage collection for the light client. Historical
chain data is deleted periodically. If you want to disable the GC, use
the --light.nopruning flag.
2020-07-13 11:02:54 +02:00
Felix Lange
b8dd0890b3 params: begin v1.9.17 release cycle 2020-07-10 12:40:31 +02:00
Felix Lange
ea3b00ad75 params: go-ethereum v1.9.16 stable 2020-07-10 12:38:48 +02:00
gary rong
feb40e3a4d accounts/external: remove dependency on internal/ethapi (#21319)
Fixes #20535

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-10 11:33:31 +02:00
rene
beabf95ad7 cmd/geth, cmd/puppeth: replace deprecated rpc and ws flags in tests and docs (#21317) 2020-07-09 17:48:40 +02:00
Felix Lange
6ccce0906a common/math: use math/bits intrinsics for Safe* (#21316)
This is a resubmit of ledgerwatch/turbo-geth#556. The performance
benefit of this change is negligible, but it does remove a TODO.
2020-07-09 17:45:49 +02:00
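
For reference, a minimal sketch of what overflow-safe helpers look like when built on math/bits intrinsics. The names mirror the commit title's Safe*; the exact signatures in common/math are an assumption here.

```go
package main

import (
	"fmt"
	"math/bits"
)

// SafeAdd returns x+y and whether the addition overflowed, using the
// bits.Add64 intrinsic instead of a manual comparison.
func SafeAdd(x, y uint64) (uint64, bool) {
	sum, carry := bits.Add64(x, y, 0)
	return sum, carry != 0
}

// SafeMul returns x*y and whether the multiplication overflowed: any non-zero
// high word from bits.Mul64 means the product does not fit in 64 bits.
func SafeMul(x, y uint64) (uint64, bool) {
	hi, lo := bits.Mul64(x, y)
	return lo, hi != 0
}

func main() {
	if _, overflow := SafeAdd(^uint64(0), 1); overflow {
		fmt.Println("addition overflowed as expected")
	}
	if product, overflow := SafeMul(1<<32, 1<<31); !overflow {
		fmt.Println("product fits:", product)
	}
}
```
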
Felix Lange
bcb3087450 Revert "core, txpool: less allocations when handling transactions (#21232)"
Reverting because this change started handling account balances as
uint64 in the transaction pool, which is incorrect.

This reverts commit af5c97aebe.
2020-07-09 14:02:03 +02:00
Martin Holst Swende
967d8de77a eth/downloader: fix peer idleness tracking when restarting state sync (#21260)
This fixes two issues with state sync restarts:

When sync restarts with a new root, some peers can have in-flight requests.
Since all peers with active requests were marked idle when exiting sync,
the new sync would schedule more requests for those peers. When the
response for the earlier request arrived, the new sync would reject it and
mark the peer idle again, rendering the peer useless until it disconnected.

The other issue was that peers would not be marked idle when they had
delivered a response, but the response hadn't been processed before
restarting the state sync. This also made the peer useless because it
would be permanently marked busy.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-08 23:08:08 +02:00
ucwong
7a556abe15 go.mod: upgrade to github.com/golang/snappy with arm64 asm (#21304) 2020-07-08 14:06:53 +02:00
Martin Holst Swende
5b1cfdef89 eth: increase timeout in TestBroadcastBlock (#21299) 2020-07-08 11:50:26 +02:00
chris-j-h
c16967c267 cmd/clef: Fix broken link in README and other minor fixes (#21303) 2020-07-07 22:23:23 +02:00
Adam Schmideg
6a48ae37b2 cmd/devp2p: add discv4 test suite (#21163)
This adds a test suite for discovery v4. The test suite is a port of the Hive suite for
discovery, and will replace the current suite on Hive soon-ish. The tests can be
run locally with this command:

    devp2p discv4 test -remote enode://...

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-07 14:37:33 +02:00
chris-j-h
e5871b928f cmd/clef: Update README with external v6.0.0 & internal v7.0.1 APIs (#21298)
Changes include:
* Updates response docs for `account_new`, `account_list`, `account_signTransaction`
* Removes `account_import`, `account_export` docs
* Adds `account_version` docs
* Updates request docs for `ui_approveListing`, `ui_approveSignData`, `ui_showInfo`, `ui_showError`, `ui_onApprovedTx`
* Adds `ui_approveNewAccount`, `ui_onInputRequired` docs
2020-07-07 11:12:38 +02:00
gary rong
6d8e51ab88 cmd, node: dump empty value config (#21296) 2020-07-06 22:09:30 +02:00
Felix Lange
6315b6fcc0 rlp: reduce allocations for big.Int and byte array encoding (#21291)
This change further improves the performance of RLP encoding by removing
allocations for big.Int and [...]byte types. I have added a new benchmark
that measures RLP encoding of types.Block to verify that performance is
improved.
2020-07-06 11:17:09 +02:00
Martin Holst Swende
fa01117498 build/ci: handle split up listing (#21293) 2020-07-04 20:10:48 +02:00
meowsbits
490b380a04 cmd/geth: allow configuring metrics HTTP server on separate endpoint (#21290)
Exposing /debug/metrics and /debug/metrics/prometheus was dependent
on --pprof, which also exposes other HTTP APIs. This change makes it possible
to run the metrics server on an independent endpoint without enabling pprof.
2020-07-03 19:12:22 +02:00
gary rong
61270e5e1c eth/gasprice: lighter gas price oracle for light client (#20409)
This PR reduces the bandwidth used by the light client to compute the
recommended gas price. The current mechanism for suggesting the price is:

- retrieve recent 20 blocks
- get the lowest gas price of these blocks
- sort the price array and return the middle (60%) one

This works for full nodes, which have all blocks available locally.
However, this is very expensive for the light client because the light
client needs to retrieve block bodies from the network.

The PR changes the default options for light client. With the new config,
the light client only retrieves the two latest blocks, but in order to
collect more sample transactions, the 3 lowest prices are collected from
each block.

This PR also changes the behavior for empty blocks. If the block is empty,
the latest price is reused for sampling.
2020-07-03 14:50:35 +02:00
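
A hedged sketch of the sampling described above: take the cheapest transactions from each recent block, sort the samples and pick a percentile (60% in the oracle). The function and types are simplified stand-ins for the real eth/gasprice oracle, not its API.

```go
package main

import (
	"fmt"
	"sort"
)

// suggestPrice collects up to samplesPerBlock of the lowest gas prices from
// each recent block, sorts the combined samples and returns the value at the
// requested percentile.
func suggestPrice(blocks [][]uint64, samplesPerBlock, percentile int) uint64 {
	var samples []uint64
	for _, prices := range blocks {
		sorted := append([]uint64(nil), prices...)
		sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
		n := samplesPerBlock
		if n > len(sorted) {
			n = len(sorted)
		}
		samples = append(samples, sorted[:n]...)
	}
	if len(samples) == 0 {
		return 0 // the real oracle reuses the latest price for empty blocks
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	return samples[(len(samples)-1)*percentile/100]
}

func main() {
	// Two recent blocks with their transaction gas prices (in gwei):
	// the light-client config samples the 3 lowest prices from each.
	blocks := [][]uint64{{10, 12, 30}, {8, 9, 50}}
	fmt.Println("suggested gas price:", suggestPrice(blocks, 3, 60), "gwei")
}
```
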
Martin Holst Swende
07a95ce571 les/checkpointoracle: don't lookup checkpoint more than once per minute (#21285)
* les/checkpointoracle: don't lookup checkpoint more than once per second

* les/checkpoint/oracle: change oracle checktime to 1 minute
2020-07-02 10:15:11 +02:00
Martin Holst Swende
04c4e50d72 ethapi: don't crash when keystore-specific methods are called but external signer used (#21279)
* console: prevent importRawKey from getting into CLI history

* internal/ethapi: error on keystore-methods when no keystore is present
2020-07-02 10:00:18 +02:00
Martin Holst Swende
7451fc637d internal/ethapi: default gas to maxgascap, not max int64 (#21284) 2020-07-02 09:43:42 +02:00
Martin Holst Swende
12867d152c rpc, internal/ethapi: default rpc gascap at 25M + better error message (#21229)
* rpc, internal/ethapi: default rpc gascap at 50M + better error message

* eth,internal: make globalgascap uint64

* core/tests: fix compilation failure

* eth/config: gascap at 25M + minor review concerns
2020-07-01 19:54:21 +02:00
Marius van der Wijden
af5c97aebe core, txpool: less allocations when handling transactions (#21232)
* core: use uint64 for total tx costs instead of big.Int

* core: added local tx pool test case

* core, crypto: various allocation savings regarding tx handling

* Update core/tx_list.go

* core: added tx.GasPriceIntCmp for comparison without allocation

adds a method to remove unneeded allocation in comparison to tx.gasPrice

* core: handle pools full of locals better

* core/tests: benchmark for tx_list

* core/txlist, txpool: save a reheap operation, avoid some bigint allocs

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-07-01 19:35:26 +02:00
Marius van der Wijden
8dfd66f701 rlp: avoid list header allocation in encoder (#21274)
List headers made up 11% of all allocations during sync. This change
removes most of those allocations by keeping the list header values
cached in the encoder buffer instead. Since encoder buffers are pooled,
list headers are no longer allocated in the common case where an
encoder buffer is available for reuse.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-07-01 13:49:19 +02:00
Adam Schmideg
ec51cbb5fb cmd/geth: LES priority client test (#20719)
This adds a regression test for the LES priority client API.
2020-07-01 10:31:11 +02:00
Marius van der Wijden
d671dbd5b7 eth/downloader: fixes data race between synchronize and other methods (#21201)
* eth/downloader: fixed data race between synchronize and Progress

There was a race condition between `downloader.synchronize()` and `Progress`, `syncWithPeer`, `fetchHeight`, `findAncestors` and `processHeaders`.
This PR changes the behavior of the downloader a bit.
Previously the functions `Progress`, `syncWithPeer`, `fetchHeight`, `findAncestors` and `processHeaders` read the syncMode anew within their loops. Now they read the syncMode at the start of their function and don't change it during their runtime.

* eth/downloader: comment

* eth/downloader: added comment
2020-06-30 19:43:29 +02:00
rene
1e635bd0bd go.mod: updated crypto deps causing build failure (#21276) 2020-06-30 16:05:59 +02:00
Guillaume Ballet
b86b1e6d43 go.mod: bump gopsutil version (#21275) 2020-06-30 15:22:21 +02:00
Marius van der Wijden
ddeea1e0c6 core: types: less allocations when hashing and tx handling (#21265)
* core, crypto: various allocation savings regarding tx handling

* core: reduce allocs for gas price comparison

This change reduces the allocations needed for comparing different transactions to each other.
A call to `tx.GasPrice()` copies the gas price as it has to be safe against modifications and
also needs to be threadsafe. For comparing and ordering different transactions we don't need
these guarantees

* core: added tx.GasPriceIntCmp for comparison without allocation

adds a method to remove unneeded allocation in comparison to tx.gasPrice

* core/types: pool legacykeccak256 objects in rlpHash

rlpHash is by far the most used function in core that allocates a legacyKeccak256 object on each call.
Since it is so widely used it makes sense to add pooling here so we relieve the GC.
On my machine these changes result in >100 million fewer allocations and >30 GB less allocated memory.

* reverted some changes

* reverted some changes

* trie: use crypto.KeccakState instead of replicating code

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-06-30 11:59:06 +02:00
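
A minimal sketch of the hasher-pooling pattern described for rlpHash, using a sync.Pool of legacy Keccak256 states from golang.org/x/crypto/sha3. The real helper hashes RLP-encoded input rather than raw bytes; this is only the pooling idea.

```go
package main

import (
	"fmt"
	"hash"
	"sync"

	"golang.org/x/crypto/sha3"
)

// hasherPool recycles legacy Keccak256 states so the hot hashing path does
// not allocate a fresh hasher on every call.
var hasherPool = sync.Pool{
	New: func() interface{} { return sha3.NewLegacyKeccak256() },
}

// pooledHash borrows a hasher from the pool, uses it and returns it, instead
// of allocating one per call as the old rlpHash did.
func pooledHash(data []byte) [32]byte {
	h := hasherPool.Get().(hash.Hash)
	defer hasherPool.Put(h)
	h.Reset()
	h.Write(data)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	fmt.Printf("%x\n", pooledHash([]byte("hello")))
}
```
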
Martin Holst Swende
e376d2fb31 cmd/evm: add state transition tool for testing (#20958)
This PR implements the EVM state transition tool, which is intended
to be the replacement for our retesteth client implementation.
Documentation is present in the cmd/evm/README.md file.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-06-30 10:12:51 +02:00
Binacs
dd91c7ce6a cmd: abstract getPassPhrase functions into one (#21219)
* [cmd] Abstract `getPassPhrase` functions into one.

* cmd/ethkey: fix compilation failure

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2020-06-30 09:56:40 +02:00
meowsbits
c13df14581 utils: fix ineffectual miner config flags (#21271)
Without use of global, these flags didn't actually modify
miner configuration, since we weren't grabbing from the
proper context scope, which should be global (vs. subcommand).

Signed-off-by: meows <b5c6@protonmail.com>
2020-06-30 09:05:25 +02:00
Marius van der Wijden
02cea2330d eth: returned revert reason in traceTx (#21195)
* eth: returned revert reason in traceTx

* eth: return result data
2020-06-26 12:19:31 +02:00
meowsbits
413358abb9 cmd/geth: make import cmd exit with 1 if import errors occurred (#21244)
The import command should not return a 0 status
code if the import finishes prematurely because
of an import error.

Returning the error causes the program to exit with 1
if the err is non-nil.

Signed-off-by: meows <b5c6@protonmail.com>
2020-06-24 22:01:58 +02:00
Marius van der Wijden
0c82928981 core/vm: fix incorrect computation of BLS discount (#21253)
* core/vm: fix incorrect computation of discount

During testing on Yolov1 we found that the way geth calculates the discount
is not in line with the specification. Basically what we did is calculate
128 * Bls12381GXMulGas * discount / 1000 whenever we received more than 128 pairs
of values. Correct would be to calculate k * Bls12381... for k > 128.

* core/vm: better logic for discount calculation

* core/vm: better calculation logic, added worstcase benchmarks

* core/vm: better benchmarking logic
2020-06-24 21:58:28 +02:00
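
A hedged sketch of the corrected discount logic: for k input pairs the charge is k * per-multiplication gas * discount / 1000, where only the discount lookup is capped at the last table entry (128 pairs), not k itself. The gas constant and discount values below are placeholders, not the real params entries.

```go
package main

import "fmt"

// Placeholder values standing in for the real params constants and the
// EIP-2537 discount table (only its length and flat tail matter here).
const (
	g1MulGas        = 12000 // assumed per-multiplication gas, not the real constant
	maxDiscountPair = 128   // length of the discount table
	lastDiscount    = 174   // assumed discount (in 1/1000) for the last entry
)

// discountFor caps the table lookup at 128 pairs; past that, the discount
// stays flat at the last entry.
func discountFor(k uint64) uint64 {
	if k >= maxDiscountPair {
		return lastDiscount
	}
	// A real implementation indexes the EIP-2537 discount table here.
	return 1000 - 5*k // assumed monotone placeholder
}

// multiExpGas charges k * mulGas * discount / 1000. The earlier bug capped k
// itself at 128; the fix only caps the discount lookup, so k keeps scaling.
func multiExpGas(k uint64) uint64 {
	if k == 0 {
		return 0
	}
	return k * g1MulGas * discountFor(k) / 1000
}

func main() {
	fmt.Println("gas for 200 pairs:", multiExpGas(200))
	// The buggy behaviour would have charged the same as for 128 pairs:
	fmt.Println("gas for 128 pairs:", multiExpGas(128))
}
```
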
Marius van der Wijden
b482423e61 trie: reduce allocs in insertPreimage (#21261) 2020-06-24 21:56:27 +02:00
gary rong
93142e50c3 eth: don't block if transaction broadcast loop fails (#21255)
* eth: don't block if the transaction broadcast loop has returned

* eth: kick out peer if we failed to send message

* eth: address comment
2020-06-24 13:54:13 +03:00
Felix Lange
23f1a0b783 crypto/secp256k1: enable 128-bit int code and endomorphism optimization (#21203)
* crypto/secp256k1: enable use of __int128

This speeds up scalar & field calculations a lot.

* crypto/secp256k1: enable endomorphism optimization
2020-06-24 13:51:32 +03:00
Felix Lange
da180ba097 cmd/devp2p: add commands for node key management (#21202)
These commands mirror the key/URL generation functions of cmd/bootnode.

    $ devp2p key generate mynode.key
    $ devp2p key to-enode mynode.key -ip 203.0.113.21 -tcp 30304
    enode://78a7746089baf4b8615f54a5f0b67b22b1...
2020-06-24 10:41:53 +02:00
Péter Szilágyi
c42d1390d3 Merge pull request #21256 from karalabe/p2p-packet-metrics
p2p: measure packet throughput too, not just bandwidth
2020-06-24 09:44:23 +03:00
Péter Szilágyi
42ccb2fdbd p2p: measure packet throughput too, not just bandwidth 2020-06-24 09:36:20 +03:00
ucwong
dce533c246 whisper: replace time.Sleep with time.Ticker in whisper_test (#21251) 2020-06-23 10:46:59 +02:00
Guillaume Ballet
9a188c975d common/fdlimit: build on DragonflyBSD (#21241)
* common/fdlimit: build on DragonflyBSD

* review feedback
2020-06-19 15:43:52 +02:00
AusIV
3ebfeb09fe core/rawdb: fix high memory usage in freezer (#21243)
The ancients variable in the freezer is a list of hashes, which
identifies all of the hashes to be frozen. The slice is being allocated
with a capacity of `limit`, which is the number of the last block
this batch will attempt to add to the freezer. That means we are
allocating memory for all of the blocks in the freezer, not just
the ones to be added.

If instead we allocate `limit - f.frozen`, we will only allocate
enough space for the blocks we're about to add to the freezer. On
mainnet this reduces usage by about 320 MB.
2020-06-19 10:51:37 +03:00
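
The difference between the two allocations, as a tiny sketch; `limit` and `f.frozen` come from the commit text, everything else is a stand-in.

```go
package main

import "fmt"

func main() {
	var (
		limit  uint64 = 100_000 // number of the last block this batch may freeze
		frozen uint64 = 99_500  // blocks already moved into the freezer (f.frozen)
	)
	// Before the fix: capacity for every block in the freezer, mostly unused.
	before := make([][32]byte, 0, limit)
	// After the fix: capacity only for the hashes about to be appended.
	after := make([][32]byte, 0, limit-frozen)
	fmt.Println("before cap:", cap(before), "after cap:", cap(after))
}
```
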
ucwong
5435e0d1a1 whisper: use time.Ticker instead of sleep (#21240)
* whisper: use time.Ticker instead of sleep

* lint: Fix linter error

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-06-18 17:58:49 +02:00
ucwong
e029cc6616 go.mod: update snappy dependency (#21237) 2020-06-18 14:01:49 +03:00
gary rong
56a319b9da cmd, eth, internal, les: add txfee cap (#21212)
* cmd, eth, internal, les: add gasprice cap

* cmd/utils, eth: add default value for gasprice cap

* all: use txfee cap

* cmd, eth: add fix

* cmd, internal: address comments
2020-06-17 10:46:31 +03:00
zhangsoledad
bcf19bc4be core/rawdb: swap tailId and itemOffset for deleted items in freezer (#21220)
* fix(freezer): tailId and filenum offset were misplaced

* core/rawdb: assume first item in freezer always starts from zero
2020-06-17 09:41:07 +02:00
Péter Szilágyi
eb9d7d15ec Merge pull request #21170 from duanhao0814/fix-dup-ecrecover
core: filter out txs with invalid signatures as soon as possible
2020-06-16 11:35:16 +03:00
sixdays
a981b60c25 eth/downloader: don't use defer for unlock before return (#21227)
Co-authored-by: linjing <linjingjing@baidu.com>
2020-06-15 15:46:27 +03:00
HackyMiner
9371b2f70c internal/web3ext: add missing params to debug.accountRange (#21208) 2020-06-11 15:41:43 +02:00
Martin Holst Swende
c85fdb76ee go.mod: update uint256 to 1.1.0 (#21206) 2020-06-11 07:27:43 +03:00
Yang Hau
e30c0af861 build, internal/ethapi, crypto/bls12381: fix typos (#21210)
speicifc -> specific
assigened -> assigned
frobenious -> frobenius
2020-06-10 23:25:32 +03:00
gary rong
4a19c0e7b8 core, eth, internal: include read storage entries in structlog output (#21204)
* core, eth, internal: extend structLog tracer

* core/vm, internal: add storage view

* core, internal: add slots to storage directly

* core: remove useless

* core: address martin's comment

* core/vm: fix tests
2020-06-10 11:46:13 +02:00
Martin Holst Swende
e9ba536d85 eth/downloader: fix spuriously failing tests (#21149)
* eth/downloader tests: fix spurious failing test due to race between receipts/headers

* miner tests: fix travis failure on arm64

* eth/downloader: tests - store td in ancients too
2020-06-09 11:39:19 +02:00
Natsu Kagami
89043cba75 accounts/abi: make GetType public again (#21157) 2020-06-09 10:26:56 +02:00
Pau
d5c267fd30 accounts/keystore: fix typo in error message (#21200) 2020-06-09 10:23:42 +02:00
Péter Szilágyi
a0797e37f8 Merge pull request #21192 from karalabe/fix-escape-analysis
core/state: avoid escape analysis fault when accessing cached state
2020-06-08 16:38:05 +03:00
Péter Szilágyi
80e887d7bf core/state: avoid escape analysis fault when accessing cached state 2020-06-08 16:11:37 +03:00
Paweł Bylica
cf6674539c core/vm: use uint256 in EVM implementation (#20787)
* core/vm: use fixed uint256 library instead of big

* core/vm: remove intpools

* core/vm: upgrade uint256, fixes uint256.NewFromBig

* core/vm: use uint256.Int by value in Stack

* core/vm: upgrade uint256 to v1.0.0

* core/vm: don't preallocate space for 1024 stack items (only 16)

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-06-08 15:24:40 +03:00
ucwong
39abd92ca8 ethstats: use timer instead of time.sleep (#20924) 2020-06-08 14:27:08 +03:00
libby kent
45b7535137 cmd/ethkey: support --passwordfile in generate command (#21183) 2020-06-08 11:55:51 +02:00
Felix Lange
da06519347 params: begin v1.9.16 release cycle 2020-06-08 11:00:17 +02:00
Felix Lange
0f77f34bb6 params: go-ethereum v1.9.15 stable 2020-06-08 10:56:48 +02:00
Péter Szilágyi
651233454e Merge pull request #21188 from karalabe/cht-1.9.15
params: update CHTs for 1.9.15 release
2020-06-08 11:16:23 +03:00
Péter Szilágyi
a5c827af86 params: update CHTs for 1.9.15 release 2020-06-08 11:14:33 +03:00
Marius van der Wijden
0b3f3be2b5 internal/ethapi: return revert reason for eth_call (#21083)
* internal/ethapi: return revert reason for eth_call

* internal/ethapi: moved revert reason logic to doCall

* accounts/abi/bind/backends: added revert reason logic to simulated backend

* internal/ethapi: fixed linting error

* internal/ethapi: check if require reason can be unpacked

* internal/ethapi: better error logic

* internal/ethapi: simplify logic

* internal/ethapi: return vmError()

* internal/ethapi: move handling of revert out of docall

* graphql: removed revert logic until spec change

* rpc: internal/ethapi: added custom error types

* graphql: use returndata instead of return

Return() checks if there is an error. If an error is found, we return nil.
For most use cases it can be beneficial to return the output even if there
was an error. This code should be changed anyway once the spec supports
error reasons in graphql responses

* accounts/abi/bind/backends: added tests for revert reason

* internal/ethapi: add errorCode to revert error

* internal/ethapi: add errorCode of 3 to revertError

* internal/ethapi: unified estimateGasErrors, simplified logic

* internal/ethapi: unified handling of errors in DoEstimateGas

* rpc: print error data field

* accounts/abi/bind/backends: unify simulatedBackend and RPC

* internal/ethapi: added binary data to revertError data

* internal/ethapi: refactored unpacking logic into newRevertError

* accounts/abi/bind/backends: fix EstimateGas

* accounts, console, internal, rpc: minor error interface cleanups

* Revert "accounts, console, internal, rpc: minor error interface cleanups"

This reverts commit 2d3ef53c53.

* re-apply the good parts of 2d3ef53c53

* rpc: add test for returning server error data from client

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-06-08 11:09:49 +03:00
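
A hedged sketch of how a revert reason is recovered from EVM return data: Solidity's `revert("...")` produces the `Error(string)` selector (0x08c379a0) followed by an ABI-encoded string (offset, length, bytes). The hand-rolled decoder below is illustrative only; the PR itself uses the abi package and a dedicated revertError type.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
)

// errorSelector is the 4-byte selector of Error(string), the prefix Solidity
// puts in front of ABI-encoded revert reasons.
var errorSelector = []byte{0x08, 0xc3, 0x79, 0xa0}

// unpackRevert hand-decodes selector | offset | length | string bytes.
func unpackRevert(ret []byte) (string, error) {
	if len(ret) < 4+64 || !bytes.Equal(ret[:4], errorSelector) {
		return "", errors.New("not an Error(string) revert")
	}
	body := ret[4:]
	offset := binary.BigEndian.Uint64(body[24:32]) // low 8 bytes of the offset word
	if offset+32 > uint64(len(body)) {
		return "", errors.New("malformed revert data")
	}
	length := binary.BigEndian.Uint64(body[offset+24 : offset+32])
	data := body[offset+32:]
	if uint64(len(data)) < length {
		return "", errors.New("truncated revert data")
	}
	return string(data[:length]), nil
}

func main() {
	// Build an example payload equivalent to revert("out of tokens").
	reason := "out of tokens"
	word := func(n uint64) []byte {
		w := make([]byte, 32)
		binary.BigEndian.PutUint64(w[24:], n)
		return w
	}
	ret := append([]byte{}, errorSelector...)
	ret = append(ret, word(32)...)                  // offset of the string head
	ret = append(ret, word(uint64(len(reason)))...) // string length
	ret = append(ret, reason...)                    // string bytes (padding omitted)
	fmt.Println(unpackRevert(ret))
}
```
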
Ev
88125d8bd0 core: fix typo in comments (#21181) 2020-06-08 10:53:56 +03:00
Marius van der Wijden
55f30db0ae core/vm, crypto/bls12381: fixed comments in bls (#21182)
* core/vm: crypto/bls12381: minor code comments

* crypto/bls12381: fix comment
2020-06-08 10:53:19 +03:00
Mariano Cortesi
9d93535674 node: missing comma on toml tags (#21187) 2020-06-08 10:52:18 +03:00
ucwong
4b2ff1457a go.mod: upgrade go-duktape to hide unused function warning (#21168) 2020-06-04 17:42:05 +02:00
Péter Szilágyi
cefa2ab1bd Merge pull request #21173 from karalabe/faucet-delete-oldaccs
cmd/faucet: delete old keystore when importing new faucet key
2020-06-04 11:58:40 +03:00
Péter Szilágyi
b1b75f0089 accounts/keystore, cmd/faucet: return old account to allow unlock 2020-06-04 10:57:21 +03:00
Péter Szilágyi
201e345c65 Merge pull request #21172 from karalabe/faucet-twitter-mobile
accounts/keystore, cmd/faucet: fix faucet double import, fix twitter url
2020-06-04 09:30:40 +03:00
Péter Szilágyi
469b8739eb accounts/keystore, cmd/faucet: fix faucet double import, fix twitter url 2020-06-04 08:59:26 +03:00
Hao Duan
8523ad450d core: filter out txs with invalid signatures as soon as possible
Once we detect an invalid transaction while recovering signatures, we should
exclude it immediately to avoid validating its signature again later.

This reduces signature validation of transactions with invalid signatures
to a single attempt.
2020-06-04 10:37:21 +08:00
Péter Szilágyi
8b83125739 Merge pull request #21162 from karalabe/daofork-order-check-fix
cmd/geth: fix the fork orders for DAO tests
2020-06-03 12:20:57 +03:00
Péter Szilágyi
f52ff0f1e9 cmd/geth: fix the fork orders for DAO tests 2020-06-03 12:17:54 +03:00
Martin Holst Swende
890757f03a cmd, core, params: initial support for yolo-v1 testnet (#21154)
* core,params,puppeth: initial support for yolo-v1 testnet

* cmd/geth, core: add yolov1 console flag

* cmd, core, params: YoloV1 bakein fixups

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-06-03 12:05:15 +03:00
kilic
4fc678542d core/vm, crypto/bls12381, params: add bls12-381 elliptic curve precompiles (#21018)
* crypto: add bls12-381 elliptic curve wrapper

* params: add bls12-381 precompile gas parameters

* core/vm: add bls12-381 precompiles

* core/vm: add bls12-381 precompile tests

* go.mod, go.sum: use latest bls12381 lib

* core/vm: move point encode/decode functions to base library

* crypto/bls12381: introduce bls12-381 library init function

* crypto/bls12381: import bls12381 elliptic curve implementation

* go.mod, go.sum: remove bls12-381 library

* remove unused frobenius coeffs

suppress warning for inp that is used in asm

* add mappings tests for zero inputs

fix swu g2 minus z inverse constant

* crypto/bls12381: fix typo

* crypto/bls12381: better comments for bls12381 constants

* crypto/bls12381: swu, use single conditional for e2

* crypto/bls12381: utils, delete empty line

* crypto/bls12381: utils, use FromHex for string to big

* crypto/bls12381: g1, g2, strict length check for FromBytes

* crypto/bls12381: field_element, comparison changes

* crypto/bls12381: change swu, isogeny constants with hex values

* core/vm: fix point multiplication comments

* core/vm: fix multiexp gas calculation and lookup for g1 and g2

* core/vm: simpler input length check for multiexp and pairing precompiles

* core/vm: rm empty multiexp result declarations

* crypto/bls12381: remove modulus type definition

* crypto/bls12381: use proper init function

* crypto/bls12381: get rid of new lines at fatal descriptions

* crypto/bls12-381: fix no-adx assembly multiplication

* crypto/bls12-381: remove old config function

* crypto/bls12381: update multiplication backend

this commit changes mul backend to 6limb eip1962 backend

mul assign operations are dropped

* core/vm/contracts_tests: externalize test vectors for precompiles

* core/vm/contracts_test: externalize failure-cases for precompiles

* core/vm: linting

* go.mod: tidy up sum file

* core/vm: fix goimports linter issues

* crypto/bls12381: build tags for plain ASM or ADX implementation

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-06-03 09:44:32 +03:00
chenglin
3f649d4852 core: collect NewTxsEvent items without holding reorg lock (#21145) 2020-06-02 18:52:20 +02:00
Guillaume Ballet
5f6f5e345e console: handle undefined + null in console funcs (#21160) 2020-06-02 18:06:22 +02:00
Felix Lange
d98c42c0e3 rpc: send websocket ping when connection is idle (#21142)
* rpc: send websocket ping when connection is idle

* rpc: use non-blocking send for websocket pingReset
2020-06-02 15:04:44 +03:00
Felix Lange
723bd8c17f p2p/discover: move discv4 encoding to new 'v4wire' package (#21147)
This moves all v4 protocol definitions to a new package, p2p/discover/v4wire.
The new package will be used for low-level protocol tests.
2020-06-02 13:20:19 +02:00
Greg Colvin
cd57d5cd38 core/vm: EIP-2315, JUMPSUB for the EVM (#20619)
* core/vm: implement EIP 2315, subroutines for the EVM

* core/vm: eip 2315 - lintfix + check jump dest validity + check ret stack size constraints

  logger: markdown-friendly traces, validate jumpdest, more testcase, correct opcodes

* core/vm: update subroutines acc to eip: disallow walk-into

* core/vm/eips: gas cost changes for subroutines

* core/vm: update opcodes for EIP-2315

* core/vm: define RETURNSUB as a 'jumping' operation + review concerns

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-06-02 13:30:16 +03:00
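
A hedged sketch of the subroutine semantics added here: JUMPSUB pushes the return address onto a separate return stack (kept as uint32 values elsewhere in this changeset) and jumps to the code after a BEGINSUB, while RETURNSUB pops and resumes there. The tiny model below is illustrative, not the interpreter code.

```go
package main

import (
	"errors"
	"fmt"
)

// returnStack models the separate, bounded subroutine return stack introduced
// by EIP-2315 (distinct from the data stack).
type returnStack struct {
	data []uint32
	max  int
}

// jumpsub records where execution should resume and returns the new pc,
// i.e. the first instruction after the BEGINSUB marker at dest.
func (r *returnStack) jumpsub(pc, dest uint32) (uint32, error) {
	if len(r.data) >= r.max {
		return 0, errors.New("return stack overflow")
	}
	r.data = append(r.data, pc+1) // resume after the JUMPSUB
	return dest + 1, nil          // skip over the BEGINSUB
}

// returnsub pops the saved address; RETURNSUB is itself a jumping operation.
func (r *returnStack) returnsub() (uint32, error) {
	if len(r.data) == 0 {
		return 0, errors.New("return stack underflow")
	}
	pc := r.data[len(r.data)-1]
	r.data = r.data[:len(r.data)-1]
	return pc, nil
}

func main() {
	rs := &returnStack{max: 1024}
	pc, _ := rs.jumpsub(5, 20) // enter the subroutine at offset 20 from pc=5
	fmt.Println("entered subroutine at", pc)
	pc, _ = rs.returnsub()
	fmt.Println("returned to", pc)
}
```
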
rene
a35382de94 metrics: replace gosigar with gopsutil (#21041)
* replace gosigar with gopsutil

* removed check for whether GOOS is openbsd

* removed accidental import of runtime

* potential fix for difference in units between gosigar and gopsutil

* fixed lint error

* remove multiplication factor

* uses cpu.ClocksPerSec as the multiplication factor

* changed dependency from shirou to renaynay (#20)

* updated dep

* switching back from using renaynay fork to using upstream as PRs were merged on upstream

* removed empty line

* optimized imports

* tidied go mod
2020-06-02 12:08:33 +03:00
Martin Holst Swende
a5eee8d1dc eth/downloader: more context in errors (#21067)
This PR makes use of go 1.13 error handling, wrapping errors and using
errors.Is to check a wrapped root-cause. It also removes the travis
builders for go 1.11 and go 1.12.
2020-05-29 11:12:43 +02:00
gary rong
389da6aa48 trie: enforce monotonic range in prover and return end marker (#21130)
* trie: add hasRightElement indicator

* trie: ensure the range is monotonically increasing

* trie: address comment and fix lint

* trie: address comment

* trie: make linter happy

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-27 17:37:37 +03:00
sixdays
b2c59e297b consensus/clique: make internal error private (#21132)
Co-authored-by: linjing <linjingjing@baidu.com>
2020-05-27 17:12:13 +03:00
Felix Lange
9219e0fba4 eth: interrupt chain insertion on shutdown (#21114)
This adds a new API method on core.BlockChain to allow interrupting
running data inserts, and calls the method before shutting down the
downloader.

The BlockChain interrupt checks are now done through a method instead
of inlining the atomic load everywhere. There is no loss of efficiency from
this and it makes the interrupt protocol a lot clearer because the check is
defined next to the method that sets the flag.
2020-05-26 21:37:37 +02:00
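
A hedged sketch of the interrupt pattern described above: the flag is set atomically through one method and read through another, so the protocol lives in one place instead of inlined atomic loads everywhere. Method and field names here are assumptions; the types are cut down.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// blockChain is a cut-down stand-in for core.BlockChain, keeping only the
// interrupt flag relevant to this change.
type blockChain struct {
	procInterrupt int32 // set to 1 to abort in-flight inserts
}

// StopInsert asks any running data insert to bail out at its next check.
func (bc *blockChain) StopInsert() {
	atomic.StoreInt32(&bc.procInterrupt, 1)
}

// insertStopped is the single place the flag is read, instead of inlining the
// atomic load at every loop.
func (bc *blockChain) insertStopped() bool {
	return atomic.LoadInt32(&bc.procInterrupt) == 1
}

func (bc *blockChain) insertChain(blocks []int) int {
	n := 0
	for range blocks {
		if bc.insertStopped() {
			break // shutdown requested, stop importing
		}
		n++
	}
	return n
}

func main() {
	bc := &blockChain{}
	bc.StopInsert()
	fmt.Println("imported", bc.insertChain(make([]int, 10)), "blocks before interrupt")
}
```
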
Felix Lange
4873a9d3c3 build: upgrade to golangci lint v1.27.0 (#21127)
* build: upgrade to golangci-lint v1.27.0

* build: raise lint timeout to 3 minutes
2020-05-26 14:24:22 +03:00
gary rong
070a5e1252 trie: fix for range proof (#21107)
* trie: fix for range proof

* trie: fix typo
2020-05-26 13:11:29 +03:00
Hao Duan
81e9caed7d ethstats: avoid blocking chan when received invalid stats request (#21073)
* ethstats: avoid blocking chan when received invalid stats request

* ethstats: minor code polishes

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-26 13:09:00 +03:00
ucwong
7ddb40239b ethdb/leveldb: use timer instead of time.After (#21066) 2020-05-26 11:03:37 +02:00
Richard Patel
2f66a8d614 metrics/prometheus: define TYPE once, add tests (#21068)
* metrics/prometheus: define type once for histograms

* metrics/prometheus: test collector
2020-05-26 12:00:09 +03:00
Felix Lange
dbf6b8a797 cmd/utils: fix default DNS discovery configuration (#21124) 2020-05-25 19:50:36 +02:00
meowsbits
befecc9fdf consensus/ethash: fix flaky test by reading seal results (#21085) 2020-05-25 18:01:03 +02:00
Martin Holst Swende
e868adde30 core/vm: improve jumpdest lookup (#21123) 2020-05-25 16:12:48 +02:00
yutianwu
25a661e0c2 consensus/clique: remove redundant pair of parentheses (#21104) 2020-05-25 12:00:18 +02:00
Martin Michlmayr
4f2784b38f all: fix typos in comments (#21118) 2020-05-25 10:21:28 +02:00
ucwong
48e3b95e77 miner: replace use of 'self' as receiver name (#21113) 2020-05-25 10:20:09 +02:00
Felföldi Zsolt
b4a2681120 les, les/lespay: implement new server pool (#20758)
This PR reimplements the light client server pool. It is also a first step
to move certain logic into a new lespay package. This package will contain
the implementation of the lespay token sale functions, the token buying and
selling logic and other components related to peer selection/prioritization
and service quality evaluation. Over the long term this package will be
reusable for incentivizing future protocols.

Since the LES peer logic is now based on enode.Iterator, it can now use
DNS-based fallback discovery to find servers.

This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4
2020-05-22 13:46:34 +02:00
gary rong
65ce550b37 trie: extend range proofs with non-existence (#21000)
* trie: implement range proof with non-existent edge proof

* trie: fix cornercase

* trie: consider empty range

* trie: add singleSide test

* trie: support all-elements range proof

* trie: fix typo

* trie: tiny typos and formulations

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-20 15:45:38 +03:00
ucwong
0a99efa61f whisper: use canonical import name of package go-ethereum (#21099) 2020-05-20 10:32:54 +03:00
Boqin Qin
d5b7d1cc34 accounts: add blockByNumberNoLock() to avoid double-lock (#20983)
* abi/bind/backends: testcase for double-lock

* accounts: add blockByNumberNoLock to avoid double-lock

* backend/simulated: use stateroot, not blockhash for retrieving state

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-05-19 12:48:27 +02:00
Martin Holst Swende
e0987f67e0 cmd/clef, signer/core: password input fixes (#20960)
* cmd/clef, signer/core: use better terminal input for passwords, make it possible to avoid boot-up warning

* all: move commonly used prompter to isolated (small) package

* cmd/clef: Add new --acceptWarn to clef README

* cmd/clef: rename flag 'acceptWarn' to 'suppress-bootwarn'

Co-authored-by: ligi <ligi@ligi.de>
2020-05-19 10:44:46 +02:00
Felix Lange
3666da8a4b console: fix unlockAccount argument count check (#21081) 2020-05-14 14:12:52 +03:00
Marius van der Wijden
f3f1e59eea accounts/abi: simplify reflection logic (#21058)
* accounts/abi: simplified reflection logic

* accounts/abi: simplified reflection logic

* accounts/abi: removed unpack

* accounts/abi: removed comments

* accounts/abi: removed unnecessary complications

* accounts/abi: minor changes in error messages

* accounts/abi: removed unused code

* accounts/abi: fixed indexed argument unpacking

* accounts/abi: removed superfluous test cases

This commit removes two test cases. The first one is trivially invalid as we have the same
test cases as passing in packing_test.go L375. The second one passes now,
because we don't need the mapArgNamesToStructFields in unpack_atomic anymore.
Checking for purely underscored arg names generally should not be something we do
as the abi/contract is generally out of the control of the user.

* accounts/abi: removed comments, debug println

* accounts/abi: added commented out code

* accounts/abi: addressed comments

* accounts/abi: remove unnecessary dst.CanSet check

* accounts/abi: added dst.CanSet checks
2020-05-13 17:50:18 +02:00
Satpal
677724af0c cmd: fix log contexts (#21077) 2020-05-13 18:34:24 +03:00
Péter Szilágyi
46698d7931 params: begin v1.9.15 release cycle 2020-05-13 12:33:58 +03:00
Péter Szilágyi
6d74d1e5f7 params: release go-ethereum v1.9.14 2020-05-13 12:31:35 +03:00
Hao Duan
a188a1e150 ethstats: stop report ticker in each loop cycle #21070 (#21071)
Co-authored-by: Hao Duan <duan.hao@hyperchain.cn>
2020-05-13 12:06:19 +03:00
gary rong
d02301f758 core: fix missing receipt on Clique crashes (#21045)
* core: fix missing receipt

* core: address comment
2020-05-13 11:33:48 +03:00
Marius van der Wijden
0b63915430 accounts/abi: allow overloaded argument names (#21060)
* accounts/abi: allow overloaded argument names

In solidity it is possible to create the following contract:
```
contract Overloader {
    struct F { uint _f; uint __f; uint f; }
    function f(F memory f) public {}
}
```
This however resulted in a panic in the abi package.

* accounts/abi fixed error handling
2020-05-12 13:02:23 +02:00
Marius van der Wijden
b8ea9042e5 accounts/abi: accounts/abi/bind: Move topics to abi package (#21057)
* accounts/abi/bind: added test cases for waitDeployed

* accounts/abi/bind: added test case for boundContract

* accounts/abi/bind: removed unnecessary resolve methods

* accounts/abi: moved topics from /bind to /abi

* accounts/abi/bind: cleaned up format... functions

* accounts/abi: improved log message

* accounts/abi: added type tests

* accounts/abi/bind: remove superfluous template methods
2020-05-12 12:21:40 +02:00
gary rong
7b7e5921a4 miner: support disabling empty block precommits from the Go API (#20736)
* cmd, miner: add noempty-precommit flag

* cmd, miner: get rid of external flag

* miner: change bool to atomic int

* miner: fix tiny typo

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-12 13:11:34 +03:00
ucwong
7540c53e72 core/rawdb: remove unused math (#21065) 2020-05-12 12:19:15 +03:00
ucwong
aaede53738 core/rawdb: log format fix for Unindexing transaction (#21064)
* core/rawdb: log format fix for Unindexing transaction

* core/rawdb: tiny fixup

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-12 11:46:35 +03:00
gary rong
53cac027d0 les: drop the message if the entire p2p connection is stuck (#21033)
* les: drop the message if the entire p2p connection is stuck

* les: fix lint
2020-05-12 11:02:15 +03:00
Martin Holst Swende
7ace5a3a8b core: fixup blockchain tests (#21062)
core: fixup blockchain tests
2020-05-12 08:46:10 +03:00
Péter Szilágyi
40859a2441 Merge pull request #21061 from karalabe/cht-1.9.14
params: bump CHTs for the v1.9.14 release
2020-05-11 19:05:47 +03:00
Martin Holst Swende
4535230059 cmd, core, eth: background transaction indexing (#20302)
* cmd, core, eth: init tx lookup in background

* core/rawdb: tiny log fixes to make it clearer what's happening

* core, eth: fix rebase errors

* core/rawdb: make reindexing less generic, but more optimal

* rlp: implement rlp list iterator

* core/rawdb: new implementation of tx indexing/unindex using generic tx iterator and hashing rlp-data

* core/rawdb, cmd/utils: fix review concerns

* cmd/utils: fix merge issue

* core/rawdb: add some log formatting polishes

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-11 18:58:43 +03:00
Péter Szilágyi
126ac94f36 params: bump CHTs for the v1.9.14 release 2020-05-11 18:56:09 +03:00
Felix Lange
6f54ae24cd p2p: add 0 port check in dialer (#21008)
* p2p: add low port check in dialer

We already have a check like this for UDP ports, add a similar one in
the dialer. This prevents dials to port zero and it's also an extra
layer of protection against spamming HTTP servers.

* p2p/discover: use errLowPort in v4 code

* p2p: change port check

* p2p: add comment

* p2p/simulations/adapters: ensure assigned port is in all node records
2020-05-11 18:11:17 +03:00
AusIV
069a7e1f8a core/rawdb: stop freezer process as part of freezer.Close() (#21010)
* core/rawdb: Stop freezer process as part of freezer.Close()

When you call db.Close(), it was closing the leveldb database first,
then closing the freezer, but never stopping the freezer process.
This could cause the freezer to attempt to write to leveldb after
leveldb had been closed, leading to a crash with a non-zero exit code.

This change adds a quit channel to the freezer, and freezer.Close()
will not return until the freezer process has stopped.

Additionally, when you call freezerdb.Close(), it will close the
AncientStore before closing leveldb, to ensure that the freezer goroutine
will be stopped before leveldb is closed.

* core/rawdb: Fix formatting for golint

* core/rawdb: Use backoff flag to avoid repeating select

* core/rawdb: Include accidentally omitted backoff
2020-05-11 15:11:17 +03:00
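
A hedged sketch of the quit-channel handshake described above: Close signals the background freeze loop and does not return until that loop acknowledges it has stopped, so the key-value database underneath can be closed safely afterwards. Types and names are simplified stand-ins for the freezer.

```go
package main

import (
	"fmt"
	"time"
)

type freezer struct {
	quit chan struct{} // closed to ask the freeze loop to stop
	done chan struct{} // closed by the loop once it has fully exited
}

func newFreezer() *freezer {
	f := &freezer{quit: make(chan struct{}), done: make(chan struct{})}
	go f.freeze()
	return f
}

// freeze is the background goroutine moving data into the ancient store.
func (f *freezer) freeze() {
	defer close(f.done)
	for {
		select {
		case <-f.quit:
			return // stop before the database underneath goes away
		case <-time.After(10 * time.Millisecond):
			// move another batch of blocks into the freezer files
		}
	}
}

// Close signals the freeze loop and waits for it to stop.
func (f *freezer) Close() {
	close(f.quit)
	<-f.done
}

func main() {
	f := newFreezer()
	f.Close()
	fmt.Println("freezer stopped before database close")
}
```
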
Martin Holst Swende
bd60295de5 console: fix some crashes/errors in the bridge (#21050)
Fixes #21046
2020-05-11 11:59:21 +02:00
Marius van der Wijden
930e82d7f4 params, cmd/utils: remove outdated discv5 bootnodes, deprecate flags (#20949)
* params: remove outdated discv5 bootnodes

* cmd/utils: deprecated bootnodesv4/v5 flags
2020-05-11 11:16:32 +03:00
Péter Szilágyi
37877e86ed Merge pull request #21056 from karalabe/statedb-simpler-code
core/state: make GetCodeSize mirror GetCode implementation wise
2020-05-11 11:09:25 +03:00
gary rong
263622f44f accounts/abi/bind/backend, internal/ethapi: recap gas limit with balance (#21043)
* accounts/abi/bind/backend, internal/ethapi: recap gas limit with balance

* accounts, internal: address comment and fix lint

* accounts, internal: extend log message

* tiny nits to format hexutil.Big and nil properly

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-05-11 11:08:20 +03:00
Péter Szilágyi
0b2edf05bb core/state: make GetCodeSize mirror GetCode implementation wise 2020-05-11 10:28:56 +03:00
ligi
e29e4c2376 build: fix CLI params for windows LNK files (#21055)
* build: Fix CLI params for windows LNK files

closes #21054

* Remove parameters
2020-05-11 10:05:37 +03:00
Martin Holst Swende
82f9ed49fa core/state: avoid statedb.dbErr due to emptyCode (#21051)
* core/state: more verbose statedb errors

* core/state: fix flaw

* core/state: fixed lint

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2020-05-08 21:52:20 +03:00
Martin Holst Swende
b0b65d017f core/state: abort commit if read errors have occurred (#21039)
This finally adds the error check that the documentation of StateDB.dbErr
promises to do. dbErr was added in 9e5f03b6c (June 2017), and the check was
already missing in that commit. We somehow survived without it for three years.
2020-05-07 15:13:34 +02:00
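
A hedged sketch of the promised check: database read errors are latched into a dbErr field as they occur, and commit aborts if one has been recorded. The types are stripped down, not the actual StateDB.

```go
package main

import (
	"errors"
	"fmt"
)

// stateDB is a stand-in with just the error-latching behaviour relevant here.
type stateDB struct {
	dbErr error // first database read error encountered, if any
}

// setError records the first read error; later errors keep the original.
func (s *stateDB) setError(err error) {
	if s.dbErr == nil {
		s.dbErr = err
	}
}

// Commit refuses to write a state root derived from incomplete reads.
func (s *stateDB) Commit() (string, error) {
	if s.dbErr != nil {
		return "", fmt.Errorf("commit aborted due to earlier error: %w", s.dbErr)
	}
	return "0xroot", nil
}

func main() {
	s := &stateDB{}
	s.setError(errors.New("missing trie node"))
	if _, err := s.Commit(); err != nil {
		fmt.Println(err)
	}
}
```
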
Martin Holst Swende
1152f45849 core/state: include zero-address in state dump if present (#21038)
* Include 0x0000 address into the dump if it is present

* core/state: go fmt

Co-authored-by: Alexey Akhunov <akhounov@gmail.com>
2020-05-07 16:06:31 +03:00
Marius van der Wijden
dd88bd82c9 accounts/abi/bind: add void if no return args specified (#21002)
* accounts/abi/bind: add void if no return args specified

Currently the java generator generates invalid input on pure/view functions
that have no return type. e.g. `function f(uint u) view public {}`
This is not a problem in practice as people rarely ever write functions like this.

* accounts/abi/bind: use elseif instead of nested if
2020-05-07 10:24:10 +03:00
gary rong
85944c2561 core/state/snapshot: fix typo (#21037) 2020-05-07 10:07:59 +03:00
Péter Szilágyi
87c463c47a Merge pull request #21036 from karalabe/snapshot-storage-leak
core/state/snapshot: don't create storage list for non-existing accounts
2020-05-06 18:11:15 +03:00
Péter Szilágyi
90af6dae6e core/state/snapshot: don't create storage list for non-existing accounts 2020-05-06 17:22:38 +03:00
Boqin Qin
39c64d85a2 core: avoid double-lock in tx_pool_test (#20984) 2020-05-06 15:47:59 +02:00
Guillaume Ballet
234cc8e77f eth/downloader: minor typo fixes in comments (#21035) 2020-05-06 15:35:04 +02:00
gary rong
5cdc2dffda trie: fix TestBadRangeProof unit test (#21034) 2020-05-06 15:33:57 +02:00
ploui
c2147ee154 eth: don't inadvertently enable snapshots in archive nodes (#21025)
* eth: don't reassign more cache than is being made available

* eth: don't inadvertently enable snapshot in a case where --snapshot wasn't given
2020-05-06 13:01:01 +03:00
Péter Szilágyi
b98259868b Merge pull request #21032 from karalabe/skip-announce-goroutine-eth64
eth: skip transaction announcer goroutine on eth<65
2020-05-06 12:59:33 +03:00
Péter Szilágyi
292570ad6c Merge pull request #21028 from karalabe/memfix-32bit-arch
cmd/geth: handle memfixes on 32bit arch with large RAM
2020-05-06 12:58:51 +03:00
Péter Szilágyi
34ed2d834a eth: skip transaction announcer goroutine on eth<65 2020-05-06 12:47:27 +03:00
Marius van der Wijden
933acf3389 account/abi: remove superfluous type checking (#21022)
* accounts/abi: added getType func to Type struct

* accounts/abi: fixed tuple unpack

* accounts/abi: removed type.Type

* accounts/abi: added comment

* accounts/abi: removed unused types

* accounts/abi: removed superfluous declarations

* accounts/abi: typo
2020-05-05 16:30:43 +02:00
Péter Szilágyi
44a3b8c04c Merge pull request #21026 from karalabe/disable-highmem-test
tests: skip consensus test using 1GB RAM
2020-05-05 15:11:40 +03:00
Péter Szilágyi
a52511e692 build: raise test timeout back to 10 mins (#21027) 2020-05-05 15:11:00 +03:00
Péter Szilágyi
8f8ff8d601 cmd/geth: handle memfixes on 32bit arch with large RAM 2020-05-05 14:22:51 +03:00
Péter Szilágyi
4515772993 tests: skip consensus test using 1GB RAM 2020-05-05 12:27:09 +03:00
rene
c989bca173 cmd/utils: renames flags related to http-rpc server (#20935)
* rpc flags related to starting http server renamed to http

* old rpc flags aliased and still functional

* pprof flags fixed

* renames gpo related flags

* linted

* renamed rpc flags for consistency and clarity

* added warn logs

* added more warn logs for all deprecated flags for consistency

* moves legacy flags to separate file, hides older flags under show-deprecated-flags command

* legacy prefix and moved some more legacy flags to legacy file

* fixed circular import

* added docs

* fixed imports lint error

* added notes about when flags were deprecated

* cmd/utils: group flags by deprecation date + reorder by date,

* modified deprecated comments for consistency, added warn log for --rpc

* making sure deprecated flags are still functional

* show-deprecated-flags command cleaned up

* fixed lint errors

* corrected merge conflict

* IsSet --> GlobalIsSet

* uncategorized flags, if not deprecated, displayed under misc

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-05-05 11:19:17 +03:00
Péter Szilágyi
587656619d Merge pull request #21023 from karalabe/snapshot-verify-iterator-release
core/state/snapshot: release iterator after verification
2020-05-04 17:10:17 +03:00
Péter Szilágyi
da59147014 core/state/snapshot: release iterator after verification 2020-05-04 15:14:08 +03:00
Péter Szilágyi
5e45db7610 Merge pull request #21021 from karalabe/tests-snapshot-gen-cleanup
tests: cleanup snapshot generator goroutine leak
2020-05-04 15:12:35 +03:00
Marius van der Wijden
ab72803e6f accounts/abi: move U256Bytes to common/math (#21020) 2020-05-04 14:09:14 +02:00
Marius van der Wijden
e872083d44 accounts/abi: removed Kind from Type struct (#21009)
* accounts/abi: removed Kind from Type struct

* accounts/abi: removed unused code
2020-05-04 13:20:20 +02:00
Péter Szilágyi
65cd28aa0e tests: cleanup snapshot generator goroutine leak 2020-05-04 12:10:02 +03:00
Martin Holst Swende
510b6f90db accounts/external: convert signature v value to 0/1 (#20997)
This fixes an issue with clef, which already transforms the signature
to use the legacy 27/28 encoding.

Fixes #20994
2020-05-01 13:52:41 +02:00
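
The conversion itself is tiny; a sketch assuming the usual legacy 27/28 recovery id in the last byte of a 65-byte [R || S || V] signature.

```go
package main

import "fmt"

// normalizeV converts a signature with legacy V in {27, 28} to the {0, 1}
// form expected by the rest of the stack; already-normalized signatures are
// left untouched.
func normalizeV(sig []byte) []byte {
	if len(sig) == 65 && sig[64] >= 27 {
		sig[64] -= 27
	}
	return sig
}

func main() {
	sig := make([]byte, 65)
	sig[64] = 28 // legacy encoding, as produced by clef in this scenario
	fmt.Println("v after normalization:", normalizeV(sig)[64])
}
```
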
Boqin Qin
c43be6cf87 les: remove invalid use of t.Fatal in TestHandshake (#21012) 2020-05-01 13:48:52 +02:00
Felix Lange
7e4d1925f0 go.sum: run go mod tidy (#21014) 2020-05-01 12:51:04 +02:00
Martin Holst Swende
d2d3166f35 accounts/external: fill account-cache if that hasn't already been done, fixes #20995 (#20998) 2020-04-30 18:57:06 +02:00
gary rong
2337aa64eb core/state/snapshot: fix trie generator reporter (#21004) 2020-04-30 10:43:50 +03:00
Péter Szilágyi
3cebfb6664 Merge pull request #21003 from karalabe/snapshot-journal-nilfix
core/state/snapshot: fix journal nil deserialization
2020-04-30 10:26:25 +03:00
Péter Szilágyi
4b6f6ffe23 core/state/snapshot: fix journal nil deserialization 2020-04-29 18:00:29 +03:00
gary rong
26d271dfbb core/state/snapshot: implement storage iterator (#20971)
* core/state/snapshot: implement storage iterator

* core/state/snapshot, tests: implement helper function

* core/state/snapshot: fix storage issue

If an account is deleted in tx_1 but recreated in tx_2,
it can happen that in this diff layer both destructedSet
and storageData record this account. In this case, the storage
iterator should be able to iterate the slots belonging to the new account
but disable further iteration into deeper layers (which belong to the old account).

* core/state/snapshot: address peter and martin's comment

* core/state: address comments

* core/state/snapshot: fix test
2020-04-29 12:53:08 +03:00
ucwong
1264c19f11 go.mod : goupnp v1.0.0 upgrade (#20996) 2020-04-28 14:57:07 +03:00
Martin Holst Swende
7f95a85fd4 signer, log: properly escape character sequences (#20987)
* signer: properly handle terminal escape characters

* log: use strconv conversion instead of custom escape function

* log: remove reflection tests for nil
2020-04-28 14:28:38 +03:00
ucwong
0708b573bc event, whisper/whisperv6: use defer where possible (#20940) 2020-04-28 10:53:08 +02:00
Steven E. Harris
9887edd580 rpc: add explicit 200 response for empty HTTP GET (#20952) 2020-04-28 10:43:21 +02:00
ucwong
1893266c59 go.mod: upgrade to golang-lru v0.5.4 (#20992)
golang-lru is now a go module, and the upgrade corrects a couple
of minor issues. In particular, the library could crash if you inserted
nil into an LRU cache.
2020-04-28 10:22:23 +02:00
Felix Lange
92a7538ed3 core: improve TestLogRebirth (#20961)
This is a resubmit of #20668 which rewrites the problematic test
without any additional goroutines. It also documents the test better.

The purpose of this test is checking whether log events are sent
correctly when importing blocks. The test was written at a time when
blockchain events were delivered asynchronously, making the check hard
to pull off. Now that core.BlockChain delivers events synchronously
during the call to InsertChain, the test can be simplified.

Co-authored-by: BurtonQin <bobbqqin@gmail.com>
2020-04-28 10:06:49 +02:00
Julian Y
5c3993444d rpc: make ExampleClientSubscription work with the geth API (#19483)
This corrects the call to eth_getBlockByNumber, which previously
returned this error:

  can't get latest block: missing value for required argument 1

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-27 17:25:24 +02:00
Boqin Qin
ba068d40dd core: add check in AddChildIndexer to avoid double lock (#20982)
This fixes a theoretical double lock condition which could occur in

    indexer.AddChildIndexer(indexer)

Nobody would ever do that though.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-27 15:16:30 +02:00
Marius van der Wijden
e32ee6ac05 accounts/abi: added abi test cases, minor bug fixes (#20903)
* accounts/abi: added documentation

* accounts/abi: reduced usage of arguments.LengthNonIndexed

* accounts/abi: simplified reflection logic

* accounts/abi: moved testjson data into global declaration

* accounts/abi: removed duplicate test cases

* accounts/abi: reworked abi tests

* accounts/abi: added more tests for abi packing

* accounts/abi/bind: refactored base tests

* accounts/abi: run pack tests as subtests

* accounts/abi: removed duplicate tests

* accounts/abi: removed unused arguments.LengthNonIndexed

Due to refactors to the code, we do not need the arguments.LengthNonIndexed function anymore.
You can still get the length by calling len(arguments.NonIndexed())

* accounts/abi: added type test

* accounts/abi: modified unpack test to pack test

* accounts/abi: length check on arrayTy

* accounts/abi: test invalid abi

* accounts/abi: fixed rebase error

* accounts/abi: fixed rebase errors

* accounts/abi: removed unused definition

* accounts/abi: merged packing/unpacking tests

* accounts/abi: fixed [][][32]bytes encoding

* accounts/abi: added tuple test cases

* accounts/abi: renamed getMockLog -> newMockLog

* accounts/abi: removed duplicate test

* accounts/abi: bools -> booleans
2020-04-27 15:07:33 +02:00
Steven E. Harris
40283d0522 node: shut down all node-related HTTP servers gracefully (#20956)
Rather than just closing the underlying network listener to stop our
HTTP servers, use the graceful shutdown procedure, waiting for any
in-process requests to finish.
2020-04-27 11:16:00 +02:00
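The standard-library pattern behind this kind of change, as a hedged sketch (plain net/http, not the node package's actual code): Server.Shutdown stops accepting new connections and waits for in-flight requests before returning.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:8545", Handler: http.NotFoundHandler()}

	go func() {
		// ErrServerClosed is the expected result after a graceful shutdown.
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	time.Sleep(100 * time.Millisecond) // pretend the node ran for a while

	// Shutdown stops accepting new connections and waits (up to the context
	// deadline) for in-process requests to finish, instead of just closing
	// the underlying listener.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Fatal(err)
	}
}
```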
Péter Szilágyi
a070e23178 Merge pull request #20988 from karalabe/catchup-shutdown
eth: fix shutdown regression to abort downloads, not just cancel
2020-04-27 12:02:13 +03:00
Péter Szilágyi
b0bbd47185 eth: fix shutdown regression to abort downloads, not just cancel 2020-04-27 11:22:15 +03:00
tgyKomgo
1aa83290f5 p2p/enode: update code comment (#20972)
It is possible to specify enode URLs using a domain name since
commit b90cdbaa79, but the code comment still said that only
IP addresses are allowed.

Co-authored-by: admin@komgo.io <KomgoRocks2018!>
2020-04-24 16:50:03 +02:00
gary rong
8a2e8faadd core/state/snapshot: fix binary iterator (#20970) 2020-04-24 14:43:49 +03:00
gary rong
44ff3f3dc9 trie: initial implementation for range proof (#20908)
* trie: initial implementation for range proof

* trie: add benchmark

* trie: fix lint

* trie: fix minor issue

* trie: unset the edge valuenode as well

* trie: unset the edge valuenode as nilValuenode
2020-04-24 14:37:56 +03:00
Marius van der Wijden
38aab0aa83 accounts/keystore: fix double import race (#20915)
* accounts/keystore: fix race in Import/ImportECDSA

* accounts/keystore: added import/export tests

* cmd/geth: improved TestAccountImport test

* accounts/keystore: added import/export tests

* accounts/keystore: fixed naming

* accounts/keystore: fixed typo

* accounts/keystore: use mutex instead of rwmutex

* accounts: use errors instead of fmt
2020-04-22 12:52:29 +03:00
icodezjb
2ec7232191 core: mirror full node reorg logic in light client too (#20931)
* core: fix the condition of reorg

* core: fix nitpick to only retrieve head once

* core: don't reorg if received chain is longer at same diff

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-04-22 11:27:47 +03:00
gary rong
b9df7ecdc3 all: separate consensus error and evm internal error (#20830)
* all: separate consensus error and evm internal error

There are actually two types of error that can be returned when
a transaction/message call is executed: (a) a consensus error and
(b) an EVM internal error. The former should be treated as a
consensus issue, e.g. the sender doesn't have enough assets to
purchase the gas it specifies. The latter is allowed, since the
EVM itself is a black box and internal errors are allowed to happen.

This PR emphasizes the difference by introducing an executionResult
structure. The EVM error is embedded inside, so if any error is
returned it indicates a consensus issue.

This PR also improves the `EstimateGas` API to return the concrete
revert reason if the transaction always fails.

* all: polish

* accounts/abi/bind/backends: add tests

* accounts/abi/bind/backends, internal: cleanup error message

* all: address comments

* core: fix lint

* accounts, core, eth, internal: address comments

* accounts, internal: resolve revert reason if possible

* accounts, internal: address comments
2020-04-22 11:25:36 +03:00
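A rough sketch of the separation described above, with hypothetical names (the real structure lives in core and differs in detail): a non-nil outer error means a consensus-level failure, while EVM failures travel inside the result together with any revert reason.

```go
package main

import (
	"errors"
	"fmt"
)

// executionResult is an illustrative stand-in for the structure described in
// the commit: EVM-internal failures are embedded in the result, so a non-nil
// outer error always means a consensus-level problem (e.g. insufficient funds).
type executionResult struct {
	UsedGas    uint64
	Err        error  // EVM error, e.g. revert or out of gas
	ReturnData []byte // revert reason payload, if any
}

func apply(senderBalance, gasCost uint64, evmRun func() ([]byte, error)) (*executionResult, error) {
	// Consensus check: the sender must be able to pay for the gas it specified.
	if senderBalance < gasCost {
		return nil, errors.New("insufficient funds for gas")
	}
	ret, evmErr := evmRun()
	// EVM errors do not bubble up as the outer error.
	return &executionResult{UsedGas: gasCost, Err: evmErr, ReturnData: ret}, nil
}

func main() {
	res, err := apply(100, 21000, nil)
	fmt.Println(res, err) // nil result, consensus error

	res, err = apply(100000, 21000, func() ([]byte, error) {
		return []byte("reason"), errors.New("execution reverted")
	})
	fmt.Printf("err=%v evmErr=%v reason=%s\n", err, res.Err, res.ReturnData)
}
```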
ucwong
c60c0c97e7 go.mod : update fastcache to 1.5.7 (#20936) 2020-04-22 10:31:32 +03:00
Péter Szilágyi
870d4c4970 Merge pull request #20953 from holiman/fixdlseek
core/state/snapshot: make difflayer account iterator seek operation inclusive
2020-04-22 09:50:01 +03:00
ucwong
6c458f32f8 p2p: defer wait group done in protocol start (#20951) 2020-04-21 16:42:18 +02:00
Martin Holst Swende
c036fe35a8 core/state/snapshot: make difflayer account iterator seek operation inclusive 2020-04-21 16:26:02 +02:00
Boqin Qin
7599999dcd snapshot: add Unlock before return (#20948)
* Forget Unlock in snapshot

* Remove Unlock before panic
2020-04-21 11:11:38 +03:00
Péter Szilágyi
79b68dd78d Merge pull request #20923 from holiman/fix_seckeybuf
trie: fix concurrent usage of secKeyBuf, ref #20920
2020-04-20 16:52:04 +03:00
rene
648b0cb714 cmd, core: remove override muir glacier and override istanbul (#20942) 2020-04-20 12:46:38 +03:00
Marius van der Wijden
ac9c03f910 accounts/abi: Prevent recalculation of internal fields (#20895)
* accounts/abi: prevent recalculation of ID, Sig and String

* accounts/abi: fixed unpacking of no values

* accounts/abi: multiple fixes to arguments

* accounts/abi: refactored methodName and eventName

This commit moves the complicated logic of how we assign method names
and event names if they already exist into their own functions for
better readability.

* accounts/abi: prevent recalculation of internal

In this commit, I changed the way we calculate the string
representations, sig representations and the IDs of methods. Before
this change, these fields would be recalculated every time someone
called .Sig(), .String() or .ID() on a method or an event.

Additionally this commit fixes issue #20856 as we assign names to inputs
with no name (input with name "" becomes "arg0")

* accounts/abi: added unnamed event params test

* accounts/abi: fixed rebasing errors in method sig

* accounts/abi: fixed rebasing errors in method sig

* accounts/abi: addressed comments

* accounts/abi: added FunctionType enumeration

* accounts/abi/bind: added test for unnamed arguments

* accounts/abi: improved readability in NewMethod, nitpicks

* accounts/abi: method/eventName -> overloadedMethodName
2020-04-20 09:01:04 +02:00
Boqin Qin
ca22d0761b event: fix inconsistency in Lock and Unlock (#20933)
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-17 14:51:38 +02:00
Nishant Das
7a63faf734 p2p/discover: add helper methods to UDPv5 (#20918)
This adds two new methods to UDPv5, AllNodes and LocalNode.

AllNodes returns all the nodes stored in the local table; this is
useful for the purposes of metrics collection and also debugging any
potential issues with other discovery v5 implementations.

LocalNode returns the local node object. The reason for exposing this
is so that users can modify and set/delete new key-value entries in
the local record.
2020-04-16 15:58:37 +02:00
Péter Szilágyi
3bf1054a13 params: begin v1.9.14 release cycle 2020-04-16 10:41:32 +03:00
Péter Szilágyi
cbc4ac264e params: release Geth v1.9.13 2020-04-16 10:39:22 +03:00
Péter Szilágyi
359d9c3f0a Merge pull request #20925 from karalabe/cht-1.9.13
params: update CHTs for the 1.9.13 release
2020-04-15 18:25:54 +03:00
Péter Szilágyi
d77d35a4a9 params: update CHTs for the 1.9.13 release 2020-04-15 18:03:10 +03:00
Martin Holst Swende
6402c42b67 all: simplify and fix database iteration with prefix/start (#20808)
* core/state/snapshot: start fixing disk iterator seek

* ethdb, rawdb, leveldb, memorydb: implement iterators with prefix and start

* les, core/state/snapshot: iterator fixes

* all: remove two iterator methods

* all: rename Iteratee.NewIteratorWith -> NewIterator

* ethdb: fix review concerns
2020-04-15 14:08:53 +03:00
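A hedged usage sketch of the resulting iterator API, assuming the post-refactor interface where NewIterator takes a key prefix plus a start position within that prefix (check the ethdb package for the authoritative signature):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/ethdb/memorydb"
)

func main() {
	db := memorydb.New()
	for _, k := range []string{"snap-aa", "snap-ab", "snap-ba", "other-x"} {
		if err := db.Put([]byte(k), []byte("v")); err != nil {
			panic(err)
		}
	}

	// Iterate only keys carrying the "snap-" prefix, starting at suffix "ab"
	// (the start position is interpreted within the prefixed key space).
	it := db.NewIterator([]byte("snap-"), []byte("ab"))
	defer it.Release()
	for it.Next() {
		fmt.Printf("%s\n", it.Key())
	}
	if err := it.Error(); err != nil {
		panic(err)
	}
}
```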
Martin Holst Swende
af4080b4b7 trie: fix concurrent usage of secKeyBuf, ref #20920 2020-04-15 11:07:29 +02:00
gary rong
00064ddcfb accounts/abi: implement new fallback functions (#20764)
* accounts/abi: implement new fallback functions

In Solidity v0.6.0, the original fallback is separated
into two different sub-types: fallback and receive.

This PR adds support for parsing the new ABI format
and the relevant abigen functionality.

* accounts/abi: fix unit tests

* accounts/abi: minor fixes

* accounts/abi, mobile: support java binding

* accounts/abi: address marius's comment

* accounts/abi: Work around the uint64 conversion issue

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-04-15 09:23:58 +02:00
Marius van der Wijden
2a836bb259 core/rawdb: fix data race between Retrieve and Close (#20919)
* core/rawdb: fixed data race between retrieve and close

closes https://github.com/ethereum/go-ethereum/issues/20420

* core/rawdb: use non-atomic load while holding mutex
2020-04-14 18:13:47 +03:00
Péter Szilágyi
eb2fd823b2 travis, appveyor, build, Dockerfile: bump Go to 1.14.2 (#20913)
* travis, appveyor, build, Dockerfile: bump Go to 1.14.2

* travis, appveyor: force GO111MODULE=on for every build
2020-04-14 14:03:18 +03:00
ligi
5a20cc0de6 README: update min go version to 1.13 (#20911) 2020-04-14 10:08:27 +02:00
Felföldi Zsolt
0851646e48 les, les/lespay/client: add service value statistics and API (#20837)
This PR adds service value measurement statistics to the light client. It
also adds a private API that makes these statistics accessible. A follow-up
PR will add the new server pool which uses these statistics to select
servers with good performance.

This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4

Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
2020-04-09 11:55:32 +02:00
Raw Pong Ghmoa
15540ae992 cmd: deprecate --testnet, use named networks instead (#20852)
* cmd/utils: make goerli the default testnet

* cmd/geth: explicitly rename testnet to ropsten

* core: explicitly rename testnet to ropsten

* params: explicitly rename testnet to ropsten

* cmd: explicitly rename testnet to ropsten

* miner: explicitly rename testnet to ropsten

* mobile: allow for returning the goerli spec

* tests: explicitly rename testnet to ropsten

* docs: update readme to reflect changes to the default testnet

* mobile: allow for configuring goerli and rinkeby nodes

* cmd/geth: revert --testnet back to ropsten and mark as legacy

* cmd/util: mark --testnet flag as deprecated

* docs: update readme to properly reflect the 3 testnets

* cmd/utils: add an explicit deprecation warning on startup

* cmd/utils: swap goerli and ropsten in usage

* cmd/geth: swap goerli and ropsten in usage

* cmd/geth: if running a known preset, log it for convenience

* docs: improve readme on usage of ropsten's testnet datadir

* cmd/utils: check if legacy `testnet` datadir exists for ropsten

* cmd/geth: check for legacy testnet path in console command

* cmd/geth: use switch statement for complex conditions in main

* cmd/geth: move known preset log statement to the very top

* cmd/utils: create new ropsten configurations in the ropsten datadir

* cmd/utils: makedatadir should check for existing testnet dir

* cmd/geth: add legacy testnet flag to the copy db command

* cmd/geth: add legacy testnet flag to the inspect command
2020-04-09 12:09:58 +03:00
Marius van der Wijden
023b87b9d1 accounts/abi/bind: fixed erroneous filtering of negative ints (#20865)
* accounts/abi/bind: fixed erroneous packing of negative ints

* accounts/abi/bind: added test cases for negative ints in topics

* accounts/abi/bind: fixed genIntType for go 1.12

* accounts/abi: minor  nitpick
2020-04-09 10:54:57 +02:00
rene
1bad861222 changed date of rpcstack.go since new file (#20904) 2020-04-08 16:58:27 +02:00
Adam Schmideg
fe9ffa5953 crypto: improve error messages in LoadECDSA (#20718)
This improves error messages when the file is too short or too long.
Also rewrite the test for SaveECDSA because LoadECDSA has its own
test now.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-08 16:01:11 +02:00
rene
07d909ff32 node: allow websocket and HTTP on the same port (#20810)
This change makes it possible to run geth with JSON-RPC over HTTP and
WebSocket on the same TCP port. The default port for WebSocket
is still 8546. 

    geth --rpc --rpcport 8545 --ws --wsport 8545

This also removes a lot of deprecated API surface from package rpc.
The rpc package is now purely about serving JSON-RPC and no longer
provides a way to start an HTTP server.
2020-04-08 13:33:12 +02:00
Marius van der Wijden
5065cdefff accounts/abi/bind: Refactored topics (#20851)
* accounts/abi/bind: refactored topics

* accounts/abi/bind: use store function to remove code duplication

* accounts/abi/bind: removed unused type defs

* accounts/abi/bind: error on tuples in topics

* Cosmetic changes to restart travis build

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2020-04-08 12:00:10 +02:00
ucwong
6975172d01 whisper/mailserver : recover corrupt db files before opening (#20891)
* whisper/mailserver : recover db file when openfile corrupted

* whisper/mailserver : fix db -> s.db

* whisper/mailserver : common/errors for dbfile
2020-04-08 11:26:16 +02:00
Felix Lange
c8e9a91672 build: upgrade to golangci-lint 1.24.0 (#20901)
* accounts/scwallet: remove unnecessary uses of fmt.Sprintf

* cmd/puppeth: remove unnecessary uses of fmt.Sprintf

* p2p/discv5: remove unnecessary use of fmt.Sprintf

* whisper/mailserver: remove unnecessary uses of fmt.Sprintf

* core: goimports -w tx_pool_test.go

* eth/downloader: goimports -w downloader_test.go

* build: upgrade to golangci-lint 1.24.0
2020-04-08 11:07:29 +03:00
Felix Lange
b7394d7942 p2p/discover: add initial discovery v5 implementation (#20750)
This adds an implementation of the current discovery v5 spec.

There is full integration with cmd/devp2p and enode.Iterator in this
version. In theory we could enable the new protocol as a replacement of
discovery v4 at any time. In practice, there will likely be a few more
changes to the spec and implementation before this can happen.
2020-04-08 09:57:23 +02:00
rene
671f22be38 couple of fixes to docs in clef (#20900) 2020-04-07 14:37:24 +02:00
Adam Schmideg
6a3daa2a4e .github: change gitter reference to discord link in issue template (#20896) 2020-04-07 12:16:35 +02:00
Martin Holst Swende
094996b8c9 docs/audits: add discv5 protocol audits from LA and C53 (#20898) 2020-04-07 12:15:28 +02:00
Martin Holst Swende
8dc8941551 core/vm: use a callcontext struct (#20761)
* core/vm: use a callcontext struct

* core/vm: fix tests

* core/vm/runtime: benchmark

* core/vm: make intpool push inlineable, unexpose callcontext
2020-04-07 12:45:21 +03:00
Martin Holst Swende
0bec6a43f6 cmd/geth: enable metrics for geth import command (#20738)
* cmd/geth: enable metrics for geth import command

* cmd/geth: enable metrics-flags for import command
2020-04-07 11:23:57 +03:00
gary rong
f0b5eb09eb eth, les: fix flaky tests (#20897)
* les: fix flaky test

* eth: fix flaky test
2020-04-07 09:16:21 +03:00
William Morriss
3cf7d2e9a6 internal/ethapi: add CallArgs.ToMessage method (#20854)
ToMessage is used to convert between ethapi.CallArgs and types.Message.
It reduces the length of the DoCall method by about half by abstracting out
the conversion between the CallArgs and the Message. This should improve the
code's maintainability and reusability.
2020-04-03 20:10:53 +02:00
Boqin Qin
be6078ad83 all: fix a bunch of inconsequential goroutine leaks (#20667)
The leaks were mostly in unit tests, and could all be resolved by
adding suitably-sized channel buffers or by restructuring the test
to not send on a channel after an error has occurred.

There is an unavoidable goroutine leak in Console.Interactive: when
we receive a signal, the line reader cannot be unblocked and will get
stuck. This leak is now documented and I've tried to make it slightly 
less bad by adding a one-element buffer to the output channels of
the line-reading loop. Should the reader eventually awake from its
blocked state (i.e. when stdin is closed), at least it won't get stuck
trying to send to the interpreter loop which has quit long ago.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-03 20:07:22 +02:00
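A tiny, generic illustration of the buffered-channel fix mentioned above (not any specific geth test): without the one-element buffer, the worker goroutine would block forever on its send once the timeout path wins.

```go
package main

import (
	"errors"
	"fmt"
	"runtime"
	"time"
)

// fetch runs work in a goroutine and waits for either a result or a timeout.
// With an unbuffered channel, the worker would block forever on the send once
// the timeout path wins; the 1-element buffer lets it finish and exit.
func fetch(timeout time.Duration) (string, error) {
	results := make(chan string, 1) // buffer avoids leaking the worker
	go func() {
		time.Sleep(50 * time.Millisecond) // simulate slow work
		results <- "done"
	}()
	select {
	case r := <-results:
		return r, nil
	case <-time.After(timeout):
		return "", errors.New("timed out")
	}
}

func main() {
	_, err := fetch(10 * time.Millisecond) // times out; worker still exits
	fmt.Println("err:", err)
	time.Sleep(100 * time.Millisecond)
	fmt.Println("goroutines:", runtime.NumGoroutine())
}
```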
Marius van der Wijden
98eab2dbe7 mobile: use bind.NewKeyedTransactor instead of duplicating (#20888)
It's better to reuse the existing code to create a keyed transactor
than to rewrite the logic again.
2020-04-03 15:11:04 +03:00
gary rong
be9172a7ac rpc: metrics for JSON-RPC method calls (#20847)
This adds a couple of metrics for tracking the timing
and frequency of method calls:

- rpc/requests gauge counts all requests
- rpc/success gauge counts requests which return err == nil
- rpc/failure gauge counts requests which return err != nil
- rpc/duration/all timer tracks timing of all requests
- rpc/duration/<method>/<success/failure> tracks per-method timing
2020-04-03 12:36:44 +02:00
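A self-contained sketch of the idea, using the metric names listed above purely as map keys (the real implementation uses the metrics package rather than this hand-rolled struct):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// methodStats is a hand-rolled stand-in for the RPC metrics listed above:
// global request/success/failure counters plus per-method timing buckets.
type methodStats struct {
	mu        sync.Mutex
	requests  int64
	success   int64
	failure   int64
	durations map[string]time.Duration // e.g. "rpc/duration/eth_blockNumber/success"
}

func (s *methodStats) observe(method string, d time.Duration, err error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.requests++
	outcome := "success"
	if err != nil {
		s.failure++
		outcome = "failure"
	} else {
		s.success++
	}
	s.durations["rpc/duration/all"] += d
	s.durations["rpc/duration/"+method+"/"+outcome] += d
}

func main() {
	stats := &methodStats{durations: make(map[string]time.Duration)}
	start := time.Now()
	// ... a served eth_blockNumber call would go here ...
	stats.observe("eth_blockNumber", time.Since(start), nil)
	fmt.Println(stats.requests, stats.durations)
}
```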
Luke Champine
462ddce5b2 crypto/ecies: improve concatKDF (#20836)
This removes a bunch of weird code around the counter overflow check in
concatKDF and makes it actually work for different hash output sizes.

The overflow check worked as follows: concatKDF applies the hash function N
times, where N is roundup(kdLen, hashsize) / hashsize. N should not
overflow 32 bits because that would lead to a repetition in the KDF output.

A couple issues with the overflow check:

- It used the hash.BlockSize, which is wrong because the
  block size is about the input of the hash function. Luckily, all standard
  hash functions have a block size that's greater than the output size, so
  concatKDF didn't crash, it just generated too much key material.
- The check used big.Int to compare against 2^32-1.
- The calculation could still overflow before reaching the check.

The new code in concatKDF doesn't check for overflow. Instead, there is a
new check on ECIESParams which ensures that params.KeyLen is < 512. This
removes any possibility of overflow.

There are a couple of miscellaneous improvements bundled in with this
change:

- The key buffer is pre-allocated instead of appending the hash output
  to an initially empty slice.
- The code that uses concatKDF to derive keys is now shared between Encrypt
  and Decrypt.
- There was a redundant invocation of IsOnCurve in Decrypt. This is now removed
  because elliptic.Unmarshal already checks whether the input is a valid curve
  point since Go 1.5.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-04-03 11:57:24 +02:00
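For reference, a self-contained sketch of a Concatenation KDF of this shape (illustrative, not the crypto/ecies code): the hash is applied N = ceil(kdLen / hash output size) times over counter || secret, so capping the requested key length keeps the 32-bit counter far from overflow.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"hash"
)

// concatKDF derives kdLen bytes of key material from z by hashing
// counter || z || s1 with an incrementing 32-bit counter, as in the NIST
// SP 800-56 Concatenation KDF. The hash output size (not the block size)
// determines how many iterations are needed.
func concatKDF(h hash.Hash, z, s1 []byte, kdLen int) []byte {
	out := make([]byte, 0, kdLen) // pre-allocated, no repeated append growth
	counter := make([]byte, 4)
	for i := uint32(1); len(out) < kdLen; i++ {
		binary.BigEndian.PutUint32(counter, i)
		h.Reset()
		h.Write(counter)
		h.Write(z)
		h.Write(s1)
		out = h.Sum(out)
	}
	return out[:kdLen]
}

func main() {
	// Requesting 48 bytes from SHA-256 (32-byte output) takes
	// N = ceil(48/32) = 2 hash invocations; N cannot get anywhere near
	// 2^32 as long as the requested key length is kept small.
	key := concatKDF(sha256.New(), []byte("shared secret"), nil, 48)
	fmt.Printf("%x\n", key)
}
```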
ucwong
f7b29ec942 rpc: add missing timer.Stop calls in websocket tests (#20863) 2020-04-02 22:08:45 +02:00
ucwong
f98cabad7c core: add missing Timer.Stop call in TestLogReorgs (#20870) 2020-04-02 16:04:45 +02:00
ucwong
0c359e4b9a p2p/discv5, p2p/testing: add missing Timer.Stop calls in tests (#20869) 2020-04-02 16:03:40 +02:00
ucwong
37d6357806 ethstats: add missing Ticker.Stop call (#20867) 2020-04-02 16:02:10 +02:00
ucwong
53e034ce0b metrics: add missing calls to Ticker.Stop in tests (#20866) 2020-04-02 16:01:18 +02:00
ucwong
0893ee6d51 event: add missing timer.Stop call in TestFeed (#20868) 2020-04-02 15:56:25 +02:00
ucwong
4d891f23b5 les: add missing Ticker.Stop call (#20864) 2020-04-02 15:54:59 +02:00
ucwong
66ed58bfcc eth/fetcher: add missing timer.Stop calls (#20861) 2020-04-02 12:32:45 +02:00
ucwong
47f7c736cb eth/filters: add missing Ticker.Stop call (#20862) 2020-04-02 12:31:50 +02:00
Martin Holst Swende
228a297056 cmd/geth: fix bad genesis test (#20860) 2020-04-02 12:27:44 +02:00
ucwong
ad4b60efdd miner/worker: add missing timer.Stop call (#20857) 2020-04-02 10:40:38 +02:00
ucwong
c87cdd3053 p2p/discv5: add missing Timer.Stop calls (#20853) 2020-04-02 10:11:16 +02:00
Marius van der Wijden
f15849cf00 accounts/abi faster unpacking of int256 (#20850) 2020-04-01 18:46:53 +02:00
ucwong
bf35e27ea7 p2p/server: add UDP port mapping goroutine to wait group (#20846) 2020-04-01 18:00:33 +02:00
ucwong
1e2e1b41f8 cmd/devp2p, cmd/wnode, whisper: add missing calls to Timer.Stop (#20843) 2020-04-01 16:12:01 +02:00
Paweł Bylica
d56dc038d2 cmd/evm: Rework execution stats (#20792)
- Dump stats also for the --bench flag.
- From the memory stats, only show the number and size of allocations. This is what `test -bench` shows. I doubt others, like the number of GC runs, are of any use, but they can be added if requested.
- The mem stats now cover a single execution in the case of --bench.
2020-04-01 12:40:07 +02:00
ucwong
a5a9feab21 whisper: fix whisper go routine leak with sync wait group (#20844) 2020-04-01 11:35:26 +02:00
Jeff Wentworth
f0be151349 README: update private network genesis spec with istanbul (#20841)
* add istanbul and muirGlacier to genesis states in README

* remove muirGlacier, relocate istanbul
2020-03-31 19:14:42 +03:00
gary rong
f78ffc0545 les: create utilities as common package (#20509)
* les: move execqueue into utilities package

execqueue is a util for executing queued functions
in serial order; it is used by both the les server
and the les client. Move it to a common package.

* les: move randselect to utilities package

weighted_random_selector is a helpful tool for randomly selecting
items maintained in a set, weighted by each item's weight.

It's used throughout the LES package, mainly by the les client, but
will very likely be used in the les server as well. So move it into a
common package as the second step of the les separation.

* les: rename to utils
2020-03-31 17:17:24 +02:00
Martin Holst Swende
32d31c31af metrics: improve TestTimerFunc (#20818)
The test failed due to what appears to be fluctuations in time.Sleep, which is
not the actual method under test. This change modifies it so we compare the
metered Max to the actual time instead of the desired time.
2020-03-31 15:01:16 +02:00
Martin Holst Swende
3b69c14f5d whisper/whisperv6: decrease pow requirement in tests (#20815) 2020-03-31 12:10:34 +02:00
Adam Schmideg
300c35b854 travis: allow cocoapods deploy to fail (#20833) 2020-03-31 12:09:45 +02:00
Wenbiao Zheng
03fe9de2cb eth: add debug_accountRange API (#19645)
This new API allows reading accounts and their content by address range.

Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-03-31 12:08:44 +02:00
Martin Holst Swende
c56f4fa808 cmd/clef: add newaccount command (#20782)
* cmd/clef: add newaccount command

* cmd/clef: document clef_New, update API versioning

* Update cmd/clef/intapi_changelog.md

Co-Authored-By: ligi <ligi@ligi.de>

* Update signer/core/uiapi.go

Co-Authored-By: ligi <ligi@ligi.de>

Co-authored-by: ligi <ligi@ligi.de>
2020-03-31 12:03:48 +02:00
Hanjiang Yu
8f05cfa122 cmd, consensus: add option to disable mmap for DAG caches/datasets (#20484)
* cmd, consensus: add option to disable mmap for DAG caches/datasets

* consensus: add benchmarks for mmap with/with lock
2020-03-31 11:44:04 +03:00
Martin Holst Swende
76eed9e50d snapshotter/tests: verify snapdb post-state against trie (#20812)
* core/state/snapshot: basic trie-to-hash implementation

* tests: validate snapshot after test

* core/state/snapshot: fix review concerns
2020-03-31 10:25:41 +02:00
Péter Szilágyi
84f4975520 Merge pull request #20835 from holiman/bump
core: bump txpool tx max size to 128KB
2020-03-30 12:14:53 +03:00
Martin Holst Swende
55a73f556a core: bump txpool tx max size to 128KB 2020-03-30 10:47:09 +02:00
Ha ĐANG
5d7e5b00be eth/filters: fix typo on unindexedLogs function's comment (#20827) 2020-03-27 16:33:14 +01:00
gary rong
62cd943c7b les: fix dead lock (#20828) 2020-03-27 17:21:58 +02:00
Felix Lange
d6c5f2417c eth: improve shutdown synchronization (#20695)
* eth: improve shutdown synchronization

Most goroutines started by eth.Ethereum didn't have any shutdown sync at
all, which led to weird error messages when quitting the client.

This change improves the clean shutdown path by stopping all internal
components in dependency order and waiting for them to actually be
stopped before shutdown is considered done. In particular, we now stop
everything related to peers before stopping 'resident' parts such as
core.BlockChain.

* eth: rewrite sync controller

* eth: remove sync start debug message

* eth: notify chainSyncer about new peers after handshake

* eth: move downloader.Cancel call into chainSyncer

* eth: make post-sync block broadcast synchronous

* eth: add comments

* core: change blockchain stop message

* eth: change closeBloomHandler channel type
2020-03-27 15:03:20 +02:00
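A generic sketch of the shutdown pattern described above (not the eth.Ethereum code): components are stopped in dependency order, peer-facing parts before 'resident' ones, and each Stop blocks until its background goroutines have really exited.

```go
package main

import (
	"fmt"
	"sync"
)

// component is anything whose Stop blocks until its goroutines have exited.
type component struct {
	name string
	wg   sync.WaitGroup
	quit chan struct{}
}

func newComponent(name string) *component {
	c := &component{name: name, quit: make(chan struct{})}
	c.wg.Add(1)
	go func() { // the component's background loop
		defer c.wg.Done()
		<-c.quit
	}()
	return c
}

func (c *component) Stop() {
	close(c.quit)
	c.wg.Wait() // don't report "stopped" until the loop has really exited
	fmt.Println("stopped", c.name)
}

func main() {
	// Peer-facing parts are listed (and therefore stopped) before the
	// 'resident' parts they depend on, such as the blockchain itself.
	ordered := []*component{
		newComponent("p2p handler"),
		newComponent("downloader"),
		newComponent("txpool"),
		newComponent("blockchain"),
	}
	for _, c := range ordered {
		c.Stop()
	}
}
```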
rene
d7851e6359 graphql, node, rpc: fix typos in comments (#20824) 2020-03-27 13:52:53 +01:00
Felix Lange
d3c1e654f0 cmd/devp2p: be very correct about route53 change splitting (#20820)
Turns out the way RDATA limits work is documented after all,
I just didn't search right. The trick to make it work is to
count UPSERTs twice.

This also adds an additional check to ensure TTL changes are
applied on existing records.
2020-03-26 23:55:33 +01:00
Felix Lange
87a411b839 cmd/devp2p: lower route53 change limit again (#20819) 2020-03-26 17:39:56 +02:00
Felix Lange
1583e7d274 cmd/devp2p: tweak DNS TTLs (#20801)
* cmd/devp2p: tweak DNS TTLs

* cmd/devp2p: bump treeNodeTTL to four weeks
2020-03-26 13:51:50 +02:00
Péter Szilágyi
4690912ac9 Merge pull request #20816 from karalabe/disable-gosigar-ios
metrics: disable CPU stats (gosigar) on iOS
2020-03-26 13:15:43 +02:00
Péter Szilágyi
42e02ac03b metrics: disable CPU stats (gosigar) on iOS 2020-03-26 11:24:58 +02:00
Martin Holst Swende
39f502329f internal/ethapi: don't set sender-balance to maxuint, fixes #16999 (#20783)
Prior to this change, eth_call changed the balance of the sender account in the
EVM environment to 2^256 wei to cover the gas cost of the call execution.
We've had this behavior for a long time even though it's super confusing.

This commit sets the default call gasprice to zero instead of updating the balance,
which is better because it makes eth_call semantics less surprising. Removing
the built-in balance assignment also makes balance overrides work as expected.
2020-03-23 18:21:23 +01:00
Martin Holst Swende
0734c4b820 node, cmd/clef: report actual port used for http rpc (#20789) 2020-03-23 16:26:56 +01:00
meowsbits
a75c0610b7 core/blockchain: simplify atomic store after writeBlockWithState (#20798)
Signed-off-by: meows <b5c6@protonmail.com>
2020-03-23 15:05:15 +01:00
Péter Szilágyi
613af7ceea Merge pull request #20152 from karalabe/snapshot-5
Dynamic state snapshots
2020-03-23 12:57:31 +02:00
Martin Holst Swende
074efe6c8d core: fix two snapshot iterator flaws, decollide snap storage prefix
* core/state/snapshot/iterator: fix two disk iterator flaws

* core/rawdb: change SnapshotStoragePrefix to avoid prefix collision with preimagePrefix
2020-03-23 12:34:27 +02:00
meowsbits
93ffb85b3d rpc: dont log an error if user configures --rpcapi=rpc... (#20776)
This just prevents a false negative ERROR warning when, for some unknown
reason, a user attempts to turn on the module rpc even though it's already going
to be on.
2020-03-21 15:28:27 +01:00
Guillaume Ballet
e943f07a85 whisper/whisperv6: delete failing tests (#20788)
These tests occasionally fail on Travis.
2020-03-20 09:37:53 +01:00
Péter Szilágyi
0e6ea9199c Merge pull request #20781 from karalabe/fix-clique-console-apis
internal/web3ext: fix clique console apis to work on missing arguments
2020-03-19 10:06:44 +02:00
Péter Szilágyi
36e93d2dd8 Merge pull request #20779 from meowsbits/patch-3
core/rawdb: fix freezer table test error check
2020-03-18 16:30:58 +02:00
Péter Szilágyi
e6ca1958d3 internal/web3ext: fix clique console apis to work on missing arguments 2020-03-18 15:23:16 +02:00
Péter Szilágyi
4655b60999 Merge pull request #20780 from karalabe/fix-eth-mine-sync-race
eth: when triggering a sync, check the head header TD, not block
2020-03-18 15:08:06 +02:00
Péter Szilágyi
dc6e98d2a8 eth: when triggering a sync, check the head header TD, not block 2020-03-18 14:33:06 +02:00
gary rong
6283391c99 core/rawdb: improve table database (#20703)
This PR fixes issues in TableDatabase.

TableDatabase is a wrapper of an underlying ethdb.Database with an additional prefix.
The prefix is applied to all entries it maintains. However, when we try to retrieve entries
from it, we don't handle the key properly. In theory the prefix should be truncated and
only the user key returned, but we don't do this in some cases, e.g. in the iterator and batch
replayer created from it. This PR fixes these issues.
2020-03-18 13:15:49 +01:00
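A toy, self-contained sketch of the prefix-truncation rule the fix enforces (illustrative only, not core/rawdb): keys handed back to callers must have the table prefix stripped.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// table is a toy prefixed view over a flat key/value map, mimicking the idea
// behind TableDatabase: every stored key carries the table prefix internally.
type table struct {
	prefix string
	data   map[string]string
}

func (t *table) Put(key, val string) { t.data[t.prefix+key] = val }

// Keys returns the user-visible keys: the prefix is truncated so callers only
// ever see the key they originally wrote (the bug was returning prefixed keys).
func (t *table) Keys() []string {
	var keys []string
	for k := range t.data {
		if strings.HasPrefix(k, t.prefix) {
			keys = append(keys, strings.TrimPrefix(k, t.prefix))
		}
	}
	sort.Strings(keys)
	return keys
}

func main() {
	backing := make(map[string]string)
	tbl := &table{prefix: "acct-", data: backing}
	tbl.Put("alice", "1")
	tbl.Put("bob", "2")
	fmt.Println(tbl.Keys())            // [alice bob] -- no "acct-" leakage
	fmt.Println(len(backing), backing) // underlying store holds prefixed keys
}
```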
meowsbits
20a092fb9f core/rawdb: fix freezer table test error check
Fixes: Condition is always 'false' because 'err' is always 'nil'
2020-03-18 06:55:30 -05:00
Alex Willmer
5dd0cd12ec go.mod: update duktape to fix sprintf warnings (#20777)
This revision of go-duktape fixes the following warning

```
duk_logging.c: In function ‘duk__logger_prototype_log_shared’:
duk_logging.c:184:64: warning: ‘Z’ directive writing 1 byte into a region of size between 0 and 9 [-Wformat-overflow=]
  184 |  sprintf((char *) date_buf, "%04d-%02d-%02dT%02d:%02d:%02d.%03dZ",
      |                                                                ^
In file included from /usr/include/stdio.h:867,
                 from duk_logging.c:5:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:36:10: note: ‘__builtin___sprintf_chk’ output between 25 and 85 bytes into a destination of size 32
   36 |   return __builtin___sprintf_chk (__s, __USE_FORTIFY_LEVEL - 1,
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   37 |       __bos (__s), __fmt, __va_arg_pack ());
      |       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
2020-03-18 11:52:07 +02:00
gary rong
efd92d81a9 cmd/checkpoint-admin: add some documentation (#20697) 2020-03-18 10:18:14 +01:00
Péter Szilágyi
8d7aa9078f params: begin v1.9.13 release cycle 2020-03-16 13:44:58 +02:00
Péter Szilágyi
b6f1c8dcc0 params: release Geth v1.9.12 2020-03-16 13:39:04 +02:00
winsvega
97243f3a76 geth retesteth: increase retesteth default http timeouts (#20767) 2020-03-16 10:33:32 +01:00
Péter Szilágyi
241b283690 Merge pull request #20747 from karalabe/update-crypto-deps
go.mod: update golang.org/x/crypto to fix a Go 1.14 race rejection
2020-03-14 14:23:58 +02:00
Péter Szilágyi
466b009135 go.mod: update golang.org/x/crypto to fix a Go 1.14 race rejection 2020-03-14 14:09:31 +02:00
Péter Szilágyi
68b4b74682 Merge pull request #20762 from karalabe/fix-txprop-leak
eth: fix transaction announce/broadcast goroutine leak
2020-03-14 13:52:46 +02:00
Péter Szilágyi
270fbfba4b eth: fix transaction announce/broadcast goroutine leak 2020-03-13 23:47:15 +02:00
gary rong
92f3405dae eth, les: fix time sensitive unit tests (#20741) 2020-03-12 11:25:52 +01:00
Felix Lange
b1efff659e rpc: improve cancel test (#20752)
This is supposed to fix the occasional failures in 
TestCancel* on Travis CI.
2020-03-12 11:24:36 +01:00
meowsbits
0bdb21f0cb tests: update tests/testdata@develop, include EIP2384 config (#20746)
Includes difficulty tests for EIP2384 aka MuirGlacier.
2020-03-10 10:55:27 +01:00
Péter Szilágyi
fab0ee3bfa core/state/snapshot: fix various iteration issues due to destruct set 2020-03-04 15:06:04 +02:00
Martin Holst Swende
bc5d742c66 core: more blockchain tests 2020-03-04 14:39:27 +02:00
Martin Holst Swende
eff7cfbb03 core/state/snapshot: handle deleted accounts in fast iterator 2020-03-04 14:38:55 +02:00
Péter Szilágyi
328de180a7 core/state: fix resurrection state clearing and access 2020-03-04 10:22:48 +02:00
Péter Szilágyi
dcb22a9f99 core/state: fix account root hash update point 2020-03-03 16:55:06 +02:00
Péter Szilágyi
a4cf279494 core/state: extend snapshotter to handle account resurrections 2020-03-03 15:52:00 +02:00
Péter Szilágyi
6e05ccd845 core/state/snapshot, tests: sync snap gen + snaps in consensus tests 2020-03-03 09:17:13 +02:00
Ali Atiia
556888c4a9 core/vm: fix method doc (#20730)
typo in func name in the comment
2020-03-02 19:20:27 +02:00
Martin Holst Swende
fe8347ea8a squashme 2020-03-02 14:06:44 +01:00
Martin Holst Swende
361a6f08ac core/tests: test for destroy+recreate contract with storage 2020-03-02 13:46:56 +01:00
rene
01d92531ee rpc: correct typo and reword comment for consistency (#20728) 2020-02-28 14:43:04 +01:00
Péter Szilágyi
92ec07d63b core/state: fix an account resurrection issue 2020-02-27 15:03:10 +02:00
Felix Lange
1e1b18637e p2p/discv5: fix test on go 1.14 (#20724) 2020-02-27 14:10:28 +02:00
Adam Schmideg
f1a7997af3 crypto/bn256: fix import line (#20723) 2020-02-27 13:59:00 +02:00
Felix Lange
cec1f292f0 mobile: add CallOpts.SetFrom (#20721)
This was missing because I forgot to wrap it when bind.CallOpts.From
was added.
2020-02-27 12:16:50 +02:00
gary rong
4fabd9cbd2 les: separate peer into clientPeer and serverPeer (#19991)
* les: separate peer into clientPeer and serverPeer

* les: address comments
2020-02-26 11:41:24 +02:00
Martin Holst Swende
fadf84a752 internal/ethapi: default to zero address for calls (#20702)
This makes eth_call and eth_estimateGas use the zero address
as sender when the "from" parameter is not supplied.

Co-authored-by: Felix Lange <fjl@twurst.com>
2020-02-25 17:57:06 +01:00
Péter Szilágyi
06d4470b41 core: fix broken tests due to API changes + linter 2020-02-25 12:51:16 +02:00
Martin Holst Swende
19099421dc core/state/snapshot: faster account iteration, CLI integration 2020-02-25 12:51:15 +02:00
Péter Szilágyi
6ddb92a089 core/state/snapshot: full featured account iteration 2020-02-25 12:51:14 +02:00
Martin Holst Swende
e570835356 core/state/snapshot: implement iterator priority for fast direct data lookup 2020-02-25 12:51:14 +02:00
Péter Szilágyi
e567675473 core/state/snapshot: move iterator out into its own files 2020-02-25 12:51:13 +02:00
Martin Holst Swende
7e38996301 core/state/snapshot: implement snapshot layer iteration 2020-02-25 12:51:12 +02:00
Péter Szilágyi
22c494d399 core/state/snapshot: bloom, metrics and prefetcher fixes 2020-02-25 12:51:11 +02:00
Martin Holst Swende
3ad4335acc core/state/snapshot: node behavioural difference on bloom content 2020-02-25 12:51:11 +02:00
Péter Szilágyi
fd39f722a3 core: journal the snapshot inside leveldb, not a flat file 2020-02-25 12:51:10 +02:00
Martin Holst Swende
d5d7c0c24b core/state/snapshot: fix difflayer origin-initialization after flatten 2020-02-25 12:51:09 +02:00
Péter Szilágyi
351a5903b0 core/rawdb, core/state/snapshot: runtime snapshot generation 2020-02-25 12:51:08 +02:00
Martin Holst Swende
f300c0df01 core/state/snapshot: replace bigcache with fastcache 2020-02-25 12:51:08 +02:00
Péter Szilágyi
d754091a87 core/state/snapshot: unlink snapshots from blocks, quad->linear cleanup 2020-02-25 12:51:07 +02:00
Martin Holst Swende
cdf3f016df snapshot: iteration and buffering optimizations 2020-02-25 12:51:06 +02:00
Péter Szilágyi
d7d81d7c12 core/state/snapshot: extract and split cap method, cover corners 2020-02-25 12:51:05 +02:00
Martin Holst Swende
e146fbe4e7 core/state: lazy sorting, snapshot invalidation 2020-02-25 12:51:05 +02:00
Péter Szilágyi
542df8898e core: initial version of state snapshots 2020-02-25 12:51:04 +02:00
Boqin Qin
2a5ed1a1d3 eth/downloader: fix possible data race by inconsistent field protection (#20690) 2020-02-25 11:44:21 +02:00
Péter Szilágyi
bf1cdd723a Merge pull request #20712 from karalabe/txfetcher-fix-test-randomness
eth/fetcher: remove randomness from test data
2020-02-24 15:03:49 +02:00
Péter Szilágyi
c6be24c731 eth/fetcher: remove randomness from test data 2020-02-24 14:59:02 +02:00
Chris Chinchilla
6ffee2afd6 docs: correct clef typo (#20705) 2020-02-21 16:30:21 +01:00
gary rong
2e1ecc02bd les, miner, accounts/abi/bind: fix load-sensitive unit tests (#20698) 2020-02-20 13:05:54 +01:00
Guillaume Ballet
6df973df27 go.mod: upgrade goja to latest (#20700)
The new goja version supports the 'escape' and 'unescape' built-in functions.
This fixes #20693
2020-02-20 12:46:47 +01:00
Gregory Markou
4be8840120 core/vm: use dedicated SLOAD gas constant for EIP-2200 (#20646) 2020-02-18 15:07:41 +01:00
Péter Szilágyi
529b81dadb params: begin v1.9.12 release cycle 2020-02-18 13:27:39 +02:00
Péter Szilágyi
6a62fe399b params: release Geth v1.9.11 stable 2020-02-18 13:26:00 +02:00
Felix Lange
dae3aee5ff les: add bootstrap nodes as initial discoveries (#20688) 2020-02-18 13:24:05 +02:00
Péter Szilágyi
05ccbb5edd Merge pull request #20687 from karalabe/cht-1.9.11
params: update CHTs for the v1.9.11 release
2020-02-18 10:57:05 +02:00
Péter Szilágyi
4f55e24c02 params: update CHTs for the v1.9.11 release 2020-02-18 10:55:44 +02:00
Felix Lange
91b228966e rpc: remove startup error for invalid modules, log it instead (#20684)
This removes the error added in #20597 in favor of a log message at
error level. Failing to start broke a bunch of people's setups and is
probably not the right thing to do for this check.
2020-02-17 18:33:32 +02:00
Boqin Qin
1b9c5b393b all: fix goroutine leaks in unit tests by adding 1-elem channel buffer (#20666)
This fixes a bunch of cases where a timeout in the test would leak
a goroutine.
2020-02-17 17:33:11 +01:00
Felix Lange
57d4898e29 p2p/dnsdisc: re-check tree root when leaf resolution fails (#20682)
This adds additional logic to re-resolve the root name of a tree when a
couple of leaf requests have failed. We need this change to avoid
getting into a failure state where leaf requests keep failing for half
an hour when the tree has been updated.
2020-02-17 15:23:25 +01:00
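A bare-bones sketch of the retry rule described above (illustrative, not the p2p/dnsdisc code; the threshold is a made-up constant): after a few consecutive leaf failures the client re-resolves the tree root before continuing.

```go
package main

import "fmt"

// clientTree tracks consecutive leaf-resolution failures and forces a root
// re-check once they cross a small threshold, so a stale tree root doesn't
// keep producing dead leaf hashes for half an hour.
type clientTree struct {
	leafFailures int
	rootVersion  int
}

const rootRecheckFailCount = 5 // hypothetical threshold

func (ct *clientTree) resolveLeaf(ok bool) {
	if ok {
		ct.leafFailures = 0
		return
	}
	ct.leafFailures++
	if ct.leafFailures >= rootRecheckFailCount {
		ct.rootVersion++ // stand-in for "fetch the root TXT record again"
		ct.leafFailures = 0
	}
}

func main() {
	ct := &clientTree{}
	for i := 0; i < 12; i++ {
		ct.resolveLeaf(false) // every leaf lookup fails
	}
	fmt.Println("root re-resolved", ct.rootVersion, "times")
}
```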
Péter Szilágyi
c2117982b8 Merge pull request #20678 from karalabe/broadcast-sqrt-proper
eth: don't enforce minimum broadcast, fix broadcast test
2020-02-17 14:43:30 +02:00
Felix Lange
1c4c486a85 cmd/ethkey: speed up test by using weaker scrypt parameters (#20680) 2020-02-17 13:22:52 +02:00
Felix Lange
ac72787768 p2p: remove MeteredPeerEvent (#20679)
This event was added for the dashboard, but we don't need it anymore
since the dashboard is gone.
2020-02-17 13:22:14 +02:00
Péter Szilágyi
26284ec3cc Merge pull request #20681 from karalabe/go1.13.8
travis, appveyor, build: bump builder Go to 1.13.8
2020-02-17 13:15:17 +02:00
Péter Szilágyi
fef8c985bc travis, appveyor, build: bump builder Go to 1.13.8 2020-02-17 13:13:24 +02:00
Péter Szilágyi
36a1e0b67d eth: don't enforce minimum broadcast, fix broadcast test 2020-02-17 12:01:03 +02:00
Boqin Qin
37531b1884 cmd/faucet: protect f.reqs with Rlock to prevent data race (#20669)
* cmd/faucet: add Rlock to protect f.reqs in apiHandler

* cmd/faucet: make a locked copy of f.reqs
2020-02-15 20:14:29 +02:00
Martin Holst Swende
855690523a core: ensure state exists for prefetcher (#20627) 2020-02-14 10:54:02 +02:00
Felix Lange
38d1b0cba2 cmd/geth: enable DNS discovery by default (#20660)
* node: expose config in service context

* eth: integrate p2p/dnsdisc

* cmd/geth: add some DNS flags

* eth: remove DNS URLs

* cmd/utils: configure DNS names for testnets

* params: update DNS URLs

* cmd/geth: configure mainnet DNS

* cmd/utils: rename DNS flag and fix flag processing

* cmd/utils: remove debug print

* node: fix test
2020-02-13 15:38:30 +02:00
Péter Szilágyi
eddcecc160 Merge pull request #20234 from rjl493456442/newtxhashes_2
core, eth: announce based transaction propagation
2020-02-13 15:28:34 +02:00
Péter Szilágyi
9938d954c8 eth: rework tx fetcher to use O(1) ops + manage network requests 2020-02-13 15:27:15 +02:00
Felix Lange
90caa2cabb p2p: new dial scheduler (#20592)
* p2p: new dial scheduler

This change replaces the peer-to-peer dial scheduler with a new and
improved implementation. The new code is better than the previous
implementation in two key aspects:

- The time between discovery of a node and dialing that node is
  significantly lower in the new version. The old dialState kept
  a buffer of nodes and launched a task to refill it whenever the buffer
  became empty. This worked well with the discovery interface we used to
  have, but doesn't really work with the new iterator-based discovery
  API.

- Selection of static dial candidates (created by Server.AddPeer or
  through static-nodes.json) performs much better for large amounts of
  static peers. Connections to static nodes are now limited like dynamic
  dials and can no longer overstep MaxPeers or the dial ratio.

* p2p/simulations/adapters: adapt to new NodeDialer interface

* p2p: re-add check for self in checkDial

* p2p: remove peersetCh

* p2p: allow static dials when discovery is disabled

* p2p: add test for dialScheduler.removeStatic

* p2p: remove blank line

* p2p: fix documentation of maxDialPeers

* p2p: change "ok" to "added" in static node log

* p2p: improve dialTask docs

Also increase log level for "Can't resolve node"

* p2p: ensure dial resolver is truly nil without discovery

* p2p: add "looking for peers" log message

* p2p: clean up Server.run comments

* p2p: fix maxDialedConns for maxpeers < dialRatio

Always allocate at least one dial slot unless dialing is disabled using
NoDial or MaxPeers == 0. Most importantly, this fixes MaxPeers == 1 to
dedicate the sole slot to dialing instead of listening.

* p2p: fix RemovePeer to disconnect the peer again

Also make RemovePeer synchronous and add a test.

* p2p: remove "Connection set up" log message

* p2p: clean up connection logging

We previously logged outgoing connection failures up to three times.

- in SetupConn() as "Setting up connection failed addr=..."
- in setupConn() with an error-specific message and "id=... addr=..."
- in dial() as "Dial error task=..."

This commit ensures a single log message is emitted per failure and adds
"id=... addr=... conn=..." everywhere (id= omitted when the ID isn't
known yet).

Also avoid printing a log message when a static dial fails but can't be
resolved because discv4 is disabled. The light client hit this case all
the time, increasing the message count to four lines per failed
connection.

* p2p: document that RemovePeer blocks
2020-02-13 11:10:03 +01:00
Boqin Qin
5f2002bbcc accounts: add walletsNoLock to avoid double read lock (#20655) 2020-02-12 15:20:50 +01:00
Boqin Qin
a9614c3c91 event, p2p/simulations/adapters: fix rare goroutine leaks (#20657)
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-02-12 15:19:47 +01:00
Marius van der Wijden
46c4b699c8 accounts/abi/bind/backends: add support for historical state (#20644) 2020-02-12 11:33:17 +01:00
Boqin Qin
1821328162 event: add missing unlock before panic (#20653) 2020-02-12 10:33:31 +01:00
Adam Schmideg
8045504abf les: log disconnect reason when light server is not synced (#20643)
Co-authored-by: ligi <ligi@ligi.de>
2020-02-11 16:46:32 +01:00
Felix Lange
c22fdec3c7 common/mclock: add NewTimer and Timer.Reset (#20634)
These methods can be helpful when migrating existing timer code.
2020-02-11 16:36:49 +01:00
rjl493456442
049e17116e core, eth: implement eth/65 transaction fetcher 2020-02-11 13:56:36 +02:00
winsvega
dcffb7777f cmd/geth retesteth: add eth_getBlockByHash (#20621) 2020-02-11 10:54:05 +01:00
chabashilah
8694d14e65 signer: add bytes32 as valid primitive (#20609) 2020-02-11 10:52:51 +01:00
Adam Schmideg
172f7778fe rpc: add error when call result parameter is not addressable (#20638) 2020-02-11 09:48:58 +01:00
AmitBRD
34bb132b10 graphql: add transaction signature values (#20623)
The feature update allows the GraphQL API endpoint to retrieve
transaction signature R,S,V parameters.

Co-authored-by: amitshah <amitshah0t7@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2020-02-09 21:50:44 +01:00
Nick Ward
675f4e75b8 README.md: update evm usage example (#20635) 2020-02-09 17:18:47 +01:00
Martin Holst Swende
4a231cd951 internal/ethapi: return non-null "number" for pending block (#20616)
Fixes: #20587, ethereum/web3.py#1572
2020-02-07 10:44:32 +01:00
Felix Lange
976a0f5558 cmd/devp2p: fix Route53 TXT record splitting (#20626)
For longer records and subtree entries, the deployer created two
separate TXT records. This doesn't work as intended because the client
will receive the two records in arbitrary order. The fix is to encode
longer values as "string1""string2" instead of "string1", "string2".
This encoding creates a single record on AWS Route53.
2020-02-05 15:29:59 +01:00
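A small sketch of the encoding trick (illustrative only): a long TXT value is split into adjacent quoted chunks of at most 255 characters, which providers such as Route53 treat as a single record.

```go
package main

import (
	"fmt"
	"strings"
)

// splitTXT encodes a long TXT value as one RDATA string made of adjacent
// quoted <=255-byte chunks ("chunk1""chunk2"...), so the whole value ends up
// in a single record instead of several records delivered in arbitrary order.
func splitTXT(value string) string {
	var sb strings.Builder
	for len(value) > 0 {
		n := len(value)
		if n > 255 {
			n = 255
		}
		sb.WriteString(`"` + value[:n] + `"`)
		value = value[n:]
	}
	return sb.String()
}

func main() {
	long := strings.Repeat("enrtree-branch:", 30) // ~450 chars, needs 2 chunks
	fmt.Println(splitTXT(long))
}
```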
Martin Holst Swende
a1313b5b1e trie: make hasher parallel when number of changes are large (#20488)
* trie: make hasher parallel when number of changes are large

* trie: remove unused field dirtyCount

* trie: rename unhashedCount/unhashed
2020-02-04 14:02:38 +02:00
meowsbits
711ed74e09 cmd/geth: add 'dumpgenesis' command (#20191)
Adds the 'geth dumpgenesis' command, which writes the configured
genesis in JSON format to stdout. This provides a way to generate the
data (structure and content) that can then be used with the 'geth init'
command.
2020-02-04 11:49:13 +01:00
Martin Holst Swende
058a4ac5f1 core/evm: less iteration in blockhash (#20589)
* core/vm/runtime: add test for blockhash

* core/evm: less iteration in blockhash

* core/vm/runtime: nitpickfix

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-02-04 12:32:31 +02:00
tintin
33791dbeb5 tracers: avoid panic on invalid arguments (#20612)
* add regression tests for #20611

* eth/tracers: fix panics occurring for invalid params in js-tracers

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-02-04 09:55:07 +01:00
Martin Holst Swende
5a9c96454e trie: separate hashes and committer, collapse on commit
* trie:  make db insert use size instead of full data

* core/state: minor optimization in state onleaf allocation

* trie: implement dedicated committer and hasher

* trie: use dedicated committer/hasher

* trie: linter nitpicks

* core/state, trie: avoid unnecessary storage trie load+commit

* trie: review feedback, mainly docs + minor changes

* trie: start deprecating old hasher

* trie: fix misspell+lint

* trie: deprecate hasher.go, make proof framework use new hasher

* trie: rename pure_committer/hasher to committer/hasher

* trie, core/state: fix review concerns

* trie: more review concerns

* trie: make commit collapse into hashnode, don't touch dirtyness

* trie: goimports fixes

* trie: remove panics
2020-02-03 17:28:30 +02:00
Felix Lange
4cc89a5a32 internal/build: don't crash in DownloadFile when offline (#20595) 2020-02-03 17:22:46 +02:00
Martin Holst Swende
15d09038a6 params: update bootnodes (#20610) 2020-01-31 14:12:19 +01:00
Martin Holst Swende
3c776c7199 retesteth: clean txpool on rewind, default dao support (#20596) 2020-01-31 12:00:37 +01:00
Guillaume Ballet
24cab2d535 core/vm/runtime: fix typos in comment (#20608) 2020-01-30 11:21:10 +01:00
Guillaume Ballet
594e038e75 signer/rules: use goja and remove otto (#20599)
* signer: replace otto with goja

* go.mod: remove Otto
2020-01-29 13:47:56 +01:00
Felix Lange
a903912b96 rpc: check module availability at startup (#20597)
Fixes #20467

Co-authored-by: meowsbits <45600330+meowsbits@users.noreply.github.com>
2020-01-28 10:37:08 +01:00
Zhou Zhiyao
44c365c3e2 rpc: reset writeConn when conn is closed on readErr (#20414)
This change makes the client attempt to reconnect when a write fails.
We already had reconnect support, but the reconnect would previously
happen on the next call after an error. Being more eager leads to a
smoother experience overall.
2020-01-27 14:03:15 +01:00
Guillaume Ballet
7b68975a00 console, internal/jsre: use github.com/dop251/goja (#20470)
This replaces the JavaScript interpreter used by the console with goja,
which is actively maintained and a lot faster than otto. Clef still uses otto
and eth/tracers still uses duktape, so we are currently dependent on three
different JS interpreters. We're looking to replace the remaining uses of otto
soon though.
2020-01-27 11:50:48 +01:00
Guillaume Ballet
60deeb103e cmd/evm: accept --input for disasm command (#20548) 2020-01-27 10:05:21 +01:00
Martin Holst Swende
0b284f6c6c cmd/geth/retesteth: use canon head instead of keeping alternate count (#20572) 2020-01-23 20:55:56 +01:00
Guillaume Ballet
8a5c81349e eth: fix comment typo in handler.go (#20575) 2020-01-23 16:08:06 +01:00
Martin Holst Swende
33c56ebc67 cmd: implement abidump (#19958)
* abidump: implement abi dump command

* cmd/abidump: add license
2020-01-21 15:51:36 +01:00
Felix Lange
31baf3a9af log, internal/debug: delete RotatingFileHandler (#20586)
* log: delete RotatingFileHandler

We added this for the dashboard, which is gone now. The
handler never really worked well and had data race and file
handling issues.

* internal/debug: remove unused RotatingFileHandler setup code
2020-01-21 14:57:33 +02:00
Péter Szilágyi
ad2fc7c6a6 params: begin Geth v1.9.11 release cycle 2020-01-20 12:32:47 +02:00
Péter Szilágyi
58cf5686ea params: release Geth v1.9.10 2020-01-20 12:27:51 +02:00
Péter Szilágyi
b4aa4a6965 Merge pull request #20580 from karalabe/cht-1.9.10
params: update CHTs for v1.9.10 release
2020-01-20 11:18:37 +02:00
gary rong
b88b4632c2 core: fix chain indexer unit test (#20506) 2020-01-20 10:38:08 +02:00
Péter Szilágyi
1f1cefc036 params: update CHTs for v1.9.10 release 2020-01-20 10:28:49 +02:00
Péter Szilágyi
4c8fcd93da Merge pull request #20579 from karalabe/android-go-1.13.6
travis: bump Android builder to Go 1.13.6
2020-01-20 00:55:00 +02:00
Péter Szilágyi
fcc84c38dd travis: bump Android builder to Go 1.13.6 2020-01-20 00:54:20 +02:00
Péter Szilágyi
6d200efe72 Merge pull request #20578 from karalabe/win-go-1.13.6
appveyor: bump Go to 1.13.6 on Windows
2020-01-20 00:53:13 +02:00
Péter Szilágyi
92956e2930 appveyor: bump Go to 1.13.6 on Windows 2020-01-20 00:50:59 +02:00
Martin Holst Swende
9b09c0fc83 * trie: utilize callbacks instead of amassing lists in ref/unref (#20529)
* trie/tests: add benchmarks and update trie tests

* trie: update benchmark tests

* trie: utilize callbacks instead of amassing lists of hashes in database ref/unref

* trie: replace remaining non-callback based accesses
2020-01-17 13:59:45 +02:00
gary rong
770316dc20 core, light: write chain data in atomic way (#20287)
* core: write chain data in atomic way

* core, light: address comments

* core, light: fix linter

* core, light: address comments
2020-01-17 12:49:32 +02:00
Felix Lange
0af96d2556 cmd/devp2p: submit Route53 changes in batches (#20524)
This change works around the 32k RDATA character limit per change
request and fixes several issues in the deployer which prevented it from
working for our production trees.
2020-01-17 11:32:29 +01:00
Felix Lange
d5acc5ed9e p2p: ensure Server.loop is ticking even if discovery hangs (#20573)
This is a temporary fix for a problem which started happening when the
dialer was changed to read nodes from an enode.Iterator. Before the
iterator change, discovery queries would always return within a couple
seconds even if there was no Internet access. Since the iterator won't
return unless a node is actually found, discoverTask can take much
longer. This means that the 'emergency connect' logic might not execute
in time, leading to a stuck node.
2020-01-17 12:29:16 +02:00
Felix Lange
fcafa0baa5 p2p: wait for listener goroutines on shutdown (#20569)
* p2p: wait for goroutine exit, fixes #20558

* p2p: wait for all slots on exit

Co-authored-by: Martin Holst Swende <martin@swende.se>
2020-01-16 14:10:15 +02:00
Guillaume Ballet
1ee754b056 build: upgrade golangci to 1.22.2 (#20566)
* build: upgrade golangci to 1.22.2

* .golangci.yml: don't fail on asset deadcode
2020-01-16 14:09:38 +02:00
Péter Szilágyi
b3b8d36995 Merge pull request #20570 from karalabe/ppa-focal-go-1.13.6
travis, build: enable Ubuntu Focal and Go 1.13.6 on PPA
2020-01-16 14:09:00 +02:00
Péter Szilágyi
9b32f592dc travis, build: enable Ubuntu Focal and Go 1.13.6 on PPA 2020-01-16 13:28:32 +02:00
Felix Lange
3e97b04a3d build: put GOPATH in /tmp on launchpad (#20564)
* build: put GOPATH in /tmp on launchpad

* build: don't remove GOPATH from go tool environment
2020-01-16 13:03:41 +02:00
Martin Holst Swende
f20c8d495a eth: increase timeout to fix a spurious travis test failure (#20560) 2020-01-15 15:30:50 +01:00
Felix Lange
8704e8a8fc build: fix makefile HOME reference (#20562) 2020-01-15 12:38:35 +02:00
Felix Lange
94e8418939 build: attempt to fix debian build failure without GOPATH (#20561) 2020-01-15 11:37:31 +02:00
Felix Lange
feda78e052 build: remove env.sh (#20541)
* build: remove env.sh

This removes the dirty symlink-to-self hack we've had for years. The
script was added to enable building without GOPATH and did that job
reliably for all this time. We can remove the workaround because modern
Go supports building without GOPATH natively.

* Makefile: add GO111MODULE=on to environment
2020-01-14 14:13:14 +02:00
Péter Szilágyi
8592a57553 Merge pull request #20555 from holiman/cripple_txsize
core: set max tx size down to 2 slots (64KB)
2020-01-14 13:18:22 +02:00
Martin Holst Swende
b2de0bd87b core: set max tx size down to 2 slots (64KB) 2020-01-14 11:49:36 +01:00
Péter Szilágyi
e9e69d6e29 Merge pull request #20546 from karalabe/validate-block-broadcast
eth: check propagated block malformation on reception
2020-01-13 15:19:14 +02:00
Péter Szilágyi
a90cc66f3c eth: check propagated block malformation on reception 2020-01-13 14:23:54 +02:00
MichaelRiabzev-StarkWare
8bd37a1d91 core: count tx size in slots, bump max size to 4x32KB (#20352)
* tests for tx size

* alow multiple slots transactions

* tests for tx size limit (32 KB)

* change tx size tests to use addRemoteSync instead of validateTx (as requested in the pull request).

* core: minor tx slotting polishes, add slot tracking metric

Co-authored-by: Michael Riabzev <RiabzevMichael@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-01-10 11:40:03 +02:00
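The slot arithmetic implied above, as a minimal sketch (constants mirror the description: 32KB per slot, four slots max; the names are illustrative):

```go
package main

import "fmt"

const (
	txSlotSize = 32 * 1024      // one slot is 32KB of encoded transaction
	txMaxSize  = 4 * txSlotSize // transactions above 4 slots are rejected
)

// numSlots reports how many pool slots a transaction of the given encoded
// size occupies, rounding up so a 33KB tx already costs two slots.
func numSlots(size uint64) uint64 {
	return (size + txSlotSize - 1) / txSlotSize
}

func main() {
	for _, size := range []uint64{100, 32 * 1024, 33 * 1024, 130 * 1024} {
		fmt.Printf("size=%6d slots=%d allowed=%v\n", size, numSlots(size), size <= txMaxSize)
	}
}
```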
Péter Szilágyi
b5c4ea56b8 Merge pull request #20540 from holiman/verbose_panic
core/state: add more verbosity to panic
2020-01-10 11:36:21 +02:00
Martin Holst Swende
fc392395fb core/state: add more verbosity to panic 2020-01-10 10:12:32 +01:00
Péter Szilágyi
b211742e5f Revert "eth: refactor creation of EthAPIBackend (#20476)" (#20536)
This reverts commit a1bc0e3cb6.
2020-01-09 13:26:37 +02:00
Felix Lange
0218d7001d internal/testlog: print file+line number of log call in test log (#20528)
* internal/testlog: print file+line number of log call in test log

This changes the unit test logger to print the actual file and line
number of the logging call instead of "testlog.go:44".

Output of 'go test -v -run TestServerListen ./p2p' before this change:

    === RUN   TestServerListen
    --- PASS: TestServerListen (0.00s)
        testlog.go:44: DEBUG[01-08|15:16:31.651] UDP listener up         addr=127.0.0.1:62678
        testlog.go:44: DEBUG[01-08|15:16:31.651] TCP listener up         addr=127.0.0.1:62678
        testlog.go:44: TRACE[01-08|15:16:31.652] Accepted connection     addr=127.0.0.1:62679

And after:

    === RUN   TestServerListen
    --- PASS: TestServerListen (0.00s)
        server.go:868: DEBUG[01-08|15:25:35.679] TCP listener up         addr=127.0.0.1:62712
        server.go:557: DEBUG[01-08|15:25:35.679] UDP listener up         addr=127.0.0.1:62712
        server.go:912: TRACE[01-08|15:25:35.680] Accepted connection     addr=127.0.0.1:62713

* internal/testlog: document use of t.Helper
2020-01-08 17:11:51 +02:00
gary rong
4d663d57d6 les: fix request serving metrics (#20507) 2020-01-08 15:08:56 +02:00
Felix Lange
8a63f7f504 .travis.yml: use latest macOS 10.14 image (#20526) 2020-01-08 11:25:01 +02:00
Guillaume Ballet
c49a4165d0 consensus/ethash: fix a typo and error message (#20503) 2020-01-07 18:19:21 +01:00
Jonathan Gimeno
a1bc0e3cb6 eth: refactor creation of EthAPIBackend (#20476) 2020-01-07 18:12:27 +01:00
wangxiang
a013f02df2 whisper/whisperv6: fix peer time.Ticker leak (#20520) 2020-01-07 18:08:22 +01:00
Marius van der Wijden
50be790869 README.md: Genoil fork has been discontinued (#20521) 2020-01-07 18:06:44 +01:00
Yole
9e0f934e2b cmd/geth: update copyright year (#20512)
Update copyright from 2013-2019 to 2013-2020
2020-01-07 11:56:16 +02:00
me020523
4f7b7f84ae add node.go unit test file node_test.go (#20028)
* add node.go unit test file node_test.go

* add node_test.go file license and rollback trie_test.go

* fix unused variable v

* trie: fix license year

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2020-01-07 10:31:20 +01:00
gary rong
c6285e6437 les/checkpointoracle: move oracle into its own package (#20508)
* les: move the checkpoint oracle into its own package

This is the first step of refactoring the LES package. The LES package
can basically be divided into an LES client and an LES server. However,
both sides use the checkpoint package for status retrieval and
verification, so this PR moves the checkpoint oracle into a separate
package.

* les: address comments
2020-01-07 11:24:21 +02:00
Kumar Anirudha
35f95aef6f cmd/puppeth: change dashboard title to not use "testnet" (#20513) 2020-01-07 11:08:33 +02:00
Prince Sinha
7a509b4732 internal/ethapi: fix encoding of uncle headers and pending blocks (#20460)
Fixes #19024
Fixes #19332
2020-01-06 12:25:38 +01:00
Guillaume Ballet
433937fb42 cmd/geth: fix forked exe leak in console tests (#20480) 2020-01-06 12:07:21 +01:00
Chris Pacia
2eeb8dd271 rpc: add DialWebsocketWithDialer (#20471)
This commit intends to replicate the DialHTTPWithClient function, which
allows creating an RPC client using a custom dialer, but for websockets.

We introduce a new DialWebsocketWithDialer function which allows the caller
to instantiate a new websocket client using a custom dialer.
2020-01-06 12:06:29 +01:00
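A hedged usage sketch of the new function: the signature below (context, endpoint, origin, gorilla websocket.Dialer) is assumed from the commit description, so double-check it against the rpc package in your tree; the endpoint is a placeholder.

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"github.com/ethereum/go-ethereum/rpc"
	"github.com/gorilla/websocket"
)

func main() {
	// Custom dialer: here just a short connect timeout (could also route via a proxy).
	dialer := websocket.Dialer{
		NetDial: func(network, addr string) (net.Conn, error) {
			return net.DialTimeout(network, addr, 5*time.Second)
		},
		HandshakeTimeout: 10 * time.Second,
	}
	client, err := rpc.DialWebsocketWithDialer(context.Background(), "ws://127.0.0.1:8546", "", dialer)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var blockNumber string
	if err := client.Call(&blockNumber, "eth_blockNumber"); err != nil {
		log.Fatal(err)
	}
	log.Println("latest block:", blockNumber)
}
```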
Sylvain Laurent
b7cf41e4b3 accounts/abi: fix method constant flag for solidity 6.0 (#20482) 2020-01-06 12:03:26 +01:00
Felföldi Zsolt
3bb6815fc1 les: do not disconnect another server (#20453) 2019-12-25 02:06:00 +01:00
Gerald Nash
a67fe48b43 Change file extension of the ./tests/fuzzers README (#20474) 2019-12-20 13:19:01 +01:00
Ilan Gitter
93b1171316 accounts/abi/backends/simulated: add more API methods (#5) (#20208)
* Add more functionality to the sim (#5)

* backends: implement more of ethclient in sim

* backends: add BlockByNumber to simulated backend

* backends: make simulated progress function agree with syncprogress interface for client

* backends: add more tests

* backends: add more comments

* backends: fix sim for index in tx and add tests

* backends: add lock back to estimategas

* backends: goimports

* backends: go ci lint

* Add more functionality to the sim (#5)

* backends: implement more of ethclient in sim

* backends: add BlockByNumber to simulated backend

* backends: make simulated progress function agree with syncprogress interface for client

* backends: add more tests

* backends: add more comments

* backends: fix sim for index in tx and add tests

* backends: add lock back to estimategas

* backends: goimports

* backends: go ci lint

* assert errs
2019-12-20 11:33:32 +01:00
Jeff Wentworth
6ae9dc15cc [#20266] Fix bugs decoding integers and fixed bytes in indexed event fields (#20269)
* fix parseTopics() and add tests

* remove printf

* add ParseTopicsIntoMap() tests

* fix FixedBytesTy

* fix int and fixed bytes

* golint topics_test.go
2019-12-18 11:16:07 +01:00
Paweł Bylica
49cf000df7 cmd/evm: Add --bench flag for benchmarking (#20330)
The --bench flag uses testing.B to execute the EVM bytecode many times and get the average execution time out of it.
2019-12-18 09:43:18 +01:00
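The same averaging idea can be sketched outside of cmd/evm with the standard library: testing.Benchmark runs a closure b.N times and reports per-iteration timing (executeBytecode below is just a placeholder for the interpreter call).

```go
package main

import (
	"fmt"
	"testing"
	"time"
)

// executeBytecode stands in for running the EVM interpreter on some code.
func executeBytecode() {
	time.Sleep(time.Millisecond)
}

func main() {
	result := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			executeBytecode()
		}
	})
	fmt.Println("iterations:", result.N)
	fmt.Println("average execution time:", time.Duration(result.NsPerOp()))
}
```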
Ryan Schneider
c4b7fdd27e eth, internal/web3ext: add optional first and last arguments to the admin_exportChain RPC. (#20107) 2019-12-17 12:10:14 +01:00
Guillaume Ballet
275cd4988d cmd/abigen: Sanitize vyper's combined json names (#20419)
* cmd/abigen: Sanitize vyper's combined json names

* Review feedback: handle full paths
2019-12-16 13:37:15 +01:00
Felix Lange
f51cf573b5 cmd/devp2p: implement AWS Route53 enrtree deployer (#20446) 2019-12-12 22:25:12 +01:00
Felix Lange
191364c350 p2p/dnsdisc: add enode.Iterator API (#20437)
* p2p/dnsdisc: add support for enode.Iterator

This changes the dnsdisc.Client API to support the enode.Iterator
interface.

* p2p/dnsdisc: rate-limit DNS requests

* p2p/dnsdisc: preserve linked trees across root updates

This improves the way links are handled when the link root changes.
Previously, sync would simply remove all links from the current tree and
garbage-collect all unreachable trees before syncing the new list of
links.

This behavior isn't great in certain cases: Consider a structure where
trees A, B, and C reference each other and D links to A. If D's link
root changed, the sync code would first remove trees A, B and C, only to
re-sync them later when the link to A was found again.

The fix for this problem is to track the current set of links in each
clientTree and to remove old links only AFTER all links are synced.

* p2p/dnsdisc: deflake iterator test

* cmd/devp2p: adapt dnsClient to new p2p/dnsdisc API

* p2p/dnsdisc: tiny comment fix
2019-12-12 11:15:36 +02:00
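Independent of how the iterator is produced (dnsdisc, discovery, or a static list), consuming an enode.Iterator looks roughly like the sketch below; enode.IterNodes wrapping a slice is used here only so the example is self-contained.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// collectNodes drains up to max nodes from any enode.Iterator.
func collectNodes(it enode.Iterator, max int) []*enode.Node {
	defer it.Close()
	var nodes []*enode.Node
	for len(nodes) < max && it.Next() {
		nodes = append(nodes, it.Node())
	}
	return nodes
}

func main() {
	it := enode.IterNodes(nil) // a static (empty) source, standing in for a dnsdisc iterator
	fmt.Println("nodes found:", len(collectNodes(it, 10)))
}
```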
Felix Lange
d90d1db609 eth/filters: remove use of event.TypeMux for pending logs (#20312) 2019-12-10 12:39:14 +01:00
Péter Szilágyi
b8bc9b3d8e Merge pull request #20444 from MariusVanDerWijden/patch-4
core: removed old invalid comment
2019-12-10 13:31:25 +02:00
Marius van der Wijden
f383eaa102 core: removed old invalid comment 2019-12-10 11:50:16 +01:00
Martin Holst Swende
cecc7230c0 tests/fuzzers: fuzzbuzz fuzzers for keystore, rlp, trie, whisper (#19910)
* fuzzers: fuzzers for keystore, rlp, trie, whisper (cred to @guidovranken)

* fuzzers: move fuzzers to testdata

* testdata/fuzzers: documentation

* testdata/fuzzers: corpus for rlp

* tests/fuzzers: fixup
2019-12-10 11:57:37 +02:00
Charing
4b40b5377b miner: add dependency for stress tests (#20436)
1. To build stress tests

Depends-On: 6269e5574c
2019-12-10 10:26:07 +02:00
Péter Szilágyi
370cb95b7f params: begin v1.9.10 release cycle 2019-12-06 11:53:25 +02:00
Péter Szilágyi
017449971e params: release Geth v1.9.9 2019-12-06 11:51:37 +02:00
Martin Holst Swende
bc01593afb consensus/ethash, params: eip-2384: bump difficulty bomb (#20347)
* consensus/ethash, params: implement eip-2384: bump difficulty bomb

* params: EIP 2384 compat checks

* consensus, params: add Muir Glacier block number (mainnet,ropsten) + official name

* core/forkid: forkid tests for muir glacier

* params/config: address review concerns

* params, core/forkid: review nitpicks

* cmd/geth,eth,les: add override option for muir glacier

* params: nit fix
2019-12-06 11:36:40 +02:00
Marius van der Wijden
c9dce0bfd7 p2p/enode: remove data race in sliceIter (#20421) 2019-12-05 22:16:35 +01:00
Péter Szilágyi
e78f631dfc Merge pull request #20428 from karalabe/cht-1.9.9
params: update CHTs for v1.9.9 release
2019-12-05 11:38:41 +02:00
Péter Szilágyi
6b6882f08b params: update CHTs for v1.9.9 release 2019-12-05 11:01:40 +02:00
Péter Szilágyi
c2d65d34d5 Merge pull request #20415 from karalabe/trie-dirty-cache-metrics
trie: track dirty cache metrics, track clean writes on commit
2019-12-02 12:51:00 +02:00
Péter Szilágyi
13ccf6016e trie: track dirty cache metrics, track clean writes on commit 2019-12-02 12:23:35 +02:00
Marius van der Wijden
7ce7c3967c accounts/abi/bind: fix destructive packing of *big.Int (#20412) 2019-12-02 10:29:25 +01:00
gary rong
fc7e0fe6c7 core, miner: remove PostChainEvents (#19396)
This change:

- removes the PostChainEvents method on core.BlockChain.
- sorts 'removed log' events by block number.
- fires the NewChainHead event if we inject a canonical block into the chain
  even if the entire insertion is not successful.
- guarantees correct event ordering in all cases.
2019-11-29 14:22:08 +01:00
Guillaume Ballet
5cc6e7a71e accounts/usbwallet: fix staticcheck warnings (#20372) 2019-11-29 11:47:14 +01:00
xinluyin
d556d39a2c internal/web3ext: add debug_accountRange (#20410) 2019-11-29 11:46:12 +01:00
Guillaume Ballet
54d332e1db accounts/scwallet: fix staticcheck warnings (#20370) 2019-11-29 11:42:53 +01:00
Guillaume Ballet
e0bf5f0ccb internal: fix staticcheck warnings (#20380) 2019-11-29 11:40:02 +01:00
Guillaume Ballet
1ff3d7c2d4 cmd/faucet, cmd/geth: fix staticcheck warnings (#20374) 2019-11-29 11:38:34 +01:00
gary rong
08611cfd75 trie: remove dead code (#20405) 2019-11-28 12:47:35 +02:00
Guillaume Ballet
9a529d64d1 log: fix staticcheck warnings (#20388) 2019-11-28 10:53:06 +01:00
Felix Lange
a91b704b01 consensus/ethash: refactor remote sealer (#20335)
The original idea behind this change was to remove a use of the
deprecated CancelRequest method. Simply removing it would've been an
option, but I couldn't resist and did a bit of a refactoring instead.

All remote sealing code was contained in a single giant function. Remote
sealing is now extracted into its own object, remoteSealer.
2019-11-28 10:51:57 +01:00
Péter Szilágyi
c9f28ca8e5 go: update fastcache to 1.5.3 (#20404)
deps: update fastcache to 1.5.3
2019-11-27 15:08:34 +02:00
Péter Szilágyi
58e33d9e5a Merge pull request #20403 from karalabe/fix-freezer-reinit
core/rawdb: fix reinit regression caused by the hash check PR
2019-11-27 15:05:58 +02:00
Martin Holst Swende
7800ba978d deps: update fastcache to 1.5.3 2019-11-27 13:46:07 +01:00
Péter Szilágyi
717f8a4e8f core/rawdb: fix reinit regression caused by the hash check PR 2019-11-27 14:41:47 +02:00
Guillaume Ballet
7b189d6f1f core: fix staticcheck warnings (#20384)
* core: fix staticcheck warnings

* fix goimports
2019-11-27 09:50:30 +01:00
Guillaume Ballet
c4844e9ee2 les: fix staticcheck warnings (#20371) 2019-11-27 09:49:41 +01:00
zaccoding
23c8c74131 cmd: fix command help messages in modules (#20203) 2019-11-26 11:46:39 +01:00
Péter Szilágyi
0676320169 params: begin v1.9.9 release cycle 2019-11-26 12:21:11 +02:00
Péter Szilágyi
d62e9b2857 params: release go-ethereum v1.9.8 2019-11-26 12:20:22 +02:00
Felföldi Zsolt
878e35bfde les: fix clientInfo deadlock (#20395) 2019-11-26 12:17:15 +02:00
Felix Lange
2e98706a99 p2p/discover: slow down lookups on empty table (#20389)
* p2p/discover: slow down lookups on empty table

* p2p/discover: wake from slowdown sleep when table is closed
2019-11-26 12:14:43 +02:00
Guillaume Ballet
8c1e8de839 accounts/keystore: fix staticcheck warnings (#20373)
* accounts/keystore: fix staticcheck warnings

* review feedback
2019-11-25 14:39:55 +01:00
gary rong
b26eedf9e9 accounts/abi/bind: avoid reclaring structs (#20381) 2019-11-25 14:03:22 +01:00
Felix Lange
44b41641f8 rlp: fix staticcheck warnings (#20368)
* rlp: fix staticcheck warnings

* rlp: fix ExampleDecode test
2019-11-25 14:41:53 +02:00
Péter Szilágyi
9ef90dbf30 Merge pull request #20385 from etclabscore/fix/version-cmd-networkid
cmd/geth: remove network id from version cmd
2019-11-25 13:51:22 +02:00
meows
d9d2a4eef9 cmd/geth: remove network id from version cmd
It reflected only the default setting and was not chain aware.
2019-11-25 06:17:45 -05:00
gary rong
9d67222f4e trie: replace bigcache with fastcache (#19971) 2019-11-25 10:58:15 +02:00
Guillaume Ballet
f5a68a40bf eth/tracers: fix staticcheck warnings (#20379) 2019-11-24 21:06:06 +01:00
Guillaume Ballet
f06ae5ca6a miner: fix staticcheck warnings (#20375) 2019-11-24 20:46:34 +01:00
Michael Forney
3a0480e07d core/asm: allow numbers in labels (#20362)
Numbers were already allowed when creating labels, just not when
referencing them.
2019-11-23 12:52:17 +01:00
Guillaume Ballet
5d21667587 tests, signer: remove staticcheck warnings (#20364) 2019-11-23 12:51:37 +01:00
Felix Lange
fdff182f11 p2p/discv5: add deprecation warning and remove unused code (#20367)
* p2p/discv5: add deprecation warning and remove unused code

* p2p/discv5: remove unused variables
2019-11-22 18:02:13 +02:00
Felix Lange
0abcf03fde trie: remove unused code (#20366) 2019-11-22 17:24:48 +02:00
Guillaume Ballet
58f2ce8671 metrics: fix issues reported by staticcheck (#20365) 2019-11-22 16:04:35 +01:00
Felix Lange
dd21f079e8 core/state: fix staticcheck warnings (#20357)
Also remove dependency on gopkg.in/check.v1 in tests.
2019-11-22 15:56:05 +01:00
Felix Lange
36a684ca1e accounts/abi: fix staticcheck warnings (#20358)
* accounts/abi: fix staticcheck warnings

* accounts/abi: restore unused field for test
2019-11-21 23:22:47 +02:00
Felix Lange
bcc1234778 accounts/abi/bind/backends: remove unused assignment (#20359) 2019-11-21 20:30:28 +02:00
Péter Szilágyi
c1db636fb3 Merge pull request #20360 from karalabe/ppa-fix-cigo-clean
build: skip go clean on PPA, messes with the module trick
2019-11-21 18:56:10 +02:00
Péter Szilágyi
5b558ad936 build: skip go clean on PPA, messes with the module trick 2019-11-21 18:50:58 +02:00
Felix Lange
b6d4f6b66e core/types: remove BlockBy sorting code (#20355) 2019-11-21 16:35:22 +02:00
Felix Lange
0ec5ab4175 common: improve GraphQL error messages (#20354) 2019-11-21 16:34:28 +02:00
Péter Szilágyi
0754100464 Merge pull request #20356 from karalabe/ppa-fix-cigo
build: pull in ci.go dependencies for the PPA builder
2019-11-21 16:24:07 +02:00
Péter Szilágyi
475ae8bd93 build: pull in ci.go dependencies for the PPA builder 2019-11-21 16:14:31 +02:00
Felix Lange
89ab8a74c0 go.mod: switch to Go modules (#20311)
* go.mod, vendor: switch to Go modules

* travis: explicitly enable go modules in Go 1.11 and 1.12

* accounts/abi/bind: switch binding test to go modules

* travis, build: aggregate and upload go mod dependencies for PPA

* go.mod: tidy up the modules to avoid xgo writes to go.sum

* build, internal/build: drop own file/folder copier

* travis: fake build ppa only for go module dependencies

* mobile: fix CopyFile switch to package cp

* build, travis: use ephemeral debsrc GOPATH to get mod deps
2019-11-21 14:51:53 +01:00
Felix Lange
72e62efc76 common/hexutil: improve GraphQL error messages (#20353) 2019-11-21 15:51:25 +02:00
Péter Szilágyi
f56f969dd3 Merge pull request #20350 from holiman/puppeth_ssh
cmd/puppeth: make ssh prompt more user-friendly
2019-11-21 15:08:10 +02:00
Martin Holst Swende
216ff5a952 cmd/puppeth: make ssh prompt more user-friendly 2019-11-21 13:18:12 +01:00
Péter Szilágyi
7be89a7a01 Merge pull request #20339 from etclabscore/fix/cmd-puppeth-blocknonce-type
cmd/puppeth: x-spec nonce data type, use types.BlockNonce
2019-11-21 13:34:33 +02:00
meows
59177bc8c0 cmd/puppeth: x-spec nonce data type, use types.BlockNonce
Refactors to use existing BlockNonce type instead of
hand-rolled bytes logic.
2019-11-20 10:26:31 -05:00
Péter Szilágyi
c5b46a79c1 Merge pull request #20338 from etclabscore/feat/statetests-dedupe-walk-refactor
tests: refactor TestState to dedupe walk callback
2019-11-20 16:24:03 +02:00
meowsbits
b4bc3b3c35 tests: enable TransactionTests Istanbul case (#20337) 2019-11-20 15:08:07 +01:00
Péter Szilágyi
75e029db8b build, travis: use ephemeral debsrc GOPATH to get mod deps 2019-11-20 14:57:33 +02:00
meows
b8ced9e00b tests: refactor TestState to dedupe walk callback
Minor refactoring.
2019-11-20 07:54:18 -05:00
Péter Szilágyi
f8790b9482 mobile: fix CopyFile switch to package cp 2019-11-20 14:42:36 +02:00
Péter Szilágyi
8bd5bb8918 travis: fake build ppa only for go module dependencies 2019-11-20 14:42:36 +02:00
Péter Szilágyi
a7dfaa0bda build, internal/build: drop own file/folder copier 2019-11-20 14:42:35 +02:00
Péter Szilágyi
e1dcea8bf0 go.mod: tidy up the modules to avoid xgo writes to go.sum 2019-11-20 14:42:34 +02:00
Péter Szilágyi
b3d6304f1e travis, build: aggregate and upload go mod dependencies for PPA 2019-11-20 14:42:33 +02:00
Péter Szilágyi
f4ec85486a accounts/abi/bind: switch binding test to go modules 2019-11-20 14:42:33 +02:00
Péter Szilágyi
dfdb204b48 travis: explicitly enable go modules in Go 1.11 and 1.12 2019-11-20 14:42:32 +02:00
Péter Szilágyi
15fb780de6 go.mod, vendor: switch to Go modules 2019-11-20 14:42:28 +02:00
Péter Szilágyi
3a4a3d080b Merge pull request #20261 from holiman/less_querying
internal/ethapi: don't query wallets at every execution of gas estimation
2019-11-20 12:49:13 +02:00
gary rong
b7ba944e88 cmd/puppeth: update chain spec of parity (#20241) 2019-11-20 12:46:35 +02:00
gary rong
9b59c75405 miner: fix data race in tests (#20310)
* miner: fix data race in tests

miner: fix linter

* miner: address comment
2019-11-20 12:36:41 +02:00
Felix Lange
f71e85b8e2 core: fix staticcheck warnings (#20323) 2019-11-20 10:53:01 +02:00
Felix Lange
8008c5b1fa rpc: remove 'exported or builtin' restriction for parameters (#20332)
* rpc: remove 'exported or builtin' restriction for parameters

There is no technical reason for this restriction because package reflect
can create values of any type. Requiring parameters and return values to
be exported causes a lot of noise in package exports.

* rpc: fix staticcheck warnings
2019-11-20 10:06:21 +02:00
Felix Lange
9c6cf960b4 internal/web3ext, les: update clique JS and make it work with the light client (#20318)
Also fix the input formatter on clique_getSnapshot and clique_getSigners
so that integers as well as hex number strings are accepted.
2019-11-19 18:22:04 +01:00
Felix Lange
df206d2513 p2p/simulations: fix staticcheck warnings (#20322) 2019-11-19 17:16:42 +01:00
Felix Lange
9e8cc00b73 p2p: remove unused code (#20325) 2019-11-19 17:16:08 +01:00
Felix Lange
ac5e28ea38 whisper/whisperv6: fix staticcheck warnings (#20328) 2019-11-19 17:14:00 +01:00
Guillaume Ballet
3b0f3483c4 .github: remove 'nonsense' from CODEOWNERS (#20329) 2019-11-19 17:13:42 +01:00
Felix Lange
7f70a70106 event: remove unused field 'closed' (#20324) 2019-11-19 16:00:32 +02:00
Felix Lange
94e8250983 cmd/wnode: remove uses of common.ToHex (#20327) 2019-11-19 15:55:48 +02:00
Felix Lange
c013192ba7 ethclient: remove use of common.ToHex (#20326) 2019-11-19 15:53:26 +02:00
Guillaume Ballet
0b6338321f travis: deactivate arm build during push (#20321) 2019-11-19 13:57:05 +01:00
gary rong
b9c90c5581 core/rawdb: check hash before return data from ancient db (#20195)
* core/rawdb: check hash before return data from ancient db

* core/rawdb: fix lint

* core/rawdb: calculate the hash in the fly
2019-11-19 12:32:57 +02:00
Felix Lange
5fefe39ba5 p2p/netutil: fix staticcheck warning (#20315) 2019-11-19 11:17:41 +02:00
Felix Lange
dfe891270a cmd/ethkey: fix file permissions in changepassword command (#20313)
Found by staticcheck.
2019-11-19 11:16:34 +02:00
Felix Lange
c5c5e0dbe8 consensus/clique: fix struct tags for status API (#20316)
Also unexport the status struct.
2019-11-18 18:14:59 +01:00
Martin Holst Swende
3f4a875bf6 consensus/clique: add clique_status API method (#20103)
This PR introduces clique_status which gives info about the health of
the clique network.

It's currently a bit of a pain to find out how a clique network is
performing, and it can easily happen that sealers drop off -- and
everything is 'fine' until one more signer drops off, and the network
suddenly halts.

The new method provides the following stats:

- Which signers are currently active, and have signed blocks in the last
  N (set to 64) blocks?
- How many blocks has each signer signed?
- What is the difficulty in the last N blocks, compared to the
  theoretical maximum?
2019-11-18 17:03:57 +01:00
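A rough sketch of querying the new method from Go; the IPC path is a placeholder and the response is decoded into a generic map since the exact result structure is internal to the clique API.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("/tmp/geth.ipc") // placeholder path to a clique node
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var status map[string]interface{}
	if err := client.Call(&status, "clique_status"); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("clique status: %+v\n", status)
}
```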
Felix Lange
a3d263dd3a cmd/clef: fix staticcheck warnings (#20314) 2019-11-18 16:38:54 +01:00
meowsbits
190fb8180a build: add test cmd flag -v for verbose logs (#20298)
Adds a flag, akin to the -coverage flag, enabling the test runner
to use go test's -v flag for verbose test log output.
2019-11-18 15:48:20 +01:00
Guillaume Ballet
b02afb6b3d travis: use travis_wait for both install and build (#20309) 2019-11-18 15:34:17 +01:00
Felföldi Zsolt
422604b438 les: rename UpdateBalance to AddBalance and simplify return format (#20304) 2019-11-18 12:42:49 +01:00
meowsbits
57d697629d core: s/isEIP155/isHomestead/g (fix IntrinsicGas signature var name) (#20300)
* core: s/isEIP155/isEIP2/ (fix)

This signature variable name reflects a spec'd change
in gas cost for creating contracts as documented in EIP2 (Homestead HF).

https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md#specification

* core: s/isEIP2/isHomestead/g

Use isHomestead since Homestead is what the caller
and the rest of the code use.
2019-11-18 11:41:49 +02:00
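The Homestead (EIP-2) rule the renamed flag guards can be sketched as follows; the constants mirror params.TxGas and params.TxGasContractCreation, and per-byte data costs are deliberately omitted.

```go
package main

import "fmt"

const (
	txGas                 = 21000 // base cost of a normal transaction
	txGasContractCreation = 53000 // higher base cost for creation since Homestead
)

// baseIntrinsicGas sketches the branch controlled by the isHomestead flag:
// only since Homestead do contract creations pay the larger base cost.
func baseIntrinsicGas(contractCreation, isHomestead bool) uint64 {
	if contractCreation && isHomestead {
		return txGasContractCreation
	}
	return txGas
}

func main() {
	fmt.Println(baseIntrinsicGas(true, true))  // 53000
	fmt.Println(baseIntrinsicGas(true, false)) // 21000 before Homestead
}
```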
Guillaume Ballet
11d09fd3ba travis: remove traces and use travis_wait in ARM build (#20296)
* travis: remove debug traces

* travis: Add travis_wait to the test run

* travis: increase travis_wait time
2019-11-18 10:52:12 +02:00
Felix Lange
689486449d build: use golangci-lint (#20295)
* build: use golangci-lint

This changes build/ci.go to download and run golangci-lint instead
of gometalinter.

* core/state: fix unnecessary conversion

* p2p/simulations: fix lock copying (found by go vet)

* signer/core: fix unnecessary conversions

* crypto/ecies: remove unused function cmpPublic

* core/rawdb: remove unused function print

* core/state: remove unused function xTestFuzzCutter

* core/vm: disable TestWriteExpectedValues in a different way

* core/forkid: remove unused function checksum

* les: remove unused type proofsData

* cmd/utils: remove unused functions prefixedNames, prefixFor

* crypto/bn256: run goimports

* p2p/nat: fix goimports lint issue

* cmd/clef: avoid using unkeyed struct fields

* les: cancel context in testRequest

* rlp: delete unreachable code

* core: gofmt

* internal/build: simplify DownloadFile for Go 1.11 compatibility

* build: remove go test --short flag

* .travis.yml: disable build cache

* whisper/whisperv6: fix ineffectual assignment in TestWhisperIdentityManagement

* .golangci.yml: enable goconst and ineffassign linters

* build: print message when there are no lint issues

* internal/build: refactor download a bit
2019-11-18 10:49:17 +02:00
Felix Lange
7c4a4eb58a rpc, p2p/simulations: use github.com/gorilla/websocket (#20289)
* rpc: improve codec abstraction

rpc.ServerCodec is an opaque interface. There was only one way to get a
codec using existing APIs: rpc.NewJSONCodec. This change exports
newCodec (as NewFuncCodec) and NewJSONCodec (as NewCodec). It also makes
all codec methods non-public to avoid showing internals in godoc.

While here, remove codec options in tests because they are not
supported anymore.

* p2p/simulations: use github.com/gorilla/websocket

This package was the last remaining user of golang.org/x/net/websocket.
Migrating to the new library wasn't straightforward because it is no
longer possible to treat WebSocket connections as a net.Conn.

* vendor: delete golang.org/x/net/websocket

* rpc: fix godoc comments and run gofmt
2019-11-18 10:40:59 +02:00
Michael Forney
9e71f55bfa cmd/evm: Allow loading input from file (#20273)
Make it possible to load input from a file. Similar to `--code` / `--codefile`, this adds `--input`/`--inputfile`.
2019-11-17 15:45:54 +01:00
Martin Holst Swende
51c3290bee internal/ethapi: don't query wallets at every execution of gas estimation 2019-11-17 15:10:55 +01:00
nebojsa94
738b51ae31 core/vm: fix tracer interface parameter name (#20294) 2019-11-15 10:52:33 +02:00
meowsbits
f03b2db7db params: finish sentence in comment (#20291) 2019-11-14 23:05:32 +02:00
Guillaume Ballet
49d1a032da build: gather info to investigate why builds fail on ARM (#20281) 2019-11-14 14:42:23 +01:00
Guillaume Ballet
765fe446cf whisper/whisperv6: fix staticcheck issues (#20288) 2019-11-14 10:09:16 +01:00
Felix Lange
afe0b65405 dashboard: remove the dashboard (#20279)
This removes the dashboard project. The dashboard was an experimental
browser UI for geth which displayed metrics and chain information in
real time. We are removing it because it has marginal utility and nobody
on the team can maintain it.

Removing the dashboard removes a lot of dependency code and shaves
6 MB off the geth binary size.
2019-11-14 10:04:16 +01:00
Felix Lange
987648b0ad cmd/faucet: use github.com/gorilla/websocket (#20283)
golang.org/x/net/websocket is unmaintained, and we have already
switched to using github.com/gorilla/websocket for package rpc.
2019-11-14 10:05:17 +02:00
Jorropo
9504c5c360 rpc: fix typo example code (#20284) 2019-11-14 10:03:41 +02:00
gary rong
f8a95d996f accounts/abi/bind, cmd/abigen: implement alias for abigen (#20244)
* accounts/abi/bind, cmd/abigen: implement alias for abigen

* accounts/abi/bind: minor fixes

* accounts/abi/bind: address comments

* cmd/abigen: address comments

* accounts/abi/bind: print error log when identifier collision

* accounts/abi/bind: address comments

* accounts/abi/bind: address comment
2019-11-14 08:26:10 +01:00
Felföldi Zsolt
bf5c6b29fa les: implement server priority API (#20070)
This PR implements the LES server RPC API. Methods for server
capacity, client balance and client priority management are provided.
2019-11-13 23:47:03 +01:00
Guillaume Ballet
22e3bbbf0a miner: increase worker test timeout (#20268)
TestEmptyWork* occasionally fails due to timeout. Increase the timeout.
2019-11-13 12:40:50 +01:00
Kurkó Mihály
4ea9b62b5c dashboard: send current block to the dashboard client (#19762)
This adds all dashboard changes from the last couple months.
We're about to remove the dashboard, but decided that we should
get all the recent work in first in case anyone wants to pick up this
project later on.

* cmd, dashboard, eth, p2p: send peer info to the dashboard
* dashboard: update npm packages, improve UI, rebase
* dashboard, p2p: remove println, change doc
* cmd, dashboard, eth, p2p: cleanup after review
* dashboard: send current block to the dashboard client
2019-11-13 12:13:13 +01:00
Rick
6f1a600f6c p2p: fix bug in TestPeerDisconnect (#20277) 2019-11-13 12:01:52 +01:00
Guillaume Ballet
de2259d27c travis: enable test suite on ARM64 (#20219)
* travis: Enable ARM support

* Include fixes from 20039

* Add a trace to debug the invalid lookup issue

* Try increasing the timeout to see if the arm test passes

* Investigate the resolver issue

* Increase arm64 timeout for clique test

* increase timeout in tests for arm64

* Only test the failing tests

* Review feedback: don't export epsilon

* Remove investigation tricks+include fjl's feeback

* Revert the retry ahead of using the mock resolver

* Fix rebase errors
2019-11-08 10:58:57 +02:00
Felix Lange
adf007dadc p2p/enode: mock DNS resolver in URL parsing test (#20252) 2019-11-07 16:40:37 +01:00
Péter Szilágyi
4b8f56cf98 params: begin v1.9.8 release cycle 2019-11-07 10:01:20 +02:00
Péter Szilágyi
a718daa674 params: release Geth v1.9.7 2019-11-07 09:58:39 +02:00
gary rong
b9bac1f384 les: fix and slim the unit tests of les (#20247)
* les: loosen restrictions in unit tests

* les: update unit tests

* les, light: slim the unit tests
2019-11-06 22:09:37 +01:00
Péter Szilágyi
fc3661f89c Merge pull request #20248 from karalabe/cht-1.9.7
params: hard-code new CHTs for the 1.9.7 release
2019-11-06 17:54:39 +02:00
Péter Szilágyi
9948724deb params: hard-code new CHTs for the 1.9.7 release 2019-11-06 17:47:13 +02:00
Péter Szilágyi
c702bd70ed travis: bump linter to Go 1.13.x 2019-11-05 15:35:51 +02:00
Péter Szilágyi
734e00af9e travis, build, internal: use own Go bundle for PPA builds (#20240)
* build: bump PPAs to Go 1.13 (via longsleep), keep Trusty on 1.11

* travis, build, vendor: use own Go bundle for PPA builds

* travis, build, internal, vendor: smarter Go bundler, own untar

* build: updated ci-notes with new Go bundling, only make, don't test
2019-11-05 15:32:42 +02:00
Martin Holst Swende
b566cfdffd core/evm: avoid copying memory for input in calls (#20177)
* core/evm, contracts: avoid copying memory for input in calls + make ecrecover not modify input buffer

* core/vm: optimize mstore a bit

* core/vm: change Get -> GetCopy in vm memory access
2019-11-04 11:31:09 +02:00
gary rong
7a6d5d0cce cmd/puppeth: integrate istanbul into puppeth (#19926)
* cmd/puppeth: integrate istanbul into puppeth

* cmd/puppeth: address comment

* cmd/puppeth: use hexutil.Big for fork indicator

* cmd/puppeth: finalize istanbul fork

* cmd/puppeth: fix 2200 for parity, rename is to eip1283ReenableTransition

* cmd/puppeth: fix eip1108

* cmd/puppeth: add blake2f for parity

* cmd/puppeth: add aleth istanbul precompiled

* cmd/puppeth: use hexutil.Big

* cmd/puppeth: fix unit tests

* cmd/puppeth: update testdata
2019-11-04 10:41:29 +02:00
Péter Szilágyi
0ff7380465 Merge pull request #20231 from SamuelMarks/go1.13.4
appveyor: bump to Go 1.13.4
2019-11-04 10:38:10 +02:00
gary rong
0ce5e113be les: rework clientpool (#20077)
* les: rework clientpool
2019-11-02 13:02:35 +01:00
Samuel Marks
86fe283d19 appveyor: bump to Go 1.13.4 2019-11-02 18:54:04 +11:00
gary rong
44b74cfc40 accounts/abi: add internalType information and fix issues (#20179)
* accounts/abi: fix various issues

The fixed issues include:

(1) If a call function has no return value, unpack now returns a nil
error.

(2) Functions which have a struct array as a parameter are also
detected and the struct definition is generated.

(3) For events, a non-indexed parameter gets a name assigned if it is
empty. Internal structs are also detected and their definitions are
generated if they do not exist.

(4) Fix annotation generation in event functions

* accounts/abi: add new abi field internalType

* accounts: address comments and add tests

* accounts/abi: replace strings.ReplaceAll with strings.Replace
2019-10-31 14:17:51 +01:00
Martin Holst Swende
9278951a62 params, core/forkid: configure mainnet istanbul block 9069K (#20222)
* params: configure mainnet istanbul block 9069K

* core/forkid: add some more test items for mainnet istanbul
2019-10-31 11:04:26 +02:00
Péter Szilágyi
12f2a25d5e Merge pull request #20225 from karalabe/forkid-eth-handshake-verification-plus
cmd/devp2p, core/forkid: make forkid.Filter API uniform
2019-10-31 10:58:28 +02:00
Péter Szilágyi
8927f7724a cmd/devp2p, core/forkid: make forkid.Filter API uniform 2019-10-31 10:38:14 +02:00
Péter Szilágyi
93422e9d15 Merge pull request #20140 from karalabe/eth64-handshake-forkid
eth: eth/64 - extend handshake with with fork id
2019-10-30 13:21:25 +02:00
gary rong
5d91acccd5 miner: increase import time allowance (#20217)
Fix the block import unit test, which can sometimes time out.
2019-10-30 12:07:30 +01:00
Péter Szilágyi
9641cacea8 core/forkid: add two clauses for more precise validation (#20220) 2019-10-30 12:05:31 +01:00
Péter Szilágyi
64571f9379 eth: eth/64 - extend handshake packet with fork id 2019-10-29 18:04:39 +02:00
Péter Szilágyi
e306304414 Merge pull request #20204 from holiman/fix_downloader_race
eth/downloader: fix data race in downloader
2019-10-29 17:10:44 +02:00
Felix Lange
2c37142d2f cmd/devp2p, p2p: dial using node iterator, discovery crawler (#20132)
* p2p/enode: add Iterator and associated utilities

* p2p/discover: add RandomNodes iterator

* p2p: dial using iterator

* cmd/devp2p: add discv4 crawler

* cmd/devp2p: WIP nodeset filter

* cmd/devp2p: fixup lesFilter

* core/forkid: add NewStaticFilter

* cmd/devp2p: make -eth-network filter actually work

* cmd/devp2p: improve crawl timestamp handling

* cmd/devp2p: fix typo

* p2p/enode: fix comment typos

* p2p/discover: fix comment typos

* p2p/discover: rename lookup.next to 'advance'

* p2p: lower discovery mixer timeout

* p2p/enode: implement dynamic FairMix timeouts

* cmd/devp2p: add ropsten support in -eth-network filter

* cmd/devp2p: tweak crawler log message
2019-10-29 17:08:57 +02:00
Martin Holst Swende
3eca7b5d27 eth/downloader: fix data race in downloader 2019-10-29 14:32:45 +01:00
Michael Forney
b0b277525c core/asm: assembly parser label fixes (#20210)
* core/asm: Fix encoding of pushed labels

EVM uses big-endian byte-order, so to pad a label value to 4 bytes,
zeros must be added to the front, not the end.

* core/asm: Fix PC calculations when a label is pushed

Incrementing PC by 5 is only correct if the label appears after a jump,
in which case there is an implicit push. When it appears after an explicit
push, PC should only be incremented by 4.

* core/asm: Allow JUMP with no argument

This way, a label can be pushed explicitly, or loaded from memory to
implement a jump table.
2019-10-29 13:47:18 +01:00
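The big-endian point is easy to see in a tiny sketch: a 4-byte label immediate must be zero-padded at the front; the PUSH4 opcode value comes from the EVM specification, everything else is illustrative.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// pushLabel encodes a label (program counter value) as a PUSH4 immediate.
// Big-endian means the zero padding ends up in front of the value.
func pushLabel(pc uint32) []byte {
	buf := make([]byte, 5)
	buf[0] = 0x63 // PUSH4
	binary.BigEndian.PutUint32(buf[1:], pc)
	return buf
}

func main() {
	fmt.Printf("% x\n", pushLabel(7)) // 63 00 00 00 07, not 63 07 00 00 00
}
```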
gary rong
ecdbb402ee trie: remove node ordering slice in sync batch (#19929)
When we flush a batch of trie nodes into the database during state
sync, we should guarantee that all children are flushed before their
parent.

The trie node commit order is already strictly children -> parent.
But when we flush all ready nodes into the db, the ordering no longer
matters, since

    (1) they are all ready nodes (no more dependencies)
    (2) the underlying database provides write atomicity
2019-10-28 18:50:11 +01:00
Michael Forney
9c81387bef cmd/evm: remove surrounding whitespace in hex input code (#20211)
This way, the output of `evm compile` can be used directly in `evm
--codefile code.txt run`, without stripping the trailing newline first.
2019-10-28 14:55:20 +01:00
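A minimal sketch of the fix's idea: trim surrounding whitespace (such as the newline emitted by `evm compile`) before hex-decoding; the function name here is illustrative.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeCode strips surrounding whitespace before hex-decoding the input.
func decodeCode(input string) ([]byte, error) {
	return hex.DecodeString(strings.TrimSpace(input))
}

func main() {
	code, err := decodeCode("60ff60ff\n") // trailing newline no longer breaks decoding
	fmt.Println(code, err)
}
```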
Guillaume Ballet
72617a0742 consensus: fix possessives in comments. (#20209) 2019-10-28 09:57:34 +02:00
Martin Holst Swende
db79143a13 clef: resolve windows pipes, fixes #20121 (#20166) 2019-10-24 10:45:07 +02:00
Piotr Dyraga
538f763fdc accounts/abi/bind: take into account gas price during gas estimation (#20189)
The gas price was not passed to the `EstimateGas` function. As a result,
conditional execution paths depending on `tx.gasprice` might not be
processed correctly, and we could get invalid gas estimates for contract
function calls.
2019-10-21 10:13:41 +02:00
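A sketch of the point being made, using the public ethclient API: including GasPrice in the CallMsg lets estimation exercise code paths that branch on tx.gasprice. Endpoint, addresses and price are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://127.0.0.1:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	to := common.HexToAddress("0x0000000000000000000000000000000000000001") // placeholder contract
	msg := ethereum.CallMsg{
		From:     common.HexToAddress("0x0000000000000000000000000000000000000002"),
		To:       &to,
		GasPrice: big.NewInt(1000000000), // carried into the estimation
		Data:     []byte{0x00},
	}
	gas, err := client.EstimateGas(context.Background(), msg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("estimated gas:", gas)
}
```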
gary rong
d4bb3798d8 miner: add generate and import unit test (#20111)
This PR adds a new unit test in the miner package which creates some blocks with the miner and then imports them into another chain. This way, we can ensure that all blocks generated by the Geth miner obey the consensus rules.
2019-10-20 12:36:40 +02:00
Marius Kjærstad
08953e42c1 metrics: change links in README.md to https (#20182) 2019-10-20 12:25:25 +02:00
Marius Kjærstad
b9299bbc46 dashboard: change links in README to https (#20181)
Changed http:// to https:// on links in dashboard/README.md
2019-10-18 21:30:53 +02:00
Marius Kjærstad
9a77065948 Changed http:// to https:// on links in log/README.md (#20178)
docs: change http to https on links in log/README.md
2019-10-18 08:51:54 +02:00
Jeffery Robert Walsh
a28093ced4 README: use new miner threads flag instead of legacy minerthreads flag (#20165) 2019-10-17 11:39:13 +03:00
Ross
d5b79e752e p2p/simulations: add node properties support and utility functions (#20060) 2019-10-17 10:07:09 +02:00
Felix Lange
7300365956 p2p/dnsdisc: update to latest EIP-1459 spec (#20168)
This updates the DNS TXT record format to the latest
changes in ethereum/EIPs#2313.
2019-10-16 14:35:24 +03:00
Martin Holst Swende
c476460cb2 params: check fork ordering when initializing new genesis, fixes #20136 (#20169)
Prevent users from misconfiguring their nodes such that fork ordering is not preserved.
2019-10-16 13:23:14 +02:00
gary rong
028af3457d cmd/utils: fix command line flag resolve (#20167)
In Geth, we have two sources for configuration:
(1) Config file
(2) Command line flag

Geth first resolves the config file and then overwrites those configs
with command line flags.

The issue is that geth should only overwrite a config value if the
corresponding flag was actually set, so a `GlobalIsSet` check is
necessary before applying any flag to the config.
2019-10-15 10:19:20 +02:00
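A minimal sketch of that precedence rule using the urfave/cli v1 API that geth builds on; the flag and config names are illustrative, not geth's real ones.

```go
package main

import (
	"fmt"
	"os"

	cli "gopkg.in/urfave/cli.v1"
)

var cacheFlag = cli.IntFlag{Name: "cache", Value: 1024}

type config struct{ Cache int }

func main() {
	app := cli.NewApp()
	app.Flags = []cli.Flag{cacheFlag}
	app.Action = func(ctx *cli.Context) error {
		cfg := config{Cache: 512} // pretend this value came from the config file
		if ctx.GlobalIsSet(cacheFlag.Name) {
			cfg.Cache = ctx.GlobalInt(cacheFlag.Name) // flag wins only if truly set
		}
		fmt.Println("cache:", cfg.Cache)
		return nil
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```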
Felix Lange
a73f3f4518 params: begin v1.9.7 release cycle 2019-10-03 11:29:55 +02:00
Felix Lange
bd05968077 params: release Geth v1.9.6 stable 2019-10-03 11:29:20 +02:00
Felix Lange
6e730915bd les: add empty "les" ENR entry for servers (#20145) 2019-10-02 14:14:27 +03:00
Darrel Herbst
c713ea7c22 cmd/bootnode: fix exit behavior with -genkey (#20110) 2019-10-02 11:32:02 +02:00
Martin Holst Swende
7f5f62aaa0 tests: update test suite for istanbul (#20082)
* update tests for istanbul

* tests: updated blockchaintests, see https://github.com/ethereum/tests/issues/637

* tests: update again, hopefully fixed this time

* tests: skip time consuming, run legacy tests

* tests: update again

* build: disable long-running tests on travis

* tests: fix formatting nits

* tests: I hate github's editor
2019-10-02 11:33:51 +03:00
kikilass
b2f696e025 github: Added capital P (#20139) 2019-09-30 22:57:20 +03:00
Péter Szilágyi
62b43ee0d5 Merge pull request #20133 from karalabe/measure-subprotocol-traffic
p2p: measure subprotocol bandwidth usage
2019-09-30 12:02:29 +03:00
Péter Szilágyi
a2a60869c8 p2p: measure subprotocol bandwidth usage 2019-09-27 18:00:25 +03:00
gary rong
df89233b57 ethdb/leveldb: disable seek compaction (#20130)
* vendor: update leveldb

* ethdb/leveldb: disable seek compaction and add metrics

* vendor: update latest leveldb

* ethdb/leveldb: fix typo
2019-09-26 17:44:00 +03:00
Martin Holst Swende
ead711779d core: initialize current block/fastblock atomics to nil, fix #19286 (#19352) 2019-09-26 11:10:35 +02:00
zcheng9
2133f18f15 core/state: fix database leak and copy tests (#19306) 2019-09-26 11:09:59 +02:00
ywzqwwt
1a6ef5ae58 core/blockchain: remove block from futureBlocks on error (#19763) 2019-09-26 10:57:51 +02:00
Ryan Schneider
ad03d9801c internal/ethapi: support block number or hash on state-related methods (#19491)
This change adds support for EIP-1898.
2019-09-26 10:47:31 +02:00
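With this in place, a state query can reference a block by hash instead of number; a hedged sketch over the raw RPC client is shown below, with a zero hash and local endpoint as placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://127.0.0.1:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// EIP-1898 style block reference: a hash object instead of a number.
	blockRef := map[string]interface{}{
		"blockHash":        "0x0000000000000000000000000000000000000000000000000000000000000000", // placeholder
		"requireCanonical": true,
	}
	var balance string
	err = client.Call(&balance, "eth_getBalance",
		"0x0000000000000000000000000000000000000001", blockRef)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("balance:", balance)
}
```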
Lucas Hendren
62391ddbeb tests/solidity: add contract to test every opcode (#19283)
Fixes #18210
2019-09-26 10:30:33 +02:00
Felix Lange
0568e81701 p2p/dnsdisc: add implementation of EIP-1459 (#20094)
This adds an implementation of node discovery via DNS TXT records to the
go-ethereum library. The implementation doesn't match EIP-1459 exactly,
the main difference being that this implementation uses separate merkle
trees for tree links and ENRs. The EIP will be updated to match p2p/dnsdisc.

To maintain DNS trees, cmd/devp2p provides a frontend for the p2p/dnsdisc
library. The new 'dns' subcommands can be used to create, sign and deploy DNS
discovery trees.
2019-09-25 11:38:13 +02:00
gary rong
32b07e8b1f les: fix checkpoint sync (#20120) 2019-09-25 10:05:15 +02:00
Péter Szilágyi
aca39a6498 Merge pull request #20115 from holiman/minor_dashboard_fx
dashboard: log dashboard url
2019-09-24 13:29:21 +03:00
Martin Holst Swende
be500b57d2 dashboard: log host+port 2019-09-24 12:01:21 +02:00
Péter Szilágyi
a308f012ba core/state: fix copy-commit-copy (#20113)
* core/state: revert noop finalise, fix copy-commit-copy

* core/state: reintroduce net sstore tracking, extend tests for it
2019-09-24 10:49:59 +03:00
Péter Szilágyi
311419c7d6 Merge pull request #20096 from skylenet/remove-ef-legacy-bootnodes
params: remove legacy bootnodes
2019-09-23 11:38:39 +03:00
Felix Lange
63b18027dc params: start v1.9.6 release cycle 2019-09-20 13:33:08 +02:00
Felix Lange
a1c09b9387 params: release Geth v1.9.5 stable 2019-09-20 13:32:42 +02:00
gary rong
05347b3d98 core/state: fix state object deep copy (#20100)
deepCopy didn't copy pending storage updates, leading to the
creation of blocks with an invalid state root.
2019-09-20 11:55:44 +02:00
Rafael Matias
75aec8a28d params: remove legacy bootnodes 2019-09-19 19:35:57 +02:00
Péter Szilágyi
24ef83518c params: start v1.9.5 release cycle 2019-09-19 11:38:43 +03:00
2377 changed files with 71736 additions and 878236 deletions

.github/CODEOWNERS

@@ -3,21 +3,21 @@
accounts/usbwallet @karalabe
accounts/scwallet @gballet
accounts/abi @gballet
accounts/abi @gballet @MariusVanDerWijden
cmd/clef @holiman
cmd/puppeth @karalabe
consensus @karalabe
core/ @karalabe @holiman @rjl493456442
dashboard/ @kurkomisi
eth/ @karalabe @holiman @rjl493456442
graphql/ @gballet
les/ @zsfelfoldi @rjl493456442
light/ @zsfelfoldi @rjl493456442
mobile/ @karalabe @ligi
node/ @fjl @renaynay
p2p/ @fjl @zsfelfoldi
rpc/ @fjl @holiman
p2p/simulations @zelig @nonsense @janos @justelad
p2p/protocols @zelig @nonsense @janos @justelad
p2p/testing @zelig @nonsense @janos @justelad
p2p/simulations @fjl
p2p/protocols @fjl
p2p/testing @fjl
signer/ @holiman
whisper/ @gballet @gluk256
whisper/ @gballet


@@ -1,8 +1,8 @@
Hi there,
please note that this is an issue tracker reserved for bug reports and feature requests.
Please note that this is an issue tracker reserved for bug reports and feature requests.
For general questions please use the gitter channel or the Ethereum stack exchange at https://ethereum.stackexchange.com.
For general questions please use [discord](https://discord.gg/nthXNEv) or the Ethereum stack exchange at https://ethereum.stackexchange.com.
#### System information

.gitignore

@@ -24,6 +24,7 @@ build/_vendor/pkg
# used by the Makefile
/build/_workspace/
/build/cache/
/build/bin/
/geth*.zip

.golangci.yml (new file)

@@ -0,0 +1,50 @@
# This file configures github.com/golangci/golangci-lint.
run:
timeout: 3m
tests: true
# default is true. Enables skipping of directories:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs-use-default: true
skip-files:
- core/genesis_alloc.go
linters:
disable-all: true
enable:
- deadcode
- goconst
- goimports
- gosimple
- govet
- ineffassign
- misspell
# - staticcheck
- unconvert
# - unused
- varcheck
linters-settings:
gofmt:
simplify: true
goconst:
min-len: 3 # minimum length of string constant
min-occurrences: 6 # minimum number of occurrences
issues:
exclude-rules:
- path: crypto/blake2b/
linters:
- deadcode
- path: crypto/bn256/cloudflare
linters:
- deadcode
- path: p2p/discv5/
linters:
- deadcode
- path: core/vm/instructions_test.go
linters:
- goconst
- path: cmd/faucet/
linters:
- deadcode

.travis.yml

@@ -2,12 +2,21 @@ language: go
go_import_path: github.com/ethereum/go-ethereum
sudo: false
jobs:
allow_failures:
- stage: build
os: osx
go: 1.14.x
env:
- azure-osx
- azure-ios
- cocoapods-ios
include:
# This builder only tests code linters on latest version of Go
# This builder only tests code linters on latest version of Go
- stage: lint
os: linux
dist: xenial
go: 1.12.x
go: 1.14.x
env:
- lint
git:
@@ -18,15 +27,9 @@ jobs:
- stage: build
os: linux
dist: xenial
go: 1.11.x
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: linux
dist: xenial
go: 1.12.x
go: 1.13.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
@@ -34,15 +37,33 @@ jobs:
# These are the latest Go versions.
- stage: build
os: linux
arch: amd64
dist: xenial
go: 1.13.x
go: 1.14.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
if: type = pull_request
os: linux
arch: arm64
dist: xenial
go: 1.14.x
env:
- GO111MODULE=on
script:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- stage: build
os: osx
go: 1.13.x
osx_image: xcode11.3
go: 1.14.x
env:
- GO111MODULE=on
script:
- echo "Increase the maximum number of open file descriptors on macOS"
- NOFILE=20480
@@ -61,9 +82,10 @@ jobs:
if: type = push
os: linux
dist: xenial
go: 1.13.x
go: 1.14.x
env:
- ubuntu-ppa
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
addons:
@@ -77,7 +99,7 @@ jobs:
- python-paramiko
script:
- echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
- go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
- go run build/ci.go debsrc -goversion 1.14.2 -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
# This builder does the Linux Azure uploads
- stage: build
@@ -85,9 +107,10 @@ jobs:
os: linux
dist: xenial
sudo: required
go: 1.13.x
go: 1.14.x
env:
- azure-linux
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
addons:
@@ -121,9 +144,10 @@ jobs:
dist: xenial
services:
- docker
go: 1.13.x
go: 1.14.x
env:
- azure-linux-mips
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
script:
@@ -164,10 +188,11 @@ jobs:
env:
- azure-android
- maven-android
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- curl https://dl.google.com/go/go1.13.linux-amd64.tar.gz | tar -xz
- curl https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go
@@ -185,11 +210,12 @@ jobs:
- stage: build
if: type = push
os: osx
go: 1.13.x
go: 1.14.x
env:
- azure-osx
- azure-ios
- cocoapods-ios
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
script:
@@ -216,9 +242,10 @@ jobs:
if: type = cron
os: linux
dist: xenial
go: 1.13.x
go: 1.14.x
env:
- azure-purge
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
script:


@@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.13-alpine as builder
FROM golang:1.14-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git


@@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.13-alpine as builder
FROM golang:1.14-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git

Makefile

@@ -10,33 +10,34 @@
GOBIN = ./build/bin
GO ?= latest
GORUN = env GO111MODULE=on go run
geth:
build/env.sh go run build/ci.go install ./cmd/geth
$(GORUN) build/ci.go install ./cmd/geth
@echo "Done building."
@echo "Run \"$(GOBIN)/geth\" to launch geth."
all:
build/env.sh go run build/ci.go install
$(GORUN) build/ci.go install
android:
build/env.sh go run build/ci.go aar --local
$(GORUN) build/ci.go aar --local
@echo "Done building."
@echo "Import \"$(GOBIN)/geth.aar\" to use the library."
ios:
build/env.sh go run build/ci.go xcode --local
$(GORUN) build/ci.go xcode --local
@echo "Done building."
@echo "Import \"$(GOBIN)/Geth.framework\" to use the library."
test: all
build/env.sh go run build/ci.go test
$(GORUN) build/ci.go test
lint: ## Run linters.
build/env.sh go run build/ci.go lint
$(GORUN) build/ci.go lint
clean:
./build/clean_go_build_cache.sh
env GO111MODULE=on go clean -cache
rm -fr build/_workspace/pkg/ $(GOBIN)/*
# The devtools target installs tools required for 'go generate'.
@@ -63,12 +64,12 @@ geth-linux: geth-linux-386 geth-linux-amd64 geth-linux-arm geth-linux-mips64 get
@ls -ld $(GOBIN)/geth-linux-*
geth-linux-386:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/386 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/386 -v ./cmd/geth
@echo "Linux 386 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep 386
geth-linux-amd64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/amd64 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/amd64 -v ./cmd/geth
@echo "Linux amd64 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep amd64
@@ -77,42 +78,42 @@ geth-linux-arm: geth-linux-arm-5 geth-linux-arm-6 geth-linux-arm-7 geth-linux-ar
@ls -ld $(GOBIN)/geth-linux-* | grep arm
geth-linux-arm-5:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm-5 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/arm-5 -v ./cmd/geth
@echo "Linux ARMv5 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm-5
geth-linux-arm-6:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm-6 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/arm-6 -v ./cmd/geth
@echo "Linux ARMv6 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm-6
geth-linux-arm-7:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm-7 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/arm-7 -v ./cmd/geth
@echo "Linux ARMv7 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm-7
geth-linux-arm64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm64 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/arm64 -v ./cmd/geth
@echo "Linux ARM64 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm64
geth-linux-mips:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mips --ldflags '-extldflags "-static"' -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/mips --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPS cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mips
geth-linux-mipsle:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mipsle --ldflags '-extldflags "-static"' -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/mipsle --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPSle cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mipsle
geth-linux-mips64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mips64 --ldflags '-extldflags "-static"' -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/mips64 --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPS64 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mips64
geth-linux-mips64le:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mips64le --ldflags '-extldflags "-static"' -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=linux/mips64le --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPS64le cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mips64le
@@ -121,12 +122,12 @@ geth-darwin: geth-darwin-386 geth-darwin-amd64
@ls -ld $(GOBIN)/geth-darwin-*
geth-darwin-386:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=darwin/386 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=darwin/386 -v ./cmd/geth
@echo "Darwin 386 cross compilation done:"
@ls -ld $(GOBIN)/geth-darwin-* | grep 386
geth-darwin-amd64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=darwin/amd64 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=darwin/amd64 -v ./cmd/geth
@echo "Darwin amd64 cross compilation done:"
@ls -ld $(GOBIN)/geth-darwin-* | grep amd64
@@ -135,11 +136,11 @@ geth-windows: geth-windows-386 geth-windows-amd64
@ls -ld $(GOBIN)/geth-windows-*
geth-windows-386:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=windows/386 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=windows/386 -v ./cmd/geth
@echo "Windows 386 cross compilation done:"
@ls -ld $(GOBIN)/geth-windows-* | grep 386
geth-windows-amd64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=windows/amd64 -v ./cmd/geth
$(GORUN) build/ci.go xgo -- --go=$(GO) --targets=windows/amd64 -v ./cmd/geth
@echo "Windows amd64 cross compilation done:"
@ls -ld $(GOBIN)/geth-windows-* | grep amd64

README.md

@@ -4,7 +4,7 @@ Official Golang implementation of the Ethereum protocol.
[![API Reference](
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
)](https://godoc.org/github.com/ethereum/go-ethereum)
)](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc)
[![Go Report Card](https://goreportcard.com/badge/github.com/ethereum/go-ethereum)](https://goreportcard.com/report/github.com/ethereum/go-ethereum)
[![Travis](https://travis-ci.org/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.org/ethereum/go-ethereum)
[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/nthXNEv)
@@ -16,7 +16,7 @@ archives are published at https://geth.ethereum.org/downloads/.
For prerequisites and detailed build instructions please read the [Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum) on the wiki.
Building `geth` requires both a Go (version 1.10 or later) and a C compiler. You can install
Building `geth` requires both a Go (version 1.13 or later) and a C compiler. You can install
them using your favourite package manager. Once the dependencies are installed, run
```shell
@@ -39,7 +39,7 @@ directory.
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
@@ -72,7 +72,7 @@ This command will:
This tool is optional and if you leave it out you can always attach to an already running
`geth` instance with `geth attach`.
### A Full node on the Ethereum test network
### A Full node on the Görli test network
Transitioning towards developers, if you'd like to play around with creating Ethereum
contracts, you almost certainly would like to do that without any real money involved until
@@ -81,23 +81,24 @@ network, you want to join the **test** network with your node, which is fully eq
the main network, but with play-Ether only.
```shell
$ geth --testnet console
$ geth --goerli console
```
The `console` subcommand has the exact same meaning as above and they are equally
useful on the testnet too. Please see above for their explanations if you've skipped here.
useful on the testnet too. Please, see above for their explanations if you've skipped here.
Specifying the `--testnet` flag, however, will reconfigure your `geth` instance a bit:
Specifying the `--goerli` flag, however, will reconfigure your `geth` instance a bit:
* Instead of connecting the main Ethereum network, the client will connect to the Görli
test network, which uses different P2P bootnodes, different network IDs and genesis
states.
* Instead of using the default data directory (`~/.ethereum` on Linux for example), `geth`
will nest itself one level deeper into a `testnet` subfolder (`~/.ethereum/testnet` on
will nest itself one level deeper into a `goerli` subfolder (`~/.ethereum/goerli` on
Linux). Note, on OSX and Linux this also means that attaching to a running testnet node
requires the use of a custom endpoint since `geth attach` will try to attach to a
production node endpoint by default. E.g.
`geth attach <datadir>/testnet/geth.ipc`. Windows users are not affected by
production node endpoint by default, e.g.,
`geth attach <datadir>/goerli/geth.ipc`. Windows users are not affected by
this.
* Instead of connecting the main Ethereum network, the client will connect to the test
network, which uses different P2P bootnodes, different network IDs and genesis states.
*Note: Although there are some internal protective measures to prevent transactions from
crossing over between the main network and test network, you should make sure to always
@@ -107,17 +108,26 @@ accounts available between them.*
### Full node on the Rinkeby test network
The above test network is a cross-client one based on the ethash proof-of-work consensus
algorithm. As such, it has certain extra overhead and is more susceptible to reorganization
attacks due to the network's low difficulty/security. Go Ethereum also supports connecting
to a proof-of-authority based test network called [*Rinkeby*](https://www.rinkeby.io)
(operated by members of the community). This network is lighter, more secure, but is only
supported by go-ethereum.
Go Ethereum also supports connecting to the older proof-of-authority based test network
called [*Rinkeby*](https://www.rinkeby.io), which is operated by members of the community.
```shell
$ geth --rinkeby console
```
### Full node on the Ropsten test network
In addition to Görli and Rinkeby, Geth also supports the ancient Ropsten testnet. The
Ropsten test network is based on the Ethash proof-of-work consensus algorithm. As such,
it has certain extra overhead and is more susceptible to reorganization attacks due to the
network's low difficulty/security.
```shell
$ geth --ropsten console
```
*Note: Older Geth configurations store the Ropsten database in the `testnet` subdirectory.*
### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a
@@ -152,7 +162,7 @@ above command does. It will also create a persistent volume in your home direct
saving your blockchain as well as map the default ports. There is also an `alpine` tag
available for a slim version of the image.
Do not forget `--rpcaddr 0.0.0.0`, if you want to access RPC from other containers
Do not forget `--http.addr 0.0.0.0` if you want to access RPC from other containers
and/or hosts. By default, `geth` binds to the local interface and RPC endpoints are not
accessible from the outside.
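A rough sketch of such a container start, assuming the standard `ethereum/client-go` image (the container name and volume path below are placeholders):

```shell
$ docker run -d --name geth-node \
    -v $HOME/ethereum:/root \
    -p 8545:8545 -p 30303:30303 \
    ethereum/client-go --http --http.addr 0.0.0.0
```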
@@ -172,16 +182,16 @@ you'd expect.
HTTP based JSON-RPC API options:
* `--rpc` Enable the HTTP-RPC server
* `--rpcaddr` HTTP-RPC server listening interface (default: `localhost`)
* `--rpcport` HTTP-RPC server listening port (default: `8545`)
* `--rpcapi` API's offered over the HTTP-RPC interface (default: `eth,net,web3`)
* `--rpccorsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--http` Enable the HTTP-RPC server
* `--http.addr` HTTP-RPC server listening interface (default: `localhost`)
* `--http.port` HTTP-RPC server listening port (default: `8545`)
* `--http.api` APIs offered over the HTTP-RPC interface (default: `eth,net,web3`)
* `--http.corsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--ws` Enable the WS-RPC server
* `--wsaddr` WS-RPC server listening interface (default: `localhost`)
* `--wsport` WS-RPC server listening port (default: `8546`)
* `--wsapi` API's offered over the WS-RPC interface (default: `eth,net,web3`)
* `--wsorigins` Origins from which to accept websockets requests
* `--ws.addr` WS-RPC server listening interface (default: `localhost`)
* `--ws.port` WS-RPC server listening port (default: `8546`)
* `--ws.api` APIs offered over the WS-RPC interface (default: `eth,net,web3`)
* `--ws.origins` Origins from which to accept websockets requests
* `--ipcdisable` Disable the IPC-RPC server
* `--ipcapi` APIs offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,shh,txpool,web3`)
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
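For example, a node exposing the default namespaces over both HTTP and WebSocket with the renamed flags could be started roughly as follows (the values shown are the defaults and purely illustrative):

```shell
$ geth --http --http.addr localhost --http.port 8545 --http.api eth,net,web3 \
       --ws --ws.addr localhost --ws.port 8546 --ws.api eth,net,web3
```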
@@ -217,7 +227,8 @@ aware of and agree upon. This consists of a small JSON file (e.g. call it `genes
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0
"petersburgBlock": 0,
"istanbulBlock": 0
},
"alloc": {},
"coinbase": "0x0000000000000000000000000000000000000000",
@@ -294,7 +305,7 @@ also need to configure a miner to process transactions and create new blocks for
Mining on the public Ethereum network is a complex task as it's only feasible using GPUs,
requiring an OpenCL or CUDA enabled `ethminer` instance. For information on such a
setup, please consult the [EtherMining subreddit](https://www.reddit.com/r/EtherMining/)
and the [Genoil miner](https://github.com/Genoil/cpp-ethereum) repository.
and the [ethminer](https://github.com/ethereum-mining/ethminer) repository.
In a private network setting, however, a single CPU miner instance is more than enough for
practical purposes, as it can produce a stable stream of blocks at the correct intervals
@@ -303,7 +314,7 @@ ones either). To start a `geth` instance for mining, run it with all your usual
by:
```shell
$ geth <usual-flags> --mine --minerthreads=1 --etherbase=0x0000000000000000000000000000000000000000
$ geth <usual-flags> --mine --miner.threads=1 --etherbase=0x0000000000000000000000000000000000000000
```
Which will start mining blocks and transactions on a single CPU thread, crediting all

View File

@@ -19,10 +19,12 @@ package abi
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
// The ABI holds information about a contract's context and available
@@ -32,6 +34,12 @@ type ABI struct {
Constructor Method
Methods map[string]Method
Events map[string]Event
// Additional "special" functions introduced in solidity v0.6.0.
// It's separated from the original default fallback. Each contract
// can only define one fallback and receive function.
Fallback Method // Note it's also used to represent legacy fallback before v0.6.0
Receive Method
}
// JSON returns a parsed ABI interface and error if it failed.
@@ -42,7 +50,6 @@ func JSON(reader io.Reader) (ABI, error) {
if err := dec.Decode(&abi); err != nil {
return ABI{}, err
}
return abi, nil
}
@@ -70,14 +77,11 @@ func (abi ABI) Pack(name string, args ...interface{}) ([]byte, error) {
return nil, err
}
// Pack up the method ID too if not a constructor and return
return append(method.ID(), arguments...), nil
return append(method.ID, arguments...), nil
}
// Unpack output in v according to the abi specification
func (abi ABI) Unpack(v interface{}, name string, data []byte) (err error) {
if len(data) == 0 {
return fmt.Errorf("abi: unmarshalling empty output")
}
// since there can't be naming collisions with contracts and events,
// we need to decide whether we're calling a method or an event
if method, ok := abi.Methods[name]; ok {
@@ -94,9 +98,6 @@ func (abi ABI) Unpack(v interface{}, name string, data []byte) (err error) {
// UnpackIntoMap unpacks a log into the provided map[string]interface{}
func (abi ABI) UnpackIntoMap(v map[string]interface{}, name string, data []byte) (err error) {
if len(data) == 0 {
return fmt.Errorf("abi: unmarshalling empty output")
}
// since there can't be naming collisions with contracts and events,
// we need to decide whether we're calling a method or an event
if method, ok := abi.Methods[name]; ok {
@@ -114,12 +115,22 @@ func (abi ABI) UnpackIntoMap(v map[string]interface{}, name string, data []byte)
// UnmarshalJSON implements json.Unmarshaler interface
func (abi *ABI) UnmarshalJSON(data []byte) error {
var fields []struct {
Type string
Name string
Constant bool
Type string
Name string
Inputs []Argument
Outputs []Argument
// Status indicator which can be: "pure", "view",
// "nonpayable" or "payable".
StateMutability string
// Deprecated Status indicators, but removed in v0.6.0.
Constant bool // True if function is either pure or view
Payable bool // True if function is payable
// Event relevant indicator represents the event is
// declared as anonymous.
Anonymous bool
Inputs []Argument
Outputs []Argument
}
if err := json.Unmarshal(data, &fields); err != nil {
return err
@@ -129,43 +140,67 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
for _, field := range fields {
switch field.Type {
case "constructor":
abi.Constructor = Method{
Inputs: field.Inputs,
abi.Constructor = NewMethod("", "", Constructor, field.StateMutability, field.Constant, field.Payable, field.Inputs, nil)
case "function":
name := abi.overloadedMethodName(field.Name)
abi.Methods[name] = NewMethod(name, field.Name, Function, field.StateMutability, field.Constant, field.Payable, field.Inputs, field.Outputs)
case "fallback":
// New introduced function type in v0.6.0, check more detail
// here https://solidity.readthedocs.io/en/v0.6.0/contracts.html#fallback-function
if abi.HasFallback() {
return errors.New("only single fallback is allowed")
}
// empty defaults to function according to the abi spec
case "function", "":
name := field.Name
_, ok := abi.Methods[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", field.Name, idx)
_, ok = abi.Methods[name]
abi.Fallback = NewMethod("", "", Fallback, field.StateMutability, field.Constant, field.Payable, nil, nil)
case "receive":
// New introduced function type in v0.6.0, check more detail
// here https://solidity.readthedocs.io/en/v0.6.0/contracts.html#fallback-function
if abi.HasReceive() {
return errors.New("only single receive is allowed")
}
abi.Methods[name] = Method{
Name: name,
RawName: field.Name,
Const: field.Constant,
Inputs: field.Inputs,
Outputs: field.Outputs,
if field.StateMutability != "payable" {
return errors.New("the statemutability of receive can only be payable")
}
abi.Receive = NewMethod("", "", Receive, field.StateMutability, field.Constant, field.Payable, nil, nil)
case "event":
name := field.Name
_, ok := abi.Events[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", field.Name, idx)
_, ok = abi.Events[name]
}
abi.Events[name] = Event{
Name: name,
RawName: field.Name,
Anonymous: field.Anonymous,
Inputs: field.Inputs,
}
name := abi.overloadedEventName(field.Name)
abi.Events[name] = NewEvent(name, field.Name, field.Anonymous, field.Inputs)
default:
return fmt.Errorf("abi: could not recognize type %v of field %v", field.Type, field.Name)
}
}
return nil
}
// overloadedMethodName returns the next available name for a given function.
// Needed since solidity allows for function overload.
//
// e.g. if the abi contains Methods send, send1
// overloadedMethodName would return send2 for input send.
func (abi *ABI) overloadedMethodName(rawName string) string {
name := rawName
_, ok := abi.Methods[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", rawName, idx)
_, ok = abi.Methods[name]
}
return name
}
// overloadedEventName returns the next available name for a given event.
// Needed since solidity allows for event overload.
//
// e.g. if the abi contains events received, received1
// overloadedEventName would return received2 for input received.
func (abi *ABI) overloadedEventName(rawName string) string {
name := rawName
_, ok := abi.Events[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", rawName, idx)
_, ok = abi.Events[name]
}
return name
}
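The net effect of these helpers is that overloaded Solidity names become uniquely addressable by an index suffix. A small illustrative sketch (the contract JSON below is hypothetical):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// Two Solidity overloads of "transfer": the parser keeps the first under
	// its raw name and renames the second to "transfer0".
	const def = `[
		{"type":"function","name":"transfer","inputs":[{"name":"to","type":"address"}]},
		{"type":"function","name":"transfer","inputs":[{"name":"to","type":"address"},{"name":"amount","type":"uint256"}]}
	]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed.Methods["transfer"].Sig)  // transfer(address)
	fmt.Println(parsed.Methods["transfer0"].Sig) // transfer(address,uint256)
}
```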
// MethodById looks up a method by the 4-byte id
// returns nil if none found
func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
@@ -173,7 +208,7 @@ func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
return nil, fmt.Errorf("data too short (%d bytes) for abi method lookup", len(sigdata))
}
for _, method := range abi.Methods {
if bytes.Equal(method.ID(), sigdata[:4]) {
if bytes.Equal(method.ID, sigdata[:4]) {
return &method, nil
}
}
@@ -184,9 +219,41 @@ func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
// ABI and returns nil if none found.
func (abi *ABI) EventByID(topic common.Hash) (*Event, error) {
for _, event := range abi.Events {
if bytes.Equal(event.ID().Bytes(), topic.Bytes()) {
if bytes.Equal(event.ID.Bytes(), topic.Bytes()) {
return &event, nil
}
}
return nil, fmt.Errorf("no event with id: %#x", topic.Hex())
}
// HasFallback returns an indicator whether a fallback function is included.
func (abi *ABI) HasFallback() bool {
return abi.Fallback.Type == Fallback
}
// HasReceive returns an indicator whether a receive function is included.
func (abi *ABI) HasReceive() bool {
return abi.Receive.Type == Receive
}
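A hedged sketch of how the new fallback/receive bookkeeping surfaces to callers (the ABI fragment below is made up for illustration):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// Hypothetical Solidity >=0.6.0 ABI fragment declaring both special functions.
	const def = `[
		{"type":"fallback","stateMutability":"nonpayable"},
		{"type":"receive","stateMutability":"payable"}
	]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed.HasFallback()) // true
	fmt.Println(parsed.HasReceive())  // true
}
```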
// revertSelector is a special function selector for revert reason unpacking.
var revertSelector = crypto.Keccak256([]byte("Error(string)"))[:4]
// UnpackRevert resolves the abi-encoded revert reason. According to the solidity
// spec https://solidity.readthedocs.io/en/latest/control-structures.html#revert,
// the provided revert reason is abi-encoded as if it were a call to a function
// `Error(string)`. So it's a special tool for it.
func UnpackRevert(data []byte) (string, error) {
if len(data) < 4 {
return "", errors.New("invalid data for unpacking")
}
if !bytes.Equal(data[:4], revertSelector) {
return "", errors.New("invalid data for unpacking")
}
var reason string
typ, _ := NewType("string", "", nil)
if err := (Arguments{{Type: typ}}).Unpack(&reason, data[4:]); err != nil {
return "", err
}
return reason, nil
}
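Putting `UnpackRevert` to work is straightforward; a minimal sketch using the same encoded payload that appears in `TestUnpackRevert` further down:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// ABI encoding of Error("revert reason"): the 0x08c379a0 selector followed
	// by the ABI-encoded string argument.
	data := common.Hex2Bytes("08c379a00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000d72657665727420726561736f6e00000000000000000000000000000000000000")
	reason, err := abi.UnpackRevert(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(reason) // revert reason
}
```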

View File

@@ -19,6 +19,7 @@ package abi
import (
"bytes"
"encoding/hex"
"errors"
"fmt"
"math/big"
"reflect"
@@ -26,57 +27,105 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/crypto"
)
const jsondata = `
[
{ "type" : "function", "name" : "balance", "constant" : true },
{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] }
{ "type" : "function", "name" : "", "stateMutability" : "view" },
{ "type" : "function", "name" : "balance", "stateMutability" : "view" },
{ "type" : "function", "name" : "send", "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "type" : "function", "name" : "test", "inputs" : [ { "name" : "number", "type" : "uint32" } ] },
{ "type" : "function", "name" : "string", "inputs" : [ { "name" : "inputs", "type" : "string" } ] },
{ "type" : "function", "name" : "bool", "inputs" : [ { "name" : "inputs", "type" : "bool" } ] },
{ "type" : "function", "name" : "address", "inputs" : [ { "name" : "inputs", "type" : "address" } ] },
{ "type" : "function", "name" : "uint64[2]", "inputs" : [ { "name" : "inputs", "type" : "uint64[2]" } ] },
{ "type" : "function", "name" : "uint64[]", "inputs" : [ { "name" : "inputs", "type" : "uint64[]" } ] },
{ "type" : "function", "name" : "int8", "inputs" : [ { "name" : "inputs", "type" : "int8" } ] },
{ "type" : "function", "name" : "foo", "inputs" : [ { "name" : "inputs", "type" : "uint32" } ] },
{ "type" : "function", "name" : "bar", "inputs" : [ { "name" : "inputs", "type" : "uint32" }, { "name" : "string", "type" : "uint16" } ] },
{ "type" : "function", "name" : "slice", "inputs" : [ { "name" : "inputs", "type" : "uint32[2]" } ] },
{ "type" : "function", "name" : "slice256", "inputs" : [ { "name" : "inputs", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "sliceAddress", "inputs" : [ { "name" : "inputs", "type" : "address[]" } ] },
{ "type" : "function", "name" : "sliceMultiAddress", "inputs" : [ { "name" : "a", "type" : "address[]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray", "inputs" : [ { "name" : "a", "type" : "uint256[2][2]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray2", "inputs" : [ { "name" : "a", "type" : "uint8[][2]" } ] },
{ "type" : "function", "name" : "nestedSlice", "inputs" : [ { "name" : "a", "type" : "uint8[][]" } ] },
{ "type" : "function", "name" : "receive", "inputs" : [ { "name" : "memo", "type" : "bytes" }], "outputs" : [], "payable" : true, "stateMutability" : "payable" },
{ "type" : "function", "name" : "fixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "fixedArrBytes", "stateMutability" : "view", "inputs" : [ { "name" : "bytes", "type" : "bytes" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "mixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type" : "uint256[2]" }, { "name" : "dynArr", "type" : "uint256[]" } ] },
{ "type" : "function", "name" : "doubleFixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type" : "uint256[2]" }, { "name" : "fixedArr2", "type" : "uint256[3]" } ] },
{ "type" : "function", "name" : "multipleMixedArrStr", "stateMutability" : "view", "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type" : "uint256[2]" }, { "name" : "dynArr", "type" : "uint256[]" }, { "name" : "fixedArr2", "type" : "uint256[3]" } ] },
{ "type" : "function", "name" : "overloadedNames", "stateMutability" : "view", "inputs": [ { "components": [ { "internalType": "uint256", "name": "_f", "type": "uint256" }, { "internalType": "uint256", "name": "__f", "type": "uint256"}, { "internalType": "uint256", "name": "f", "type": "uint256"}],"internalType": "struct Overloader.F", "name": "f","type": "tuple"}]}
]`
const jsondata2 = `
[
{ "type" : "function", "name" : "balance", "constant" : true },
{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "type" : "function", "name" : "test", "constant" : false, "inputs" : [ { "name" : "number", "type" : "uint32" } ] },
{ "type" : "function", "name" : "string", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "string" } ] },
{ "type" : "function", "name" : "bool", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "bool" } ] },
{ "type" : "function", "name" : "address", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "address" } ] },
{ "type" : "function", "name" : "uint64[2]", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint64[2]" } ] },
{ "type" : "function", "name" : "uint64[]", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint64[]" } ] },
{ "type" : "function", "name" : "foo", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32" } ] },
{ "type" : "function", "name" : "bar", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32" }, { "name" : "string", "type" : "uint16" } ] },
{ "type" : "function", "name" : "slice", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32[2]" } ] },
{ "type" : "function", "name" : "slice256", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "sliceAddress", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "address[]" } ] },
{ "type" : "function", "name" : "sliceMultiAddress", "constant" : false, "inputs" : [ { "name" : "a", "type" : "address[]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray", "constant" : false, "inputs" : [ { "name" : "a", "type" : "uint256[2][2]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray2", "constant" : false, "inputs" : [ { "name" : "a", "type" : "uint8[][2]" } ] },
{ "type" : "function", "name" : "nestedSlice", "constant" : false, "inputs" : [ { "name" : "a", "type" : "uint8[][]" } ] }
]`
var (
Uint256, _ = NewType("uint256", "", nil)
Uint32, _ = NewType("uint32", "", nil)
Uint16, _ = NewType("uint16", "", nil)
String, _ = NewType("string", "", nil)
Bool, _ = NewType("bool", "", nil)
Bytes, _ = NewType("bytes", "", nil)
Address, _ = NewType("address", "", nil)
Uint64Arr, _ = NewType("uint64[]", "", nil)
AddressArr, _ = NewType("address[]", "", nil)
Int8, _ = NewType("int8", "", nil)
// Special types for testing
Uint32Arr2, _ = NewType("uint32[2]", "", nil)
Uint64Arr2, _ = NewType("uint64[2]", "", nil)
Uint256Arr, _ = NewType("uint256[]", "", nil)
Uint256Arr2, _ = NewType("uint256[2]", "", nil)
Uint256Arr3, _ = NewType("uint256[3]", "", nil)
Uint256ArrNested, _ = NewType("uint256[2][2]", "", nil)
Uint8ArrNested, _ = NewType("uint8[][2]", "", nil)
Uint8SliceNested, _ = NewType("uint8[][]", "", nil)
TupleF, _ = NewType("tuple", "struct Overloader.F", []ArgumentMarshaling{
{Name: "_f", Type: "uint256"},
{Name: "__f", Type: "uint256"},
{Name: "f", Type: "uint256"}})
)
var methods = map[string]Method{
"": NewMethod("", "", Function, "view", false, false, nil, nil),
"balance": NewMethod("balance", "balance", Function, "view", false, false, nil, nil),
"send": NewMethod("send", "send", Function, "", false, false, []Argument{{"amount", Uint256, false}}, nil),
"test": NewMethod("test", "test", Function, "", false, false, []Argument{{"number", Uint32, false}}, nil),
"string": NewMethod("string", "string", Function, "", false, false, []Argument{{"inputs", String, false}}, nil),
"bool": NewMethod("bool", "bool", Function, "", false, false, []Argument{{"inputs", Bool, false}}, nil),
"address": NewMethod("address", "address", Function, "", false, false, []Argument{{"inputs", Address, false}}, nil),
"uint64[]": NewMethod("uint64[]", "uint64[]", Function, "", false, false, []Argument{{"inputs", Uint64Arr, false}}, nil),
"uint64[2]": NewMethod("uint64[2]", "uint64[2]", Function, "", false, false, []Argument{{"inputs", Uint64Arr2, false}}, nil),
"int8": NewMethod("int8", "int8", Function, "", false, false, []Argument{{"inputs", Int8, false}}, nil),
"foo": NewMethod("foo", "foo", Function, "", false, false, []Argument{{"inputs", Uint32, false}}, nil),
"bar": NewMethod("bar", "bar", Function, "", false, false, []Argument{{"inputs", Uint32, false}, {"string", Uint16, false}}, nil),
"slice": NewMethod("slice", "slice", Function, "", false, false, []Argument{{"inputs", Uint32Arr2, false}}, nil),
"slice256": NewMethod("slice256", "slice256", Function, "", false, false, []Argument{{"inputs", Uint256Arr2, false}}, nil),
"sliceAddress": NewMethod("sliceAddress", "sliceAddress", Function, "", false, false, []Argument{{"inputs", AddressArr, false}}, nil),
"sliceMultiAddress": NewMethod("sliceMultiAddress", "sliceMultiAddress", Function, "", false, false, []Argument{{"a", AddressArr, false}, {"b", AddressArr, false}}, nil),
"nestedArray": NewMethod("nestedArray", "nestedArray", Function, "", false, false, []Argument{{"a", Uint256ArrNested, false}, {"b", AddressArr, false}}, nil),
"nestedArray2": NewMethod("nestedArray2", "nestedArray2", Function, "", false, false, []Argument{{"a", Uint8ArrNested, false}}, nil),
"nestedSlice": NewMethod("nestedSlice", "nestedSlice", Function, "", false, false, []Argument{{"a", Uint8SliceNested, false}}, nil),
"receive": NewMethod("receive", "receive", Function, "payable", false, true, []Argument{{"memo", Bytes, false}}, []Argument{}),
"fixedArrStr": NewMethod("fixedArrStr", "fixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr", Uint256Arr2, false}}, nil),
"fixedArrBytes": NewMethod("fixedArrBytes", "fixedArrBytes", Function, "view", false, false, []Argument{{"bytes", Bytes, false}, {"fixedArr", Uint256Arr2, false}}, nil),
"mixedArrStr": NewMethod("mixedArrStr", "mixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr", Uint256Arr2, false}, {"dynArr", Uint256Arr, false}}, nil),
"doubleFixedArrStr": NewMethod("doubleFixedArrStr", "doubleFixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr1", Uint256Arr2, false}, {"fixedArr2", Uint256Arr3, false}}, nil),
"multipleMixedArrStr": NewMethod("multipleMixedArrStr", "multipleMixedArrStr", Function, "view", false, false, []Argument{{"str", String, false}, {"fixedArr1", Uint256Arr2, false}, {"dynArr", Uint256Arr, false}, {"fixedArr2", Uint256Arr3, false}}, nil),
"overloadedNames": NewMethod("overloadedNames", "overloadedNames", Function, "view", false, false, []Argument{{"f", TupleF, false}}, nil),
}
func TestReader(t *testing.T) {
Uint256, _ := NewType("uint256", nil)
exp := ABI{
Methods: map[string]Method{
"balance": {
"balance", "balance", true, nil, nil,
},
"send": {
"send", "send", false, []Argument{
{"amount", Uint256, false},
}, nil,
},
},
abi := ABI{
Methods: methods,
}
abi, err := JSON(strings.NewReader(jsondata))
exp, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Error(err)
t.Fatal(err)
}
// deep equal fails for some reason
for name, expM := range exp.Methods {
gotM, exist := abi.Methods[name]
if !exist {
@@ -98,8 +147,58 @@ func TestReader(t *testing.T) {
}
}
func TestInvalidABI(t *testing.T) {
json := `[{ "type" : "function", "name" : "", "constant" : fals }]`
_, err := JSON(strings.NewReader(json))
if err == nil {
t.Fatal("invalid json should produce error")
}
json2 := `[{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "typ" : "uint256" } ] }]`
_, err = JSON(strings.NewReader(json2))
if err == nil {
t.Fatal("invalid json should produce error")
}
}
// TestConstructor tests a constructor function.
// The test is based on the following contract:
// contract TestConstructor {
// constructor(uint256 a, uint256 b) public{}
// }
func TestConstructor(t *testing.T) {
json := `[{ "inputs": [{"internalType": "uint256","name": "a","type": "uint256" },{ "internalType": "uint256","name": "b","type": "uint256"}],"stateMutability": "nonpayable","type": "constructor"}]`
method := NewMethod("", "", Constructor, "nonpayable", false, false, []Argument{{"a", Uint256, false}, {"b", Uint256, false}}, nil)
// Test from JSON
abi, err := JSON(strings.NewReader(json))
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(abi.Constructor, method) {
t.Error("Missing expected constructor")
}
// Test pack/unpack
packed, err := abi.Pack("", big.NewInt(1), big.NewInt(2))
if err != nil {
t.Error(err)
}
v := struct {
A *big.Int
B *big.Int
}{new(big.Int), new(big.Int)}
//abi.Unpack(&v, "", packed)
if err := abi.Constructor.Inputs.Unpack(&v, packed); err != nil {
t.Error(err)
}
if !reflect.DeepEqual(v.A, big.NewInt(1)) {
t.Error("Unable to pack/unpack from constructor")
}
if !reflect.DeepEqual(v.B, big.NewInt(2)) {
t.Error("Unable to pack/unpack from constructor")
}
}
func TestTestNumbers(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
@@ -135,64 +234,26 @@ func TestTestNumbers(t *testing.T) {
}
}
func TestTestString(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
if _, err := abi.Pack("string", "hello world"); err != nil {
t.Error(err)
}
}
func TestTestBool(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
if _, err := abi.Pack("bool", true); err != nil {
t.Error(err)
}
}
func TestTestSlice(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
slice := make([]uint64, 2)
if _, err := abi.Pack("uint64[2]", slice); err != nil {
t.Error(err)
}
if _, err := abi.Pack("uint64[]", slice); err != nil {
t.Error(err)
}
}
func TestMethodSignature(t *testing.T) {
String, _ := NewType("string", nil)
m := Method{"foo", "foo", false, []Argument{{"bar", String, false}, {"baz", String, false}}, nil}
m := NewMethod("foo", "foo", Function, "", false, false, []Argument{{"bar", String, false}, {"baz", String, false}}, nil)
exp := "foo(string,string)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
if m.Sig != exp {
t.Error("signature mismatch", exp, "!=", m.Sig)
}
idexp := crypto.Keccak256([]byte(exp))[:4]
if !bytes.Equal(m.ID(), idexp) {
t.Errorf("expected ids to match %x != %x", m.ID(), idexp)
if !bytes.Equal(m.ID, idexp) {
t.Errorf("expected ids to match %x != %x", m.ID, idexp)
}
uintt, _ := NewType("uint256", nil)
m = Method{"foo", "foo", false, []Argument{{"bar", uintt, false}}, nil}
m = NewMethod("foo", "foo", Function, "", false, false, []Argument{{"bar", Uint256, false}}, nil)
exp = "foo(uint256)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
if m.Sig != exp {
t.Error("signature mismatch", exp, "!=", m.Sig)
}
// Method with tuple arguments
s, _ := NewType("tuple", []ArgumentMarshaling{
s, _ := NewType("tuple", "", []ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256[]"},
{Name: "c", Type: "tuple[]", Components: []ArgumentMarshaling{
@@ -204,10 +265,10 @@ func TestMethodSignature(t *testing.T) {
{Name: "y", Type: "int256"},
}},
})
m = Method{"foo", "foo", false, []Argument{{"s", s, false}, {"bar", String, false}}, nil}
m = NewMethod("foo", "foo", Function, "", false, false, []Argument{{"s", s, false}, {"bar", String, false}}, nil)
exp = "foo((int256,int256[],(int256,int256)[],(int256,int256)[2]),string)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
if m.Sig != exp {
t.Error("signature mismatch", exp, "!=", m.Sig)
}
}
@@ -219,12 +280,12 @@ func TestOverloadedMethodSignature(t *testing.T) {
}
check := func(name string, expect string, method bool) {
if method {
if abi.Methods[name].Sig() != expect {
t.Fatalf("The signature of overloaded method mismatch, want %s, have %s", expect, abi.Methods[name].Sig())
if abi.Methods[name].Sig != expect {
t.Fatalf("The signature of overloaded method mismatch, want %s, have %s", expect, abi.Methods[name].Sig)
}
} else {
if abi.Events[name].Sig() != expect {
t.Fatalf("The signature of overloaded event mismatch, want %s, have %s", expect, abi.Events[name].Sig())
if abi.Events[name].Sig != expect {
t.Fatalf("The signature of overloaded event mismatch, want %s, have %s", expect, abi.Events[name].Sig)
}
}
}
@@ -235,7 +296,7 @@ func TestOverloadedMethodSignature(t *testing.T) {
}
func TestMultiPack(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
@@ -400,15 +461,7 @@ func TestInputVariableInputLength(t *testing.T) {
}
func TestInputFixedArrayAndVariableInputLength(t *testing.T) {
const definition = `[
{ "type" : "function", "name" : "fixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "fixedArrBytes", "constant" : true, "inputs" : [ { "name" : "str", "type" : "bytes" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "mixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type": "uint256[2]" }, { "name" : "dynArr", "type": "uint256[]" } ] },
{ "type" : "function", "name" : "doubleFixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type": "uint256[2]" }, { "name" : "fixedArr2", "type": "uint256[3]" } ] },
{ "type" : "function", "name" : "multipleMixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type": "uint256[2]" }, { "name" : "dynArr", "type" : "uint256[]" }, { "name" : "fixedArr2", "type" : "uint256[3]" } ] }
]`
abi, err := JSON(strings.NewReader(definition))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Error(err)
}
@@ -555,7 +608,7 @@ func TestInputFixedArrayAndVariableInputLength(t *testing.T) {
strvalue = common.RightPadBytes([]byte(strin), 32)
fixedarrin1value1 = common.LeftPadBytes(fixedarrin1[0].Bytes(), 32)
fixedarrin1value2 = common.LeftPadBytes(fixedarrin1[1].Bytes(), 32)
dynarroffset = U256(big.NewInt(int64(256 + ((len(strin)/32)+1)*32)))
dynarroffset = math.U256Bytes(big.NewInt(int64(256 + ((len(strin)/32)+1)*32)))
dynarrlength = make([]byte, 32)
dynarrlength[31] = byte(len(dynarrin))
dynarrinvalue1 = common.LeftPadBytes(dynarrin[0].Bytes(), 32)
@@ -582,7 +635,7 @@ func TestInputFixedArrayAndVariableInputLength(t *testing.T) {
}
func TestDefaultFunctionParsing(t *testing.T) {
const definition = `[{ "name" : "balance" }]`
const definition = `[{ "name" : "balance", "type" : "function" }]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
@@ -602,9 +655,7 @@ func TestBareEvents(t *testing.T) {
{ "type" : "event", "name" : "tuple", "inputs" : [{ "indexed":false, "name":"t", "type":"tuple", "components":[{"name":"a", "type":"uint256"}] }, { "indexed":true, "name":"arg1", "type":"address" }] }
]`
arg0, _ := NewType("uint256", nil)
arg1, _ := NewType("address", nil)
tuple, _ := NewType("tuple", []ArgumentMarshaling{{Name: "a", Type: "uint256"}})
tuple, _ := NewType("tuple", "", []ArgumentMarshaling{{Name: "a", Type: "uint256"}})
expectedEvents := map[string]struct {
Anonymous bool
@@ -613,12 +664,12 @@ func TestBareEvents(t *testing.T) {
"balance": {false, nil},
"anon": {true, nil},
"args": {false, []Argument{
{Name: "arg0", Type: arg0, Indexed: false},
{Name: "arg1", Type: arg1, Indexed: true},
{Name: "arg0", Type: Uint256, Indexed: false},
{Name: "arg1", Type: Address, Indexed: true},
}},
"tuple": {false, []Argument{
{Name: "t", Type: tuple, Indexed: false},
{Name: "arg1", Type: arg1, Indexed: true},
{Name: "arg1", Type: Address, Indexed: true},
}},
}
@@ -891,45 +942,25 @@ func TestUnpackIntoMapNamingConflict(t *testing.T) {
}
func TestABI_MethodById(t *testing.T) {
const abiJSON = `[
{"type":"function","name":"receive","constant":false,"inputs":[{"name":"memo","type":"bytes"}],"outputs":[],"payable":true,"stateMutability":"payable"},
{"type":"event","name":"received","anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}]},
{"type":"function","name":"fixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr","type":"uint256[2]"}]},
{"type":"function","name":"fixedArrBytes","constant":true,"inputs":[{"name":"str","type":"bytes"},{"name":"fixedArr","type":"uint256[2]"}]},
{"type":"function","name":"mixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr","type":"uint256[2]"},{"name":"dynArr","type":"uint256[]"}]},
{"type":"function","name":"doubleFixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr1","type":"uint256[2]"},{"name":"fixedArr2","type":"uint256[3]"}]},
{"type":"function","name":"multipleMixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr1","type":"uint256[2]"},{"name":"dynArr","type":"uint256[]"},{"name":"fixedArr2","type":"uint256[3]"}]},
{"type":"function","name":"balance","constant":true},
{"type":"function","name":"send","constant":false,"inputs":[{"name":"amount","type":"uint256"}]},
{"type":"function","name":"test","constant":false,"inputs":[{"name":"number","type":"uint32"}]},
{"type":"function","name":"string","constant":false,"inputs":[{"name":"inputs","type":"string"}]},
{"type":"function","name":"bool","constant":false,"inputs":[{"name":"inputs","type":"bool"}]},
{"type":"function","name":"address","constant":false,"inputs":[{"name":"inputs","type":"address"}]},
{"type":"function","name":"uint64[2]","constant":false,"inputs":[{"name":"inputs","type":"uint64[2]"}]},
{"type":"function","name":"uint64[]","constant":false,"inputs":[{"name":"inputs","type":"uint64[]"}]},
{"type":"function","name":"foo","constant":false,"inputs":[{"name":"inputs","type":"uint32"}]},
{"type":"function","name":"bar","constant":false,"inputs":[{"name":"inputs","type":"uint32"},{"name":"string","type":"uint16"}]},
{"type":"function","name":"_slice","constant":false,"inputs":[{"name":"inputs","type":"uint32[2]"}]},
{"type":"function","name":"__slice256","constant":false,"inputs":[{"name":"inputs","type":"uint256[2]"}]},
{"type":"function","name":"sliceAddress","constant":false,"inputs":[{"name":"inputs","type":"address[]"}]},
{"type":"function","name":"sliceMultiAddress","constant":false,"inputs":[{"name":"a","type":"address[]"},{"name":"b","type":"address[]"}]}
]
`
abi, err := JSON(strings.NewReader(abiJSON))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
for name, m := range abi.Methods {
a := fmt.Sprintf("%v", m)
m2, err := abi.MethodById(m.ID())
m2, err := abi.MethodById(m.ID)
if err != nil {
t.Fatalf("Failed to look up ABI method: %v", err)
}
b := fmt.Sprintf("%v", m2)
if a != b {
t.Errorf("Method %v (id %v) not 'findable' by id in ABI", name, common.ToHex(m.ID()))
t.Errorf("Method %v (id %x) not 'findable' by id in ABI", name, m.ID)
}
}
// test unsuccessful lookups
if _, err = abi.MethodById(crypto.Keccak256()); err == nil {
t.Error("Expected error: no method with this id")
}
// Also test empty
if _, err := abi.MethodById([]byte{0x00}); err == nil {
t.Errorf("Expected error, too short to decode data")
@@ -995,8 +1026,8 @@ func TestABI_EventById(t *testing.T) {
t.Errorf("We should find a event for topic %s, test #%d", topicID.Hex(), testnum)
}
if event.ID() != topicID {
t.Errorf("Event id %s does not match topic %s, test #%d", event.ID().Hex(), topicID.Hex(), testnum)
if event.ID != topicID {
t.Errorf("Event id %s does not match topic %s, test #%d", event.ID.Hex(), topicID.Hex(), testnum)
}
unknowntopicID := crypto.Keccak256Hash([]byte("unknownEvent"))
@@ -1010,26 +1041,6 @@ func TestABI_EventById(t *testing.T) {
}
}
func TestDuplicateMethodNames(t *testing.T) {
abiJSON := `[{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"},{"name":"customFallback","type":"string"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
if _, ok := contractAbi.Methods["transfer"]; !ok {
t.Fatalf("Could not find original method")
}
if _, ok := contractAbi.Methods["transfer0"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer1"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer2"]; ok {
t.Fatalf("Should not have found extra method")
}
}
// TestDoubleDuplicateMethodNames checks that if transfer0 already exists, there won't be a name
// conflict and that the second transfer method will be renamed transfer1.
func TestDoubleDuplicateMethodNames(t *testing.T) {
@@ -1051,3 +1062,87 @@ func TestDoubleDuplicateMethodNames(t *testing.T) {
t.Fatalf("Should not have found extra method")
}
}
// TestDoubleDuplicateEventNames checks that if send0 already exists, there won't be a name
// conflict and that the second send event will be renamed send1.
// The test runs the abi of the following contract.
// contract DuplicateEvent {
// event send(uint256 a);
// event send0();
// event send();
// }
func TestDoubleDuplicateEventNames(t *testing.T) {
abiJSON := `[{"anonymous": false,"inputs": [{"indexed": false,"internalType": "uint256","name": "a","type": "uint256"}],"name": "send","type": "event"},{"anonymous": false,"inputs": [],"name": "send0","type": "event"},{ "anonymous": false, "inputs": [],"name": "send","type": "event"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
if _, ok := contractAbi.Events["send"]; !ok {
t.Fatalf("Could not find original event")
}
if _, ok := contractAbi.Events["send0"]; !ok {
t.Fatalf("Could not find duplicate event")
}
if _, ok := contractAbi.Events["send1"]; !ok {
t.Fatalf("Could not find duplicate event")
}
if _, ok := contractAbi.Events["send2"]; ok {
t.Fatalf("Should not have found extra event")
}
}
// TestUnnamedEventParam checks that an event with unnamed parameters is
// correctly handled
// The test runs the abi of the following contract.
// contract TestEvent {
// event send(uint256, uint256);
// }
func TestUnnamedEventParam(t *testing.T) {
abiJSON := `[{ "anonymous": false, "inputs": [{ "indexed": false,"internalType": "uint256", "name": "","type": "uint256"},{"indexed": false,"internalType": "uint256","name": "","type": "uint256"}],"name": "send","type": "event"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
event, ok := contractAbi.Events["send"]
if !ok {
t.Fatalf("Could not find event")
}
if event.Inputs[0].Name != "arg0" {
t.Fatalf("Could not find input")
}
if event.Inputs[1].Name != "arg1" {
t.Fatalf("Could not find input")
}
}
func TestUnpackRevert(t *testing.T) {
t.Parallel()
var cases = []struct {
input string
expect string
expectErr error
}{
{"", "", errors.New("invalid data for unpacking")},
{"08c379a1", "", errors.New("invalid data for unpacking")},
{"08c379a00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000d72657665727420726561736f6e00000000000000000000000000000000000000", "revert reason", nil},
}
for index, c := range cases {
t.Run(fmt.Sprintf("case %d", index), func(t *testing.T) {
got, err := UnpackRevert(common.Hex2Bytes(c.input))
if c.expectErr != nil {
if err == nil {
t.Fatalf("Expected non-nil error")
}
if err.Error() != c.expectErr.Error() {
t.Fatalf("Expected error mismatch, want %v, got %v", c.expectErr, err)
}
return
}
if c.expect != got {
t.Fatalf("Output mismatch, want %v, got %v", c.expect, got)
}
})
}
}

View File

@@ -34,10 +34,11 @@ type Argument struct {
type Arguments []Argument
type ArgumentMarshaling struct {
Name string
Type string
Components []ArgumentMarshaling
Indexed bool
Name string
Type string
InternalType string
Components []ArgumentMarshaling
Indexed bool
}
// UnmarshalJSON implements json.Unmarshaler interface
@@ -48,7 +49,7 @@ func (argument *Argument) UnmarshalJSON(data []byte) error {
return fmt.Errorf("argument json err: %v", err)
}
argument.Type, err = NewType(arg.Type, arg.Components)
argument.Type, err = NewType(arg.Type, arg.InternalType, arg.Components)
if err != nil {
return err
}
@@ -58,18 +59,6 @@ func (argument *Argument) UnmarshalJSON(data []byte) error {
return nil
}
// LengthNonIndexed returns the number of arguments when not counting 'indexed' ones. Only events
// can ever have 'indexed' arguments, it should always be false on arguments for method input/output
func (arguments Arguments) LengthNonIndexed() int {
out := 0
for _, arg := range arguments {
if !arg.Indexed {
out++
}
}
return out
}
// NonIndexed returns the arguments with indexed arguments filtered out
func (arguments Arguments) NonIndexed() Arguments {
var ret []Argument
@@ -88,6 +77,12 @@ func (arguments Arguments) isTuple() bool {
// Unpack performs the operation hexdata -> Go format
func (arguments Arguments) Unpack(v interface{}, data []byte) error {
if len(data) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
}
return nil // Nothing to unmarshal, return
}
// make sure the passed value is arguments pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
@@ -96,6 +91,9 @@ func (arguments Arguments) Unpack(v interface{}, data []byte) error {
if err != nil {
return err
}
if len(marshalledValues) == 0 {
return fmt.Errorf("abi: Unpack(no-values unmarshalled %T)", v)
}
if arguments.isTuple() {
return arguments.unpackTuple(v, marshalledValues)
}
@@ -104,90 +102,20 @@ func (arguments Arguments) Unpack(v interface{}, data []byte) error {
// UnpackIntoMap performs the operation hexdata -> mapping of argument name to argument value
func (arguments Arguments) UnpackIntoMap(v map[string]interface{}, data []byte) error {
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
return arguments.unpackIntoMap(v, marshalledValues)
}
// unpack sets the unmarshalled value to go format.
// Note the dst here must be settable.
func unpack(t *Type, dst interface{}, src interface{}) error {
var (
dstVal = reflect.ValueOf(dst).Elem()
srcVal = reflect.ValueOf(src)
)
tuple, typ := false, t
for {
if typ.T == SliceTy || typ.T == ArrayTy {
typ = typ.Elem
continue
}
tuple = typ.T == TupleTy
break
}
if !tuple {
return set(dstVal, srcVal)
}
// Dereferences interface or pointer wrapper
dstVal = indirectInterfaceOrPtr(dstVal)
switch t.T {
case TupleTy:
if dstVal.Kind() != reflect.Struct {
return fmt.Errorf("abi: invalid dst value for unpack, want struct, got %s", dstVal.Kind())
}
fieldmap, err := mapArgNamesToStructFields(t.TupleRawNames, dstVal)
if err != nil {
return err
}
for i, elem := range t.TupleElems {
fname := fieldmap[t.TupleRawNames[i]]
field := dstVal.FieldByName(fname)
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't found in the given value", t.TupleRawNames[i])
}
if err := unpack(elem, field.Addr().Interface(), srcVal.Field(i).Interface()); err != nil {
return err
}
}
return nil
case SliceTy:
if dstVal.Kind() != reflect.Slice {
return fmt.Errorf("abi: invalid dst value for unpack, want slice, got %s", dstVal.Kind())
}
slice := reflect.MakeSlice(dstVal.Type(), srcVal.Len(), srcVal.Len())
for i := 0; i < slice.Len(); i++ {
if err := unpack(t.Elem, slice.Index(i).Addr().Interface(), srcVal.Index(i).Interface()); err != nil {
return err
}
}
dstVal.Set(slice)
case ArrayTy:
if dstVal.Kind() != reflect.Array {
return fmt.Errorf("abi: invalid dst value for unpack, want array, got %s", dstVal.Kind())
}
array := reflect.New(dstVal.Type()).Elem()
for i := 0; i < array.Len(); i++ {
if err := unpack(t.Elem, array.Index(i).Addr().Interface(), srcVal.Index(i).Interface()); err != nil {
return err
}
}
dstVal.Set(array)
}
return nil
}
// unpackIntoMap unpacks marshalledValues into the provided map[string]interface{}
func (arguments Arguments) unpackIntoMap(v map[string]interface{}, marshalledValues []interface{}) error {
// Make sure map is not nil
if v == nil {
return fmt.Errorf("abi: cannot unpack into a nil map")
}
if len(data) == 0 {
if len(arguments) != 0 {
return fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
}
return nil // Nothing to unmarshal, return
}
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
for i, arg := range arguments.NonIndexed() {
v[arg.Name] = marshalledValues[i]
}
@@ -196,88 +124,63 @@ func (arguments Arguments) unpackIntoMap(v map[string]interface{}, marshalledVal
// unpackAtomic unpacks ( hexdata -> go ) a single value
func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues interface{}) error {
if arguments.LengthNonIndexed() == 0 {
return nil
}
argument := arguments.NonIndexed()[0]
elem := reflect.ValueOf(v).Elem()
dst := reflect.ValueOf(v).Elem()
src := reflect.ValueOf(marshalledValues)
if elem.Kind() == reflect.Struct && argument.Type.T != TupleTy {
fieldmap, err := mapArgNamesToStructFields([]string{argument.Name}, elem)
if err != nil {
return err
}
field := elem.FieldByName(fieldmap[argument.Name])
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't be found in the given value", argument.Name)
}
return unpack(&argument.Type, field.Addr().Interface(), marshalledValues)
if dst.Kind() == reflect.Struct && src.Kind() != reflect.Struct {
return set(dst.Field(0), src)
}
return unpack(&argument.Type, elem.Addr().Interface(), marshalledValues)
return set(dst, src)
}
// unpackTuple unpacks ( hexdata -> go ) a batch of values.
func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interface{}) error {
var (
value = reflect.ValueOf(v).Elem()
typ = value.Type()
kind = value.Kind()
)
if err := requireUnpackKind(value, typ, kind, arguments); err != nil {
return err
}
value := reflect.ValueOf(v).Elem()
nonIndexedArgs := arguments.NonIndexed()
// If the interface is a struct, get of abi->struct_field mapping
var abi2struct map[string]string
if kind == reflect.Struct {
var (
argNames []string
err error
)
for _, arg := range arguments.NonIndexed() {
argNames = append(argNames, arg.Name)
switch value.Kind() {
case reflect.Struct:
argNames := make([]string, len(nonIndexedArgs))
for i, arg := range nonIndexedArgs {
argNames[i] = arg.Name
}
abi2struct, err = mapArgNamesToStructFields(argNames, value)
var err error
abi2struct, err := mapArgNamesToStructFields(argNames, value)
if err != nil {
return err
}
}
for i, arg := range arguments.NonIndexed() {
switch kind {
case reflect.Struct:
for i, arg := range nonIndexedArgs {
field := value.FieldByName(abi2struct[arg.Name])
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't be found in the given value", arg.Name)
}
if err := unpack(&arg.Type, field.Addr().Interface(), marshalledValues[i]); err != nil {
if err := set(field, reflect.ValueOf(marshalledValues[i])); err != nil {
return err
}
case reflect.Slice, reflect.Array:
if value.Len() < i {
return fmt.Errorf("abi: insufficient number of arguments for unpack, want %d, got %d", len(arguments), value.Len())
}
v := value.Index(i)
if err := requireAssignable(v, reflect.ValueOf(marshalledValues[i])); err != nil {
return err
}
if err := unpack(&arg.Type, v.Addr().Interface(), marshalledValues[i]); err != nil {
return err
}
default:
return fmt.Errorf("abi:[2] cannot unmarshal tuple in to %v", typ)
}
case reflect.Slice, reflect.Array:
if value.Len() < len(marshalledValues) {
return fmt.Errorf("abi: insufficient number of arguments for unpack, want %d, got %d", len(arguments), value.Len())
}
for i := range nonIndexedArgs {
if err := set(value.Index(i), reflect.ValueOf(marshalledValues[i])); err != nil {
return err
}
}
default:
return fmt.Errorf("abi:[2] cannot unmarshal tuple in to %v", value.Type())
}
return nil
}
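From a caller's perspective, the simplified tuple path still unpacks multi-value return data into struct fields by name; a hedged sketch (the ABI definition is hypothetical and the return data is hand-encoded per the ABI spec):

```go
package main

import (
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// Hypothetical method returning two uint256 values named "a" and "b".
	const def = `[{"type":"function","name":"getPair","outputs":[{"name":"a","type":"uint256"},{"name":"b","type":"uint256"}]}]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	// Two 32-byte big-endian words holding the values 1 and 2.
	data := common.Hex2Bytes(
		"0000000000000000000000000000000000000000000000000000000000000001" +
			"0000000000000000000000000000000000000000000000000000000000000002")

	var out struct {
		A *big.Int
		B *big.Int
	}
	if err := parsed.Unpack(&out, "getPair", data); err != nil {
		panic(err)
	}
	fmt.Println(out.A, out.B) // 1 2
}
```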
// UnpackValues can be used to unpack ABI-encoded hexdata according to the ABI-specification,
// without supplying a struct to unpack into. Instead, this method returns a list containing the
// values. An atomic argument will be a list with one element.
func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
retval := make([]interface{}, 0, arguments.LengthNonIndexed())
nonIndexedArgs := arguments.NonIndexed()
retval := make([]interface{}, 0, len(nonIndexedArgs))
virtualArgs := 0
for index, arg := range arguments.NonIndexed() {
for index, arg := range nonIndexedArgs {
marshalledValue, err := toGoType((index+virtualArgs)*32, arg.Type, data)
if arg.Type.T == ArrayTy && !isDynamicType(arg.Type) {
// If we have a static array, like [3]uint256, these are coded as
@@ -315,7 +218,7 @@ func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
// Make sure arguments match up and pack them
abiArgs := arguments
if len(args) != len(abiArgs) {
return nil, fmt.Errorf("argument count mismatch: %d for %d", len(args), len(abiArgs))
return nil, fmt.Errorf("argument count mismatch: got %d for %d", len(args), len(abiArgs))
}
// variable input is the output appended at the end of packed
// output. This is used for strings and bytes types input.

View File

@@ -25,8 +25,10 @@ import (
"time"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core"
@@ -38,6 +40,7 @@ import (
"github.com/ethereum/go-ethereum/eth/filters"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
)
@@ -46,19 +49,23 @@ import (
var _ bind.ContractBackend = (*SimulatedBackend)(nil)
var (
errBlockNumberUnsupported = errors.New("simulatedBackend cannot access blocks other than the latest block")
errGasEstimationFailed = errors.New("gas required exceeds allowance or always failing transaction")
errBlockNumberUnsupported = errors.New("simulatedBackend cannot access blocks other than the latest block")
errBlockDoesNotExist = errors.New("block does not exist in blockchain")
errTransactionDoesNotExist = errors.New("transaction does not exist")
)
// SimulatedBackend implements bind.ContractBackend, simulating a blockchain in
// the background. Its main purpose is to allow easily testing contract bindings.
// Simulated backend implements the following interfaces:
// ChainReader, ChainStateReader, ContractBackend, ContractCaller, ContractFilterer, ContractTransactor,
// DeployBackend, GasEstimator, GasPricer, LogFilterer, PendingContractCaller, TransactionReader, and TransactionSender
type SimulatedBackend struct {
database ethdb.Database // In memory database to store our testing data
blockchain *core.BlockChain // Ethereum blockchain to handle the consensus
mu sync.Mutex
pendingBlock *types.Block // Currently pending block that will be imported on request
pendingState *state.StateDB // Currently pending state that will be the active on on request
pendingState *state.StateDB // Currently pending state that will be the active on request
events *filters.EventSystem // Event system for filtering log events live
@@ -70,13 +77,13 @@ type SimulatedBackend struct {
func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
genesis := core.Genesis{Config: params.AllEthashProtocolChanges, GasLimit: gasLimit, Alloc: alloc}
genesis.MustCommit(database)
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{}, nil)
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{}, nil, nil)
backend := &SimulatedBackend{
database: database,
blockchain: blockchain,
config: genesis.Config,
events: filters.NewEventSystem(new(event.TypeMux), &filterBackend{database, blockchain}, false),
events: filters.NewEventSystem(&filterBackend{database, blockchain}, false),
}
backend.rollback()
return backend
@@ -119,7 +126,19 @@ func (b *SimulatedBackend) rollback() {
statedb, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database())
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
}
// stateByBlockNumber retrieves a state by a given blocknumber.
func (b *SimulatedBackend) stateByBlockNumber(ctx context.Context, blockNumber *big.Int) (*state.StateDB, error) {
if blockNumber == nil || blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) == 0 {
return b.blockchain.State()
}
block, err := b.blockByNumberNoLock(ctx, blockNumber)
if err != nil {
return nil, err
}
return b.blockchain.StateAt(block.Root())
}
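A hedged usage sketch of the historical lookups this enables on a `SimulatedBackend` (the account, balance and gas limit values are arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
)

func main() {
	addr := common.HexToAddress("0x0100000000000000000000000000000000000000")
	backend := backends.NewSimulatedBackend(
		core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000000000000)}}, // 1 ether, arbitrary
		8000000, // block gas limit, arbitrary
	)
	defer backend.Close()

	// Mine an empty block so the chain head moves past the genesis block.
	backend.Commit()

	// With stateByBlockNumber in place, balances can be queried at an older
	// block (here: genesis, block 0) instead of only at the chain head.
	balance, err := backend.BalanceAt(context.Background(), addr, big.NewInt(0))
	if err != nil {
		panic(err)
	}
	fmt.Println(balance) // 1000000000000000000
}
```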
// CodeAt returns the code associated with a certain account in the blockchain.
@@ -127,10 +146,11 @@ func (b *SimulatedBackend) CodeAt(ctx context.Context, contract common.Address,
b.mu.Lock()
defer b.mu.Unlock()
if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) != 0 {
return nil, errBlockNumberUnsupported
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
statedb, _ := b.blockchain.State()
return statedb.GetCode(contract), nil
}
@@ -139,10 +159,11 @@ func (b *SimulatedBackend) BalanceAt(ctx context.Context, contract common.Addres
b.mu.Lock()
defer b.mu.Unlock()
if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) != 0 {
return nil, errBlockNumberUnsupported
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
statedb, _ := b.blockchain.State()
return statedb.GetBalance(contract), nil
}
@@ -151,10 +172,11 @@ func (b *SimulatedBackend) NonceAt(ctx context.Context, contract common.Address,
b.mu.Lock()
defer b.mu.Unlock()
if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) != 0 {
return 0, errBlockNumberUnsupported
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return 0, err
}
statedb, _ := b.blockchain.State()
return statedb.GetNonce(contract), nil
}
@@ -163,16 +185,20 @@ func (b *SimulatedBackend) StorageAt(ctx context.Context, contract common.Addres
b.mu.Lock()
defer b.mu.Unlock()
if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) != 0 {
return nil, errBlockNumberUnsupported
statedb, err := b.stateByBlockNumber(ctx, blockNumber)
if err != nil {
return nil, err
}
statedb, _ := b.blockchain.State()
val := statedb.GetState(contract, key)
return val[:], nil
}
// TransactionReceipt returns the receipt of a transaction.
func (b *SimulatedBackend) TransactionReceipt(ctx context.Context, txHash common.Hash) (*types.Receipt, error) {
b.mu.Lock()
defer b.mu.Unlock()
receipt, _, _, _ := rawdb.ReadReceipt(b.database, txHash, b.config)
return receipt, nil
}
@@ -196,6 +222,121 @@ func (b *SimulatedBackend) TransactionByHash(ctx context.Context, txHash common.
return nil, false, ethereum.NotFound
}
// BlockByHash retrieves a block based on the block hash
func (b *SimulatedBackend) BlockByHash(ctx context.Context, hash common.Hash) (*types.Block, error) {
b.mu.Lock()
defer b.mu.Unlock()
if hash == b.pendingBlock.Hash() {
return b.pendingBlock, nil
}
block := b.blockchain.GetBlockByHash(hash)
if block != nil {
return block, nil
}
return nil, errBlockDoesNotExist
}
// BlockByNumber retrieves a block from the database by number, caching it
// (associated with its hash) if found.
func (b *SimulatedBackend) BlockByNumber(ctx context.Context, number *big.Int) (*types.Block, error) {
b.mu.Lock()
defer b.mu.Unlock()
return b.blockByNumberNoLock(ctx, number)
}
// blockByNumberNoLock retrieves a block from the database by number, caching it
// (associated with its hash) if found, without acquiring the lock.
func (b *SimulatedBackend) blockByNumberNoLock(ctx context.Context, number *big.Int) (*types.Block, error) {
if number == nil || number.Cmp(b.pendingBlock.Number()) == 0 {
return b.blockchain.CurrentBlock(), nil
}
block := b.blockchain.GetBlockByNumber(uint64(number.Int64()))
if block == nil {
return nil, errBlockDoesNotExist
}
return block, nil
}
// HeaderByHash returns a block header from the current canonical chain.
func (b *SimulatedBackend) HeaderByHash(ctx context.Context, hash common.Hash) (*types.Header, error) {
b.mu.Lock()
defer b.mu.Unlock()
if hash == b.pendingBlock.Hash() {
return b.pendingBlock.Header(), nil
}
header := b.blockchain.GetHeaderByHash(hash)
if header == nil {
return nil, errBlockDoesNotExist
}
return header, nil
}
// HeaderByNumber returns a block header from the current canonical chain. If number is
// nil, the latest known header is returned.
func (b *SimulatedBackend) HeaderByNumber(ctx context.Context, block *big.Int) (*types.Header, error) {
b.mu.Lock()
defer b.mu.Unlock()
if block == nil || block.Cmp(b.pendingBlock.Number()) == 0 {
return b.blockchain.CurrentHeader(), nil
}
return b.blockchain.GetHeaderByNumber(uint64(block.Int64())), nil
}
// TransactionCount returns the number of transactions in a given block
func (b *SimulatedBackend) TransactionCount(ctx context.Context, blockHash common.Hash) (uint, error) {
b.mu.Lock()
defer b.mu.Unlock()
if blockHash == b.pendingBlock.Hash() {
return uint(b.pendingBlock.Transactions().Len()), nil
}
block := b.blockchain.GetBlockByHash(blockHash)
if block == nil {
return uint(0), errBlockDoesNotExist
}
return uint(block.Transactions().Len()), nil
}
// TransactionInBlock returns the transaction for a specific block at a specific index
func (b *SimulatedBackend) TransactionInBlock(ctx context.Context, blockHash common.Hash, index uint) (*types.Transaction, error) {
b.mu.Lock()
defer b.mu.Unlock()
if blockHash == b.pendingBlock.Hash() {
transactions := b.pendingBlock.Transactions()
if uint(len(transactions)) < index+1 {
return nil, errTransactionDoesNotExist
}
return transactions[index], nil
}
block := b.blockchain.GetBlockByHash(blockHash)
if block == nil {
return nil, errBlockDoesNotExist
}
transactions := block.Transactions()
if uint(len(transactions)) < index+1 {
return nil, errTransactionDoesNotExist
}
return transactions[index], nil
}
// PendingCodeAt returns the code associated with an account in the pending state.
func (b *SimulatedBackend) PendingCodeAt(ctx context.Context, contract common.Address) ([]byte, error) {
b.mu.Lock()
@@ -204,6 +345,36 @@ func (b *SimulatedBackend) PendingCodeAt(ctx context.Context, contract common.Ad
return b.pendingState.GetCode(contract), nil
}
func newRevertError(result *core.ExecutionResult) *revertError {
reason, errUnpack := abi.UnpackRevert(result.Revert())
err := errors.New("execution reverted")
if errUnpack == nil {
err = fmt.Errorf("execution reverted: %v", reason)
}
return &revertError{
error: err,
reason: hexutil.Encode(result.Revert()),
}
}
// revertError is an API error that encompasses an EVM revert with JSON error
// code and a binary data blob.
type revertError struct {
error
reason string // revert reason hex encoded
}
// ErrorCode returns the JSON error code for a revert.
// See: https://github.com/ethereum/wiki/wiki/JSON-RPC-Error-Codes-Improvement-Proposal
func (e *revertError) ErrorCode() int {
return 3
}
// ErrorData returns the hex encoded revert reason.
func (e *revertError) ErrorData() interface{} {
return e.reason
}
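
A hedged fragment showing how a caller of CallContract might surface this payload: revertError itself is unexported, so the assertion goes through a small local interface; sim, the context and the call message are assumed to be set up elsewhere.

// dataError mirrors the methods revertError exposes, keeping the caller
// decoupled from the unexported type.
type dataError interface {
	Error() string
	ErrorData() interface{}
}

func callAndReport(ctx context.Context, sim *backends.SimulatedBackend, msg ethereum.CallMsg) ([]byte, error) {
	out, err := sim.CallContract(ctx, msg, nil)
	if err != nil {
		if de, ok := err.(dataError); ok {
			fmt.Println("call reverted, ABI-encoded reason:", de.ErrorData())
		}
		return nil, err
	}
	return out, nil
}
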
// CallContract executes a contract call.
func (b *SimulatedBackend) CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) {
b.mu.Lock()
@@ -216,8 +387,15 @@ func (b *SimulatedBackend) CallContract(ctx context.Context, call ethereum.CallM
if err != nil {
return nil, err
}
rval, _, _, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), state)
return rval, err
res, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), state)
if err != nil {
return nil, err
}
// If the result contains a revert reason, try to unpack and return it.
if len(res.Revert()) > 0 {
return nil, newRevertError(res)
}
return res.Return(), res.Err
}
// PendingCallContract executes a contract call on the pending state.
@@ -226,8 +404,15 @@ func (b *SimulatedBackend) PendingCallContract(ctx context.Context, call ethereu
defer b.mu.Unlock()
defer b.pendingState.RevertToSnapshot(b.pendingState.Snapshot())
rval, _, _, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
return rval, err
res, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
if err != nil {
return nil, err
}
// If the result contains a revert reason, try to unpack and return it.
if len(res.Revert()) > 0 {
return nil, newRevertError(res)
}
return res.Return(), res.Err
}
// PendingNonceAt implements PendingStateReader.PendingNonceAt, retrieving
@@ -262,25 +447,57 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
} else {
hi = b.pendingBlock.GasLimit()
}
// Re-cap the highest gas allowance with the account's available balance.
if call.GasPrice != nil && call.GasPrice.BitLen() != 0 {
balance := b.pendingState.GetBalance(call.From) // from can't be nil
available := new(big.Int).Set(balance)
if call.Value != nil {
if call.Value.Cmp(available) >= 0 {
return 0, errors.New("insufficient funds for transfer")
}
available.Sub(available, call.Value)
}
allowance := new(big.Int).Div(available, call.GasPrice)
if allowance.IsUint64() && hi > allowance.Uint64() {
transfer := call.Value
if transfer == nil {
transfer = new(big.Int)
}
log.Warn("Gas estimation capped by limited funds", "original", hi, "balance", balance,
"sent", transfer, "gasprice", call.GasPrice, "fundable", allowance)
hi = allowance.Uint64()
}
}
cap = hi
// Create a helper to check if a gas allowance results in an executable transaction
executable := func(gas uint64) bool {
executable := func(gas uint64) (bool, *core.ExecutionResult, error) {
call.Gas = gas
snapshot := b.pendingState.Snapshot()
_, _, failed, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
res, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
b.pendingState.RevertToSnapshot(snapshot)
if err != nil || failed {
return false
if err != nil {
if err == core.ErrIntrinsicGas {
return true, nil, nil // Special case, raise gas limit
}
return true, nil, err // Bail out
}
return true
return res.Failed(), res, nil
}
// Execute the binary search and hone in on an executable gas limit
for lo+1 < hi {
mid := (hi + lo) / 2
if !executable(mid) {
failed, _, err := executable(mid)
// If the error is not nil (consensus error), it means the provided message
// call or transaction will never be accepted no matter how much gas is
// assigned. Return the error directly, don't struggle any more.
if err != nil {
return 0, err
}
if failed {
lo = mid
} else {
hi = mid
@@ -288,8 +505,19 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
}
// Reject the transaction as invalid if it still fails at the highest allowance
if hi == cap {
if !executable(hi) {
return 0, errGasEstimationFailed
failed, result, err := executable(hi)
if err != nil {
return 0, err
}
if failed {
if result != nil && result.Err != vm.ErrOutOfGas {
if len(result.Revert()) > 0 {
return 0, newRevertError(result)
}
return 0, result.Err
}
// Otherwise, the specified gas cap is too low
return 0, fmt.Errorf("gas required exceeds allowance (%d)", cap)
}
}
return hi, nil
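
A small worked example of the balance cap introduced above, with made-up numbers: the highest gas value the binary search may probe is (balance - value) / gasPrice.

balance := big.NewInt(1e18)         // account holds 1 ether
value := big.NewInt(2e17)           // 0.2 ether sent along with the call
gasPrice := big.NewInt(20000000000) // 20 gwei
allowance := new(big.Int).Div(new(big.Int).Sub(balance, value), gasPrice)
fmt.Println(allowance) // 40000000, so hi is lowered to this if the block gas limit was higher
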
@@ -297,7 +525,7 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
// callContract implements common code between normal and pending contract calls.
// state is modified during execution, make sure to copy it if necessary.
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, statedb *state.StateDB) ([]byte, uint64, bool, error) {
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, statedb *state.StateDB) (*core.ExecutionResult, error) {
// Ensure message is initialized properly.
if call.GasPrice == nil {
call.GasPrice = big.NewInt(1)
@@ -347,7 +575,7 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
statedb, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database())
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
return nil
}
@@ -419,10 +647,38 @@ func (b *SimulatedBackend) SubscribeFilterLogs(ctx context.Context, query ethere
}), nil
}
// SubscribeNewHead returns an event subscription for a new header
func (b *SimulatedBackend) SubscribeNewHead(ctx context.Context, ch chan<- *types.Header) (ethereum.Subscription, error) {
// subscribe to a new head
sink := make(chan *types.Header)
sub := b.events.SubscribeNewHeads(sink)
return event.NewSubscription(func(quit <-chan struct{}) error {
defer sub.Unsubscribe()
for {
select {
case head := <-sink:
select {
case ch <- head:
case err := <-sub.Err():
return err
case <-quit:
return nil
}
case err := <-sub.Err():
return err
case <-quit:
return nil
}
}
}), nil
}
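
A minimal usage sketch of the subscription, assuming the simulated backend is constructed as below: Commit() seals the pending block, and the resulting header should arrive on the channel.

sim := backends.NewSimulatedBackend(core.GenesisAlloc{}, 8000000)
defer sim.Close()

heads := make(chan *types.Header, 1)
sub, err := sim.SubscribeNewHead(context.Background(), heads)
if err != nil {
	log.Fatal(err)
}
defer sub.Unsubscribe()

sim.Commit() // mine a block, which fires a chain event
fmt.Println("new head number:", (<-heads).Number)
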
// AdjustTime adds a time shift to the simulated clock.
func (b *SimulatedBackend) AdjustTime(adjustment time.Duration) error {
b.mu.Lock()
defer b.mu.Unlock()
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTx(tx)
@@ -432,7 +688,7 @@ func (b *SimulatedBackend) AdjustTime(adjustment time.Duration) error {
statedb, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database())
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database(), nil)
return nil
}
@@ -502,22 +758,34 @@ func (fb *filterBackend) GetLogs(ctx context.Context, hash common.Hash) ([][]*ty
}
func (fb *filterBackend) SubscribeNewTxsEvent(ch chan<- core.NewTxsEvent) event.Subscription {
return nullSubscription()
}
func (fb *filterBackend) SubscribeChainEvent(ch chan<- core.ChainEvent) event.Subscription {
return fb.bc.SubscribeChainEvent(ch)
}
func (fb *filterBackend) SubscribeRemovedLogsEvent(ch chan<- core.RemovedLogsEvent) event.Subscription {
return fb.bc.SubscribeRemovedLogsEvent(ch)
}
func (fb *filterBackend) SubscribeLogsEvent(ch chan<- []*types.Log) event.Subscription {
return fb.bc.SubscribeLogsEvent(ch)
}
func (fb *filterBackend) SubscribePendingLogsEvent(ch chan<- []*types.Log) event.Subscription {
return nullSubscription()
}
func (fb *filterBackend) BloomStatus() (uint64, uint64) { return 4096, 0 }
func (fb *filterBackend) ServiceFilter(ctx context.Context, ms *bloombits.MatcherSession) {
panic("not supported")
}
func nullSubscription() event.Subscription {
return event.NewSubscription(func(quit <-chan struct{}) error {
<-quit
return nil
})
}
func (fb *filterBackend) SubscribeChainEvent(ch chan<- core.ChainEvent) event.Subscription {
return fb.bc.SubscribeChainEvent(ch)
}
func (fb *filterBackend) SubscribeRemovedLogsEvent(ch chan<- core.RemovedLogsEvent) event.Subscription {
return fb.bc.SubscribeRemovedLogsEvent(ch)
}
func (fb *filterBackend) SubscribeLogsEvent(ch chan<- []*types.Log) event.Subscription {
return fb.bc.SubscribeLogsEvent(ch)
}
func (fb *filterBackend) BloomStatus() (uint64, uint64) { return 4096, 0 }
func (fb *filterBackend) ServiceFilter(ctx context.Context, ms *bloombits.MatcherSession) {
panic("not supported")
}

File diff suppressed because it is too large


@@ -49,7 +49,7 @@ type TransactOpts struct {
Nonce *big.Int // Nonce to use for the transaction execution (nil = use pending state)
Signer SignerFn // Method to use for signing the transaction (mandatory)
Value *big.Int // Funds to transfer along along the transaction (nil = 0 = no funds)
Value *big.Int // Funds to transfer along the transaction (nil = 0 = no funds)
GasPrice *big.Int // Gas price to use for the transaction execution (nil = gas price oracle)
GasLimit uint64 // Gas limit to set for the transaction execution (0 = estimate)
@@ -171,12 +171,24 @@ func (c *BoundContract) Transact(opts *TransactOpts, method string, params ...in
if err != nil {
return nil, err
}
// todo(rjl493456442) check the method is payable or not,
// reject invalid transaction at the first place
return c.transact(opts, &c.address, input)
}
// RawTransact initiates a transaction with the given raw calldata as the input.
// It's usually used to initiate a transaction for invoking the **Fallback** function.
func (c *BoundContract) RawTransact(opts *TransactOpts, calldata []byte) (*types.Transaction, error) {
// todo(rjl493456442) check the method is payable or not,
// reject invalid transaction at the first place
return c.transact(opts, &c.address, calldata)
}
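
A hedged fragment using the helper directly; boundContract and auth (a *bind.TransactOpts carrying a signer) are assumed to exist, and the calldata bytes are arbitrary.

calldata := []byte{0xde, 0xad, 0xbe, 0xef} // matches no method selector, so the fallback runs
tx, err := boundContract.RawTransact(auth, calldata)
if err != nil {
	log.Fatal(err)
}
fmt.Println("fallback invoked in tx", tx.Hash().Hex())
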
// Transfer initiates a plain transaction to move funds to the contract, calling
// its default method if one is available.
func (c *BoundContract) Transfer(opts *TransactOpts) (*types.Transaction, error) {
// todo(rjl493456442) check the payable fallback or receive is defined
// or not, reject invalid transaction at the first place
return c.transact(opts, &c.address, nil)
}
@@ -218,7 +230,7 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
}
}
// If the contract surely has code (or code is not needed), estimate the transaction
msg := ethereum.CallMsg{From: opts.From, To: contract, Value: value, Data: input}
msg := ethereum.CallMsg{From: opts.From, To: contract, GasPrice: gasPrice, Value: value, Data: input}
gasLimit, err = c.transactor.EstimateGas(ensureContext(opts.Context), msg)
if err != nil {
return nil, fmt.Errorf("failed to estimate gas needed: %v", err)
@@ -252,9 +264,9 @@ func (c *BoundContract) FilterLogs(opts *FilterOpts, name string, query ...[]int
opts = new(FilterOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].ID()}}, query...)
query = append([][]interface{}{{c.abi.Events[name].ID}}, query...)
topics, err := makeTopics(query...)
topics, err := abi.MakeTopics(query...)
if err != nil {
return nil, nil, err
}
@@ -301,9 +313,9 @@ func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]inter
opts = new(WatchOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].ID()}}, query...)
query = append([][]interface{}{{c.abi.Events[name].ID}}, query...)
topics, err := makeTopics(query...)
topics, err := abi.MakeTopics(query...)
if err != nil {
return nil, nil, err
}
@@ -337,7 +349,7 @@ func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log)
indexed = append(indexed, arg)
}
}
return parseTopics(out, indexed, log.Topics[1:])
return abi.ParseTopics(out, indexed, log.Topics[1:])
}
// UnpackLogIntoMap unpacks a retrieved log into the provided map.
@@ -353,7 +365,7 @@ func (c *BoundContract) UnpackLogIntoMap(out map[string]interface{}, event strin
indexed = append(indexed, arg)
}
}
return parseTopicsIntoMap(out, indexed, log.Topics[1:])
return abi.ParseTopicsIntoMap(out, indexed, log.Topics[1:])
}
// ensureContext is a helper method to ensure a context is not nil, even if the


@@ -17,9 +17,9 @@
package bind_test
import (
"bytes"
"context"
"math/big"
"reflect"
"strings"
"testing"
@@ -34,8 +34,10 @@ import (
)
type mockCaller struct {
codeAtBlockNumber *big.Int
callContractBlockNumber *big.Int
codeAtBlockNumber *big.Int
callContractBlockNumber *big.Int
pendingCodeAtCalled bool
pendingCallContractCalled bool
}
func (mc *mockCaller) CodeAt(ctx context.Context, contract common.Address, blockNumber *big.Int) ([]byte, error) {
@@ -47,6 +49,16 @@ func (mc *mockCaller) CallContract(ctx context.Context, call ethereum.CallMsg, b
mc.callContractBlockNumber = blockNumber
return nil, nil
}
func (mc *mockCaller) PendingCodeAt(ctx context.Context, contract common.Address) ([]byte, error) {
mc.pendingCodeAtCalled = true
return nil, nil
}
func (mc *mockCaller) PendingCallContract(ctx context.Context, call ethereum.CallMsg) ([]byte, error) {
mc.pendingCallContractCalled = true
return nil, nil
}
func TestPassingBlockNumber(t *testing.T) {
mc := &mockCaller{}
@@ -82,57 +94,39 @@ func TestPassingBlockNumber(t *testing.T) {
if mc.codeAtBlockNumber != nil {
t.Fatalf("CodeAt() was passed a block number when it should not have been")
}
bc.Call(&bind.CallOpts{BlockNumber: blockNumber, Pending: true}, &ret, "something")
if !mc.pendingCallContractCalled {
t.Fatalf("CallContract() was not passed the block number")
}
if !mc.pendingCodeAtCalled {
t.Fatalf("CodeAt() was not passed the block number")
}
}
const hexData = "0x000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158"
func TestUnpackIndexedStringTyLogIntoMap(t *testing.T) {
hash := crypto.Keccak256Hash([]byte("testName"))
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x0"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x0"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x0"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x0"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"name","type":"string"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"name": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["name"] != expectedReceivedMap["name"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedSliceTyLogIntoMap(t *testing.T) {
@@ -141,51 +135,23 @@ func TestUnpackIndexedSliceTyLogIntoMap(t *testing.T) {
t.Fatal(err)
}
hash := crypto.Keccak256Hash(sliceBytes)
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x0"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x0"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x0"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x0"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"names","type":"string[]"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"names": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["names"] != expectedReceivedMap["names"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedArrayTyLogIntoMap(t *testing.T) {
@@ -194,51 +160,23 @@ func TestUnpackIndexedArrayTyLogIntoMap(t *testing.T) {
t.Fatal(err)
}
hash := crypto.Keccak256Hash(arrBytes)
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x0"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x0"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x0"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x0"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"addresses","type":"address[2]"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"addresses": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["addresses"] != expectedReceivedMap["addresses"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedFuncTyLogIntoMap(t *testing.T) {
@@ -249,99 +187,72 @@ func TestUnpackIndexedFuncTyLogIntoMap(t *testing.T) {
functionTyBytes := append(addrBytes, functionSelector...)
var functionTy [24]byte
copy(functionTy[:], functionTyBytes[0:24])
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
common.BytesToHash(functionTyBytes),
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
topics := []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
common.BytesToHash(functionTyBytes),
}
mockLog := newMockLog(topics, common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"function","type":"function"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"function": functionTy,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
}
if receivedMap["function"] != expectedReceivedMap["function"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
}
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func TestUnpackIndexedBytesTyLogIntoMap(t *testing.T) {
byts := []byte{1, 2, 3, 4, 5}
hash := crypto.Keccak256Hash(byts)
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
bytes := []byte{1, 2, 3, 4, 5}
hash := crypto.Keccak256Hash(bytes)
topics := []common.Hash{
common.HexToHash("0x99b5620489b6ef926d4518936cfec15d305452712b88bd59da2d9c10fb0953e8"),
hash,
}
mockLog := newMockLog(topics, common.HexToHash("0x5c698f13940a2153440c6d19660878bc90219d9298fdcf37365aa8d88d40fc42"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"content","type":"bytes"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
expectedReceivedMap := map[string]interface{}{
"content": hash,
"sender": common.HexToAddress("0x376c47978271565f56DEB45495afa69E59c16Ab2"),
"amount": big.NewInt(1),
"memo": []byte{88},
}
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err != nil {
unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
}
func unpackAndCheck(t *testing.T, bc *bind.BoundContract, expected map[string]interface{}, mockLog types.Log) {
received := make(map[string]interface{})
if err := bc.UnpackLogIntoMap(received, "received", mockLog); err != nil {
t.Error(err)
}
if len(receivedMap) != 4 {
t.Fatal("unpacked map expected to have length 4")
if len(received) != len(expected) {
t.Fatalf("unpacked map length %v not equal expected length of %v", len(received), len(expected))
}
if receivedMap["content"] != expectedReceivedMap["content"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["sender"] != expectedReceivedMap["sender"] {
t.Error("unpacked map does not match expected map")
}
if receivedMap["amount"].(*big.Int).Cmp(expectedReceivedMap["amount"].(*big.Int)) != 0 {
t.Error("unpacked map does not match expected map")
}
if !bytes.Equal(receivedMap["memo"].([]byte), expectedReceivedMap["memo"].([]byte)) {
t.Error("unpacked map does not match expected map")
for name, elem := range expected {
if !reflect.DeepEqual(elem, received[name]) {
t.Errorf("field %v does not match expected, want %v, got %v", name, elem, received[name])
}
}
}
func newMockLog(topics []common.Hash, txHash common.Hash) types.Log {
return types.Log{
Address: common.HexToAddress("0x0"),
Topics: topics,
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: txHash,
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
}
}


@@ -47,13 +47,17 @@ const (
// to be used as is in client code, but rather as an intermediate struct which
// enforces compile time type safety and naming conventions, as opposed to having to
// manually maintain hard coded strings that break at runtime.
func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]string, pkg string, lang Lang, libs map[string]string) (string, error) {
// Process each individual contract requested binding
contracts := make(map[string]*tmplContract)
func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]string, pkg string, lang Lang, libs map[string]string, aliases map[string]string) (string, error) {
var (
// contracts is the map of each individual contract requested binding
contracts = make(map[string]*tmplContract)
// Map used to flag each encountered library as such
isLib := make(map[string]struct{})
// structs is the map of all redeclared structs shared by the passed contracts.
structs = make(map[string]*tmplStruct)
// isLib is the map used to flag each encountered library as such
isLib = make(map[string]struct{})
)
for i := 0; i < len(types); i++ {
// Parse the actual ABI to generate the binding for
evmABI, err := abi.JSON(strings.NewReader(abis[i]))
@@ -73,20 +77,38 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
calls = make(map[string]*tmplMethod)
transacts = make(map[string]*tmplMethod)
events = make(map[string]*tmplEvent)
structs = make(map[string]*tmplStruct)
fallback *tmplMethod
receive *tmplMethod
// identifiers are used to detect duplicated identifiers of functions
// and events. For all calls, transacts and events, abigen will generate
// corresponding bindings. However, we have to ensure there is no
// identifier collision in the bindings of these categories.
callIdentifiers = make(map[string]bool)
transactIdentifiers = make(map[string]bool)
eventIdentifiers = make(map[string]bool)
)
for _, original := range evmABI.Methods {
// Normalize the method for capital cases and non-anonymous inputs/outputs
normalized := original
normalized.Name = methodNormalizer[lang](original.Name)
normalizedName := methodNormalizer[lang](alias(aliases, original.Name))
// Ensure there is no duplicated identifier
var identifiers = callIdentifiers
if !original.IsConstant() {
identifiers = transactIdentifiers
}
if identifiers[normalizedName] {
return "", fmt.Errorf("duplicated identifier \"%s\"(normalized \"%s\"), use --alias for renaming", original.Name, normalizedName)
}
identifiers[normalizedName] = true
normalized.Name = normalizedName
normalized.Inputs = make([]abi.Argument, len(original.Inputs))
copy(normalized.Inputs, original.Inputs)
for j, input := range normalized.Inputs {
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
if _, exist := structs[input.Type.String()]; input.Type.T == abi.TupleTy && !exist {
if hasStruct(input.Type) {
bindStructType[lang](input.Type, structs)
}
}
@@ -96,12 +118,12 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
if output.Name != "" {
normalized.Outputs[j].Name = capitalise(output.Name)
}
if _, exist := structs[output.Type.String()]; output.Type.T == abi.TupleTy && !exist {
if hasStruct(output.Type) {
bindStructType[lang](output.Type, structs)
}
}
// Append the methods to the call or transact lists
if original.Const {
if original.IsConstant() {
calls[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original.Outputs)}
} else {
transacts[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original.Outputs)}
@@ -114,25 +136,35 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
}
// Normalize the event for capital cases and non-anonymous outputs
normalized := original
normalized.Name = methodNormalizer[lang](original.Name)
// Ensure there is no duplicated identifier
normalizedName := methodNormalizer[lang](alias(aliases, original.Name))
if eventIdentifiers[normalizedName] {
return "", fmt.Errorf("duplicated identifier \"%s\"(normalized \"%s\"), use --alias for renaming", original.Name, normalizedName)
}
eventIdentifiers[normalizedName] = true
normalized.Name = normalizedName
normalized.Inputs = make([]abi.Argument, len(original.Inputs))
copy(normalized.Inputs, original.Inputs)
for j, input := range normalized.Inputs {
// Indexed fields are input, non-indexed ones are outputs
if input.Indexed {
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
if _, exist := structs[input.Type.String()]; input.Type.T == abi.TupleTy && !exist {
bindStructType[lang](input.Type, structs)
}
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
if hasStruct(input.Type) {
bindStructType[lang](input.Type, structs)
}
}
// Append the event to the accumulator list
events[original.Name] = &tmplEvent{Original: original, Normalized: normalized}
}
// Add two special fallback functions if they exist
if evmABI.HasFallback() {
fallback = &tmplMethod{Original: evmABI.Fallback}
}
if evmABI.HasReceive() {
receive = &tmplMethod{Original: evmABI.Receive}
}
// There is no easy way to pass arbitrary java objects to the Go side.
if len(structs) > 0 && lang == LangJava {
return "", errors.New("java binding for tuple arguments is not supported yet")
@@ -145,9 +177,10 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
Constructor: evmABI.Constructor,
Calls: calls,
Transacts: transacts,
Fallback: fallback,
Receive: receive,
Events: events,
Libraries: make(map[string]string),
Structs: structs,
}
// Function 4-byte signatures are stored in the same sequence
// as types, if available.
@@ -179,6 +212,7 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
Package: pkg,
Contracts: contracts,
Libraries: libs,
Structs: structs,
}
buffer := new(bytes.Buffer)
@@ -186,8 +220,6 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
"bindtype": bindType[lang],
"bindtopictype": bindTopicType[lang],
"namedtype": namedType[lang],
"formatmethod": formatMethod,
"formatevent": formatEvent,
"capitalise": capitalise,
"decapitalise": decapitalise,
}
@@ -244,7 +276,7 @@ func bindBasicTypeGo(kind abi.Type) string {
func bindTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
return structs[kind.String()].Name
return structs[kind.TupleRawName+kind.String()].Name
case abi.ArrayTy:
return fmt.Sprintf("[%d]", kind.Size) + bindTypeGo(*kind.Elem, structs)
case abi.SliceTy:
@@ -321,7 +353,7 @@ func pluralizeJavaType(typ string) string {
func bindTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
return structs[kind.String()].Name
return structs[kind.TupleRawName+kind.String()].Name
case abi.ArrayTy, abi.SliceTy:
return pluralizeJavaType(bindTypeJava(*kind.Elem, structs))
default:
@@ -340,6 +372,13 @@ var bindTopicType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct)
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeGo(kind, structs)
// todo(rjl493456442) according to the solidity documentation, indexed event
// parameters that are not value types, i.e. arrays and structs, are not
// stored directly but instead a keccak256-hash of an encoding is stored.
//
// We only convert strings and bytes to hashes; we still need to deal with
// arrays (both fixed-size and dynamic-size) and structs.
if bound == "string" || bound == "[]byte" {
bound = "common.Hash"
}
@@ -350,6 +389,13 @@ func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeJava(kind, structs)
// todo(rjl493456442) according to the solidity documentation, indexed event
// parameters that are not value types, i.e. arrays and structs, are not
// stored directly but instead a keccak256-hash of an encoding is stored.
//
// We only convert strings and bytes to hashes; we still need to deal with
// arrays (both fixed-size and dynamic-size) and structs.
if bound == "String" || bound == "byte[]" {
bound = "Hash"
}
@@ -369,7 +415,14 @@ var bindStructType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct
func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
if s, exist := structs[kind.String()]; exist {
// We compose the raw struct name and the canonical parameter expression
// together here. The reason is that before solidity v0.5.11, kind.TupleRawName
// is empty, so we use the canonical parameter expression to distinguish
// different struct definitions. For backward compatibility we concatenate
// the two, so that whenever kind.TupleRawName is not empty the id is unique.
id := kind.TupleRawName + kind.String()
if s, exist := structs[id]; exist {
return s.Name
}
var fields []*tmplField
@@ -377,8 +430,11 @@ func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
field := bindStructTypeGo(*elem, structs)
fields = append(fields, &tmplField{Type: field, Name: capitalise(kind.TupleRawNames[i]), SolKind: *elem})
}
name := fmt.Sprintf("Struct%d", len(structs))
structs[kind.String()] = &tmplStruct{
name := kind.TupleRawName
if name == "" {
name = fmt.Sprintf("Struct%d", len(structs))
}
structs[id] = &tmplStruct{
Name: name,
Fields: fields,
}
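
Purely illustrative, since the exact TupleRawName depends on the compiler output: for a hypothetical Solidity struct Point with two uint256 fields, the two key components combine roughly like this.

rawName := "Point"               // kind.TupleRawName; empty before solc v0.5.11
canonical := "(uint256,uint256)" // kind.String(), the canonical parameter expression
id := rawName + canonical        // "Point(uint256,uint256)", or just "(uint256,uint256)" for old compilers
fmt.Println(id)
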
@@ -398,7 +454,14 @@ func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
if s, exist := structs[kind.String()]; exist {
// We compose the raw struct name and the canonical parameter expression
// together here. The reason is that before solidity v0.5.11, kind.TupleRawName
// is empty, so we use the canonical parameter expression to distinguish
// different struct definitions. For backward compatibility we concatenate
// the two, so that whenever kind.TupleRawName is not empty the id is unique.
id := kind.TupleRawName + kind.String()
if s, exist := structs[id]; exist {
return s.Name
}
var fields []*tmplField
@@ -406,8 +469,11 @@ func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
field := bindStructTypeJava(*elem, structs)
fields = append(fields, &tmplField{Type: field, Name: decapitalise(kind.TupleRawNames[i]), SolKind: *elem})
}
name := fmt.Sprintf("Class%d", len(structs))
structs[kind.String()] = &tmplStruct{
name := kind.TupleRawName
if name == "" {
name = fmt.Sprintf("Class%d", len(structs))
}
structs[id] = &tmplStruct{
Name: name,
Fields: fields,
}
@@ -452,6 +518,15 @@ func namedTypeJava(javaKind string, solKind abi.Type) string {
}
}
// alias returns an alias of the given string based on the aliasing rules
// or returns itself if no rule is matched.
func alias(aliases map[string]string, n string) string {
if alias, exist := aliases[n]; exist {
return alias
}
return n
}
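
A hedged fragment showing how the aliases map feeds into normalization; the method names are invented. Without an alias, send_to and sendTo would both normalize to SendTo and trip the duplicate-identifier check above.

aliases := map[string]string{"send_to": "sendRaw"}
fmt.Println(alias(aliases, "send_to")) // "sendRaw", which normalizes to SendRaw
fmt.Println(alias(aliases, "sendTo"))  // unchanged, normalizes to SendTo
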
// methodNormalizer is a name transformer that modifies Solidity method names to
// conform to target language naming conventions.
var methodNormalizer = map[Lang]func(string) string{
@@ -460,9 +535,7 @@ var methodNormalizer = map[Lang]func(string) string{
}
// capitalise makes a camel-case string which starts with an upper case character.
func capitalise(input string) string {
return abi.ToCamelCase(input)
}
var capitalise = abi.ToCamelCase
// decapitalise makes a camel-case string which starts with a lower case character.
func decapitalise(input string) string {
@@ -497,62 +570,17 @@ func structured(args abi.Arguments) bool {
return true
}
// resolveArgName converts a raw argument representation into a user friendly format.
func resolveArgName(arg abi.Argument, structs map[string]*tmplStruct) string {
var (
prefix string
embedded string
typ = &arg.Type
)
loop:
for {
switch typ.T {
case abi.SliceTy:
prefix += "[]"
case abi.ArrayTy:
prefix += fmt.Sprintf("[%d]", typ.Size)
default:
embedded = typ.String()
break loop
}
typ = typ.Elem
}
if s, exist := structs[embedded]; exist {
return prefix + s.Name
} else {
return arg.Type.String()
// hasStruct returns an indicator of whether the given type is a struct, a struct
// slice or a struct array.
func hasStruct(t abi.Type) bool {
switch t.T {
case abi.SliceTy:
return hasStruct(*t.Elem)
case abi.ArrayTy:
return hasStruct(*t.Elem)
case abi.TupleTy:
return true
default:
return false
}
}
// formatMethod transforms raw method representation into a user friendly one.
func formatMethod(method abi.Method, structs map[string]*tmplStruct) string {
inputs := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
inputs[i] = fmt.Sprintf("%v %v", resolveArgName(input, structs), input.Name)
}
outputs := make([]string, len(method.Outputs))
for i, output := range method.Outputs {
outputs[i] = resolveArgName(output, structs)
if len(output.Name) > 0 {
outputs[i] += fmt.Sprintf(" %v", output.Name)
}
}
constant := ""
if method.Const {
constant = "constant "
}
return fmt.Sprintf("function %v(%v) %sreturns(%v)", method.RawName, strings.Join(inputs, ", "), constant, strings.Join(outputs, ", "))
}
// formatEvent transforms raw event representation into a user friendly one.
func formatEvent(event abi.Event, structs map[string]*tmplStruct) string {
inputs := make([]string, len(event.Inputs))
for i, input := range event.Inputs {
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", resolveArgName(input, structs), input.Name)
} else {
inputs[i] = fmt.Sprintf("%v %v", resolveArgName(input, structs), input.Name)
}
}
return fmt.Sprintf("event %v(%v)", event.RawName, strings.Join(inputs, ", "))
}

File diff suppressed because one or more lines are too long


@@ -23,6 +23,7 @@ type tmplData struct {
Package string // Name of the package to place the generated file in
Contracts map[string]*tmplContract // List of contracts to generate into this file
Libraries map[string]string // Map the bytecode's link pattern to the library name
Structs map[string]*tmplStruct // Contract struct type definitions
}
// tmplContract contains the data needed to generate an individual contract binding.
@@ -34,10 +35,11 @@ type tmplContract struct {
Constructor abi.Method // Contract constructor for deploy parametrization
Calls map[string]*tmplMethod // Contract calls that only read state data
Transacts map[string]*tmplMethod // Contract calls that write state data
Fallback *tmplMethod // Additional special fallback function
Receive *tmplMethod // Additional special receive function
Events map[string]*tmplEvent // Contract events accessors
Libraries map[string]string // Same as tmplData, but filtered to only keep what the contract needs
Structs map[string]*tmplStruct // Contract struct type definitions
Library bool
Library bool // Indicator whether the contract is a library
}
// tmplMethod is a wrapper around an abi.Method that contains a few preprocessed
@@ -62,10 +64,10 @@ type tmplField struct {
SolKind abi.Type // Raw abi type information
}
// tmplStruct is a wrapper around an abi.tuple contains a auto-generated
// tmplStruct is a wrapper around an abi.tuple that contains an auto-generated
// struct name.
type tmplStruct struct {
Name string // Auto-generated struct name(We can't obtain the raw struct name through abi)
Name string // Auto-generated struct name(before solidity v0.5.11) or raw name.
Fields []*tmplField // Struct fields definition depends on the binding language.
}
@@ -101,15 +103,22 @@ var (
_ = big.NewInt
_ = strings.NewReader
_ = ethereum.NotFound
_ = abi.U256
_ = bind.Bind
_ = common.Big1
_ = types.BloomLookup
_ = event.NewSubscription
)
{{$structs := .Structs}}
{{range $structs}}
// {{.Name}} is an auto generated low-level Go binding around an user-defined struct.
type {{.Name}} struct {
{{range $field := .Fields}}
{{$field.Name}} {{$field.Type}}{{end}}
}
{{end}}
{{range $contract := .Contracts}}
{{$structs := $contract.Structs}}
// {{.Type}}ABI is the input ABI used to generate the binding from.
const {{.Type}}ABI = "{{.InputABI}}"
@@ -285,18 +294,10 @@ var (
return _{{$contract.Type}}.Contract.contract.Transact(opts, method, params...)
}
{{range .Structs}}
// {{.Name}} is an auto generated low-level Go binding around an user-defined struct.
type {{.Name}} struct {
{{range $field := .Fields}}
{{$field.Name}} {{$field.Type}}{{end}}
}
{{end}}
{{range .Calls}}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Caller) {{.Normalized.Name}}(opts *bind.CallOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} },{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}}{{end}} error) {
{{if .Structured}}ret := new(struct{
{{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}}
@@ -315,14 +316,14 @@ var (
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}} {{end}} error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.CallOpts {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}CallerSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}} {{end}} error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.CallOpts {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
@@ -331,26 +332,72 @@ var (
{{range .Transacts}}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) {{.Normalized.Name}}(opts *bind.TransactOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.contract.Transact(opts, "{{.Original.Name}}" {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.TransactOpts {{range $i, $_ := .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatmethod .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.TransactOpts {{range $i, $_ := .Normalized.Inputs}}, {{.Name}}{{end}})
}
{{end}}
{{if .Fallback}}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{.Fallback.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) Fallback(opts *bind.TransactOpts, calldata []byte) (*types.Transaction, error) {
return _{{$contract.Type}}.contract.RawTransact(opts, calldata)
}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{.Fallback.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) Fallback(calldata []byte) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Fallback(&_{{$contract.Type}}.TransactOpts, calldata)
}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{.Fallback.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) Fallback(calldata []byte) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Fallback(&_{{$contract.Type}}.TransactOpts, calldata)
}
{{end}}
{{if .Receive}}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{.Receive.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) Receive(opts *bind.TransactOpts) (*types.Transaction, error) {
return _{{$contract.Type}}.contract.RawTransact(opts, nil) // calldata is disallowed for receive function
}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{.Receive.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) Receive() (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Receive(&_{{$contract.Type}}.TransactOpts)
}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{.Receive.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) Receive() (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.Receive(&_{{$contract.Type}}.TransactOpts)
}
{{end}}
{{range .Events}}
// {{$contract.Type}}{{.Normalized.Name}}Iterator is returned from Filter{{.Normalized.Name}} and is used to iterate over the raw logs and unpacked data for {{.Normalized.Name}} events raised by the {{$contract.Type}} contract.
type {{$contract.Type}}{{.Normalized.Name}}Iterator struct {
@@ -424,7 +471,7 @@ var (
// Filter{{.Normalized.Name}} is a free log retrieval operation binding the contract event 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatevent .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Filter{{.Normalized.Name}}(opts *bind.FilterOpts{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type $structs}}{{end}}{{end}}) (*{{$contract.Type}}{{.Normalized.Name}}Iterator, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
@@ -441,7 +488,7 @@ var (
// Watch{{.Normalized.Name}} is a free log subscription operation binding the contract event 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{formatevent .Original $structs}}
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Watch{{.Normalized.Name}}(opts *bind.WatchOpts, sink chan<- *{{$contract.Type}}{{.Normalized.Name}}{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type $structs}}{{end}}{{end}}) (event.Subscription, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
@@ -507,8 +554,8 @@ package {{.Package}};
import org.ethereum.geth.*;
import java.util.*;
{{$structs := .Structs}}
{{range $contract := .Contracts}}
{{$structs := $contract.Structs}}
{{if not .Library}}public {{end}}class {{.Type}} {
// ABI is the input ABI used to generate the binding from.
public final static String ABI = "{{.InputABI}}";
@@ -577,7 +624,7 @@ import java.util.*;
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{.Original.String}}
public {{if gt (len .Normalized.Outputs) 1}}{{capitalise .Normalized.Name}}Results{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}}{{end}}{{end}} {{.Normalized.Name}}(CallOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
public {{if gt (len .Normalized.Outputs) 1}}{{capitalise .Normalized.Name}}Results{{else if eq (len .Normalized.Outputs) 0}}void{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}}{{end}}{{end}} {{.Normalized.Name}}(CallOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
@@ -611,6 +658,24 @@ import java.util.*;
return this.Contract.transact(opts, "{{.Original.Name}}" , args);
}
{{end}}
{{if .Fallback}}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{.Fallback.Original.String}}
public Transaction Fallback(TransactOpts opts, byte[] calldata) throws Exception {
return this.Contract.rawTransact(opts, calldata);
}
{{end}}
{{if .Receive}}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{.Receive.Original.String}}
public Transaction Receive(TransactOpts opts) throws Exception {
return this.Contract.rawTransact(opts, null);
}
{{end}}
}
{{end}}
`


@@ -1,241 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bind
import (
"encoding/binary"
"errors"
"fmt"
"math/big"
"reflect"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
// makeTopics converts a filter query argument list into a filter topic set.
func makeTopics(query ...[]interface{}) ([][]common.Hash, error) {
topics := make([][]common.Hash, len(query))
for i, filter := range query {
for _, rule := range filter {
var topic common.Hash
// Try to generate the topic based on simple types
switch rule := rule.(type) {
case common.Hash:
copy(topic[:], rule[:])
case common.Address:
copy(topic[common.HashLength-common.AddressLength:], rule[:])
case *big.Int:
blob := rule.Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case bool:
if rule {
topic[common.HashLength-1] = 1
}
case int8:
blob := big.NewInt(int64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case int16:
blob := big.NewInt(int64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case int32:
blob := big.NewInt(int64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case int64:
blob := big.NewInt(rule).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint8:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint16:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint32:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint64:
blob := new(big.Int).SetUint64(rule).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case string:
hash := crypto.Keccak256Hash([]byte(rule))
copy(topic[:], hash[:])
case []byte:
hash := crypto.Keccak256Hash(rule)
copy(topic[:], hash[:])
default:
// Attempt to generate the topic from funky types
val := reflect.ValueOf(rule)
switch {
// static byte array
case val.Kind() == reflect.Array && reflect.TypeOf(rule).Elem().Kind() == reflect.Uint8:
reflect.Copy(reflect.ValueOf(topic[:val.Len()]), val)
default:
return nil, fmt.Errorf("unsupported indexed type: %T", rule)
}
}
topics[i] = append(topics[i], topic)
}
}
return topics, nil
}
// Big batch of reflect types for topic reconstruction.
var (
reflectHash = reflect.TypeOf(common.Hash{})
reflectAddress = reflect.TypeOf(common.Address{})
reflectBigInt = reflect.TypeOf(new(big.Int))
)
// parseTopics converts the indexed topic fields into actual log field values.
//
// Note, dynamic types cannot be reconstructed since they get mapped to Keccak256
// hashes as the topic value!
func parseTopics(out interface{}, fields abi.Arguments, topics []common.Hash) error {
// Sanity check that the fields and topics match up
if len(fields) != len(topics) {
return errors.New("topic/field count mismatch")
}
// Iterate over all the fields and reconstruct them from topics
for _, arg := range fields {
if !arg.Indexed {
return errors.New("non-indexed field in topic reconstruction")
}
field := reflect.ValueOf(out).Elem().FieldByName(capitalise(arg.Name))
// Try to parse the topic back into the fields based on primitive types
switch field.Kind() {
case reflect.Bool:
if topics[0][common.HashLength-1] == 1 {
field.Set(reflect.ValueOf(true))
}
case reflect.Int8:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(int8(num.Int64())))
case reflect.Int16:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(int16(num.Int64())))
case reflect.Int32:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(int32(num.Int64())))
case reflect.Int64:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(num.Int64()))
case reflect.Uint8:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(uint8(num.Uint64())))
case reflect.Uint16:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(uint16(num.Uint64())))
case reflect.Uint32:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(uint32(num.Uint64())))
case reflect.Uint64:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(num.Uint64()))
default:
// Ran out of plain primitive types, try custom types
switch field.Type() {
case reflectHash: // Also covers all dynamic types
field.Set(reflect.ValueOf(topics[0]))
case reflectAddress:
var addr common.Address
copy(addr[:], topics[0][common.HashLength-common.AddressLength:])
field.Set(reflect.ValueOf(addr))
case reflectBigInt:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(num))
default:
// Ran out of custom types, try the crazies
switch {
// static byte array
case arg.Type.T == abi.FixedBytesTy:
reflect.Copy(field, reflect.ValueOf(topics[0][:arg.Type.Size]))
default:
return fmt.Errorf("unsupported indexed type: %v", arg.Type)
}
}
}
topics = topics[1:]
}
return nil
}
// parseTopicsIntoMap converts the indexed topic field-value pairs into map key-value pairs
func parseTopicsIntoMap(out map[string]interface{}, fields abi.Arguments, topics []common.Hash) error {
// Sanity check that the fields and topics match up
if len(fields) != len(topics) {
return errors.New("topic/field count mismatch")
}
// Iterate over all the fields and reconstruct them from topics
for _, arg := range fields {
if !arg.Indexed {
return errors.New("non-indexed field in topic reconstruction")
}
switch arg.Type.T {
case abi.BoolTy:
out[arg.Name] = topics[0][common.HashLength-1] == 1
case abi.IntTy, abi.UintTy:
num := new(big.Int).SetBytes(topics[0][:])
out[arg.Name] = num
case abi.AddressTy:
var addr common.Address
copy(addr[:], topics[0][common.HashLength-common.AddressLength:])
out[arg.Name] = addr
case abi.HashTy:
out[arg.Name] = topics[0]
case abi.FixedBytesTy:
out[arg.Name] = topics[0][:]
case abi.StringTy, abi.BytesTy, abi.SliceTy, abi.ArrayTy:
// Array types (including strings and bytes) have their keccak256 hashes stored in the topic, not a hash
// whose bytes can be decoded to the actual value, so the best we can do is retrieve that hash
out[arg.Name] = topics[0]
case abi.FunctionTy:
if garbage := binary.BigEndian.Uint64(topics[0][0:8]); garbage != 0 {
return fmt.Errorf("bind: got improperly encoded function type, got %v", topics[0].Bytes())
}
var tmp [24]byte
copy(tmp[:], topics[0][8:32])
out[arg.Name] = tmp
default: // Not handling tuples
return fmt.Errorf("unsupported indexed type: %v", arg.Type)
}
topics = topics[1:]
}
return nil
}
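
The removed helpers above encode a small set of padding and hashing rules: value types are left-padded into a 32-byte topic, while dynamic types (strings, bytes, arrays) are represented only by their Keccak256 hash. As a reference, here is a minimal, self-contained Go sketch (not part of this diff; it only uses the public common and crypto packages) that reproduces three of those conversions:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// topicForAddress left-pads a 20-byte address into a 32-byte topic.
func topicForAddress(a common.Address) common.Hash {
	var t common.Hash
	copy(t[common.HashLength-common.AddressLength:], a[:])
	return t
}

// topicForBigInt left-pads the big-endian bytes of a non-negative integer.
func topicForBigInt(n *big.Int) common.Hash {
	var t common.Hash
	blob := n.Bytes()
	copy(t[common.HashLength-len(blob):], blob)
	return t
}

// topicForString hashes the value; dynamic types only keep their Keccak256 hash.
func topicForString(s string) common.Hash {
	return crypto.Keccak256Hash([]byte(s))
}

func main() {
	fmt.Printf("%x\n", topicForAddress(common.HexToAddress("0x01")))
	fmt.Printf("%x\n", topicForBigInt(big.NewInt(42)))
	fmt.Printf("%x\n", topicForString("hello"))
}

This also shows why parseTopics cannot reconstruct dynamic types: only the hash survives in the topic.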

View File

@@ -1,103 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bind
import (
"reflect"
"testing"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
)
func TestMakeTopics(t *testing.T) {
type args struct {
query [][]interface{}
}
tests := []struct {
name string
args args
want [][]common.Hash
wantErr bool
}{
{
"support fixed byte types, right padded to 32 bytes",
args{[][]interface{}{{[5]byte{1, 2, 3, 4, 5}}}},
[][]common.Hash{{common.Hash{1, 2, 3, 4, 5}}},
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := makeTopics(tt.args.query...)
if (err != nil) != tt.wantErr {
t.Errorf("makeTopics() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("makeTopics() = %v, want %v", got, tt.want)
}
})
}
}
func TestParseTopics(t *testing.T) {
type bytesStruct struct {
StaticBytes [5]byte
}
bytesType, _ := abi.NewType("bytes5", nil)
type args struct {
createObj func() interface{}
resultObj func() interface{}
fields abi.Arguments
topics []common.Hash
}
tests := []struct {
name string
args args
wantErr bool
}{
{
name: "support fixed byte types, right padded to 32 bytes",
args: args{
createObj: func() interface{} { return &bytesStruct{} },
resultObj: func() interface{} { return &bytesStruct{StaticBytes: [5]byte{1, 2, 3, 4, 5}} },
fields: abi.Arguments{abi.Argument{
Name: "staticBytes",
Type: bytesType,
Indexed: true,
}},
topics: []common.Hash{
{1, 2, 3, 4, 5},
},
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
createObj := tt.args.createObj()
if err := parseTopics(createObj, tt.args.fields, tt.args.topics); (err != nil) != tt.wantErr {
t.Errorf("parseTopics() error = %v, wantErr %v", err, tt.wantErr)
}
resultObj := tt.args.resultObj()
if !reflect.DeepEqual(createObj, resultObj) {
t.Errorf("parseTopics() = %v, want %v", createObj, resultObj)
}
})
}
}

View File

@@ -18,7 +18,7 @@ package bind
import (
"context"
"fmt"
"errors"
"time"
"github.com/ethereum/go-ethereum/common"
@@ -56,14 +56,14 @@ func WaitMined(ctx context.Context, b DeployBackend, tx *types.Transaction) (*ty
// contract address when it is mined. It stops waiting when ctx is canceled.
func WaitDeployed(ctx context.Context, b DeployBackend, tx *types.Transaction) (common.Address, error) {
if tx.To() != nil {
return common.Address{}, fmt.Errorf("tx is not contract creation")
return common.Address{}, errors.New("tx is not contract creation")
}
receipt, err := WaitMined(ctx, b, tx)
if err != nil {
return common.Address{}, err
}
if receipt.ContractAddress == (common.Address{}) {
return common.Address{}, fmt.Errorf("zero address")
return common.Address{}, errors.New("zero address")
}
// Check that code has indeed been deployed at the address.
// This matters on pre-Homestead chains: OOG in the constructor

View File

@@ -18,6 +18,7 @@ package bind_test
import (
"context"
"errors"
"math/big"
"testing"
"time"
@@ -84,7 +85,7 @@ func TestWaitDeployed(t *testing.T) {
select {
case <-mined:
if err != test.wantErr {
t.Errorf("test %q: error mismatch: got %q, want %q", name, err, test.wantErr)
t.Errorf("test %q: error mismatch: want %q, got %q", name, test.wantErr, err)
}
if address != test.wantAddress {
t.Errorf("test %q: unexpected contract address %s", name, address.Hex())
@@ -94,3 +95,40 @@ func TestWaitDeployed(t *testing.T) {
}
}
}
func TestWaitDeployedCornerCases(t *testing.T) {
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
},
10000000,
)
defer backend.Close()
// Create a transaction to an account.
code := "6060604052600a8060106000396000f360606040526008565b00"
tx := types.NewTransaction(0, common.HexToAddress("0x01"), big.NewInt(0), 3000000, big.NewInt(1), common.FromHex(code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
backend.SendTransaction(ctx, tx)
backend.Commit()
notContractCreation := errors.New("tx is not contract creation")
if _, err := bind.WaitDeployed(ctx, backend, tx); err.Error() != notContractCreation.Error() {
t.Errorf("error mismatch: want %q, got %q", notContractCreation, err)
}
// Create a transaction that is not mined.
tx = types.NewContractCreation(1, big.NewInt(0), 3000000, big.NewInt(1), common.FromHex(code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
go func() {
contextCanceled := errors.New("context canceled")
if _, err := bind.WaitDeployed(ctx, backend, tx); err.Error() != contextCanceled.Error() {
t.Errorf("error missmatch: want %q, got %q, ", contextCanceled, err)
}
}()
backend.SendTransaction(ctx, tx)
cancel()
}

View File

@@ -39,23 +39,21 @@ func formatSliceString(kind reflect.Kind, sliceSize int) string {
// type in t.
func sliceTypeCheck(t Type, val reflect.Value) error {
if val.Kind() != reflect.Slice && val.Kind() != reflect.Array {
return typeErr(formatSliceString(t.Kind, t.Size), val.Type())
return typeErr(formatSliceString(t.GetType().Kind(), t.Size), val.Type())
}
if t.T == ArrayTy && val.Len() != t.Size {
return typeErr(formatSliceString(t.Elem.Kind, t.Size), formatSliceString(val.Type().Elem().Kind(), val.Len()))
return typeErr(formatSliceString(t.Elem.GetType().Kind(), t.Size), formatSliceString(val.Type().Elem().Kind(), val.Len()))
}
if t.Elem.T == SliceTy {
if t.Elem.T == SliceTy || t.Elem.T == ArrayTy {
if val.Len() > 0 {
return sliceTypeCheck(*t.Elem, val.Index(0))
}
} else if t.Elem.T == ArrayTy {
return sliceTypeCheck(*t.Elem, val.Index(0))
}
if elemKind := val.Type().Elem().Kind(); elemKind != t.Elem.Kind {
return typeErr(formatSliceString(t.Elem.Kind, t.Size), val.Type())
if elemKind := val.Type().Elem().Kind(); elemKind != t.Elem.GetType().Kind() {
return typeErr(formatSliceString(t.Elem.GetType().Kind(), t.Size), val.Type())
}
return nil
}
@@ -68,10 +66,10 @@ func typeCheck(t Type, value reflect.Value) error {
}
// Check base type validity. Element types will be checked later on.
if t.Kind != value.Kind() {
return typeErr(t.Kind, value.Kind())
if t.GetType().Kind() != value.Kind() {
return typeErr(t.GetType().Kind(), value.Kind())
} else if t.T == FixedBytesTy && t.Size != value.Len() {
return typeErr(t.Type, value.Type())
return typeErr(t.GetType(), value.Type())
} else {
return nil
}
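
These checks are internal, but they surface through the public packing API: a Go value whose reflect.Kind does not match the ABI type is rejected before any bytes are produced. A small illustrative sketch (not part of this diff), assuming the exported abi.JSON and ABI.Pack entry points:

package main

import (
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	const def = `[{"type":"function","name":"set","inputs":[{"name":"x","type":"uint256"}]}]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	// A *big.Int matches uint256, so this packs fine.
	if _, err := parsed.Pack("set", big.NewInt(7)); err != nil {
		fmt.Println("unexpected:", err)
	}
	// A string does not, so the type check rejects it with an error.
	if _, err := parsed.Pack("set", "seven"); err != nil {
		fmt.Println("rejected:", err)
	}
}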

View File

@@ -42,36 +42,59 @@ type Event struct {
RawName string
Anonymous bool
Inputs Arguments
str string
// Sig contains the string signature according to the ABI spec.
// e.g. event foo(uint32 a, int b) = "foo(uint32,int256)"
// Please note that "int" is a substitute for its canonical representation "int256"
Sig string
// ID holds the canonical representation of the event's signature, used by the
// abi definition to identify event names and types.
ID common.Hash
}
// NewEvent creates a new Event.
// It sanitizes the input arguments by giving unnamed ones generated
// names (arg0, arg1, ...). It also precomputes the id, signature and
// string representation of the event.
func NewEvent(name, rawName string, anonymous bool, inputs Arguments) Event {
// sanitize inputs by naming the unnamed ones
// and precompute the string and sig representations.
names := make([]string, len(inputs))
types := make([]string, len(inputs))
for i, input := range inputs {
if input.Name == "" {
inputs[i] = Argument{
Name: fmt.Sprintf("arg%d", i),
Indexed: input.Indexed,
Type: input.Type,
}
} else {
inputs[i] = input
}
// string representation
names[i] = fmt.Sprintf("%v %v", input.Type, inputs[i].Name)
if input.Indexed {
names[i] = fmt.Sprintf("%v indexed %v", input.Type, inputs[i].Name)
}
// sig representation
types[i] = input.Type.String()
}
str := fmt.Sprintf("event %v(%v)", rawName, strings.Join(names, ", "))
sig := fmt.Sprintf("%v(%v)", rawName, strings.Join(types, ","))
id := common.BytesToHash(crypto.Keccak256([]byte(sig)))
return Event{
Name: name,
RawName: rawName,
Anonymous: anonymous,
Inputs: inputs,
str: str,
Sig: sig,
ID: id,
}
}
func (e Event) String() string {
inputs := make([]string, len(e.Inputs))
for i, input := range e.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", input.Type, input.Name)
}
}
return fmt.Sprintf("event %v(%v)", e.RawName, strings.Join(inputs, ", "))
}
// Sig returns the event string signature according to the ABI spec.
//
// Example
//
// event foo(uint32 a, int b) = "foo(uint32,int256)"
//
// Please note that "int" is substitute for its canonical representation "int256"
func (e Event) Sig() string {
types := make([]string, len(e.Inputs))
for i, input := range e.Inputs {
types[i] = input.Type.String()
}
return fmt.Sprintf("%v(%v)", e.RawName, strings.Join(types, ","))
}
// ID returns the canonical representation of the event's signature used by the
// abi definition to identify event names and types.
func (e Event) ID() common.Hash {
return common.BytesToHash(crypto.Keccak256([]byte(e.Sig())))
return e.str
}
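
The precomputed ID field holds exactly what the removed ID() method used to compute: the Keccak256 hash of the canonical event signature. A quick standalone check of that relationship (illustrative only, not part of this diff):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// "int" is canonicalised to "int256" in the signature.
	sig := "foo(uint32,int256)"
	id := common.BytesToHash(crypto.Keccak256([]byte(sig)))
	fmt.Printf("event foo(uint32 a, int b) -> %s\n", id.Hex())
}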

View File

@@ -104,8 +104,8 @@ func TestEventId(t *testing.T) {
}
for name, event := range abi.Events {
if event.ID() != test.expectations[name] {
t.Errorf("expected id to be %x, got %x", test.expectations[name], event.ID())
if event.ID != test.expectations[name] {
t.Errorf("expected id to be %x, got %x", test.expectations[name], event.ID)
}
}
}
@@ -173,7 +173,7 @@ func TestEventTupleUnpack(t *testing.T) {
type EventTransferWithTag struct {
// this is valid because `value` is not exportable,
// so value is only unmarshalled into `Value1`.
value *big.Int
value *big.Int //lint:ignore U1000 unused field is part of test
Value1 *big.Int `abi:"value"`
}
@@ -312,14 +312,14 @@ func TestEventTupleUnpack(t *testing.T) {
&[]interface{}{common.Address{}, new(big.Int)},
&[]interface{}{},
jsonEventPledge,
"abi: insufficient number of elements in the list/array for unpack, want 3, got 2",
"abi: insufficient number of arguments for unpack, want 3, got 2",
"Can not unpack Pledge event into too short slice",
}, {
pledgeData1,
new(map[string]interface{}),
&[]interface{}{},
jsonEventPledge,
"abi: cannot unmarshal tuple into map[string]interface {}",
"abi:[2] cannot unmarshal tuple in to map[string]interface {}",
"Can not unpack Pledge event into map",
}, {
mixedCaseData1,
@@ -354,40 +354,6 @@ func unpackTestEventData(dest interface{}, hexData string, jsonEvent []byte, ass
return a.Unpack(dest, "e", data)
}
/*
Taken from
https://github.com/ethereum/go-ethereum/pull/15568
*/
type testResult struct {
Values [2]*big.Int
Value1 *big.Int
Value2 *big.Int
}
type testCase struct {
definition string
want testResult
}
func (tc testCase) encoded(intType, arrayType Type) []byte {
var b bytes.Buffer
if tc.want.Value1 != nil {
val, _ := intType.pack(reflect.ValueOf(tc.want.Value1))
b.Write(val)
}
if !reflect.DeepEqual(tc.want.Values, [2]*big.Int{nil, nil}) {
val, _ := arrayType.pack(reflect.ValueOf(tc.want.Values))
b.Write(val)
}
if tc.want.Value2 != nil {
val, _ := intType.pack(reflect.ValueOf(tc.want.Value2))
b.Write(val)
}
return b.Bytes()
}
// TestEventUnpackIndexed verifies that indexed field will be skipped by event decoder.
func TestEventUnpackIndexed(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`

View File

@@ -23,6 +23,24 @@ import (
"github.com/ethereum/go-ethereum/crypto"
)
// FunctionType represents different types of functions a contract might have.
type FunctionType int
const (
// Constructor represents the constructor of the contract.
// The constructor function is called while deploying a contract.
Constructor FunctionType = iota
// Fallback represents the fallback function.
// This function is executed if no other function matches the given function
// signature and no receive function is specified.
Fallback
// Receive represents the receive function.
// This function is executed on plain Ether transfers.
Receive
// Function represents a normal function.
Function
)
// Method represents a callable given a `Name` and whether the method is a constant.
// If the method is `Const` no transaction needs to be created for this
// particular Method call. It can easily be simulated using a local VM.
@@ -41,50 +59,109 @@ type Method struct {
// * foo(uint,uint)
// The method name of the first one will be resolved as foo while the second one
// will be resolved as foo0.
Name string
// RawName is the raw method name parsed from ABI.
RawName string
Const bool
Name string
RawName string // RawName is the raw method name parsed from ABI
// Type indicates whether the method is a
// special fallback introduced in solidity v0.6.0
Type FunctionType
// StateMutability indicates the mutability state of the method;
// the default value is nonpayable. It can be empty if the ABI
// was generated by a legacy compiler.
StateMutability string
// Legacy indicators generated by compilers before v0.6.0
Constant bool
Payable bool
Inputs Arguments
Outputs Arguments
str string
// Sig contains the method's string signature according to the ABI spec.
// e.g. function foo(uint32 a, int b) = "foo(uint32,int256)"
// Please note that "int" is a substitute for its canonical representation "int256"
Sig string
// ID holds the canonical representation of the method's signature, used by the
// abi definition to identify method names and types.
ID []byte
}
// Sig returns the methods string signature according to the ABI spec.
//
// Example
//
// function foo(uint32 a, int b) = "foo(uint32,int256)"
//
// Please note that "int" is substitute for its canonical representation "int256"
func (method Method) Sig() string {
types := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
// NewMethod creates a new Method.
// A method should always be created using NewMethod.
// It also precomputes the sig representation and the string representation
// of the method.
func NewMethod(name string, rawName string, funType FunctionType, mutability string, isConst, isPayable bool, inputs Arguments, outputs Arguments) Method {
var (
types = make([]string, len(inputs))
inputNames = make([]string, len(inputs))
outputNames = make([]string, len(outputs))
)
for i, input := range inputs {
inputNames[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
types[i] = input.Type.String()
}
return fmt.Sprintf("%v(%v)", method.RawName, strings.Join(types, ","))
for i, output := range outputs {
outputNames[i] = output.Type.String()
if len(output.Name) > 0 {
outputNames[i] += fmt.Sprintf(" %v", output.Name)
}
}
// Calculate the signature and method ID. Note that only normal
// functions have a meaningful signature and ID.
var (
sig string
id []byte
)
if funType == Function {
sig = fmt.Sprintf("%v(%v)", rawName, strings.Join(types, ","))
id = crypto.Keccak256([]byte(sig))[:4]
}
// Extract the meaningful state mutability of the Solidity method.
// If it is the default value, don't print it.
state := mutability
if state == "nonpayable" {
state = ""
}
if state != "" {
state = state + " "
}
identity := fmt.Sprintf("function %v", rawName)
if funType == Fallback {
identity = "fallback"
} else if funType == Receive {
identity = "receive"
} else if funType == Constructor {
identity = "constructor"
}
str := fmt.Sprintf("%v(%v) %sreturns(%v)", identity, strings.Join(inputNames, ", "), state, strings.Join(outputNames, ", "))
return Method{
Name: name,
RawName: rawName,
Type: funType,
StateMutability: mutability,
Constant: isConst,
Payable: isPayable,
Inputs: inputs,
Outputs: outputs,
str: str,
Sig: sig,
ID: id,
}
}
func (method Method) String() string {
inputs := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
}
outputs := make([]string, len(method.Outputs))
for i, output := range method.Outputs {
outputs[i] = output.Type.String()
if len(output.Name) > 0 {
outputs[i] += fmt.Sprintf(" %v", output.Name)
}
}
constant := ""
if method.Const {
constant = "constant "
}
return fmt.Sprintf("function %v(%v) %sreturns(%v)", method.RawName, strings.Join(inputs, ", "), constant, strings.Join(outputs, ", "))
return method.str
}
// ID returns the canonical representation of the method's signature used by the
// abi definition to identify method names and types.
func (method Method) ID() []byte {
return crypto.Keccak256([]byte(method.Sig()))[:4]
// IsConstant reports whether the method is read-only.
func (method Method) IsConstant() bool {
return method.StateMutability == "view" || method.StateMutability == "pure" || method.Constant
}
// IsPayable reports whether the method can process
// plain ether transfers.
func (method Method) IsPayable() bool {
return method.StateMutability == "payable" || method.Payable
}
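
For ordinary functions the precomputed ID is the familiar 4-byte selector, i.e. the first four bytes of the Keccak256 hash of the canonical signature; constructor, fallback and receive get no ID. A minimal sketch of the selector computation (illustrative, not part of this diff):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// The canonical ERC-20 transfer signature.
	sig := "transfer(address,uint256)"
	selector := crypto.Keccak256([]byte(sig))[:4]
	fmt.Printf("%x\n", selector) // a9059cbb
}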

View File

@@ -23,13 +23,15 @@ import (
const methoddata = `
[
{"type": "function", "name": "balance", "constant": true },
{"type": "function", "name": "send", "constant": false, "inputs": [{ "name": "amount", "type": "uint256" }]},
{"type": "function", "name": "transfer", "constant": false, "inputs": [{"name": "from", "type": "address"}, {"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}], "outputs": [{"name": "success", "type": "bool"}]},
{"type": "function", "name": "balance", "stateMutability": "view"},
{"type": "function", "name": "send", "inputs": [{ "name": "amount", "type": "uint256" }]},
{"type": "function", "name": "transfer", "inputs": [{"name": "from", "type": "address"}, {"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}], "outputs": [{"name": "success", "type": "bool"}]},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple"}],"name":"tuple","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple[]"}],"name":"tupleSlice","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple[5]"}],"name":"tupleArray","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple[5][]"}],"name":"complexTuple","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"}
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple[5][]"}],"name":"complexTuple","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},
{"stateMutability":"nonpayable","type":"fallback"},
{"stateMutability":"payable","type":"receive"}
]`
func TestMethodString(t *testing.T) {
@@ -39,7 +41,7 @@ func TestMethodString(t *testing.T) {
}{
{
method: "balance",
expectation: "function balance() constant returns()",
expectation: "function balance() view returns()",
},
{
method: "send",
@@ -65,6 +67,14 @@ func TestMethodString(t *testing.T) {
method: "complexTuple",
expectation: "function complexTuple((uint256,uint256)[5][] a) returns()",
},
{
method: "fallback",
expectation: "fallback() returns()",
},
{
method: "receive",
expectation: "receive() payable returns()",
},
}
abi, err := JSON(strings.NewReader(methoddata))
@@ -73,7 +83,14 @@ func TestMethodString(t *testing.T) {
}
for _, test := range table {
got := abi.Methods[test.method].String()
var got string
if test.method == "fallback" {
got = abi.Fallback.String()
} else if test.method == "receive" {
got = abi.Receive.String()
} else {
got = abi.Methods[test.method].String()
}
if got != test.expectation {
t.Errorf("expected string to be %s, got %s", test.expectation, got)
}
@@ -120,7 +137,7 @@ func TestMethodSig(t *testing.T) {
}
for _, test := range cases {
got := abi.Methods[test.method].Sig()
got := abi.Methods[test.method].Sig
if got != test.expect {
t.Errorf("expected string to be %s, got %s", test.expect, got)
}

View File

@@ -69,11 +69,11 @@ func packElement(t Type, reflectValue reflect.Value) []byte {
func packNum(value reflect.Value) []byte {
switch kind := value.Kind(); kind {
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return U256(new(big.Int).SetUint64(value.Uint()))
return math.U256Bytes(new(big.Int).SetUint64(value.Uint()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return U256(big.NewInt(value.Int()))
return math.U256Bytes(big.NewInt(value.Int()))
case reflect.Ptr:
return U256(value.Interface().(*big.Int))
return math.U256Bytes(new(big.Int).Set(value.Interface().(*big.Int)))
default:
panic("abi: fatal error")
}
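
math.U256Bytes yields the 32-byte big-endian word the ABI expects, with negative integers in two's-complement form; the extra new(big.Int).Set(...) in the pointer case matters because U256Bytes modifies its argument in place. A short illustrative sketch, assuming the helper is the one from the common/math package (the hunk does not show the import):

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common/math"
)

func main() {
	// Positive values are left-padded to a full 32-byte word.
	fmt.Printf("%x\n", math.U256Bytes(big.NewInt(2)))
	// Negative values are encoded in two's complement.
	fmt.Printf("%x\n", math.U256Bytes(big.NewInt(-1)))

	// U256Bytes mutates its argument, hence the defensive copy in packNum above.
	n := big.NewInt(-1)
	_ = math.U256Bytes(new(big.Int).Set(n))
	fmt.Println(n) // still -1
}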

View File

@@ -18,623 +18,62 @@ package abi
import (
"bytes"
"encoding/hex"
"fmt"
"math"
"math/big"
"reflect"
"strconv"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
)
// TestPack tests the general pack/unpack tests in packing_test.go
func TestPack(t *testing.T) {
for i, test := range []struct {
typ string
components []ArgumentMarshaling
input interface{}
output []byte
}{
{
"uint8",
nil,
uint8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint8[]",
nil,
[]uint8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16",
nil,
uint16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16[]",
nil,
[]uint16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32",
nil,
uint32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32[]",
nil,
[]uint32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64",
nil,
uint64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64[]",
nil,
[]uint64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256",
nil,
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256[]",
nil,
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8",
nil,
int8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8[]",
nil,
[]int8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16",
nil,
int16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16[]",
nil,
[]int16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32",
nil,
int32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32[]",
nil,
[]int32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64",
nil,
int64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64[]",
nil,
[]int64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256",
nil,
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256[]",
nil,
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"bytes1",
nil,
[1]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes2",
nil,
[2]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes3",
nil,
[3]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes4",
nil,
[4]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes5",
nil,
[5]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes6",
nil,
[6]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes7",
nil,
[7]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes8",
nil,
[8]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes9",
nil,
[9]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes10",
nil,
[10]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes11",
nil,
[11]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes12",
nil,
[12]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes13",
nil,
[13]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes14",
nil,
[14]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes15",
nil,
[15]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes16",
nil,
[16]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes17",
nil,
[17]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes18",
nil,
[18]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes19",
nil,
[19]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes20",
nil,
[20]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes21",
nil,
[21]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes22",
nil,
[22]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes23",
nil,
[23]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes24",
nil,
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes25",
nil,
[25]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes26",
nil,
[26]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes27",
nil,
[27]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes28",
nil,
[28]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes29",
nil,
[29]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes30",
nil,
[30]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes31",
nil,
[31]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes32",
nil,
[32]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"uint32[2][3][4]",
nil,
[4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018"),
},
{
"address[]",
nil,
[]common.Address{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000"),
},
{
"bytes32[]",
nil,
[]common.Hash{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000201000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000"),
},
{
"function",
nil,
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"string",
nil,
"foobar",
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000006666f6f6261720000000000000000000000000000000000000000000000000000"),
},
{
"string[]",
nil,
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset 128 to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"string[2]",
nil,
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"bytes32[][]",
nil,
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
"bytes32[][2]",
nil,
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
"bytes32[3][2]",
nil,
[][]common.Hash{{{1}, {2}, {3}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
// static tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "int64"},
{Name: "b", Type: "int256"},
{Name: "c", Type: "int256"},
{Name: "d", Type: "bool"},
{Name: "e", Type: "bytes32[3][2]"},
},
struct {
A int64
B *big.Int
C *big.Int
D bool
E [][]common.Hash
}{1, big.NewInt(1), big.NewInt(-1), true, [][]common.Hash{{{1}, {2}, {3}}, {{3}, {4}, {5}}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001" + // struct[a]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // struct[c]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[d]
"0100000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // struct[e] array[1][2]
},
{
// dynamic tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "string"},
{Name: "b", Type: "int64"},
{Name: "c", Type: "bytes"},
{Name: "d", Type: "string[]"},
{Name: "e", Type: "int256[]"},
{Name: "f", Type: "address[]"},
},
struct {
FieldA string `abi:"a"` // Test whether abi tag works
FieldB int64 `abi:"b"`
C []byte
D []string
E []*big.Int
F []common.Address
}{"foobar", 1, []byte{1}, []string{"foo", "bar"}, []*big.Int{big.NewInt(1), big.NewInt(-1)}, []common.Address{{1}, {2}}},
common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000000000c0" + // struct[a] offset
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"0000000000000000000000000000000000000000000000000000000000000100" + // struct[c] offset
"0000000000000000000000000000000000000000000000000000000000000140" + // struct[d] offset
"0000000000000000000000000000000000000000000000000000000000000220" + // struct[e] offset
"0000000000000000000000000000000000000000000000000000000000000280" + // struct[f] offset
"0000000000000000000000000000000000000000000000000000000000000006" + // struct[a] length
"666f6f6261720000000000000000000000000000000000000000000000000000" + // struct[a] "foobar"
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[c] length
"0100000000000000000000000000000000000000000000000000000000000000" + // []byte{1}
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[d] length
"0000000000000000000000000000000000000000000000000000000000000040" + // foo offset
"0000000000000000000000000000000000000000000000000000000000000080" + // bar offset
"0000000000000000000000000000000000000000000000000000000000000003" + // foo length
"666f6f0000000000000000000000000000000000000000000000000000000000" + // foo
"0000000000000000000000000000000000000000000000000000000000000003" + // bar offset
"6261720000000000000000000000000000000000000000000000000000000000" + // bar
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[e] length
"0000000000000000000000000000000000000000000000000000000000000001" + // 1
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // -1
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[f] length
"0000000000000000000000000100000000000000000000000000000000000000" + // common.Address{1}
"0000000000000000000000000200000000000000000000000000000000000000"), // common.Address{2}
},
{
// nested tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "tuple", Components: []ArgumentMarshaling{{Name: "a", Type: "uint256"}, {Name: "b", Type: "uint256[]"}}},
{Name: "b", Type: "int256[]"},
},
struct {
A struct {
FieldA *big.Int `abi:"a"`
B []*big.Int
for i, test := range packUnpackTests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
encb, err := hex.DecodeString(test.packed)
if err != nil {
t.Fatalf("invalid hex %s: %v", test.packed, err)
}
inDef := fmt.Sprintf(`[{ "name" : "method", "type": "function", "inputs": %s}]`, test.def)
inAbi, err := JSON(strings.NewReader(inDef))
if err != nil {
t.Fatalf("invalid ABI definition %s, %v", inDef, err)
}
var packed []byte
if reflect.TypeOf(test.unpacked).Kind() != reflect.Struct {
packed, err = inAbi.Pack("method", test.unpacked)
} else {
// if want is a struct we need to use the components.
elem := reflect.ValueOf(test.unpacked)
var values []interface{}
for i := 0; i < elem.NumField(); i++ {
field := elem.Field(i)
values = append(values, field.Interface())
}
B []*big.Int
}{
A: struct {
FieldA *big.Int `abi:"a"` // Test whether abi tag works for nested tuple
B []*big.Int
}{big.NewInt(1), []*big.Int{big.NewInt(1), big.NewInt(0)}},
B: []*big.Int{big.NewInt(1), big.NewInt(0)}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // a offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // b offset
"0000000000000000000000000000000000000000000000000000000000000001" + // a.a value
"0000000000000000000000000000000000000000000000000000000000000040" + // a.b offset
"0000000000000000000000000000000000000000000000000000000000000002" + // a.b length
"0000000000000000000000000000000000000000000000000000000000000001" + // a.b[0] value
"0000000000000000000000000000000000000000000000000000000000000000" + // a.b[1] value
"0000000000000000000000000000000000000000000000000000000000000002" + // b length
"0000000000000000000000000000000000000000000000000000000000000001" + // b[0] value
"0000000000000000000000000000000000000000000000000000000000000000"), // b[1] value
},
{
// tuple slice
"tuple[]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256[]"},
},
[]struct {
A *big.Int
B []*big.Int
}{
{big.NewInt(-1), []*big.Int{big.NewInt(1), big.NewInt(0)}},
{big.NewInt(1), []*big.Int{big.NewInt(2), big.NewInt(-1)}},
},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // tuple length
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // tuple[1] offset
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].B length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].B[0] value
"0000000000000000000000000000000000000000000000000000000000000000" + // tuple[0].B[1] value
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[1].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B length
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B[0] value
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].B[1] value
},
{
// static tuple array
"tuple[2]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256"},
},
[2]struct {
A *big.Int
B *big.Int
}{
{big.NewInt(-1), big.NewInt(1)},
{big.NewInt(1), big.NewInt(-1)},
},
common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].a
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].b
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].a
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].b
},
{
// dynamic tuple array
"tuple[2]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256[]"},
},
[2]struct {
A []*big.Int
}{
{[]*big.Int{big.NewInt(-1), big.NewInt(1)}},
{[]*big.Int{big.NewInt(1), big.NewInt(-1)}},
},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000c0" + // tuple[1] offset
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[0].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].A length
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A[0]
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].A[1]
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[1].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].A length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A[0]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].A[1]
},
} {
typ, err := NewType(test.typ, test.components)
if err != nil {
t.Fatalf("%v failed. Unexpected parse error: %v", i, err)
}
output, err := typ.pack(reflect.ValueOf(test.input))
if err != nil {
t.Fatalf("%v failed. Unexpected pack error: %v", i, err)
}
packed, err = inAbi.Pack("method", values...)
}
if !bytes.Equal(output, test.output) {
t.Errorf("input %d for typ: %v failed. Expected bytes: '%x' Got: '%x'", i, typ.String(), test.output, output)
}
if err != nil {
t.Fatalf("test %d (%v) failed: %v", i, test.def, err)
}
if !reflect.DeepEqual(packed[4:], encb) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, encb, packed[4:])
}
})
}
}
func TestMethodPack(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Fatal(err)
}
sig := abi.Methods["slice"].ID()
sig := abi.Methods["slice"].ID
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -648,7 +87,7 @@ func TestMethodPack(t *testing.T) {
}
var addrA, addrB = common.Address{1}, common.Address{2}
sig = abi.Methods["sliceAddress"].ID()
sig = abi.Methods["sliceAddress"].ID
sig = append(sig, common.LeftPadBytes([]byte{32}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrA[:], 32)...)
@@ -663,7 +102,7 @@ func TestMethodPack(t *testing.T) {
}
var addrC, addrD = common.Address{3}, common.Address{4}
sig = abi.Methods["sliceMultiAddress"].ID()
sig = abi.Methods["sliceMultiAddress"].ID
sig = append(sig, common.LeftPadBytes([]byte{64}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{160}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -681,7 +120,7 @@ func TestMethodPack(t *testing.T) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["slice256"].ID()
sig = abi.Methods["slice256"].ID
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -695,7 +134,7 @@ func TestMethodPack(t *testing.T) {
}
a := [2][2]*big.Int{{big.NewInt(1), big.NewInt(1)}, {big.NewInt(2), big.NewInt(0)}}
sig = abi.Methods["nestedArray"].ID()
sig = abi.Methods["nestedArray"].ID
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
@@ -712,7 +151,7 @@ func TestMethodPack(t *testing.T) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["nestedArray2"].ID()
sig = abi.Methods["nestedArray2"].ID
sig = append(sig, common.LeftPadBytes([]byte{0x20}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x40}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x80}, 32)...)
@@ -728,7 +167,7 @@ func TestMethodPack(t *testing.T) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["nestedSlice"].ID()
sig = abi.Methods["nestedSlice"].ID
sig = append(sig, common.LeftPadBytes([]byte{0x20}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x02}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x40}, 32)...)

View File

@@ -0,0 +1,988 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"math/big"
"github.com/ethereum/go-ethereum/common"
)
type packUnpackTest struct {
def string
unpacked interface{}
packed string
}
var packUnpackTests = []packUnpackTest{
// Booleans
{
def: `[{ "type": "bool" }]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: true,
},
{
def: `[{ "type": "bool" }]`,
packed: "0000000000000000000000000000000000000000000000000000000000000000",
unpacked: false,
},
// Integers
{
def: `[{ "type": "uint8" }]`,
unpacked: uint8(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{ "type": "uint8[]" }]`,
unpacked: []uint8{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{ "type": "uint16" }]`,
unpacked: uint16(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{ "type": "uint16[]" }]`,
unpacked: []uint16{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint17"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: big.NewInt(1),
},
{
def: `[{"type": "uint32"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: uint32(1),
},
{
def: `[{"type": "uint32[]"}]`,
unpacked: []uint32{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint64"}]`,
unpacked: uint64(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint64[]"}]`,
unpacked: []uint64{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint256"}]`,
unpacked: big.NewInt(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "uint256[]"}]`,
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int8"}]`,
unpacked: int8(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int8[]"}]`,
unpacked: []int8{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int16"}]`,
unpacked: int16(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int16[]"}]`,
unpacked: []int16{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int17"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: big.NewInt(1),
},
{
def: `[{"type": "int32"}]`,
unpacked: int32(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int32"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: int32(1),
},
{
def: `[{"type": "int32[]"}]`,
unpacked: []int32{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int64"}]`,
unpacked: int64(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int64[]"}]`,
unpacked: []int64{1, 2},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int256"}]`,
unpacked: big.NewInt(2),
packed: "0000000000000000000000000000000000000000000000000000000000000002",
},
{
def: `[{"type": "int256"}]`,
packed: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
unpacked: big.NewInt(-1),
},
{
def: `[{"type": "int256[]"}]`,
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
},
// Address
{
def: `[{"type": "address"}]`,
packed: "0000000000000000000000000100000000000000000000000000000000000000",
unpacked: common.Address{1},
},
{
def: `[{"type": "address[]"}]`,
unpacked: []common.Address{{1}, {2}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000100000000000000000000000000000000000000" +
"0000000000000000000000000200000000000000000000000000000000000000",
},
// Bytes
{
def: `[{"type": "bytes1"}]`,
unpacked: [1]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes2"}]`,
unpacked: [2]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes3"}]`,
unpacked: [3]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes4"}]`,
unpacked: [4]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes5"}]`,
unpacked: [5]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes6"}]`,
unpacked: [6]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes7"}]`,
unpacked: [7]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes8"}]`,
unpacked: [8]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes9"}]`,
unpacked: [9]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes10"}]`,
unpacked: [10]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes11"}]`,
unpacked: [11]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes12"}]`,
unpacked: [12]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes13"}]`,
unpacked: [13]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes14"}]`,
unpacked: [14]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes15"}]`,
unpacked: [15]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes16"}]`,
unpacked: [16]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes17"}]`,
unpacked: [17]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes18"}]`,
unpacked: [18]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes19"}]`,
unpacked: [19]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes20"}]`,
unpacked: [20]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes21"}]`,
unpacked: [21]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes22"}]`,
unpacked: [22]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes23"}]`,
unpacked: [23]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes24"}]`,
unpacked: [24]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes25"}]`,
unpacked: [25]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes26"}]`,
unpacked: [26]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes27"}]`,
unpacked: [27]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes28"}]`,
unpacked: [28]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes29"}]`,
unpacked: [29]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes30"}]`,
unpacked: [30]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes31"}]`,
unpacked: [31]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes32"}]`,
unpacked: [32]byte{1},
packed: "0100000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "bytes32"}]`,
packed: "0100000000000000000000000000000000000000000000000000000000000000",
unpacked: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "bytes"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000020" +
"0100000000000000000000000000000000000000000000000000000000000000",
unpacked: common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
def: `[{"type": "bytes32"}]`,
packed: "0100000000000000000000000000000000000000000000000000000000000000",
unpacked: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
// Functions
{
def: `[{"type": "function"}]`,
packed: "0100000000000000000000000000000000000000000000000000000000000000",
unpacked: [24]byte{1},
},
// Slice and Array
{
def: `[{"type": "uint8[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint8{1, 2},
},
{
def: `[{"type": "uint8[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: []uint8{},
},
{
def: `[{"type": "uint256[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: []*big.Int{},
},
{
def: `[{"type": "uint8[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint8{1, 2},
},
{
def: `[{"type": "int8[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int8{1, 2},
},
{
def: `[{"type": "int16[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []int16{1, 2},
},
{
def: `[{"type": "int16[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int16{1, 2},
},
{
def: `[{"type": "int32[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []int32{1, 2},
},
{
def: `[{"type": "int32[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int32{1, 2},
},
{
def: `[{"type": "int64[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []int64{1, 2},
},
{
def: `[{"type": "int64[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]int64{1, 2},
},
{
def: `[{"type": "int256[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "int256[3]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003",
unpacked: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
// multi dimensional, if these pass, all types that don't require length prefix should pass
{
def: `[{"type": "uint8[][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: [][]uint8{},
},
{
def: `[{"type": "uint8[][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"00000000000000000000000000000000000000000000000000000000000000a0" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [][]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"00000000000000000000000000000000000000000000000000000000000000a0" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003",
unpacked: [][]uint8{{1, 2}, {1, 2, 3}},
},
{
def: `[{"type": "uint8[2][2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000060" +
"0000000000000000000000000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: [2][]uint8{{}, {}},
},
{
def: `[{"type": "uint8[][2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001",
unpacked: [2][]uint8{{1}, {1}},
},
{
def: `[{"type": "uint8[2][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000000",
unpacked: [][2]uint8{},
},
{
def: `[{"type": "uint8[2][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [][2]uint8{{1, 2}},
},
{
def: `[{"type": "uint8[2][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint16[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint16{1, 2},
},
{
def: `[{"type": "uint16[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint16{1, 2},
},
{
def: `[{"type": "uint32[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint32{1, 2},
},
{
def: `[{"type": "uint32[2][3][4]"}]`,
unpacked: [4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"0000000000000000000000000000000000000000000000000000000000000004" +
"0000000000000000000000000000000000000000000000000000000000000005" +
"0000000000000000000000000000000000000000000000000000000000000006" +
"0000000000000000000000000000000000000000000000000000000000000007" +
"0000000000000000000000000000000000000000000000000000000000000008" +
"0000000000000000000000000000000000000000000000000000000000000009" +
"000000000000000000000000000000000000000000000000000000000000000a" +
"000000000000000000000000000000000000000000000000000000000000000b" +
"000000000000000000000000000000000000000000000000000000000000000c" +
"000000000000000000000000000000000000000000000000000000000000000d" +
"000000000000000000000000000000000000000000000000000000000000000e" +
"000000000000000000000000000000000000000000000000000000000000000f" +
"0000000000000000000000000000000000000000000000000000000000000010" +
"0000000000000000000000000000000000000000000000000000000000000011" +
"0000000000000000000000000000000000000000000000000000000000000012" +
"0000000000000000000000000000000000000000000000000000000000000013" +
"0000000000000000000000000000000000000000000000000000000000000014" +
"0000000000000000000000000000000000000000000000000000000000000015" +
"0000000000000000000000000000000000000000000000000000000000000016" +
"0000000000000000000000000000000000000000000000000000000000000017" +
"0000000000000000000000000000000000000000000000000000000000000018",
},
{
def: `[{"type": "bytes32[]"}]`,
unpacked: []common.Hash{{1}, {2}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0100000000000000000000000000000000000000000000000000000000000000" +
"0200000000000000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "uint32[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint32{1, 2},
},
{
def: `[{"type": "uint64[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []uint64{1, 2},
},
{
def: `[{"type": "uint64[2]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: [2]uint64{1, 2},
},
{
def: `[{"type": "uint256[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "uint256[3]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000003",
unpacked: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
{
def: `[{"type": "string[4]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"00000000000000000000000000000000000000000000000000000000000000c0" +
"0000000000000000000000000000000000000000000000000000000000000100" +
"0000000000000000000000000000000000000000000000000000000000000140" +
"0000000000000000000000000000000000000000000000000000000000000005" +
"48656c6c6f000000000000000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000005" +
"576f726c64000000000000000000000000000000000000000000000000000000" +
"000000000000000000000000000000000000000000000000000000000000000b" +
"476f2d657468657265756d000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000008" +
"457468657265756d000000000000000000000000000000000000000000000000",
unpacked: [4]string{"Hello", "World", "Go-ethereum", "Ethereum"},
},
{
def: `[{"type": "string[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"0000000000000000000000000000000000000000000000000000000000000008" +
"457468657265756d000000000000000000000000000000000000000000000000" +
"000000000000000000000000000000000000000000000000000000000000000b" +
"676f2d657468657265756d000000000000000000000000000000000000000000",
unpacked: []string{"Ethereum", "go-ethereum"},
},
{
def: `[{"type": "bytes[]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"0000000000000000000000000000000000000000000000000000000000000080" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"f0f0f00000000000000000000000000000000000000000000000000000000000" +
"0000000000000000000000000000000000000000000000000000000000000003" +
"f0f0f00000000000000000000000000000000000000000000000000000000000",
unpacked: [][]byte{{0xf0, 0xf0, 0xf0}, {0xf0, 0xf0, 0xf0}},
},
{
def: `[{"type": "uint256[2][][]"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000040" +
"00000000000000000000000000000000000000000000000000000000000000e0" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000000c8" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000003e8" +
"0000000000000000000000000000000000000000000000000000000000000002" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000000c8" +
"0000000000000000000000000000000000000000000000000000000000000001" +
"00000000000000000000000000000000000000000000000000000000000003e8",
unpacked: [][][2]*big.Int{{{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}, {{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}},
},
// struct outputs
{
def: `[{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: struct {
Int1 *big.Int
Int2 *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"int_one","type":"int256"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int__one","type":"int256"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one_","type":"int256"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001",
unpacked: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one","type":"int256"}, {"name":"intone","type":"int256"}]`,
packed: "0000000000000000000000000000000000000000000000000000000000000001" +
"0000000000000000000000000000000000000000000000000000000000000002",
unpacked: struct {
IntOne *big.Int
Intone *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "string"}]`,
unpacked: "foobar",
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000006" +
"666f6f6261720000000000000000000000000000000000000000000000000000",
},
{
def: `[{"type": "string[]"}]`,
unpacked: []string{"hello", "foobar"},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset 128 to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000", // str[1]
},
{
def: `[{"type": "string[2]"}]`,
unpacked: [2]string{"hello", "foobar"},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" + // offset to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000", // str[1]
},
{
def: `[{"type": "bytes32[][]"}]`,
unpacked: [][][32]byte{{{1}, {2}}, {{3}, {4}, {5}}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // array[1][2]
},
{
def: `[{"type": "bytes32[][2]"}]`,
unpacked: [2][][32]byte{{{1}, {2}}, {{3}, {4}, {5}}},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // array[1][2]
},
{
def: `[{"type": "bytes32[3][2]"}]`,
unpacked: [2][3][32]byte{{{1}, {2}, {3}}, {{3}, {4}, {5}}},
packed: "0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // array[1][2]
},
{
// static tuple
def: `[{"name":"a","type":"int64"},
{"name":"b","type":"int256"},
{"name":"c","type":"int256"},
{"name":"d","type":"bool"},
{"name":"e","type":"bytes32[3][2]"}]`,
unpacked: struct {
A int64
B *big.Int
C *big.Int
D bool
E [2][3][32]byte
}{1, big.NewInt(1), big.NewInt(-1), true, [2][3][32]byte{{{1}, {2}, {3}}, {{3}, {4}, {5}}}},
packed: "0000000000000000000000000000000000000000000000000000000000000001" + // struct[a]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // struct[c]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[d]
"0100000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000", // struct[e] array[1][2]
},
{
def: `[{"name":"a","type":"string"},
{"name":"b","type":"int64"},
{"name":"c","type":"bytes"},
{"name":"d","type":"string[]"},
{"name":"e","type":"int256[]"},
{"name":"f","type":"address[]"}]`,
unpacked: struct {
FieldA string `abi:"a"` // Test whether abi tag works
FieldB int64 `abi:"b"`
C []byte
D []string
E []*big.Int
F []common.Address
}{"foobar", 1, []byte{1}, []string{"foo", "bar"}, []*big.Int{big.NewInt(1), big.NewInt(-1)}, []common.Address{{1}, {2}}},
packed: "00000000000000000000000000000000000000000000000000000000000000c0" + // struct[a] offset
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"0000000000000000000000000000000000000000000000000000000000000100" + // struct[c] offset
"0000000000000000000000000000000000000000000000000000000000000140" + // struct[d] offset
"0000000000000000000000000000000000000000000000000000000000000220" + // struct[e] offset
"0000000000000000000000000000000000000000000000000000000000000280" + // struct[f] offset
"0000000000000000000000000000000000000000000000000000000000000006" + // struct[a] length
"666f6f6261720000000000000000000000000000000000000000000000000000" + // struct[a] "foobar"
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[c] length
"0100000000000000000000000000000000000000000000000000000000000000" + // []byte{1}
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[d] length
"0000000000000000000000000000000000000000000000000000000000000040" + // foo offset
"0000000000000000000000000000000000000000000000000000000000000080" + // bar offset
"0000000000000000000000000000000000000000000000000000000000000003" + // foo length
"666f6f0000000000000000000000000000000000000000000000000000000000" + // foo
"0000000000000000000000000000000000000000000000000000000000000003" + // bar offset
"6261720000000000000000000000000000000000000000000000000000000000" + // bar
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[e] length
"0000000000000000000000000000000000000000000000000000000000000001" + // 1
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // -1
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[f] length
"0000000000000000000000000100000000000000000000000000000000000000" + // common.Address{1}
"0000000000000000000000000200000000000000000000000000000000000000", // common.Address{2}
},
{
def: `[{"components": [{"name": "a","type": "uint256"},
{"name": "b","type": "uint256[]"}],
"name": "a","type": "tuple"},
{"name": "b","type": "uint256[]"}]`,
unpacked: struct {
A struct {
FieldA *big.Int `abi:"a"`
B []*big.Int
}
B []*big.Int
}{
A: struct {
FieldA *big.Int `abi:"a"` // Test whether abi tag works for nested tuple
B []*big.Int
}{big.NewInt(1), []*big.Int{big.NewInt(1), big.NewInt(2)}},
B: []*big.Int{big.NewInt(1), big.NewInt(2)}},
packed: "0000000000000000000000000000000000000000000000000000000000000040" + // a offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // b offset
"0000000000000000000000000000000000000000000000000000000000000001" + // a.a value
"0000000000000000000000000000000000000000000000000000000000000040" + // a.b offset
"0000000000000000000000000000000000000000000000000000000000000002" + // a.b length
"0000000000000000000000000000000000000000000000000000000000000001" + // a.b[0] value
"0000000000000000000000000000000000000000000000000000000000000002" + // a.b[1] value
"0000000000000000000000000000000000000000000000000000000000000002" + // b length
"0000000000000000000000000000000000000000000000000000000000000001" + // b[0] value
"0000000000000000000000000000000000000000000000000000000000000002", // b[1] value
},
{
def: `[{"components": [{"name": "a","type": "int256"},
{"name": "b","type": "int256[]"}],
"name": "a","type": "tuple[]"}]`,
unpacked: []struct {
A *big.Int
B []*big.Int
}{
{big.NewInt(-1), []*big.Int{big.NewInt(1), big.NewInt(3)}},
{big.NewInt(1), []*big.Int{big.NewInt(2), big.NewInt(-1)}},
},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple length
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // tuple[1] offset
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].B length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].B[0] value
"0000000000000000000000000000000000000000000000000000000000000003" + // tuple[0].B[1] value
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[1].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B length
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B[0] value
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // tuple[1].B[1] value
},
{
def: `[{"components": [{"name": "a","type": "int256"},
{"name": "b","type": "int256"}],
"name": "a","type": "tuple[2]"}]`,
unpacked: [2]struct {
A *big.Int
B *big.Int
}{
{big.NewInt(-1), big.NewInt(1)},
{big.NewInt(1), big.NewInt(-1)},
},
packed: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].a
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].b
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].a
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // tuple[1].b
},
{
def: `[{"components": [{"name": "a","type": "int256[]"}],
"name": "a","type": "tuple[2]"}]`,
unpacked: [2]struct {
A []*big.Int
}{
{[]*big.Int{big.NewInt(-1), big.NewInt(1)}},
{[]*big.Int{big.NewInt(1), big.NewInt(-1)}},
},
packed: "0000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000c0" + // tuple[1] offset
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[0].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].A length
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A[0]
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].A[1]
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[1].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].A length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A[0]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // tuple[1].A[1]
},
}
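Each vector above pairs an ABI argument definition (def) with its expected Go value (unpacked) and its canonical hex encoding (packed). A minimal sketch, from outside the package, of how one such vector could be checked through the public JSON/Pack API; the wrapper method name "method" and the chosen uint256 vector are only illustrative:

package main

import (
	"bytes"
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// Wrap the vector's argument definition as the inputs of a throwaway method,
	// since the public API packs arguments per method.
	def := `[{"name":"method","type":"function","inputs":[{"type":"uint256"}]}]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	packed, err := parsed.Pack("method", big.NewInt(2))
	if err != nil {
		panic(err)
	}
	want := common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")
	// Pack prepends the 4-byte method selector, which the vectors above do not include.
	fmt.Println(bytes.Equal(packed[4:], want)) // true
}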

accounts/abi/reflect.go

@@ -17,7 +17,9 @@
package abi
import (
"errors"
"fmt"
"math/big"
"reflect"
"strings"
)
@@ -25,46 +27,38 @@ import (
// indirect recursively dereferences the value until it either gets the value
// or finds a big.Int
func indirect(v reflect.Value) reflect.Value {
- if v.Kind() == reflect.Ptr && v.Elem().Type() != derefbigT {
+ if v.Kind() == reflect.Ptr && v.Elem().Type() != reflect.TypeOf(big.Int{}) {
return indirect(v.Elem())
}
return v
}
// indirectInterfaceOrPtr recursively dereferences the value until value is not interface.
func indirectInterfaceOrPtr(v reflect.Value) reflect.Value {
if (v.Kind() == reflect.Interface || v.Kind() == reflect.Ptr) && v.Elem().IsValid() {
return indirect(v.Elem())
}
return v
}
- // reflectIntKind returns the reflect using the given size and
+ // reflectIntType returns the reflect using the given size and
// unsignedness.
- func reflectIntKindAndType(unsigned bool, size int) (reflect.Kind, reflect.Type) {
+ func reflectIntType(unsigned bool, size int) reflect.Type {
+ if unsigned {
+ switch size {
+ case 8:
+ return reflect.TypeOf(uint8(0))
+ case 16:
+ return reflect.TypeOf(uint16(0))
+ case 32:
+ return reflect.TypeOf(uint32(0))
+ case 64:
+ return reflect.TypeOf(uint64(0))
+ }
+ }
switch size {
case 8:
- if unsigned {
- return reflect.Uint8, uint8T
- }
- return reflect.Int8, int8T
+ return reflect.TypeOf(int8(0))
case 16:
- if unsigned {
- return reflect.Uint16, uint16T
- }
- return reflect.Int16, int16T
+ return reflect.TypeOf(int16(0))
case 32:
- if unsigned {
- return reflect.Uint32, uint32T
- }
- return reflect.Int32, int32T
+ return reflect.TypeOf(int32(0))
case 64:
- if unsigned {
- return reflect.Uint64, uint64T
- }
- return reflect.Int64, int64T
+ return reflect.TypeOf(int64(0))
}
- return reflect.Ptr, bigT
+ return reflect.TypeOf(&big.Int{})
}
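In practice only bit widths with an exact native Go counterpart (8, 16, 32 and 64) map to uint8/int8 through uint64/int64; any other width, such as uint17 or int24, falls through both switches and is represented as *big.Int, which is why the uint17 and int17 vectors above unpack into big.NewInt values.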
// mustArrayToBytesSlice creates a new byte slice with the exact same size as value
@@ -84,12 +78,16 @@ func set(dst, src reflect.Value) error {
switch {
case dstType.Kind() == reflect.Interface && dst.Elem().IsValid():
return set(dst.Elem(), src)
- case dstType.Kind() == reflect.Ptr && dstType.Elem() != derefbigT:
+ case dstType.Kind() == reflect.Ptr && dstType.Elem() != reflect.TypeOf(big.Int{}):
return set(dst.Elem(), src)
case srcType.AssignableTo(dstType) && dst.CanSet():
dst.Set(src)
- case dstType.Kind() == reflect.Slice && srcType.Kind() == reflect.Slice:
+ case dstType.Kind() == reflect.Slice && srcType.Kind() == reflect.Slice && dst.CanSet():
return setSlice(dst, src)
+ case dstType.Kind() == reflect.Array:
+ return setArray(dst, src)
+ case dstType.Kind() == reflect.Struct:
+ return setStruct(dst, src)
default:
return fmt.Errorf("abi: cannot unmarshal %v in to %v", src.Type(), dst.Type())
}
@@ -98,38 +96,56 @@ func set(dst, src reflect.Value) error {
// setSlice attempts to assign src to dst when slices are not assignable by default
// e.g. src: [][]byte -> dst: [][15]byte
// setSlice ignores if we cannot copy all of src' elements.
func setSlice(dst, src reflect.Value) error {
slice := reflect.MakeSlice(dst.Type(), src.Len(), src.Len())
for i := 0; i < src.Len(); i++ {
- v := src.Index(i)
- reflect.Copy(slice.Index(i), v)
+ if src.Index(i).Kind() == reflect.Struct {
+ if err := set(slice.Index(i), src.Index(i)); err != nil {
+ return err
+ }
+ } else {
+ // e.g. [][32]uint8 to []common.Hash
+ if err := set(slice.Index(i), src.Index(i)); err != nil {
+ return err
+ }
+ }
}
- dst.Set(slice)
- return nil
+ if dst.CanSet() {
+ dst.Set(slice)
+ return nil
+ }
+ return errors.New("Cannot set slice, destination not settable")
}
- // requireAssignable assures that `dest` is a pointer and it's not an interface.
- func requireAssignable(dst, src reflect.Value) error {
- if dst.Kind() != reflect.Ptr && dst.Kind() != reflect.Interface {
- return fmt.Errorf("abi: cannot unmarshal %v into %v", src.Type(), dst.Type())
- }
- return nil
- }
- // requireUnpackKind verifies preconditions for unpacking `args` into `kind`
- func requireUnpackKind(v reflect.Value, t reflect.Type, k reflect.Kind,
- args Arguments) error {
- switch k {
- case reflect.Struct:
- case reflect.Slice, reflect.Array:
- if minLen := args.LengthNonIndexed(); v.Len() < minLen {
- return fmt.Errorf("abi: insufficient number of elements in the list/array for unpack, want %d, got %d",
- minLen, v.Len())
- }
- default:
- return fmt.Errorf("abi: cannot unmarshal tuple into %v", t)
- }
- return nil
- }
+ func setArray(dst, src reflect.Value) error {
+ array := reflect.New(dst.Type()).Elem()
+ min := src.Len()
+ if src.Len() > dst.Len() {
+ min = dst.Len()
+ }
+ for i := 0; i < min; i++ {
+ if err := set(array.Index(i), src.Index(i)); err != nil {
+ return err
+ }
+ }
+ if dst.CanSet() {
+ dst.Set(array)
+ return nil
+ }
+ return errors.New("Cannot set array, destination not settable")
+ }
+ func setStruct(dst, src reflect.Value) error {
+ for i := 0; i < src.NumField(); i++ {
+ srcField := src.Field(i)
+ dstField := dst.Field(i)
+ if !dstField.IsValid() || !srcField.IsValid() {
+ return fmt.Errorf("Could not find src field: %v value: %v in destination", srcField.Type().Name(), srcField)
+ }
+ if err := set(dstField, srcField); err != nil {
+ return err
+ }
+ }
+ return nil
+ }
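The setSlice comment above calls out conversions such as [][32]uint8 into []common.Hash. A small standalone sketch of that element-wise copy technique (it re-implements the idea with plain reflect rather than calling the unexported helper):

package main

import (
	"fmt"
	"reflect"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// A [][32]byte source is not directly assignable to a []common.Hash
	// destination, even though common.Hash is itself a [32]byte.
	src := [][32]byte{{1}, {2}}
	var dst []common.Hash

	dstVal := reflect.ValueOf(&dst).Elem()
	srcVal := reflect.ValueOf(src)

	// Build a slice of the destination element type and copy element by element.
	slice := reflect.MakeSlice(dstVal.Type(), srcVal.Len(), srcVal.Len())
	for i := 0; i < srcVal.Len(); i++ {
		reflect.Copy(slice.Index(i), srcVal.Index(i))
	}
	dstVal.Set(slice)
	fmt.Println(dst[0].Hex()) // 0x0100...0000
}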
@@ -156,9 +172,8 @@ func mapArgNamesToStructFields(argNames []string, value reflect.Value) (map[stri
continue
}
// skip fields that have no abi:"" tag.
- var ok bool
- var tagName string
- if tagName, ok = typ.Field(i).Tag.Lookup("abi"); !ok {
+ tagName, ok := typ.Field(i).Tag.Lookup("abi")
+ if !ok {
continue
}
// check if tag is empty.

accounts/abi/topics.go (new file, 173 lines)

@@ -0,0 +1,173 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"encoding/binary"
"errors"
"fmt"
"math/big"
"reflect"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
// MakeTopics converts a filter query argument list into a filter topic set.
func MakeTopics(query ...[]interface{}) ([][]common.Hash, error) {
topics := make([][]common.Hash, len(query))
for i, filter := range query {
for _, rule := range filter {
var topic common.Hash
// Try to generate the topic based on simple types
switch rule := rule.(type) {
case common.Hash:
copy(topic[:], rule[:])
case common.Address:
copy(topic[common.HashLength-common.AddressLength:], rule[:])
case *big.Int:
blob := rule.Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case bool:
if rule {
topic[common.HashLength-1] = 1
}
case int8:
copy(topic[:], genIntType(int64(rule), 1))
case int16:
copy(topic[:], genIntType(int64(rule), 2))
case int32:
copy(topic[:], genIntType(int64(rule), 4))
case int64:
copy(topic[:], genIntType(rule, 8))
case uint8:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint16:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint32:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint64:
blob := new(big.Int).SetUint64(rule).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case string:
hash := crypto.Keccak256Hash([]byte(rule))
copy(topic[:], hash[:])
case []byte:
hash := crypto.Keccak256Hash(rule)
copy(topic[:], hash[:])
default:
// todo(rjl493456442) according to the Solidity documentation, indexed event
// parameters that are not value types (i.e. arrays and structs) are not
// stored directly; instead a keccak256 hash of their encoding is stored.
//
// We only convert strings and bytes to hashes; we still need to deal with
// arrays (both fixed-size and dynamic-size) and structs.
// Attempt to generate the topic from funky types
val := reflect.ValueOf(rule)
switch {
// static byte array
case val.Kind() == reflect.Array && reflect.TypeOf(rule).Elem().Kind() == reflect.Uint8:
reflect.Copy(reflect.ValueOf(topic[:val.Len()]), val)
default:
return nil, fmt.Errorf("unsupported indexed type: %T", rule)
}
}
topics[i] = append(topics[i], topic)
}
}
return topics, nil
}
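A brief usage sketch for MakeTopics, building a two-position filter from an address and two string alternatives; the specific address and strings are arbitrary examples:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	sender := common.HexToAddress("0x0100000000000000000000000000000000000000")
	// One query slice per indexed argument position; multiple values within a
	// position act as alternatives for the log filter.
	topics, err := abi.MakeTopics(
		[]interface{}{sender},
		[]interface{}{"transfer", "approve"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(topics[0][0].Hex()) // the address, left-padded to 32 bytes
	fmt.Println(topics[1][0].Hex()) // keccak256("transfer")
}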
func genIntType(rule int64, size uint) []byte {
var topic [common.HashLength]byte
if rule < 0 {
// if a rule is negative, we need to put it into two's complement form,
// sign-extended to common.HashLength bytes.
topic = [common.HashLength]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}
}
for i := uint(0); i < size; i++ {
topic[common.HashLength-i-1] = byte(rule >> (i * 8))
}
return topic[:]
}
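As a concrete example, genIntType(-2, 1) first fills the whole 32-byte topic with 0xff because the value is negative, then writes the low byte 0xfe into the last position, producing the all-ones hash ending in ...fe that the int8(-2) case in the tests below expects; non-negative values leave the leading bytes zero.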
// ParseTopics converts the indexed topic fields into actual log field values.
func ParseTopics(out interface{}, fields Arguments, topics []common.Hash) error {
return parseTopicWithSetter(fields, topics,
func(arg Argument, reconstr interface{}) {
field := reflect.ValueOf(out).Elem().FieldByName(ToCamelCase(arg.Name))
field.Set(reflect.ValueOf(reconstr))
})
}
// ParseTopicsIntoMap converts the indexed topic field-value pairs into map key-value pairs
func ParseTopicsIntoMap(out map[string]interface{}, fields Arguments, topics []common.Hash) error {
return parseTopicWithSetter(fields, topics,
func(arg Argument, reconstr interface{}) {
out[arg.Name] = reconstr
})
}
// parseTopicWithSetter converts the indexed topic field-value pairs and stores them using the
// provided set function.
//
// Note, dynamic types cannot be reconstructed since they get mapped to Keccak256
// hashes as the topic value!
func parseTopicWithSetter(fields Arguments, topics []common.Hash, setter func(Argument, interface{})) error {
// Sanity check that the fields and topics match up
if len(fields) != len(topics) {
return errors.New("topic/field count mismatch")
}
// Iterate over all the fields and reconstruct them from topics
for i, arg := range fields {
if !arg.Indexed {
return errors.New("non-indexed field in topic reconstruction")
}
var reconstr interface{}
switch arg.Type.T {
case TupleTy:
return errors.New("tuple type in topic reconstruction")
case StringTy, BytesTy, SliceTy, ArrayTy:
// Array types (including strings and bytes) have their keccak256 hashes stored in the topic, not a hash
// whose bytes can be decoded to the actual value, so the best we can do is retrieve that hash
reconstr = topics[i]
case FunctionTy:
if garbage := binary.BigEndian.Uint64(topics[i][0:8]); garbage != 0 {
return fmt.Errorf("bind: got improperly encoded function type, got %v", topics[i].Bytes())
}
var tmp [24]byte
copy(tmp[:], topics[i][8:32])
reconstr = tmp
default:
var err error
reconstr, err = toGoType(0, arg.Type, topics[i].Bytes())
if err != nil {
return err
}
}
// Use the setter function to store the value
setter(arg, reconstr)
}
return nil
}
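A short sketch of decoding indexed log topics back into Go values with ParseTopicsIntoMap; the single uint256 field named "value" is just an illustrative argument list:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	uint256Type, err := abi.NewType("uint256", "", nil)
	if err != nil {
		panic(err)
	}
	fields := abi.Arguments{{Name: "value", Type: uint256Type, Indexed: true}}

	// Topic as it would appear in a log: the uint256 value 5, left-padded to 32 bytes.
	topic := common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000005")

	out := map[string]interface{}{}
	if err := abi.ParseTopicsIntoMap(out, fields, []common.Hash{topic}); err != nil {
		panic(err)
	}
	fmt.Println(out["value"].(*big.Int)) // 5
}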

accounts/abi/topics_test.go (new file, 381 lines)

@@ -0,0 +1,381 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"math/big"
"reflect"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
func TestMakeTopics(t *testing.T) {
type args struct {
query [][]interface{}
}
tests := []struct {
name string
args args
want [][]common.Hash
wantErr bool
}{
{
"support fixed byte types, right padded to 32 bytes",
args{[][]interface{}{{[5]byte{1, 2, 3, 4, 5}}}},
[][]common.Hash{{common.Hash{1, 2, 3, 4, 5}}},
false,
},
{
"support common hash types in topics",
args{[][]interface{}{{common.Hash{1, 2, 3, 4, 5}}}},
[][]common.Hash{{common.Hash{1, 2, 3, 4, 5}}},
false,
},
{
"support address types in topics",
args{[][]interface{}{{common.Address{1, 2, 3, 4, 5}}}},
[][]common.Hash{{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5}}},
false,
},
{
"support *big.Int types in topics",
args{[][]interface{}{{big.NewInt(1).Lsh(big.NewInt(2), 254)}}},
[][]common.Hash{{common.Hash{128}}},
false,
},
{
"support boolean types in topics",
args{[][]interface{}{
{true},
{false},
}},
[][]common.Hash{
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}},
{common.Hash{0}},
},
false,
},
{
"support int/uint(8/16/32/64) types in topics",
args{[][]interface{}{
{int8(-2)},
{int16(-3)},
{int32(-4)},
{int64(-5)},
{int8(1)},
{int16(256)},
{int32(65536)},
{int64(4294967296)},
{uint8(1)},
{uint16(256)},
{uint32(65536)},
{uint64(4294967296)},
}},
[][]common.Hash{
{common.Hash{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 254}},
{common.Hash{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 253}},
{common.Hash{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 252}},
{common.Hash{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 251}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0}},
{common.Hash{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0}},
},
false,
},
{
"support string types in topics",
args{[][]interface{}{{"hello world"}}},
[][]common.Hash{{crypto.Keccak256Hash([]byte("hello world"))}},
false,
},
{
"support byte slice types in topics",
args{[][]interface{}{{[]byte{1, 2, 3}}}},
[][]common.Hash{{crypto.Keccak256Hash([]byte{1, 2, 3})}},
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := MakeTopics(tt.args.query...)
if (err != nil) != tt.wantErr {
t.Errorf("makeTopics() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("makeTopics() = %v, want %v", got, tt.want)
}
})
}
}
type args struct {
createObj func() interface{}
resultObj func() interface{}
resultMap func() map[string]interface{}
fields Arguments
topics []common.Hash
}
type bytesStruct struct {
StaticBytes [5]byte
}
type int8Struct struct {
Int8Value int8
}
type int256Struct struct {
Int256Value *big.Int
}
type hashStruct struct {
HashValue common.Hash
}
type funcStruct struct {
FuncValue [24]byte
}
type topicTest struct {
name string
args args
wantErr bool
}
func setupTopicsTests() []topicTest {
bytesType, _ := NewType("bytes5", "", nil)
int8Type, _ := NewType("int8", "", nil)
int256Type, _ := NewType("int256", "", nil)
tupleType, _ := NewType("tuple(int256,int8)", "", nil)
stringType, _ := NewType("string", "", nil)
funcType, _ := NewType("function", "", nil)
tests := []topicTest{
{
name: "support fixed byte types, right padded to 32 bytes",
args: args{
createObj: func() interface{} { return &bytesStruct{} },
resultObj: func() interface{} { return &bytesStruct{StaticBytes: [5]byte{1, 2, 3, 4, 5}} },
resultMap: func() map[string]interface{} {
return map[string]interface{}{"staticBytes": [5]byte{1, 2, 3, 4, 5}}
},
fields: Arguments{Argument{
Name: "staticBytes",
Type: bytesType,
Indexed: true,
}},
topics: []common.Hash{
{1, 2, 3, 4, 5},
},
},
wantErr: false,
},
{
name: "int8 with negative value",
args: args{
createObj: func() interface{} { return &int8Struct{} },
resultObj: func() interface{} { return &int8Struct{Int8Value: -1} },
resultMap: func() map[string]interface{} {
return map[string]interface{}{"int8Value": int8(-1)}
},
fields: Arguments{Argument{
Name: "int8Value",
Type: int8Type,
Indexed: true,
}},
topics: []common.Hash{
{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: false,
},
{
name: "int256 with negative value",
args: args{
createObj: func() interface{} { return &int256Struct{} },
resultObj: func() interface{} { return &int256Struct{Int256Value: big.NewInt(-1)} },
resultMap: func() map[string]interface{} {
return map[string]interface{}{"int256Value": big.NewInt(-1)}
},
fields: Arguments{Argument{
Name: "int256Value",
Type: int256Type,
Indexed: true,
}},
topics: []common.Hash{
{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: false,
},
{
name: "hash type",
args: args{
createObj: func() interface{} { return &hashStruct{} },
resultObj: func() interface{} { return &hashStruct{crypto.Keccak256Hash([]byte("stringtopic"))} },
resultMap: func() map[string]interface{} {
return map[string]interface{}{"hashValue": crypto.Keccak256Hash([]byte("stringtopic"))}
},
fields: Arguments{Argument{
Name: "hashValue",
Type: stringType,
Indexed: true,
}},
topics: []common.Hash{
crypto.Keccak256Hash([]byte("stringtopic")),
},
},
wantErr: false,
},
{
name: "function type",
args: args{
createObj: func() interface{} { return &funcStruct{} },
resultObj: func() interface{} {
return &funcStruct{[24]byte{255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}}
},
resultMap: func() map[string]interface{} {
return map[string]interface{}{"funcValue": [24]byte{255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}}
},
fields: Arguments{Argument{
Name: "funcValue",
Type: funcType,
Indexed: true,
}},
topics: []common.Hash{
{0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: false,
},
{
name: "error on topic/field count mismatch",
args: args{
createObj: func() interface{} { return nil },
resultObj: func() interface{} { return nil },
resultMap: func() map[string]interface{} { return make(map[string]interface{}) },
fields: Arguments{Argument{
Name: "tupletype",
Type: tupleType,
Indexed: true,
}},
topics: []common.Hash{},
},
wantErr: true,
},
{
name: "error on unindexed arguments",
args: args{
createObj: func() interface{} { return &int256Struct{} },
resultObj: func() interface{} { return &int256Struct{} },
resultMap: func() map[string]interface{} { return make(map[string]interface{}) },
fields: Arguments{Argument{
Name: "int256Value",
Type: int256Type,
Indexed: false,
}},
topics: []common.Hash{
{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: true,
},
{
name: "error on tuple in topic reconstruction",
args: args{
createObj: func() interface{} { return &tupleType },
resultObj: func() interface{} { return &tupleType },
resultMap: func() map[string]interface{} { return make(map[string]interface{}) },
fields: Arguments{Argument{
Name: "tupletype",
Type: tupleType,
Indexed: true,
}},
topics: []common.Hash{{0}},
},
wantErr: true,
},
{
name: "error on improper encoded function",
args: args{
createObj: func() interface{} { return &funcStruct{} },
resultObj: func() interface{} { return &funcStruct{} },
resultMap: func() map[string]interface{} {
return make(map[string]interface{})
},
fields: Arguments{Argument{
Name: "funcValue",
Type: funcType,
Indexed: true,
}},
topics: []common.Hash{
{0, 0, 0, 0, 0, 0, 0, 128, 255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255},
},
},
wantErr: true,
},
}
return tests
}
func TestParseTopics(t *testing.T) {
tests := setupTopicsTests()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
createObj := tt.args.createObj()
if err := ParseTopics(createObj, tt.args.fields, tt.args.topics); (err != nil) != tt.wantErr {
t.Errorf("parseTopics() error = %v, wantErr %v", err, tt.wantErr)
}
resultObj := tt.args.resultObj()
if !reflect.DeepEqual(createObj, resultObj) {
t.Errorf("parseTopics() = %v, want %v", createObj, resultObj)
}
})
}
}
func TestParseTopicsIntoMap(t *testing.T) {
tests := setupTopicsTests()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
outMap := make(map[string]interface{})
if err := ParseTopicsIntoMap(outMap, tt.args.fields, tt.args.topics); (err != nil) != tt.wantErr {
t.Errorf("parseTopicsIntoMap() error = %v, wantErr %v", err, tt.wantErr)
}
resultMap := tt.args.resultMap()
if !reflect.DeepEqual(outMap, resultMap) {
t.Errorf("parseTopicsIntoMap() = %v, want %v", outMap, resultMap)
}
})
}
}
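The two parse helpers exercised above reverse MakeTopics; a rough round-trip sketch, assuming the exported signatures shown in these tests (the argument name "sender" is made up):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// Hypothetical single indexed argument of type address.
	addrType, _ := abi.NewType("address", "", nil)
	fields := abi.Arguments{{Name: "sender", Type: addrType, Indexed: true}}
	sender := common.HexToAddress("0x0000000000000000000000000000000000000001")

	// Encode the indexed value into a topic, then decode it back into a map.
	topics, err := abi.MakeTopics([]interface{}{sender})
	if err != nil {
		panic(err)
	}
	out := make(map[string]interface{})
	if err := abi.ParseTopicsIntoMap(out, fields, topics[0]); err != nil {
		panic(err)
	}
	fmt.Println(out["sender"]) // the original address again
}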

View File

@@ -23,6 +23,8 @@ import (
"regexp"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/common"
)
// Type enumerator
@@ -45,16 +47,16 @@ const (
// Type is the reflection of the supported argument type
type Type struct {
Elem *Type
Kind reflect.Kind
Type reflect.Type
Size int
T byte // Our own type checking
stringKind string // holds the unparsed string for deriving signatures
// Tuple relative fields
TupleElems []*Type // Type information of all tuple fields
TupleRawNames []string // Raw field name of all tuple fields
TupleRawName string // Raw struct name defined in source code, may be empty.
TupleElems []*Type // Type information of all tuple fields
TupleRawNames []string // Raw field name of all tuple fields
TupleType reflect.Type // Underlying struct of the tuple
}
var (
@@ -63,7 +65,7 @@ var (
)
// NewType creates a new reflection type of abi type given in t.
func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
func NewType(t string, internalType string, components []ArgumentMarshaling) (typ Type, err error) {
// check that array brackets are equal if they exist
if strings.Count(t, "[") != strings.Count(t, "]") {
return Type{}, fmt.Errorf("invalid arg type in abi")
@@ -73,9 +75,14 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
// if there are brackets, get ready to go into slice/array mode and
// recursively create the type
if strings.Count(t, "[") != 0 {
i := strings.LastIndex(t, "[")
// Note internalType can be empty here.
subInternal := internalType
if i := strings.LastIndex(internalType, "["); i != -1 {
subInternal = subInternal[:i]
}
// recursively embed the type
embeddedType, err := NewType(t[:i], components)
i := strings.LastIndex(t, "[")
embeddedType, err := NewType(t[:i], subInternal, components)
if err != nil {
return Type{}, err
}
@@ -88,20 +95,16 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
if len(intz) == 0 {
// is a slice
typ.T = SliceTy
typ.Kind = reflect.Slice
typ.Elem = &embeddedType
typ.Type = reflect.SliceOf(embeddedType.Type)
typ.stringKind = embeddedType.stringKind + sliced
} else if len(intz) == 1 {
// is a array
// is an array
typ.T = ArrayTy
typ.Kind = reflect.Array
typ.Elem = &embeddedType
typ.Size, err = strconv.Atoi(intz[0])
if err != nil {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
typ.Type = reflect.ArrayOf(typ.Size, embeddedType.Type)
typ.stringKind = embeddedType.stringKind + sliced
} else {
return Type{}, fmt.Errorf("invalid formatting of array type")
@@ -133,36 +136,24 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
// varType is the parsed abi type
switch varType := parsedType[1]; varType {
case "int":
typ.Kind, typ.Type = reflectIntKindAndType(false, varSize)
typ.Size = varSize
typ.T = IntTy
case "uint":
typ.Kind, typ.Type = reflectIntKindAndType(true, varSize)
typ.Size = varSize
typ.T = UintTy
case "bool":
typ.Kind = reflect.Bool
typ.T = BoolTy
typ.Type = reflect.TypeOf(bool(false))
case "address":
typ.Kind = reflect.Array
typ.Type = addressT
typ.Size = 20
typ.T = AddressTy
case "string":
typ.Kind = reflect.String
typ.Type = reflect.TypeOf("")
typ.T = StringTy
case "bytes":
if varSize == 0 {
typ.T = BytesTy
typ.Kind = reflect.Slice
typ.Type = reflect.SliceOf(reflect.TypeOf(byte(0)))
} else {
typ.T = FixedBytesTy
typ.Kind = reflect.Array
typ.Size = varSize
typ.Type = reflect.ArrayOf(varSize, reflect.TypeOf(byte(0)))
}
case "tuple":
var (
@@ -172,17 +163,20 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
expression string // canonical parameter expression
)
expression += "("
overloadedNames := make(map[string]string)
for idx, c := range components {
cType, err := NewType(c.Type, c.Components)
cType, err := NewType(c.Type, c.InternalType, c.Components)
if err != nil {
return Type{}, err
}
if ToCamelCase(c.Name) == "" {
return Type{}, errors.New("abi: purely anonymous or underscored field is not supported")
fieldName, err := overloadedArgName(c.Name, overloadedNames)
if err != nil {
return Type{}, err
}
overloadedNames[fieldName] = fieldName
fields = append(fields, reflect.StructField{
Name: ToCamelCase(c.Name), // reflect.StructOf will panic for any exported field.
Type: cType.Type,
Name: fieldName, // reflect.StructOf will panic for any exported field.
Type: cType.GetType(),
Tag: reflect.StructTag("json:\"" + c.Name + "\""),
})
elems = append(elems, &cType)
@@ -193,17 +187,26 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
}
}
expression += ")"
typ.Kind = reflect.Struct
typ.Type = reflect.StructOf(fields)
typ.TupleType = reflect.StructOf(fields)
typ.TupleElems = elems
typ.TupleRawNames = names
typ.T = TupleTy
typ.stringKind = expression
const structPrefix = "struct "
// After solidity 0.5.10, a new field of abi "internalType"
// is introduced. From that we can obtain the struct name
// user defined in the source code.
if internalType != "" && strings.HasPrefix(internalType, structPrefix) {
// Foo.Bar type definition is not allowed in golang,
// convert the format to FooBar
typ.TupleRawName = strings.Replace(internalType[len(structPrefix):], ".", "", -1)
}
case "function":
typ.Kind = reflect.Array
typ.T = FunctionTy
typ.Size = 24
typ.Type = reflect.ArrayOf(24, reflect.TypeOf(byte(0)))
default:
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
@@ -211,6 +214,56 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
return
}
// GetType returns the reflection type of the ABI type.
func (t Type) GetType() reflect.Type {
switch t.T {
case IntTy:
return reflectIntType(false, t.Size)
case UintTy:
return reflectIntType(true, t.Size)
case BoolTy:
return reflect.TypeOf(false)
case StringTy:
return reflect.TypeOf("")
case SliceTy:
return reflect.SliceOf(t.Elem.GetType())
case ArrayTy:
return reflect.ArrayOf(t.Size, t.Elem.GetType())
case TupleTy:
return t.TupleType
case AddressTy:
return reflect.TypeOf(common.Address{})
case FixedBytesTy:
return reflect.ArrayOf(t.Size, reflect.TypeOf(byte(0)))
case BytesTy:
return reflect.SliceOf(reflect.TypeOf(byte(0)))
case HashTy:
// hashtype currently not used
return reflect.ArrayOf(32, reflect.TypeOf(byte(0)))
case FixedPointTy:
// fixedpoint type currently not used
return reflect.ArrayOf(32, reflect.TypeOf(byte(0)))
case FunctionTy:
return reflect.ArrayOf(24, reflect.TypeOf(byte(0)))
default:
panic("Invalid type")
}
}
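GetType derives the reflect.Type on demand, replacing the removed Kind/Type fields; a small illustrative sketch of using it to allocate a decode destination:

package main

import (
	"fmt"
	"reflect"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	arrType, _ := abi.NewType("uint64[3]", "", nil)
	// Allocate a value of the Go type backing the ABI type.
	dst := reflect.New(arrType.GetType()).Elem()
	fmt.Println(dst.Type()) // [3]uint64
}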
func overloadedArgName(rawName string, names map[string]string) (string, error) {
fieldName := ToCamelCase(rawName)
if fieldName == "" {
return "", errors.New("abi: purely anonymous or underscored field is not supported")
}
// Handle overloaded fieldNames
_, ok := names[fieldName]
for idx := 0; ok; idx++ {
fieldName = fmt.Sprintf("%s%d", ToCamelCase(rawName), idx)
_, ok = names[fieldName]
}
return fieldName, nil
}
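overloadedArgName disambiguates tuple components whose camel-cased names collide by appending an index; an illustrative sketch (the component names are made up):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// "a" and "_a" both camel-case to "A"; the second struct field becomes "A0".
	tup, err := abi.NewType("tuple", "", []abi.ArgumentMarshaling{
		{Name: "a", Type: "uint256"},
		{Name: "_a", Type: "uint256"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(tup.GetType()) // a struct type with fields A and A0
}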
// String implements Stringer
func (t Type) String() (out string) {
return t.stringKind

View File

@@ -36,58 +36,58 @@ func TestTypeRegexp(t *testing.T) {
components []ArgumentMarshaling
kind Type
}{
{"bool", nil, Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}},
{"bool[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool(nil)), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", nil, Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][][2]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][2][]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", nil, Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", nil, Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", nil, Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", nil, Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", nil, Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", nil, Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", nil, Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", nil, Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", nil, Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", nil, Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", nil, Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]*big.Int{}), Size: 2, Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", nil, Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}},
{"bytes[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]byte{}), Elem: &Type{Kind: reflect.Slice, Type: reflect.TypeOf([]byte{}), T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]byte{}), Elem: &Type{T: BytesTy, Type: reflect.TypeOf([]byte{}), Kind: reflect.Slice, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][32]byte{}), Elem: &Type{Kind: reflect.Array, Type: reflect.TypeOf([32]byte{}), T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][32]byte{}), Elem: &Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", nil, Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}},
{"string[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]string{}), Elem: &Type{Kind: reflect.String, Type: reflect.TypeOf(""), T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]string{}), Elem: &Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}, stringKind: "string[2]"}},
{"address", nil, Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
{"bool", nil, Type{T: BoolTy, stringKind: "bool"}},
{"bool[]", nil, Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", nil, Type{Size: 2, T: ArrayTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", nil, Type{T: SliceTy, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", nil, Type{T: SliceTy, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: SliceTy, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", nil, Type{T: SliceTy, Elem: &Type{T: SliceTy, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", nil, Type{T: SliceTy, Elem: &Type{T: ArrayTy, Size: 2, Elem: &Type{T: SliceTy, Elem: &Type{T: BoolTy, stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", nil, Type{Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", nil, Type{Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", nil, Type{Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", nil, Type{Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", nil, Type{Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", nil, Type{T: SliceTy, Elem: &Type{Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", nil, Type{T: SliceTy, Elem: &Type{Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", nil, Type{Size: 2, T: ArrayTy, Elem: &Type{Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", nil, Type{T: SliceTy, Elem: &Type{Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", nil, Type{T: SliceTy, Elem: &Type{Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", nil, Type{T: SliceTy, Elem: &Type{Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", nil, Type{Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", nil, Type{Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", nil, Type{Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", nil, Type{Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", nil, Type{Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", nil, Type{T: SliceTy, Elem: &Type{Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", nil, Type{T: SliceTy, Elem: &Type{Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", nil, Type{T: SliceTy, Elem: &Type{Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", nil, Type{T: SliceTy, Elem: &Type{Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", nil, Type{T: SliceTy, Elem: &Type{Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", nil, Type{T: FixedBytesTy, Size: 32, stringKind: "bytes32"}},
{"bytes[]", nil, Type{T: SliceTy, Elem: &Type{T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", nil, Type{T: SliceTy, Elem: &Type{T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", nil, Type{T: StringTy, stringKind: "string"}},
{"string[]", nil, Type{T: SliceTy, Elem: &Type{T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{T: StringTy, stringKind: "string"}, stringKind: "string[2]"}},
{"address", nil, Type{Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", nil, Type{T: SliceTy, Elem: &Type{Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", nil, Type{T: ArrayTy, Size: 2, Elem: &Type{Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
// TODO when fixed types are implemented properly
// {"fixed", nil, Type{}},
// {"fixed128x128", nil, Type{}},
@@ -95,18 +95,18 @@ func TestTypeRegexp(t *testing.T) {
// {"fixed[2]", nil, Type{}},
// {"fixed128x128[]", nil, Type{}},
// {"fixed128x128[2]", nil, Type{}},
{"tuple", []ArgumentMarshaling{{Name: "a", Type: "int64"}}, Type{Kind: reflect.Struct, T: TupleTy, Type: reflect.TypeOf(struct {
{"tuple", []ArgumentMarshaling{{Name: "a", Type: "int64"}}, Type{T: TupleTy, TupleType: reflect.TypeOf(struct {
A int64 `json:"a"`
}{}), stringKind: "(int64)",
TupleElems: []*Type{{Kind: reflect.Int64, T: IntTy, Type: reflect.TypeOf(int64(0)), Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"a"}}},
{"tuple with long name", []ArgumentMarshaling{{Name: "aTypicalParamName", Type: "int64"}}, Type{Kind: reflect.Struct, T: TupleTy, Type: reflect.TypeOf(struct {
TupleElems: []*Type{{T: IntTy, Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"a"}}},
{"tuple with long name", []ArgumentMarshaling{{Name: "aTypicalParamName", Type: "int64"}}, Type{T: TupleTy, TupleType: reflect.TypeOf(struct {
ATypicalParamName int64 `json:"aTypicalParamName"`
}{}), stringKind: "(int64)",
TupleElems: []*Type{{Kind: reflect.Int64, T: IntTy, Type: reflect.TypeOf(int64(0)), Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"aTypicalParamName"}}},
TupleElems: []*Type{{T: IntTy, Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"aTypicalParamName"}}},
}
for _, tt := range tests {
typ, err := NewType(tt.blob, tt.components)
typ, err := NewType(tt.blob, "", tt.components)
if err != nil {
t.Errorf("type %q: failed to parse type string: %v", tt.blob, err)
}
@@ -281,7 +281,7 @@ func TestTypeCheck(t *testing.T) {
B *big.Int
}{{big.NewInt(0), big.NewInt(0)}, {big.NewInt(0), big.NewInt(0)}}, ""},
} {
typ, err := NewType(test.typ, test.components)
typ, err := NewType(test.typ, "", test.components)
if err != nil && len(test.err) == 0 {
t.Fatal("unexpected parse error:", err)
} else if err != nil && len(test.err) != 0 {
@@ -306,3 +306,27 @@ func TestTypeCheck(t *testing.T) {
}
}
}
func TestInternalType(t *testing.T) {
components := []ArgumentMarshaling{{Name: "a", Type: "int64"}}
internalType := "struct a.b[]"
kind := Type{
T: TupleTy,
TupleType: reflect.TypeOf(struct {
A int64 `json:"a"`
}{}),
stringKind: "(int64)",
TupleRawName: "ab[]",
TupleElems: []*Type{{T: IntTy, Size: 64, stringKind: "int64"}},
TupleRawNames: []string{"a"},
}
blob := "tuple"
typ, err := NewType(blob, internalType, components)
if err != nil {
t.Errorf("type %q: failed to parse type string: %v", blob, err)
}
if !reflect.DeepEqual(typ, kind) {
t.Errorf("type %q: parsed type mismatch:\nGOT %s\nWANT %s ", blob, spew.Sdump(typeWithoutStringer(typ)), spew.Sdump(typeWithoutStringer(kind)))
}
}
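In practice the internalType string comes from ABI JSON emitted by Solidity 0.5.10 and later; a hedged sketch (contract and struct names are invented, and this assumes the JSON unmarshalling forwards InternalType into NewType as in this change) of how it surfaces as TupleRawName:

package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	const def = `[{"name":"set","type":"function","inputs":[
	  {"name":"val","type":"tuple","internalType":"struct Example.Point","components":[
	    {"name":"x","type":"uint256","internalType":"uint256"},
	    {"name":"y","type":"uint256","internalType":"uint256"}]}]}]`

	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	// "struct Example.Point" -> strip the prefix, drop the dot -> "ExamplePoint".
	fmt.Println(parsed.Methods["set"].Inputs[0].Type.TupleRawName)
}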

View File

@@ -26,45 +26,47 @@ import (
)
var (
maxUint256 = big.NewInt(0).Add(
big.NewInt(0).Exp(big.NewInt(2), big.NewInt(256), nil),
big.NewInt(-1))
maxInt256 = big.NewInt(0).Add(
big.NewInt(0).Exp(big.NewInt(2), big.NewInt(255), nil),
big.NewInt(-1))
// MaxUint256 is the maximum value that can be represented by a uint256
MaxUint256 = new(big.Int).Sub(new(big.Int).Lsh(common.Big1, 256), common.Big1)
// MaxInt256 is the maximum value that can be represented by a int256
MaxInt256 = new(big.Int).Sub(new(big.Int).Lsh(common.Big1, 255), common.Big1)
)
// reads the integer based on its kind
func readInteger(typ byte, kind reflect.Kind, b []byte) interface{} {
switch kind {
case reflect.Uint8:
return b[len(b)-1]
case reflect.Uint16:
return binary.BigEndian.Uint16(b[len(b)-2:])
case reflect.Uint32:
return binary.BigEndian.Uint32(b[len(b)-4:])
case reflect.Uint64:
return binary.BigEndian.Uint64(b[len(b)-8:])
case reflect.Int8:
// ReadInteger reads the integer based on its kind and returns the appropriate value
func ReadInteger(typ Type, b []byte) interface{} {
if typ.T == UintTy {
switch typ.Size {
case 8:
return b[len(b)-1]
case 16:
return binary.BigEndian.Uint16(b[len(b)-2:])
case 32:
return binary.BigEndian.Uint32(b[len(b)-4:])
case 64:
return binary.BigEndian.Uint64(b[len(b)-8:])
default:
// the only case left for unsigned integer is uint256.
return new(big.Int).SetBytes(b)
}
}
switch typ.Size {
case 8:
return int8(b[len(b)-1])
case reflect.Int16:
case 16:
return int16(binary.BigEndian.Uint16(b[len(b)-2:]))
case reflect.Int32:
case 32:
return int32(binary.BigEndian.Uint32(b[len(b)-4:]))
case reflect.Int64:
case 64:
return int64(binary.BigEndian.Uint64(b[len(b)-8:]))
default:
// the only case lefts for integer is int256/uint256.
// big.SetBytes can't tell if a number is negative, positive on itself.
// the only case left for integer is int256
// big.SetBytes can't tell if a number is negative or positive in itself.
// On EVM, if the returned number > max int256, it is negative.
// A number is > max int256 if the bit at position 255 is set.
ret := new(big.Int).SetBytes(b)
if typ == UintTy {
return ret
}
if ret.Cmp(maxInt256) > 0 {
ret.Add(maxUint256, big.NewInt(0).Neg(ret))
ret.Add(ret, big.NewInt(1))
if ret.Bit(255) == 1 {
ret.Add(MaxUint256, new(big.Int).Neg(ret))
ret.Add(ret, common.Big1)
ret.Neg(ret)
}
return ret
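The negative-number branch above applies two's complement by hand; a standalone arithmetic sketch of the same steps, using only the standard library:

package main

import (
	"bytes"
	"fmt"
	"math/big"
)

func main() {
	// A 32-byte word of all 0xff has bit 255 set, so it decodes to -1.
	maxUint256 := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(1))

	word := bytes.Repeat([]byte{0xff}, 32)
	ret := new(big.Int).SetBytes(word)
	if ret.Bit(255) == 1 {
		ret.Add(maxUint256, new(big.Int).Neg(ret)) // maxUint256 - ret
		ret.Add(ret, big.NewInt(1))
		ret.Neg(ret)
	}
	fmt.Println(ret) // -1
}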
@@ -102,13 +104,13 @@ func readFunctionType(t Type, word []byte) (funcTy [24]byte, err error) {
return
}
// through reflection, creates a fixed array to be read from
func readFixedBytes(t Type, word []byte) (interface{}, error) {
// ReadFixedBytes uses reflection to create a fixed array to be read from
func ReadFixedBytes(t Type, word []byte) (interface{}, error) {
if t.T != FixedBytesTy {
return nil, fmt.Errorf("abi: invalid type in call to make fixed byte array")
}
// convert
array := reflect.New(t.Type).Elem()
array := reflect.New(t.GetType()).Elem()
reflect.Copy(array, reflect.ValueOf(word[0:t.Size]))
return array.Interface(), nil
@@ -129,10 +131,10 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
if t.T == SliceTy {
// declare our slice
refSlice = reflect.MakeSlice(t.Type, size, size)
refSlice = reflect.MakeSlice(t.GetType(), size, size)
} else if t.T == ArrayTy {
// declare our array
refSlice = reflect.New(t.Type).Elem()
refSlice = reflect.New(t.GetType()).Elem()
} else {
return nil, fmt.Errorf("abi: invalid type in array/slice unpacking stage")
}
@@ -156,7 +158,7 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
}
func forTupleUnpack(t Type, output []byte) (interface{}, error) {
retval := reflect.New(t.Type).Elem()
retval := reflect.New(t.GetType()).Elem()
virtualArgs := 0
for index, elem := range t.TupleElems {
marshalledValue, err := toGoType((index+virtualArgs)*32, *elem, output)
@@ -216,9 +218,8 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
return nil, err
}
return forTupleUnpack(t, output[begin:])
} else {
return forTupleUnpack(t, output[index:])
}
return forTupleUnpack(t, output[index:])
case SliceTy:
return forEachUnpack(t, output[begin:], 0, length)
case ArrayTy:
@@ -230,7 +231,7 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
case StringTy: // variable arrays are written at the end of the return bytes
return string(output[begin : begin+length]), nil
case IntTy, UintTy:
return readInteger(t.T, t.Kind, returnOutput), nil
return ReadInteger(t, returnOutput), nil
case BoolTy:
return readBool(returnOutput)
case AddressTy:
@@ -240,7 +241,7 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
case BytesTy:
return output[begin : begin+length], nil
case FixedBytesTy:
return readFixedBytes(t, returnOutput)
return ReadFixedBytes(t, returnOutput)
case FunctionTy:
return readFunctionType(t, returnOutput)
default:

View File

@@ -30,6 +30,34 @@ import (
"github.com/stretchr/testify/require"
)
// TestUnpack tests the general pack/unpack tests in packing_test.go
func TestUnpack(t *testing.T) {
for i, test := range packUnpackTests {
t.Run(strconv.Itoa(i)+" "+test.def, func(t *testing.T) {
//Unpack
def := fmt.Sprintf(`[{ "name" : "method", "type": "function", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)
}
encb, err := hex.DecodeString(test.packed)
if err != nil {
t.Fatalf("invalid hex %s: %v", test.packed, err)
}
outptr := reflect.New(reflect.TypeOf(test.unpacked))
err = abi.Unpack(outptr.Interface(), "method", encb)
if err != nil {
t.Errorf("test %d (%v) failed: %v", i, test.def, err)
return
}
out := outptr.Elem().Interface()
if !reflect.DeepEqual(test.unpacked, out) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, test.unpacked, out)
}
})
}
}
type unpackTest struct {
def string // ABI definition JSON
enc string // evm return data
@@ -51,16 +79,7 @@ func (test unpackTest) checkError(err error) error {
}
var unpackTests = []unpackTest{
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: true,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000000",
want: false,
},
// Bools
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000001000000000001",
@@ -73,11 +92,7 @@ var unpackTests = []unpackTest{
want: false,
err: "abi: improperly encoded boolean value",
},
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint32(1),
},
// Integers
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
@@ -90,16 +105,6 @@ var unpackTests = []unpackTest{
want: uint16(0),
err: "abi: cannot unmarshal *big.Int in to uint16",
},
{
def: `[{"type": "uint17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int32(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
@@ -112,36 +117,10 @@ var unpackTests = []unpackTest{
want: int16(0),
err: "abi: cannot unmarshal *big.Int in to int16",
},
{
def: `[{"type": "int17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int256"}]`,
enc: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
want: big.NewInt(-1),
},
{
def: `[{"type": "address"}]`,
enc: "0000000000000000000000000100000000000000000000000000000000000000",
want: common.Address{1},
},
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{},
err: "abi: cannot unmarshal []uint8 in to [32]uint8",
want: [32]byte{1},
},
{
def: `[{"type": "bytes32"}]`,
@@ -149,219 +128,13 @@ var unpackTests = []unpackTest{
want: []byte(nil),
err: "abi: cannot unmarshal [32]uint8 in to []uint8",
},
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "function"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [24]byte{1},
},
// slices
{
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint8{1, 2},
},
{
def: `[{"type": "uint8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint8{1, 2},
},
// multi dimensional, if these pass, all types that don't require length prefix should pass
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [][]uint8{{1, 2}, {1, 2, 3}},
},
{
def: `[{"type": "uint8[2][2]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][2]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001",
want: [2][]uint8{{1}, {1}},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint16{1, 2},
},
{
def: `[{"type": "uint16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint16{1, 2},
},
{
def: `[{"type": "uint32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint32{1, 2},
},
{
def: `[{"type": "uint32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint32{1, 2},
},
{
def: `[{"type": "uint32[2][3][4]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018",
want: [4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
},
{
def: `[{"type": "uint64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint64{1, 2},
},
{
def: `[{"type": "uint64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint64{1, 2},
},
{
def: `[{"type": "uint256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "uint256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
{
def: `[{"type": "string[4]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000000548656c6c6f0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005576f726c64000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b476f2d657468657265756d0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000",
want: [4]string{"Hello", "World", "Go-ethereum", "Ethereum"},
},
{
def: `[{"type": "string[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b676f2d657468657265756d000000000000000000000000000000000000000000",
want: []string{"Ethereum", "go-ethereum"},
},
{
def: `[{"type": "bytes[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000003f0f0f000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003f0f0f00000000000000000000000000000000000000000000000000000000000",
want: [][]byte{{0xf0, 0xf0, 0xf0}, {0xf0, 0xf0, 0xf0}},
},
{
def: `[{"type": "uint256[2][][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000c8000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000003e80000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000c8000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000003e8",
want: [][][2]*big.Int{{{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}, {{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}},
},
{
def: `[{"type": "int8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int8{1, 2},
},
{
def: `[{"type": "int8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int8{1, 2},
},
{
def: `[{"type": "int16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int16{1, 2},
},
{
def: `[{"type": "int16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int16{1, 2},
},
{
def: `[{"type": "int32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int32{1, 2},
},
{
def: `[{"type": "int32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int32{1, 2},
},
{
def: `[{"type": "int64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int64{1, 2},
},
{
def: `[{"type": "int64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int64{1, 2},
},
{
def: `[{"type": "int256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "int256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
// struct outputs
{
def: `[{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"int_one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int__one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one_","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one","type":"int256"}, {"name":"intone","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
Intone *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"___","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
Intone *big.Int
}{},
err: "abi: purely underscored output cannot unpack to struct",
}{IntOne: big.NewInt(1)},
},
{
def: `[{"name":"int_one","type":"int256"},{"name":"IntOne","type":"int256"}]`,
@@ -408,19 +181,44 @@ var unpackTests = []unpackTest{
}{},
err: "abi: purely underscored output cannot unpack to struct",
},
// Make sure only the first argument is consumed
{
def: `[{"name":"int_one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int__one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one_","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
}
func TestUnpack(t *testing.T) {
// TestLocalUnpackTests runs test specially designed only for unpacking.
// All test cases that can be used to test packing and unpacking should move to packing_test.go
func TestLocalUnpackTests(t *testing.T) {
for i, test := range unpackTests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
def := fmt.Sprintf(`[{ "name" : "method", "outputs": %s}]`, test.def)
//Unpack
def := fmt.Sprintf(`[{ "name" : "method", "type": "function", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)
}
encb, err := hex.DecodeString(test.enc)
if err != nil {
t.Fatalf("invalid hex: %s" + test.enc)
t.Fatalf("invalid hex %s: %v", test.enc, err)
}
outptr := reflect.New(reflect.TypeOf(test.want))
err = abi.Unpack(outptr.Interface(), "method", encb)
@@ -492,7 +290,7 @@ type methodMultiOutput struct {
func methodMultiReturn(require *require.Assertions) (ABI, []byte, methodMultiOutput) {
const definition = `[
{ "name" : "multi", "constant" : false, "outputs": [ { "name": "Int", "type": "uint256" }, { "name": "String", "type": "string" } ] }]`
{ "name" : "multi", "type": "function", "outputs": [ { "name": "Int", "type": "uint256" }, { "name": "String", "type": "string" } ] }]`
var expected = methodMultiOutput{big.NewInt(1), "hello"}
abi, err := JSON(strings.NewReader(definition))
@@ -562,7 +360,7 @@ func TestMethodMultiReturn(t *testing.T) {
}, {
&[]interface{}{new(int)},
&[]interface{}{},
"abi: insufficient number of elements in the list/array for unpack, want 2, got 1",
"abi: insufficient number of arguments for unpack, want 2, got 1",
"Can not unpack into a slice with wrong types",
}}
for _, tc := range testCases {
@@ -581,7 +379,7 @@ func TestMethodMultiReturn(t *testing.T) {
}
func TestMultiReturnWithArray(t *testing.T) {
const definition = `[{"name" : "multi", "outputs": [{"type": "uint64[3]"}, {"type": "uint64"}]}]`
const definition = `[{"name" : "multi", "type": "function", "outputs": [{"type": "uint64[3]"}, {"type": "uint64"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
@@ -604,7 +402,7 @@ func TestMultiReturnWithArray(t *testing.T) {
}
func TestMultiReturnWithStringArray(t *testing.T) {
const definition = `[{"name" : "multi", "outputs": [{"name": "","type": "uint256[3]"},{"name": "","type": "address"},{"name": "","type": "string[2]"},{"name": "","type": "bool"}]}]`
const definition = `[{"name" : "multi", "type": "function", "outputs": [{"name": "","type": "uint256[3]"},{"name": "","type": "address"},{"name": "","type": "string[2]"},{"name": "","type": "bool"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
@@ -634,7 +432,7 @@ func TestMultiReturnWithStringArray(t *testing.T) {
}
func TestMultiReturnWithStringSlice(t *testing.T) {
const definition = `[{"name" : "multi", "outputs": [{"name": "","type": "string[]"},{"name": "","type": "uint256[]"}]}]`
const definition = `[{"name" : "multi", "type": "function", "outputs": [{"name": "","type": "string[]"},{"name": "","type": "uint256[]"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
@@ -670,7 +468,7 @@ func TestMultiReturnWithDeeplyNestedArray(t *testing.T) {
// values of nested static arrays count towards the size as well, and any element following
// after such nested array argument should be read with the correct offset,
// so that it does not read content from the previous array argument.
const definition = `[{"name" : "multi", "outputs": [{"type": "uint64[3][2][4]"}, {"type": "uint64"}]}]`
const definition = `[{"name" : "multi", "type": "function", "outputs": [{"type": "uint64[3][2][4]"}, {"type": "uint64"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
@@ -707,15 +505,15 @@ func TestMultiReturnWithDeeplyNestedArray(t *testing.T) {
func TestUnmarshal(t *testing.T) {
const definition = `[
{ "name" : "int", "constant" : false, "outputs": [ { "type": "uint256" } ] },
{ "name" : "bool", "constant" : false, "outputs": [ { "type": "bool" } ] },
{ "name" : "bytes", "constant" : false, "outputs": [ { "type": "bytes" } ] },
{ "name" : "fixed", "constant" : false, "outputs": [ { "type": "bytes32" } ] },
{ "name" : "multi", "constant" : false, "outputs": [ { "type": "bytes" }, { "type": "bytes" } ] },
{ "name" : "intArraySingle", "constant" : false, "outputs": [ { "type": "uint256[3]" } ] },
{ "name" : "addressSliceSingle", "constant" : false, "outputs": [ { "type": "address[]" } ] },
{ "name" : "addressSliceDouble", "constant" : false, "outputs": [ { "name": "a", "type": "address[]" }, { "name": "b", "type": "address[]" } ] },
{ "name" : "mixedBytes", "constant" : true, "outputs": [ { "name": "a", "type": "bytes" }, { "name": "b", "type": "bytes32" } ] }]`
{ "name" : "int", "type": "function", "outputs": [ { "type": "uint256" } ] },
{ "name" : "bool", "type": "function", "outputs": [ { "type": "bool" } ] },
{ "name" : "bytes", "type": "function", "outputs": [ { "type": "bytes" } ] },
{ "name" : "fixed", "type": "function", "outputs": [ { "type": "bytes32" } ] },
{ "name" : "multi", "type": "function", "outputs": [ { "type": "bytes" }, { "type": "bytes" } ] },
{ "name" : "intArraySingle", "type": "function", "outputs": [ { "type": "uint256[3]" } ] },
{ "name" : "addressSliceSingle", "type": "function", "outputs": [ { "type": "address[]" } ] },
{ "name" : "addressSliceDouble", "type": "function", "outputs": [ { "name": "a", "type": "address[]" }, { "name": "b", "type": "address[]" } ] },
{ "name" : "mixedBytes", "type": "function", "stateMutability" : "view", "outputs": [ { "name": "a", "type": "bytes" }, { "name": "b", "type": "bytes32" } ] }]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
@@ -955,7 +753,7 @@ func TestUnmarshal(t *testing.T) {
}
func TestUnpackTuple(t *testing.T) {
const simpleTuple = `[{"name":"tuple","constant":false,"outputs":[{"type":"tuple","name":"ret","components":[{"type":"int256","name":"a"},{"type":"int256","name":"b"}]}]}]`
const simpleTuple = `[{"name":"tuple","type":"function","outputs":[{"type":"tuple","name":"ret","components":[{"type":"int256","name":"a"},{"type":"int256","name":"b"}]}]}]`
abi, err := JSON(strings.NewReader(simpleTuple))
if err != nil {
t.Fatal(err)
@@ -979,12 +777,12 @@ func TestUnpackTuple(t *testing.T) {
t.Errorf("unexpected value unpacked: want %x, got %x", 1, v.A)
}
if v.B.Cmp(big.NewInt(-1)) != 0 {
t.Errorf("unexpected value unpacked: want %x, got %x", v.B, -1)
t.Errorf("unexpected value unpacked: want %x, got %x", -1, v.B)
}
}
// Test nested tuple
const nestedTuple = `[{"name":"tuple","constant":false,"outputs":[
const nestedTuple = `[{"name":"tuple","type":"function","outputs":[
{"type":"tuple","name":"s","components":[{"type":"uint256","name":"a"},{"type":"uint256[]","name":"b"},{"type":"tuple[]","name":"c","components":[{"name":"x", "type":"uint256"},{"name":"y","type":"uint256"}]}]},
{"type":"tuple","name":"t","components":[{"name":"x", "type":"uint256"},{"name":"y","type":"uint256"}]},
{"type":"uint256","name":"a"}
@@ -1106,7 +904,7 @@ func TestOOMMaliciousInput(t *testing.T) {
},
}
for i, test := range oomTests {
def := fmt.Sprintf(`[{ "name" : "method", "outputs": %s}]`, test.def)
def := fmt.Sprintf(`[{ "name" : "method", "type": "function", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)


@@ -129,6 +129,8 @@ type Wallet interface {
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignHashWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
//
// This method should return the signature in 'canonical' format, with v 0 or 1
SignText(account Account, text []byte) ([]byte, error)
// SignTextWithPassphrase is identical to SignText, but also takes a password
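To make the 'canonical' V convention mentioned above concrete, here is a minimal, self-contained sketch; the helper name and the 65-byte [R || S || V] layout assumption are illustrative and not part of the Wallet interface:

```go
package sigutil

import (
	"errors"
	"fmt"
)

// normalizeV rewrites the recovery id of a 65-byte [R || S || V] signature
// from the legacy 27/28 encoding to the canonical 0/1 form described above.
// Illustrative helper only; not part of the accounts.Wallet interface.
func normalizeV(sig []byte) ([]byte, error) {
	if len(sig) != 65 {
		return nil, errors.New("signature must be 65 bytes [R || S || V]")
	}
	out := make([]byte, 65)
	copy(out, sig)
	if out[64] == 27 || out[64] == 28 {
		out[64] -= 27
	}
	if out[64] > 1 {
		return nil, fmt.Errorf("unexpected recovery id %d", out[64])
	}
	return out, nil
}
```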


@@ -27,7 +27,6 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/signer/core"
@@ -131,6 +130,12 @@ func (api *ExternalSigner) Accounts() []accounts.Account {
func (api *ExternalSigner) Contains(account accounts.Account) bool {
api.cacheMu.RLock()
defer api.cacheMu.RUnlock()
if api.cache == nil {
// If we haven't already fetched the accounts, it's time to do so now
api.cacheMu.RUnlock()
api.Accounts()
api.cacheMu.RLock()
}
for _, a := range api.cache {
if a.Address == account.Address && (account.URL == (accounts.URL{}) || account.URL == api.URL()) {
return true
@@ -161,7 +166,7 @@ func (api *ExternalSigner) SignData(account accounts.Account, mimeType string, d
hexutil.Encode(data)); err != nil {
return nil, err
}
// If V is on 27/28-form, convert to to 0/1 for Clique
// If V is on 27/28-form, convert to 0/1 for Clique
if mimeType == accounts.MimetypeClique && (res[64] == 27 || res[64] == 28) {
res[64] -= 27 // Transform V from 27/28 to 0/1 for Clique use
}
@@ -169,19 +174,29 @@ func (api *ExternalSigner) SignData(account accounts.Account, mimeType string, d
}
func (api *ExternalSigner) SignText(account accounts.Account, text []byte) ([]byte, error) {
var res hexutil.Bytes
var signature hexutil.Bytes
var signAddress = common.NewMixedcaseAddress(account.Address)
if err := api.client.Call(&res, "account_signData",
if err := api.client.Call(&signature, "account_signData",
accounts.MimetypeTextPlain,
&signAddress, // Need to use the pointer here, because of how MarshalJSON is defined
hexutil.Encode(text)); err != nil {
return nil, err
}
return res, nil
if signature[64] == 27 || signature[64] == 28 {
// If clef is used as a backend, it may already have transformed
// the signature to ethereum-type signature.
signature[64] -= 27 // Transform V from Ethereum-legacy to 0/1
}
return signature, nil
}
// signTransactionResult represents the signinig result returned by clef.
type signTransactionResult struct {
Raw hexutil.Bytes `json:"raw"`
Tx *types.Transaction `json:"tx"`
}
func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
res := ethapi.SignTransactionResult{}
data := hexutil.Bytes(tx.Data())
var to *common.MixedcaseAddress
if tx.To() != nil {
@@ -197,6 +212,7 @@ func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transactio
To: to,
From: common.NewMixedcaseAddress(account.Address),
}
var res signTransactionResult
if err := api.client.Call(&res, "account_signTransaction", args); err != nil {
return nil, err
}


@@ -61,7 +61,7 @@ func TestHDPathParsing(t *testing.T) {
// Weird inputs just to ensure they work
{" m / 44 '\n/\n 60 \n\n\t' /\n0 ' /\t\t 0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
// Invaid derivation paths
// Invalid derivation paths
{"", nil}, // Empty relative derivation path
{"m", nil}, // Empty absolute derivation path
{"m/", nil}, // Missing last derivation component


@@ -24,7 +24,6 @@ import (
"crypto/ecdsa"
crand "crypto/rand"
"errors"
"fmt"
"math/big"
"os"
"path/filepath"
@@ -44,6 +43,10 @@ var (
ErrLocked = accounts.NewAuthNeededError("password or unlock")
ErrNoMatch = errors.New("no key for given address or file")
ErrDecrypt = errors.New("could not decrypt key with given password")
// ErrAccountAlreadyExists is returned if an account attempted to import is
// already present in the keystore.
ErrAccountAlreadyExists = errors.New("account already exists")
)
// KeyStoreType is the reflect type of a keystore backend.
@@ -67,7 +70,8 @@ type KeyStore struct {
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
mu sync.RWMutex
mu sync.RWMutex
importMu sync.Mutex // Import Mutex locks the import to prevent two insertions from racing
}
type unlocked struct {
@@ -443,14 +447,27 @@ func (ks *KeyStore) Import(keyJSON []byte, passphrase, newPassphrase string) (ac
if err != nil {
return accounts.Account{}, err
}
ks.importMu.Lock()
defer ks.importMu.Unlock()
if ks.cache.hasAddress(key.Address) {
return accounts.Account{
Address: key.Address,
}, ErrAccountAlreadyExists
}
return ks.importKey(key, newPassphrase)
}
// ImportECDSA stores the given key into the key directory, encrypting it with the passphrase.
func (ks *KeyStore) ImportECDSA(priv *ecdsa.PrivateKey, passphrase string) (accounts.Account, error) {
ks.importMu.Lock()
defer ks.importMu.Unlock()
key := newKeyFromECDSA(priv)
if ks.cache.hasAddress(key.Address) {
return accounts.Account{}, fmt.Errorf("account already exists")
return accounts.Account{
Address: key.Address,
}, ErrAccountAlreadyExists
}
return ks.importKey(key, passphrase)
}


@@ -23,11 +23,14 @@ import (
"runtime"
"sort"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/event"
)
@@ -338,6 +341,88 @@ func TestWalletNotifications(t *testing.T) {
checkEvents(t, wantEvents, events)
}
// TestImportECDSA tests the import functionality of a keystore.
func TestImportECDSA(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
key, err := crypto.GenerateKey()
if err != nil {
t.Fatalf("failed to generate key: %v", key)
}
if _, err = ks.ImportECDSA(key, "old"); err != nil {
t.Errorf("importing failed: %v", err)
}
if _, err = ks.ImportECDSA(key, "old"); err == nil {
t.Errorf("importing same key twice succeeded")
}
if _, err = ks.ImportECDSA(key, "new"); err == nil {
t.Errorf("importing same key twice succeeded")
}
}
// TestImportExport tests the import and export functionality of a keystore.
func TestImportExport(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
acc, err := ks.NewAccount("old")
if err != nil {
t.Fatalf("failed to create account: %v", acc)
}
json, err := ks.Export(acc, "old", "new")
if err != nil {
t.Fatalf("failed to export account: %v", acc)
}
dir2, ks2 := tmpKeyStore(t, true)
defer os.RemoveAll(dir2)
if _, err = ks2.Import(json, "old", "old"); err == nil {
t.Errorf("importing with invalid password succeeded")
}
acc2, err := ks2.Import(json, "new", "new")
if err != nil {
t.Errorf("importing failed: %v", err)
}
if acc.Address != acc2.Address {
t.Error("imported account does not match exported account")
}
if _, err = ks2.Import(json, "new", "new"); err == nil {
t.Errorf("importing a key twice succeeded")
}
}
// TestImportRace tests the keystore on races.
// This test should fail under -race if importing races.
func TestImportRace(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
acc, err := ks.NewAccount("old")
if err != nil {
t.Fatalf("failed to create account: %v", acc)
}
json, err := ks.Export(acc, "old", "new")
if err != nil {
t.Fatalf("failed to export account: %v", acc)
}
dir2, ks2 := tmpKeyStore(t, true)
defer os.RemoveAll(dir2)
var atom uint32
var wg sync.WaitGroup
wg.Add(2)
for i := 0; i < 2; i++ {
go func() {
defer wg.Done()
if _, err := ks2.Import(json, "new", "new"); err != nil {
atomic.AddUint32(&atom, 1)
}
}()
}
wg.Wait()
if atom != 1 {
t.Errorf("Import is racy")
}
}
// checkAccounts checks that all known live accounts are present in the wallet list.
func checkAccounts(t *testing.T, live map[common.Address]accounts.Account, wallets []accounts.Wallet) {
if len(live) != len(wallets) {


@@ -123,6 +123,7 @@ func (ks keyStorePassphrase) StoreKey(filename string, key *Key, auth string) er
"Please file a ticket at:\n\n" +
"https://github.com/ethereum/go-ethereum/issues." +
"The error was : %s"
//lint:ignore ST1005 This is a message for the user
return fmt.Errorf(msg, tmpName, err)
}
}
@@ -237,7 +238,7 @@ func DecryptKey(keyjson []byte, auth string) (*Key, error) {
func DecryptDataV3(cryptoJson CryptoJSON, auth string) ([]byte, error) {
if cryptoJson.Cipher != "aes-128-ctr" {
return nil, fmt.Errorf("Cipher not supported: %v", cryptoJson.Cipher)
return nil, fmt.Errorf("cipher not supported: %v", cryptoJson.Cipher)
}
mac, err := hex.DecodeString(cryptoJson.MAC)
if err != nil {
@@ -273,7 +274,7 @@ func DecryptDataV3(cryptoJson CryptoJSON, auth string) ([]byte, error) {
func decryptKeyV3(keyProtected *encryptedKeyJSONV3, auth string) (keyBytes []byte, keyId []byte, err error) {
if keyProtected.Version != version {
return nil, nil, fmt.Errorf("Version not supported: %v", keyProtected.Version)
return nil, nil, fmt.Errorf("version not supported: %v", keyProtected.Version)
}
keyId = uuid.Parse(keyProtected.Id)
plainText, err := DecryptDataV3(keyProtected.Crypto, auth)
@@ -335,13 +336,13 @@ func getKDFKey(cryptoJSON CryptoJSON, auth string) ([]byte, error) {
c := ensureInt(cryptoJSON.KDFParams["c"])
prf := cryptoJSON.KDFParams["prf"].(string)
if prf != "hmac-sha256" {
return nil, fmt.Errorf("Unsupported PBKDF2 PRF: %s", prf)
return nil, fmt.Errorf("unsupported PBKDF2 PRF: %s", prf)
}
key := pbkdf2.Key(authArray, salt, c, dkLen, sha256.New)
return key, nil
}
return nil, fmt.Errorf("Unsupported KDF: %s", cryptoJSON.KDF)
return nil, fmt.Errorf("unsupported KDF: %s", cryptoJSON.KDF)
}
// TODO: can we do without this when unmarshalling dynamic JSON?


@@ -141,6 +141,11 @@ func (am *Manager) Wallets() []Wallet {
am.lock.RLock()
defer am.lock.RUnlock()
return am.walletsNoLock()
}
// walletsNoLock returns all registered wallets. Callers must hold am.lock.
func (am *Manager) walletsNoLock() []Wallet {
cpy := make([]Wallet, len(am.wallets))
copy(cpy, am.wallets)
return cpy
@@ -155,7 +160,7 @@ func (am *Manager) Wallet(url string) (Wallet, error) {
if err != nil {
return nil, err
}
for _, wallet := range am.Wallets() {
for _, wallet := range am.walletsNoLock() {
if wallet.URL() == parsed {
return wallet, nil
}


@@ -220,7 +220,7 @@ func (hub *Hub) refreshWallets() {
// Mark the reader as present
seen[reader] = struct{}{}
// If we alreay know about this card, skip to the next reader, otherwise clean up
// If we already know about this card, skip to the next reader, otherwise clean up
if wallet, ok := hub.wallets[reader]; ok {
if err := wallet.ping(); err == nil {
continue


@@ -71,7 +71,7 @@ func NewSecureChannelSession(card *pcsc.Card, keyData []byte) (*SecureChannelSes
cardPublic, ok := gen.Unmarshal(keyData)
if !ok {
return nil, fmt.Errorf("Could not unmarshal public key from card")
return nil, fmt.Errorf("could not unmarshal public key from card")
}
secret, err := gen.GenerateSharedSecret(private, cardPublic)
@@ -109,7 +109,7 @@ func (s *SecureChannelSession) Pair(pairingPassword []byte) error {
cardChallenge := response.Data[32:64]
if !bytes.Equal(expectedCryptogram, cardCryptogram) {
return fmt.Errorf("Invalid card cryptogram %v != %v", expectedCryptogram, cardCryptogram)
return fmt.Errorf("invalid card cryptogram %v != %v", expectedCryptogram, cardCryptogram)
}
md.Reset()
@@ -132,7 +132,7 @@ func (s *SecureChannelSession) Pair(pairingPassword []byte) error {
// Unpair disestablishes an existing pairing.
func (s *SecureChannelSession) Unpair() error {
if s.PairingKey == nil {
return fmt.Errorf("Cannot unpair: not paired")
return fmt.Errorf("cannot unpair: not paired")
}
_, err := s.transmitEncrypted(claSCWallet, insUnpair, s.PairingIndex, 0, []byte{})
@@ -148,7 +148,7 @@ func (s *SecureChannelSession) Unpair() error {
// Open initializes the secure channel.
func (s *SecureChannelSession) Open() error {
if s.iv != nil {
return fmt.Errorf("Session already opened")
return fmt.Errorf("session already opened")
}
response, err := s.open()
@@ -185,11 +185,11 @@ func (s *SecureChannelSession) mutuallyAuthenticate() error {
return err
}
if response.Sw1 != 0x90 || response.Sw2 != 0x00 {
return fmt.Errorf("Got unexpected response from MUTUALLY_AUTHENTICATE: 0x%x%x", response.Sw1, response.Sw2)
return fmt.Errorf("got unexpected response from MUTUALLY_AUTHENTICATE: 0x%x%x", response.Sw1, response.Sw2)
}
if len(response.Data) != scSecretLength {
return fmt.Errorf("Response from MUTUALLY_AUTHENTICATE was %d bytes, expected %d", len(response.Data), scSecretLength)
return fmt.Errorf("response from MUTUALLY_AUTHENTICATE was %d bytes, expected %d", len(response.Data), scSecretLength)
}
return nil
@@ -222,7 +222,7 @@ func (s *SecureChannelSession) pair(p1 uint8, data []byte) (*responseAPDU, error
// transmitEncrypted sends an encrypted message, and decrypts and returns the response.
func (s *SecureChannelSession) transmitEncrypted(cla, ins, p1, p2 byte, data []byte) (*responseAPDU, error) {
if s.iv == nil {
return nil, fmt.Errorf("Channel not open")
return nil, fmt.Errorf("channel not open")
}
data, err := s.encryptAPDU(data)
@@ -261,14 +261,14 @@ func (s *SecureChannelSession) transmitEncrypted(cla, ins, p1, p2 byte, data []b
return nil, err
}
if !bytes.Equal(s.iv, rmac) {
return nil, fmt.Errorf("Invalid MAC in response")
return nil, fmt.Errorf("invalid MAC in response")
}
rapdu := &responseAPDU{}
rapdu.deserialize(plainData)
if rapdu.Sw1 != sw1Ok {
return nil, fmt.Errorf("Unexpected response status Cla=0x%x, Ins=0x%x, Sw=0x%x%x", cla, ins, rapdu.Sw1, rapdu.Sw2)
return nil, fmt.Errorf("unexpected response status Cla=0x%x, Ins=0x%x, Sw=0x%x%x", cla, ins, rapdu.Sw1, rapdu.Sw2)
}
return rapdu, nil
@@ -277,7 +277,7 @@ func (s *SecureChannelSession) transmitEncrypted(cla, ins, p1, p2 byte, data []b
// encryptAPDU is an internal method that serializes and encrypts an APDU.
func (s *SecureChannelSession) encryptAPDU(data []byte) ([]byte, error) {
if len(data) > maxPayloadSize {
return nil, fmt.Errorf("Payload of %d bytes exceeds maximum of %d", len(data), maxPayloadSize)
return nil, fmt.Errorf("payload of %d bytes exceeds maximum of %d", len(data), maxPayloadSize)
}
data = pad(data, 0x80)
@@ -323,10 +323,10 @@ func unpad(data []byte, terminator byte) ([]byte, error) {
case terminator:
return data[:len(data)-i], nil
default:
return nil, fmt.Errorf("Expected end of padding, got %d", data[len(data)-i])
return nil, fmt.Errorf("expected end of padding, got %d", data[len(data)-i])
}
}
return nil, fmt.Errorf("Expected end of padding, got 0")
return nil, fmt.Errorf("expected end of padding, got 0")
}
// updateIV is an internal method that updates the initialization vector after


@@ -167,7 +167,7 @@ func transmit(card *pcsc.Card, command *commandAPDU) (*responseAPDU, error) {
}
if response.Sw1 != sw1Ok {
return nil, fmt.Errorf("Unexpected insecure response status Cla=0x%x, Ins=0x%x, Sw=0x%x%x", command.Cla, command.Ins, response.Sw1, response.Sw2)
return nil, fmt.Errorf("unexpected insecure response status Cla=0x%x, Ins=0x%x, Sw=0x%x%x", command.Cla, command.Ins, response.Sw1, response.Sw2)
}
return response, nil
@@ -252,7 +252,7 @@ func (w *Wallet) release() error {
// with the wallet.
func (w *Wallet) pair(puk []byte) error {
if w.session.paired() {
return fmt.Errorf("Wallet already paired")
return fmt.Errorf("wallet already paired")
}
pairing, err := w.session.pair(puk)
if err != nil {
@@ -312,15 +312,15 @@ func (w *Wallet) Status() (string, error) {
}
switch {
case !w.session.verified && status.PinRetryCount == 0 && status.PukRetryCount == 0:
return fmt.Sprintf("Bricked, waiting for full wipe"), nil
return "Bricked, waiting for full wipe", nil
case !w.session.verified && status.PinRetryCount == 0:
return fmt.Sprintf("Blocked, waiting for PUK (%d attempts left) and new PIN", status.PukRetryCount), nil
case !w.session.verified:
return fmt.Sprintf("Locked, waiting for PIN (%d attempts left)", status.PinRetryCount), nil
case !status.Initialized:
return fmt.Sprintf("Empty, waiting for initialization"), nil
return "Empty, waiting for initialization", nil
default:
return fmt.Sprintf("Online"), nil
return "Online", nil
}
}
@@ -362,7 +362,7 @@ func (w *Wallet) Open(passphrase string) error {
return err
}
// Pairing succeeded, fall through to PIN checks. This will of course fail,
// but we can't return ErrPINNeeded directly here becase we don't know whether
// but we can't return ErrPINNeeded directly here because we don't know whether
// a PIN check or a PIN reset is needed.
passphrase = ""
}
@@ -773,12 +773,12 @@ func (w *Wallet) findAccountPath(account accounts.Account) (accounts.DerivationP
// Look for the path in the URL
if account.URL.Scheme != w.Hub.scheme {
return nil, fmt.Errorf("Scheme %s does not match wallet scheme %s", account.URL.Scheme, w.Hub.scheme)
return nil, fmt.Errorf("scheme %s does not match wallet scheme %s", account.URL.Scheme, w.Hub.scheme)
}
parts := strings.SplitN(account.URL.Path, "/", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("Invalid URL format: %s", account.URL)
return nil, fmt.Errorf("invalid URL format: %s", account.URL)
}
if parts[0] != fmt.Sprintf("%x", w.PublicKey[1:3]) {
@@ -813,7 +813,7 @@ func (s *Session) pair(secret []byte) (smartcardPairing, error) {
// unpair deletes an existing pairing.
func (s *Session) unpair() error {
if !s.verified {
return fmt.Errorf("Unpair requires that the PIN be verified")
return fmt.Errorf("unpair requires that the PIN be verified")
}
return s.Channel.Unpair()
}
@@ -850,7 +850,7 @@ func (s *Session) paired() bool {
// authenticate uses an existing pairing to establish a secure channel.
func (s *Session) authenticate(pairing smartcardPairing) error {
if !bytes.Equal(s.Wallet.PublicKey, pairing.PublicKey) {
return fmt.Errorf("Cannot pair using another wallet's pairing; %x != %x", s.Wallet.PublicKey, pairing.PublicKey)
return fmt.Errorf("cannot pair using another wallet's pairing; %x != %x", s.Wallet.PublicKey, pairing.PublicKey)
}
s.Channel.PairingKey = pairing.PairingKey
s.Channel.PairingIndex = pairing.PairingIndex
@@ -879,6 +879,7 @@ func (s *Session) walletStatus() (*walletStatus, error) {
}
// derivationPath fetches the wallet's current derivation path from the card.
//lint:ignore U1000 needs to be added to the console interface
func (s *Session) derivationPath() (accounts.DerivationPath, error) {
response, err := s.Channel.transmitEncrypted(claSCWallet, insStatus, statusP1Path, 0, nil)
if err != nil {
@@ -993,12 +994,14 @@ func (s *Session) derive(path accounts.DerivationPath) (accounts.Account, error)
}
// keyExport contains information on an exported keypair.
//lint:ignore U1000 needs to be added to the console interface
type keyExport struct {
PublicKey []byte `asn1:"tag:0"`
PrivateKey []byte `asn1:"tag:1,optional"`
}
// publicKey returns the public key for the current derivation path.
//lint:ignore U1000 needs to be added to the console interface
func (s *Session) publicKey() ([]byte, error) {
response, err := s.Channel.transmitEncrypted(claSCWallet, insExportKey, exportP1Any, exportP2Pubkey, nil)
if err != nil {


@@ -162,7 +162,8 @@ func (w *ledgerDriver) SignTx(path accounts.DerivationPath, tx *types.Transactio
return common.Address{}, nil, accounts.ErrWalletClosed
}
// Ensure the wallet is capable of signing the given transaction
if chainID != nil && w.version[0] <= 1 && w.version[1] <= 0 && w.version[2] <= 2 {
if chainID != nil && w.version[0] <= 1 && w.version[2] <= 2 {
//lint:ignore ST1005 brand name displayed on the console
return common.Address{}, nil, fmt.Errorf("Ledger v%d.%d.%d doesn't support signing this transaction, please update to v1.0.3 at least", w.version[0], w.version[1], w.version[2])
}
// All infos gathered and metadata checks out, request signing


@@ -6,6 +6,7 @@ clone_depth: 5
version: "{branch}.{build}"
environment:
global:
GO111MODULE: on
GOPATH: C:\gopath
CC: gcc.exe
matrix:
@@ -23,8 +24,8 @@ environment:
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://dl.google.com/go/go1.13.windows-%GETH_ARCH%.zip
- 7z x go1.13.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- appveyor DownloadFile https://dl.google.com/go/go1.14.2.windows-%GETH_ARCH%.zip
- 7z x go1.14.2.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version

build/checksums.txt Normal file

@@ -0,0 +1,20 @@
# This file contains sha256 checksums of optional build dependencies.
98de84e69726a66da7b4e58eac41b99cbe274d7e8906eeb8a5b7eb0aadee7f7c go1.14.2.src.tar.gz
d998a84eea42f2271aca792a7b027ca5c1edfcba229e8e5a844c9ac3f336df35 golangci-lint-1.27.0-linux-armv7.tar.gz
bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint.exe-1.27.0-windows-amd64.zip
bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint-1.27.0-windows-amd64.zip
0e2a57d6ba709440d3ed018ef1037465fa010ed02595829092860e5cf863042e golangci-lint-1.27.0-freebsd-386.tar.gz
90205fc42ab5ed0096413e790d88ac9b4ed60f4c47e576d13dc0660f7ed4b013 golangci-lint-1.27.0-linux-arm64.tar.gz
8d345e4e88520e21c113d81978e89ad77fc5b13bfdf20e5bca86b83fc4261272 golangci-lint-1.27.0-linux-amd64.tar.gz
cc619634a77f18dc73df2a0725be13116d64328dc35131ca1737a850d6f76a59 golangci-lint-1.27.0-freebsd-armv7.tar.gz
fe683583cfc9eeec83e498c0d6159d87b5e1919dbe4b6c3b3913089642906069 golangci-lint-1.27.0-linux-s390x.tar.gz
058f5579bee75bdaacbaf75b75e1369f7ad877fd8b3b145aed17a17545de913e golangci-lint-1.27.0-freebsd-armv6.tar.gz
38e1e3dadbe3f56ab62b4de82ee0b88e8fad966d8dfd740a26ef94c2edef9818 golangci-lint-1.27.0-linux-armv6.tar.gz
071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 golangci-lint.exe-1.27.0-windows-386.zip
071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 golangci-lint-1.27.0-windows-386.zip
5f37e2b33914ecddb7cad38186ef4ec61d88172fc04f930fa0267c91151ff306 golangci-lint-1.27.0-linux-386.tar.gz
4d94cfb51fdebeb205f1d5a349ac2b683c30591c5150708073c1c329e15965f0 golangci-lint-1.27.0-freebsd-amd64.tar.gz
52572ba8ff07d5169c2365d3de3fec26dc55a97522094d13d1596199580fa281 golangci-lint-1.27.0-linux-ppc64le.tar.gz
3fb1a1683a29c6c0a8cd76135f62b606fbdd538d5a7aeab94af1af70ffdc2fd4 golangci-lint-1.27.0-darwin-amd64.tar.gz


@@ -22,19 +22,18 @@ variables `PPA_SIGNING_KEY` and `PPA_SSH_KEY` on Travis.
We want to build go-ethereum with the most recent version of Go, irrespective of the Go
version that is available in the main Ubuntu repository. In order to make this possible,
our PPA depends on the ~gophers/ubuntu/archive PPA. Our source package build-depends on
golang-1.11, which is co-installable alongside the regular golang package. PPA dependencies
can be edited at https://launchpad.net/%7Eethereum/+archive/ubuntu/ethereum/+edit-dependencies
we bundle the entire Go sources into our own source archive and start the build job by
compiling Go and then using that to build go-ethereum. On Trusty we have a special case
requiring the `~gophers/ubuntu/archive` PPA since Trusty can't even build Go itself. PPA
deps are set at https://launchpad.net/%7Eethereum/+archive/ubuntu/ethereum/+edit-dependencies
## Building Packages Locally (for testing)
You need to run Ubuntu to do test packaging.
Add the gophers PPA and install Go 1.11 and Debian packaging tools:
Install any version of Go and Debian packaging tools:
$ sudo apt-add-repository ppa:gophers/ubuntu/archive
$ sudo apt-get update
$ sudo apt-get install build-essential golang-1.11 devscripts debhelper python-bzrlib python-paramiko
$ sudo apt-get install build-essential golang-go devscripts debhelper python-bzrlib python-paramiko
Create the source packages:
@@ -42,10 +41,10 @@ Create the source packages:
Then go into the source package directory for your running distribution and build the package:
$ cd dist/ethereum-unstable-1.6.0+xenial
$ cd dist/ethereum-unstable-1.9.6+bionic
$ dpkg-buildpackage
Built packages are placed in the dist/ directory.
$ cd ..
$ dpkg-deb -c geth-unstable_1.6.0+xenial_amd64.deb
$ dpkg-deb -c geth-unstable_1.9.6+bionic_amd64.deb


@@ -58,6 +58,7 @@ import (
"strings"
"time"
"github.com/cespare/cp"
"github.com/ethereum/go-ethereum/internal/build"
"github.com/ethereum/go-ethereum/params"
)
@@ -138,7 +139,19 @@ var (
// Note: zesty is unsupported because it was officially deprecated on Launchpad.
// Note: artful is unsupported because it was officially deprecated on Launchpad.
// Note: cosmic is unsupported because it was officially deprecated on Launchpad.
debDistros = []string{"trusty", "xenial", "bionic", "disco", "eoan"}
debDistroGoBoots = map[string]string{
"trusty": "golang-1.11",
"xenial": "golang-go",
"bionic": "golang-go",
"disco": "golang-go",
"eoan": "golang-go",
"focal": "golang-go",
}
debGoBootPaths = map[string]string{
"golang-1.11": "/usr/lib/go-1.11",
"golang-go": "/usr/lib/go",
}
)
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@@ -202,9 +215,9 @@ func doInstall(cmdline []string) {
var minor int
fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
if minor < 9 {
if minor < 11 {
log.Println("You have Go version", runtime.Version())
log.Println("go-ethereum requires at least Go version 1.9 and cannot")
log.Println("go-ethereum requires at least Go version 1.11 and cannot")
log.Println("be compiled with an earlier version. Please upgrade your Go installation.")
os.Exit(1)
}
@@ -217,18 +230,15 @@ func doInstall(cmdline []string) {
if *arch == "" || *arch == runtime.GOARCH {
goinstall := goTool("install", buildFlags(env)...)
if runtime.GOARCH == "arm64" {
goinstall.Args = append(goinstall.Args, "-p", "1")
}
goinstall.Args = append(goinstall.Args, "-v")
goinstall.Args = append(goinstall.Args, packages...)
build.MustRun(goinstall)
return
}
// If we are cross compiling to ARMv5 ARMv6 or ARMv7, clean any previous builds
if *arch == "arm" {
os.RemoveAll(filepath.Join(runtime.GOROOT(), "pkg", runtime.GOOS+"_arm"))
for _, path := range filepath.SplitList(build.GOPATH()) {
os.RemoveAll(filepath.Join(path, "pkg", runtime.GOOS+"_arm"))
}
}
// Seems we are cross compiling, work around forbidden GOBIN
goinstall := goToolArch(*arch, *cc, "install", buildFlags(env)...)
goinstall.Args = append(goinstall.Args, "-v")
@@ -278,7 +288,6 @@ func goTool(subcmd string, args ...string) *exec.Cmd {
func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd {
cmd := build.GoTool(subcmd, args...)
cmd.Env = []string{"GOPATH=" + build.GOPATH()}
if arch == "" || arch == runtime.GOARCH {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
} else {
@@ -289,7 +298,7 @@ func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd
cmd.Env = append(cmd.Env, "CC="+cc)
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "GOBIN=") {
if strings.HasPrefix(e, "GOBIN=") {
continue
}
cmd.Env = append(cmd.Env, e)
@@ -303,6 +312,7 @@ func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd
func doTest(cmdline []string) {
coverage := flag.Bool("coverage", false, "Whether to record code coverage")
verbose := flag.Bool("v", false, "Whether to log verbosely")
flag.CommandLine.Parse(cmdline)
env := build.Env()
@@ -315,48 +325,50 @@ func doTest(cmdline []string) {
// Test a single package at a time. CI builders are slow
// and some tests run into timeouts under load.
gotest := goTool("test", buildFlags(env)...)
gotest.Args = append(gotest.Args, "-p", "1", "-timeout", "5m")
gotest.Args = append(gotest.Args, "-p", "1")
if *coverage {
gotest.Args = append(gotest.Args, "-covermode=atomic", "-cover")
}
if *verbose {
gotest.Args = append(gotest.Args, "-v")
}
gotest.Args = append(gotest.Args, packages...)
build.MustRun(gotest)
}
// runs gometalinter on requested packages
// doLint runs golangci-lint on requested packages.
func doLint(cmdline []string) {
var (
cachedir = flag.String("cachedir", "./build/cache", "directory for caching golangci-lint binary.")
)
flag.CommandLine.Parse(cmdline)
packages := []string{"./..."}
if len(flag.CommandLine.Args()) > 0 {
packages = flag.CommandLine.Args()
}
// Get metalinter and install all supported linters
build.MustRun(goTool("get", "gopkg.in/alecthomas/gometalinter.v2"))
build.MustRunCommand(filepath.Join(GOBIN, "gometalinter.v2"), "--install")
// Run fast linters batched together
configs := []string{
"--vendor",
"--tests",
"--deadline=2m",
"--disable-all",
"--enable=goimports",
"--enable=varcheck",
"--enable=vet",
"--enable=gofmt",
"--enable=misspell",
"--enable=goconst",
"--min-occurrences=6", // for goconst
}
build.MustRunCommand(filepath.Join(GOBIN, "gometalinter.v2"), append(configs, packages...)...)
linter := downloadLinter(*cachedir)
lflags := []string{"run", "--config", ".golangci.yml"}
build.MustRunCommand(linter, append(lflags, packages...)...)
fmt.Println("You have achieved perfection.")
}
// Run slow linters one by one
for _, linter := range []string{"unconvert", "gosimple"} {
configs = []string{"--vendor", "--tests", "--deadline=10m", "--disable-all", "--enable=" + linter}
build.MustRunCommand(filepath.Join(GOBIN, "gometalinter.v2"), append(configs, packages...)...)
// downloadLinter downloads and unpacks golangci-lint.
func downloadLinter(cachedir string) string {
const version = "1.27.0"
csdb := build.MustLoadChecksums("build/checksums.txt")
base := fmt.Sprintf("golangci-lint-%s-%s-%s", version, runtime.GOOS, runtime.GOARCH)
url := fmt.Sprintf("https://github.com/golangci/golangci-lint/releases/download/v%s/%s.tar.gz", version, base)
archivePath := filepath.Join(cachedir, base+".tar.gz")
if err := csdb.DownloadFile(url, archivePath); err != nil {
log.Fatal(err)
}
if err := build.ExtractTarballArchive(archivePath, cachedir); err != nil {
log.Fatal(err)
}
return filepath.Join(cachedir, base, "golangci-lint")
}
// Release Packaging
@@ -459,11 +471,13 @@ func maybeSkipArchive(env build.Environment) {
// Debian Packaging
func doDebianSource(cmdline []string) {
var (
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
goversion = flag.String("goversion", "", `Go version to build with (will be included in the source package)`)
cachedir = flag.String("cachedir", "./build/cache", `Filesystem path to cache the downloaded Go bundles at`)
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
)
flag.CommandLine.Parse(cmdline)
*workdir = makeWorkdir(*workdir)
@@ -477,12 +491,39 @@ func doDebianSource(cmdline []string) {
build.MustRun(gpg)
}
// Create Debian packages and upload them
// Download and verify the Go source package.
gobundle := downloadGoSources(*goversion, *cachedir)
// Download all the dependencies needed to build the sources and run the ci script
srcdepfetch := goTool("install", "-n", "./...")
srcdepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath"))
build.MustRun(srcdepfetch)
cidepfetch := goTool("run", "./build/ci.go")
cidepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath"))
cidepfetch.Run() // Command fails, don't care, we only need the deps to start it
// Create Debian packages and upload them.
for _, pkg := range debPackages {
for _, distro := range debDistros {
meta := newDebMetadata(distro, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables)
for distro, goboot := range debDistroGoBoots {
// Prepare the debian package with the go-ethereum sources.
meta := newDebMetadata(distro, goboot, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables)
pkgdir := stageDebianSource(*workdir, meta)
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc", "-d", "-Zxz")
// Add Go source code
if err := build.ExtractTarballArchive(gobundle, pkgdir); err != nil {
log.Fatalf("Failed to extract Go sources: %v", err)
}
if err := os.Rename(filepath.Join(pkgdir, "go"), filepath.Join(pkgdir, ".go")); err != nil {
log.Fatalf("Failed to rename Go source folder: %v", err)
}
// Add all dependency modules in compressed form
os.MkdirAll(filepath.Join(pkgdir, ".mod", "cache"), 0755)
if err := cp.CopyAll(filepath.Join(pkgdir, ".mod", "cache", "download"), filepath.Join(*workdir, "modgopath", "pkg", "mod", "cache", "download")); err != nil {
log.Fatalf("Failed to copy Go module dependencies: %v", err)
}
// Run the packaging and upload to the PPA
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc", "-d", "-Zxz", "-nc")
debuild.Dir = pkgdir
build.MustRun(debuild)
@@ -502,6 +543,17 @@ func doDebianSource(cmdline []string) {
}
}
func downloadGoSources(version string, cachedir string) string {
csdb := build.MustLoadChecksums("build/checksums.txt")
file := fmt.Sprintf("go%s.src.tar.gz", version)
url := "https://dl.google.com/go/" + file
dst := filepath.Join(cachedir, file)
if err := csdb.DownloadFile(url, dst); err != nil {
log.Fatal(err)
}
return dst
}
func ppaUpload(workdir, ppa, sshUser string, files []string) {
p := strings.Split(ppa, "/")
if len(p) != 2 {
@@ -561,7 +613,9 @@ type debPackage struct {
}
type debMetadata struct {
Env build.Environment
Env build.Environment
GoBootPackage string
GoBootPath string
PackageName string
@@ -590,19 +644,21 @@ func (d debExecutable) Package() string {
return d.BinaryName
}
func newDebMetadata(distro, author string, env build.Environment, t time.Time, name string, version string, exes []debExecutable) debMetadata {
func newDebMetadata(distro, goboot, author string, env build.Environment, t time.Time, name string, version string, exes []debExecutable) debMetadata {
if author == "" {
// No signing key, use default author.
author = "Ethereum Builds <fjl@ethereum.org>"
}
return debMetadata{
PackageName: name,
Env: env,
Author: author,
Distro: distro,
Version: version,
Time: t.Format(time.RFC1123Z),
Executables: exes,
GoBootPackage: goboot,
GoBootPath: debGoBootPaths[goboot],
PackageName: name,
Env: env,
Author: author,
Distro: distro,
Version: version,
Time: t.Format(time.RFC1123Z),
Executables: exes,
}
}
@@ -667,7 +723,6 @@ func stageDebianSource(tmpdir string, meta debMetadata) (pkgdir string) {
if err := os.Mkdir(pkgdir, 0755); err != nil {
log.Fatal(err)
}
// Copy the source code.
build.MustRunCommand("git", "checkout-index", "-a", "--prefix", pkgdir+string(filepath.Separator))
@@ -685,7 +740,6 @@ func stageDebianSource(tmpdir string, meta debMetadata) (pkgdir string) {
build.Render("build/deb/"+meta.PackageName+"/deb.install", install, 0644, exe)
build.Render("build/deb/"+meta.PackageName+"/deb.docs", docs, 0644, exe)
}
return pkgdir
}
@@ -733,9 +787,12 @@ func doWindowsInstaller(cmdline []string) {
build.Render("build/nsis.uninstall.nsh", filepath.Join(*workdir, "uninstall.nsh"), 0644, allTools)
build.Render("build/nsis.pathupdate.nsh", filepath.Join(*workdir, "PathUpdate.nsh"), 0644, nil)
build.Render("build/nsis.envvarupdate.nsh", filepath.Join(*workdir, "EnvVarUpdate.nsh"), 0644, nil)
build.CopyFile(filepath.Join(*workdir, "SimpleFC.dll"), "build/nsis.simplefc.dll", 0755)
build.CopyFile(filepath.Join(*workdir, "COPYING"), "COPYING", 0755)
if err := cp.CopyFile(filepath.Join(*workdir, "SimpleFC.dll"), "build/nsis.simplefc.dll"); err != nil {
log.Fatal("Failed to copy SimpleFC.dll: %v", err)
}
if err := cp.CopyFile(filepath.Join(*workdir, "COPYING"), "COPYING"); err != nil {
log.Fatal("Failed to copy copyright note: %v", err)
}
// Build the installer. This assumes that all the needed files have been previously
// built (don't mix building and packaging to keep cross compilation complexity to a
// minimum).
@@ -752,7 +809,6 @@ func doWindowsInstaller(cmdline []string) {
"/DARCH="+*arch,
filepath.Join(*workdir, "geth.nsi"),
)
// Sign and publish installer.
if err := archiveUpload(installer, *upload, *signer); err != nil {
log.Fatal(err)
@@ -825,15 +881,15 @@ func gomobileTool(subcmd string, args ...string) *exec.Cmd {
cmd := exec.Command(filepath.Join(GOBIN, "gomobile"), subcmd)
cmd.Args = append(cmd.Args, args...)
cmd.Env = []string{
"GOPATH=" + build.GOPATH(),
"PATH=" + GOBIN + string(os.PathListSeparator) + os.Getenv("PATH"),
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") || strings.HasPrefix(e, "GOBIN=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
return cmd
}
@@ -902,7 +958,7 @@ func doXCodeFramework(cmdline []string) {
if *local {
// If we're building locally, use the build folder and stop afterwards
bind.Dir, _ = filepath.Abs(GOBIN)
bind.Dir = GOBIN
build.MustRun(bind)
return
}
@@ -1013,16 +1069,10 @@ func doXgo(cmdline []string) {
func xgoTool(args []string) *exec.Cmd {
cmd := exec.Command(filepath.Join(GOBIN, "xgo"), args...)
cmd.Env = []string{
"GOPATH=" + build.GOPATH(),
cmd.Env = os.Environ()
cmd.Env = append(cmd.Env, []string{
"GOBIN=" + GOBIN,
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "GOBIN=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
}...)
return cmd
}
@@ -1049,6 +1099,8 @@ func doPurge(cmdline []string) {
if err != nil {
log.Fatal(err)
}
fmt.Printf("Found %d blobs\n", len(blobs))
// Iterate over the blobs, collect and sort all unstable builds
for i := 0; i < len(blobs); i++ {
if !strings.Contains(blobs[i].Name, "unstable") {
@@ -1070,6 +1122,7 @@ func doPurge(cmdline []string) {
break
}
}
fmt.Printf("Deleting %d blobs\n", len(blobs))
// Delete all marked as such and return
if err := build.AzureBlobstoreDelete(auth, blobs); err != nil {
log.Fatal(err)


@@ -1,19 +0,0 @@
#!/bin/sh
# Cleaning the Go cache only makes sense if we actually have Go installed... or
# if Go is actually callable. This does not hold true during deb packaging, so
# we need an explicit check to avoid build failures.
if ! command -v go > /dev/null; then
exit
fi
version_gt() {
test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"
}
golang_version=$(go version |cut -d' ' -f3 |sed 's/go//')
# Clean go build cache when go version is greater than or equal to 1.10
if !(version_gt 1.10 $golang_version); then
go clean -cache
fi


@@ -2,7 +2,7 @@ Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.11
Build-Depends: debhelper (>= 8.0.0), {{.GoBootPackage}}
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git


@@ -4,11 +4,27 @@
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
# Launchpad rejects Go's access to $HOME/.cache, use custom folder
# Launchpad rejects Go's access to $HOME, use custom folders
export GOCACHE=/tmp/go-build
export GOPATH=/tmp/gopath
export GOROOT_BOOTSTRAP={{.GoBootPath}}
override_dh_auto_clean:
# Don't try to be smart Launchpad, we know our build rules better than you
override_dh_auto_build:
build/env.sh /usr/lib/go-1.11/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
# We can't download a fresh Go within Launchpad, so we're shipping and building
# one on the fly. However, we can't build it inside the go-ethereum folder as
# bootstrapping clashes with go modules, so build in a sibling folder.
(mv .go ../ && cd ../.go/src && ./make.bash)
# We can't download external go modules within Launchpad, so we're shipping the
# entire dependency source cache with go-ethereum.
mkdir -p $(GOPATH)/pkg
mv .mod $(GOPATH)/pkg/mod
# A fresh Go was built, all dependency downloads faked, hope build works now
../.go/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:


@@ -1,30 +0,0 @@
#!/bin/sh
set -e
if [ ! -f "build/env.sh" ]; then
echo "$0 must be run from the root of the repository."
exit 2
fi
# Create fake Go workspace if it doesn't exist yet.
workspace="$PWD/build/_workspace"
root="$PWD"
ethdir="$workspace/src/github.com/ethereum"
if [ ! -L "$ethdir/go-ethereum" ]; then
mkdir -p "$ethdir"
cd "$ethdir"
ln -s ../../../../../. go-ethereum
cd "$root"
fi
# Set up the environment to use the workspace.
GOPATH="$workspace"
export GOPATH
# Run the command inside the workspace.
cd "$ethdir/go-ethereum"
PWD="$ethdir/go-ethereum"
# Launch the arguments with the configured environment.
exec "$@"


@@ -19,9 +19,9 @@ Section "Geth" GETH_IDX
# Create start menu launcher
createDirectory "$SMPROGRAMS\${APPNAME}"
createShortCut "$SMPROGRAMS\${APPNAME}\${APPNAME}.lnk" "$INSTDIR\geth.exe" "--fast" "--cache=512"
createShortCut "$SMPROGRAMS\${APPNAME}\Attach.lnk" "$INSTDIR\geth.exe" "attach" "" ""
createShortCut "$SMPROGRAMS\${APPNAME}\Uninstall.lnk" "$INSTDIR\uninstall.exe" "" "" ""
createShortCut "$SMPROGRAMS\${APPNAME}\${APPNAME}.lnk" "$INSTDIR\geth.exe"
createShortCut "$SMPROGRAMS\${APPNAME}\Attach.lnk" "$INSTDIR\geth.exe" "attach"
createShortCut "$SMPROGRAMS\${APPNAME}\Uninstall.lnk" "$INSTDIR\uninstall.exe"
# Firewall - remove rules (if exists)
SimpleFC::AdvRemoveRule "Geth incoming peers (TCP:30303)"

cmd/abidump/main.go Normal file

@@ -0,0 +1,74 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"encoding/hex"
"flag"
"fmt"
"os"
"strings"
"github.com/ethereum/go-ethereum/signer/core"
"github.com/ethereum/go-ethereum/signer/fourbyte"
)
func init() {
flag.Usage = func() {
fmt.Fprintln(os.Stderr, "Usage:", os.Args[0], "<hexdata>")
flag.PrintDefaults()
fmt.Fprintln(os.Stderr, `
Parses the given ABI data and tries to interpret it from the fourbyte database.`)
}
}
func parse(data []byte) {
db, err := fourbyte.New()
if err != nil {
die(err)
}
messages := core.ValidationMessages{}
db.ValidateCallData(nil, data, &messages)
for _, m := range messages.Messages {
fmt.Printf("%v: %v\n", m.Typ, m.Message)
}
}
// Example
// ./abidump a9059cbb000000000000000000000000ea0e2dc7d65a50e77fc7e84bff3fd2a9e781ff5c0000000000000000000000000000000000000000000000015af1d78b58c40000
func main() {
flag.Parse()
switch {
case flag.NArg() == 1:
hexdata := flag.Arg(0)
data, err := hex.DecodeString(strings.TrimPrefix(hexdata, "0x"))
if err != nil {
die(err)
}
parse(data)
default:
fmt.Fprintln(os.Stderr, "Error: one argument needed")
flag.Usage()
os.Exit(2)
}
}
func die(args ...interface{}) {
fmt.Fprintln(os.Stderr, args...)
os.Exit(1)
}


@@ -21,29 +21,20 @@ import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common/compiler"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
const (
commandHelperTemplate = `{{.Name}}{{if .Subcommands}} command{{end}}{{if .Flags}} [command options]{{end}} [arguments...]
{{if .Description}}{{.Description}}
{{end}}{{if .Subcommands}}
SUBCOMMANDS:
{{range .Subcommands}}{{.Name}}{{with .ShortName}}, {{.}}{{end}}{{ "\t" }}{{.Usage}}
{{end}}{{end}}{{if .Flags}}
OPTIONS:
{{range $.Flags}}{{"\t"}}{{.}}
{{end}}
{{end}}`
)
var (
// Git SHA1 commit hash of the release (set via linker flags)
gitCommit = ""
@@ -103,10 +94,14 @@ var (
Usage: "Destination language for the bindings (go, java, objc)",
Value: "go",
}
aliasFlag = cli.StringFlag{
Name: "alias",
Usage: "Comma separated aliases for function and event renaming, e.g. foo=bar",
}
)
func init() {
app = utils.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app = flags.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app.Flags = []cli.Flag{
abiFlag,
binFlag,
@@ -120,9 +115,10 @@ func init() {
pkgFlag,
outFlag,
langFlag,
aliasFlag,
}
app.Action = utils.MigrateFlags(abigen)
cli.CommandHelpTemplate = commandHelperTemplate
cli.CommandHelpTemplate = flags.OriginCommandHelpTemplate
}
func abigen(c *cli.Context) error {
@@ -144,11 +140,12 @@ func abigen(c *cli.Context) error {
}
// If the entire solidity code was specified, build and bind based on that
var (
abis []string
bins []string
types []string
sigs []map[string]string
libs = make(map[string]string)
abis []string
bins []string
types []string
sigs []map[string]string
libs = make(map[string]string)
aliases = make(map[string]string)
)
if c.GlobalString(abiFlag.Name) != "" {
// Load up the ABI, optional bytecode and type name from the parameters
@@ -199,10 +196,22 @@ func abigen(c *cli.Context) error {
utils.Fatalf("Failed to build Solidity contract: %v", err)
}
case c.GlobalIsSet(vyFlag.Name):
contracts, err = compiler.CompileVyper(c.GlobalString(vyperFlag.Name), c.GlobalString(vyFlag.Name))
output, err := compiler.CompileVyper(c.GlobalString(vyperFlag.Name), c.GlobalString(vyFlag.Name))
if err != nil {
utils.Fatalf("Failed to build Vyper contract: %v", err)
}
contracts = make(map[string]*compiler.Contract)
for n, contract := range output {
name := n
// Sanitize the combined json names to match the
// format expected by solidity.
if !strings.Contains(n, ":") {
// Remove extra path components
name = abi.ToCamelCase(strings.TrimSuffix(filepath.Base(name), ".vy"))
}
contracts[name] = contract
}
case c.GlobalIsSet(jsonFlag.Name):
jsonOutput, err := ioutil.ReadFile(c.GlobalString(jsonFlag.Name))
if err != nil {
@@ -232,8 +241,20 @@ func abigen(c *cli.Context) error {
libs[libPattern] = nameParts[len(nameParts)-1]
}
}
// Extract all aliases from the flags
if c.GlobalIsSet(aliasFlag.Name) {
// We support multi-versions for aliasing
// e.g.
// foo=bar,foo2=bar2
// foo:bar,foo2:bar2
re := regexp.MustCompile(`(?:(\w+)[:=](\w+))`)
submatches := re.FindAllStringSubmatch(c.GlobalString(aliasFlag.Name), -1)
for _, match := range submatches {
aliases[match[1]] = match[2]
}
}
// Generate the contract binding
code, err := bind.Bind(types, abis, bins, sigs, c.GlobalString(pkgFlag.Name), lang, libs)
code, err := bind.Bind(types, abis, bins, sigs, c.GlobalString(pkgFlag.Name), lang, libs, aliases)
if err != nil {
utils.Fatalf("Failed to generate ABI binding: %v", err)
}
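As a standalone illustration of the alias parsing added above, the following sketch runs the same `(?:(\w+)[:=](\w+))` expression over a sample flag value; the function name and sample aliases are made up for the example:

```go
package main

import (
	"fmt"
	"regexp"
)

// parseAliases mirrors the --alias handling shown in the diff: comma separated
// pairs in either foo=bar or foo:bar form are collected into a rename map.
func parseAliases(spec string) map[string]string {
	aliases := make(map[string]string)
	re := regexp.MustCompile(`(?:(\w+)[:=](\w+))`)
	for _, match := range re.FindAllStringSubmatch(spec, -1) {
		aliases[match[1]] = match[2]
	}
	return aliases
}

func main() {
	fmt.Println(parseAliases("transfer=sendTokens,approve:allowSpender"))
	// Output: map[approve:allowSpender transfer:sendTokens]
}
```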


@@ -70,7 +70,9 @@ func main() {
if err = crypto.SaveECDSA(*genKey, nodeKey); err != nil {
utils.Fatalf("%v", err)
}
return
if !*writeAddr {
return
}
case *nodeKeyFile == "" && *nodeKeyHex == "":
utils.Fatalf("Use -nodekey or -nodekeyhex to specify a private key")
case *nodeKeyFile != "" && *nodeKeyHex != "":


@@ -0,0 +1,103 @@
## Checkpoint-admin
Checkpoint-admin is a tool for updating checkpoint oracle status. It provides a series of functions including deploying checkpoint oracle contract, signing for new checkpoints, and updating checkpoints in the checkpoint oracle contract.
### Checkpoint
In the LES protocol, there is an important concept called checkpoint. In simple terms, whenever a certain number of blocks are generated on the blockchain, a new checkpoint is generated which contains some important information such as
* Block hash at checkpoint
* Canonical hash trie root at checkpoint
* Bloom trie root at checkpoint
*For a more detailed introduction to checkpoint, please see the LES [spec](https://github.com/ethereum/devp2p/blob/master/caps/les.md).*
Using this information, light clients can skip all historical block headers when synchronizing data and start synchronization from this checkpoint. Therefore, as long as the light client can obtain some latest and correct checkpoints, the amount of data and time for synchronization will be greatly reduced.
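As a rough sketch of the data a checkpoint carries, the struct below groups the fields listed above; the names follow go-ethereum's params.TrustedCheckpoint, but treat the exact layout here as an assumption rather than a specification:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// checkpoint bundles the information listed above: which fixed-size section of
// the chain it covers and the roots a light client needs to resume syncing
// from that point. Modeled loosely on params.TrustedCheckpoint.
type checkpoint struct {
	SectionIndex uint64      // index of the block section (32768 blocks per section on mainnet)
	SectionHead  common.Hash // block hash at the checkpoint
	CHTRoot      common.Hash // canonical hash trie root at the checkpoint
	BloomRoot    common.Hash // bloom trie root at the checkpoint
}

func main() {
	c := checkpoint{SectionIndex: 300}
	fmt.Printf("checkpoint covers sections up to %d, head %x\n", c.SectionIndex, c.SectionHead)
}
```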
However, from a security perspective, the most critical step in a synchronization algorithm based on checkpoints is to determine whether the checkpoint used by the light client is correct. Otherwise, all blockchain data synchronized based on this checkpoint may be wrong. For this we provide two different ways to ensure the correctness of the checkpoint used by the light client.
#### Hardcoded checkpoint
There are several hardcoded checkpoints in the [source code](https://github.com/ethereum/go-ethereum/blob/master/params/config.go#L38) of the go-ethereum project. These checkpoints are updated by go-ethereum developers when new versions of software are released. Because light client users trust Geth developers to some extent, hardcoded checkpoints in the code can also be considered correct.
#### Checkpoint oracle
Hardcoded checkpoints solve the problem of verifying the correctness of checkpoints (although this is a more centralized solution). The pain point is that developers can only update them when a new version of the software is released, and light client users usually do not keep their Geth version up to date, so the hardcoded checkpoints they rely on are generally stale. Therefore, the light client still needs to download a large amount of blockchain data during synchronization.
Checkpoint oracle is a more flexible solution. In simple terms, this is a smart contract that is deployed on the blockchain. The smart contract records several designated trusted signers. Whenever enough trusted signers have issued their signatures for the same checkpoint, it can be considered that the checkpoint has been authenticated by the signers. Checkpoints authenticated by trusted signers can be considered correct.
So this way, even without updating the software version, as long as the trusted signers regularly update the checkpoint in oracle on time, the light client can always use the latest and verified checkpoint for data synchronization.
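For intuition only, the sketch below shows, off-chain and in Go, the kind of threshold check the oracle performs on-chain: recover the signer address from each signature over the checkpoint hash and count how many belong to the trusted set. The function name `enoughSignatures` and the signed payload are illustrative assumptions, not the contract's actual interface; the real oracle also binds the signature to the contract address and other fields, which are omitted here.
```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// enoughSignatures counts how many distinct trusted signers approved the given
// (already hashed) checkpoint payload and compares that against the threshold.
func enoughSignatures(signedHash common.Hash, sigs [][]byte, trusted map[common.Address]bool, threshold int) bool {
	seen := make(map[common.Address]bool)
	for _, sig := range sigs {
		pub, err := crypto.SigToPub(signedHash.Bytes(), sig)
		if err != nil {
			continue // malformed signature, ignore it
		}
		addr := crypto.PubkeyToAddress(*pub)
		if trusted[addr] && !seen[addr] {
			seen[addr] = true
		}
	}
	return len(seen) >= threshold
}

func main() {
	// With no signatures, a threshold of 1 is obviously not met.
	fmt.Println(enoughSignatures(common.Hash{}, nil, map[common.Address]bool{}, 1))
}
```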
### Usage
Checkpoint-admin is a command line tool designed for the checkpoint oracle. Users can easily deploy the contract and update checkpoints with it.
#### Install
```shell
go get github.com/ethereum/go-ethereum/cmd/checkpoint-admin
```
#### Deploy
Deploy the checkpoint oracle contract. `--signers` specifies the list of trusted signers, and `--threshold` the minimum number of trusted-signer signatures required to update a checkpoint.
```shell
checkpoint-admin deploy --rpc <NODE_RPC_ENDPOINT> --clef <CLEF_ENDPOINT> --signer <SIGNER_TO_SIGN_TX> --signers <TRUSTED_SIGNER_LIST> --threshold 1
```
It is worth noting that checkpoint-admin only supports clef as the signer for transactions and plain text (checkpoints). For more on clef usage, please see the clef [tutorial](https://geth.ethereum.org/docs/clef/tutorial).
#### Sign
Checkpoint-admin provides two different signing modes. You can automatically obtain the current stable checkpoint and sign it interactively, or you can use the information provided by command line flags to sign a checkpoint offline.
**Interactive mode**
```shell
checkpoint-admin sign --clef <CLEF_ENDPOINT> --signer <SIGNER_TO_SIGN_CHECKPOINT> --rpc <NODE_RPC_ENDPOINT>
```
*It is worth noting that the connected Geth node can be a full node or a light client. If it is a full node, you must enable the LES protocol, e.g. by adding `--light.serve 50` to the startup command line flags.*
**Offline mode**
```shell
checkpoint-admin sign --clef <CLEF_ENDPOINT> --signer <SIGNER_TO_SIGN_CHECKPOINT> --index <CHECKPOINT_INDEX> --hash <CHECKPOINT_HASH> --oracle <CHECKPOINT_ORACLE_ADDRESS>
```
*CHECKPOINT_HASH is obtained based on this [calculation method](https://github.com/ethereum/go-ethereum/blob/master/params/config.go#L251).*
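As a rough, non-authoritative sketch of that calculation (the linked code is authoritative), the checkpoint hash is the keccak256 of the 8-byte big-endian section index concatenated with the section head, the CHT root and the bloom trie root:
```go
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// checkpointHash sketches how a checkpoint hash is derived from its fields:
// 8-byte big-endian section index, then section head, CHT root and bloom trie
// root, hashed with keccak256. Defer to params/config.go for the exact layout.
func checkpointHash(sectionIndex uint64, sectionHead, chtRoot, bloomRoot common.Hash) common.Hash {
	buf := make([]byte, 8+3*common.HashLength)
	binary.BigEndian.PutUint64(buf, sectionIndex)
	copy(buf[8:], sectionHead.Bytes())
	copy(buf[40:], chtRoot.Bytes())
	copy(buf[72:], bloomRoot.Bytes())
	return crypto.Keccak256Hash(buf)
}

func main() {
	fmt.Println(checkpointHash(256, common.Hash{}, common.Hash{}, common.Hash{}).Hex())
}
```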
#### Publish
Collect enough signatures from different trusted signers for the same checkpoint and submit them to the oracle to update the "authenticated" checkpoint in the contract.
```shell
checkpoint-admin publish --clef <CLEF_ENDPOINT> --rpc <NODE_RPC_ENDPOINT> --signer <SIGNER_TO_SIGN_TX> --index <CHECKPOINT_INDEX> --signatures <CHECKPOINT_SIGNATURE_LIST>
```
#### Status query
Check the latest status of checkpoint oracle.
```shell
checkpoint-admin status --rpc <NODE_RPC_ENDPOINT>
```
### Enable checkpoint oracle in your private network
Currently, only the Ethereum mainnet and the default supported test networks (Ropsten, Rinkeby, Goerli) activate this feature. If you want to activate it in your private network, you can override the relevant checkpoint oracle settings through the configuration file after deploying the oracle contract.
* Get your node configuration file `geth dumpconfig OTHER_COMMAND_LINE_OPTIONS > config.toml`
* Edit the configuration file and add the following information
```toml
[Eth.CheckpointOracle]
Address = CHECKPOINT_ORACLE_ADDRESS
Signers = [TRUSTED_SIGNER_1, ..., TRUSTED_SIGNER_N]
Threshold = THRESHOLD
```
* Start geth with the modified configuration file
*In a private network, all full nodes and light clients need to be started with the same checkpoint oracle settings.*
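For reference, the `[Eth.CheckpointOracle]` section above maps onto the checkpoint oracle config struct in the `params` package. The snippet below builds the equivalent configuration in Go, assuming the `params.CheckpointOracleConfig` field names used in this release; the addresses are placeholders, not real signers.
```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/params"
)

func main() {
	cfg := &params.CheckpointOracleConfig{
		// Address of the deployed checkpoint oracle contract (placeholder).
		Address: common.HexToAddress("0x0000000000000000000000000000000000000042"),
		// Trusted signer addresses (placeholders).
		Signers: []common.Address{
			common.HexToAddress("0x0000000000000000000000000000000000000001"),
			common.HexToAddress("0x0000000000000000000000000000000000000002"),
		},
		// Minimum number of signer approvals required per checkpoint.
		Threshold: 2,
	}
	fmt.Printf("oracle %s, %d signers, threshold %d\n", cfg.Address.Hex(), len(cfg.Signers), cfg.Threshold)
}
```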

View File

@@ -22,25 +22,12 @@ import (
"fmt"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common/fdlimit"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
const (
commandHelperTemplate = `{{.Name}}{{if .Subcommands}} command{{end}}{{if .Flags}} [command options]{{end}} [arguments...]
{{if .Description}}{{.Description}}
{{end}}{{if .Subcommands}}
SUBCOMMANDS:
{{range .Subcommands}}{{.Name}}{{with .ShortName}}, {{.}}{{end}}{{ "\t" }}{{.Usage}}
{{end}}{{end}}{{if .Flags}}
OPTIONS:
{{range $.Flags}}{{"\t"}}{{.}}
{{end}}
{{end}}`
)
var (
// Git SHA1 commit hash of the release (set via linker flags)
gitCommit = ""
@@ -50,7 +37,7 @@ var (
var app *cli.App
func init() {
app = utils.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app = flags.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app.Commands = []cli.Command{
commandStatus,
commandDeploy,
@@ -61,7 +48,7 @@ func init() {
oracleFlag,
nodeURLFlag,
}
cli.CommandHelpTemplate = commandHelperTemplate
cli.CommandHelpTemplate = flags.OriginCommandHelpTemplate
}
// Commonly used command line flags.

View File

@@ -9,7 +9,7 @@ Clef can run as a daemon on the same machine, off a usb-stick like [USB armory](
Check out the
* [CLI tutorial](tutorial.md) for some concrete examples on how Clef works.
* [Setup docs](docs/setup.md) for infos on how to configure Clef on QubesOS or USB Armory.
* [Setup docs](docs/setup.md) for information on how to configure Clef on QubesOS or USB Armory.
* [Data types](datatypes.md) for details on the communication messages between Clef and an external UI.
## Command line flags
@@ -33,12 +33,12 @@ GLOBAL OPTIONS:
--lightkdf Reduce key-derivation RAM & CPU usage at some expense of KDF strength
--nousb Disables monitoring for and managing USB hardware wallets
--pcscdpath value Path to the smartcard daemon (pcscd) socket file (default: "/run/pcscd/pcscd.comm")
--rpcaddr value HTTP-RPC server listening interface (default: "localhost")
--rpcvhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--http.addr value HTTP-RPC server listening interface (default: "localhost")
--http.vhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--ipcdisable Disable the IPC-RPC server
--ipcpath Filename for IPC socket/pipe within the datadir (explicit paths escape it)
--rpc Enable the HTTP-RPC server
--rpcport value HTTP-RPC server listening port (default: 8550)
--http Enable the HTTP-RPC server
--http.port value HTTP-RPC server listening port (default: 8550)
--signersecret value A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash
--4bytedb-custom value File used for writing new 4byte-identifiers submitted via API (default: "./4byte-custom.json")
--auditlog value File used to emit audit logs. Set to "" to disable (default: "audit.log")
@@ -46,6 +46,7 @@ GLOBAL OPTIONS:
--stdio-ui Use STDIN/STDOUT as a channel for an external UI. This means that an STDIN/STDOUT is used for RPC-communication with a e.g. a graphical user interface, and can be used when Clef is started by an external process.
--stdio-ui-test Mechanism to test interface between Clef and UI. Requires 'stdio-ui'.
--advanced If enabled, issues warnings instead of rejections for suspicious requests. Default off
--suppress-bootwarn If set, does not show the warning during boot
--help, -h show help
--version, -v print the version
```
@@ -112,11 +113,11 @@ Some snags and todos
### External API
Clef listens to HTTP requests on `rpcaddr`:`rpcport` (or to IPC on `ipcpath`), with the same JSON-RPC standard as Geth. The messages are expected to be [JSON-RPC 2.0 standard](https://www.jsonrpc.org/specification).
Clef listens to HTTP requests on `http.addr`:`http.port` (or to IPC on `ipcpath`), with the same JSON-RPC standard as Geth. The messages are expected to be [JSON-RPC 2.0 standard](https://www.jsonrpc.org/specification).
Some of these call can require user interaction. Clients must be aware that responses may be delayed significantly or may never be received if a users decides to ignore the confirmation request.
Some of these calls can require user interaction. Clients must be aware that responses may be delayed significantly or may never be received if a user decides to ignore the confirmation request.
The External API is **untrusted**: it does not accept credentials over this API, nor does it expect that requests have any authority.
The External API is **untrusted**: it does not accept credentials, nor does it expect that requests have any authority.
### Internal UI API
@@ -145,13 +146,11 @@ See the [external API changelog](extapi_changelog.md) for information about chan
All hex encoded values must be prefixed with `0x`.
## Methods
### account_new
#### Create new password protected account
The signer will generate a new private key, encrypts it according to [web3 keystore spec](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) and stores it in the keystore directory.
The signer will generate a new private key, encrypt it according to [web3 keystore spec](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) and store it in the keystore directory.
The client is responsible for creating a backup of the keystore. If the keystore is lost there is no method of retrieving lost accounts.
#### Arguments
@@ -160,7 +159,6 @@ None
#### Result
- address [string]: account address that is derived from the generated key
- url [string]: location of the keyfile
#### Sample call
```json
@@ -172,14 +170,11 @@ None
}
```
Response
```
```json
{
"id": 0,
"jsonrpc": "2.0",
"result": {
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"url": "keystore:///my/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
"result": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133"
}
```
@@ -195,8 +190,6 @@ None
#### Result
- array with account records:
- account.address [string]: account address that is derived from the generated key
- account.type [string]: type of the
- account.url [string]: location of the account
#### Sample call
```json
@@ -207,21 +200,13 @@ None
}
```
Response
```
```json
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"address": "0xafb2f771f58513609765698f65d3f2f0224a956f",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T07-26-47.162109726Z--afb2f771f58513609765698f65d3f2f0224a956f"
},
{
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
"0xafb2f771f58513609765698f65d3f2f0224a956f",
"0xbea9183f8f4f03d427f6bcea17388bdff1cab133"
]
}
```
@@ -229,10 +214,10 @@ Response
### account_signTransaction
#### Sign transactions
Signs a transactions and responds with the signed transaction in RLP encoded form.
Signs a transaction and responds with the signed transaction in RLP-encoded and JSON forms.
#### Arguments
2. transaction object:
1. transaction object:
- `from` [address]: account to send the transaction from
- `to` [address]: receiver account. If omitted or `0x`, will cause contract creation.
- `gas` [number]: maximum amount of gas to burn
@@ -240,12 +225,13 @@ Response
- `value` [number:optional]: amount of Wei to send with the transaction
- `data` [data:optional]: input data
- `nonce` [number]: account nonce
3. method signature [string:optional]
1. method signature [string:optional]
- The method signature, if present, is to aid decoding the calldata. Should consist of `methodname(paramtype,...)`, e.g. `transfer(uint256,address)`. The signer may use this data to parse the supplied calldata, and show the user. The data, however, is considered totally untrusted, and reliability is not expected.
#### Result
- signed transaction in RLP encoded form [data]
- raw [data]: signed transaction in RLP encoded form
- tx [json]: signed transaction in JSON form
#### Sample call
```json
@@ -270,11 +256,22 @@ Response
```json
{
"id": 2,
"jsonrpc": "2.0",
"error": {
"code": -32000,
"message": "Request denied"
"id": 2,
"result": {
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1234",
"gas": "0x55555",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x1234",
"input": "0xabcd",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
}
```
@@ -326,7 +323,7 @@ Response
Bash example:
```bash
#curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
> curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
{"jsonrpc":"2.0","id":67,"result":{"raw":"0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","tx":{"nonce":"0x0","gasPrice":"0x1","gas":"0x333","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0","value":"0x0","input":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012","v":"0x26","r":"0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e","s":"0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","hash":"0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"}}}
```
@@ -373,7 +370,7 @@ Response
### account_signTypedData
#### Sign data
Signs a chunk of structured data conformant to [EIP712]([EIP-712](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-712.md)) and returns the calculated signature.
Signs a chunk of structured data conformant to [EIP-712](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-712.md) and returns the calculated signature.
#### Arguments
- account [address]: account to sign with
@@ -469,7 +466,7 @@ Response
### account_ecRecover
#### Sign data
#### Recover the signing address
Derive the address from the account that was used to sign data with content type `text/plain` and the signature.
@@ -487,7 +484,6 @@ Derive the address from the account that was used to sign data with content type
"jsonrpc": "2.0",
"method": "account_ecRecover",
"params": [
"data/plain",
"0xaabbccdd",
"0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
]
@@ -503,117 +499,36 @@ Response
}
```
### account_import
### account_version
#### Import account
Import a private key into the keystore. The imported key is expected to be encrypted according to the web3 keystore
format.
#### Get external API version
Get the version of the external API used by Clef.
#### Arguments
- account [object]: key in [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) (retrieved with account_export)
None
#### Result
- imported key [object]:
- key.address [address]: address of the imported key
- key.type [string]: type of the account
- key.url [string]: key URL
* external API version [string]
#### Sample call
```json
{
"id": 6,
"id": 0,
"jsonrpc": "2.0",
"method": "account_import",
"params": [
{
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
}
]
"method": "account_version",
"params": []
}
```
Response
```json
{
"id": 6,
"jsonrpc": "2.0",
"result": {
"address": "0xc7412fc59930fd90099c917a50e5f11d0934b2f5",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T11-00-42.032024108Z--c7412fc59930fd90099c917a50e5f11d0934b2f5"
}
}
```
### account_export
#### Export account from keystore
Export a private key from the keystore. The exported private key is encrypted with the original password. When the
key is imported later this password is required.
#### Arguments
- account [address]: export private key that is associated with this account
#### Result
- exported key, see [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) for
more information
#### Sample call
```json
{
"id": 5,
"jsonrpc": "2.0",
"method": "account_export",
"params": [
"0xc7412fc59930fd90099c917a50e5f11d0934b2f5"
]
}
```
Response
```json
{
"id": 5,
"jsonrpc": "2.0",
"result": {
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
}
"id": 0,
"jsonrpc": "2.0",
"result": "6.0.0"
}
```
@@ -625,7 +540,7 @@ By starting the signer with the switch `--stdio-ui-test`, the signer will invoke
denials. This can be used during development to ensure that the API is (at least somewhat) correctly implemented.
See `pythonsigner`, which can be invoked via `python3 pythonsigner.py test` to perform the 'denial-handshake-test'.
All methods in this API uses object-based parameters, so that there can be no mixups of parameters: each piece of data is accessed by key.
All methods in this API use object-based parameters, so that there can be no mixup of parameters: each piece of data is accessed by key.
See the [ui API changelog](intapi_changelog.md) for information about changes to this API.
@@ -784,12 +699,10 @@ Invoked when a request for account listing has been made.
{
"accounts": [
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-20T14-44-54.089682944Z--123409812340981234098123409812deadbeef42",
"address": "0x123409812340981234098123409812deadbeef42"
},
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-23T21-59-03.199240693Z--cafebabedeadbeef34098123409812deadbeef42",
"address": "0xcafebabedeadbeef34098123409812deadbeef42"
}
@@ -819,7 +732,13 @@ Invoked when a request for account listing has been made.
{
"address": "0x123409812340981234098123409812deadbeef42",
"raw_data": "0x01020304",
"message": "\u0019Ethereum Signed Message:\n4\u0001\u0002\u0003\u0004",
"messages": [
{
"name": "message",
"value": "\u0019Ethereum Signed Message:\n4\u0001\u0002\u0003\u0004",
"type": "text/plain"
}
],
"hash": "0x7e3a4e7a9d1744bc5c675c25e1234ca8ed9162bd17f78b9085e48047c15ac310",
"meta": {
"remote": "signer binary",
@@ -829,12 +748,34 @@ Invoked when a request for account listing has been made.
}
]
}
```
### ApproveNewAccount / `ui_approveNewAccount`
Invoked when a request for creating a new account has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 4,
"method": "ui_approveNewAccount",
"params": [
{
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ShowInfo / `ui_showInfo`
The UI should show the info to the user. Does not expect response.
The UI should show the info (a single message) to the user. Does not expect response.
#### Sample call
@@ -844,9 +785,7 @@ The UI should show the info to the user. Does not expect response.
"id": 9,
"method": "ui_showInfo",
"params": [
{
"text": "Tests completed"
}
"Tests completed"
]
}
@@ -854,18 +793,16 @@ The UI should show the info to the user. Does not expect response.
### ShowError / `ui_showError`
The UI should show the info to the user. Does not expect response.
The UI should show the error (a single message) to the user. Does not expect response.
```json
{
"jsonrpc": "2.0",
"id": 2,
"method": "ShowError",
"method": "ui_showError",
"params": [
{
"text": "Testing 'ShowError'"
}
"Something bad happened!"
]
}
@@ -879,9 +816,36 @@ When implementing rate-limited rules, this callback should be used.
TLDR; Use this method to keep track of signed transactions, instead of using the data in `ApproveTx`.
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_onApprovedTx",
"params": [
{
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1",
"gas": "0x333",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"input": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
]
}
```
### OnSignerStartup / `ui_onSignerStartup`
This method provide the UI with information about what API version the signer uses (both internal and external) aswell as build-info and external API,
This method provides the UI with information about what API version the signer uses (both internal and external) as well as build-info and external API,
in k/v-form.
Example call:
@@ -905,6 +869,27 @@ Example call:
```
### OnInputRequired / `ui_onInputRequired`
Invoked when Clef requires user input (e.g. a password).
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_onInputRequired",
"params": [
{
"title": "Account password",
"prompt": "Please enter the password for account 0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"isPassword": true
}
]
}
```
### Rules for UI apis

View File

@@ -3,7 +3,7 @@
These data types are defined in the channel between clef and the UI
### SignDataRequest
SignDataRequest contains information about a pending request to sign some data. The data to be signed can be of various types, defined by content-type. Clef has done most of the work in canonicalizing and making sense of the data, and it's up to the UI to presentthe user with the contents of the `message`
SignDataRequest contains information about a pending request to sign some data. The data to be signed can be of various types, defined by content-type. Clef has done most of the work in canonicalizing and making sense of the data, and it's up to the UI to present the user with the contents of the `message`
Example:
```json

View File

@@ -34,7 +34,7 @@ There are two ways that this can be achieved: integrated via Qubes or integrated
#### 1. Qubes Integrated
Qubes provdes a facility for inter-qubes communication via `qrexec`. A qube can request to make a cross-qube RPC request
Qubes provides a facility for inter-qubes communication via `qrexec`. A qube can request to make a cross-qube RPC request
to another qube. The OS then asks the user if the call is permitted.
![Example](qubes/qrexec-example.png)
@@ -48,7 +48,7 @@ This is how [Split GPG](https://www.qubes-os.org/doc/split-gpg/) is implemented.
![Clef via qrexec](qubes/clef_qubes_qrexec.png)
On the `target` qubes, we need to define the rpc service.
On the `target` qubes, we need to define the RPC service.
[qubes.Clefsign](qubes/qubes.Clefsign):
@@ -135,11 +135,11 @@ $ cat newaccnt.json
$ cat newaccnt.json| qrexec-client-vm debian-work qubes.Clefsign
```
This should pop up first a dialog to allow the IPC call:
A dialog should pop up first to allow the IPC call:
![one](qubes/qubes_newaccount-1.png)
Followed by a GTK-dialog to approve the operation
Followed by a GTK-dialog to approve the operation:
![two](qubes/qubes_newaccount-2.png)
@@ -169,7 +169,7 @@ However, it comes with a couple of drawbacks:
- The `Origin` header must be forwarded
- Information about the remote ip must be added as a `X-Forwarded-For`. However, Clef cannot always trust an `XFF` header,
since malicious clients may lie about `XFF` in order to fool the http server into believing it comes from another address.
- Even with a policy in place to allow rpc-calls between `caller` and `target`, there will be several popups:
- Even with a policy in place to allow RPC calls between `caller` and `target`, there will be several popups:
- One qubes-specific where the user specifies the `target` vm
- One clef-specific to approve the transaction
@@ -177,7 +177,7 @@ However, it comes with a couple of drawbacks:
#### 2. Network integrated
The second way to set up Clef on a qubes system is to allow networking, and have Clef listen to a port which is accessible
form other qubes.
from other qubes.
![Clef via http](qubes/clef_qubes_http.png)
@@ -186,13 +186,13 @@ form other qubes.
## USBArmory
The [USB armory](https://inversepath.com/usbarmory) is an open source hardware design with an 800 Mhz ARM processor. It is a pocket-size
The [USB armory](https://inversepath.com/usbarmory) is an open source hardware design with an 800 MHz ARM processor. It is a pocket-size
computer. When inserted into a laptop, it identifies itself as a USB network interface, basically adding another network
to your computer. Over this new network interface, you can SSH into the device.
Running Clef off a USB armory means that you can use the armory as a very versatile offline computer, which only
ever connects to a local network between your computer and the device itself.
Needless to say, the while this model should be fairly secure against remote attacks, an attacker with physical access
Needless to say, while this model should be fairly secure against remote attacks, an attacker with physical access
to the USB Armory would trivially be able to extract the contents of the device filesystem.

View File

@@ -10,6 +10,17 @@ TL;DR: Given a version number MAJOR.MINOR.PATCH, increment the:
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
### 7.0.1
Added `clef_New` to the internal API callable from a UI.
> `New` creates a new password protected Account. The private key is protected with
> the given password. Users are responsible to backup the private key that is stored
> in the keystore location that was specified when this API was created.
> This method is the same as New on the external API, the difference being that
> this implementation does not ask for confirmation, since it's initiated by
> the user
### 7.0.0
- The `message` field was renamed to `messages` in all data signing request methods to better reflect that it's a list, not a value.
@@ -150,7 +161,7 @@ UserInputResponse struct {
#### 1.2.0
* Add `OnStartup` method, to provide the UI with information about what API version
the signer uses (both internal and external) aswell as build-info and external api.
the signer uses (both internal and external) as well as build-info and external api.
Example call:
```json

View File

@@ -32,6 +32,7 @@ import (
"os/user"
"path/filepath"
"runtime"
"sort"
"strings"
"time"
@@ -40,10 +41,10 @@ import (
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
@@ -53,6 +54,7 @@ import (
"github.com/ethereum/go-ethereum/signer/fourbyte"
"github.com/ethereum/go-ethereum/signer/rules"
"github.com/ethereum/go-ethereum/signer/storage"
colorable "github.com/mattn/go-colorable"
"github.com/mattn/go-isatty"
"gopkg.in/urfave/cli.v1"
@@ -82,6 +84,10 @@ var (
Name: "advanced",
Usage: "If enabled, issues warnings instead of rejections for suspicious requests. Default off",
}
acceptFlag = cli.BoolFlag{
Name: "suppress-bootwarn",
Usage: "If set, does not show the warning during boot",
}
keystoreFlag = cli.StringFlag{
Name: "keystore",
Value: filepath.Join(node.DefaultDataDir(), "keystore"),
@@ -98,10 +104,15 @@ var (
Usage: "Chain id to use for signing (1=mainnet, 3=Ropsten, 4=Rinkeby, 5=Goerli)",
}
rpcPortFlag = cli.IntFlag{
Name: "rpcport",
Name: "http.port",
Usage: "HTTP-RPC server listening port",
Value: node.DefaultHTTPPort + 5,
}
legacyRPCPortFlag = cli.IntFlag{
Name: "rpcport",
Usage: "HTTP-RPC server listening port (Deprecated, please use --http.port).",
Value: node.DefaultHTTPPort + 5,
}
signerSecretFlag = cli.StringFlag{
Name: "signersecret",
Usage: "A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash",
@@ -187,6 +198,22 @@ The setpw command stores a password for a given address (keyfile).
Description: `
The delpw command removes a password for a given address (keyfile).
`}
newAccountCommand = cli.Command{
Action: utils.MigrateFlags(newAccount),
Name: "newaccount",
Usage: "Create a new account",
ArgsUsage: "",
Flags: []cli.Flag{
logLevelFlag,
keystoreFlag,
utils.LightKDFFlag,
acceptFlag,
},
Description: `
The newaccount command creates a new keystore-backed account. It is a convenience-method
which can be used in lieu of an external UI.`,
}
gendocCommand = cli.Command{
Action: GenDoc,
Name: "gendoc",
@@ -196,6 +223,42 @@ The gendoc generates example structures of the json-rpc communication types.
`}
)
// AppHelpFlagGroups is the application flags, grouped by functionality.
var AppHelpFlagGroups = []flags.FlagGroup{
{
Name: "FLAGS",
Flags: []cli.Flag{
logLevelFlag,
keystoreFlag,
configdirFlag,
chainIdFlag,
utils.LightKDFFlag,
utils.NoUSBFlag,
utils.SmartCardDaemonPathFlag,
utils.HTTPListenAddrFlag,
utils.HTTPVirtualHostsFlag,
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.HTTPEnabledFlag,
rpcPortFlag,
signerSecretFlag,
customDBFlag,
auditLogFlag,
ruleFlag,
stdiouiFlag,
testFlag,
advancedMode,
acceptFlag,
},
},
{
Name: "ALIASED (deprecated)",
Flags: []cli.Flag{
legacyRPCPortFlag,
},
},
}
func init() {
app.Name = "Clef"
app.Usage = "Manage Ethereum account operations"
@@ -207,11 +270,11 @@ func init() {
utils.LightKDFFlag,
utils.NoUSBFlag,
utils.SmartCardDaemonPathFlag,
utils.RPCListenAddrFlag,
utils.RPCVirtualHostsFlag,
utils.HTTPListenAddrFlag,
utils.HTTPVirtualHostsFlag,
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.RPCEnabledFlag,
utils.HTTPEnabledFlag,
rpcPortFlag,
signerSecretFlag,
customDBFlag,
@@ -220,9 +283,51 @@ func init() {
stdiouiFlag,
testFlag,
advancedMode,
acceptFlag,
legacyRPCPortFlag,
}
app.Action = signer
app.Commands = []cli.Command{initCommand, attestCommand, setCredentialCommand, delCredentialCommand, gendocCommand}
app.Commands = []cli.Command{initCommand,
attestCommand,
setCredentialCommand,
delCredentialCommand,
newAccountCommand,
gendocCommand}
cli.CommandHelpTemplate = flags.CommandHelpTemplate
// Override the default app help template
cli.AppHelpTemplate = flags.ClefAppHelpTemplate
// Override the default app help printer, but only for the global app help
originalHelpPrinter := cli.HelpPrinter
cli.HelpPrinter = func(w io.Writer, tmpl string, data interface{}) {
if tmpl == flags.ClefAppHelpTemplate {
// Render out custom usage screen
originalHelpPrinter(w, tmpl, flags.HelpData{App: data, FlagGroups: AppHelpFlagGroups})
} else if tmpl == flags.CommandHelpTemplate {
// Iterate over all command specific flags and categorize them
categorized := make(map[string][]cli.Flag)
for _, flag := range data.(cli.Command).Flags {
if _, ok := categorized[flag.String()]; !ok {
categorized[flags.FlagCategory(flag, AppHelpFlagGroups)] = append(categorized[flags.FlagCategory(flag, AppHelpFlagGroups)], flag)
}
}
// sort to get a stable ordering
sorted := make([]flags.FlagGroup, 0, len(categorized))
for cat, flgs := range categorized {
sorted = append(sorted, flags.FlagGroup{Name: cat, Flags: flgs})
}
sort.Sort(flags.ByCategory(sorted))
// add sorted array to data and render with default printer
originalHelpPrinter(w, tmpl, map[string]interface{}{
"cmd": data,
"categorizedFlags": sorted,
})
} else {
originalHelpPrinter(w, tmpl, data)
}
}
}
func main() {
@@ -262,7 +367,7 @@ func initializeSecrets(c *cli.Context) error {
text := "The master seed of clef will be locked with a password.\nPlease specify a password. Do not forget this password!"
var password string
for {
password = getPassPhrase(text, true)
password = utils.GetPassPhrase(text, true)
if err := core.ValidatePasswordFormat(password); err != nil {
fmt.Printf("invalid password: %v\n", err)
} else {
@@ -335,7 +440,7 @@ func setCredential(ctx *cli.Context) error {
utils.Fatalf("Invalid address specified: %s", addr)
}
address := common.HexToAddress(addr)
password := getPassPhrase("Please enter a password to store for this address:", true)
password := utils.GetPassPhrase("Please enter a password to store for this address:", true)
fmt.Println()
stretchedKey, err := readMasterKey(ctx, nil)
@@ -381,14 +486,41 @@ func removeCredential(ctx *cli.Context) error {
return nil
}
func newAccount(c *cli.Context) error {
if err := initialize(c); err != nil {
return err
}
// The newaccount is meant for users using the CLI, since 'real' external
// UIs can use the UI-api instead. So we'll just use the native CLI UI here.
var (
ui = core.NewCommandlineUI()
pwStorage storage.Storage = &storage.NoStorage{}
ksLoc = c.GlobalString(keystoreFlag.Name)
lightKdf = c.GlobalBool(utils.LightKDFFlag.Name)
)
log.Info("Starting clef", "keystore", ksLoc, "light-kdf", lightKdf)
am := core.StartClefAccountManager(ksLoc, true, lightKdf, "")
// This gives us access to the external API
apiImpl := core.NewSignerAPI(am, 0, true, ui, nil, false, pwStorage)
// This gives us access to the internal API
internalApi := core.NewUIServerAPI(apiImpl)
addr, err := internalApi.New(context.Background())
if err == nil {
fmt.Printf("Generated account %v\n", addr.String())
}
return err
}
func initialize(c *cli.Context) error {
// Set up the logger to print everything
logOutput := os.Stdout
if c.GlobalBool(stdiouiFlag.Name) {
logOutput = os.Stderr
// If using the stdioui, we can't do the 'confirm'-flow
fmt.Fprintf(logOutput, legalWarning)
} else {
if !c.GlobalBool(acceptFlag.Name) {
fmt.Fprint(logOutput, legalWarning)
}
} else if !c.GlobalBool(acceptFlag.Name) {
if !confirm(legalWarning) {
return fmt.Errorf("aborted by user")
}
@@ -404,6 +536,27 @@ func initialize(c *cli.Context) error {
return nil
}
// ipcEndpoint resolves an IPC endpoint based on a configured value, taking into
// account the set data folders as well as the designated platform we're currently
// running on.
func ipcEndpoint(ipcPath, datadir string) string {
// On windows we can only use plain top-level pipes
if runtime.GOOS == "windows" {
if strings.HasPrefix(ipcPath, `\\.\pipe\`) {
return ipcPath
}
return `\\.\pipe\` + ipcPath
}
// Resolve names into the data directory full paths otherwise
if filepath.Base(ipcPath) == ipcPath {
if datadir == "" {
return filepath.Join(os.TempDir(), ipcPath)
}
return filepath.Join(datadir, ipcPath)
}
return ipcPath
}
func signer(c *cli.Context) error {
// If we have some unrecognized command, bail out
if args := c.Args(); len(args) > 0 {
@@ -435,7 +588,6 @@ func signer(c *cli.Context) error {
api core.ExternalAPI
pwStorage storage.Storage = &storage.NoStorage{}
)
configDir := c.GlobalString(configdirFlag.Name)
if stretchedKey, err := readMasterKey(c, ui); err != nil {
log.Warn("Failed to open master, rules disabled", "err", err)
@@ -513,31 +665,44 @@ func signer(c *cli.Context) error {
Service: api,
Version: "1.0"},
}
if c.GlobalBool(utils.RPCEnabledFlag.Name) {
vhosts := splitAndTrim(c.GlobalString(utils.RPCVirtualHostsFlag.Name))
cors := splitAndTrim(c.GlobalString(utils.RPCCORSDomainFlag.Name))
if c.GlobalBool(utils.HTTPEnabledFlag.Name) {
vhosts := splitAndTrim(c.GlobalString(utils.HTTPVirtualHostsFlag.Name))
cors := splitAndTrim(c.GlobalString(utils.HTTPCORSDomainFlag.Name))
srv := rpc.NewServer()
err := node.RegisterApisFromWhitelist(rpcAPI, []string{"account"}, srv, false)
if err != nil {
utils.Fatalf("Could not register API: %w", err)
}
handler := node.NewHTTPHandlerStack(srv, cors, vhosts)
// set port
port := c.Int(rpcPortFlag.Name)
if c.GlobalIsSet(legacyRPCPortFlag.Name) {
if !c.GlobalIsSet(rpcPortFlag.Name) {
port = c.Int(legacyRPCPortFlag.Name)
}
log.Warn("The flag --rpcport is deprecated and will be removed in the future, please use --http.port")
}
// start http server
httpEndpoint := fmt.Sprintf("%s:%d", c.GlobalString(utils.RPCListenAddrFlag.Name), c.Int(rpcPortFlag.Name))
listener, _, err := rpc.StartHTTPEndpoint(httpEndpoint, rpcAPI, []string{"account"}, cors, vhosts, rpc.DefaultHTTPTimeouts)
httpEndpoint := fmt.Sprintf("%s:%d", c.GlobalString(utils.HTTPListenAddrFlag.Name), port)
httpServer, addr, err := node.StartHTTPEndpoint(httpEndpoint, rpc.DefaultHTTPTimeouts, handler)
if err != nil {
utils.Fatalf("Could not start RPC api: %v", err)
}
extapiURL = fmt.Sprintf("http://%s", httpEndpoint)
extapiURL = fmt.Sprintf("http://%v/", addr)
log.Info("HTTP endpoint opened", "url", extapiURL)
defer func() {
listener.Close()
log.Info("HTTP endpoint closed", "url", httpEndpoint)
// Don't bother imposing a timeout here.
httpServer.Shutdown(context.Background())
log.Info("HTTP endpoint closed", "url", extapiURL)
}()
}
if !c.GlobalBool(utils.IPCDisabledFlag.Name) {
if c.IsSet(utils.IPCPathFlag.Name) {
ipcapiURL = c.GlobalString(utils.IPCPathFlag.Name)
} else {
ipcapiURL = filepath.Join(configDir, "clef.ipc")
}
givenPath := c.GlobalString(utils.IPCPathFlag.Name)
ipcapiURL = ipcEndpoint(filepath.Join(givenPath, "clef.ipc"), configDir)
listener, _, err := rpc.StartIPCEndpoint(ipcapiURL, rpcAPI)
if err != nil {
utils.Fatalf("Could not start IPC api: %v", err)
@@ -547,7 +712,6 @@ func signer(c *cli.Context) error {
listener.Close()
log.Info("IPC endpoint closed", "url", ipcapiURL)
}()
}
if c.GlobalBool(testFlag.Name) {
@@ -563,7 +727,7 @@ func signer(c *cli.Context) error {
},
})
abortChan := make(chan os.Signal)
abortChan := make(chan os.Signal, 1)
signal.Notify(abortChan, os.Interrupt)
sig := <-abortChan
@@ -643,7 +807,7 @@ func readMasterKey(ctx *cli.Context, ui core.UIClientAPI) ([]byte, error) {
}
password = resp.Text
} else {
password = getPassPhrase("Decrypt master seed of clef", false)
password = utils.GetPassPhrase("Decrypt master seed of clef", false)
}
masterSeed, err := decryptSeed(cipherKey, password)
if err != nil {
@@ -678,7 +842,7 @@ func checkFile(filename string) error {
// confirm displays a text and asks for user confirmation
func confirm(text string) bool {
fmt.Printf(text)
fmt.Print(text)
fmt.Printf("\nEnter 'ok' to proceed:\n> ")
text, err := bufio.NewReader(os.Stdin).ReadString('\n')
@@ -744,21 +908,19 @@ func testExternalUI(api *core.SignerAPI) {
api.UI.ShowInfo("Please approve the next request for signing a clique header")
time.Sleep(delay)
cliqueHeader := types.Header{
common.HexToHash("0000H45H"),
common.HexToHash("0000H45H"),
common.HexToAddress("0000H45H"),
common.HexToHash("0000H00H"),
common.HexToHash("0000H45H"),
common.HexToHash("0000H45H"),
types.Bloom{},
big.NewInt(1337),
big.NewInt(1337),
1338,
1338,
1338,
[]byte("Extra data Extra data Extra data Extra data Extra data Extra data Extra data Extra data"),
common.HexToHash("0x0000H45H"),
types.BlockNonce{},
ParentHash: common.HexToHash("0000H45H"),
UncleHash: common.HexToHash("0000H45H"),
Coinbase: common.HexToAddress("0000H45H"),
Root: common.HexToHash("0000H00H"),
TxHash: common.HexToHash("0000H45H"),
ReceiptHash: common.HexToHash("0000H45H"),
Difficulty: big.NewInt(1337),
Number: big.NewInt(1337),
GasLimit: 1338,
GasUsed: 1338,
Time: 1338,
Extra: []byte("Extra data Extra data Extra data Extra data Extra data Extra data Extra data Extra data"),
MixDigest: common.HexToHash("0x0000H45H"),
}
cliqueRlp, err := rlp.EncodeToBytes(cliqueHeader)
if err != nil {
@@ -840,27 +1002,6 @@ func testExternalUI(api *core.SignerAPI) {
}
// getPassPhrase retrieves the password associated with clef, either fetched
// from a list of preloaded passphrases, or requested interactively from the user.
// TODO: there are many `getPassPhrase` functions, it will be better to abstract them into one.
func getPassPhrase(prompt string, confirmation bool) string {
fmt.Println(prompt)
password, err := console.Stdin.PromptPassword("Password: ")
if err != nil {
utils.Fatalf("Failed to read password: %v", err)
}
if confirmation {
confirm, err := console.Stdin.PromptPassword("Repeat password: ")
if err != nil {
utils.Fatalf("Failed to read password confirmation: %v", err)
}
if password != confirm {
utils.Fatalf("Passwords do not match")
}
}
return password
}
type encryptedSeedStorage struct {
Description string `json:"description"`
Version int `json:"version"`
@@ -911,7 +1052,7 @@ func GenDoc(ctx *cli.Context) {
if data, err := json.MarshalIndent(v, "", " "); err == nil {
output = append(output, fmt.Sprintf("### %s\n\n%s\n\nExample:\n```json\n%s\n```", name, desc, data))
} else {
log.Error("Error generating output", err)
log.Error("Error generating output", "err", err)
}
}
)
@@ -922,7 +1063,7 @@ func GenDoc(ctx *cli.Context) {
"of the work in canonicalizing and making sense of the data, and it's up to the UI to present" +
"the user with the contents of the `message`"
sighash, msg := accounts.TextAndHash([]byte("hello world"))
messages := []*core.NameValueType{{"message", msg, accounts.MimetypeTextPlain}}
messages := []*core.NameValueType{{Name: "message", Value: msg, Typ: accounts.MimetypeTextPlain}}
add("SignDataRequest", desc, &core.SignDataRequest{
Address: common.NewMixedcaseAddress(a),
@@ -953,8 +1094,8 @@ func GenDoc(ctx *cli.Context) {
add("SignTxRequest", desc, &core.SignTxRequest{
Meta: meta,
Callinfo: []core.ValidationInfo{
{"Warning", "Something looks odd, show this message as a warning"},
{"Info", "User should see this aswell"},
{Typ: "Warning", Message: "Something looks odd, show this message as a warning"},
{Typ: "Info", Message: "User should see this as well"},
},
Transaction: core.SendTxArgs{
Data: &data,
@@ -1020,16 +1161,21 @@ func GenDoc(ctx *cli.Context) {
&core.ListRequest{
Meta: meta,
Accounts: []accounts.Account{
{a, accounts.URL{Scheme: "keystore", Path: "/path/to/keyfile/a"}},
{b, accounts.URL{Scheme: "keystore", Path: "/path/to/keyfile/b"}}},
{Address: a, URL: accounts.URL{Scheme: "keystore", Path: "/path/to/keyfile/a"}},
{Address: b, URL: accounts.URL{Scheme: "keystore", Path: "/path/to/keyfile/b"}}},
})
add("ListResponse", "Response to list request. The response contains a list of all addresses to show to the caller. "+
"Note: the UI is free to respond with any address the caller, regardless of whether it exists or not",
&core.ListResponse{
Accounts: []accounts.Account{
{common.HexToAddress("0xcowbeef000000cowbeef00000000000000000c0w"), accounts.URL{Path: ".. ignored .."}},
{common.HexToAddress("0xffffffffffffffffffffffffffffffffffffffff"), accounts.URL{}},
{
Address: common.HexToAddress("0xcowbeef000000cowbeef00000000000000000c0w"),
URL: accounts.URL{Path: ".. ignored .."},
},
{
Address: common.HexToAddress("0xffffffffffffffffffffffffffffffffffffffff"),
},
}})
}

View File

@@ -1,6 +1,6 @@
## Initializing Clef
First thing's first, Clef needs to store some data itself. Since that data might be sensitive (passwords, signing rules, accounts), Clef's entire storage is encrypted. To support encrypting data, the first step is to initialize Clef with a random master seed, itself too encrypted with your chosen password:
First things first, Clef needs to store some data itself. Since that data might be sensitive (passwords, signing rules, accounts), Clef's entire storage is encrypted. To support encrypting data, the first step is to initialize Clef with a random master seed, itself too encrypted with your chosen password:
```text
$ clef init

156
cmd/devp2p/crawl.go Normal file
View File

@@ -0,0 +1,156 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/enode"
)
type crawler struct {
input nodeSet
output nodeSet
disc resolver
iters []enode.Iterator
inputIter enode.Iterator
ch chan *enode.Node
closed chan struct{}
// settings
revalidateInterval time.Duration
}
type resolver interface {
RequestENR(*enode.Node) (*enode.Node, error)
}
func newCrawler(input nodeSet, disc resolver, iters ...enode.Iterator) *crawler {
c := &crawler{
input: input,
output: make(nodeSet, len(input)),
disc: disc,
iters: iters,
inputIter: enode.IterNodes(input.nodes()),
ch: make(chan *enode.Node),
closed: make(chan struct{}),
}
c.iters = append(c.iters, c.inputIter)
// Copy input to output initially. Any nodes that fail validation
// will be dropped from output during the run.
for id, n := range input {
c.output[id] = n
}
return c
}
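// run processes nodes from all iterators until the timeout fires; the timeout
// is only armed once the input node set has finished revalidating. The updated
// node set is returned.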
func (c *crawler) run(timeout time.Duration) nodeSet {
var (
timeoutTimer = time.NewTimer(timeout)
timeoutCh <-chan time.Time
doneCh = make(chan enode.Iterator, len(c.iters))
liveIters = len(c.iters)
)
defer timeoutTimer.Stop()
for _, it := range c.iters {
go c.runIterator(doneCh, it)
}
loop:
for {
select {
case n := <-c.ch:
c.updateNode(n)
case it := <-doneCh:
if it == c.inputIter {
// Enable timeout when we're done revalidating the input nodes.
log.Info("Revalidation of input set is done", "len", len(c.input))
if timeout > 0 {
timeoutCh = timeoutTimer.C
}
}
if liveIters--; liveIters == 0 {
break loop
}
case <-timeoutCh:
break loop
}
}
close(c.closed)
for _, it := range c.iters {
it.Close()
}
for ; liveIters > 0; liveIters-- {
<-doneCh
}
return c.output
}
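// runIterator forwards nodes from a single iterator to the crawler's update
// channel until the iterator is exhausted or the crawler is closed.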
func (c *crawler) runIterator(done chan<- enode.Iterator, it enode.Iterator) {
defer func() { done <- it }()
for it.Next() {
select {
case c.ch <- it.Node():
case <-c.closed:
return
}
}
}
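// updateNode revalidates a single node by requesting its ENR, adjusts its
// score based on the response, and stores or removes it in the output set.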
func (c *crawler) updateNode(n *enode.Node) {
node, ok := c.output[n.ID()]
// Skip validation of recently-seen nodes.
if ok && time.Since(node.LastCheck) < c.revalidateInterval {
return
}
// Request the node record.
nn, err := c.disc.RequestENR(n)
node.LastCheck = truncNow()
if err != nil {
if node.Score == 0 {
// Node doesn't implement EIP-868.
log.Debug("Skipping node", "id", n.ID())
return
}
node.Score /= 2
} else {
node.N = nn
node.Seq = nn.Seq()
node.Score++
if node.FirstResponse.IsZero() {
node.FirstResponse = node.LastCheck
}
node.LastResponse = node.LastCheck
}
// Store/update node in output set.
if node.Score <= 0 {
log.Info("Removing node", "id", n.ID())
delete(c.output, n.ID())
} else {
log.Info("Updating node", "id", n.ID(), "seq", n.Seq(), "score", node.Score)
c.output[n.ID()] = node
}
}
func truncNow() time.Time {
return time.Now().UTC().Truncate(1 * time.Second)
}

View File

@@ -19,11 +19,14 @@ package main
import (
"fmt"
"net"
"sort"
"os"
"strings"
"time"
"github.com/ethereum/go-ethereum/cmd/devp2p/internal/v4test"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/params"
@@ -38,36 +41,97 @@ var (
discv4PingCommand,
discv4RequestRecordCommand,
discv4ResolveCommand,
discv4ResolveJSONCommand,
discv4CrawlCommand,
discv4TestCommand,
},
}
discv4PingCommand = cli.Command{
Name: "ping",
Usage: "Sends ping to a node",
Action: discv4Ping,
Name: "ping",
Usage: "Sends ping to a node",
Action: discv4Ping,
ArgsUsage: "<node>",
}
discv4RequestRecordCommand = cli.Command{
Name: "requestenr",
Usage: "Requests a node record using EIP-868 enrRequest",
Action: discv4RequestRecord,
Name: "requestenr",
Usage: "Requests a node record using EIP-868 enrRequest",
Action: discv4RequestRecord,
ArgsUsage: "<node>",
}
discv4ResolveCommand = cli.Command{
Name: "resolve",
Usage: "Finds a node in the DHT",
Action: discv4Resolve,
Flags: []cli.Flag{bootnodesFlag},
Name: "resolve",
Usage: "Finds a node in the DHT",
Action: discv4Resolve,
ArgsUsage: "<node>",
Flags: []cli.Flag{bootnodesFlag},
}
discv4ResolveJSONCommand = cli.Command{
Name: "resolve-json",
Usage: "Re-resolves nodes in a nodes.json file",
Action: discv4ResolveJSON,
Flags: []cli.Flag{bootnodesFlag},
ArgsUsage: "<nodes.json file>",
}
discv4CrawlCommand = cli.Command{
Name: "crawl",
Usage: "Updates a nodes.json file with random nodes found in the DHT",
Action: discv4Crawl,
Flags: []cli.Flag{bootnodesFlag, crawlTimeoutFlag},
}
discv4TestCommand = cli.Command{
Name: "test",
Usage: "Runs tests against a node",
Action: discv4Test,
Flags: []cli.Flag{remoteEnodeFlag, testPatternFlag, testListen1Flag, testListen2Flag},
}
)
var bootnodesFlag = cli.StringFlag{
Name: "bootnodes",
Usage: "Comma separated nodes used for bootstrapping",
}
var (
bootnodesFlag = cli.StringFlag{
Name: "bootnodes",
Usage: "Comma separated nodes used for bootstrapping",
}
nodekeyFlag = cli.StringFlag{
Name: "nodekey",
Usage: "Hex-encoded node key",
}
nodedbFlag = cli.StringFlag{
Name: "nodedb",
Usage: "Nodes database location",
}
listenAddrFlag = cli.StringFlag{
Name: "addr",
Usage: "Listening address",
}
crawlTimeoutFlag = cli.DurationFlag{
Name: "timeout",
Usage: "Time limit for the crawl.",
Value: 30 * time.Minute,
}
remoteEnodeFlag = cli.StringFlag{
Name: "remote",
Usage: "Enode of the remote node under test",
EnvVar: "REMOTE_ENODE",
}
testPatternFlag = cli.StringFlag{
Name: "run",
Usage: "Pattern of test suite(s) to run",
}
testListen1Flag = cli.StringFlag{
Name: "listen1",
Usage: "IP address of the first tester",
Value: v4test.Listen1,
}
testListen2Flag = cli.StringFlag{
Name: "listen2",
Usage: "IP address of the second tester",
Value: v4test.Listen2,
}
)
func discv4Ping(ctx *cli.Context) error {
n, disc, err := getNodeArgAndStartV4(ctx)
if err != nil {
return err
}
n := getNodeArg(ctx)
disc := startV4(ctx)
defer disc.Close()
start := time.Now()
@@ -79,10 +143,8 @@ func discv4Ping(ctx *cli.Context) error {
}
func discv4RequestRecord(ctx *cli.Context) error {
n, disc, err := getNodeArgAndStartV4(ctx)
if err != nil {
return err
}
n := getNodeArg(ctx)
disc := startV4(ctx)
defer disc.Close()
respN, err := disc.RequestENR(n)
@@ -94,33 +156,139 @@ func discv4RequestRecord(ctx *cli.Context) error {
}
func discv4Resolve(ctx *cli.Context) error {
n, disc, err := getNodeArgAndStartV4(ctx)
if err != nil {
return err
}
n := getNodeArg(ctx)
disc := startV4(ctx)
defer disc.Close()
fmt.Println(disc.Resolve(n).String())
return nil
}
func getNodeArgAndStartV4(ctx *cli.Context) (*enode.Node, *discover.UDPv4, error) {
if ctx.NArg() != 1 {
return nil, nil, fmt.Errorf("missing node as command-line argument")
func discv4ResolveJSON(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need nodes file as argument")
}
n, err := parseNode(ctx.Args()[0])
if err != nil {
return nil, nil, err
nodesFile := ctx.Args().Get(0)
inputSet := make(nodeSet)
if common.FileExist(nodesFile) {
inputSet = loadNodesJSON(nodesFile)
}
var bootnodes []*enode.Node
if commandHasFlag(ctx, bootnodesFlag) {
bootnodes, err = parseBootnodes(ctx)
// Add extra nodes from command line arguments.
var nodeargs []*enode.Node
for i := 1; i < ctx.NArg(); i++ {
n, err := parseNode(ctx.Args().Get(i))
if err != nil {
return nil, nil, err
exit(err)
}
nodeargs = append(nodeargs, n)
}
disc, err := startV4(bootnodes)
return n, disc, err
// Run the crawler.
disc := startV4(ctx)
defer disc.Close()
c := newCrawler(inputSet, disc, enode.IterNodes(nodeargs))
c.revalidateInterval = 0
output := c.run(0)
writeNodesJSON(nodesFile, output)
return nil
}
func discv4Crawl(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need nodes file as argument")
}
nodesFile := ctx.Args().First()
var inputSet nodeSet
if common.FileExist(nodesFile) {
inputSet = loadNodesJSON(nodesFile)
}
disc := startV4(ctx)
defer disc.Close()
c := newCrawler(inputSet, disc, disc.RandomNodes())
c.revalidateInterval = 10 * time.Minute
output := c.run(ctx.Duration(crawlTimeoutFlag.Name))
writeNodesJSON(nodesFile, output)
return nil
}
func discv4Test(ctx *cli.Context) error {
// Configure test package globals.
if !ctx.IsSet(remoteEnodeFlag.Name) {
return fmt.Errorf("Missing -%v", remoteEnodeFlag.Name)
}
v4test.Remote = ctx.String(remoteEnodeFlag.Name)
v4test.Listen1 = ctx.String(testListen1Flag.Name)
v4test.Listen2 = ctx.String(testListen2Flag.Name)
// Filter and run test cases.
tests := v4test.AllTests
if ctx.IsSet(testPatternFlag.Name) {
tests = utesting.MatchTests(tests, ctx.String(testPatternFlag.Name))
}
results := utesting.RunTests(tests, os.Stdout)
if fails := utesting.CountFailures(results); fails > 0 {
return fmt.Errorf("%v/%v tests passed.", len(tests)-fails, len(tests))
}
fmt.Printf("%v/%v passed\n", len(tests), len(tests))
return nil
}
// startV4 starts an ephemeral discovery V4 node.
func startV4(ctx *cli.Context) *discover.UDPv4 {
ln, config := makeDiscoveryConfig(ctx)
socket := listen(ln, ctx.String(listenAddrFlag.Name))
disc, err := discover.ListenV4(socket, ln, config)
if err != nil {
exit(err)
}
return disc
}
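// makeDiscoveryConfig assembles the local node and discovery configuration from
// the command line flags: node key (or a freshly generated one), bootnodes and
// the node database location.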
func makeDiscoveryConfig(ctx *cli.Context) (*enode.LocalNode, discover.Config) {
var cfg discover.Config
if ctx.IsSet(nodekeyFlag.Name) {
key, err := crypto.HexToECDSA(ctx.String(nodekeyFlag.Name))
if err != nil {
exit(fmt.Errorf("-%s: %v", nodekeyFlag.Name, err))
}
cfg.PrivateKey = key
} else {
cfg.PrivateKey, _ = crypto.GenerateKey()
}
if commandHasFlag(ctx, bootnodesFlag) {
bn, err := parseBootnodes(ctx)
if err != nil {
exit(err)
}
cfg.Bootnodes = bn
}
dbpath := ctx.String(nodedbFlag.Name)
db, err := enode.OpenDB(dbpath)
if err != nil {
exit(err)
}
ln := enode.NewLocalNode(db, cfg.PrivateKey)
return ln, cfg
}
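// listen opens a UDP socket on the given address (all interfaces with a random
// port by default) and registers its port as the local node's fallback UDP
// endpoint.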
func listen(ln *enode.LocalNode, addr string) *net.UDPConn {
if addr == "" {
addr = "0.0.0.0:0"
}
socket, err := net.ListenPacket("udp4", addr)
if err != nil {
exit(err)
}
usocket := socket.(*net.UDPConn)
uaddr := socket.LocalAddr().(*net.UDPAddr)
ln.SetFallbackIP(net.IP{127, 0, 0, 1})
ln.SetFallbackUDP(uaddr.Port)
return usocket
}
func parseBootnodes(ctx *cli.Context) ([]*enode.Node, error) {
@@ -138,29 +306,3 @@ func parseBootnodes(ctx *cli.Context) ([]*enode.Node, error) {
}
return nodes, nil
}
// commandHasFlag returns true if the current command supports the given flag.
func commandHasFlag(ctx *cli.Context, flag cli.Flag) bool {
flags := ctx.FlagNames()
sort.Strings(flags)
i := sort.SearchStrings(flags, flag.GetName())
return i != len(flags) && flags[i] == flag.GetName()
}
// startV4 starts an ephemeral discovery V4 node.
func startV4(bootnodes []*enode.Node) (*discover.UDPv4, error) {
var cfg discover.Config
cfg.Bootnodes = bootnodes
cfg.PrivateKey, _ = crypto.GenerateKey()
db, _ := enode.OpenDB("")
ln := enode.NewLocalNode(db, cfg.PrivateKey)
socket, err := net.ListenUDP("udp4", &net.UDPAddr{IP: net.IP{0, 0, 0, 0}})
if err != nil {
return nil, err
}
addr := socket.LocalAddr().(*net.UDPAddr)
ln.SetFallbackIP(net.IP{127, 0, 0, 1})
ln.SetFallbackUDP(addr.Port)
return discover.ListenUDP(socket, ln, cfg)
}

123
cmd/devp2p/discv5cmd.go Normal file
View File

@@ -0,0 +1,123 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/p2p/discover"
"gopkg.in/urfave/cli.v1"
)
var (
discv5Command = cli.Command{
Name: "discv5",
Usage: "Node Discovery v5 tools",
Subcommands: []cli.Command{
discv5PingCommand,
discv5ResolveCommand,
discv5CrawlCommand,
discv5ListenCommand,
},
}
discv5PingCommand = cli.Command{
Name: "ping",
Usage: "Sends ping to a node",
Action: discv5Ping,
}
discv5ResolveCommand = cli.Command{
Name: "resolve",
Usage: "Finds a node in the DHT",
Action: discv5Resolve,
Flags: []cli.Flag{bootnodesFlag},
}
discv5CrawlCommand = cli.Command{
Name: "crawl",
Usage: "Updates a nodes.json file with random nodes found in the DHT",
Action: discv5Crawl,
Flags: []cli.Flag{bootnodesFlag, crawlTimeoutFlag},
}
discv5ListenCommand = cli.Command{
Name: "listen",
Usage: "Runs a node",
Action: discv5Listen,
Flags: []cli.Flag{
bootnodesFlag,
nodekeyFlag,
nodedbFlag,
listenAddrFlag,
},
}
)
func discv5Ping(ctx *cli.Context) error {
n := getNodeArg(ctx)
disc := startV5(ctx)
defer disc.Close()
fmt.Println(disc.Ping(n))
return nil
}
func discv5Resolve(ctx *cli.Context) error {
n := getNodeArg(ctx)
disc := startV5(ctx)
defer disc.Close()
fmt.Println(disc.Resolve(n))
return nil
}
func discv5Crawl(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need nodes file as argument")
}
nodesFile := ctx.Args().First()
var inputSet nodeSet
if common.FileExist(nodesFile) {
inputSet = loadNodesJSON(nodesFile)
}
disc := startV5(ctx)
defer disc.Close()
c := newCrawler(inputSet, disc, disc.RandomNodes())
c.revalidateInterval = 10 * time.Minute
output := c.run(ctx.Duration(crawlTimeoutFlag.Name))
writeNodesJSON(nodesFile, output)
return nil
}
func discv5Listen(ctx *cli.Context) error {
disc := startV5(ctx)
defer disc.Close()
fmt.Println(disc.Self())
select {}
}
// startV5 starts an ephemeral discovery v5 node.
func startV5(ctx *cli.Context) *discover.UDPv5 {
ln, config := makeDiscoveryConfig(ctx)
socket := listen(ln, ctx.String(listenAddrFlag.Name))
disc, err := discover.ListenV5(socket, ln, config)
if err != nil {
exit(err)
}
return disc
}

cmd/devp2p/dns_cloudflare.go Normal file

@@ -0,0 +1,163 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"strings"
"github.com/cloudflare/cloudflare-go"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/dnsdisc"
"gopkg.in/urfave/cli.v1"
)
var (
cloudflareTokenFlag = cli.StringFlag{
Name: "token",
Usage: "CloudFlare API token",
EnvVar: "CLOUDFLARE_API_TOKEN",
}
cloudflareZoneIDFlag = cli.StringFlag{
Name: "zoneid",
Usage: "CloudFlare Zone ID (optional)",
}
)
type cloudflareClient struct {
*cloudflare.API
zoneID string
}
// newCloudflareClient sets up a CloudFlare API client from command line flags.
func newCloudflareClient(ctx *cli.Context) *cloudflareClient {
token := ctx.String(cloudflareTokenFlag.Name)
if token == "" {
exit(fmt.Errorf("need cloudflare API token to proceed"))
}
api, err := cloudflare.NewWithAPIToken(token)
if err != nil {
exit(fmt.Errorf("can't create Cloudflare client: %v", err))
}
return &cloudflareClient{
API: api,
zoneID: ctx.String(cloudflareZoneIDFlag.Name),
}
}
// deploy uploads the given tree to CloudFlare DNS.
func (c *cloudflareClient) deploy(name string, t *dnsdisc.Tree) error {
if err := c.checkZone(name); err != nil {
return err
}
records := t.ToTXT(name)
return c.uploadRecords(name, records)
}
// checkZone verifies permissions on the CloudFlare DNS Zone for name.
func (c *cloudflareClient) checkZone(name string) error {
if c.zoneID == "" {
log.Info(fmt.Sprintf("Finding CloudFlare zone ID for %s", name))
id, err := c.ZoneIDByName(name)
if err != nil {
return err
}
c.zoneID = id
}
log.Info(fmt.Sprintf("Checking Permissions on zone %s", c.zoneID))
zone, err := c.ZoneDetails(c.zoneID)
if err != nil {
return err
}
if !strings.HasSuffix(name, "."+zone.Name) {
return fmt.Errorf("CloudFlare zone name %q does not match name %q to be deployed", zone.Name, name)
}
needPerms := map[string]bool{"#zone:edit": false, "#zone:read": false}
for _, perm := range zone.Permissions {
if _, ok := needPerms[perm]; ok {
needPerms[perm] = true
}
}
for _, ok := range needPerms {
if !ok {
return fmt.Errorf("wrong permissions on zone %s: %v", c.zoneID, needPerms)
}
}
return nil
}
// uploadRecords updates the TXT records at a particular subdomain. All non-root records
// will have a TTL of "infinity" and all existing records not in the new map will be
// nuked!
func (c *cloudflareClient) uploadRecords(name string, records map[string]string) error {
// Convert all names to lowercase.
lrecords := make(map[string]string, len(records))
for name, r := range records {
lrecords[strings.ToLower(name)] = r
}
records = lrecords
log.Info(fmt.Sprintf("Retrieving existing TXT records on %s", name))
entries, err := c.DNSRecords(c.zoneID, cloudflare.DNSRecord{Type: "TXT"})
if err != nil {
return err
}
existing := make(map[string]cloudflare.DNSRecord)
for _, entry := range entries {
if !strings.HasSuffix(entry.Name, name) {
continue
}
existing[strings.ToLower(entry.Name)] = entry
}
// Iterate over the new records and inject anything missing.
for path, val := range records {
old, exists := existing[path]
if !exists {
// Entry is unknown, push a new one to Cloudflare.
log.Info(fmt.Sprintf("Creating %s = %q", path, val))
ttl := rootTTL
if path != name {
ttl = treeNodeTTL // Max TTL permitted by Cloudflare
}
_, err = c.CreateDNSRecord(c.zoneID, cloudflare.DNSRecord{Type: "TXT", Name: path, Content: val, TTL: ttl})
} else if old.Content != val {
// Entry already exists, only change its content.
log.Info(fmt.Sprintf("Updating %s from %q to %q", path, old.Content, val))
old.Content = val
err = c.UpdateDNSRecord(c.zoneID, old.ID, old)
} else {
log.Info(fmt.Sprintf("Skipping %s = %q", path, val))
}
if err != nil {
return fmt.Errorf("failed to publish %s: %v", path, err)
}
}
// Iterate over the old records and delete anything stale.
for path, entry := range existing {
if _, ok := records[path]; ok {
continue
}
// Stale entry, nuke it.
log.Info(fmt.Sprintf("Deleting %s = %q", path, entry.Content))
if err := c.DeleteDNSRecord(c.zoneID, entry.ID); err != nil {
return fmt.Errorf("failed to delete %s: %v", path, err)
}
}
return nil
}

cmd/devp2p/dns_route53.go Normal file

@@ -0,0 +1,322 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"errors"
"fmt"
"sort"
"strconv"
"strings"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/route53"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/dnsdisc"
"gopkg.in/urfave/cli.v1"
)
const (
// Route53 limits change sets to 32k of 'RDATA size'. Change sets are also limited to
// 1000 items. UPSERTs count double.
// https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests-changeresourcerecordsets
route53ChangeSizeLimit = 32000
route53ChangeCountLimit = 1000
)
var (
route53AccessKeyFlag = cli.StringFlag{
Name: "access-key-id",
Usage: "AWS Access Key ID",
EnvVar: "AWS_ACCESS_KEY_ID",
}
route53AccessSecretFlag = cli.StringFlag{
Name: "access-key-secret",
Usage: "AWS Access Key Secret",
EnvVar: "AWS_SECRET_ACCESS_KEY",
}
route53ZoneIDFlag = cli.StringFlag{
Name: "zone-id",
Usage: "Route53 Zone ID",
}
)
type route53Client struct {
api *route53.Route53
zoneID string
}
type recordSet struct {
values []string
ttl int64
}
// newRoute53Client sets up a Route53 API client from command line flags.
func newRoute53Client(ctx *cli.Context) *route53Client {
akey := ctx.String(route53AccessKeyFlag.Name)
asec := ctx.String(route53AccessSecretFlag.Name)
if akey == "" || asec == "" {
exit(fmt.Errorf("need Route53 Access Key ID and secret to proceed"))
}
config := &aws.Config{Credentials: credentials.NewStaticCredentials(akey, asec, "")}
session, err := session.NewSession(config)
if err != nil {
exit(fmt.Errorf("can't create AWS session: %v", err))
}
return &route53Client{
api: route53.New(session),
zoneID: ctx.String(route53ZoneIDFlag.Name),
}
}
// deploy uploads the given tree to Route53.
func (c *route53Client) deploy(name string, t *dnsdisc.Tree) error {
if err := c.checkZone(name); err != nil {
return err
}
// Compute DNS changes.
existing, err := c.collectRecords(name)
if err != nil {
return err
}
log.Info(fmt.Sprintf("Found %d TXT records", len(existing)))
records := t.ToTXT(name)
changes := c.computeChanges(name, records, existing)
if len(changes) == 0 {
log.Info("No DNS changes needed")
return nil
}
// Submit change batches.
batches := splitChanges(changes, route53ChangeSizeLimit, route53ChangeCountLimit)
for i, changes := range batches {
log.Info(fmt.Sprintf("Submitting %d changes to Route53", len(changes)))
batch := new(route53.ChangeBatch)
batch.SetChanges(changes)
batch.SetComment(fmt.Sprintf("enrtree update %d/%d of %s at seq %d", i+1, len(batches), name, t.Seq()))
req := &route53.ChangeResourceRecordSetsInput{HostedZoneId: &c.zoneID, ChangeBatch: batch}
resp, err := c.api.ChangeResourceRecordSets(req)
if err != nil {
return err
}
log.Info(fmt.Sprintf("Waiting for change request %s", *resp.ChangeInfo.Id))
wreq := &route53.GetChangeInput{Id: resp.ChangeInfo.Id}
if err := c.api.WaitUntilResourceRecordSetsChanged(wreq); err != nil {
return err
}
}
return nil
}
// checkZone verifies zone information for the given domain.
func (c *route53Client) checkZone(name string) (err error) {
if c.zoneID == "" {
c.zoneID, err = c.findZoneID(name)
}
return err
}
// findZoneID searches for the Zone ID containing the given domain.
func (c *route53Client) findZoneID(name string) (string, error) {
log.Info(fmt.Sprintf("Finding Route53 Zone ID for %s", name))
var req route53.ListHostedZonesByNameInput
for {
resp, err := c.api.ListHostedZonesByName(&req)
if err != nil {
return "", err
}
for _, zone := range resp.HostedZones {
if isSubdomain(name, *zone.Name) {
return *zone.Id, nil
}
}
if !*resp.IsTruncated {
break
}
req.DNSName = resp.NextDNSName
req.HostedZoneId = resp.NextHostedZoneId
}
return "", errors.New("can't find zone ID for " + name)
}
// computeChanges creates DNS changes for the given record.
func (c *route53Client) computeChanges(name string, records map[string]string, existing map[string]recordSet) []*route53.Change {
// Convert all names to lowercase.
lrecords := make(map[string]string, len(records))
for name, r := range records {
lrecords[strings.ToLower(name)] = r
}
records = lrecords
var changes []*route53.Change
for path, val := range records {
ttl := int64(rootTTL)
if path != name {
ttl = int64(treeNodeTTL)
}
prevRecords, exists := existing[path]
prevValue := strings.Join(prevRecords.values, "")
if !exists {
// Entry is unknown, push a new one
log.Info(fmt.Sprintf("Creating %s = %q", path, val))
changes = append(changes, newTXTChange("CREATE", path, ttl, splitTXT(val)))
} else if prevValue != val || prevRecords.ttl != ttl {
// Entry already exists, only change its content.
log.Info(fmt.Sprintf("Updating %s from %q to %q", path, prevValue, val))
changes = append(changes, newTXTChange("UPSERT", path, ttl, splitTXT(val)))
} else {
log.Info(fmt.Sprintf("Skipping %s = %q", path, val))
}
}
// Iterate over the old records and delete anything stale.
for path, set := range existing {
if _, ok := records[path]; ok {
continue
}
// Stale entry, nuke it.
log.Info(fmt.Sprintf("Deleting %s = %q", path, strings.Join(set.values, "")))
changes = append(changes, newTXTChange("DELETE", path, set.ttl, set.values...))
}
sortChanges(changes)
return changes
}
// sortChanges ensures DNS changes are in leaf-added -> root-changed -> leaf-deleted order.
func sortChanges(changes []*route53.Change) {
score := map[string]int{"CREATE": 1, "UPSERT": 2, "DELETE": 3}
sort.Slice(changes, func(i, j int) bool {
if *changes[i].Action == *changes[j].Action {
return *changes[i].ResourceRecordSet.Name < *changes[j].ResourceRecordSet.Name
}
return score[*changes[i].Action] < score[*changes[j].Action]
})
}
// splitChanges splits up DNS changes such that each change batch
// is smaller than the given RDATA limit.
func splitChanges(changes []*route53.Change, sizeLimit, countLimit int) [][]*route53.Change {
var (
batches [][]*route53.Change
batchSize int
batchCount int
)
for _, ch := range changes {
// Start new batch if this change pushes the current one over the limit.
count := changeCount(ch)
size := changeSize(ch) * count
overSize := batchSize+size > sizeLimit
overCount := batchCount+count > countLimit
if len(batches) == 0 || overSize || overCount {
batches = append(batches, nil)
batchSize = 0
batchCount = 0
}
batches[len(batches)-1] = append(batches[len(batches)-1], ch)
batchSize += size
batchCount += count
}
return batches
}
// changeSize returns the RDATA size of a DNS change.
func changeSize(ch *route53.Change) int {
size := 0
for _, rr := range ch.ResourceRecordSet.ResourceRecords {
if rr.Value != nil {
size += len(*rr.Value)
}
}
return size
}
func changeCount(ch *route53.Change) int {
if *ch.Action == "UPSERT" {
return 2
}
return 1
}
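// Worked example with illustrative numbers: an UPSERT whose record set carries 600
// bytes of RDATA has changeCount == 2, so it contributes 2*600 = 1200 bytes and 2
// items towards the 32000-byte / 1000-item limits above. splitChanges therefore
// packs 26 such UPSERTs into one batch (31200 bytes) and starts a new one for the 27th.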
// collectRecords collects all TXT records below the given name.
func (c *route53Client) collectRecords(name string) (map[string]recordSet, error) {
log.Info(fmt.Sprintf("Retrieving existing TXT records on %s (%s)", name, c.zoneID))
var req route53.ListResourceRecordSetsInput
req.SetHostedZoneId(c.zoneID)
existing := make(map[string]recordSet)
err := c.api.ListResourceRecordSetsPages(&req, func(resp *route53.ListResourceRecordSetsOutput, last bool) bool {
for _, set := range resp.ResourceRecordSets {
if !isSubdomain(*set.Name, name) || *set.Type != "TXT" {
continue
}
s := recordSet{ttl: *set.TTL}
for _, rec := range set.ResourceRecords {
s.values = append(s.values, *rec.Value)
}
name := strings.TrimSuffix(*set.Name, ".")
existing[name] = s
}
return true
})
return existing, err
}
// newTXTChange creates a change to a TXT record.
func newTXTChange(action, name string, ttl int64, values ...string) *route53.Change {
var c route53.Change
var r route53.ResourceRecordSet
var rrs []*route53.ResourceRecord
for _, val := range values {
rr := new(route53.ResourceRecord)
rr.SetValue(val)
rrs = append(rrs, rr)
}
r.SetType("TXT")
r.SetName(name)
r.SetTTL(ttl)
r.SetResourceRecords(rrs)
c.SetAction(action)
c.SetResourceRecordSet(&r)
return &c
}
// isSubdomain returns true if name is a subdomain of domain.
func isSubdomain(name, domain string) bool {
domain = strings.TrimSuffix(domain, ".")
name = strings.TrimSuffix(name, ".")
return strings.HasSuffix("."+name, "."+domain)
}
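// A few illustrative cases (trailing dots are stripped on both sides):
//
//	isSubdomain("nodes.example.org", "example.org.") // true
//	isSubdomain("example.org", "example.org")        // true
//	isSubdomain("badexample.org", "example.org")     // false, no dot boundary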
// splitTXT splits value into a list of quoted strings. Each chunk holds at most 253
// characters of content so that, with the surrounding quotes, a chunk stays within
// the 255-character limit on a single TXT character-string.
func splitTXT(value string) string {
var result strings.Builder
for len(value) > 0 {
rlen := len(value)
if rlen > 253 {
rlen = 253
}
result.WriteString(strconv.Quote(value[:rlen]))
value = value[rlen:]
}
return result.String()
}
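// For example (illustrative value), a 300-character record value is returned as two
// back-to-back quoted chunks of 253 and 47 characters, which is how a TXT record
// with multiple character-strings is expressed in Route53 RDATA (compare the
// back-to-back quoted chunks in the TestRoute53ChangeSort fixture below).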

cmd/devp2p/dns_route53_test.go Normal file

@@ -0,0 +1,166 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"reflect"
"testing"
"github.com/aws/aws-sdk-go/service/route53"
)
// This test checks that computeChanges/splitChanges create DNS changes in
// leaf-added -> root-changed -> leaf-deleted order.
func TestRoute53ChangeSort(t *testing.T) {
testTree0 := map[string]recordSet{
"2kfjogvxdqtxxugbh7gs7naaai.n": {ttl: 3333, values: []string{
`"enr:-HW4QO1ml1DdXLeZLsUxewnthhUy8eROqkDyoMTyavfks9JlYQIlMFEUoM78PovJDPQrAkrb3LRJ-""vtrymDguKCOIAWAgmlkgnY0iXNlY3AyNTZrMaEDffaGfJzgGhUif1JqFruZlYmA31HzathLSWxfbq_QoQ4"`,
}},
"fdxn3sn67na5dka4j2gok7bvqi.n": {ttl: treeNodeTTL, values: []string{`"enrtree-branch:"`}},
"n": {ttl: rootTTL, values: []string{`"enrtree-root:v1 e=2KFJOGVXDQTXXUGBH7GS7NAAAI l=FDXN3SN67NA5DKA4J2GOK7BVQI seq=0 sig=v_-J_q_9ICQg5ztExFvLQhDBGMb0lZPJLhe3ts9LAcgqhOhtT3YFJsl8BWNDSwGtamUdR-9xl88_w-X42SVpjwE"`}},
}
testTree1 := map[string]string{
"n": "enrtree-root:v1 e=JWXYDBPXYWG6FX3GMDIBFA6CJ4 l=C7HRFPF3BLGF3YR4DY5KX3SMBE seq=1 sig=o908WmNp7LibOfPsr4btQwatZJ5URBr2ZAuxvK4UWHlsB9sUOTJQaGAlLPVAhM__XJesCHxLISo94z5Z2a463gA",
"C7HRFPF3BLGF3YR4DY5KX3SMBE.n": "enrtree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@morenodes.example.org",
"JWXYDBPXYWG6FX3GMDIBFA6CJ4.n": "enrtree-branch:2XS2367YHAXJFGLZHVAWLQD4ZY,H4FHT4B454P6UXFD7JCYQ5PWDY,MHTDO6TMUBRIA2XWG5LUDACK24",
"2XS2367YHAXJFGLZHVAWLQD4ZY.n": "enr:-HW4QOFzoVLaFJnNhbgMoDXPnOvcdVuj7pDpqRvh6BRDO68aVi5ZcjB3vzQRZH2IcLBGHzo8uUN3snqmgTiE56CH3AMBgmlkgnY0iXNlY3AyNTZrMaECC2_24YYkYHEgdzxlSNKQEnHhuNAbNlMlWJxrJxbAFvA",
"H4FHT4B454P6UXFD7JCYQ5PWDY.n": "enr:-HW4QAggRauloj2SDLtIHN1XBkvhFZ1vtf1raYQp9TBW2RD5EEawDzbtSmlXUfnaHcvwOizhVYLtr7e6vw7NAf6mTuoCgmlkgnY0iXNlY3AyNTZrMaECjrXI8TLNXU0f8cthpAMxEshUyQlK-AM0PW2wfrnacNI",
"MHTDO6TMUBRIA2XWG5LUDACK24.n": "enr:-HW4QLAYqmrwllBEnzWWs7I5Ev2IAs7x_dZlbYdRdMUx5EyKHDXp7AV5CkuPGUPdvbv1_Ms1CPfhcGCvSElSosZmyoqAgmlkgnY0iXNlY3AyNTZrMaECriawHKWdDRk2xeZkrOXBQ0dfMFLHY4eENZwdufn1S1o",
}
wantChanges := []*route53.Change{
{
Action: sp("CREATE"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("2xs2367yhaxjfglzhvawlqd4zy.n"),
ResourceRecords: []*route53.ResourceRecord{{
Value: sp(`"enr:-HW4QOFzoVLaFJnNhbgMoDXPnOvcdVuj7pDpqRvh6BRDO68aVi5ZcjB3vzQRZH2IcLBGHzo8uUN3snqmgTiE56CH3AMBgmlkgnY0iXNlY3AyNTZrMaECC2_24YYkYHEgdzxlSNKQEnHhuNAbNlMlWJxrJxbAFvA"`),
}},
TTL: ip(treeNodeTTL),
Type: sp("TXT"),
},
},
{
Action: sp("CREATE"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("c7hrfpf3blgf3yr4dy5kx3smbe.n"),
ResourceRecords: []*route53.ResourceRecord{{
Value: sp(`"enrtree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@morenodes.example.org"`),
}},
TTL: ip(treeNodeTTL),
Type: sp("TXT"),
},
},
{
Action: sp("CREATE"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("h4fht4b454p6uxfd7jcyq5pwdy.n"),
ResourceRecords: []*route53.ResourceRecord{{
Value: sp(`"enr:-HW4QAggRauloj2SDLtIHN1XBkvhFZ1vtf1raYQp9TBW2RD5EEawDzbtSmlXUfnaHcvwOizhVYLtr7e6vw7NAf6mTuoCgmlkgnY0iXNlY3AyNTZrMaECjrXI8TLNXU0f8cthpAMxEshUyQlK-AM0PW2wfrnacNI"`),
}},
TTL: ip(treeNodeTTL),
Type: sp("TXT"),
},
},
{
Action: sp("CREATE"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("jwxydbpxywg6fx3gmdibfa6cj4.n"),
ResourceRecords: []*route53.ResourceRecord{{
Value: sp(`"enrtree-branch:2XS2367YHAXJFGLZHVAWLQD4ZY,H4FHT4B454P6UXFD7JCYQ5PWDY,MHTDO6TMUBRIA2XWG5LUDACK24"`),
}},
TTL: ip(treeNodeTTL),
Type: sp("TXT"),
},
},
{
Action: sp("CREATE"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("mhtdo6tmubria2xwg5ludack24.n"),
ResourceRecords: []*route53.ResourceRecord{{
Value: sp(`"enr:-HW4QLAYqmrwllBEnzWWs7I5Ev2IAs7x_dZlbYdRdMUx5EyKHDXp7AV5CkuPGUPdvbv1_Ms1CPfhcGCvSElSosZmyoqAgmlkgnY0iXNlY3AyNTZrMaECriawHKWdDRk2xeZkrOXBQ0dfMFLHY4eENZwdufn1S1o"`),
}},
TTL: ip(treeNodeTTL),
Type: sp("TXT"),
},
},
{
Action: sp("UPSERT"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("n"),
ResourceRecords: []*route53.ResourceRecord{{
Value: sp(`"enrtree-root:v1 e=JWXYDBPXYWG6FX3GMDIBFA6CJ4 l=C7HRFPF3BLGF3YR4DY5KX3SMBE seq=1 sig=o908WmNp7LibOfPsr4btQwatZJ5URBr2ZAuxvK4UWHlsB9sUOTJQaGAlLPVAhM__XJesCHxLISo94z5Z2a463gA"`),
}},
TTL: ip(rootTTL),
Type: sp("TXT"),
},
},
{
Action: sp("DELETE"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("2kfjogvxdqtxxugbh7gs7naaai.n"),
ResourceRecords: []*route53.ResourceRecord{
{Value: sp(`"enr:-HW4QO1ml1DdXLeZLsUxewnthhUy8eROqkDyoMTyavfks9JlYQIlMFEUoM78PovJDPQrAkrb3LRJ-""vtrymDguKCOIAWAgmlkgnY0iXNlY3AyNTZrMaEDffaGfJzgGhUif1JqFruZlYmA31HzathLSWxfbq_QoQ4"`)},
},
TTL: ip(3333),
Type: sp("TXT"),
},
},
{
Action: sp("DELETE"),
ResourceRecordSet: &route53.ResourceRecordSet{
Name: sp("fdxn3sn67na5dka4j2gok7bvqi.n"),
ResourceRecords: []*route53.ResourceRecord{{
Value: sp(`"enrtree-branch:"`),
}},
TTL: ip(treeNodeTTL),
Type: sp("TXT"),
},
},
}
var client route53Client
changes := client.computeChanges("n", testTree1, testTree0)
if !reflect.DeepEqual(changes, wantChanges) {
t.Fatalf("wrong changes (got %d, want %d)", len(changes), len(wantChanges))
}
// Check splitting according to size.
wantSplit := [][]*route53.Change{
wantChanges[:4],
wantChanges[4:6],
wantChanges[6:],
}
split := splitChanges(changes, 600, 4000)
if !reflect.DeepEqual(split, wantSplit) {
t.Fatalf("wrong split batches: got %d, want %d", len(split), len(wantSplit))
}
// Check splitting according to count.
wantSplit = [][]*route53.Change{
wantChanges[:5],
wantChanges[5:],
}
split = splitChanges(changes, 10000, 6)
if !reflect.DeepEqual(split, wantSplit) {
t.Fatalf("wrong split batches: got %d, want %d", len(split), len(wantSplit))
}
}
func sp(s string) *string { return &s }
func ip(i int64) *int64 { return &i }

cmd/devp2p/dnscmd.go Normal file

@@ -0,0 +1,386 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"crypto/ecdsa"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"time"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console/prompt"
"github.com/ethereum/go-ethereum/p2p/dnsdisc"
"github.com/ethereum/go-ethereum/p2p/enode"
cli "gopkg.in/urfave/cli.v1"
)
var (
dnsCommand = cli.Command{
Name: "dns",
Usage: "DNS Discovery Commands",
Subcommands: []cli.Command{
dnsSyncCommand,
dnsSignCommand,
dnsTXTCommand,
dnsCloudflareCommand,
dnsRoute53Command,
},
}
dnsSyncCommand = cli.Command{
Name: "sync",
Usage: "Download a DNS discovery tree",
ArgsUsage: "<url> [ <directory> ]",
Action: dnsSync,
Flags: []cli.Flag{dnsTimeoutFlag},
}
dnsSignCommand = cli.Command{
Name: "sign",
Usage: "Sign a DNS discovery tree",
ArgsUsage: "<tree-directory> <key-file>",
Action: dnsSign,
Flags: []cli.Flag{dnsDomainFlag, dnsSeqFlag},
}
dnsTXTCommand = cli.Command{
Name: "to-txt",
Usage: "Create DNS TXT records for a discovery tree",
ArgsUsage: "<tree-directory> <output-file>",
Action: dnsToTXT,
}
dnsCloudflareCommand = cli.Command{
Name: "to-cloudflare",
Usage: "Deploy DNS TXT records to CloudFlare",
ArgsUsage: "<tree-directory>",
Action: dnsToCloudflare,
Flags: []cli.Flag{cloudflareTokenFlag, cloudflareZoneIDFlag},
}
dnsRoute53Command = cli.Command{
Name: "to-route53",
Usage: "Deploy DNS TXT records to Amazon Route53",
ArgsUsage: "<tree-directory>",
Action: dnsToRoute53,
Flags: []cli.Flag{route53AccessKeyFlag, route53AccessSecretFlag, route53ZoneIDFlag},
}
)
var (
dnsTimeoutFlag = cli.DurationFlag{
Name: "timeout",
Usage: "Timeout for DNS lookups",
}
dnsDomainFlag = cli.StringFlag{
Name: "domain",
Usage: "Domain name of the tree",
}
dnsSeqFlag = cli.UintFlag{
Name: "seq",
Usage: "New sequence number of the tree",
}
)
const (
rootTTL = 30 * 60 // 30 min
treeNodeTTL = 4 * 7 * 24 * 60 * 60 // 4 weeks
)
// dnsSync performs dnsSyncCommand.
func dnsSync(ctx *cli.Context) error {
var (
c = dnsClient(ctx)
url = ctx.Args().Get(0)
outdir = ctx.Args().Get(1)
)
domain, _, err := dnsdisc.ParseURL(url)
if err != nil {
return err
}
if outdir == "" {
outdir = domain
}
t, err := c.SyncTree(url)
if err != nil {
return err
}
def := treeToDefinition(url, t)
def.Meta.LastModified = time.Now()
writeTreeMetadata(outdir, def)
writeTreeNodes(outdir, def)
return nil
}
func dnsSign(ctx *cli.Context) error {
if ctx.NArg() < 2 {
return fmt.Errorf("need tree definition directory and key file as arguments")
}
var (
defdir = ctx.Args().Get(0)
keyfile = ctx.Args().Get(1)
def = loadTreeDefinition(defdir)
domain = directoryName(defdir)
)
if def.Meta.URL != "" {
d, _, err := dnsdisc.ParseURL(def.Meta.URL)
if err != nil {
return fmt.Errorf("invalid 'url' field: %v", err)
}
domain = d
}
if ctx.IsSet(dnsDomainFlag.Name) {
domain = ctx.String(dnsDomainFlag.Name)
}
if ctx.IsSet(dnsSeqFlag.Name) {
def.Meta.Seq = ctx.Uint(dnsSeqFlag.Name)
} else {
def.Meta.Seq++ // Auto-bump sequence number if not supplied via flag.
}
t, err := dnsdisc.MakeTree(def.Meta.Seq, def.Nodes, def.Meta.Links)
if err != nil {
return err
}
key := loadSigningKey(keyfile)
url, err := t.Sign(key, domain)
if err != nil {
return fmt.Errorf("can't sign: %v", err)
}
def = treeToDefinition(url, t)
def.Meta.LastModified = time.Now()
writeTreeMetadata(defdir, def)
return nil
}
func directoryName(dir string) string {
abs, err := filepath.Abs(dir)
if err != nil {
exit(err)
}
return filepath.Base(abs)
}
// dnsToTXT performs dnsTXTCommand.
func dnsToTXT(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need tree definition directory as argument")
}
output := ctx.Args().Get(1)
if output == "" {
output = "-" // default to stdout
}
domain, t, err := loadTreeDefinitionForExport(ctx.Args().Get(0))
if err != nil {
return err
}
writeTXTJSON(output, t.ToTXT(domain))
return nil
}
// dnsToCloudflare performs dnsCloudflareCommand.
func dnsToCloudflare(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need tree definition directory as argument")
}
domain, t, err := loadTreeDefinitionForExport(ctx.Args().Get(0))
if err != nil {
return err
}
client := newCloudflareClient(ctx)
return client.deploy(domain, t)
}
// dnsToRoute53 performs dnsRoute53Command.
func dnsToRoute53(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need tree definition directory as argument")
}
domain, t, err := loadTreeDefinitionForExport(ctx.Args().Get(0))
if err != nil {
return err
}
client := newRoute53Client(ctx)
return client.deploy(domain, t)
}
// loadSigningKey loads a private key in Ethereum keystore format.
func loadSigningKey(keyfile string) *ecdsa.PrivateKey {
keyjson, err := ioutil.ReadFile(keyfile)
if err != nil {
exit(fmt.Errorf("failed to read the keyfile at '%s': %v", keyfile, err))
}
password, _ := prompt.Stdin.PromptPassword("Please enter the password for '" + keyfile + "': ")
key, err := keystore.DecryptKey(keyjson, password)
if err != nil {
exit(fmt.Errorf("error decrypting key: %v", err))
}
return key.PrivateKey
}
// dnsClient configures the DNS discovery client from command line flags.
func dnsClient(ctx *cli.Context) *dnsdisc.Client {
var cfg dnsdisc.Config
if commandHasFlag(ctx, dnsTimeoutFlag) {
cfg.Timeout = ctx.Duration(dnsTimeoutFlag.Name)
}
return dnsdisc.NewClient(cfg)
}
// There are two file formats for DNS node trees on disk:
//
// The 'TXT' format is a single JSON file containing DNS TXT records
// as a JSON object where the keys are names and the values are objects
// containing the value of the record.
//
// The 'definition' format is a directory containing two files:
//
// enrtree-info.json -- contains sequence number & links to other trees
// nodes.json        -- contains the node records as a JSON object keyed by node ID.
//
// This format exists because it's convenient to edit. nodes.json can be generated
// in multiple ways: it may be written by a DHT crawler or compiled by a human.
type dnsDefinition struct {
Meta dnsMetaJSON
Nodes []*enode.Node
}
type dnsMetaJSON struct {
URL string `json:"url,omitempty"`
Seq uint `json:"seq"`
Sig string `json:"signature,omitempty"`
Links []string `json:"links"`
LastModified time.Time `json:"lastModified"`
}
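// For illustration only (values are made up; the enrtree URL is copied from the
// Route53 test fixture above), an enrtree-info.json written by writeTreeMetadata
// could look like:
//
//	{
//	  "url": "enrtree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@morenodes.example.org",
//	  "seq": 3,
//	  "signature": "<base64 signature written by 'devp2p dns sign'>",
//	  "links": [],
//	  "lastModified": "2020-07-27T12:00:00Z"
//	}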
func treeToDefinition(url string, t *dnsdisc.Tree) *dnsDefinition {
meta := dnsMetaJSON{
URL: url,
Seq: t.Seq(),
Sig: t.Signature(),
Links: t.Links(),
}
if meta.Links == nil {
meta.Links = []string{}
}
return &dnsDefinition{Meta: meta, Nodes: t.Nodes()}
}
// loadTreeDefinition loads a directory in 'definition' format.
func loadTreeDefinition(directory string) *dnsDefinition {
metaFile, nodesFile := treeDefinitionFiles(directory)
var def dnsDefinition
err := common.LoadJSON(metaFile, &def.Meta)
if err != nil && !os.IsNotExist(err) {
exit(err)
}
if def.Meta.Links == nil {
def.Meta.Links = []string{}
}
// Check link syntax.
for _, link := range def.Meta.Links {
if _, _, err := dnsdisc.ParseURL(link); err != nil {
exit(fmt.Errorf("invalid link %q: %v", link, err))
}
}
// Check/convert nodes.
nodes := loadNodesJSON(nodesFile)
if err := nodes.verify(); err != nil {
exit(err)
}
def.Nodes = nodes.nodes()
return &def
}
// loadTreeDefinitionForExport loads a DNS tree and ensures it is signed.
func loadTreeDefinitionForExport(dir string) (domain string, t *dnsdisc.Tree, err error) {
metaFile, _ := treeDefinitionFiles(dir)
def := loadTreeDefinition(dir)
if def.Meta.URL == "" {
return "", nil, fmt.Errorf("missing 'url' field in %v", metaFile)
}
domain, pubkey, err := dnsdisc.ParseURL(def.Meta.URL)
if err != nil {
return "", nil, fmt.Errorf("invalid 'url' field in %v: %v", metaFile, err)
}
if t, err = dnsdisc.MakeTree(def.Meta.Seq, def.Nodes, def.Meta.Links); err != nil {
return "", nil, err
}
if err := ensureValidTreeSignature(t, pubkey, def.Meta.Sig); err != nil {
return "", nil, err
}
return domain, t, nil
}
// ensureValidTreeSignature checks that sig is valid for tree and assigns it as the
// tree's signature if valid.
func ensureValidTreeSignature(t *dnsdisc.Tree, pubkey *ecdsa.PublicKey, sig string) error {
if sig == "" {
return fmt.Errorf("missing signature, run 'devp2p dns sign' first")
}
if err := t.SetSignature(pubkey, sig); err != nil {
return fmt.Errorf("invalid signature on tree, run 'devp2p dns sign' to update it")
}
return nil
}
// writeTreeMetadata writes a DNS node tree metadata file to the given directory.
func writeTreeMetadata(directory string, def *dnsDefinition) {
metaJSON, err := json.MarshalIndent(&def.Meta, "", jsonIndent)
if err != nil {
exit(err)
}
if err := os.Mkdir(directory, 0744); err != nil && !os.IsExist(err) {
exit(err)
}
metaFile, _ := treeDefinitionFiles(directory)
if err := ioutil.WriteFile(metaFile, metaJSON, 0644); err != nil {
exit(err)
}
}
func writeTreeNodes(directory string, def *dnsDefinition) {
ns := make(nodeSet, len(def.Nodes))
ns.add(def.Nodes...)
_, nodesFile := treeDefinitionFiles(directory)
writeNodesJSON(nodesFile, ns)
}
func treeDefinitionFiles(directory string) (string, string) {
meta := filepath.Join(directory, "enrtree-info.json")
nodes := filepath.Join(directory, "nodes.json")
return meta, nodes
}
// writeTXTJSON writes TXT records in JSON format.
func writeTXTJSON(file string, txt map[string]string) {
txtJSON, err := json.MarshalIndent(txt, "", jsonIndent)
if err != nil {
exit(err)
}
if file == "-" {
os.Stdout.Write(txtJSON)
fmt.Println()
return
}
if err := ioutil.WriteFile(file, txtJSON, 0644); err != nil {
exit(err)
}
}

cmd/devp2p/internal/v4test/discv4tests.go Normal file

@@ -0,0 +1,467 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v4test
import (
"bytes"
"crypto/rand"
"fmt"
"net"
"reflect"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/utesting"
"github.com/ethereum/go-ethereum/p2p/discover/v4wire"
)
const (
expiration = 20 * time.Second
wrongPacket = 66
macSize = 256 / 8
)
var (
// Remote node under test
Remote string
// IP where the first tester is listening, port will be assigned
Listen1 string = "127.0.0.1"
// IP where the second tester is listening, port will be assigned
// Before running the test, you may have to `sudo ifconfig lo0 add 127.0.0.2` (on MacOS at least)
Listen2 string = "127.0.0.2"
)
type pingWithJunk struct {
Version uint
From, To v4wire.Endpoint
Expiration uint64
JunkData1 uint
JunkData2 []byte
}
func (req *pingWithJunk) Name() string { return "PING/v4" }
func (req *pingWithJunk) Kind() byte { return v4wire.PingPacket }
type pingWrongType struct {
Version uint
From, To v4wire.Endpoint
Expiration uint64
}
func (req *pingWrongType) Name() string { return "WRONG/v4" }
func (req *pingWrongType) Kind() byte { return wrongPacket }
func futureExpiration() uint64 {
return uint64(time.Now().Add(expiration).Unix())
}
// This test just sends a PING packet and expects a response.
func BasicPing(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// checkPong verifies that reply is a valid PONG matching the given ping hash.
func (te *testenv) checkPong(reply v4wire.Packet, pingHash []byte) error {
if reply == nil || reply.Kind() != v4wire.PongPacket {
return fmt.Errorf("expected PONG reply, got %v", reply)
}
pong := reply.(*v4wire.Pong)
if !bytes.Equal(pong.ReplyTok, pingHash) {
return fmt.Errorf("PONG reply token mismatch: got %x, want %x", pong.ReplyTok, pingHash)
}
wantEndpoint := te.localEndpoint(te.l1)
if !reflect.DeepEqual(pong.To, wantEndpoint) {
return fmt.Errorf("PONG 'to' endpoint mismatch: got %+v, want %+v", pong.To, wantEndpoint)
}
if v4wire.Expired(pong.Expiration) {
return fmt.Errorf("PONG is expired (%v)", pong.Expiration)
}
return nil
}
// This test sends a PING packet with wrong 'to' field and expects a PONG response.
func PingWrongTo(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: wrongEndpoint,
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with wrong 'from' field and expects a PONG response.
func PingWrongFrom(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: wrongEndpoint,
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with additional data at the end and expects a PONG
// response. The remote node should respond because EIP-8 mandates ignoring additional
// trailing data.
func PingExtraData(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
pingHash := te.send(te.l1, &pingWithJunk{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
JunkData1: 42,
JunkData2: []byte{9, 8, 7, 6, 5, 4, 3, 2, 1},
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with additional data and wrong 'from' field
// and expects a PONG response.
func PingExtraDataWrongFrom(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
req := pingWithJunk{
Version: 4,
From: wrongEndpoint,
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
JunkData1: 42,
JunkData2: []byte{9, 8, 7, 6, 5, 4, 3, 2, 1},
}
pingHash := te.send(te.l1, &req)
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test sends a PING packet with an expiration in the past.
// The remote node should not respond.
func PingPastExpiration(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: -futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if reply != nil {
t.Fatal("Expected no reply, got", reply)
}
}
// This test sends an invalid packet. The remote node should not respond.
func WrongPacketType(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
te.send(te.l1, &pingWrongType{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if reply != nil {
t.Fatal("Expected no reply, got", reply)
}
}
// This test verifies that the default behaviour of ignoring 'from' fields is unaffected by
// the bonding process. After bonding, it pings the target with a different from endpoint.
func BondThenPingWithWrongFrom(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
wrongEndpoint := v4wire.Endpoint{IP: net.ParseIP("192.0.2.0")}
pingHash := te.send(te.l1, &v4wire.Ping{
Version: 4,
From: wrongEndpoint,
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}
// This test just sends FINDNODE. The remote node should not reply
// because the endpoint proof has not completed.
func FindnodeWithoutEndpointProof(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
req := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(req.Target[:])
te.send(te.l1, &req)
reply, _, _ := te.read(te.l1)
if reply != nil {
t.Fatal("Expected no response, got", reply)
}
}
// BasicFindnode sends a FINDNODE request after performing the endpoint
// proof. The remote node should respond.
func BasicFindnode(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
findnode := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l1, &findnode)
reply, _, err := te.read(te.l1)
if err != nil {
t.Fatal("read find nodes", err)
}
if reply.Kind() != v4wire.NeighborsPacket {
t.Fatal("Expected neighbors, got", reply.Name())
}
}
// This test sends an unsolicited NEIGHBORS packet after the endpoint proof, then sends
// FINDNODE to read the remote table. The remote node should not return the node contained
// in the unsolicited NEIGHBORS packet.
func UnsolicitedNeighbors(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
// Send unsolicited NEIGHBORS response.
fakeKey, _ := crypto.GenerateKey()
encFakeKey := v4wire.EncodePubkey(&fakeKey.PublicKey)
neighbors := v4wire.Neighbors{
Expiration: futureExpiration(),
Nodes: []v4wire.Node{{
ID: encFakeKey,
IP: net.IP{1, 2, 3, 4},
UDP: 30303,
TCP: 30303,
}},
}
te.send(te.l1, &neighbors)
// Check if the remote node included the fake node.
te.send(te.l1, &v4wire.Findnode{
Expiration: futureExpiration(),
Target: encFakeKey,
})
reply, _, err := te.read(te.l1)
if err != nil {
t.Fatal("read find nodes", err)
}
if reply.Kind() != v4wire.NeighborsPacket {
t.Fatal("Expected neighbors, got", reply.Name())
}
nodes := reply.(*v4wire.Neighbors).Nodes
if contains(nodes, encFakeKey) {
t.Fatal("neighbors response contains node from earlier unsolicited neighbors response")
}
}
// This test sends FINDNODE with an expiration timestamp in the past.
// The remote node should not respond.
func FindnodePastExpiration(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
bond(t, te)
findnode := v4wire.Findnode{Expiration: -futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l1, &findnode)
for {
reply, _, _ := te.read(te.l1)
if reply == nil {
return
} else if reply.Kind() == v4wire.NeighborsPacket {
t.Fatal("Unexpected NEIGHBORS response for expired FINDNODE request")
}
}
}
// bond performs the endpoint proof with the remote node.
func bond(t *utesting.T, te *testenv) {
te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
var gotPing, gotPong bool
for !gotPing || !gotPong {
req, hash, err := te.read(te.l1)
if err != nil {
t.Fatal(err)
}
switch req.(type) {
case *v4wire.Ping:
te.send(te.l1, &v4wire.Pong{
To: te.remoteEndpoint(),
ReplyTok: hash,
Expiration: futureExpiration(),
})
gotPing = true
case *v4wire.Pong:
// TODO: maybe verify pong data here
gotPong = true
}
}
}
// This test attempts to perform a traffic amplification attack against a
// 'victim' endpoint using FINDNODE. In this attack scenario, the attacker
// attempts to complete the endpoint proof non-interactively by sending a PONG
// with mismatching reply token from the 'victim' endpoint. The attack works if
// the remote node does not verify the PONG reply token field correctly. The
// attacker could then perform traffic amplification by sending many FINDNODE
// requests to the discovery node, which would reply to the 'victim' address.
func FindnodeAmplificationInvalidPongHash(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
// Send PING to start endpoint verification.
te.send(te.l1, &v4wire.Ping{
Version: 4,
From: te.localEndpoint(te.l1),
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
var gotPing, gotPong bool
for !gotPing || !gotPong {
req, _, err := te.read(te.l1)
if err != nil {
t.Fatal(err)
}
switch req.(type) {
case *v4wire.Ping:
// Send PONG from this node ID, but with invalid ReplyTok.
te.send(te.l1, &v4wire.Pong{
To: te.remoteEndpoint(),
ReplyTok: make([]byte, macSize),
Expiration: futureExpiration(),
})
gotPing = true
case *v4wire.Pong:
gotPong = true
}
}
// Now send FINDNODE. The remote node should not respond because our
// PONG did not reference the PING hash.
findnode := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l1, &findnode)
// If we receive a NEIGHBORS response, the attack worked and the test fails.
reply, _, _ := te.read(te.l1)
if reply != nil && reply.Kind() == v4wire.NeighborsPacket {
t.Error("Got neighbors")
}
}
// This test attempts to perform a traffic amplification attack using FINDNODE.
// The attack works if the remote node does not verify the IP address of FINDNODE
// against the endpoint verification proof done by PING/PONG.
func FindnodeAmplificationWrongIP(t *utesting.T) {
te := newTestEnv(Remote, Listen1, Listen2)
defer te.close()
// Do the endpoint proof from the l1 IP.
bond(t, te)
// Now send FINDNODE from the same node ID, but different IP address.
// The remote node should not respond.
findnode := v4wire.Findnode{Expiration: futureExpiration()}
rand.Read(findnode.Target[:])
te.send(te.l2, &findnode)
// If we receive a NEIGHBORS response, the attack worked and the test fails.
reply, _, _ := te.read(te.l2)
if reply != nil {
t.Error("Got NEIGHBORS response for FINDNODE from wrong IP")
}
}
var AllTests = []utesting.Test{
{Name: "Ping/Basic", Fn: BasicPing},
{Name: "Ping/WrongTo", Fn: PingWrongTo},
{Name: "Ping/WrongFrom", Fn: PingWrongFrom},
{Name: "Ping/ExtraData", Fn: PingExtraData},
{Name: "Ping/ExtraDataWrongFrom", Fn: PingExtraDataWrongFrom},
{Name: "Ping/PastExpiration", Fn: PingPastExpiration},
{Name: "Ping/WrongPacketType", Fn: WrongPacketType},
{Name: "Ping/BondThenPingWithWrongFrom", Fn: BondThenPingWithWrongFrom},
{Name: "Findnode/WithoutEndpointProof", Fn: FindnodeWithoutEndpointProof},
{Name: "Findnode/BasicFindnode", Fn: BasicFindnode},
{Name: "Findnode/UnsolicitedNeighbors", Fn: UnsolicitedNeighbors},
{Name: "Findnode/PastExpiration", Fn: FindnodePastExpiration},
{Name: "Amplification/InvalidPongHash", Fn: FindnodeAmplificationInvalidPongHash},
{Name: "Amplification/WrongIP", Fn: FindnodeAmplificationWrongIP},
}

cmd/devp2p/internal/v4test/testenv.go Normal file

@@ -0,0 +1,123 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package v4test
import (
"crypto/ecdsa"
"fmt"
"net"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/discover/v4wire"
"github.com/ethereum/go-ethereum/p2p/enode"
)
const waitTime = 300 * time.Millisecond
type testenv struct {
l1, l2 net.PacketConn
key *ecdsa.PrivateKey
remote *enode.Node
remoteAddr *net.UDPAddr
}
func newTestEnv(remote string, listen1, listen2 string) *testenv {
l1, err := net.ListenPacket("udp", fmt.Sprintf("%v:0", listen1))
if err != nil {
panic(err)
}
l2, err := net.ListenPacket("udp", fmt.Sprintf("%v:0", listen2))
if err != nil {
panic(err)
}
key, err := crypto.GenerateKey()
if err != nil {
panic(err)
}
node, err := enode.Parse(enode.ValidSchemes, remote)
if err != nil {
panic(err)
}
if node.IP() == nil || node.UDP() == 0 {
var ip net.IP
var tcpPort, udpPort int
if ip = node.IP(); ip == nil {
ip = net.ParseIP("127.0.0.1")
}
if tcpPort = node.TCP(); tcpPort == 0 {
tcpPort = 30303
}
if udpPort = node.UDP(); udpPort == 0 {
udpPort = 30303
}
node = enode.NewV4(node.Pubkey(), ip, tcpPort, udpPort)
}
addr := &net.UDPAddr{IP: node.IP(), Port: node.UDP()}
return &testenv{l1, l2, key, node, addr}
}
func (te *testenv) close() {
te.l1.Close()
te.l2.Close()
}
func (te *testenv) send(c net.PacketConn, req v4wire.Packet) []byte {
packet, hash, err := v4wire.Encode(te.key, req)
if err != nil {
panic(fmt.Errorf("can't encode %v packet: %v", req.Name(), err))
}
if _, err := c.WriteTo(packet, te.remoteAddr); err != nil {
panic(fmt.Errorf("can't send %v: %v", req.Name(), err))
}
return hash
}
func (te *testenv) read(c net.PacketConn) (v4wire.Packet, []byte, error) {
buf := make([]byte, 2048)
if err := c.SetReadDeadline(time.Now().Add(waitTime)); err != nil {
return nil, nil, err
}
n, _, err := c.ReadFrom(buf)
if err != nil {
return nil, nil, err
}
p, _, hash, err := v4wire.Decode(buf[:n])
return p, hash, err
}
func (te *testenv) localEndpoint(c net.PacketConn) v4wire.Endpoint {
addr := c.LocalAddr().(*net.UDPAddr)
return v4wire.Endpoint{
IP: addr.IP.To4(),
UDP: uint16(addr.Port),
TCP: 0,
}
}
func (te *testenv) remoteEndpoint() v4wire.Endpoint {
return v4wire.NewEndpoint(te.remoteAddr, 0)
}
func contains(ns []v4wire.Node, key v4wire.Pubkey) bool {
for _, n := range ns {
if n.ID == key {
return true
}
}
return false
}

cmd/devp2p/keycmd.go Normal file

@@ -0,0 +1,105 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"net"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"gopkg.in/urfave/cli.v1"
)
var (
keyCommand = cli.Command{
Name: "key",
Usage: "Operations on node keys",
Subcommands: []cli.Command{
keyGenerateCommand,
keyToNodeCommand,
},
}
keyGenerateCommand = cli.Command{
Name: "generate",
Usage: "Generates node key files",
ArgsUsage: "keyfile",
Action: genkey,
}
keyToNodeCommand = cli.Command{
Name: "to-enode",
Usage: "Creates an enode URL from a node key file",
ArgsUsage: "keyfile",
Action: keyToURL,
Flags: []cli.Flag{hostFlag, tcpPortFlag, udpPortFlag},
}
)
var (
hostFlag = cli.StringFlag{
Name: "ip",
Usage: "IP address of the node",
Value: "127.0.0.1",
}
tcpPortFlag = cli.IntFlag{
Name: "tcp",
Usage: "TCP port of the node",
Value: 30303,
}
udpPortFlag = cli.IntFlag{
Name: "udp",
Usage: "UDP port of the node",
Value: 30303,
}
)
func genkey(ctx *cli.Context) error {
if ctx.NArg() != 1 {
return fmt.Errorf("need key file as argument")
}
file := ctx.Args().Get(0)
key, err := crypto.GenerateKey()
if err != nil {
return fmt.Errorf("could not generate key: %v", err)
}
return crypto.SaveECDSA(file, key)
}
func keyToURL(ctx *cli.Context) error {
if ctx.NArg() != 1 {
return fmt.Errorf("need key file as argument")
}
var (
file = ctx.Args().Get(0)
host = ctx.String(hostFlag.Name)
tcp = ctx.Int(tcpPortFlag.Name)
udp = ctx.Int(udpPortFlag.Name)
)
key, err := crypto.LoadECDSA(file)
if err != nil {
return err
}
ip := net.ParseIP(host)
if ip == nil {
return fmt.Errorf("invalid IP address %q", host)
}
node := enode.NewV4(&key.PublicKey, ip, tcp, udp)
fmt.Println(node.URLv4())
return nil
}
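// Illustrative session (key file name and IP are placeholders):
//
//	devp2p key generate node1.key
//	devp2p key to-enode -ip 198.51.100.7 -tcp 30303 -udp 30303 node1.key
//
// The second command prints the enode:// URL for the key at the given endpoint;
// -ip, -tcp and -udp fall back to 127.0.0.1 and 30303 when omitted.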

cmd/devp2p/main.go

@@ -20,8 +20,10 @@ import (
"fmt"
"os"
"path/filepath"
"sort"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/params"
"gopkg.in/urfave/cli.v1"
)
@@ -43,7 +45,7 @@ func init() {
// Set up the CLI app.
app.Flags = append(app.Flags, debug.Flags...)
app.Before = func(ctx *cli.Context) error {
return debug.Setup(ctx, "")
return debug.Setup(ctx)
}
app.After = func(ctx *cli.Context) error {
debug.Exit()
@@ -56,13 +58,42 @@ func init() {
// Add subcommands.
app.Commands = []cli.Command{
enrdumpCommand,
keyCommand,
discv4Command,
discv5Command,
dnsCommand,
nodesetCommand,
}
}
func main() {
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
exit(app.Run(os.Args))
}
// commandHasFlag returns true if the current command supports the given flag.
func commandHasFlag(ctx *cli.Context, flag cli.Flag) bool {
flags := ctx.FlagNames()
sort.Strings(flags)
i := sort.SearchStrings(flags, flag.GetName())
return i != len(flags) && flags[i] == flag.GetName()
}
// getNodeArg handles the common case of a single node descriptor argument.
func getNodeArg(ctx *cli.Context) *enode.Node {
if ctx.NArg() != 1 {
exit("missing node as command-line argument")
}
n, err := parseNode(ctx.Args()[0])
if err != nil {
exit(err)
}
return n
}
func exit(err interface{}) {
if err == nil {
os.Exit(0)
}
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}

cmd/devp2p/nodeset.go Normal file

@@ -0,0 +1,102 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"sort"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/p2p/enode"
)
const jsonIndent = " "
// nodeSet is the nodes.json file format. It holds a set of node records
// as a JSON object.
type nodeSet map[enode.ID]nodeJSON
type nodeJSON struct {
Seq uint64 `json:"seq"`
N *enode.Node `json:"record"`
// The score tracks how many liveness checks were performed. It is incremented by one
// every time the node passes a check, and halved every time it doesn't.
Score int `json:"score,omitempty"`
// These two track the time of last successful contact.
FirstResponse time.Time `json:"firstResponse,omitempty"`
LastResponse time.Time `json:"lastResponse,omitempty"`
// This one tracks the time of our last attempt to contact the node.
LastCheck time.Time `json:"lastCheck,omitempty"`
}
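// An illustrative nodes.json entry (placeholders, not real data): the top-level
// object is keyed by the node ID, and each value follows nodeJSON:
//
//	{
//	  "<64-hex-char node ID>": {
//	    "seq": 5,
//	    "record": "<enr:-... node record>",
//	    "score": 3,
//	    "firstResponse": "2020-07-20T10:00:00Z",
//	    "lastResponse": "2020-07-27T09:30:00Z",
//	    "lastCheck": "2020-07-27T09:30:00Z"
//	  }
//	}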
func loadNodesJSON(file string) nodeSet {
var nodes nodeSet
if err := common.LoadJSON(file, &nodes); err != nil {
exit(err)
}
return nodes
}
func writeNodesJSON(file string, nodes nodeSet) {
nodesJSON, err := json.MarshalIndent(nodes, "", jsonIndent)
if err != nil {
exit(err)
}
if file == "-" {
os.Stdout.Write(nodesJSON)
return
}
if err := ioutil.WriteFile(file, nodesJSON, 0644); err != nil {
exit(err)
}
}
func (ns nodeSet) nodes() []*enode.Node {
result := make([]*enode.Node, 0, len(ns))
for _, n := range ns {
result = append(result, n.N)
}
// Sort by ID.
sort.Slice(result, func(i, j int) bool {
return bytes.Compare(result[i].ID().Bytes(), result[j].ID().Bytes()) < 0
})
return result
}
func (ns nodeSet) add(nodes ...*enode.Node) {
for _, n := range nodes {
ns[n.ID()] = nodeJSON{Seq: n.Seq(), N: n}
}
}
func (ns nodeSet) verify() error {
for id, n := range ns {
if n.N.ID() != id {
return fmt.Errorf("invalid node %v: ID does not match ID %v in record", id, n.N.ID())
}
if n.N.Seq() != n.Seq {
return fmt.Errorf("invalid node %v: 'seq' does not match seq %d from record", id, n.N.Seq())
}
}
return nil
}

cmd/devp2p/nodesetcmd.go Normal file

@@ -0,0 +1,193 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"net"
"time"
"github.com/ethereum/go-ethereum/core/forkid"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"gopkg.in/urfave/cli.v1"
)
var (
nodesetCommand = cli.Command{
Name: "nodeset",
Usage: "Node set tools",
Subcommands: []cli.Command{
nodesetInfoCommand,
nodesetFilterCommand,
},
}
nodesetInfoCommand = cli.Command{
Name: "info",
Usage: "Shows statistics about a node set",
Action: nodesetInfo,
ArgsUsage: "<nodes.json>",
}
nodesetFilterCommand = cli.Command{
Name: "filter",
Usage: "Filters a node set",
Action: nodesetFilter,
ArgsUsage: "<nodes.json> filters..",
SkipFlagParsing: true,
}
)
func nodesetInfo(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need nodes file as argument")
}
ns := loadNodesJSON(ctx.Args().First())
fmt.Printf("Set contains %d nodes.\n", len(ns))
return nil
}
func nodesetFilter(ctx *cli.Context) error {
if ctx.NArg() < 1 {
return fmt.Errorf("need nodes file as argument")
}
ns := loadNodesJSON(ctx.Args().First())
filter, err := andFilter(ctx.Args().Tail())
if err != nil {
return err
}
result := make(nodeSet)
for id, n := range ns {
if filter(n) {
result[id] = n
}
}
writeNodesJSON("-", result)
return nil
}
type nodeFilter func(nodeJSON) bool
type nodeFilterC struct {
narg int
fn func([]string) (nodeFilter, error)
}
var filterFlags = map[string]nodeFilterC{
"-ip": {1, ipFilter},
"-min-age": {1, minAgeFilter},
"-eth-network": {1, ethFilter},
"-les-server": {0, lesFilter},
}
func parseFilters(args []string) ([]nodeFilter, error) {
var filters []nodeFilter
for len(args) > 0 {
fc, ok := filterFlags[args[0]]
if !ok {
return nil, fmt.Errorf("invalid filter %q", args[0])
}
if len(args) < fc.narg {
return nil, fmt.Errorf("filter %q wants %d arguments, have %d", args[0], fc.narg, len(args))
}
filter, err := fc.fn(args[1:])
if err != nil {
return nil, fmt.Errorf("%s: %v", args[0], err)
}
filters = append(filters, filter)
args = args[fc.narg+1:]
}
return filters, nil
}
func andFilter(args []string) (nodeFilter, error) {
checks, err := parseFilters(args)
if err != nil {
return nil, err
}
f := func(n nodeJSON) bool {
for _, filter := range checks {
if !filter(n) {
return false
}
}
return true
}
return f, nil
}
func ipFilter(args []string) (nodeFilter, error) {
_, cidr, err := net.ParseCIDR(args[0])
if err != nil {
return nil, err
}
f := func(n nodeJSON) bool { return cidr.Contains(n.N.IP()) }
return f, nil
}
func minAgeFilter(args []string) (nodeFilter, error) {
minage, err := time.ParseDuration(args[0])
if err != nil {
return nil, err
}
f := func(n nodeJSON) bool {
age := n.LastResponse.Sub(n.FirstResponse)
return age >= minage
}
return f, nil
}
func ethFilter(args []string) (nodeFilter, error) {
var filter forkid.Filter
switch args[0] {
case "mainnet":
filter = forkid.NewStaticFilter(params.MainnetChainConfig, params.MainnetGenesisHash)
case "rinkeby":
filter = forkid.NewStaticFilter(params.RinkebyChainConfig, params.RinkebyGenesisHash)
case "goerli":
filter = forkid.NewStaticFilter(params.GoerliChainConfig, params.GoerliGenesisHash)
case "ropsten":
filter = forkid.NewStaticFilter(params.RopstenChainConfig, params.RopstenGenesisHash)
default:
return nil, fmt.Errorf("unknown network %q", args[0])
}
f := func(n nodeJSON) bool {
var eth struct {
ForkID forkid.ID
_ []rlp.RawValue `rlp:"tail"`
}
if n.N.Load(enr.WithEntry("eth", &eth)) != nil {
return false
}
return filter(eth.ForkID) == nil
}
return f, nil
}
func lesFilter(args []string) (nodeFilter, error) {
f := func(n nodeJSON) bool {
var les struct {
_ []rlp.RawValue `rlp:"tail"`
}
return n.N.Load(enr.WithEntry("les", &les)) == nil
}
return f, nil
}
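
Taken together, the filter machinery above composes like this; a small illustrative sketch using the helpers from `nodeset.go` (the file name and filter arguments are invented):

```go
// Illustrative only: load a node set, build a combined filter from CLI-style
// arguments, keep the nodes that pass every check, and print the result.
func filterExample() {
	ns := loadNodesJSON("nodes.json") // hypothetical input file
	filter, err := andFilter([]string{"-eth-network", "mainnet", "-min-age", "1h"})
	if err != nil {
		exit(err)
	}
	result := make(nodeSet)
	for id, n := range ns {
		if filter(n) {
			result[id] = n
		}
	}
	writeNodesJSON("-", result) // "-" writes the JSON to stdout
}
```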


@@ -51,7 +51,7 @@ Change the password of a keyfile.`,
}
// Decrypt key with passphrase.
passphrase := getPassphrase(ctx)
passphrase := getPassphrase(ctx, false)
key, err := keystore.DecryptKey(keyjson, passphrase)
if err != nil {
utils.Fatalf("Error decrypting key: %v", err)
@@ -67,7 +67,7 @@ Change the password of a keyfile.`,
}
newPhrase = strings.TrimRight(string(content), "\r\n")
} else {
newPhrase = promptPassphrase(true)
newPhrase = utils.GetPassPhrase("", true)
}
// Encrypt the key with the new passphrase.
@@ -77,7 +77,7 @@ Change the password of a keyfile.`,
}
// Then write the new keyfile in place of the old one.
if err := ioutil.WriteFile(keyfilepath, newJson, 600); err != nil {
if err := ioutil.WriteFile(keyfilepath, newJson, 0600); err != nil {
utils.Fatalf("Error writing new keyfile to disk: %v", err)
}


@@ -52,6 +52,10 @@ If you want to encrypt an existing private key, it can be specified by setting
Name: "privatekey",
Usage: "file containing a raw private key to encrypt",
},
cli.BoolFlag{
Name: "lightkdf",
Usage: "use less secure scrypt parameters",
},
},
Action: func(ctx *cli.Context) error {
// Check if keyfile path given and make sure it doesn't already exist.
@@ -90,8 +94,12 @@ If you want to encrypt an existing private key, it can be specified by setting
}
// Encrypt key with passphrase.
passphrase := promptPassphrase(true)
keyjson, err := keystore.EncryptKey(key, passphrase, keystore.StandardScryptN, keystore.StandardScryptP)
passphrase := getPassphrase(ctx, true)
scryptN, scryptP := keystore.StandardScryptN, keystore.StandardScryptP
if ctx.Bool("lightkdf") {
scryptN, scryptP = keystore.LightScryptN, keystore.LightScryptP
}
keyjson, err := keystore.EncryptKey(key, passphrase, scryptN, scryptP)
if err != nil {
utils.Fatalf("Error encrypting key: %v", err)
}


@@ -60,7 +60,7 @@ make sure to use this feature with great caution!`,
}
// Decrypt key with passphrase.
passphrase := getPassphrase(ctx)
passphrase := getPassphrase(ctx, false)
key, err := keystore.DecryptKey(keyjson, passphrase)
if err != nil {
utils.Fatalf("Error decrypting key: %v", err)


@@ -20,7 +20,7 @@ import (
"fmt"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/internal/flags"
"gopkg.in/urfave/cli.v1"
)
@@ -35,7 +35,7 @@ var gitDate = ""
var app *cli.App
func init() {
app = utils.NewApp(gitCommit, gitDate, "an Ethereum key manager")
app = flags.NewApp(gitCommit, gitDate, "an Ethereum key manager")
app.Commands = []cli.Command{
commandGenerate,
commandInspect,
@@ -43,6 +43,7 @@ func init() {
commandSignMessage,
commandVerifyMessage,
}
cli.CommandHelpTemplate = flags.OriginCommandHelpTemplate
}
// Commonly used command line flags.


@@ -62,7 +62,7 @@ To sign a message contained in a file, use the --msgfile flag.
}
// Decrypt key with passphrase.
passphrase := getPassphrase(ctx)
passphrase := getPassphrase(ctx, false)
key, err := keystore.DecryptKey(keyjson, passphrase)
if err != nil {
utils.Fatalf("Error decrypting key: %v", err)


@@ -34,7 +34,7 @@ func TestMessageSignVerify(t *testing.T) {
message := "test message"
// Create the key.
generate := runEthkey(t, "generate", keyfile)
generate := runEthkey(t, "generate", "--lightkdf", keyfile)
generate.Expect(`
!! Unsupported terminal, password will be echoed.
Password: {{.InputLine "foobar"}}


@@ -23,36 +23,14 @@ import (
"strings"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/crypto"
"gopkg.in/urfave/cli.v1"
)
// promptPassphrase prompts the user for a passphrase. Set confirmation to true
// to require the user to confirm the passphrase.
func promptPassphrase(confirmation bool) string {
passphrase, err := console.Stdin.PromptPassword("Password: ")
if err != nil {
utils.Fatalf("Failed to read password: %v", err)
}
if confirmation {
confirm, err := console.Stdin.PromptPassword("Repeat password: ")
if err != nil {
utils.Fatalf("Failed to read password confirmation: %v", err)
}
if passphrase != confirm {
utils.Fatalf("Passwords do not match")
}
}
return passphrase
}
// getPassphrase obtains a passphrase given by the user. It first checks the
// --passfile command line flag and ultimately prompts the user for a
// passphrase.
func getPassphrase(ctx *cli.Context) string {
func getPassphrase(ctx *cli.Context, confirmation bool) string {
// Look for the --passwordfile flag.
passphraseFile := ctx.String(passphraseFlag.Name)
if passphraseFile != "" {
@@ -65,7 +43,7 @@ func getPassphrase(ctx *cli.Context) string {
}
// Otherwise prompt the user for the passphrase.
return promptPassphrase(false)
return utils.GetPassPhrase("", confirmation)
}
// signHash is a helper function that calculates a hash for the given message

cmd/evm/README.md

@@ -0,0 +1,268 @@
## EVM state transition tool
The `evm t8n` tool is a stateless state transition utility. It can
1. Take a prestate, including
- Accounts,
- Block context information,
- Previous block hashes (*optional)
2. Apply a set of transactions,
3. Apply a mining-reward (*optional),
4. And generate a post-state, including
- State root, transaction root, receipt root,
- Information about rejected transactions,
- Optionally: a full or partial post-state dump
## Specification
The idea is to specify the behaviour of this binary very _strictly_, so that other
node implementors can build replicas based on their own state machines, and the
state generators can swap between a `geth`-based implementation and a `parityvm`-based
implementation.
### Command line params
Command line params that have to be supported are
```
--trace Output full trace logs to files <txhash>.jsonl
--trace.nomemory Disable full memory dump in traces
--trace.nostack Disable stack output in traces
--output.alloc alloc Determines where to put the alloc of the post-state.
`stdout` - into the stdout output
`stderr` - into the stderr output
--output.result result Determines where to put the result (stateroot, txroot etc) of the post-state.
`stdout` - into the stdout output
`stderr` - into the stderr output
--state.fork value Name of ruleset to use.
--state.chainid value ChainID to use (default: 1)
--state.reward value Mining reward. Set to -1 to disable (default: 0)
```
### Error codes and output
All logging should happen against `stderr`.
There are a few (not many) errors that can occur; they are defined below.
#### EVM-based errors (`2` to `9`)
- Other EVM error. Exit code `2`.
- Failed configuration: when a non-supported or invalid fork was specified. Exit code `3`.
- Block history is not supplied, but needed for a `BLOCKHASH` operation. If `BLOCKHASH`
is invoked targeting a block whose history has not been provided, the program will
exit with code `4`.
#### IO errors (`10`-`20`)
- Invalid input JSON: the supplied data could not be unmarshalled.
The program will exit with code `10`.
- IO problems: failure to load or save files. The program will exit with code `11`.
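
These exit codes are the tool's machine-readable interface; a hedged sketch of how a caller might branch on them, assuming the binary is available as `evm` on the PATH and using placeholder input file names:

```go
// Sketch (not part of the repo): invoke `evm t8n` and branch on the exit
// codes documented above. Paths and arguments are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("evm", "t8n",
		"--input.alloc=alloc.json", "--input.env=env.json", "--input.txs=txs.json")
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			switch code := ee.ExitCode(); code {
			case 3:
				fmt.Println("bad fork configuration")
			case 4:
				fmt.Println("missing blockhash for BLOCKHASH")
			case 10, 11:
				fmt.Println("IO / JSON input problem")
			default:
				fmt.Printf("evm failed with code %d\n", code)
			}
			return
		}
		fmt.Println("could not run evm:", err)
	}
}
```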
## Examples
### Basic usage
Invoking it with the provided example files
```
./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json
```
Two resulting files:
`alloc.json`:
```json
{
"0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": {
"balance": "0xfeed1a9d",
"nonce": "0x1"
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0x5ffd4878be161d74",
"nonce": "0xac"
},
"0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0xa410"
}
}
```
`result.json`:
```json
{
"stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13",
"txRoot": "0xc4761fd7b87ff2364c7c60b6c5c8d02e522e815328aaea3f20e3b7b7ef52c42d",
"receiptRoot": "0x056b23fbba480696b65fe5a59b8f2148a1299103c4f57df839233af2cf4ca2d2",
"logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"receipts": [
{
"root": "0x",
"status": "0x1",
"cumulativeGasUsed": "0x5208",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",
"blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"transactionIndex": "0x0"
}
],
"rejected": [
1
]
}
```
We can make them spit out the data to e.g. `stdout` like this:
```
./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.result=stdout --output.alloc=stdout
```
Output:
```json
{
"alloc": {
"0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": {
"balance": "0xfeed1a9d",
"nonce": "0x1"
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0x5ffd4878be161d74",
"nonce": "0xac"
},
"0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0xa410"
}
},
"result": {
"stateRoot": "0x84208a19bc2b46ada7445180c1db162be5b39b9abc8c0a54b05d32943eae4e13",
"txRoot": "0xc4761fd7b87ff2364c7c60b6c5c8d02e522e815328aaea3f20e3b7b7ef52c42d",
"receiptRoot": "0x056b23fbba480696b65fe5a59b8f2148a1299103c4f57df839233af2cf4ca2d2",
"logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"receipts": [
{
"root": "0x",
"status": "0x1",
"cumulativeGasUsed": "0x5208",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"logs": null,
"transactionHash": "0x0557bacce3375c98d806609b8d5043072f0b6a8bae45ae5a67a00d3a1a18d673",
"contractAddress": "0x0000000000000000000000000000000000000000",
"gasUsed": "0x5208",
"blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"transactionIndex": "0x0"
}
],
"rejected": [
1
]
}
}
```
## About Ommers
Mining rewards and ommer rewards might need to be added. This is how those are applied:
- `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`.
- For each ommer (mined by `0xbb`), with blocknumber `N-delta`
- (where `delta` is the difference between the current block and the ommer)
- The account `0xbb` (ommer miner) is awarded `(8-delta) / 8 * block_reward`
- The account `0xaa` (block miner) is awarded `block_reward / 32`
To make `state_t8n` apply these, the following inputs are required:
- `state.reward`
- For ethash, it is `5000000000000000000` `wei`,
- If this is not defined, mining rewards are not applied,
- A value of `0` is valid, and causes accounts to be 'touched'.
- For each ommer, the tool needs to be given an `address` and a `delta`. This
is done via the `env`.
Note: the tool does not verify that e.g. the normal uncle rules apply;
it allows e.g. two uncles at the same height, or an uncle distance greater than 8. This means that
the tool allows for a negative uncle reward (distance > 8).
Example:
`./testdata/5/env.json`:
```json
{
"currentCoinbase": "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"currentDifficulty": "0x20000",
"currentGasLimit": "0x750a163df65e8a",
"currentNumber": "1",
"currentTimestamp": "1000",
"ommers": [
{"delta": 1, "address": "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" },
{"delta": 2, "address": "0xcccccccccccccccccccccccccccccccccccccccc" }
]
}
```
When applying this, using a reward of `0x80`:
Output:
```json
{
"alloc": {
"0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": {
"balance": "0x88"
},
"0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb": {
"balance": "0x70"
},
"0xcccccccccccccccccccccccccccccccccccccccc": {
"balance": "0x60"
}
}
}
```
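
The balances above follow directly from the reward formula; a short sketch of the arithmetic, assuming the reward passed to the tool was `0x80` (128 wei), which is what the output implies:

```go
// Worked example of the ommer-reward arithmetic shown above, assuming
// block_reward = 0x80 (128 wei) and the two ommers from env.json.
package main

import (
	"fmt"
	"math/big"
)

func main() {
	blockReward := big.NewInt(0x80)
	minerReward := new(big.Int).Set(blockReward)
	perOmmer := new(big.Int).Div(blockReward, big.NewInt(32)) // block_reward / 32

	for _, delta := range []int64{1, 2} {
		// The block miner gets an extra 1/32 of the block reward per included ommer.
		minerReward.Add(minerReward, perOmmer)
		// The ommer miner gets (8 - delta) / 8 of the block reward.
		ommerReward := new(big.Int).Mul(big.NewInt(8-delta), blockReward)
		ommerReward.Div(ommerReward, big.NewInt(8))
		fmt.Printf("ommer delta=%d reward=0x%x\n", delta, ommerReward) // 0x70, 0x60
	}
	fmt.Printf("miner reward=0x%x\n", minerReward) // 0x88
}
```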
### Future EIPs
It is also possible to experiment with future EIPs that are not yet defined in a hard fork.
For example, putting EIP-1344 into Frontier:
```
./evm t8n --state.fork=Frontier+1344 --input.pre=./testdata/1/pre.json --input.txs=./testdata/1/txs.json --input.env=/testdata/1/env.json
```
### Block history
The `BLOCKHASH` opcode requires blockhashes to be provided by the caller, inside the `env`.
If a required blockhash is not provided, the exit code should be `4`:
Example where blockhashes are provided:
```
./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace
```
```
cat trace-0.jsonl | grep BLOCKHASH -C2
```
```
{"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memory":"0x","memSize":0,"stack":[],"returnStack":[],"depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memory":"0x","memSize":0,"stack":["0x1"],"returnStack":[],"depth":1,"refund":0,"opName":"BLOCKHASH","error":""}
{"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memory":"0x","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"returnStack":[],"depth":1,"refund":0,"opName":"STOP","error":""}
{"output":"","gasUsed":"0x17","time":155861}
```
In this example, the caller has not provided the required blockhash:
```
./evm t8n --input.alloc=./testdata/4/alloc.json --input.txs=./testdata/4/txs.json --input.env=./testdata/4/env.json --trace
```
```
ERROR(4): getHash(3) invoked, blockhash for that block not provided
```
Error code: 4
### Chaining
Another thing that can be done is to chain invocations:
```
./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json
INFO [06-29|11:52:04.934] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
INFO [06-29|11:52:04.936] rejected tx index=0 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
INFO [06-29|11:52:04.936] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low"
```
What happened here is that we first applied two identical transactions, so the second one was rejected.
Then, taking the poststate alloc as the input for the next state, we tried again to include
the same two transactions: this time, both failed because the nonce was too low.
In order to meaningfully chain invocations, one would need to provide a meaningful new `env`; otherwise the
actual block number (exposed to the EVM) would not increase.


@@ -34,17 +34,22 @@ var disasmCommand = cli.Command{
}
func disasmCmd(ctx *cli.Context) error {
if len(ctx.Args().First()) == 0 {
return errors.New("filename required")
var in string
switch {
case len(ctx.Args().First()) > 0:
fn := ctx.Args().First()
input, err := ioutil.ReadFile(fn)
if err != nil {
return err
}
in = string(input)
case ctx.GlobalIsSet(InputFlag.Name):
in = ctx.GlobalString(InputFlag.Name)
default:
return errors.New("Missing filename or --input value")
}
fn := ctx.Args().First()
in, err := ioutil.ReadFile(fn)
if err != nil {
return err
}
code := strings.TrimSpace(string(in))
code := strings.TrimSpace(in)
fmt.Printf("%v\n", code)
return asm.PrintDisassembled(code)
}


@@ -0,0 +1,255 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package t8ntool
import (
"fmt"
"math/big"
"os"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"golang.org/x/crypto/sha3"
)
type Prestate struct {
Env stEnv `json:"env"`
Pre core.GenesisAlloc `json:"pre"`
}
// ExecutionResult contains the execution status after running a state test, any
// error that might have occurred and a dump of the final state if requested.
type ExecutionResult struct {
StateRoot common.Hash `json:"stateRoot"`
TxRoot common.Hash `json:"txRoot"`
ReceiptRoot common.Hash `json:"receiptRoot"`
LogsHash common.Hash `json:"logsHash"`
Bloom types.Bloom `json:"logsBloom" gencodec:"required"`
Receipts types.Receipts `json:"receipts"`
Rejected []int `json:"rejected,omitempty"`
}
type ommer struct {
Delta uint64 `json:"delta"`
Address common.Address `json:"address"`
}
//go:generate gencodec -type stEnv -field-override stEnvMarshaling -out gen_stenv.go
type stEnv struct {
Coinbase common.Address `json:"currentCoinbase" gencodec:"required"`
Difficulty *big.Int `json:"currentDifficulty" gencodec:"required"`
GasLimit uint64 `json:"currentGasLimit" gencodec:"required"`
Number uint64 `json:"currentNumber" gencodec:"required"`
Timestamp uint64 `json:"currentTimestamp" gencodec:"required"`
BlockHashes map[math.HexOrDecimal64]common.Hash `json:"blockHashes,omitempty"`
Ommers []ommer `json:"ommers,omitempty"`
}
type stEnvMarshaling struct {
Coinbase common.UnprefixedAddress
Difficulty *math.HexOrDecimal256
GasLimit math.HexOrDecimal64
Number math.HexOrDecimal64
Timestamp math.HexOrDecimal64
}
// Apply applies a set of transactions to a pre-state
func (pre *Prestate) Apply(vmConfig vm.Config, chainConfig *params.ChainConfig,
txs types.Transactions, miningReward int64,
getTracerFn func(txIndex int) (tracer vm.Tracer, err error)) (*state.StateDB, *ExecutionResult, error) {
// Capture errors for BLOCKHASH operation, if we haven't been supplied the
// required blockhashes
var hashError error
getHash := func(num uint64) common.Hash {
if pre.Env.BlockHashes == nil {
hashError = fmt.Errorf("getHash(%d) invoked, no blockhashes provided", num)
return common.Hash{}
}
h, ok := pre.Env.BlockHashes[math.HexOrDecimal64(num)]
if !ok {
hashError = fmt.Errorf("getHash(%d) invoked, blockhash for that block not provided", num)
}
return h
}
var (
statedb = MakePreState(rawdb.NewMemoryDatabase(), pre.Pre)
signer = types.MakeSigner(chainConfig, new(big.Int).SetUint64(pre.Env.Number))
gaspool = new(core.GasPool)
blockHash = common.Hash{0x13, 0x37}
rejectedTxs []int
includedTxs types.Transactions
gasUsed = uint64(0)
receipts = make(types.Receipts, 0)
txIndex = 0
)
gaspool.AddGas(pre.Env.GasLimit)
vmContext := vm.Context{
CanTransfer: core.CanTransfer,
Transfer: core.Transfer,
Coinbase: pre.Env.Coinbase,
BlockNumber: new(big.Int).SetUint64(pre.Env.Number),
Time: new(big.Int).SetUint64(pre.Env.Timestamp),
Difficulty: pre.Env.Difficulty,
GasLimit: pre.Env.GasLimit,
GetHash: getHash,
// GasPrice and Origin needs to be set per transaction
}
// If DAO is supported/enabled, we need to handle it here. In geth 'proper', it's
// done in StateProcessor.Process(block, ...), right before transactions are applied.
if chainConfig.DAOForkSupport &&
chainConfig.DAOForkBlock != nil &&
chainConfig.DAOForkBlock.Cmp(new(big.Int).SetUint64(pre.Env.Number)) == 0 {
misc.ApplyDAOHardFork(statedb)
}
for i, tx := range txs {
msg, err := tx.AsMessage(signer)
if err != nil {
log.Info("rejected tx", "index", i, "hash", tx.Hash(), "error", err)
rejectedTxs = append(rejectedTxs, i)
continue
}
tracer, err := getTracerFn(txIndex)
if err != nil {
return nil, nil, err
}
vmConfig.Tracer = tracer
vmConfig.Debug = (tracer != nil)
statedb.Prepare(tx.Hash(), blockHash, txIndex)
vmContext.GasPrice = msg.GasPrice()
vmContext.Origin = msg.From()
evm := vm.NewEVM(vmContext, statedb, chainConfig, vmConfig)
snapshot := statedb.Snapshot()
// (ret []byte, usedGas uint64, failed bool, err error)
msgResult, err := core.ApplyMessage(evm, msg, gaspool)
if err != nil {
statedb.RevertToSnapshot(snapshot)
log.Info("rejected tx", "index", i, "hash", tx.Hash(), "from", msg.From(), "error", err)
rejectedTxs = append(rejectedTxs, i)
continue
}
includedTxs = append(includedTxs, tx)
if hashError != nil {
return nil, nil, NewError(ErrorMissingBlockhash, hashError)
}
gasUsed += msgResult.UsedGas
// Create a new receipt for the transaction, storing the intermediate root and gas used by the tx
{
var root []byte
if chainConfig.IsByzantium(vmContext.BlockNumber) {
statedb.Finalise(true)
} else {
root = statedb.IntermediateRoot(chainConfig.IsEIP158(vmContext.BlockNumber)).Bytes()
}
receipt := types.NewReceipt(root, msgResult.Failed(), gasUsed)
receipt.TxHash = tx.Hash()
receipt.GasUsed = msgResult.UsedGas
// if the transaction created a contract, store the creation address in the receipt.
if msg.To() == nil {
receipt.ContractAddress = crypto.CreateAddress(evm.Context.Origin, tx.Nonce())
}
// Set the receipt logs and create a bloom for filtering
receipt.Logs = statedb.GetLogs(tx.Hash())
receipt.Bloom = types.CreateBloom(types.Receipts{receipt})
// These three are non-consensus fields
//receipt.BlockHash
//receipt.BlockNumber =
receipt.TransactionIndex = uint(txIndex)
receipts = append(receipts, receipt)
}
txIndex++
}
statedb.IntermediateRoot(chainConfig.IsEIP158(vmContext.BlockNumber))
// Add mining reward?
if miningReward > 0 {
// Add mining reward. The mining reward may be `0`, which only makes a difference in the cases
// where
// - the coinbase suicided, or
// - there are only 'bad' transactions, which aren't executed. In those cases,
// the coinbase gets no txfee, so isn't created, and thus needs to be touched
var (
blockReward = big.NewInt(miningReward)
minerReward = new(big.Int).Set(blockReward)
perOmmer = new(big.Int).Div(blockReward, big.NewInt(32))
)
for _, ommer := range pre.Env.Ommers {
// Add 1/32th for each ommer included
minerReward.Add(minerReward, perOmmer)
// Add (8-delta)/8
reward := big.NewInt(8)
reward.Sub(reward, big.NewInt(0).SetUint64(ommer.Delta))
reward.Mul(reward, blockReward)
reward.Div(reward, big.NewInt(8))
statedb.AddBalance(ommer.Address, reward)
}
statedb.AddBalance(pre.Env.Coinbase, minerReward)
}
// Commit block
root, err := statedb.Commit(chainConfig.IsEIP158(vmContext.BlockNumber))
if err != nil {
fmt.Fprintf(os.Stderr, "Could not commit state: %v", err)
return nil, nil, NewError(ErrorEVM, fmt.Errorf("could not commit state: %v", err))
}
execRs := &ExecutionResult{
StateRoot: root,
TxRoot: types.DeriveSha(includedTxs),
ReceiptRoot: types.DeriveSha(receipts),
Bloom: types.CreateBloom(receipts),
LogsHash: rlpHash(statedb.Logs()),
Receipts: receipts,
Rejected: rejectedTxs,
}
return statedb, execRs, nil
}
func MakePreState(db ethdb.Database, accounts core.GenesisAlloc) *state.StateDB {
sdb := state.NewDatabase(db)
statedb, _ := state.New(common.Hash{}, sdb, nil)
for addr, a := range accounts {
statedb.SetCode(addr, a.Code)
statedb.SetNonce(addr, a.Nonce)
statedb.SetBalance(addr, a.Balance)
for k, v := range a.Storage {
statedb.SetState(addr, k, v)
}
}
// Commit and re-open to start with a clean state.
root, _ := statedb.Commit(false)
statedb, _ = state.New(root, sdb, nil)
return statedb
}
func rlpHash(x interface{}) (h common.Hash) {
hw := sha3.NewLegacyKeccak256()
rlp.Encode(hw, x)
hw.Sum(h[:0])
return h
}


@@ -0,0 +1,103 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package t8ntool
import (
"fmt"
"strings"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/tests"
"gopkg.in/urfave/cli.v1"
)
var (
TraceFlag = cli.BoolFlag{
Name: "trace",
Usage: "Output full trace logs to files <txhash>.jsonl",
}
TraceDisableMemoryFlag = cli.BoolFlag{
Name: "trace.nomemory",
Usage: "Disable full memory dump in traces",
}
TraceDisableStackFlag = cli.BoolFlag{
Name: "trace.nostack",
Usage: "Disable stack output in traces",
}
TraceDisableReturnDataFlag = cli.BoolFlag{
Name: "trace.noreturndata",
Usage: "Disable return data output in traces",
}
OutputAllocFlag = cli.StringFlag{
Name: "output.alloc",
Usage: "Determines where to put the `alloc` of the post-state.\n" +
"\t`stdout` - into the stdout output\n" +
"\t`stderr` - into the stderr output\n" +
"\t<file> - into the file <file> ",
Value: "alloc.json",
}
OutputResultFlag = cli.StringFlag{
Name: "output.result",
Usage: "Determines where to put the `result` (stateroot, txroot etc) of the post-state.\n" +
"\t`stdout` - into the stdout output\n" +
"\t`stderr` - into the stderr output\n" +
"\t<file> - into the file <file> ",
Value: "result.json",
}
InputAllocFlag = cli.StringFlag{
Name: "input.alloc",
Usage: "`stdin` or file name of where to find the prestate alloc to use.",
Value: "alloc.json",
}
InputEnvFlag = cli.StringFlag{
Name: "input.env",
Usage: "`stdin` or file name of where to find the prestate env to use.",
Value: "env.json",
}
InputTxsFlag = cli.StringFlag{
Name: "input.txs",
Usage: "`stdin` or file name of where to find the transactions to apply.",
Value: "txs.json",
}
RewardFlag = cli.Int64Flag{
Name: "state.reward",
Usage: "Mining reward. Set to -1 to disable",
Value: 0,
}
ChainIDFlag = cli.Int64Flag{
Name: "state.chainid",
Usage: "ChainID to use",
Value: 1,
}
ForknameFlag = cli.StringFlag{
Name: "state.fork",
Usage: fmt.Sprintf("Name of ruleset to use."+
"\n\tAvailable forknames:"+
"\n\t %v"+
"\n\tAvailable extra eips:"+
"\n\t %v"+
"\n\tSyntax <forkname>(+ExtraEip)",
strings.Join(tests.AvailableForks(), "\n\t "),
strings.Join(vm.ActivateableEips(), ", ")),
Value: "Istanbul",
}
VerbosityFlag = cli.IntFlag{
Name: "verbosity",
Usage: "sets the verbosity level",
Value: 3,
}
)


@@ -0,0 +1,80 @@
// Code generated by github.com/fjl/gencodec. DO NOT EDIT.
package t8ntool
import (
"encoding/json"
"errors"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
)
var _ = (*stEnvMarshaling)(nil)
// MarshalJSON marshals as JSON.
func (s stEnv) MarshalJSON() ([]byte, error) {
type stEnv struct {
Coinbase common.UnprefixedAddress `json:"currentCoinbase" gencodec:"required"`
Difficulty *math.HexOrDecimal256 `json:"currentDifficulty" gencodec:"required"`
GasLimit math.HexOrDecimal64 `json:"currentGasLimit" gencodec:"required"`
Number math.HexOrDecimal64 `json:"currentNumber" gencodec:"required"`
Timestamp math.HexOrDecimal64 `json:"currentTimestamp" gencodec:"required"`
BlockHashes map[math.HexOrDecimal64]common.Hash `json:"blockHashes,omitempty"`
Ommers []ommer `json:"ommers,omitempty"`
}
var enc stEnv
enc.Coinbase = common.UnprefixedAddress(s.Coinbase)
enc.Difficulty = (*math.HexOrDecimal256)(s.Difficulty)
enc.GasLimit = math.HexOrDecimal64(s.GasLimit)
enc.Number = math.HexOrDecimal64(s.Number)
enc.Timestamp = math.HexOrDecimal64(s.Timestamp)
enc.BlockHashes = s.BlockHashes
enc.Ommers = s.Ommers
return json.Marshal(&enc)
}
// UnmarshalJSON unmarshals from JSON.
func (s *stEnv) UnmarshalJSON(input []byte) error {
type stEnv struct {
Coinbase *common.UnprefixedAddress `json:"currentCoinbase" gencodec:"required"`
Difficulty *math.HexOrDecimal256 `json:"currentDifficulty" gencodec:"required"`
GasLimit *math.HexOrDecimal64 `json:"currentGasLimit" gencodec:"required"`
Number *math.HexOrDecimal64 `json:"currentNumber" gencodec:"required"`
Timestamp *math.HexOrDecimal64 `json:"currentTimestamp" gencodec:"required"`
BlockHashes map[math.HexOrDecimal64]common.Hash `json:"blockHashes,omitempty"`
Ommers []ommer `json:"ommers,omitempty"`
}
var dec stEnv
if err := json.Unmarshal(input, &dec); err != nil {
return err
}
if dec.Coinbase == nil {
return errors.New("missing required field 'currentCoinbase' for stEnv")
}
s.Coinbase = common.Address(*dec.Coinbase)
if dec.Difficulty == nil {
return errors.New("missing required field 'currentDifficulty' for stEnv")
}
s.Difficulty = (*big.Int)(dec.Difficulty)
if dec.GasLimit == nil {
return errors.New("missing required field 'currentGasLimit' for stEnv")
}
s.GasLimit = uint64(*dec.GasLimit)
if dec.Number == nil {
return errors.New("missing required field 'currentNumber' for stEnv")
}
s.Number = uint64(*dec.Number)
if dec.Timestamp == nil {
return errors.New("missing required field 'currentTimestamp' for stEnv")
}
s.Timestamp = uint64(*dec.Timestamp)
if dec.BlockHashes != nil {
s.BlockHashes = dec.BlockHashes
}
if dec.Ommers != nil {
s.Ommers = dec.Ommers
}
return nil
}


@@ -0,0 +1,277 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package t8ntool
import (
"encoding/json"
"fmt"
"io/ioutil"
"math/big"
"os"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/tests"
"gopkg.in/urfave/cli.v1"
)
const (
ErrorEVM = 2
ErrorVMConfig = 3
ErrorMissingBlockhash = 4
ErrorJson = 10
ErrorIO = 11
stdinSelector = "stdin"
)
type NumberedError struct {
errorCode int
err error
}
func NewError(errorCode int, err error) *NumberedError {
return &NumberedError{errorCode, err}
}
func (n *NumberedError) Error() string {
return fmt.Sprintf("ERROR(%d): %v", n.errorCode, n.err.Error())
}
func (n *NumberedError) Code() int {
return n.errorCode
}
type input struct {
Alloc core.GenesisAlloc `json:"alloc,omitempty"`
Env *stEnv `json:"env,omitempty"`
Txs types.Transactions `json:"txs,omitempty"`
}
func Main(ctx *cli.Context) error {
// Configure the go-ethereum logger
glogger := log.NewGlogHandler(log.StreamHandler(os.Stderr, log.TerminalFormat(false)))
glogger.Verbosity(log.Lvl(ctx.Int(VerbosityFlag.Name)))
log.Root().SetHandler(glogger)
var (
err error
tracer vm.Tracer
)
var getTracer func(txIndex int) (vm.Tracer, error)
if ctx.Bool(TraceFlag.Name) {
// Configure the EVM logger
logConfig := &vm.LogConfig{
DisableStack: ctx.Bool(TraceDisableStackFlag.Name),
DisableMemory: ctx.Bool(TraceDisableMemoryFlag.Name),
DisableReturnData: ctx.Bool(TraceDisableReturnDataFlag.Name),
Debug: true,
}
var prevFile *os.File
// This one closes the last file
defer func() {
if prevFile != nil {
prevFile.Close()
}
}()
getTracer = func(txIndex int) (vm.Tracer, error) {
if prevFile != nil {
prevFile.Close()
}
traceFile, err := os.Create(fmt.Sprintf("trace-%d.jsonl", txIndex))
if err != nil {
return nil, NewError(ErrorIO, fmt.Errorf("failed creating trace-file: %v", err))
}
prevFile = traceFile
return vm.NewJSONLogger(logConfig, traceFile), nil
}
} else {
getTracer = func(txIndex int) (tracer vm.Tracer, err error) {
return nil, nil
}
}
// We need to load three things: alloc, env and transactions. May be either in
// stdin input or in files.
// Check if anything needs to be read from stdin
var (
prestate Prestate
txs types.Transactions // txs to apply
allocStr = ctx.String(InputAllocFlag.Name)
envStr = ctx.String(InputEnvFlag.Name)
txStr = ctx.String(InputTxsFlag.Name)
inputData = &input{}
)
if allocStr == stdinSelector || envStr == stdinSelector || txStr == stdinSelector {
decoder := json.NewDecoder(os.Stdin)
decoder.Decode(inputData)
}
if allocStr != stdinSelector {
inFile, err := os.Open(allocStr)
if err != nil {
return NewError(ErrorIO, fmt.Errorf("failed reading alloc file: %v", err))
}
defer inFile.Close()
decoder := json.NewDecoder(inFile)
if err := decoder.Decode(&inputData.Alloc); err != nil {
return NewError(ErrorJson, fmt.Errorf("Failed unmarshaling alloc-file: %v", err))
}
}
if envStr != stdinSelector {
inFile, err := os.Open(envStr)
if err != nil {
return NewError(ErrorIO, fmt.Errorf("failed reading env file: %v", err))
}
defer inFile.Close()
decoder := json.NewDecoder(inFile)
var env stEnv
if err := decoder.Decode(&env); err != nil {
return NewError(ErrorJson, fmt.Errorf("Failed unmarshaling env-file: %v", err))
}
inputData.Env = &env
}
if txStr != stdinSelector {
inFile, err := os.Open(txStr)
if err != nil {
return NewError(ErrorIO, fmt.Errorf("failed reading txs file: %v", err))
}
defer inFile.Close()
decoder := json.NewDecoder(inFile)
var txs types.Transactions
if err := decoder.Decode(&txs); err != nil {
return NewError(ErrorJson, fmt.Errorf("Failed unmarshaling txs-file: %v", err))
}
inputData.Txs = txs
}
prestate.Pre = inputData.Alloc
prestate.Env = *inputData.Env
txs = inputData.Txs
// Iterate over all the tests, run them and aggregate the results
vmConfig := vm.Config{
Tracer: tracer,
Debug: (tracer != nil),
}
// Construct the chainconfig
var chainConfig *params.ChainConfig
if cConf, extraEips, err := tests.GetChainConfig(ctx.String(ForknameFlag.Name)); err != nil {
return NewError(ErrorVMConfig, fmt.Errorf("Failed constructing chain configuration: %v", err))
} else {
chainConfig = cConf
vmConfig.ExtraEips = extraEips
}
// Set the chain id
chainConfig.ChainID = big.NewInt(ctx.Int64(ChainIDFlag.Name))
// Run the test and aggregate the result
state, result, err := prestate.Apply(vmConfig, chainConfig, txs, ctx.Int64(RewardFlag.Name), getTracer)
if err != nil {
return err
}
// Dump the execution result
//postAlloc := state.DumpGenesisFormat(false, false, false)
collector := make(Alloc)
state.DumpToCollector(collector, false, false, false, nil, -1)
return dispatchOutput(ctx, result, collector)
}
type Alloc map[common.Address]core.GenesisAccount
func (g Alloc) OnRoot(common.Hash) {}
func (g Alloc) OnAccount(addr common.Address, dumpAccount state.DumpAccount) {
balance, _ := new(big.Int).SetString(dumpAccount.Balance, 10)
var storage map[common.Hash]common.Hash
if dumpAccount.Storage != nil {
storage = make(map[common.Hash]common.Hash)
for k, v := range dumpAccount.Storage {
storage[k] = common.HexToHash(v)
}
}
genesisAccount := core.GenesisAccount{
Code: common.FromHex(dumpAccount.Code),
Storage: storage,
Balance: balance,
Nonce: dumpAccount.Nonce,
}
g[addr] = genesisAccount
}
// saveFile marshals the object to the given file
func saveFile(filename string, data interface{}) error {
b, err := json.MarshalIndent(data, "", " ")
if err != nil {
return NewError(ErrorJson, fmt.Errorf("failed marshalling output: %v", err))
}
if err = ioutil.WriteFile(filename, b, 0644); err != nil {
return NewError(ErrorIO, fmt.Errorf("failed writing output: %v", err))
}
return nil
}
// dispatchOutput writes the output data to either stderr or stdout, or to the specified
// files
func dispatchOutput(ctx *cli.Context, result *ExecutionResult, alloc Alloc) error {
stdOutObject := make(map[string]interface{})
stdErrObject := make(map[string]interface{})
dispatch := func(fName, name string, obj interface{}) error {
switch fName {
case "stdout":
stdOutObject[name] = obj
case "stderr":
stdErrObject[name] = obj
default: // save to file
if err := saveFile(fName, obj); err != nil {
return err
}
}
return nil
}
if err := dispatch(ctx.String(OutputAllocFlag.Name), "alloc", alloc); err != nil {
return err
}
if err := dispatch(ctx.String(OutputResultFlag.Name), "result", result); err != nil {
return err
}
if len(stdOutObject) > 0 {
b, err := json.MarshalIndent(stdOutObject, "", " ")
if err != nil {
return NewError(ErrorJson, fmt.Errorf("failed marshalling output: %v", err))
}
os.Stdout.Write(b)
}
if len(stdErrObject) > 0 {
b, err := json.MarshalIndent(stdErrObject, "", " ")
if err != nil {
return NewError(ErrorJson, fmt.Errorf("failed marshalling output: %v", err))
}
os.Stderr.Write(b)
}
return nil
}


@@ -22,7 +22,9 @@ import (
"math/big"
"os"
"github.com/ethereum/go-ethereum/cmd/evm/internal/t8ntool"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/internal/flags"
"gopkg.in/urfave/cli.v1"
)
@@ -30,7 +32,7 @@ var gitCommit = "" // Git SHA1 commit hash of the release (set via linker flags)
var gitDate = ""
var (
app = utils.NewApp(gitCommit, gitDate, "the evm command line interface")
app = flags.NewApp(gitCommit, gitDate, "the evm command line interface")
DebugFlag = cli.BoolFlag{
Name: "debug",
@@ -79,10 +81,18 @@ var (
Name: "input",
Usage: "input for the EVM",
}
InputFileFlag = cli.StringFlag{
Name: "inputfile",
Usage: "file containing input for the EVM",
}
VerbosityFlag = cli.IntFlag{
Name: "verbosity",
Usage: "sets the verbosity level",
}
BenchFlag = cli.BoolFlag{
Name: "bench",
Usage: "benchmark the execution",
}
CreateFlag = cli.BoolFlag{
Name: "create",
Usage: "indicates the action should be create rather than call",
@@ -111,6 +121,14 @@ var (
Name: "nostack",
Usage: "disable stack output",
}
DisableStorageFlag = cli.BoolFlag{
Name: "nostorage",
Usage: "disable storage output",
}
DisableReturnDataFlag = cli.BoolFlag{
Name: "noreturndata",
Usage: "disable return data output",
}
EVMInterpreterFlag = cli.StringFlag{
Name: "vm.evm",
Usage: "External EVM configuration (default = built-in interpreter)",
@@ -118,8 +136,31 @@ var (
}
)
var stateTransitionCommand = cli.Command{
Name: "transition",
Aliases: []string{"t8n"},
Usage: "executes a full state transition",
Action: t8ntool.Main,
Flags: []cli.Flag{
t8ntool.TraceFlag,
t8ntool.TraceDisableMemoryFlag,
t8ntool.TraceDisableStackFlag,
t8ntool.TraceDisableReturnDataFlag,
t8ntool.OutputAllocFlag,
t8ntool.OutputResultFlag,
t8ntool.InputAllocFlag,
t8ntool.InputEnvFlag,
t8ntool.InputTxsFlag,
t8ntool.ForknameFlag,
t8ntool.ChainIDFlag,
t8ntool.RewardFlag,
t8ntool.VerbosityFlag,
},
}
func init() {
app.Flags = []cli.Flag{
BenchFlag,
CreateFlag,
DebugFlag,
VerbosityFlag,
@@ -130,6 +171,7 @@ func init() {
ValueFlag,
DumpFlag,
InputFlag,
InputFileFlag,
MemProfileFlag,
CPUProfileFlag,
StatDumpFlag,
@@ -139,6 +181,8 @@ func init() {
ReceiverFlag,
DisableMemoryFlag,
DisableStackFlag,
DisableStorageFlag,
DisableReturnDataFlag,
EVMInterpreterFlag,
}
app.Commands = []cli.Command{
@@ -146,12 +190,18 @@ func init() {
disasmCommand,
runCommand,
stateTestCommand,
stateTransitionCommand,
}
cli.CommandHelpTemplate = flags.OriginCommandHelpTemplate
}
func main() {
if err := app.Run(os.Args); err != nil {
code := 1
if ec, ok := err.(*t8ntool.NumberedError); ok {
code = ec.Code()
}
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
os.Exit(code)
}
}

cmd/evm/poststate.json

@@ -0,0 +1,23 @@
{
"root": "f4157bb27bcb1d1a63001434a249a80948f2e9fe1f53d551244c1dae826b5b23",
"accounts": {
"0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192": {
"balance": "4276951709",
"nonce": 1,
"root": "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"codeHash": "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "6916764286133345652",
"nonce": 172,
"root": "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"codeHash": "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
},
"0xc94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "42500",
"nonce": 0,
"root": "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"codeHash": "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
}
}
}


@@ -17,6 +17,7 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
@@ -24,6 +25,7 @@ import (
"os"
goruntime "runtime"
"runtime/pprof"
"testing"
"time"
"github.com/ethereum/go-ethereum/cmd/evm/internal/compiler"
@@ -68,14 +70,49 @@ func readGenesis(genesisPath string) *core.Genesis {
return genesis
}
type execStats struct {
time time.Duration // The execution time.
allocs int64 // The number of heap allocations during execution.
bytesAllocated int64 // The cumulative number of bytes allocated during execution.
}
func timedExec(bench bool, execFunc func() ([]byte, uint64, error)) (output []byte, gasLeft uint64, stats execStats, err error) {
if bench {
result := testing.Benchmark(func(b *testing.B) {
for i := 0; i < b.N; i++ {
output, gasLeft, err = execFunc()
}
})
// Get the average execution time from the benchmarking result.
// There are other useful stats here that could be reported.
stats.time = time.Duration(result.NsPerOp())
stats.allocs = result.AllocsPerOp()
stats.bytesAllocated = result.AllocedBytesPerOp()
} else {
var memStatsBefore, memStatsAfter goruntime.MemStats
goruntime.ReadMemStats(&memStatsBefore)
startTime := time.Now()
output, gasLeft, err = execFunc()
stats.time = time.Since(startTime)
goruntime.ReadMemStats(&memStatsAfter)
stats.allocs = int64(memStatsAfter.Mallocs - memStatsBefore.Mallocs)
stats.bytesAllocated = int64(memStatsAfter.TotalAlloc - memStatsBefore.TotalAlloc)
}
return output, gasLeft, stats, err
}
func runCmd(ctx *cli.Context) error {
glogger := log.NewGlogHandler(log.StreamHandler(os.Stderr, log.TerminalFormat(false)))
glogger.Verbosity(log.Lvl(ctx.GlobalInt(VerbosityFlag.Name)))
log.Root().SetHandler(glogger)
logconfig := &vm.LogConfig{
DisableMemory: ctx.GlobalBool(DisableMemoryFlag.Name),
DisableStack: ctx.GlobalBool(DisableStackFlag.Name),
Debug: ctx.GlobalBool(DebugFlag.Name),
DisableMemory: ctx.GlobalBool(DisableMemoryFlag.Name),
DisableStack: ctx.GlobalBool(DisableStackFlag.Name),
DisableStorage: ctx.GlobalBool(DisableStorageFlag.Name),
DisableReturnData: ctx.GlobalBool(DisableReturnDataFlag.Name),
Debug: ctx.GlobalBool(DebugFlag.Name),
}
var (
@@ -100,10 +137,10 @@ func runCmd(ctx *cli.Context) error {
genesisConfig = gen
db := rawdb.NewMemoryDatabase()
genesis := gen.ToBlock(db)
statedb, _ = state.New(genesis.Root(), state.NewDatabase(db))
statedb, _ = state.New(genesis.Root(), state.NewDatabase(db), nil)
chainConfig = gen.Config
} else {
statedb, _ = state.New(common.Hash{}, state.NewDatabase(rawdb.NewMemoryDatabase()))
statedb, _ = state.New(common.Hash{}, state.NewDatabase(rawdb.NewMemoryDatabase()), nil)
genesisConfig = new(core.Genesis)
}
if ctx.GlobalString(SenderFlag.Name) != "" {
@@ -115,11 +152,7 @@ func runCmd(ctx *cli.Context) error {
receiver = common.HexToAddress(ctx.GlobalString(ReceiverFlag.Name))
}
var (
code []byte
ret []byte
err error
)
var code []byte
codeFileFlag := ctx.GlobalString(CodeFileFlag.Name)
codeFlag := ctx.GlobalString(CodeFlag.Name)
@@ -145,6 +178,7 @@ func runCmd(ctx *cli.Context) error {
} else {
hexcode = []byte(codeFlag)
}
hexcode = bytes.TrimSpace(hexcode)
if len(hexcode)%2 != 0 {
fmt.Printf("Invalid input length for hex data (%d)\n", len(hexcode))
os.Exit(1)
@@ -201,18 +235,37 @@ func runCmd(ctx *cli.Context) error {
} else {
runtimeConfig.ChainConfig = params.AllEthashProtocolChanges
}
tstart := time.Now()
var leftOverGas uint64
var hexInput []byte
if inputFileFlag := ctx.GlobalString(InputFileFlag.Name); inputFileFlag != "" {
var err error
if hexInput, err = ioutil.ReadFile(inputFileFlag); err != nil {
fmt.Printf("could not load input from file: %v\n", err)
os.Exit(1)
}
} else {
hexInput = []byte(ctx.GlobalString(InputFlag.Name))
}
input := common.FromHex(string(bytes.TrimSpace(hexInput)))
var execFunc func() ([]byte, uint64, error)
if ctx.GlobalBool(CreateFlag.Name) {
input := append(code, common.Hex2Bytes(ctx.GlobalString(InputFlag.Name))...)
ret, _, leftOverGas, err = runtime.Create(input, &runtimeConfig)
input = append(code, input...)
execFunc = func() ([]byte, uint64, error) {
output, _, gasLeft, err := runtime.Create(input, &runtimeConfig)
return output, gasLeft, err
}
} else {
if len(code) > 0 {
statedb.SetCode(receiver, code)
}
ret, leftOverGas, err = runtime.Call(receiver, common.Hex2Bytes(ctx.GlobalString(InputFlag.Name)), &runtimeConfig)
execFunc = func() ([]byte, uint64, error) {
return runtime.Call(receiver, input, &runtimeConfig)
}
}
execTime := time.Since(tstart)
bench := ctx.GlobalBool(BenchFlag.Name)
output, leftOverGas, stats, err := timedExec(bench, execFunc)
if ctx.GlobalBool(DumpFlag.Name) {
statedb.Commit(true)
@@ -242,20 +295,15 @@ func runCmd(ctx *cli.Context) error {
vm.WriteLogs(os.Stderr, statedb.Logs())
}
if ctx.GlobalBool(StatDumpFlag.Name) {
var mem goruntime.MemStats
goruntime.ReadMemStats(&mem)
fmt.Fprintf(os.Stderr, `evm execution time: %v
heap objects: %d
allocations: %d
total allocations: %d
GC calls: %d
Gas used: %d
`, execTime, mem.HeapObjects, mem.Alloc, mem.TotalAlloc, mem.NumGC, initialGas-leftOverGas)
if bench || ctx.GlobalBool(StatDumpFlag.Name) {
fmt.Fprintf(os.Stderr, `EVM gas used: %d
execution time: %v
allocations: %d
allocated bytes: %d
`, initialGas-leftOverGas, stats.time, stats.allocs, stats.bytesAllocated)
}
if tracer == nil {
fmt.Printf("0x%x\n", ret)
fmt.Printf("0x%x\n", output)
if err != nil {
fmt.Printf(" error: %v\n", err)
}


@@ -59,8 +59,10 @@ func stateTestCmd(ctx *cli.Context) error {
// Configure the EVM logger
config := &vm.LogConfig{
DisableMemory: ctx.GlobalBool(DisableMemoryFlag.Name),
DisableStack: ctx.GlobalBool(DisableStackFlag.Name),
DisableMemory: ctx.GlobalBool(DisableMemoryFlag.Name),
DisableStack: ctx.GlobalBool(DisableStackFlag.Name),
DisableStorage: ctx.GlobalBool(DisableStorageFlag.Name),
DisableReturnData: ctx.GlobalBool(DisableReturnDataFlag.Name),
}
var (
tracer vm.Tracer
@@ -96,7 +98,7 @@ func stateTestCmd(ctx *cli.Context) error {
for _, st := range test.Subtests() {
// Run the test and aggregate the result
result := &StatetestResult{Name: key, Fork: st.Fork, Pass: true}
state, err := test.Run(st, cfg)
_, state, err := test.Run(st, cfg, false)
// print state root for evmlab tracing
if ctx.GlobalBool(MachineFlag.Name) && state != nil {
fmt.Fprintf(os.Stderr, "{\"stateRoot\": \"%x\"}\n", state.IntermediateRoot(false))

cmd/evm/testdata/1/alloc.json

@@ -0,0 +1,12 @@
{
"a94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0x5ffd4878be161d74",
"code": "0x",
"nonce": "0xac",
"storage": {}
},
"0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192":{
"balance": "0xfeedbead",
"nonce" : "0x00"
}
}

Some files were not shown because too many files have changed in this diff.