Compare commits


1392 Commits

Author SHA1 Message Date
Péter Szilágyi
316fc7ecfc params, swarm: release Geth v1.8.14 and Swarm v0.3.2 2018-08-22 11:39:00 +03:00
gary rong
af85d8e2b3 cmd, eth: apply default miner recommit setting (#17479) 2018-08-22 09:47:50 +03:00
Péter Szilágyi
6a8b47c880 Merge pull request #17472 from karalabe/txpool-locals
cmd, core, miner: add --txpool.locals and priority mining
2018-08-22 09:46:13 +03:00
Péter Szilágyi
e0d0e64ce2 cmd, core, miner: add --txpool.locals and priority mining 2018-08-22 09:43:57 +03:00
gary rong
b2c644ffb5 cmd, eth, miner: make recommit configurable (#17444)
* cmd, eth, miner: make recommit configurable

* cmd, eth, les, miner: polish a bit

* miner: filter duplicate sealing work

* cmd: remove unnecessary conversion

* miner: avoid microptimization in favor of cleaner code
2018-08-21 22:56:54 +03:00
Geon Kim
522cfc68ff swarm: fix typos (#17473) 2018-08-21 21:13:33 +03:00
gary rong
a063fe9b2d miner: fix uncle iteration logic (#17469) 2018-08-21 17:33:04 +03:00
Péter Szilágyi
1dcad8b2de Merge pull request #17466 from karalabe/rinkeby-light-snapshots
consensus/clique, light: light client snapshots on Rinkeby
2018-08-21 16:13:26 +03:00
Jeremy Schlatter
86acdf1a5b vendor: update rjeczalik/notify so that it compiles on go1.11 (#17467) 2018-08-21 16:13:03 +03:00
Péter Szilágyi
9f036647e4 consensus/clique, light: light client snapshots on Rinkeby 2018-08-21 15:21:59 +03:00
Felföldi Zsolt
355fc47d39 les: fix CHT field in nodeInfo (#17465) 2018-08-21 14:58:10 +03:00
Pierre Neter
76301ca051 eth: upgradedb subcommand was dropped (#17464) 2018-08-21 13:36:38 +03:00
Anton Evangelatov
c582667c9b swarm/network: bump bzz protocol version (#17449) 2018-08-21 11:34:40 +03:00
Anton Evangelatov
dcd97c41fa swarm, swarm/network, swarm/pss: log error and fix logs (#17410)
* swarm, swarm/network, swarm/pss: log error and fix logs

* swarm/pss: log compressed publickey
2018-08-21 11:34:10 +03:00
Péter Szilágyi
85c6a1c526 Merge pull request #17451 from karalabe/bn256-relicense
crypto/bn256: add missing license file, release wrapper in BSD-3
2018-08-21 11:33:33 +03:00
Péter Szilágyi
ecca49e078 Merge pull request #17460 from holiman/tracerfix
Ensure from < to when tracing chain
2018-08-21 11:32:50 +03:00
Martin Holst Swende
106d196ec4 eth: ensure from<to when tracing chain (credits Chen Nan via bugbounty) 2018-08-21 09:48:53 +02:00
Péter Szilágyi
a6d45a5d00 crypto/bn256: add missing license file, release wrapper in BSD-3 2018-08-20 18:05:06 +03:00
Nilesh Trivedi
7d38d53ae4 cmd/puppeth: accept ssh identity in the server string (#17407)
* cmd/puppeth: Accept identityfile in the server string with fallback to id_rsa

* cmd/puppeth: code polishes + fix health check double ports
2018-08-20 16:54:38 +03:00
Felföldi Zsolt
1de9ada401 light: new CHTs (#17448) 2018-08-20 16:49:28 +03:00
Aditya
c929030e28 core/types: fix docs about protected Vs (#17436) 2018-08-20 15:14:50 +03:00
Péter Szilágyi
87f294aa0b Merge pull request #17437 from hackmod/console-typo
console: fixed comment typo
2018-08-20 15:12:48 +03:00
Péter Szilágyi
68f0a414ea Merge pull request #17430 from karalabe/miner-notify-less-aggressive-test
consensus/ethash: reduce notify test aggressiveness
2018-08-20 15:11:49 +03:00
Anton Evangelatov
55d050ccd8 travis: remove brew update and osxfuse install (#17429) 2018-08-20 15:11:08 +03:00
Anton Evangelatov
a8aa89accb swarm/storage: cleanup task - remove bigger chunks (#17424) 2018-08-20 15:10:30 +03:00
Elad
c4078fc805 cmd/swarm: added swarm bootnodes (#17414) 2018-08-20 15:09:50 +03:00
Wuxiang
d3488c1aff p2p: fix typo (#17446) 2018-08-20 15:07:21 +03:00
hackyminer
0fd02fe9cf console: fixed comment typo 2018-08-18 07:47:10 +09:00
Péter Szilágyi
251c868008 consensus/ethash: reduce notify test aggressiveness 2018-08-17 18:12:39 +03:00
Péter Szilágyi
99e1a5e0fb Merge pull request #17426 from karalabe/miner-fees-log-fix
miner: update mining log with correct fee calculation
2018-08-17 16:46:57 +03:00
Péter Szilágyi
22cd3f70a6 miner: update mining log with correct fee calculation 2018-08-17 15:47:14 +03:00
Felix Lange
2695fa2213 les: fix crasher in NodeInfo when running as server (#17419)
* les: fix crasher in NodeInfo when running as server

The ProtocolManager computes CHT and Bloom trie roots by asking the
indexers for their current head. It tried to get the indexers from
LesOdr, but no LesOdr instance is created in server mode.

Attempt to fix this by moving the indexers, protocol creation and
NodeInfo to a new lesCommons struct which is embedded into both server
and client.

All this setup code should really be cleaned up, but this is just a
hotfix so we have to do that some other time.

* les: fix commons protocol maker
2018-08-17 13:21:53 +03:00
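
The fix above works by hoisting the shared indexer plumbing into a struct embedded by both the server and the client. A minimal sketch of that embedding pattern, using stand-in types rather than the real les/core ones:

    // Minimal sketch of the lesCommons embedding described above. ChainIndexer
    // here is a stand-in type, not the real core.ChainIndexer.
    package les

    type ChainIndexer struct{ sectionHead string }

    func (i *ChainIndexer) Head() string { return i.sectionHead }

    // lesCommons holds everything needed to build NodeInfo, shared by both modes.
    type lesCommons struct {
        chtIndexer, bloomTrieIndexer *ChainIndexer
    }

    func (c *lesCommons) nodeInfo() map[string]string {
        return map[string]string{
            "cht":       c.chtIndexer.Head(),
            "bloomTrie": c.bloomTrieIndexer.Head(),
        }
    }

    // Server and client both embed lesCommons, so nodeInfo works in server mode
    // even though no LesOdr instance exists there.
    type LesServer struct{ lesCommons }
    type LightEthereum struct{ lesCommons }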
Anton Evangelatov
f44046a1c6 build: do not require ethereum-swarm deb when installing ethereum (#17425) 2018-08-17 11:33:39 +03:00
Péter Szilágyi
2a06791461 Merge pull request #17368 from karalabe/bn256-go1.11
crypto/bn256: fix issues caused by Go 1.11
2018-08-17 11:01:35 +03:00
Sasuke1964
60390878a5 accounts: fixed typo (#17421) 2018-08-17 10:38:02 +03:00
Péter Szilágyi
9bf6bb8f63 Merge pull request #17405 from karalabe/miner-remote-dag
consensus/ethash: use DAGs for remote mining, generate async
2018-08-16 15:25:25 +03:00
Péter Szilágyi
1d439b5e10 Merge pull request #17416 from karalabe/miner-details
miner: add gas and fee details to mining logs
2018-08-16 15:04:50 +03:00
Péter Szilágyi
62f5137a72 miner: add gas and fee details to mining logs 2018-08-16 14:49:00 +03:00
gary rong
54216811a0 miner: regenerate mining work every 3 seconds (#17413)
* miner: regenerate mining work every 3 seconds

* miner: polish
2018-08-16 14:14:33 +03:00
Péter Szilágyi
5952d962dc Merge pull request #17412 from karalabe/puppeth-fix-dial-panic
cmd/puppeth: fix nil panic on disconnected stats gathering
2018-08-16 13:19:01 +03:00
Péter Szilágyi
3e21adc648 crypto/bn256: fix issues caused by Go 1.11 2018-08-16 11:02:16 +03:00
Péter Szilágyi
b24fb76a3a cmd/puppeth: fix nil panic on disconnected stats gathering 2018-08-16 09:41:16 +03:00
Felföldi Zsolt
2cdf6ee7e0 light: CHT and bloom trie indexers working in light mode (#16534)
This PR enables the indexers to work in light client mode by
downloading a part of these tries (the Merkle proofs of the last
values of the last known section) in order to be able to add new
values and recalculate subsequent hashes. It also adds CHT data to
NodeInfo.
2018-08-15 22:25:46 +02:00
Elad
e8752f4e9f cmd/swarm, swarm: added access control functionality (#17404)
Co-authored-by: Janos Guljas <janos@resenje.org>
Co-authored-by: Anton Evangelatov <anton.evangelatov@gmail.com>
Co-authored-by: Balint Gabor <balint.g@gmail.com>
2018-08-15 17:41:52 +02:00
Péter Szilágyi
d8541a9f99 consensus/ethash: use DAGs for remote mining, generate async 2018-08-15 14:38:39 +03:00
gary rong
040aa2bb10 miner: streaming uncle blocks (#17320)
* miner: stream uncle block

* miner: polish
2018-08-15 14:09:17 +03:00
Péter Szilágyi
e598ae5c01 Merge pull request #17402 from karalabe/deprecate-flags
cmd: polish miner flags, deprecate olds, add upgrade path
2018-08-15 12:09:34 +03:00
Péter Szilágyi
2a17fe2561 cmd: polish miner flags, deprecate olds, add upgrade path 2018-08-15 11:41:23 +03:00
Jeff Prestes
212bba47ff backends: configurable gas limit to allow testing large contracts (#17358)
* backends: increase gaslimit in order to allow tests of large contracts

* backends: increase gaslimit in order to allow tests of large contracts

* backends: increase gaslimit in order to allow tests of large contracts
2018-08-15 10:15:42 +03:00
Felföldi Zsolt
b52bb31b76 p2p/discv5: add delay to refresh cycle when no seed nodes are found (#16994) 2018-08-14 22:59:18 +02:00
Felföldi Zsolt
b2ddb1fcbf les: implement client connection logic (#16899)
This PR implements les.freeClientPool. It also adds a simulated clock
in common/mclock, which enables time-sensitive tests to run quickly
and still produce accurate results, and package common/prque which is
a generalised variant of prque that enables removing elements other
than the top one from the queue.

les.freeClientPool implements a client database that limits the
connection time of each client and manages accepting/rejecting
incoming connections and even kicking out some connected clients. The
pool calculates recent usage time for each known client (a value that
increases linearly when the client is connected and decreases
exponentially when not connected). Clients with lower recent usage are
preferred, unknown nodes have the highest priority. Already connected
nodes receive a small bias in their favor in order to avoid accepting
and instantly kicking out clients.

Note: the pool can use any string for client identification. Using
signature keys for that purpose would not make sense when being known
has a negative value for the client. Currently the LES protocol
manager uses IP addresses (without port address) to identify clients.
2018-08-14 22:44:46 +02:00
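
The recent-usage metric described above (growing linearly while connected, decaying exponentially while disconnected) can be sketched as follows; the names and the one-hour decay constant are illustrative, not the actual les.freeClientPool values:

    // Hedged sketch of the "recent usage" bookkeeping described above; the real
    // les.freeClientPool differs, and the names here are illustrative only.
    package main

    import (
        "fmt"
        "math"
        "time"
    )

    const decayConst = time.Hour // assumed decay time constant

    type clientUsage struct {
        usage     float64   // accumulated usage in seconds
        lastSeen  time.Time // last time usage was updated
        connected bool
    }

    // value returns the recent-usage metric at time now: it grows linearly while
    // connected and decays exponentially while disconnected. Lower is preferred.
    func (c *clientUsage) value(now time.Time) float64 {
        dt := now.Sub(c.lastSeen)
        if c.connected {
            return c.usage + dt.Seconds()
        }
        return c.usage * math.Exp(-float64(dt)/float64(decayConst))
    }

    func main() {
        c := &clientUsage{usage: 600, lastSeen: time.Now().Add(-2 * time.Hour)}
        fmt.Printf("decayed usage: %.1f\n", c.value(time.Now()))
    }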
gary rong
a1783d1697 miner: move agent logic to worker (#17351)
* miner: move agent logic to worker

* miner: polish

* core: persist block before reorg
2018-08-14 18:34:33 +03:00
gary rong
e0e0e53401 crypto: change formula for create2 (#17393) 2018-08-14 18:30:42 +03:00
Anton Evangelatov
97887d98da swarm/network, swarm/storage: validate chunk size (#17397)
* swarm/network, swarm/storage: validate default chunk size

* swarm/bmt, swarm/network, swarm/storage: update BMT hash initialisation

* swarm/bmt: move segmentCount to tests

* swarm/chunk: change chunk.DefaultSize to be untyped const

* swarm/storage: add size validator

* swarm/storage: add chunk size validation to localstore

* swarm/storage: move validation from localstore to validator

* swarm/storage: global chunk rules in MRU
2018-08-14 16:03:56 +02:00
Yao Zengzeng
8a040de60b README.md: fix some typos (#17381)
Signed-off-by: YaoZengzeng <yaozengzeng@zju.edu.cn>
2018-08-14 14:25:36 +03:00
Eugene Valeyev
e07e507d1a whisper: fixed broken partial topic filtering
Changes in #15811 broke partial topic filtering. Re-enable it.
2018-08-13 16:27:25 +02:00
Péter Szilágyi
d8328a96b4 Merge pull request #17347 from karalabe/miner-notify
cmd, consensus/ethash, eth: miner push notifications
2018-08-13 17:03:16 +03:00
Mymskmkt
fb368723ac core: fix comment typo (#17376) 2018-08-13 11:40:52 +03:00
Janoš Guljaš
6d1e292eef Manifest cli fix and upload defaultpath only once (#17375)
* cmd/swarm: fix manifest subcommands and add tests

* cmd/swarm: manifest update: update default entry for non-encrypted uploads

* swarm/api: upload defaultpath file only once

* swarm/api/client: improve UploadDirectory default path handling

* cmd/swarm: support absolute and relative default path values

* cmd/swarm: fix a typo in test

* cmd/swarm: check encrypted uploads in manifest update tests
2018-08-10 16:12:55 +02:00
Elad
3ec5dda4d2 swarm/api/http: added logging to denote request ended (#17371) 2018-08-10 13:49:37 +02:00
Péter Szilágyi
f0998415ba cmd, consensus/ethash, eth: miner push notifications 2018-08-10 09:06:59 +03:00
Janoš Guljaš
45eaef2431 cmd/swarm: solve rare cases of using the same random port in tests (#17352) 2018-08-09 16:15:59 +02:00
Janoš Guljaš
3bcb501c8f swarm/api: close tar writer in GetDirectoryTar to flush and clean (#17339) 2018-08-09 16:15:07 +02:00
Janoš Guljaš
d3e4c2dcb0 cmd/swarm: disable TestCLISwarmFs fuse test on darwin (#17340) 2018-08-09 16:14:30 +02:00
Anton Evangelatov
7b5c375825 cmd/swarm: remove shadow err (#17360) 2018-08-09 12:37:00 +03:00
Péter Szilágyi
beade042d1 Merge pull request #17357 from karalabe/tracer-trie-deref-bug
eth, trie: fix tracer GC which accidentally pruned the metaroot
2018-08-09 12:35:12 +03:00
Péter Szilágyi
11bbc66082 eth, trie: fix tracer GC which accidentally pruned the metaroot 2018-08-09 12:33:30 +03:00
libotony
834057592f p2p/discv5: fix negative index after uint convert to int (#17274) 2018-08-09 10:03:42 +03:00
Jay
abbb219933 rpc: fix a subscription name (#17345) 2018-08-09 08:56:35 +03:00
Mymskmkt
8051a0768a trie: fix comment typo (#17350) 2018-08-08 16:08:40 +03:00
Giulio M
a1eb9c7d13 swarm/api/http: fixed list leaf links (#17342) 2018-08-08 09:33:06 +02:00
Janoš Guljaš
00e6da9704 swarm/bmt: ignore data longer then 4096 bytes in Hasher.Write (#17338) 2018-08-07 15:34:33 +02:00
Attila Gazso
9df16f3468 swarm: Added lightnode flag (#17291)
* swarm: Added lightnode flag

Added --lightnode command line parameter
Added LightNode to Handshake message

* swarm/config: Fixed variable naming

* cmd/swarm: Changed BoolTFlag to BoolFlag for SwarmLightNodeEnabled

* swarm/network: Changed logging

* swarm/network: Changed protocol version testing

* swarm/network: Renamed DefaultNetworkID variable to TestProtocolNetworkID

* swarm/network: Bumped protocol version

* swarm/network: Changed LightNode handshake test to table driven

* swarm/network: Changed back TestProtocolVersion to 5 for now

* swarm/network: Moved the test configuration inside the test function scope
2018-08-07 15:34:11 +02:00
b00ris
8461fea44b whisper: remove unused error (#17315) 2018-08-07 15:16:56 +02:00
Elad
4bb2dc3d09 swarm/api/http: test fixes (#17334) 2018-08-07 13:40:38 +02:00
Oleg Kovalov
cf05ef9106 p2p, swarm, trie: avoid copying slices in loops (#17265) 2018-08-07 13:56:40 +03:00
Anton Evangelatov
de9b0660ac swarm/README: add more sections to easily onboard developers (#17333) 2018-08-07 12:56:01 +02:00
Andrew Chiw
042191338d swarm/api/http: GET/PUT/PATCH/DELETE/POST multipart form unit tests. (#17277)
httpDo has a verbose option that dumps the HTTP request
2018-08-07 12:00:12 +02:00
Elad
93fe16b0a5 swarm/api/http: refactored http package (#17309) 2018-08-07 11:56:55 +02:00
Javier Peletier
64a4e89504 swarm/storage/mru: HOTFIX - fix panic in Handler.update (#17313) 2018-08-07 11:53:56 +02:00
Felföldi Zsolt
eef65b20fc p2p: use safe atomic operations when changing connFlags (#17325) 2018-08-06 15:46:30 +03:00
Felföldi Zsolt
c4df67461f Merge pull request #16333 from shazow/addremovetrustedpeer
rpc: Add admin_addTrustedPeer and admin_removeTrustedPeer.
2018-08-06 13:30:04 +02:00
gary rong
941018b570 miner: separate state, receipts for different mining work (#17323) 2018-08-06 12:55:44 +03:00
Janoš Guljaš
a72ba5a55b cmd/swarm, swarm: various test fixes (#17299)
* swarm/network/simulation: increase the sleep duration for TestRun

* cmd/swarm, swarm: fix failing tests on mac

* cmd/swarm: update TestCLISwarmFs skip comment

* swarm/network/simulation: adjust disconnections on simulation close

* swarm/network/simulation: call cleanups after net shutdown
2018-08-06 12:33:22 +03:00
stormpang
6711f098d5 core/vm: fix comment typo (#17319)
antything --> anything
:P
2018-08-06 11:42:49 +03:00
Péter Szilágyi
c376a5263f Merge pull request #17318 from ligi/fix_punctuation
Fix punctuation - closes #17317
2018-08-06 11:11:58 +03:00
ligi
2901b8b2d2 README: Fix punctuation - closes #17317 2018-08-05 15:40:22 +02:00
Péter Szilágyi
35fcd2f423 Merge pull request #17311 from karalabe/puppeth-graceful-stop
cmd/puppeth: graceful shutdown on redeploys
2018-08-03 13:26:39 +03:00
Péter Szilágyi
faf0e06ed8 cmd/puppeth: graceful shutdown on redeploys 2018-08-03 12:08:19 +03:00
gary rong
51db5975cc consensus/ethash: move remote agent logic to ethash internal (#15853)
* consensus/ethash: start remote goroutine to handle remote mining

* consensus/ethash: expose remote miner api

* consensus/ethash: expose submitHashrate api

* miner, ethash: push empty block to sealer without waiting execution

* consensus, internal: add getHashrate API for ethash

* consensus: add three method for consensus interface

* miner: expose consensus engine running status to miner

* eth, miner: specify etherbase when miner created

* miner: commit new work when consensus engine is started

* consensus, miner: fix some logics

* all: delete useless interfaces

* consensus: polish a bit
2018-08-03 11:33:37 +03:00
Roc Yu
70176cda0e accounts/keystore: rename skipKeyFile to nonKeyFile to better reveal the function purpose (#17290) 2018-08-03 10:25:17 +03:00
Péter Szilágyi
d56fa8a659 Merge pull request #17310 from karalabe/mobile-nil-panic
mobile: fix missing return for CallMsg.SetTo(nil)
2018-08-03 09:37:57 +03:00
Péter Szilágyi
e4cb158d01 mobile: fix missing return for CallMsg.SetTo(nil) 2018-08-03 08:42:31 +03:00
Hyung-Kyu Hqueue Choi
0ab54de1a5 core/vm: update benchmarks for core/vm (#17308)
- Update benchmarks to use a pool of int pools.
  Otherwise the benchmarks abort with a segmentation fault.

Signed-off-by: Hyung-Kyu Choi <hqueue@users.noreply.github.com>
2018-08-03 08:15:33 +03:00
Péter Szilágyi
adc2944b4c Merge pull request #17301 from karalabe/tests-enable-constantinople
tests: enable the Constantinople fork definition
2018-08-02 14:42:06 +03:00
Péter Szilágyi
16eaf2b158 Merge pull request #17302 from karalabe/revert-evm-nil-panic
Revert "cmd/evm: change error msg output to stderr (#17118)"
2018-08-01 19:43:01 +03:00
Péter Szilágyi
83e2761c3a Revert "cmd/evm: change error msg output to stderr (#17118)"
This reverts commit fb9f7261ec.
2018-08-01 19:09:08 +03:00
Péter Szilágyi
353a82385b tests: enable the Constantinople fork definition 2018-08-01 18:47:09 +03:00
Anton Evangelatov
46d4721519 build: explicitly name all packages to be cross-compiled (#17288) 2018-07-31 16:34:54 +03:00
Péter Szilágyi
454382e81a params, swarm/version: begin Geth v1.8.14, Swarm v0.3.2 cycle 2018-07-31 13:39:22 +03:00
Péter Szilágyi
225171a4bf params, swarm/version: release Geth v1.8.13, Swarm 0.3.1 2018-07-31 13:35:53 +03:00
Ha ĐANG
702b8a7aec core/vm: fix typo in cryptographic hash function name (#17285) 2018-07-31 13:27:51 +03:00
Ryan Schneider
5d7e18539e rpc: make HTTP RPC timeouts configurable, raise defaults (#17240)
* rpc: Make HTTP server timeout values configurable

* rpc: Remove flags for setting HTTP Timeouts, configuring via .toml is sufficient.

* rpc: Replace separate constants with a single default struct.

* rpc: Update HTTP Server Read and Write Timeouts to 30s.

* rpc: Remove redundant NewDefaultHTTPTimeouts function.

* rpc: document HTTPTimeouts.

* rpc: sanitize timeout values for library use
2018-07-31 12:16:14 +03:00
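
The timeouts this change introduces correspond to the standard net/http server knobs. A minimal sketch with the 30s read/write values from the commit message; the idle timeout value and the use of plain net/http (instead of geth's own config struct) are assumptions:

    // Minimal sketch using plain net/http; the 30s read/write values follow the
    // commit message above, the 120s idle value is an assumption.
    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        srv := &http.Server{
            Addr:         ":8545",
            Handler:      http.DefaultServeMux,
            ReadTimeout:  30 * time.Second,
            WriteTimeout: 30 * time.Second,
            IdleTimeout:  120 * time.Second,
        }
        _ = srv.ListenAndServe()
    }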
gary rong
c4a1d4fecf eth/filters: fix the block range assignment for log filter (#17284) 2018-07-31 12:10:38 +03:00
Chen Quan
fb9f7261ec cmd/evm: change error msg output to stderr (#17118)
* cmd/evm: change error msg output to stderr

* cmd/evm: fix some linter error
2018-07-31 10:48:27 +03:00
Péter Szilágyi
d927cbb638 Merge pull request #17282 from karalabe/trie-flushlist-fixes
trie: handle removing the freshest node too
2018-07-31 10:41:53 +03:00
ledgerwatch
2fbc454355 miner: fix state locking while writing to chain (issue #16933) (#17173) 2018-07-31 10:22:33 +03:00
holisticode
d6efa69187 Merge netsim mig to master (#17241)
* swarm: merged stream-tests migration to develop

* swarm/network: expose simulation RandomUpNode to use in stream tests

* swarm/network: wait for subs in PeerEvents and fix stream.runSyncTest

* swarm: enforce waitkademlia for snapshot tests

* swarm: fixed syncer tests and snapshot_sync_test

* swarm: linting of simulation package

* swarm: address review comments

* swarm/network/stream: fix delivery_test bugs and refactor

* swarm/network/stream: addressed PR comments @janos

* swarm/network/stream: enforce waitKademlia, improve TestIntervals

* swarm/network/stream: TestIntervals not waiting for chunk to be stored
2018-07-30 22:55:25 +02:00
Péter Szilágyi
3ea8ac6a9a Merge pull request #17281 from karalabe/puppeth-cachewarn-fix
cmd/puppeth: force tiny memory for geth attach in id lookup
2018-07-30 16:32:10 +03:00
Péter Szilágyi
8a9c31a307 trie: handle removing the freshest node too 2018-07-30 16:31:17 +03:00
Péter Szilágyi
6380c06c65 Merge pull request #17279 from karalabe/puppeth-banlist-fix
cmd/puppeth: split banned ethstats addresses over columns
2018-07-30 16:15:35 +03:00
Péter Szilágyi
54d1111965 cmd/puppeth: force tiny memory for geth attach in id lookup 2018-07-30 16:09:19 +03:00
Péter Szilágyi
f00d0daf33 cmd/puppeth: split banned ethstats addresses over columns 2018-07-30 15:39:35 +03:00
Ha ĐANG
2cffd4ff3c core: fix some small typos on comment code (#17278) 2018-07-30 14:10:48 +03:00
Oleg Kovalov
7b1aa64220 dashboard: append to proper slice (#17266) 2018-07-30 12:48:16 +03:00
Janoš Guljaš
8f4c4fea20 p2p: fix rare deadlock in Stop (#17260) 2018-07-30 12:44:17 +03:00
Oleg Kovalov
d42ce0f2c1 all: simplify switches (#17267)
* all: simplify switches

* silly mistake
2018-07-30 12:30:09 +03:00
Anton Evangelatov
273c7a9dc4 swarm/api: remove ioutil.ReadAll for HandleGetFiles (#17276) 2018-07-30 12:19:26 +03:00
Anton Evangelatov
a5d5609e38 build: rename swarm deb package to ethereum-swarm; change swarm deb version from 1.8.x to 0.3.x (#16988)
* build: add support for different package and binary names

* build: bump up copyright date

* build: change default PackageName to empty string

* build, internal, swarm: enhance build/release process

* build: hack ethereum-swarm as a "depends" in deb package

* build/ci: remove redundant variables

* build, cmd, mobile, params, swarm: remove VERSION file; rename Version to VersionMeta;

* internal: remove VERSION() method which reads VERSION file

* build: fix VersionFilePath to Version

* Makefile: remove clean_go_build_cache.sh until it works

* Makefile: revert removal of clean_go_build_cache.sh
2018-07-30 11:56:40 +03:00
Péter Szilágyi
93c0f1715d Merge pull request #17245 from karalabe/azure-deps-fixups
internal, vendor: update Azure blobstore API
2018-07-26 19:59:46 +03:00
Péter Szilágyi
d9575e92fc crypto/secp256k1: remove external LGPL dependencies (#17239) 2018-07-26 13:33:13 +02:00
Raghav Sood
11a402f747 core: report progress on log chain exports (#17066)
* core/blockchain: export progress

* core: polish up chain export progress report a bit
2018-07-26 14:26:24 +03:00
a e r t h
021d6fbbbb cmd: prevent accidental invalid commands (#17248)
* cmd: stop parsing bootnodes if one is invalid

* cmd/geth: require valid command as argument (or no arg)
2018-07-26 13:57:20 +03:00
Péter Szilágyi
feed8069a6 Merge pull request #17251 from karalabe/ppa-deprecate-artful
build: deprecated ubuntu artful, enable ubuntu cosmic
2018-07-26 13:34:38 +03:00
Péter Szilágyi
514022bde6 Merge pull request #17252 from karalabe/travis-debsrc-fix
build: noop clean during travis debsrc assembly step
2018-07-26 13:29:57 +03:00
Péter Szilágyi
ff22ec31b6 build: noop clean during travis debsrc assembly step 2018-07-26 13:26:53 +03:00
Péter Szilágyi
0598707129 build: deprecated ubuntu artful, enable ubuntu cosmic 2018-07-26 13:14:36 +03:00
Péter Szilágyi
a511f6b515 Merge pull request #17250 from karalabe/fix-clean
build: fix bash->sh function declaration
2018-07-26 13:05:45 +03:00
Péter Szilágyi
8b1e14b7f8 build: fix bash->sh function declaration 2018-07-26 13:01:00 +03:00
Sarlor
6c412e313c cmd/utils: fix comment typo (#17249)
cmd: Comment error
2018-07-26 10:59:05 +03:00
Péter Szilágyi
6b232ce325 internal, vendor: update Azure blobstore API 2018-07-25 18:04:43 +03:00
Guillaume Ballet
7abedf9bbb core/vm: support for multiple interpreters (#17093)
- Define an Interpreter interface
- One contract can call contracts from other interpreter types.
- Pass the interpreter to the operands instead of the evm.
  This is meant to prevent type assertions in operands.
2018-07-25 08:56:39 -04:00
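
A rough sketch of the dispatch idea behind the Interpreter interface mentioned above; the method set shown here is illustrative and simplified, not the exact core/vm definition:

    // Rough sketch of a pluggable interpreter; the real core/vm signatures differ.
    package vm

    import "errors"

    type Contract struct {
        Code []byte
    }

    // Interpreter abstracts one EVM-like execution engine.
    type Interpreter interface {
        // Run executes the contract's code with the given input.
        Run(contract *Contract, input []byte) ([]byte, error)
        // CanRun reports whether this interpreter understands the given code.
        CanRun(code []byte) bool
    }

    // run dispatches to the first interpreter that accepts the code, so a
    // contract can call into contracts backed by another interpreter type.
    func run(interpreters []Interpreter, c *Contract, input []byte) ([]byte, error) {
        for _, in := range interpreters {
            if in.CanRun(c.Code) {
                return in.Run(c, input)
            }
        }
        return nil, errors.New("no compatible interpreter")
    }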
Antoine Rondelet
27a278e6e3 core: fixed typo in addresssByHeartbeat (#17243) 2018-07-25 14:25:14 +03:00
Péter Szilágyi
a0bcb16875 Merge pull request #17244 from chainpunk/master
core: fix typo in comment code
2018-07-25 14:07:54 +03:00
hadv
bc0a43191e core: fix typo in comment code 2018-07-25 18:04:38 +07:00
Viktor Trón
1064b9e691 Merge pull request #17233 from ethersphere/swarm-readme
swarm: README.md
2018-07-25 05:38:14 +02:00
Osuke
10780e8a00 core: fix txpool guarantee comment (#17214)
* fixed-typo

* core: fix txpool guarantee comment
2018-07-24 18:44:41 +03:00
gary rong
2433349c80 core/vm, params: implement EXTCODEHASH opcode (#17202)
* core/vm, params: implement EXTCODEHASH opcode

* core, params: tiny fixes and polish

* core: add function description
2018-07-24 18:06:40 +03:00
Anton Evangelatov
58243b4d3e README: point Swarm brief to the Swarm README, instead of directly to docs 2018-07-24 16:55:07 +02:00
gary rong
cab1cff11c core, crypto, params: implement CREATE2 evm instrction (#17196)
* core, crypto, params: implement CREATE2 evm instrction

* core/vm: add opcode to string mapping

* core: remove past fork checking

* core, crypto: use option2 to generate new address
2018-07-24 17:22:03 +03:00
Vincent Serpoul
2909f6d7a2 common: add database/sql support for Hash and Address (#15541) 2018-07-24 15:15:07 +02:00
Ian Macalinao
d96ba77113 eth/filters: improve error message for invalid filter topics (#17234) 2018-07-24 15:12:49 +02:00
Péter Szilágyi
62467e4405 Merge pull request #17206 from hadv/master
consensus/clique: replace bubble sort by golang stable sort
2018-07-24 13:46:37 +03:00
Wenbiao Zheng
d0082bb7ec core: fix comment typo (#17236) 2018-07-24 13:17:12 +03:00
Péter Szilágyi
21c059b67e Merge pull request #16734 from reductionista/eip234
eth/filters, interfaces: EIP-234 Add blockHash option to eth_getLogs
2018-07-24 12:52:16 +03:00
Sheldon
9e24491c65 core/bloombits, light: fix typos (#17235) 2018-07-24 11:24:27 +03:00
hadv
49f63deb24 consensus/clique: replace bubble sort by golang stable sort 2018-07-24 14:56:53 +07:00
Viktor Trón
b536460f8e Merge pull request #17231 from ethersphere/develop
swarm: client-side MRU signatures ; BMT fixes ; network simulation tests
2018-07-24 08:44:43 +02:00
Péter Szilágyi
afd8b84706 crypto/secp256k1: unify the package license to 3-Clause BSD (#17225)
Our original wrapper code had two parts. One taken from a third
party repository (who took it from upstream Go) licensed under
BSD-3. The second written by Jeff, Felix and Gustav, licensed
under LGPL. This made this package problematic to use from the
outside.

With the agreement of the original copyright holders, this commit
changes the license of the LGPL portions of the code to BSD-3:

---
I agree changing from LGPL to a BSD style license.

Jeff
---
Hey guys,

My preference would be to relicense to GNUBL, but I'm also OK with BSD.

Cheers,
Gustav
---
Felix Lange (fjl):
I would approve anything that makes our licensing less complicated
---
2018-07-24 02:47:47 +02:00
Wenbiao Zheng
f6206efe5b consensus: move test use only var/func to test (#17004) 2018-07-24 02:14:15 +02:00
Hyung-Kyu Hqueue Choi
ae674a3660 Makefile: clean go build cache (#17079) 2018-07-24 02:11:51 +02:00
Wenbiao Zheng
894022a3d5 cmd/geth: clean up call to SelfDerive (#16970) 2018-07-24 02:09:24 +02:00
Wenbiao Zheng
68da9aa716 rpc: clean up check for missing methods/subscriptions on handler (#17145) 2018-07-24 02:00:55 +02:00
Anton Evangelatov
14bdcdeab4 swarm/testutil: remove EnableMetrics 2018-07-23 18:45:39 +02:00
Wenbiao Zheng
fe6a9473dc p2p: token is useless in xxxEncHandshake (#17230) 2018-07-23 17:36:08 +02:00
Javier Peletier
427316a707 swarm/storage/mru: Client-side MRU signatures (#784)
* swarm/storage/mru: Add embedded publickey and remove ENS dep

This commit breaks swarm, swarm/api...
but tests in swarm/storage/mru pass

* swarm: Refactor swarm, swarm/api to mru changes, make tests pass

* swarm/storage/mru: Remove self from recv, remove test ens vldtr

* swarm/storage/mru: Remove redundant test, expose ResourceHash mthd

* swarm/storage/mru: Make HeaderGetter mandatory + godoc fixes

* swarm/storage: Remove validator prefix for metadata chunk

* swarm/storage/mru: Use Address instead of PublicKey

* swarm/storage/mru: Change index from name to metadata chunk addr

* swarm/storage/mru: Refactor swarm/api/... to MRU index changes

* swarm/storage/mru: Refactor cleanup

* swarm/storage/mru: Rebase cleanup

* swarm: Use constructor for GenericSigner MRU in swarm.go

* swarm/storage: Change to BMTHash for MRU hashing

* swarm/storage: Reduce loglevel on chunk validator logs

* swarm/storage/mru: Delint

* swarm: MRU Rebase cleanup

* swarm/storage/mru: client-side mru signatures

Rebase to PR #668 and fix all conflicts

* swarm/storage/mru:  refactor and documentation

* swarm/resource/mru: error-checking  tests for parseUpdate/newUpdateChunk

* swarm/storage/mru: Added resourcemetadata tests

* swarm/storage/mru: Added tests  for UpdateRequest

* swarm/storage/mru: more test coverage for UpdateRequest and comments

* swarm/storage/mru: Avoid fake chunks in parseUpdate()

* swarm/storage/mru: Documented resource.go extensively

moved some functions where they make most sense

* swarm/storage/mru: increase test coverage for UpdateRequest and

variable name changes throughout to increase consistency

* swarm/storage/mru: moved default timestamp to NewCreateRequest-

* swarm/storage/mru: lookup refactor

* swarm/storage/mru: added comments and renamed raw flag to rawmru

* swarm/storage/mru: fix receiver typo

* swarm/storage/mru: refactored update chunk new/create

* swarm/storage/mru:  refactored signature digest to avoid malleability

* swarm/storage/mru: optimize update data serialization

* swarm/storage/mru: refactor and cleanup

* swarm/storage/mru: add timestamp struct and serialization

* swarm/storage/mru: fix lint error and mark some old code for deletion

* swarm/storage/mru: remove unnecessary variable

* swarm/storage/mru: Added more comments throughout

* swarm/storage/mru: Refactored metadata chunk layout + extensive error...

* swarm/storage/mru: refactor cli parser
Changed resource info output to JSON

* swarm/storage/mru: refactor serialization for extensibility

refactored error messages to NewErrorf

* swarm/storage/mru: Moved Signature to resource_sign.
Check Sign errors in server tests

* swarm/storage/mru: Remove isSafeName() checks

* swarm/storage/mru: scrubbed off all references to "block" for time

* swarm/storage/mru: removed superfluous isSynced() call.

* swarm/storage/mru: remove isMultihash() and ToSafeName functions

* swarm/storage/mru: various fixes and comments

* swarm/storage/mru: decoupled cli for independent create/update
* Made resource name optional
* Removed unused LookupPrevious

* swarm/storage/mru: Decoupled resource create / update & refactor

* swarm/storage/mru: Fixed some comments as per issues raised in PR #743

* swarm/storage/mru: Cosmetic changes as per #743 comments

* swarm/storage/mru: refct request encoder/decoder > marshal/unmarshal

* swarm/storage/mru: Cosmetic changes as per review in #748

* swarm/storage/mru: removed timestamp proof placeholder

* swarm/storage/mru: cosmetic/doc/fixes changes as per comments in #704

* swarm/storage/mru: removed unnecessary check in Handler.update

* swarm/storage/mru: Implemented Marshaler/Unmarshaler iface in Request

* swarm/storage/mru: Fixed linter error

* swarm/storage/mru: removed redundant address in signature digest

* swarm/storage/mru: fixed bug: LookupLatestVersionInPeriod not working

* swarm/storage/mru: Unfold Request creation API for create or update+create
set common time source for mru package

* swarm/api/http: fix HandleGetResource error variable shadowed
when requesting a resource that does not exist

* swarm/storage/mru: Add simple check to detect duplicate updates

* swarm/storage/mru: moved Multihash() to the right place.

* cmd/swarm: remove unneeded clientaccountmanager.go

* swarm/storage/mru: Changed some comments as per reviews in #784

* swarm/storage/mru: Made SignedResourceUpdate.GetDigest() public

* swarm/storage/mru: cosmetic changes as per comments in #784

* cmd/swarm: Inverted --multihash flag default

* swarm/storage/mru: removed Verify from SignedResourceUpdate.fromChunk

* swarm/storage/mru: Moved validation code out of serializer
Cosmetic / comment changes

* swarm/storage/mru: Added unit tests for UpdateLookup

* swarm/storage/mru: Increased coverage of metadata serialization

* swarm/storage/mru: Increased test coverage of updateHeader serializers

* swarm/storage/mru: Add resourceUpdate serializer test
2018-07-23 15:33:33 +02:00
Elad
0647c4de7b swarm/api/http: http package refactoring 1/5 and 2/5 2018-07-23 15:33:32 +02:00
Javier Peletier
7ddc2c9e95 cmd/swarm: add implicit subcommand help (fix #786) (#788)
* cmd/swarm: add implicit subcommand help (fix #786)

* cmd/swarm: moved implicit help to a recursive func
2018-07-23 15:33:32 +02:00
Janoš Guljaš
dcaaa3c804 swarm: network simulation for swarm tests (#769)
* cmd/swarm: minor cli flag text adjustments

* cmd/swarm, swarm/storage, swarm: fix  mingw on windows test issues

* cmd/swarm: support for smoke tests on the production swarm cluster

* cmd/swarm/swarm-smoke: simplify cluster logic as per suggestion

* changed colour of landing page

* landing page reacts to enter keypress

* swarm/api/http: sticky footer for swarm landing page using flex

* swarm/api/http: sticky footer for error pages and fix for multiple choices

* swarm: propagate ctx to internal apis (#754)

* swarm/simnet: add basic node/service functions

* swarm/netsim: add buckets for global state and kademlia health check

* swarm/netsim: Use sync.Map as bucket and provide cleanup function for...

* swarm, swarm/netsim: adjust SwarmNetworkTest

* swarm/netsim: fix tests

* swarm: added visualization option to sim net redesign

* swarm/netsim: support multiple services per node

* swarm/netsim: remove redundant return statement

* swarm/netsim: add comments

* swarm: shutdown HTTP in Simulation.Close

* swarm: sim HTTP server timeout

* swarm/netsim: add more simulation methods and peer events examples

* swarm/netsim: add WaitKademlia example

* swarm/netsim: fix comments

* swarm/netsim: terminate peer events goroutines on simulation done

* swarm, swarm/netsim: naming updates

* swarm/netsim: return not healthy kademlias on WaitTillHealthy

* swarm: fix WaitTillHealthy call in testSwarmNetwork

* swarm/netsim: allow bucket to have any type for a key

* swarm: Added snapshots to new netsim

* swarm/netsim: add more tests for bucket

* swarm/netsim: move http related things into separate files

* swarm/netsim: add AddNodeWithService option

* swarm/netsim: add more tests and Start* methods

* swarm/netsim: add peer events and kademlia tests

* swarm/netsim: fix some tests flakiness

* swarm/netsim: improve random nodes selection, fix TestStartStop* tests

* swarm/netsim: remove time measurement from TestClose to avoid flakiness

* swarm/netsim: builder pattern for netsim HTTP server (#773)

* swarm/netsim: add connect related tests

* swarm/netsim: add comment for TestPeerEvents

* swarm: rename netsim package to network/simulation
2018-07-23 15:33:25 +02:00
lash
f5b128a5b3 swarm/fuse: Hotfix missing parentheses in statement 2018-07-23 15:31:02 +02:00
Viktor Trón
fd982d3f3b swarm/bmt: async section writer interface to BMT (#778)
- AsyncHasher implements AsyncWriter interface
 - add extra level for zerohashes in pool to lookup empty data hash
 - remove unused segment, hash and depth fields from Tree
 - Hash pkg function -> syncHash moved to test
 - add asyncHash helper func to tests using shuffle
 - add TestAsyncCorrectness to tests
 - add BenchmarkBMTAsync to tests
 - refactor benchmarks using subbenchmarks
 - improved comments
 - preinitialise base hashers on the nodes
2018-07-23 15:09:25 +02:00
emile
526abe2736 eth/tracers: fix noop tracer (#17220) 2018-07-22 22:11:52 +03:00
cong
8997efe31f rpc: fix missing parentheses in doc (#17224) 2018-07-22 22:09:45 +03:00
Anton Evangelatov
4ea2d707f9 gitter: change ethereum/swarm to ethersphere/orange-lounge 2018-07-21 00:47:46 +02:00
Anton Evangelatov
ee8877509a swarm/readme: add link to code review guidelines 2018-07-19 13:27:31 +02:00
Anton Evangelatov
763e64cad8 swarm: readme.md 2018-07-19 12:13:03 +02:00
Roc Yu
040dd5bd5d accounts/abi: refactor Method#Sig() to use index in range iterator directly (#17198) 2018-07-19 11:42:47 +03:00
gary rong
dcdd57df62 core, ethdb: two tiny fixes (#17183)
* ethdb: fix memory database

* core: fix bloombits checking

* core: minor polish
2018-07-18 13:41:36 +03:00
Roc Yu
323428865f accounts: add unit tests for URL (#17182) 2018-07-18 11:32:49 +03:00
jkcomment
65c91ad5e7 p2p: correct comments typo (#17184) 2018-07-18 10:41:18 +03:00
Ralph Caraveo III
5d30be412b all: switch out defunct set library to different one (#16873)
* keystore, ethash, eth, miner, rpc, whisperv6: tech debt with now defunct set.

* whisperv5: swap out gopkg.in/fatih/set.v0 with supported set
2018-07-16 10:54:19 +03:00
Kurkó Mihály
eb7f901289 dashboard: fix CSS, escape special HTML chars, clean up code (#17167)
* dashboard: fix CSS, escape special HTML chars, clean up code

* dashboard: change 0 to 1

* dashboard: add escape-html npm package
2018-07-16 10:43:58 +03:00
Péter Szilágyi
db5e403afe build: fix typo (#17175)
build: Fix a typo in ci.go
2018-07-16 10:36:21 +03:00
Roc Yu
96116758d2 cmd/geth: fix golint issue (#17176) 2018-07-16 10:33:58 +03:00
Felix Yan
65cebb7730 build: Fix a typo in ci.go 2018-07-15 14:13:11 +08:00
Anton Evangelatov
2e0391ea84 cmd/swarm: change version of swarm binary (#17174) 2018-07-13 22:53:02 +02:00
Anton Evangelatov
d483da766f swarm/network: bump up protocol versions due to wrapped message intro (#17170) 2018-07-13 17:40:55 +02:00
Anton Evangelatov
7c9314f231 swarm: integrate OpenTracing; propagate ctx to internal APIs (#17169)
* swarm: propagate ctx, enable opentracing

* swarm/tracing: log error when tracing is misconfigured
2018-07-13 17:40:28 +02:00
Péter Szilágyi
e1f1d3085c accounts, eth, les: blockhash based filtering on all code paths 2018-07-12 18:16:54 +03:00
Domino Valdano
96339daf40 eth/filters, ethereum: EIP-234 add blockHash param for eth_getLogs 2018-07-12 18:16:32 +03:00
Anton Evangelatov
f7d3678c28 swarm/api/http: http package refactoring 1/5 and 2/5 (#17164) 2018-07-12 15:42:45 +02:00
Kwuaint
facf1bc9d6 consensus/ethash: fix the algorithm of fakeBlockNumber in comments (#17166)
correct the algorithm in the comments for fakeBlockNumber, from "min" to "max".
2018-07-12 13:32:23 +03:00
gary rong
e8824f6e74 vendor, ethdb: resume write operation asap (#17144)
* vendor: update leveldb

* ethdb: remove useless warning log
2018-07-12 11:07:51 +03:00
Kurkó Mihály
a9835c1816 cmd, dashboard, log: log collection and exploration (#17097)
* cmd, dashboard, internal, log, node: logging feature

* cmd, dashboard, internal, log: requested changes

* dashboard, vendor: gofmt, govendor, use vendored file watcher

* dashboard, log: gofmt -s -w, goimports

* dashboard, log: gosimple
2018-07-11 10:59:04 +03:00
Wenbiao Zheng
2eedbe799f cmd: typo fixed, isntance -> instance (#17149) 2018-07-09 17:34:59 +03:00
Anton Evangelatov
b3711af051 swarm: ctx propagation; bmt fixes; pss generic notification framework (#17150)
* cmd/swarm: minor cli flag text adjustments

* swarm/api/http: sticky footer for swarm landing page using flex

* swarm/api/http: sticky footer for error pages and fix for multiple choices

* cmd/swarm, swarm/storage, swarm: fix  mingw on windows test issues

* cmd/swarm: update description of swarm cmd

* swarm: added network ID test

* cmd/swarm: support for smoke tests on the production swarm cluster

* cmd/swarm/swarm-smoke: simplify cluster logic as per suggestion

* swarm: propagate ctx to internal apis (#754)

* swarm/metrics: collect disk measurements

* swarm/bmt: fix io.Writer interface

  * Write now tolerates arbitrary variable buffers
  * added variable buffer tests
  * Write loop and finalise optimisation
  * refactor / rename
  * add tests for empty input

* swarm/pss: (UPDATE) Generic notifications package (#744)

swarm/pss: Generic package for creating pss notification svcs

* swarm: Adding context to more functions

* swarm/api: change colour of landing page in templates

* swarm/api: change landing page to react to enter keypress
2018-07-09 14:11:49 +02:00
Smilenator
30bdf817a0 core/types: polish TxDifference code and docs a bit (#17130)
* core: fix func TxDifference

fix a typo in func comment;
change named return to unnamed as there's explicit return in the body

* fix another typo in TxDifference
2018-07-09 11:48:54 +03:00
Wenbiao Zheng
fbeb4f20f9 cmd/geth: fix usage formatting (#17136) 2018-07-09 11:41:28 +03:00
LeoLiao
0b20b1a050 consensus/clique: fixed documentation copy-paste issue (#17137) 2018-07-09 11:39:43 +03:00
LeoLiao
4dbefc1f25 cmd/geth: fixed comment typo (#17140) 2018-07-09 11:38:52 +03:00
LeoLiao
dbae1dc7b3 rpc: fixed comment grammar issue (#17146) 2018-07-09 11:31:59 +03:00
Felix Lange
3b07451564 params, VERSION: v1.8.13 unstable 2018-07-05 01:09:02 +02:00
Felix Lange
37685930d9 params: v1.8.12 stable 2018-07-05 01:08:05 +02:00
Felföldi Zsolt
51df1c1f20 les: add announcement safety check to light fetcher (#17034) 2018-07-04 13:40:20 +03:00
Felföldi Zsolt
f524ec4326 light: new CHTs (#17124) 2018-07-04 12:41:17 +03:00
Zak Cole
eb794af833 consensus/ethash: fixed documentation typo (#17121)
"proot-of-work" to "proof-of-work"
2018-07-04 11:20:58 +03:00
Péter Szilágyi
67a7857124 Merge pull request #17111 from karalabe/trie-memleak
trie: fix a temporary memory leak in the memcache
2018-07-03 17:20:58 +03:00
Felix Lange
c73b654fd1 p2p/discover: move bond logic from table to transport (#17048)
* p2p/discover: move bond logic from table to transport

This commit moves node endpoint verification (bonding) from the table to
the UDP transport implementation. Previously, adding a node to the table
entailed pinging the node if needed. With this change, the ping-back
logic is embedded in the packet handler at a lower level.

It is easy to verify that the basic protocol is unchanged: we still
require a valid pong reply from the node before findnode is accepted.

The node database tracked the time of last ping sent to the node and
time of last valid pong received from the node. Node endpoints are
considered verified when a valid pong is received and the time of last
pong was called 'bond time'. The time of last ping sent was unused. In
this commit, the last ping database entry is repurposed to mean last
ping _received_. This entry is now used to track whether the node needs
to be pinged back.

The other big change is how nodes are added to the table. We used to add
nodes in Table.bond, which ran when a remote node pinged us or when we
encountered the node in a neighbors reply. The transport now adds to the
table directly after the endpoint is verified through ping. To ensure
that the Table can't be filled just by pinging the node repeatedly, we
retain the isInitDone check. During init, only nodes from neighbors
replies are added.

* p2p/discover: reduce findnode failure counter on success

* p2p/discover: remove unused parameter of loadSeedNodes

* p2p/discover: improve ping-back check and comments

* p2p/discover: add neighbors reply nodes always, not just during init
2018-07-03 16:24:12 +03:00
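
In essence, the bonding change above means findnode is only honoured once a valid pong has been received recently, and the repurposed last-ping entry decides whether to ping back. A hedged sketch of those two checks with made-up names and an assumed expiration window; the real p2p/discover code is more involved:

    // Hedged sketch of the endpoint-verification checks described above.
    package discover

    import "time"

    const bondExpiration = 24 * time.Hour // assumed validity window

    type nodeDB struct {
        lastPongReceived map[string]time.Time
        lastPingReceived map[string]time.Time
    }

    // checkBond reports whether a findnode from this node may be answered:
    // we require a valid pong reply received within the expiration window.
    func (db *nodeDB) checkBond(id string) bool {
        return time.Since(db.lastPongReceived[id]) < bondExpiration
    }

    // needsPingBack reports whether we should ping the node back before
    // trusting it, based on when we last received a ping from it.
    func (db *nodeDB) needsPingBack(id string) bool {
        return time.Since(db.lastPingReceived[id]) > bondExpiration
    }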
Chen Quan
9da128db70 cmd/p2psim: add exit error output and exit code (#17116) 2018-07-03 15:14:57 +03:00
Guillaume Ballet
4e5d1f1c39 core/vm: reuse bigint pools across transactions (#17070)
* core/vm: A pool for int pools

* core/vm: fix rebase issue

* core/vm: push leftover stack items after execution, not before
2018-07-03 13:06:42 +03:00
LeoLiao
d57e85ecc9 node: documentation typo fix (#17113) 2018-07-03 11:42:18 +03:00
Anton Evangelatov
1990c9e621 cmd/geth: export metrics to InfluxDB (#16979)
* cmd/geth: add flags for metrics export

* cmd/geth: update usage fields for metrics flags

* metrics/influxdb: update reporter logger to adhere to geth logging convention
2018-07-02 15:51:02 +03:00
Péter Szilágyi
319098cc1c trie: fix a temporary memory leak in the memcache 2018-07-02 15:47:33 +03:00
Fabian Raetz
223d943481 vendor: update docker/docker/pkg/reexec so that it compiles on OpenBSD (#17084) 2018-07-02 14:25:02 +03:00
Péter Szilágyi
8974e2e5e0 Merge pull request #17092 from pilu/master
remove formatting from ResettingTimer metrics if requested in raw format
2018-07-02 14:22:19 +03:00
gary rong
a4a2343cdc ethdb, core: implement delete for db batch (#17101) 2018-07-02 11:16:30 +03:00
kevin.xu
fdfd6d3c39 ethstats: comment minor correction (#17102)
spell correction from `repors` to `reports`
2018-06-29 15:15:38 +03:00
Andrea Franz
b5537c5601 node: remove formatting from ResettingTimer metrics if requested in raw 2018-06-27 11:43:49 +02:00
Péter Szilágyi
e916f9786d Merge pull request #17087 from OpenCommunityCoin/build/portable-shell
build: make build/goimports.sh more portable
2018-06-27 11:28:58 +03:00
hackyminer
909e968ebb build: make build/goimports.sh more portable 2018-06-26 22:04:27 +09:00
Guillaume Ballet
598f786aab core/vm: clear linter warnings (#17057)
* core/vm: clear linter warnings

* core/vm: review input

* core/vm.go: revert lint in noop as per request
2018-06-26 15:56:25 +03:00
Adrià Cidre
461291882e whisper: Reduce message loop log from Warn to Info (#17055) 2018-06-26 04:31:05 -04:00
lash
1f0f6f0272 swarm/pss: Hide big network tests under longrunning flag (#17074) 2018-06-25 16:11:47 +02:00
Balint Gabor
0a22ae5572 swarm/fuse: Disable fuse tests, they are flaky (#17072) 2018-06-25 14:04:01 +02:00
Péter Szilágyi
b0cfd9c786 Merge pull request #17054 from chfast/log-time-format
log: Change time format
2018-06-25 12:23:33 +03:00
Paweł Bylica
6d8a1bfb08 log: Change time format
- Keep the trailing zeros.
- Limit precision to milliseconds.
2018-06-25 11:22:23 +02:00
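
The two bullets above correspond to Go's fixed-width fractional-second layout: ".000" pads with trailing zeros at millisecond precision, while ".999" would trim them. A small example (the exact geth format string is not reproduced here):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Date(2018, 6, 25, 11, 22, 23, 40000000, time.UTC)
        // ".000" keeps trailing zeros at millisecond precision; ".999" trims them.
        fmt.Println(t.Format("01-02|15:04:05.000")) // 06-25|11:22:23.040
        fmt.Println(t.Format("01-02|15:04:05.999")) // 06-25|11:22:23.04
    }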
gary rong
4895665670 les: handle conn/disc/reg logic in the eventloop (#16981)
* les: handle conn/disc/reg logic in the eventloop

* les: try to dial before start eventloop

* les: handle disconnect logic more safely

* les: grammar fix
2018-06-25 11:52:24 +03:00
Viktor Trón
eaff89291c Merge pull request #17041 from ethersphere/swarm-network-rewrite-merge
Swarm POC3 - happy solstice
2018-06-21 23:00:43 +02:00
ethersphere
e187711c65 swarm: network rewrite merge 2018-06-21 21:10:31 +02:00
Andrey Petrov
6209545083 p2p: Wrap conn.flags ops with atomic.Load/Store 2018-06-21 12:22:47 -04:00
Andrey Petrov
193a402cc0 p2p: Test for peer.rw.flags race conditions 2018-06-21 12:22:47 -04:00
Andrey Petrov
dcca66bce8 p2p: Cache inbound flag on Peer.isInbound to avoid a race 2018-06-21 12:22:47 -04:00
Andrey Petrov
399aa710d5 p2p: Attempt to race check peer.Inbound() in TestServerDial 2018-06-21 12:22:47 -04:00
Andrey Petrov
699794d88d p2p: More tests for AddTrustedPeer/RemoveTrustedPeer 2018-06-21 12:22:47 -04:00
Andrey Petrov
773857a524 p2p: Test for MaxPeers=0 and TrustedPeer override 2018-06-21 12:21:48 -04:00
Andrey Petrov
2a75fe3308 rpc: Add admin_addTrustedPeer and admin_removeTrustedPeer.
These RPC calls are analogous to Parity's parity_addReservedPeer and
parity_removeReservedPeer.

They are useful for adjusting the trusted peer set during runtime,
without requiring restarting the server.
2018-06-21 12:21:48 -04:00
Péter Szilágyi
d926bf2c7e trie: cache collapsed tries node, not rlp blobs (#16876)
The current trie memory database/cache that we do pruning on stores
trie nodes as binary rlp encoded blobs, and also stores the node
relationships/references for GC purposes. However, most of the trie
nodes (everything apart from a value node) is in essence just a
collection of references.

This PR switches out the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure,
avoiding the need to track them a second time (expensive memory wise).
2018-06-21 11:28:05 +02:00
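
A highly simplified sketch of the two cache shapes the description contrasts: an opaque RLP blob whose child references must be tracked separately, versus a collapsed node that carries its references itself. The types below are illustrative only, not the real trie package:

    // Highly simplified sketch contrasting the two cache representations
    // described in the PR text above; not the actual trie package types.
    package trie

    type hash [32]byte

    // Before: the cache stored opaque RLP blobs plus a separate reference table.
    type rlpCacheEntry struct {
        blob     []byte // rlp(node)
        children []hash // tracked a second time, duplicating information
    }

    // After: the cache stores the collapsed-but-not-serialized node, so child
    // references can be read straight out of the node structure.
    type collapsedFullNode struct {
        Children [17]hash
    }

    func (n *collapsedFullNode) childRefs() []hash {
        refs := make([]hash, 0, 17)
        for _, h := range n.Children {
            if h != (hash{}) {
                refs = append(refs, h)
            }
        }
        return refs
    }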
nobody
8db8d074e2 cmd/geth: remove the tail "," from genesis config (#17028)
remove the trailing "," from the genesis config, which would otherwise cause a genesis config parse error.
2018-06-21 10:44:39 +03:00
Husam Ibrahim
1a70338734 mobile: correct comment typo in ethereum.go (#17040) 2018-06-21 10:35:35 +03:00
Wenbiao Zheng
61a5976368 accounts: remove deadcode isSigned (#16990) 2018-06-20 11:46:29 +02:00
Péter Szilágyi
88c42ab4e7 Merge pull request #16954 from holiman/more_tracers
eth/tracers: fix err in 4byte, add some opcode analysis tools
2018-06-20 11:43:40 +03:00
Martin Holst Swende
4210dd1500 tracers: fix err in 4byte, add some opcode analysis tools 2018-06-20 11:42:57 +03:00
ligi
c971ab617d travis: use NDK 17b for Android archives (#17029) 2018-06-20 11:28:10 +03:00
Wenbiao Zheng
f1986f86f2 signer: remove useless errorWrapper (#17003) 2018-06-19 14:48:10 +03:00
Husam Ibrahim
28aca90716 accounts/usbwallet: correct comment typo (#16998) 2018-06-19 14:43:20 +03:00
Wenbiao Zheng
9b1536b26a core: remove dead code, limit test code scope (#17006)
* core: move test util var/func to test file

* core: remove useless func
2018-06-19 14:41:13 +03:00
Husam Ibrahim
3e57c33147 accounts/usbwallet: correct comment typo (#17008) 2018-06-19 14:36:35 +03:00
Husam Ibrahim
baa7eb901e mobile: correct comment typo in geth.go (#17021) 2018-06-19 14:35:59 +03:00
Wenbiao Zheng
574378edb5 cmd: remove faucet/puppeth dead code (#16991)
* cmd/faucet: authGitHub is not used anymore

* cmd/puppeth: remove not used code
2018-06-15 11:14:55 +03:00
Wenbiao Zheng
c95e4a80d1 accounts/keystore: assign schema as const instead of var (#16985) 2018-06-14 17:24:35 +03:00
Ryan Schneider
897ea01d5f internal/debug: use pprof goroutine writer for debug_stacks (#16892)
* debug: Use pprof goroutine writer in debug.Stacks() to ensure all goroutines are captured.

* Up to 64MB limit, previous code only captured first 1MB of goroutines.

* internal/debug: simplify stacks handler

* fix typo

* fix pointer receiver
2018-06-14 16:58:44 +03:00
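
The fix above switches to the runtime/pprof goroutine profile writer, which is not limited by a fixed runtime.Stack buffer. A small example of that standard-library call (the 64MB cap mentioned in the commit is enforced by geth separately and is not shown):

    package main

    import (
        "os"
        "runtime/pprof"
    )

    func main() {
        // Debug level 2 prints every goroutine's stack in panic-like form,
        // without the fixed-size buffer limitation of runtime.Stack.
        _ = pprof.Lookup("goroutine").WriteTo(os.Stdout, 2)
    }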
Caesar Chad
ec192f18b4 core/asm: correct comments typo (#16974)
* core/asm/compiler: correct comments typo

core/asm/compiler: correct comments typo

* Correct comments typo
2018-06-14 16:24:35 +03:00
Felix Lange
aa34173f13 common/number: delete unused package (#16983)
This package was meant to hold an improved 256 bit integer library, but
the effort was abandoned in 2015. AFAIK nothing ever used this package.
Time to say goodbye.
2018-06-14 15:10:34 +03:00
kiel barry
c343f75c26 bmt: fix package documentation comment (#16909) 2018-06-14 13:50:31 +02:00
Wenbiao Zheng
52b1d09457 core: reduce nesting in transaction pool code (#16980) 2018-06-14 13:46:43 +03:00
williambannas
9402f96597 eth: conform better to the golint standards (#16783)
* eth: made changes to conform better to the golint standards

* eth: fix comment nit
2018-06-14 13:14:52 +03:00
kiel barry
d0fd8d6fc2 common: all golint warnings removed (#16852)
* common: all golint warnings removed

* common: fixups
2018-06-14 12:52:50 +03:00
Péter Szilágyi
cfde0b5f52 Merge pull request #16977 from karalabe/go-1.10.3
travis, appveyor: update to Go 1.10.3
2018-06-14 12:51:50 +03:00
Péter Szilágyi
de06185fc3 travis, appveyor: update to Go 1.10.3 2018-06-14 12:46:49 +03:00
Jeremy Schlatter
3fb5f3ae11 cmd/utils: fix NetworkId default when -dev is set (#16833)
Prior to this change, when geth was started with `geth -dev -rpc`,
it would report a network id of `1` in response to the `net_version` RPC
request. But the actual network id it used to verify transactions
was `1337`.

This change causes geth instead respond with `1337` to the `net_version`
RPC when geth is started with `geth -dev -rpc`.
2018-06-14 12:31:31 +03:00
knarfeh
e75d0a6e4c eth/filters: make filterLogs func more readable (#16920) 2018-06-14 12:27:02 +03:00
Martin Holst Swende
947e0afeb3 core/vm: optimize MSTORE and SLOAD (#16939)
* vm/test: add tests+benchmarks for mstore

* core/vm: less alloc and copying for mstore

* core/vm: less allocs in sload

* vm: check for errors more correctly
2018-06-14 12:23:37 +03:00
Elad
1836366ac1 all: library changes for swarm-network-rewrite (#16898)
This commit adds all changes needed for the merge of swarm-network-rewrite.
The changes:

- build: increase linter timeout
- contracts/ens: export ensNode
- log: add Output method and enable fractional seconds in format
- metrics: relax test timeout
- p2p: reduced some log levels, updates to simulation packages
- rpc: increased maxClientSubscriptionBuffer to 20000
2018-06-14 11:21:17 +02:00
Armin Braun
591cef17d4 #15685 made peer_test.go more portable by using random free port instead of hardcoded port 30303 (#15687)
Improves test portability by resolving 127.0.0.1:0
to get a random free port instead of the hard coded one. Now
the test works if you have a running node on the same
interface already.

Fixes #15685
2018-06-14 10:54:00 +02:00
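
The portability fix boils down to listening on port 0 so the OS picks a free port. A minimal example of the idiom:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Port 0 asks the kernel for any free port, so tests never collide with
        // a node already listening on a fixed port such as 30303.
        l, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        defer l.Close()
        fmt.Println("got free port:", l.Addr().(*net.TCPAddr).Port)
    }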
Caesar Chad
e33a5de454 console: correct some comments typo (#16971)
console/console: correct some comments typo
2018-06-14 11:35:20 +03:00
Caesar Chad
f04c0e341e core/asm: correct comments typo (#16975)
core/asm/lexer: correct comments typo
2018-06-14 11:32:19 +03:00
Wenbiao Zheng
ea89f40f0d eth/fetcher: fix annotation (#16969) 2018-06-13 17:02:28 +03:00
Ryan Schneider
1fc54d92ec internal/web3ext: fix method name for enabling mutex profiling (#16964) 2018-06-13 14:10:20 +02:00
John C. Vernaleo
8c4a7fa8d3 core: change comment to match code more closely (#16963) 2018-06-13 10:14:15 +03:00
Péter Szilágyi
423d4254f5 VERSION, params: begin v1.8.12 release cycle 2018-06-12 17:04:17 +03:00
Péter Szilágyi
dea1ce052a params: release go-ethereum v1.8.11 2018-06-12 17:02:14 +03:00
Felföldi Zsolt
25982375a8 les: fix retriever logic (#16776)
This PR fixes a retriever logic bug. When a peer had a soft timeout
and then a response arrived, it always assumed it was the same peer
even though it could have been a later requested one that did not time
out at all yet. In this case the logic went to an illegal state and
deadlocked, causing a goroutine leak.

Fixes #16243 and replaces #16359.
Thanks to @riceke for finding the bug in the logic.
2018-06-12 15:58:47 +02:00
Felföldi Zsolt
049f5b3572 core, eth, les: more efficient hash-based header chain retrieval (#16946) 2018-06-12 16:52:54 +03:00
Felix Lange
0255951587 crypto: replace ToECDSAPub with error-checking func UnmarshalPubkey (#16932)
ToECDSAPub was unsafe because it returned a non-nil key with nil X, Y in
case of invalid input. This change replaces ToECDSAPub with
UnmarshalPubkey across the codebase.
2018-06-12 15:26:08 +02:00
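
The change above replaces a helper that could silently hand back a key with nil X and Y with one that returns an error. The core of such an error-checking decoder is shown below using the standard library; the curve choice and function body are illustrative assumptions, not the actual crypto.UnmarshalPubkey code:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "errors"
        "fmt"
    )

    func unmarshalPubkey(pub []byte) (*ecdsa.PublicKey, error) {
        x, y := elliptic.Unmarshal(elliptic.P256(), pub)
        if x == nil {
            // Previously a non-nil key with nil X, Y escaped to callers here.
            return nil, errors.New("invalid public key")
        }
        return &ecdsa.PublicKey{Curve: elliptic.P256(), X: x, Y: y}, nil
    }

    func main() {
        _, err := unmarshalPubkey([]byte{0x01, 0x02})
        fmt.Println(err) // invalid public key
    }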
Péter Szilágyi
85cd64df0e Merge pull request #16958 from karalabe/pending-account-fast
internal/ethapi: reduce pendingTransactions to O(txs+accs) from O(txs*accs)
2018-06-12 14:07:21 +03:00
Péter Szilágyi
9608ccf106 Merge pull request #16959 from karalabe/fix-linters
metrics: fix gofmt linter warnings
2018-06-12 14:04:59 +03:00
Péter Szilágyi
3f06da7b5f metrics: fix gofmt linter warnings 2018-06-12 14:02:36 +03:00
Felföldi Zsolt
546d42179e les: pass server pool to protocol manager (#16947) 2018-06-12 14:00:52 +03:00
Péter Szilágyi
90829a04bf internal/ethapi: reduce pendingTransactions to O(txs+accs) from O(txs*accs) 2018-06-12 13:49:43 +03:00
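
The O(txs*accs) to O(txs+accs) change above is the usual "index once, then look up" rewrite: build a per-sender map in one pass over the pending transactions, then collect per account, instead of scanning all transactions once per account. A generic sketch with hypothetical types:

    package main

    import "fmt"

    type tx struct {
        from  string
        nonce uint64
    }

    // pendingFor returns the pending txs of the managed accounts in O(txs+accs):
    // one pass over txs to index by sender, one pass over accounts to collect.
    func pendingFor(pending []tx, accounts []string) []tx {
        bySender := make(map[string][]tx, len(pending))
        for _, t := range pending {
            bySender[t.from] = append(bySender[t.from], t)
        }
        var out []tx
        for _, acc := range accounts {
            out = append(out, bySender[acc]...)
        }
        return out
    }

    func main() {
        pending := []tx{{"a", 1}, {"b", 7}, {"a", 2}}
        fmt.Println(pendingFor(pending, []string{"a"}))
    }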
gary rong
f991995918 ethdb: gracefullly handle quit channel (#16794)
* ethdb: gracefully handle quit channel

* ethdb: minor polish
2018-06-11 16:11:48 +03:00
Wenbiao Zheng
aab7ab04b0 core/rawdb: wrap db key creations (#16914)
* core/rawdb: use wrappered helper to assemble key

* core/rawdb: wrappered helper to assemble key

* core/rawdb: rewrite the wrapper, pass common.Hash
2018-06-11 16:06:26 +03:00
Péter Szilágyi
43b940ec5a Merge pull request #16945 from karalabe/triedb-spurious-warning
trie: don't report the root flushlist as an alloc
2018-06-11 15:02:37 +03:00
Clayton Jacobs
b487bdf0ba metrics: removed repetitive calculations (#16944) 2018-06-11 14:45:25 +03:00
Péter Szilágyi
a3267ed929 trie: don't report the root flushlist as an alloc 2018-06-11 14:32:13 +03:00
Péter Szilágyi
9f7592c802 Merge pull request #16942 from karalabe/rpc-nil-reply
rpc: support returning nil pointer big.Ints (null)
2018-06-11 13:58:17 +03:00
Péter Szilágyi
99483e85b9 rpc: support returning nil pointer big.Ints (null) 2018-06-11 13:56:22 +03:00
xincaosu
1d666cf27e rpc: fix a comment typo (#16929) 2018-06-11 12:01:13 +03:00
Martin Holst Swende
eac16f9824 core: improve getBadBlocks to return full block rlp (#16902)
* core: improve getBadBlocks to return full block rlp

* core, eth, ethapi: changes to getBadBlocks formatting

* ethapi: address review concerns
2018-06-11 11:03:40 +03:00
Steven Roose
69c52bde3f ethclient: fix RPC parse error of Parity response (#16924)
The error produced when using a Parity RPC was the following:

ERROR: transaction did not get mined: failed to get tx for txid 0xbdeb094b3278019383c8da148ff1cb5b5dbd61bf8731bc2310ac1b8ed0235226: json: cannot unmarshal non-string into Go struct field txExtraInfo.blockHash of type common.Hash
2018-06-11 10:41:09 +03:00
Felföldi Zsolt
2977538ac0 light: new CHTs for mainnet and ropsten (#16926) 2018-06-11 10:38:05 +03:00
Anton Evangelatov
7f0726f706 metrics: return an empty snapshot for NilResettingTimer (#16930) 2018-06-11 10:31:55 +03:00
Steven Roose
13af276418 cmd/ethkey: add command to change key passphrase (#16516)
This change introduces 

    ethkey changepassphrase <keyfile>

to change the passphrase of a key file.
2018-06-08 15:07:07 +02:00
Sarlor
ea06da0892 trie: avoid unnecessary slicing on shortnode decoding (#16917)
optimization code
2018-06-07 11:48:36 +03:00
ledgerwatch
feb6620c34 core: relax type requirement for bc in ApplyTransaction (#16901) 2018-06-07 10:34:24 +02:00
Bruno Škvorc
90b22773e9 cmd/puppeth: fixed a typo in a wizard input query (#16910) 2018-06-06 12:17:41 +03:00
Guillaume Ballet
9e4f96a1a6 whisper: re-insert #16757 that has been lost during a merge (#16889) 2018-06-05 16:25:16 +02:00
Péter Szilágyi
01a7e267dc Merge pull request #16882 from karalabe/streaming-ecrecover
core: concurrent background transaction sender ecrecover
2018-06-05 17:13:43 +03:00
Felix Lange
e8ea5aa0d5 trie: reduce hasher allocations (#16896)
* trie: reduce hasher allocations

name    old time/op    new time/op    delta
Hash-8    4.05µs ±12%    3.56µs ± 9%  -12.13%  (p=0.000 n=20+19)

name    old alloc/op   new alloc/op   delta
Hash-8    1.30kB ± 0%    0.66kB ± 0%  -49.15%  (p=0.000 n=20+20)

name    old allocs/op  new allocs/op  delta
Hash-8      11.0 ± 0%       8.0 ± 0%  -27.27%  (p=0.000 n=20+20)

* trie: bump initial buffer cap in hasher
2018-06-05 15:06:29 +03:00
Elad
5bee5d69d7 vendor: added vendor packages necessary for the swarm-network-rewrite merge (#16792)
* vendor: added vendor packages necessary for the swarm-network-rewrite merge into ethereum master

* vendor: removed multihash deps
2018-06-05 12:40:21 +02:00
kiel barry
cbfb40b0aa params: fix golint warnings (#16853)
params: fix golint warnings
2018-06-05 12:31:34 +02:00
Antonio Salazar Cardozo
4cf2b4110e cmd/abigen: support for reading solc output from stdin (#16683)
Allow the --abi flag to be given `-` to indicate that it should read the
ABI information from standard input. It expects to read the solc output
with the --combined-json flag providing bin, abi, userdoc, devdoc, and
metadata, and works very similarly to the internal invocation of solc,
except it allows external invocation of solc.

This facilitates integration with more complex solc invocations, such
as invocations that require path remapping or --allow-paths tweaks.

Simple usage example:

    solc --combined-json bin,abi,userdoc,devdoc,metadata *.sol | abigen --abi -
2018-06-05 12:22:02 +02:00
Mark
0029a869f0 miner: not call commitNewWork if it's a side block (#16751) 2018-06-05 12:10:09 +02:00
Péter Szilágyi
2ab24a2a8f core: concurrent background transaction sender ecrecover 2018-06-05 11:03:55 +03:00
Martin Holst Swende
400332b99d eth/tracers: fix minor off-by-one error (#16879)
* tracing: fix minor off-by-one error

* tracers: go generate
2018-06-05 10:27:07 +03:00
Felföldi Zsolt
a5237a27ea les: add Skip overflow check to GetBlockHeadersMsg handler (#16891) 2018-06-05 10:23:00 +03:00
Péter Szilágyi
7a22e89080 Merge pull request #16894 from hadv/master
core: fix typo in comment code
2018-06-05 10:11:42 +03:00
hadv
e3a993d774 core: fix typo in comment code 2018-06-05 09:56:45 +07:00
Péter Szilágyi
ed40767355 Merge pull request #16800 from rjl493456442/memory_allowance_warining
cmd: cap cache size if exceeds reasonable range
2018-06-04 17:40:51 +03:00
rjl493456442
a20cc75b4a cmd/geth: cap cache allowance 2018-06-04 13:46:39 +03:00
Péter Szilágyi
b659718fd0 Merge pull request #16880 from holiman/http_timeouts
rpc: set timeouts for http server, see #16859
2018-06-04 13:38:23 +03:00
Anton Evangelatov
be2aec092d metrics: expvar support for ResettingTimer (#16878)
* metrics: expvar support for ResettingTimer

* metrics: use integers for percentiles; remove Overall

* metrics: fix edge-case panic for index-out-of-range
2018-06-04 13:05:16 +03:00
Martin Holst Swende
17f80cc2e2 rpc: set timeouts for http server, see #16859 2018-06-04 11:41:55 +02:00
Péter Szilágyi
143c4341d8 core, eth, trie: streaming GC for the trie cache (#16810)
* core, eth, trie: streaming GC for the trie cache

* trie: track memcache statistics
2018-06-04 10:47:43 +03:00
Felix Lange
3f33a7c8ce consensus/ethash: reduce keccak hash allocations (#16857)
Use Read instead of Sum to avoid internal allocations and
copying the state.

name                      old time/op  new time/op  delta
CacheGeneration-8          764ms ± 1%   579ms ± 1%  -24.22%  (p=0.000 n=20+17)
SmallDatasetGeneration-8  75.2ms ±12%  60.6ms ±10%  -19.37%  (p=0.000 n=20+20)
HashimotoLight-8          1.58ms ±11%  1.55ms ± 8%     ~     (p=0.322 n=20+19)
HashimotoFullSmall-8      4.90µs ± 1%  4.88µs ± 1%   -0.31%  (p=0.013 n=19+18)
2018-06-04 10:32:32 +03:00
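
A hedged sketch of the Read-versus-Sum idea (go-ethereum vendored its own Keccak implementation at the time; this uses golang.org/x/crypto/sha3, whose state type also implements io.Reader, and the keccakState name is illustrative):

    import (
        "hash"
        "io"

        "golang.org/x/crypto/sha3"
    )

    // keccakState combines hash.Hash with io.Reader; the sha3 state
    // satisfies both, so output can be squeezed via Read without the
    // state copy and extra allocation that Sum performs.
    type keccakState interface {
        hash.Hash
        io.Reader
    }

    func keccak256(data []byte) []byte {
        h := sha3.NewLegacyKeccak256().(keccakState)
        h.Write(data)
        out := make([]byte, 32)
        h.Read(out)
        return out
    }
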
Ryan Schneider
c8dcb9584e rpc: use HTTP request context as top-level context (#16861) 2018-06-02 12:26:47 +02:00
kiel barry
af28d12847 console: squash golint warnings (#16836) 2018-05-31 13:59:08 +02:00
kiel barry
0ad32d3be7 ethstats: fix last golint warning (#16837) 2018-05-30 11:36:02 +03:00
Péter Szilágyi
68b0d30d4a VERSION, params: begin 1.8.11 release cycle 2018-05-30 11:04:45 +03:00
Péter Szilágyi
eae63c511c params: release Geth 1.8.10 hotfix 2018-05-30 11:00:07 +03:00
Péter Szilágyi
ca34e8230e Merge pull request #16843 from karalabe/txpool-fix-deadlock
core: fix transaction event asynchronicity
2018-05-30 10:45:02 +03:00
Péter Szilágyi
342ec83d67 core: fix transaction event asynchronicity 2018-05-30 10:14:00 +03:00
Wenbiao Zheng
38c7eb0f26 trie: rename TrieSync to Sync and improve hexToKeybytes (#16804)
This removes a golint warning: type name will be used as trie.TrieSync by
other packages, and that stutters; consider calling this Sync.

In hexToKeybytes len(hex) is even and (even+1)/2 == even/2, remove the +1.
2018-05-29 17:48:43 +02:00
Péter Szilágyi
d51faee240 Merge pull request #16831 from abeln/patch-1
core/vm: fix typo in comment
2018-05-29 15:44:30 +03:00
kimmylin
426f62f1a8 core: improve test for TransactionPriceNonceSort (#16413) 2018-05-29 14:21:04 +02:00
Dmitry Shulyak
7677ec1f34 p2p/discv5: add egress/ingress traffic metrics to discv5 udp transport (#16369) 2018-05-29 13:46:09 +02:00
Abel Nieto
d258eee211 core/vm: fix typo in comment 2018-05-29 13:22:00 +02:00
kiel barry
84f8c0cc1f common: improve documentation comments (#16701)
This commit adds many comments and removes unused code.
It also removes the EmptyHash function, which had some uses
but was silly.
2018-05-29 12:42:21 +02:00
Andrea Franz
998f6564b2 whisper/shhclient: update call to shh_post to expect string instead of bool (#16757)
Fixes #16756
2018-05-29 04:36:31 -04:00
Smilenator
40a2c52397 eth/fetcher: reuse variables for hash and number (#16819) 2018-05-29 10:57:08 +03:00
Mohanson
a9c6ef6905 ethereum: fix a typo in FilterQuery{} (#16827)
Fix a spelling mistake in comment
2018-05-29 10:44:06 +03:00
Péter Szilágyi
ccc0debb63 VERSION, params: begin 1.8.10 release cycle 2018-05-28 13:01:20 +03:00
Péter Szilágyi
ff9b14617e params: release go-ethereum v1.8.9 2018-05-28 12:58:22 +03:00
Wenbiao Zheng
d6ed2f67a8 eth, node, trie: fix minor typos (#16802) 2018-05-24 15:55:20 +03:00
Péter Szilágyi
54294b45b1 Merge pull request #16803 from karalabe/trie-avoid-funccall
trie: cleaner logic, one less func call
2018-05-24 15:54:00 +03:00
Péter Szilágyi
d31802312a trie: cleaner logic, one less func call 2018-05-24 13:46:45 +03:00
Ryan Schneider
55b579e02c core: use a wrapped map to remove contention in TxPool.Get. (#16670)
* core: use a wrapped `map` and `sync.RWMutex` for `TxPool.all` to remove contention in `TxPool.Get`.

* core: Remove redundant `txLookup.Find` and improve comments on txLookup methods.
2018-05-23 15:55:42 +03:00
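
A minimal sketch of the wrapper described above, with illustrative names rather than the pool's actual types: readers share an RLock, so Get no longer contends on the pool-wide mutex.

    import "sync"

    type Transaction struct{} // stand-in for types.Transaction

    // txLookup guards the hash-to-transaction map with its own RWMutex.
    type txLookup struct {
        mu  sync.RWMutex
        all map[string]*Transaction // keyed by tx hash in the real pool
    }

    func (t *txLookup) Get(hash string) *Transaction {
        t.mu.RLock()
        defer t.mu.RUnlock()
        return t.all[hash]
    }

    func (t *txLookup) Add(hash string, tx *Transaction) {
        t.mu.Lock()
        defer t.mu.Unlock()
        t.all[hash] = tx
    }
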
Abel Nieto
be22ee8dda core/vm: fix typo in instructions.go (#16788) 2018-05-23 15:02:10 +03:00
Péter Szilágyi
56de337e57 Merge pull request #16722 from karalabe/trie-iterator-proofs
trie: support proof generation from the iterator
2018-05-23 13:51:09 +03:00
Péter Szilágyi
c934c06cc1 trie: support proof generation from the iterator 2018-05-23 13:02:20 +03:00
gary rong
fbf57d53e2 core/types: convert status type from uint to uint64 (#16784) 2018-05-23 11:10:24 +03:00
gary rong
6ce21a4744 vendor, ethdb: print warning log if leveldb is performing compaction (#16766)
* vendor: update leveldb package

* ethdb: print warning log if db is performing compaction

* ethdb: update annotation and log
2018-05-22 11:12:52 +03:00
kiel barry
9af364e42b node: all golint warnings fixed (#16773)
* node: all golint warnings fixed

* node: rm per peter

* node: rm per peter
2018-05-22 10:29:41 +03:00
kiel barry
09d44247f7 log: fixes for golint warnings (#16775) 2018-05-22 10:28:43 +03:00
kiel barry
0fe47e98c4 trie: fixes to comply with golint (#16771) 2018-05-21 23:41:31 +03:00
Péter Szilágyi
415969f534 Merge pull request #16769 from karalabe/async-broadcasts
eth: propagate blocks and transactions async
2018-05-21 13:42:25 +03:00
Péter Szilágyi
d9cee2c172 eth: propagate blocks and transactions async 2018-05-21 11:32:42 +03:00
Péter Szilágyi
ab6bdbd9b0 Merge pull request #16758 from hadv/fix/typos
Fix some typos in comment code and output log
2018-05-19 19:40:55 +03:00
Péter Szilágyi
953b5ac015 Merge pull request #16720 from rjl493456442/PreTxsEvent
all: collate new transaction events together
2018-05-19 19:39:28 +03:00
hadv
f2fdb75dd9 core, consensus: fix some typos in comment code and output log 2018-05-19 15:44:36 +07:00
Péter Szilágyi
f9c456e02d Merge pull request #16753 from karalabe/go-1.10.2
travis, appveyor: bump Go release to 1.10.2
2018-05-18 13:18:54 +03:00
Péter Szilágyi
579bd0f9fb travis, appveyor: bump Go release to 1.10.2 2018-05-18 12:24:04 +03:00
Péter Szilágyi
49719e21bc core, eth: minor txpool event cleanups 2018-05-18 12:08:24 +03:00
rjl493456442
a2e43d28d0 all: collate new transaction events together 2018-05-18 11:46:44 +03:00
Felix Lange
6286c255f1 p2p/enr: updates for discovery v4 compatibility (#16679)
This applies spec changes from ethereum/EIPs#1049 and adds support for
pluggable identity schemes.

Some care has been taken to make the "v4" scheme standalone. It uses
public APIs only and could be moved out of package enr at any time.

A couple of minor changes were needed to make identity schemes work:

- The sequence number is now updated in Set instead of when signing.
- Record is now copy-safe, i.e. calling Set on a shallow copy doesn't
  modify the record it was copied from.
2018-05-17 15:11:27 +02:00
Péter Szilágyi
f6bc65fc68 Merge pull request #16739 from karalabe/android-trusty
travis: try to upgrade android builder to trusty
2018-05-14 16:56:02 +03:00
Péter Szilágyi
ff8a033f18 travis: try to upgrade android builder to trusty 2018-05-14 16:42:26 +03:00
Guillaume Ballet
247b5f0369 accounts/abi: allow abi: tags when unpacking structs
Go code users can now tag event struct members with `abi:` to specify into which fields the event will be deserialized.

See PR #16648 for details.
2018-05-14 14:47:31 +02:00
Péter Szilágyi
49ec4f0cd1 VERSION, params: start 1.8.9 release cycle 2018-05-14 13:16:55 +03:00
Péter Szilágyi
2688dab48c params: release go-ethereum v1.8.8 2018-05-14 13:14:17 +03:00
Felföldi Zsolt
595b47e535 light: new CHT for mainnet and ropsten (#16736) 2018-05-14 12:23:58 +03:00
kiel barry
784aa83942 bmt: golint updates for this or self warning (#16628)
* bmt/*: golint updates for this or self warning

* Update bmt.go
2018-05-10 13:36:01 +03:00
ligi
fcc18f4c80 travis: use Android NDK 16b (#16562) 2018-05-10 13:33:13 +03:00
Felix Lange
53a18d2e27 event: document select case slice use and add edge case test (#16680)
Feed keeps active subscription channels in a slice called 'f.sendCases'.
The Send method tracks the active cases in a local variable 'cases'
whose value is f.sendCases initially. 'cases' shrinks to a shorter
prefix of f.sendCases every time a send succeeds, moving the successful
case out of range of the active case list.

This can be confusing because the two slices share a backing array. Add
more comments to document what is going on. Also add a test for removing
a case that is in 'f.sendCases' but not 'cases'.
2018-05-10 13:26:36 +03:00
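
A hedged illustration of the slice-prefix trick the description refers to: a finished case is swapped past the end of the active prefix and the slice is re-sliced, while the backing array stays shared with the full case list.

    import "reflect"

    // deactivate moves cases[i] behind the active prefix and returns the
    // shortened slice. Because the backing array is shared with the full
    // list of cases, the "removed" case is still reachable from there,
    // which is the confusing part the added comments document.
    func deactivate(cases []reflect.SelectCase, i int) []reflect.SelectCase {
        last := len(cases) - 1
        cases[i], cases[last] = cases[last], cases[i]
        return cases[:last]
    }
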
gary rong
7beccb29be all: get rid of error when creating memory database (#16716)
* all: get rid of error when creating a memory database

* core: clean up variables definition

* all: inline mdb definition
2018-05-09 15:24:25 +03:00
Andrea Franz
5dbd8b42a9 whisper/shhclient: update call to shh_generateSymKeyFromPassword to pass a string (#16668) 2018-05-09 13:40:59 +02:00
gary rong
4e7dc34ff1 eth/filter: check nil pointer when unsubscribe (#16682)
* eth/filter: check nil pointer when unsubscribe

* eth/filters, accounts, rpc: abort system if subscribe failed

* eth/filter: add crit log before exit

* eth/filter, event: minor fixes
2018-05-09 11:29:25 +03:00
kiel barry
4747aad160 eth: golint fixes to variable names (#16711) 2018-05-09 10:59:00 +03:00
kiel barry
4ea493e7eb cmd: various golint fixes (#16700)
* cmd: various golint fixes

* cmd: update to pr change request

* cmd: update to pr change request
2018-05-09 10:38:03 +03:00
Guilherme Salgado
c60f6f6214 p2p: don't discard reason set by Disconnect (#16559)
Peer.run was discarding the reason for disconnection sent to the disc
channel by Disconnect.
2018-05-09 01:20:20 +02:00
kiel barry
ba975dc093 crypto: fix golint warnings (#16710) 2018-05-09 01:17:09 +02:00
ligi
eab6e5a317 build: specify the key to use when invoking gpg:sign-and-deploy-file (#16696) 2018-05-09 01:13:53 +02:00
Ivan Daniluk
c4a4613d95 p2p/simulations/adapters: fix websocket log line parsing in exec adapter (#16667) 2018-05-08 17:05:27 +02:00
Domino Valdano
fedae95015 eth/filters: derive FilterCriteria from ethereum.FilterQuery (#16629) 2018-05-08 14:39:15 +02:00
kiel barry
864e80a48f p2p: fix some golint warnings (#16577) 2018-05-08 13:08:43 +02:00
kiel barry
a42be3b78d rlp: fix some golint warnings (#16659) 2018-05-08 11:48:07 +02:00
Péter Szilágyi
6cf0ab38bd core/rawdb: separate raw database access to own package (#16666) 2018-05-07 14:35:06 +03:00
Erichin
5463ed9996 mobile: add GetStatus Method for Receipt (#16598) 2018-05-07 11:32:51 +03:00
GagziW
d7be5c6619 common: changed if-else blocks to conform with golint (#16656) 2018-05-07 11:26:39 +03:00
Ivan Daniluk
d2fe83dc5c whisper/mailserver: pass init error to the caller (#16671)
* whisper/mailserver: pass init error to the caller

* whisper/mailserver: add returns to fmt.Errorf

* whisper/mailserver: check err in mailserver init test
2018-05-04 12:10:18 +03:00
Eli
16f3c31773 signer: fix golint errors (#16653)
* signer/*: golint fixes

Specifically naming and comment formatting for documentation

* signer/*: fixed naming error crashing build

* signer/*: corrected error

* signer/core: fix tiny error whitespace

* signer/rules: fix test refactor
2018-05-04 11:04:17 +03:00
kiel barry
5b3af4c3d1 eth: golint updates for this or self warning (#16632)
* eth/*:golint updates for this or self warning

* eth/*: golint updates for this or self warning, pr updated per feedback
2018-05-03 15:15:33 +03:00
kiel barry
60b433ab84 event: golint updates for this or self warning (#16631)
* event/*: golint updates for this or self warning

* event/*: golint updates for this or self warning, pr updated per feedback
2018-05-03 14:54:36 +03:00
YH-Zhou
fd3da7c69d consensus/ethash: fixed typo (#16665) 2018-05-03 12:44:47 +03:00
kiel barry
cd9a1d5b37 metrics: golint updates for this or self warning (#16635)
* metrics/*: golint updates for this or self warning

* metrics/*: golint updates for this or self warning, updated pr from feedback
2018-05-03 12:43:59 +03:00
kiel barry
2ad511ce09 rpc: golint error with context as last parameter (#16657)
* rpc/*: golint error with context as last parameter

* Update json.go
2018-05-03 11:41:22 +03:00
GagziW
541f299fbb accounts: changed if-else blocks to conform with golint (#16654) 2018-05-03 11:37:56 +03:00
GagziW
7c02933275 les: changed if-else blocks to conform with golint (#16658) 2018-05-03 11:35:06 +03:00
GagziW
f2447bd4c3 p2p: changed if-else blocks to conform with golint (#16660) 2018-05-03 11:33:39 +03:00
GagziW
ea1724de1a log: changed if-else blocks to conform with golint (#16661) 2018-05-03 11:30:12 +03:00
Péter Szilágyi
577d375a0d VERSION, params: begin v1.8.8 release cycle 2018-05-02 14:04:09 +03:00
Péter Szilágyi
66432f3821 params: release geth 1.8.7 2018-05-02 14:01:38 +03:00
Martin Holst Swende
5d4d79ae26 cmd/clef: documentation about setup (#16568)
clef: documentation about setup
2018-05-02 13:31:05 +03:00
Péter Szilágyi
6a01363d1d Merge pull request #16644 from ligi/reduce_aar_size
build: Add ldflags "-s -w" when building aar
2018-05-02 12:23:24 +03:00
ligi
58c4e033f4 build: Add ldflags -s -w when building aar
Smaller size on mobile is always good.
Might also solve our maven central upload problem
2018-05-02 11:21:24 +02:00
Péter Szilágyi
5449139ca2 Merge pull request #16569 from holiman/evm_blocknum
cmd/evm: use block number from genesis
2018-05-02 11:48:38 +03:00
Péter Szilágyi
579ac6287b Merge pull request #16576 from CrispinFlowerday/bugfix/local_underpriced_txs
core: ensure local transactions aren't discarded as underpriced
2018-05-02 11:34:47 +03:00
kiel barry
a7720b5926 core: golint updates for this or self warning (#16633) 2018-05-02 11:27:59 +03:00
kiel barry
670bae4cd3 internal: golint updates for this or self warning (#16634) 2018-05-02 11:26:21 +03:00
Eli
4a8d5d2b1e trie: golint iterator fixes (#16639) 2018-05-02 11:24:34 +03:00
Eli
d76c5ca532 tests: golint fixes for tests directory (#16640) 2018-05-02 11:20:19 +03:00
kiel barry
c1ea527573 accounts: golint updates for this or self warning (#16627) 2018-05-02 11:17:52 +03:00
Martin Holst Swende
8dfa4f46a9 evm/main: use blocknumber from genesis 2018-05-02 10:17:00 +02:00
Crispin Flowerday
0afd767537 core: ensure local transactions aren't discarded as underpriced
This fixes an issue where local transactions are discarded as
underpriced when the pool and queue are full.
2018-05-02 11:04:40 +03:00
Péter Szilágyi
448d17b8f7 Merge pull request #16630 from tstranex/master
vendor: Fix index out of range panic when size is bigger than 1 TiB
2018-05-02 10:58:14 +03:00
Péter Szilágyi
9922943b42 Merge pull request #16636 from reductionista/travis
travis.yml: remove obsolete brew-cask install
2018-05-02 10:41:50 +03:00
timothy
a1949d0788 vendor: fix leveldb crash when bigger than 1 TiB 2018-05-02 10:34:31 +03:00
Eli
9f6af6f812 whisper: Golint fixes in whisper packages (#16637) 2018-05-02 08:17:17 +02:00
Domino Valdano
ea171d5bd9 travis.yml: remove obsolete brew-cask install 2018-05-01 11:41:34 -07:00
Péter Szilágyi
1da33028ce Merge pull request #16588 from karalabe/tracer-dirty-fix
core, eth: fix tracer dirty finalization
2018-04-27 16:28:19 +03:00
Péter Szilágyi
7a7428a027 core, eth: fix tracer dirty finalization 2018-04-27 14:29:18 +03:00
xincaosu
cfe8f5fd94 trie: remove unused buf parameter (#16583) 2018-04-27 12:45:02 +03:00
Martin Klepsch
852aa143ac cmd/utils: point users to --syncmode under DEPRECATED (#16572)
Indicate that --light and --fast options are replaced by --syncmode
2018-04-27 12:32:06 +03:00
Felix Lange
b724d1aada core/state: cache missing storage entries (#16584) 2018-04-27 12:13:23 +03:00
kimmylin
86be91b3e2 core/types: avoid duplicating transactions on changing signer (#16435) 2018-04-24 10:39:03 +03:00
Felix Lange
e7067be94f cmd/geth, mobile: add memsize to pprof server (#16532)
* cmd/geth, mobile: add memsize to pprof server

This is a temporary change, to be reverted before the next release.

* cmd/geth: fix variable name
2018-04-23 16:20:39 +03:00
Péter Szilágyi
9586f2acc7 VERSION, params: begin release cycle 1.8.7 2018-04-23 15:54:11 +03:00
Péter Szilágyi
12683feca7 params: release v1.8.6 to fix docker images 2018-04-23 15:52:02 +03:00
Péter Szilágyi
49371bf255 Dockerfile: drop legacy discovery v5 port mappings 2018-04-23 15:51:30 +03:00
Péter Szilágyi
16a78b095e Merge pull request #16552 from karalabe/revert-docker-user
Dockerfile: revert the user change PR that broke all APIs
2018-04-23 15:27:24 +03:00
Péter Szilágyi
96a6c8ba0a Dockerfile: revert the user change PR that broke all APIs 2018-04-23 15:26:15 +03:00
Péter Szilágyi
7d2c730acb Merge pull request #16551 from ethereum/revert-16477-puppeth-dockerfile-permission-fix
Revert "cmd/puppeth: fix node deploys for updated dockerfile user"
2018-04-23 15:24:17 +03:00
Péter Szilágyi
abd881f6d4 Revert "cmd/puppeth: fix node deploys for updated dockerfile user" 2018-04-23 15:23:56 +03:00
Péter Szilágyi
4f91831aec Merge pull request #16550 from ethereum/revert-16478-fix-alltools-dockerfile
Revert "Dockerfile.alltools: fix invalid command"
2018-04-23 15:23:47 +03:00
Péter Szilágyi
3f2583d6d1 Revert "Dockerfile.alltools: fix invalid command" 2018-04-23 15:23:14 +03:00
Vie
26a4dbb467 cmd/geth: update the copyright year in the geth command usage (#16537) 2018-04-23 13:45:43 +03:00
Péter Szilágyi
50aa1dcfda VERSION, params: begin Geth 1.8.6 release cycle 2018-04-23 10:06:45 +03:00
Péter Szilágyi
cbdaa0ca2a params: release Geth v1.8.5 - Dirty Derivative² 2018-04-23 10:02:34 +03:00
Domino Valdano
7cf83cee52 eth/downloader: fix for Issue #16539 (#16546) 2018-04-23 10:01:21 +03:00
Fabian Raetz
744428cb03 vendor: update elastic/gosigar so that it compiles on OpenBSD (#16542) 2018-04-21 19:19:12 +02:00
Lorenzo Manacorda
b15eb665ee ethclient: add DialContext and Close (#16318)
DialContext allows users to pass a Context object for cancellation.
Close closes the underlying RPC connection.
2018-04-19 15:36:33 +02:00
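
A brief usage sketch of the two additions (the endpoint URL is a placeholder):

    import (
        "context"
        "log"
        "time"

        "github.com/ethereum/go-ethereum/ethclient"
    )

    func dialWithTimeout() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client, err := ethclient.DialContext(ctx, "wss://example.org:8546") // placeholder endpoint
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close() // closes the underlying RPC connection
        _ = client           // use the client as usual here
    }
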
gluk256
a16f12ba86 whisper/whisperv6: post returns the hash of sent message (#16495) 2018-04-19 15:34:24 +02:00
Martin Holst Swende
8feb31825e rpc: handle HTTP response error codes (#16500) 2018-04-19 15:32:43 +02:00
Wuxiang
8f8774cf6d all: fix various typos (#16533)
* fix typo

* fix typo

* fix typo
2018-04-19 16:32:02 +03:00
dm4
c514fbccc0 core/asm: accept uppercase instructions (#16531) 2018-04-19 15:31:30 +02:00
Felix Lange
52b046c9b6 rpc: clean up IPC handler (#16524)
This avoids logging accept errors on shutdown and removes
a bit of duplication. It also fixes some goimports lint warnings.
2018-04-18 12:27:20 +02:00
Zhenguo Niu
661f5f3dac cmd/utils: fix help template issue for subcommands (#16351) 2018-04-18 01:50:25 +02:00
dm4
49e38c970e core/asm: remove unused condition (#16487) 2018-04-18 01:08:31 +02:00
thomasmodeneis
ba1030b6b8 build: enable goimports and varcheck linters (#16446) 2018-04-18 00:53:50 +02:00
Péter Szilágyi
7605e63cb9 VERSION, params: begin v1.8.5 release cycle 2018-04-17 11:26:06 +03:00
Péter Szilágyi
2423ae01e0 params: release Geth v1.8.4 2018-04-17 11:20:53 +03:00
Felföldi Zsolt
92c6d13083 light: new CHTs (#16515) 2018-04-17 09:33:31 +03:00
Martin Holst Swende
ec3db0f56c cmd/clef, signer: initial poc of the standalone signer (#16154)
* signer: introduce external signer command

* cmd/signer, rpc: Implement new signer. Add info about remote user to Context

* signer: refactored request/response, made use of urfave.cli

* cmd/signer: Use common flags

* cmd/signer: methods to validate calldata against abi

* cmd/signer: work on abi parser

* signer: add mutex around UI

* cmd/signer: add json 4byte directory, remove passwords from api

* cmd/signer: minor changes

* cmd/signer: Use ErrRequestDenied, enable lightkdf

* cmd/signer: implement tests

* cmd/signer: made possible for UI to modify tx parameters

* cmd/signer: refactors, removed channels in ui comms, added UI-api via stdin/out

* cmd/signer: Made lowercase json-definitions, added UI-signer test functionality

* cmd/signer: update documentation

* cmd/signer: fix bugs, improve abi detection, abi argument display

* cmd/signer: minor change in json format

* cmd/signer: rework json communication

* cmd/signer: implement mixcase addresses in API, fix json id bug

* cmd/signer: rename fromaccount, update pythonpoc with new json encoding format

* cmd/signer: make use of new abi interface

* signer: documentation

* signer/main: remove redundant  option

* signer: implement audit logging

* signer: create package 'signer', minor changes

* common: add 0x-prefix to mixcaseaddress in json marshalling + validation

* signer, rules, storage: implement rules + ephemeral storage for signer rules

* signer: implement OnApprovedTx, change signing response (API BREAKAGE)

* signer: refactoring + documentation

* signer/rules: implement dispatching to next handler

* signer: docs

* signer/rules: hide json-conversion from users, ensure context is cleaned

* signer: docs

* signer: implement validation rules, change signature of call_info

* signer: fix log flaw with string pointer

* signer: implement custom 4byte database that saves submitted signatures

* signer/storage: implement aes-gcm-backed credential storage

* accounts: implement json unmarshalling of url

* signer: fix listresponse, fix gas->uint64

* node: make http/ipc start methods public

* signer: add ipc capability+review concerns

* accounts: correct docstring

* signer: address review concerns

* rpc: go fmt -s

* signer: review concerns+ baptize Clef

* signer,node: move Start-functions to separate file

* signer: formatting
2018-04-16 15:04:32 +03:00
gary rong
de2a7bb764 eth/downloader: wait for all fetcher goroutines to exit before terminating (#16509) 2018-04-16 11:37:48 +03:00
gary rong
6b2b328cdb ethdb: add leveldb write delay statistic (#16499) 2018-04-16 11:18:24 +03:00
Ryan Schneider
2a1fc3d155 miner: remove contention on currentMu for pending data retrievals (#16497) 2018-04-16 10:56:20 +03:00
Péter Szilágyi
60516c83b0 Merge pull request #16494 from karalabe/txpool-stable-pricedelete
core: txpool stable underprice drop order, perf fixes
2018-04-12 13:19:06 +03:00
Péter Szilágyi
db48d312e4 core: txpool stable underprice drop order, perf fixes 2018-04-12 12:54:22 +03:00
Péter Szilágyi
7e911b8e47 Merge pull request #16491 from holiman/fix_copy_again
core/state: fix ripemd-cornercase in Copy
2018-04-12 10:54:29 +03:00
Martin Holst Swende
7205366c9f core/state: fix ripemd-cornercase in Copy 2018-04-11 15:03:49 +02:00
Péter Szilágyi
5a79aca8b9 Merge pull request #16485 from holiman/fixcopycopy
core/state: fix bug in copy of copy State
2018-04-11 12:00:17 +03:00
Martin Holst Swende
0c7b99b8cc core/state: fix bug in copy of copy State 2018-04-11 10:23:01 +02:00
cpusoft
e7cc5b4160 les: add ps.lock.Unlock() before return (#16360) 2018-04-11 11:02:33 +03:00
Péter Szilágyi
34ecb495b6 Merge pull request #16481 from karalabe/go1.10.1
travis, appveyor: bump to Go 1.10.1
2018-04-11 10:56:32 +03:00
Elad_
2e247705cd travis.yml: add TEST_PACKAGES to speed up swarm testing (#16456)
This commit is meant to allow ecosystem projects such as ethersphere
to minimize CI build times by specifying an environment variable with
the packages to run tests on.

If the environment variable isn't defined the build script will test
all packages so this shouldn't affect the main go-ethereum repository.
2018-04-10 16:35:26 +02:00
Péter Szilágyi
95d5c22086 travis, appveyor: bump to Go 1.10.1 2018-04-10 17:07:58 +03:00
Felix Lange
3caf16b15f core: remove stray account creations in state transition (#16470)
The 'from' and 'to' methods on StateTransitions are reader methods and
shouldn't have inadvertent side effects on state.

It is safe to remove the check in 'from' because account existence is
implicitly checked by the nonce and balance checks. If the account has
non-zero balance or nonce, it must exist. Even if the sender account has
nonce zero at the start of the state transition or no balance, the nonce
is incremented before execution and the account will be created at that
time.

It is safe to remove the check in 'to' because the EVM creates the
account if necessary.

Fixes #15119
2018-04-10 15:33:25 +02:00
ligi
30deb6067f build: add -e and -X flags to get more information on #16433 (#16443) 2018-04-10 14:32:48 +03:00
Péter Szilágyi
989ab26028 Merge pull request #16478 from karalabe/fix-alltools-dockerfile
Dockerfile.alltools: fix invalid command
2018-04-10 14:13:52 +03:00
Felix Lange
c7ab3e5544 common: delete StringToAddress, StringToHash (#16436)
* common: delete StringToAddress, StringToHash

These functions are confusing because they don't parse hex, but use the
bytes of the string. This change removes them, replacing all uses of
StringToAddress(s) by BytesToAddress([]byte(s)).

* eth/filters: remove incorrect use of common.BytesToAddress
2018-04-10 14:12:07 +03:00
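
The replacement pattern, for reference (a sketch, not the commit's diff): raw bytes go through BytesToAddress, while hex strings should use HexToAddress.

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/common"
    )

    func addresses() {
        // Before, common.StringToAddress(s) silently used the string's raw
        // bytes; being explicit avoids the hex-parsing confusion.
        fromBytes := common.BytesToAddress([]byte("raw identifier bytes"))
        fromHex := common.HexToAddress("0x0000000000000000000000000000000000000001")
        fmt.Println(fromBytes, fromHex)
    }
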
Péter Szilágyi
29213b1f8f Dockerfile.alltools: fix invalid command 2018-04-10 14:02:56 +03:00
Péter Szilágyi
149f706fde Merge pull request #16477 from karalabe/puppeth-dockerfile-permission-fix
cmd/puppeth: fix node deploys for updated dockerfile user
2018-04-10 13:40:52 +03:00
Péter Szilágyi
7c1e9a5004 cmd/puppeth: fix node deploys for updated dockerfile user 2018-04-10 13:13:05 +03:00
Péter Szilágyi
39f4c80155 Merge pull request #15225 from holiman/test_removefrom_dirtyset
Change handling of dirty objects in state
2018-04-10 12:28:30 +03:00
Martin Holst Swende
8c31d2897b core: add blockchain benchmarks 2018-04-10 11:20:06 +02:00
Martin Holst Swende
14c9215dd3 state: handle nil in journal dirties 2018-04-10 11:20:02 +02:00
gary rong
1100e8ba63 eth/downloader: flush state sync data before exit (#16280) 2018-04-09 14:46:27 +02:00
Felix Lange
0fac705ed0 compression/rle: delete RLE compression (#16468) 2018-04-09 13:47:06 +02:00
Ivo Georgiev
315b9b18df ethclient: remove empty object in newHeads subscription call (#16454) 2018-04-09 12:40:58 +02:00
DoubleWoodH
8de655ef3a bmt: fix comment typos (#16461) 2018-04-09 12:38:01 +02:00
dm4
3ebcf92b42 cmd/evm: print vm output when debug flag is on (#16326) 2018-04-06 12:43:36 +02:00
Zhenguo Niu
c43792a42c cmd/geth: update template for 'geth bug' command (#16350) 2018-04-06 12:42:11 +02:00
Federico Gimenez
50dbe8e244 Dockerfile: use non-privileged user account (#16052) 2018-04-05 14:14:32 +02:00
Steven Roose
ec8ee611ca core/types: remove String methods from struct types (#16205)
Most of these methods did not contain all the relevant information
inside the object and were not using a similar formatting type.
Moreover, the existence of a suboptimal String method breaks usage
with more advanced data dumping tools like go-spew.
2018-04-05 14:13:02 +02:00
Giovanni HoSang
1e248f3a6e README: change 'built in' to 'built-in' 2018-04-04 15:25:34 +02:00
Ricardo Domingos
6ab9f0a19f accounts/abi: improve test coverage (#16044) 2018-04-04 13:42:36 +02:00
Yusup
7aad81f881 eth: fix typos (#16414) 2018-04-04 12:25:02 +02:00
Nguyen Sy Thanh Son
2a4bd55b43 cmd/geth: remove relOracle variable (#16434) 2018-04-04 12:22:26 +02:00
Jia Chenhui
5909482fb5 core/state: avoid redundant addition to code size cache (#16427) 2018-04-03 17:13:19 +02:00
David Huie
d1af4e1a9e crypto/secp256k1: catch curve parameter parse errors (#16392) 2018-04-03 17:12:00 +02:00
Li Xuanji
6cdfb9a3eb .gitattributes: enable solidity highlighting on github (#16425) 2018-04-03 15:21:24 +02:00
Felix Lange
a095b84ec5 travis.yml: remove sudo requirement for PPA and Azure purge builders (#16404)
This is supposed to fix the FTP upload issue according to
travis-ci/travis-ci#9391.
2018-03-28 15:35:41 +03:00
Péter Szilágyi
d985b9052a core/state: avoid linear overhead on journal dirty listing 2018-03-28 09:32:02 +03:00
Martin Holst Swende
958ed4f3d9 core/state: rework dirty handling to avoid quadratic overhead 2018-03-28 09:29:28 +03:00
Jia Chenhui
1a8894b3d5 core/state: uniform parameter style (#16398)
- Uniform code style.
2018-03-28 09:26:37 +03:00
Guillaume Ballet
80449719bd whisper: fix issue in topic list copy (#16381)
- Fixes #16271. What was appended was a pointer to
an object that changes during the iteration.
- The topic is allocated as a 4-byte array, fill partial topics
with 0s. Partial topics are currently disabled, but would
crash as they rely on the presence of byte number 3.
2018-03-27 17:26:08 +02:00
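
The underlying Go pitfall, in a generic hedged sketch (not the whisper code itself, and assuming pre-Go 1.22 loop-variable semantics): taking the address of the range variable stores one pointer whose target keeps changing.

    type Topic [4]byte // whisper topics are 4-byte arrays

    func copyTopics(topics []Topic) (buggy, fixed []*Topic) {
        for _, t := range topics {
            buggy = append(buggy, &t) // BUG: every entry aliases the loop variable
        }
        for i := range topics {
            tt := topics[i]
            fixed = append(fixed, &tt) // each entry gets its own copy
        }
        return
    }
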
Felföldi Zsolt
45bd4fedde light: new CHT for ropsten (#16393) 2018-03-27 10:09:15 +03:00
Péter Szilágyi
d763e20d55 Merge pull request #16394 from hydai/fix_typo
core/vm: Fixed typos in core/vm/interpreter.go
2018-03-27 10:08:58 +03:00
hydai
6134990709 core/vm: Fixed typos in core/vm/interpreter.go 2018-03-27 12:29:04 +08:00
Felix Lange
85ea9159d0 params, VERSION: v1.8.4 unstable 2018-03-26 18:09:56 +02:00
Felix Lange
329ac18ef6 params: v1.8.3 stable 2018-03-26 18:09:06 +02:00
Felix Lange
89cc604a50 light: new mainnet CHT (#16390) 2018-03-26 18:02:12 +03:00
Guillaume Ballet
cf799e5eaa whisper: switch all remaining components from v5 to v6 2018-03-26 16:36:14 +02:00
Péter Szilágyi
c053f1146d Merge pull request #16388 from hydai/fix_comments
core/vm: Fixed typos
2018-03-26 17:05:35 +03:00
hydai
c3dc814fea core/vm: Fixed typo in core/vm/evm.go 2018-03-26 21:40:00 +08:00
Zhenguo Niu
db9b2f5405 cmd/puppeth: add constraints to network name (#16336)
* cmd/puppeth: add constraints to network name

* cmd/puppeth: update usage of network arg

* cmd/puppeth: avoid package dependency on utils
2018-03-26 15:21:20 +03:00
Felix Lange
e9b5e22ad1 rpc: limit chunked requests (#16343) 2018-03-26 14:46:37 +03:00
Jia Chenhui
e506d384e9 core/state: fix typo (#16370) 2018-03-26 14:45:34 +03:00
Péter Szilágyi
dd708c1636 Merge pull request #16319 from rjl493456442/dump_preimages
cmd: implement preimage dump and import cmds
2018-03-26 14:45:01 +03:00
Péter Szilágyi
495bdb0c71 cmd: export preimages in RLP, support GZIP, uniform with block export 2018-03-26 14:08:01 +03:00
hydai
7c131f4d6d core/asm: fixed typo (posititon -> position) (#16366) 2018-03-26 13:48:39 +03:00
hydai
84c5db5409 core/vm: remove JIT VM codes (#16362) 2018-03-26 13:48:04 +03:00
David Huie
23ac783332 ecies: drop randomness parameter from PrivateKey.Decrypt (#16374)
The parameter `rand` is unused in `PrivateKey.Decrypt`. Decryption in
the ECIES encryption scheme is deterministic, so randomness isn't
needed.
2018-03-26 13:46:18 +03:00
Péter Szilágyi
e9a1d8de34 Merge pull request #16387 from karalabe/evm-polsihes
core: minor evm polishes and optimizations
2018-03-26 13:44:36 +03:00
rjl493456442
b6b6f52ec8 cmd: implement preimage dump and import cmds 2018-03-26 12:51:46 +03:00
Péter Szilágyi
1fae50a199 core: minor evm polishes and optimizations 2018-03-26 12:28:46 +03:00
Guillaume Ballet
3d013c1939 whisper: some components are still using v5, switch to v6 2018-03-22 15:48:52 +01:00
Martin Holst Swende
933972d139 Merge pull request #16256 from epiclabs-io/unpack_one_arg_event
Fix issue unmarshaling single parameter events from abigen generated go code #16208
2018-03-21 10:11:37 +01:00
Felix Lange
b1917ac9a3 build: add GOBIN to PATH for gomobile (#16344)
* build: add GOBIN to PATH for gomobile

* build: install gobind alongside gomobile
2018-03-20 02:00:45 +09:00
Péter Szilágyi
1203c6a237 crypto/bn256: full switchover to cloudflare's code (#16301)
* crypto/bn256: full switchover to cloudflare's code

* crypto/bn256: only use cloudflare for optimized architectures

* crypto/bn256: upstream fallback for non-optimized code

* .travis, build: drop support for Go 1.8 (need type aliases)

* crypto/bn256/cloudflare: enable curve mul lattice optimization
2018-03-20 01:13:54 +09:00
Guillaume Ballet
0965761a45 whisper: Notify Vlad and Guillaume of whisper PRs (#16340) 2018-03-19 22:27:46 +09:00
Martin Holst Swende
faed47b3c5 Merge pull request #15990 from markya0616/sim_backend_block_hash
accounts/abi, core: add AddTxWithChain in BlockGen for simulation
2018-03-19 08:29:54 +01:00
stompesi
fe6cf00f48 miner: remove duplicated code (#15968) 2018-03-16 14:13:52 +02:00
Péter Szilágyi
322006d0f2 Merge pull request #16315 from karalabe/drop-vagrant
containers: drop vagrant support, noone's maintaining it
2018-03-15 18:59:50 +02:00
Péter Szilágyi
56e2376e69 containers: drop vagrant support, noone's maintaining it 2018-03-14 13:23:40 +02:00
hydai
a063876749 core/asm: fixed typo (labal -> label) (#16313) 2018-03-14 11:59:06 +02:00
Péter Szilágyi
62bc179bb9 Merge pull request #16310 from karalabe/websocket-request-limits
rpc: enforce the 128KB request limits on websockets too
2018-03-13 14:35:18 +02:00
Péter Szilágyi
555f42cfd8 rpc: enforce the 128KB request limits on websockets too 2018-03-13 13:55:26 +02:00
Anton Evangelatov
6a2d2869f6 github: more information bot configuration (#16298) 2018-03-12 10:56:59 +02:00
Felföldi Zsolt
1488fdaf19 cmd/utils: fix maxpeers vs lightpeers logic (#16125) 2018-03-09 11:55:03 +02:00
gary rong
77da203547 eth: update highest block we know during the sync if a higher one is found (#16283)
* eth: update highest block we know during the sync if a higher one is found

* eth: avoid useless sync in fast sync
2018-03-09 11:51:30 +02:00
Péter Szilágyi
307846d046 Merge pull request #16287 from razum2um/master
Allow any vhost for wallet deployed by puppeth
2018-03-09 11:06:18 +02:00
Péter Szilágyi
38e2071df8 Merge pull request #16289 from jeffwalsh/remove-add-std-arg
common/compiler: remove "--add-std" arg, deprecated in solidity 0.4.21
2018-03-09 11:05:09 +02:00
Jeffery Robert Walsh
a25561dfb4 common/compiler: remove "--add-std" arg, deprecated in solidity 0.4.21 2018-03-08 18:05:56 -08:00
Vlad Bokov
52697fb1b2 cmd/puppeth: allow any vhost in wallet 2018-03-09 00:02:31 +07:00
Péter Szilágyi
b2f53f9621 Merge pull request #16128 from karalabe/go1.10
travis, Dockerfile, appveyor, build: bump to Go 1.10
2018-03-08 17:26:14 +02:00
Péter Szilágyi
669aba8e2c travis, Dockerfile, appveyor, build: bump to Go 1.10 2018-03-08 16:34:26 +02:00
Kurkó Mihály
39c16c8a1e cmd, ethdb, vendor: integrate leveldb iostats (#16277)
* cmd, dashboard, ethdb, vendor: send iostats to dashboard

* ethdb: change names

* ethdb: handle parsing errors

* ethdb: handle iostats syntax error

* ethdb: r -> w
2018-03-08 14:59:00 +02:00
Martin Holst Swende
4871e25f5f core/vm: optimize eq, slt, sgt and iszero + tests (#16047)
* vm: optimize eq, slt, sgt and iszero + tests

* core/vm: fix error in slt/sgt, found by vmtests. Added testcase

* core/vm: make slt/sgt cleaner
2018-03-08 14:48:19 +02:00
Péter Szilágyi
85d5f2c661 Merge pull request #16285 from karalabe/fix-resend-optional-param
internal/ethapi: make resent gas params optional
2018-03-08 12:52:38 +02:00
Péter Szilágyi
28ef23f446 internal/ethapi: make resent gas params optional 2018-03-08 12:29:42 +02:00
Kurkó Mihály
704840a8ad cmd, dashboard: use webpack dev server, remove custom assets (#16263)
* cmd, dashboard: remove custom assets, webpack dev server

* dashboard: yarn commands, small fixes
2018-03-08 10:22:21 +02:00
Péter Szilágyi
3ec1b9a92d Merge pull request #16275 from karalabe/bump-duktape
vendor: bump duktape to get rid of build warning
2018-03-07 16:29:31 +02:00
Péter Szilágyi
fc1f3f2618 vendor: bump duktape to get rid of build warning 2018-03-07 14:47:09 +02:00
Mark
cddb529d70 accounts/abi: normalize method name to a camel-case string (#15976) 2018-03-07 14:34:53 +02:00
Kyuntae Ethan Kim
63687f04e4 core: check transaction/receipt count match when reconstructing blocks (#16272) 2018-03-07 12:05:14 +02:00
Péter Szilágyi
d43ffdbf6a Merge pull request #16240 from cuiweixie/txpool
core: should enqueue the invalid txs anyway
2018-03-07 11:45:28 +02:00
Kyuntae Ethan Kim
f6bef558aa eth: fixed typo (#16274) 2018-03-07 11:15:54 +02:00
Péter Szilágyi
2b5d1a4a4c core: update txpool tests for the removal fix 2018-03-07 10:58:11 +02:00
cui
f8601430fd core: should enqueue the invalid txs anyway
even if the pending queue is empty we should enqueue the invalid txs
2018-03-07 10:07:50 +02:00
gluk256
f1d440a437 whisper: final refactoring (#16259)
whisper: final refactoring
2018-03-06 23:37:43 +01:00
Anton Evangelatov
746392cfd2 swarm/storage: disable treechunker test while it is flaky (#16254) 2018-03-05 17:23:33 +01:00
Javier Peletier
60a999f238 accounts/abi: Modified unpackAtomic to accept struct lvalues 2018-03-05 16:01:40 +01:00
Javier Peletier
13b566e06e accounts/abi: Add one-parameter event test case from enriquefynn/unpack_one_arg_event 2018-03-05 16:00:03 +01:00
Péter Szilágyi
1548518644 VERSION, params: begin 1.8.3 release cycle 2018-03-05 15:52:21 +02:00
Péter Szilágyi
b8b9f7f447 params: release Geth 1.8.2 stable 2018-03-05 15:48:48 +02:00
Anton Evangelatov
c636ac4045 github: config for probot-stale bot (#16235)
* github: config for probot-stale bot

* github: use stale label, instead of wontfix
2018-03-05 15:23:26 +02:00
Péter Szilágyi
bd6879ac51 core/vm, crypto/bn256: switch over to cloudflare library (#16203)
* core/vm, crypto/bn256: switch over to cloudflare library

* crypto/bn256: unmarshal constraint + start pure go impl

* crypto/bn256: combo cloudflare and google lib

* travis: drop 386 test job
2018-03-05 14:33:45 +02:00
Péter Szilágyi
223fe3f26e Merge pull request #16229 from karalabe/evm-call-fix
cmd/evm, core/vm, internal/ethapi: don't disable call gas metering
2018-03-05 14:23:48 +02:00
Péter Szilágyi
b7e57ca1d0 cmd/evm, core/vm, internal/ethapi: don't disable call gas metering 2018-03-05 14:01:13 +02:00
Martin Holst Swende
478143d69a utils: fix #16138 by checking if vhosts flag is set (#16141)
* utils: fix #16138 by checking if vhosts flag is set

* utils,node: fix defaults for rpcvhosts

* node,utils: address review concerns
2018-03-05 13:02:32 +02:00
Guillaume Ballet
abed63c38f Merge pull request #16250 from gluk256/317-fatalf
whisper: refactoring go-routines workflow

Move the call mailServer.Init() down (to the bottom of the function) because if the function initialize() completes successfully, then it will be followed by mailServer.Close() in shutdown(). The workflow of the corresponding goroutines is clearer now.
2018-03-05 10:49:25 +01:00
Kyuntae Ethan Kim
d429a92f09 consensus/ethash: fixed typo (#16253) 2018-03-05 11:32:56 +02:00
Vlad
61a061c9b4 whisper: refactoring go-routines 2018-03-04 23:35:05 +01:00
protolambda
0b814d32f8 accounts/abi: Abi binding support for nested arrays, fixes #15648, including nested array unpack fix (#15676)
* accounts/abi/bind: support for multi-dim arrays

Also:
- reduce usage of regexes a bit.
- fix minor Java syntax problems

Fixes #15648

* accounts/abi/bind: Add some more documentation

* accounts/abi/bind: Improve code readability

* accounts/abi: bugfix for unpacking nested arrays

The code previously assumed the arrays/slices were always one level
deep, even though the packing supports nested arrays.

The current code for unpacking doesn't return the "consumed" length, so
this fix had to work around that by calculating it (i.e. packing and
getting the resulting length) after unpacking the array element.
It's far from ideal, but unpacking behaviour is fixed now.

* accounts/abi: Fix unpacking of nested arrays

Removed the temporary workaround of packing to calculate size, which was
incorrect for slice-like types anyway.
Full size of nested arrays is used now.

* accounts/abi: deeply nested array unpack test

Test unpacking of an array nested more than one level.

* accounts/abi: Add deeply nested array pack test

Same as the deep nested array unpack test, but the other way around.

* accounts/abi/bind: deeply nested arrays bind test

Test the usage of bindings that were generated
for methods with multi-dimensional (and not
just a single extra dimension, like foo[2][3])
array arguments and returns.

edit: trigger rebuild, CI failed to fetch linter module.

* accounts/abi/bind: improve array binding

wrapArray uses a regex now, and arrayBindingJava is improved.

* accounts/abi: Improve naming of element size func

The full step size for unpacking an array
 is now retrieved with "getFullElemSize".

* accounts/abi: support deeply nested array args

Previously, the code only considered the outer-size of the array,
ignoring the size of the contents. This was fine for most types,
but nested arrays are packed directly into it, and count towards
the total size. This resulted in arguments following a nested
array replicating some of the binary contents of the array.

The fix: for arrays, calculate their complete contents size:
 count the arg.Type.Elem.Size when Elem is an Array, and
 repeat when their child is an array too, etc.
The count is the number of 32 byte elements, similar to how it
 previously counted, but nested.

* accounts/abi: Test deep nested arr multi-arguments

Arguments with a deeply nested array should not cause the next arguments
to be read from the wrong position.
2018-03-04 23:24:17 +01:00
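
A hedged sketch of the size accounting described in the 'support deeply nested array args' item above (the Type shape and names are illustrative, not the accounts/abi structs): for statically-sized nested arrays, the number of 32-byte words an argument occupies is the product of the lengths at every level, so for example a uint256[2][3] argument spans 3*2 = 6 words.

    // Type is an illustrative stand-in for an ABI argument type.
    type Type struct {
        IsArray bool
        Size    int   // array length when IsArray is true
        Elem    *Type // element type when IsArray is true
    }

    // fullWordCount returns how many 32-byte words a statically-sized
    // value of t occupies, counting nested array contents recursively.
    func fullWordCount(t Type) int {
        if !t.IsArray {
            return 1
        }
        return t.Size * fullWordCount(*t.Elem)
    }
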
Guillaume Ballet
7b1d637098 Merge pull request #16245 from gluk256/311-close-channel
whisper: close the `done` channel in one location
2018-03-04 23:22:26 +01:00
Vlad
95cca85d6d whisper: minor refactoring 2018-03-03 21:37:16 +01:00
gluk256
66cd41af1e Merge pull request #16231 from gluk256/303-reader
whisper: filereader mode introduced to wnode
2018-03-03 09:40:01 +01:00
gluk256
fa375955ad whisper/whisperv6: delete unused function (#16234) 2018-03-03 00:54:15 +01:00
Felföldi Zsolt
5ad7b9123c light: new CHTs (#16233) 2018-03-03 00:52:54 +01:00
Péter Szilágyi
ca64a122d3 eth/downloader: save and load trie sync progress (#16224) 2018-03-03 00:52:39 +01:00
Felix Lange
12f4d28411 internal/debug: add support for mutex profiles (#16230) 2018-03-03 00:52:21 +01:00
Vlad
6219a33822 whisper: filereader mode introduced to wnode 2018-03-02 14:54:54 +01:00
Péter Szilágyi
49bcb5fbd5 Merge pull request #16228 from karalabe/faucet-background-skip
cmd/faucet: update state in background, skip when busy
2018-03-02 12:13:19 +02:00
Péter Szilágyi
6f13e515f4 cmd/faucet: update state in background, skip when busy 2018-03-02 12:02:23 +02:00
Zhenguo Niu
d520bf4503 cmd/swarm: fix some typos in manifest cmd (#16227)
Replace "atleast" with "at least" in the manifest error message.
2018-03-02 10:59:26 +01:00
Guillaume Ballet
a76e46e3d7 Merge pull request #16223 from gluk256/300-msg-serialiation
whisper: topics replaced by bloom filters in mailserver communication
2018-03-01 20:27:20 +01:00
Anton Evangelatov
3ca3fffdf0 metrics: fix flaky Example metrics test (#16222)
* metrics: add sleep to test in order to get predictable output

* metrics: relax constraints on timer test
2018-03-01 19:55:31 +02:00
Vlad
ee75a90ab4 whisper: topics replaced by bloom filters 2018-03-01 16:04:09 +01:00
gluk256
5a150e1b77 whisper: serious security issue fixed (#16219)
The diagnostic tool was saving the unencrypted version of the messages, which is an obvious
security flaw. As of this commit:
  * encrypted messages are saved instead of plain text.
  * all messages are stored, even those created by the user of wnode.
2018-03-01 09:34:46 +01:00
Guillaume Ballet
9b4e182ce5 Merge pull request #16210 from gluk256/288-filter-optimization
whisper: message filtering optimization

Only run the message through filters that registered their interest.
2018-02-28 17:28:09 +01:00
Vlad
d24d10a764 whisper: style fixes 2018-02-28 15:05:35 +01:00
Guillaume Ballet
52bb0a1ec7 Merge pull request #16214 from b00ris/whisperv6_datarace
whisper: fixed dataraces in peer unit tests
2018-02-28 14:31:19 +01:00
Guillaume Ballet
1843615456 Merge pull request #16206 from gluk256/277-mailserver
whisper: mailserver no longer supports the signature validation

Mailserver is provided as an example, but client validation belongs to the upper layer protocol and
need not be covered in this example. The check that was previously available hinders the switch
to libp2p so we agreed not to include that check in that example code anymore.
2018-02-28 14:10:08 +01:00
Péter Szilágyi
7843192c8e Merge pull request #16217 from karalabe/rpc-receipt-fetch-fix
internal/ethapi: fix getTransactionReceipt
2018-02-28 13:40:17 +02:00
b00ris
62c239f608 whisper: fix typo 2018-02-28 14:38:42 +03:00
Péter Szilágyi
8f43c97433 Merge pull request #16207 from karalabe/drop-go1.7
travis, build, consensus: drop support for Go 1.7
2018-02-28 13:29:58 +02:00
Péter Szilágyi
ba7b384019 internal/ethapi: fix getTransactionReceipt 2018-02-28 12:40:15 +02:00
Mark Rushakoff
98ec5e5011 core/asm: rename isAlphaNumeric to isLetter (#16212)
The function would return false for numbers, so isLetter is a more
accurate description of the behavior.
2018-02-28 12:20:07 +02:00
b00ris
cf52d5c91f whisper: fixed datarace 2018-02-28 09:50:36 +03:00
Vlad
a69cb3b4ff whisper: comment updated 2018-02-28 00:39:38 +01:00
Vlad
c733792be4 whisper: refactoring 2018-02-27 23:38:20 +01:00
Vlad
014d8d9837 whisper: message filtering optimized 2018-02-27 21:16:15 +01:00
Péter Szilágyi
17b0e226d3 travis, build, consensus: drop support for Go 1.7 2018-02-27 18:25:56 +02:00
Vlad
5e30a5f66e whisper: test fixed 2018-02-27 15:52:10 +01:00
Vlad
dadf4d53ab whisper: mailserver no longer supports the signature validation 2018-02-27 15:45:00 +01:00
Elad Nachmias
b574b57766 swarm: give correct error on 0x hash prefix (#16195)
- added a case error struct that contains information about certain error cases
in which we would like to output more information to the client
- added a validation method that iterates and adds the information that is
stored in the error cases
2018-02-27 15:32:38 +02:00
Anton Evangelatov
18bb3da55e node: fill StandardCounters as part of debugapi/metrics (#16054) 2018-02-27 15:30:07 +02:00
Elad Nachmias
dd389e595f eth: added travis build badge (#16117)
* eth: added travis build status for master branch

* README: fix travis badge order, link to CI
2018-02-27 13:04:47 +02:00
Saulius Grigaitis
c41f1a3e23 puppeth: fix Parity Chain Spec parameter GasLimitBoundDivision (#16188) 2018-02-27 12:56:51 +02:00
Andrey Petrov
2e9c8fd4fb eth, les: allow exceeding maxPeers for trusted peers (#16189)
Fixes #3326, #14472
2018-02-27 12:52:59 +02:00
Guillaume Ballet
4c845bdc27 Merge pull request #16198 from gluk256/266-wnode
whisper: refactor wnode to systematically store messages if a directory is provided
2018-02-26 21:23:51 +01:00
Vlad
f4e676cccd whisper: comments updated 2018-02-26 19:26:36 +01:00
JU HYEONG PARK
61c9730b2d p2p: fix doEncHandshake documentation (#16184) 2018-02-26 17:22:46 +01:00
Vlad
6e0667fa06 whisper: wnode updated - all messages are saved if savedir param is given 2018-02-26 13:58:04 +01:00
Martin Holst Swende
f83237573f core: make current*Block atomic, and accessor functions mutex-free (#16171)
* core: make current*Block atomic, and accessor functions mutex-free

* core: fix review concerns

* core: fix error in atomic assignment

* core/light: implement atomic getter/setter for headerchain
2018-02-26 11:53:10 +02:00
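
A hedged sketch of the pattern (the real BlockChain keeps more state; atomic.Value is what makes the accessor mutex-free):

    import "sync/atomic"

    type Block struct{ Number uint64 } // simplified stand-in for types.Block

    type BlockChain struct {
        currentBlock atomic.Value // holds a *Block
    }

    // CurrentBlock is lock-free: readers never block writers.
    func (bc *BlockChain) CurrentBlock() *Block {
        return bc.currentBlock.Load().(*Block)
    }

    func (bc *BlockChain) writeHead(b *Block) {
        bc.currentBlock.Store(b)
    }
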
Domino Valdano
d398d04e27 cmd/geth: fix broken links to JavaScript-Console wiki in cmd line help (#16183)
* Fixed broken link to JavaScript-Console wiki in cmd line help

* cmd/geth: Added missing r in 'JavaScript'
2018-02-26 11:38:17 +02:00
Anton Evangelatov
764878d988 contracts/chequebook: increase interval between auto deposits (#16178) 2018-02-26 11:36:26 +02:00
cooganb
22fc6928d7 swarm: creates Swarm landing page for browser 'localhost:xxxx/' GET request when running Swarm (#15926)
* swarm: began work on GetHandleFile method re: issue #155

* swarm: now able to serve landing page template

* swarm: added landing page template

* swarm: landing page has working input

* swarm: fixed CSS issue in template

* swarm: deleted extra lines

* swarm: deleted time header and made redirect a relative path

* swarm: removed code mistakenly left
2018-02-26 09:56:40 +01:00
Guillaume Ballet
423c8bb1d8 Merge pull request #16176 from gluk256/255-refactoring
whisper: filters no longer get removed after a while
2018-02-23 18:02:51 +01:00
Anton Evangelatov
114738982e swarm/metrics: introduce metrics export flag (#16177) 2018-02-23 16:22:16 +01:00
Vlad
6919c36432 whisper: refactoring 2018-02-23 14:52:25 +01:00
Anton Evangelatov
dcca613a0b swarm: initial instrumentation (#15969)
* swarm: initial instrumentation with go-metrics

* swarm: initialise metrics collection and add ResettingTimer to HTTP requests

* swarm: update metrics flags names. remove redundant Timer.

* swarm: rename method for periodically updating gauges

* swarm: finalise metrics after feedback

* swarm/network: always init kad metrics containers

* swarm/network: off-by-one index in metrics containers

* swarm, metrics: resolved conflicts
2018-02-23 14:19:59 +01:00
Lewis Marshall
b677a07d36 swarm/api/http: Fix using deprecated bzzr scheme (#16152)
Without this, deprecated bzzr requests just return an empty response.

Signed-off-by: Lewis Marshall <lewis@lmars.net>
2018-02-23 14:09:01 +01:00
gluk256
4702ace5f7 Merge pull request #16172 from gluk256/244-light-client
whisper: light client mode introduced
2018-02-23 14:07:29 +01:00
Péter Szilágyi
89f914c030 core: flush out trie cache more meaningfully on stop (#16143)
* core: flush out trie cache more meaningfully on stop

* core: upgrade legacy tests to chain maker
2018-02-23 14:02:33 +02:00
Guillaume Ballet
fb5d085234 Merge pull request #16146 from status-im/pombeirp/whisperv6-peer-race-cond-fix
Fix race condition in whisperv6/peer.go
2018-02-23 11:49:47 +01:00
Martin Holst Swende
44d40ffce1 core, vm, common: define constantinople fork + shift (#16045)
* core, vm, common: define constantinople fork, start implementation of shift instructions

* vm: more testcases

* vm: add tests for intpool erroneous intpool handling

* core, vm, common: fix constantinople review concerns

* vm: add string<->op definitions for new opcodes
2018-02-23 12:32:57 +02:00
Vlad
d7b4b40cb6 whisper: light client mode introduced 2018-02-23 11:10:28 +01:00
Anton Evangelatov
ae9f97221a metrics: pull library and introduce ResettingTimer and InfluxDB reporter (#15910)
* go-metrics: fork library and introduce ResettingTimer and InfluxDB reporter.

* vendor: change nonsense/go-metrics to ethersphere/go-metrics

* go-metrics: add tests. move ResettingTimer logic from reporter to type.

* all, metrics: pull in metrics package in go-ethereum

* metrics/test: make sure metrics are enabled for tests

* metrics: apply gosimple rules

* metrics/exp, internal/debug: init expvar endpoint when starting pprof server

* internal/debug: tiny comment formatting fix
2018-02-23 11:56:08 +02:00
Péter Szilágyi
7f74bdf8dd Merge pull request #16164 from karalabe/les-receipt-fix-optimal
eth, les, light: filter on logs only, derive receipts on demand
2018-02-23 11:50:16 +02:00
Balint Gabor
a1984ce727 Merge pull request #15748 from janos/multiple-ens-endpoints
swarm: repeated --ens-api CLI flag (#14386)
2018-02-22 23:15:13 +01:00
Janos Guljas
6a9730edaa swarm, cmd/swarm: Merge branch 'master' into multiple-ens-endpoints 2018-02-22 18:51:34 +01:00
Ivan Daniluk
8522b31221 p2p: remove unused code (#16158)
* p2p: remove unused code

* p2p: remove unused imports
2018-02-22 19:20:28 +02:00
Péter Szilágyi
5cf1d35470 eth, les, light: filter on logs only, derive receipts on demand 2018-02-22 19:12:43 +02:00
Janoš Guljaš
4535247793 rpc: set rpcRequest.service as methodNotFoundError.service value (#16163)
The RPC server's readRequest method was setting the serverRequest error's
service value to rpcRequest.method; this change sets it to the correct
service value.
2018-02-22 18:39:28 +02:00
Péter Szilágyi
44c393607e Merge pull request #16166 from karalabe/drop-dead
core: yeah, funny file, drop it
2018-02-22 16:31:28 +02:00
Balint Gabor
221486a291 Merge pull request #15919 from ethersphere/p2p-protocols-pr
p2p/protocols, p2p/testing: protocol abstraction and testing
2018-02-22 15:02:51 +01:00
Péter Szilágyi
0b3e23f636 core: yeah, funny file, drop it 2018-02-22 15:41:48 +02:00
Janos Guljas
a3a07350dc swarm, cmd/swarm: Merge branch 'master' into multiple-ens-endpoints 2018-02-22 14:23:17 +01:00
Péter Szilágyi
5be1085b6b Merge pull request #16165 from karalabe/faucet-twitter-api
cmd/faucet: resolve twitter user from final redirect
2018-02-22 13:30:40 +02:00
Péter Szilágyi
72c4c50777 cmd/faucet: resolve twitter user from final redirect 2018-02-22 13:20:36 +02:00
Anton Evangelatov
1e457b6599 p2p: don't send DiscReason when using net.Pipe (#16004) 2018-02-22 11:41:06 +01:00
Felix Lange
28b20cff4b p2p/protocols: gofmt -w -s 2018-02-22 11:37:57 +01:00
Guillaume Ballet
bb5349b154 whisper: Support for v2 has long been discontinued, remove it. (#16153) 2018-02-22 12:25:07 +02:00
Péter Szilágyi
724a915470 Merge pull request #16157 from nileshtrivedi/master
cmd/puppeth: Don't allow hyphen in network name. Fixes #16155
2018-02-22 09:56:59 +02:00
Nilesh Trivedi
085d3fbf72 cmd/puppeth: Don't allow hyphen in network name. Fixes #16155 2018-02-22 00:23:50 +05:30
Martin Holst Swende
45ce4dce3f Merge pull request #15776 from ProChain/master
accounts/abi: Fix the bug of mobile framework crashing
2018-02-21 19:21:41 +01:00
Martin Holst Swende
f54506ccf8 Merge pull request #15770 from holiman/abi_nostruct
accounts/abi: add another unpack interface
2018-02-21 15:14:07 +01:00
Martin Holst Swende
b585f76128 ethapi: prevent creating contract if no data is provided (#16108)
* ethapi: prevent creating contract if no data is provided

* internal/ethapi: downcase error for no data on contract creation
2018-02-21 16:10:18 +02:00
Dmitry Shulyak
14c76371ba p2p: when peer is removed remove it also from dial history (#16060)
This change removes a peer's information from the dialing history
when the peer is removed from the static list. It allows forcing a
server to re-dial a specific peer when needed.

In our case we are running a geth node on mobile devices, and
it is common for the network connection to flap on mobile.
Almost every time it flaps, or the connection switches from
cellular to wifi, peers are disconnected with a read timeout.
It usually takes 30 seconds (the default expiration timeout)
to recover the connection with static peers after connectivity
is restored.

This change allows us to reconnect with peers almost
immediately and it seems harmless enough.
2018-02-21 15:03:26 +01:00
Péter Szilágyi
7d57824663 Merge pull request #16142 from karalabe/graceful-sigterm
cmd, console: support all termination signals
2018-02-21 15:50:34 +02:00
Péter Szilágyi
01507d9b9d cmd, console: support all termination signals 2018-02-21 15:23:10 +02:00
Pedro Pombeiro
34d94e22d9 whisper: Fix race condition in whisperv6/peer.go 2018-02-21 13:23:53 +01:00
Martin Holst Swende
61f2279bde abi: fix missing method on go 1.7/1.8 2018-02-21 12:27:50 +01:00
Martin Holst Swende
bd6ed23899 accounts/abi: harden unpacking against malicious input 2018-02-21 12:27:50 +01:00
Martin Holst Swende
08c5d4dd27 accounts/abi: address review concerns 2018-02-21 12:27:50 +01:00
Martin Holst Swende
f0f594d045 accounts/abi: Deduplicate code in unpacker 2018-02-21 12:27:50 +01:00
Martin Holst Swende
1ede68355d accounts/abi: add another unpack interface 2018-02-21 12:27:50 +01:00
steve greensill
5603715c06 README: add goreportcard.com badge to Readme (#16133)
* README: add goreportcard.com badge to Readme

* README: fix double github.com
2018-02-20 11:38:27 +02:00
Péter Szilágyi
46a5532ac5 VERSION, params: begin v1.8.2 release cycle 2018-02-19 11:38:36 +02:00
Péter Szilágyi
1e67410e88 params: release Geth v1.8.1 2018-02-19 11:35:31 +02:00
Felföldi Zsolt
1bdde620da les: fix light fetcher database race (#16103)
* les: fix light fetcher database race

* les: lightFetcher comments
2018-02-19 10:41:30 +02:00
Péter Szilágyi
06c5cae315 Merge pull request #16115 from nonsense/update_rjeczalik_notify
vendor: update rjeczalik/notify so that it compiles on go1.10
2018-02-19 10:39:28 +02:00
Janos Guljas
e07603bbc4 p2p/testing: check for all expectations in TestExchanges
Handle all expectations in ProtocolSession.TestExchanges in any
order that are received.
2018-02-17 23:42:28 +01:00
Péter Szilágyi
9fd76e33af Merge pull request #16109 from karalabe/p2p-bond-check
p2p/discover: validate bond against lastpong, not db presence
2018-02-17 18:47:44 +02:00
Anton Evangelatov
0a7cbd915a vendor: update rjeczalik/notify so that it compiles on go1.10 2018-02-17 14:35:59 +01:00
Felix Lange
aeedec4078 p2p/discover: s/lastPong/bondTime/, update TestUDP_findnode
I forgot to change the check in udp.go when I changed Table.bond to be
based on lastPong instead of node presence in db. Rename lastPong to
bondTime and add hasBond so it's clearer what this DB key is used for
now.
2018-02-16 21:29:20 +01:00
Péter Szilágyi
32301a4d6b p2p/discover: validate bond against lastpong, not db presence 2018-02-16 17:05:08 +02:00
Fynn
1e72271f57 accounts/abi: use unpackTuple to unpack event arguments
Events with just 1 argument fail before this change
2018-02-16 11:46:25 +01:00
cooganb
4e61ed02e2 swarm: add favicon for Swarm templates served by browser (#15958)
* swarm: added script to HTML header to create favicon addresses #153

* swarm: moved data blob directly into link tag, removed script

* swarm: added favicon info to other html templates

* swarm: fixing test errors

* swarm: fixing favicon test

* swarm: fixing travis tests
2018-02-15 16:24:20 +02:00
Guillaume Ballet
5f9b01a283 whisper: only use the node id as a p2p id, not for sending messages (#16102)
This is in preparation for the switch to libp2p: the ID generated
will be from a private key created with the help of libp2p's crypto
library, while Whisper will still use Go's default crypto libraries
for encrypting its messages. This change removes a conflict.

It shouldn't have any impact as the person receiving emails is
the user, not the node.
2018-02-15 14:43:48 +02:00
gluk256
fac6d9ce77 whisper: test timeout extended (#16088)
* whisper: timeout extended

* whisper: test updated

* whisper: test updated
2018-02-15 14:42:44 +02:00
Péter Szilágyi
2003b79779 Merge pull request #16095 from karalabe/les-lock
les: add missing lock around peer access
2018-02-15 13:02:36 +02:00
GuiltyMorishita
e2f2bb3e2e node: fix typo hvosts -> vhosts (#16096) 2018-02-15 12:38:39 +02:00
Péter Szilágyi
b92276c700 Merge pull request #16098 from holiman/fix_import
main: add gc flags to import-command
2018-02-15 12:34:33 +02:00
Martin Holst Swende
de93a9d437 main: add gc flags to import-command 2018-02-15 09:16:59 +01:00
ferhat elmas
dc7ca52b3b core: handle ignored error (#16065)
- according to the implementation of `IntrinsicGas`,
we can continue execution since the problem will be detected
later. However, an early return is future-proof against changes.
2018-02-14 22:02:51 +02:00
Péter Szilágyi
dfc5842a89 les: add missing lock around peer access 2018-02-14 21:09:20 +02:00
ferhat elmas
ff225db813 core/vm: remove unused hashing (#16075) 2018-02-14 15:41:05 +02:00
Felix Lange
752761cb57 params, VERSION: v1.8.1 unstable 2018-02-14 13:55:21 +01:00
Felix Lange
5f54075760 params: v1.8.0 stable 2018-02-14 13:51:30 +01:00
Péter Szilágyi
57bca0af8c containers/docker: bump legacy images to 1.8 branch (#16084) 2018-02-14 13:49:53 +01:00
Felix Lange
a5c0bbb4f4 all: update license information (#16089) 2018-02-14 13:49:11 +01:00
Péter Szilágyi
0544a43c13 Merge pull request #16085 from karalabe/p2p-fix-outofbounds
p2p/discover: fix out-of-bounds issue
2018-02-13 23:40:18 +02:00
Péter Szilágyi
20797348ca p2p/discover: fix out-of-bounds issue 2018-02-13 20:59:43 +02:00
Felix Lange
88f2839da4 travis.yml: work around Go 1.9.4 issue (#16082)
* travis.yml: work around Go 1.9.4 issue

* travis: workaround the workaround
2018-02-13 19:32:20 +02:00
Felix Lange
b007412db1 core: soften up state memory force-commit log messages (#16080)
Talk about "state" instead of "trie timing", "trie memory" and remove
the overzealous warning when the limit is just reached. Since the time
limit is always reached on slow machines, move the message to info level
so users don't freak out about internal details.
2018-02-13 15:12:55 +02:00
Péter Szilágyi
da41a7258d Merge pull request #16073 from karalabe/puppeth-unify-discovery
cmd/puppeth: unify discv4 and discv5 ports
2018-02-13 11:43:50 +02:00
Felföldi Zsolt
8d32c4b990 light: new CHTs (#16074) 2018-02-12 18:03:17 +02:00
Péter Szilágyi
12dab53495 cmd/puppeth: unify discv4 and discv5 ports 2018-02-12 16:27:53 +02:00
Péter Szilágyi
70fbc87379 Merge pull request #16071 from holiman/lintfix
node, rpc: fix linter issues
2018-02-12 15:16:47 +02:00
Martin Holst Swende
6c6247a690 node, rpc: fix linter issues 2018-02-12 14:12:55 +01:00
Martin Holst Swende
589b603a9b rpc: dns rebind protection (#15962)
* cmd,node,rpc: add allowedHosts to prevent dns rebinding attacks

* p2p,node: Fix bug with dumpconfig introduced in r54aeb8e4c0bb9f0e7a6c67258af67df3b266af3d

* rpc: add wildcard support for rpcallowedhosts + go fmt

* cmd/geth, cmd/utils, node, rpc: ignore direct ip(v4/6) addresses in rpc virtual hostnames check

* http, rpc, utils: make vhosts into map, address review concerns

* node: change log messages to use geth standard (not sprintf)

* rpc: fix spelling
2018-02-12 14:52:07 +02:00
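
As a rough illustration of the vhost check described above (a minimal Go
sketch with made-up names, not geth's actual rpc code): a request is served
only when its Host header is a bare IP address, appears in the allowed-vhosts
map, or when the wildcard "*" is configured.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // virtualHostHandler mimics the idea of the patch: requests pass only when
    // the Host header is an IP address, an allowed virtual host, or when the
    // wildcard "*" is configured. Names here are illustrative only.
    type virtualHostHandler struct {
        vhosts map[string]struct{} // lower-cased allowed host names
    }

    func newVirtualHostHandler(hosts []string) *virtualHostHandler {
        m := make(map[string]struct{})
        for _, h := range hosts {
            m[strings.ToLower(h)] = struct{}{}
        }
        return &virtualHostHandler{vhosts: m}
    }

    func (h *virtualHostHandler) allowed(hostHeader string) bool {
        if _, ok := h.vhosts["*"]; ok {
            return true // wildcard disables the check entirely
        }
        host := hostHeader
        if hp, _, err := net.SplitHostPort(hostHeader); err == nil {
            host = hp // strip an optional :port
        }
        if ip := net.ParseIP(host); ip != nil {
            return true // direct IP(v4/6) addresses bypass the rebinding check
        }
        _, ok := h.vhosts[strings.ToLower(host)]
        return ok
    }

    func main() {
        h := newVirtualHostHandler([]string{"localhost"})
        for _, hdr := range []string{"localhost:8545", "127.0.0.1:8545", "evil.example.com"} {
            fmt.Printf("%-20s allowed=%v\n", hdr, h.allowed(hdr))
        }
    }
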
Felix Lange
9123eceb0f p2p, p2p/discover: misc connectivity improvements (#16069)
* p2p: add DialRatio for configuration of inbound vs. dialed connections

* p2p: add connection flags to PeerInfo

* p2p/netutil: add SameNet, DistinctNetSet

* p2p/discover: improve revalidation and seeding

This changes node revalidation to be periodic instead of on-demand. This
should prevent issues where dead nodes get stuck in closer buckets
because no other node will ever come along to replace them.

Every 5 seconds (on average), the last node in a random bucket is
checked and moved to the front of the bucket if it is still responding.
If revalidation fails, the last node is replaced by an entry of the
'replacement list' containing recently-seen nodes.

Most close buckets are removed because it's very unlikely we'll ever
encounter a node that would fall into any of those buckets.

Table seeding is also improved: we now require a few minutes of table
membership before considering a node as a potential seed node. This
should make it less likely to store short-lived nodes as potential
seeds.

* p2p/discover: fix nits in UDP transport

We would skip sending neighbors replies if there were fewer than
maxNeighbors results and CheckRelayIP returned an error for the last
one. While here, also resolve a TODO about pong reply tokens.
2018-02-12 14:36:09 +02:00
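
The periodic revalidation described above can be sketched as follows (made-up
types, not the discover package's actual code): every few seconds the last
node of a randomly chosen bucket is pinged; responders move to the front of
the bucket, non-responders are replaced from the bucket's replacement list.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    type node struct{ id string }

    type bucket struct {
        entries      []*node // kept most-recently-validated first
        replacements []*node // recently seen candidate nodes
    }

    // ping stands in for a real UDP ping; nodes whose id starts with "dead"
    // are treated as unresponsive. Purely illustrative.
    func ping(n *node) bool { return len(n.id) < 4 || n.id[:4] != "dead" }

    // revalidate checks the last entry of a random bucket, moving it to the
    // front if it responds and replacing it from the replacement list if not.
    func revalidate(buckets []*bucket, rnd *rand.Rand) {
        b := buckets[rnd.Intn(len(buckets))]
        if len(b.entries) == 0 {
            return
        }
        last := b.entries[len(b.entries)-1]
        if ping(last) {
            copy(b.entries[1:], b.entries[:len(b.entries)-1])
            b.entries[0] = last
            return
        }
        b.entries = b.entries[:len(b.entries)-1]
        if n := len(b.replacements); n > 0 {
            b.entries = append(b.entries, b.replacements[n-1])
            b.replacements = b.replacements[:n-1]
        }
    }

    func main() {
        rnd := rand.New(rand.NewSource(time.Now().UnixNano()))
        buckets := []*bucket{{
            entries:      []*node{{"alive-1"}, {"dead-2"}},
            replacements: []*node{{"alive-3"}},
        }}
        for i := 0; i < 3; i++ { // in the real table this runs off a ~5s timer
            revalidate(buckets, rnd)
        }
        for _, n := range buckets[0].entries {
            fmt.Println(n.id)
        }
    }
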
Péter Szilágyi
1d39912a9b Merge pull request #16068 from karalabe/import-known-rolledback-blocks
core: force import known but rolled back blocks
2018-02-12 12:53:45 +02:00
Péter Szilágyi
69c1f2c2a7 core: force import known but rolled back blocks 2018-02-12 11:54:14 +02:00
ferhat elmas
52ad848b2e internal/build: fix usage of strings.TrimLeft (#16066) 2018-02-12 11:18:35 +02:00
Péter Szilágyi
4065695350 Merge pull request #16063 from karalabe/deprecate-ubuntu-zesty
build: deprecate zesty, add bionic PPA
2018-02-11 19:54:57 +02:00
Péter Szilágyi
969474f60a build: deprecate zesty, add bionic PPA 2018-02-11 19:07:11 +02:00
Péter Szilágyi
62ffec1be3 Merge pull request #16062 from karalabe/nodisable-fastsync
eth: only disable fast sync after success
2018-02-11 19:01:32 +02:00
Péter Szilágyi
57fd2da0fe eth: only disable fast sync after success 2018-02-11 17:25:00 +02:00
Péter Szilágyi
aa9432b816 Merge pull request #16061 from karalabe/downloader-nostate-ancestor-lookup
eth/downloader: don't require state for ancestor lookups
2018-02-11 16:39:09 +02:00
Péter Szilágyi
7a0019c63b les, light: fix CHT trie retrievals (#16039)
* les, light: fix CHT trie retrievals

* les, light: minor polishes, test remote CHT retrievals

* les, light: deterministic nodeset rlp, bloombits test skeleton

* les: add an event emission to the les bloombits test

* les: drop dead tester code
2018-02-11 14:57:46 +02:00
Péter Szilágyi
96dad6b6f6 eth/downloader: don't require state for ancestor lookups 2018-02-11 14:43:56 +02:00
Guillaume Ballet
5cf75a30c1 whisper: get wnode to work with v6 (#16051)
The bulk of the issue was to adapt to the new requirement
that a v6 filter has to contain either a symmetric key or
an asymmetric one.

This commit reverts one of the fixes that I made to remove
a linter warning: unexporting NewSentMessage. This is not
really a problem, as I have a cleanup in the pipeline that will
solve this issue.
2018-02-10 15:35:32 +02:00
Felföldi Zsolt
2f849ade82 les: fix server panic when discovery disabled (#16055) 2018-02-10 14:33:52 +02:00
Chase Wright
a00f4a12a9 README: remove --fast and --cache flags and clarify default sync mode (#16043)
* Remove --fast flag and clarify default

`--fast` is no longer a flag; it's `--syncmode "fast"`, and that is the default

* Remove --cache flag

--cache=512 is no longer required as of 1.8 as the default has been increased

* README: Minor cache amount fix, mention Rinkeby
2018-02-10 12:50:14 +02:00
gluk256
42628ba7ed whisper: bloom filter refactoring (#16046)
* whisper: bloom filter refactoring

* whisper: fixed full node
2018-02-09 17:25:23 +02:00
gluk256
ccf8083537 whisper: Seal function fixed (#16048) 2018-02-09 17:25:03 +02:00
Felföldi Zsolt
c4712bf96b p2p/discv5: fix multiple discovery issues (#16036)
* p2p/discv5: add query delay, fix node address update logic, retry refresh if empty

* p2p/discv5: remove unnecessary ping before topic query

* p2p/discv5: do not filter local address from topicNodes

* p2p/discv5: remove canQuery()

* p2p/discv5: gofmt
2018-02-08 19:06:31 +02:00
cdetrio
2b4c7e9b37 params: update ropsten bootnodes (#16029)
* params: update ropsten bootnodes

* params: fix linter
2018-02-08 15:30:26 +02:00
Péter Szilágyi
03daf601c1 Merge pull request #16037 from karalabe/light-startup-polishes
eth, light: minor light client startup cleanups
2018-02-08 15:11:05 +02:00
Péter Szilágyi
eb07dbb079 eth, light: minor light client startup cleanups 2018-02-08 07:49:23 +02:00
gluk256
1a4e68721a Merge pull request #16022 from gballet/whisper-v6-investigate-macos-timeout
whisper: improve a log message to analyze a travis issue
2018-02-06 22:20:23 +02:00
Guillaume Ballet
806430a252 whisper: improve a log message to analyze a travis issue 2018-02-05 18:18:13 +01:00
Péter Szilágyi
55599ee95d core, trie: intermediate mempool between trie and database (#15857)
This commit reduces database I/O by not writing every state trie to disk.
2018-02-05 17:40:32 +01:00
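
A toy model of the idea (not the trie.Database API): trie nodes accumulate in
an in-memory layer and are only written to the persistent store when a commit
is explicitly requested, so intermediate tries created between commits never
touch disk.

    package main

    import "fmt"

    type nodeCache struct {
        dirty map[string][]byte // in-memory layer
        disk  map[string][]byte // stands in for the persistent key-value store
    }

    func newNodeCache() *nodeCache {
        return &nodeCache{dirty: map[string][]byte{}, disk: map[string][]byte{}}
    }

    func (c *nodeCache) insert(hash string, node []byte) { c.dirty[hash] = node }

    // node reads through the in-memory layer before falling back to disk.
    func (c *nodeCache) node(hash string) ([]byte, bool) {
        if n, ok := c.dirty[hash]; ok {
            return n, true
        }
        n, ok := c.disk[hash]
        return n, ok
    }

    // commit flushes the in-memory layer as a single batch of database writes.
    func (c *nodeCache) commit() int {
        written := len(c.dirty)
        for k, v := range c.dirty {
            c.disk[k] = v
        }
        c.dirty = map[string][]byte{}
        return written
    }

    func main() {
        c := newNodeCache()
        c.insert("a", []byte{1})
        c.insert("b", []byte{2})
        if _, ok := c.node("a"); ok {
            fmt.Println("node served from memory before commit")
        }
        fmt.Println("disk writes at commit:", c.commit())
    }
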
Péter Szilágyi
59336283c0 Merge pull request #16020 from evertonfraga/patch-1
github: Replaces Wiki link [ci skip]
2018-02-05 17:58:47 +02:00
Ev
203440e813 github: Replaces Wiki link 2018-02-05 16:52:27 +01:00
Felföldi Zsolt
c3f238dd53 les: limit LES peer count and improve peer configuration logic (#16010)
* les: limit number of LES connections

* eth, cmd/utils: light vs max peer configuration logic
2018-02-05 15:41:53 +02:00
Martin Holst Swende
bc0666fb27 eth/downloader: fix #15858 by checking if downloader dropPeer function is set (#15992) 2018-02-05 15:38:06 +02:00
Péter Szilágyi
0662384d29 Merge pull request #16009 from holiman/db_handles
cmd/utils: fix #16006 by not lowering OS ulimit
2018-02-04 01:05:32 +02:00
Péter Szilágyi
b4e05adcc7 Merge pull request #16011 from karalabe/fix-bootnode-gofmt
params: fix bootnodes gofmt
2018-02-02 15:58:10 +02:00
Péter Szilágyi
efc9209158 params: fix bootnodes gofmt 2018-02-02 15:57:13 +02:00
Martin Holst Swende
ec28a58cc1 utils: fix #16006 by not lowering OS ulimit 2018-02-02 09:33:33 +01:00
Afri Schoedon
4dedde7beb params: Add Ropsten bootnodes (#16008) 2018-02-01 16:10:57 +02:00
Péter Szilágyi
fdb34b7a7c Merge pull request #15996 from karalabe/drop-redundant-methods
core, eth, les, light: get rid of redundant methods
2018-01-31 12:46:38 +02:00
Péter Szilágyi
07d4a02257 Merge pull request #15997 from karalabe/batch-reset-size
ethdb: reset the batch size too on reset
2018-01-30 19:18:42 +02:00
Péter Szilágyi
3e89b80ccb ethdb: reset the batch size too on reset 2018-01-30 19:12:28 +02:00
Martin Holst Swende
017b9f7eac core, ethdb: reuse database batches (#15989)
* leveldb: Update leveldb to 211f780 (poolfix)

* core, ethdb: reuse database batches
2018-01-30 19:03:31 +02:00
Péter Szilágyi
566d5c0777 core, eth, les, light: get rid of redundant methods 2018-01-30 18:42:00 +02:00
Felföldi Zsolt
6198c53e28 p2p/discv5: fix removeTicketRef cached ticket removal (#15995) 2018-01-30 18:01:22 +02:00
gluk256
a9e4a90d57 whisper: change the whisper message format so as to add the payload size (#15870)
* whisper: message format changed

* whisper: tests fixed

* whisper: style fixes

* whisper: fixed names, fixed failing tests

* whisper: fix merge issue in #15870

Occurred while using the GitHub online merge tool. Lesson learned.

* whisper: fix a gofmt error for #15870
2018-01-30 10:55:08 +02:00
Martin Holst Swende
59a852e418 vendor: update leveldb to 211f780 (poolfix) (#15988) 2018-01-29 16:17:59 +02:00
Guillaume Ballet
dd7a715d73 internal: fix a typo that caused a lint error on travis (#15987) 2018-01-29 15:35:05 +02:00
mark.lin
c1d70ea970 accounts/abi, core: add AddTxWithChain in BlockGen for simulation 2018-01-29 18:47:08 +08:00
Martin Holst Swende
722bac84fa ethapi: add personal.signTransaction (#15971)
* ethapi: add personal.signTransaction

* ethapi: refactor to minimize duplicate code

* ethapi: make nonce,gas,gasPrice obligatory in signTransaction
2018-01-26 19:32:43 +02:00
Felföldi Zsolt
23bca0f374 les: fix TxStatusMsg RLP coding (#15974) 2018-01-26 19:30:45 +02:00
Guillaume Ballet
367c329b88 whisper: remove linter warnings (#15972)
* whisper: fixes warnings from the code linter

* whisper: more non-API-breaking changes

The remaining lint errors are because of auto-generated
files and one is because an exported function has a non-
exported return type. Changing this would break the API,
and will be part of another commit for easier reversal.

* whisper: un-export NewSentMessage to please the linter

This is an API change, which is why it's in its own commit.
This change was initiated after the linter complained that
the returned type wasn't exported. I chose to un-export
the function instead of exporting the type, because that
type is an implementation detail that I would like to
change in the near future to make the code more
readable and with an increased coverage.

* whisper: update gencodec output after upgrading it to new lint standards
2018-01-26 13:45:10 +02:00
b00ris
2ef3815af4 whisper: fix empty topic (#15811)
* whisper: fix empty topic

* whisper: add check to matchSingleTopic

* whisper: add tests

* whisper: fix gosimple

* whisper: added lastTopicByte const
2018-01-26 13:41:53 +02:00
Miguel Mota
4dd0727c39 accounts: fix comment typo (#15977) 2018-01-26 13:33:27 +02:00
waymobetta
8f6990dc7d accounts/abi/bind/backends: fix comment typo (#15914) 2018-01-25 12:26:14 +02:00
Felföldi Zsolt
c335821479 cmd, params: update discovery v5 bootnodes (#15954) 2018-01-25 12:25:00 +02:00
Steven Roose
952482d5e4 rpc: Support specifying HTTP client in RPC dialing (#15836)
* rpc: Support specifying HTTP client in RPC dialing

Adds a minimal interface that captures http.Client and adds a new method
rpc.DialHTTPClient that takes a client using that interface. The existing
rpc.DialHTTP method is then alternatively implemented by using the new
rpc.DialHTTPClient method provided with a standard *http.Client.

* rpc: fix minor doc typos
2018-01-24 10:59:15 +02:00
Péter Szilágyi
5c83a4e5dd Merge pull request #15832 from karalabe/abigen-events
accounts/abi/bind: support event filtering in abigen
2018-01-24 10:55:24 +02:00
Péter Szilágyi
1bf508b449 accounts/abi/bind: support event filtering in abigen 2018-01-24 10:54:13 +02:00
Kurkó Mihály
05ade19302 dashboard: CPU, memory, diskIO and traffic on the footer (#15950)
* dashboard: footer, deep state update

* dashboard: resolve asset path

* dashboard: prevent state update on every reconnection

* dashboard: fix linter issue

* dashboard, cmd: minor UI fix, include commit hash

* dashboard: gitCommit renamed to commit

* dashboard: move the geth version to the right, make commit optional

* dashboard: memory, traffic and CPU on footer

* dashboard: fix merge

* dashboard: CPU, diskIO on footer

* dashboard: rename variables, use group declaration

* dashboard: docs
2018-01-23 22:51:04 +02:00
Felföldi Zsolt
ec96216d16 Chain indexer fix + new CHT (#15934)
* core, light: fix chain indexer bug

* light: add new CHT
2018-01-23 13:10:49 +02:00
croath
a6787a6308 accounts/abi: fix the output at the beginning instead of making a workaround 2018-01-23 19:10:23 +08:00
Felföldi Zsolt
397c6cde1e p2p/discv5: fix topic register panic at shutdown (#15946) 2018-01-23 12:53:09 +02:00
Péter Szilágyi
302c17c36a Merge pull request #15948 from holiman/addr_v5_bootnode
p2p/discv5: logs info about discv5 node info at bind time
2018-01-23 12:27:03 +02:00
Felix Lange
924065e19d consensus/ethash: improve cache/dataset handling (#15864)
* consensus/ethash: add maxEpoch constant

* consensus/ethash: improve cache/dataset handling

There are two fixes in this commit:

Unmap the memory through a finalizer like the libethash wrapper did. The
release logic was incorrect and freed the memory while it was being
used, leading to crashes like in #14495 or #14943.

Track caches and datasets using simplelru instead of reinventing LRU
logic. This should make it easier to see whether it's correct.

* consensus/ethash: restore 'future item' logic in lru

* consensus/ethash: use mmap even in test mode

This makes it possible to shorten the time taken for TestCacheFileEvict.

* consensus/ethash: shuffle func calc*Size comments around

* consensus/ethash: ensure future cache/dataset is in the lru cache

* consensus/ethash: add issue link to the new test

* consensus/ethash: fix vet

* consensus/ethash: fix test

* consensus: tiny issue + nitpick fixes
2018-01-23 12:05:30 +02:00
Martin Holst Swende
48641d7308 p2p/discv5: logs info about discv5 node info at bind time 2018-01-23 08:50:11 +01:00
Péter Szilágyi
5d4267911a Merge pull request #15941 from karalabe/fix-header-reorg-order
core: sorted reorg insertion order for proper head header updating
2018-01-22 15:38:30 +02:00
Felföldi Zsolt
92580d69d3 p2p, p2p/discover, p2p/discv5: implement UDP port sharing (#15200)
This commit affects p2p/discv5 "topic discovery" by running it on
the same UDP port where the old discovery works. This is realized
by giving an "unhandled" packet channel to the old v4 discovery
packet handler where all invalid packets are sent. These packets
are then processed by v5. v5 packets are always invalid when
interpreted by v4 and vice versa. This is ensured by adding one
to the first byte of the packet hash in v5 packets.

DiscoveryV5Bootnodes is also changed to point to new bootnodes
that are implementing the changed packet format with modified
hash. Existing and new v5 bootnodes are both running on different
ports ATM.
2018-01-22 13:38:34 +01:00
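
A toy illustration of the shared-port dispatch described above (sha256 stands
in for the real Keccak-256 packet hashing so the sketch stays self-contained):
v5 packets carry a hash whose first byte has been incremented by one, so they
never verify as v4 packets, and the v4 handler forwards whatever it cannot
verify to v5 through an "unhandled" channel.

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    func v4Packet(payload []byte) []byte {
        h := sha256.Sum256(payload)
        return append(h[:], payload...)
    }

    func v5Packet(payload []byte) []byte {
        h := sha256.Sum256(payload)
        h[0]++ // the v5 marker: first hash byte shifted by one
        return append(h[:], payload...)
    }

    // handleV4 accepts packets with a valid v4 hash; anything else is handed
    // over to the v5 handler via the unhandled channel.
    func handleV4(packet []byte, unhandled chan<- []byte) bool {
        h := sha256.Sum256(packet[32:])
        if bytes.Equal(h[:], packet[:32]) {
            return true // valid v4 packet, handled here
        }
        unhandled <- packet // possibly a v5 packet, let v5 have a look
        return false
    }

    func main() {
        unhandled := make(chan []byte, 1)
        fmt.Println("v4 packet handled by v4:", handleV4(v4Packet([]byte("ping")), unhandled))
        fmt.Println("v5 packet handled by v4:", handleV4(v5Packet([]byte("ping")), unhandled))
        fmt.Println("packets forwarded to v5:", len(unhandled))
    }
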
Péter Szilágyi
84be009154 core: sorted reorg insertion order for proper head header updating 2018-01-22 14:11:07 +02:00
zelig
407339085f p2p/protocols, p2p/testing: protocol abstraction and testing 2018-01-18 10:53:47 +01:00
Péter Szilágyi
02aeb3d766 Merge pull request #15902 from shapeshed/patch-2
core/vm: Fix comment typo
2018-01-16 18:23:41 +02:00
George Ornbo
370dca4491 core/vm: Fix comment typo 2018-01-16 15:47:33 +00:00
Felix Lange
f08cd94fb7 cmd/ethkey: fix formatting, review nits (#15807)
This commit:

- Adds a --msgfile option to read the message to sign from a file
  instead of command line argument.
- Adds a unit test for signing subcommands.
- Removes some weird whitespace in the code.
2018-01-16 15:42:41 +01:00
Péter Szilágyi
216e584899 Revert "trie: make fullnode children hash calculation concurrently (#15131)" (#15889)
This reverts commit 0f7fbb85d6.
2018-01-15 15:32:14 +02:00
Jim McDonald
18a7d31338 miner: avoid unnecessary work (#15883) 2018-01-15 12:57:06 +02:00
Kurkó Mihály
938cf4528a dashboard: deep state update, version in footer (#15837)
* dashboard: footer, deep state update

* dashboard: resolve asset path

* dashboard: remove bundle.js

* dashboard: prevent state update on every reconnection

* dashboard: fix linter issue

* dashboard, cmd: minor UI fix, include commit hash

* remove geth binary

* dashboard: gitCommit renamed to commit

* dashboard: move the geth version to the right, make commit optional

* dashboard: commit limited to 7 characters

* dashboard: limit commit length on client side

* dashboard: run go generate
2018-01-15 11:20:00 +02:00
Péter Szilágyi
81ad8f665d Merge pull request #15869 from Magicking/ethstats-reporting-fix
ethstats: Fix ethstats reporting while syncing
2018-01-15 10:36:43 +02:00
Magicking
90e5744d6f ethstats: Fix ethstats reporting while syncing 2018-01-12 18:30:38 +01:00
Péter Szilágyi
3f40b22dac Merge pull request #15863 from karalabe/light-miner-error
cmd/geth: user friendly light miner error
2018-01-12 17:30:19 +02:00
gluk256
fd869dc839 whisper/whisperv6: implement pow/bloom exchange protocol (#15802)
This is the main feature of v6.
2018-01-12 12:11:22 +01:00
Péter Szilágyi
bd0dbfa2a8 cmd/geth: user friendly light miner error 2018-01-12 11:59:18 +02:00
Ricardo Domingos
56152b31ac common/fdlimit: Move fdlimit files to separate package (#15850)
* common/fdlimit: Move fdlimit files to separate package

When go-ethereum is used as a library, the calling program needs to set
the FD limit.

This commit extracts the fdlimit files to a separate package so they can be
used outside of go-ethereum.

* common/fdlimit: Remove FdLimit from functions signature

* common/fdlimit: Rename fdlimit functions
2018-01-11 22:55:21 +02:00
Jean-André Santoni
023769d9d4 travis.yml: remove alias for 'cd' to avoid hang on macOS (#15849)
This works around travis-ci/travis-ci#8703.
2018-01-11 16:02:01 +01:00
Nick Johnson
b06e20bc7c eth/gasprice: set default percentile to 60%, count blocks instead of transactions (#15828)
The first should address a long term issue where we recommend a gas
price that is greater than that required for 50% of transactions in
recent blocks, which can lead to gas price inflation as people take
this figure and add a margin to it, resulting in a positive feedback
loop.
2018-01-10 13:57:36 +01:00
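
Sketched in Go, the suggestion strategy is roughly (an illustration of the
idea, not the gasprice oracle itself): take one representative price per
recent block and recommend a percentile over those per-block values rather
than over all individual transactions.

    package main

    import (
        "fmt"
        "math/big"
        "sort"
    )

    // suggestPrice returns the requested percentile of the per-block prices.
    func suggestPrice(blockPrices []*big.Int, percentile int) *big.Int {
        if len(blockPrices) == 0 {
            return big.NewInt(0)
        }
        sorted := make([]*big.Int, len(blockPrices))
        copy(sorted, blockPrices)
        sort.Slice(sorted, func(i, j int) bool { return sorted[i].Cmp(sorted[j]) < 0 })
        return sorted[(len(sorted)-1)*percentile/100]
    }

    func main() {
        gwei := func(n int64) *big.Int { return new(big.Int).Mul(big.NewInt(n), big.NewInt(1e9)) }
        prices := []*big.Int{gwei(1), gwei(2), gwei(2), gwei(4), gwei(20)}
        fmt.Println(suggestPrice(prices, 60)) // 60th percentile of per-block prices
    }
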
gary rong
3a5a5599dd consensus/ethash: fix byzantium difficulty comment typo (#15842) 2018-01-10 10:58:03 +02:00
Felföldi Zsolt
83d1657444 les: fix les/1 CHT compatibility issue (#15692) 2018-01-09 11:41:59 +01:00
Felix Lange
9d06026c19 all: regenerate codecs with gencodec commit 90983d99de (#15830)
Fixes #15777 because null is now allowed for hexutil.Bytes.
2018-01-08 15:13:22 +02:00
Felix Lange
5c2f1e0014 all: update generated code (#15808)
* core/types, core/vm, eth, tests: regenerate gencodec files

* Makefile: update devtools target

Install protoc-gen-go and print reminders about npm, solc and protoc.
Also switch to github.com/kevinburke/go-bindata because it's more
maintained.

* contracts/ens: update contracts and regenerate with solidity v0.4.19

The newer upstream version of the FIFSRegistrar contract doesn't set the
resolver anymore. The resolver is now deployed separately.

* contracts/release: regenerate with solidity v0.4.19

* contracts/chequebook: fix fallback and regenerate with solidity v0.4.19

The contract didn't have a fallback function, so payments would be rejected
when compiled with newer solidity. References to 'mortal' and 'owned'
use the local file system so we can compile without network access.

* p2p/discv5: regenerate with recent stringer

* cmd/faucet: regenerate

* dashboard: regenerate

* eth/tracers: regenerate

* internal/jsre/deps: regenerate

* dashboard: avoid sed -i because it's not portable

* accounts/usbwallet/internal/trezor: fix go generate warnings
2018-01-08 14:15:57 +02:00
Russ Cox
a139041d40 dashboard: use balanced trees to include binary data (#15813)
* go-ethereum/dashboard: update assets.go

Use current rsc/go-bindata instead of jteeuwen/go-bindata, to get
balanced tree in very long string concatenations.
This works around problems in current Go distributions.

For golang/go#23222.

* dashboard: run last two go:generate steps for linter
2018-01-05 13:02:15 +02:00
Felix Lange
1c2378b926 tests: update to upstream commit 2bb0c3da3b (#15806)
Also raise traceLimit once again and print the VM
error and output on failure.
2018-01-04 13:18:30 +01:00
Péter Szilágyi
ae71da1b03 eth: fix tracer panic when running without configs + reexec (#15799) 2018-01-04 12:58:11 +01:00
Evangelos Pappas
7a59a9380e cmd/utils: handle git commit a bit safer for user specified strings (#15790)
* cmd/utils/flags.go: Applying a String len guard for the gitCommit param of the NewApp()

* cmd/utils: remove redundant clause in if condition
2018-01-03 18:18:53 +02:00
Péter Szilágyi
762f3a48a0 Merge pull request #15466 from karalabe/uint64-gas-limit
all: switch gas limits from big.Int to uint64
2018-01-03 16:53:06 +02:00
Felix Lange
b47285f1cf vendor: update github.com/rjeczalik/notify (#15801) 2018-01-03 16:50:33 +02:00
Péter Szilágyi
6f69cdd109 all: switch gas limits from big.Int to uint64 2018-01-03 14:45:35 +02:00
Furkan KAMACI
b8caba9709 various: remove redundant parentheses (#15793) 2018-01-03 14:14:47 +02:00
Felix Lange
9d48dbf5c2 eth: revert tracer preimage recording (#15800)
This reverts commits 85a1eda59e (#15792) and c495bca4ad (#15787)
because they introduce database writes during tracing.
2018-01-03 11:58:25 +01:00
Felix Lange
85a1eda59e eth: uncapitalize tracer preimage error message (#15792)
* eth: uncapitalize tracer preimage error message

* eth: improve very important error message
2018-01-03 10:53:09 +02:00
Richard Hart
72e70bcec2 console: remove comment about 'invalid' input (#15735)
All inputs are saved into history, including 'invalid' inputs.
2018-01-02 13:00:13 +01:00
ferhat elmas
5866626b08 core, p2p/discv5: use time.NewTicker instead of time.Tick (#15747) 2018-01-02 12:50:46 +01:00
cdetrio
c495bca4ad eth: enable preimage recording when tracing (#15787) 2018-01-02 12:49:17 +01:00
Péter Szilágyi
413cc5b0c8 cmd/geth: remove trailing newline in license command (#15782) 2018-01-02 12:48:19 +01:00
Péter Szilágyi
d2533d0efb vendor: update github.com/rjeczalik/notify for go1.10 (#15785) 2018-01-02 11:41:47 +01:00
Péter Szilágyi
3e0113fff4 build: set CC through a command-line flag (#15784)
This avoids setting CC for the go run invocation, which fails on go1.10.
2018-01-02 11:40:56 +01:00
Péter Szilágyi
9c42a41ed8 eth/downloader: avoid hidden reference to finished statesync request (#15545) 2018-01-02 11:38:26 +01:00
Péter Szilágyi
2fe07c203e build: fix version comparison for go1.10 and beyond (#15781) 2018-01-02 11:28:07 +01:00
Deepak Sharma
6882943e39 containers/docker: change docker images to go1.9 (#15789) 2018-01-02 11:27:33 +01:00
Péter Szilágyi
b98aa3b4f1 whisper/whisper2: fix Go 1.10 vet issues on type mismatches (#15783) 2018-01-02 11:13:24 +01:00
Alex Wu
6cd6b921ac crypto: ensure private keys are < N (#15745)
Fixes #15744
2018-01-02 10:55:03 +01:00
sunxiaojun2014
908faf8cd7 consensus/ethash: fix overdue link (#15786) 2017-12-31 13:38:39 +02:00
croath
88e67c552e accounts/abi: add a test case for unpacking mobile interfaces 2017-12-31 19:12:55 +08:00
Péter Szilágyi
b9731767af accounts/abi: handle named outputs prefixed with underscores (#15766)
* accounts/abi: handle named outputs prefixed with underscores

* accounts/abi: handle colliding outputs for struct unpacks

* accounts: handle purely underscore output names
2017-12-29 23:20:02 +02:00
Anton Evangelatov
36a10875c8 p2p/enr: initial implementation (#15585)
Initial implementation of ENR according to ethereum/EIPs#778
2017-12-29 21:18:51 +01:00
croath
e7cd627d93 accounts/abi: fix for one output interface crashing 2017-12-29 19:56:23 +08:00
Péter Szilágyi
f7ca03ae87 eth, les, light: expose chain config in les node info too (#15732) 2017-12-28 14:18:34 +01:00
Péter Szilágyi
c15d76a40f p2p/discv5: fix reg lookup, polish code, use logger (#15737) 2017-12-28 14:17:03 +01:00
Sorin Neacsu
5369a5c54d rpc: allow OPTIONS requests without Content-Type (#15759)
Fixes #15740
2017-12-28 14:15:33 +01:00
Martin Holst Swende
9d187f0238 Merge pull request #15731 from holiman/revamp_abi
accounts/abi refactor
2017-12-22 20:59:41 +01:00
Martin Holst Swende
c095c87e11 accounts/abi: merging of https://github.com/ethereum/go-ethereum/pull/15452 + lookup by id 2017-12-22 19:26:57 +01:00
Martin Holst Swende
73d4a57d47 accounts/abi: refactor abi, generalize abi pack/unpack to Arguments 2017-12-22 19:26:52 +01:00
gary rong
5f8888e116 accounts, consensus, core, eth: make chain maker consensus agnostic (#15497)
* accounts, consensus, core, eth: make chain maker consensus agnostic

* consensus, core: move CalcDifficulty to Engine interface

* consensus: add docs for calcDifficulty function

* consensus, core: minor comment fixups
2017-12-22 14:37:50 +02:00
Kurkó Mihály
9dbb8ef4aa dashboard: integrate Flow, sketch message API (#15713)
* dashboard: minor design change

* dashboard: Flow integration, message API

* dashboard: minor polishes, exclude misspell linter
2017-12-21 17:54:38 +02:00
Péter Szilágyi
52f4d6dd78 Merge pull request #15730 from karalabe/puppeth-expose-faucet-http
cmd/puppeth: fix faucet 502 error due to non-exposed HTTP port
2017-12-21 17:36:27 +02:00
Péter Szilágyi
e4aa882ec5 cmd/puppeth: fix faucet 502 error due to non-exposed HTTP port 2017-12-21 17:25:42 +02:00
gluk256
38b1e8ee20 whisper/whisperv6: PoW requirement (#15701)
New Whisper-level message introduced (PoW requirement),
corresponding logic added, plus some tests.
2017-12-21 15:17:27 +01:00
Robert Zaremba
81d4cafb32 accounts/abi: add unpack into array test 2017-12-21 15:14:50 +01:00
Robert Zaremba
1afca33eac accounts/abi: add Method Unpack tests
+ Reworked Method Unpack tests into more readable components
+ Added Method Unpack into slice test
2017-12-21 15:14:50 +01:00
Robert Zaremba
95461e8b22 accounts/abi: satisfy most of the linter warnings
+ adding missing comments
+ small cleanups which won't significantly change
  function body.
+ unify Method receiver name
2017-12-21 15:14:50 +01:00
Robert Zaremba
0ed8b838a9 accounts/abi: fix event unpack into slice
+ The event slice unpacker doesn't correctly extract elements from the
slice. The indexed arguments are not ignored as they should be
(the data offset should not include the indexed arguments).

+ The `Elem()` call in the slice unpack doesn't work.
The slice-related tests fail because of that.

+ The checks in the loop were suboptimal and have been extracted
out of the loop.

+ Extracted common code from event and method tupleUnpack.
2017-12-21 15:14:50 +01:00
Robert Zaremba
9becba5540 accounts/abi: fix event tupleUnpack
Event.tupleUnpack doesn't handle correctly Indexed arguments,
hence it can't unpack an event with indexed arguments.
2017-12-21 15:14:50 +01:00
Robert Zaremba
3511904aad accounts/abi: adding event unpacker tests 2017-12-21 15:14:50 +01:00
Martin Holst Swende
b0d41e386e Merge pull request #15285 from yondonfu/abi-offset-fixed-arrays
accounts/abi: include fixed array size in offset for dynamic type
2017-12-21 14:42:03 +01:00
Péter Szilágyi
91c3362315 Merge pull request #15729 from karalabe/faucet-fix-twitter
cmd/faucet: fix removal of Twitter zlib compression
2017-12-21 15:32:10 +02:00
lash
14852810b4 cmd/utils: add check on fd hard limit, skip test if below target (#15684)
* cmd/utils: Add check on hard limit, skip test if below target

* cmd/utils: Cross platform compatible fd limit test

* cmd/utils: Remove syscall.Rlimit in test

* cmd/utils: comment fd utility method
2017-12-21 15:30:44 +02:00
Janoš Guljaš
542d51895f swarm/api: url scheme bzz-hash to get hashes of swarm content (#15238) (#15715)
* swarm/api: url scheme bzz-hash to get hashes of swarm content (#15238)

Update URI to support bzz-hash scheme and handle such HTTP requests by
responding with hash of the content as a text/plain response.

* swarm/api: return hash of the content for bzz-hash:// requests

* swarm/api: revert "return hash of the content for bzz-hash:// requests"

Return hashes of the content that would be returned by bzz-raw
request.

* swarm/api/http: handle error in TestBzzGetPath

* swarm/api: remove extra blank line in comment
2017-12-21 14:47:10 +02:00
Péter Szilágyi
68651a2329 cmd/faucet: fix removal of Twitter zlib compression 2017-12-21 14:14:24 +02:00
Péter Szilágyi
5258785c81 cmd, core, eth/tracers: support fancier js tracing (#15516)
* cmd, core, eth/tracers: support fancier js tracing

* eth, internal/web3ext: rework trace API, concurrency, chain tracing

* eth/tracers: add three more JavaScript tracers

* eth/tracers, vendor: swap ottovm to duktape for tracing

* core, eth, internal: finalize call tracer and needed extras

* eth, tests: prestate tracer, call test suite, rewinding

* vendor: fix windows builds for tracer js engine

* vendor: temporary duktape fix

* eth/tracers: fix up 4byte and evmdis tracer

* vendor: pull in latest duktape with my upstream fixes

* eth: fix some review comments

* eth: rename rewind to reexec to make it more obvious

* core/vm: terminate tracing using defers
2017-12-21 13:56:11 +02:00
Péter Szilágyi
1a5425779b Merge pull request #15727 from karalabe/rinkeby-akasha-bootnode
params: add Rinkeby bootnode from Akasha
2017-12-21 13:54:46 +02:00
Péter Szilágyi
a28390542c params: add Rinkeby bootnode from Akasha 2017-12-21 13:19:42 +02:00
Steven Roose
eeb53bc143 cmd/ethkey: new command line tool for keys (#15438)
ethkey is a new tool that serves as a command line interface to
the basic key management functionalities of geth. It currently
supports:
 
 - generating keyfiles
 - inspecting keyfiles (print public and private key)
 - signing messages
 - verifying signed messages
2017-12-21 11:36:05 +01:00
Bob Glickstein
e21aa0fda3 accounts/abi: remove check for len%32==0 when unpacking events (#15670)
This change inlines the logic of bytesAreProper at its sole
callsite, ABI.Unpack, and applies the multiple-of-32 test only in
the case of unpacking methods. Event data is not required to be a
multiple of 32 bytes long.
2017-12-21 10:59:14 +01:00
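
The distinction can be illustrated with a tiny Go sketch (names are made up,
this is not the abi package's API): the multiple-of-32 length check is applied
when unpacking method return data, but skipped for event data.

    package main

    import (
        "errors"
        "fmt"
    )

    // checkUnpackLength applies the length rule only for method outputs;
    // event data may have any length.
    func checkUnpackLength(data []byte, isMethod bool) error {
        if isMethod && len(data)%32 != 0 {
            return errors.New("abi: improperly formatted output")
        }
        return nil
    }

    func main() {
        data := make([]byte, 40) // not a multiple of 32
        fmt.Println("event :", checkUnpackLength(data, false))
        fmt.Println("method:", checkUnpackLength(data, true))
    }
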
gluk256
9f1007e554 whisper/whisperv6: message bundling (#15666)
Changed the communication protocol for ordinary messages
according to EIP 627. Messages will be sent in bundles, i.e.
an array of messages will be sent instead of a single message.
2017-12-21 10:31:44 +01:00
Péter Szilágyi
4b939c23e4 appveyor: bump Go to 1.9.2 (#15726) 2017-12-21 11:30:44 +02:00
Péter Szilágyi
7138de7b55 core: silence txpool reorg warning (annoying on import) (#15725) 2017-12-21 10:20:10 +02:00
Kurkó Mihály
b4cf57a581 core: fix typos (#15720) 2017-12-20 19:08:51 +02:00
Dmitry Shulyak
da58afcea0 accounts/abi: update array length after parsing array (#15618)
Fixes #15617
2017-12-20 15:09:23 +01:00
Felix Lange
ce823c9f84 crypto: ensure that VerifySignature rejects malleable signatures (#15708)
* crypto: ensure that VerifySignature rejects malleable signatures

It already rejected them when using libsecp256k1, make sure the nocgo
version does the same thing.

* crypto: simplify check

* crypto: fix build
2017-12-20 14:30:00 +02:00
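
The canonical-form check behind this is commonly implemented by rejecting
signatures whose s value lies in the upper half of the curve order, since
(r, N-s) verifies equally well and could be substituted by a third party.
A minimal sketch of that rule (not go-ethereum's actual function):

    package main

    import (
        "fmt"
        "math/big"
    )

    // secp256k1 group order N (a standard constant).
    var secp256k1N, _ = new(big.Int).SetString(
        "fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141", 16)

    // validateSignatureValues requires 0 < r,s < N and rejects the malleable
    // "high-s" form where s > N/2.
    func validateSignatureValues(r, s *big.Int) bool {
        if r.Sign() <= 0 || s.Sign() <= 0 {
            return false
        }
        if r.Cmp(secp256k1N) >= 0 || s.Cmp(secp256k1N) >= 0 {
            return false
        }
        halfN := new(big.Int).Rsh(secp256k1N, 1)
        return s.Cmp(halfN) <= 0
    }

    func main() {
        r := big.NewInt(1)
        lowS := big.NewInt(2)
        highS := new(big.Int).Sub(secp256k1N, big.NewInt(2)) // the high-s twin
        fmt.Println("low-s :", validateSignatureValues(r, lowS))
        fmt.Println("high-s:", validateSignatureValues(r, highS))
    }
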
Péter Szilágyi
5e1581c2c3 core: fix panic when stat-ing a tx from a queue-only account (#15714) 2017-12-20 12:34:43 +02:00
Janos Guljas
820cf09c98 cmd/swarm: return error early in buildConfig function 2017-12-19 23:51:09 +01:00
Armin Braun
50df2b78be console: create datadir at startup (#15700)
Fixes #15672 by creating the datadir when creating the
console. This prevents failing to save the history if no datadir
exists.
2017-12-19 13:21:03 +01:00
Janos Guljas
dd5ae4fd8e cmd/swarm: add validation for EnsAPIs configuration parameter 2017-12-19 11:47:26 +01:00
Janoš Guljaš
c786f75389 swarm: bzz-list, bzz-raw and bzz-immutable schemes (#15667)
* swarm/api: url scheme bzz-list for getting list of files from manifest

Replace query parameter list=true for listing all files contained
in a swarm manifest with a new URL scheme bzz-list.

* swarm: replace bzzr and bzzi schemes with bzz-raw and bzz-immutable

New URI schemes are added and old ones are deprecated, but not removed.
The old schemes bzzr and bzzi remain functional for backward compatibility.

* swarm/api: completely remove bzzr and bzzi schemes

Remove old schemes in favour of bzz-raw and
bzz-immutable.

* swarm/api: revert "completely remove bzzr and bzzi schemes"

Keep bzzr and bzzi schemes for backward compatibility. At least
until 0.3 swarm release.
2017-12-19 10:49:30 +02:00
Péter Szilágyi
7f9d94fe9a Merge pull request #15693 from zsfelfoldi/wait_nopeers
contracts/release: do not print error log if les backend has no peers
2017-12-19 10:45:02 +02:00
Yondon Fu
cf7aba36c8 accounts/abi: update array type check in method.go. Add more packing tests 2017-12-18 21:16:25 -05:00
Yondon Fu
3857cdc267 Merge branch 'master' into abi-offset-fixed-arrays 2017-12-18 17:17:41 -05:00
Janos Guljas
0d6a735a72 swarm/api: implement NoResolverError with information about TLD
MultiResolver needs to provide information about the TLD that has
no resolver configured for it.
2017-12-18 23:07:48 +01:00
Zsolt Felfoldi
48648bc2f8 contracts/release: do not print error log if les backend has no peers 2017-12-18 16:26:17 +01:00
Janos Guljas
c0a4d9e1e6 cmd/swarm, swarm: disable ENS API by default
Specifying ENS API CLI flag, env variable or configuration
field is required for ENS resolving. Backward compatibility is
preserved with --ens-api="" CLI flag value.
2017-12-18 16:22:39 +01:00
Péter Szilágyi
fe070ab5c3 Merge pull request #15674 from chfast/vm-no-snapshot-param
core/vm: Remove snapshot param from Interpreter.Run()
2017-12-18 16:16:59 +02:00
Felix Lange
8c33ac10bf internal/ethapi: support "input" in transaction args (#15640)
The tx data field is called "input" in returned objects and "data" in
argument objects. Make it so "input" can be used, but bail if both
are set.
2017-12-18 12:50:21 +01:00
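
One way the rule can look in code (an illustrative sketch, not the actual
SendTxArgs handling; treating equal duplicates as acceptable is an assumption
of this sketch): "input" is preferred, "data" is still accepted, and a request
that supplies conflicting values for both is rejected.

    package main

    import (
        "bytes"
        "errors"
        "fmt"
    )

    type txArgs struct {
        Data  []byte // legacy field name in argument objects
        Input []byte // field name used in returned objects
    }

    // payload prefers Input and bails when both fields are set but disagree.
    func (a *txArgs) payload() ([]byte, error) {
        if a.Input != nil && a.Data != nil && !bytes.Equal(a.Input, a.Data) {
            return nil, errors.New(`both "input" and "data" are set and differ`)
        }
        if a.Input != nil {
            return a.Input, nil
        }
        return a.Data, nil
    }

    func main() {
        ok := txArgs{Input: []byte{0x01}}
        bad := txArgs{Input: []byte{0x01}, Data: []byte{0x02}}
        fmt.Println(ok.payload())
        fmt.Println(bad.payload())
    }
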
Péter Szilágyi
3b79bac05b Merge pull request #15698 from original-brownbear/15668
accounts/keystore: Improved error message
2017-12-18 13:43:10 +02:00
Armin
afc2039f22 accounts/keystore: Improved error message
* Fix for #15668
2017-12-18 12:28:34 +01:00
Péter Szilágyi
13db4af345 Merge pull request #15696 from ferhatelmas/p2p-goroutine-leak
p2p/discover: fix leaked goroutine in data expiration
2017-12-18 10:59:10 +02:00
Péter Szilágyi
64ee3e92ea Merge pull request #15686 from sorin/sorin-geth-attach-rinkeby
cmd/geth: add support for geth --rinkeby attach
2017-12-18 10:28:09 +02:00
ferhat elmas
afa3c72c40 p2p/discover: fix leaked goroutine in data expiration 2017-12-18 09:16:54 +01:00
Sorin Neacsu
1d7d7f57d0 cmd/geth: add support for geth --rinkeby attach 2017-12-15 13:31:10 -08:00
Paweł Bylica
fb5f25eeee core/vm: Remove snapshot param from Interpreter.Run() 2017-12-15 13:33:35 +01:00
Felix Lange
c6069a627c crypto, crypto/secp256k1: add CompressPubkey (#15626)
This adds the inverse to DecompressPubkey and improves a few minor
details in crypto/secp256k1.
2017-12-15 10:40:09 +01:00
Péter Szilágyi
1f2176dedc Merge pull request #15679 from shapeshed/patch-1
crypto: Fix comment typo
2017-12-15 01:24:19 +02:00
George Ornbo
7bb2a489b2 crypto: Fix comment typo 2017-12-14 21:55:18 +00:00
rhaps107
e9971d356b internal/ethapi: don't crash for missing receipts
Fixes #15408
Fixes #14432
2017-12-14 13:24:34 +01:00
Janos Guljas
47a8014559 cmd/swarm: Merge branch 'master' into multiple-ens-endpoints
Fix a conflict in cmd/swarm envVarsOverride function.
2017-12-14 10:36:12 +01:00
Péter Szilágyi
5129ef22c2 Merge pull request #15629 from holiman/relax_futuretime
consensus/ethash: relax requirements when determining future-blocks
2017-12-14 11:28:42 +02:00
Janos Guljas
19982f9467 swarm, cmd/swarm: Merge branch 'master' into multiple-ens-endpoints
Merge with changes that implement config file PR #15548.

Field *EnsApi string* in swarm/api.Config is replaced with
*EnsAPIs []string*.

A new field *EnsDisabled bool* is added to swarm/api.Config as an
easy way to disable ENS resolving with the config file.

Signature of function swarm.NewSwarm is changed and simplified.
2017-12-13 10:40:39 +01:00
Felix Lange
3654aeaa4f p2p/simulations: fix gosimple nit (#15661) 2017-12-13 03:15:27 +01:00
Vitaly V
f258a21a63 rpc: use method constants instead of literal strings (#15652) 2017-12-12 19:12:32 +01:00
holisticode
fd777bb210 p2p/simulations: add mocker functionality (#15207)
This commit adds mocker functionality to p2p/simulations. A
mocker allows starting/stopping nodes via the HTTP API.
2017-12-12 19:10:41 +01:00
Zach
3da1bf8ca1 all: use gometalinter.v2, fix new gosimple issues (#15650) 2017-12-12 19:05:47 +01:00
yoza
bbea4b2b53 internal/ethapi: fix typo in comment (#15659) 2017-12-12 18:55:39 +01:00
holisticode
32516c768e cmd/swarm: add config file (#15548)
This commit adds a TOML configuration option to swarm. It reuses
the TOML configuration structure used in geth with swarm
customized items.

The commit:

* Adds a "dumpconfig" command to the swarm executable which
  allows printing the (default) configuration to stdout, which
  then can be redirected to a file in order to customize it.
* Adds a "--config <file>" option to the swarm executable which will
  allow to load a configuration file in TOML format from the
  specified location in order to initialize the Swarm node The
  override priorities are like follows: environment variables
  override command line arguments override config file override
  default config.
2017-12-11 22:56:06 +01:00
Felix Lange
1a32bdf92c crypto: fix error check in toECDSA (#15632)
With this change,

    key, err := crypto.HexToECDSA("000000...")
    
returns nil key and an error instead of a non-nil key with nil X
and Y inside. Issue found by @guidovranken.
2017-12-11 22:49:09 +01:00
Felix Lange
2499b1b139 rlp: fix string size check in readKind (#15625)
Issue found by @guidovranken
2017-12-11 22:47:10 +01:00
Guillaume Ballet
e7610eadfe whisper: sym encryption message padding includes salt (#15631)
Now that the AES salt has been moved to the payload, padding must
be adjusted to hide it, lest an attacker guess that the packet
uses symmetric encryption.
2017-12-11 12:32:58 +01:00
Michael Ruminer
732f5468d3 eth: make tracing API errors more user friendly (#15589) 2017-12-09 23:47:13 +01:00
Alejandro Isaza
bbfe0b8d04 mobile: Add GetSign to BigInt (#15558) 2017-12-09 23:45:46 +01:00
Péter Szilágyi
46e5583993 cmd/utils, eth: init etherbase from within eth (#15528) 2017-12-09 23:42:23 +01:00
Guillaume Ballet
bf62acf033 whisper/whisperv6: remove Version from the envelope (#15621) 2017-12-08 16:08:56 +01:00
Sorin Neacsu
586198ccea console: add admin.clearHistory command (#15614) 2017-12-08 15:14:14 +01:00
Guillaume Ballet
d95962cd5d whisper/whisperv6: remove aesnonce (#15578)
As per EIP-627, the salt for symmetric encryption is now
part of the payload. This commit does that.
2017-12-08 11:40:59 +01:00
Martin Holst Swende
79d5e5593f consensus/ethash: relax requirements when determining future-blocks 2017-12-08 10:06:16 +01:00
Felix Lange
b5874273ce travis.yml: avoid submodules on builders without tests (#15620)
Also remove installation steps for fuse and golang.org/x/tools/cmd/cover 
because they're not required anymore.
2017-12-07 15:49:35 +01:00
Airead
8092106abc core/types: fix typo in comment (#15619) 2017-12-07 10:06:44 +01:00
Benoit Verkindt
eab2201f80 eth: return rlp-decoded values from debug_storageRangeAt (#15476)
Fixes #15196
2017-12-06 16:42:16 +01:00
Felix Lange
e85b68ef53 crypto: add DecompressPubkey, VerifySignature (#15615)
We need those operations for p2p/enr.

Also upgrade github.com/btcsuite/btcd/btcec to the latest version
and improve BenchmarkSha3. The benchmark printed extra output 
that confused tools like benchstat and ignored N.
2017-12-06 16:07:08 +01:00
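
A short usage sketch for the two new helpers (assuming the go-ethereum crypto
package API): sign a Keccak-256 hash, then verify it against the uncompressed
public key, dropping the recovery id byte that VerifySignature does not expect.

    package main

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/crypto"
    )

    func main() {
        key, err := crypto.GenerateKey()
        if err != nil {
            panic(err)
        }
        hash := crypto.Keccak256([]byte("hello enr"))

        sig, err := crypto.Sign(hash, key) // 65 bytes: r || s || v
        if err != nil {
            panic(err)
        }
        pub := crypto.FromECDSAPub(&key.PublicKey) // uncompressed, 65 bytes

        // VerifySignature takes only the 64-byte r || s part.
        fmt.Println("valid:", crypto.VerifySignature(pub, hash, sig[:64]))

        // DecompressPubkey is the inverse of CompressPubkey (33-byte form).
        compressed := crypto.CompressPubkey(&key.PublicKey)
        decompressed, err := crypto.DecompressPubkey(compressed)
        fmt.Println("round-trip ok:", err == nil && decompressed.X.Cmp(key.PublicKey.X) == 0)
    }
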
Sorin Neacsu
6e613cf3de cmd/geth: add support for geth attach --testnet (#15597) 2017-12-05 11:17:38 +01:00
Janos Guljas
1dc19de5da swarm/api: use path.Ext instead strings.LastIndex in MultiResolver.Resolve 2017-12-04 22:56:52 +01:00
Janos Guljas
e451b65fae swarm: deprecate --ens-addr CLI flag with a warning message 2017-12-04 22:41:21 +01:00
Janos Guljas
3732c15faa swarm/api: remove unneeded blank assignement 2017-12-04 22:32:33 +01:00
Janos Guljas
a758b5cf7a swarm/api: initialize map with make to comply with the convention 2017-12-04 22:31:00 +01:00
Janos Guljas
34edbc8868 swarm/api: remove unneeded assignment in MultiResolverOptionWithResolver 2017-12-04 22:29:37 +01:00
Janos Guljas
15ad6f27da swarm: check if "--ens-api ''" is specified in order to disable ENS 2017-12-04 22:28:11 +01:00
Janos Guljas
b33a051a48 swarm: add comment for parseFlagEnsAPI and fix a mistake in comment in code 2017-12-04 22:20:29 +01:00
Steven Roose
afb8154eab common: improve IsHexAddress and add tests (#15551)
Also unexport isHex, hasHexPrefix because IsHexAddress is the only caller.

Fixes #15550
2017-12-04 19:34:15 +01:00
Janos Guljas
7898e0d585 swarm: multiple --ens-api flags
Allow multiple --ens-api flags to be specified with value format
[tld:][contract-addr@]url.

Backward compatibility with only one --ens-api flag and --ens-addr
flag is preserved and conflict cases are handled:

 - multiple --ens-api with --ens-addr returns an error

 - single --ens-api with contract address and --ens-addr with
   different contract address returns an error

Previously implemented --ens-endpoint is removed. Its functionality
is replaced with multiple --ens-api flags.
2017-12-04 12:44:24 +01:00
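
The value format can be unpacked roughly like this (an illustrative sketch,
not the cmd/swarm parser): a leading "name:" counts as a TLD only when it is
purely alphabetic and not followed by "//", so URL schemes such as "http://"
are left alone, and an optional contract address is split off at "@".

    package main

    import (
        "fmt"
        "strings"
    )

    // isLetters reports whether s consists of ASCII letters only.
    func isLetters(s string) bool {
        for _, r := range s {
            if (r < 'a' || r > 'z') && (r < 'A' || r > 'Z') {
                return false
            }
        }
        return len(s) > 0
    }

    // parseEnsAPI splits a value of the form [tld:][contract-addr@]url.
    func parseEnsAPI(value string) (tld, contract, url string) {
        url = value
        if i := strings.Index(url, ":"); i > 0 && isLetters(url[:i]) && !strings.HasPrefix(url[i+1:], "//") {
            tld, url = url[:i], url[i+1:]
        }
        if i := strings.Index(url, "@"); i > 0 {
            contract, url = url[:i], url[i+1:]
        }
        return
    }

    func main() {
        for _, v := range []string{
            "http://127.0.0.1:8545",
            "eth:0x112234455c3a32fd11230c42e7bccd4a84e02010@http://127.0.0.1:8545",
        } {
            tld, contract, url := parseEnsAPI(v)
            fmt.Printf("tld=%q contract=%q url=%q\n", tld, contract, url)
        }
    }
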
ferhat elmas
1d06e41f04 p2p, swarm/network/kademlia: use IsZero to check for zero time (#15603) 2017-12-04 11:07:10 +01:00
ferhat elmas
43dd8e62fc build: enable gosimple linter (#15593) 2017-12-01 13:04:06 +01:00
Matthew Di Ferrante
80c6dfc19f crypto/bn256: fix generator on G1 (#15591)
The generator in the current lib uses -2 as the y point when doing
ScalarBaseMult. This makes it so that points/signatures generated
by libs like py_ecc don't match/validate, as pretty much all
other libs (including libsnark) have (1, 2) as the standard
generator.

This does not affect consensus, as the generator is never used in
the VM: points are always explicitly defined and there is no
ScalarBaseMult op. It only makes it so that doing "import
github.com/ethereum/go-ethereum/crypto/bn256" doesn't generate
bad points in userland tools.
2017-12-01 13:03:39 +01:00
Rob
d927c67f9d eth/downloader: update tests for reliability (#15337)
Updated use of Parallel and added some subtests to help isolate
them. Increased timeout in RequestHeadersByNumber so it
doesn't time out and cause other tests to break.
2017-12-01 12:54:17 +01:00
Guillaume Ballet
20fe928914 whisper: rename EnvNonce to Nonce in the v6 Envelope (#15579) 2017-12-01 12:50:19 +01:00
Lewis Marshall
54aeb8e4c0 p2p/simulations: various stability fixes (#15198)
p2p/simulations: introduce dialBan

- Refactor simulations/network connection getters to support
  avoiding simultaneous dials between two peers. If two peers dial
  simultaneously, the connection will be dropped; to help avoid
  that, we essentially lock the connection object with a
  timestamp which serves as a ban on dialing for a period of time
  (dialBanTimeout).

- The connection getter InitConn can be wrapped and passed to the
  nodes via the adapters.NodeConfig#Reachable field and then used by
  the respective services when they initiate connections. This
  massively stabilises the emerging connectivity when running with
  hundreds of nodes bootstrapping a network.

p2p: add Inbound public method to p2p.Peer

p2p/simulations: Add server id to logs to support debugging
in-memory network simulations when multiple peers are logging.

p2p: SetupConn now returns error. The dialer checks the error and
only calls resolve if the actual TCP dial fails.
2017-12-01 12:49:04 +01:00
Janos Guljas
057af8c5c8 swarm: add CLI --ens-endpoint flag (#14386)
Implement a CLI flag that can be repeated to allow multiple ENS
resolvers for different TLDs.
2017-12-01 11:25:50 +01:00
Zach
73067fd24f build: enable goconst linter (#15566) 2017-11-30 11:22:26 +01:00
Péter Szilágyi
e37f7be97e Merge pull request #15577 from karalabe/common-hexconvert-singlebyte
common: fix hex utils to handle 1 byte address conversions
2017-11-29 11:27:24 +02:00
Péter Szilágyi
b33a5294ea common: fix hex utils to handle 1 byte address conversions 2017-11-29 02:25:32 +02:00
Felix Lange
be12392fba core/vm: track 63/64 call gas off stack (#15563)
* core/vm: track 63/64 call gas off stack

Gas calculations in gasCall* relayed the available gas for calls by
replacing it on the stack. This lead to inconsistent traces, which we
papered over by copying the pre-execution stack in trace mode.

This change relays available gas using a temporary variable, off the
stack, and allows removing the weird copy.

* core/vm: remove stackCopy

* core/vm: pop call gas into pool

* core/vm: to -> addr
2017-11-28 21:05:49 +02:00
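
The quantity being relayed off the stack here is the EIP-150 "all but one
64th" amount. A minimal sketch of that cap (not the evm's gasCall function,
which also accounts for the base costs of the call):

    package main

    import "fmt"

    // callGas caps the gas a CALL may forward at availableGas - availableGas/64.
    func callGas(availableGas, requestedGas uint64) uint64 {
        max := availableGas - availableGas/64
        if requestedGas > max {
            return max
        }
        return requestedGas
    }

    func main() {
        fmt.Println(callGas(100000, 1<<62)) // capped to 98438
        fmt.Println(callGas(100000, 50000)) // fits, forwarded as requested
    }
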
Maximilian Meister
8f35e3086c cmd/geth: fix geth attach --datadir=... (#15517) 2017-11-28 14:00:00 +01:00
Péter Szilágyi
e323ed5a9a Merge pull request #15557 from MaximilianMeister/bootnodes-toml
cmd/utils: bootstrap nodes in config file were not respected
2017-11-28 13:34:14 +02:00
Zach
6bb61ee9ef build: improve ci.go synopsis (#15565) 2017-11-28 10:45:48 +01:00
gary rong
0f7fbb85d6 trie: make fullnode children hash calculation concurrently (#15131)
* trie: make fullnode children hash calculation concurrently

* trie: thread out only on topmost fullnode

* trie: clean up full node children hash calculation

* trie: minor code fixups
2017-11-27 13:34:17 +02:00
Maximilian Meister
62dc530773 cmd/utils: bootstrap nodes in config file were not respected
Signed-off-by: Maximilian Meister <mmeister@suse.de>
2017-11-26 12:42:51 +01:00
Paul Litvak
e4c9fd29a3 cmd/utils: disallow --lightserv in light mode (#15514)
* Disallow --lightserv in light mode

* Reformatted

* cmd/utils: reduce nesting levels a bit
2017-11-24 17:07:21 +02:00
Péter Szilágyi
de37e088f2 Merge pull request #15549 from karalabe/statedb-copy
core/state: copy trie too, not just content
2017-11-24 17:02:40 +02:00
Péter Szilágyi
f0ac925fa7 Merge pull request #15329 from holisticode/exact-match-fix
swarm/api: bug fix exact match for manifest
2017-11-24 16:23:37 +02:00
Péter Szilágyi
0981d2e566 Merge pull request #15498 from nonsense/account_cache_modtime_test_fix
accounts/keystore: change modtime for test case files to be bigger than 1sec.
2017-11-24 16:21:39 +02:00
gary rong
f14047dae5 cmd, consensus, eth: split ethash related config to its own (#15520)
* cmd, consensus, eth: split ethash related config to its own

* eth, consensus: minor polish

* eth, consensus, console: compress pow testing config field to single one

* consensus, eth: document pow mode
2017-11-24 16:10:27 +02:00
Péter Szilágyi
b0056f5bd0 Merge pull request #15552 from karalabe/javascript-tracers-nowrap
internal/ethapi: avoid recreating JavaScript tracer wrappers
2017-11-24 15:45:08 +02:00
Péter Szilágyi
5dea0f2aa4 core/state: copy trie too, not just content 2017-11-24 14:20:49 +02:00
Péter Szilágyi
989fb4472a internal/ethapi: avoid recreating JavaScript tracer wrappers 2017-11-24 13:55:12 +02:00
Ricardo Domingos
9ff9d04a69 all: fix code comment typos (#15547)
* console: fix typo in comment

* contracts/release: fix typo in comment

* core: fix typo in comment

* eth: fix typo in comment

* miner: fix typo in comment
2017-11-24 11:20:01 +02:00
Zoe Nolan
edc3e0efeb cmd/puppeth: fix typo in comment (#15539)
* cmd: fix typo in comment

* cmd/puppeth: tiny comment fixup
2017-11-24 10:58:28 +02:00
Péter Szilágyi
f9569f3cd8 Merge pull request #15390 from karalabe/puppeth-devcon3
cmd/puppeth: new version as presented at devcon3
2017-11-24 10:56:33 +02:00
Péter Szilágyi
a3a2c6b0d9 cmd/puppeth: fix typos and review suggestions 2017-11-23 14:22:59 +02:00
Péter Szilágyi
35801f938e Merge pull request #15538 from zoenolan/patch-1
build: fix typo in comment
2017-11-23 14:06:48 +02:00
Zoe Nolan
cc3ca63dbf build: fix typo in comment 2017-11-21 19:23:42 +00:00
Péter Szilágyi
049797d40a Merge pull request #15521 from rjl493456442/clean_tx_journal
les: clean up tx journal file after testing
2017-11-21 20:42:11 +02:00
rjl493456442
41ef34ae40 les: use modified default txpool config to avoid creating journal file 2017-11-21 22:13:57 +08:00
Péter Szilágyi
b169a309f9 cmd/puppeth: fix unconvert linters 2017-11-21 15:13:08 +02:00
Péter Szilágyi
7f40ae7876 cmd/puppeth: switch over to upstream alltools docker image 2017-11-21 15:09:40 +02:00
Péter Szilágyi
327dcd3622 cmd/faucet, cmd/puppeth: drop GitHub support at official request 2017-11-21 15:09:39 +02:00
Péter Szilágyi
ffc12f63ec cmd/puppeth: simplifications and pre-built docker images 2017-11-21 15:09:39 +02:00
Péter Szilágyi
80be5e5463 cmd/puppeth: store genesis locally to persist restarts 2017-11-21 15:09:38 +02:00
Péter Szilágyi
7abf968d6f cmd/puppeth: skip genesis custom extra-data 2017-11-21 15:09:37 +02:00
Péter Szilágyi
6eb38e02a8 cmd/puppeth: fix dashboard iframes, extend with new services 2017-11-21 15:09:36 +02:00
Péter Szilágyi
51a86f61be cmd/faucet: protocol relative websockets, noauth mode 2017-11-21 15:09:36 +02:00
Péter Szilágyi
b5cf603895 cmd/puppeth: add support for deploying web wallets 2017-11-21 15:09:35 +02:00
Péter Szilágyi
1e0c336d29 cmd/puppeth: etherchain light block explorer for PoW nets 2017-11-21 15:09:34 +02:00
Péter Szilágyi
9e095251b7 cmd/puppeth: mount ethash dir from the host to cache DAGs 2017-11-21 15:09:33 +02:00
Péter Szilágyi
da3b9f831e cmd/puppeth: support deploying services with forced rebuilds 2017-11-21 15:09:33 +02:00
Péter Szilágyi
7b258c9681 cmd/puppeth: concurrent server dials and health checks 2017-11-21 15:09:32 +02:00
Péter Szilágyi
8c78449a9e cmd/puppeth: reorganize stats reports to make it readable 2017-11-21 15:09:28 +02:00
Péter Szilágyi
005665867d VERSION, params: begin 1.8.0 release cycle 2017-11-21 11:59:12 +02:00
Péter Szilágyi
4bb3c89d44 params: release v1.7.3 stable 2017-11-21 11:54:19 +02:00
Martin Holst Swende
bedf6f40af cmd/geth: make geth account new faster with many keys (#15529) 2017-11-20 17:39:53 +01:00
Felix Lange
b4f2e4de8f .github: add CODEOWNERS (#15507) 2017-11-20 17:32:23 +01:00
Nick Johnson
72ed186f46 eth, internal: Implement getModifiedAccountsBy(Hash|Number) using trie diffs (#15512)
* eth, internal: Implement getModifiedAccountsBy(Hash|Number) using trie diffs

* eth, internal: Changes in response to review

* eth: More fixes to getModifiedAccountsBy*

* eth: minor polishes on error capitalization
2017-11-20 17:18:50 +02:00
Péter Szilágyi
7b95cca56c Merge pull request #15527 from holiman/bump_watch
accounts/keystore: Ignore initial trigger of rescan-event
2017-11-20 13:54:15 +02:00
Martin Holst Swende
e2b3a23663 accounts/keystore: Ignore initial trigger of rescan-event 2017-11-20 12:35:30 +01:00
Péter Szilágyi
0f184d3b14 Merge pull request #15526 from karalabe/accounts-new
accounts: fix two races in the account manager
2017-11-20 13:14:57 +02:00
Péter Szilágyi
6810674de9 accounts/keystore: lock file cache during scan, minor polish 2017-11-20 12:21:52 +02:00
Péter Szilágyi
8a79836044 accounts: list, then subscribe (sub requires active reader) 2017-11-20 12:20:46 +02:00
Pulyak Viktor
f5091e5711 internal/ethapi: fix js tracer to properly decode addresses (#15297)
* Add method getBalanceFromJs for work with address as bytes

* expect []byte instead of common.Address in ethapi tracer
2017-11-18 03:56:03 +02:00
Péter Szilágyi
0dbf55d478 Merge pull request #15509 from tbm/typo
Fix typo in README.md
2017-11-17 17:01:53 +02:00
Martin Michlmayr
2ab5c11261 README: fix typo 2017-11-17 14:45:09 +00:00
Péter Szilágyi
e39ca5e597 Merge pull request #15506 from tsarpaul/master
internal/ethapi: changed output in txpool.inspect
2017-11-17 15:16:49 +02:00
tsarpaul
c7b0abf86b Added output to clarify gas calculation in txpool.inspect 2017-11-17 15:07:57 +02:00
Péter Szilágyi
c8b7ea1c2e Merge pull request #15505 from karalabe/fix-rpc-pr
rpc: minor cleanups to RPC PR
2017-11-17 14:59:44 +02:00
Péter Szilágyi
3c6b9c5d72 rpc: minor cleanups to RPC PR 2017-11-17 14:25:02 +02:00
Armani Ferrante
c5b8569707 rpc: disallow PUT and DELETE on HTTP (#15501)
Fixes #15493
2017-11-17 13:07:11 +01:00
Péter Szilágyi
b0190189a3 core/vm, internal/ethapi: tracer no full storage, nicer json output (#15499)
* core/vm, internal/ethapi: tracer no full storage, nicer json output

* core/vm, internal/ethapi: omit disabled trace fields
2017-11-16 18:53:18 +02:00
Péter Szilágyi
87f5b4123c Merge pull request #15496 from karalabe/rpc-get-healthcheck
rpc: allow dumb empty requests for AWS health checks
2017-11-16 17:03:06 +02:00
Anton Evangelatov
b64525694b accounts/keystore: comments above time.Sleep 2017-11-16 15:01:02 +01:00
Anton Evangelatov
448abb61eb accounts/keystore: change modtime for test cases to be bigger than 1sec. 2017-11-16 14:17:28 +01:00
Péter Szilágyi
4013e23312 rpc: allow dumb empty requests for AWS health checks 2017-11-16 13:51:06 +02:00
jtakalai
5aa3eac22d eth/downloader: minor comments cleanup (#15495)
it's -> its

pet peeve, and I like to imagine I'm not alone.
2017-11-16 13:14:51 +02:00
Péter Szilágyi
bb57f1f1e5 Merge pull request #15489 from karalabe/bloombits-shifted-start
core/bloombits: handle non 8-bit boundary section matches
2017-11-15 14:34:53 +02:00
Péter Szilágyi
463014126f core/bloombits: handle non 8-bit boundary section matches 2017-11-15 14:10:35 +02:00
Péter Szilágyi
bce5d837b5 Merge pull request #14582 from holiman/jumpdest_improv
core/vm: improve jumpdest analysis
2017-11-15 10:52:14 +02:00
Péter Szilágyi
43c8a1914c Merge pull request #15470 from karalabe/clique-sametd-splitter
core: split same-td blocks on block height
2017-11-15 10:44:18 +02:00
Kurkó Mihály
ba62215d9e cmd, dashboard: dashboard using React, Material-UI, Recharts (#15393)
* cmd, dashboard: dashboard using React, Material-UI, Recharts

* cmd, dashboard, metrics: initial proof of concept dashboard

* dashboard: delete blobs

* dashboard: gofmt -s -w .

* dashboard: minor text and code polishes
2017-11-14 19:34:00 +02:00
gary rong
984c25ac40 accounts, internal: fail if no suitable estimated gas found (#15477)
* accounts, internal: return an error if no suitable estimated gas found

* accounts, internal: minor polishes on the gas estimator
2017-11-14 18:26:31 +02:00
Péter Szilágyi
a3128f9099 Merge pull request #15479 from guoger/patch-1
core/vm: fix typos in jump_table.go
2017-11-14 15:06:29 +02:00
Jay Guo
924098c6e5 core/vm: fix typos in jump_table.go 2017-11-14 17:57:37 +08:00
Martin Holst Swende
96ddf27a48 core/vm: copyright header on test-file 2017-11-13 22:04:53 +01:00
Péter Szilágyi
54ce3887d8 core: split same-td blocks on block height 2017-11-13 17:07:05 +02:00
Péter Szilágyi
b81a9cd829 Merge pull request #15467 from karalabe/docker-alltools
Dockerfile: support alltools image beside plain Geth
2017-11-13 14:40:37 +02:00
Péter Szilágyi
cef06358ff Dockerfile: support alltools image beside plain Geth 2017-11-13 14:30:13 +02:00
Péter Szilágyi
9b97f98334 Merge pull request #15457 from robert-zaremba/testify
vendor: add github.com/stretchr/testify test dependency
2017-11-13 14:23:38 +02:00
Péter Szilágyi
836314c055 Merge pull request #15464 from karalabe/docker-fix
dockerignore, internal/build: forward correct git folder
2017-11-12 22:56:38 +02:00
Péter Szilágyi
e401536c97 dockerignore, internal/build: forward correct git folder 2017-11-12 22:52:41 +02:00
Bo
cb8bbe7081 puppeth: handle encrypted ssh keys (closes #15442) (#15443)
* cmd/puppeth: handle encrypted ssh keys

* cmd/puppeth: fix unconvert linter error
2017-11-12 22:24:42 +02:00
Arba Sasmoyo
f47adc9ea8 .dockerignore, internal/build: Read git information directly from file (#15458)
* .dockerignore, internal/build: Read git information directly from file

This commit changes how the git commit and branch are retrieved for the build
environment: instead of running the git command, the git files are read directly.

This commit also adds required git files into Docker build context.

fixes: #15346

* .dockerignore: workaround for including some files in .git
2017-11-12 22:00:18 +02:00
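For context, a minimal Go sketch of the file-based approach described above: resolve HEAD by reading the .git files directly instead of shelling out to the git binary. The function name and the handling (loose refs only) are illustrative, not the internal/build code.

```go
import (
	"os"
	"path/filepath"
	"strings"
)

// readGitCommit resolves the current commit hash and branch by reading the
// .git files directly. Sketch only: it handles a HEAD that either contains a
// raw hash (detached) or points at a loose ref under refs/heads.
func readGitCommit(repo string) (commit, branch string, err error) {
	head, err := os.ReadFile(filepath.Join(repo, ".git", "HEAD"))
	if err != nil {
		return "", "", err
	}
	ref := strings.TrimSpace(string(head))
	if !strings.HasPrefix(ref, "ref: ") {
		return ref, "", nil // detached HEAD: the file holds the hash itself
	}
	refPath := strings.TrimPrefix(ref, "ref: ") // e.g. refs/heads/master
	hash, err := os.ReadFile(filepath.Join(repo, ".git", filepath.FromSlash(refPath)))
	if err != nil {
		return "", "", err
	}
	return strings.TrimSpace(string(hash)), filepath.Base(refPath), nil
}
```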
Robert Zaremba
5d895db2fd vendor: add github.com/stretchr/testify test dependency
github.com/stretchr/testify is a useful library for doing
assertions in tests. It makes assertions in tests less verbose and
more comfortable to read and use.
2017-11-11 00:29:41 +01:00
ferhat elmas
86f6568f66 build: enable unconvert linter (#15456)
* build: enable unconvert linter

 - fixes #15453
 - update code base for failing cases

* cmd/puppeth: replace syscall.Stdin with os.Stdin.Fd() for unconvert linter
2017-11-10 19:06:45 +02:00
Benoit Verkindt
3ee86a57f3 rpc: warn on WebSocket origin mismatch (#15451)
Fixes #15373
2017-11-10 10:22:06 +01:00
Alex Beregszaszi
b31cc71ff6 contracts/chequebook: update for latest Solidity (#15425)
This changes behaviour: before if the owner/amount didn't match, it
resulted in a successful execution without doing anything; now it
results in a failed execution using the revert opcode (remainder gas
is not consumed).
2017-11-10 10:19:51 +01:00
Péter Szilágyi
febfea7af2 Merge pull request #15450 from karalabe/lint-gofmt-misspell
build: enable gofmt and misspell linters
2017-11-10 11:09:47 +02:00
Péter Szilágyi
d1eb4006e2 build: enable gofmt and misspell linters 2017-11-09 19:48:12 +02:00
Fabio Barone
03ec3fed2b swarm/api: bug fix exact match for manifest 2017-11-09 10:38:42 -05:00
Péter Szilágyi
3d88ecd61a Merge pull request #15448 from karalabe/android-build-fix
travis: bump Android NDK version and Android Go builder
2017-11-09 14:43:37 +02:00
Péter Szilágyi
09b347fec9 travis: bump Android NDK version and Android Go builder 2017-11-09 14:32:05 +02:00
Dan Melton
d7f2462e8f build: add Travis job to lint Go code #15372 (#15416)
* build: [finishes #15372] implements generalized linter and travis job

* .travis, build: minor polishes, disable deadcode
2017-11-09 12:46:03 +02:00
bas-vk
4fe30bf5ad rpc: check content-type for HTTP requests (#15220) 2017-11-09 10:54:58 +01:00
bas-vk
4732ee89cb github: add remark about general questions (#15250) 2017-11-09 10:51:14 +01:00
b00ris
7ace023981 les: fix channel assignment data race (#15441) 2017-11-09 10:43:37 +01:00
Evgeny Danilenko
0914d4e0d2 les: fix misuse of WaitGroup (#15365) 2017-11-09 10:34:35 +01:00
ferhat elmas
9619a61024 all: gofmt -w -s (#15419) 2017-11-08 11:45:52 +01:00
Eugene Valeyev
bfdc0fa362 mobile: fix FilterLogs (#15418)
All logs in the FilterLogs return value would be the same object
because the for loop captured the pointer to the iteration variable.
2017-11-06 09:46:43 -06:00
gluk256
9f7cd75682 whisper/whisperv6: initial commit (clone of v5) (#15324) 2017-11-03 21:29:49 +01:00
Jim McDonald
0131bd6ff9 core: respect price bump threshold (#15401)
* core: allow price bump at threshold

* core: test changes to allow price bump at threshold

* core: reinstate tx replacement test underneath threshold

* core: minor test failure message cleanups
2017-10-30 13:05:00 +02:00
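A minimal Go sketch of the threshold check implied by the change above: a replacement transaction is accepted when its price is greater than or equal to the old price bumped by the configured percentage (previously it had to exceed it strictly). The function name and signature are illustrative, not the txpool's.

```go
import "math/big"

// bumpAllowed reports whether newPrice meets the price-bump threshold over
// oldPrice, accepting a replacement at exactly the threshold.
func bumpAllowed(oldPrice, newPrice *big.Int, priceBump uint64) bool {
	// threshold = oldPrice * (100 + priceBump) / 100
	threshold := new(big.Int).Mul(oldPrice, big.NewInt(int64(100+priceBump)))
	threshold.Div(threshold, big.NewInt(100))
	return newPrice.Cmp(threshold) >= 0
}
```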
Péter Szilágyi
dd8a62683e Merge pull request #15398 from ferhatelmas/core-swarm-typo
core, swarm: typo fixes
2017-10-30 11:08:24 +02:00
ferhat elmas
07e8c177e7 core, swarm: typo fixes 2017-10-30 01:23:23 +01:00
Felföldi Zsolt
8d434f6a6f les, core/bloombits: post-LES/2 fixes (#15391)
* les: fix topic ID

* core/bloombits: fix interface conversion
2017-10-27 17:18:53 +03:00
Péter Szilágyi
6dafec0666 Merge pull request #15389 from mcdee/rlpdump
cmd/rlpdump: allow hex input to have leading '0x'
2017-10-27 13:11:22 +03:00
Jim McDonald
3e6d7c169b cmd/rlpdump: allow hex input to have leading '0x' 2017-10-27 09:40:52 +01:00
Péter Szilágyi
0095531a58 core, eth, les: fix messy code (#15367)
* core, eth, les: fix messy code

* les: fixed tx status test and rlp encoding

* core: add a workaround for light sync
2017-10-25 12:18:44 +03:00
Felföldi Zsolt
ca376ead88 les, light: LES/2 protocol version (#14970)
This PR implements the new LES protocol version extensions:

* new and more efficient Merkle proofs reply format (when replying to
  a multiple Merkle proofs request, we just send a single set of trie
  nodes containing all necessary nodes)
* BBT (BloomBitsTrie) works similarly to the existing CHT and contains
  the bloombits search data to speed up log searches
* GetTxStatusMsg returns the inclusion position or the
  pending/queued/unknown state of a transaction referenced by hash
* an optional signature of new block data (number/hash/td) can be
  included in AnnounceMsg to provide an option for "very light
  clients" (mobile/embedded devices) to skip expensive Ethash check
  and accept multiple signatures of somewhat trusted servers (still a
  lot better than trusting a single server completely and retrieving
  everything through RPC). The new client mode is not implemented in
  this PR, just the protocol extension.
2017-10-24 15:19:09 +02:00
Péter Szilágyi
6d6a5a9337 cmd, consensus, core, miner: instatx clique for --dev (#15323)
* cmd, consensus, core, miner: instatx clique for --dev

* cmd, consensus, clique: support configurable --dev block times

* cmd, core: allow --dev to use persistent storage too
2017-10-24 13:40:42 +03:00
Anton Markelov
ea5f2da39a README: add docker RPC access docs (#15362) 2017-10-24 09:55:20 +03:00
Péter Szilágyi
479aa61f11 Merge pull request #15343 from karalabe/txpool-replacement-propagation
core: fire tx event on replace, expand tests
2017-10-20 15:21:54 +03:00
Péter Szilágyi
0af1ab0c86 core: avoid warning when loading the transaction journal 2017-10-20 14:42:20 +03:00
Péter Szilágyi
65738c1eb3 event: fix datarace between Subscribe and Send 2017-10-20 14:42:19 +03:00
Péter Szilágyi
0e7d019e0e core: fire tx event on replace, expand tests 2017-10-20 14:42:19 +03:00
Péter Szilágyi
eaa4f8a5f9 Merge pull request #15344 from karalabe/ubuntu=artful
build: start shipping Ubuntu Artful Aardvark binaries
2017-10-20 13:25:19 +03:00
Martin Holst Swende
0900aae412 cmd/evm: print stateroot in evm utility (#15341) 2017-10-20 13:22:06 +03:00
Péter Szilágyi
ba2201981a build: start shipping Ubuntu Artful Aardvark binaries 2017-10-20 12:49:32 +03:00
baizhenxuan
eea996e4e1 whisper/shhclient: fix Version return type (#15306) 2017-10-18 11:16:12 +02:00
Péter Szilágyi
2ab7fe8982 Merge pull request #15313 from karalabe/puppeth-nongithub-faucet
cmd/faucet: support twitter, google+ and facebook auth too
2017-10-17 23:20:20 +03:00
Péter Szilágyi
5d2c947060 cmd/faucet: dynamic funding progress and visual feedback 2017-10-17 14:55:21 +03:00
Sumit Sarin
e5c19b6f98 README: fix typo (#15312) 2017-10-17 13:15:42 +02:00
RJ Catalano
dec8bba9d4 accounts/abi: improve type handling, add event support (#14743) 2017-10-17 13:07:08 +02:00
Péter Szilágyi
7f7abfe4d1 cmd/faucet: proper error handling all over 2017-10-17 12:08:57 +03:00
Péter Szilágyi
0bb194c956 cmd/faucet: support twitter, google+ and facebook auth too 2017-10-16 16:57:04 +03:00
Péter Szilágyi
e9295163aa VERSION, params: start 1.7.3 release cycle 2017-10-14 16:00:46 +03:00
Péter Szilágyi
1db4ecdc0b params: bump to 1.7.2 stable 2017-10-14 15:58:53 +03:00
Péter Szilágyi
fdb3bd287e Merge pull request #15298 from karalabe/stack-then-readonly
core/vm: check opcode stack before readonly enforcement
2017-10-14 15:57:08 +03:00
Péter Szilágyi
a91e682234 core/vm: check opcode stack before readonly enforcement 2017-10-14 15:42:48 +03:00
Péter Szilágyi
41b7745529 Merge pull request #15288 from karalabe/trie-hash-benchmark
trie: make hasher benchmark meaningful post-caches
2017-10-13 15:50:05 +03:00
Péter Szilágyi
a5fcaa55ac trie: make hasher benchmark meaningful post-caches 2017-10-13 11:22:38 +03:00
Péter Szilágyi
0ed4d76c79 Merge pull request #15275 from mcdee/master
core/types: fix test for TransactionsByPriceAndNonce
2017-10-13 10:51:22 +03:00
Péter Szilágyi
4b5e797288 Merge pull request #15287 from ernestodeltoro/typo_thoretical
ethash: fix typo
2017-10-13 10:47:48 +03:00
Ernesto del Toro
2e83c82f80 ethash: fix typo 2017-10-13 01:13:52 -04:00
Yondon Fu
a5330fe0c5 accounts/abi: include fixed array size in offset for dynamic type 2017-10-12 10:58:53 -04:00
Péter Szilágyi
8d8034fe59 Merge pull request #15269 from karalabe/puppeth-dumb-ip-filtering
cmd/puppeth: use dumb textual IP filtering
2017-10-12 13:46:57 +03:00
Péter Szilágyi
fd0e7b1c67 Merge pull request #15280 from terasum/master
miner: fix typo
2017-10-12 13:46:29 +03:00
terasum
e9382c6e9f miner: fix typo 2017-10-12 10:19:23 +08:00
Jim McDonald
c599b78f62 core/types: fix test for TransactionsByPriceAndNonce 2017-10-11 11:35:44 +01:00
Péter Szilágyi
ad44475231 Merge pull request #14785 from Arachnid/downloaddb
cmd: Added support for downloading to another DB instance
2017-10-11 13:10:18 +03:00
Péter Szilágyi
35767dfd0c cmd, eth: separate out FakePeer for future reuse 2017-10-10 15:52:11 +03:00
Nick Johnson
345332906c cmd: Added support for copying data to another DB instance 2017-10-10 15:52:10 +03:00
Jia Chenhui
cefeb58598 event: fix typo (#15270) 2017-10-10 14:11:15 +02:00
Péter Szilágyi
b45cc0c9e8 cmd/puppeth: use dumb textual IP filtering 2017-10-10 12:35:09 +03:00
Péter Szilágyi
3680cd5926 params: explain EIP150Hash (#15237) 2017-10-10 10:56:33 +02:00
Péter Szilágyi
d3beff7e20 consensus/clique: add fork hash enforcement (#15236) 2017-10-10 10:54:47 +02:00
Miya Chen
40a3856af9 eth/fetcher: check the origin of filter tasks (#14975)
* eth/fetcher: check the origin of filter task

* eth/fetcher: add some details to fetcher logs
2017-10-10 11:53:05 +03:00
Darrel Herbst
89860f4197 swarm/fuse: return amount of data written if the file exists (#15261)
If the file already existed, the WriteResponse.Size was being set
as the length of the entire file, not just the amount that was
written to the existing file.

Fixes #15216
2017-10-09 12:45:30 +02:00
Martin Holst Swende
88b1db7288 accounts/keystore: scan key directory without locks held (#15171)
The accountCache contains a file cache, and remembers from
scan to scan what files were present earlier. Thus, whenever
there's a change, the scan phase only bothers processing new
and removed files.
2017-10-09 12:40:50 +02:00
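A minimal Go sketch of the scan-to-scan caching idea described above: keep the previous file set around and hand only the new and removed entries to the slower processing step. The names are illustrative, not the keystore package's.

```go
// diffFiles returns the files that appeared and disappeared between two scans,
// so only those need to be (re)processed instead of the whole key directory.
func diffFiles(prev, cur map[string]struct{}) (added, removed []string) {
	for f := range cur {
		if _, ok := prev[f]; !ok {
			added = append(added, f)
		}
	}
	for f := range prev {
		if _, ok := cur[f]; !ok {
			removed = append(removed, f)
		}
	}
	return added, removed
}
```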
Guillaume Ballet
7a045af05b whisper/whisperv5: set filter SymKeyHash on creation (#15165) 2017-10-06 16:04:21 +02:00
Guillaume Ballet
36243c7ed8 internal/web3ext: make whisper v5 methods work (#15111) 2017-10-06 15:53:29 +02:00
holisticode
1ae0411d41 swarm/api: fixed 404 handling on missing default entry (#15139) 2017-10-06 15:45:54 +02:00
Darrel Herbst
d54e3539d4 p2p/nat: delete port mapping before adding (#15222)
Fixes #1024
2017-10-06 13:39:47 +02:00
Lio李欧
5df0b240ae eth: fix typo (#15252) 2017-10-06 12:55:18 +02:00
ligi
605c2b261f mobile: fix variadic argument expansion 2017-10-05 21:08:14 +03:00
Péter Szilágyi
4bc60e3aa8 Merge pull request #15241 from karalabe/puppeth-fork-management
cmd/puppeth: support managing fork block in the chain config
2017-10-05 19:40:25 +03:00
Péter Szilágyi
eb9abbd3f2 Merge pull request #15248 from karalabe/update-liner
vendor: update liner to fix docker and mips bugs
2017-10-05 16:45:13 +03:00
Péter Szilágyi
41d361565b vendor: update liner to fix docker and mips bugs 2017-10-05 15:57:33 +03:00
Péter Szilágyi
edba5e9854 cmd/puppeth: support managing fork block in the chain config 2017-10-04 12:15:58 +03:00
Felix Lange
c0a1f1c907 params, VERSION: v1.7.2 unstable 2017-10-03 19:19:37 +02:00
Felix Lange
0510164145 params: v1.7.1 stable 2017-10-03 19:18:39 +02:00
Péter Szilágyi
629b5837e9 core: revert invalid block dedup code (#15235) 2017-10-03 18:59:53 +02:00
Péter Szilágyi
c2d93ded35 Merge pull request #15232 from karalabe/macos-usbhw-fixes
accounts/usbwallet: handle bad interface number on macOS
2017-10-03 13:14:18 +03:00
Péter Szilágyi
8d126a4981 accounts/usbwallet: handle bad interface number on macOS 2017-10-03 12:45:45 +03:00
Péter Szilágyi
f4c49bc0f0 Merge pull request #15030 from rjl493456442/expose_vm_failed
internal, accounts, eth: utilize vm failed flag to help gas estimation
2017-10-02 16:23:21 +03:00
Péter Szilágyi
d347656280 Merge pull request #15224 from karalabe/byzantium-block-numbers
cmd/puppeth, params: enable Byzantium on all networks
2017-10-02 15:40:54 +03:00
rjl493456442
94903d572b internal, accounts, eth: utilize vm failed flag to help gas estimation 2017-10-02 15:26:40 +03:00
Péter Szilágyi
f86c4177d5 Merge pull request #15042 from rjl493456442/receipt_rpc
internal/ethapi: add status code to receipt rpc return
2017-10-02 14:41:10 +03:00
Péter Szilágyi
7e9e3a134b core/types, internal: swap Receipt.Failed to Status 2017-10-02 14:04:22 +03:00
Péter Szilágyi
7514e8a24d cmd/puppeth, params: enable Byzantium on all networks 2017-10-02 13:01:40 +03:00
rjl493456442
a31835c8b4 internal/ethapi: add status code to receipt rpc return 2017-10-02 11:42:53 +03:00
Felix Lange
d78ad226c2 ethclient, mobile: add TransactionSender (#15127)
* core/types: make Signer derive address instead of public key

There are two reasons to do this now: The upcoming ethclient signer
doesn't know the public key, just the address. EIP 208 will introduce a
new signer which derives the 'entry point' address for transactions with
zero signature. The entry point has no public key.

Other changes to the interface ease the path make to moving signature
crypto out of core/types later.

* ethclient, mobile: add TransactionSender

The new method can get the right signer without any crypto, and without
knowledge of the signature scheme that was used when the transaction was
included.
2017-10-01 11:03:28 +02:00
Martin Holst Swende
a660685746 tests: add ethash difficulty tests (#15191) 2017-09-27 15:30:41 +02:00
Péter Szilágyi
2ab2a9f131 core/bloombits, eth/filters: handle null topics (#15195)
When implementing the new bloombits-based filter, I accidentally broke null
topics by removing the special casing of common.Hash{} filter rules, which
acted as the wildcard topic until now.

This PR fixes the regression, but instead of using the magic hash
common.Hash{} as the null wildcard, the PR reworks the code to handle nil
topics during parsing, converting a JSON null into a nil []common.Hash topic.
2017-09-27 12:14:52 +02:00
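A minimal Go sketch of the parsing idea: a JSON null at any topic position decodes to a nil slice, which acts as a wildcard, instead of relying on the magic common.Hash{} value. The function is illustrative, not the filters package API.

```go
import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// parseTopics converts a decoded JSON topic filter into [][]common.Hash.
// A nil entry (JSON null) stays nil, acting as a wildcard at that position.
func parseTopics(raw []interface{}) ([][]common.Hash, error) {
	topics := make([][]common.Hash, len(raw))
	for i, entry := range raw {
		switch v := entry.(type) {
		case nil:
			topics[i] = nil // wildcard: matches any topic at this position
		case string:
			topics[i] = []common.Hash{common.HexToHash(v)}
		case []interface{}:
			for _, alt := range v {
				s, ok := alt.(string)
				if !ok {
					return nil, fmt.Errorf("invalid topic entry at position %d", i)
				}
				topics[i] = append(topics[i], common.HexToHash(s))
			}
		default:
			return nil, fmt.Errorf("invalid topic type at position %d", i)
		}
	}
	return topics, nil
}
```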
Péter Szilágyi
860e697b00 Merge pull request #15208 from ayeowch/fix-typo
cmd/geth: fix --password typo
2017-09-27 11:01:38 +03:00
ayeowch
f3c9585f2e cmd/geth: fix --password typo 2017-09-27 10:28:26 +10:00
Péter Szilágyi
229bf51f0d Merge pull request #15181 from fjl/state-revert-log-index
core/state: revert log index when removing logs
2017-09-26 17:11:46 +03:00
Péter Szilágyi
2ee885958b p2p: snappy encoding for devp2p (version bump to 5) (#15106)
* p2p: snappy encoding for devp2p (version bump to 5)

* p2p: remove lazy decompression, enforce 16MB limit
2017-09-26 16:54:49 +03:00
slumber1122
2b4a5f2677 internal/ethapi: remove code duplication around tx sending (#15158) 2017-09-25 10:38:42 +02:00
Derek Chiang
d6a6180366 contracts/chequebook: fix two contract issues (#15086)
This patch fixes the following issues:

* The contract executes send() when it does not have enough balance.
* The contract always sends a total amount of zero.
2017-09-25 10:36:13 +02:00
Lewis Marshall
9feec51e2d p2p: add network simulation framework (#14982)
This commit introduces a network simulation framework which
can be used to run simulated networks of devp2p nodes. The
intention is to use this for testing protocols, performing
benchmarks and visualising emergent network behaviour.
2017-09-25 10:08:07 +02:00
cdetrio
673007d7ae core/vm: standard vm traces (#15035) 2017-09-22 10:22:56 +02:00
Zahoor Mohamed
d558a595ad swarm/storage: pyramid chunker re-write (#14382) 2017-09-21 22:22:51 +02:00
Felix Lange
a0d783094e core/state: revert log index when removing logs 2017-09-21 20:46:21 +02:00
Ernesto del Toro
3c8656347f eth, internal/ethapi: fix spelling of 'Ethereum' (#15164) 2017-09-20 11:31:31 +02:00
gary rong
b9ff44bd64 params: rename EIP150 gas table (#15167) 2017-09-20 11:27:00 +02:00
Mark
cb5235eb07 miner: make starting of CPU agent more reliable (#15148) 2017-09-19 13:28:15 +02:00
Paul Litvak
a92d8a2654 trie: fix typo (#15152) 2017-09-18 23:07:19 +02:00
Davor Kapsa
dc17fa6b18 travis.yml: update go versions (#15154) 2017-09-18 23:07:02 +02:00
Dave Appleton
019dca9ba2 accounts/abi/backends: add AdjustTime (#15077) 2017-09-15 15:20:29 +02:00
Kyuntae Ethan Kim
c197d805f7 ethereum: fix typos in interfaces.go (#15149) 2017-09-15 11:30:17 +02:00
Péter Szilágyi
5705ad004e containers/docker: bump docker images to 1.7 release branch 2017-09-14 14:01:44 +03:00
Péter Szilágyi
a989cf5bad VERSION, params: begin 1.7.1 release cycle 2017-09-14 14:01:31 +03:00
Péter Szilágyi
6c6c7b2af3 params: release Geth 1.7.0 - Megara 2017-09-14 13:55:42 +03:00
Péter Szilágyi
5c93462b5e Merge pull request #15147 from karalabe/enable-byzantium-ropsten
params: enable Byzantium on Ropsten/tests, fix failures
2017-09-14 11:28:54 +03:00
Péter Szilágyi
701d60c889 params: enable Byzantium on Ropsten/tests, fix failures 2017-09-14 10:59:05 +03:00
Martin Holst Swende
9be07de539 params: Updated finalized gascosts for ECMUL/MODEXP (#15135)
* params: Updated finalized gascosts for ECMUL/MODEXP

* core,tests: Updates pending new tests

* tests: Updated with new tests

* core: revert state transition bugfix

* tests: Add expected failures due to  #15119
2017-09-14 10:35:54 +03:00
Péter Szilágyi
885c13c2c9 Merge pull request #15146 from karalabe/byzantium-rebrand
consensus, core, params: rebrand Metro to Byzantium
2017-09-14 10:29:08 +03:00
Péter Szilágyi
5bbd7fb390 consensus, core, params: rebrand Metro to Byzantium 2017-09-14 10:10:46 +03:00
Péter Szilágyi
79b11121a7 Merge pull request #15141 from karalabe/rinkeby-infura-bootnode
params: add Infura bootnode to Rinkeby
2017-09-13 12:48:00 +03:00
Péter Szilágyi
72af509abe params: add Infura bootnode to Rinkeby 2017-09-13 11:25:50 +03:00
Péter Szilágyi
382c9266e6 Merge pull request #15138 from karalabe/statesync-peer-drops
eth/downloader: track peer drops and deassign state sync tasks
2017-09-12 15:33:32 +03:00
Péter Szilágyi
f46adfac28 eth/downloader: track peer drops and deassign state sync tasks 2017-09-12 15:13:14 +03:00
Péter Szilágyi
514b1587db Merge pull request #15137 from karalabe/puppeth-keywords
cmd/puppeth: reserve "yournode" as a non-allowed ethstats user
2017-09-12 11:50:53 +03:00
Péter Szilágyi
66a7ef57e6 cmd/puppeth: reserve "yournode" as a non-allowed ethstats user 2017-09-12 11:35:35 +03:00
Péter Szilágyi
ecca2c3c1b Merge pull request #15129 from zsfelfoldi/cht1040
light: new CHTs for mainnet and ropsten
2017-09-12 10:59:01 +03:00
Zsolt Felfoldi
7a7f6a4f29 light: new CHTs for mainnet and ropsten 2017-09-11 23:36:16 +02:00
Péter Szilágyi
c8e70186a6 Merge pull request #14973 from rjl493456442/fix_downloader
eth/downloader: exit loop when there is no more available task
2017-09-11 14:02:02 +03:00
Péter Szilágyi
794741b8b2 Merge pull request #15124 from fjl/debug-gcpercent
internal/debug: add debug_setGCPercent
2017-09-11 13:34:34 +03:00
Felix Lange
48705f8aea internal/debug: add debug_setGCPercent 2017-09-11 12:29:47 +02:00
Péter Szilágyi
10b3f97c9d core: only fire one chain head per batch (#15123)
* core: only fire one chain head per batch

* miner: announce chan events synchronously
2017-09-11 13:13:05 +03:00
Felix Lange
5596b664c4 internal/debug: add debug_freeOSMemory (#15122) 2017-09-11 09:33:18 +02:00
Felix Lange
42a5b54bf5 core/vm: improve bitvec comments 2017-09-10 21:04:36 +02:00
Felix Lange
10181b57a9 core, eth/downloader: commit block data using batches (#15115)
* ethdb: add Putter interface and Has method

* ethdb: improve docs and add IdealBatchSize

* ethdb: remove memory batch lock

Batches are not safe for concurrent use.

* core: use ethdb.Putter for Write* functions

This covers the easy cases.

* core/state: simplify StateSync

* trie: optimize local node check

* ethdb: add ValueSize to Batch

* core: optimize HasHeader check

This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.

* core: write fast sync block data in batches

Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes.

* eth/downloader: commit larger state batches

Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.

* core: optimize HasBlock check

This avoids a random database read to get the number.

* core: use numberCache in HasHeader

numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.

* core: write imported block data using a batch

Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.

This commit also removes posting of logs when a forked block is imported.

* core: fix DB write error handling

* ethdb: use RLock for Has

* core: fix HasBlock comment
2017-09-09 19:03:07 +03:00
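A minimal Go sketch of the batching pattern this change introduces, assuming the ethdb pieces it mentions (NewBatch, ValueSize, IdealBatchSize); key layout and error handling are simplified.

```go
import "github.com/ethereum/go-ethereum/ethdb"

// writeBatched collects writes into a batch and flushes once the batch grows
// past the ideal size, instead of issuing many small concurrent writes.
func writeBatched(db ethdb.Database, items map[string][]byte) error {
	batch := db.NewBatch()
	for key, value := range items {
		if err := batch.Put([]byte(key), value); err != nil {
			return err
		}
		if batch.ValueSize() >= ethdb.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch = db.NewBatch() // start a fresh batch after flushing
		}
	}
	return batch.Write() // flush whatever is left
}
```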
holisticode
ac193e36ce swarm/api/http: add error pages (#14967) 2017-09-08 20:29:09 +02:00
Martin Holst Swende
d6681ed360 core/vm: Rename + updated doc on jumpdest analysis 2017-09-08 12:47:44 +02:00
nkbai
5ba9225fe3 accounts/abi/bind: pass non-empty directory when calling goimports (#15070) 2017-09-07 23:34:45 +02:00
Martin Holst Swende
fc87bc5f52 common: improve documentation of Hash.SetBytes (#15062)
Fixes #15004
2017-09-07 23:32:59 +02:00
Mark
c1740e4540 core/types, miner: avoid tx sender miscaching (#14773) 2017-09-07 23:22:27 +02:00
Benoit Verkindt
e3db1236de contracts/chequebook: fix race in test (#15058) 2017-09-07 23:14:47 +02:00
Fiisio
02b4d074f6 core/asm: use ContainsRune instead of IndexRune (#15098) 2017-09-07 23:11:48 +02:00
Fiisio
2dcb22afec swarm/fuse: use Equal instead of Compare (#15097) 2017-09-07 23:10:51 +02:00
Pawan Singh Pal
69c8be7c86 core: delete dao.go (#15113)
- dao.go is already present in consensus/misc
- core/dao.go is not used anywhere in the codebase
2017-09-07 23:08:06 +02:00
Péter Szilágyi
55e5926f34 Merge pull request #15103 from karalabe/disable-fastsync-postpivot
eth: disable fast sync after pivot is committed
2017-09-07 19:29:45 +03:00
Péter Szilágyi
f30179d62e eth: disable fast sync after pivot is committed 2017-09-06 15:02:44 +03:00
Péter Szilágyi
c4d21bc8e5 Merge pull request #14631 from zsfelfoldi/bloombits2
core/bloombits, eth/filter: transformed bloom bitmap based log search
2017-09-06 13:00:35 +03:00
Péter Szilágyi
160add8570 Merge pull request #15095 from karalabe/txpool-avoid-deep-reorg
core: use blocks and avoid deep reorgs in txpool
2017-09-06 12:01:07 +03:00
Péter Szilágyi
564c8f3ae6 core/bloombits: drop nil-matcher special case 2017-09-06 11:14:22 +03:00
Zsolt Felfoldi
451ffdb62b core/bloombits: use general filters instead of addresses and topics 2017-09-06 11:14:21 +03:00
Zsolt Felfoldi
6ff2c02991 core/bloombits: AddBloom index parameter and fixes variable names 2017-09-06 11:14:20 +03:00
Péter Szilágyi
f585f9eee8 core, eth: clean up bloom filtering, add some tests 2017-09-06 11:14:19 +03:00
Zsolt Felfoldi
4ea4d2dc34 core, eth: add bloombit indexer, filter based on it 2017-09-06 11:13:13 +03:00
Péter Szilágyi
1e67378df8 Merge pull request #15094 from karalabe/eth-auto-maxpeers
eth: use maxpeers from p2p layer instead of extra config
2017-09-06 10:38:37 +03:00
Péter Szilágyi
cc313e78b7 core: use blocks and avoid deep reorgs in txpool 2017-09-05 19:50:29 +03:00
Péter Szilágyi
b0ca1b67ce eth: use maxpeers from p2p layer instead of extra config 2017-09-05 19:18:28 +03:00
Nick Johnson
03d00361f5 Merge pull request #15092 from karalabe/puppeth-main-docker-images
Dockerfile, cmd/puppeth: fix missing SSL certificates, use main image in puppeth
2017-09-05 14:24:09 +01:00
Péter Szilágyi
f90a193f92 cmd/puppeth: switch node containers to main ones 2017-09-05 16:09:41 +03:00
Péter Szilágyi
8e14bb1448 Dockerfile: fix missing SSL certificates 2017-09-05 16:06:04 +03:00
Péter Szilágyi
cd6c861dc5 vendor: pull in latest changes for goleveldb (#15090) 2017-09-05 15:04:32 +02:00
Péter Szilágyi
c91f7beb53 Merge pull request #15085 from karalabe/txpool-immutable
core: make txpool operate on immutable state
2017-09-05 13:39:18 +03:00
Viktor Trón
2bacf36d80 bmt: Binary Merkle Tree Hash (#14334)
bmt is a new package that provides hashers for binary merkle tree hashes on
size-limited chunks. the main motivation is that using the BMT hash as the chunk
hash of the swarm hash offers log-sized inclusion proofs for arbitrary files at a
32-byte resolution, which is completely viable to use in challenges on the blockchain.
2017-09-05 12:38:36 +02:00
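To illustrate the idea, here is a minimal, sequential Go sketch of a binary Merkle tree hash over 32-byte segments, using SHA3-256 as the base hash. It is not the bmt package: the real hasher parallelizes the tree, pools hashers, and pads differently.

```go
import "golang.org/x/crypto/sha3"

// bmtRoot hashes a size-limited chunk as a binary Merkle tree over 32-byte
// segments, so inclusion proofs for any segment are logarithmic in chunk size.
func bmtRoot(chunk []byte) []byte {
	const segSize = 32
	var level [][]byte
	for i := 0; i < len(chunk); i += segSize {
		end := i + segSize
		if end > len(chunk) {
			end = len(chunk)
		}
		seg := make([]byte, segSize) // last segment zero-padded
		copy(seg, chunk[i:end])
		level = append(level, seg)
	}
	if len(level) == 0 {
		h := sha3.Sum256(nil)
		return h[:]
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i]) // odd node carried up unchanged
				continue
			}
			h := sha3.Sum256(append(level[i], level[i+1]...))
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}
```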
Péter Szilágyi
32d8d42274 Merge pull request #15089 from karalabe/docker-multistage
Dockerfile: multi-stage builds, Go 1.9
2017-09-05 13:36:20 +03:00
Péter Szilágyi
da7d57e07c core: make txpool operate on immutable state 2017-09-05 13:34:41 +03:00
Martin Holst Swende
8cab3ab435 cmd/evm: adds ability to run individual state test file (#14998)
* cmd/evm: adds ability to run individual state test file

* cmd/evm: Fix statetest runner to be more json friendly

* cmd/evm, tests: minor polishes, dump state on fail
2017-09-05 12:24:26 +03:00
Péter Szilágyi
8f567dc8a2 Dockerfile: multi-stage builds, Go 1.9 2017-09-05 12:16:59 +03:00
Péter Szilágyi
504278e839 build: bump PPA builders to Go 1.9 (#15083) 2017-09-05 10:16:46 +02:00
Martin Holst Swende
e7408b5552 core/vm: Make MaxCodesize non-retroactive (#15072)
* core/vm: Make max_codesize only applicable post Spurious Dragon/158/155/161/170

* tests: Remove expected failure
2017-09-04 12:53:25 +03:00
Martin Holst Swende
1901521ed0 core: Fix flaw where underpriced locals were removed (#15081)
* core: Fix flaw where underpriced locals were removed

* core: minor code cleanups for tx pool tests
2017-09-04 12:48:36 +03:00
Martin Holst Swende
23b51a68cb core/vm: avoid state lookup during gas calc for call (#15061) 2017-09-04 10:56:45 +02:00
Martin Holst Swende
dc92779c0a p2p: change ping ticker to timer (#15071)
Using a Timer instead of a Ticker seems to behave a lot better, though I cannot
fully account for why (a Ticker should be more bursty, but not necessarily more
active over time; that may depend on how long a window it uses to decide when
to tick next).
2017-09-04 09:24:52 +02:00
Péter Szilágyi
d70536b5d4 Merge pull request #15047 from holiman/ecmul_benchmarks
core/vm: more benchmarks
2017-08-28 17:25:36 +03:00
Martin Holst Swende
07635e43e2 core/vm: renamed struct member + go fmt 2017-08-28 13:33:24 +02:00
Martin Holst Swende
64a3a3d23c core/vm: Fix testcase input for ecmul 2017-08-28 13:30:26 +02:00
Péter Szilágyi
777540628e Merge pull request #15054 from karalabe/go-1.9
travis, appveyor: bump Go to 1.9 stable
2017-08-28 13:19:03 +03:00
Péter Szilágyi
a4df80f47f travis, appveyor: bump Go to 1.9 stable 2017-08-28 11:15:29 +03:00
Martin Holst Swende
bc2a5578c0 core/vm: more benchmarks 2017-08-27 14:00:32 +02:00
Oli Bye
ebf41d16a0 cmd/geth: fix --nousb typo (#15040) 2017-08-25 16:54:36 +03:00
Péter Szilágyi
9d0c51fb0f Merge pull request #15039 from karalabe/metropolis-contract-clash
core, tests: implement Metropolis EIP 684
2017-08-25 16:03:39 +03:00
Péter Szilágyi
08f27428b4 core, tests: implement Metropolis EIP 684 2017-08-25 13:00:27 +03:00
Péter Szilágyi
27a5622e99 Merge pull request #15028 from karalabe/metropolis-iceage
consensus, core, tests: implement Metropolis EIP 649
2017-08-25 11:00:51 +03:00
Péter Szilágyi
8596fc5974 Merge pull request #15033 from fjl/core-receipt-status-bytes
core/types: encode receipt status in PostState field
2017-08-25 10:31:44 +03:00
Felix Lange
ad16aeb0a2 core/types: encode receipt status in PostState field
This fixes a regression where the new Failed field in ReceiptForStorage
rejected previously stored receipts. Fix it by removing the new field
and storing the status in the PostState field. This also removes massive RLP
hackery around the status field.
2017-08-24 23:51:50 +02:00
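A minimal Go sketch of the resulting storage scheme: a 32-byte PostState is a pre-Byzantium intermediate state root, anything else is read back as the status marker. Names and marker values are illustrative, not the core/types constants.

```go
import "github.com/ethereum/go-ethereum/common"

// decodeStoredPostState interprets stored PostState bytes as either a state
// root (32 bytes) or a success/failure status marker (anything else).
func decodeStoredPostState(postState []byte) (root []byte, status uint64) {
	if len(postState) == common.HashLength {
		return postState, 0 // pre-Byzantium: only the intermediate root is known
	}
	if len(postState) == 1 && postState[0] == 0x01 {
		return nil, 1 // successful execution
	}
	return nil, 0 // failed execution
}
```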
Péter Szilágyi
b872961ec8 consensus, core, tests: implement Metropolis EIP 649 2017-08-24 17:16:39 +03:00
Felix Lange
54b1de67e2 core/vm: make jumpdest code nicer 2017-08-24 13:09:53 +02:00
nkbai
68955ed2eb core/types: fix create indicator in Transaction.String (#15025) 2017-08-24 12:48:13 +02:00
Péter Szilágyi
ff9a868232 core/state: revert metro suicide map addition (#15024) 2017-08-24 12:42:00 +02:00
Péter Szilágyi
20b818d206 Merge pull request #15029 from karalabe/rlp-raw-decode
rlp: fix decoding long strings into RawValue types
2017-08-24 13:32:45 +03:00
Péter Szilágyi
63246e2542 rlp: fix decoding long strings into RawValue types 2017-08-24 13:10:50 +03:00
Péter Szilágyi
3c48a25762 Merge pull request #15014 from rjl493456442/metropolis-eip658
core: add status as a consensus field in receipt
2017-08-23 14:39:37 +03:00
Martin Holst Swende
286ec5df40 cmd/evm, core/vm, internal/ethapi: Show error when exiting (#14985)
* cmd/evm, core/vm, internal/ethapi: Add 'err' to tracer interface CaptureEnd

* cmd/evm: fix nullpointer when there is no error
2017-08-23 14:37:18 +03:00
Péter Szilágyi
4ee92f2d19 core/types: reject Metro receipts with > 0x01 status bytes 2017-08-23 13:57:15 +03:00
Péter Szilágyi
f7e39a7724 Merge pull request #15000 from fjl/node-flock
node: fix instance dir locking and improve error message
2017-08-23 13:36:11 +03:00
Péter Szilágyi
79cdbcfe64 Merge pull request #15018 from Bo-Ye/master
metrics: change MetricsEnabledFlag to be const
2017-08-23 13:22:46 +03:00
Péter Szilágyi
79bf69b556 tests: pull in new test suite, enable most block tests 2017-08-22 18:35:18 +03:00
rjl493456442
28aea46ac0 core: implement Metropolis EIP 658, receipt status byte 2017-08-22 18:35:17 +03:00
Péter Szilágyi
fc5f8a3dda Merge pull request #15022 from karalabe/enable-byzantium-state-tests
tests: enable Byzantium state tests for CI
2017-08-22 15:06:02 +03:00
Péter Szilágyi
3cc476c8ab tests: enable Byzantium state tests for CI 2017-08-22 14:04:44 +03:00
Ti Zhou
2fd5ba6bd4 core/vm: fix typo in method documentation (#15019)
Signed-off-by: Ti Zhou <tizhou1986@gmail.com>
2017-08-22 12:43:36 +03:00
Bo Ye
b4b27ebaea metrics: change MetricsEnabledFlag to be const 2017-08-22 18:35:09 +12:00
Péter Szilágyi
8c037dc487 Merge pull request #15017 from karalabe/usbhid-macos-thread-fix
vendor: update USB HID lib (fix macOS crash)
2017-08-21 14:32:47 +03:00
Péter Szilágyi
3e14837c1c vendor: update USB HID lib (fix macOS crash) 2017-08-21 13:45:50 +03:00
George Angel
58f7f977e7 Dockerfile: use alpine:3.6, clean up apk incovation (#15006) 2017-08-21 12:26:42 +02:00
Péter Szilágyi
afdfdebd87 Merge pull request #14983 from karalabe/metropolis-revert
core/vm: implement REVERT metropolis opcode
2017-08-21 12:23:03 +03:00
Noman
e311bb520a whisper: Fix spelling and grammar in error (#15009)
* whisper: Fix spelling and grammar in error

* whisper: Fix grammar in comments
2017-08-21 12:22:00 +03:00
Péter Szilágyi
1ab3e30698 Merge pull request #15016 from gurrpi/patch-1
README: add missing full stop
2017-08-21 11:04:07 +03:00
gurrpi
314246da78 Adding period at end of sentence
missing period at line 119
2017-08-21 15:08:57 +09:00
Miya Chen
bf1e263128 core, light: send chain events using event.Feed (#14865) 2017-08-18 12:58:36 +02:00
Felix Lange
7e57fee355 node: fix instance dir locking and improve error message
The lock file was ineffective because opening leveldb storage in
read-only mode doesn't really take the lock. Fix it by including a
dedicated flock library (which is actually split out of goleveldb).
2017-08-18 12:14:00 +02:00
Péter Szilágyi
a4da8416ee Merge pull request #14996 from markya0616/send_not_announce
eth: send but not announce block to peers if propagate is true
2017-08-18 12:47:33 +03:00
Péter Szilágyi
998abb9107 Merge pull request #14999 from karalabe/puppeth-ethstats-blacklist
cmd/puppeth: support blacklisting malicious IPs on ethstats
2017-08-18 12:39:48 +03:00
Péter Szilágyi
059c767adf cmd/puppeth: support blacklisting malicious IPs on ethstats 2017-08-18 11:23:56 +03:00
mark.lin
d4f11d9b4f eth: send but not announce block to peers if propagate is true 2017-08-18 13:52:16 +08:00
Péter Szilágyi
104375f398 Merge pull request #14993 from karalabe/bn256-precompile-fixes
core/vm, crypto/bn256: fix bn256 use and pairing corner case
2017-08-17 17:46:45 +03:00
Péter Szilágyi
1bbd400899 tests: skip the bad tests from colliding account touches 2017-08-17 16:50:36 +03:00
Péter Szilágyi
f9fb70d2ee core/vm: rework reversion to work on a higher level 2017-08-17 16:50:35 +03:00
Péter Szilágyi
1335a6cc8c core/vm, crypto/bn256: fix bn256 use and pairing corner case 2017-08-17 16:46:46 +03:00
Jeffrey Wilcke
b70a73cd3e core/vm: implement REVERT metropolis opcode 2017-08-16 15:32:59 +03:00
Péter Szilágyi
0b978f91b6 Merge pull request #14981 from karalabe/metropolis-returndata
core/vm: implement RETURNDATA metropolis opcodes
2017-08-16 15:19:33 +03:00
Péter Szilágyi
64d199edf2 tests: pull in latest tests from upstream 2017-08-16 13:43:15 +03:00
Péter Szilágyi
4e0fea4d30 core/vm: polish RETURNDATA, add missing returns to CALL* 2017-08-16 13:43:14 +03:00
Jeffrey Wilcke
9bd6068fef core/vm: implement RETURNDATA metropolis opcodes 2017-08-16 13:43:08 +03:00
Péter Szilágyi
76069eef38 Merge pull request #14978 from karalabe/metropolis-staticcall
core/vm: implement metropolis static call opcode
2017-08-16 12:39:31 +03:00
Péter Szilágyi
3df7142b3e core/vm: minor polishes, fix STATICCALL for precompiles
* Fix STATICCALL so it is able to call precompiles too
 * Fix write detection to use the correct value argument of CALL
 * Fix write protection to ignore the value in CALLCODE
2017-08-15 14:40:12 +03:00
Jeffrey Wilcke
3d123bcde6 core/vm: implement metropolis static call opcode 2017-08-15 13:03:49 +03:00
Martin Holst Swende
3040243042 cmd/evm: add --receiver, support code from stdin (#14873) 2017-08-15 11:31:36 +02:00
Péter Szilágyi
9facf6423d Merge pull request #14959 from karalabe/metropolis-precompiles
core/vm: metropolis precompiles
2017-08-15 10:31:29 +03:00
Péter Szilágyi
2403656373 Merge pull request #14951 from egonelbre/megacheck_swarm
swarm: fix megacheck warnings
2017-08-14 19:44:17 +03:00
Péter Szilágyi
ef0edc6e32 Merge pull request #14885 from karalabe/trezor-boom
accounts, console, internal: support trezor hardware wallet
2017-08-14 18:52:40 +03:00
Egon Elbre
133de3d806 swarm: fix megacheck warnings 2017-08-14 18:12:37 +03:00
Péter Szilágyi
f8d8b56b28 core/vm: optimize copy-less data retrievals 2017-08-14 17:08:49 +03:00
Martin Holst Swende
d8aaa3a215 core/vm: benchmarking of metro precompiles 2017-08-14 15:37:09 +03:00
Péter Szilágyi
6131dd55c5 core/vm: polish precompile contract code, add tests and benches
* Update modexp gas calculation to new version
 * Fix modexp modulo 0 special case to return zero
2017-08-14 15:27:44 +03:00
Péter Szilágyi
02656f9f61 console: use keypad based pinpad (Trezor request) 2017-08-14 12:21:40 +03:00
Martin Holst Swende
967e097faa core/vm: Address review concerns 2017-08-14 10:57:54 +02:00
rjl493456442
02aa86e659 eth/downloader: exit loop when there is no more available task 2017-08-14 13:51:37 +08:00
Jeffrey Wilcke
7bbdf3e268 core: add Metropolis pre-compiles (EIP 197, 198 and 213) 2017-08-11 15:24:54 +03:00
Péter Szilágyi
6ca59d98f8 Merge pull request #14964 from fjl/tests-update-2
tests: update tests, use blockchain test "network" field
2017-08-11 15:23:22 +03:00
Joel Burget
833eeb9f23 core/vm/runtime: remove unused state parameter to NewEnv (#14953)
* core: Remove unused `state` parameter to `NewEnv`.

`cfg.State` is used instead.

* core/vm/runtime: remove unused import
2017-08-11 14:29:32 +03:00
Maximilian Meister
2b422b1a47 cmd/geth, cmd/swarm: sort commands and flags by name (#3462) 2017-08-11 13:29:05 +02:00
gary rong
73c5aba21f ethdb: return copied value from MemDatabase.Get (#14958) 2017-08-11 12:41:49 +02:00
Felix Lange
6a56b15019 tests: update tests, use blockchain test "network" field
Blockchain tests now include the "network" which determines the chain
config to use. Remove config matching based on test name and share the
name-to-config index with state tests.

Byzantium/Constantinople tests are still skipped because most of them
fail anyway.
2017-08-11 12:34:03 +02:00
Péter Szilágyi
5d9ac49c7e accounts: refactor API for generalized USB wallets 2017-08-09 13:26:07 +03:00
Péter Szilágyi
db568a61e2 accounts, console, internal: support trezor hardware wallet 2017-08-09 11:30:17 +03:00
Ivan Daniluk
17ce0a37de eth/downloader: fix race in downloadTesterPeer (#14942)
* eth/downloader: fix race in downloadTesterPeer

Signed-off-by: Ivan Daniluk <ivan.daniluk@gmail.com>

* eth/downloader: minor datarace fix cleanup
2017-08-08 20:10:09 +03:00
Egon Elbre
26b2d6e1aa contracts: fix megacheck errors (#14916)
* contracts: fix megacheck errors

* contracts: drop useless sleep, lets see what breaks
2017-08-08 19:41:35 +03:00
Felföldi Zsolt
fff6e03a79 les: fix megacheck warnings (#14941)
* les: fix megacheck warnings

* les: fixed testGetProofs
2017-08-08 19:31:08 +03:00
Egon Elbre
d375193797 whisper: fix megacheck warnings (#14925)
* whisper: fix megacheck warnings

* whisper/whisperv5: regenerate json codec to fix unused override type
2017-08-08 14:48:06 +03:00
Felix Lange
374c49e0ac Merge pull request #14522 from ethereum/go-ethereum/chainproc2 2017-08-08 13:37:59 +02:00
Egon Elbre
10ce8b0e3c crypto: fix megacheck warnings (#14917)
* crypto: fix megacheck warnings

* crypto/ecies: remove ASN.1 support
2017-08-08 13:58:22 +03:00
Péter Szilágyi
9a7e99f75d Merge pull request #14940 from karalabe/txpool-races
core: fix txpool journal and test races
2017-08-08 13:20:36 +03:00
Egon Elbre
6f8c7b0def ethdb: add basic and parallel sanity tests (#14938)
* ethdb: add basic sanity test

* ethdb: test MemDatabase

* ethdb: add parallel tests
2017-08-08 12:32:10 +03:00
Péter Szilágyi
1c45f2f42e core: fix txpool journal and test races 2017-08-08 12:22:01 +03:00
Egon Elbre
e063d538b8 rpc: fix megacheck warnings 2017-08-08 11:08:37 +02:00
Péter Szilágyi
43437806fb Merge pull request #14933 from egonelbre/megacheck_eth
eth: fix megacheck warnings
2017-08-08 09:59:52 +03:00
Egon Elbre
8f06b7980d eth: fix megacheck warnings 2017-08-07 19:54:20 +03:00
Egon Elbre
971079822e light: fix megacheck warnings (#14920) 2017-08-07 17:25:18 +02:00
Egon Elbre
f42bd73ce5 internal: fix megacheck warnings (#14919) 2017-08-07 17:14:40 +02:00
Péter Szilágyi
f5925b0459 Merge pull request #14928 from fjl/build-goroot-explain
internal/build: add GoTool and document why it uses GOROOT
2017-08-07 17:47:58 +03:00
Péter Szilágyi
8edaaa227d core: polish chain indexer a bit 2017-08-07 17:38:33 +03:00
Zsolt Felfoldi
bd74882d83 core: implement ChainIndexer 2017-08-07 17:37:08 +03:00
Péter Szilágyi
67439c1dba Merge pull request #14927 from karalabe/blockchain-test-cleanups
core: fix blockchain goroutine leaks in tests
2017-08-07 17:21:29 +03:00
Felix Lange
f59a49d591 internal/build: add GoTool and document why it uses GOROOT 2017-08-07 16:08:50 +02:00
Péter Szilágyi
2b50367fe9 core: fix blockchain goroutine leaks in tests 2017-08-07 16:00:47 +03:00
Péter Szilágyi
46cf0a616b Merge pull request #14914 from egonelbre/megacheck_consensus
consensus: fix megacheck warnings
2017-08-07 14:35:45 +03:00
Egon Elbre
85454e7678 cmd: fix megacheck warnings (#14912)
* cmd: fix megacheck warnings

* cmd: revert time.Until changes, keep readFloat
2017-08-07 14:34:21 +03:00
Egon Elbre
80de4dc72c consensus: revert time.Until change 2017-08-07 14:32:03 +03:00
Péter Szilágyi
8c2cf3c66c Merge pull request #14922 from egonelbre/megacheck_miner
miner: fix megacheck warnings
2017-08-07 14:31:41 +03:00
Péter Szilágyi
37e9fcacca Merge pull request #14923 from egonelbre/megacheck_node
node: fix megacheck warnings
2017-08-07 14:28:10 +03:00
Péter Szilágyi
b36f54c684 Merge pull request #14918 from egonelbre/megacheck_ethstats
ethstats: fix megacheck warnings
2017-08-07 14:25:08 +03:00
Péter Szilágyi
3991745c5f Merge pull request #14921 from egonelbre/megacheck_log
log: fix megacheck warnings
2017-08-07 14:24:22 +03:00
Péter Szilágyi
6dd2803b8e Merge pull request #14915 from egonelbre/megacheck_console
console: fix megacheck warnings
2017-08-07 14:21:21 +03:00
Péter Szilágyi
fc0c6c175c Merge pull request #14913 from egonelbre/megacheck_common
common: fix megacheck warnings
2017-08-07 14:16:55 +03:00
Egon Elbre
7c74e166b0 accounts: fix megacheck warnings (#14903)
* accounts: fix megacheck warnings

* accounts: don't modify abi in favor of full cleanup
2017-08-07 14:11:15 +03:00
Péter Szilágyi
f7848c2aa5 Merge pull request #14911 from karalabe/txpool-flaky-test
core: bump timeout test to avoid flakyness on overloaded ci
2017-08-07 14:09:35 +03:00
Egon Elbre
19866075ac node: fix megacheck warnings 2017-08-07 13:43:08 +03:00
Egon Elbre
faafeef79e miner: fix megacheck warnings 2017-08-07 13:41:22 +03:00
Egon Elbre
cd82b89fde log: fix megacheck warnings 2017-08-07 13:40:05 +03:00
Egon Elbre
3780d0b6f7 ethstats: fix megacheck warnings 2017-08-07 13:32:19 +03:00
Egon Elbre
ff89a3ddce console: fix megacheck warnings 2017-08-07 13:19:44 +03:00
Egon Elbre
aee70ae30b consensus: fix megacheck warnings 2017-08-07 13:18:08 +03:00
Egon Elbre
392151e251 common: fix megacheck warnings 2017-08-07 13:16:56 +03:00
Péter Szilágyi
5b742fb82b core: bump timeout test to avoid flakyness on overloaded ci 2017-08-07 12:53:32 +03:00
Péter Szilágyi
b159cdd8dd Merge pull request #14910 from karalabe/drop-yakkety-support
build: drop yakkety builds (launchpad end of life)
2017-08-07 12:42:30 +03:00
Péter Szilágyi
524ca544b2 build: drop yakkety builds (launchpad end of life) 2017-08-07 11:12:44 +03:00
Péter Szilágyi
1059927f9c Merge pull request #14905 from armellini13/patch-1
node: fix doc typo
2017-08-07 09:51:00 +02:00
Agustin Armellini Fischer
fca6e515d6 node: fix doc typo 2017-08-06 00:32:17 +02:00
Péter Szilágyi
ca436f4b90 Merge pull request #14897 from karalabe/cardinal-sin
cmd/puppeth: remove wrapping loop in single reads
2017-08-05 00:34:05 +02:00
Péter Szilágyi
350bb6d9ca Merge pull request #14898 from detailyang/patch-1
Makefile: call shell function to get pwd
2017-08-04 18:23:15 +02:00
detailyang
5e805aa865 Makefile: call shell function to get pwd 2017-08-04 23:54:30 +08:00
Péter Szilágyi
455fcc8309 cmd/puppeth: remove wrapping loop in single reads 2017-08-04 17:00:22 +02:00
evgk
0cc9b8791e core/vm: fix typo in comment (#14894) 2017-08-04 01:31:18 +02:00
Péter Szilágyi
8b84bd283f Merge pull request #14889 from karalabe/ethash-cache-on-import
cmd: add makecache cmd, use caches during import cmd
2017-08-03 18:40:21 +02:00
Péter Szilágyi
4a260dc1f2 cmd: add makecache cmd, use caches during import cmd 2017-08-03 18:39:34 +02:00
akiva
4371367cd1 Makefile, README: remove evm target, add puppeth to table (#14886) 2017-08-03 13:58:35 +02:00
Jim McDonald
bc0e6a5e68 ethclient: add NetworkID method (#14791)
There is currently no simple way to obtain the network ID from a Client. 
This adds a NetworkID method that wraps the net_version JSON-RPC call.
2017-08-01 10:59:46 +02:00
Lewis Marshall
60c858a529 swarm/api: make api.NewManifest synchronous (#14880)
Previously, NewManifest was asynchronous so subsequent code which tried
to use the returned manifest could error as the manifest was not yet
persisted.
2017-07-31 15:58:19 +02:00
Lewis Marshall
e9b850805e cmd/swarm: support exporting, importing chunk db (#14868) 2017-07-31 13:23:44 +02:00
njupt-moon
53f3460ab5 core/asm: fix hex number lexing (#14861) 2017-07-31 13:02:36 +02:00
Felix Lange
bdf98b4fcd common: EIP55-compliant Address.Hex() (#14815)
This patch updates the Address type in common/types.go so that the Hex
function provides an EIP55-compliant output string. The implementation is pretty lightweight; 
on my laptop the benchmark gives 1100ns/op, with the majority of that value due to the Keccak hash.
2017-07-31 12:22:02 +02:00
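The EIP-55 rule itself is short enough to sketch in Go: Keccak-256 hash the lowercase hex address and uppercase every hex letter whose corresponding hash nibble is 8 or higher. A minimal sketch assuming go-ethereum's crypto.Keccak256 helper; it is not the actual common.Address.Hex implementation.

```go
import (
	"encoding/hex"

	"github.com/ethereum/go-ethereum/crypto"
)

// checksumHex returns the EIP-55 checksummed hex form of a 20-byte address.
func checksumHex(addr [20]byte) string {
	unchecked := hex.EncodeToString(addr[:]) // lowercase, no 0x prefix
	hash := crypto.Keccak256([]byte(unchecked))
	out := []byte(unchecked)
	for i := range out {
		nibble := hash[i/2]
		if i%2 == 0 {
			nibble >>= 4
		} else {
			nibble &= 0x0f
		}
		if out[i] >= 'a' && nibble >= 8 {
			out[i] -= 'a' - 'A' // uppercase this hex letter
		}
	}
	return "0x" + string(out)
}
```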
bas-vk
c259e6874e light: update txpool signer to EIP155 (#14720) 2017-07-31 12:06:01 +02:00
Lee Hyeon
13cda8d9b6 Makefile: fixed GOBIN absolute path issue (#14854) 2017-07-31 12:00:17 +02:00
Mark
4f9789b28d core: avoid write existing block again (#14849) 2017-07-31 11:59:07 +02:00
Péter Szilágyi
ee748d1451 Merge pull request #14882 from am2rican5/rinkeby-genesis
mobile: add RinkebyGenesis method
2017-07-31 12:27:45 +03:00
am2rican5
0732ad4e47 mobile: add RinkebyGenesis method 2017-07-31 17:44:08 +09:00
Péter Szilágyi
3d32690b54 cmd, core, eth: journal local transactions to disk (#14784)
* core: reduce txpool event loop goroutines and sync structs

* cmd, core, eth: journal local transactions to disk

* core: journal replacement pending transactions too

* core: separate transaction journal from pool
2017-07-28 15:09:39 +02:00
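A minimal Go sketch of the journaling idea: locally submitted transactions are appended RLP-encoded to a file so they can be re-injected into the pool after a restart. The function is illustrative; the actual journal in core does considerably more (replay, filtering, rewriting the file).

```go
import (
	"os"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// journalAdd appends a transaction to an on-disk journal; on startup the file
// can be decoded entry by entry and the transactions re-added to the pool.
func journalAdd(path string, tx *types.Transaction) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	return rlp.Encode(f, tx) // one RLP item per journaled transaction
}
```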
Péter Szilágyi
a602ee90f2 Merge pull request #14858 from xmikus01/patch-1
trie: typo in comment
2017-07-26 09:40:41 +02:00
Péter Szilágyi
fc78ce61c0 Merge pull request #14859 from cdetrio/evm-flag-gasprice
core/vm/runtime: fix evm command to use --gasprice flag value
2017-07-26 09:40:05 +02:00
cdetrio
ffebf00114 core/vm/runtime: fix evm command to use --gasprice flag value 2017-07-25 13:08:29 -04:00
Petr Mikusek
99da85c895 trie: typo in comment 2017-07-25 19:02:40 +02:00
Lewis Marshall
f4841ff43d swarm/api/http: redirect root manifest requests to include trailing slash (#14806) 2017-07-25 11:51:26 +02:00
Leo Shklovskii
3a678a15c9 cmd/abigen: update generated go file header text (#14845)
As per https://golang.org/s/generatedcode. This will allow other tools
such as golint to properly ignore the files.
2017-07-24 14:09:03 +02:00
Felix Lange
3e0dbe0eaa core/vm: remove logging and add section labels to struct logs (#14782) 2017-07-19 14:32:45 +02:00
Chase Wright
1802682f65 node: Rename TrusterNodes (#14827)
* node: Rename TrusterNodes

* node: Rename TrusterNodes
2017-07-18 11:58:46 +03:00
Péter Szilágyi
4d2249773a Merge pull request #14824 from karalabe/faucet-ignore-whitespace
cmd/faucet: ignore whitespace in gist content
2017-07-17 21:19:16 +03:00
Péter Szilágyi
8a8fc5f8ef Merge pull request #14823 from karalabe/puppeth-limited-logging
cmd/puppeth: limit containers to 10MB logs
2017-07-17 21:18:31 +03:00
Péter Szilágyi
671ba3791d cmd/faucet: ignore whitespace in gist content 2017-07-17 21:15:25 +03:00
Péter Szilágyi
9488e7fd5f cmd/puppeth: limit containers to 10MB logs 2017-07-17 20:38:40 +03:00
Elias Naur
23c6fcdbe8 mobile: don't retain transient []byte in CallMsg.SetData (#14804)
* mobile: don't retain transient []byte in CallMsg.SetData

Go mobile doesn't copy []byte parameters, for performance and to allow
writes to the byte array to be reflected in the native byte array.
Unfortunately, that means []byte arguments are only valid during the
call they are being passed into.

CallMsg.SetData retains such a byte array. Copy it instead.

Fixes #14675

* mobile: copy all []byte arguments from gomobile

To avoid subtle errors when accidentally retaining an otherwise
transient byte slice coming from gomobile, copy all byte slices before
use.

* mobile: replace copySlice with common.CopyBytes
2017-07-17 15:25:46 +03:00
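The defensive copy is a one-liner; a minimal sketch of the pattern using common.CopyBytes (the CallMsg wrapper type here is a stand-in, not the real mobile type):

```go
import "github.com/ethereum/go-ethereum/common"

// CallMsg stands in for the mobile wrapper type; only the pattern matters.
type CallMsg struct{ data []byte }

// SetData copies the slice instead of retaining it: gomobile-passed []byte
// arguments are only valid for the duration of the call.
func (msg *CallMsg) SetData(data []byte) {
	msg.data = common.CopyBytes(data)
}
```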
ligi
cf5d4b5541 mobile: use EIP155 signer (#14817)
* mobile: Use EIP155Signer - closes #14762

* mobile: Correctly fall back on HomesteadSigner when no chainID is passed in
2017-07-17 14:31:26 +03:00
Péter Szilágyi
b95c2b58f0 Merge pull request #14807 from Dexaran/master
README: get rid of the non-existent disasm command
2017-07-17 12:48:56 +03:00
Péter Szilágyi
8e9197f2a1 README: get rid of the non-existent disasm command 2017-07-17 12:48:12 +03:00
Péter Szilágyi
c65f10a17b Merge pull request #14733 from karalabe/metro-eip100
consensus/ethash, core: implement Metropolis EIP 100
2017-07-17 12:43:13 +03:00
Péter Szilágyi
a56f3dc0d9 core, ethclient: implement Metropolis EIP 98 (#14750)
Implements ethereum/EIPs#98
2017-07-17 10:34:53 +02:00
Martin Holst Swende
47359301a2 core: blocknumber in genesis as hex (#14812) 2017-07-17 10:33:13 +02:00
Jim McDonald
688ee6d1e5 cmd/geth: update tests for EIP55-compliant Address.Hex() 2017-07-16 13:29:20 +01:00
Jim McDonald
ad8d519eb5 common: tests for EIP55-compliant Address.Hex() 2017-07-16 13:28:22 +01:00
Jim McDonald
9e80d9bee1 common: Address.Hex() outputs EIP55-compliant string 2017-07-16 13:26:16 +01:00
Péter Szilágyi
0ff35e170d core: remove redundant storage of transactions and receipts (#14801)
* core: remove redundant storage of transactions and receipts

* core, eth, internal: new transaction schema usage polishes

* eth: implement upgrade mechanism for db deduplication

* core, eth: drop old sequential key db upgrader

* eth: close last iterator on successful db upgrage

* core: prefix the lookup entries to make their purpose clearer
2017-07-14 19:39:53 +03:00
Péter Szilágyi
8d6a5a3581 VERSION, params: begin 1.7.0 cycle (cannot downgrade) 2017-07-14 18:10:18 +03:00
Dexaran
33a24bb731 Update README.md 2017-07-14 11:56:08 +04:00
Dexaran
2d47c6bfde Update README.md 2017-07-14 11:55:10 +04:00
Dexaran
d68ad36bb9 Update README.md 2017-07-14 11:54:35 +04:00
Dexaran
df74a43396 Update README.md 2017-07-14 11:53:46 +04:00
Dexaran
01cb9948f1 Update README.md 2017-07-14 11:53:09 +04:00
Felix Lange
bf468a81ec params, VERSION: v1.6.8 unstable 2017-07-11 16:33:39 +02:00
Felix Lange
ab5646c532 params: v1.6.7 stable 2017-07-11 16:32:36 +02:00
Péter Szilágyi
3b25012481 Merge pull request #14792 from karalabe/slim-docker-image
dockerignore: ignore all git metadata and all tests
2017-07-11 17:07:01 +03:00
Péter Szilágyi
bd381be9c8 dockerignore: ignore all git metadata and all tests 2017-07-11 17:02:22 +03:00
Felix Lange
225de7ca0a tests: update tests and implement general state tests (#14734)
Tests are now included as a submodule. This should make updating easier
and removes ~60MB of JSON data from the working copy.

State tests are replaced by General State Tests, which run the same test
with multiple fork configurations.

With the new test runner, consensus tests are run as subtests by walking
json files. Many hex issues have been fixed upstream since the last
update and most custom parsing code is replaced by existing JSON hex
types. Tests can now be marked as 'expected failures', ensuring that
fixes for those tests will trigger an update to test configuration. The
new test runner also supports parallel execution and the -short flag.
2017-07-11 13:49:14 +02:00
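A minimal Go sketch of the subtest-walking runner described above: every JSON file under the test directory becomes a named subtest, subtests may run in parallel, and -short skips the suite. The path and the body (which only decodes the file) are illustrative, not the real tests package.

```go
import (
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
)

// TestGeneralState walks a directory of JSON consensus tests and runs every
// file as a subtest. The real runner executes each named test against the
// fork configuration it specifies and honours expected failures.
func TestGeneralState(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping consensus tests in -short mode")
	}
	root := "testdata/GeneralStateTests" // illustrative path
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || filepath.Ext(path) != ".json" {
			return err
		}
		t.Run(filepath.Base(path), func(t *testing.T) {
			t.Parallel() // subtests may execute in parallel
			data, err := os.ReadFile(path)
			if err != nil {
				t.Fatal(err)
			}
			var tests map[string]json.RawMessage // one entry per named test
			if err := json.Unmarshal(data, &tests); err != nil {
				t.Fatalf("invalid test file: %v", err)
			}
			// each entry would be executed against its fork config here
		})
		return nil
	})
	if err != nil {
		t.Fatal(err)
	}
}
```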
Péter Szilágyi
bd01cd7183 vendor: update go-stack to fix a sigpanic panic (#14790) 2017-07-11 13:43:33 +02:00
Daniel Sloof
0958770625 internal/web3ext: fix debug.traceBlockFromFile wrapper (#14774)
As stated in the documentation, this method should be called traceBlockFromFile
and not traceBlockByFile. Previously this would result in a 'The method ... does
not exist/is not available' error.
2017-07-10 12:47:50 +02:00
Péter Szilágyi
4f7a38001f Merge pull request #14737 from holiman/txpool_localaccounts
Txpool localaccounts
2017-07-10 12:43:23 +03:00
Emin Mahrt
65f0e905dd README: add missing full stop (#14766) 2017-07-07 16:41:26 +02:00
Nick Johnson
4b8860a7b4 Merge pull request #14723 from Arachnid/downloadrefactor
Refactor downloader to use interfaces instead of callbacks
2017-07-06 12:40:22 +01:00
Péter Szilágyi
34ec9913f6 core: test locals support in txpool queue limits, fix
The commit reworks the transaction pool queue limitation tests
to cater for testing local accounts, also testing the nolocal flag.

In addition, it also fixes a panic if local transactions exceeded
the global queue allowance (no accounts left to drop from) and also
fixes queue eviction to operate on all accounts, not just the one
being updated.
2017-07-06 11:51:59 +03:00
ligi
f25486c3fb core: fix typo in error message (#14763) 2017-07-06 00:19:38 +02:00
Péter Szilágyi
88b4fe7d21 core: handle nolocals during add, exempt locals from expiration 2017-07-05 17:16:42 +03:00
Péter Szilágyi
5e38f7a664 cmd, core: add --txpool.nolocals to disable local price exemptions 2017-07-05 17:06:05 +03:00
Péter Szilágyi
4c1d0b164b eth: drop leftover from previous nonce protection scheme 2017-07-05 16:53:40 +03:00
Péter Szilágyi
48ee7f9de7 core, eth, les: polish txpool API around local/remote txs 2017-07-05 16:51:55 +03:00
Nick Johnson
fe13949d9d eth/downloader: Doc fixes 2017-07-05 11:13:16 +01:00
Péter Szilágyi
138f26c93a Merge pull request #14749 from karalabe/disable-metro-allprotocols
params: remove redundant consts, disable metro on AllProtocolChanges
2017-07-04 14:11:04 +03:00
Péter Szilágyi
8f12d76a47 params: remove redundant consts, disable metro on AllProtocolChanges 2017-07-04 12:28:58 +03:00
Nick Johnson
be8f8409bc eth/downloader, les, light: Changes in response to review 2017-07-03 15:17:12 +01:00
Martin Holst Swende
a633a2d7ea core: Prevent local tx:s from being discarded 2017-06-30 22:55:10 +02:00
Martin Holst Swende
67aff49822 core: Change local-handling to use sender-account instead of tx hashes 2017-06-30 22:43:26 +02:00
Péter Szilágyi
8c313eed26 consensus, core: EIP 100 polishes, fix chain maker diff
This PR polishes the EIP 100 difficulty adjustment algorithm
to match the way the Homestead one was implemented, keeping
the code uniform. It also avoids a few memory allocs by
reusing big1 and big2, pulling them out of the common package
and into ethash.

The commit also fixes chain maker to forward the uncle hash
when creating a simulated chain (it wasn't needed until now
so we just skipped a copy there).
2017-06-30 16:42:09 +03:00
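For context, the EIP 100 rule being polished derives the difficulty step from the parent timestamp gap and whether the parent had uncles. A minimal sketch of the adjustment as stated in the EIP, with the difficulty bomb and minimum-difficulty clamp omitted (the ethash code itself works on big.Int):

```
package main

import "fmt"

// eip100Adjust returns the child difficulty before the bomb and floor are applied.
func eip100Adjust(parentDiff, parentTime, childTime int64, parentHasUncles bool) int64 {
	base := int64(1)
	if parentHasUncles {
		base = 2
	}
	factor := base - (childTime-parentTime)/9
	if factor < -99 {
		factor = -99
	}
	return parentDiff + parentDiff/2048*factor
}

func main() {
	// A 13s block without uncles yields a zero adjustment factor in this sketch.
	fmt.Println(eip100Adjust(2000000, 0, 13, false))
}
```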
Jeffrey Wilcke
c4d28aee9b consensus/ethash: implement Metropolis EIP 100 2017-06-30 16:41:25 +03:00
Péter Szilágyi
a0aa071ca6 Merge pull request #14732 from ethersphere/swarm-remove-ethapi
cmd/swarm: Exit if --ethapi is set
2017-06-30 13:06:52 +03:00
Lewis Marshall
c7041fe145 cmd/swarm: Exit if --ethapi is set
The previous attempt to use --ethapi as a fallback if --ens-api is not
set does not work because --ens-api has a default value, and also
setting --ens-api to "" is the suggested way to disable ENS lookups.

Signed-off-by: Lewis Marshall <lewis@lmars.net>
2017-06-30 10:16:03 +01:00
Péter Szilágyi
41318f3776 Merge pull request #14646 from ethersphere/swarm-ens-api
cmd/swarm: Support using Mainnet for resolving ENS names
2017-06-29 19:09:01 +03:00
Péter Szilágyi
65b96dc4b9 Merge pull request #14727 from holiman/count_txs_right
Fix error when reporting number of txs in imported blocks
2017-06-29 16:49:43 +03:00
Martin Holst Swende
8bbd598ef4 core: fix an off-by-one when the block import counts blocks 2017-06-29 14:19:10 +02:00
Nick Johnson
ae11545bc5 eth, les: Refactor downloader peer to use structs 2017-06-29 12:49:18 +01:00
Nick Johnson
0550957989 eth, les, light: Refactor downloader to use blockchain interface 2017-06-28 15:58:41 +01:00
Péter Szilágyi
dfd076244d Merge pull request #14718 from holiman/gascalc_fix
core/vm: fix overflow in gas calculation formula
2017-06-28 13:12:13 +03:00
Martin Holst Swende
6dc32e897a core/vm: add benchmarks for some ops and precompiles (#14641) 2017-06-28 11:45:45 +02:00
Martin Holst Swende
e4301564c2 core/vm : fix testcase for gas calculation 2017-06-28 10:47:07 +02:00
Martin Holst Swende
bae7565231 core/vm: fix overflow in gas calculation formula 2017-06-28 09:51:31 +02:00
Felix Lange
9e5f03b6c4 core/state: access trie through Database interface, track errors (#14589)
With this commit, core/state's access to the underlying key/value database is
mediated through an interface. Database errors are tracked in StateDB and
returned by CommitTo or the new Error method.

Motivation for this change: We can remove the light client's duplicated copy of
core/state. The light client now supports node iteration, so tracing and storage
enumeration can work with the light client (not implemented in this commit).
2017-06-27 15:57:06 +02:00
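A minimal sketch of the error-tracking pattern described above, with hypothetical names rather than the actual core/state types: reads go through a narrow interface, the first failure is remembered, and it is surfaced later through an Error method (or at commit time):

```
package statesketch

// Database is a narrow read interface standing in for the real state database.
type Database interface {
	Get(key []byte) ([]byte, error)
}

// StateDB records the first database error it encounters instead of aborting.
type StateDB struct {
	db    Database
	dbErr error
}

func (s *StateDB) setError(err error) {
	if s.dbErr == nil {
		s.dbErr = err
	}
}

// get returns nil on failure and remembers the error for later inspection.
func (s *StateDB) get(key []byte) []byte {
	val, err := s.db.Get(key)
	if err != nil {
		s.setError(err)
		return nil
	}
	return val
}

// Error reports the first database error seen since the StateDB was created.
func (s *StateDB) Error() error { return s.dbErr }
```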
Péter Szilágyi
bb366271fe Merge pull request #14686 from fjl/hexutil-json-type-error
common/hexutil: wrap errors in json.UnmarshalTypeError
2017-06-27 15:02:43 +03:00
Felix Lange
4a741df757 common/hexutil: wrap errors in json.UnmarshalTypeError
This adds type and struct field context to error messages.
Instead of "hex string of odd length" users will now see "json: cannot
unmarshal hex string of odd length into Go struct field SendTxArgs.from
of type common.Address".
2017-06-27 13:15:12 +02:00
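The extra context comes from the standard library's *json.UnmarshalTypeError, which knows the struct and field being decoded. A small illustration of the resulting message shape (the field values below are placeholders, not hexutil's actual wrapping code):

```
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

func main() {
	// Constructed by hand only to show the message shape the commit describes.
	err := &json.UnmarshalTypeError{
		Value:  "hex string of odd length",
		Type:   reflect.TypeOf([20]byte{}),
		Struct: "SendTxArgs",
		Field:  "from",
	}
	fmt.Println(err)
}
```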
RJ Catalano
5421a08d2f accounts/abi: reorganizing package with small fixes (#14610)
* accounts/abi: reorganizing package and some notes and a quick correction of name.

Signed-off-by: RJ Catalano <rj@monax.io>

get rid of some imports

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: move file names

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: fix boolean decode function

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: fix for the array set and for creating a bool

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: be very very very correct

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: fix up error message and variable names

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: take out unnecessary argument in pack method

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: add bool unpack test and add a panic to readBool function

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: fix panic message

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: change from panic to basic error

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: fix nil to false

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: fill out type regex tests and fill with the correct type for integers

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: move packNumbers into pack.go.

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: separation of the testing suite into appropriately named files.

Signed-off-by: RJ Catalano <rj@monax.io>

* account/abi: change to hex string tests.

Signed-off-by: RJ Catalano <rj@monax.io>

* account/abi: fix up rest of tests to hex

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: declare bool at the package level

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: use errors package in the error file.

Signed-off-by: RJ Catalano <rj@monax.io>

* accounts/abi: fix ugly hack and fix error type declaration.

Signed-off-by: RJ Catalano <rj@monax.io>
2017-06-27 11:05:33 +03:00
Addy Yeow
cf611c50b9 build: fix devel golang detection on debian/ubuntu (#14711) 2017-06-27 09:11:41 +02:00
Péter Szilágyi
c008176f9f Merge pull request #14690 from karalabe/faucet-key-reuse
cmd/puppeth: fix key reuse during faucet deploys
2017-06-26 14:03:46 +03:00
Lewis Marshall
f3359d5e58 cmd/swarm: Support using Mainnet for resolving ENS names
Signed-off-by: Lewis Marshall <lewis@lmars.net>
2017-06-26 12:51:33 +02:00
Péter Szilágyi
feb2932706 Merge pull request #14540 from bas-vk/whisper-api
whisperv5: integrate whisper and implement API
2017-06-26 13:44:35 +03:00
Péter Szilágyi
ea1d1825a8 cmd/geth: fix whisper flag group capitalization 2017-06-26 13:40:43 +03:00
Péter Szilágyi
f321ed23fb Merge pull request #14687 from markya0616/unused_events
core: remove unused events
2017-06-26 13:27:39 +03:00
bloonfield
413dc1d265 rpc: fix closure problem in batch processing (#14688)
Demo of the issue: https://play.golang.org/p/EeTLFfppqC
2017-06-26 12:26:22 +03:00
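The linked playground demonstrates the classic Go pitfall behind this fix: a closure created in a loop captures the loop variable itself, not its value at that iteration. A minimal sketch of the bug class and the per-iteration rebinding that avoids it (not the rpc package's actual batch code):

```
package main

import (
	"fmt"
	"sync"
)

func main() {
	reqs := []string{"a", "b", "c"}

	var wg sync.WaitGroup
	for _, r := range reqs {
		r := r // rebind per iteration; without this, every goroutine may observe "c"
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(r)
		}()
	}
	wg.Wait()
}
```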
Péter Szilágyi
fdf2184b1e Merge pull request #14697 from homotopycolimit/master
swarm/storage: remove panic on invalid chunk
2017-06-26 12:21:11 +03:00
Aron
3c7338d6c8 Makefile: add make swarm command (#14698)
* Makefile: add make swarm command

* Makefile: minor code formatting polishes
2017-06-26 12:19:53 +03:00
G. Kay Lee
ef8d4711d5 Update README.md (#14701)
README: change heading to "Go Ethereum"
2017-06-26 12:17:06 +03:00
aron
caa00b7e6d swarm/storage: remove panic on invalid chunk 2017-06-25 16:10:54 +02:00
Péter Szilágyi
5603eb9116 cmd/puppeth: fix key reuse during faucet deploys 2017-06-23 13:34:21 +03:00
Felix Lange
78c04c920d params, VERSION: 1.6.7 unstable 2017-06-23 11:28:43 +02:00
Felix Lange
10a45cb59b params: 1.6.6 stable 2017-06-23 11:26:54 +02:00
Péter Szilágyi
cd88f69715 Merge pull request #14596 from markya0616/valid_clique_vote
consensus/clique: choose valid votes
2017-06-23 11:21:38 +03:00
Péter Szilágyi
46d0d04f97 Merge pull request #14685 from karalabe/ethdb-metrics-fail-fix
eth: gracefully error if database cannot be opened
2017-06-23 11:10:29 +03:00
Péter Szilágyi
514659a023 consensus/clique: minor cleanups 2017-06-23 11:06:38 +03:00
Felix Lange
01c9cf1cb5 node: don't return non-nil database on error 2017-06-23 09:56:30 +02:00
Péter Szilágyi
b751cf3901 Merge pull request #14681 from fjl/build-fixup
travis.yml, cmd/swarm: fix Travis CI build
2017-06-23 10:30:27 +03:00
Bas van Kervel
b6e99deee9 whisper: renamed Info#Message to Info#Messages 2017-06-23 09:18:02 +02:00
Péter Szilágyi
d40179f882 eth: gracefully error if database cannot be opened 2017-06-23 10:12:41 +03:00
mark.lin
beb708e6d7 core: remove unused events 2017-06-23 10:39:38 +08:00
Felix Lange
f2c5b2cc1c travis.yml: add fakeroot to launchpad builder 2017-06-22 22:21:59 +02:00
Felix Lange
a4277450b2 cmd/swarm: disable TestCLISwarmUp because it's flaky 2017-06-22 22:17:53 +02:00
Péter Szilágyi
b664bedcf2 Merge pull request #14673 from holiman/txfix
core: add testcase for txpool
2017-06-22 23:01:43 +03:00
Péter Szilágyi
eebde1a2e2 core: ensure transactions correctly drop on pool limiting 2017-06-22 21:03:54 +03:00
Martin Holst Swende
b0b3cf2eeb core: add testcase for txpool 2017-06-22 20:36:07 +03:00
Péter Szilágyi
c98d9b49bf Merge pull request #14677 from karalabe/miner-cli-gasprice
cmd/geth: correctly init gas price for CLI CPU mining
2017-06-22 15:54:24 +03:00
Felix Lange
0042f13d47 eth/downloader: separate state sync from queue (#14460)
* eth/downloader: separate state sync from queue

Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.

State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.

Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)

* eth/downloader: remove cancelTimeout channel

* eth/downloader: retry state requests on timeout

* eth/downloader: improve comment

* eth/downloader: mark peers idle when state sync is done

* eth/downloader: move pivot block splitting to processContent

This change also ensures that pivot block receipts aren't imported
before the pivot block itself.

* eth/downloader: limit state node retries

* eth/downloader: improve state node error handling and retry check

* eth/downloader: remove maxStateNodeRetries

It fails the sync too much.

* eth/downloader: remove last use of cancelCh in statesync.go

Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.

* eth/downloader: fix leak in runStateSync

* eth/downloader: don't run processFullSyncContent in LightSync mode

* eth/downloader: improve comments

* eth/downloader: fix vet, megacheck

* eth/downloader: remove unrequested tasks anyway

* eth/downloader, trie: various polishes around duplicate items

This commit explicitly tracks duplicate and unexpected state
deliveries done against a trie Sync structure, also adding
them to the import info logs.

The commit moves the db batch used to commit trie changes one
level deeper so it's flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and database. A better
approach is to track not-yet-written states in trie.Sync and
flush on commit, but I'm focusing on correctness first now.

The commit fixes a regression around pivot block fail count.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry to the database).
The current code resets it already if a node was delivered,
which is not strong enough, because unless it ends up written
to disk, an attacker can just loop and attack ad infinitum.

The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked if a delivery is
stale (none of the deliveries were requested), in which
case it didn't mark the node idle and did not send further
requests, since it signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass duplicating retrievals
between multiple peers.

* eth/downloader: fix state request leak

This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from peer, peer drops and reconnects almost
immediately. This caused a new download task to be assigned to
the peer, overwriting the old one still waiting for a timeout,
which in turn leaked the requests out, never to be retried.
The fix is to ensure that a task assignment moves any pending
one back into the retry queue.

The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer stalling if they
timed out delivering 1 item. However, the downloader never
requests only one, the minimum is 2 (attempt to fine tune
estimated latency/bandwidth). The fix is simply to drop if
a timeout is detected at 2 items.

Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.

* core, eth, trie: support batched trie sync db writes

* trie: rename SyncMemCache to syncMemBatch
2017-06-22 15:26:03 +03:00
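A minimal sketch of the decoupling shape described in the first bullet, with hypothetical names: state sync runs on its own goroutine, and because the caller has no select loop, an auxiliary goroutine closes the queue when the sync ends in failure:

```
package main

import (
	"errors"
	"fmt"
)

type stateSync struct {
	done chan struct{}
	err  error
}

// runStateSync starts the sync independently of the download queue.
func runStateSync() *stateSync {
	s := &stateSync{done: make(chan struct{})}
	go func() {
		defer close(s.done)
		s.err = errors.New("simulated state sync failure") // placeholder work
	}()
	return s
}

func main() {
	queueClosed := make(chan struct{})
	s := runStateSync()

	// Auxiliary goroutine: shut the queue down if state sync ends with an error.
	go func() {
		<-s.done
		if s.err != nil {
			close(queueClosed)
		}
	}()

	<-queueClosed
	fmt.Println("queue closed after state sync error:", s.err)
}
```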
Péter Szilágyi
d432688886 cmd/geth: correctly init gas price for CLI CPU mining 2017-06-22 14:58:07 +03:00
Péter Szilágyi
58a1e13e6d Merge pull request #14657 from markya0616/refactor_clique
consensus/clique: fix typo and don't need to add snapshot into recents again
2017-06-21 17:58:15 +03:00
Lewis Marshall
a1f3878ec5 swarm/test: add integration test for 'swarm up' (#14353) 2017-06-21 14:54:23 +02:00
Maximilian Meister
a20a02ce0b README: document new config file option (#14348) 2017-06-21 14:53:08 +02:00
Martin Holst Swende
9a44e1035e cmd/evm, core/vm: add --nomemory, --nostack to evm (#14617) 2017-06-21 14:52:31 +02:00
Bas van Kervel
c62d5422bb whisper: use hexutil.UnmarshalFixedText for topic parsing 2017-06-21 12:58:00 +02:00
Péter Szilágyi
9012863ad7 Merge pull request #14667 from fjl/swarm-fuse-cleanup
swarm/fuse: simplify externalUnmount, use subtests
2017-06-21 13:51:00 +03:00
Felföldi Zsolt
a5d08c893d les: code refactoring (#14416)
This commit does various code refactorings:

- generalizes and moves the request retrieval/timeout/resend logic out of LesOdr
  (will be used by a subsequent PR)
- reworks the peer management logic so that all services can register with
  peerSet to get notified about added/dropped peers (also gets rid of the ugly
  getAllPeers callback in requestDistributor)
- moves peerSet, LesOdr, requestDistributor and retrieveManager initialization
  out of ProtocolManager because I believe they do not really belong there and the
  whole init process was ugly and ad-hoc
2017-06-21 12:27:38 +02:00
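A minimal sketch of the peer-notification idea described above, using hypothetical names: services register with the peer set once and are told about added and dropped peers, replacing a getAllPeers-style callback:

```
package lessketch

import "sync"

type peer struct{ id string }

// peerSetNotify is implemented by any service that wants peer lifecycle events.
type peerSetNotify interface {
	registerPeer(*peer)
	unregisterPeer(*peer)
}

type peerSet struct {
	mu       sync.Mutex
	peers    map[string]*peer
	notifees []peerSetNotify
}

func newPeerSet() *peerSet {
	return &peerSet{peers: make(map[string]*peer)}
}

// notify subscribes a service to future peer additions and removals.
func (ps *peerSet) notify(n peerSetNotify) {
	ps.mu.Lock()
	defer ps.mu.Unlock()
	ps.notifees = append(ps.notifees, n)
}

func (ps *peerSet) register(p *peer) {
	ps.mu.Lock()
	ps.peers[p.id] = p
	subs := append([]peerSetNotify(nil), ps.notifees...)
	ps.mu.Unlock()
	for _, n := range subs {
		n.registerPeer(p)
	}
}

func (ps *peerSet) unregister(p *peer) {
	ps.mu.Lock()
	delete(ps.peers, p.id)
	subs := append([]peerSetNotify(nil), ps.notifees...)
	ps.mu.Unlock()
	for _, n := range subs {
		n.unregisterPeer(p)
	}
}
```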
Bas van Kervel
a4e4c76cb3 whisper: use syncmap for dynamic configuration options 2017-06-21 11:15:47 +02:00
Bas van Kervel
7a11e86442 whisper: move flags from whisper package to utils 2017-06-21 10:49:14 +02:00
Bas van Kervel
4a1d516d78 whisper: renamed errors 2017-06-21 10:32:36 +02:00
Bas van Kervel
b6b0e00198 whisper: fallback to default config if none is specified 2017-06-21 10:24:34 +02:00
Bas van Kervel
3d66ba56ef whisper: remove obsolete api tests 2017-06-21 10:23:45 +02:00
Bas van Kervel
767dc6c73d whisper: remove gencodec override for config 2017-06-21 10:11:54 +02:00
Bas van Kervel
36e7963467 whisper: move ShhClient to its own package 2017-06-21 10:11:46 +02:00
Felix Lange
98e101ef8e swarm/fuse: use subtests 2017-06-21 09:56:58 +02:00
Felix Lange
50c18e6eb8 swarm/fuse: simplify externalUnmount
The code looked for /usr/bin/diskutil on darwin, but it's actually
located in /usr/sbin. Fix that by not specifying the absolute path.
Also remove weird timeout construction and extra whitespace.
2017-06-21 09:56:58 +02:00
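A minimal sketch of the portability idea behind the fix (not the swarm/fuse code itself): invoking diskutil by name lets the exec package resolve it through $PATH, so it works whether the binary lives in /usr/bin or /usr/sbin:

```
package main

import (
	"fmt"
	"os/exec"
)

// unmount calls diskutil without an absolute path; the mountpoint is a placeholder.
func unmount(mountpoint string) error {
	out, err := exec.Command("diskutil", "umount", mountpoint).CombinedOutput()
	if err != nil {
		return fmt.Errorf("diskutil umount failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	fmt.Println(unmount("/tmp/example-mountpoint"))
}
```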
Jim McDonald
60e27b51bc ethclient: fix TransactionByHash pending return value. (#14663)
As per #14661 TransactionByHash always returns false for pending.
This uses blockNumber rather than blockHash to ensure that it returns
the correct value for pending and will not suffer side-effects if
eth_getTransactionByHash is fixed in future.
2017-06-21 09:53:50 +02:00
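The rule the fix leans on is simple: a transaction is pending exactly when the node reports no block number for it yet. A minimal sketch with a simplified JSON shape (not ethclient's actual types):

```
package main

import (
	"encoding/json"
	"fmt"
)

type rpcTransaction struct {
	Hash        string  `json:"hash"`
	BlockNumber *string `json:"blockNumber"` // nil while the transaction is pending
}

func main() {
	raw := []byte(`{"hash":"0xabc","blockNumber":null}`)
	var tx rpcTransaction
	if err := json.Unmarshal(raw, &tx); err != nil {
		panic(err)
	}
	pending := tx.BlockNumber == nil
	fmt.Println("pending:", pending)
}
```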
Felix Lange
693d9ccbfb trie: more node iterator improvements (#14615)
* ethdb: remove Set

Set deadlocks immediately and isn't part of the Database interface.

* trie: add Err to Iterator

This is useful for testing because the underlying NodeIterator doesn't
need to be kept in a separate variable just to get the error.

* trie: add LeafKey to iterator, panic when not at leaf

LeafKey is useful for callers that can't interpret Path.

* trie: retry failed seek/peek in iterator Next

Instead of failing iteration irrecoverably, make it so Next retries the
pending seek or peek every time.

Smaller changes in this commit make this easier to test:

* The iterator previously returned from Next on encountering a hash
  node. This caused it to visit the same path twice.
* Path returned nibbles with terminator symbol for valueNode attached
  to fullNode, but removed it for valueNode attached to shortNode. Now
  the terminator is always present. This makes Path unique to each node
  and simplifies Leaf.

* trie: add Path to MissingNodeError

The light client trie iterator needs to know the path of the node that's
missing so it can retrieve a proof for it. NodeIterator.Path is not
sufficient because it is updated when the node is resolved and actually
visited by the iterator.

Also remove unused fields. They were added a long time ago before we
knew which fields would be needed for the light client.
2017-06-20 18:26:09 +02:00
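A minimal sketch of the last point, with a hypothetical error shape rather than the trie package's exact type: carrying the path in the missing-node error lets a light client request a proof for precisely the node it failed to resolve:

```
package triesketch

import "fmt"

// MissingNodeError reports a trie node that could not be loaded from the database.
type MissingNodeError struct {
	NodeHash [32]byte // hash of the node that could not be resolved
	Path     []byte   // hex-nibble path from the root to the missing node
}

func (e *MissingNodeError) Error() string {
	return fmt.Sprintf("missing trie node %x (path %x)", e.NodeHash, e.Path)
}
```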
mark.lin
5c53a5be66 consensus/clique: fix typo and don't add snapshot into recents again 2017-06-20 10:20:45 +08:00
Péter Szilágyi
431cf2a1e4 Merge pull request #14635 from necaremus/patch-1
cmd/geth: fixed a minor typo in the comments
2017-06-16 17:11:54 +03:00
necaremus
4f77857f74 cmd/geth: fixed a minor typo in the comments 2017-06-16 15:03:19 +02:00
Alan Chen
fade09a7ff eth: remove les server from protocol manager (#14625) 2017-06-15 15:28:57 +02:00
Bas van Kervel
b58a501673 whisperv5: integrate whisper and add whisper RPC simulator 2017-06-15 11:53:15 +02:00
mark.lin
db6e695002 consensus/clique: choose valid votes 2017-06-14 16:49:33 +08:00
Péter Szilágyi
335abdceb1 Merge pull request #14581 from holiman/byte_opt
core/vm: improve opByte
2017-06-13 14:44:19 +03:00
Péter Szilágyi
732273094c Merge pull request #14604 from bas-vk/mobile-getfrom
mobile: use EIP155 signer for determining sender
2017-06-13 14:09:25 +03:00
Péter Szilágyi
b8793edd83 mobile: add a regression test for signer recovery 2017-06-13 13:39:39 +03:00
Bas van Kervel
eb92522278 mobile: use EIP155 signer for determining sender 2017-06-13 09:13:59 +02:00
S. Matthew English
061889d4ea rlp, trie, contracts, compression, consensus: improve comments (#14580) 2017-06-12 14:45:17 +02:00
Péter Szilágyi
e3dfd55820 Merge pull request #14598 from konradkonrad/fix_makedag
consensus/ethash, cmd/geth: Fix `makedag` epoch
2017-06-12 13:34:26 +03:00
Konrad Feldmeier
2fefe4baa0 consensus: Fix makedag epoch
`geth makedag <blocknumber> <path>` was creating DAGs for
`<blocknumber>/<epoch_length> + 1`, hence
it was impossible to create an epoch 0 DAG.

This fixes the calculations in `consensus/ethash/ethash.go` for
`MakeDataset` and `MakeCache`, and applies `gofmt`.
2017-06-12 11:15:16 +02:00
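The arithmetic the fix restores is the plain integer division below; the off-by-one meant an epoch 0 DAG could never be generated. A minimal sketch using ethash's 30000-block epoch length:

```
package main

import "fmt"

const epochLength = 30000 // ethash epoch length in blocks

// epochOf maps a block number to its DAG epoch.
func epochOf(blockNumber uint64) uint64 {
	return blockNumber / epochLength
}

func main() {
	fmt.Println(epochOf(0))     // 0: an epoch-0 DAG is possible again
	fmt.Println(epochOf(29999)) // still 0
	fmt.Println(epochOf(30000)) // 1
}
```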
Martin Holst Swende
ac9865791a core/vm, common/math: Add doc about Byte, fix format 2017-06-08 23:16:05 +02:00
Martin Holst Swende
80f7c6c299 cmd/evm: add --prestate, --sender, --json flags for fuzzing (#14476) 2017-06-07 17:09:08 +02:00
bailantaotao
bc24b7a912 core/types: use Header.Hash for block hashes (#14587)
Fixes #14586
2017-06-07 12:06:25 +02:00
Martin Holst Swende
1496b3aff6 common/math, core/vm: Un-expose bigEndianByteAt, use correct terms for endianness 2017-06-06 18:38:38 +02:00
Lewis Marshall
1e9f86b49e cmd/swarm: fix error handling in 'swarm up' (#14557)
The error returned by client.Upload was previously being ignored because it went
out of scope outside the if statement. This has been fixed by instead defining a
function which returns the hash and error (rather than trying to set the hash in
each branch of the if statement).
2017-06-06 09:39:10 +02:00
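A minimal sketch of the scoping pitfall and the chosen fix (hypothetical names, not the cmd/swarm code): declaring the hash and error inside the if branches loses the error outside them, while a helper returning both values keeps it in scope:

```
package main

import (
	"errors"
	"fmt"
)

// upload returns both the hash and the error, so neither can be shadowed away.
func upload(recursive bool) (string, error) {
	if recursive {
		return "", errors.New("recursive upload failed") // placeholder failure
	}
	return "deadbeef", nil
}

func main() {
	hash, err := upload(true)
	if err != nil {
		fmt.Println("upload error:", err)
		return
	}
	fmt.Println("uploaded:", hash)
}
```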
Péter Szilágyi
65ea913e29 Merge pull request #14583 from ethersphere/core-log-fixes
core: Fix VM error logging
2017-06-06 10:31:27 +03:00
FaceHo
9a0e433b13 accounts: fix spelling error (#14567) 2017-06-06 09:28:47 +02:00
Lewis Marshall
04d2de9119 core: Fix VM error logging
Signed-off-by: Lewis Marshall <lewis@lmars.net>
2017-06-05 23:51:32 +01:00
Martin Holst Swende
f4b5f67ee0 core/vm: improved jumpdest analysis 2017-06-05 09:15:46 +02:00
Martin Holst Swende
3285a0fda3 core/vm, common/math: Add fast getByte for bigints, improve opByte 2017-06-05 08:44:11 +02:00
Péter Szilágyi
6171d01b11 VERSION, params: begin Geth 1.6.6 release cycle 2017-06-01 22:08:19 +03:00
Péter Szilágyi
cf87713dd4 params: mark Geth v1.6.5 stable (Hat Trick) 2017-06-01 21:48:47 +03:00
Péter Szilágyi
ac92d7c411 Merge pull request #14570 from Arachnid/jumpdestanalysis
core/vm: Use a bitmap instead of a map for jumpdest analysis
2017-06-01 21:44:50 +03:00
Nick Johnson
d5a79934dc core/vm: Use a bitmap instead of a map for jumpdest analysis
t push --force
2017-06-01 19:14:05 +01:00
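A minimal sketch of the data-structure swap: one bit per code byte marking valid JUMPDEST positions, with PUSH immediates skipped so data bytes are never marked. Opcode values follow the EVM spec; the real analysis lives in core/vm:

```
package main

import "fmt"

type bitvec []byte

func (b bitvec) set(pos uint64)        { b[pos/8] |= 1 << (pos % 8) }
func (b bitvec) isSet(pos uint64) bool { return b[pos/8]&(1<<(pos%8)) != 0 }

// jumpdests marks every valid JUMPDEST (0x5b) position, skipping PUSH1..PUSH32
// immediates (0x60..0x7f) so that push data can never be a jump target.
func jumpdests(code []byte) bitvec {
	dests := make(bitvec, len(code)/8+1)
	for pc := uint64(0); pc < uint64(len(code)); pc++ {
		op := code[pc]
		switch {
		case op == 0x5b:
			dests.set(pc)
		case op >= 0x60 && op <= 0x7f:
			pc += uint64(op-0x60) + 1
		}
	}
	return dests
}

func main() {
	code := []byte{0x60, 0x5b, 0x5b} // PUSH1 0x5b; JUMPDEST
	d := jumpdests(code)
	fmt.Println(d.isSet(1), d.isSet(2)) // false true: byte 1 is push data, byte 2 is a real JUMPDEST
}
```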
Péter Szilágyi
0424192e61 VERSION, params: begin geth 1.6.5 cycle 2017-06-01 17:37:44 +03:00
2791 changed files with 449124 additions and 2587892 deletions


@@ -1,3 +1,9 @@
**/.git
.git
!.git/HEAD
!.git/refs/heads
**/*_test.go
build/_workspace
build/_bin
tests/testdata

.gitattributes vendored

@@ -1,2 +1,3 @@
# Auto detect text files and perform LF normalization
* text=auto
*.sol linguist-language=Solidity

.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,32 @@
# Lines starting with '#' are comments.
# Each line is a file pattern followed by one or more owners.
accounts/usbwallet @karalabe
consensus @karalabe
core/ @karalabe @holiman
eth/ @karalabe
les/ @zsfelfoldi
light/ @zsfelfoldi
mobile/ @karalabe
p2p/ @fjl @zsfelfoldi
swarm/bmt @zelig
swarm/dev @lmars
swarm/fuse @jmozah @holisticode
swarm/grafana_dashboards @nonsense
swarm/metrics @nonsense @holisticode
swarm/multihash @nolash
swarm/network/bitvector @zelig @janos @gbalint
swarm/network/priorityqueue @zelig @janos @gbalint
swarm/network/simulations @zelig
swarm/network/stream @janos @zelig @gbalint @holisticode @justelad
swarm/network/stream/intervals @janos
swarm/network/stream/testing @zelig
swarm/pot @zelig
swarm/pss @nolash @zelig @nonsense
swarm/services @zelig
swarm/state @justelad
swarm/storage/encryption @gbalint @zelig @nagydani
swarm/storage/mock @janos
swarm/storage/mru @nolash
swarm/testutil @lmars
whisper/ @gballet @gluk256


@@ -2,7 +2,7 @@
Before you do a feature request please check and make sure that it isn't possible
through some other means. The JavaScript enabled console is a powerful feature
in the right hands. Please check our [Bitchin' tricks](https://github.com/ethereum/go-ethereum/wiki/bitchin-tricks) wiki page for more info
in the right hands. Please check our [Wiki page](https://github.com/ethereum/go-ethereum/wiki) for more info
and help.
## Contributing

View File

@@ -1,3 +1,9 @@
Hi there,
please note that this is an issue tracker reserved for bug reports and feature requests.
For general questions please use the gitter channel or the Ethereum stack exchange at https://ethereum.stackexchange.com.
#### System information
Geth version: `geth version`

.github/no-response.yml vendored Normal file

@@ -0,0 +1,11 @@
# Number of days of inactivity before an Issue is closed for lack of response
daysUntilClose: 30
# Label requiring a response
responseRequiredLabel: more-information-needed
# Comment to post when closing an Issue for lack of response. Set to `false` to disable
closeComment: >
This issue has been automatically closed because there has been no response
to our request for more information from the original author. With only the
information that is currently in the issue, we don't have enough information
to take action. Please reach out if you have or find the answers we need so
that we can investigate further.

.github/stale.yml vendored Normal file

@@ -0,0 +1,17 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 366
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 42
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

.gitignore vendored

@@ -30,3 +30,18 @@ build/_vendor/pkg
# travis
profile.tmp
profile.cov
# IdeaIDE
.idea
# VS Code
.vscode
# dashboard
/dashboard/assets/flow-typed
/dashboard/assets/node_modules
/dashboard/assets/stats.json
/dashboard/assets/bundle.js
/dashboard/assets/package-lock.json
**/yarn-error.log

.gitmodules vendored Normal file

@@ -0,0 +1,3 @@
[submodule "tests"]
path = tests/testdata
url = https://github.com/ethereum/tests


@@ -65,7 +65,8 @@ Enrique Fynn <enriquefynn@gmail.com>
Vincent G <caktux@gmail.com>
RJ Catalano <rj@erisindustries.com>
RJ Catalano <catalanor0220@gmail.com>
RJ Catalano <catalanor0220@gmail.com> <rj@erisindustries.com>
Nchinda Nchinda <nchinda2@gmail.com>
@@ -109,3 +110,14 @@ Frank Wang <eternnoir@gmail.com>
Gary Rong <garyrong0905@gmail.com>
Guillaume Nicolas <guin56@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com> <sorin@users.noreply.github.com>
Valentin Wüstholz <wuestholz@gmail.com>
Valentin Wüstholz <wuestholz@gmail.com> <wuestholz@users.noreply.github.com>
Armin Braun <me@obrown.io>
Ernesto del Toro <ernesto.deltoro@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com> <ernestodeltoro@users.noreply.github.com>


@@ -6,56 +6,77 @@ matrix:
- os: linux
dist: trusty
sudo: required
go: 1.7.6
go: 1.9.x
script:
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install fuse
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage
- go run build/ci.go test -coverage $TEST_PACKAGES
# These are the latest Go versions.
- os: linux
dist: trusty
sudo: required
go: 1.8.3
go: 1.10.x
script:
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install fuse
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage -misspell
- go run build/ci.go test -coverage $TEST_PACKAGES
- os: osx
go: 1.8.3
sudo: required
go: 1.10.x
script:
- brew update
- brew install caskroom/cask/brew-cask
- brew cask install osxfuse
- unset -f cd # workaround for https://github.com/travis-ci/travis-ci/issues/8703
- go run build/ci.go install
- go run build/ci.go test -coverage -misspell
- go run build/ci.go test -coverage $TEST_PACKAGES
# This builder does the Ubuntu PPA and Linux Azure uploads
# This builder only tests code linters on latest version of Go
- os: linux
dist: trusty
sudo: required
go: 1.8.3
go: 1.10.x
env:
- lint
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go lint
# This builder does the Ubuntu PPA upload
- os: linux
dist: trusty
go: 1.10.x
env:
- ubuntu-ppa
- azure-linux
git:
submodules: false # avoid cloning ethereum/tests
addons:
apt:
packages:
- devscripts
- debhelper
- dput
- fakeroot
script:
- go run build/ci.go debsrc -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>" -upload ppa:ethereum/ethereum
# This builder does the Linux Azure uploads
- os: linux
dist: trusty
sudo: required
go: 1.10.x
env:
- azure-linux
git:
submodules: false # avoid cloning ethereum/tests
addons:
apt:
packages:
- gcc-multilib
script:
# Build for the primary platforms that Trusty can manage
- go run build/ci.go debsrc -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>" -upload ppa:ethereum/ethereum
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -arch 386
@@ -65,24 +86,25 @@ matrix:
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-arm-linux-gnueabihf libc6-dev-armhf-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
- sudo ln -s /usr/include/asm-generic /usr/include/asm
- GOARM=5 CC=arm-linux-gnueabi-gcc go run build/ci.go install -arch arm
- GOARM=5 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=6 CC=arm-linux-gnueabi-gcc go run build/ci.go install -arch arm
- GOARM=6 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=7 CC=arm-linux-gnueabihf-gcc go run build/ci.go install -arch arm
- GOARM=7 go run build/ci.go install -arch arm -cc arm-linux-gnueabihf-gcc
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- CC=aarch64-linux-gnu-gcc go run build/ci.go install -arch arm64
- go run build/ci.go install -arch arm64 -cc aarch64-linux-gnu-gcc
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
# This builder does the Linux Azure MIPS xgo uploads
- os: linux
dist: trusty
sudo: required
services:
- docker
go: 1.8.3
go: 1.10.x
env:
- azure-linux-mips
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go xgo --alltools -- --targets=linux/mips --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips; do mv -f "${bin}" "${bin/-linux-mips/}"; done
@@ -102,7 +124,7 @@ matrix:
# This builder does the Android Maven and Azure uploads
- os: linux
dist: precise # Needed for the android tools
dist: trusty
addons:
apt:
packages:
@@ -119,17 +141,19 @@ matrix:
env:
- azure-android
- maven-android
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- curl https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz | tar -xz
- curl https://storage.googleapis.com/golang/go1.10.3.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go
script:
# Build the Android archive and upload it to Maven Central and Azure
- curl https://dl.google.com/android/repository/android-ndk-r14b-linux-x86_64.zip -o android-ndk-r14b.zip
- unzip -q android-ndk-r14b.zip && rm android-ndk-r14b.zip
- mv android-ndk-r14b $HOME
- export ANDROID_NDK=$HOME/android-ndk-r14b
- curl https://dl.google.com/android/repository/android-ndk-r17b-linux-x86_64.zip -o android-ndk-r17b.zip
- unzip -q android-ndk-r17b.zip && rm android-ndk-r17b.zip
- mv android-ndk-r17b $HOME
- export ANDROID_NDK=$HOME/android-ndk-r17b
- mkdir -p $GOPATH/src/github.com/ethereum
- ln -s `pwd` $GOPATH/src/github.com/ethereum
@@ -137,11 +161,13 @@ matrix:
# This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
- os: osx
go: 1.8.3
go: 1.10.x
env:
- azure-osx
- azure-ios
- cocoapods-ios
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -upload gethstore/builds
@@ -157,24 +183,21 @@ matrix:
- xctool -version
- xcrun simctl list
# Workaround for https://github.com/golang/go/issues/23749
- export CGO_CFLAGS_ALLOW='-fmodules|-fblocks|-fobjc-arc'
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds
# This builder does the Azure archive purges to avoid accumulating junk
- os: linux
dist: trusty
sudo: required
go: 1.8.3
go: 1.10.x
env:
- azure-purge
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go purge -store gethstore/builds -days 14
install:
- go get golang.org/x/tools/cmd/cover
script:
- go run build/ci.go install
- go run build/ci.go test -coverage
notifications:
webhooks:
urls:

AUTHORS

@@ -1,85 +1,174 @@
# This is the official list of go-ethereum authors for copyright purposes.
Afri Schoedon <5chdn@users.noreply.github.com>
Agustin Armellini Fischer <armellini13@gmail.com>
Airead <fgh1987168@gmail.com>
Alan Chen <alanchchen@users.noreply.github.com>
Alejandro Isaza <alejandro.isaza@gmail.com>
Ales Katona <ales@coinbase.com>
Alex Leverington <alex@ethdev.com>
Alex Wu <wuyiding@gmail.com>
Alexandre Van de Sande <alex.vandesande@ethdev.com>
Ali Hajimirza <Ali92hm@users.noreply.github.com>
Anton Evangelatov <anton.evangelatov@gmail.com>
Arba Sasmoyo <arba.sasmoyo@gmail.com>
Armani Ferrante <armaniferrante@berkeley.edu>
Armin Braun <me@obrown.io>
Aron Fischer <github@aron.guru>
Bas van Kervel <bas@ethdev.com>
Benjamin Brent <benjamin@benjaminbrent.com>
Benoit Verkindt <benoit.verkindt@gmail.com>
Bo <bohende@gmail.com>
Bo Ye <boy.e.computer.1982@outlook.com>
Bob Glickstein <bobg@users.noreply.github.com>
Brian Schroeder <bts@gmail.com>
Casey Detrio <cdetrio@gmail.com>
Chase Wright <mysticryuujin@gmail.com>
Christoph Jentzsch <jentzsch.software@gmail.com>
Daniel A. Nagy <nagy.da@gmail.com>
Daniel Sloof <goapsychadelic@gmail.com>
Darrel Herbst <dherbst@gmail.com>
Dave Appleton <calistralabs@gmail.com>
Diego Siqueira <DiSiqueira@users.noreply.github.com>
Dmitry Shulyak <yashulyak@gmail.com>
Egon Elbre <egonelbre@gmail.com>
Elias Naur <elias.naur@gmail.com>
Elliot Shepherd <elliot@identitii.com>
Enrique Fynn <enriquefynn@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com>
Ethan Buchman <ethan@coinculture.info>
Eugene Valeyev <evgen.povt@gmail.com>
Evangelos Pappas <epappas@evalonlabs.com>
Evgeny Danilenko <6655321@bk.ru>
Fabian Vogelsteller <fabian@frozeman.de>
Fabio Barone <fabio.barone.co@gmail.com>
Fabio Berger <fabioberger1991@gmail.com>
FaceHo <facehoshi@gmail.com>
Felix Lange <fjl@twurst.com>
Fiisio <liangcszzu@163.com>
Frank Wang <eternnoir@gmail.com>
Furkan KAMACI <furkankamaci@gmail.com>
Gary Rong <garyrong0905@gmail.com>
George Ornbo <george@shapeshed.com>
Gregg Dourgarian <greggd@tempworks.com>
Guillaume Ballet <gballet@gmail.com>
Guillaume Nicolas <guin56@gmail.com>
Gustav Simonsson <gustav.simonsson@gmail.com>
Hao Bryan Cheng <haobcheng@gmail.com>
Henning Diedrich <hd@eonblast.com>
Isidoro Ghezzi <isidoro.ghezzi@icloud.com>
Ivan Daniluk <ivan.daniluk@gmail.com>
Jae Kwon <jkwon.work@gmail.com>
Jamie Pitts <james.pitts@gmail.com>
Janoš Guljaš <janos@users.noreply.github.com>
Jason Carver <jacarver@linkedin.com>
Jay Guo <guojiannan1101@gmail.com>
Jeff R. Allen <jra@nella.org>
Jeffrey Wilcke <jeffrey@ethereum.org>
Jens Agerberg <github@agerberg.me>
Jia Chenhui <jiachenhui1989@gmail.com>
Jim McDonald <Jim@mcdee.net>
Joel Burget <joelburget@gmail.com>
Jonathan Brown <jbrown@bluedroplet.com>
Joseph Chow <ethereum@outlook.com>
Justin Clark-Casey <justincc@justincc.org>
Justin Drake <drakefjustin@gmail.com>
Kenji Siu <kenji@isuntv.com>
Kobi Gurkan <kobigurk@gmail.com>
Konrad Feldmeier <konrad@brainbot.com>
Kurkó Mihály <kurkomisi@users.noreply.github.com>
Kyuntae Ethan Kim <ethan.kyuntae.kim@gmail.com>
Lefteris Karapetsas <lefteris@refu.co>
Leif Jurvetson <leijurv@gmail.com>
Leo Shklovskii <leo@thermopylae.net>
Lewis Marshall <lewis@lmars.net>
Lio李欧 <lionello@users.noreply.github.com>
Louis Holbrook <dev@holbrook.no>
Luca Zeug <luclu@users.noreply.github.com>
Magicking <s@6120.eu>
Maran Hidskes <maran.hidskes@gmail.com>
Marek Kotewicz <marek.kotewicz@gmail.com>
Mark <markya0616@gmail.com>
Martin Holst Swende <martin@swende.se>
Matthew Di Ferrante <mattdf@users.noreply.github.com>
Matthew Wampler-Doty <matthew.wampler.doty@gmail.com>
Maximilian Meister <mmeister@suse.de>
Micah Zoltu <micah@zoltu.net>
Michael Ruminer <michael.ruminer+github@gmail.com>
Miguel Mota <miguelmota2@gmail.com>
Miya Chen <miyatlchen@gmail.com>
Nchinda Nchinda <nchinda2@gmail.com>
Nick Dodson <silentcicero@outlook.com>
Nick Johnson <arachnid@notdot.net>
Nicolas Guillaume <gunicolas@sqli.com>
Noman <noman@noman.land>
Oli Bye <olibye@users.noreply.github.com>
Paul Litvak <litvakpol@012.net.il>
Paulo L F Casaretto <pcasaretto@gmail.com>
Paweł Bylica <chfast@gmail.com>
Peter Pratscher <pratscher@gmail.com>
Petr Mikusek <petr@mikusek.info>
Péter Szilágyi <peterke@gmail.com>
RJ Catalano <rj@erisindustries.com>
RJ Catalano <catalanor0220@gmail.com>
Ramesh Nair <ram@hiddentao.com>
Ricardo Catalinas Jiménez <r@untroubled.be>
Ricardo Domingos <ricardohsd@gmail.com>
Richard Hart <richardhart92@gmail.com>
Rob <robert@rojotek.com>
Robert Zaremba <robert.zaremba@scale-it.pl>
Russ Cox <rsc@golang.org>
Rémy Roy <remyroy@remyroy.com>
S. Matthew English <s-matthew-english@users.noreply.github.com>
Shintaro Kaneko <kaneshin0120@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com>
Stein Dekker <dekker.stein@gmail.com>
Steve Waldman <swaldman@mchange.com>
Steven Roose <stevenroose@gmail.com>
Taylor Gerring <taylor.gerring@gmail.com>
Thomas Bocek <tom@tomp2p.net>
Ti Zhou <tizhou1986@gmail.com>
Tosh Camille <tochecamille@gmail.com>
Valentin Wüstholz <wuestholz@users.noreply.github.com>
Valentin Wüstholz <wuestholz@gmail.com>
Victor Farazdagi <simple.square@gmail.com>
Victor Tran <vu.tran54@gmail.com>
Viktor Trón <viktor.tron@gmail.com>
Ville Sundell <github@solarius.fi>
Vincent G <caktux@gmail.com>
Vitalik Buterin <v@buterin.com>
Vitaly V <vvelikodny@gmail.com>
Vivek Anand <vivekanand1101@users.noreply.github.com>
Vlad Gluhovsky <gluk256@users.noreply.github.com>
Yohann Léon <sybiload@gmail.com>
Yoichi Hirai <i@yoichihirai.com>
Yondon Fu <yondon.fu@gmail.com>
Zach <zach.ramsay@gmail.com>
Zahoor Mohamed <zahoor@zahoor.in>
Zoe Nolan <github@zoenolan.org>
Zsolt Felföldi <zsfelfoldi@gmail.com>
am2rican5 <am2rican5@gmail.com>
ayeowch <ayeowch@gmail.com>
b00ris <b00ris@mail.ru>
bailantaotao <Edwin@maicoin.com>
baizhenxuan <nkbai@163.com>
bloonfield <bloonfield@163.com>
changhong <changhong.yu@shanbay.com>
evgk <evgeniy.kamyshev@gmail.com>
ferhat elmas <elmas.ferhat@gmail.com>
holisticode <holistic.computing@gmail.com>
jtakalai <juuso.takalainen@streamr.com>
ken10100147 <sunhongping@kanjian.com>
ligi <ligi@ligi.de>
mark.lin <mark@maicoin.com>
necaremus <necaremus@gmail.com>
njupt-moon <1015041018@njupt.edu.cn>
nkbai <nkbai@163.com>
rhaps107 <dod-source@yandex.ru>
slumber1122 <slumber1122@gmail.com>
sunxiaojun2014 <sunxiaojun-xy@360.cn>
terasum <terasum@163.com>
tsarpaul <Litvakpol@012.net.il>
xiekeyang <xiekeyang@users.noreply.github.com>
yoza <yoza.is12s@gmail.com>
ΞTHΞЯSPHΞЯΞ <{viktor.tron,nagydani,zsfelfoldi}@gmail.com>
Максим Чусовлянов <mchusovlianov@gmail.com>
Ralph Caraveo <deckarep@gmail.com>


@@ -1,15 +1,16 @@
FROM alpine:3.5
# Build Geth in a stock Go builder container
FROM golang:1.10-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers
ADD . /go-ethereum
RUN \
apk add --update git go make gcc musl-dev linux-headers && \
(cd go-ethereum && make geth) && \
cp go-ethereum/build/bin/geth /usr/local/bin/ && \
apk del git go make gcc musl-dev linux-headers && \
rm -rf /go-ethereum && rm -rf /var/cache/apk/*
RUN cd /go-ethereum && make geth
EXPOSE 8545
EXPOSE 30303
EXPOSE 30303/udp
# Pull Geth into a second stage deploy alpine container
FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp
ENTRYPOINT ["geth"]

Dockerfile.alltools Normal file

@@ -0,0 +1,15 @@
# Build Geth in a stock Go builder container
FROM golang:1.10-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers
ADD . /go-ethereum
RUN cd /go-ethereum && make all
# Pull all binaries into a second stage deploy alpine container
FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/* /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp


@@ -2,13 +2,13 @@
# with Go source code. If you know what GOPATH is then you probably
# don't need to bother with make.
.PHONY: geth android ios geth-cross evm all test clean
.PHONY: geth android ios geth-cross swarm evm all test clean
.PHONY: geth-linux geth-linux-386 geth-linux-amd64 geth-linux-mips64 geth-linux-mips64le
.PHONY: geth-linux-arm geth-linux-arm-5 geth-linux-arm-6 geth-linux-arm-7 geth-linux-arm64
.PHONY: geth-darwin geth-darwin-386 geth-darwin-amd64
.PHONY: geth-windows geth-windows-386 geth-windows-amd64
GOBIN = build/bin
GOBIN = $(shell pwd)/build/bin
GO ?= latest
geth:
@@ -16,10 +16,10 @@ geth:
@echo "Done building."
@echo "Run \"$(GOBIN)/geth\" to launch geth."
evm:
build/env.sh go run build/ci.go install ./cmd/evm
swarm:
build/env.sh go run build/ci.go install ./cmd/swarm
@echo "Done building."
@echo "Run \"$(GOBIN)/evm\" to start the evm."
@echo "Run \"$(GOBIN)/swarm\" to launch swarm."
all:
build/env.sh go run build/ci.go install
@@ -37,7 +37,11 @@ ios:
test: all
build/env.sh go run build/ci.go test
lint: ## Run linters.
build/env.sh go run build/ci.go lint
clean:
./build/clean_go_build_cache.sh
rm -fr build/_workspace/pkg/ $(GOBIN)/*
# The devtools target installs tools required for 'go generate'.
@@ -45,9 +49,13 @@ clean:
devtools:
env GOBIN= go get -u golang.org/x/tools/cmd/stringer
env GOBIN= go get -u github.com/jteeuwen/go-bindata/go-bindata
env GOBIN= go get -u github.com/kevinburke/go-bindata/go-bindata
env GOBIN= go get -u github.com/fjl/gencodec
env GOBIN= go get -u github.com/golang/protobuf/protoc-gen-go
env GOBIN= go install ./cmd/abigen
@type "npm" 2> /dev/null || echo 'Please install node.js and npm'
@type "solc" 2> /dev/null || echo 'Please install solc'
@type "protoc" 2> /dev/null || echo 'Please install protoc'
# Cross Compilation Targets (xgo)


@@ -1,10 +1,12 @@
## Ethereum Go
## Go Ethereum
Official golang implementation of the Ethereum protocol.
[![API Reference](
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
)](https://godoc.org/github.com/ethereum/go-ethereum)
[![Go Report Card](https://goreportcard.com/badge/github.com/ethereum/go-ethereum)](https://goreportcard.com/report/github.com/ethereum/go-ethereum)
[![Travis](https://travis-ci.org/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.org/ethereum/go-ethereum)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/ethereum/go-ethereum?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
Automated builds are available for stable releases and the unstable master branch.
@@ -32,14 +34,14 @@ The go-ethereum project comes with several wrappers/executables found in the `cm
| Command | Description |
|:----------:|-------------|
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default) archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `disasm` | Bytecode disassembler to convert EVM (Ethereum Virtual Machine) bytecode into more user friendly assembly-like opcodes (e.g. `echo "6001" | disasm`). For details on the individual opcodes, please see pages 22-30 of the [Ethereum Yellow Paper](http://gavwood.com/paper.pdf). |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow insolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `swarm` | swarm daemon and tools. This is the entrypoint for the swarm network. `swarm --help` for command line options and subcommands. See https://swarm-guide.readthedocs.io for swarm documentation. |
| `swarm` | Swarm daemon and tools. This is the entrypoint for the Swarm network. `swarm --help` for command line options and subcommands. See [Swarm README](https://github.com/ethereum/go-ethereum/tree/master/swarm) for more information. |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
## Running geth
@@ -56,20 +58,18 @@ the user doesn't care about years-old historical data, so we can fast-sync quick
state of the network. To do so:
```
$ geth --fast --cache=512 console
$ geth console
```
This command will:
* Start geth in fast sync mode (`--fast`), causing it to download more data in exchange for avoiding
processing the entire history of the Ethereum network, which is very CPU intensive.
* Bump the memory allowance of the database to 512MB (`--cache=512`), which can help significantly in
sync times especially for HDD users. This flag is optional and you can set it as high or as low as
you'd like, though we'd recommend the 512MB - 2GB range.
* Start geth in fast sync mode (default, can be changed with the `--syncmode` flag), causing it to
download more data in exchange for avoiding processing the entire history of the Ethereum network,
which is very CPU intensive.
* Start up Geth's built-in interactive [JavaScript console](https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://github.com/ethereum/wiki/wiki/JavaScript-API)
as well as Geth's own [management APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs).
This too is optional and if you leave it out you can always attach to an already running Geth instance
This tool is optional and if you leave it out you can always attach to an already running Geth instance
with `geth attach`.
### Full node on the Ethereum test network
@@ -80,12 +80,11 @@ entire system. In other words, instead of attaching to the main network, you wan
network with your node, which is fully equivalent to the main network, but with play-Ether only.
```
$ geth --testnet --fast --cache=512 console
$ geth --testnet console
```
The `--fast`, `--cache` flags and `console` subcommand have the exact same meaning as above and they
are equally useful on the testnet too. Please see above for their explanations if you've skipped to
here.
The `console` subcommand have the exact same meaning as above and they are equally useful on the
testnet too. Please see above for their explanations if you've skipped to here.
Specifying the `--testnet` flag however will reconfigure your Geth instance a bit:
@@ -102,6 +101,30 @@ over between the main network and test network, you should make sure to always u
for play-money and real-money. Unless you manually move accounts, Geth will by default correctly
separate the two networks and will not make any accounts available between them.*
### Full node on the Rinkeby test network
The above test network is a cross client one based on the ethash proof-of-work consensus algorithm. As such, it has certain extra overhead and is more susceptible to reorganization attacks due to the network's low difficulty / security. Go Ethereum also supports connecting to a proof-of-authority based test network called [*Rinkeby*](https://www.rinkeby.io) (operated by members of the community). This network is lighter, more secure, but is only supported by go-ethereum.
```
$ geth --rinkeby console
```
### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a configuration file via:
```
$ geth --config /path/to/your_config.toml
```
To get an idea how the file should look like you can use the `dumpconfig` subcommand to export your existing configuration:
```
$ geth --your-favourite-flags dumpconfig
```
*Note: This works only with geth v1.6.0 and above.*
#### Docker quick start
One of the quickest ways to get Ethereum up and running on your machine is by using Docker:
@@ -109,15 +132,17 @@ One of the quickest ways to get Ethereum up and running on your machine is by us
```
docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
-p 8545:8545 -p 30303:30303 \
ethereum/client-go --fast --cache=512
ethereum/client-go
```
This will start geth in fast sync mode with a DB memory allowance of 512MB just as the above command does. It will also create a persistent volume in your home directory for saving your blockchain as well as map the default ports. There is also an `alpine` tag available for a slim version of the image.
This will start geth in fast-sync mode with a DB memory allowance of 1GB just as the above command does. It will also create a persistent volume in your home directory for saving your blockchain as well as map the default ports. There is also an `alpine` tag available for a slim version of the image.
Do not forget `--rpcaddr 0.0.0.0`, if you want to access RPC from other containers and/or hosts. By default, `geth` binds to the local interface and RPC endpoints is not accessible from the outside.
### Programatically interfacing Geth nodes
As a developer, sooner rather than later you'll want to start interacting with Geth and the Ethereum
network via your own programs and not manually through the console. To aid this, Geth has built in
network via your own programs and not manually through the console. To aid this, Geth has built-in
support for a JSON-RPC based APIs ([standard APIs](https://github.com/ethereum/wiki/wiki/JSON-RPC) and
[Geth specific APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs)). These can be
exposed via HTTP, WebSockets and IPC (unix sockets on unix based platforms, and named pipes on Windows).
@@ -248,7 +273,7 @@ instance for mining, run it with all your usual flags, extended by:
$ geth <usual-flags> --mine --minerthreads=1 --etherbase=0x0000000000000000000000000000000000000000
```
Which will start mining bocks and transactions on a single CPU thread, crediting all proceedings to
Which will start mining blocks and transactions on a single CPU thread, crediting all proceedings to
the account specified by `--etherbase`. You can further tune the mining by changing the default gas
limit blocks converge to (`--targetgaslimit`) and the price transactions are accepted at (`--gasprice`).


@@ -1 +0,0 @@
1.6.4


@@ -17,15 +17,10 @@
package abi
import (
"encoding/binary"
"bytes"
"encoding/json"
"fmt"
"io"
"math/big"
"reflect"
"strings"
"github.com/ethereum/go-ethereum/common"
)
// The ABI holds information about a contract's context and available
@@ -56,329 +51,52 @@ func JSON(reader io.Reader) (ABI, error) {
// methods string signature. (signature = baz(uint32,string32))
func (abi ABI) Pack(name string, args ...interface{}) ([]byte, error) {
// Fetch the ABI of the requested method
var method Method
if name == "" {
method = abi.Constructor
} else {
m, exist := abi.Methods[name]
if !exist {
return nil, fmt.Errorf("method '%s' not found", name)
// constructor
arguments, err := abi.Constructor.Inputs.Pack(args...)
if err != nil {
return nil, err
}
method = m
return arguments, nil
}
arguments, err := method.pack(method, args...)
method, exist := abi.Methods[name]
if !exist {
return nil, fmt.Errorf("method '%s' not found", name)
}
arguments, err := method.Inputs.Pack(args...)
if err != nil {
return nil, err
}
// Pack up the method ID too if not a constructor and return
if name == "" {
return arguments, nil
}
return append(method.Id(), arguments...), nil
}
// toGoSliceType parses the input and casts it to the proper slice defined by the ABI
// argument in T.
func toGoSlice(i int, t Argument, output []byte) (interface{}, error) {
index := i * 32
// The slice must, at very least be large enough for the index+32 which is exactly the size required
// for the [offset in output, size of offset].
if index+32 > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go slice: insufficient size output %d require %d", len(output), index+32)
}
elem := t.Type.Elem
// first we need to create a slice of the type
var refSlice reflect.Value
switch elem.T {
case IntTy, UintTy, BoolTy:
// create a new reference slice matching the element type
switch t.Type.Kind {
case reflect.Bool:
refSlice = reflect.ValueOf([]bool(nil))
case reflect.Uint8:
refSlice = reflect.ValueOf([]uint8(nil))
case reflect.Uint16:
refSlice = reflect.ValueOf([]uint16(nil))
case reflect.Uint32:
refSlice = reflect.ValueOf([]uint32(nil))
case reflect.Uint64:
refSlice = reflect.ValueOf([]uint64(nil))
case reflect.Int8:
refSlice = reflect.ValueOf([]int8(nil))
case reflect.Int16:
refSlice = reflect.ValueOf([]int16(nil))
case reflect.Int32:
refSlice = reflect.ValueOf([]int32(nil))
case reflect.Int64:
refSlice = reflect.ValueOf([]int64(nil))
default:
refSlice = reflect.ValueOf([]*big.Int(nil))
}
case AddressTy: // address must be of slice Address
refSlice = reflect.ValueOf([]common.Address(nil))
case HashTy: // hash must be of slice hash
refSlice = reflect.ValueOf([]common.Hash(nil))
case FixedBytesTy:
refSlice = reflect.ValueOf([][]byte(nil))
default: // no other types are supported
return nil, fmt.Errorf("abi: unsupported slice type %v", elem.T)
}
var slice []byte
var size int
var offset int
if t.Type.IsSlice {
// get the offset which determines the start of this array ...
offset = int(binary.BigEndian.Uint64(output[index+24 : index+32]))
if offset+32 > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go slice: offset %d would go over slice boundary (len=%d)", len(output), offset+32)
}
slice = output[offset:]
// ... starting with the size of the array in elements ...
size = int(binary.BigEndian.Uint64(slice[24:32]))
slice = slice[32:]
// ... and make sure that we've at the very least the amount of bytes
// available in the buffer.
if size*32 > len(slice) {
return nil, fmt.Errorf("abi: cannot marshal in to go slice: insufficient size output %d require %d", len(output), offset+32+size*32)
}
// reslice to match the required size
slice = slice[:size*32]
} else if t.Type.IsArray {
//get the number of elements in the array
size = t.Type.SliceSize
//check to make sure array size matches up
if index+32*size > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go array: offset %d would go over slice boundary (len=%d)", len(output), index+32*size)
}
//slice is there for a fixed amount of times
slice = output[index : index+size*32]
}
for i := 0; i < size; i++ {
var (
inter interface{} // interface type
returnOutput = slice[i*32 : i*32+32] // the return output
)
// set inter to the correct type (cast)
switch elem.T {
case IntTy, UintTy:
inter = readInteger(t.Type.Kind, returnOutput)
case BoolTy:
inter = !allZero(returnOutput)
case AddressTy:
inter = common.BytesToAddress(returnOutput)
case HashTy:
inter = common.BytesToHash(returnOutput)
case FixedBytesTy:
inter = returnOutput
}
// append the item to our reflect slice
refSlice = reflect.Append(refSlice, reflect.ValueOf(inter))
}
// return the interface
return refSlice.Interface(), nil
}
func readInteger(kind reflect.Kind, b []byte) interface{} {
switch kind {
case reflect.Uint8:
return uint8(b[len(b)-1])
case reflect.Uint16:
return binary.BigEndian.Uint16(b[len(b)-2:])
case reflect.Uint32:
return binary.BigEndian.Uint32(b[len(b)-4:])
case reflect.Uint64:
return binary.BigEndian.Uint64(b[len(b)-8:])
case reflect.Int8:
return int8(b[len(b)-1])
case reflect.Int16:
return int16(binary.BigEndian.Uint16(b[len(b)-2:]))
case reflect.Int32:
return int32(binary.BigEndian.Uint32(b[len(b)-4:]))
case reflect.Int64:
return int64(binary.BigEndian.Uint64(b[len(b)-8:]))
default:
return new(big.Int).SetBytes(b)
}
}
func allZero(b []byte) bool {
for _, byte := range b {
if byte != 0 {
return false
}
}
return true
}
// toGoType parses the input and casts it to the proper type defined by the ABI
// argument in T.
func toGoType(i int, t Argument, output []byte) (interface{}, error) {
// we need to treat slices differently
if (t.Type.IsSlice || t.Type.IsArray) && t.Type.T != BytesTy && t.Type.T != StringTy && t.Type.T != FixedBytesTy && t.Type.T != FunctionTy {
return toGoSlice(i, t, output)
}
index := i * 32
if index+32 > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %d require %d", len(output), index+32)
}
// Parse the given index output and check whether we need to read
// a different offset and length based on the type (i.e. string, bytes)
var returnOutput []byte
switch t.Type.T {
case StringTy, BytesTy: // variable arrays are written at the end of the return bytes
// parse offset from which we should start reading
offset := int(binary.BigEndian.Uint64(output[index+24 : index+32]))
if offset+32 > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %d require %d", len(output), offset+32)
}
// parse the size up until we should be reading
size := int(binary.BigEndian.Uint64(output[offset+24 : offset+32]))
if offset+32+size > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %d require %d", len(output), offset+32+size)
}
// get the bytes for this return value
returnOutput = output[offset+32 : offset+32+size]
default:
returnOutput = output[index : index+32]
}
// convert the bytes to whatever is specified by the ABI.
switch t.Type.T {
case IntTy, UintTy:
return readInteger(t.Type.Kind, returnOutput), nil
case BoolTy:
return !allZero(returnOutput), nil
case AddressTy:
return common.BytesToAddress(returnOutput), nil
case HashTy:
return common.BytesToHash(returnOutput), nil
case BytesTy, FixedBytesTy, FunctionTy:
return returnOutput, nil
case StringTy:
return string(returnOutput), nil
}
return nil, fmt.Errorf("abi: unknown type %v", t.Type.T)
}
// these variable are used to determine certain types during type assertion for
// assignment.
var (
r_interSlice = reflect.TypeOf([]interface{}{})
r_hash = reflect.TypeOf(common.Hash{})
r_bytes = reflect.TypeOf([]byte{})
r_byte = reflect.TypeOf(byte(0))
)
// Unpack output in v according to the abi specification
func (abi ABI) Unpack(v interface{}, name string, output []byte) error {
var method = abi.Methods[name]
func (abi ABI) Unpack(v interface{}, name string, output []byte) (err error) {
if len(output) == 0 {
return fmt.Errorf("abi: unmarshalling empty output")
}
// make sure the passed value is a pointer
valueOf := reflect.ValueOf(v)
if reflect.Ptr != valueOf.Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
// since there can't be naming collisions with contracts and events,
// we need to decide whether we're calling a method or an event
if method, ok := abi.Methods[name]; ok {
if len(output)%32 != 0 {
return fmt.Errorf("abi: improperly formatted output")
}
return method.Outputs.Unpack(v, output)
} else if event, ok := abi.Events[name]; ok {
return event.Inputs.Unpack(v, output)
}
var (
value = valueOf.Elem()
typ = value.Type()
)
if len(method.Outputs) > 1 {
switch value.Kind() {
// struct will match named return values to the struct's field
// names
case reflect.Struct:
for i := 0; i < len(method.Outputs); i++ {
marshalledValue, err := toGoType(i, method.Outputs[i], output)
if err != nil {
return err
}
reflectValue := reflect.ValueOf(marshalledValue)
for j := 0; j < typ.NumField(); j++ {
field := typ.Field(j)
// TODO read tags: `abi:"fieldName"`
if field.Name == strings.ToUpper(method.Outputs[i].Name[:1])+method.Outputs[i].Name[1:] {
if err := set(value.Field(j), reflectValue, method.Outputs[i]); err != nil {
return err
}
}
}
}
case reflect.Slice:
if !value.Type().AssignableTo(r_interSlice) {
return fmt.Errorf("abi: cannot marshal tuple in to slice %T (only []interface{} is supported)", v)
}
// if the slice already contains values, set those instead of the interface slice itself.
if value.Len() > 0 {
if len(method.Outputs) > value.Len() {
return fmt.Errorf("abi: cannot marshal in to slices of unequal size (require: %v, got: %v)", len(method.Outputs), value.Len())
}
for i := 0; i < len(method.Outputs); i++ {
marshalledValue, err := toGoType(i, method.Outputs[i], output)
if err != nil {
return err
}
reflectValue := reflect.ValueOf(marshalledValue)
if err := set(value.Index(i).Elem(), reflectValue, method.Outputs[i]); err != nil {
return err
}
}
return nil
}
// create a new slice and start appending the unmarshalled
// values to the new interface slice.
z := reflect.MakeSlice(typ, 0, len(method.Outputs))
for i := 0; i < len(method.Outputs); i++ {
marshalledValue, err := toGoType(i, method.Outputs[i], output)
if err != nil {
return err
}
z = reflect.Append(z, reflect.ValueOf(marshalledValue))
}
value.Set(z)
default:
return fmt.Errorf("abi: cannot unmarshal tuple in to %v", typ)
}
} else {
marshalledValue, err := toGoType(0, method.Outputs[0], output)
if err != nil {
return err
}
if err := set(value, reflect.ValueOf(marshalledValue), method.Outputs[0]); err != nil {
return err
}
}
return nil
return fmt.Errorf("abi: could not locate named method or event")
}
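
A hedged usage sketch for `Unpack`: decode the return data of a single-output constant method into a Go value. The ABI definition is illustrative; the raw return bytes are assumed to come from an `eth_call`:

```go
import (
	"log"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

// decodeGetResult decodes the return data of an illustrative get() -> uint256 method.
func decodeGetResult(ret []byte) *big.Int {
	def := `[{"type":"function","name":"get","constant":true,"inputs":[],"outputs":[{"name":"x","type":"uint256"}]}]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		log.Fatal(err)
	}
	// With a single output, Unpack fills the pointed-to value directly.
	var out *big.Int
	if err := parsed.Unpack(&out, "get", ret); err != nil {
		log.Fatal(err)
	}
	return out
}
```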
// UnmarshalJSON implements json.Unmarshaler interface
func (abi *ABI) UnmarshalJSON(data []byte) error {
var fields []struct {
Type string
Name string
Constant bool
Indexed bool
Anonymous bool
Inputs []Argument
Outputs []Argument
@@ -415,3 +133,14 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
return nil
}
// MethodById looks up a method by the 4-byte id
// returns nil if none found
func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
for _, method := range abi.Methods {
if bytes.Equal(method.Id(), sigdata[:4]) {
return &method, nil
}
}
return nil, fmt.Errorf("no method with id: %#x", sigdata[:4])
}
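
A hedged sketch of driving `MethodById`: the 4-byte selector is conventionally the first four bytes of the Keccak-256 hash of the canonical signature. The `transfer(address,uint256)` signature is only an example and must exist in the parsed ABI for the lookup to succeed:

```go
import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/crypto"
)

// printMethodForSelector resolves a 4-byte selector against an already parsed ABI.
func printMethodForSelector(parsed abi.ABI) {
	selector := crypto.Keccak256([]byte("transfer(address,uint256)"))[:4]
	if method, err := parsed.MethodById(selector); err == nil {
		fmt.Println("matched method:", method.Name)
	} else {
		fmt.Println("no method for selector:", err)
	}
}
```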

File diff suppressed because it is too large

View File

@@ -19,6 +19,8 @@ package abi
import (
"encoding/json"
"fmt"
"reflect"
"strings"
)
// Argument holds the name of the argument and the corresponding type.
@@ -29,7 +31,10 @@ type Argument struct {
Indexed bool // indexed is only used by events
}
func (a *Argument) UnmarshalJSON(data []byte) error {
type Arguments []Argument
// UnmarshalJSON implements json.Unmarshaler interface
func (argument *Argument) UnmarshalJSON(data []byte) error {
var extarg struct {
Name string
Type string
@@ -40,12 +45,246 @@ func (a *Argument) UnmarshalJSON(data []byte) error {
return fmt.Errorf("argument json err: %v", err)
}
a.Type, err = NewType(extarg.Type)
argument.Type, err = NewType(extarg.Type)
if err != nil {
return err
}
a.Name = extarg.Name
a.Indexed = extarg.Indexed
argument.Name = extarg.Name
argument.Indexed = extarg.Indexed
return nil
}
// LengthNonIndexed returns the number of arguments when not counting 'indexed' ones. Only events
// can ever have 'indexed' arguments, it should always be false on arguments for method input/output
func (arguments Arguments) LengthNonIndexed() int {
out := 0
for _, arg := range arguments {
if !arg.Indexed {
out++
}
}
return out
}
// NonIndexed returns the arguments with indexed arguments filtered out
func (arguments Arguments) NonIndexed() Arguments {
var ret []Argument
for _, arg := range arguments {
if !arg.Indexed {
ret = append(ret, arg)
}
}
return ret
}
// isTuple returns true for non-atomic constructs, like (uint,uint) or uint[]
func (arguments Arguments) isTuple() bool {
return len(arguments) > 1
}
// Unpack performs the operation hexdata -> Go format
func (arguments Arguments) Unpack(v interface{}, data []byte) error {
// make sure the passed value is a pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
}
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
if arguments.isTuple() {
return arguments.unpackTuple(v, marshalledValues)
}
return arguments.unpackAtomic(v, marshalledValues)
}
func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interface{}) error {
var (
value = reflect.ValueOf(v).Elem()
typ = value.Type()
kind = value.Kind()
)
if err := requireUnpackKind(value, typ, kind, arguments); err != nil {
return err
}
// If the interface is a struct, get the abi->struct_field mapping
var abi2struct map[string]string
if kind == reflect.Struct {
var err error
abi2struct, err = mapAbiToStructFields(arguments, value)
if err != nil {
return err
}
}
for i, arg := range arguments.NonIndexed() {
reflectValue := reflect.ValueOf(marshalledValues[i])
switch kind {
case reflect.Struct:
if structField, ok := abi2struct[arg.Name]; ok {
if err := set(value.FieldByName(structField), reflectValue, arg); err != nil {
return err
}
}
case reflect.Slice, reflect.Array:
if value.Len() < i {
return fmt.Errorf("abi: insufficient number of arguments for unpack, want %d, got %d", len(arguments), value.Len())
}
v := value.Index(i)
if err := requireAssignable(v, reflectValue); err != nil {
return err
}
if err := set(v.Elem(), reflectValue, arg); err != nil {
return err
}
default:
return fmt.Errorf("abi:[2] cannot unmarshal tuple in to %v", typ)
}
}
return nil
}
// unpackAtomic unpacks ( hexdata -> go ) a single value
func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues []interface{}) error {
if len(marshalledValues) != 1 {
return fmt.Errorf("abi: wrong length, expected single value, got %d", len(marshalledValues))
}
elem := reflect.ValueOf(v).Elem()
kind := elem.Kind()
reflectValue := reflect.ValueOf(marshalledValues[0])
var abi2struct map[string]string
if kind == reflect.Struct {
var err error
if abi2struct, err = mapAbiToStructFields(arguments, elem); err != nil {
return err
}
arg := arguments.NonIndexed()[0]
if structField, ok := abi2struct[arg.Name]; ok {
return set(elem.FieldByName(structField), reflectValue, arg)
}
return nil
}
return set(elem, reflectValue, arguments.NonIndexed()[0])
}
// Computes the full size of an array;
// i.e. counting nested arrays, which count towards size for unpacking.
func getArraySize(arr *Type) int {
size := arr.Size
// Arrays can be nested, with each element being the same size
arr = arr.Elem
for arr.T == ArrayTy {
// Keep multiplying by elem.Size while the elem is an array.
size *= arr.Size
arr = arr.Elem
}
// Now we have the full array size, including its children.
return size
}
// UnpackValues can be used to unpack ABI-encoded hexdata according to the ABI-specification,
// without supplying a struct to unpack into. Instead, this method returns a list containing the
// values. An atomic argument will be a list with one element.
func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
retval := make([]interface{}, 0, arguments.LengthNonIndexed())
virtualArgs := 0
for index, arg := range arguments.NonIndexed() {
marshalledValue, err := toGoType((index+virtualArgs)*32, arg.Type, data)
if arg.Type.T == ArrayTy {
// If we have a static array, like [3]uint256, these are coded as
// just like uint256,uint256,uint256.
// This means that we need to add two 'virtual' arguments when
// we count the index from now on.
//
// Array values nested multiple levels deep are also encoded inline:
// [2][3]uint256: uint256,uint256,uint256,uint256,uint256,uint256
//
// Calculate the full array size to get the correct offset for the next argument.
// Decrement it by 1, as the normal index increment is still applied.
virtualArgs += getArraySize(&arg.Type) - 1
}
if err != nil {
return nil, err
}
retval = append(retval, marshalledValue)
}
return retval, nil
}
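
A hedged sketch of `UnpackValues`, which is handy when no Go struct is available up front; each returned element can then be type-asserted. The `(uint256, bool)` layout of `args` is an assumption for illustration:

```go
import (
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

// decodePair decodes ABI data whose argument list is assumed to describe a (uint256, bool) pair.
func decodePair(args abi.Arguments, data []byte) {
	values, err := args.UnpackValues(data)
	if err != nil {
		log.Fatal(err)
	}
	amount := values[0].(*big.Int) // uint256 decodes to *big.Int
	ok := values[1].(bool)         // bool decodes to bool
	fmt.Println(amount, ok)
}
```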
// PackValues performs the operation Go format -> Hexdata
// It is the semantic opposite of UnpackValues
func (arguments Arguments) PackValues(args []interface{}) ([]byte, error) {
return arguments.Pack(args...)
}
// Pack performs the operation Go format -> Hexdata
func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
// Make sure arguments match up and pack them
abiArgs := arguments
if len(args) != len(abiArgs) {
return nil, fmt.Errorf("argument count mismatch: %d for %d", len(args), len(abiArgs))
}
// variable input is the output appended at the end of packed
// output. This is used for strings and bytes types input.
var variableInput []byte
// input offset is the bytes offset for packed output
inputOffset := 0
for _, abiArg := range abiArgs {
if abiArg.Type.T == ArrayTy {
inputOffset += 32 * abiArg.Type.Size
} else {
inputOffset += 32
}
}
var ret []byte
for i, a := range args {
input := abiArgs[i]
// pack the input
packed, err := input.Type.pack(reflect.ValueOf(a))
if err != nil {
return nil, err
}
// check for a slice type (string, bytes, slice)
if input.Type.requiresLengthPrefix() {
// calculate the offset
offset := inputOffset + len(variableInput)
// set the offset
ret = append(ret, packNum(reflect.ValueOf(offset))...)
// Append the packed output to the variable input. The variable input
// will be appended at the end of the input.
variableInput = append(variableInput, packed...)
} else {
// append the packed value to the input
ret = append(ret, packed...)
}
}
// append the variable input at the end of the packed input
ret = append(ret, variableInput...)
return ret, nil
}
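
A hedged sketch of the head/tail layout that `Pack` produces for a static value followed by a dynamic string, assuming `NewType` accepts the Solidity type name as a string (as in this revision); the packed values are illustrative:

```go
import (
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func packStaticAndDynamic() []byte {
	// Both type strings are well-formed, so the errors are ignored in this sketch.
	uint256Ty, _ := abi.NewType("uint256")
	stringTy, _ := abi.NewType("string")
	args := abi.Arguments{{Type: uint256Ty}, {Type: stringTy}}

	// Word 0: the uint256 value. Word 1: offset of the string data (0x40, i.e. right
	// after the two head words). Tail: the string length, then its bytes padded to a
	// 32-byte boundary.
	packed, err := args.Pack(big.NewInt(42), "hi")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%x\n", packed)
	return packed
}
```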
// capitalise makes the first character of a string upper case, also removing any
// prefixing underscores from the variable names.
func capitalise(input string) string {
for len(input) > 0 && input[0] == '_' {
input = input[1:]
}
if len(input) == 0 {
return ""
}
return strings.ToUpper(input[:1]) + input[1:]
}

View File

@@ -52,12 +52,6 @@ type ContractCaller interface {
CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error)
}
// DeployBackend wraps the operations needed by WaitMined and WaitDeployed.
type DeployBackend interface {
TransactionReceipt(ctx context.Context, txHash common.Hash) (*types.Receipt, error)
CodeAt(ctx context.Context, account common.Address, blockNumber *big.Int) ([]byte, error)
}
// PendingContractCaller defines methods to perform contract calls on the pending state.
// Call will try to discover this interface when access to the pending state is requested.
// If the backend does not support the pending state, Call returns ErrNoPendingState.
@@ -85,13 +79,34 @@ type ContractTransactor interface {
// There is no guarantee that this is the true gas limit requirement as other
// transactions may be added or removed by miners, but it should provide a basis
// for setting a reasonable default.
EstimateGas(ctx context.Context, call ethereum.CallMsg) (usedGas *big.Int, err error)
EstimateGas(ctx context.Context, call ethereum.CallMsg) (gas uint64, err error)
// SendTransaction injects the transaction into the pending pool for execution.
SendTransaction(ctx context.Context, tx *types.Transaction) error
}
// ContractFilterer defines the methods needed to access log events using one-off
// queries or continuous event subscriptions.
type ContractFilterer interface {
// FilterLogs executes a log filter operation, blocking during execution and
// returning all the results in one batch.
//
// TODO(karalabe): Deprecate when the subscription one can return past data too.
FilterLogs(ctx context.Context, query ethereum.FilterQuery) ([]types.Log, error)
// SubscribeFilterLogs creates a background log filtering operation, returning
// a subscription immediately, which can be used to stream the found events.
SubscribeFilterLogs(ctx context.Context, query ethereum.FilterQuery, ch chan<- types.Log) (ethereum.Subscription, error)
}
// DeployBackend wraps the operations needed by WaitMined and WaitDeployed.
type DeployBackend interface {
TransactionReceipt(ctx context.Context, txHash common.Hash) (*types.Receipt, error)
CodeAt(ctx context.Context, account common.Address, blockNumber *big.Int) ([]byte, error)
}
// ContractBackend defines the methods needed to work with contracts on a read-write basis.
type ContractBackend interface {
ContractCaller
ContractTransactor
ContractFilterer
}
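
The `DeployBackend` slice of these interfaces is what the deployment helpers rely on; a hedged sketch of waiting for a transaction through it, with the backend and transaction assumed to come from elsewhere:

```go
import (
	"context"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/core/types"
)

// waitForReceipt blocks until the given transaction is mined, using only the
// DeployBackend methods (TransactionReceipt and CodeAt) via bind.WaitMined.
func waitForReceipt(backend bind.DeployBackend, tx *types.Transaction) *types.Receipt {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	receipt, err := bind.WaitMined(ctx, backend, tx)
	if err != nil {
		log.Fatal(err)
	}
	return receipt
}
```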

View File

@@ -22,6 +22,7 @@ import (
"fmt"
"math/big"
"sync"
"time"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
@@ -29,18 +30,23 @@ import (
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/bloombits"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/eth/filters"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
)
// This nil assignment ensures at compile time that SimulatedBackend implements bind.ContractBackend.
var _ bind.ContractBackend = (*SimulatedBackend)(nil)
var errBlockNumberUnsupported = errors.New("SimulatedBackend cannot access blocks other than the latest block")
var errGasEstimationFailed = errors.New("gas required exceeds allowance or always failing transaction")
// SimulatedBackend implements bind.ContractBackend, simulating a blockchain in
// the background. Its main purpose is to allow easily testing contract bindings.
@@ -52,17 +58,25 @@ type SimulatedBackend struct {
pendingBlock *types.Block // Currently pending block that will be imported on request
pendingState *state.StateDB // Currently pending state that will be the active one on request
events *filters.EventSystem // Event system for filtering log events live
config *params.ChainConfig
}
// NewSimulatedBackend creates a new binding backend using a simulated blockchain
// for testing purposes.
func NewSimulatedBackend(alloc core.GenesisAlloc) *SimulatedBackend {
database, _ := ethdb.NewMemDatabase()
genesis := core.Genesis{Config: params.AllProtocolChanges, Alloc: alloc}
func NewSimulatedBackend(alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
database := ethdb.NewMemDatabase()
genesis := core.Genesis{Config: params.AllEthashProtocolChanges, GasLimit: gasLimit, Alloc: alloc}
genesis.MustCommit(database)
blockchain, _ := core.NewBlockChain(database, genesis.Config, ethash.NewFaker(), new(event.TypeMux), vm.Config{})
backend := &SimulatedBackend{database: database, blockchain: blockchain, config: genesis.Config}
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{})
backend := &SimulatedBackend{
database: database,
blockchain: blockchain,
config: genesis.Config,
events: filters.NewEventSystem(new(event.TypeMux), &filterBackend{database, blockchain}, false),
}
backend.rollback()
return backend
}
@@ -88,9 +102,11 @@ func (b *SimulatedBackend) Rollback() {
}
func (b *SimulatedBackend) rollback() {
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), b.database, 1, func(int, *core.BlockGen) {})
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(int, *core.BlockGen) {})
statedb, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), b.database)
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database())
}
// CodeAt returns the code associated with a certain account in the blockchain.
@@ -144,7 +160,8 @@ func (b *SimulatedBackend) StorageAt(ctx context.Context, contract common.Addres
// TransactionReceipt returns the receipt of a transaction.
func (b *SimulatedBackend) TransactionReceipt(ctx context.Context, txHash common.Hash) (*types.Receipt, error) {
return core.GetReceipt(b.database, txHash), nil
receipt, _, _, _ := rawdb.ReadReceipt(b.database, txHash)
return receipt, nil
}
// PendingCodeAt returns the code associated with an account in the pending state.
@@ -167,7 +184,7 @@ func (b *SimulatedBackend) CallContract(ctx context.Context, call ethereum.CallM
if err != nil {
return nil, err
}
rval, _, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), state)
rval, _, _, err := b.callContract(ctx, call, b.blockchain.CurrentBlock(), state)
return rval, err
}
@@ -177,7 +194,7 @@ func (b *SimulatedBackend) PendingCallContract(ctx context.Context, call ethereu
defer b.mu.Unlock()
defer b.pendingState.RevertToSnapshot(b.pendingState.Snapshot())
rval, _, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
rval, _, _, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
return rval, err
}
@@ -198,46 +215,63 @@ func (b *SimulatedBackend) SuggestGasPrice(ctx context.Context) (*big.Int, error
// EstimateGas executes the requested code against the currently pending block/state and
// returns the used amount of gas.
func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMsg) (*big.Int, error) {
func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMsg) (uint64, error) {
b.mu.Lock()
defer b.mu.Unlock()
// Binary search the gas requirement, as it may be higher than the amount used
var lo, hi uint64
if call.Gas != nil {
hi = call.Gas.Uint64()
// Determine the lowest and highest possible gas limits to binary search in between
var (
lo uint64 = params.TxGas - 1
hi uint64
cap uint64
)
if call.Gas >= params.TxGas {
hi = call.Gas
} else {
hi = b.pendingBlock.GasLimit().Uint64()
hi = b.pendingBlock.GasLimit()
}
for lo+1 < hi {
// Take a guess at the gas, and check transaction validity
mid := (hi + lo) / 2
call.Gas = new(big.Int).SetUint64(mid)
cap = hi
// Create a helper to check if a gas allowance results in an executable transaction
executable := func(gas uint64) bool {
call.Gas = gas
snapshot := b.pendingState.Snapshot()
_, gas, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
_, _, failed, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
b.pendingState.RevertToSnapshot(snapshot)
// If the transaction became invalid or used all the gas (failed), raise the gas limit
if err != nil || gas.Cmp(call.Gas) == 0 {
lo = mid
continue
if err != nil || failed {
return false
}
// Otherwise assume the transaction succeeded, lower the gas limit
hi = mid
return true
}
return new(big.Int).SetUint64(hi), nil
// Execute the binary search and hone in on an executable gas limit
for lo+1 < hi {
mid := (hi + lo) / 2
if !executable(mid) {
lo = mid
} else {
hi = mid
}
}
// Reject the transaction as invalid if it still fails at the highest allowance
if hi == cap {
if !executable(hi) {
return 0, errGasEstimationFailed
}
}
return hi, nil
}
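
A hedged usage sketch of the estimator against the simulated backend; the funded key, recipient and gas-limit figures are illustrative:

```go
import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/crypto"
)

func estimateTransfer() {
	// Spin up a simulated chain with a single funded account.
	key, _ := crypto.GenerateKey()
	auth := bind.NewKeyedTransactor(key)
	sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(1000000000)}}, 10000000)

	to := common.HexToAddress("0x0000000000000000000000000000000000000001")
	gas, err := sim.EstimateGas(context.Background(), ethereum.CallMsg{
		From:  auth.From,
		To:    &to,
		Value: big.NewInt(1),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("estimated gas:", gas) // a plain value transfer should converge on params.TxGas (21000)
}
```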
// callContract implemens common code between normal and pending contract calls.
// callContract implements common code between normal and pending contract calls.
// state is modified during execution, make sure to copy it if necessary.
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, statedb *state.StateDB) ([]byte, *big.Int, error) {
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, statedb *state.StateDB) ([]byte, uint64, bool, error) {
// Ensure message is initialized properly.
if call.GasPrice == nil {
call.GasPrice = big.NewInt(1)
}
if call.Gas == nil || call.Gas.Sign() == 0 {
call.Gas = big.NewInt(50000000)
if call.Gas == 0 {
call.Gas = 50000000
}
if call.Value == nil {
call.Value = new(big.Int)
@@ -252,9 +286,9 @@ func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallM
// Create a new environment which holds all relevant information
// about the transaction and calling mechanisms.
vmenv := vm.NewEVM(evmContext, statedb, b.config, vm.Config{})
gaspool := new(core.GasPool).AddGas(math.MaxBig256)
ret, gasUsed, _, err := core.NewStateTransition(vmenv, msg, gaspool).TransitionDb()
return ret, gasUsed, err
gaspool := new(core.GasPool).AddGas(math.MaxUint64)
return core.NewStateTransition(vmenv, msg, gaspool).TransitionDb()
}
// SendTransaction updates the pending block to include the given transaction.
@@ -272,14 +306,102 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
panic(fmt.Errorf("invalid transaction nonce: got %d, want %d", tx.Nonce(), nonce))
}
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), b.database, 1, func(number int, block *core.BlockGen) {
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTxWithChain(b.blockchain, tx)
}
block.AddTxWithChain(b.blockchain, tx)
})
statedb, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database())
return nil
}
// FilterLogs executes a log filter operation, blocking during execution and
// returning all the results in one batch.
//
// TODO(karalabe): Deprecate when the subscription one can return past data too.
func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.FilterQuery) ([]types.Log, error) {
var filter *filters.Filter
if query.BlockHash != nil {
// Block filter requested, construct a single-shot filter
filter = filters.NewBlockFilter(&filterBackend{b.database, b.blockchain}, *query.BlockHash, query.Addresses, query.Topics)
} else {
// Initialize unset filter boundaries to run from genesis to chain head
from := int64(0)
if query.FromBlock != nil {
from = query.FromBlock.Int64()
}
to := int64(-1)
if query.ToBlock != nil {
to = query.ToBlock.Int64()
}
// Construct the range filter
filter = filters.NewRangeFilter(&filterBackend{b.database, b.blockchain}, from, to, query.Addresses, query.Topics)
}
// Run the filter and return all the logs
logs, err := filter.Logs(ctx)
if err != nil {
return nil, err
}
res := make([]types.Log, len(logs))
for i, log := range logs {
res[i] = *log
}
return res, nil
}
// SubscribeFilterLogs creates a background log filtering operation, returning a
// subscription immediately, which can be used to stream the found events.
func (b *SimulatedBackend) SubscribeFilterLogs(ctx context.Context, query ethereum.FilterQuery, ch chan<- types.Log) (ethereum.Subscription, error) {
// Subscribe to contract events
sink := make(chan []*types.Log)
sub, err := b.events.SubscribeLogs(query, sink)
if err != nil {
return nil, err
}
// Since we're getting logs in batches, we need to flatten them into a plain stream
return event.NewSubscription(func(quit <-chan struct{}) error {
defer sub.Unsubscribe()
for {
select {
case logs := <-sink:
for _, log := range logs {
select {
case ch <- *log:
case err := <-sub.Err():
return err
case <-quit:
return nil
}
}
case err := <-sub.Err():
return err
case <-quit:
return nil
}
}
}), nil
}
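
A hedged sketch of consuming that subscription directly; the watched address and the print loop are illustrative, and generated bindings normally wrap this flow for you:

```go
import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// watchContractLogs streams every log emitted by the given contract address.
func watchContractLogs(sim *backends.SimulatedBackend, contract common.Address) {
	logs := make(chan types.Log, 16)
	sub, err := sim.SubscribeFilterLogs(context.Background(), ethereum.FilterQuery{
		Addresses: []common.Address{contract},
	}, logs)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	for {
		select {
		case err := <-sub.Err():
			log.Fatal(err)
		case l := <-logs:
			fmt.Println("log from block", l.BlockNumber, "topics", l.Topics)
		}
	}
}
```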
// AdjustTime adds a time shift to the simulated clock.
func (b *SimulatedBackend) AdjustTime(adjustment time.Duration) error {
b.mu.Lock()
defer b.mu.Unlock()
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTx(tx)
}
block.AddTx(tx)
block.OffsetTime(int64(adjustment.Seconds()))
})
statedb, _ := b.blockchain.State()
b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), b.database)
b.pendingState, _ = state.New(b.pendingBlock.Root(), statedb.Database())
return nil
}
@@ -293,6 +415,72 @@ func (m callmsg) Nonce() uint64 { return 0 }
func (m callmsg) CheckNonce() bool { return false }
func (m callmsg) To() *common.Address { return m.CallMsg.To }
func (m callmsg) GasPrice() *big.Int { return m.CallMsg.GasPrice }
func (m callmsg) Gas() *big.Int { return m.CallMsg.Gas }
func (m callmsg) Gas() uint64 { return m.CallMsg.Gas }
func (m callmsg) Value() *big.Int { return m.CallMsg.Value }
func (m callmsg) Data() []byte { return m.CallMsg.Data }
// filterBackend implements filters.Backend to support filtering for logs without
// taking bloom-bits acceleration structures into account.
type filterBackend struct {
db ethdb.Database
bc *core.BlockChain
}
func (fb *filterBackend) ChainDb() ethdb.Database { return fb.db }
func (fb *filterBackend) EventMux() *event.TypeMux { panic("not supported") }
func (fb *filterBackend) HeaderByNumber(ctx context.Context, block rpc.BlockNumber) (*types.Header, error) {
if block == rpc.LatestBlockNumber {
return fb.bc.CurrentHeader(), nil
}
return fb.bc.GetHeaderByNumber(uint64(block.Int64())), nil
}
func (fb *filterBackend) HeaderByHash(ctx context.Context, hash common.Hash) (*types.Header, error) {
return fb.bc.GetHeaderByHash(hash), nil
}
func (fb *filterBackend) GetReceipts(ctx context.Context, hash common.Hash) (types.Receipts, error) {
number := rawdb.ReadHeaderNumber(fb.db, hash)
if number == nil {
return nil, nil
}
return rawdb.ReadReceipts(fb.db, hash, *number), nil
}
func (fb *filterBackend) GetLogs(ctx context.Context, hash common.Hash) ([][]*types.Log, error) {
number := rawdb.ReadHeaderNumber(fb.db, hash)
if number == nil {
return nil, nil
}
receipts := rawdb.ReadReceipts(fb.db, hash, *number)
if receipts == nil {
return nil, nil
}
logs := make([][]*types.Log, len(receipts))
for i, receipt := range receipts {
logs[i] = receipt.Logs
}
return logs, nil
}
func (fb *filterBackend) SubscribeNewTxsEvent(ch chan<- core.NewTxsEvent) event.Subscription {
return event.NewSubscription(func(quit <-chan struct{}) error {
<-quit
return nil
})
}
func (fb *filterBackend) SubscribeChainEvent(ch chan<- core.ChainEvent) event.Subscription {
return fb.bc.SubscribeChainEvent(ch)
}
func (fb *filterBackend) SubscribeRemovedLogsEvent(ch chan<- core.RemovedLogsEvent) event.Subscription {
return fb.bc.SubscribeRemovedLogsEvent(ch)
}
func (fb *filterBackend) SubscribeLogsEvent(ch chan<- []*types.Log) event.Subscription {
return fb.bc.SubscribeLogsEvent(ch)
}
func (fb *filterBackend) BloomStatus() (uint64, uint64) { return 4096, 0 }
func (fb *filterBackend) ServiceFilter(ctx context.Context, ms *bloombits.MatcherSession) {
panic("not supported")
}

View File

@@ -27,6 +27,7 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/event"
)
// SignerFn is a signer function callback when a contract requires a method to
@@ -50,11 +51,27 @@ type TransactOpts struct {
Value *big.Int // Funds to transfer along along the transaction (nil = 0 = no funds)
GasPrice *big.Int // Gas price to use for the transaction execution (nil = gas price oracle)
GasLimit *big.Int // Gas limit to set for the transaction execution (nil = estimate + 10%)
GasLimit uint64 // Gas limit to set for the transaction execution (0 = estimate)
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// FilterOpts is the collection of options to fine tune filtering for events
// within a bound contract.
type FilterOpts struct {
Start uint64 // Start of the queried range
End *uint64 // End of the range (nil = latest)
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// WatchOpts is the collection of options to fine tune subscribing for events
// within a bound contract.
type WatchOpts struct {
Start *uint64 // Start of the queried range (nil = latest)
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// BoundContract is the base wrapper object that reflects a contract on the
// Ethereum network. It contains a collection of methods that are used by the
// higher level contract bindings to operate.
@@ -63,16 +80,18 @@ type BoundContract struct {
abi abi.ABI // Reflect based ABI to access the correct Ethereum methods
caller ContractCaller // Read interface to interact with the blockchain
transactor ContractTransactor // Write interface to interact with the blockchain
filterer ContractFilterer // Event filtering to interact with the blockchain
}
// NewBoundContract creates a low level contract interface through which calls
// and transactions may be made through.
func NewBoundContract(address common.Address, abi abi.ABI, caller ContractCaller, transactor ContractTransactor) *BoundContract {
func NewBoundContract(address common.Address, abi abi.ABI, caller ContractCaller, transactor ContractTransactor, filterer ContractFilterer) *BoundContract {
return &BoundContract{
address: address,
abi: abi,
caller: caller,
transactor: transactor,
filterer: filterer,
}
}
@@ -80,7 +99,7 @@ func NewBoundContract(address common.Address, abi abi.ABI, caller ContractCaller
// deployment address with a Go wrapper.
func DeployContract(opts *TransactOpts, abi abi.ABI, bytecode []byte, backend ContractBackend, params ...interface{}) (common.Address, *types.Transaction, *BoundContract, error) {
// Otherwise try to deploy the contract
c := NewBoundContract(common.Address{}, abi, backend, backend)
c := NewBoundContract(common.Address{}, abi, backend, backend, backend)
input, err := c.abi.Pack("", params...)
if err != nil {
@@ -189,7 +208,7 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
}
}
gasLimit := opts.GasLimit
if gasLimit == nil {
if gasLimit == 0 {
// Gas estimation cannot succeed without code for method invocations
if contract != nil {
if code, err := c.transactor.PendingCodeAt(ensureContext(opts.Context), c.address); err != nil {
@@ -225,6 +244,104 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
return signedTx, nil
}
// FilterLogs filters contract logs for past blocks, returning the necessary
// channels to construct a strongly typed bound iterator on top of them.
func (c *BoundContract) FilterLogs(opts *FilterOpts, name string, query ...[]interface{}) (chan types.Log, event.Subscription, error) {
// Don't crash on a lazy user
if opts == nil {
opts = new(FilterOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].Id()}}, query...)
topics, err := makeTopics(query...)
if err != nil {
return nil, nil, err
}
// Start the background filtering
logs := make(chan types.Log, 128)
config := ethereum.FilterQuery{
Addresses: []common.Address{c.address},
Topics: topics,
FromBlock: new(big.Int).SetUint64(opts.Start),
}
if opts.End != nil {
config.ToBlock = new(big.Int).SetUint64(*opts.End)
}
/* TODO(karalabe): Replace the rest of the method below with this when supported
sub, err := c.filterer.SubscribeFilterLogs(ensureContext(opts.Context), config, logs)
*/
buff, err := c.filterer.FilterLogs(ensureContext(opts.Context), config)
if err != nil {
return nil, nil, err
}
sub, err := event.NewSubscription(func(quit <-chan struct{}) error {
for _, log := range buff {
select {
case logs <- log:
case <-quit:
return nil
}
}
return nil
}), nil
if err != nil {
return nil, nil, err
}
return logs, sub, nil
}
// WatchLogs subscribes to contract logs for future blocks, returning a
// subscription object that can be used to tear down the watcher.
func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]interface{}) (chan types.Log, event.Subscription, error) {
// Don't crash on a lazy user
if opts == nil {
opts = new(WatchOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].Id()}}, query...)
topics, err := makeTopics(query...)
if err != nil {
return nil, nil, err
}
// Start the background filtering
logs := make(chan types.Log, 128)
config := ethereum.FilterQuery{
Addresses: []common.Address{c.address},
Topics: topics,
}
if opts.Start != nil {
config.FromBlock = new(big.Int).SetUint64(*opts.Start)
}
sub, err := c.filterer.SubscribeFilterLogs(ensureContext(opts.Context), config, logs)
if err != nil {
return nil, nil, err
}
return logs, sub, nil
}
// UnpackLog unpacks a retrieved log into the provided output structure.
func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log) error {
if len(log.Data) > 0 {
if err := c.abi.Unpack(out, event, log.Data); err != nil {
return err
}
}
var indexed abi.Arguments
for _, arg := range c.abi.Events[event].Inputs {
if arg.Indexed {
indexed = append(indexed, arg)
}
}
return parseTopics(out, indexed, log.Topics[1:])
}
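
A hedged sketch of decoding a raw log through `UnpackLog`, assuming an ERC-20 style `Transfer(address indexed from, address indexed to, uint256 value)` event; the output struct mirrors what the binding generator would emit for it:

```go
import (
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// transferEvent mirrors the assumed Transfer event: the indexed fields are
// reconstructed from the topics, Value from the data section.
type transferEvent struct {
	From  common.Address
	To    common.Address
	Value *big.Int
}

func decodeTransfer(c *bind.BoundContract, l types.Log) transferEvent {
	var out transferEvent
	if err := c.UnpackLog(&out, "Transfer", l); err != nil {
		log.Fatal(err)
	}
	return out
}
```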
// ensureContext is a helper method to ensure a context is not nil, even if the
// user specified it as such.
func ensureContext(ctx context.Context) context.Context {
if ctx == nil {
return context.TODO()

View File

@@ -63,10 +63,11 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
return r
}, abis[i])
// Extract the call and transact methods, and sort them alphabetically
// Extract the call and transact methods; events; and sort them alphabetically
var (
calls = make(map[string]*tmplMethod)
transacts = make(map[string]*tmplMethod)
events = make(map[string]*tmplEvent)
)
for _, original := range evmABI.Methods {
// Normalize the method for capital cases and non-anonymous inputs/outputs
@@ -89,11 +90,33 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
}
// Append the methods to the call or transact lists
if original.Const {
calls[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original)}
calls[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original.Outputs)}
} else {
transacts[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original)}
transacts[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original.Outputs)}
}
}
for _, original := range evmABI.Events {
// Skip anonymous events as they don't support explicit filtering
if original.Anonymous {
continue
}
// Normalize the event for capital cases and non-anonymous outputs
normalized := original
normalized.Name = methodNormalizer[lang](original.Name)
normalized.Inputs = make([]abi.Argument, len(original.Inputs))
copy(normalized.Inputs, original.Inputs)
for j, input := range normalized.Inputs {
// Indexed fields are input, non-indexed ones are outputs
if input.Indexed {
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
}
}
// Append the event to the accumulator list
events[original.Name] = &tmplEvent{Original: original, Normalized: normalized}
}
contracts[types[i]] = &tmplContract{
Type: capitalise(types[i]),
InputABI: strings.Replace(strippedABI, "\"", "\\\"", -1),
@@ -101,6 +124,7 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
Constructor: evmABI.Constructor,
Calls: calls,
Transacts: transacts,
Events: events,
}
}
// Generate the contract template data content and render it
@@ -111,10 +135,11 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
buffer := new(bytes.Buffer)
funcs := map[string]interface{}{
"bindtype": bindType[lang],
"namedtype": namedType[lang],
"capitalise": capitalise,
"decapitalise": decapitalise,
"bindtype": bindType[lang],
"bindtopictype": bindTopicType[lang],
"namedtype": namedType[lang],
"capitalise": capitalise,
"decapitalise": decapitalise,
}
tmpl := template.Must(template.New("").Funcs(funcs).Parse(tmplSource[lang]))
if err := tmpl.Execute(buffer, data); err != nil {
@@ -122,138 +147,194 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
}
// For Go bindings pass the code through goimports to clean it up and double check
if lang == LangGo {
code, err := imports.Process("", buffer.Bytes(), nil)
code, err := imports.Process(".", buffer.Bytes(), nil)
if err != nil {
return "", fmt.Errorf("%v\n%s", err, buffer)
}
return string(code), nil
}
// For all others just return as is for now
return string(buffer.Bytes()), nil
return buffer.String(), nil
}
// bindType is a set of type binders that convert Solidity types to some supported
// programming language.
// programming language types.
var bindType = map[Lang]func(kind abi.Type) string{
LangGo: bindTypeGo,
LangJava: bindTypeJava,
}
// Helper function for the binding generators.
// It reads the unmatched characters after the inner type-match,
// (since the inner type is a prefix of the total type declaration),
// looks for valid arrays (possibly a dynamic one) wrapping the inner type,
// and returns the sizes of these arrays.
//
// Returned array sizes are in the same order as solidity signatures; inner array size first.
// Array sizes may also be "", indicating a dynamic array.
func wrapArray(stringKind string, innerLen int, innerMapping string) (string, []string) {
remainder := stringKind[innerLen:]
//find all the sizes
matches := regexp.MustCompile(`\[(\d*)\]`).FindAllStringSubmatch(remainder, -1)
parts := make([]string, 0, len(matches))
for _, match := range matches {
//get group 1 from the regex match
parts = append(parts, match[1])
}
return innerMapping, parts
}
// Translates the array sizes to a Go-lang declaration of a (nested) array of the inner type.
// Simply returns the inner type if arraySizes is empty.
func arrayBindingGo(inner string, arraySizes []string) string {
out := ""
//prepend all array sizes, from outer (end arraySizes) to inner (start arraySizes)
for i := len(arraySizes) - 1; i >= 0; i-- {
out += "[" + arraySizes[i] + "]"
}
out += inner
return out
}
// bindTypeGo converts a Solidity type to a Go one. Since there is no clear mapping
// from all Solidity types to Go ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. *big.Int).
func bindTypeGo(kind abi.Type) string {
stringKind := kind.String()
innerLen, innerMapping := bindUnnestedTypeGo(stringKind)
return arrayBindingGo(wrapArray(stringKind, innerLen, innerMapping))
}
// The inner function of bindTypeGo, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeGo(stringKind string) (int, string) {
switch {
case strings.HasPrefix(stringKind, "address"):
parts := regexp.MustCompile(`address(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return stringKind
}
return fmt.Sprintf("%scommon.Address", parts[1])
return len("address"), "common.Address"
case strings.HasPrefix(stringKind, "bytes"):
parts := regexp.MustCompile(`bytes([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 3 {
return stringKind
}
return fmt.Sprintf("%s[%s]byte", parts[2], parts[1])
parts := regexp.MustCompile(`bytes([0-9]*)`).FindStringSubmatch(stringKind)
return len(parts[0]), fmt.Sprintf("[%s]byte", parts[1])
case strings.HasPrefix(stringKind, "int") || strings.HasPrefix(stringKind, "uint"):
parts := regexp.MustCompile(`(u)?int([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 4 {
return stringKind
}
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(stringKind)
switch parts[2] {
case "8", "16", "32", "64":
return fmt.Sprintf("%s%sint%s", parts[3], parts[1], parts[2])
return len(parts[0]), fmt.Sprintf("%sint%s", parts[1], parts[2])
}
return fmt.Sprintf("%s*big.Int", parts[3])
return len(parts[0]), "*big.Int"
case strings.HasPrefix(stringKind, "bool") || strings.HasPrefix(stringKind, "string"):
parts := regexp.MustCompile(`([a-z]+)(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 3 {
return stringKind
}
return fmt.Sprintf("%s%s", parts[2], parts[1])
case strings.HasPrefix(stringKind, "bool"):
return len("bool"), "bool"
case strings.HasPrefix(stringKind, "string"):
return len("string"), "string"
default:
return stringKind
return len(stringKind), stringKind
}
}
// Translates the array sizes to a Java declaration of a (nested) array of the inner type.
// Simply returns the inner type if arraySizes is empty.
func arrayBindingJava(inner string, arraySizes []string) string {
// Java array type declarations do not include the length.
return inner + strings.Repeat("[]", len(arraySizes))
}
// bindTypeJava converts a Solidity type to a Java one. Since there is no clear mapping
// from all Solidity types to Java ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. BigDecimal).
func bindTypeJava(kind abi.Type) string {
stringKind := kind.String()
innerLen, innerMapping := bindUnnestedTypeJava(stringKind)
return arrayBindingJava(wrapArray(stringKind, innerLen, innerMapping))
}
// The inner function of bindTypeJava, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeJava(stringKind string) (int, string) {
switch {
case strings.HasPrefix(stringKind, "address"):
parts := regexp.MustCompile(`address(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return stringKind
return len(stringKind), stringKind
}
if parts[1] == "" {
return fmt.Sprintf("Address")
return len("address"), "Address"
}
return fmt.Sprintf("Addresses")
return len(parts[0]), "Addresses"
case strings.HasPrefix(stringKind, "bytes"):
parts := regexp.MustCompile(`bytes([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 3 {
return stringKind
parts := regexp.MustCompile(`bytes([0-9]*)`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return len(stringKind), stringKind
}
if parts[2] != "" {
return "byte[][]"
}
return "byte[]"
return len(parts[0]), "byte[]"
case strings.HasPrefix(stringKind, "int") || strings.HasPrefix(stringKind, "uint"):
parts := regexp.MustCompile(`(u)?int([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 4 {
return stringKind
//Note that uint and int (without digits) are also matched,
// these are size 256, and will translate to BigInt (the default).
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(stringKind)
if len(parts) != 3 {
return len(stringKind), stringKind
}
switch parts[2] {
case "8", "16", "32", "64":
if parts[1] == "" {
if parts[3] == "" {
return fmt.Sprintf("int%s", parts[2])
}
return fmt.Sprintf("int%s[]", parts[2])
}
namedSize := map[string]string{
"8": "byte",
"16": "short",
"32": "int",
"64": "long",
}[parts[2]]
//default to BigInt
if namedSize == "" {
namedSize = "BigInt"
}
if parts[3] == "" {
return fmt.Sprintf("BigInt")
}
return fmt.Sprintf("BigInts")
return len(parts[0]), namedSize
case strings.HasPrefix(stringKind, "bool"):
parts := regexp.MustCompile(`bool(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return stringKind
}
if parts[1] == "" {
return fmt.Sprintf("bool")
}
return fmt.Sprintf("bool[]")
return len("bool"), "boolean"
case strings.HasPrefix(stringKind, "string"):
parts := regexp.MustCompile(`string(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return stringKind
}
if parts[1] == "" {
return fmt.Sprintf("String")
}
return fmt.Sprintf("String[]")
return len("string"), "String"
default:
return stringKind
return len(stringKind), stringKind
}
}
// bindTopicType is a set of type binders that convert Solidity types to some
// supported programming language topic types.
var bindTopicType = map[Lang]func(kind abi.Type) string{
LangGo: bindTopicTypeGo,
LangJava: bindTopicTypeJava,
}
// bindTopicTypeGo converts a Solidity topic type to a Go one. It is almost the same
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeGo(kind abi.Type) string {
bound := bindTypeGo(kind)
if bound == "string" || bound == "[]byte" {
bound = "common.Hash"
}
return bound
}
// bindTopicTypeJava converts a Solidity topic type to a Java one. It is almost the same
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeJava(kind abi.Type) string {
bound := bindTypeJava(kind)
if bound == "String" || bound == "Bytes" {
bound = "Hash"
}
return bound
}
// namedType is a set of functions that transform language specific types to
// named versions that may be used inside method names.
var namedType = map[Lang]func(string, abi.Type) string{
@@ -273,11 +354,13 @@ func namedTypeJava(javaKind string, solKind abi.Type) string {
return "String"
case "string[]":
return "Strings"
case "bool":
case "boolean":
return "Bool"
case "bool[]":
case "boolean[]":
return "Bools"
case "BigInt":
case "BigInt[]":
return "BigInts"
default:
parts := regexp.MustCompile(`(u)?int([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(solKind.String())
if len(parts) != 4 {
return javaKind
@@ -292,8 +375,6 @@ func namedTypeJava(javaKind string, solKind abi.Type) string {
default:
return javaKind
}
default:
return javaKind
}
}
@@ -304,26 +385,71 @@ var methodNormalizer = map[Lang]func(string) string{
LangJava: decapitalise,
}
// capitalise makes the first character of a string upper case.
// capitalise makes a camel-case string which starts with an upper case character.
func capitalise(input string) string {
return strings.ToUpper(input[:1]) + input[1:]
for len(input) > 0 && input[0] == '_' {
input = input[1:]
}
if len(input) == 0 {
return ""
}
return toCamelCase(strings.ToUpper(input[:1]) + input[1:])
}
// decapitalise makes the first character of a string lower case.
// decapitalise makes a camel-case string which starts with a lower case character.
func decapitalise(input string) string {
return strings.ToLower(input[:1]) + input[1:]
for len(input) > 0 && input[0] == '_' {
input = input[1:]
}
if len(input) == 0 {
return ""
}
return toCamelCase(strings.ToLower(input[:1]) + input[1:])
}
// structured checks whether a method has enough information to return a proper
// Go struct ot if flat returns are needed.
func structured(method abi.Method) bool {
if len(method.Outputs) < 2 {
// toCamelCase converts an under-score string to a camel-case string
func toCamelCase(input string) string {
toupper := false
result := ""
for k, v := range input {
switch {
case k == 0:
result = strings.ToUpper(string(input[0]))
case toupper:
result += strings.ToUpper(string(v))
toupper = false
case v == '_':
toupper = true
default:
result += string(v)
}
}
return result
}
// structured checks whether a list of ABI data types has enough information to
// operate through a proper Go struct or if flat returns are needed.
func structured(args abi.Arguments) bool {
if len(args) < 2 {
return false
}
for _, out := range method.Outputs {
exists := make(map[string]bool)
for _, out := range args {
// If the name is anonymous, we can't organize into a struct
if out.Name == "" {
return false
}
// If the field name is empty when normalized or collides (var, Var, _var, _Var),
// we can't organize into a struct
field := capitalise(out.Name)
if field == "" || exists[field] {
return false
}
exists[field] = true
}
return true
}
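
A hedged illustration of how these normalizers shape the generated bindings: leading underscores are dropped, snake_case becomes CamelCase, and output lists whose names collide after capitalisation fall back to flat returns. The expected values below follow from the code above:

```go
// capitalise("_under_score") -> "UnderScore"  (leading underscores dropped, then camel-cased)
// capitalise("value")        -> "Value"
//
// structured() on output name lists, using the rules above:
//   {str, num}  -> true  (distinct, non-empty names: a return struct is generated)
//   {str, Str}  -> false (both capitalise to "Str", so flat returns are used)
//   {"", value} -> false (an anonymous output forces flat returns)
```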

View File

@@ -126,6 +126,7 @@ var bindTests = []struct {
{"type":"function","name":"namedOutput","constant":true,"inputs":[],"outputs":[{"name":"str","type":"string"}]},
{"type":"function","name":"anonOutput","constant":true,"inputs":[],"outputs":[{"name":"","type":"string"}]},
{"type":"function","name":"namedOutputs","constant":true,"inputs":[],"outputs":[{"name":"str1","type":"string"},{"name":"str2","type":"string"}]},
{"type":"function","name":"collidingOutputs","constant":true,"inputs":[],"outputs":[{"name":"str","type":"string"},{"name":"Str","type":"string"}]},
{"type":"function","name":"anonOutputs","constant":true,"inputs":[],"outputs":[{"name":"","type":"string"},{"name":"","type":"string"}]},
{"type":"function","name":"mixedOutputs","constant":true,"inputs":[],"outputs":[{"name":"","type":"string"},{"name":"str","type":"string"}]}
]
@@ -140,12 +141,71 @@ var bindTests = []struct {
str1, err = b.NamedOutput(nil)
str1, err = b.AnonOutput(nil)
res, _ := b.NamedOutputs(nil)
str1, str2, err = b.CollidingOutputs(nil)
str1, str2, err = b.AnonOutputs(nil)
str1, str2, err = b.MixedOutputs(nil)
fmt.Println(str1, str2, res.Str1, res.Str2, err)
}`,
},
// Tests that named, anonymous and indexed events are handled correctly
{
`EventChecker`, ``, ``,
`
[
{"type":"event","name":"empty","inputs":[]},
{"type":"event","name":"indexed","inputs":[{"name":"addr","type":"address","indexed":true},{"name":"num","type":"int256","indexed":true}]},
{"type":"event","name":"mixed","inputs":[{"name":"addr","type":"address","indexed":true},{"name":"num","type":"int256"}]},
{"type":"event","name":"anonymous","anonymous":true,"inputs":[]},
{"type":"event","name":"dynamic","inputs":[{"name":"idxStr","type":"string","indexed":true},{"name":"idxDat","type":"bytes","indexed":true},{"name":"str","type":"string"},{"name":"dat","type":"bytes"}]}
]
`,
`if e, err := NewEventChecker(common.Address{}, nil); e == nil || err != nil {
t.Fatalf("binding (%v) nil or error (%v) not nil", e, nil)
} else if false { // Don't run, just compile and test types
var (
err error
res bool
str string
dat []byte
hash common.Hash
)
_, err = e.FilterEmpty(nil)
_, err = e.FilterIndexed(nil, []common.Address{}, []*big.Int{})
mit, err := e.FilterMixed(nil, []common.Address{})
res = mit.Next() // Make sure the iterator has a Next method
err = mit.Error() // Make sure the iterator has an Error method
err = mit.Close() // Make sure the iterator has a Close method
fmt.Println(mit.Event.Raw.BlockHash) // Make sure the raw log is contained within the results
fmt.Println(mit.Event.Num) // Make sure the unpacked non-indexed fields are present
fmt.Println(mit.Event.Addr) // Make sure the reconstructed indexed fields are present
dit, err := e.FilterDynamic(nil, []string{}, [][]byte{})
str = dit.Event.Str // Make sure non-indexed strings retain their type
dat = dit.Event.Dat // Make sure non-indexed bytes retain their type
hash = dit.Event.IdxStr // Make sure indexed strings turn into hashes
hash = dit.Event.IdxDat // Make sure indexed bytes turn into hashes
sink := make(chan *EventCheckerMixed)
sub, err := e.WatchMixed(nil, sink, []common.Address{})
defer sub.Unsubscribe()
event := <-sink
fmt.Println(event.Raw.BlockHash) // Make sure the raw log is contained within the results
fmt.Println(event.Num) // Make sure the unpacked non-indexed fields are present
fmt.Println(event.Addr) // Make sure the reconstructed indexed fields are present
fmt.Println(res, str, dat, hash, err)
}
// Run a tiny reflection test to ensure disallowed methods don't appear
if _, ok := reflect.TypeOf(&EventChecker{}).MethodByName("FilterAnonymous"); ok {
t.Errorf("binding has disallowed method (FilterAnonymous)")
}`,
},
// Test that contract interactions (deploy, transact and call) generate working code
{
`Interactor`,
@@ -169,7 +229,7 @@ var bindTests = []struct {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}})
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
// Deploy an interaction tester contract and call a transaction on it
_, _, interactor, err := DeployInteractor(auth, sim, "Deploy string")
@@ -210,7 +270,7 @@ var bindTests = []struct {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}})
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
// Deploy a tuple tester contract and execute a structured call on it
_, _, getter, err := DeployGetter(auth, sim)
@@ -242,7 +302,7 @@ var bindTests = []struct {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}})
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
// Deploy a tuple tester contract and execute a structured call on it
_, _, tupler, err := DeployTupler(auth, sim)
@@ -284,7 +344,7 @@ var bindTests = []struct {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}})
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
	// Deploy a slice tester contract and execute an array call on it
_, _, slicer, err := DeploySlicer(auth, sim)
@@ -318,7 +378,7 @@ var bindTests = []struct {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}})
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
// Deploy a default method invoker contract and execute its default method
_, _, defaulter, err := DeployDefaulter(auth, sim)
@@ -351,7 +411,7 @@ var bindTests = []struct {
`[{"constant":true,"inputs":[],"name":"String","outputs":[{"name":"","type":"string"}],"type":"function"}]`,
`
// Create a simulator and wrap a non-deployed contract
sim := backends.NewSimulatedBackend(nil)
sim := backends.NewSimulatedBackend(nil, uint64(10000000000))
nonexistent, err := NewNonExistent(common.Address{}, sim)
if err != nil {
@@ -365,7 +425,7 @@ var bindTests = []struct {
}
`,
},
	// Tests that gas estimation works for contracts with weird gas mechanics too.
{
`FunkyGasPattern`,
`
@@ -387,7 +447,7 @@ var bindTests = []struct {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}})
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
// Deploy a funky gas pattern contract
_, _, limiter, err := DeployFunkyGasPattern(auth, sim)
@@ -397,7 +457,6 @@ var bindTests = []struct {
sim.Commit()
// Set the field with automatic estimation and check that it succeeds
auth.GasLimit = nil
if _, err := limiter.SetField(auth, "automatic"); err != nil {
t.Fatalf("Failed to call automatically gased transaction: %v", err)
}
@@ -423,7 +482,7 @@ var bindTests = []struct {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}})
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
// Deploy a sender tester contract and execute a structured call on it
_, _, callfrom, err := DeployCallFrom(auth, sim)
@@ -447,6 +506,309 @@ var bindTests = []struct {
}
`,
},
// Tests that methods and returns with underscores inside work correctly.
{
`Underscorer`,
`
contract Underscorer {
function UnderscoredOutput() constant returns (int _int, string _string) {
return (314, "pi");
}
function LowerLowerCollision() constant returns (int _res, int res) {
return (1, 2);
}
function LowerUpperCollision() constant returns (int _res, int Res) {
return (1, 2);
}
function UpperLowerCollision() constant returns (int _Res, int res) {
return (1, 2);
}
function UpperUpperCollision() constant returns (int _Res, int Res) {
return (1, 2);
}
function PurelyUnderscoredOutput() constant returns (int _, int res) {
return (1, 2);
}
function AllPurelyUnderscoredOutput() constant returns (int _, int __) {
return (1, 2);
}
function _under_scored_func() constant returns (int _int) {
return 0;
}
}
`, `6060604052341561000f57600080fd5b6103858061001e6000396000f30060606040526004361061008e576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff16806303a592131461009357806346546dbe146100c357806367e6633d146100ec5780639df4848514610181578063af7486ab146101b1578063b564b34d146101e1578063e02ab24d14610211578063e409ca4514610241575b600080fd5b341561009e57600080fd5b6100a6610271565b604051808381526020018281526020019250505060405180910390f35b34156100ce57600080fd5b6100d6610286565b6040518082815260200191505060405180910390f35b34156100f757600080fd5b6100ff61028e565b6040518083815260200180602001828103825283818151815260200191508051906020019080838360005b8381101561014557808201518184015260208101905061012a565b50505050905090810190601f1680156101725780820380516001836020036101000a031916815260200191505b50935050505060405180910390f35b341561018c57600080fd5b6101946102dc565b604051808381526020018281526020019250505060405180910390f35b34156101bc57600080fd5b6101c46102f1565b604051808381526020018281526020019250505060405180910390f35b34156101ec57600080fd5b6101f4610306565b604051808381526020018281526020019250505060405180910390f35b341561021c57600080fd5b61022461031b565b604051808381526020018281526020019250505060405180910390f35b341561024c57600080fd5b610254610330565b604051808381526020018281526020019250505060405180910390f35b60008060016002819150809050915091509091565b600080905090565b6000610298610345565b61013a8090506040805190810160405280600281526020017f7069000000000000000000000000000000000000000000000000000000000000815250915091509091565b60008060016002819150809050915091509091565b60008060016002819150809050915091509091565b60008060016002819150809050915091509091565b60008060016002819150809050915091509091565b60008060016002819150809050915091509091565b6020604051908101604052806000815250905600a165627a7a72305820d1a53d9de9d1e3d55cb3dc591900b63c4f1ded79114f7b79b332684840e186a40029`,
`[{"constant":true,"inputs":[],"name":"LowerUpperCollision","outputs":[{"name":"_res","type":"int256"},{"name":"Res","type":"int256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"_under_scored_func","outputs":[{"name":"_int","type":"int256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"UnderscoredOutput","outputs":[{"name":"_int","type":"int256"},{"name":"_string","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"PurelyUnderscoredOutput","outputs":[{"name":"_","type":"int256"},{"name":"res","type":"int256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"UpperLowerCollision","outputs":[{"name":"_Res","type":"int256"},{"name":"res","type":"int256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"AllPurelyUnderscoredOutput","outputs":[{"name":"_","type":"int256"},{"name":"__","type":"int256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"UpperUpperCollision","outputs":[{"name":"_Res","type":"int256"},{"name":"Res","type":"int256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"LowerLowerCollision","outputs":[{"name":"_res","type":"int256"},{"name":"res","type":"int256"}],"payable":false,"stateMutability":"view","type":"function"}]`,
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
			// Deploy an underscorer tester contract and execute a structured call on it
_, _, underscorer, err := DeployUnderscorer(auth, sim)
if err != nil {
t.Fatalf("Failed to deploy underscorer contract: %v", err)
}
sim.Commit()
// Verify that underscored return values correctly parse into structs
if res, err := underscorer.UnderscoredOutput(nil); err != nil {
t.Errorf("Failed to call constant function: %v", err)
} else if res.Int.Cmp(big.NewInt(314)) != 0 || res.String != "pi" {
t.Errorf("Invalid result, want: {314, \"pi\"}, got: %+v", res)
}
// Verify that underscored and non-underscored name collisions force tuple outputs
var a, b *big.Int
a, b, _ = underscorer.LowerLowerCollision(nil)
a, b, _ = underscorer.LowerUpperCollision(nil)
a, b, _ = underscorer.UpperLowerCollision(nil)
a, b, _ = underscorer.UpperUpperCollision(nil)
a, b, _ = underscorer.PurelyUnderscoredOutput(nil)
a, b, _ = underscorer.AllPurelyUnderscoredOutput(nil)
a, _ = underscorer.UnderScoredFunc(nil)
fmt.Println(a, b, err)
`,
},
// Tests that logs can be successfully filtered and decoded.
{
`Eventer`,
`
contract Eventer {
event SimpleEvent (
address indexed Addr,
bytes32 indexed Id,
bool indexed Flag,
uint Value
);
function raiseSimpleEvent(address addr, bytes32 id, bool flag, uint value) {
SimpleEvent(addr, id, flag, value);
}
event NodataEvent (
uint indexed Number,
int16 indexed Short,
uint32 indexed Long
);
function raiseNodataEvent(uint number, int16 short, uint32 long) {
NodataEvent(number, short, long);
}
event DynamicEvent (
string indexed IndexedString,
bytes indexed IndexedBytes,
string NonIndexedString,
bytes NonIndexedBytes
);
function raiseDynamicEvent(string str, bytes blob) {
DynamicEvent(str, blob, str, blob);
}
}
`,
`6060604052341561000f57600080fd5b61042c8061001e6000396000f300606060405260043610610057576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff168063528300ff1461005c578063630c31e2146100fc578063c7d116dd14610156575b600080fd5b341561006757600080fd5b6100fa600480803590602001908201803590602001908080601f0160208091040260200160405190810160405280939291908181526020018383808284378201915050505050509190803590602001908201803590602001908080601f01602080910402602001604051908101604052809392919081815260200183838082843782019150505050505091905050610194565b005b341561010757600080fd5b610154600480803573ffffffffffffffffffffffffffffffffffffffff16906020019091908035600019169060200190919080351515906020019091908035906020019091905050610367565b005b341561016157600080fd5b610192600480803590602001909190803560010b90602001909190803563ffffffff169060200190919050506103c3565b005b806040518082805190602001908083835b6020831015156101ca57805182526020820191506020810190506020830392506101a5565b6001836020036101000a0380198251168184511680821785525050505050509050019150506040518091039020826040518082805190602001908083835b60208310151561022d5780518252602082019150602081019050602083039250610208565b6001836020036101000a03801982511681845116808217855250505050505090500191505060405180910390207f3281fd4f5e152dd3385df49104a3f633706e21c9e80672e88d3bcddf33101f008484604051808060200180602001838103835285818151815260200191508051906020019080838360005b838110156102c15780820151818401526020810190506102a6565b50505050905090810190601f1680156102ee5780820380516001836020036101000a031916815260200191505b50838103825284818151815260200191508051906020019080838360005b8381101561032757808201518184015260208101905061030c565b50505050905090810190601f1680156103545780820380516001836020036101000a031916815260200191505b5094505050505060405180910390a35050565b81151583600019168573ffffffffffffffffffffffffffffffffffffffff167f1f097de4289df643bd9c11011cc61367aa12983405c021056e706eb5ba1250c8846040518082815260200191505060405180910390a450505050565b8063ffffffff168260010b847f3ca7f3a77e5e6e15e781850bc82e32adfa378a2a609370db24b4d0fae10da2c960405160405180910390a45050505600a165627a7a72305820d1f8a8bbddbc5bb29f285891d6ae1eef8420c52afdc05e1573f6114d8e1714710029`,
`[{"constant":false,"inputs":[{"name":"str","type":"string"},{"name":"blob","type":"bytes"}],"name":"raiseDynamicEvent","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"addr","type":"address"},{"name":"id","type":"bytes32"},{"name":"flag","type":"bool"},{"name":"value","type":"uint256"}],"name":"raiseSimpleEvent","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"number","type":"uint256"},{"name":"short","type":"int16"},{"name":"long","type":"uint32"}],"name":"raiseNodataEvent","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"anonymous":false,"inputs":[{"indexed":true,"name":"Addr","type":"address"},{"indexed":true,"name":"Id","type":"bytes32"},{"indexed":true,"name":"Flag","type":"bool"},{"indexed":false,"name":"Value","type":"uint256"}],"name":"SimpleEvent","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"Number","type":"uint256"},{"indexed":true,"name":"Short","type":"int16"},{"indexed":true,"name":"Long","type":"uint32"}],"name":"NodataEvent","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"IndexedString","type":"string"},{"indexed":true,"name":"IndexedBytes","type":"bytes"},{"indexed":false,"name":"NonIndexedString","type":"string"},{"indexed":false,"name":"NonIndexedBytes","type":"bytes"}],"name":"DynamicEvent","type":"event"}]`,
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
// Deploy an eventer contract
_, _, eventer, err := DeployEventer(auth, sim)
if err != nil {
t.Fatalf("Failed to deploy eventer contract: %v", err)
}
sim.Commit()
// Inject a few events into the contract, gradually more in each block
for i := 1; i <= 3; i++ {
for j := 1; j <= i; j++ {
if _, err := eventer.RaiseSimpleEvent(auth, common.Address{byte(j)}, [32]byte{byte(j)}, true, big.NewInt(int64(10*i+j))); err != nil {
t.Fatalf("block %d, event %d: raise failed: %v", i, j, err)
}
}
sim.Commit()
}
// Test filtering for certain events and ensure they can be found
sit, err := eventer.FilterSimpleEvent(nil, []common.Address{common.Address{1}, common.Address{3}}, [][32]byte{{byte(1)}, {byte(2)}, {byte(3)}}, []bool{true})
if err != nil {
t.Fatalf("failed to filter for simple events: %v", err)
}
defer sit.Close()
sit.Next()
if sit.Event.Value.Uint64() != 11 || !sit.Event.Flag {
t.Errorf("simple log content mismatch: have %v, want {11, true}", sit.Event)
}
sit.Next()
if sit.Event.Value.Uint64() != 21 || !sit.Event.Flag {
t.Errorf("simple log content mismatch: have %v, want {21, true}", sit.Event)
}
sit.Next()
if sit.Event.Value.Uint64() != 31 || !sit.Event.Flag {
t.Errorf("simple log content mismatch: have %v, want {31, true}", sit.Event)
}
sit.Next()
if sit.Event.Value.Uint64() != 33 || !sit.Event.Flag {
t.Errorf("simple log content mismatch: have %v, want {33, true}", sit.Event)
}
if sit.Next() {
t.Errorf("unexpected simple event found: %+v", sit.Event)
}
if err = sit.Error(); err != nil {
t.Fatalf("simple event iteration failed: %v", err)
}
// Test raising and filtering for an event with no data component
if _, err := eventer.RaiseNodataEvent(auth, big.NewInt(314), 141, 271); err != nil {
t.Fatalf("failed to raise nodata event: %v", err)
}
sim.Commit()
nit, err := eventer.FilterNodataEvent(nil, []*big.Int{big.NewInt(314)}, []int16{140, 141, 142}, []uint32{271})
if err != nil {
t.Fatalf("failed to filter for nodata events: %v", err)
}
defer nit.Close()
if !nit.Next() {
t.Fatalf("nodata log not found: %v", nit.Error())
}
if nit.Event.Number.Uint64() != 314 {
t.Errorf("nodata log content mismatch: have %v, want 314", nit.Event.Number)
}
if nit.Next() {
t.Errorf("unexpected nodata event found: %+v", nit.Event)
}
if err = nit.Error(); err != nil {
t.Fatalf("nodata event iteration failed: %v", err)
}
// Test raising and filtering for events with dynamic indexed components
if _, err := eventer.RaiseDynamicEvent(auth, "Hello", []byte("World")); err != nil {
t.Fatalf("failed to raise dynamic event: %v", err)
}
sim.Commit()
dit, err := eventer.FilterDynamicEvent(nil, []string{"Hi", "Hello", "Bye"}, [][]byte{[]byte("World")})
if err != nil {
t.Fatalf("failed to filter for dynamic events: %v", err)
}
defer dit.Close()
if !dit.Next() {
t.Fatalf("dynamic log not found: %v", dit.Error())
}
if dit.Event.NonIndexedString != "Hello" || string(dit.Event.NonIndexedBytes) != "World" || dit.Event.IndexedString != common.HexToHash("0x06b3dfaec148fb1bb2b066f10ec285e7c9bf402ab32aa78a5d38e34566810cd2") || dit.Event.IndexedBytes != common.HexToHash("0xf2208c967df089f60420785795c0a9ba8896b0f6f1867fa7f1f12ad6f79c1a18") {
t.Errorf("dynamic log content mismatch: have %v, want {'0x06b3dfaec148fb1bb2b066f10ec285e7c9bf402ab32aa78a5d38e34566810cd2, '0xf2208c967df089f60420785795c0a9ba8896b0f6f1867fa7f1f12ad6f79c1a18', 'Hello', 'World'}", dit.Event)
}
if dit.Next() {
t.Errorf("unexpected dynamic event found: %+v", dit.Event)
}
if err = dit.Error(); err != nil {
t.Fatalf("dynamic event iteration failed: %v", err)
}
// Test subscribing to an event and raising it afterwards
ch := make(chan *EventerSimpleEvent, 16)
sub, err := eventer.WatchSimpleEvent(nil, ch, nil, nil, nil)
if err != nil {
t.Fatalf("failed to subscribe to simple events: %v", err)
}
if _, err := eventer.RaiseSimpleEvent(auth, common.Address{255}, [32]byte{255}, true, big.NewInt(255)); err != nil {
t.Fatalf("failed to raise subscribed simple event: %v", err)
}
sim.Commit()
select {
case event := <-ch:
if event.Value.Uint64() != 255 {
t.Errorf("simple log content mismatch: have %v, want 255", event)
}
case <-time.After(250 * time.Millisecond):
t.Fatalf("subscribed simple event didn't arrive")
}
// Unsubscribe from the event and make sure we're not delivered more
sub.Unsubscribe()
if _, err := eventer.RaiseSimpleEvent(auth, common.Address{254}, [32]byte{254}, true, big.NewInt(254)); err != nil {
t.Fatalf("failed to raise subscribed simple event: %v", err)
}
sim.Commit()
select {
case event := <-ch:
t.Fatalf("unsubscribed simple event arrived: %v", event)
case <-time.After(250 * time.Millisecond):
}
`,
},
{
`DeeplyNestedArray`,
`
contract DeeplyNestedArray {
uint64[3][4][5] public deepUint64Array;
function storeDeepUintArray(uint64[3][4][5] arr) public {
deepUint64Array = arr;
}
function retrieveDeepArray() public view returns (uint64[3][4][5]) {
return deepUint64Array;
}
}
`,
`6060604052341561000f57600080fd5b6106438061001e6000396000f300606060405260043610610057576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff168063344248551461005c5780638ed4573a1461011457806398ed1856146101ab575b600080fd5b341561006757600080fd5b610112600480806107800190600580602002604051908101604052809291906000905b828210156101055783826101800201600480602002604051908101604052809291906000905b828210156100f25783826060020160038060200260405190810160405280929190826003602002808284378201915050505050815260200190600101906100b0565b505050508152602001906001019061008a565b5050505091905050610208565b005b341561011f57600080fd5b61012761021d565b604051808260056000925b8184101561019b578284602002015160046000925b8184101561018d5782846020020151600360200280838360005b8381101561017c578082015181840152602081019050610161565b505050509050019260010192610147565b925050509260010192610132565b9250505091505060405180910390f35b34156101b657600080fd5b6101de6004808035906020019091908035906020019091908035906020019091905050610309565b604051808267ffffffffffffffff1667ffffffffffffffff16815260200191505060405180910390f35b80600090600561021992919061035f565b5050565b6102256103b0565b6000600580602002604051908101604052809291906000905b8282101561030057838260040201600480602002604051908101604052809291906000905b828210156102ed578382016003806020026040519081016040528092919082600380156102d9576020028201916000905b82829054906101000a900467ffffffffffffffff1667ffffffffffffffff16815260200190600801906020826007010492830192600103820291508084116102945790505b505050505081526020019060010190610263565b505050508152602001906001019061023e565b50505050905090565b60008360058110151561031857fe5b600402018260048110151561032957fe5b018160038110151561033757fe5b6004918282040191900660080292509250509054906101000a900467ffffffffffffffff1681565b826005600402810192821561039f579160200282015b8281111561039e5782518290600461038e9291906103df565b5091602001919060040190610375565b5b5090506103ac919061042d565b5090565b610780604051908101604052806005905b6103c9610459565b8152602001906001900390816103c15790505090565b826004810192821561041c579160200282015b8281111561041b5782518290600361040b929190610488565b50916020019190600101906103f2565b5b5090506104299190610536565b5090565b61045691905b8082111561045257600081816104499190610562565b50600401610433565b5090565b90565b610180604051908101604052806004905b6104726105a7565b81526020019060019003908161046a5790505090565b82600380016004900481019282156105255791602002820160005b838211156104ef57835183826101000a81548167ffffffffffffffff021916908367ffffffffffffffff16021790555092602001926008016020816007010492830192600103026104a3565b80156105235782816101000a81549067ffffffffffffffff02191690556008016020816007010492830192600103026104ef565b505b50905061053291906105d9565b5090565b61055f91905b8082111561055b57600081816105529190610610565b5060010161053c565b5090565b90565b50600081816105719190610610565b50600101600081816105839190610610565b50600101600081816105959190610610565b5060010160006105a59190610610565b565b6060604051908101604052806003905b600067ffffffffffffffff168152602001906001900390816105b75790505090565b61060d91905b8082111561060957600081816101000a81549067ffffffffffffffff0219169055506001016105df565b5090565b90565b50600090555600a165627a7a7230582087e5a43f6965ab6ef7a4ff056ab80ed78fd8c15cff57715a1bf34ec76a93661c0029`,
`[{"constant":false,"inputs":[{"name":"arr","type":"uint64[3][4][5]"}],"name":"storeDeepUintArray","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"retrieveDeepArray","outputs":[{"name":"","type":"uint64[3][4][5]"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"","type":"uint256"},{"name":"","type":"uint256"},{"name":"","type":"uint256"}],"name":"deepUint64Array","outputs":[{"name":"","type":"uint64"}],"payable":false,"stateMutability":"view","type":"function"}]`,
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}, 10000000)
			// Deploy the test contract
_, _, testContract, err := DeployDeeplyNestedArray(auth, sim)
if err != nil {
t.Fatalf("Failed to deploy test contract: %v", err)
}
// Finish deploy.
sim.Commit()
			// Create a coordinate-filled array for testing purposes.
testArr := [5][4][3]uint64{}
for i := 0; i < 5; i++ {
testArr[i] = [4][3]uint64{}
for j := 0; j < 4; j++ {
testArr[i][j] = [3]uint64{}
for k := 0; k < 3; k++ {
					// Pack the coordinates; each array value will be unique and can be validated easily.
testArr[i][j][k] = uint64(i) << 16 | uint64(j) << 8 | uint64(k)
}
}
}
if _, err := testContract.StoreDeepUintArray(&bind.TransactOpts{
From: auth.From,
Signer: auth.Signer,
}, testArr); err != nil {
t.Fatalf("Failed to store nested array in test contract: %v", err)
}
sim.Commit()
retrievedArr, err := testContract.RetrieveDeepArray(&bind.CallOpts{
From: auth.From,
Pending: false,
})
if err != nil {
t.Fatalf("Failed to retrieve nested array from test contract: %v", err)
}
			// Quick check to see if the contents were copied
// (See accounts/abi/unpack_test.go for more extensive testing)
if retrievedArr[4][3][2] != testArr[4][3][2] {
t.Fatalf("Retrieved value does not match expected value! got: %d, expected: %d. %v", retrievedArr[4][3][2], testArr[4][3][2], err)
}
`,
},
}
// Tests that packages generated by the binder can be successfully compiled and
@@ -458,8 +820,8 @@ func TestBindings(t *testing.T) {
t.Skip("go sdk not found for testing")
}
// Skip the test if the go-ethereum sources are symlinked (https://github.com/golang/go/issues/14845)
linkTestCode := fmt.Sprintf("package linktest\nfunc CheckSymlinks(){\nfmt.Println(backends.NewSimulatedBackend(nil))\n}")
linkTestDeps, err := imports.Process("", []byte(linkTestCode), nil)
linkTestCode := fmt.Sprintf("package linktest\nfunc CheckSymlinks(){\nfmt.Println(backends.NewSimulatedBackend(nil,uint64(10000000000)))\n}")
linkTestDeps, err := imports.Process(os.TempDir(), []byte(linkTestCode), nil)
if err != nil {
t.Fatalf("failed check for goimports symlink bug: %v", err)
}
@@ -498,7 +860,7 @@ func TestBindings(t *testing.T) {
}
}
// Test the entire package and report any failures
cmd := exec.Command(gocmd, "test", "-v")
cmd := exec.Command(gocmd, "test", "-v", "-count", "1")
cmd.Dir = pkg
if out, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("failed to run binding test: %v\n%s", err, out)
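A change that recurs throughout these fixtures is that backends.NewSimulatedBackend now takes the block gas limit as an explicit second argument. A minimal sketch of setting up such a backend outside the generated tests (the balance and gas limit values are arbitrary):

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Generate a funded test account.
	key, _ := crypto.GenerateKey()
	auth := bind.NewKeyedTransactor(key)

	// The second argument is the per-block gas limit the simulator enforces.
	sim := backends.NewSimulatedBackend(
		core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}},
		10000000,
	)
	// Mine an (empty) block to make sure the backend is usable.
	sim.Commit()

	fmt.Println("simulated backend ready for", auth.From.Hex())
}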


@@ -32,6 +32,7 @@ type tmplContract struct {
Constructor abi.Method // Contract constructor for deploy parametrization
Calls map[string]*tmplMethod // Contract calls that only read state data
Transacts map[string]*tmplMethod // Contract calls that write state data
Events map[string]*tmplEvent // Contract events accessors
}
// tmplMethod is a wrapper around an abi.Method that contains a few preprocessed
@@ -39,7 +40,13 @@ type tmplContract struct {
type tmplMethod struct {
Original abi.Method // Original method as parsed by the abi package
Normalized abi.Method // Normalized version of the parsed method (capitalized names, non-anonymous args/returns)
Structured bool // Whether the returns should be accumulated into a contract
Structured bool // Whether the returns should be accumulated into a struct
}
// tmplEvent is a wrapper around an abi.Event that contains a few preprocessed and cached data fields.
type tmplEvent struct {
Original abi.Event // Original event as parsed by the abi package
Normalized abi.Event // Normalized version of the parsed fields
}
// tmplSource is language to template mapping containing all the supported
@@ -52,8 +59,8 @@ var tmplSource = map[Lang]string{
// tmplSourceGo is the Go source template that the contract bindings are
// generated from.
const tmplSourceGo = `
// This file is an automatically generated Go binding. Do not modify as any
// change will likely be lost upon the next re-generation!
// Code generated - DO NOT EDIT.
// This file is a generated binding and any manual changes will be lost.
package {{.Package}}
@@ -75,7 +82,7 @@ package {{.Package}}
if err != nil {
return common.Address{}, nil, nil, err
}
return address, tx, &{{.Type}}{ {{.Type}}Caller: {{.Type}}Caller{contract: contract}, {{.Type}}Transactor: {{.Type}}Transactor{contract: contract} }, nil
return address, tx, &{{.Type}}{ {{.Type}}Caller: {{.Type}}Caller{contract: contract}, {{.Type}}Transactor: {{.Type}}Transactor{contract: contract}, {{.Type}}Filterer: {{.Type}}Filterer{contract: contract} }, nil
}
{{end}}
@@ -83,6 +90,7 @@ package {{.Package}}
type {{.Type}} struct {
{{.Type}}Caller // Read-only binding to the contract
{{.Type}}Transactor // Write-only binding to the contract
{{.Type}}Filterer // Log filterer for contract events
}
// {{.Type}}Caller is an auto generated read-only Go binding around an Ethereum contract.
@@ -95,6 +103,11 @@ package {{.Package}}
contract *bind.BoundContract // Generic contract wrapper for the low level calls
}
// {{.Type}}Filterer is an auto generated log filtering Go binding around an Ethereum contract's events.
type {{.Type}}Filterer struct {
contract *bind.BoundContract // Generic contract wrapper for the low level calls
}
// {{.Type}}Session is an auto generated Go binding around an Ethereum contract,
// with pre-set call and transact options.
type {{.Type}}Session struct {
@@ -134,16 +147,16 @@ package {{.Package}}
// New{{.Type}} creates a new instance of {{.Type}}, bound to a specific deployed contract.
func New{{.Type}}(address common.Address, backend bind.ContractBackend) (*{{.Type}}, error) {
contract, err := bind{{.Type}}(address, backend, backend)
contract, err := bind{{.Type}}(address, backend, backend, backend)
if err != nil {
return nil, err
}
return &{{.Type}}{ {{.Type}}Caller: {{.Type}}Caller{contract: contract}, {{.Type}}Transactor: {{.Type}}Transactor{contract: contract} }, nil
return &{{.Type}}{ {{.Type}}Caller: {{.Type}}Caller{contract: contract}, {{.Type}}Transactor: {{.Type}}Transactor{contract: contract}, {{.Type}}Filterer: {{.Type}}Filterer{contract: contract} }, nil
}
// New{{.Type}}Caller creates a new read-only instance of {{.Type}}, bound to a specific deployed contract.
func New{{.Type}}Caller(address common.Address, caller bind.ContractCaller) (*{{.Type}}Caller, error) {
contract, err := bind{{.Type}}(address, caller, nil)
contract, err := bind{{.Type}}(address, caller, nil, nil)
if err != nil {
return nil, err
}
@@ -152,20 +165,29 @@ package {{.Package}}
// New{{.Type}}Transactor creates a new write-only instance of {{.Type}}, bound to a specific deployed contract.
func New{{.Type}}Transactor(address common.Address, transactor bind.ContractTransactor) (*{{.Type}}Transactor, error) {
contract, err := bind{{.Type}}(address, nil, transactor)
contract, err := bind{{.Type}}(address, nil, transactor, nil)
if err != nil {
return nil, err
}
return &{{.Type}}Transactor{contract: contract}, nil
}
// New{{.Type}}Filterer creates a new log filterer instance of {{.Type}}, bound to a specific deployed contract.
func New{{.Type}}Filterer(address common.Address, filterer bind.ContractFilterer) (*{{.Type}}Filterer, error) {
contract, err := bind{{.Type}}(address, nil, nil, filterer)
if err != nil {
return nil, err
}
return &{{.Type}}Filterer{contract: contract}, nil
}
// bind{{.Type}} binds a generic wrapper to an already deployed contract.
func bind{{.Type}}(address common.Address, caller bind.ContractCaller, transactor bind.ContractTransactor) (*bind.BoundContract, error) {
func bind{{.Type}}(address common.Address, caller bind.ContractCaller, transactor bind.ContractTransactor, filterer bind.ContractFilterer) (*bind.BoundContract, error) {
parsed, err := abi.JSON(strings.NewReader({{.Type}}ABI))
if err != nil {
return nil, err
}
return bind.NewBoundContract(address, parsed, caller, transactor), nil
return bind.NewBoundContract(address, parsed, caller, transactor, filterer), nil
}
// Call invokes the (constant) contract method with params as input values and
@@ -263,6 +285,137 @@ package {{.Package}}
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.TransactOpts {{range $i, $_ := .Normalized.Inputs}}, {{.Name}}{{end}})
}
{{end}}
{{range .Events}}
// {{$contract.Type}}{{.Normalized.Name}}Iterator is returned from Filter{{.Normalized.Name}} and is used to iterate over the raw logs and unpacked data for {{.Normalized.Name}} events raised by the {{$contract.Type}} contract.
type {{$contract.Type}}{{.Normalized.Name}}Iterator struct {
Event *{{$contract.Type}}{{.Normalized.Name}} // Event containing the contract specifics and raw log
contract *bind.BoundContract // Generic contract to use for unpacking event data
event string // Event name to use for unpacking event data
logs chan types.Log // Log channel receiving the found contract events
sub ethereum.Subscription // Subscription for errors, completion and termination
done bool // Whether the subscription completed delivering logs
fail error // Occurred error to stop iteration
}
// Next advances the iterator to the subsequent event, returning whether there
// are any more events found. In case of a retrieval or parsing error, false is
// returned and Error() can be queried for the exact failure.
func (it *{{$contract.Type}}{{.Normalized.Name}}Iterator) Next() bool {
// If the iterator failed, stop iterating
if (it.fail != nil) {
return false
}
// If the iterator completed, deliver directly whatever's available
if (it.done) {
select {
case log := <-it.logs:
it.Event = new({{$contract.Type}}{{.Normalized.Name}})
if err := it.contract.UnpackLog(it.Event, it.event, log); err != nil {
it.fail = err
return false
}
it.Event.Raw = log
return true
default:
return false
}
}
// Iterator still in progress, wait for either a data or an error event
select {
case log := <-it.logs:
it.Event = new({{$contract.Type}}{{.Normalized.Name}})
if err := it.contract.UnpackLog(it.Event, it.event, log); err != nil {
it.fail = err
return false
}
it.Event.Raw = log
return true
case err := <-it.sub.Err():
it.done = true
it.fail = err
return it.Next()
}
}
// Error returns any retrieval or parsing error that occurred during filtering.
func (it *{{$contract.Type}}{{.Normalized.Name}}Iterator) Error() error {
return it.fail
}
// Close terminates the iteration process, releasing any pending underlying
// resources.
func (it *{{$contract.Type}}{{.Normalized.Name}}Iterator) Close() error {
it.sub.Unsubscribe()
return nil
}
// {{$contract.Type}}{{.Normalized.Name}} represents a {{.Normalized.Name}} event raised by the {{$contract.Type}} contract.
type {{$contract.Type}}{{.Normalized.Name}} struct { {{range .Normalized.Inputs}}
{{capitalise .Name}} {{if .Indexed}}{{bindtopictype .Type}}{{else}}{{bindtype .Type}}{{end}}; {{end}}
Raw types.Log // Blockchain specific contextual infos
}
// Filter{{.Normalized.Name}} is a free log retrieval operation binding the contract event 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Filter{{.Normalized.Name}}(opts *bind.FilterOpts{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type}}{{end}}{{end}}) (*{{$contract.Type}}{{.Normalized.Name}}Iterator, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
for _, {{.Name}}Item := range {{.Name}} {
{{.Name}}Rule = append({{.Name}}Rule, {{.Name}}Item)
}{{end}}{{end}}
logs, sub, err := _{{$contract.Type}}.contract.FilterLogs(opts, "{{.Original.Name}}"{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}}Rule{{end}}{{end}})
if err != nil {
return nil, err
}
return &{{$contract.Type}}{{.Normalized.Name}}Iterator{contract: _{{$contract.Type}}.contract, event: "{{.Original.Name}}", logs: logs, sub: sub}, nil
}
// Watch{{.Normalized.Name}} is a free log subscription operation binding the contract event 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Watch{{.Normalized.Name}}(opts *bind.WatchOpts, sink chan<- *{{$contract.Type}}{{.Normalized.Name}}{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type}}{{end}}{{end}}) (event.Subscription, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
for _, {{.Name}}Item := range {{.Name}} {
{{.Name}}Rule = append({{.Name}}Rule, {{.Name}}Item)
}{{end}}{{end}}
logs, sub, err := _{{$contract.Type}}.contract.WatchLogs(opts, "{{.Original.Name}}"{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}}Rule{{end}}{{end}})
if err != nil {
return nil, err
}
return event.NewSubscription(func(quit <-chan struct{}) error {
defer sub.Unsubscribe()
for {
select {
case log := <-logs:
// New log arrived, parse the event and forward to the user
event := new({{$contract.Type}}{{.Normalized.Name}})
if err := _{{$contract.Type}}.contract.UnpackLog(event, "{{.Original.Name}}", log); err != nil {
return err
}
event.Raw = log
select {
case sink <- event:
case err := <-sub.Err():
return err
case <-quit:
return nil
}
case err := <-sub.Err():
return err
case <-quit:
return nil
}
}
}), nil
}
{{end}}
{{end}}
`
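The Filterer half of the template above is consumed through the per-event Filter*/Watch* methods and the generated iterator's Next/Error/Close. A rough usage sketch against the EventChecker binding from the test fixture earlier; it assumes the generated package plus the usual fmt, bind and common imports, and trims error handling.

// checkMixedEvents is a sketch only: EventChecker, EventCheckerMixed and the
// Filter/Watch methods are the bindings abigen generates for the test ABI above.
func checkMixedEvents(backend bind.ContractBackend, addr common.Address) error {
	checker, err := NewEventChecker(addr, backend)
	if err != nil {
		return err
	}
	// Pull historical logs through the iterator...
	it, err := checker.FilterMixed(nil, []common.Address{})
	if err != nil {
		return err
	}
	defer it.Close()
	for it.Next() {
		fmt.Println(it.Event.Addr, it.Event.Num, it.Event.Raw.BlockHash)
	}
	if err := it.Error(); err != nil {
		return err
	}
	// ...or subscribe to future ones through a sink channel.
	sink := make(chan *EventCheckerMixed)
	sub, err := checker.WatchMixed(nil, sink, []common.Address{})
	if err != nil {
		return err
	}
	defer sub.Unsubscribe()
	fmt.Println((<-sink).Num)
	return nil
}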

accounts/abi/bind/topics.go (new file, 189 lines added)

@@ -0,0 +1,189 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bind
import (
"errors"
"fmt"
"math/big"
"reflect"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
// makeTopics converts a filter query argument list into a filter topic set.
func makeTopics(query ...[]interface{}) ([][]common.Hash, error) {
topics := make([][]common.Hash, len(query))
for i, filter := range query {
for _, rule := range filter {
var topic common.Hash
// Try to generate the topic based on simple types
switch rule := rule.(type) {
case common.Hash:
copy(topic[:], rule[:])
case common.Address:
copy(topic[common.HashLength-common.AddressLength:], rule[:])
case *big.Int:
blob := rule.Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case bool:
if rule {
topic[common.HashLength-1] = 1
}
case int8:
blob := big.NewInt(int64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case int16:
blob := big.NewInt(int64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case int32:
blob := big.NewInt(int64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case int64:
blob := big.NewInt(rule).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint8:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint16:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint32:
blob := new(big.Int).SetUint64(uint64(rule)).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case uint64:
blob := new(big.Int).SetUint64(rule).Bytes()
copy(topic[common.HashLength-len(blob):], blob)
case string:
hash := crypto.Keccak256Hash([]byte(rule))
copy(topic[:], hash[:])
case []byte:
hash := crypto.Keccak256Hash(rule)
copy(topic[:], hash[:])
default:
// Attempt to generate the topic from funky types
val := reflect.ValueOf(rule)
switch {
case val.Kind() == reflect.Array && reflect.TypeOf(rule).Elem().Kind() == reflect.Uint8:
reflect.Copy(reflect.ValueOf(topic[common.HashLength-val.Len():]), val)
default:
return nil, fmt.Errorf("unsupported indexed type: %T", rule)
}
}
topics[i] = append(topics[i], topic)
}
}
return topics, nil
}
// Big batch of reflect types for topic reconstruction.
var (
reflectHash = reflect.TypeOf(common.Hash{})
reflectAddress = reflect.TypeOf(common.Address{})
reflectBigInt = reflect.TypeOf(new(big.Int))
)
// parseTopics converts the indexed topic fields into actual log field values.
//
// Note, dynamic types cannot be reconstructed since they get mapped to Keccak256
// hashes as the topic value!
func parseTopics(out interface{}, fields abi.Arguments, topics []common.Hash) error {
// Sanity check that the fields and topics match up
if len(fields) != len(topics) {
return errors.New("topic/field count mismatch")
}
// Iterate over all the fields and reconstruct them from topics
for _, arg := range fields {
if !arg.Indexed {
return errors.New("non-indexed field in topic reconstruction")
}
field := reflect.ValueOf(out).Elem().FieldByName(capitalise(arg.Name))
// Try to parse the topic back into the fields based on primitive types
switch field.Kind() {
case reflect.Bool:
if topics[0][common.HashLength-1] == 1 {
field.Set(reflect.ValueOf(true))
}
case reflect.Int8:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(int8(num.Int64())))
case reflect.Int16:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(int16(num.Int64())))
case reflect.Int32:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(int32(num.Int64())))
case reflect.Int64:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(num.Int64()))
case reflect.Uint8:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(uint8(num.Uint64())))
case reflect.Uint16:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(uint16(num.Uint64())))
case reflect.Uint32:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(uint32(num.Uint64())))
case reflect.Uint64:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(num.Uint64()))
default:
// Ran out of plain primitive types, try custom types
switch field.Type() {
case reflectHash: // Also covers all dynamic types
field.Set(reflect.ValueOf(topics[0]))
case reflectAddress:
var addr common.Address
copy(addr[:], topics[0][common.HashLength-common.AddressLength:])
field.Set(reflect.ValueOf(addr))
case reflectBigInt:
num := new(big.Int).SetBytes(topics[0][:])
field.Set(reflect.ValueOf(num))
default:
// Ran out of custom types, try the crazies
switch {
case arg.Type.T == abi.FixedBytesTy:
reflect.Copy(field, reflect.ValueOf(topics[0][common.HashLength-arg.Type.Size:]))
default:
return fmt.Errorf("unsupported indexed type: %v", arg.Type)
}
}
}
topics = topics[1:]
}
return nil
}
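The rules in makeTopics/parseTopics boil down to: fixed-size value types are big-endian encoded into the low-order bytes of a 32-byte topic, addresses occupy the final 20 bytes, and dynamic types (string, bytes) are replaced by their Keccak256 hash, which is exactly why parseTopics cannot reconstruct them. A small standalone illustration of the same encoding (it does not call the unexported helpers):

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	var topic common.Hash

	// An address ends up right-aligned in the 32-byte topic.
	addr := common.HexToAddress("0x00Ce0d46d924CC8437c806721496599FC3FFA268")
	copy(topic[common.HashLength-common.AddressLength:], addr[:])
	fmt.Println("address topic:", topic.Hex())

	// Integers are big-endian, left-padded with zeroes.
	topic = common.Hash{}
	blob := big.NewInt(314).Bytes()
	copy(topic[common.HashLength-len(blob):], blob)
	fmt.Println("int topic:    ", topic.Hex())

	// Dynamic types are only represented by their hash; keccak256("Hello")
	// is the 0x06b3dfae... value the Eventer test above asserts.
	fmt.Println("string topic: ", crypto.Keccak256Hash([]byte("Hello")).Hex())
}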


@@ -34,18 +34,18 @@ var testKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d
var waitDeployedTests = map[string]struct {
code string
gas *big.Int
gas uint64
wantAddress common.Address
wantErr error
}{
"successful deploy": {
code: `6060604052600a8060106000396000f360606040526008565b00`,
gas: big.NewInt(3000000),
gas: 3000000,
wantAddress: common.HexToAddress("0x3a220f351252089d385b29beca14e27f204c296a"),
},
"empty code": {
code: ``,
gas: big.NewInt(300000),
gas: 300000,
wantErr: bind.ErrNoCodeAfterDeploy,
wantAddress: common.HexToAddress("0x3a220f351252089d385b29beca14e27f204c296a"),
},
@@ -53,9 +53,11 @@ var waitDeployedTests = map[string]struct {
func TestWaitDeployed(t *testing.T) {
for name, test := range waitDeployedTests {
backend := backends.NewSimulatedBackend(core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
})
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
}, 10000000,
)
// Create the transaction.
tx := types.NewContractCreation(0, big.NewInt(0), test.gas, big.NewInt(1), common.FromHex(test.code))
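Since the gas field is now a plain uint64, the creation transaction above is built with types.NewContractCreation's updated signature. A rough sketch of deploying raw creation code on the simulated backend and waiting for it; deployRaw is a hypothetical helper, it assumes the context, crypto/ecdsa, math/big, bind, backends, common and types imports, and that bind.WaitDeployed keeps its context/backend/transaction signature.

// deployRaw is an illustrative helper, not part of the package under test.
func deployRaw(key *ecdsa.PrivateKey, backend *backends.SimulatedBackend, code []byte) (common.Address, error) {
	// Nonce 0 assumes a fresh account; gas limit and price are arbitrary test values.
	tx := types.NewContractCreation(0, big.NewInt(0), 3000000, big.NewInt(1), code)
	signed, err := types.SignTx(tx, types.HomesteadSigner{}, key)
	if err != nil {
		return common.Address{}, err
	}
	if err := backend.SendTransaction(context.Background(), signed); err != nil {
		return common.Address{}, err
	}
	backend.Commit()
	return bind.WaitDeployed(context.Background(), backend, signed)
}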


@@ -17,10 +17,15 @@
package abi
import (
"errors"
"fmt"
"reflect"
)
var (
errBadBool = errors.New("abi: improperly encoded boolean value")
)
// formatSliceString formats the reflection kind with the given slice size
// and returns a formatted string representation.
func formatSliceString(kind reflect.Kind, sliceSize int) string {
@@ -34,22 +39,23 @@ func formatSliceString(kind reflect.Kind, sliceSize int) string {
// type in t.
func sliceTypeCheck(t Type, val reflect.Value) error {
if val.Kind() != reflect.Slice && val.Kind() != reflect.Array {
return typeErr(formatSliceString(t.Kind, t.SliceSize), val.Type())
}
if t.IsArray && val.Len() != t.SliceSize {
return typeErr(formatSliceString(t.Elem.Kind, t.SliceSize), formatSliceString(val.Type().Elem().Kind(), val.Len()))
return typeErr(formatSliceString(t.Kind, t.Size), val.Type())
}
if t.Elem.IsSlice {
if t.T == ArrayTy && val.Len() != t.Size {
return typeErr(formatSliceString(t.Elem.Kind, t.Size), formatSliceString(val.Type().Elem().Kind(), val.Len()))
}
if t.Elem.T == SliceTy {
if val.Len() > 0 {
return sliceTypeCheck(*t.Elem, val.Index(0))
}
} else if t.Elem.IsArray {
} else if t.Elem.T == ArrayTy {
return sliceTypeCheck(*t.Elem, val.Index(0))
}
if elemKind := val.Type().Elem().Kind(); elemKind != t.Elem.Kind {
return typeErr(formatSliceString(t.Elem.Kind, t.SliceSize), val.Type())
return typeErr(formatSliceString(t.Elem.Kind, t.Size), val.Type())
}
return nil
}
@@ -57,20 +63,19 @@ func sliceTypeCheck(t Type, val reflect.Value) error {
// typeCheck checks that the given reflection value can be assigned to the reflection
// type in t.
func typeCheck(t Type, value reflect.Value) error {
if t.IsSlice || t.IsArray {
if t.T == SliceTy || t.T == ArrayTy {
return sliceTypeCheck(t, value)
}
// Check base type validity. Element types will be checked later on.
if t.Kind != value.Kind() {
return typeErr(t.Kind, value.Kind())
} else if t.T == FixedBytesTy && t.Size != value.Len() {
return typeErr(t.Type, value.Type())
} else {
return nil
}
return nil
}
// varErr returns a formatted error.
func varErr(expected, got reflect.Kind) error {
return typeErr(expected, got)
}
// typeErr returns a formatted type casting error.


@@ -30,7 +30,18 @@ import (
type Event struct {
Name string
Anonymous bool
Inputs []Argument
Inputs Arguments
}
func (e Event) String() string {
inputs := make([]string, len(e.Inputs))
for i, input := range e.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Name, input.Type)
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", input.Name, input.Type)
}
}
return fmt.Sprintf("e %v(%v)", e.Name, strings.Join(inputs, ", "))
}
// Id returns the canonical representation of the event's signature used by the
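For reference, the event identifier used as the first log topic is simply the Keccak256 hash of that canonical signature. A quick standalone check for the familiar ERC-20 Transfer event:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Canonical signature: event name plus comma-separated canonical argument types.
	id := crypto.Keccak256Hash([]byte("Transfer(address,address,uint256)"))
	// Prints the well-known ERC-20 Transfer topic:
	// 0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef
	fmt.Println(id.Hex())
}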


@@ -17,13 +17,69 @@
package abi
import (
"bytes"
"encoding/hex"
"encoding/json"
"math/big"
"reflect"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var jsonEventTransfer = []byte(`{
"anonymous": false,
"inputs": [
{
"indexed": true, "name": "from", "type": "address"
}, {
"indexed": true, "name": "to", "type": "address"
}, {
"indexed": false, "name": "value", "type": "uint256"
}],
"name": "Transfer",
"type": "event"
}`)
var jsonEventPledge = []byte(`{
"anonymous": false,
"inputs": [{
"indexed": false, "name": "who", "type": "address"
}, {
"indexed": false, "name": "wad", "type": "uint128"
}, {
"indexed": false, "name": "currency", "type": "bytes3"
}],
"name": "Pledge",
"type": "event"
}`)
var jsonEventMixedCase = []byte(`{
"anonymous": false,
"inputs": [{
"indexed": false, "name": "value", "type": "uint256"
}, {
"indexed": false, "name": "_value", "type": "uint256"
}, {
"indexed": false, "name": "Value", "type": "uint256"
}],
"name": "MixedCase",
"type": "event"
}`)
// 1000000
var transferData1 = "00000000000000000000000000000000000000000000000000000000000f4240"
// "0x00Ce0d46d924CC8437c806721496599FC3FFA268", 2218516807680, "usd"
var pledgeData1 = "00000000000000000000000000ce0d46d924cc8437c806721496599fc3ffa2680000000000000000000000000000000000000000000000000000020489e800007573640000000000000000000000000000000000000000000000000000000000"
// 1000000,2218516807680,1000001
var mixedCaseData1 = "00000000000000000000000000000000000000000000000000000000000f42400000000000000000000000000000000000000000000000000000020489e8000000000000000000000000000000000000000000000000000000000000000f4241"
func TestEventId(t *testing.T) {
var table = []struct {
definition string
@@ -31,7 +87,7 @@ func TestEventId(t *testing.T) {
}{
{
definition: `[
{ "type" : "event", "name" : "balance", "inputs": [{ "name" : "in", "type": "uint" }] },
{ "type" : "event", "name" : "balance", "inputs": [{ "name" : "in", "type": "uint256" }] },
{ "type" : "event", "name" : "check", "inputs": [{ "name" : "t", "type": "address" }, { "name": "b", "type": "uint256" }] }
]`,
expectations: map[string]common.Hash{
@@ -54,3 +110,286 @@ func TestEventId(t *testing.T) {
}
}
}
// TestEventMultiValueWithArrayUnpack verifies that array fields are counted correctly after parsing an array.
func TestEventMultiValueWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": false, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 [2]uint8
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
var i uint8 = 1
for ; i <= 3; i++ {
b.Write(packNum(reflect.ValueOf(i)))
}
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{1, 2}, rst.Value1)
require.Equal(t, uint8(3), rst.Value2)
}
func TestEventTupleUnpack(t *testing.T) {
type EventTransfer struct {
Value *big.Int
}
type EventTransferWithTag struct {
// This is valid because `value` is not exported,
// so the value is only unmarshalled into `Value1`.
value *big.Int
Value1 *big.Int `abi:"value"`
}
type BadEventTransferWithSameFieldAndTag struct {
Value *big.Int
Value1 *big.Int `abi:"value"`
}
type BadEventTransferWithDuplicatedTag struct {
Value1 *big.Int `abi:"value"`
Value2 *big.Int `abi:"value"`
}
type BadEventTransferWithEmptyTag struct {
Value *big.Int `abi:""`
}
type EventPledge struct {
Who common.Address
Wad *big.Int
Currency [3]byte
}
type BadEventPledge struct {
Who string
Wad int
Currency [3]byte
}
type EventMixedCase struct {
Value1 *big.Int `abi:"value"`
Value2 *big.Int `abi:"_value"`
Value3 *big.Int `abi:"Value"`
}
bigint := new(big.Int)
bigintExpected := big.NewInt(1000000)
bigintExpected2 := big.NewInt(2218516807680)
bigintExpected3 := big.NewInt(1000001)
addr := common.HexToAddress("0x00Ce0d46d924CC8437c806721496599FC3FFA268")
var testCases = []struct {
data string
dest interface{}
expected interface{}
jsonLog []byte
error string
name string
}{{
transferData1,
&EventTransfer{},
&EventTransfer{Value: bigintExpected},
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into structure",
}, {
transferData1,
&[]interface{}{&bigint},
&[]interface{}{&bigintExpected},
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into slice",
}, {
transferData1,
&EventTransferWithTag{},
&EventTransferWithTag{Value1: bigintExpected},
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into structure with abi: tag",
}, {
transferData1,
&BadEventTransferWithDuplicatedTag{},
&BadEventTransferWithDuplicatedTag{},
jsonEventTransfer,
"struct: abi tag in 'Value2' already mapped",
"Can not unpack ERC20 Transfer event with duplicated abi tag",
}, {
transferData1,
&BadEventTransferWithSameFieldAndTag{},
&BadEventTransferWithSameFieldAndTag{},
jsonEventTransfer,
"abi: multiple variables maps to the same abi field 'value'",
"Can not unpack ERC20 Transfer event with a field and a tag mapping to the same abi variable",
}, {
transferData1,
&BadEventTransferWithEmptyTag{},
&BadEventTransferWithEmptyTag{},
jsonEventTransfer,
"struct: abi tag in 'Value' is empty",
"Can not unpack ERC20 Transfer event with an empty tag",
}, {
pledgeData1,
&EventPledge{},
&EventPledge{
addr,
bigintExpected2,
[3]byte{'u', 's', 'd'}},
jsonEventPledge,
"",
"Can unpack Pledge event into structure",
}, {
pledgeData1,
&[]interface{}{&common.Address{}, &bigint, &[3]byte{}},
&[]interface{}{
&addr,
&bigintExpected2,
&[3]byte{'u', 's', 'd'}},
jsonEventPledge,
"",
"Can unpack Pledge event into slice",
}, {
pledgeData1,
&[3]interface{}{&common.Address{}, &bigint, &[3]byte{}},
&[3]interface{}{
&addr,
&bigintExpected2,
&[3]byte{'u', 's', 'd'}},
jsonEventPledge,
"",
"Can unpack Pledge event into an array",
}, {
pledgeData1,
&[]interface{}{new(int), 0, 0},
&[]interface{}{},
jsonEventPledge,
"abi: cannot unmarshal common.Address in to int",
"Can not unpack Pledge event into slice with wrong types",
}, {
pledgeData1,
&BadEventPledge{},
&BadEventPledge{},
jsonEventPledge,
"abi: cannot unmarshal common.Address in to string",
"Can not unpack Pledge event into struct with wrong filed types",
}, {
pledgeData1,
&[]interface{}{common.Address{}, new(big.Int)},
&[]interface{}{},
jsonEventPledge,
"abi: insufficient number of elements in the list/array for unpack, want 3, got 2",
"Can not unpack Pledge event into too short slice",
}, {
pledgeData1,
new(map[string]interface{}),
&[]interface{}{},
jsonEventPledge,
"abi: cannot unmarshal tuple into map[string]interface {}",
"Can not unpack Pledge event into map",
}, {
mixedCaseData1,
&EventMixedCase{},
&EventMixedCase{Value1: bigintExpected, Value2: bigintExpected2, Value3: bigintExpected3},
jsonEventMixedCase,
"",
"Can unpack abi variables with mixed case",
}}
for _, tc := range testCases {
assert := assert.New(t)
tc := tc
t.Run(tc.name, func(t *testing.T) {
err := unpackTestEventData(tc.dest, tc.data, tc.jsonLog, assert)
if tc.error == "" {
assert.Nil(err, "Should be able to unpack event data.")
assert.Equal(tc.expected, tc.dest, tc.name)
} else {
assert.EqualError(err, tc.error, tc.name)
}
})
}
}
func unpackTestEventData(dest interface{}, hexData string, jsonEvent []byte, assert *assert.Assertions) error {
data, err := hex.DecodeString(hexData)
assert.NoError(err, "Hex data should be a correct hex-string")
var e Event
assert.NoError(json.Unmarshal(jsonEvent, &e), "Should be able to unmarshal event ABI")
a := ABI{Events: map[string]Event{"e": e}}
return a.Unpack(dest, "e", data)
}
/*
Taken from
https://github.com/ethereum/go-ethereum/pull/15568
*/
type testResult struct {
Values [2]*big.Int
Value1 *big.Int
Value2 *big.Int
}
type testCase struct {
definition string
want testResult
}
func (tc testCase) encoded(intType, arrayType Type) []byte {
var b bytes.Buffer
if tc.want.Value1 != nil {
val, _ := intType.pack(reflect.ValueOf(tc.want.Value1))
b.Write(val)
}
if !reflect.DeepEqual(tc.want.Values, [2]*big.Int{nil, nil}) {
val, _ := arrayType.pack(reflect.ValueOf(tc.want.Values))
b.Write(val)
}
if tc.want.Value2 != nil {
val, _ := intType.pack(reflect.ValueOf(tc.want.Value2))
b.Write(val)
}
return b.Bytes()
}
// TestEventUnpackIndexed verifies that indexed fields are skipped by the event decoder.
func TestEventUnpackIndexed(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 uint8
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
b.Write(packNum(reflect.ValueOf(uint8(8))))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, uint8(0), rst.Value1)
require.Equal(t, uint8(8), rst.Value2)
}
// TestEventIndexedWithArrayUnpack verifies that the decoder will not overflow when a static array is an indexed input.
func TestEventIndexedWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"string"}]}]`
type testStruct struct {
Value1 [2]uint8
Value2 string
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
stringOut := "abc"
// number of fields that will be encoded * 32
b.Write(packNum(reflect.ValueOf(32)))
b.Write(packNum(reflect.ValueOf(len(stringOut))))
b.Write(common.RightPadBytes([]byte(stringOut), 32))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{0, 0}, rst.Value1)
require.Equal(t, stringOut, rst.Value2)
}


@@ -18,13 +18,12 @@ package abi
import (
"fmt"
"reflect"
"strings"
"github.com/ethereum/go-ethereum/crypto"
)
// Callable method given a `Name` and whether the method is a constant.
// Method represents a callable given a `Name` and whether the method is a constant.
// If the method is `Const` no transaction needs to be created for this
// particular Method call. It can easily be simulated using a local VM.
// For example a `Balance()` method only needs to retrieve something
@@ -35,46 +34,8 @@ import (
type Method struct {
Name string
Const bool
Inputs []Argument
Outputs []Argument
}
func (m Method) pack(method Method, args ...interface{}) ([]byte, error) {
// Make sure arguments match up and pack them
if len(args) != len(method.Inputs) {
return nil, fmt.Errorf("argument count mismatch: %d for %d", len(args), len(method.Inputs))
}
// variable input is the output appended at the end of packed
// output. This is used for strings and bytes types input.
var variableInput []byte
var ret []byte
for i, a := range args {
input := method.Inputs[i]
// pack the input
packed, err := input.Type.pack(reflect.ValueOf(a))
if err != nil {
return nil, fmt.Errorf("`%s` %v", method.Name, err)
}
// check for a slice type (string, bytes, slice)
if input.Type.requiresLengthPrefix() {
// calculate the offset
offset := len(method.Inputs)*32 + len(variableInput)
// set the offset
ret = append(ret, packNum(reflect.ValueOf(offset))...)
// Append the packed output to the variable input. The variable input
// will be appended at the end of the input.
variableInput = append(variableInput, packed...)
} else {
// append the packed value to the input
ret = append(ret, packed...)
}
}
// append the variable input at the end of the packed input
ret = append(ret, variableInput...)
return ret, nil
Inputs Arguments
Outputs Arguments
}
// Sig returns the method's string signature according to the ABI spec.
@@ -84,35 +45,33 @@ func (m Method) pack(method Method, args ...interface{}) ([]byte, error) {
// function foo(uint32 a, int b) = "foo(uint32,int256)"
//
// Please note that "int" is substituted by its canonical representation "int256"
func (m Method) Sig() string {
types := make([]string, len(m.Inputs))
i := 0
for _, input := range m.Inputs {
func (method Method) Sig() string {
types := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
types[i] = input.Type.String()
i++
}
return fmt.Sprintf("%v(%v)", m.Name, strings.Join(types, ","))
return fmt.Sprintf("%v(%v)", method.Name, strings.Join(types, ","))
}
func (m Method) String() string {
inputs := make([]string, len(m.Inputs))
for i, input := range m.Inputs {
func (method Method) String() string {
inputs := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Name, input.Type)
}
outputs := make([]string, len(m.Outputs))
for i, output := range m.Outputs {
outputs := make([]string, len(method.Outputs))
for i, output := range method.Outputs {
if len(output.Name) > 0 {
outputs[i] = fmt.Sprintf("%v ", output.Name)
}
outputs[i] += output.Type.String()
}
constant := ""
if m.Const {
if method.Const {
constant = "constant "
}
return fmt.Sprintf("function %v(%v) %sreturns(%v)", m.Name, strings.Join(inputs, ", "), constant, strings.Join(outputs, ", "))
return fmt.Sprintf("function %v(%v) %sreturns(%v)", method.Name, strings.Join(inputs, ", "), constant, strings.Join(outputs, ", "))
}
func (m Method) Id() []byte {
return crypto.Keccak256([]byte(m.Sig()))[:4]
func (method Method) Id() []byte {
return crypto.Keccak256([]byte(method.Sig()))[:4]
}
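
Sig and Id together yield the 4-byte function selector that prefixes call data: the selector is the first four bytes of the Keccak-256 hash of the canonical signature. A minimal sketch using the crypto package imported above (the transfer signature is an illustrative example, not something defined in this file):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Canonical form: "int"/"uint" are expanded to "int256"/"uint256",
	// exactly as Sig() does for a Method's inputs.
	sig := "transfer(address,uint256)"
	selector := crypto.Keccak256([]byte(sig))[:4]
	fmt.Printf("%x\n", selector) // a9059cbb
}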

View File

@@ -25,61 +25,20 @@ import (
)
var (
big_t = reflect.TypeOf(big.Int{})
ubig_t = reflect.TypeOf(big.Int{})
byte_t = reflect.TypeOf(byte(0))
byte_ts = reflect.TypeOf([]byte(nil))
uint_t = reflect.TypeOf(uint(0))
uint8_t = reflect.TypeOf(uint8(0))
uint16_t = reflect.TypeOf(uint16(0))
uint32_t = reflect.TypeOf(uint32(0))
uint64_t = reflect.TypeOf(uint64(0))
int_t = reflect.TypeOf(int(0))
int8_t = reflect.TypeOf(int8(0))
int16_t = reflect.TypeOf(int16(0))
int32_t = reflect.TypeOf(int32(0))
int64_t = reflect.TypeOf(int64(0))
hash_t = reflect.TypeOf(common.Hash{})
address_t = reflect.TypeOf(common.Address{})
uint_ts = reflect.TypeOf([]uint(nil))
uint8_ts = reflect.TypeOf([]uint8(nil))
uint16_ts = reflect.TypeOf([]uint16(nil))
uint32_ts = reflect.TypeOf([]uint32(nil))
uint64_ts = reflect.TypeOf([]uint64(nil))
ubig_ts = reflect.TypeOf([]*big.Int(nil))
int_ts = reflect.TypeOf([]int(nil))
int8_ts = reflect.TypeOf([]int8(nil))
int16_ts = reflect.TypeOf([]int16(nil))
int32_ts = reflect.TypeOf([]int32(nil))
int64_ts = reflect.TypeOf([]int64(nil))
big_ts = reflect.TypeOf([]*big.Int(nil))
bigT = reflect.TypeOf(&big.Int{})
derefbigT = reflect.TypeOf(big.Int{})
uint8T = reflect.TypeOf(uint8(0))
uint16T = reflect.TypeOf(uint16(0))
uint32T = reflect.TypeOf(uint32(0))
uint64T = reflect.TypeOf(uint64(0))
int8T = reflect.TypeOf(int8(0))
int16T = reflect.TypeOf(int16(0))
int32T = reflect.TypeOf(int32(0))
int64T = reflect.TypeOf(int64(0))
addressT = reflect.TypeOf(common.Address{})
)
// U256 converts a big Int into a 256bit EVM number.
func U256(n *big.Int) []byte {
return math.PaddedBigBytes(math.U256(n), 32)
}
// packNum packs the given number (using the reflect value) and will cast it to appropriate number representation
func packNum(value reflect.Value) []byte {
switch kind := value.Kind(); kind {
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return U256(new(big.Int).SetUint64(value.Uint()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return U256(big.NewInt(value.Int()))
case reflect.Ptr:
return U256(value.Interface().(*big.Int))
}
return nil
}
// checks whether the given reflect value is signed. This also works for slices with a number type
func isSigned(v reflect.Value) bool {
switch v.Type() {
case int_ts, int8_ts, int16_ts, int32_ts, int64_ts, int_t, int8_t, int16_t, int32_t, int64_t:
return true
}
return false
}
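
U256 reduces its argument modulo 2^256 before padding, so negative values come out in two's-complement form; that is why -1 packs to thirty-two 0xff bytes in the TestPackNumber vectors further down. A small sketch using the same common/math helpers referenced above:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common/math"
)

func main() {
	// -1 mod 2^256 == 2^256 - 1, i.e. a full word of 0xff bytes.
	fmt.Printf("%x\n", math.PaddedBigBytes(math.U256(big.NewInt(-1)), 32))
}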

View File

@@ -18,9 +18,7 @@ package abi
import (
"bytes"
"math"
"math/big"
"reflect"
"testing"
)
@@ -33,50 +31,3 @@ func TestNumberTypes(t *testing.T) {
t.Errorf("expected %x got %x", ubytes, unsigned)
}
}
func TestPackNumber(t *testing.T) {
tests := []struct {
value reflect.Value
packed []byte
}{
// Protocol limits
{reflect.ValueOf(0), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
{reflect.ValueOf(1), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}},
{reflect.ValueOf(-1), []byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}},
// Type corner cases
{reflect.ValueOf(uint8(math.MaxUint8)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255}},
{reflect.ValueOf(uint16(math.MaxUint16)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255}},
{reflect.ValueOf(uint32(math.MaxUint32)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255}},
{reflect.ValueOf(uint64(math.MaxUint64)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255}},
{reflect.ValueOf(int8(math.MaxInt8)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 127}},
{reflect.ValueOf(int16(math.MaxInt16)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 127, 255}},
{reflect.ValueOf(int32(math.MaxInt32)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 127, 255, 255, 255}},
{reflect.ValueOf(int64(math.MaxInt64)), []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 127, 255, 255, 255, 255, 255, 255, 255}},
{reflect.ValueOf(int8(math.MinInt8)), []byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 128}},
{reflect.ValueOf(int16(math.MinInt16)), []byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 128, 0}},
{reflect.ValueOf(int32(math.MinInt32)), []byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 128, 0, 0, 0}},
{reflect.ValueOf(int64(math.MinInt64)), []byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 128, 0, 0, 0, 0, 0, 0, 0}},
}
for i, tt := range tests {
packed := packNum(tt.value)
if !bytes.Equal(packed, tt.packed) {
t.Errorf("test %d: pack mismatch: have %x, want %x", i, packed, tt.packed)
}
}
if packed := packNum(reflect.ValueOf("string")); packed != nil {
t.Errorf("expected 'string' to pack to nil. got %x instead", packed)
}
}
func TestSigned(t *testing.T) {
if isSigned(reflect.ValueOf(uint(10))) {
t.Error("signed")
}
if !isSigned(reflect.ValueOf(int(10))) {
t.Error("not signed")
}
}

accounts/abi/pack.go (new file, 81 lines added)
View File

@@ -0,0 +1,81 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"math/big"
"reflect"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
)
// packBytesSlice packs the given bytes as [L, V], the canonical representation of a
// bytes slice: a length word followed by the right-padded contents.
func packBytesSlice(bytes []byte, l int) []byte {
len := packNum(reflect.ValueOf(l))
return append(len, common.RightPadBytes(bytes, (l+31)/32*32)...)
}
// packElement packs the given reflect value according to the abi specification in
// t.
func packElement(t Type, reflectValue reflect.Value) []byte {
switch t.T {
case IntTy, UintTy:
return packNum(reflectValue)
case StringTy:
return packBytesSlice([]byte(reflectValue.String()), reflectValue.Len())
case AddressTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.LeftPadBytes(reflectValue.Bytes(), 32)
case BoolTy:
if reflectValue.Bool() {
return math.PaddedBigBytes(common.Big1, 32)
}
return math.PaddedBigBytes(common.Big0, 32)
case BytesTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return packBytesSlice(reflectValue.Bytes(), reflectValue.Len())
case FixedBytesTy, FunctionTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.RightPadBytes(reflectValue.Bytes(), 32)
default:
panic("abi: fatal error")
}
}
// packNum packs the given number (using the reflect value) and casts it to the appropriate number representation.
func packNum(value reflect.Value) []byte {
switch kind := value.Kind(); kind {
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return U256(new(big.Int).SetUint64(value.Uint()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return U256(big.NewInt(value.Int()))
case reflect.Ptr:
return U256(value.Interface().(*big.Int))
default:
panic("abi: fatal error")
}
}
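
For length-prefixed types, packBytesSlice emits a 32-byte length word followed by the contents right-padded to a 32-byte boundary; the "foobar" string vector in pack_test.go below is exactly this layout. A standalone sketch reproducing it with the standard library only (leftPad and rightPad are stand-ins for common.LeftPadBytes and common.RightPadBytes):

package main

import (
	"encoding/hex"
	"fmt"
	"math/big"
)

// Stand-ins for common.LeftPadBytes / common.RightPadBytes, for this sketch only.
func leftPad(b []byte, n int) []byte  { return append(make([]byte, n-len(b)), b...) }
func rightPad(b []byte, n int) []byte { return append(b, make([]byte, n-len(b))...) }

func main() {
	s := []byte("foobar")
	length := leftPad(new(big.Int).SetInt64(int64(len(s))).Bytes(), 32) // 32-byte length word
	payload := rightPad(s, (len(s)+31)/32*32)                          // contents, zero-padded
	fmt.Println(hex.EncodeToString(append(length, payload...)))
	// The two 32-byte words, concatenated:
	// 0000000000000000000000000000000000000000000000000000000000000006
	// 666f6f6261720000000000000000000000000000000000000000000000000000
}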

accounts/abi/pack_test.go (new file, 443 lines added)
View File

@@ -0,0 +1,443 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"bytes"
"math"
"math/big"
"reflect"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
)
func TestPack(t *testing.T) {
for i, test := range []struct {
typ string
input interface{}
output []byte
}{
{
"uint8",
uint8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint8[]",
[]uint8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16",
uint16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16[]",
[]uint16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32",
uint32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32[]",
[]uint32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64",
uint64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64[]",
[]uint64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256",
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256[]",
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8",
int8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8[]",
[]int8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16",
int16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16[]",
[]int16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32",
int32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32[]",
[]int32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64",
int64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64[]",
[]int64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256",
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256[]",
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"bytes1",
[1]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes2",
[2]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes3",
[3]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes4",
[4]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes5",
[5]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes6",
[6]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes7",
[7]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes8",
[8]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes9",
[9]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes10",
[10]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes11",
[11]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes12",
[12]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes13",
[13]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes14",
[14]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes15",
[15]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes16",
[16]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes17",
[17]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes18",
[18]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes19",
[19]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes20",
[20]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes21",
[21]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes22",
[22]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes23",
[23]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes24",
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes24",
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes25",
[25]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes26",
[26]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes27",
[27]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes28",
[28]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes29",
[29]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes30",
[30]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes31",
[31]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes32",
[32]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"uint32[2][3][4]",
[4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018"),
},
{
"address[]",
[]common.Address{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000"),
},
{
"bytes32[]",
[]common.Hash{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000201000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000"),
},
{
"function",
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"string",
"foobar",
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000006666f6f6261720000000000000000000000000000000000000000000000000000"),
},
} {
typ, err := NewType(test.typ)
if err != nil {
t.Fatalf("%v failed. Unexpected parse error: %v", i, err)
}
output, err := typ.pack(reflect.ValueOf(test.input))
if err != nil {
t.Fatalf("%v failed. Unexpected pack error: %v", i, err)
}
if !bytes.Equal(output, test.output) {
t.Errorf("%d failed. Expected bytes: '%x' Got: '%x'", i, test.output, output)
}
}
}
func TestMethodPack(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
sig := abi.Methods["slice"].Id()
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
packed, err := abi.Pack("slice", []uint32{1, 2})
if err != nil {
t.Error(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
var addrA, addrB = common.Address{1}, common.Address{2}
sig = abi.Methods["sliceAddress"].Id()
sig = append(sig, common.LeftPadBytes([]byte{32}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrA[:], 32)...)
sig = append(sig, common.LeftPadBytes(addrB[:], 32)...)
packed, err = abi.Pack("sliceAddress", []common.Address{addrA, addrB})
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
var addrC, addrD = common.Address{3}, common.Address{4}
sig = abi.Methods["sliceMultiAddress"].Id()
sig = append(sig, common.LeftPadBytes([]byte{64}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{160}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrA[:], 32)...)
sig = append(sig, common.LeftPadBytes(addrB[:], 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrC[:], 32)...)
sig = append(sig, common.LeftPadBytes(addrD[:], 32)...)
packed, err = abi.Pack("sliceMultiAddress", []common.Address{addrA, addrB}, []common.Address{addrC, addrD})
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["slice256"].Id()
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
packed, err = abi.Pack("slice256", []*big.Int{big.NewInt(1), big.NewInt(2)})
if err != nil {
t.Error(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
}
func TestPackNumber(t *testing.T) {
tests := []struct {
value reflect.Value
packed []byte
}{
// Protocol limits
{reflect.ValueOf(0), common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000000")},
{reflect.ValueOf(1), common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")},
{reflect.ValueOf(-1), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")},
// Type corner cases
{reflect.ValueOf(uint8(math.MaxUint8)), common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000000000ff")},
{reflect.ValueOf(uint16(math.MaxUint16)), common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000ffff")},
{reflect.ValueOf(uint32(math.MaxUint32)), common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000ffffffff")},
{reflect.ValueOf(uint64(math.MaxUint64)), common.Hex2Bytes("000000000000000000000000000000000000000000000000ffffffffffffffff")},
{reflect.ValueOf(int8(math.MaxInt8)), common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000007f")},
{reflect.ValueOf(int16(math.MaxInt16)), common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000007fff")},
{reflect.ValueOf(int32(math.MaxInt32)), common.Hex2Bytes("000000000000000000000000000000000000000000000000000000007fffffff")},
{reflect.ValueOf(int64(math.MaxInt64)), common.Hex2Bytes("0000000000000000000000000000000000000000000000007fffffffffffffff")},
{reflect.ValueOf(int8(math.MinInt8)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff80")},
{reflect.ValueOf(int16(math.MinInt16)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff8000")},
{reflect.ValueOf(int32(math.MinInt32)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffff80000000")},
{reflect.ValueOf(int64(math.MinInt64)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffff8000000000000000")},
}
for i, tt := range tests {
packed := packNum(tt.value)
if !bytes.Equal(packed, tt.packed) {
t.Errorf("test %d: pack mismatch: have %x, want %x", i, packed, tt.packed)
}
}
}

View File

@@ -1,66 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"reflect"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
)
// packBytesSlice packs the given bytes as [L, V] as the canonical representation
// bytes slice
func packBytesSlice(bytes []byte, l int) []byte {
len := packNum(reflect.ValueOf(l))
return append(len, common.RightPadBytes(bytes, (l+31)/32*32)...)
}
// packElement packs the given reflect value according to the abi specification in
// t.
func packElement(t Type, reflectValue reflect.Value) []byte {
switch t.T {
case IntTy, UintTy:
return packNum(reflectValue)
case StringTy:
return packBytesSlice([]byte(reflectValue.String()), reflectValue.Len())
case AddressTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.LeftPadBytes(reflectValue.Bytes(), 32)
case BoolTy:
if reflectValue.Bool() {
return math.PaddedBigBytes(common.Big1, 32)
} else {
return math.PaddedBigBytes(common.Big0, 32)
}
case BytesTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return packBytesSlice(reflectValue.Bytes(), reflectValue.Len())
case FixedBytesTy, FunctionTy:
if reflectValue.Kind() == reflect.Array {
reflectValue = mustArrayToByteSlice(reflectValue)
}
return common.RightPadBytes(reflectValue.Bytes(), 32)
}
panic("abi: fatal error")
}

View File

@@ -19,12 +19,13 @@ package abi
import (
"fmt"
"reflect"
"strings"
)
// indirect recursively dereferences the value until it either gets the value
// or finds a big.Int
func indirect(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Ptr && v.Elem().Type() != big_t {
if v.Kind() == reflect.Ptr && v.Elem().Type() != derefbigT {
return indirect(v.Elem())
}
return v
@@ -32,30 +33,30 @@ func indirect(v reflect.Value) reflect.Value {
// reflectIntKindAndType returns the reflect.Kind and reflect.Type to use for the
// given size and unsignedness.
func reflectIntKind(unsigned bool, size int) reflect.Kind {
func reflectIntKindAndType(unsigned bool, size int) (reflect.Kind, reflect.Type) {
switch size {
case 8:
if unsigned {
return reflect.Uint8
return reflect.Uint8, uint8T
}
return reflect.Int8
return reflect.Int8, int8T
case 16:
if unsigned {
return reflect.Uint16
return reflect.Uint16, uint16T
}
return reflect.Int16
return reflect.Int16, int16T
case 32:
if unsigned {
return reflect.Uint32
return reflect.Uint32, uint32T
}
return reflect.Int32
return reflect.Int32, int32T
case 64:
if unsigned {
return reflect.Uint64
return reflect.Uint64, uint64T
}
return reflect.Int64
return reflect.Int64, int64T
}
return reflect.Ptr
return reflect.Ptr, bigT
}
// mustArrayToByteSlice creates a new byte slice with the exact same size as value
@@ -73,15 +74,9 @@ func mustArrayToByteSlice(value reflect.Value) reflect.Value {
func set(dst, src reflect.Value, output Argument) error {
dstType := dst.Type()
srcType := src.Type()
switch {
case dstType.AssignableTo(src.Type()):
case dstType.AssignableTo(srcType):
dst.Set(src)
case dstType.Kind() == reflect.Array && srcType.Kind() == reflect.Slice:
if dst.Len() < output.Type.SliceSize {
return fmt.Errorf("abi: cannot unmarshal src (len=%d) in to dst (len=%d)", output.Type.SliceSize, dst.Len())
}
reflect.Copy(dst, src)
case dstType.Kind() == reflect.Interface:
dst.Set(src)
case dstType.Kind() == reflect.Ptr:
@@ -91,3 +86,127 @@ func set(dst, src reflect.Value, output Argument) error {
}
return nil
}
// requireAssignable ensures that `dst` is either a pointer or an interface.
func requireAssignable(dst, src reflect.Value) error {
if dst.Kind() != reflect.Ptr && dst.Kind() != reflect.Interface {
return fmt.Errorf("abi: cannot unmarshal %v into %v", src.Type(), dst.Type())
}
return nil
}
// requireUnpackKind verifies preconditions for unpacking `args` into `kind`
func requireUnpackKind(v reflect.Value, t reflect.Type, k reflect.Kind,
args Arguments) error {
switch k {
case reflect.Struct:
case reflect.Slice, reflect.Array:
if minLen := args.LengthNonIndexed(); v.Len() < minLen {
return fmt.Errorf("abi: insufficient number of elements in the list/array for unpack, want %d, got %d",
minLen, v.Len())
}
default:
return fmt.Errorf("abi: cannot unmarshal tuple into %v", t)
}
return nil
}
// mapAbiToStructFields maps ABI arguments to struct fields.
// First round: for each exported field that carries an `abi:""` tag and whose
// tag name exists in the arguments, pair them together.
// Second round: for each argument that has not been linked yet, find the struct
// field it is expected to map into; if that field exists and has not been used,
// pair them.
func mapAbiToStructFields(args Arguments, value reflect.Value) (map[string]string, error) {
typ := value.Type()
abi2struct := make(map[string]string)
struct2abi := make(map[string]string)
// first round ~~~
for i := 0; i < typ.NumField(); i++ {
structFieldName := typ.Field(i).Name
// skip private struct fields.
if structFieldName[:1] != strings.ToUpper(structFieldName[:1]) {
continue
}
// skip fields that have no abi:"" tag.
var ok bool
var tagName string
if tagName, ok = typ.Field(i).Tag.Lookup("abi"); !ok {
continue
}
// check if tag is empty.
if tagName == "" {
return nil, fmt.Errorf("struct: abi tag in '%s' is empty", structFieldName)
}
// check which argument field matches with the abi tag.
found := false
for _, abiField := range args.NonIndexed() {
if abiField.Name == tagName {
if abi2struct[abiField.Name] != "" {
return nil, fmt.Errorf("struct: abi tag in '%s' already mapped", structFieldName)
}
// pair them
abi2struct[abiField.Name] = structFieldName
struct2abi[structFieldName] = abiField.Name
found = true
}
}
// check if this tag has been mapped.
if !found {
return nil, fmt.Errorf("struct: abi tag '%s' defined but not found in abi", tagName)
}
}
// second round ~~~
for _, arg := range args {
abiFieldName := arg.Name
structFieldName := capitalise(abiFieldName)
if structFieldName == "" {
return nil, fmt.Errorf("abi: purely underscored output cannot unpack to struct")
}
// this abi has already been paired, skip it... unless there exists another, yet unassigned
// struct field with the same field name. If so, raise an error:
// abi: [ { "name": "value" } ]
// struct { Value *big.Int , Value1 *big.Int `abi:"value"`}
if abi2struct[abiFieldName] != "" {
if abi2struct[abiFieldName] != structFieldName &&
struct2abi[structFieldName] == "" &&
value.FieldByName(structFieldName).IsValid() {
return nil, fmt.Errorf("abi: multiple variables maps to the same abi field '%s'", abiFieldName)
}
continue
}
// return an error if this struct field has already been paired.
if struct2abi[structFieldName] != "" {
return nil, fmt.Errorf("abi: multiple outputs mapping to the same struct field '%s'", structFieldName)
}
if value.FieldByName(structFieldName).IsValid() {
// pair them
abi2struct[abiFieldName] = structFieldName
struct2abi[structFieldName] = abiFieldName
} else {
// not paired, but annotate as used, to detect cases like
// abi : [ { "name": "value" }, { "name": "_value" } ]
// struct { Value *big.Int }
struct2abi[structFieldName] = abiFieldName
}
}
return abi2struct, nil
}
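
In practice the two rounds mean a struct field can be bound explicitly with an abi:"..." tag, while untagged fields are matched by capitalising the argument name. A hedged sketch of a destination struct for an ABI whose outputs happen to be named "value" and "_owner" (the names and types here are hypothetical, chosen only to illustrate the pairing):

package main

import (
	"fmt"
	"math/big"
)

type result struct {
	Owner [20]byte `abi:"_owner"` // round 1: the tag pairs Owner with the "_owner" argument
	Value *big.Int                // round 2: "value" capitalises to "Value" and pairs implicitly
}

func main() {
	fmt.Printf("%+v\n", result{Value: big.NewInt(1)})
}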

View File

@@ -21,27 +21,27 @@ import (
"reflect"
"regexp"
"strconv"
"strings"
)
// Type enumerator
const (
IntTy byte = iota
UintTy
BoolTy
StringTy
SliceTy
ArrayTy
AddressTy
FixedBytesTy
BytesTy
HashTy
FixedpointTy
FixedPointTy
FunctionTy
)
// Type is the reflection of the supported argument type
type Type struct {
IsSlice, IsArray bool
SliceSize int
Elem *Type
Kind reflect.Kind
@@ -53,54 +53,57 @@ type Type struct {
}
var (
// fullTypeRegex parses the abi types
//
// Types can be in the format of:
//
// Input = Type [ "[" [ Number ] "]" ] Name .
// Type = [ "u" ] "int" [ Number ] [ x ] [ Number ].
//
// Examples:
//
// string int uint fixed
// string32 int8 uint8 uint[]
// address int256 uint256 fixed128x128[2]
fullTypeRegex = regexp.MustCompile(`([a-zA-Z0-9]+)(\[([0-9]*)\])?`)
// typeRegex parses the abi sub types
typeRegex = regexp.MustCompile("([a-zA-Z]+)(([0-9]+)(x([0-9]+))?)?")
)
// NewType creates a new reflection type of abi type given in t.
func NewType(t string) (typ Type, err error) {
res := fullTypeRegex.FindAllStringSubmatch(t, -1)[0]
// check if type is slice and parse type.
switch {
case res[3] != "":
// err is ignored. Already checked for number through the regexp
typ.SliceSize, _ = strconv.Atoi(res[3])
typ.IsArray = true
case res[2] != "":
typ.IsSlice, typ.SliceSize = true, -1
case res[0] == "":
return Type{}, fmt.Errorf("abi: type parse error: %s", t)
// check that opening and closing array brackets are balanced, if any exist
if strings.Count(t, "[") != strings.Count(t, "]") {
return Type{}, fmt.Errorf("invalid arg type in abi")
}
if typ.IsArray || typ.IsSlice {
sliceType, err := NewType(res[1])
typ.stringKind = t
// if there are brackets, get ready to go into slice/array mode and
// recursively create the type
if strings.Count(t, "[") != 0 {
i := strings.LastIndex(t, "[")
// recursively embed the type
embeddedType, err := NewType(t[:i])
if err != nil {
return Type{}, err
}
typ.Elem = &sliceType
typ.stringKind = sliceType.stringKind + t[len(res[1]):]
// Although we know that this is an array, we cannot return
// as we don't know the type of the element, however, if it
// is still an array, then don't determine the type.
if typ.Elem.IsArray || typ.Elem.IsSlice {
return typ, nil
}
}
// grab the last cell and create a type from there
sliced := t[i:]
// grab the slice size with regexp
re := regexp.MustCompile("[0-9]+")
intz := re.FindAllString(sliced, -1)
if len(intz) == 0 {
// is a slice
typ.T = SliceTy
typ.Kind = reflect.Slice
typ.Elem = &embeddedType
typ.Type = reflect.SliceOf(embeddedType.Type)
} else if len(intz) == 1 {
// is an array
typ.T = ArrayTy
typ.Kind = reflect.Array
typ.Elem = &embeddedType
typ.Size, err = strconv.Atoi(intz[0])
if err != nil {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
typ.Type = reflect.ArrayOf(typ.Size, embeddedType.Type)
} else {
return Type{}, fmt.Errorf("invalid formatting of array type")
}
return typ, err
}
// parse the type and size of the abi-type.
parsedType := typeRegex.FindAllStringSubmatch(res[1], -1)[0]
parsedType := typeRegex.FindAllStringSubmatch(t, -1)[0]
// varSize is the size of the variable
var varSize int
if len(parsedType[3]) > 0 {
@@ -109,62 +112,52 @@ func NewType(t string) (typ Type, err error) {
if err != nil {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
} else {
if parsedType[0] == "uint" || parsedType[0] == "int" {
// this should fail because it means that there's something wrong with
// the abi type (the compiler should always format it to the size...always)
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
}
// varType is the parsed abi type
varType := parsedType[1]
// substitute canonical integer
if varSize == 0 && (varType == "int" || varType == "uint") {
varSize = 256
t += "256"
}
// only set stringKind if not array or slice, as for those,
// the correct string type has been set
if !(typ.IsArray || typ.IsSlice) {
typ.stringKind = t
}
switch varType {
switch varType := parsedType[1]; varType {
case "int":
typ.Kind = reflectIntKind(false, varSize)
typ.Type = big_t
typ.Kind, typ.Type = reflectIntKindAndType(false, varSize)
typ.Size = varSize
typ.T = IntTy
case "uint":
typ.Kind = reflectIntKind(true, varSize)
typ.Type = ubig_t
typ.Kind, typ.Type = reflectIntKindAndType(true, varSize)
typ.Size = varSize
typ.T = UintTy
case "bool":
typ.Kind = reflect.Bool
typ.T = BoolTy
typ.Type = reflect.TypeOf(bool(false))
case "address":
typ.Kind = reflect.Array
typ.Type = address_t
typ.Type = addressT
typ.Size = 20
typ.T = AddressTy
case "string":
typ.Kind = reflect.String
typ.Size = -1
typ.Type = reflect.TypeOf("")
typ.T = StringTy
case "bytes":
sliceType, _ := NewType("uint8")
typ.Elem = &sliceType
if varSize == 0 {
typ.IsSlice = true
typ.T = BytesTy
typ.SliceSize = -1
typ.Kind = reflect.Slice
typ.Type = reflect.SliceOf(reflect.TypeOf(byte(0)))
} else {
typ.IsArray = true
typ.T = FixedBytesTy
typ.SliceSize = varSize
typ.Kind = reflect.Array
typ.Size = varSize
typ.Type = reflect.ArrayOf(varSize, reflect.TypeOf(byte(0)))
}
case "function":
sliceType, _ := NewType("uint8")
typ.Elem = &sliceType
typ.IsArray = true
typ.Kind = reflect.Array
typ.T = FunctionTy
typ.SliceSize = 24
typ.Size = 24
typ.Type = reflect.ArrayOf(24, reflect.TypeOf(byte(0)))
default:
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
@@ -185,7 +178,7 @@ func (t Type) pack(v reflect.Value) ([]byte, error) {
return nil, err
}
if (t.IsSlice || t.IsArray) && t.T != BytesTy && t.T != FixedBytesTy && t.T != FunctionTy {
if t.T == SliceTy || t.T == ArrayTy {
var packed []byte
for i := 0; i < v.Len(); i++ {
@@ -195,18 +188,17 @@ func (t Type) pack(v reflect.Value) ([]byte, error) {
}
packed = append(packed, val...)
}
if t.IsSlice {
if t.T == SliceTy {
return packBytesSlice(packed, v.Len()), nil
} else if t.IsArray {
} else if t.T == ArrayTy {
return packed, nil
}
}
return packElement(t, v), nil
}
// requiresLengthPrefix returns whether the type requires any sort of length
// prefixing.
func (t Type) requiresLengthPrefix() bool {
return t.T != FixedBytesTy && (t.T == StringTy || t.T == BytesTy || t.IsSlice)
return t.T == StringTy || t.T == BytesTy || t.T == SliceTy
}
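
Because NewType now peels array brackets off from the right and recurses, "uint16[2]" parses into an ArrayTy whose Elem is the plain uint16 type, matching the table in type_test.go below. A short usage sketch, assuming the package is imported as the accounts/abi package at this revision (where NewType still takes a single string):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	typ, err := abi.NewType("uint16[2]")
	if err != nil {
		panic(err)
	}
	// typ.T == abi.ArrayTy, typ.Size == 2
	// typ.Elem.T == abi.UintTy, typ.Elem.Size == 16
	fmt.Println(typ.T == abi.ArrayTy, typ.Size, typ.Elem.T == abi.UintTy, typ.Elem.Size)
}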

View File

@@ -17,8 +17,12 @@
package abi
import (
"math/big"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/ethereum/go-ethereum/common"
)
// typeWithoutStringer is an alias for the Type type which simply doesn't implement
@@ -31,33 +35,58 @@ func TestTypeRegexp(t *testing.T) {
blob string
kind Type
}{
{"int", Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, stringKind: "int256"}},
{"int8", Type{Kind: reflect.Int8, Type: big_t, Size: 8, T: IntTy, stringKind: "int8"}},
{"int256", Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, stringKind: "int256"}},
{"int[]", Type{IsSlice: true, SliceSize: -1, Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, Elem: &Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int[2]", Type{IsArray: true, SliceSize: 2, Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, Elem: &Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"int32[]", Type{IsSlice: true, SliceSize: -1, Kind: reflect.Int32, Type: big_t, Size: 32, T: IntTy, Elem: &Type{Kind: reflect.Int32, Type: big_t, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", Type{IsArray: true, SliceSize: 2, Kind: reflect.Int32, Type: big_t, Size: 32, T: IntTy, Elem: &Type{Kind: reflect.Int32, Type: big_t, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"uint", Type{Kind: reflect.Ptr, Type: ubig_t, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8", Type{Kind: reflect.Uint8, Type: ubig_t, Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint256", Type{Kind: reflect.Ptr, Type: ubig_t, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint[]", Type{IsSlice: true, SliceSize: -1, Kind: reflect.Ptr, Type: ubig_t, Size: 256, T: UintTy, Elem: &Type{Kind: reflect.Ptr, Type: ubig_t, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint[2]", Type{IsArray: true, SliceSize: 2, Kind: reflect.Ptr, Type: ubig_t, Size: 256, T: UintTy, Elem: &Type{Kind: reflect.Ptr, Type: ubig_t, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"uint32[]", Type{IsSlice: true, SliceSize: -1, Kind: reflect.Uint32, Type: ubig_t, Size: 32, T: UintTy, Elem: &Type{Kind: reflect.Uint32, Type: big_t, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", Type{IsArray: true, SliceSize: 2, Kind: reflect.Uint32, Type: ubig_t, Size: 32, T: UintTy, Elem: &Type{Kind: reflect.Uint32, Type: big_t, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"bytes", Type{IsSlice: true, SliceSize: -1, Elem: &Type{Kind: reflect.Uint8, Type: ubig_t, Size: 8, T: UintTy, stringKind: "uint8"}, T: BytesTy, stringKind: "bytes"}},
{"bytes32", Type{IsArray: true, SliceSize: 32, Elem: &Type{Kind: reflect.Uint8, Type: ubig_t, Size: 8, T: UintTy, stringKind: "uint8"}, T: FixedBytesTy, stringKind: "bytes32"}},
{"bytes[]", Type{IsSlice: true, SliceSize: -1, Elem: &Type{IsSlice: true, SliceSize: -1, Elem: &Type{Kind: reflect.Uint8, Type: ubig_t, Size: 8, T: UintTy, stringKind: "uint8"}, T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", Type{IsArray: true, SliceSize: 2, Elem: &Type{IsSlice: true, SliceSize: -1, Elem: &Type{Kind: reflect.Uint8, Type: ubig_t, Size: 8, T: UintTy, stringKind: "uint8"}, T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", Type{IsSlice: true, SliceSize: -1, Elem: &Type{IsArray: true, SliceSize: 32, Elem: &Type{Kind: reflect.Uint8, Type: ubig_t, Size: 8, T: UintTy, stringKind: "uint8"}, T: FixedBytesTy, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", Type{IsArray: true, SliceSize: 2, Elem: &Type{IsArray: true, SliceSize: 32, Elem: &Type{Kind: reflect.Uint8, Type: ubig_t, Size: 8, T: UintTy, stringKind: "uint8"}, T: FixedBytesTy, stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", Type{Kind: reflect.String, Size: -1, T: StringTy, stringKind: "string"}},
{"string[]", Type{IsSlice: true, SliceSize: -1, Kind: reflect.String, T: StringTy, Size: -1, Elem: &Type{Kind: reflect.String, T: StringTy, Size: -1, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", Type{IsArray: true, SliceSize: 2, Kind: reflect.String, T: StringTy, Size: -1, Elem: &Type{Kind: reflect.String, T: StringTy, Size: -1, stringKind: "string"}, stringKind: "string[2]"}},
{"address", Type{Kind: reflect.Array, Type: address_t, Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", Type{IsSlice: true, SliceSize: -1, Kind: reflect.Array, Type: address_t, T: AddressTy, Size: 20, Elem: &Type{Kind: reflect.Array, Type: address_t, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", Type{IsArray: true, SliceSize: 2, Kind: reflect.Array, Type: address_t, T: AddressTy, Size: 20, Elem: &Type{Kind: reflect.Array, Type: address_t, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
{"bool", Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}},
{"bool[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool(nil)), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][][2]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][2][]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", Type{Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]*big.Int{}), Size: 2, Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}},
{"bytes[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]byte{}), Elem: &Type{Kind: reflect.Slice, Type: reflect.TypeOf([]byte{}), T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]byte{}), Elem: &Type{T: BytesTy, Type: reflect.TypeOf([]byte{}), Kind: reflect.Slice, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][32]byte{}), Elem: &Type{Kind: reflect.Array, Type: reflect.TypeOf([32]byte{}), T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][32]byte{}), Elem: &Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}},
{"string[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]string{}), Elem: &Type{Kind: reflect.String, Type: reflect.TypeOf(""), T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]string{}), Elem: &Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}, stringKind: "string[2]"}},
{"address", Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
// TODO when fixed types are implemented properly
// {"fixed", Type{}},
// {"fixed128x128", Type{}},
@@ -66,13 +95,189 @@ func TestTypeRegexp(t *testing.T) {
// {"fixed128x128[]", Type{}},
// {"fixed128x128[2]", Type{}},
}
- for i, tt := range tests {
+ for _, tt := range tests {
typ, err := NewType(tt.blob)
if err != nil {
- t.Errorf("type %d: failed to parse type string: %v", i, err)
+ t.Errorf("type %q: failed to parse type string: %v", tt.blob, err)
}
if !reflect.DeepEqual(typ, tt.kind) {
- t.Errorf("type %d: parsed type mismatch:\n have %+v\n want %+v", i, typeWithoutStringer(typ), typeWithoutStringer(tt.kind))
+ t.Errorf("type %q: parsed type mismatch:\nGOT %s\nWANT %s ", tt.blob, spew.Sdump(typeWithoutStringer(typ)), spew.Sdump(typeWithoutStringer(tt.kind)))
}
}
}
func TestTypeCheck(t *testing.T) {
for i, test := range []struct {
typ string
input interface{}
err string
}{
{"uint", big.NewInt(1), "unsupported arg type: uint"},
{"int", big.NewInt(1), "unsupported arg type: int"},
{"uint256", big.NewInt(1), ""},
{"uint256[][3][]", [][3][]*big.Int{{{}}}, ""},
{"uint256[][][3]", [3][][]*big.Int{{{}}}, ""},
{"uint256[3][][]", [][][3]*big.Int{{{}}}, ""},
{"uint256[3][3][3]", [3][3][3]*big.Int{{{}}}, ""},
{"uint8[][]", [][]uint8{}, ""},
{"int256", big.NewInt(1), ""},
{"uint8", uint8(1), ""},
{"uint16", uint16(1), ""},
{"uint32", uint32(1), ""},
{"uint64", uint64(1), ""},
{"int8", int8(1), ""},
{"int16", int16(1), ""},
{"int32", int32(1), ""},
{"int64", int64(1), ""},
{"uint24", big.NewInt(1), ""},
{"uint40", big.NewInt(1), ""},
{"uint48", big.NewInt(1), ""},
{"uint56", big.NewInt(1), ""},
{"uint72", big.NewInt(1), ""},
{"uint80", big.NewInt(1), ""},
{"uint88", big.NewInt(1), ""},
{"uint96", big.NewInt(1), ""},
{"uint104", big.NewInt(1), ""},
{"uint112", big.NewInt(1), ""},
{"uint120", big.NewInt(1), ""},
{"uint128", big.NewInt(1), ""},
{"uint136", big.NewInt(1), ""},
{"uint144", big.NewInt(1), ""},
{"uint152", big.NewInt(1), ""},
{"uint160", big.NewInt(1), ""},
{"uint168", big.NewInt(1), ""},
{"uint176", big.NewInt(1), ""},
{"uint184", big.NewInt(1), ""},
{"uint192", big.NewInt(1), ""},
{"uint200", big.NewInt(1), ""},
{"uint208", big.NewInt(1), ""},
{"uint216", big.NewInt(1), ""},
{"uint224", big.NewInt(1), ""},
{"uint232", big.NewInt(1), ""},
{"uint240", big.NewInt(1), ""},
{"uint248", big.NewInt(1), ""},
{"int24", big.NewInt(1), ""},
{"int40", big.NewInt(1), ""},
{"int48", big.NewInt(1), ""},
{"int56", big.NewInt(1), ""},
{"int72", big.NewInt(1), ""},
{"int80", big.NewInt(1), ""},
{"int88", big.NewInt(1), ""},
{"int96", big.NewInt(1), ""},
{"int104", big.NewInt(1), ""},
{"int112", big.NewInt(1), ""},
{"int120", big.NewInt(1), ""},
{"int128", big.NewInt(1), ""},
{"int136", big.NewInt(1), ""},
{"int144", big.NewInt(1), ""},
{"int152", big.NewInt(1), ""},
{"int160", big.NewInt(1), ""},
{"int168", big.NewInt(1), ""},
{"int176", big.NewInt(1), ""},
{"int184", big.NewInt(1), ""},
{"int192", big.NewInt(1), ""},
{"int200", big.NewInt(1), ""},
{"int208", big.NewInt(1), ""},
{"int216", big.NewInt(1), ""},
{"int224", big.NewInt(1), ""},
{"int232", big.NewInt(1), ""},
{"int240", big.NewInt(1), ""},
{"int248", big.NewInt(1), ""},
{"uint30", uint8(1), "abi: cannot use uint8 as type ptr as argument"},
{"uint8", uint16(1), "abi: cannot use uint16 as type uint8 as argument"},
{"uint8", uint32(1), "abi: cannot use uint32 as type uint8 as argument"},
{"uint8", uint64(1), "abi: cannot use uint64 as type uint8 as argument"},
{"uint8", int8(1), "abi: cannot use int8 as type uint8 as argument"},
{"uint8", int16(1), "abi: cannot use int16 as type uint8 as argument"},
{"uint8", int32(1), "abi: cannot use int32 as type uint8 as argument"},
{"uint8", int64(1), "abi: cannot use int64 as type uint8 as argument"},
{"uint16", uint16(1), ""},
{"uint16", uint8(1), "abi: cannot use uint8 as type uint16 as argument"},
{"uint16[]", []uint16{1, 2, 3}, ""},
{"uint16[]", [3]uint16{1, 2, 3}, ""},
{"uint16[]", []uint32{1, 2, 3}, "abi: cannot use []uint32 as type [0]uint16 as argument"},
{"uint16[3]", [3]uint32{1, 2, 3}, "abi: cannot use [3]uint32 as type [3]uint16 as argument"},
{"uint16[3]", [4]uint16{1, 2, 3}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"uint16[3]", []uint16{1, 2, 3}, ""},
{"uint16[3]", []uint16{1, 2, 3, 4}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"address[]", []common.Address{{1}}, ""},
{"address[1]", []common.Address{{1}}, ""},
{"address[1]", [1]common.Address{{1}}, ""},
{"address[2]", [1]common.Address{{1}}, "abi: cannot use [1]array as type [2]array as argument"},
{"bytes32", [32]byte{}, ""},
{"bytes31", [31]byte{}, ""},
{"bytes30", [30]byte{}, ""},
{"bytes29", [29]byte{}, ""},
{"bytes28", [28]byte{}, ""},
{"bytes27", [27]byte{}, ""},
{"bytes26", [26]byte{}, ""},
{"bytes25", [25]byte{}, ""},
{"bytes24", [24]byte{}, ""},
{"bytes23", [23]byte{}, ""},
{"bytes22", [22]byte{}, ""},
{"bytes21", [21]byte{}, ""},
{"bytes20", [20]byte{}, ""},
{"bytes19", [19]byte{}, ""},
{"bytes18", [18]byte{}, ""},
{"bytes17", [17]byte{}, ""},
{"bytes16", [16]byte{}, ""},
{"bytes15", [15]byte{}, ""},
{"bytes14", [14]byte{}, ""},
{"bytes13", [13]byte{}, ""},
{"bytes12", [12]byte{}, ""},
{"bytes11", [11]byte{}, ""},
{"bytes10", [10]byte{}, ""},
{"bytes9", [9]byte{}, ""},
{"bytes8", [8]byte{}, ""},
{"bytes7", [7]byte{}, ""},
{"bytes6", [6]byte{}, ""},
{"bytes5", [5]byte{}, ""},
{"bytes4", [4]byte{}, ""},
{"bytes3", [3]byte{}, ""},
{"bytes2", [2]byte{}, ""},
{"bytes1", [1]byte{}, ""},
{"bytes32", [33]byte{}, "abi: cannot use [33]uint8 as type [32]uint8 as argument"},
{"bytes32", common.Hash{1}, ""},
{"bytes31", common.Hash{1}, "abi: cannot use common.Hash as type [31]uint8 as argument"},
{"bytes31", [32]byte{}, "abi: cannot use [32]uint8 as type [31]uint8 as argument"},
{"bytes", []byte{0, 1}, ""},
{"bytes", [2]byte{0, 1}, "abi: cannot use array as type slice as argument"},
{"bytes", common.Hash{1}, "abi: cannot use array as type slice as argument"},
{"string", "hello world", ""},
{"string", string(""), ""},
{"string", []byte{}, "abi: cannot use slice as type string as argument"},
{"bytes32[]", [][32]byte{{}}, ""},
{"function", [24]byte{}, ""},
{"bytes20", common.Address{}, ""},
{"address", [20]byte{}, ""},
{"address", common.Address{}, ""},
{"bytes32[]]", "", "invalid arg type in abi"},
{"invalidType", "", "unsupported arg type: invalidType"},
{"invalidSlice[]", "", "unsupported arg type: invalidSlice"},
} {
typ, err := NewType(test.typ)
if err != nil && len(test.err) == 0 {
t.Fatal("unexpected parse error:", err)
} else if err != nil && len(test.err) != 0 {
if err.Error() != test.err {
t.Errorf("%d failed. Expected err: '%v' got err: '%v'", i, test.err, err)
}
continue
}
err = typeCheck(typ, reflect.ValueOf(test.input))
if err != nil && len(test.err) == 0 {
t.Errorf("%d failed. Expected no err but got: %v", i, err)
continue
}
if err == nil && len(test.err) != 0 {
t.Errorf("%d failed. Expected err: %v but got none", i, test.err)
continue
}
if err != nil && len(test.err) != 0 && err.Error() != test.err {
t.Errorf("%d failed. Expected err: '%v' got err: '%v'", i, test.err, err)
}
}
}
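The two table-driven tests above exercise NewType and typeCheck end to end. A minimal sketch of driving the same parser from regular code (assuming this revision of the accounts/abi package, where NewType takes a single type string) might look like:

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// Parse a Solidity type string into the abi package's Type descriptor.
	typ, err := abi.NewType("uint256[2]")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(typ.String())  // canonical form, e.g. uint256[2]
	fmt.Println(typ.Size)      // 2: the fixed array length
	fmt.Println(typ.Elem.Size) // 256: the element's bit size
}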

accounts/abi/unpack.go Normal file

@@ -0,0 +1,230 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"encoding/binary"
"fmt"
"math/big"
"reflect"
"github.com/ethereum/go-ethereum/common"
)
// reads the integer based on its kind
func readInteger(kind reflect.Kind, b []byte) interface{} {
switch kind {
case reflect.Uint8:
return b[len(b)-1]
case reflect.Uint16:
return binary.BigEndian.Uint16(b[len(b)-2:])
case reflect.Uint32:
return binary.BigEndian.Uint32(b[len(b)-4:])
case reflect.Uint64:
return binary.BigEndian.Uint64(b[len(b)-8:])
case reflect.Int8:
return int8(b[len(b)-1])
case reflect.Int16:
return int16(binary.BigEndian.Uint16(b[len(b)-2:]))
case reflect.Int32:
return int32(binary.BigEndian.Uint32(b[len(b)-4:]))
case reflect.Int64:
return int64(binary.BigEndian.Uint64(b[len(b)-8:]))
default:
return new(big.Int).SetBytes(b)
}
}
// reads a bool
func readBool(word []byte) (bool, error) {
for _, b := range word[:31] {
if b != 0 {
return false, errBadBool
}
}
switch word[31] {
case 0:
return false, nil
case 1:
return true, nil
default:
return false, errBadBool
}
}
// A function type is simply the address with the function selector at the end.
// This enforces that standard by always presenting it as a 24-byte array (20-byte address + 4-byte selector).
func readFunctionType(t Type, word []byte) (funcTy [24]byte, err error) {
if t.T != FunctionTy {
return [24]byte{}, fmt.Errorf("abi: invalid type in call to make function type byte array")
}
if garbage := binary.BigEndian.Uint64(word[24:32]); garbage != 0 {
err = fmt.Errorf("abi: got improperly encoded function type, got %v", word)
} else {
copy(funcTy[:], word[0:24])
}
return
}
// uses reflection to create a fixed-size byte array to read from
func readFixedBytes(t Type, word []byte) (interface{}, error) {
if t.T != FixedBytesTy {
return nil, fmt.Errorf("abi: invalid type in call to make fixed byte array")
}
// convert
array := reflect.New(t.Type).Elem()
reflect.Copy(array, reflect.ValueOf(word[0:t.Size]))
return array.Interface(), nil
}
func getFullElemSize(elem *Type) int {
// all others should be counted as 32 (slices have pointers to their respective elements)
size := 32
// arrays wrap it, each element being the same size
for elem.T == ArrayTy {
size *= elem.Size
elem = elem.Elem
}
return size
}
// iteratively unpack elements
func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error) {
if size < 0 {
return nil, fmt.Errorf("cannot marshal input to array, size is negative (%d)", size)
}
if start+32*size > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go array: offset %d would go over slice boundary (len=%d)", len(output), start+32*size)
}
// this value will become our slice or our array, depending on the type
var refSlice reflect.Value
if t.T == SliceTy {
// declare our slice
refSlice = reflect.MakeSlice(t.Type, size, size)
} else if t.T == ArrayTy {
// declare our array
refSlice = reflect.New(t.Type).Elem()
} else {
return nil, fmt.Errorf("abi: invalid type in array/slice unpacking stage")
}
// Arrays have packed elements, resulting in longer unpack steps.
// Slices have just 32 bytes per element (pointing to the contents).
elemSize := 32
if t.T == ArrayTy {
elemSize = getFullElemSize(t.Elem)
}
for i, j := start, 0; j < size; i, j = i+elemSize, j+1 {
inter, err := toGoType(i, *t.Elem, output)
if err != nil {
return nil, err
}
// append the item to our reflect slice
refSlice.Index(j).Set(reflect.ValueOf(inter))
}
// return the interface
return refSlice.Interface(), nil
}
// toGoType parses the output bytes and recursively assigns the value of these bytes
// into a go type in accordance with the ABI spec.
func toGoType(index int, t Type, output []byte) (interface{}, error) {
if index+32 > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %d require %d", len(output), index+32)
}
var (
returnOutput []byte
begin, end int
err error
)
// if we require a length prefix, find the beginning word and size returned.
if t.requiresLengthPrefix() {
begin, end, err = lengthPrefixPointsTo(index, output)
if err != nil {
return nil, err
}
} else {
returnOutput = output[index : index+32]
}
switch t.T {
case SliceTy:
return forEachUnpack(t, output, begin, end)
case ArrayTy:
return forEachUnpack(t, output, index, t.Size)
case StringTy: // variable arrays are written at the end of the return bytes
return string(output[begin : begin+end]), nil
case IntTy, UintTy:
return readInteger(t.Kind, returnOutput), nil
case BoolTy:
return readBool(returnOutput)
case AddressTy:
return common.BytesToAddress(returnOutput), nil
case HashTy:
return common.BytesToHash(returnOutput), nil
case BytesTy:
return output[begin : begin+end], nil
case FixedBytesTy:
return readFixedBytes(t, returnOutput)
case FunctionTy:
return readFunctionType(t, returnOutput)
default:
return nil, fmt.Errorf("abi: unknown type %v", t.T)
}
}
// interprets a 32-byte slice as an offset and then determines which indices to look at to decode the type.
func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err error) {
bigOffsetEnd := big.NewInt(0).SetBytes(output[index : index+32])
bigOffsetEnd.Add(bigOffsetEnd, common.Big32)
outputLength := big.NewInt(int64(len(output)))
if bigOffsetEnd.Cmp(outputLength) > 0 {
return 0, 0, fmt.Errorf("abi: cannot marshal in to go slice: offset %v would go over slice boundary (len=%v)", bigOffsetEnd, outputLength)
}
if bigOffsetEnd.BitLen() > 63 {
return 0, 0, fmt.Errorf("abi offset larger than int64: %v", bigOffsetEnd)
}
offsetEnd := int(bigOffsetEnd.Uint64())
lengthBig := big.NewInt(0).SetBytes(output[offsetEnd-32 : offsetEnd])
totalSize := big.NewInt(0)
totalSize.Add(totalSize, bigOffsetEnd)
totalSize.Add(totalSize, lengthBig)
if totalSize.BitLen() > 63 {
return 0, 0, fmt.Errorf("abi length larger than int64: %v", totalSize)
}
if totalSize.Cmp(outputLength) > 0 {
return 0, 0, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %v require %v", outputLength, totalSize)
}
start = int(bigOffsetEnd.Uint64())
length = int(lengthBig.Uint64())
return
}
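The offset and length bookkeeping in lengthPrefixPointsTo follows the ABI head/tail layout: the head word is a byte offset into the output, and the 32-byte word at that offset holds the element count, with the payload right behind it. A self-contained sketch of the same arithmetic on a hand-encoded dynamic bytes value (plain Go, no go-ethereum imports; the values are chosen for illustration):

package main

import (
	"encoding/hex"
	"fmt"
	"math/big"
)

func main() {
	// ABI encoding of a dynamic `bytes` value: offset word, length word, padded payload.
	enc, _ := hex.DecodeString(
		"0000000000000000000000000000000000000000000000000000000000000020" + // offset = 32
			"0000000000000000000000000000000000000000000000000000000000000005" + // length = 5
			"68656c6c6f000000000000000000000000000000000000000000000000000000") // "hello" padded to 32 bytes

	// Mirror lengthPrefixPointsTo: the head word points at the length word,
	// and the payload starts immediately after it.
	offset := new(big.Int).SetBytes(enc[0:32]).Int64()
	length := new(big.Int).SetBytes(enc[offset : offset+32]).Int64()
	start := offset + 32

	fmt.Printf("%s\n", enc[start:start+length]) // hello
}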

accounts/abi/unpack_test.go Normal file

@@ -0,0 +1,817 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"bytes"
"encoding/hex"
"fmt"
"math/big"
"reflect"
"strconv"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/stretchr/testify/require"
)
type unpackTest struct {
def string // ABI definition JSON
enc string // evm return data
want interface{} // the expected output
err string // empty, or the expected error message
}
func (test unpackTest) checkError(err error) error {
if err != nil {
if len(test.err) == 0 {
return fmt.Errorf("expected no err but got: %v", err)
} else if err.Error() != test.err {
return fmt.Errorf("expected err: '%v' got err: %q", test.err, err)
}
} else if len(test.err) > 0 {
return fmt.Errorf("expected err: %v but got none", test.err)
}
return nil
}
var unpackTests = []unpackTest{
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: true,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000000",
want: false,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000001000000000001",
want: false,
err: "abi: improperly encoded boolean value",
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000003",
want: false,
err: "abi: improperly encoded boolean value",
},
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint32(1),
},
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint16(0),
err: "abi: cannot unmarshal uint32 in to uint16",
},
{
def: `[{"type": "uint17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint16(0),
err: "abi: cannot unmarshal *big.Int in to uint16",
},
{
def: `[{"type": "uint17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int32(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int16(0),
err: "abi: cannot unmarshal int32 in to int16",
},
{
def: `[{"type": "int17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int16(0),
err: "abi: cannot unmarshal *big.Int in to int16",
},
{
def: `[{"type": "int17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "address"}]`,
enc: "0000000000000000000000000100000000000000000000000000000000000000",
want: common.Address{1},
},
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{},
err: "abi: cannot unmarshal []uint8 in to [32]uint8",
},
{
def: `[{"type": "bytes32"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: []byte(nil),
err: "abi: cannot unmarshal [32]uint8 in to []uint8",
},
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "function"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [24]byte{1},
},
// slices
{
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint8{1, 2},
},
{
def: `[{"type": "uint8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint8{1, 2},
},
// multi dimensional, if these pass, all types that don't require length prefix should pass
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000E0000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[2][2]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][2]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001",
want: [2][]uint8{{1}, {1}},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}},
},
{
def: `[{"type": "uint16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint16{1, 2},
},
{
def: `[{"type": "uint16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint16{1, 2},
},
{
def: `[{"type": "uint32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint32{1, 2},
},
{
def: `[{"type": "uint32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint32{1, 2},
},
{
def: `[{"type": "uint32[2][3][4]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018",
want: [4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
},
{
def: `[{"type": "uint64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint64{1, 2},
},
{
def: `[{"type": "uint64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint64{1, 2},
},
{
def: `[{"type": "uint256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "uint256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
{
def: `[{"type": "int8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int8{1, 2},
},
{
def: `[{"type": "int8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int8{1, 2},
},
{
def: `[{"type": "int16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int16{1, 2},
},
{
def: `[{"type": "int16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int16{1, 2},
},
{
def: `[{"type": "int32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int32{1, 2},
},
{
def: `[{"type": "int32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int32{1, 2},
},
{
def: `[{"type": "int64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int64{1, 2},
},
{
def: `[{"type": "int64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int64{1, 2},
},
{
def: `[{"type": "int256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "int256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
// struct outputs
{
def: `[{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"int","type":"int256"},{"name":"Int","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: multiple outputs mapping to the same struct field 'Int'",
},
{
def: `[{"name":"int","type":"int256"},{"name":"_int","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: multiple outputs mapping to the same struct field 'Int'",
},
{
def: `[{"name":"Int","type":"int256"},{"name":"_int","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: multiple outputs mapping to the same struct field 'Int'",
},
{
def: `[{"name":"Int","type":"int256"},{"name":"_","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: purely underscored output cannot unpack to struct",
},
}
func TestUnpack(t *testing.T) {
for i, test := range unpackTests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
def := fmt.Sprintf(`[{ "name" : "method", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)
}
encb, err := hex.DecodeString(test.enc)
if err != nil {
t.Fatalf("invalid hex: %s" + test.enc)
}
outptr := reflect.New(reflect.TypeOf(test.want))
err = abi.Unpack(outptr.Interface(), "method", encb)
if err := test.checkError(err); err != nil {
t.Errorf("test %d (%v) failed: %v", i, test.def, err)
return
}
out := outptr.Elem().Interface()
if !reflect.DeepEqual(test.want, out) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, test.want, out)
}
})
}
}
type methodMultiOutput struct {
Int *big.Int
String string
}
func methodMultiReturn(require *require.Assertions) (ABI, []byte, methodMultiOutput) {
const definition = `[
{ "name" : "multi", "constant" : false, "outputs": [ { "name": "Int", "type": "uint256" }, { "name": "String", "type": "string" } ] }]`
var expected = methodMultiOutput{big.NewInt(1), "hello"}
abi, err := JSON(strings.NewReader(definition))
require.NoError(err)
// using buff to make the code readable
buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000005"))
buff.Write(common.RightPadBytes([]byte(expected.String), 32))
return abi, buff.Bytes(), expected
}
func TestMethodMultiReturn(t *testing.T) {
type reversed struct {
String string
Int *big.Int
}
abi, data, expected := methodMultiReturn(require.New(t))
bigint := new(big.Int)
var testCases = []struct {
dest interface{}
expected interface{}
error string
name string
}{{
&methodMultiOutput{},
&expected,
"",
"Can unpack into structure",
}, {
&reversed{},
&reversed{expected.String, expected.Int},
"",
"Can unpack into reversed structure",
}, {
&[]interface{}{&bigint, new(string)},
&[]interface{}{&expected.Int, &expected.String},
"",
"Can unpack into a slice",
}, {
&[2]interface{}{&bigint, new(string)},
&[2]interface{}{&expected.Int, &expected.String},
"",
"Can unpack into an array",
}, {
&[]interface{}{new(int), new(int)},
&[]interface{}{&expected.Int, &expected.String},
"abi: cannot unmarshal *big.Int in to int",
"Can not unpack into a slice with wrong types",
}, {
&[]interface{}{new(int)},
&[]interface{}{},
"abi: insufficient number of elements in the list/array for unpack, want 2, got 1",
"Can not unpack into a slice with wrong types",
}}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
require := require.New(t)
err := abi.Unpack(tc.dest, "multi", data)
if tc.error == "" {
require.Nil(err, "Should be able to unpack method outputs.")
require.Equal(tc.expected, tc.dest)
} else {
require.EqualError(err, tc.error)
}
})
}
}
func TestMultiReturnWithArray(t *testing.T) {
const definition = `[{"name" : "multi", "outputs": [{"type": "uint64[3]"}, {"type": "uint64"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000900000000000000000000000000000000000000000000000000000000000000090000000000000000000000000000000000000000000000000000000000000009"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000008"))
ret1, ret1Exp := new([3]uint64), [3]uint64{9, 9, 9}
ret2, ret2Exp := new(uint64), uint64(8)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
t.Error("array result", *ret1, "!= Expected", ret1Exp)
}
if *ret2 != ret2Exp {
t.Error("int result", *ret2, "!= Expected", ret2Exp)
}
}
func TestMultiReturnWithDeeplyNestedArray(t *testing.T) {
// Similar to TestMultiReturnWithArray, but with a special case in mind:
// values of nested static arrays count towards the size as well, and any element following
// after such nested array argument should be read with the correct offset,
// so that it does not read content from the previous array argument.
const definition = `[{"name" : "multi", "outputs": [{"type": "uint64[3][2][4]"}, {"type": "uint64"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
// construct the test array, each 3 char element is joined with 61 '0' chars,
// to form the ((3 + 61) * 0.5) = 32 byte elements in the array.
buff.Write(common.Hex2Bytes(strings.Join([]string{
"", //empty, to apply the 61-char separator to the first element as well.
"111", "112", "113", "121", "122", "123",
"211", "212", "213", "221", "222", "223",
"311", "312", "313", "321", "322", "323",
"411", "412", "413", "421", "422", "423",
}, "0000000000000000000000000000000000000000000000000000000000000")))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000009876"))
ret1, ret1Exp := new([4][2][3]uint64), [4][2][3]uint64{
{{0x111, 0x112, 0x113}, {0x121, 0x122, 0x123}},
{{0x211, 0x212, 0x213}, {0x221, 0x222, 0x223}},
{{0x311, 0x312, 0x313}, {0x321, 0x322, 0x323}},
{{0x411, 0x412, 0x413}, {0x421, 0x422, 0x423}},
}
ret2, ret2Exp := new(uint64), uint64(0x9876)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
t.Error("array result", *ret1, "!= Expected", ret1Exp)
}
if *ret2 != ret2Exp {
t.Error("int result", *ret2, "!= Expected", ret2Exp)
}
}
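The deep-nesting test above hinges on offset arithmetic: a static uint64[3][2][4] occupies 4*2*3 = 24 head words, so the uint64 that follows it starts 24*32 = 768 bytes into the output. A small sketch of that calculation (loosely mirroring the multiplication in getFullElemSize; the dimension slice is illustrative):

package main

import "fmt"

func main() {
	const wordSize = 32 // every static element is padded to one 32-byte word

	// Dimensions of uint64[3][2][4]; for a slice the size would stay at one word,
	// since slice contents live in the tail and only a pointer sits in the head.
	dims := []int{3, 2, 4}
	size := wordSize
	for _, d := range dims {
		size *= d
	}
	fmt.Println(size) // 768: byte offset of the next return value
}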
func TestUnmarshal(t *testing.T) {
const definition = `[
{ "name" : "int", "constant" : false, "outputs": [ { "type": "uint256" } ] },
{ "name" : "bool", "constant" : false, "outputs": [ { "type": "bool" } ] },
{ "name" : "bytes", "constant" : false, "outputs": [ { "type": "bytes" } ] },
{ "name" : "fixed", "constant" : false, "outputs": [ { "type": "bytes32" } ] },
{ "name" : "multi", "constant" : false, "outputs": [ { "type": "bytes" }, { "type": "bytes" } ] },
{ "name" : "intArraySingle", "constant" : false, "outputs": [ { "type": "uint256[3]" } ] },
{ "name" : "addressSliceSingle", "constant" : false, "outputs": [ { "type": "address[]" } ] },
{ "name" : "addressSliceDouble", "constant" : false, "outputs": [ { "name": "a", "type": "address[]" }, { "name": "b", "type": "address[]" } ] },
{ "name" : "mixedBytes", "constant" : true, "outputs": [ { "name": "a", "type": "bytes" }, { "name": "b", "type": "bytes32" } ] }]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
// marshal mixed bytes (mixedBytes)
p0, p0Exp := []byte{}, common.Hex2Bytes("01020000000000000000")
p1, p1Exp := [32]byte{}, common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000ddeeff")
mixedBytes := []interface{}{&p0, &p1}
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000ddeeff"))
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000a"))
buff.Write(common.Hex2Bytes("0102000000000000000000000000000000000000000000000000000000000000"))
err = abi.Unpack(&mixedBytes, "mixedBytes", buff.Bytes())
if err != nil {
t.Error(err)
} else {
if !bytes.Equal(p0, p0Exp) {
t.Errorf("unexpected value unpacked: want %x, got %x", p0Exp, p0)
}
if !bytes.Equal(p1[:], p1Exp) {
t.Errorf("unexpected value unpacked: want %x, got %x", p1Exp, p1)
}
}
// marshal int
var Int *big.Int
err = abi.Unpack(&Int, "int", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
if Int == nil || Int.Cmp(big.NewInt(1)) != 0 {
t.Error("expected Int to be 1 got", Int)
}
// marshal bool
var Bool bool
err = abi.Unpack(&Bool, "bool", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
if !Bool {
t.Error("expected Bool to be true")
}
// marshal dynamic bytes max length 32
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
bytesOut := common.RightPadBytes([]byte("hello"), 32)
buff.Write(bytesOut)
var Bytes []byte
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, bytesOut) {
t.Errorf("expected %x got %x", bytesOut, Bytes)
}
// marshal dynamic bytes max length 64
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040"))
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, bytesOut) {
t.Errorf("expected %x got %x", bytesOut, Bytes)
}
// marshal dynamic bytes length 63 (one byte short of the 64-byte padded buffer)
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000003f"))
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, bytesOut[:len(bytesOut)-1]) {
t.Errorf("expected %x got %x", bytesOut[:len(bytesOut)-1], Bytes)
}
// marshal dynamic bytes output empty
err = abi.Unpack(&Bytes, "bytes", nil)
if err == nil {
t.Error("expected error")
}
// marshal dynamic bytes length 5
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000005"))
buff.Write(common.RightPadBytes([]byte("hello"), 32))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, []byte("hello")) {
t.Errorf("expected %x got %x", bytesOut, Bytes)
}
// marshal fixed bytes (bytes32) into a hash
buff.Reset()
buff.Write(common.RightPadBytes([]byte("hello"), 32))
var hash common.Hash
err = abi.Unpack(&hash, "fixed", buff.Bytes())
if err != nil {
t.Error(err)
}
helloHash := common.BytesToHash(common.RightPadBytes([]byte("hello"), 32))
if hash != helloHash {
t.Errorf("Expected %x to equal %x", hash, helloHash)
}
// marshal error
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err == nil {
t.Error("expected error")
}
err = abi.Unpack(&Bytes, "multi", make([]byte, 64))
if err == nil {
t.Error("expected error")
}
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000003"))
// marshal int array
var intArray [3]*big.Int
err = abi.Unpack(&intArray, "intArraySingle", buff.Bytes())
if err != nil {
t.Error(err)
}
var testAgainstIntArray [3]*big.Int
testAgainstIntArray[0] = big.NewInt(1)
testAgainstIntArray[1] = big.NewInt(2)
testAgainstIntArray[2] = big.NewInt(3)
for i, Int := range intArray {
if Int.Cmp(testAgainstIntArray[i]) != 0 {
t.Errorf("expected %v, got %v", testAgainstIntArray[i], Int)
}
}
// marshal address slice
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020")) // offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // size
buff.Write(common.Hex2Bytes("0000000000000000000000000100000000000000000000000000000000000000"))
var outAddr []common.Address
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
if len(outAddr) != 1 {
t.Fatal("expected 1 item, got", len(outAddr))
}
if outAddr[0] != (common.Address{1}) {
t.Errorf("expected %x, got %x", common.Address{1}, outAddr[0])
}
// marshal multiple address slice
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040")) // offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000080")) // offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // size
buff.Write(common.Hex2Bytes("0000000000000000000000000100000000000000000000000000000000000000"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // size
buff.Write(common.Hex2Bytes("0000000000000000000000000200000000000000000000000000000000000000"))
buff.Write(common.Hex2Bytes("0000000000000000000000000300000000000000000000000000000000000000"))
var outAddrStruct struct {
A []common.Address
B []common.Address
}
err = abi.Unpack(&outAddrStruct, "addressSliceDouble", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
if len(outAddrStruct.A) != 1 {
t.Fatal("expected 1 item, got", len(outAddrStruct.A))
}
if outAddrStruct.A[0] != (common.Address{1}) {
t.Errorf("expected %x, got %x", common.Address{1}, outAddrStruct.A[0])
}
if len(outAddrStruct.B) != 2 {
t.Fatal("expected 1 item, got", len(outAddrStruct.B))
}
if outAddrStruct.B[0] != (common.Address{2}) {
t.Errorf("expected %x, got %x", common.Address{2}, outAddrStruct.B[0])
}
if outAddrStruct.B[1] != (common.Address{3}) {
t.Errorf("expected %x, got %x", common.Address{3}, outAddrStruct.B[1])
}
// marshal invalid address slice
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000100"))
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
if err == nil {
t.Fatal("expected error:", err)
}
}
func TestOOMMaliciousInput(t *testing.T) {
oomTests := []unpackTest{
{
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"0000000000000000000000000000000000000000000000000000000000000003" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Length larger than 64 bits
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"00ffffffffffffffffffffffffffffffffffffffffffffff0000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Offset very large (over 64 bits)
def: `[{"type": "uint8[]"}]`,
enc: "00ffffffffffffffffffffffffffffffffffffffffffffff0000000000000020" + // offset
"0000000000000000000000000000000000000000000000000000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Offset very large (below 64 bits)
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000007ffffffffff00020" + // offset
"0000000000000000000000000000000000000000000000000000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Offset negative (as 64 bit)
def: `[{"type": "uint8[]"}]`,
enc: "000000000000000000000000000000000000000000000000f000000000000020" + // offset
"0000000000000000000000000000000000000000000000000000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Negative length
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"000000000000000000000000000000000000000000000000f000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Very large length
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"0000000000000000000000000000000000000000000000007fffffffff000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
}
for i, test := range oomTests {
def := fmt.Sprintf(`[{ "name" : "method", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)
}
encb, err := hex.DecodeString(test.enc)
if err != nil {
t.Fatalf("invalid hex: %s" + test.enc)
}
_, err = abi.Methods["method"].Outputs.UnpackValues(encb)
if err == nil {
t.Fatalf("Expected error on malicious input, test %d", i)
}
}
}
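The malicious-input cases above call Outputs.UnpackValues directly, which decodes return values positionally without a destination struct and is where oversized offsets and lengths get rejected. A hedged sketch of the same call on well-formed data (the JSON wrapper mirrors the test helper above; the encoded bytes are illustrative):

package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	def := `[{ "name": "method", "outputs": [{"type": "uint8[]"}] }]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		log.Fatal(err)
	}
	enc, _ := hex.DecodeString(
		"0000000000000000000000000000000000000000000000000000000000000020" + // offset
			"0000000000000000000000000000000000000000000000000000000000000002" + // num elems
			"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
			"0000000000000000000000000000000000000000000000000000000000000002") // elem 2

	values, err := parsed.Methods["method"].Outputs.UnpackValues(enc)
	if err != nil {
		log.Fatal(err) // malformed offsets or lengths are rejected here
	}
	fmt.Println(values) // [[1 2]]
}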


@@ -42,8 +42,9 @@ type Wallet interface {
URL() URL
// Status returns a textual status to aid the user in the current state of the
- // wallet.
- Status() string
+ // wallet. It also returns an error indicating any failure the wallet might have
+ // encountered.
+ Status() (string, error)
// Open initializes access to a wallet instance. It is not meant to unlock or
// decrypt account keys, rather simply to establish a connection to hardware
@@ -105,7 +106,7 @@ type Wallet interface {
// or optionally with the aid of any location metadata from the embedded URL field.
//
// If the wallet requires additional authentication to sign the request (e.g.
- // a password to decrypt the account, or a PIN code o verify the transaction),
+ // a password to decrypt the account, or a PIN code to verify the transaction),
// an AuthNeededError instance will be returned, containing infos for the user
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignTxWithPassphrase, or by other means (e.g. unlock
@@ -147,9 +148,26 @@ type Backend interface {
Subscribe(sink chan<- WalletEvent) event.Subscription
}
// WalletEventType represents the different event types that can be fired by
// the wallet subscription subsystem.
type WalletEventType int
const (
// WalletArrived is fired when a new wallet is detected either via USB or via
// a filesystem event in the keystore.
WalletArrived WalletEventType = iota
// WalletOpened is fired when a wallet is successfully opened with the purpose
// of starting any background processes such as automatic key derivation.
WalletOpened
// WalletDropped is fired when a wallet is removed or disconnected, either via USB
// or due to a filesystem event in the keystore, meaning it can no longer be used.
WalletDropped
)
// WalletEvent is an event fired by an account backend when a wallet arrival or
// departure is detected.
type WalletEvent struct {
Wallet Wallet // Wallet instance arrived or departed
Arrive bool // Whether the wallet was added or removed
Wallet Wallet // Wallet instance arrived or departed
Kind WalletEventType // Event type that happened in the system
}
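With the widened Status signature and the new WalletEventType, a consumer of any accounts.Backend subscribes with a WalletEvent channel and switches on Kind. A minimal sketch, assuming a keystore-backed backend (the key directory and channel buffer size are illustrative):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/accounts/keystore"
)

// watchWallets drains wallet events from a backend and reports them.
func watchWallets(backend accounts.Backend) {
	sink := make(chan accounts.WalletEvent, 16)
	sub := backend.Subscribe(sink)
	defer sub.Unsubscribe()

	for ev := range sink {
		switch ev.Kind {
		case accounts.WalletArrived:
			status, err := ev.Wallet.Status() // new two-value signature
			fmt.Println("wallet arrived:", ev.Wallet.URL(), status, err)
		case accounts.WalletOpened:
			fmt.Println("wallet opened:", ev.Wallet.URL())
		case accounts.WalletDropped:
			fmt.Println("wallet dropped:", ev.Wallet.URL())
		}
	}
}

func main() {
	ks := keystore.NewKeyStore("/tmp/keys", keystore.StandardScryptN, keystore.StandardScryptP)
	watchWallets(ks)
}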


@@ -38,7 +38,7 @@ var ErrNotSupported = errors.New("not supported")
var ErrInvalidPassphrase = errors.New("invalid passphrase")
// ErrWalletAlreadyOpen is returned if a wallet is attempted to be opened the
- // secodn time.
+ // second time.
var ErrWalletAlreadyOpen = errors.New("wallet already open")
// ErrWalletClosed is returned if a wallet is attempted to be opened the
@@ -62,7 +62,7 @@ func NewAuthNeededError(needed string) error {
}
}
- // Error implements the standard error interfacel.
+ // Error implements the standard error interface.
func (err *AuthNeededError) Error() string {
return fmt.Sprintf("authentication needed: %s", err.Needed)
}


@@ -27,12 +27,17 @@ import (
// DefaultRootDerivationPath is the root path to which custom derivation endpoints
// are appended. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
- var DefaultRootDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0}
+ var DefaultRootDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}
// DefaultBaseDerivationPath is the base path from which custom derivation endpoints
// are incremented. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
- var DefaultBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}
+ var DefaultBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}
// DefaultLedgerBaseDerivationPath is the base path from which custom derivation endpoints
// are incremented. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
var DefaultLedgerBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}
// DerivationPath represents the computer friendly version of a hierarchical
// deterministic wallet account derivation path.
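After this change, sequential accounts differ only in the final path component (m/44'/60'/0'/0/0, m/44'/60'/0'/0/1, ...). A hedged sketch of deriving successive paths from DefaultBaseDerivationPath, copying before mutating since DerivationPath is a slice type:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
)

func main() {
	// Print the first three sequential account derivation paths.
	for i := uint32(0); i < 3; i++ {
		path := make(accounts.DerivationPath, len(accounts.DefaultBaseDerivationPath))
		copy(path, accounts.DefaultBaseDerivationPath)
		path[len(path)-1] += i // bump only the last, non-hardened component
		fmt.Println(path)      // m/44'/60'/0'/0/0, then .../1, then .../2
	}
}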


@@ -37,11 +37,11 @@ func TestHDPathParsing(t *testing.T) {
{"m/2147483692/2147483708/2147483648/2147483648", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Plain relative derivation paths
{"0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"128", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"0'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"128'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"2147483648", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}},
{"128", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 128}},
{"0'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
{"128'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 128}},
{"2147483648", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
// Hexadecimal absolute derivation paths
{"m/0x2C'/0x3c'/0x00'/0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
@@ -52,11 +52,11 @@ func TestHDPathParsing(t *testing.T) {
{"m/0x8000002C/0x8000003c/0x80000000/0x80000000", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Hexadecimal relative derivation paths
{"0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"0x80", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"0x00'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"0x80'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"0x80000000", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}},
{"0x80", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 128}},
{"0x00'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
{"0x80'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 128}},
{"0x80000000", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
// Weird inputs just to ensure they work
{" m / 44 '\n/\n 60 \n\n\t' /\n0 ' /\t\t 0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},


@@ -20,7 +20,6 @@ import (
"bufio"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"sort"
@@ -28,6 +27,7 @@ import (
"sync"
"time"
mapset "github.com/deckarep/golang-set"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
@@ -71,6 +71,7 @@ type accountCache struct {
byAddr map[common.Address][]accounts.Account
throttle *time.Timer
notify chan struct{}
fileC fileCache
}
func newAccountCache(keydir string) (*accountCache, chan struct{}) {
@@ -78,6 +79,7 @@ func newAccountCache(keydir string) (*accountCache, chan struct{}) {
keydir: keydir,
byAddr: make(map[common.Address][]accounts.Account),
notify: make(chan struct{}, 1),
fileC: fileCache{all: mapset.NewThreadUnsafeSet()},
}
ac.watcher = newWatcher(ac)
return ac, ac.notify
@@ -127,6 +129,23 @@ func (ac *accountCache) delete(removed accounts.Account) {
}
}
// deleteByFile removes an account referenced by the given path.
func (ac *accountCache) deleteByFile(path string) {
ac.mu.Lock()
defer ac.mu.Unlock()
i := sort.Search(len(ac.all), func(i int) bool { return ac.all[i].URL.Path >= path })
if i < len(ac.all) && ac.all[i].URL.Path == path {
removed := ac.all[i]
ac.all = append(ac.all[:i], ac.all[i+1:]...)
if ba := removeAccount(ac.byAddr[removed.Address], removed); len(ba) == 0 {
delete(ac.byAddr, removed.Address)
} else {
ac.byAddr[removed.Address] = ba
}
}
}
func removeAccount(slice []accounts.Account, elem accounts.Account) []accounts.Account {
for i := range slice {
if slice[i] == elem {
@@ -167,15 +186,16 @@ func (ac *accountCache) find(a accounts.Account) (accounts.Account, error) {
default:
err := &AmbiguousAddrError{Addr: a.Address, Matches: make([]accounts.Account, len(matches))}
copy(err.Matches, matches)
sort.Sort(accountsByURL(err.Matches))
return accounts.Account{}, err
}
}
func (ac *accountCache) maybeReload() {
ac.mu.Lock()
defer ac.mu.Unlock()
if ac.watcher.running {
ac.mu.Unlock()
return // A watcher is running and will keep the cache up-to-date.
}
if ac.throttle == nil {
@@ -184,12 +204,15 @@ func (ac *accountCache) maybeReload() {
select {
case <-ac.throttle.C:
default:
ac.mu.Unlock()
return // The cache was reloaded recently.
}
}
// No watcher running, start it.
ac.watcher.start()
ac.reload()
ac.throttle.Reset(minReloadInterval)
ac.mu.Unlock()
ac.scanAccounts()
}
func (ac *accountCache) close() {
@@ -205,80 +228,71 @@ func (ac *accountCache) close() {
ac.mu.Unlock()
}
// reload caches addresses of existing accounts.
// Callers must hold ac.mu.
func (ac *accountCache) reload() {
accounts, err := ac.scan()
// scanAccounts checks if any changes have occurred on the filesystem, and
// updates the account cache accordingly
func (ac *accountCache) scanAccounts() error {
// Scan the entire folder metadata for file changes
creates, deletes, updates, err := ac.fileC.scan(ac.keydir)
if err != nil {
log.Debug("Failed to reload keystore contents", "err", err)
return err
}
ac.all = accounts
sort.Sort(ac.all)
for k := range ac.byAddr {
delete(ac.byAddr, k)
if creates.Cardinality() == 0 && deletes.Cardinality() == 0 && updates.Cardinality() == 0 {
return nil
}
for _, a := range accounts {
ac.byAddr[a.Address] = append(ac.byAddr[a.Address], a)
// Create a helper method to scan the contents of the key files
var (
buf = new(bufio.Reader)
key struct {
Address string `json:"address"`
}
)
readAccount := func(path string) *accounts.Account {
fd, err := os.Open(path)
if err != nil {
log.Trace("Failed to open keystore file", "path", path, "err", err)
return nil
}
defer fd.Close()
buf.Reset(fd)
// Parse the address.
key.Address = ""
err = json.NewDecoder(buf).Decode(&key)
addr := common.HexToAddress(key.Address)
switch {
case err != nil:
log.Debug("Failed to decode keystore key", "path", path, "err", err)
case (addr == common.Address{}):
log.Debug("Failed to decode keystore key", "path", path, "err", "missing or zero address")
default:
return &accounts.Account{Address: addr, URL: accounts.URL{Scheme: KeyStoreScheme, Path: path}}
}
return nil
}
// Process all the file diffs
start := time.Now()
for _, p := range creates.ToSlice() {
if a := readAccount(p.(string)); a != nil {
ac.add(*a)
}
}
for _, p := range deletes.ToSlice() {
ac.deleteByFile(p.(string))
}
for _, p := range updates.ToSlice() {
path := p.(string)
ac.deleteByFile(path)
if a := readAccount(path); a != nil {
ac.add(*a)
}
}
end := time.Now()
select {
case ac.notify <- struct{}{}:
default:
}
log.Debug("Reloaded keystore contents", "accounts", len(ac.all))
}
func (ac *accountCache) scan() ([]accounts.Account, error) {
files, err := ioutil.ReadDir(ac.keydir)
if err != nil {
return nil, err
}
var (
buf = new(bufio.Reader)
addrs []accounts.Account
keyJSON struct {
Address string `json:"address"`
}
)
for _, fi := range files {
path := filepath.Join(ac.keydir, fi.Name())
if skipKeyFile(fi) {
log.Trace("Ignoring file on account scan", "path", path)
continue
}
logger := log.New("path", path)
fd, err := os.Open(path)
if err != nil {
logger.Trace("Failed to open keystore file", "err", err)
continue
}
buf.Reset(fd)
// Parse the address.
keyJSON.Address = ""
err = json.NewDecoder(buf).Decode(&keyJSON)
addr := common.HexToAddress(keyJSON.Address)
switch {
case err != nil:
logger.Debug("Failed to decode keystore key", "err", err)
case (addr == common.Address{}):
logger.Debug("Failed to decode keystore key", "err", "missing or zero address")
default:
addrs = append(addrs, accounts.Account{Address: addr, URL: accounts.URL{Scheme: KeyStoreScheme, Path: path}})
}
fd.Close()
}
return addrs, err
}
func skipKeyFile(fi os.FileInfo) bool {
// Skip editor backups and UNIX-style hidden files.
if strings.HasSuffix(fi.Name(), "~") || strings.HasPrefix(fi.Name(), ".") {
return true
}
// Skip misc special files, directories (yes, symlinks too).
if fi.IsDir() || fi.Mode()&os.ModeType != 0 {
return true
}
return false
log.Trace("Handled keystore changes", "time", end.Sub(start))
return nil
}
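The readAccount helper above deliberately decodes only the address field of each key file, so a rescan never pays the cost of parsing or decrypting the full key material. A self-contained sketch of that partial-decode trick, assuming a hypothetical key file path:

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

func main() {
    // Hypothetical path to one encrypted key file inside the keystore folder.
    fd, err := os.Open("/tmp/keystore-demo/UTC--example--keyfile")
    if err != nil {
        fmt.Println("open:", err)
        return
    }
    defer fd.Close()

    // Decode only the address field; everything else in the JSON is ignored.
    var key struct {
        Address string `json:"address"`
    }
    if err := json.NewDecoder(fd).Decode(&key); err != nil {
        fmt.Println("decode:", err)
        return
    }
    fmt.Println("address:", key.Address)
}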

View File

@@ -18,6 +18,7 @@ package keystore
import (
"fmt"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
@@ -58,7 +59,7 @@ func TestWatchNewFile(t *testing.T) {
// Ensure the watcher is started before adding any files.
ks.Accounts()
time.Sleep(200 * time.Millisecond)
time.Sleep(1000 * time.Millisecond)
// Move in the files.
wantAccounts := make([]accounts.Account, len(cachetestAccounts))
@@ -295,3 +296,111 @@ func TestCacheFind(t *testing.T) {
}
}
}
func waitForAccounts(wantAccounts []accounts.Account, ks *KeyStore) error {
var list []accounts.Account
for d := 200 * time.Millisecond; d < 8*time.Second; d *= 2 {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
return fmt.Errorf("wasn't notified of new accounts")
}
return nil
}
time.Sleep(d)
}
return fmt.Errorf("\ngot %v\nwant %v", list, wantAccounts)
}
// TestUpdatedKeyfileContents tests that updating the contents of a keystore file
// is noticed by the watcher, and the account cache is updated accordingly
func TestUpdatedKeyfileContents(t *testing.T) {
t.Parallel()
// Create a temporary keystore to test with
rand.Seed(time.Now().UnixNano())
dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-watch-test-%d-%d", os.Getpid(), rand.Int()))
ks := NewKeyStore(dir, LightScryptN, LightScryptP)
list := ks.Accounts()
if len(list) > 0 {
t.Error("initial account list not empty:", list)
}
time.Sleep(100 * time.Millisecond)
// Create the directory and copy a key file into it.
os.MkdirAll(dir, 0700)
defer os.RemoveAll(dir)
file := filepath.Join(dir, "aaa")
// Place one of our testfiles in there
if err := cp.CopyFile(file, cachetestAccounts[0].URL.Path); err != nil {
t.Fatal(err)
}
// ks should see the account.
wantAccounts := []accounts.Account{cachetestAccounts[0]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
if err := waitForAccounts(wantAccounts, ks); err != nil {
t.Error(err)
return
}
// needed so that modTime of `file` is different to its current value after forceCopyFile
time.Sleep(1000 * time.Millisecond)
// Now replace file contents
if err := forceCopyFile(file, cachetestAccounts[1].URL.Path); err != nil {
t.Fatal(err)
return
}
wantAccounts = []accounts.Account{cachetestAccounts[1]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
if err := waitForAccounts(wantAccounts, ks); err != nil {
t.Errorf("First replacement failed")
t.Error(err)
return
}
// needed so that modTime of `file` is different to its current value after forceCopyFile
time.Sleep(1000 * time.Millisecond)
// Now replace file contents again
if err := forceCopyFile(file, cachetestAccounts[2].URL.Path); err != nil {
t.Fatal(err)
return
}
wantAccounts = []accounts.Account{cachetestAccounts[2]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
if err := waitForAccounts(wantAccounts, ks); err != nil {
t.Errorf("Second replacement failed")
t.Error(err)
return
}
// needed so that modTime of `file` is different to its current value after ioutil.WriteFile
time.Sleep(1000 * time.Millisecond)
// Now replace file contents with crap
if err := ioutil.WriteFile(file, []byte("foo"), 0644); err != nil {
t.Fatal(err)
return
}
if err := waitForAccounts([]accounts.Account{}, ks); err != nil {
t.Errorf("Emptying account file failed")
t.Error(err)
return
}
}
// forceCopyFile is like cp.CopyFile, but doesn't complain if the destination exists.
func forceCopyFile(dst, src string) error {
data, err := ioutil.ReadFile(src)
if err != nil {
return err
}
return ioutil.WriteFile(dst, data, 0644)
}

View File

@@ -0,0 +1,102 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"io/ioutil"
"os"
"path/filepath"
"strings"
"sync"
"time"
mapset "github.com/deckarep/golang-set"
"github.com/ethereum/go-ethereum/log"
)
// fileCache is a cache of files seen during scan of keystore.
type fileCache struct {
all mapset.Set // Set of all files from the keystore folder
lastMod time.Time // Last time instance when a file was modified
mu sync.RWMutex
}
// scan performs a new scan on the given directory, compares against the already
// cached filenames, and returns file sets: creates, deletes, updates.
func (fc *fileCache) scan(keyDir string) (mapset.Set, mapset.Set, mapset.Set, error) {
t0 := time.Now()
// List all the files in the keystore folder
files, err := ioutil.ReadDir(keyDir)
if err != nil {
return nil, nil, nil, err
}
t1 := time.Now()
fc.mu.Lock()
defer fc.mu.Unlock()
// Iterate all the files and gather their metadata
all := mapset.NewThreadUnsafeSet()
mods := mapset.NewThreadUnsafeSet()
var newLastMod time.Time
for _, fi := range files {
path := filepath.Join(keyDir, fi.Name())
// Skip any non-key files from the folder
if nonKeyFile(fi) {
log.Trace("Ignoring file on account scan", "path", path)
continue
}
// Gather the set of all and freshly modified files
all.Add(path)
modified := fi.ModTime()
if modified.After(fc.lastMod) {
mods.Add(path)
}
if modified.After(newLastMod) {
newLastMod = modified
}
}
t2 := time.Now()
// Update the tracked files and return the three sets
deletes := fc.all.Difference(all) // Deletes = previous - current
creates := all.Difference(fc.all) // Creates = current - previous
updates := mods.Difference(creates) // Updates = modified - creates
fc.all, fc.lastMod = all, newLastMod
t3 := time.Now()
// Report on the scanning stats and return
log.Debug("FS scan times", "list", t1.Sub(t0), "set", t2.Sub(t1), "diff", t3.Sub(t2))
return creates, deletes, updates, nil
}
// nonKeyFile ignores editor backups, hidden files and folders/symlinks.
func nonKeyFile(fi os.FileInfo) bool {
// Skip editor backups and UNIX-style hidden files.
if strings.HasSuffix(fi.Name(), "~") || strings.HasPrefix(fi.Name(), ".") {
return true
}
// Skip misc special files, directories (yes, symlinks too).
if fi.IsDir() || fi.Mode()&os.ModeType != 0 {
return true
}
return false
}
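The three set operations at the end of scan are the heart of the incremental rescan: anything in the old snapshot but missing from the new one was deleted, anything new was created, and anything modified that is not freshly created was updated in place. A self-contained sketch of that algebra using the same github.com/deckarep/golang-set dependency; the file names are made up:

package main

import (
    "fmt"

    mapset "github.com/deckarep/golang-set"
)

func main() {
    previous := mapset.NewThreadUnsafeSet("a.json", "b.json", "c.json") // cached from the last scan
    current := mapset.NewThreadUnsafeSet("b.json", "c.json", "d.json")  // what is on disk right now
    modified := mapset.NewThreadUnsafeSet("c.json", "d.json")           // mod time newer than lastMod

    deletes := previous.Difference(current) // a.json: vanished from disk
    creates := current.Difference(previous) // d.json: newly appeared
    updates := modified.Difference(creates) // c.json: already known, contents changed

    fmt.Println("creates:", creates)
    fmt.Println("deletes:", deletes)
    fmt.Println("updates:", updates)
}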

View File

@@ -91,14 +91,6 @@ type cipherparamsJSON struct {
IV string `json:"iv"`
}
type scryptParamsJSON struct {
N int `json:"n"`
R int `json:"r"`
P int `json:"p"`
DkLen int `json:"dklen"`
Salt string `json:"salt"`
}
func (k *Key) MarshalJSON() (j []byte, err error) {
jStruct := plainKeyJSON{
hex.EncodeToString(k.Address[:]),

View File

@@ -50,7 +50,7 @@ var (
var KeyStoreType = reflect.TypeOf(&KeyStore{})
// KeyStoreScheme is the protocol scheme prefixing account and wallet URLs.
var KeyStoreScheme = "keystore"
const KeyStoreScheme = "keystore"
// Maximum time between wallet refreshes (if filesystem notifications don't work).
const walletRefreshCycle = 3 * time.Second
@@ -143,14 +143,14 @@ func (ks *KeyStore) refreshWallets() {
for _, account := range accs {
// Drop wallets while they were in front of the next account
for len(ks.wallets) > 0 && ks.wallets[0].URL().Cmp(account.URL) < 0 {
events = append(events, accounts.WalletEvent{Wallet: ks.wallets[0], Arrive: false})
events = append(events, accounts.WalletEvent{Wallet: ks.wallets[0], Kind: accounts.WalletDropped})
ks.wallets = ks.wallets[1:]
}
// If there are no more wallets or the account is before the next, wrap new wallet
if len(ks.wallets) == 0 || ks.wallets[0].URL().Cmp(account.URL) > 0 {
wallet := &keystoreWallet{account: account, keystore: ks}
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: true})
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletArrived})
wallets = append(wallets, wallet)
continue
}
@@ -163,7 +163,7 @@ func (ks *KeyStore) refreshWallets() {
}
// Drop any leftover wallets and set the new batch
for _, wallet := range ks.wallets {
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: false})
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletDropped})
}
ks.wallets = wallets
ks.mu.Unlock()

View File

@@ -28,17 +28,18 @@ package keystore
import (
"bytes"
"crypto/aes"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"path/filepath"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/crypto/randentropy"
"github.com/pborman/uuid"
"golang.org/x/crypto/pbkdf2"
"golang.org/x/crypto/scrypt"
@@ -90,6 +91,12 @@ func (ks keyStorePassphrase) GetKey(addr common.Address, filename, auth string)
return key, nil
}
// StoreKey generates a key, encrypts with 'auth' and stores in the given directory
func StoreKey(dir, auth string, scryptN, scryptP int) (common.Address, error) {
_, a, err := storeNewKey(&keyStorePassphrase{dir, scryptN, scryptP}, rand.Reader, auth)
return a.Address, err
}
func (ks keyStorePassphrase) StoreKey(filename string, key *Key, auth string) error {
keyjson, err := EncryptKey(key, auth, ks.scryptN, ks.scryptP)
if err != nil {
@@ -101,16 +108,19 @@ func (ks keyStorePassphrase) StoreKey(filename string, key *Key, auth string) er
func (ks keyStorePassphrase) JoinPath(filename string) string {
if filepath.IsAbs(filename) {
return filename
} else {
return filepath.Join(ks.keysDirPath, filename)
}
return filepath.Join(ks.keysDirPath, filename)
}
// EncryptKey encrypts a key using the specified scrypt parameters into a json
// blob that can be decrypted later on.
func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
authArray := []byte(auth)
salt := randentropy.GetEntropyCSPRNG(32)
salt := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
panic("reading from crypto/rand failed: " + err.Error())
}
derivedKey, err := scrypt.Key(authArray, salt, scryptN, scryptR, scryptP, scryptDKLen)
if err != nil {
return nil, err
@@ -118,7 +128,10 @@ func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
encryptKey := derivedKey[:16]
keyBytes := math.PaddedBigBytes(key.PrivateKey.D, 32)
iv := randentropy.GetEntropyCSPRNG(aes.BlockSize) // 16
iv := make([]byte, aes.BlockSize) // 16
if _, err := io.ReadFull(rand.Reader, iv); err != nil {
panic("reading from crypto/rand failed: " + err.Error())
}
cipherText, err := aesCTRXOR(encryptKey, keyBytes, iv)
if err != nil {
return nil, err
@@ -140,7 +153,7 @@ func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
Cipher: "aes-128-ctr",
CipherText: hex.EncodeToString(cipherText),
CipherParams: cipherParamsJSON,
KDF: "scrypt",
KDF: keyHeaderKDF,
KDFParams: scryptParamsJSON,
MAC: hex.EncodeToString(mac),
}
@@ -275,7 +288,7 @@ func getKDFKey(cryptoJSON cryptoJSON, auth string) ([]byte, error) {
}
dkLen := ensureInt(cryptoJSON.KDFParams["dklen"])
if cryptoJSON.KDF == "scrypt" {
if cryptoJSON.KDF == keyHeaderKDF {
n := ensureInt(cryptoJSON.KDFParams["n"])
r := ensureInt(cryptoJSON.KDFParams["r"])
p := ensureInt(cryptoJSON.KDFParams["p"])
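The passphrase.go changes above swap the randentropy helper for crypto/rand when generating the scrypt salt and the AES IV, and replace the literal "scrypt" with the keyHeaderKDF constant. A standalone sketch of the salt-plus-scrypt derivation; the passphrase and the N/r/p/keyLen values are illustrative, not the keystore's configured parameters:

package main

import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
    "io"

    "golang.org/x/crypto/scrypt"
)

func main() {
    // Draw the salt straight from the operating system's CSPRNG.
    salt := make([]byte, 32)
    if _, err := io.ReadFull(rand.Reader, salt); err != nil {
        panic("reading from crypto/rand failed: " + err.Error())
    }
    // Illustrative scrypt parameters (N must be a power of two).
    derivedKey, err := scrypt.Key([]byte("correct horse battery staple"), salt, 1<<12, 8, 6, 32)
    if err != nil {
        panic(err)
    }
    fmt.Println("salt:       ", hex.EncodeToString(salt))
    fmt.Println("derived key:", hex.EncodeToString(derivedKey))
}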

View File

@@ -56,7 +56,6 @@ func (ks keyStorePlain) StoreKey(filename string, key *Key, auth string) error {
func (ks keyStorePlain) JoinPath(filename string) string {
if filepath.IsAbs(filename) {
return filename
} else {
return filepath.Join(ks.keysDirPath, filename)
}
return filepath.Join(ks.keysDirPath, filename)
}

View File

@@ -22,6 +22,7 @@ import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
@@ -140,21 +141,32 @@ func TestV3_PBKDF2_1(t *testing.T) {
testDecryptV3(tests["wikipage_test_vector_pbkdf2"], t)
}
var testsSubmodule = filepath.Join("..", "..", "tests", "testdata", "KeyStoreTests")
func skipIfSubmoduleMissing(t *testing.T) {
if !common.FileExist(testsSubmodule) {
t.Skipf("can't find JSON tests from submodule at %s", testsSubmodule)
}
}
func TestV3_PBKDF2_2(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["test1"], t)
}
func TestV3_PBKDF2_3(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["python_generated_test_with_odd_iv"], t)
}
func TestV3_PBKDF2_4(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["evilnonce"], t)
}
@@ -165,8 +177,9 @@ func TestV3_Scrypt_1(t *testing.T) {
}
func TestV3_Scrypt_2(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["test2"], t)
}

View File

@@ -272,82 +272,104 @@ func TestWalletNotifierLifecycle(t *testing.T) {
t.Errorf("wallet notifier didn't terminate after unsubscribe")
}
type walletEvent struct {
accounts.WalletEvent
a accounts.Account
}
// Tests that wallet notifications are correctly fired when accounts are added
// or deleted from the keystore.
func TestWalletNotifications(t *testing.T) {
// Create a temporary keystore to test with
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Subscribe to the wallet feed
updates := make(chan accounts.WalletEvent, 1)
sub := ks.Subscribe(updates)
// Subscribe to the wallet feed and collect events.
var (
events []walletEvent
updates = make(chan accounts.WalletEvent)
sub = ks.Subscribe(updates)
)
defer sub.Unsubscribe()
go func() {
for {
select {
case ev := <-updates:
events = append(events, walletEvent{ev, ev.Wallet.Accounts()[0]})
case <-sub.Err():
close(updates)
return
}
}
}()
// Randomly add and remove accounts and make sure events and wallets are in sync
live := make(map[common.Address]accounts.Account)
// Randomly add and remove accounts.
var (
live = make(map[common.Address]accounts.Account)
wantEvents []walletEvent
)
for i := 0; i < 1024; i++ {
// Execute a creation or deletion and ensure event arrival
if create := len(live) == 0 || rand.Int()%4 > 0; create {
// Add a new account and ensure a wallet notification arrives
account, err := ks.NewAccount("")
if err != nil {
t.Fatalf("failed to create test account: %v", err)
}
select {
case event := <-updates:
if !event.Arrive {
t.Errorf("departure event on account creation")
}
if event.Wallet.Accounts()[0] != account {
t.Errorf("account mismatch on created wallet: have %v, want %v", event.Wallet.Accounts()[0], account)
}
default:
t.Errorf("wallet arrival event not fired on account creation")
}
live[account.Address] = account
wantEvents = append(wantEvents, walletEvent{accounts.WalletEvent{Kind: accounts.WalletArrived}, account})
} else {
// Select a random account to delete (crude, but works)
// Delete a random account.
var account accounts.Account
for _, a := range live {
account = a
break
}
// Remove an account and ensure a wallet notification arrives
if err := ks.Delete(account, ""); err != nil {
t.Fatalf("failed to delete test account: %v", err)
}
select {
case event := <-updates:
if event.Arrive {
t.Errorf("arrival event on account deletion")
}
if event.Wallet.Accounts()[0] != account {
t.Errorf("account mismatch on deleted wallet: have %v, want %v", event.Wallet.Accounts()[0], account)
}
default:
t.Errorf("wallet departure event not fired on account creation")
}
delete(live, account.Address)
wantEvents = append(wantEvents, walletEvent{accounts.WalletEvent{Kind: accounts.WalletDropped}, account})
}
// Retrieve the list of wallets and ensure it matches with our required live set
liveList := make([]accounts.Account, 0, len(live))
for _, account := range live {
liveList = append(liveList, account)
}
sort.Sort(accountsByURL(liveList))
}
wallets := ks.Wallets()
if len(liveList) != len(wallets) {
t.Errorf("wallet list doesn't match required accounts: have %v, want %v", wallets, liveList)
} else {
for j, wallet := range wallets {
if accs := wallet.Accounts(); len(accs) != 1 {
t.Errorf("wallet %d: contains invalid number of accounts: have %d, want 1", j, len(accs))
} else if accs[0] != liveList[j] {
t.Errorf("wallet %d: account mismatch: have %v, want %v", j, accs[0], liveList[j])
}
// Shut down the event collector and check events.
sub.Unsubscribe()
<-updates
checkAccounts(t, live, ks.Wallets())
checkEvents(t, wantEvents, events)
}
// checkAccounts checks that all known live accounts are present in the wallet list.
func checkAccounts(t *testing.T, live map[common.Address]accounts.Account, wallets []accounts.Wallet) {
if len(live) != len(wallets) {
t.Errorf("wallet list doesn't match required accounts: have %d, want %d", len(wallets), len(live))
return
}
liveList := make([]accounts.Account, 0, len(live))
for _, account := range live {
liveList = append(liveList, account)
}
sort.Sort(accountsByURL(liveList))
for j, wallet := range wallets {
if accs := wallet.Accounts(); len(accs) != 1 {
t.Errorf("wallet %d: contains invalid number of accounts: have %d, want 1", j, len(accs))
} else if accs[0] != liveList[j] {
t.Errorf("wallet %d: account mismatch: have %v, want %v", j, accs[0], liveList[j])
}
}
}
// checkEvents checks that all events in 'want' are present in 'have'. Events may be present multiple times.
func checkEvents(t *testing.T, want []walletEvent, have []walletEvent) {
for _, wantEv := range want {
nmatch := 0
for ; len(have) > 0; nmatch++ {
if have[0].Kind != wantEv.Kind || have[0].a != wantEv.a {
break
}
have = have[1:]
}
if nmatch == 0 {
t.Fatalf("can't find event with Kind=%v for %x", wantEv.Kind, wantEv.a.Address)
}
}
}

View File

@@ -36,16 +36,16 @@ func (w *keystoreWallet) URL() accounts.URL {
return w.account.URL
}
// Status implements accounts.Wallet, always returning "open", since there is no
// concept of open/close for plain keystore accounts.
func (w *keystoreWallet) Status() string {
// Status implements accounts.Wallet, returning whether the account held by the
// keystore wallet is unlocked or not.
func (w *keystoreWallet) Status() (string, error) {
w.keystore.mu.RLock()
defer w.keystore.mu.RUnlock()
if _, ok := w.keystore.unlocked[w.account.Address]; ok {
return "Unlocked"
return "Unlocked", nil
}
return "Locked"
return "Locked", nil
}
// Open implements accounts.Wallet, but is a noop for plain wallets since there

View File

@@ -58,6 +58,9 @@ func decryptPreSaleKey(fileContent []byte, password string) (key *Key, err error
if err != nil {
return nil, errors.New("invalid hex in encSeed")
}
if len(encSeedBytes) < 16 {
return nil, errors.New("invalid encSeed, too short")
}
iv := encSeedBytes[:16]
cipherText := encSeedBytes[16:]
/*

View File

@@ -70,7 +70,6 @@ func (w *watcher) loop() {
return
}
defer notify.Stop(w.ev)
logger.Trace("Started watching keystore folder")
defer logger.Trace("Stopped watching keystore folder")
@@ -82,32 +81,28 @@ func (w *watcher) loop() {
// When an event occurs, the reload call is delayed a bit so that
// multiple events arriving quickly only cause a single reload.
var (
debounce = time.NewTimer(0)
debounceDuration = 500 * time.Millisecond
inCycle, hadEvent bool
debounceDuration = 500 * time.Millisecond
rescanTriggered = false
debounce = time.NewTimer(0)
)
// Ignore initial trigger
if !debounce.Stop() {
<-debounce.C
}
defer debounce.Stop()
for {
select {
case <-w.quit:
return
case <-w.ev:
if !inCycle {
// Trigger the scan (with delay), if not already triggered
if !rescanTriggered {
debounce.Reset(debounceDuration)
inCycle = true
} else {
hadEvent = true
rescanTriggered = true
}
case <-debounce.C:
w.ac.mu.Lock()
w.ac.reload()
w.ac.mu.Unlock()
if hadEvent {
debounce.Reset(debounceDuration)
inCycle, hadEvent = true, false
} else {
inCycle, hadEvent = false, false
}
w.ac.scanAccounts()
rescanTriggered = false
}
}
}
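The rewritten loop above collapses a burst of filesystem events into a single delayed rescan: the first event arms the timer, later events in the same burst are ignored, and the rescan runs once the timer fires. A self-contained sketch of that debounce pattern; the event source and the rescan body are stand-ins:

package main

import (
    "fmt"
    "time"
)

func main() {
    events := make(chan struct{})
    go func() { // simulate a quick burst of filesystem notifications
        for i := 0; i < 5; i++ {
            events <- struct{}{}
            time.Sleep(50 * time.Millisecond)
        }
    }()

    const debounceDuration = 500 * time.Millisecond
    debounce := time.NewTimer(0)
    if !debounce.Stop() { // ignore the initial trigger, as watcher.loop does
        <-debounce.C
    }
    rescanTriggered := false
    deadline := time.After(2 * time.Second) // stop the demo eventually

    for {
        select {
        case <-deadline:
            return
        case <-events:
            if !rescanTriggered { // arm the timer only once per burst
                debounce.Reset(debounceDuration)
                rescanTriggered = true
            }
        case <-debounce.C:
            fmt.Println("rescan keystore") // the real loop calls ac.scanAccounts() here
            rescanTriggered = false
        }
    }
}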

View File

@@ -41,6 +41,11 @@ type Manager struct {
// NewManager creates a generic account manager to sign transaction via various
// supported backends.
func NewManager(backends ...Backend) *Manager {
// Retrieve the initial list of wallets from the backends and sort by URL
var wallets []Wallet
for _, backend := range backends {
wallets = merge(wallets, backend.Wallets()...)
}
// Subscribe to wallet notifications from all backends
updates := make(chan WalletEvent, 4*len(backends))
@@ -48,11 +53,6 @@ func NewManager(backends ...Backend) *Manager {
for i, backend := range backends {
subs[i] = backend.Subscribe(updates)
}
// Retrieve the initial list of wallets from the backends and sort by URL
var wallets []Wallet
for _, backend := range backends {
wallets = merge(wallets, backend.Wallets()...)
}
// Assemble the account manager and return
am := &Manager{
backends: make(map[reflect.Type][]Backend),
@@ -96,9 +96,10 @@ func (am *Manager) update() {
case event := <-am.updates:
// Wallet event arrived, update local cache
am.lock.Lock()
if event.Arrive {
switch event.Kind {
case WalletArrived:
am.wallets = merge(am.wallets, event.Wallet)
} else {
case WalletDropped:
am.wallets = drop(am.wallets, event.Wallet)
}
am.lock.Unlock()
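With the Arrive flag replaced by a Kind field, subscribers now switch on the event type instead of branching on a boolean. A minimal sketch of a subscriber doing exactly that, assuming a placeholder keystore directory; the loop simply runs until the process exits:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/accounts"
    "github.com/ethereum/go-ethereum/accounts/keystore"
)

func main() {
    // Placeholder directory and light scrypt parameters.
    ks := keystore.NewKeyStore("/tmp/keystore-demo", keystore.LightScryptN, keystore.LightScryptP)

    sink := make(chan accounts.WalletEvent, 16)
    sub := ks.Subscribe(sink)
    defer sub.Unsubscribe()

    for event := range sink {
        switch event.Kind {
        case accounts.WalletArrived:
            fmt.Println("wallet arrived:", event.Wallet.URL())
        case accounts.WalletDropped:
            fmt.Println("wallet dropped:", event.Wallet.URL())
        }
    }
}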

View File

@@ -74,6 +74,22 @@ func (u URL) MarshalJSON() ([]byte, error) {
return json.Marshal(u.String())
}
// UnmarshalJSON parses url.
func (u *URL) UnmarshalJSON(input []byte) error {
var textURL string
err := json.Unmarshal(input, &textURL)
if err != nil {
return err
}
url, err := parseURL(textURL)
if err != nil {
return err
}
u.Scheme = url.Scheme
u.Path = url.Path
return nil
}
// Cmp compares x and y and returns:
//
// -1 if x < y

96
accounts/url_test.go Normal file
View File

@@ -0,0 +1,96 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"testing"
)
func TestURLParsing(t *testing.T) {
url, err := parseURL("https://ethereum.org")
if err != nil {
t.Errorf("unexpected error: %v", err)
}
if url.Scheme != "https" {
t.Errorf("expected: %v, got: %v", "https", url.Scheme)
}
if url.Path != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "ethereum.org", url.Path)
}
_, err = parseURL("ethereum.org")
if err == nil {
t.Error("expected err, got: nil")
}
}
func TestURLString(t *testing.T) {
url := URL{Scheme: "https", Path: "ethereum.org"}
if url.String() != "https://ethereum.org" {
t.Errorf("expected: %v, got: %v", "https://ethereum.org", url.String())
}
url = URL{Scheme: "", Path: "ethereum.org"}
if url.String() != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "ethereum.org", url.String())
}
}
func TestURLMarshalJSON(t *testing.T) {
url := URL{Scheme: "https", Path: "ethereum.org"}
json, err := url.MarshalJSON()
if err != nil {
t.Errorf("unexpcted error: %v", err)
}
if string(json) != "\"https://ethereum.org\"" {
t.Errorf("expected: %v, got: %v", "\"https://ethereum.org\"", string(json))
}
}
func TestURLUnmarshalJSON(t *testing.T) {
url := &URL{}
err := url.UnmarshalJSON([]byte("\"https://ethereum.org\""))
if err != nil {
t.Errorf("unexpcted error: %v", err)
}
if url.Scheme != "https" {
t.Errorf("expected: %v, got: %v", "https", url.Scheme)
}
if url.Path != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "https", url.Path)
}
}
func TestURLComparison(t *testing.T) {
tests := []struct {
urlA URL
urlB URL
expect int
}{
{URL{"https", "ethereum.org"}, URL{"https", "ethereum.org"}, 0},
{URL{"http", "ethereum.org"}, URL{"https", "ethereum.org"}, -1},
{URL{"https", "ethereum.org/a"}, URL{"https", "ethereum.org"}, 1},
{URL{"https", "abc.org"}, URL{"https", "ethereum.org"}, -1},
}
for i, tt := range tests {
result := tt.urlA.Cmp(tt.urlB)
if result != tt.expect {
t.Errorf("test %d: cmp mismatch: expected: %d, got: %d", i, tt.expect, result)
}
}
}

238
accounts/usbwallet/hub.go Normal file
View File

@@ -0,0 +1,238 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package usbwallet
import (
"errors"
"runtime"
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
)
// LedgerScheme is the protocol scheme prefixing account and wallet URLs.
const LedgerScheme = "ledger"
// TrezorScheme is the protocol scheme prefixing account and wallet URLs.
const TrezorScheme = "trezor"
// refreshCycle is the maximum time between wallet refreshes (if USB hotplug
// notifications don't work).
const refreshCycle = time.Second
// refreshThrottling is the minimum time between wallet refreshes to avoid USB
// thrashing.
const refreshThrottling = 500 * time.Millisecond
// Hub is an accounts.Backend that can find and handle generic USB hardware wallets.
type Hub struct {
scheme string // Protocol scheme prefixing account and wallet URLs.
vendorID uint16 // USB vendor identifier used for device discovery
productIDs []uint16 // USB product identifiers used for device discovery
usageID uint16 // USB usage page identifier used for macOS device discovery
endpointID int // USB endpoint identifier used for non-macOS device discovery
makeDriver func(log.Logger) driver // Factory method to construct a vendor specific driver
refreshed time.Time // Time instance when the list of wallets was last refreshed
wallets []accounts.Wallet // List of USB wallet devices currently tracking
updateFeed event.Feed // Event feed to notify wallet additions/removals
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
quit chan chan error
stateLock sync.RWMutex // Protects the internals of the hub from racy access
// TODO(karalabe): remove if hotplug lands on Windows
commsPend int // Number of operations blocking enumeration
commsLock sync.Mutex // Lock protecting the pending counter and enumeration
}
// NewLedgerHub creates a new hardware wallet manager for Ledger devices.
func NewLedgerHub() (*Hub, error) {
return newHub(LedgerScheme, 0x2c97, []uint16{0x0000 /* Ledger Blue */, 0x0001 /* Ledger Nano S */}, 0xffa0, 0, newLedgerDriver)
}
// NewTrezorHub creates a new hardware wallet manager for Trezor devices.
func NewTrezorHub() (*Hub, error) {
return newHub(TrezorScheme, 0x534c, []uint16{0x0001 /* Trezor 1 */}, 0xff00, 0, newTrezorDriver)
}
// newHub creates a new hardware wallet manager for generic USB devices.
func newHub(scheme string, vendorID uint16, productIDs []uint16, usageID uint16, endpointID int, makeDriver func(log.Logger) driver) (*Hub, error) {
if !hid.Supported() {
return nil, errors.New("unsupported platform")
}
hub := &Hub{
scheme: scheme,
vendorID: vendorID,
productIDs: productIDs,
usageID: usageID,
endpointID: endpointID,
makeDriver: makeDriver,
quit: make(chan chan error),
}
hub.refreshWallets()
return hub, nil
}
// Wallets implements accounts.Backend, returning all the currently tracked USB
// devices that appear to be hardware wallets.
func (hub *Hub) Wallets() []accounts.Wallet {
// Make sure the list of wallets is up to date
hub.refreshWallets()
hub.stateLock.RLock()
defer hub.stateLock.RUnlock()
cpy := make([]accounts.Wallet, len(hub.wallets))
copy(cpy, hub.wallets)
return cpy
}
// refreshWallets scans the USB devices attached to the machine and updates the
// list of wallets based on the found devices.
func (hub *Hub) refreshWallets() {
// Don't scan the USB like crazy if the user fetches wallets in a loop
hub.stateLock.RLock()
elapsed := time.Since(hub.refreshed)
hub.stateLock.RUnlock()
if elapsed < refreshThrottling {
return
}
// Retrieve the current list of USB wallet devices
var devices []hid.DeviceInfo
if runtime.GOOS == "linux" {
// hidapi on Linux opens the device during enumeration to retrieve some info,
// breaking the Ledger protocol if the device is waiting for user confirmation.
// This is a bug acknowledged by Ledger, but it won't be fixed on old devices, so
// we need to prevent concurrent comms ourselves. The more elegant solution would
// be to ditch enumeration in favor of hotplug events, but those don't work on
// Windows yet, so since we need a workaround anyway, this is the more elegant option for now.
hub.commsLock.Lock()
if hub.commsPend > 0 { // A confirmation is pending, don't refresh
hub.commsLock.Unlock()
return
}
}
for _, info := range hid.Enumerate(hub.vendorID, 0) {
for _, id := range hub.productIDs {
if info.ProductID == id && (info.UsagePage == hub.usageID || info.Interface == hub.endpointID) {
devices = append(devices, info)
break
}
}
}
if runtime.GOOS == "linux" {
// See rationale before the enumeration why this is needed and only on Linux.
hub.commsLock.Unlock()
}
// Transform the current list of wallets into the new one
hub.stateLock.Lock()
wallets := make([]accounts.Wallet, 0, len(devices))
events := []accounts.WalletEvent{}
for _, device := range devices {
url := accounts.URL{Scheme: hub.scheme, Path: device.Path}
// Drop wallets in front of the next device or those that failed for some reason
for len(hub.wallets) > 0 {
// Abort if we're past the current device and found an operational one
_, failure := hub.wallets[0].Status()
if hub.wallets[0].URL().Cmp(url) >= 0 || failure == nil {
break
}
// Drop the stale and failed devices
events = append(events, accounts.WalletEvent{Wallet: hub.wallets[0], Kind: accounts.WalletDropped})
hub.wallets = hub.wallets[1:]
}
// If there are no more wallets or the device is before the next, wrap new wallet
if len(hub.wallets) == 0 || hub.wallets[0].URL().Cmp(url) > 0 {
logger := log.New("url", url)
wallet := &wallet{hub: hub, driver: hub.makeDriver(logger), url: &url, info: device, log: logger}
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletArrived})
wallets = append(wallets, wallet)
continue
}
// If the device is the same as the first wallet, keep it
if hub.wallets[0].URL().Cmp(url) == 0 {
wallets = append(wallets, hub.wallets[0])
hub.wallets = hub.wallets[1:]
continue
}
}
// Drop any leftover wallets and set the new batch
for _, wallet := range hub.wallets {
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletDropped})
}
hub.refreshed = time.Now()
hub.wallets = wallets
hub.stateLock.Unlock()
// Fire all wallet events and return
for _, event := range events {
hub.updateFeed.Send(event)
}
}
// Subscribe implements accounts.Backend, creating an async subscription to
// receive notifications on the addition or removal of USB wallets.
func (hub *Hub) Subscribe(sink chan<- accounts.WalletEvent) event.Subscription {
// We need the mutex to reliably start/stop the update loop
hub.stateLock.Lock()
defer hub.stateLock.Unlock()
// Subscribe the caller and track the subscriber count
sub := hub.updateScope.Track(hub.updateFeed.Subscribe(sink))
// Subscribers require an active notification loop, start it
if !hub.updating {
hub.updating = true
go hub.updater()
}
return sub
}
// updater is responsible for maintaining an up-to-date list of wallets managed
// by the USB hub, and for firing wallet addition/removal events.
func (hub *Hub) updater() {
for {
// TODO: Wait for a USB hotplug event (not supported yet) or a refresh timeout
// <-hub.changes
time.Sleep(refreshCycle)
// Run the wallet refresher
hub.refreshWallets()
// If all our subscribers left, stop the updater
hub.stateLock.Lock()
if hub.updateScope.Count() == 0 {
hub.updating = false
hub.stateLock.Unlock()
return
}
hub.stateLock.Unlock()
}
}
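A minimal sketch of how a hub like the one above is consumed by client code: enumerate the current wallets once, then listen on the subscription feed for arrivals and drops. Error handling is trimmed and the output is illustrative only:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/accounts"
    "github.com/ethereum/go-ethereum/accounts/usbwallet"
)

func main() {
    hub, err := usbwallet.NewLedgerHub()
    if err != nil {
        fmt.Println("ledger hub unavailable:", err) // e.g. an unsupported platform
        return
    }
    // One-off enumeration of the currently attached wallets.
    for _, w := range hub.Wallets() {
        fmt.Println("found wallet:", w.URL())
    }

    // Subsequent arrivals and removals are delivered on the event feed.
    sink := make(chan accounts.WalletEvent, 4)
    sub := hub.Subscribe(sink)
    defer sub.Unsubscribe()

    for event := range sink {
        fmt.Println("wallet event:", event.Kind, event.Wallet.URL())
    }
}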

File diff suppressed because it is too large

View File

@@ -0,0 +1,905 @@
// This file originates from the SatoshiLabs Trezor `common` repository at:
// https://github.com/trezor/trezor-common/blob/master/protob/messages.proto
// dated 28.07.2017, commit dd8ec3231fb5f7992360aff9bdfe30bb58130f4b.
syntax = "proto2";
/**
* Messages for TREZOR communication
*/
// Sugar for easier handling in Java
option java_package = "com.satoshilabs.trezor.lib.protobuf";
option java_outer_classname = "TrezorMessage";
import "types.proto";
/**
* Mapping between Trezor wire identifier (uint) and a protobuf message
*/
enum MessageType {
MessageType_Initialize = 0 [(wire_in) = true];
MessageType_Ping = 1 [(wire_in) = true];
MessageType_Success = 2 [(wire_out) = true];
MessageType_Failure = 3 [(wire_out) = true];
MessageType_ChangePin = 4 [(wire_in) = true];
MessageType_WipeDevice = 5 [(wire_in) = true];
MessageType_FirmwareErase = 6 [(wire_in) = true, (wire_bootloader) = true];
MessageType_FirmwareUpload = 7 [(wire_in) = true, (wire_bootloader) = true];
MessageType_FirmwareRequest = 8 [(wire_out) = true, (wire_bootloader) = true];
MessageType_GetEntropy = 9 [(wire_in) = true];
MessageType_Entropy = 10 [(wire_out) = true];
MessageType_GetPublicKey = 11 [(wire_in) = true];
MessageType_PublicKey = 12 [(wire_out) = true];
MessageType_LoadDevice = 13 [(wire_in) = true];
MessageType_ResetDevice = 14 [(wire_in) = true];
MessageType_SignTx = 15 [(wire_in) = true];
MessageType_SimpleSignTx = 16 [(wire_in) = true, deprecated = true];
MessageType_Features = 17 [(wire_out) = true];
MessageType_PinMatrixRequest = 18 [(wire_out) = true];
MessageType_PinMatrixAck = 19 [(wire_in) = true, (wire_tiny) = true];
MessageType_Cancel = 20 [(wire_in) = true];
MessageType_TxRequest = 21 [(wire_out) = true];
MessageType_TxAck = 22 [(wire_in) = true];
MessageType_CipherKeyValue = 23 [(wire_in) = true];
MessageType_ClearSession = 24 [(wire_in) = true];
MessageType_ApplySettings = 25 [(wire_in) = true];
MessageType_ButtonRequest = 26 [(wire_out) = true];
MessageType_ButtonAck = 27 [(wire_in) = true, (wire_tiny) = true];
MessageType_ApplyFlags = 28 [(wire_in) = true];
MessageType_GetAddress = 29 [(wire_in) = true];
MessageType_Address = 30 [(wire_out) = true];
MessageType_SelfTest = 32 [(wire_in) = true, (wire_bootloader) = true];
MessageType_BackupDevice = 34 [(wire_in) = true];
MessageType_EntropyRequest = 35 [(wire_out) = true];
MessageType_EntropyAck = 36 [(wire_in) = true];
MessageType_SignMessage = 38 [(wire_in) = true];
MessageType_VerifyMessage = 39 [(wire_in) = true];
MessageType_MessageSignature = 40 [(wire_out) = true];
MessageType_PassphraseRequest = 41 [(wire_out) = true];
MessageType_PassphraseAck = 42 [(wire_in) = true, (wire_tiny) = true];
MessageType_EstimateTxSize = 43 [(wire_in) = true, deprecated = true];
MessageType_TxSize = 44 [(wire_out) = true, deprecated = true];
MessageType_RecoveryDevice = 45 [(wire_in) = true];
MessageType_WordRequest = 46 [(wire_out) = true];
MessageType_WordAck = 47 [(wire_in) = true];
MessageType_CipheredKeyValue = 48 [(wire_out) = true];
MessageType_EncryptMessage = 49 [(wire_in) = true, deprecated = true];
MessageType_EncryptedMessage = 50 [(wire_out) = true, deprecated = true];
MessageType_DecryptMessage = 51 [(wire_in) = true, deprecated = true];
MessageType_DecryptedMessage = 52 [(wire_out) = true, deprecated = true];
MessageType_SignIdentity = 53 [(wire_in) = true];
MessageType_SignedIdentity = 54 [(wire_out) = true];
MessageType_GetFeatures = 55 [(wire_in) = true];
MessageType_EthereumGetAddress = 56 [(wire_in) = true];
MessageType_EthereumAddress = 57 [(wire_out) = true];
MessageType_EthereumSignTx = 58 [(wire_in) = true];
MessageType_EthereumTxRequest = 59 [(wire_out) = true];
MessageType_EthereumTxAck = 60 [(wire_in) = true];
MessageType_GetECDHSessionKey = 61 [(wire_in) = true];
MessageType_ECDHSessionKey = 62 [(wire_out) = true];
MessageType_SetU2FCounter = 63 [(wire_in) = true];
MessageType_EthereumSignMessage = 64 [(wire_in) = true];
MessageType_EthereumVerifyMessage = 65 [(wire_in) = true];
MessageType_EthereumMessageSignature = 66 [(wire_out) = true];
MessageType_DebugLinkDecision = 100 [(wire_debug_in) = true, (wire_tiny) = true];
MessageType_DebugLinkGetState = 101 [(wire_debug_in) = true];
MessageType_DebugLinkState = 102 [(wire_debug_out) = true];
MessageType_DebugLinkStop = 103 [(wire_debug_in) = true];
MessageType_DebugLinkLog = 104 [(wire_debug_out) = true];
MessageType_DebugLinkMemoryRead = 110 [(wire_debug_in) = true];
MessageType_DebugLinkMemory = 111 [(wire_debug_out) = true];
MessageType_DebugLinkMemoryWrite = 112 [(wire_debug_in) = true];
MessageType_DebugLinkFlashErase = 113 [(wire_debug_in) = true];
}
////////////////////
// Basic messages //
////////////////////
/**
* Request: Reset device to default state and ask for device details
* @next Features
*/
message Initialize {
}
/**
* Request: Ask for device details (no device reset)
* @next Features
*/
message GetFeatures {
}
/**
* Response: Reports various information about the device
* @prev Initialize
* @prev GetFeatures
*/
message Features {
optional string vendor = 1; // name of the manufacturer, e.g. "bitcointrezor.com"
optional uint32 major_version = 2; // major version of the device, e.g. 1
optional uint32 minor_version = 3; // minor version of the device, e.g. 0
optional uint32 patch_version = 4; // patch version of the device, e.g. 0
optional bool bootloader_mode = 5; // is device in bootloader mode?
optional string device_id = 6; // device's unique identifier
optional bool pin_protection = 7; // is device protected by PIN?
optional bool passphrase_protection = 8; // is node/mnemonic encrypted using passphrase?
optional string language = 9; // device language
optional string label = 10; // device description label
repeated CoinType coins = 11; // supported coins
optional bool initialized = 12; // does device contain seed?
optional bytes revision = 13; // SCM revision of firmware
optional bytes bootloader_hash = 14; // hash of the bootloader
optional bool imported = 15; // was storage imported from an external source?
optional bool pin_cached = 16; // is PIN already cached in session?
optional bool passphrase_cached = 17; // is passphrase already cached in session?
optional bool firmware_present = 18; // is valid firmware loaded?
optional bool needs_backup = 19; // does storage need backup? (equals to Storage.needs_backup)
optional uint32 flags = 20; // device flags (equals to Storage.flags)
}
/**
* Request: clear session (removes cached PIN, passphrase, etc).
* @next Success
*/
message ClearSession {
}
/**
* Request: change language and/or label of the device
* @next Success
* @next Failure
* @next ButtonRequest
* @next PinMatrixRequest
*/
message ApplySettings {
optional string language = 1;
optional string label = 2;
optional bool use_passphrase = 3;
optional bytes homescreen = 4;
}
/**
* Request: set flags of the device
* @next Success
* @next Failure
*/
message ApplyFlags {
optional uint32 flags = 1; // bitmask, can only set bits, not unset
}
/**
* Request: Starts workflow for setting/changing/removing the PIN
* @next ButtonRequest
* @next PinMatrixRequest
*/
message ChangePin {
optional bool remove = 1; // is PIN removal requested?
}
/**
* Request: Test if the device is alive, device sends back the message in Success response
* @next Success
*/
message Ping {
optional string message = 1; // message to send back in Success message
optional bool button_protection = 2; // ask for button press
optional bool pin_protection = 3; // ask for PIN if set in device
optional bool passphrase_protection = 4; // ask for passphrase if set in device
}
/**
* Response: Success of the previous request
*/
message Success {
optional string message = 1; // human readable description of action or request-specific payload
}
/**
* Response: Failure of the previous request
*/
message Failure {
optional FailureType code = 1; // computer-readable definition of the error state
optional string message = 2; // human-readable message of the error state
}
/**
* Response: Device is waiting for HW button press.
* @next ButtonAck
* @next Cancel
*/
message ButtonRequest {
optional ButtonRequestType code = 1;
optional string data = 2;
}
/**
* Request: Computer agrees to wait for HW button press
* @prev ButtonRequest
*/
message ButtonAck {
}
/**
* Response: Device is asking computer to show PIN matrix and awaits PIN encoded using this matrix scheme
* @next PinMatrixAck
* @next Cancel
*/
message PinMatrixRequest {
optional PinMatrixRequestType type = 1;
}
/**
* Request: Computer responds with encoded PIN
* @prev PinMatrixRequest
*/
message PinMatrixAck {
required string pin = 1; // matrix encoded PIN entered by user
}
/**
* Request: Abort last operation that required user interaction
* @prev ButtonRequest
* @prev PinMatrixRequest
* @prev PassphraseRequest
*/
message Cancel {
}
/**
* Response: Device awaits encryption passphrase
* @next PassphraseAck
* @next Cancel
*/
message PassphraseRequest {
}
/**
* Request: Send passphrase back
* @prev PassphraseRequest
*/
message PassphraseAck {
required string passphrase = 1;
}
/**
* Request: Request a sample of random data generated by hardware RNG. May be used for testing.
* @next ButtonRequest
* @next Entropy
* @next Failure
*/
message GetEntropy {
required uint32 size = 1; // size of requested entropy
}
/**
* Response: Reply with random data generated by internal RNG
* @prev GetEntropy
*/
message Entropy {
required bytes entropy = 1; // stream of random generated bytes
}
/**
* Request: Ask device for public key corresponding to address_n path
* @next PassphraseRequest
* @next PublicKey
* @next Failure
*/
message GetPublicKey {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional string ecdsa_curve_name = 2; // ECDSA curve name to use
optional bool show_display = 3; // optionally show on display before sending the result
optional string coin_name = 4 [default='Bitcoin'];
}
/**
* Response: Contains public key derived from device private seed
* @prev GetPublicKey
*/
message PublicKey {
required HDNodeType node = 1; // BIP32 public node
optional string xpub = 2; // serialized form of public node
}
/**
* Request: Ask device for address corresponding to address_n path
* @next PassphraseRequest
* @next Address
* @next Failure
*/
message GetAddress {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional string coin_name = 2 [default='Bitcoin'];
optional bool show_display = 3 ; // optionally show on display before sending the result
optional MultisigRedeemScriptType multisig = 4; // filled if we are showing a multisig address
optional InputScriptType script_type = 5 [default=SPENDADDRESS]; // used to distinguish between various address formats (non-segwit, segwit, etc.)
}
/**
* Request: Ask device for Ethereum address corresponding to address_n path
* @next PassphraseRequest
* @next EthereumAddress
* @next Failure
*/
message EthereumGetAddress {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional bool show_display = 2; // optionally show on display before sending the result
}
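The address_n field above carries the BIP-32 derivation path as raw uint32 components, with the hardened flag in the top bit; this is the same encoding exercised by the whitespace-tolerant DerivationPath test at the top of this change set. A tiny self-contained illustration of how "m/44'/60'/0'/0" maps onto those components:

package main

import "fmt"

func main() {
    const hardened = 0x80000000
    // m/44'/60'/0'/0 — the standard Ethereum path, expressed as address_n values.
    addressN := []uint32{hardened + 44, hardened + 60, hardened + 0, 0}
    fmt.Println("address_n:", addressN)
}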
/**
* Response: Contains address derived from device private seed
* @prev GetAddress
*/
message Address {
required string address = 1; // Coin address in Base58 encoding
}
/**
* Response: Contains an Ethereum address derived from device private seed
* @prev EthereumGetAddress
*/
message EthereumAddress {
required bytes address = 1; // Coin address as an Ethereum 160 bit hash
}
/**
* Request: Request device to wipe all sensitive data and settings
* @next ButtonRequest
*/
message WipeDevice {
}
/**
* Request: Load seed and related internal settings from the computer
* @next ButtonRequest
* @next Success
* @next Failure
*/
message LoadDevice {
optional string mnemonic = 1; // seed encoded as BIP-39 mnemonic (12, 18 or 24 words)
optional HDNodeType node = 2; // BIP-32 node
optional string pin = 3; // set PIN protection
optional bool passphrase_protection = 4; // enable master node encryption using passphrase
optional string language = 5 [default='english']; // device language
optional string label = 6; // device label
optional bool skip_checksum = 7; // do not test mnemonic for valid BIP-39 checksum
optional uint32 u2f_counter = 8; // U2F counter
}
/**
* Request: Ask device to do initialization involving user interaction
* @next EntropyRequest
* @next Failure
*/
message ResetDevice {
optional bool display_random = 1; // display entropy generated by the device before asking for additional entropy
optional uint32 strength = 2 [default=256]; // strength of seed in bits
optional bool passphrase_protection = 3; // enable master node encryption using passphrase
optional bool pin_protection = 4; // enable PIN protection
optional string language = 5 [default='english']; // device language
optional string label = 6; // device label
optional uint32 u2f_counter = 7; // U2F counter
optional bool skip_backup = 8; // postpone seed backup to BackupDevice workflow
}
/**
* Request: Perform backup of the device seed if not backed up using ResetDevice
* @next ButtonRequest
*/
message BackupDevice {
}
/**
* Response: Ask for additional entropy from host computer
* @prev ResetDevice
* @next EntropyAck
*/
message EntropyRequest {
}
/**
* Request: Provide additional entropy for seed generation function
* @prev EntropyRequest
* @next ButtonRequest
*/
message EntropyAck {
optional bytes entropy = 1; // 256 bits (32 bytes) of random data
}
/**
* Request: Start recovery workflow asking user for specific words of mnemonic
* Used to recover the device safely even on an untrusted computer.
* @next WordRequest
*/
message RecoveryDevice {
optional uint32 word_count = 1; // number of words in BIP-39 mnemonic
optional bool passphrase_protection = 2; // enable master node encryption using passphrase
optional bool pin_protection = 3; // enable PIN protection
optional string language = 4 [default='english']; // device language
optional string label = 5; // device label
optional bool enforce_wordlist = 6; // enforce BIP-39 wordlist during the process
// 7 reserved for unused recovery method
optional uint32 type = 8; // supported recovery type (see RecoveryType)
optional uint32 u2f_counter = 9; // U2F counter
optional bool dry_run = 10; // perform dry-run recovery workflow (for safe mnemonic validation)
}
/**
* Response: Device is waiting for user to enter word of the mnemonic
* Its position is shown only on device's internal display.
* @prev RecoveryDevice
* @prev WordAck
*/
message WordRequest {
optional WordRequestType type = 1;
}
/**
* Request: Computer replies with word from the mnemonic
* @prev WordRequest
* @next WordRequest
* @next Success
* @next Failure
*/
message WordAck {
required string word = 1; // one word of mnemonic on asked position
}
//////////////////////////////
// Message signing messages //
//////////////////////////////
/**
* Request: Ask device to sign message
* @next MessageSignature
* @next Failure
*/
message SignMessage {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
required bytes message = 2; // message to be signed
optional string coin_name = 3 [default='Bitcoin']; // coin to use for signing
optional InputScriptType script_type = 4 [default=SPENDADDRESS]; // used to distinguish between various address formats (non-segwit, segwit, etc.)
}
/**
* Request: Ask device to verify message
* @next Success
* @next Failure
*/
message VerifyMessage {
optional string address = 1; // address to verify
optional bytes signature = 2; // signature to verify
optional bytes message = 3; // message to verify
optional string coin_name = 4 [default='Bitcoin']; // coin to use for verifying
}
/**
* Response: Signed message
* @prev SignMessage
*/
message MessageSignature {
optional string address = 1; // address used to sign the message
optional bytes signature = 2; // signature of the message
}
///////////////////////////
// Encryption/decryption //
///////////////////////////
/**
* Request: Ask device to encrypt message
* @next EncryptedMessage
* @next Failure
*/
message EncryptMessage {
optional bytes pubkey = 1; // public key
optional bytes message = 2; // message to encrypt
optional bool display_only = 3; // show just on display? (don't send back via wire)
repeated uint32 address_n = 4; // BIP-32 path to derive the signing key from master node
optional string coin_name = 5 [default='Bitcoin']; // coin to use for signing
}
/**
* Response: Encrypted message
* @prev EncryptMessage
*/
message EncryptedMessage {
optional bytes nonce = 1; // nonce used during encryption
optional bytes message = 2; // encrypted message
optional bytes hmac = 3; // message hmac
}
/**
* Request: Ask device to decrypt message
* @next Success
* @next Failure
*/
message DecryptMessage {
repeated uint32 address_n = 1; // BIP-32 path to derive the decryption key from master node
optional bytes nonce = 2; // nonce used during encryption
optional bytes message = 3; // message to decrypt
optional bytes hmac = 4; // message hmac
}
/**
* Response: Decrypted message
* @prev DecryptMessage
*/
message DecryptedMessage {
optional bytes message = 1; // decrypted message
optional string address = 2; // address used to sign the message (if used)
}
/**
* Request: Ask device to encrypt or decrypt value of given key
* @next CipheredKeyValue
* @next Failure
*/
message CipherKeyValue {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional string key = 2; // key component of key:value
optional bytes value = 3; // value component of key:value
optional bool encrypt = 4; // are we encrypting (True) or decrypting (False)?
optional bool ask_on_encrypt = 5; // should we ask on encrypt operation?
optional bool ask_on_decrypt = 6; // should we ask on decrypt operation?
optional bytes iv = 7; // initialization vector (will be computed if not set)
}
/**
* Response: Return ciphered/deciphered value
* @prev CipherKeyValue
*/
message CipheredKeyValue {
optional bytes value = 1; // ciphered/deciphered value
}
//////////////////////////////////
// Transaction signing messages //
//////////////////////////////////
/**
* Request: Estimated size of the transaction
* This behaves exactly like SignTx, which means that it can ask using TxRequest
* This call is non-blocking (except possible PassphraseRequest to unlock the seed)
* @next TxSize
* @next Failure
*/
message EstimateTxSize {
required uint32 outputs_count = 1; // number of transaction outputs
required uint32 inputs_count = 2; // number of transaction inputs
optional string coin_name = 3 [default='Bitcoin']; // coin to use
}
/**
* Response: Estimated size of the transaction
* @prev EstimateTxSize
*/
message TxSize {
optional uint32 tx_size = 1; // estimated size of transaction in bytes
}
/**
* Request: Ask device to sign transaction
* @next PassphraseRequest
* @next PinMatrixRequest
* @next TxRequest
* @next Failure
*/
message SignTx {
required uint32 outputs_count = 1; // number of transaction outputs
required uint32 inputs_count = 2; // number of transaction inputs
optional string coin_name = 3 [default='Bitcoin']; // coin to use
optional uint32 version = 4 [default=1]; // transaction version
optional uint32 lock_time = 5 [default=0]; // transaction lock_time
}
/**
* Request: Simplified transaction signing
* This method doesn't support streaming, so there are hardware limits in number of inputs and outputs.
* In case of success, the result is returned using TxRequest message.
* @next PassphraseRequest
* @next PinMatrixRequest
* @next TxRequest
* @next Failure
*/
message SimpleSignTx {
repeated TxInputType inputs = 1; // transaction inputs
repeated TxOutputType outputs = 2; // transaction outputs
repeated TransactionType transactions = 3; // transactions whose outputs are used to build current inputs
optional string coin_name = 4 [default='Bitcoin']; // coin to use
optional uint32 version = 5 [default=1]; // transaction version
optional uint32 lock_time = 6 [default=0]; // transaction lock_time
}
/**
* Response: Device asks for information for signing transaction or returns the last result
* If request_index is set, device awaits TxAck message (with fields filled in according to request_type)
* If signature_index is set, 'signature' contains signed input of signature_index's input
* @prev SignTx
* @prev SimpleSignTx
* @prev TxAck
*/
message TxRequest {
optional RequestType request_type = 1; // what should be filled in TxAck message?
optional TxRequestDetailsType details = 2; // request for tx details
optional TxRequestSerializedType serialized = 3; // serialized data and request for next
}
/**
* Request: Reported transaction data
* @prev TxRequest
* @next TxRequest
*/
message TxAck {
optional TransactionType tx = 1;
}
/**
* Request: Ask device to sign transaction
* All fields are optional from the protocol's point of view. Each field defaults to value `0` if missing.
* Note: the first at most 1024 bytes of data MUST be transmitted as part of this message.
* @next PassphraseRequest
* @next PinMatrixRequest
* @next EthereumTxRequest
* @next Failure
*/
message EthereumSignTx {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional bytes nonce = 2; // <=256 bit unsigned big endian
optional bytes gas_price = 3; // <=256 bit unsigned big endian (in wei)
optional bytes gas_limit = 4; // <=256 bit unsigned big endian
optional bytes to = 5; // 160 bit address hash
optional bytes value = 6; // <=256 bit unsigned big endian (in wei)
optional bytes data_initial_chunk = 7; // The initial data chunk (<= 1024 bytes)
optional uint32 data_length = 8; // Length of transaction payload
optional uint32 chain_id = 9; // Chain Id for EIP 155
}
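The repeated address_n field above encodes a BIP-32 derivation path as plain uint32 components, with hardened levels marked by the 0x80000000 bit (the usual BIP-32 convention, which the proto itself does not spell out). A minimal Go sketch of how a host might flatten the common Ethereum path m/44'/60'/0'/0:

	// m/44'/60'/0'/0 with hardened indices OR-ed with 0x80000000
	addressN := []uint32{
		0x80000000 + 44, // 44' (BIP-44 purpose)
		0x80000000 + 60, // 60' (Ethereum coin type)
		0x80000000 + 0,  // 0'  (account)
		0,               // 0   (external chain)
	}
	_ = addressN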
/**
* Response: Device asks for more data from transaction payload, or returns the signature.
* If data_length is set, device awaits that many more bytes of payload.
* Otherwise, the signature_* fields contain the computed transaction signature. All three fields will be present.
* @prev EthereumSignTx
* @next EthereumTxAck
*/
message EthereumTxRequest {
optional uint32 data_length = 1; // Number of bytes being requested (<= 1024)
optional uint32 signature_v = 2; // Computed signature (recovery parameter, limited to 27 or 28)
optional bytes signature_r = 3; // Computed signature R component (256 bit)
optional bytes signature_s = 4; // Computed signature S component (256 bit)
}
/**
* Request: Transaction payload data.
* @prev EthereumTxRequest
* @next EthereumTxRequest
*/
message EthereumTxAck {
optional bytes data_chunk = 1; // Bytes from transaction payload (<= 1024 bytes)
}
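Taken together, these three messages define a simple chunked exchange: the first at most 1024 bytes of call data travel inside EthereumSignTx (data_initial_chunk), and the device then requests the remainder via EthereumTxRequest.data_length, each request answered with an EthereumTxAck. A rough host-side sketch in Go, against a hypothetical transport interface (the concrete driver in this change lives in the accounts/usbwallet package):

	// ethTxConn is a hypothetical transport used only for this sketch.
	type ethTxConn interface {
		SignTx(firstChunk []byte, totalLen uint32) (wantMore uint32, err error) // sends EthereumSignTx
		TxAck(chunk []byte) (wantMore uint32, err error)                        // sends EthereumTxAck
	}

	// streamTxPayload feeds a long transaction payload to the device in protocol-sized chunks.
	func streamTxPayload(conn ethTxConn, payload []byte) error {
		first := payload
		if len(first) > 1024 {
			first = first[:1024]
		}
		want, err := conn.SignTx(first, uint32(len(payload)))
		payload = payload[len(first):]
		for err == nil && want > 0 {
			if int(want) > len(payload) {
				want = uint32(len(payload)) // defensive clamp against a misbehaving device
			}
			chunk := payload[:want]
			payload = payload[want:]
			want, err = conn.TxAck(chunk)
		}
		return err // nil with want == 0 means the signature_* fields have been returned
	}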
////////////////////////////////////////
// Ethereum: Message signing messages //
////////////////////////////////////////
/**
* Request: Ask device to sign message
* @next EthereumMessageSignature
* @next Failure
*/
message EthereumSignMessage {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
required bytes message = 2; // message to be signed
}
/**
* Request: Ask device to verify message
* @next Success
* @next Failure
*/
message EthereumVerifyMessage {
optional bytes address = 1; // address to verify
optional bytes signature = 2; // signature to verify
optional bytes message = 3; // message to verify
}
/**
* Response: Signed message
* @prev EthereumSignMessage
*/
message EthereumMessageSignature {
optional bytes address = 1; // address used to sign the message
optional bytes signature = 2; // signature of the message
}
///////////////////////
// Identity messages //
///////////////////////
/**
* Request: Ask device to sign identity
* @next SignedIdentity
* @next Failure
*/
message SignIdentity {
optional IdentityType identity = 1; // identity
optional bytes challenge_hidden = 2; // non-visible challenge
optional string challenge_visual = 3; // challenge shown on display (e.g. date+time)
optional string ecdsa_curve_name = 4; // ECDSA curve name to use
}
/**
* Response: Device provides signed identity
* @prev SignIdentity
*/
message SignedIdentity {
optional string address = 1; // identity address
optional bytes public_key = 2; // identity public key
optional bytes signature = 3; // signature of the identity data
}
///////////////////
// ECDH messages //
///////////////////
/**
* Request: Ask device to generate ECDH session key
* @next ECDHSessionKey
* @next Failure
*/
message GetECDHSessionKey {
optional IdentityType identity = 1; // identity
optional bytes peer_public_key = 2; // peer's public key
optional string ecdsa_curve_name = 3; // ECDSA curve name to use
}
/**
* Response: Device provides ECDH session key
* @prev GetECDHSessionKey
*/
message ECDHSessionKey {
optional bytes session_key = 1; // ECDH session key
}
///////////////////
// U2F messages //
///////////////////
/**
* Request: Set U2F counter
* @next Success
*/
message SetU2FCounter {
optional uint32 u2f_counter = 1; // counter
}
/////////////////////////
// Bootloader messages //
/////////////////////////
/**
* Request: Ask device to erase its firmware (so it can be replaced via FirmwareUpload)
* @next Success
* @next FirmwareRequest
* @next Failure
*/
message FirmwareErase {
optional uint32 length = 1; // length of new firmware
}
/**
* Response: Ask for firmware chunk
* @next FirmwareUpload
*/
message FirmwareRequest {
optional uint32 offset = 1; // offset of requested firmware chunk
optional uint32 length = 2; // length of requested firmware chunk
}
/**
* Request: Send firmware in binary form to the device
* @next Success
* @next Failure
*/
message FirmwareUpload {
required bytes payload = 1; // firmware to be loaded into device
optional bytes hash = 2; // hash of the payload
}
/**
* Request: Perform a device self-test
* @next Success
* @next Failure
*/
message SelfTest {
optional bytes payload = 1; // payload to be used in self-test
}
/////////////////////////////////////////////////////////////
// Debug messages (only available if DebugLink is enabled) //
/////////////////////////////////////////////////////////////
/**
* Request: "Press" the button on the device
* @next Success
*/
message DebugLinkDecision {
required bool yes_no = 1; // true for "Confirm", false for "Cancel"
}
/**
* Request: Computer asks for device state
* @next DebugLinkState
*/
message DebugLinkGetState {
}
/**
* Response: Device current state
* @prev DebugLinkGetState
*/
message DebugLinkState {
optional bytes layout = 1; // raw buffer of display
optional string pin = 2; // current PIN, blank if PIN is not set/enabled
optional string matrix = 3; // current PIN matrix
optional string mnemonic = 4; // current BIP-39 mnemonic
optional HDNodeType node = 5; // current BIP-32 node
optional bool passphrase_protection = 6; // is node/mnemonic encrypted using passphrase?
optional string reset_word = 7; // word on device display during ResetDevice workflow
optional bytes reset_entropy = 8; // current entropy during ResetDevice workflow
optional string recovery_fake_word = 9; // (fake) word on display during RecoveryDevice workflow
optional uint32 recovery_word_pos = 10; // index of mnemonic word the device is expecting during RecoveryDevice workflow
}
/**
* Request: Ask device to restart
*/
message DebugLinkStop {
}
/**
* Response: Device wants host to log event
*/
message DebugLinkLog {
optional uint32 level = 1;
optional string bucket = 2;
optional string text = 3;
}
/**
* Request: Read memory from device
* @next DebugLinkMemory
*/
message DebugLinkMemoryRead {
optional uint32 address = 1;
optional uint32 length = 2;
}
/**
* Response: Device sends memory back
* @prev DebugLinkMemoryRead
*/
message DebugLinkMemory {
optional bytes memory = 1;
}
/**
* Request: Write memory to device.
* WARNING: Writing to the wrong location can irreparably break the device.
*/
message DebugLinkMemoryWrite {
optional uint32 address = 1;
optional bytes memory = 2;
optional bool flash = 3;
}
/**
* Request: Erase block of flash on device
* WARNING: Writing to the wrong location can irreparably break the device.
*/
message DebugLinkFlashErase {
optional uint32 sector = 1;
}


@@ -0,0 +1,46 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Trezor hardware
// wallets. The wire protocol spec can be found on the SatoshiLabs website:
// https://doc.satoshilabs.com/trezor-tech/api-protobuf.html
//go:generate protoc --go_out=import_path=trezor:. types.proto messages.proto
// Package trezor contains the wire protocol wrapper in Go.
package trezor
import (
"reflect"
"github.com/golang/protobuf/proto"
)
// Type returns the protocol buffer type number of a specific message. If the
// message is nil, this method panics!
func Type(msg proto.Message) uint16 {
return uint16(MessageType_value["MessageType_"+reflect.TypeOf(msg).Elem().Name()])
}
// Name returns the friendly message type name of a specific protocol buffer
// type number.
func Name(kind uint16) string {
name := MessageType_name[int32(kind)]
if len(name) < 12 {
return name
}
return name[12:]
}
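// A short illustration of the two helpers above (it assumes the generated
// bindings in this package include the Initialize message, as the upstream
// Trezor protocol defines one; purely an example, not used by the driver):
func exampleTypeName() (uint16, string) {
	kind := Type(new(Initialize)) // numeric value of MessageType_Initialize
	return kind, Name(kind)       // Name strips the "MessageType_" prefix, yielding "Initialize"
}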

File diff suppressed because it is too large


@@ -0,0 +1,278 @@
// This file originates from the SatoshiLabs Trezor `common` repository at:
// https://github.com/trezor/trezor-common/blob/master/protob/types.proto
// dated 28.07.2017, commit dd8ec3231fb5f7992360aff9bdfe30bb58130f4b.
syntax = "proto2";
/**
* Types for TREZOR communication
*
* @author Marek Palatinus <slush@satoshilabs.com>
* @version 1.2
*/
// Sugar for easier handling in Java
option java_package = "com.satoshilabs.trezor.lib.protobuf";
option java_outer_classname = "TrezorType";
import "google/protobuf/descriptor.proto";
/**
* Options for specifying message direction and type of wire (normal/debug)
*/
extend google.protobuf.EnumValueOptions {
optional bool wire_in = 50002; // message can be transmitted via wire from PC to TREZOR
optional bool wire_out = 50003; // message can be transmitted via wire from TREZOR to PC
optional bool wire_debug_in = 50004; // message can be transmitted via debug wire from PC to TREZOR
optional bool wire_debug_out = 50005; // message can be transmitted via debug wire from TREZOR to PC
optional bool wire_tiny = 50006; // message is handled by TREZOR when the USB stack is in tiny mode
optional bool wire_bootloader = 50007; // message is only handled by TREZOR Bootloader
}
/**
* Type of failures returned by Failure message
* @used_in Failure
*/
enum FailureType {
Failure_UnexpectedMessage = 1;
Failure_ButtonExpected = 2;
Failure_DataError = 3;
Failure_ActionCancelled = 4;
Failure_PinExpected = 5;
Failure_PinCancelled = 6;
Failure_PinInvalid = 7;
Failure_InvalidSignature = 8;
Failure_ProcessError = 9;
Failure_NotEnoughFunds = 10;
Failure_NotInitialized = 11;
Failure_FirmwareError = 99;
}
/**
* Type of script which will be used for transaction output
* @used_in TxOutputType
*/
enum OutputScriptType {
PAYTOADDRESS = 0; // used for all addresses (bitcoin, p2sh, witness)
PAYTOSCRIPTHASH = 1; // p2sh address (deprecated; use PAYTOADDRESS)
PAYTOMULTISIG = 2; // only for change output
PAYTOOPRETURN = 3; // op_return
PAYTOWITNESS = 4; // only for change output
PAYTOP2SHWITNESS = 5; // only for change output
}
/**
* Type of script which will be used for transaction input
* @used_in TxInputType
*/
enum InputScriptType {
SPENDADDRESS = 0; // standard p2pkh address
SPENDMULTISIG = 1; // p2sh multisig address
EXTERNAL = 2; // reserved for external inputs (coinjoin)
SPENDWITNESS = 3; // native segwit
SPENDP2SHWITNESS = 4; // segwit over p2sh (backward compatible)
}
/**
* Type of information required by transaction signing process
* @used_in TxRequest
*/
enum RequestType {
TXINPUT = 0;
TXOUTPUT = 1;
TXMETA = 2;
TXFINISHED = 3;
TXEXTRADATA = 4;
}
/**
* Type of button request
* @used_in ButtonRequest
*/
enum ButtonRequestType {
ButtonRequest_Other = 1;
ButtonRequest_FeeOverThreshold = 2;
ButtonRequest_ConfirmOutput = 3;
ButtonRequest_ResetDevice = 4;
ButtonRequest_ConfirmWord = 5;
ButtonRequest_WipeDevice = 6;
ButtonRequest_ProtectCall = 7;
ButtonRequest_SignTx = 8;
ButtonRequest_FirmwareCheck = 9;
ButtonRequest_Address = 10;
ButtonRequest_PublicKey = 11;
}
/**
* Type of PIN request
* @used_in PinMatrixRequest
*/
enum PinMatrixRequestType {
PinMatrixRequestType_Current = 1;
PinMatrixRequestType_NewFirst = 2;
PinMatrixRequestType_NewSecond = 3;
}
/**
* Type of recovery procedure. These should be used as bitmask, e.g.,
* `RecoveryDeviceType_ScrambledWords | RecoveryDeviceType_Matrix`
* listing every method supported by the host computer.
*
* Note that ScrambledWords must be supported by every implementation
* for backward compatibility; there is no way to not support it.
*
* @used_in RecoveryDevice
*/
enum RecoveryDeviceType {
// use powers of two when extending this field
RecoveryDeviceType_ScrambledWords = 0; // words in scrambled order
RecoveryDeviceType_Matrix = 1; // matrix recovery type
}
/**
* Type of Recovery Word request
* @used_in WordRequest
*/
enum WordRequestType {
WordRequestType_Plain = 0;
WordRequestType_Matrix9 = 1;
WordRequestType_Matrix6 = 2;
}
/**
* Structure representing BIP32 (hierarchical deterministic) node
* Used for importing private keys into the device and exporting public keys out of the device
* @used_in PublicKey
* @used_in LoadDevice
* @used_in DebugLinkState
* @used_in Storage
*/
message HDNodeType {
required uint32 depth = 1;
required uint32 fingerprint = 2;
required uint32 child_num = 3;
required bytes chain_code = 4;
optional bytes private_key = 5;
optional bytes public_key = 6;
}
message HDNodePathType {
required HDNodeType node = 1; // BIP-32 node in deserialized form
repeated uint32 address_n = 2; // BIP-32 path to derive the key from node
}
/**
* Structure representing Coin
* @used_in Features
*/
message CoinType {
optional string coin_name = 1;
optional string coin_shortcut = 2;
optional uint32 address_type = 3 [default=0];
optional uint64 maxfee_kb = 4;
optional uint32 address_type_p2sh = 5 [default=5];
optional string signed_message_header = 8;
optional uint32 xpub_magic = 9 [default=76067358]; // default=0x0488b21e
optional uint32 xprv_magic = 10 [default=76066276]; // default=0x0488ade4
optional bool segwit = 11;
optional uint32 forkid = 12;
}
/**
* Type of redeem script used in input
* @used_in TxInputType
*/
message MultisigRedeemScriptType {
repeated HDNodePathType pubkeys = 1; // pubkeys from multisig address (sorted lexicographically)
repeated bytes signatures = 2; // existing signatures for partially signed input
optional uint32 m = 3; // "m" from n, how many valid signatures are necessary for spending
}
/**
* Structure representing transaction input
* @used_in SimpleSignTx
* @used_in TransactionType
*/
message TxInputType {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
required bytes prev_hash = 2; // hash of previous transaction output to spend by this input
required uint32 prev_index = 3; // index of previous output to spend
optional bytes script_sig = 4; // script signature, unset for tx to sign
optional uint32 sequence = 5 [default=4294967295]; // sequence (default=0xffffffff)
optional InputScriptType script_type = 6 [default=SPENDADDRESS]; // defines template of input script
optional MultisigRedeemScriptType multisig = 7; // Filled if input is going to spend multisig tx
optional uint64 amount = 8; // amount of previous transaction output (for segwit only)
}
/**
* Structure representing transaction output
* @used_in SimpleSignTx
* @used_in TransactionType
*/
message TxOutputType {
optional string address = 1; // target coin address in Base58 encoding
repeated uint32 address_n = 2; // BIP-32 path to derive the key from master node; has higher priority than "address"
required uint64 amount = 3; // amount to spend in satoshis
required OutputScriptType script_type = 4; // output script type
optional MultisigRedeemScriptType multisig = 5; // defines multisig address; script_type must be PAYTOMULTISIG
optional bytes op_return_data = 6; // defines op_return data; script_type must be PAYTOOPRETURN, amount must be 0
}
/**
* Structure representing compiled transaction output
* @used_in TransactionType
*/
message TxOutputBinType {
required uint64 amount = 1;
required bytes script_pubkey = 2;
}
/**
* Structure representing transaction
* @used_in SimpleSignTx
*/
message TransactionType {
optional uint32 version = 1;
repeated TxInputType inputs = 2;
repeated TxOutputBinType bin_outputs = 3;
repeated TxOutputType outputs = 5;
optional uint32 lock_time = 4;
optional uint32 inputs_cnt = 6;
optional uint32 outputs_cnt = 7;
optional bytes extra_data = 8;
optional uint32 extra_data_len = 9;
}
/**
* Structure representing request details
* @used_in TxRequest
*/
message TxRequestDetailsType {
optional uint32 request_index = 1; // device expects TxAck message from the computer
optional bytes tx_hash = 2; // tx_hash of requested transaction
optional uint32 extra_data_len = 3; // length of requested extra data
optional uint32 extra_data_offset = 4; // offset of requested extra data
}
/**
* Structure representing serialized data
* @used_in TxRequest
*/
message TxRequestSerializedType {
optional uint32 signature_index = 1; // 'signature' field contains signed input of this index
optional bytes signature = 2; // signature of the signature_index input
optional bytes serialized_tx = 3; // part of serialized and signed transaction
}
/**
* Structure representing identity data
* @used_in IdentityType
*/
message IdentityType {
optional string proto = 1; // proto part of URI
optional string user = 2; // user part of URI
optional string host = 3; // host part of URI
optional string port = 4; // port part of URI
optional string path = 5; // path part of URI
optional uint32 index = 6 [default=0]; // identity index
}
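For orientation, the fields above are just the components of an identity URI split apart. A small Go sketch of how, say, the SSH identity ssh://root@example.com:2222 might be mapped onto the generated bindings (assuming the usual proto2 pointer-field convention and the proto.String/proto.Uint32 helpers from github.com/golang/protobuf/proto):

	id := &IdentityType{
		Proto: proto.String("ssh"),
		User:  proto.String("root"),
		Host:  proto.String("example.com"),
		Port:  proto.String("2222"),
		Index: proto.Uint32(0),
	}
	_ = id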


@@ -0,0 +1,462 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Ledger hardware
// wallets. The wire protocol spec can be found in the Ledger Blue GitHub repo:
// https://raw.githubusercontent.com/LedgerHQ/blue-app-eth/master/doc/ethapp.asc
package usbwallet
import (
"encoding/binary"
"encoding/hex"
"errors"
"fmt"
"io"
"math/big"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
)
// ledgerOpcode is an enumeration encoding the supported Ledger opcodes.
type ledgerOpcode byte
// ledgerParam1 is an enumeration encoding the supported Ledger parameters for
// specific opcodes. The same parameter values may be reused between opcodes.
type ledgerParam1 byte
// ledgerParam2 is an enumeration encoding the supported Ledger parameters for
// specific opcodes. The same parameter values may be reused between opcodes.
type ledgerParam2 byte
const (
ledgerOpRetrieveAddress ledgerOpcode = 0x02 // Returns the public key and Ethereum address for a given BIP 32 path
ledgerOpSignTransaction ledgerOpcode = 0x04 // Signs an Ethereum transaction after having the user validate the parameters
ledgerOpGetConfiguration ledgerOpcode = 0x06 // Returns specific wallet application configuration
ledgerP1DirectlyFetchAddress ledgerParam1 = 0x00 // Return address directly from the wallet
ledgerP1InitTransactionData ledgerParam1 = 0x00 // First transaction data block for signing
ledgerP1ContTransactionData ledgerParam1 = 0x80 // Subsequent transaction data block for signing
ledgerP2DiscardAddressChainCode ledgerParam2 = 0x00 // Do not return the chain code along with the address
)
// errLedgerReplyInvalidHeader is the error message returned by a Ledger data exchange
// if the device replies with a mismatching header. This usually means the device
// is in browser mode.
var errLedgerReplyInvalidHeader = errors.New("ledger: invalid reply header")
// errLedgerInvalidVersionReply is the error message returned by a Ledger version retrieval
// when a response does arrive, but it does not contain the expected data.
var errLedgerInvalidVersionReply = errors.New("ledger: invalid version reply")
// ledgerDriver implements the communication with a Ledger hardware wallet.
type ledgerDriver struct {
device io.ReadWriter // USB device connection to communicate through
version [3]byte // Current version of the Ledger firmware (zero if app is offline)
browser bool // Flag whether the Ledger is in browser mode (reply channel mismatch)
failure error // Any failure that would make the device unusable
log log.Logger // Contextual logger to tag the ledger with its id
}
// newLedgerDriver creates a new instance of a Ledger USB protocol driver.
func newLedgerDriver(logger log.Logger) driver {
return &ledgerDriver{
log: logger,
}
}
// Status implements usbwallet.driver, returning various states the Ledger can
// currently be in.
func (w *ledgerDriver) Status() (string, error) {
if w.failure != nil {
return fmt.Sprintf("Failed: %v", w.failure), w.failure
}
if w.browser {
return "Ethereum app in browser mode", w.failure
}
if w.offline() {
return "Ethereum app offline", w.failure
}
return fmt.Sprintf("Ethereum app v%d.%d.%d online", w.version[0], w.version[1], w.version[2]), w.failure
}
// offline returns whether the wallet and the Ethereum app are offline.
//
// The method assumes that the state lock is held!
func (w *ledgerDriver) offline() bool {
return w.version == [3]byte{0, 0, 0}
}
// Open implements usbwallet.driver, attempting to initialize the connection to the
// Ledger hardware wallet. The Ledger does not require a user passphrase, so that
// parameter is silently discarded.
func (w *ledgerDriver) Open(device io.ReadWriter, passphrase string) error {
w.device, w.failure = device, nil
_, err := w.ledgerDerive(accounts.DefaultBaseDerivationPath)
if err != nil {
// Ethereum app is not running or in browser mode, nothing more to do, return
if err == errLedgerReplyInvalidHeader {
w.browser = true
}
return nil
}
// Try to resolve the Ethereum app's version, will fail prior to v1.0.2
if w.version, err = w.ledgerVersion(); err != nil {
w.version = [3]byte{1, 0, 0} // Assume worst case, can't verify if v1.0.0 or v1.0.1
}
return nil
}
// Close implements usbwallet.driver, cleaning up any metadata maintained within
// the Ledger driver.
func (w *ledgerDriver) Close() error {
w.browser, w.version = false, [3]byte{}
return nil
}
// Heartbeat implements usbwallet.driver, performing a sanity check against the
// Ledger to see if it's still online.
func (w *ledgerDriver) Heartbeat() error {
if _, err := w.ledgerVersion(); err != nil && err != errLedgerInvalidVersionReply {
w.failure = err
return err
}
return nil
}
// Derive implements usbwallet.driver, sending a derivation request to the Ledger
// and returning the Ethereum address located on that derivation path.
func (w *ledgerDriver) Derive(path accounts.DerivationPath) (common.Address, error) {
return w.ledgerDerive(path)
}
// SignTx implements usbwallet.driver, sending the transaction to the Ledger and
// waiting for the user to confirm or deny the transaction.
//
// Note, if the version of the Ethereum application running on the Ledger wallet is
// too old to sign EIP-155 transactions, but such is requested nonetheless, an error
// will be returned as opposed to silently signing in Homestead mode.
func (w *ledgerDriver) SignTx(path accounts.DerivationPath, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error) {
// If the Ethereum app is not running, abort
if w.offline() {
return common.Address{}, nil, accounts.ErrWalletClosed
}
// Ensure the wallet is capable of signing the given transaction
if chainID != nil && w.version[0] <= 1 && w.version[1] <= 0 && w.version[2] <= 2 {
return common.Address{}, nil, fmt.Errorf("Ledger v%d.%d.%d doesn't support signing this transaction, please update to v1.0.3 at least", w.version[0], w.version[1], w.version[2])
}
// All infos gathered and metadata checks out, request signing
return w.ledgerSign(path, tx, chainID)
}
// ledgerVersion retrieves the current version of the Ethereum wallet app running
// on the Ledger wallet.
//
// The version retrieval protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+----+---
// E0 | 06 | 00 | 00 | 00 | 04
//
// With no input data, and the output data being:
//
// Description | Length
// ---------------------------------------------------+--------
// Flags 01: arbitrary data signature enabled by user | 1 byte
// Application major version | 1 byte
// Application minor version | 1 byte
// Application patch version | 1 byte
func (w *ledgerDriver) ledgerVersion() ([3]byte, error) {
// Send the request and wait for the response
reply, err := w.ledgerExchange(ledgerOpGetConfiguration, 0, 0, nil)
if err != nil {
return [3]byte{}, err
}
if len(reply) != 4 {
return [3]byte{}, errLedgerInvalidVersionReply
}
// Cache the version for future reference
var version [3]byte
copy(version[:], reply[1:])
return version, nil
}
// ledgerDerive retrieves the currently active Ethereum address from a Ledger
// wallet at the specified derivation path.
//
// The address derivation protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+---
// E0 | 02 | 00 return address
// 01 display address and confirm before returning
// | 00: do not return the chain code
// | 01: return the chain code
// | var | 00
//
// Where the input data is:
//
// Description | Length
// -------------------------------------------------+--------
// Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes
// ... | 4 bytes
// Last derivation index (big endian) | 4 bytes
//
// And the output data is:
//
// Description | Length
// ------------------------+-------------------
// Public Key length | 1 byte
// Uncompressed Public Key | arbitrary
// Ethereum address length | 1 byte
// Ethereum address | 40 bytes hex ascii
// Chain code if requested | 32 bytes
func (w *ledgerDriver) ledgerDerive(derivationPath []uint32) (common.Address, error) {
// Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath))
path[0] = byte(len(derivationPath))
for i, component := range derivationPath {
binary.BigEndian.PutUint32(path[1+4*i:], component)
}
// Send the request and wait for the response
reply, err := w.ledgerExchange(ledgerOpRetrieveAddress, ledgerP1DirectlyFetchAddress, ledgerP2DiscardAddressChainCode, path)
if err != nil {
return common.Address{}, err
}
// Discard the public key, we don't need that for now
if len(reply) < 1 || len(reply) < 1+int(reply[0]) {
return common.Address{}, errors.New("reply lacks public key entry")
}
reply = reply[1+int(reply[0]):]
// Extract the Ethereum hex address string
if len(reply) < 1 || len(reply) < 1+int(reply[0]) {
return common.Address{}, errors.New("reply lacks address entry")
}
hexstr := reply[1 : 1+int(reply[0])]
// Decode the hex string into an Ethereum address and return
var address common.Address
hex.Decode(address[:], hexstr)
return address, nil
}
// ledgerSign sends the transaction to the Ledger wallet, and waits for the user
// to confirm or deny the transaction.
//
// The transaction signing protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+---
// E0 | 04 | 00: first transaction data block
// 80: subsequent transaction data block
// | 00 | variable | variable
//
// Where the input for the first transaction block (first 255 bytes) is:
//
// Description | Length
// -------------------------------------------------+----------
// Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes
// ... | 4 bytes
// Last derivation index (big endian) | 4 bytes
// RLP transaction chunk | arbitrary
//
// And the input for subsequent transaction blocks (first 255 bytes) are:
//
// Description | Length
// ----------------------+----------
// RLP transaction chunk | arbitrary
//
// And the output data is:
//
// Description | Length
// ------------+---------
// signature V | 1 byte
// signature R | 32 bytes
// signature S | 32 bytes
func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error) {
// Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath))
path[0] = byte(len(derivationPath))
for i, component := range derivationPath {
binary.BigEndian.PutUint32(path[1+4*i:], component)
}
// Create the transaction RLP based on whether legacy or EIP155 signing was requested
var (
txrlp []byte
err error
)
if chainID == nil {
if txrlp, err = rlp.EncodeToBytes([]interface{}{tx.Nonce(), tx.GasPrice(), tx.Gas(), tx.To(), tx.Value(), tx.Data()}); err != nil {
return common.Address{}, nil, err
}
} else {
if txrlp, err = rlp.EncodeToBytes([]interface{}{tx.Nonce(), tx.GasPrice(), tx.Gas(), tx.To(), tx.Value(), tx.Data(), chainID, big.NewInt(0), big.NewInt(0)}); err != nil {
return common.Address{}, nil, err
}
}
payload := append(path, txrlp...)
// Send the request and wait for the response
var (
op = ledgerP1InitTransactionData
reply []byte
)
for len(payload) > 0 {
// Calculate the size of the next data chunk
chunk := 255
if chunk > len(payload) {
chunk = len(payload)
}
// Send the chunk over, ensuring it's processed correctly
reply, err = w.ledgerExchange(ledgerOpSignTransaction, op, 0, payload[:chunk])
if err != nil {
return common.Address{}, nil, err
}
// Shift the payload and ensure subsequent chunks are marked as such
payload = payload[chunk:]
op = ledgerP1ContTransactionData
}
// Extract the Ethereum signature and do a sanity validation
if len(reply) != 65 {
return common.Address{}, nil, errors.New("reply lacks signature")
}
signature := append(reply[1:], reply[0])
// Create the correct signer and signature transform based on the chain ID
var signer types.Signer
if chainID == nil {
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
signature[64] = signature[64] - byte(chainID.Uint64()*2+35)
}
signed, err := tx.WithSignature(signer, signature)
if err != nil {
return common.Address{}, nil, err
}
sender, err := types.Sender(signer, signed)
if err != nil {
return common.Address{}, nil, err
}
return sender, signed, nil
}
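// A worked example of the recovery value adjustment above, assuming mainnet
// (chain ID 1): the Ledger reports the EIP-155 folded value v = chainID*2 + 35 + recid,
// i.e. 37 or 38, so subtracting byte(chainID*2+35) = 37 recovers the raw recovery
// id 0 or 1 that types.EIP155Signer expects in the final byte of the 65 byte
// signature. The helper below is illustrative only and not part of the driver.
func exampleRecoveryID() byte {
	chainID := big.NewInt(1)
	v := byte(38)                          // value reported by the device
	return v - byte(chainID.Uint64()*2+35) // 38 - 37 = 1
}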
// ledgerExchange performs a data exchange with the Ledger wallet, sending it a
// message and retrieving the response.
//
// The common transport header is defined as follows:
//
// Description | Length
// --------------------------------------+----------
// Communication channel ID (big endian) | 2 bytes
// Command tag | 1 byte
// Packet sequence index (big endian) | 2 bytes
// Payload | arbitrary
//
// The Communication channel ID allows commands multiplexing over the same
// physical link. It is not used for the time being, and should be set to 0101
// to avoid compatibility issues with implementations ignoring a leading 00 byte.
//
// The Command tag describes the message content. Use TAG_APDU (0x05) for standard
// APDU payloads, or TAG_PING (0x02) for a simple link test.
//
// The Packet sequence index describes the current sequence for fragmented payloads.
// The first fragment index is 0x00.
//
// APDU Command payloads are encoded as follows:
//
// Description | Length
// -----------------------------------
// APDU length (big endian) | 2 bytes
// APDU CLA | 1 byte
// APDU INS | 1 byte
// APDU P1 | 1 byte
// APDU P2 | 1 byte
// APDU length | 1 byte
// Optional APDU data | arbitrary
func (w *ledgerDriver) ledgerExchange(opcode ledgerOpcode, p1 ledgerParam1, p2 ledgerParam2, data []byte) ([]byte, error) {
// Construct the message payload, possibly split into multiple chunks
apdu := make([]byte, 2, 7+len(data))
binary.BigEndian.PutUint16(apdu, uint16(5+len(data)))
apdu = append(apdu, []byte{0xe0, byte(opcode), byte(p1), byte(p2), byte(len(data))}...)
apdu = append(apdu, data...)
// Stream all the chunks to the device
header := []byte{0x01, 0x01, 0x05, 0x00, 0x00} // Channel ID and command tag appended
chunk := make([]byte, 64)
space := len(chunk) - len(header)
for i := 0; len(apdu) > 0; i++ {
// Construct the new message to stream
chunk = append(chunk[:0], header...)
binary.BigEndian.PutUint16(chunk[3:], uint16(i))
if len(apdu) > space {
chunk = append(chunk, apdu[:space]...)
apdu = apdu[space:]
} else {
chunk = append(chunk, apdu...)
apdu = nil
}
// Send over to the device
w.log.Trace("Data chunk sent to the Ledger", "chunk", hexutil.Bytes(chunk))
if _, err := w.device.Write(chunk); err != nil {
return nil, err
}
}
// Stream the reply back from the wallet in 64 byte chunks
var reply []byte
chunk = chunk[:64] // Yeah, we surely have enough space
for {
// Read the next chunk from the Ledger wallet
if _, err := io.ReadFull(w.device, chunk); err != nil {
return nil, err
}
w.log.Trace("Data chunk received from the Ledger", "chunk", hexutil.Bytes(chunk))
// Make sure the transport header matches
if chunk[0] != 0x01 || chunk[1] != 0x01 || chunk[2] != 0x05 {
return nil, errLedgerReplyInvalidHeader
}
// If it's the first chunk, retrieve the total message length
var payload []byte
if chunk[3] == 0x00 && chunk[4] == 0x00 {
reply = make([]byte, 0, int(binary.BigEndian.Uint16(chunk[5:7])))
payload = chunk[7:]
} else {
payload = chunk[5:]
}
// Append to the reply and stop when filled up
if left := cap(reply) - len(reply); left > len(payload) {
reply = append(reply, payload...)
} else {
reply = append(reply, payload[:left]...)
break
}
}
return reply[:len(reply)-2], nil
}
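// For illustration, the framing above applied to a ledgerOpGetConfiguration
// exchange (no payload) yields a single 12 byte outbound chunk; padding up to
// the 64 byte HID report size, if any, is left to the underlying transport.
// On the way back, the device appends a 2 byte APDU status word, which
// ledgerExchange strips before returning. The helper below is illustrative
// only and not part of the driver.
func exampleGetConfigurationChunk() []byte {
	// APDU: big-endian length (5), then CLA, INS, P1, P2 and the data length
	apdu := []byte{0x00, 0x05, 0xe0, 0x06, 0x00, 0x00, 0x00}
	// Transport header: channel ID 0101, command tag 05, sequence index 0000
	return append([]byte{0x01, 0x01, 0x05, 0x00, 0x00}, apdu...)
}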


@@ -1,217 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Ledger hardware
// wallets. The wire protocol spec can be found in the Ledger Blue GitHub repo:
// https://raw.githubusercontent.com/LedgerHQ/blue-app-eth/master/doc/ethapp.asc
package usbwallet
import (
"errors"
"runtime"
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
)
// LedgerScheme is the protocol scheme prefixing account and wallet URLs.
var LedgerScheme = "ledger"
// ledgerDeviceIDs are the known device IDs that Ledger wallets use.
var ledgerDeviceIDs = []deviceID{
{Vendor: 0x2c97, Product: 0x0000}, // Ledger Blue
{Vendor: 0x2c97, Product: 0x0001}, // Ledger Nano S
}
// Maximum time between wallet refreshes (if USB hotplug notifications don't work).
const ledgerRefreshCycle = time.Second
// Minimum time between wallet refreshes to avoid USB thrashing.
const ledgerRefreshThrottling = 500 * time.Millisecond
// LedgerHub is an accounts.Backend that can find and handle Ledger hardware wallets.
type LedgerHub struct {
refreshed time.Time // Time instance when the list of wallets was last refreshed
wallets []accounts.Wallet // List of Ledger devices currently tracking
updateFeed event.Feed // Event feed to notify wallet additions/removals
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
quit chan chan error
stateLock sync.RWMutex // Protects the internals of the hub from racey access
// TODO(karalabe): remove if hotplug lands on Windows
commsPend int // Number of operations blocking enumeration
commsLock sync.Mutex // Lock protecting the pending counter and enumeration
}
// NewLedgerHub creates a new hardware wallet manager for Ledger devices.
func NewLedgerHub() (*LedgerHub, error) {
if !hid.Supported() {
return nil, errors.New("unsupported platform")
}
hub := &LedgerHub{
quit: make(chan chan error),
}
hub.refreshWallets()
return hub, nil
}
// Wallets implements accounts.Backend, returning all the currently tracked USB
// devices that appear to be Ledger hardware wallets.
func (hub *LedgerHub) Wallets() []accounts.Wallet {
// Make sure the list of wallets is up to date
hub.refreshWallets()
hub.stateLock.RLock()
defer hub.stateLock.RUnlock()
cpy := make([]accounts.Wallet, len(hub.wallets))
copy(cpy, hub.wallets)
return cpy
}
// refreshWallets scans the USB devices attached to the machine and updates the
// list of wallets based on the found devices.
func (hub *LedgerHub) refreshWallets() {
// Don't scan the USB like crazy if the user fetches wallets in a loop
hub.stateLock.RLock()
elapsed := time.Since(hub.refreshed)
hub.stateLock.RUnlock()
if elapsed < ledgerRefreshThrottling {
return
}
// Retrieve the current list of Ledger devices
var ledgers []hid.DeviceInfo
if runtime.GOOS == "linux" {
// hidapi on Linux opens the device during enumeration to retrieve some infos,
// breaking the Ledger protocol if it is waiting for user confirmation. This
// is a bug acknowledged at Ledger, but it won't be fixed on old devices, so we
// need to prevent concurrent comms ourselves. The more elegant solution would
// be to ditch enumeration in favor of hotplug events, but those don't work yet
// on Windows, so since we have to hack around it anyway, this is the more elegant option for now.
hub.commsLock.Lock()
if hub.commsPend > 0 { // A confirmation is pending, don't refresh
hub.commsLock.Unlock()
return
}
}
for _, info := range hid.Enumerate(0, 0) { // Can't enumerate directly, one valid ID is the 0 wildcard
for _, id := range ledgerDeviceIDs {
if info.VendorID == id.Vendor && info.ProductID == id.Product {
ledgers = append(ledgers, info)
break
}
}
}
if runtime.GOOS == "linux" {
// See rationale before the enumeration why this is needed and only on Linux.
hub.commsLock.Unlock()
}
// Transform the current list of wallets into the new one
hub.stateLock.Lock()
wallets := make([]accounts.Wallet, 0, len(ledgers))
events := []accounts.WalletEvent{}
for _, ledger := range ledgers {
url := accounts.URL{Scheme: LedgerScheme, Path: ledger.Path}
// Drop wallets in front of the next device or those that failed for some reason
for len(hub.wallets) > 0 && (hub.wallets[0].URL().Cmp(url) < 0 || hub.wallets[0].(*ledgerWallet).failed()) {
events = append(events, accounts.WalletEvent{Wallet: hub.wallets[0], Arrive: false})
hub.wallets = hub.wallets[1:]
}
// If there are no more wallets or the device is before the next, wrap new wallet
if len(hub.wallets) == 0 || hub.wallets[0].URL().Cmp(url) > 0 {
wallet := &ledgerWallet{hub: hub, url: &url, info: ledger, log: log.New("url", url)}
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: true})
wallets = append(wallets, wallet)
continue
}
// If the device is the same as the first wallet, keep it
if hub.wallets[0].URL().Cmp(url) == 0 {
wallets = append(wallets, hub.wallets[0])
hub.wallets = hub.wallets[1:]
continue
}
}
// Drop any leftover wallets and set the new batch
for _, wallet := range hub.wallets {
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: false})
}
hub.refreshed = time.Now()
hub.wallets = wallets
hub.stateLock.Unlock()
// Fire all wallet events and return
for _, event := range events {
hub.updateFeed.Send(event)
}
}
// Subscribe implements accounts.Backend, creating an async subscription to
// receive notifications on the addition or removal of Ledger wallets.
func (hub *LedgerHub) Subscribe(sink chan<- accounts.WalletEvent) event.Subscription {
// We need the mutex to reliably start/stop the update loop
hub.stateLock.Lock()
defer hub.stateLock.Unlock()
// Subscribe the caller and track the subscriber count
sub := hub.updateScope.Track(hub.updateFeed.Subscribe(sink))
// Subscribers require an active notification loop, start it
if !hub.updating {
hub.updating = true
go hub.updater()
}
return sub
}
// updater is responsible for maintaining an up-to-date list of wallets managed
// by the Ledger hub, and for firing wallet addition/removal events. Since USB
// hotplug notifications are not yet supported, it simply polls by forcing a
// manual refresh on every cycle until all subscribers are gone.
func (hub *LedgerHub) updater() {
for {
// Wait for a USB hotplug event (not supported yet) or a refresh timeout
select {
//case <-hub.changes: // re-enable on hotplug implementation
case <-time.After(ledgerRefreshCycle):
}
// Run the wallet refresher
hub.refreshWallets()
// If all our subscribers left, stop the updater
hub.stateLock.Lock()
if hub.updateScope.Count() == 0 {
hub.updating = false
hub.stateLock.Unlock()
return
}
hub.stateLock.Unlock()
}
}


@@ -1,903 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Ledger hardware
// wallets. The wire protocol spec can be found in the Ledger Blue GitHub repo:
// https://raw.githubusercontent.com/LedgerHQ/blue-app-eth/master/doc/ethapp.asc
package usbwallet
import (
"context"
"encoding/binary"
"encoding/hex"
"errors"
"fmt"
"io"
"math/big"
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
"github.com/karalabe/hid"
)
// Maximum time between wallet health checks to detect USB unplugs.
const ledgerHeartbeatCycle = time.Second
// Minimum time to wait between self-derivation attempts, even if the user is
// requesting accounts like crazy.
const ledgerSelfDeriveThrottling = time.Second
// ledgerOpcode is an enumeration encoding the supported Ledger opcodes.
type ledgerOpcode byte
// ledgerParam1 is an enumeration encoding the supported Ledger parameters for
// specific opcodes. The same parameter values may be reused between opcodes.
type ledgerParam1 byte
// ledgerParam2 is an enumeration encoding the supported Ledger parameters for
// specific opcodes. The same parameter values may be reused between opcodes.
type ledgerParam2 byte
const (
ledgerOpRetrieveAddress ledgerOpcode = 0x02 // Returns the public key and Ethereum address for a given BIP 32 path
ledgerOpSignTransaction ledgerOpcode = 0x04 // Signs an Ethereum transaction after having the user validate the parameters
ledgerOpGetConfiguration ledgerOpcode = 0x06 // Returns specific wallet application configuration
ledgerP1DirectlyFetchAddress ledgerParam1 = 0x00 // Return address directly from the wallet
ledgerP1ConfirmFetchAddress ledgerParam1 = 0x01 // Require a user confirmation before returning the address
ledgerP1InitTransactionData ledgerParam1 = 0x00 // First transaction data block for signing
ledgerP1ContTransactionData ledgerParam1 = 0x80 // Subsequent transaction data block for signing
ledgerP2DiscardAddressChainCode ledgerParam2 = 0x00 // Do not return the chain code along with the address
ledgerP2ReturnAddressChainCode ledgerParam2 = 0x01 // Return the chain code along with the address
)
// errReplyInvalidHeader is the error message returned by a Ledger data exchange
// if the device replies with a mismatching header. This usually means the device
// is in browser mode.
var errReplyInvalidHeader = errors.New("invalid reply header")
// errInvalidVersionReply is the error message returned by a Ledger version retrieval
// when a response does arrive, but it does not contain the expected data.
var errInvalidVersionReply = errors.New("invalid version reply")
// ledgerWallet represents a live USB Ledger hardware wallet.
type ledgerWallet struct {
hub *LedgerHub // USB hub the device originates from (TODO(karalabe): remove if hotplug lands on Windows)
url *accounts.URL // Textual URL uniquely identifying this wallet
info hid.DeviceInfo // Known USB device infos about the wallet
device *hid.Device // USB device advertising itself as a Ledger wallet
failure error // Any failure that would make the device unusable
version [3]byte // Current version of the Ledger Ethereum app (zero if app is offline)
browser bool // Flag whether the Ledger is in browser mode (reply channel mismatch)
accounts []accounts.Account // List of derive accounts pinned on the Ledger
paths map[common.Address]accounts.DerivationPath // Known derivation paths for signing operations
deriveNextPath accounts.DerivationPath // Next derivation path for account auto-discovery
deriveNextAddr common.Address // Next derived account address for auto-discovery
deriveChain ethereum.ChainStateReader // Blockchain state reader to discover used account with
deriveReq chan chan struct{} // Channel to request a self-derivation on
deriveQuit chan chan error // Channel to terminate the self-deriver with
healthQuit chan chan error
// Locking a hardware wallet is a bit special. Since hardware devices are lower
// performing, any communication with them might take a non-negligible amount of
// time. Worse still, waiting for user confirmation can take arbitrarily long,
// but exclusive communication must be upheld throughout. Locking the entire wallet
// in the meantime however would stall any parts of the system that don't want
// to communicate, only read some state (e.g. list the accounts).
//
// As such, a hardware wallet needs two locks to function correctly. A state
// lock can be used to protect the wallet's software-side internal state, which
// must not be held exclusively during hardware communication. A communication
// lock can be used to achieve exclusive access to the device itself, this one
// however should allow "skipping" waiting for operations that might want to
// use the device, but can live without it too (e.g. account self-derivation).
//
// Since we have two locks, it's important to know how to properly use them:
// - Communication requires the `device` to not change, so obtaining the
// commsLock should be done after having a stateLock.
// - Communication must not disable read access to the wallet state, so it
// must only ever hold a *read* lock to stateLock.
commsLock chan struct{} // Mutex (buf=1) for the USB comms without keeping the state locked
stateLock sync.RWMutex // Protects read and write access to the wallet struct fields
log log.Logger // Contextual logger to tag the ledger with its id
}
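// withDevice is a hypothetical helper added here purely to illustrate the
// locking discipline described above (it is not part of the wallet API): take
// a read lock on the state so the device handle cannot change, then grab the
// communication lock for exclusive USB access, and release in reverse order.
func (w *ledgerWallet) withDevice(op func() error) error {
	w.stateLock.RLock() // Keeps `device` stable, but allows concurrent state reads
	defer w.stateLock.RUnlock()
	if w.device == nil {
		return accounts.ErrWalletClosed
	}
	<-w.commsLock // Exclusive access to the hardware itself
	defer func() { w.commsLock <- struct{}{} }()
	return op()
}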
// URL implements accounts.Wallet, returning the URL of the Ledger device.
func (w *ledgerWallet) URL() accounts.URL {
return *w.url // Immutable, no need for a lock
}
// Status implements accounts.Wallet, returning whether the Ledger is opened, closed,
// or whether the Ethereum app is not started on it.
func (w *ledgerWallet) Status() string {
w.stateLock.RLock() // No device communication, state lock is enough
defer w.stateLock.RUnlock()
if w.failure != nil {
return fmt.Sprintf("Failed: %v", w.failure)
}
if w.device == nil {
return "Closed"
}
if w.browser {
return "Ethereum app in browser mode"
}
if w.offline() {
return "Ethereum app offline"
}
return fmt.Sprintf("Ethereum app v%d.%d.%d online", w.version[0], w.version[1], w.version[2])
}
// offline returns whether the wallet and the Ethereum app are offline.
//
// The method assumes that the state lock is held!
func (w *ledgerWallet) offline() bool {
return w.version == [3]byte{0, 0, 0}
}
// failed returns whether the USB device wrapped by the wallet failed for some reason.
// This is used by the device scanner to report failed wallets as departed.
//
// The method assumes that the state lock is *not* held!
func (w *ledgerWallet) failed() bool {
w.stateLock.RLock() // No device communication, state lock is enough
defer w.stateLock.RUnlock()
return w.failure != nil
}
// Open implements accounts.Wallet, attempting to open a USB connection to the
// Ledger hardware wallet. The Ledger does not require a user passphrase, so that
// parameter is silently discarded.
func (w *ledgerWallet) Open(passphrase string) error {
w.stateLock.Lock() // State lock is enough since there's no connection yet at this point
defer w.stateLock.Unlock()
// If the wallet was already opened, don't try to open again
if w.device != nil {
return accounts.ErrWalletAlreadyOpen
}
// Otherwise iterate over all USB devices and find this again (no way to directly do this)
device, err := w.info.Open()
if err != nil {
return err
}
// Wallet seems to be successfully opened, guess if the Ethereum app is running
w.device = device
w.commsLock = make(chan struct{}, 1)
w.commsLock <- struct{}{} // Enable lock
w.paths = make(map[common.Address]accounts.DerivationPath)
w.deriveReq = make(chan chan struct{})
w.deriveQuit = make(chan chan error)
w.healthQuit = make(chan chan error)
defer func() {
go w.heartbeat()
go w.selfDerive()
}()
if _, err = w.ledgerDerive(accounts.DefaultBaseDerivationPath); err != nil {
// Ethereum app is not running or in browser mode, nothing more to do, return
if err == errReplyInvalidHeader {
w.browser = true
}
return nil
}
// Try to resolve the Ethereum app's version, will fail prior to v1.0.2
if w.version, err = w.ledgerVersion(); err != nil {
w.version = [3]byte{1, 0, 0} // Assume worst case, can't verify if v1.0.0 or v1.0.1
}
return nil
}
// heartbeat is a health check loop for the Ledger wallets to periodically verify
// whether they are still present or if they malfunctioned. It is needed because:
// - libusb on Windows doesn't support hotplug, so we can't detect USB unplugs
// - communication timeout on the Ledger requires a device power cycle to fix
func (w *ledgerWallet) heartbeat() {
w.log.Debug("Ledger health-check started")
defer w.log.Debug("Ledger health-check stopped")
// Execute heartbeat checks until termination or error
var (
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until termination is requested or the heartbeat cycle arrives
select {
case errc = <-w.healthQuit:
// Termination requested
continue
case <-time.After(ledgerHeartbeatCycle):
// Heartbeat time
}
// Execute a tiny data exchange to see responsiveness
w.stateLock.RLock()
if w.device == nil {
// Terminated while waiting for the lock
w.stateLock.RUnlock()
continue
}
<-w.commsLock // Don't lock state while resolving version
_, err = w.ledgerVersion()
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
if err != nil && err != errInvalidVersionReply {
w.stateLock.Lock() // Lock state to tear the wallet down
w.failure = err
w.close()
w.stateLock.Unlock()
}
// Ignore non hardware related errors
err = nil
}
// In case of error, wait for termination
if err != nil {
w.log.Debug("Ledger health-check failed", "err", err)
errc = <-w.healthQuit
}
errc <- err
}
// Close implements accounts.Wallet, closing the USB connection to the Ledger.
func (w *ledgerWallet) Close() error {
// Ensure the wallet was opened
w.stateLock.RLock()
hQuit, dQuit := w.healthQuit, w.deriveQuit
w.stateLock.RUnlock()
// Terminate the health checks
var herr error
if hQuit != nil {
errc := make(chan error)
hQuit <- errc
herr = <-errc // Save for later, we *must* close the USB
}
// Terminate the self-derivations
var derr error
if dQuit != nil {
errc := make(chan error)
dQuit <- errc
derr = <-errc // Save for later, we *must* close the USB
}
// Terminate the device connection
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.healthQuit = nil
w.deriveQuit = nil
w.deriveReq = nil
if err := w.close(); err != nil {
return err
}
if herr != nil {
return herr
}
return derr
}
// close is the internal wallet closer that terminates the USB connection and
// resets all the fields to their defaults.
//
// Note, close assumes the state lock is held!
func (w *ledgerWallet) close() error {
// Allow duplicate closes, especially for health-check failures
if w.device == nil {
return nil
}
// Close the device, clear everything, then return
w.device.Close()
w.device = nil
w.browser, w.version = false, [3]byte{}
w.accounts, w.paths = nil, nil
return nil
}
// Accounts implements accounts.Wallet, returning the list of accounts pinned to
// the Ledger hardware wallet. If self-derivation was enabled, the account list
// is periodically expanded based on current chain state.
func (w *ledgerWallet) Accounts() []accounts.Account {
// Attempt self-derivation if it's running
reqc := make(chan struct{}, 1)
select {
case w.deriveReq <- reqc:
// Self-derivation request accepted, wait for it
<-reqc
default:
// Self-derivation offline, throttled or busy, skip
}
// Return whatever account list we ended up with
w.stateLock.RLock()
defer w.stateLock.RUnlock()
cpy := make([]accounts.Account, len(w.accounts))
copy(cpy, w.accounts)
return cpy
}
// selfDerive is an account derivation loop that upon request attempts to find
// new non-zero accounts.
func (w *ledgerWallet) selfDerive() {
w.log.Debug("Ledger self-derivation started")
defer w.log.Debug("Ledger self-derivation stopped")
// Execute self-derivations until termination or error
var (
reqc chan struct{}
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until either derivation or termination is requested
select {
case errc = <-w.deriveQuit:
// Termination requested
continue
case reqc = <-w.deriveReq:
// Account discovery requested
}
// Derivation needs a chain and device access, skip if either unavailable
w.stateLock.RLock()
if w.device == nil || w.deriveChain == nil || w.offline() {
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
select {
case <-w.commsLock:
default:
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
// Device lock obtained, derive the next batch of accounts
var (
accs []accounts.Account
paths []accounts.DerivationPath
nextAddr = w.deriveNextAddr
nextPath = w.deriveNextPath
context = context.Background()
)
for empty := false; !empty; {
// Retrieve the next derived Ethereum account
if nextAddr == (common.Address{}) {
if nextAddr, err = w.ledgerDerive(nextPath); err != nil {
w.log.Warn("Ledger account derivation failed", "err", err)
break
}
}
// Check the account's status against the current chain state
var (
balance *big.Int
nonce uint64
)
balance, err = w.deriveChain.BalanceAt(context, nextAddr, nil)
if err != nil {
w.log.Warn("Ledger balance retrieval failed", "err", err)
break
}
nonce, err = w.deriveChain.NonceAt(context, nextAddr, nil)
if err != nil {
w.log.Warn("Ledger nonce retrieval failed", "err", err)
break
}
// If the next account is empty, stop self-derivation, but add it nonetheless
if balance.Sign() == 0 && nonce == 0 {
empty = true
}
// We've just self-derived a new account, start tracking it locally
path := make(accounts.DerivationPath, len(nextPath))
copy(path[:], nextPath[:])
paths = append(paths, path)
account := accounts.Account{
Address: nextAddr,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
accs = append(accs, account)
// Display a log message to the user for new (or previously empty) accounts
if _, known := w.paths[nextAddr]; !known || (!empty && nextAddr == w.deriveNextAddr) {
w.log.Info("Ledger discovered new account", "address", nextAddr, "path", path, "balance", balance, "nonce", nonce)
}
// Fetch the next potential account
if !empty {
nextAddr = common.Address{}
nextPath[len(nextPath)-1]++
}
}
// Self derivation complete, release device lock
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// Insert any accounts successfully derived
w.stateLock.Lock()
for i := 0; i < len(accs); i++ {
if _, ok := w.paths[accs[i].Address]; !ok {
w.accounts = append(w.accounts, accs[i])
w.paths[accs[i].Address] = paths[i]
}
}
// Shift the self-derivation forward
// TODO(karalabe): don't overwrite changes from wallet.SelfDerive
w.deriveNextAddr = nextAddr
w.deriveNextPath = nextPath
w.stateLock.Unlock()
// Notify the user of termination and loop after a bit of time (to avoid thrashing)
reqc <- struct{}{}
if err == nil {
select {
case errc = <-w.deriveQuit:
// Termination requested, abort
case <-time.After(ledgerSelfDeriveThrottling):
// Waited enough, willing to self-derive again
}
}
}
// In case of error, wait for termination
if err != nil {
w.log.Debug("Ledger self-derivation failed", "err", err)
errc = <-w.deriveQuit
}
errc <- err
}
// Contains implements accounts.Wallet, returning whether a particular account is
// or is not pinned into this Ledger instance. Although we could attempt to resolve
// unpinned accounts, that would be a non-negligible hardware operation.
func (w *ledgerWallet) Contains(account accounts.Account) bool {
w.stateLock.RLock()
defer w.stateLock.RUnlock()
_, exists := w.paths[account.Address]
return exists
}
// Derive implements accounts.Wallet, deriving a new account at the specific
// derivation path. If pin is set to true, the account will be added to the list
// of tracked accounts.
func (w *ledgerWallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Account, error) {
// Try to derive the actual account and update its URL if successful
w.stateLock.RLock() // Avoid device disappearing during derivation
if w.device == nil || w.offline() {
w.stateLock.RUnlock()
return accounts.Account{}, accounts.ErrWalletClosed
}
<-w.commsLock // Avoid concurrent hardware access
address, err := w.ledgerDerive(path)
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// If an error occurred or no pinning was requested, return
if err != nil {
return accounts.Account{}, err
}
account := accounts.Account{
Address: address,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
if !pin {
return account, nil
}
// Pinning needs to modify the state
w.stateLock.Lock()
defer w.stateLock.Unlock()
if _, ok := w.paths[address]; !ok {
w.accounts = append(w.accounts, account)
w.paths[address] = path
}
return account, nil
}
// SelfDerive implements accounts.Wallet, trying to discover accounts that the
// user used previously (based on the chain state), but ones that he/she did not
// explicitly pin to the wallet manually. To avoid chain head monitoring, self
// derivation only runs during account listing (and even then throttled).
func (w *ledgerWallet) SelfDerive(base accounts.DerivationPath, chain ethereum.ChainStateReader) {
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.deriveNextPath = make(accounts.DerivationPath, len(base))
copy(w.deriveNextPath[:], base[:])
w.deriveNextAddr = common.Address{}
w.deriveChain = chain
}
// SignHash implements accounts.Wallet, however signing arbitrary data is not
// supported for Ledger wallets, so this method will always return an error.
func (w *ledgerWallet) SignHash(acc accounts.Account, hash []byte) ([]byte, error) {
return nil, accounts.ErrNotSupported
}
// SignTx implements accounts.Wallet. It sends the transaction over to the Ledger
// wallet to request a confirmation from the user. It returns either the signed
// transaction or a failure if the user denied the transaction.
//
// Note, if the version of the Ethereum application running on the Ledger wallet is
// too old to sign EIP-155 transactions, but one is requested nonetheless, an error
// will be returned as opposed to silently signing in Homestead mode.
func (w *ledgerWallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
w.stateLock.RLock() // Comms have own mutex, this is for the state fields
defer w.stateLock.RUnlock()
// If the wallet is closed, or the Ethereum app doesn't run, abort
if w.device == nil || w.offline() {
return nil, accounts.ErrWalletClosed
}
// Make sure the requested account is contained within
path, ok := w.paths[account.Address]
if !ok {
return nil, accounts.ErrUnknownAccount
}
// Ensure the wallet is capable of signing the given transaction
if chainID != nil && w.version[0] <= 1 && w.version[1] <= 0 && w.version[2] <= 2 {
return nil, fmt.Errorf("Ledger v%d.%d.%d doesn't support signing this transaction, please update to v1.0.3 at least", w.version[0], w.version[1], w.version[2])
}
// All infos gathered and metadata checks out, request signing
<-w.commsLock
defer func() { w.commsLock <- struct{}{} }()
// Ensure the device isn't screwed with while user confirmation is pending
// TODO(karalabe): remove if hotplug lands on Windows
w.hub.commsLock.Lock()
w.hub.commsPend++
w.hub.commsLock.Unlock()
defer func() {
w.hub.commsLock.Lock()
w.hub.commsPend--
w.hub.commsLock.Unlock()
}()
return w.ledgerSign(path, account.Address, tx, chainID)
}
// SignHashWithPassphrase implements accounts.Wallet, however signing arbitrary
// data is not supported for Ledger wallets, so this method will always return
// an error.
func (w *ledgerWallet) SignHashWithPassphrase(account accounts.Account, passphrase string, hash []byte) ([]byte, error) {
return nil, accounts.ErrNotSupported
}
// SignTxWithPassphrase implements accounts.Wallet, attempting to sign the given
// transaction with the given account using passphrase as extra authentication.
// Since the Ledger does not support extra passphrases, it is silently ignored.
func (w *ledgerWallet) SignTxWithPassphrase(account accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
return w.SignTx(account, tx, chainID)
}
// ledgerVersion retrieves the current version of the Ethereum wallet app running
// on the Ledger wallet.
//
// The version retrieval protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+----+---
// E0 | 06 | 00 | 00 | 00 | 04
//
// With no input data, and the output data being:
//
// Description | Length
// ---------------------------------------------------+--------
// Flags 01: arbitrary data signature enabled by user | 1 byte
// Application major version | 1 byte
// Application minor version | 1 byte
// Application patch version | 1 byte
func (w *ledgerWallet) ledgerVersion() ([3]byte, error) {
// Send the request and wait for the response
reply, err := w.ledgerExchange(ledgerOpGetConfiguration, 0, 0, nil)
if err != nil {
return [3]byte{}, err
}
if len(reply) != 4 {
return [3]byte{}, errInvalidVersionReply
}
// Cache the version for future reference
var version [3]byte
copy(version[:], reply[1:])
return version, nil
}
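// Illustrative sketch (editor's addition, not part of the original file): a worked
// example of the version exchange documented above, using made-up reply bytes.
//
//	// Request framing is handled by ledgerExchange: CLA=0xE0, INS=0x06, P1=0x00, P2=0x00, no data.
//	// A reply of {0x01, 0x01, 0x00, 0x03} would mean: flags=0x01 (arbitrary data
//	// signing enabled by the user), application version 1.0.3.
//	reply := []byte{0x01, 0x01, 0x00, 0x03}
//	var version [3]byte
//	copy(version[:], reply[1:]) // version == [3]byte{1, 0, 3}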
// ledgerDerive retrieves the currently active Ethereum address from a Ledger
// wallet at the specified derivation path.
//
// The address derivation protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+---
// E0 | 02 | 00 return address
// 01 display address and confirm before returning
// | 00: do not return the chain code
// | 01: return the chain code
// | var | 00
//
// Where the input data is:
//
// Description | Length
// -------------------------------------------------+--------
// Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes
// ... | 4 bytes
// Last derivation index (big endian) | 4 bytes
//
// And the output data is:
//
// Description | Length
// ------------------------+-------------------
// Public Key length | 1 byte
// Uncompressed Public Key | arbitrary
// Ethereum address length | 1 byte
// Ethereum address | 40 bytes hex ascii
// Chain code if requested | 32 bytes
func (w *ledgerWallet) ledgerDerive(derivationPath []uint32) (common.Address, error) {
// Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath))
path[0] = byte(len(derivationPath))
for i, component := range derivationPath {
binary.BigEndian.PutUint32(path[1+4*i:], component)
}
// Send the request and wait for the response
reply, err := w.ledgerExchange(ledgerOpRetrieveAddress, ledgerP1DirectlyFetchAddress, ledgerP2DiscardAddressChainCode, path)
if err != nil {
return common.Address{}, err
}
// Discard the public key, we don't need that for now
if len(reply) < 1 || len(reply) < 1+int(reply[0]) {
return common.Address{}, errors.New("reply lacks public key entry")
}
reply = reply[1+int(reply[0]):]
// Extract the Ethereum hex address string
if len(reply) < 1 || len(reply) < 1+int(reply[0]) {
return common.Address{}, errors.New("reply lacks address entry")
}
hexstr := reply[1 : 1+int(reply[0])]
// Decode the hex string into an Ethereum address and return
var address common.Address
hex.Decode(address[:], hexstr)
return address, nil
}
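// Illustrative sketch (editor's addition, not part of the original file): the flattened
// request payload built above for, say, the path m/44'/60'/0'/0, i.e. the derivation
// components {0x8000002C, 0x8000003C, 0x80000000, 0x00000000}, would be:
//
//	0x04                  // number of BIP-32 derivations
//	0x80 0x00 0x00 0x2C   // 44' (hardened)
//	0x80 0x00 0x00 0x3C   // 60' (hardened)
//	0x80 0x00 0x00 0x00   // 0'  (hardened)
//	0x00 0x00 0x00 0x00   // 0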
// ledgerSign sends the transaction to the Ledger wallet, and waits for the user
// to confirm or deny the transaction.
//
// The transaction signing protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+---
// E0 | 04 | 00: first transaction data block
// 80: subsequent transaction data block
// | 00 | variable | variable
//
// Where the input for the first transaction block (first 255 bytes) is:
//
// Description | Length
// -------------------------------------------------+----------
// Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes
// ... | 4 bytes
// Last derivation index (big endian) | 4 bytes
// RLP transaction chunk | arbitrary
//
// And the input for subsequent transaction blocks (first 255 bytes) are:
//
// Description | Length
// ----------------------+----------
// RLP transaction chunk | arbitrary
//
// And the output data is:
//
// Description | Length
// ------------+---------
// signature V | 1 byte
// signature R | 32 bytes
// signature S | 32 bytes
func (w *ledgerWallet) ledgerSign(derivationPath []uint32, address common.Address, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath))
path[0] = byte(len(derivationPath))
for i, component := range derivationPath {
binary.BigEndian.PutUint32(path[1+4*i:], component)
}
// Create the transaction RLP based on whether legacy or EIP155 signing was requested
var (
txrlp []byte
err error
)
if chainID == nil {
if txrlp, err = rlp.EncodeToBytes([]interface{}{tx.Nonce(), tx.GasPrice(), tx.Gas(), tx.To(), tx.Value(), tx.Data()}); err != nil {
return nil, err
}
} else {
if txrlp, err = rlp.EncodeToBytes([]interface{}{tx.Nonce(), tx.GasPrice(), tx.Gas(), tx.To(), tx.Value(), tx.Data(), chainID, big.NewInt(0), big.NewInt(0)}); err != nil {
return nil, err
}
}
payload := append(path, txrlp...)
// Send the request and wait for the response
var (
op = ledgerP1InitTransactionData
reply []byte
)
for len(payload) > 0 {
// Calculate the size of the next data chunk
chunk := 255
if chunk > len(payload) {
chunk = len(payload)
}
// Send the chunk over, ensuring it's processed correctly
reply, err = w.ledgerExchange(ledgerOpSignTransaction, op, 0, payload[:chunk])
if err != nil {
return nil, err
}
// Shift the payload and ensure subsequent chunks are marked as such
payload = payload[chunk:]
op = ledgerP1ContTransactionData
}
// Extract the Ethereum signature and do a sanity validation
if len(reply) != 65 {
return nil, errors.New("reply lacks signature")
}
signature := append(reply[1:], reply[0])
// Create the correct signer and signature transform based on the chain ID
var signer types.Signer
if chainID == nil {
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
signature[64] = signature[64] - byte(chainID.Uint64()*2+35)
}
// Inject the final signature into the transaction and sanity check the sender
signed, err := tx.WithSignature(signer, signature)
if err != nil {
return nil, err
}
sender, err := types.Sender(signer, signed)
if err != nil {
return nil, err
}
if sender != address {
return nil, fmt.Errorf("signer mismatch: expected %s, got %s", address.Hex(), sender.Hex())
}
return signed, nil
}
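// Illustrative sketch (editor's addition, not part of the original file): the V
// normalisation above, worked through for chain ID 1. The subtraction relies on the
// Ledger returning V with the EIP-155 offset already applied, while WithSignature
// expects a raw 0/1 recovery id in signature[64]:
//
//	// device reply for chainID == 1: V = 37 or 38 (recid + 1*2 + 35)
//	// signature[64] = V - byte(1*2+35) // -> 0 or 1, the recovery id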
// ledgerExchange performs a data exchange with the Ledger wallet, sending it a
// message and retrieving the response.
//
// The common transport header is defined as follows:
//
// Description | Length
// --------------------------------------+----------
// Communication channel ID (big endian) | 2 bytes
// Command tag | 1 byte
// Packet sequence index (big endian) | 2 bytes
// Payload | arbitrary
//
// The Communication channel ID allows commands multiplexing over the same
// physical link. It is not used for the time being, and should be set to 0101
// to avoid compatibility issues with implementations ignoring a leading 00 byte.
//
// The Command tag describes the message content. Use TAG_APDU (0x05) for standard
// APDU payloads, or TAG_PING (0x02) for a simple link test.
//
// The Packet sequence index describes the current sequence for fragmented payloads.
// The first fragment index is 0x00.
//
// APDU Command payloads are encoded as follows:
//
// Description | Length
// -----------------------------------
// APDU length (big endian) | 2 bytes
// APDU CLA | 1 byte
// APDU INS | 1 byte
// APDU P1 | 1 byte
// APDU P2 | 1 byte
// APDU length | 1 byte
// Optional APDU data | arbitrary
func (w *ledgerWallet) ledgerExchange(opcode ledgerOpcode, p1 ledgerParam1, p2 ledgerParam2, data []byte) ([]byte, error) {
// Construct the message payload, possibly split into multiple chunks
apdu := make([]byte, 2, 7+len(data))
binary.BigEndian.PutUint16(apdu, uint16(5+len(data)))
apdu = append(apdu, []byte{0xe0, byte(opcode), byte(p1), byte(p2), byte(len(data))}...)
apdu = append(apdu, data...)
// Stream all the chunks to the device
header := []byte{0x01, 0x01, 0x05, 0x00, 0x00} // Channel ID and command tag appended
chunk := make([]byte, 64)
space := len(chunk) - len(header)
for i := 0; len(apdu) > 0; i++ {
// Construct the new message to stream
chunk = append(chunk[:0], header...)
binary.BigEndian.PutUint16(chunk[3:], uint16(i))
if len(apdu) > space {
chunk = append(chunk, apdu[:space]...)
apdu = apdu[space:]
} else {
chunk = append(chunk, apdu...)
apdu = nil
}
// Send over to the device
w.log.Trace("Data chunk sent to the Ledger", "chunk", hexutil.Bytes(chunk))
if _, err := w.device.Write(chunk); err != nil {
return nil, err
}
}
// Stream the reply back from the wallet in 64 byte chunks
var reply []byte
chunk = chunk[:64] // Yeah, we surely have enough space
for {
// Read the next chunk from the Ledger wallet
if _, err := io.ReadFull(w.device, chunk); err != nil {
return nil, err
}
w.log.Trace("Data chunk received from the Ledger", "chunk", hexutil.Bytes(chunk))
// Make sure the transport header matches
if chunk[0] != 0x01 || chunk[1] != 0x01 || chunk[2] != 0x05 {
return nil, errReplyInvalidHeader
}
// If it's the first chunk, retrieve the total message length
var payload []byte
if chunk[3] == 0x00 && chunk[4] == 0x00 {
reply = make([]byte, 0, int(binary.BigEndian.Uint16(chunk[5:7])))
payload = chunk[7:]
} else {
payload = chunk[5:]
}
// Append to the reply and stop when filled up
if left := cap(reply) - len(reply); left > len(payload) {
reply = append(reply, payload...)
} else {
reply = append(reply, payload[:left]...)
break
}
}
return reply[:len(reply)-2], nil
}
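// Illustrative sketch (editor's addition, not part of the original file): the single
// HID frame written by ledgerExchange for the GET_CONFIGURATION APDU (no data) breaks
// down as follows:
//
//	0x01 0x01                 // communication channel ID
//	0x05                      // command tag (APDU)
//	0x00 0x00                 // packet sequence index 0
//	0x00 0x05                 // APDU length (5 bytes follow)
//	0xE0 0x06 0x00 0x00 0x00  // CLA, INS, P1, P2, data length (no data)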

View File

@@ -0,0 +1,330 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Trezor hardware
// wallets. The wire protocol spec can be found on the SatoshiLabs website:
// https://doc.satoshilabs.com/trezor-tech/api-protobuf.html
package usbwallet
import (
"encoding/binary"
"errors"
"fmt"
"io"
"math/big"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/usbwallet/internal/trezor"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/golang/protobuf/proto"
)
// ErrTrezorPINNeeded is returned if opening the trezor requires a PIN code. In
// this case, the calling application should display a pinpad and send back the
// encoded passphrase.
var ErrTrezorPINNeeded = errors.New("trezor: pin needed")
// errTrezorReplyInvalidHeader is the error message returned by a Trezor data exchange
// if the device replies with a mismatching header. This usually means the device
// is in browser mode.
var errTrezorReplyInvalidHeader = errors.New("trezor: invalid reply header")
// trezorDriver implements the communication with a Trezor hardware wallet.
type trezorDriver struct {
device io.ReadWriter // USB device connection to communicate through
version [3]uint32 // Current version of the Trezor firmware
label string // Current textual label of the Trezor device
pinwait bool // Flags whether the device is waiting for PIN entry
failure error // Any failure that would make the device unusable
log log.Logger // Contextual logger to tag the trezor with its id
}
// newTrezorDriver creates a new instance of a Trezor USB protocol driver.
func newTrezorDriver(logger log.Logger) driver {
return &trezorDriver{
log: logger,
}
}
// Status implements usbwallet.driver, returning whether the Trezor is opened, closed,
// or whether the Ethereum app was not started on it.
func (w *trezorDriver) Status() (string, error) {
if w.failure != nil {
return fmt.Sprintf("Failed: %v", w.failure), w.failure
}
if w.device == nil {
return "Closed", w.failure
}
if w.pinwait {
return fmt.Sprintf("Trezor v%d.%d.%d '%s' waiting for PIN", w.version[0], w.version[1], w.version[2], w.label), w.failure
}
return fmt.Sprintf("Trezor v%d.%d.%d '%s' online", w.version[0], w.version[1], w.version[2], w.label), w.failure
}
// Open implements usbwallet.driver, attempting to initialize the connection to
// the Trezor hardware wallet. Initializing the Trezor is a two phase operation:
// * The first phase is to initialize the connection and read the wallet's
// features. This phase is invoked if the provided passphrase is empty. The
// device will display the pinpad as a result and will return an appropriate
// error to notify the user that a second open phase is needed.
// * The second phase is to unlock access to the Trezor, which is done by the
// user actually providing a passphrase mapping a keyboard keypad to the pin
// number of the user (shuffled according to the pinpad displayed).
func (w *trezorDriver) Open(device io.ReadWriter, passphrase string) error {
w.device, w.failure = device, nil
// If phase 1 is requested, init the connection and wait for user callback
if passphrase == "" {
// If we're already waiting for a PIN entry, insta-return
if w.pinwait {
return ErrTrezorPINNeeded
}
// Initialize a connection to the device
features := new(trezor.Features)
if _, err := w.trezorExchange(&trezor.Initialize{}, features); err != nil {
return err
}
w.version = [3]uint32{features.GetMajorVersion(), features.GetMinorVersion(), features.GetPatchVersion()}
w.label = features.GetLabel()
// Do a manual ping, forcing the device to ask for its PIN
askPin := true
res, err := w.trezorExchange(&trezor.Ping{PinProtection: &askPin}, new(trezor.PinMatrixRequest), new(trezor.Success))
if err != nil {
return err
}
// Only return the PIN request if the device wasn't unlocked until now
if res == 1 {
return nil // Device responded with trezor.Success
}
w.pinwait = true
return ErrTrezorPINNeeded
}
// Phase 2 requested with actual PIN entry
w.pinwait = false
if _, err := w.trezorExchange(&trezor.PinMatrixAck{Pin: &passphrase}, new(trezor.Success)); err != nil {
w.failure = err
return err
}
return nil
}
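// Illustrative sketch (editor's addition, not part of the original file): how a caller
// might drive the two-phase open described above. askUserForPIN is a hypothetical
// helper that renders the shuffled pinpad and returns the keypad-encoded PIN.
//
//	if err := drv.Open(device, ""); err == ErrTrezorPINNeeded {
//		pin := askUserForPIN()      // e.g. "258" for the keypad positions the user tapped
//		err = drv.Open(device, pin) // phase 2: unlock with the encoded PIN
//	}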
// Close implements usbwallet.driver, cleaning up any metadata maintained within
// the Trezor driver.
func (w *trezorDriver) Close() error {
w.version, w.label, w.pinwait = [3]uint32{}, "", false
return nil
}
// Heartbeat implements usbwallet.driver, performing a sanity check against the
// Trezor to see if it's still online.
func (w *trezorDriver) Heartbeat() error {
if _, err := w.trezorExchange(&trezor.Ping{}, new(trezor.Success)); err != nil {
w.failure = err
return err
}
return nil
}
// Derive implements usbwallet.driver, sending a derivation request to the Trezor
// and returning the Ethereum address located on that derivation path.
func (w *trezorDriver) Derive(path accounts.DerivationPath) (common.Address, error) {
return w.trezorDerive(path)
}
// SignTx implements usbwallet.driver, sending the transaction to the Trezor and
// waiting for the user to confirm or deny the transaction.
func (w *trezorDriver) SignTx(path accounts.DerivationPath, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error) {
if w.device == nil {
return common.Address{}, nil, accounts.ErrWalletClosed
}
return w.trezorSign(path, tx, chainID)
}
// trezorDerive sends a derivation request to the Trezor device and returns the
// Ethereum address located on that path.
func (w *trezorDriver) trezorDerive(derivationPath []uint32) (common.Address, error) {
address := new(trezor.EthereumAddress)
if _, err := w.trezorExchange(&trezor.EthereumGetAddress{AddressN: derivationPath}, address); err != nil {
return common.Address{}, err
}
return common.BytesToAddress(address.GetAddress()), nil
}
// trezorSign sends the transaction to the Trezor wallet, and waits for the user
// to confirm or deny the transaction.
func (w *trezorDriver) trezorSign(derivationPath []uint32, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error) {
// Create the transaction initiation message
data := tx.Data()
length := uint32(len(data))
request := &trezor.EthereumSignTx{
AddressN: derivationPath,
Nonce: new(big.Int).SetUint64(tx.Nonce()).Bytes(),
GasPrice: tx.GasPrice().Bytes(),
GasLimit: new(big.Int).SetUint64(tx.Gas()).Bytes(),
Value: tx.Value().Bytes(),
DataLength: &length,
}
if to := tx.To(); to != nil {
request.To = (*to)[:] // Non contract deploy, set recipient explicitly
}
if length > 1024 { // Send the data chunked if that was requested
request.DataInitialChunk, data = data[:1024], data[1024:]
} else {
request.DataInitialChunk, data = data, nil
}
if chainID != nil { // EIP-155 transaction, set chain ID explicitly (only 32 bit is supported!?)
id := uint32(chainID.Int64())
request.ChainId = &id
}
// Send the initiation message and stream content until a signature is returned
response := new(trezor.EthereumTxRequest)
if _, err := w.trezorExchange(request, response); err != nil {
return common.Address{}, nil, err
}
for response.DataLength != nil && int(*response.DataLength) <= len(data) {
chunk := data[:*response.DataLength]
data = data[*response.DataLength:]
if _, err := w.trezorExchange(&trezor.EthereumTxAck{DataChunk: chunk}, response); err != nil {
return common.Address{}, nil, err
}
}
// Extract the Ethereum signature and do a sanity validation
if len(response.GetSignatureR()) == 0 || len(response.GetSignatureS()) == 0 || response.GetSignatureV() == 0 {
return common.Address{}, nil, errors.New("reply lacks signature")
}
signature := append(append(response.GetSignatureR(), response.GetSignatureS()...), byte(response.GetSignatureV()))
// Create the correct signer and signature transform based on the chain ID
var signer types.Signer
if chainID == nil {
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
signature[64] = signature[64] - byte(chainID.Uint64()*2+35)
}
// Inject the final signature into the transaction and sanity check the sender
signed, err := tx.WithSignature(signer, signature)
if err != nil {
return common.Address{}, nil, err
}
sender, err := types.Sender(signer, signed)
if err != nil {
return common.Address{}, nil, err
}
return sender, signed, nil
}
// trezorExchange performs a data exchange with the Trezor wallet, sending it a
// message and retrieving the response. If multiple responses are possible, the
// method will also return the index of the destination object used.
func (w *trezorDriver) trezorExchange(req proto.Message, results ...proto.Message) (int, error) {
// Construct the original message payload to chunk up
data, err := proto.Marshal(req)
if err != nil {
return 0, err
}
payload := make([]byte, 8+len(data))
copy(payload, []byte{0x23, 0x23})
binary.BigEndian.PutUint16(payload[2:], trezor.Type(req))
binary.BigEndian.PutUint32(payload[4:], uint32(len(data)))
copy(payload[8:], data)
// Stream all the chunks to the device
chunk := make([]byte, 64)
chunk[0] = 0x3f // Report ID magic number
for len(payload) > 0 {
// Construct the new message to stream, padding with zeroes if needed
if len(payload) > 63 {
copy(chunk[1:], payload[:63])
payload = payload[63:]
} else {
copy(chunk[1:], payload)
copy(chunk[1+len(payload):], make([]byte, 63-len(payload)))
payload = nil
}
// Send over to the device
w.log.Trace("Data chunk sent to the Trezor", "chunk", hexutil.Bytes(chunk))
if _, err := w.device.Write(chunk); err != nil {
return 0, err
}
}
// Stream the reply back from the wallet in 64 byte chunks
var (
kind uint16
reply []byte
)
for {
// Read the next chunk from the Trezor wallet
if _, err := io.ReadFull(w.device, chunk); err != nil {
return 0, err
}
w.log.Trace("Data chunk received from the Trezor", "chunk", hexutil.Bytes(chunk))
// Make sure the transport header matches
if chunk[0] != 0x3f || (len(reply) == 0 && (chunk[1] != 0x23 || chunk[2] != 0x23)) {
return 0, errTrezorReplyInvalidHeader
}
// If it's the first chunk, retrieve the reply message type and total message length
var payload []byte
if len(reply) == 0 {
kind = binary.BigEndian.Uint16(chunk[3:5])
reply = make([]byte, 0, int(binary.BigEndian.Uint32(chunk[5:9])))
payload = chunk[9:]
} else {
payload = chunk[1:]
}
// Append to the reply and stop when filled up
if left := cap(reply) - len(reply); left > len(payload) {
reply = append(reply, payload...)
} else {
reply = append(reply, payload[:left]...)
break
}
}
// Try to parse the reply into the requested reply message
if kind == uint16(trezor.MessageType_MessageType_Failure) {
// Trezor returned a failure, extract and return the message
failure := new(trezor.Failure)
if err := proto.Unmarshal(reply, failure); err != nil {
return 0, err
}
return 0, errors.New("trezor: " + failure.GetMessage())
}
if kind == uint16(trezor.MessageType_MessageType_ButtonRequest) {
// Trezor is waiting for user confirmation, ack and wait for the next message
return w.trezorExchange(&trezor.ButtonAck{}, results...)
}
for i, res := range results {
if trezor.Type(res) == kind {
return i, proto.Unmarshal(reply, res)
}
}
expected := make([]string, len(results))
for i, res := range results {
expected[i] = trezor.Name(trezor.Type(res))
}
return 0, fmt.Errorf("trezor: expected reply types %s, got %s", expected, trezor.Name(kind))
}
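// Illustrative sketch (editor's addition, not part of the original file): layout of the
// first 64-byte report written above for a message with a 10-byte protobuf body:
//
//	0x3F                    // USB report ID magic
//	0x23 0x23               // "##" header magic
//	<2-byte message type>   // big endian
//	0x00 0x00 0x00 0x0A     // big-endian payload length (10)
//	<10 payload bytes>      // protobuf-encoded message
//	<45 zero bytes>         // padding to fill the 63-byte data area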

View File

@@ -1,25 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package usbwallet implements support for USB hardware wallets.
package usbwallet
// deviceID is a combined vendor/product identifier to uniquely identify a USB
// hardware device.
type deviceID struct {
Vendor uint16 // The Vendor identifer
Product uint16 // The Product identifier
}

View File

@@ -0,0 +1,562 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package usbwallet implements support for USB hardware wallets.
package usbwallet
import (
"context"
"fmt"
"io"
"math/big"
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
)
// Maximum time between wallet health checks to detect USB unplugs.
const heartbeatCycle = time.Second
// Minimum time to wait between self derivation attempts, even if the user is
// requesting accounts like crazy.
const selfDeriveThrottling = time.Second
// driver defines the vendor-specific functionality hardware wallet instances
// must implement to allow using them with the wallet lifecycle management.
type driver interface {
// Status returns a textual status to aid the user in the current state of the
// wallet. It also returns an error indicating any failure the wallet might have
// encountered.
Status() (string, error)
// Open initializes access to a wallet instance. The passphrase parameter may
// or may not be used by the implementation of a particular wallet instance.
Open(device io.ReadWriter, passphrase string) error
// Close releases any resources held by an open wallet instance.
Close() error
// Heartbeat performs a sanity check against the hardware wallet to see if it
// is still online and healthy.
Heartbeat() error
// Derive sends a derivation request to the USB device and returns the Ethereum
// address located on that path.
Derive(path accounts.DerivationPath) (common.Address, error)
// SignTx sends the transaction to the USB device and waits for the user to confirm
// or deny the transaction.
SignTx(path accounts.DerivationPath, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error)
}
// wallet represents the common functionality shared by all USB hardware
// wallets to prevent reimplementing the same complex maintenance mechanisms
// for different vendors.
type wallet struct {
hub *Hub // USB hub scanning
driver driver // Hardware implementation of the low level device operations
url *accounts.URL // Textual URL uniquely identifying this wallet
info hid.DeviceInfo // Known USB device infos about the wallet
device *hid.Device // USB device advertising itself as a hardware wallet
accounts []accounts.Account // List of derived accounts pinned on the hardware wallet
paths map[common.Address]accounts.DerivationPath // Known derivation paths for signing operations
deriveNextPath accounts.DerivationPath // Next derivation path for account auto-discovery
deriveNextAddr common.Address // Next derived account address for auto-discovery
deriveChain ethereum.ChainStateReader // Blockchain state reader to discover used accounts with
deriveReq chan chan struct{} // Channel to request a self-derivation on
deriveQuit chan chan error // Channel to terminate the self-deriver with
healthQuit chan chan error
// Locking a hardware wallet is a bit special. Since hardware devices are lower
// performing, any communication with them might take a non-negligible amount of
// time. Worse still, waiting for user confirmation can take arbitrarily long,
// but exclusive communication must be upheld for its duration. Locking the entire
// wallet in the meantime, however, would stall parts of the system that don't want
// to communicate, but only to read some state (e.g. list the accounts).
//
// As such, a hardware wallet needs two locks to function correctly. A state
// lock can be used to protect the wallet's software-side internal state, which
// must not be held exclusively during hardware communication. A communication
// lock can be used to achieve exclusive access to the device itself; this one,
// however, should allow "skipping" the wait for operations that might want to
// use the device, but can live without it too (e.g. account self-derivation).
//
// Since we have two locks, it's important to know how to properly use them:
// - Communication requires the `device` to not change, so obtaining the
// commsLock should be done after having a stateLock.
// - Communication must not disable read access to the wallet state, so it
// must only ever hold a *read* lock to stateLock.
commsLock chan struct{} // Mutex (buf=1) for the USB comms without keeping the state locked
stateLock sync.RWMutex // Protects read and write access to the wallet struct fields
log log.Logger // Contextual logger to tag the base with its id
}
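// Illustrative sketch (editor's addition, not part of the original file): the lock
// ordering implied by the comment above, as followed by Derive and SignTx below:
//
//	w.stateLock.RLock()       // 1. pin the wallet state (device must not change)
//	<-w.commsLock             // 2. acquire exclusive device access
//	// ... talk to the device ...
//	w.commsLock <- struct{}{} // 3. release the device
//	w.stateLock.RUnlock()     // 4. release the state read lock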
// URL implements accounts.Wallet, returning the URL of the USB hardware device.
func (w *wallet) URL() accounts.URL {
return *w.url // Immutable, no need for a lock
}
// Status implements accounts.Wallet, returning a custom status message from the
// underlying vendor-specific hardware wallet implementation.
func (w *wallet) Status() (string, error) {
w.stateLock.RLock() // No device communication, state lock is enough
defer w.stateLock.RUnlock()
status, failure := w.driver.Status()
if w.device == nil {
return "Closed", failure
}
return status, failure
}
// Open implements accounts.Wallet, attempting to open a USB connection to the
// hardware wallet.
func (w *wallet) Open(passphrase string) error {
w.stateLock.Lock() // State lock is enough since there's no connection yet at this point
defer w.stateLock.Unlock()
// If the device was already opened once, refuse to try again
if w.paths != nil {
return accounts.ErrWalletAlreadyOpen
}
// Make sure the actual device connection is done only once
if w.device == nil {
device, err := w.info.Open()
if err != nil {
return err
}
w.device = device
w.commsLock = make(chan struct{}, 1)
w.commsLock <- struct{}{} // Enable lock
}
// Delegate device initialization to the underlying driver
if err := w.driver.Open(w.device, passphrase); err != nil {
return err
}
// Connection successful, start life-cycle management
w.paths = make(map[common.Address]accounts.DerivationPath)
w.deriveReq = make(chan chan struct{})
w.deriveQuit = make(chan chan error)
w.healthQuit = make(chan chan error)
go w.heartbeat()
go w.selfDerive()
// Notify anyone listening for wallet events that a new device is accessible
go w.hub.updateFeed.Send(accounts.WalletEvent{Wallet: w, Kind: accounts.WalletOpened})
return nil
}
// heartbeat is a health check loop for the USB wallets to periodically verify
// whether they are still present or if they malfunctioned.
func (w *wallet) heartbeat() {
w.log.Debug("USB wallet health-check started")
defer w.log.Debug("USB wallet health-check stopped")
// Execute heartbeat checks until termination or error
var (
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until termination is requested or the heartbeat cycle arrives
select {
case errc = <-w.healthQuit:
// Termination requested
continue
case <-time.After(heartbeatCycle):
// Heartbeat time
}
// Execute a tiny data exchange to see responsiveness
w.stateLock.RLock()
if w.device == nil {
// Terminated while waiting for the lock
w.stateLock.RUnlock()
continue
}
<-w.commsLock // Don't lock state while resolving version
err = w.driver.Heartbeat()
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
if err != nil {
w.stateLock.Lock() // Lock state to tear the wallet down
w.close()
w.stateLock.Unlock()
}
// Ignore non hardware related errors
err = nil
}
// In case of error, wait for termination
if err != nil {
w.log.Debug("USB wallet health-check failed", "err", err)
errc = <-w.healthQuit
}
errc <- err
}
// Close implements accounts.Wallet, closing the USB connection to the device.
func (w *wallet) Close() error {
// Ensure the wallet was opened
w.stateLock.RLock()
hQuit, dQuit := w.healthQuit, w.deriveQuit
w.stateLock.RUnlock()
// Terminate the health checks
var herr error
if hQuit != nil {
errc := make(chan error)
hQuit <- errc
herr = <-errc // Save for later, we *must* close the USB
}
// Terminate the self-derivations
var derr error
if dQuit != nil {
errc := make(chan error)
dQuit <- errc
derr = <-errc // Save for later, we *must* close the USB
}
// Terminate the device connection
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.healthQuit = nil
w.deriveQuit = nil
w.deriveReq = nil
if err := w.close(); err != nil {
return err
}
if herr != nil {
return herr
}
return derr
}
// close is the internal wallet closer that terminates the USB connection and
// resets all the fields to their defaults.
//
// Note, close assumes the state lock is held!
func (w *wallet) close() error {
// Allow duplicate closes, especially for health-check failures
if w.device == nil {
return nil
}
// Close the device, clear everything, then return
w.device.Close()
w.device = nil
w.accounts, w.paths = nil, nil
w.driver.Close()
return nil
}
// Accounts implements accounts.Wallet, returning the list of accounts pinned to
// the USB hardware wallet. If self-derivation was enabled, the account list is
// periodically expanded based on current chain state.
func (w *wallet) Accounts() []accounts.Account {
// Attempt self-derivation if it's running
reqc := make(chan struct{}, 1)
select {
case w.deriveReq <- reqc:
// Self-derivation request accepted, wait for it
<-reqc
default:
// Self-derivation offline, throttled or busy, skip
}
// Return whatever account list we ended up with
w.stateLock.RLock()
defer w.stateLock.RUnlock()
cpy := make([]accounts.Account, len(w.accounts))
copy(cpy, w.accounts)
return cpy
}
// selfDerive is an account derivation loop that upon request attempts to find
// new non-zero accounts.
func (w *wallet) selfDerive() {
w.log.Debug("USB wallet self-derivation started")
defer w.log.Debug("USB wallet self-derivation stopped")
// Execute self-derivations until termination or error
var (
reqc chan struct{}
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until either derivation or termination is requested
select {
case errc = <-w.deriveQuit:
// Termination requested
continue
case reqc = <-w.deriveReq:
// Account discovery requested
}
// Derivation needs a chain and device access, skip if either unavailable
w.stateLock.RLock()
if w.device == nil || w.deriveChain == nil {
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
select {
case <-w.commsLock:
default:
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
// Device lock obtained, derive the next batch of accounts
var (
accs []accounts.Account
paths []accounts.DerivationPath
nextAddr = w.deriveNextAddr
nextPath = w.deriveNextPath
context = context.Background()
)
for empty := false; !empty; {
// Retrieve the next derived Ethereum account
if nextAddr == (common.Address{}) {
if nextAddr, err = w.driver.Derive(nextPath); err != nil {
w.log.Warn("USB wallet account derivation failed", "err", err)
break
}
}
// Check the account's status against the current chain state
var (
balance *big.Int
nonce uint64
)
balance, err = w.deriveChain.BalanceAt(context, nextAddr, nil)
if err != nil {
w.log.Warn("USB wallet balance retrieval failed", "err", err)
break
}
nonce, err = w.deriveChain.NonceAt(context, nextAddr, nil)
if err != nil {
w.log.Warn("USB wallet nonce retrieval failed", "err", err)
break
}
// If the next account is empty, stop self-derivation, but add it nonetheless
if balance.Sign() == 0 && nonce == 0 {
empty = true
}
// We've just self-derived a new account, start tracking it locally
path := make(accounts.DerivationPath, len(nextPath))
copy(path[:], nextPath[:])
paths = append(paths, path)
account := accounts.Account{
Address: nextAddr,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
accs = append(accs, account)
// Display a log message to the user for new (or previously empty accounts)
if _, known := w.paths[nextAddr]; !known || (!empty && nextAddr == w.deriveNextAddr) {
w.log.Info("USB wallet discovered new account", "address", nextAddr, "path", path, "balance", balance, "nonce", nonce)
}
// Fetch the next potential account
if !empty {
nextAddr = common.Address{}
nextPath[len(nextPath)-1]++
}
}
// Self derivation complete, release device lock
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// Insert any accounts successfully derived
w.stateLock.Lock()
for i := 0; i < len(accs); i++ {
if _, ok := w.paths[accs[i].Address]; !ok {
w.accounts = append(w.accounts, accs[i])
w.paths[accs[i].Address] = paths[i]
}
}
// Shift the self-derivation forward
// TODO(karalabe): don't overwrite changes from wallet.SelfDerive
w.deriveNextAddr = nextAddr
w.deriveNextPath = nextPath
w.stateLock.Unlock()
// Notify the user of termination and loop after a bit of time (to avoid thrashing)
reqc <- struct{}{}
if err == nil {
select {
case errc = <-w.deriveQuit:
// Termination requested, abort
case <-time.After(selfDeriveThrottling):
// Waited enough, willing to self-derive again
}
}
}
// In case of error, wait for termination
if err != nil {
w.log.Debug("USB wallet self-derivation failed", "err", err)
errc = <-w.deriveQuit
}
errc <- err
}
// Contains implements accounts.Wallet, returning whether a particular account is
// or is not pinned into this wallet instance. Although we could attempt to resolve
// unpinned accounts, that would be a non-negligible hardware operation.
func (w *wallet) Contains(account accounts.Account) bool {
w.stateLock.RLock()
defer w.stateLock.RUnlock()
_, exists := w.paths[account.Address]
return exists
}
// Derive implements accounts.Wallet, deriving a new account at the specific
// derivation path. If pin is set to true, the account will be added to the list
// of tracked accounts.
func (w *wallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Account, error) {
// Try to derive the actual account and update its URL if successful
w.stateLock.RLock() // Avoid device disappearing during derivation
if w.device == nil {
w.stateLock.RUnlock()
return accounts.Account{}, accounts.ErrWalletClosed
}
<-w.commsLock // Avoid concurrent hardware access
address, err := w.driver.Derive(path)
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// If an error occurred or no pinning was requested, return
if err != nil {
return accounts.Account{}, err
}
account := accounts.Account{
Address: address,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
if !pin {
return account, nil
}
// Pinning needs to modify the state
w.stateLock.Lock()
defer w.stateLock.Unlock()
if _, ok := w.paths[address]; !ok {
w.accounts = append(w.accounts, account)
w.paths[address] = path
}
return account, nil
}
// SelfDerive implements accounts.Wallet, trying to discover accounts that the
// user used previously (based on the chain state), but ones that he/she did not
// explicitly pin to the wallet manually. To avoid chain head monitoring, self
// derivation only runs during account listing (and even then throttled).
func (w *wallet) SelfDerive(base accounts.DerivationPath, chain ethereum.ChainStateReader) {
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.deriveNextPath = make(accounts.DerivationPath, len(base))
copy(w.deriveNextPath[:], base[:])
w.deriveNextAddr = common.Address{}
w.deriveChain = chain
}
// SignHash implements accounts.Wallet, however signing arbitrary data is not
// supported for hardware wallets, so this method will always return an error.
func (w *wallet) SignHash(account accounts.Account, hash []byte) ([]byte, error) {
return nil, accounts.ErrNotSupported
}
// SignTx implements accounts.Wallet. It sends the transaction over to the Ledger
// wallet to request a confirmation from the user. It returns either the signed
// transaction or a failure if the user denied the transaction.
//
// Note, if the version of the Ethereum application running on the Ledger wallet is
// too old to sign EIP-155 transactions, but one is requested nonetheless, an error
// will be returned as opposed to silently signing in Homestead mode.
func (w *wallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
w.stateLock.RLock() // Comms have own mutex, this is for the state fields
defer w.stateLock.RUnlock()
// If the wallet is closed, abort
if w.device == nil {
return nil, accounts.ErrWalletClosed
}
// Make sure the requested account is contained within
path, ok := w.paths[account.Address]
if !ok {
return nil, accounts.ErrUnknownAccount
}
// All infos gathered and metadata checks out, request signing
<-w.commsLock
defer func() { w.commsLock <- struct{}{} }()
// Ensure the device isn't screwed with while user confirmation is pending
// TODO(karalabe): remove if hotplug lands on Windows
w.hub.commsLock.Lock()
w.hub.commsPend++
w.hub.commsLock.Unlock()
defer func() {
w.hub.commsLock.Lock()
w.hub.commsPend--
w.hub.commsLock.Unlock()
}()
// Sign the transaction and verify the sender to avoid hardware fault surprises
sender, signed, err := w.driver.SignTx(path, tx, chainID)
if err != nil {
return nil, err
}
if sender != account.Address {
return nil, fmt.Errorf("signer mismatch: expected %s, got %s", account.Address.Hex(), sender.Hex())
}
return signed, nil
}
// SignHashWithPassphrase implements accounts.Wallet, however signing arbitrary
// data is not supported for USB hardware wallets, so this method will always return
// an error.
func (w *wallet) SignHashWithPassphrase(account accounts.Account, passphrase string, hash []byte) ([]byte, error) {
return w.SignHash(account, hash)
}
// SignTxWithPassphrase implements accounts.Wallet, attempting to sign the given
// transaction with the given account using passphrase as extra authentication.
// Since USB wallets don't rely on passphrases, these are silently ignored.
func (w *wallet) SignTxWithPassphrase(account accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
return w.SignTx(account, tx, chainID)
}

View File

@@ -21,9 +21,10 @@ environment:
PATH: C:\msys64\mingw32\bin\;C:\Program Files (x86)\NSIS\;%PATH%
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://storage.googleapis.com/golang/go1.8.3.windows-%GETH_ARCH%.zip
- 7z x go1.8.3.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- appveyor DownloadFile https://storage.googleapis.com/golang/go1.10.3.windows-%GETH_ARCH%.zip
- 7z x go1.10.3.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version

View File

@@ -2,12 +2,7 @@
Tagged releases and develop branch commits are available as installable Debian packages
for Ubuntu. Packages are built for the all Ubuntu versions which are supported by
Canonical:
- Trusty Tahr (14.04 LTS)
- Xenial Xerus (16.04 LTS)
- Yakkety Yak (16.10)
- Zesty Zapus (17.04)
Canonical.
Packages of develop branch commits have suffix -unstable and cannot be installed alongside
the stable version. Switching between release streams requires user intervention.
@@ -21,18 +16,18 @@ variable which Travis CI makes available to certain builds.
We want to build go-ethereum with the most recent version of Go, irrespective of the Go
version that is available in the main Ubuntu repository. In order to make this possible,
our PPA depends on the ~gophers/ubuntu/archive PPA. Our source package build-depends on
golang-1.8, which is co-installable alongside the regular golang package. PPA dependencies
golang-1.10, which is co-installable alongside the regular golang package. PPA dependencies
can be edited at https://launchpad.net/%7Eethereum/+archive/ubuntu/ethereum/+edit-dependencies
## Building Packages Locally (for testing)
You need to run Ubuntu to do test packaging.
Add the gophers PPA and install Go 1.8 and Debian packaging tools:
Add the gophers PPA and install Go 1.10 and Debian packaging tools:
$ sudo apt-add-repository ppa:gophers/ubuntu/archive
$ sudo apt-get update
$ sudo apt-get install build-essential golang-1.8 devscripts debhelper
$ sudo apt-get install build-essential golang-1.10 devscripts debhelper
Create the source packages:

View File

@@ -19,13 +19,14 @@
/*
The ci command is called from Continuous Integration scripts.
Usage: go run ci.go <command> <command flags/arguments>
Usage: go run build/ci.go <command> <command flags/arguments>
Available commands are:
install [ -arch architecture ] [ packages... ] -- builds packages and executables
test [ -coverage ] [ -misspell ] [ packages... ] -- runs the tests
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artefacts
install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables
test [ -coverage ] [ packages... ] -- runs the tests
lint -- runs certain pre-selected linters
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artifacts
importkeys -- imports signing keys from env
debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package
nsis -- creates a Windows NSIS installer
@@ -58,6 +59,8 @@ import (
"time"
"github.com/ethereum/go-ethereum/internal/build"
"github.com/ethereum/go-ethereum/params"
sv "github.com/ethereum/go-ethereum/swarm/version"
)
var (
@@ -76,50 +79,84 @@ var (
executablePath("geth"),
executablePath("puppeth"),
executablePath("rlpdump"),
executablePath("swarm"),
executablePath("wnode"),
}
// Files that end up in the swarm*.zip archive.
swarmArchiveFiles = []string{
"COPYING",
executablePath("swarm"),
}
// A debian package is created for all executables listed here.
debExecutables = []debExecutable{
{
Name: "abigen",
BinaryName: "abigen",
Description: "Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages.",
},
{
Name: "bootnode",
BinaryName: "bootnode",
Description: "Ethereum bootnode.",
},
{
Name: "evm",
BinaryName: "evm",
Description: "Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode.",
},
{
Name: "geth",
BinaryName: "geth",
Description: "Ethereum CLI client.",
},
{
Name: "puppeth",
BinaryName: "puppeth",
Description: "Ethereum private network manager.",
},
{
Name: "rlpdump",
BinaryName: "rlpdump",
Description: "Developer utility tool that prints RLP structures.",
},
{
Name: "swarm",
Description: "Ethereum Swarm daemon and tools",
},
{
Name: "wnode",
BinaryName: "wnode",
Description: "Ethereum Whisper diagnostic tool",
},
}
// A debian package is created for all executables listed here.
debSwarmExecutables = []debExecutable{
{
BinaryName: "swarm",
PackageName: "ethereum-swarm",
Description: "Ethereum Swarm daemon and tools",
},
}
debEthereum = debPackage{
Name: "ethereum",
Version: params.Version,
Executables: debExecutables,
}
debSwarm = debPackage{
Name: "ethereum-swarm",
Version: sv.Version,
Executables: debSwarmExecutables,
}
// Debian meta packages to build and push to Ubuntu PPA
debPackages = []debPackage{
debSwarm,
debEthereum,
}
// Packages to be cross-compiled by the xgo command
allCrossCompiledArchiveFiles = append(allToolsArchiveFiles, swarmArchiveFiles...)
// Distros for which packages are created.
// Note: vivid is unsupported because there is no golang-1.6 package for it.
// Note: wily is unsupported because it was officially deprecated on lanchpad.
debDistros = []string{"trusty", "xenial", "yakkety", "zesty"}
// Note: yakkety is unsupported because it was officially deprecated on lanchpad.
// Note: zesty is unsupported because it was officially deprecated on lanchpad.
// Note: artful is unsupported because it was officially deprecated on lanchpad.
debDistros = []string{"trusty", "xenial", "bionic", "cosmic"}
)
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@@ -145,6 +182,8 @@ func main() {
doInstall(os.Args[2:])
case "test":
doTest(os.Args[2:])
case "lint":
doLint(os.Args[2:])
case "archive":
doArchive(os.Args[2:])
case "debsrc":
@@ -169,17 +208,24 @@ func main() {
func doInstall(cmdline []string) {
var (
arch = flag.String("arch", "", "Architecture to cross build for")
cc = flag.String("cc", "", "C compiler to cross build with")
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
// Check Go version. People regularly open issues about compilation
// failure with outdated Go. This should save them the trouble.
if runtime.Version() < "go1.7" && !strings.HasPrefix(runtime.Version(), "devel") {
log.Println("You have Go version", runtime.Version())
log.Println("go-ethereum requires at least Go version 1.7 and cannot")
log.Println("be compiled with an earlier version. Please upgrade your Go installation.")
os.Exit(1)
if !strings.Contains(runtime.Version(), "devel") {
// Figure out the minor version number since we can't textually compare (1.10 < 1.9)
var minor int
fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
if minor < 9 {
log.Println("You have Go version", runtime.Version())
log.Println("go-ethereum requires at least Go version 1.9 and cannot")
log.Println("be compiled with an earlier version. Please upgrade your Go installation.")
os.Exit(1)
}
}
// Compile packages given as arguments, or everything if there are no arguments.
packages := []string{"./..."}
@@ -195,7 +241,7 @@ func doInstall(cmdline []string) {
build.MustRun(goinstall)
return
}
// If we are cross compiling to ARMv5 ARMv6 or ARMv7, clean any prvious builds
// If we are cross compiling to ARMv5 ARMv6 or ARMv7, clean any previous builds
if *arch == "arm" {
os.RemoveAll(filepath.Join(runtime.GOROOT(), "pkg", runtime.GOOS+"_arm"))
for _, path := range filepath.SplitList(build.GOPATH()) {
@@ -203,7 +249,7 @@ func doInstall(cmdline []string) {
}
}
// Seems we are cross compiling, work around forbidden GOBIN
goinstall := goToolArch(*arch, "install", buildFlags(env)...)
goinstall := goToolArch(*arch, *cc, "install", buildFlags(env)...)
goinstall.Args = append(goinstall.Args, "-v")
goinstall.Args = append(goinstall.Args, []string{"-buildmode", "archive"}...)
goinstall.Args = append(goinstall.Args, packages...)
@@ -217,7 +263,7 @@ func doInstall(cmdline []string) {
}
for name := range pkgs {
if name == "main" {
gobuild := goToolArch(*arch, "build", buildFlags(env)...)
gobuild := goToolArch(*arch, *cc, "build", buildFlags(env)...)
gobuild.Args = append(gobuild.Args, "-v")
gobuild.Args = append(gobuild.Args, []string{"-o", executablePath(cmd.Name())}...)
gobuild.Args = append(gobuild.Args, "."+string(filepath.Separator)+filepath.Join("cmd", cmd.Name()))
@@ -245,21 +291,11 @@ func buildFlags(env build.Environment) (flags []string) {
}
func goTool(subcmd string, args ...string) *exec.Cmd {
return goToolArch(runtime.GOARCH, subcmd, args...)
return goToolArch(runtime.GOARCH, os.Getenv("CC"), subcmd, args...)
}
func goToolArch(arch string, subcmd string, args ...string) *exec.Cmd {
gocmd := filepath.Join(runtime.GOROOT(), "bin", "go")
cmd := exec.Command(gocmd, subcmd)
cmd.Args = append(cmd.Args, args...)
if subcmd == "build" || subcmd == "install" || subcmd == "test" {
// Go CGO has a Windows linker error prior to 1.8 (https://github.com/golang/go/issues/8756).
// Work around issue by allowing multiple definitions for <1.8 builds.
if runtime.GOOS == "windows" && runtime.Version() < "go1.8" {
cmd.Args = append(cmd.Args, []string{"-ldflags", "-extldflags -Wl,--allow-multiple-definition"}...)
}
}
func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd {
cmd := build.GoTool(subcmd, args...)
cmd.Env = []string{"GOPATH=" + build.GOPATH()}
if arch == "" || arch == runtime.GOARCH {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
@@ -267,6 +303,9 @@ func goToolArch(arch string, subcmd string, args ...string) *exec.Cmd {
cmd.Env = append(cmd.Env, "CGO_ENABLED=1")
cmd.Env = append(cmd.Env, "GOARCH="+arch)
}
if cc != "" {
cmd.Env = append(cmd.Env, "CC="+cc)
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "GOBIN=") {
continue
@@ -282,7 +321,6 @@ func goToolArch(arch string, subcmd string, args ...string) *exec.Cmd {
func doTest(cmdline []string) {
var (
misspell = flag.Bool("misspell", false, "Whether to run the spell checker")
coverage = flag.Bool("coverage", false, "Whether to record code coverage")
)
flag.CommandLine.Parse(cmdline)
@@ -296,10 +334,7 @@ func doTest(cmdline []string) {
// Run analysis tools before the tests.
build.MustRun(goTool("vet", packages...))
if *misspell {
// TODO(karalabe): Reenable after false detection is fixed: https://github.com/client9/misspell/issues/105
// spellcheck(packages)
}
// Run the actual tests.
gotest := goTool("test", buildFlags(env)...)
// Test a single package at a time. CI builders are slow
@@ -308,40 +343,47 @@ func doTest(cmdline []string) {
if *coverage {
gotest.Args = append(gotest.Args, "-covermode=atomic", "-cover")
}
gotest.Args = append(gotest.Args, packages...)
build.MustRun(gotest)
}
// spellcheck runs the client9/misspell spellchecker package on all Go, Cgo and
// test files in the requested packages.
func spellcheck(packages []string) {
// Ensure the spellchecker is available
build.MustRun(goTool("get", "github.com/client9/misspell/cmd/misspell"))
// runs gometalinter on requested packages
func doLint(cmdline []string) {
flag.CommandLine.Parse(cmdline)
// Windows chokes on long argument lists, check packages individually
for _, pkg := range packages {
// The spell checker doesn't work on packages, gather all .go files for it
out, err := goTool("list", "-f", "{{.Dir}}{{range .GoFiles}}\n{{.}}{{end}}{{range .CgoFiles}}\n{{.}}{{end}}{{range .TestGoFiles}}\n{{.}}{{end}}", pkg).CombinedOutput()
if err != nil {
log.Fatalf("source file listing failed: %v\n%s", err, string(out))
}
// Retrieve the folder and assemble the source list
lines := strings.Split(string(out), "\n")
root := lines[0]
packages := []string{"./..."}
if len(flag.CommandLine.Args()) > 0 {
packages = flag.CommandLine.Args()
}
// Get metalinter and install all supported linters
build.MustRun(goTool("get", "gopkg.in/alecthomas/gometalinter.v2"))
build.MustRunCommand(filepath.Join(GOBIN, "gometalinter.v2"), "--install")
sources := make([]string, 0, len(lines)-1)
for _, line := range lines[1:] {
if line = strings.TrimSpace(line); line != "" {
sources = append(sources, filepath.Join(root, line))
}
}
// Run the spell checker for this particular package
build.MustRunCommand(filepath.Join(GOBIN, "misspell"), append([]string{"-error"}, sources...)...)
// Run fast linters batched together
configs := []string{
"--vendor",
"--tests",
"--deadline=2m",
"--disable-all",
"--enable=goimports",
"--enable=varcheck",
"--enable=vet",
"--enable=gofmt",
"--enable=misspell",
"--enable=goconst",
"--min-occurrences=6", // for goconst
}
build.MustRunCommand(filepath.Join(GOBIN, "gometalinter.v2"), append(configs, packages...)...)
// Run slow linters one by one
for _, linter := range []string{"unconvert", "gosimple"} {
configs = []string{"--vendor", "--tests", "--deadline=10m", "--disable-all", "--enable=" + linter}
build.MustRunCommand(filepath.Join(GOBIN, "gometalinter.v2"), append(configs, packages...)...)
}
}
// Release Packaging
func doArchive(cmdline []string) {
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture cross packaging")
@@ -361,10 +403,14 @@ func doArchive(cmdline []string) {
}
var (
env = build.Env()
base = archiveBasename(*arch, env)
geth = "geth-" + base + ext
alltools = "geth-alltools-" + base + ext
env = build.Env()
basegeth = archiveBasename(*arch, params.ArchiveVersion(env.Commit))
geth = "geth-" + basegeth + ext
alltools = "geth-alltools-" + basegeth + ext
baseswarm = archiveBasename(*arch, sv.ArchiveVersion(env.Commit))
swarm = "swarm-" + baseswarm + ext
)
maybeSkipArchive(env)
if err := build.WriteArchive(geth, gethArchiveFiles); err != nil {
@@ -373,14 +419,17 @@ func doArchive(cmdline []string) {
if err := build.WriteArchive(alltools, allToolsArchiveFiles); err != nil {
log.Fatal(err)
}
for _, archive := range []string{geth, alltools} {
if err := build.WriteArchive(swarm, swarmArchiveFiles); err != nil {
log.Fatal(err)
}
for _, archive := range []string{geth, alltools, swarm} {
if err := archiveUpload(archive, *upload, *signer); err != nil {
log.Fatal(err)
}
}
}
func archiveBasename(arch string, env build.Environment) string {
func archiveBasename(arch string, archiveVersion string) string {
platform := runtime.GOOS + "-" + arch
if arch == "arm" {
platform += os.Getenv("GOARM")
@@ -391,18 +440,7 @@ func archiveBasename(arch string, env build.Environment) string {
if arch == "ios" {
platform = "ios-all"
}
return platform + "-" + archiveVersion(env)
}
func archiveVersion(env build.Environment) string {
version := build.VERSION()
if isUnstableBuild(env) {
version += "-unstable"
}
if env.Commit != "" {
version += "-" + env.Commit[:8]
}
return version
return platform + "-" + archiveVersion
}
func archiveUpload(archive string, blobstore string, signer string) error {
@@ -452,7 +490,6 @@ func maybeSkipArchive(env build.Environment) {
}
// Debian Packaging
func doDebianSource(cmdline []string) {
var (
signer = flag.String("signer", "", `Signing key name, also used as package author`)
@@ -476,21 +513,23 @@ func doDebianSource(cmdline []string) {
build.MustRun(gpg)
}
// Create the packages.
for _, distro := range debDistros {
meta := newDebMetadata(distro, *signer, env, now)
pkgdir := stageDebianSource(*workdir, meta)
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc")
debuild.Dir = pkgdir
build.MustRun(debuild)
// Create Debian packages and upload them
for _, pkg := range debPackages {
for _, distro := range debDistros {
meta := newDebMetadata(distro, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables)
pkgdir := stageDebianSource(*workdir, meta)
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc")
debuild.Dir = pkgdir
build.MustRun(debuild)
changes := fmt.Sprintf("%s_%s_source.changes", meta.Name(), meta.VersionString())
changes = filepath.Join(*workdir, changes)
if *signer != "" {
build.MustRunCommand("debsign", changes)
}
if *upload != "" {
build.MustRunCommand("dput", *upload, changes)
changes := fmt.Sprintf("%s_%s_source.changes", meta.Name(), meta.VersionString())
changes = filepath.Join(*workdir, changes)
if *signer != "" {
build.MustRunCommand("debsign", changes)
}
if *upload != "" {
build.MustRunCommand("dput", *upload, changes)
}
}
}
}
@@ -515,9 +554,17 @@ func isUnstableBuild(env build.Environment) bool {
return true
}
type debPackage struct {
Name string // the name of the Debian package to produce, e.g. "ethereum", or "ethereum-swarm"
Version string // the clean version of the debPackage, e.g. 1.8.12 or 0.3.0, without any metadata
Executables []debExecutable // executables to be included in the package
}
type debMetadata struct {
Env build.Environment
PackageName string
// go-ethereum version being built. Note that this
// is not the debian package version. The package version
// is constructed by VersionString.
@@ -529,21 +576,33 @@ type debMetadata struct {
}
type debExecutable struct {
Name, Description string
PackageName string
BinaryName string
Description string
}
func newDebMetadata(distro, author string, env build.Environment, t time.Time) debMetadata {
// Package returns the name of the package if present, or
// fallbacks to BinaryName
func (d debExecutable) Package() string {
if d.PackageName != "" {
return d.PackageName
}
return d.BinaryName
}
func newDebMetadata(distro, author string, env build.Environment, t time.Time, name string, version string, exes []debExecutable) debMetadata {
if author == "" {
// No signing key, use default author.
author = "Ethereum Builds <fjl@ethereum.org>"
}
return debMetadata{
PackageName: name,
Env: env,
Author: author,
Distro: distro,
Version: build.VERSION(),
Version: version,
Time: t.Format(time.RFC1123Z),
Executables: debExecutables,
Executables: exes,
}
}
@@ -551,9 +610,9 @@ func newDebMetadata(distro, author string, env build.Environment, t time.Time) d
// on all executable packages.
func (meta debMetadata) Name() string {
if isUnstableBuild(meta.Env) {
return "ethereum-unstable"
return meta.PackageName + "-unstable"
}
return "ethereum"
return meta.PackageName
}
// VersionString returns the debian version of the packages.
@@ -580,9 +639,9 @@ func (meta debMetadata) ExeList() string {
// ExeName returns the package name of an executable package.
func (meta debMetadata) ExeName(exe debExecutable) string {
if isUnstableBuild(meta.Env) {
return exe.Name + "-unstable"
return exe.Package() + "-unstable"
}
return exe.Name
return exe.Package()
}
// ExeConflicts returns the content of the Conflicts field
@@ -597,7 +656,7 @@ func (meta debMetadata) ExeConflicts(exe debExecutable) string {
// be preferred and the conflicting files should be handled via
// alternates. We might do this eventually but using a conflict is
// easier now.
return "ethereum, " + exe.Name
return "ethereum, " + exe.Package()
}
return ""
}
@@ -614,24 +673,23 @@ func stageDebianSource(tmpdir string, meta debMetadata) (pkgdir string) {
// Put the debian build files in place.
debian := filepath.Join(pkgdir, "debian")
build.Render("build/deb.rules", filepath.Join(debian, "rules"), 0755, meta)
build.Render("build/deb.changelog", filepath.Join(debian, "changelog"), 0644, meta)
build.Render("build/deb.control", filepath.Join(debian, "control"), 0644, meta)
build.Render("build/deb.copyright", filepath.Join(debian, "copyright"), 0644, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.rules", filepath.Join(debian, "rules"), 0755, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.changelog", filepath.Join(debian, "changelog"), 0644, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.control", filepath.Join(debian, "control"), 0644, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.copyright", filepath.Join(debian, "copyright"), 0644, meta)
build.RenderString("8\n", filepath.Join(debian, "compat"), 0644, meta)
build.RenderString("3.0 (native)\n", filepath.Join(debian, "source/format"), 0644, meta)
for _, exe := range meta.Executables {
install := filepath.Join(debian, meta.ExeName(exe)+".install")
docs := filepath.Join(debian, meta.ExeName(exe)+".docs")
build.Render("build/deb.install", install, 0644, exe)
build.Render("build/deb.docs", docs, 0644, exe)
build.Render("build/deb/"+meta.PackageName+"/deb.install", install, 0644, exe)
build.Render("build/deb/"+meta.PackageName+"/deb.docs", docs, 0644, exe)
}
return pkgdir
}
// Windows installer
func doWindowsInstaller(cmdline []string) {
// Parse the flags and make skip installer generation on PRs
var (
@@ -681,11 +739,11 @@ func doWindowsInstaller(cmdline []string) {
// Build the installer. This assumes that all the needed files have been previously
// built (don't mix building and packaging to keep cross compilation complexity to a
// minimum).
version := strings.Split(build.VERSION(), ".")
version := strings.Split(params.Version, ".")
if env.Commit != "" {
version[2] += "-" + env.Commit[:8]
}
installer, _ := filepath.Abs("geth-" + archiveBasename(*arch, env) + ".exe")
installer, _ := filepath.Abs("geth-" + archiveBasename(*arch, params.ArchiveVersion(env.Commit)) + ".exe")
build.MustRunCommand("makensis.exe",
"/DOUTPUTFILE="+installer,
"/DMAJORVERSION="+version[0],
@@ -721,9 +779,9 @@ func doAndroidArchive(cmdline []string) {
log.Fatal("Please ensure ANDROID_NDK points to your Android NDK")
}
// Build the Android archive and Maven resources
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile"))
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("init", "--ndk", os.Getenv("ANDROID_NDK")))
build.MustRun(gomobileTool("bind", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
build.MustRun(gomobileTool("bind", "-ldflags", "-s -w", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
if *local {
// If we're building locally, copy bundle to build dir and skip Maven
@@ -737,7 +795,7 @@ func doAndroidArchive(cmdline []string) {
maybeSkipArchive(env)
// Sign and upload the archive to Azure
archive := "geth-" + archiveBasename("android", env) + ".aar"
archive := "geth-" + archiveBasename("android", params.ArchiveVersion(env.Commit)) + ".aar"
os.Rename("geth.aar", archive)
if err := archiveUpload(archive, *upload, *signer); err != nil {
@@ -747,22 +805,27 @@ func doAndroidArchive(cmdline []string) {
os.Rename(archive, meta.Package+".aar")
if *signer != "" && *deploy != "" {
// Import the signing key into the local GPG instance
if b64key := os.Getenv(*signer); b64key != "" {
key, err := base64.StdEncoding.DecodeString(b64key)
if err != nil {
log.Fatalf("invalid base64 %s", *signer)
}
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
b64key := os.Getenv(*signer)
key, err := base64.StdEncoding.DecodeString(b64key)
if err != nil {
log.Fatalf("invalid base64 %s", *signer)
}
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
keyID, err := build.PGPKeyID(string(key))
if err != nil {
log.Fatal(err)
}
// Upload the artifacts to Sonatype and/or Maven Central
repo := *deploy + "/service/local/staging/deploy/maven2"
if meta.Develop {
repo = *deploy + "/content/repositories/snapshots"
}
build.MustRunCommand("mvn", "gpg:sign-and-deploy-file",
build.MustRunCommand("mvn", "gpg:sign-and-deploy-file", "-e", "-X",
"-settings=build/mvn.settings", "-Durl="+repo, "-DrepositoryId=ossrh",
"-Dgpg.keyname="+keyID,
"-DpomFile="+meta.Package+".pom", "-Dfile="+meta.Package+".aar")
}
}
@@ -772,9 +835,10 @@ func gomobileTool(subcmd string, args ...string) *exec.Cmd {
cmd.Args = append(cmd.Args, args...)
cmd.Env = []string{
"GOPATH=" + build.GOPATH(),
"PATH=" + GOBIN + string(os.PathListSeparator) + os.Getenv("PATH"),
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") {
continue
}
cmd.Env = append(cmd.Env, e)
@@ -816,7 +880,7 @@ func newMavenMetadata(env build.Environment) mavenMetadata {
}
}
// Render the version and package strings
version := build.VERSION()
version := params.Version
if isUnstableBuild(env) {
version += "-SNAPSHOT"
}
@@ -841,9 +905,9 @@ func doXCodeFramework(cmdline []string) {
env := build.Env()
// Build the iOS XCode framework
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile"))
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("init"))
bind := gomobileTool("bind", "--target", "ios", "--tags", "ios", "-v", "github.com/ethereum/go-ethereum/mobile")
bind := gomobileTool("bind", "-ldflags", "-s -w", "--target", "ios", "--tags", "ios", "-v", "github.com/ethereum/go-ethereum/mobile")
if *local {
// If we're building locally, use the build folder and stop afterwards
@@ -851,7 +915,7 @@ func doXCodeFramework(cmdline []string) {
build.MustRun(bind)
return
}
archive := "geth-" + archiveBasename("ios", env)
archive := "geth-" + archiveBasename("ios", params.ArchiveVersion(env.Commit))
if err := os.Mkdir(archive, os.ModePerm); err != nil {
log.Fatal(err)
}
@@ -907,7 +971,7 @@ func newPodMetadata(env build.Environment, archive string) podMetadata {
}
}
}
version := build.VERSION()
version := params.Version
if isUnstableBuild(env) {
version += "-unstable." + env.Buildnum
}
@@ -937,7 +1001,7 @@ func doXgo(cmdline []string) {
if *alltools {
args = append(args, []string{"--dest", GOBIN}...)
for _, res := range allToolsArchiveFiles {
for _, res := range allCrossCompiledArchiveFiles {
if strings.HasPrefix(res, GOBIN) {
// Binary tool found, cross build it explicitly
args = append(args, "./"+filepath.Join("cmd", filepath.Base(res)))
@@ -1003,23 +1067,14 @@ func doPurge(cmdline []string) {
}
for i := 0; i < len(blobs); i++ {
for j := i + 1; j < len(blobs); j++ {
iTime, err := time.Parse(time.RFC1123, blobs[i].Properties.LastModified)
if err != nil {
log.Fatal(err)
}
jTime, err := time.Parse(time.RFC1123, blobs[j].Properties.LastModified)
if err != nil {
log.Fatal(err)
}
if iTime.After(jTime) {
if blobs[i].Properties.LastModified.After(blobs[j].Properties.LastModified) {
blobs[i], blobs[j] = blobs[j], blobs[i]
}
}
}
// Filter out all archives more recent that the given threshold
for i, blob := range blobs {
timestamp, _ := time.Parse(time.RFC1123, blob.Properties.LastModified)
if time.Since(timestamp) < time.Duration(*limit)*24*time.Hour {
if time.Since(blob.Properties.LastModified) < time.Duration(*limit)*24*time.Hour {
blobs = blobs[:i]
break
}

build/clean_go_build_cache.sh Executable file

@@ -0,0 +1,19 @@
#!/bin/sh
# Cleaning the Go cache only makes sense if we actually have Go installed... or
# if Go is actually callable. This does not hold true during deb packaging, so
# we need an explicit check to avoid build failures.
if ! command -v go > /dev/null; then
exit
fi
version_gt() {
test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"
}
golang_version=$(go version |cut -d' ' -f3 |sed 's/go//')
# Clean go build cache when go version is greater than or equal to 1.10
if !(version_gt 1.10 $golang_version); then
go clean -cache
fi


@@ -1,25 +0,0 @@
Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.8
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git
Vcs-Browser: https://github.com/ethereum/go-ethereum
Package: {{.Name}}
Architecture: any
Depends: ${misc:Depends}, {{.ExeList}}
Description: Meta-package to install geth and other tools
Meta-package to install geth and other tools
{{range .Executables}}
Package: {{$.ExeName .}}
Conflicts: {{$.ExeConflicts .}}
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Built-Using: ${misc:Built-Using}
Description: {{.Description}}
{{.Description}}
{{end}}


@@ -1,14 +0,0 @@
Copyright 2016 The go-ethereum Authors
go-ethereum is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
go-ethereum is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.


@@ -1 +0,0 @@
build/bin/{{.Name}} usr/bin


@@ -1,13 +0,0 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
override_dh_auto_build:
build/env.sh /usr/lib/go-1.8/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:
%:
dh $@


@@ -0,0 +1,19 @@
Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.10
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git
Vcs-Browser: https://github.com/ethereum/go-ethereum
{{range .Executables}}
Package: {{$.ExeName .}}
Conflicts: {{$.ExeConflicts .}}
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Built-Using: ${misc:Built-Using}
Description: {{.Description}}
{{.Description}}
{{end}}


@@ -0,0 +1,14 @@
Copyright 2018 The go-ethereum Authors
go-ethereum is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
go-ethereum is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.


@@ -0,0 +1 @@
build/bin/{{.BinaryName}} usr/bin


@@ -0,0 +1,13 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
override_dh_auto_build:
build/env.sh /usr/lib/go-1.10/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:
%:
dh $@


@@ -0,0 +1,5 @@
{{.Name}} ({{.VersionString}}) {{.Distro}}; urgency=low
* git build of {{.Env.Commit}}
-- {{.Author}} {{.Time}}


@@ -0,0 +1,25 @@
Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.10
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git
Vcs-Browser: https://github.com/ethereum/go-ethereum
Package: {{.Name}}
Architecture: any
Depends: ${misc:Depends}, {{.ExeList}}
Description: Meta-package to install geth, swarm, and other tools
Meta-package to install geth, swarm and other tools
{{range .Executables}}
Package: {{$.ExeName .}}
Conflicts: {{$.ExeConflicts .}}
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Built-Using: ${misc:Built-Using}
Description: {{.Description}}
{{.Description}}
{{end}}


@@ -0,0 +1,14 @@
Copyright 2018 The go-ethereum Authors
go-ethereum is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
go-ethereum is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.


@@ -0,0 +1 @@
AUTHORS


@@ -0,0 +1 @@
build/bin/{{.BinaryName}} usr/bin


@@ -0,0 +1,13 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
override_dh_auto_build:
build/env.sh /usr/lib/go-1.10/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:
%:
dh $@

build/goimports.sh Executable file

@@ -0,0 +1,18 @@
#!/bin/sh
find_files() {
find . ! \( \
\( \
-path '.github' \
-o -path './build/_workspace' \
-o -path './build/bin' \
-o -path './crypto/bn256' \
-o -path '*/vendor/*' \
\) -prune \
\) -name '*.go'
}
GOFMT="gofmt -s -w"
GOIMPORTS="goimports -w"
find_files | xargs $GOFMT
find_files | xargs $GOIMPORTS


@@ -45,7 +45,7 @@ var (
// paths with any of these prefixes will be skipped
skipPrefixes = []string{
// boring stuff
"vendor/", "tests/files/", "build/",
"vendor/", "tests/testdata/", "build/",
// don't relicense vendored sources
"cmd/internal/browser",
"consensus/ethash/xor.go",
@@ -55,10 +55,9 @@ var (
"crypto/sha3/",
"internal/jsre/deps",
"log/",
"common/bitutil/bitutil",
// don't license generated files
"contracts/chequebook/contract/",
"contracts/ens/contract/",
"contracts/release/contract.go",
"contracts/chequebook/contract/code.go",
}
// paths with this prefix are licensed as GPL. all other files are LGPL.


@@ -29,7 +29,7 @@ import (
)
var (
abiFlag = flag.String("abi", "", "Path to the Ethereum contract ABI json to bind")
abiFlag = flag.String("abi", "", "Path to the Ethereum contract ABI json to bind, - for STDIN")
binFlag = flag.String("bin", "", "Path to the Ethereum contract bytecode (generate deploy method)")
typFlag = flag.String("type", "", "Struct name for the binding (default = package name)")
@@ -75,16 +75,27 @@ func main() {
bins []string
types []string
)
if *solFlag != "" {
if *solFlag != "" || *abiFlag == "-" {
// Generate the list of types to exclude from binding
exclude := make(map[string]bool)
for _, kind := range strings.Split(*excFlag, ",") {
exclude[strings.ToLower(kind)] = true
}
contracts, err := compiler.CompileSolidity(*solcFlag, *solFlag)
if err != nil {
fmt.Printf("Failed to build Solidity contract: %v\n", err)
os.Exit(-1)
var contracts map[string]*compiler.Contract
var err error
if *solFlag != "" {
contracts, err = compiler.CompileSolidity(*solcFlag, *solFlag)
if err != nil {
fmt.Printf("Failed to build Solidity contract: %v\n", err)
os.Exit(-1)
}
} else {
contracts, err = contractsFromStdin()
if err != nil {
fmt.Printf("Failed to read input ABIs from STDIN: %v\n", err)
os.Exit(-1)
}
}
// Gather all non-excluded contract for binding
for name, contract := range contracts {
@@ -138,3 +149,12 @@ func main() {
os.Exit(-1)
}
}
func contractsFromStdin() (map[string]*compiler.Contract, error) {
bytes, err := ioutil.ReadAll(os.Stdin)
if err != nil {
return nil, err
}
return compiler.ParseCombinedJSON(bytes, "", "", "", "")
}


@@ -21,6 +21,7 @@ import (
"crypto/ecdsa"
"flag"
"fmt"
"net"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
@@ -96,12 +97,37 @@ func main() {
}
}
addr, err := net.ResolveUDPAddr("udp", *listenAddr)
if err != nil {
utils.Fatalf("-ResolveUDPAddr: %v", err)
}
conn, err := net.ListenUDP("udp", addr)
if err != nil {
utils.Fatalf("-ListenUDP: %v", err)
}
realaddr := conn.LocalAddr().(*net.UDPAddr)
if natm != nil {
if !realaddr.IP.IsLoopback() {
go nat.Map(natm, nil, "udp", realaddr.Port, realaddr.Port, "ethereum discovery")
}
// TODO: react to external IP changes over time.
if ext, err := natm.ExternalIP(); err == nil {
realaddr = &net.UDPAddr{IP: ext, Port: realaddr.Port}
}
}
if *runv5 {
if _, err := discv5.ListenUDP(nodeKey, *listenAddr, natm, "", restrictList); err != nil {
if _, err := discv5.ListenUDP(nodeKey, conn, realaddr, "", restrictList); err != nil {
utils.Fatalf("%v", err)
}
} else {
if _, err := discover.ListenUDP(nodeKey, *listenAddr, natm, "", restrictList); err != nil {
cfg := discover.Config{
PrivateKey: nodeKey,
AnnounceAddr: realaddr,
NetRestrict: restrictList,
}
if _, err := discover.ListenUDP(conn, cfg); err != nil {
utils.Fatalf("%v", err)
}
}

cmd/clef/4byte.json Normal file

File diff suppressed because one or more lines are too long

cmd/clef/README.md Normal file

@@ -0,0 +1,877 @@
Clef
----
Clef can be used to sign transactions and data and is meant as a replacement for geth's account management.
This allows DApps not to depend on geth's account management. When a DApp wants to sign data it can send the data to
the signer, which then provides the user with context and asks for permission to sign the data. If
the user grants the signing request, the signer sends the signature back to the DApp.
This setup allows a DApp to connect to a remote Ethereum node and send transactions that are locally signed. This can
help when a DApp is connected to a remote node because a local Ethereum node is not available or not
synchronised with the chain, or because the node in use offers no (or only limited) built-in account management.
Clef can run as a daemon on the same machine, off a usb-stick like [usb armory](https://inversepath.com/usbarmory),
or in a separate VM in a [QubesOS](https://www.qubes-os.org/)-type setup.
Check out
* the [tutorial](tutorial.md) for some concrete examples on how the signer works.
* the [setup docs](docs/setup.md) for some information on how to configure it to work on QubesOS or USBArmory.
## Command line flags
Clef accepts the following command line options:
```
COMMANDS:
init Initialize the signer, generate secret storage
attest Attest that a js-file is to be used
addpw Store a credential for a keystore file
help Shows a list of commands or help for one command
GLOBAL OPTIONS:
--loglevel value log level to emit to the screen (default: 4)
--keystore value Directory for the keystore (default: "$HOME/.ethereum/keystore")
--configdir value Directory for clef configuration (default: "$HOME/.clef")
--networkid value Network identifier (integer, 1=Frontier, 2=Morden (disused), 3=Ropsten, 4=Rinkeby) (default: 1)
--lightkdf Reduce key-derivation RAM & CPU usage at some expense of KDF strength
--nousb Disables monitoring for and managing USB hardware wallets
--rpcaddr value HTTP-RPC server listening interface (default: "localhost")
--rpcport value HTTP-RPC server listening port (default: 8550)
--signersecret value A file containing the password used to encrypt signer credentials, e.g. keystore credentials and ruleset hash
--4bytedb value File containing 4byte-identifiers (default: "./4byte.json")
--4bytedb-custom value File used for writing new 4byte-identifiers submitted via API (default: "./4byte-custom.json")
--auditlog value File used to emit audit logs. Set to "" to disable (default: "audit.log")
--rules value Enable rule-engine (default: "rules.json")
--stdio-ui Use STDIN/STDOUT as a channel for an external UI. This means that an STDIN/STDOUT is used for RPC-communication with a e.g. a graphical user interface, and can be used when the signer is started by an external process.
--stdio-ui-test Mechanism to test interface between signer and UI. Requires 'stdio-ui'.
--help, -h show help
--version, -v print the version
```
Example:
```
signer -keystore /my/keystore -chainid 4
```
## Security model
The security model of the signer is as follows:
* One critical component (the signer binary / daemon) is responsible for handling cryptographic operations: signing, private keys, encryption/decryption of keystore files.
* The signer binary has a well-defined 'external' API.
* The 'external' API is considered UNTRUSTED.
* The signer binary also communicates with whatever process invoked it, via stdin/stdout.
* This channel is considered 'trusted'. Over this channel, approvals and passwords are communicated.
The general flow for signing a transaction using e.g. geth is as follows:
![image](sign_flow.png)
In this case, `geth` would be started with `--externalsigner=http://localhost:8550` and would relay requests to `eth.sendTransaction`.
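For illustration, a DApp-side request of roughly the following shape (field values borrowed from the `account_signTransaction` examples further down; the exact relaying behaviour depends on how geth is configured) would trigger the flow pictured above:
```json
{
  "id": 1,
  "jsonrpc": "2.0",
  "method": "eth_sendTransaction",
  "params": [
    {
      "from": "0x694267f14675d7e1b9494fd8d72fefe1755710fa",
      "to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
      "gas": "0x333",
      "gasPrice": "0x1",
      "value": "0x0",
      "data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
    }
  ]
}
```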
## TODOs
Some snags and todos
* [ ] The signer should take a startup param "--no-change", for UIs that do not contain the capability
to perform changes to things, only approve/deny. Such a UI should be able to start the signer in
a more secure mode by telling it that it only wants approve/deny capabilities.
* [x] It would be nice if the signer could collect new 4byte-ids/method selectors, and have a
secondary database for those (`4byte_custom.json`). Users could then (optionally) submit their collections for
inclusion upstream.
* It should be possible to configure the signer to check if an account is indeed known to it, before
passing the request on to the UI. The reason it currently does not is that it would make it possible to enumerate
accounts if it immediately returned "unknown account".
* [x] It should be possible to configure the signer to auto-allow listing (certain) accounts, instead of asking every time.
* [x] Done. Upon startup, the signer should spit out some info to the caller (particularly important when executed in `stdio-ui`-mode),
invoking methods with the following info:
* [x] Version info about the signer
* [x] Address of API (http/ipc)
* [ ] List of known accounts
* [ ] Have a default timeout on signing operations, so that if the user has not answered within e.g. 60 seconds, the request is rejected.
* [ ] `account_signRawTransaction`
* [ ] `account_bulkSignTransactions([] transactions)` should
* only exist if enabled via config/flag
* only allow non-data-sending transactions
* all txs must use the same `from`-account
* let the user confirm, showing
* the total amount
* the number of unique recipients
* Geth todos
- The signer should pass the `Origin` header as call-info to the UI. As of right now, the way that info about the request is
put together is a bit of a hack into the http server. This could probably be greatly improved
- Relay: Geth should be started with `geth --external_signer localhost:8550`.
- Currently, the Geth APIs use `common.Address` in the arguments to transaction submission (e.g `to` field). This
type is 20 `bytes`, and is incapable of carrying checksum information. The signer uses `common.MixedcaseAddress`, which
retains the original input.
- The Geth api should switch to use the same type, and relay `to`-account verbatim to the external api.
* [x] Storage
* [x] An encrypted key-value storage should be implemented
* See [rules.md](rules.md) for more info about this.
* Another potential thing to introduce is pairing.
* To prevent spurious requests which users just accept, implement a way to "pair" the caller with the signer (external API).
* Thus geth/mist/cpp would cryptographically handshake and afterwards the caller would be allowed to make signing requests.
* This feature would make the addition of rules less dangerous.
* Wallets / accounts. Add API methods for wallets.
## Communication
### External API
The signer listens to HTTP requests on `rpcaddr`:`rpcport`, using the same JSON-RPC standard as Geth. The messages are
expected to follow the [JSON-RPC 2.0 specification](http://www.jsonrpc.org/specification).
Some of these calls can require user interaction. Clients must be aware that responses
may be delayed significantly, or may never be received, if a user decides to ignore the confirmation request.
The external API is **untrusted**: it does not accept credentials over this API, nor does it expect
that requests have any authority.
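If the user rejects a request, the caller simply receives a JSON-RPC error response of the same shape shown in the `account_signTransaction` example below, for instance:
```json
{
  "jsonrpc": "2.0",
  "id": 67,
  "error": {
    "code": -32000,
    "message": "Request denied"
  }
}
```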
### UI API
The signer has one native console-based UI, for operation without any standalone tools.
However, there is also an API to communicate with an external UI. To enable that UI,
the signer needs to be executed with the `--stdio-ui` option, which allocates
`stdin`/`stdout` for the UI API.
An example (insecure) proof-of-concept has been implemented in `pythonsigner.py`.
The model is as follows:
* The user starts the UI app (`pythonsigner.py`).
* The UI app starts the `signer` with `--stdio-ui`, and listens to the
process output for confirmation-requests.
* The `signer` opens the external http api.
* When the `signer` receives requests, it sends a `jsonrpc` request via `stdout`.
* The UI app prompts the user accordingly, and responds to the `signer`
* The `signer` signs (or not), and responds to the original request.
## External API
See the [external api changelog](extapi_changelog.md) for information about changes to this API.
### Encoding
- number: positive integers that are hex encoded
- data: hex encoded data
- string: ASCII string
All hex encoded values must be prefixed with `0x`.
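For illustration, the fragment below (taken from the `account_signTransaction` sample call later in this document) uses the number encoding for `gas` and `value` and the data encoding for `data`; the string encoding is used for method signatures such as `"safeSend(address)"` in the ABI example further down:
```json
{
  "gas": "0x55555",
  "value": "0x1234",
  "data": "0xabcd"
}
```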
## Methods
### account_new
#### Create new password protected account
The signer will generate a new private key, encrypt it according to the [web3 keystore spec](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) and store it in the keystore directory.
The client is responsible for creating a backup of the keystore. If the keystore is lost there is no method of retrieving lost accounts.
#### Arguments
None
#### Result
- address [string]: account address that is derived from the generated key
- url [string]: location of the keyfile
#### Sample call
```json
{
"id": 0,
"jsonrpc": "2.0",
"method": "account_new",
"params": []
}
{
"id": 0,
"jsonrpc": "2.0",
"result": {
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"url": "keystore:///my/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
}
```
### account_list
#### List available accounts
List all accounts that this signer currently manages
#### Arguments
None
#### Result
- array with account records:
- account.address [string]: account address that is derived from the generated key
- account.type [string]: type of the account
- account.url [string]: location of the account
#### Sample call
```json
{
"id": 1,
"jsonrpc": "2.0",
"method": "account_list"
}
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"address": "0xafb2f771f58513609765698f65d3f2f0224a956f",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T07-26-47.162109726Z--afb2f771f58513609765698f65d3f2f0224a956f"
},
{
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
]
}
```
### account_signTransaction
#### Sign transactions
Signs a transaction and responds with the signed transaction in RLP-encoded form.
#### Arguments
2. transaction object:
- `from` [address]: account to send the transaction from
- `to` [address]: receiver account. If omitted or `0x`, will cause contract creation.
- `gas` [number]: maximum amount of gas to burn
- `gasPrice` [number]: gas price
- `value` [number:optional]: amount of Wei to send with the transaction
- `data` [data:optional]: input data
- `nonce` [number]: account nonce
3. method signature [string:optional]
- The method signature, if present, is used to aid decoding of the calldata. It should consist of `methodname(paramtype,...)`, e.g. `transfer(uint256,address)`. The signer may use this data to parse the supplied calldata and present it to the user. The data, however, is considered totally untrusted, and no reliability is expected from it.
#### Result
- signed transaction in RLP encoded form [data]
#### Sample call
```json
{
"id": 2,
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db",
"gas": "0x55555",
"gasPrice": "0x1234",
"input": "0xabcd",
"nonce": "0x0",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x1234"
}
]
}
```
Response
```json
{
"jsonrpc": "2.0",
"id": 67,
"error": {
"code": -32000,
"message": "Request denied"
}
}
```
#### Sample call with ABI-data
```json
{
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"gas": "0x333",
"gasPrice": "0x1",
"nonce": "0x0",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
},
"safeSend(address)"
],
"id": 67
}
```
Response
```json
{
"jsonrpc": "2.0",
"id": 67,
"result": {
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1",
"gas": "0x333",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"input": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
}
```
Bash example:
```bash
#curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
{"jsonrpc":"2.0","id":67,"result":{"raw":"0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","tx":{"nonce":"0x0","gasPrice":"0x1","gas":"0x333","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0","value":"0x0","input":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012","v":"0x26","r":"0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e","s":"0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","hash":"0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"}}}
```
### account_sign
#### Sign data
Signs a chunk of data and returns the calculated signature.
#### Arguments
- account [address]: account to sign with
- data [data]: data to sign
#### Result
- calculated signature [data]
#### Sample call
```json
{
"id": 3,
"jsonrpc": "2.0",
"method": "account_sign",
"params": [
"0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db",
"0xaabbccdd"
]
}
```
Response
```json
{
"id": 3,
"jsonrpc": "2.0",
"result": "0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
}
```
### account_ecRecover
#### Recover address
Derive the address of the account that was used to sign the data, given the data and the signature.
#### Arguments
- data [data]: data that was signed
- signature [data]: the signature to verify
#### Result
- derived account [address]
#### Sample call
```json
{
"id": 4,
"jsonrpc": "2.0",
"method": "account_ecRecover",
"params": [
"0xaabbccdd",
"0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
]
}
```
Response
```json
{
"id": 4,
"jsonrpc": "2.0",
"result": "0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db"
}
```
### account_import
#### Import account
Import a private key into the keystore. The imported key is expected to be encrypted according to the web3 keystore
format.
#### Arguments
- account [object]: key in [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) (retrieved with account_export)
#### Result
- imported key [object]:
- key.address [address]: address of the imported key
- key.type [string]: type of the account
- key.url [string]: key URL
#### Sample call
```json
{
"id": 6,
"jsonrpc": "2.0",
"method": "account_import",
"params": [
{
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
},
]
}
```
Response
```json
{
"id": 6,
"jsonrpc": "2.0",
"result": {
"address": "0xc7412fc59930fd90099c917a50e5f11d0934b2f5",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T11-00-42.032024108Z--c7412fc59930fd90099c917a50e5f11d0934b2f5"
}
}
```
### account_export
#### Export account from keystore
Export a private key from the keystore. The exported private key is encrypted with the original passphrase. When the
key is imported later this passphrase is required.
#### Arguments
- account [address]: export private key that is associated with this account
#### Result
- exported key, see [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) for
more information
#### Sample call
```json
{
"id": 5,
"jsonrpc": "2.0",
"method": "account_export",
"params": [
"0xc7412fc59930fd90099c917a50e5f11d0934b2f5"
]
}
```
Response
```json
{
"id": 5,
"jsonrpc": "2.0",
"result": {
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
}
}
```
## UI API
These methods need to be implemented by a UI listener.
When the signer is started with the switch `--stdio-ui-test`, it will invoke all known methods and expect the UI to respond with
denials. This can be used during development to ensure that the API is (at least somewhat) correctly implemented.
See `pythonsigner`, which can be invoked via `python3 pythonsigner.py test` to perform the 'denial-handshake-test'.
All methods in this API use object-based parameters, so that there can be no mix-ups of parameters: each piece of data is accessed by key.
See the [ui api changelog](intapi_changelog.md) for information about changes to this API.
Note: a slight deviation from the `json` standard is in place: every request and response should be confined to a single line.
Whereas the `json` specification allows for linebreaks, linebreaks __should not__ be used in this communication channel, to make
things simpler for both parties.
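For example, the `ShowInfo` call shown later in this document would be transmitted as one single line:
```json
{"jsonrpc": "2.0", "id": 9, "method": "ShowInfo", "params": [{"text": "Tests completed"}]}
```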
### ApproveTx
Invoked when there's a transaction for approval.
#### Sample call
Here's a method invocation:
```bash
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
```
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ApproveTx",
"params": [
{
"transaction": {
"from": "0x0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"to": "0x0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"gas": "0x333",
"gasPrice": "0x1",
"value": "0x0",
"nonce": "0x0",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"input": null
},
"call_info": [
{
"type": "WARNING",
"message": "Invalid checksum on to-address"
},
{
"type": "Info",
"message": "safeSend(address: 0x0000000000000000000000000000000000000012)"
}
],
"meta": {
"remote": "127.0.0.1:48486",
"local": "localhost:8550",
"scheme": "HTTP/1.1"
}
}
]
}
```
The same method invocation, but with invalid data:
```bash
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000002000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
```
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ApproveTx",
"params": [
{
"transaction": {
"from": "0x0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"to": "0x0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"gas": "0x333",
"gasPrice": "0x1",
"value": "0x0",
"nonce": "0x0",
"data": "0x4401a6e40000000000000002000000000000000000000000000000000000000000000012",
"input": null
},
"call_info": [
{
"type": "WARNING",
"message": "Invalid checksum on to-address"
},
{
"type": "WARNING",
"message": "Transaction data did not match ABI-interface: WARNING: Supplied data is stuffed with extra data. \nWant 0000000000000002000000000000000000000000000000000000000000000012\nHave 0000000000000000000000000000000000000000000000000000000000000012\nfor method safeSend(address)"
}
],
"meta": {
"remote": "127.0.0.1:48492",
"local": "localhost:8550",
"scheme": "HTTP/1.1"
}
}
]
}
```
One with a missing `to` and no `data`:
```json
{
"jsonrpc": "2.0",
"id": 3,
"method": "ApproveTx",
"params": [
{
"transaction": {
"from": "",
"to": null,
"gas": "0x0",
"gasPrice": "0x0",
"value": "0x0",
"nonce": "0x0",
"data": null,
"input": null
},
"call_info": [
{
"type": "CRITICAL",
"message": "Tx will create contract with empty code!"
}
],
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveExport
Invoked when a request to export an account has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 7,
"method": "ApproveExport",
"params": [
{
"address": "0x0000000000000000000000000000000000000000",
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveListing
Invoked when a request for account listing has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 5,
"method": "ApproveListing",
"params": [
{
"accounts": [
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-20T14-44-54.089682944Z--123409812340981234098123409812deadbeef42",
"address": "0x123409812340981234098123409812deadbeef42"
},
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-23T21-59-03.199240693Z--cafebabedeadbeef34098123409812deadbeef42",
"address": "0xcafebabedeadbeef34098123409812deadbeef42"
}
],
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveSignData
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 4,
"method": "ApproveSignData",
"params": [
{
"address": "0x123409812340981234098123409812deadbeef42",
"raw_data": "0x01020304",
"message": "\u0019Ethereum Signed Message:\n4\u0001\u0002\u0003\u0004",
"hash": "0x7e3a4e7a9d1744bc5c675c25e1234ca8ed9162bd17f78b9085e48047c15ac310",
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ShowInfo
The UI should show the info to the user. Does not expect a response.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 9,
"method": "ShowInfo",
"params": [
{
"text": "Tests completed"
}
]
}
```
### ShowError
The UI should show the error to the user. Does not expect a response.
```json
{
"jsonrpc": "2.0",
"id": 2,
"method": "ShowError",
"params": [
{
"text": "Testing 'ShowError'"
}
]
}
```
### OnApproved
`OnApprovedTx` is called when a transaction has been approved and signed. The call contains the return value that will be sent to the external caller. The return value from this method is ignored - the reason for having this callback is to allow the ruleset to keep track of approved transactions.
When implementing rate-limited rules, this callback should be used.
TL;DR: Use this method to keep track of signed transactions, instead of using the data in `ApproveTx`.
### OnSignerStartup
This method provides the UI with information about which API versions the signer uses (both internal and external), as well as build info and the external API address,
in key/value form.
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "OnSignerStartup",
"params": [
{
"info": {
"extapi_http": "http://localhost:8550",
"extapi_ipc": null,
"extapi_version": "2.0.0",
"intapi_version": "1.2.0"
}
}
]
}
```
### Rules for UI APIs
A UI should conform to the following rules.
* A UI MUST NOT load any external resources that were not embedded/part of the UI package.
* For example, not load icons, stylesheets from the internet
* Not load files from the filesystem, unless they reside in the same local directory (e.g. config files)
* A Graphical UI MUST show the blocky-identicon for ethereum addresses.
* A UI MUST display an appropriate warning if the destination account is formatted with an invalid checksum.
* A UI MUST NOT open any ports or services
* The signer opens the public port
* A UI SHOULD verify the permissions on the signer binary, and refuse to execute or warn if permissions allow non-user write.
* A UI SHOULD inform the user about the `SHA256` or `MD5` hash of the binary being executed
* A UI SHOULD NOT maintain a secondary storage of data, e.g. list of accounts
* The signer provides accounts
* A UI SHOULD, to the best extent possible, use static linking / bundling, so that required libraries are bundled
along with the UI.
### UI Implementations
There are a couple of UI implementations. We'll try to keep this list up to date.
| Name | Repo | UI type| No external resources| Blocky support| Verifies permissions | Hash information | No secondary storage | Statically linked| Can modify parameters|
| ---- | ---- | -------| ---- | ---- | ---- |---- | ---- | ---- | ---- |
| QtSigner| https://github.com/holiman/qtsigner/| Python3/QT-based| :+1:| :+1:| :+1:| :+1:| :+1:| :x: | :+1: (partially)|
| GtkSigner| https://github.com/holiman/gtksigner| Python3/GTK-based| :+1:| :x:| :x:| :+1:| :+1:| :x: | :x: |
| Frame | https://github.com/floating/frame/commits/go-signer| Electron-based| :x:| :x:| :x:| :x:| ?| :x: | :x: |

Binary file not shown (image, 14 KiB).