Compare commits

...

1113 Commits

Author SHA1 Message Date
4bcc0a37ab Merge pull request #19473 from karalabe/geth-1.8.27
[1.8.27 backport] eth, les, light: enforce CHT checkpoints on fast-sync too
2019-04-17 15:47:54 +03:00
b5f92e66c6 params, swarm: release Geth v1.8.27 (noop Swarm v0.3.15) 2019-04-17 14:57:38 +03:00
d8787230fa eth, les, light: enforce CHT checkpoints on fast-sync too 2019-04-17 14:56:58 +03:00
cdae1c59ab Merge pull request #19437 from zsfelfoldi/fix-sendtx
les: fix SendTx cost calculation and verify cost table
2019-04-10 15:45:13 +03:00
0b00e19ed9 params, swarm: release Geth v1.8.26 (+noop Swarm v0.3.14) 2019-04-10 15:38:01 +03:00
c8d8126bd0 les: check required message types in cost table 2019-04-10 13:12:46 +02:00
0de9f32ae8 les: backported new SendTx cost calculation 2019-04-10 13:12:42 +02:00
14ae1246b7 Merge pull request #19416 from jmcnevin/cli-fix
Revert flag removal
2019-04-09 11:47:09 +03:00
dc59af8622 params, swarm: hotfix Geth v1.8.25 release to restore rpc flags 2019-04-09 10:58:00 +03:00
45730cfab3 cmd/geth: fix accidental --rpccorsdomain and --rpcvhosts removal 2019-04-09 10:56:50 +03:00
4e13a09c50 Merge pull request #19370 from karalabe/geth-1.8.24
Backport PR for the v1.8.24 maintenance release
2019-04-08 16:16:05 +03:00
009d2fe2d6 params, swarm: release Geth v1.8.24 (noop Swarm 0.3.12) 2019-04-08 16:06:59 +03:00
e872ba7a9e eth, les, geth: implement cli-configurable global gas cap for RPC calls (#19401)
* eth, les, geth: implement cli-configurable global gas cap for RPC calls

* graphql, ethapi: place gas cap in DoCall

* ethapi: reformat log message
2019-04-08 15:15:13 +03:00
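
The capping rule described above is simple to sketch: before an eth_call/estimateGas-style execution, the requested gas is clamped to a node-wide ceiling. A minimal, hypothetical Go sketch — the function name and log text are ours, not geth's exact code, which lives in the DoCall path mentioned in the commit:

```go
package main

import "fmt"

// capGas clamps caller-supplied gas to a node-wide ceiling. A zero cap
// means "unlimited". Illustrative only.
func capGas(requested, globalCap uint64) uint64 {
	if globalCap != 0 && requested > globalCap {
		fmt.Printf("caller gas above allowance, capping: requested=%d cap=%d\n", requested, globalCap)
		return globalCap
	}
	return requested
}

func main() {
	fmt.Println(capGas(50000000, 25000000)) // capped to 25000000
	fmt.Println(capGas(21000, 25000000))    // unchanged
}
```
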
9d9c6b5847 p2p/discover: bump failure counter only if no nodes were provided (#19362)
This resolves a minor issue where neighbors responses containing fewer
than 16 nodes would bump the failure counter, removing the node. One
situation where this can happen is a private deployment where the total
number of extant nodes is fewer than 16.

Issue found by @jsying.
2019-04-08 14:35:50 +03:00
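
A hedged sketch of the fixed rule: only an empty findnode reply bumps the failure counter, while a short-but-nonempty reply counts as a success. All names below are illustrative stand-ins for p2p/discover internals:

```go
package main

import "fmt"

// peerStats sketches the fix: a findnode reply with at least one node is
// a success even if it carries fewer than the full 16 entries.
type peerStats struct{ failures int }

func (p *peerStats) handleNeighborsReply(nodes []string, maxFailures int) (drop bool) {
	if len(nodes) == 0 {
		p.failures++ // no nodes at all: count as a failure
	} else {
		p.failures = 0 // partial replies are fine in small networks
	}
	return p.failures >= maxFailures
}

func main() {
	p := &peerStats{}
	fmt.Println(p.handleNeighborsReply([]string{"node-a", "node-b"}, 5)) // false
	fmt.Println(p.handleNeighborsReply(nil, 5))                          // false, first failure
}
```
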
8ca6454807 params: set Rinkeby Petersburg fork block (4th May, 2019) 2019-04-08 12:14:05 +03:00
0e63a70505 core: minor code polishes + rebase fixes 2019-04-08 12:04:31 +03:00
f1b00cffc8 core: re-omit new log event when logs rebirth 2019-04-08 12:02:15 +03:00
442320a8ae travis: update builders to xenial to shadow Go releases 2019-04-08 12:00:42 +03:00
af401d03a3 all: simplify timestamps to uint64 (#19372)
* all: simplify timestamps to uint64

* tests: update definitions

* clef, faucet, mobile: leftover uint64 fixups

* ethash: fix tests

* graphql: update schema for timestamp

* ethash: remove unused variable
2019-04-08 12:00:42 +03:00
80a2a35bc3 trie: there's no point in retrieving the metaroot 2019-04-08 12:00:42 +03:00
fca5f9fd6f common/fdlimit: fix macos file descriptors for Go 1.12 2019-04-02 13:14:21 +03:00
38c30f8dd8 light, params: update CHTs, integrate CHT for Goerli too 2019-04-02 12:10:06 +03:00
c942700427 Merge pull request #19029 from holiman/update1.8
Update1.8
2019-02-20 10:48:12 +02:00
cde35439e0 params, swarm: release Geth v1.8.23, Swarm v0.3.11 2019-02-20 10:42:02 +02:00
4f908db69e cmd/utils: allow for multiple influxdb tags (#18520)
This PR replaces the metrics.influxdb.host.tag cmd-line flag with metrics.influxdb.tags - a comma-separated list of key/value tags that are passed to the InfluxDB reporter, so that we can index measurements with multiple tags, and not just one host tag.

This will be useful for Swarm, where we want to index measurements not just with the host tag, but also with bzzkey and git commit version (for long-running deployments).

(cherry picked from commit 21acf0bc8d)
2019-02-19 17:34:48 +01:00
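
The new flag takes comma-separated key=value pairs. A small sketch of how such a value could be parsed into a tag map for the reporter — the parsing details are our assumption, not the exact geth code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags turns a flag value such as "host=node-1,bzzkey=ab12" into a
// tag map for the InfluxDB reporter. Malformed pairs are simply skipped.
func parseTags(raw string) map[string]string {
	tags := make(map[string]string)
	for _, pair := range strings.Split(raw, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) == 2 && kv[0] != "" {
			tags[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
		}
	}
	return tags
}

func main() {
	fmt.Println(parseTags("host=node-1,bzzkey=ab12,version=v1.8.23"))
}
```
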
320d132925 swarm/metrics: Send the accounting registry to InfluxDB (#18470)
(cherry picked from commit f28da4f602)
2019-02-19 17:34:42 +01:00
7ae2a7bd84 swarm: Reinstate Pss Protocol add call through swarm service (#19117)
* swarm: Reinstate Pss Protocol add call through swarm service

* swarm: Even less self

(cherry picked from commit d88c6ce6b0)
2019-02-19 13:18:10 +01:00
fd34bf594c contracts/*: golint updates for this or self warning
(cherry picked from commit 53b823afc8)
2019-02-19 13:18:02 +01:00
996230174c cmd/swarm/swarm-smoke: Trigger chunk debug on timeout (#19101)
* cmd/swarm/swarm-smoke: first version trigger has-chunks on timeout

* cmd/swarm/swarm-smoke: finalize trigger to chunk debug

* cmd/swarm/swarm-smoke: fixed httpEndpoint for trigger

* cmd/swarm/swarm-smoke: port

* cmd/swarm/swarm-smoke: ws not rpc

* cmd/swarm/swarm-smoke: added debug output

* cmd/swarm/swarm-smoke: addressed PR comments

* cmd/swarm/swarm-smoke: renamed track-timeout and track-chunks

(cherry picked from commit 62d7688d0a)
2019-02-19 13:11:53 +01:00
8857707606 p2p, swarm: fix node up races by granular locking (#18976)
* swarm/network: DRY out repeated giga comment

I do not necessarily agree with the way we wait for event propagation.
But I truly disagree with having duplicated giga comments.

* p2p/simulations: encapsulate Node.Up field so we avoid data races

The Node.Up field was accessed concurrently without "proper" locking.
There was a lock on Network and that was used sometimes to access
the field. Other times the locking was missed and we had
a data race.

For example: https://github.com/ethereum/go-ethereum/pull/18464
The case above was solved, but there were still intermittent/hard to
reproduce races. So let's solve the issue permanently.

resolves: ethersphere/go-ethereum#1146

* p2p/simulations: fix unmarshal of simulations.Node

Making Node.Up field private in 13292ee897
broke TestHTTPNetwork and TestHTTPSnapshot. Because the default
UnmarshalJSON does not handle unexported fields.

Important: The fix is partial and not proper to my taste. But I cut
scope as I think the fix may require a change to the current
serialization format. New ticket:
https://github.com/ethersphere/go-ethereum/issues/1177

* p2p/simulations: Add a sanity test case for Node.Config UnmarshalJSON

* p2p/simulations: revert back to defer Unlock() pattern for Network

It's a good pattern to call `defer Unlock()` right after `Lock()` so
(new) error cases won't forget to unlock. Let's get back to that pattern.

The pattern was abandoned in 85a79b3ad3,
while fixing a data race. That data race does not exist anymore,
since the Node.Up field got hidden behind its own lock.

* p2p/simulations: consistent naming for test providers Node.UnmarshalJSON

* p2p/simulations: remove JSON annotation from private fields of Node

As unexported fields are not serialized.

* p2p/simulations: fix deadlock in Network.GetRandomDownNode()

Problem: GetRandomDownNode() locks -> getDownNodeIDs() ->
GetNodes() tries to lock -> deadlock

On Network type, unexported functions must assume that `net.lock`
is already acquired and should not call exported functions which
might try to lock again.

* p2p/simulations: ensure method conformity for Network

Connect* methods were moved to p2p/simulations.Network from
swarm/network/simulation. However these new methods did not follow
the pattern of Network methods, i.e., all exported method locks
the whole Network either for read or write.

* p2p/simulations: fix deadlock during network shutdown

`TestDiscoveryPersistenceSimulationSimAdapter` often got into deadlock.
The execution was stuck on two locks, i.e., `Kademlia.lock` and
`p2p/simulations.Network.lock`. Usually the test got stuck about once in
every 20 executions, with high confidence.

`Kademlia` was stuck in `Kademlia.EachAddr()` and `Network` in
`Network.Stop()`.

Solution: in `Network.Stop()` `net.lock` must be released before
calling `node.Stop()` as stopping a node (somehow - I did not find
the exact code path) causes `Network.InitConn()` to be called from
`Kademlia.SuggestPeer()` and that blocks on `net.lock`.

Related ticket: https://github.com/ethersphere/go-ethereum/issues/1223

* swarm/state: simplify if statement in DBStore.Put()

* p2p/simulations: remove faulty godoc from private function

The comment started with the wrong method name.

The method is simple and self explanatory. Also, it's private.
=> Let's just remove the comment.

(cherry picked from commit 50b872bf05)
2019-02-19 13:11:52 +01:00
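
The core of the Node.Up race fix is worth a sketch: the flag becomes unexported and guarded by its own lock, so every access goes through locking accessors. Types are simplified stand-ins for the p2p/simulations ones:

```go
package main

import (
	"fmt"
	"sync"
)

// Node sketches the fix: the up flag is unexported and guarded by its own
// RWMutex, so no caller can touch it without going through the accessors.
type Node struct {
	upMu sync.RWMutex
	up   bool
}

func (n *Node) Up() bool {
	n.upMu.RLock()
	defer n.upMu.RUnlock()
	return n.up
}

func (n *Node) SetUp(up bool) {
	n.upMu.Lock()
	defer n.upMu.Unlock() // defer keeps future error paths from skipping the unlock
	n.up = up
}

func main() {
	n := new(Node)
	n.SetUp(true)
	fmt.Println(n.Up())
}
```
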
d6c1fcbe04 swarm/pss: refactoring (#19110)
* swarm/pss: split pss and keystore

* swarm/pss: moved whisper to keystore

* swarm/pss: goimports fixed

(cherry picked from commit 12ca3b172a)
2019-02-19 13:11:52 +01:00
79cac793c0 swarm/storage/netstore: add fetcher cancellation on shutdown (#19049)
swarm/network/stream: remove netstore internal wg
swarm/network/stream: run individual tests with t.Run

(cherry picked from commit 3ee09ba035)
2019-02-19 13:11:52 +01:00
5de6b6b529 swarm/network: Saturation check for healthy networks (#19071)
* swarm/network: new saturation for implementation

* swarm/network: re-added saturation func in Kademlia as it is used elsewhere

* swarm/network: saturation with higher MinBinSize

* swarm/network: PeersPerBin with depth check

* swarm/network: edited tests to pass new saturated check

* swarm/network: minor fix saturated check

* swarm/network/simulations/discovery: fixed renamed RPC call

* swarm/network: renamed to isSaturated and returns bool

* swarm/network: early depth check

(cherry picked from commit 2af24724dd)
2019-02-19 13:11:52 +01:00
3d2bedf8d0 swarm/storage: fix influxdb gc metrics report (#19102)
(cherry picked from commit 5b8ae7885e)
2019-02-19 13:11:52 +01:00
8ea3d8ad90 swarm: fix network/stream data races (#19051)
* swarm/network/stream: newStreamerTester cleanup only if err is nil

* swarm/network/stream: raise newStreamerTester waitForPeers timeout

* swarm/network/stream: fix data races in GetPeerSubscriptions

* swarm/storage: prevent data race on LDBStore.batchesC

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461775049

* swarm/network/stream: fix TestGetSubscriptionsRPC data race

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461768477

* swarm/network/stream: correctly use Simulation.Run callback

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461783804

* swarm/network: protect addrCountC in Kademlia.AddrCountC function

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-462273444

* p2p/simulations: fix a deadlock calling getRandomNode with lock

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-462317407

* swarm/network/stream: terminate disconnect goroutines in tests

* swarm/network/stream: reduce memory consumption when testing data races

* swarm/network/stream: add watchDisconnections helper function

* swarm/network/stream: add concurrent counter for tests

* swarm/network/stream: rename race/norace test files and use const

* swarm/network/stream: remove watchSim and its panic

* swarm/network/stream: pass context in watchDisconnections

* swarm/network/stream: add concurrent safe bool for watchDisconnections

* swarm/storage: fix LDBStore.batchesC data race by not closing it

(cherry picked from commit 3fd6db2bf6)
2019-02-19 13:11:52 +01:00
a0127019c3 swarm: fix uptime gauge update goroutine leak by introducing cleanup functions (#19040)
(cherry picked from commit d596bea2d5)
2019-02-19 13:11:51 +01:00
7a333e4104 swarm/storage: fix HashExplore concurrency bug ethersphere#1211 (#19028)
* swarm/storage: fix HashExplore concurrency bug ethersphere#1211

* swarm/storage: lock as value not pointer

* swarm/storage: wait for  to complete

* swarm/storage: fix linter problems

* swarm/storage: append to nil slice

(cherry picked from commit 3d22a46c94)
2019-02-19 13:11:51 +01:00
799fe99537 swarm/pss: mutex lifecycle fixed (#19045)
(cherry picked from commit b30109df3c)
2019-02-19 13:11:51 +01:00
3b02b0ba4b swarm/docker: add global-store and split docker images (#19038)
(cherry picked from commit 6cb7d52a29)
2019-02-19 13:11:51 +01:00
85217b08bd cmd/swarm/global-store: global store cmd (#19014)
(cherry picked from commit 33d0a0efa6)
2019-02-19 13:11:51 +01:00
dcff622d43 swarm: CI race detector test adjustments (#19017)
(cherry picked from commit 27e3f96819)
2019-02-19 13:11:51 +01:00
a3db00f270 swarm/network: refactor simulation tests bootstrap (#18975)
(cherry picked from commit 597597e8b2)
2019-02-19 13:11:50 +01:00
769e43e334 swarm: GetPeerSubscriptions RPC (#18972)
(cherry picked from commit 43e1b7b124)
2019-02-19 13:11:50 +01:00
8d8ddea1a3 swarm/pss: transition to whisper v6 (#19023)
(cherry picked from commit cde02e017e)
2019-02-19 13:09:10 +01:00
068725c5b0 swarm/network, swarm/storage: Preserve opentracing contexts (#19022)
(cherry picked from commit 0c10d37606)
2019-02-19 13:09:09 +01:00
710775f435 swarm/network: fix data race in fetcher_test.go (#18469)
(cherry picked from commit 19bfcbf911)
2019-02-19 13:09:09 +01:00
0fd0108507 swarm/pss: Remove pss service leak in test (#18992)
(cherry picked from commit 7c60d0a6a2)
2019-02-19 13:06:14 +01:00
3c62cc6bba swarm/storage: fix test timeout with -race by increasing mget timeout
(cherry picked from commit 1c3aa8d9b1)
2019-02-19 13:06:14 +01:00
333b1bfb6c swarm/storage/localstore: new localstore package (#19015)
(cherry picked from commit 4f3d22f06c)
2019-02-19 13:06:14 +01:00
d1ace4f344 swarm: Debug API and HasChunks() API endpoint (#18980)
(cherry picked from commit 41597c2856)
2019-02-19 13:06:13 +01:00
637a75d61a cmd/swarm/swarm-smoke: refactor generateEndpoints (#19006)
(cherry picked from commit d212535ddd)
2019-02-19 13:05:55 +01:00
355d55bd34 cmd/swarm/swarm-smoke: remove wrong metrics (#18970)
(cherry picked from commit c5c9cef5c0)
2019-02-19 13:05:37 +01:00
7038b5734c cmd/swarm/swarm-smoke: sliding window test (#18967)
(cherry picked from commit b91bf08876)
2019-02-19 13:05:26 +01:00
1ecf2860cf cmd/swarm: hashes command (#19008)
(cherry picked from commit 7f55b0cbd8)
2019-02-19 12:57:53 +01:00
034f65e9e8 swarm/storage: Get all chunk references for a given file (#19002)
(cherry picked from commit 3eff652a7b)
2019-02-19 12:57:53 +01:00
607a1968e6 swarm/network: Remove extra random peer, connect test sanity, comments (#18964)
(cherry picked from commit f9401ae011)
2019-02-19 12:57:53 +01:00
3f54994db0 swarm: fix flaky delivery tests (#18971)
(cherry picked from commit 592bf6a59c)
2019-02-19 12:57:53 +01:00
2695aa9e0d p2p/testing, swarm: remove unused testing.T in protocol tester (#18500)
(cherry picked from commit 2abeb35d54)
2019-02-19 12:56:31 +01:00
e247dcc141 swarm/version: commit version added (#18510)
(cherry picked from commit ad13d2d407)
2019-02-19 12:56:31 +01:00
b774d0a507 swarm: fix a data race on startTime (#18511)
(cherry picked from commit fa34429a26)
2019-02-19 12:56:30 +01:00
4976fcc91a swarm: bootnode-mode, new bootnodes and no p2p package discovery (#18498)
(cherry picked from commit bbd120354a)
2019-02-19 12:56:30 +01:00
878aa58ec6 cmd/swarm: use resetting timer to measure fetch time (#18474)
(cherry picked from commit a0b0db6305)
2019-02-19 12:56:30 +01:00
475a0664c5 p2p/simulations: fix data race on swarm/network/simulations (#18464)
(cherry picked from commit 85a79b3ad3)
2019-02-19 12:56:30 +01:00
4625b1257f cmd/swarm/swarm-smoke: use ResettingTimer instead of Counters for times (#18479)
(cherry picked from commit 560957799a)
2019-02-19 12:56:30 +01:00
21d54bcaac cmd/swarm/swarm-snapshot: disable tests on windows (#18478)
(cherry picked from commit 632135ce4c)
2019-02-19 12:56:30 +01:00
7383db4dac Upload speed (#18442)
(cherry picked from commit 257bfff316)
2019-02-19 12:56:30 +01:00
afb65f6ace swarm/network: fix data race warning on TestBzzHandshakeLightNode (#18459)
(cherry picked from commit 81e26d5a48)
2019-02-19 12:55:18 +01:00
1f1c751b6e swarm/network: rewrite of peer suggestion engine, fix skipped tests (#18404)
* swarm/network: fix skipped tests related to suggestPeer

* swarm/network: rename depth to radius

* swarm/network: uncomment assertHealth and improve comments

* swarm/network: remove commented code

* swarm/network: kademlia suggestPeer algo correction

* swarm/network: kademlia suggest peer

 * simplify suggest Peer code
 * improve peer suggestion algo
 * add comments
 * kademlia testing improvements
   * assertHealth -> checkHealth (test helper)
   * testSuggestPeer -> checkSuggestPeer (test helper)
   * remove testSuggestPeerBug and TestKademliaCase

* swarm/network: kademlia suggestPeer cleanup, improved comments

* swarm/network: minor comment, discovery test default arg

(cherry picked from commit bcb2594151)
2019-02-19 12:55:07 +01:00
a3f31f51f3 cmd/swarm/swarm-snapshot: swarm snapshot generator (#18453)
* cmd/swarm/swarm-snapshot: add binary to create network snapshots

* cmd/swarm/swarm-snapshot: refactor and extend tests

* p2p/simulations: remove unused triggerChecks func and fix linter

* internal/cmdtest: raise the timeout for killing TestCmd

* cmd/swarm/swarm-snapshot: add more comments and other minor adjustments

* cmd/swarm/swarm-snapshot: remove redundant check in createSnapshot

* cmd/swarm/swarm-snapshot: change comment wording

* p2p/simulations: revert Simulation.Run from master

https://github.com/ethersphere/go-ethereum/pull/1077/files#r247078904

* cmd/swarm/swarm-snapshot: address pr comments

* swarm/network/simulations/discovery: removed snapshot write to file

* cmd/swarm/swarm-snapshot, swarm/network/simulations: removed redundant connection event check, fixed lint error

(cherry picked from commit 34f11e752f)
2019-02-19 12:54:56 +01:00
e63995b3f3 swarm/network: fix data race in TestNetworkID test (#18460)
(cherry picked from commit 96c7c18b18)
2019-02-19 12:54:10 +01:00
dd3e894747 swarm/storage: fix mockNetFetcher data races (#18462)
fixes: ethersphere/go-ethereum#1117
(cherry picked from commit f728837ee6)
2019-02-19 12:54:10 +01:00
df355eceb4 build: explicitly force .xz compression (old debuild picks gzip) (#19118)
(cherry picked from commit c0b9c763bb)
2019-02-19 11:00:46 +02:00
84cb00a94d travis.yml: add launchpad SSH public key (#19115)
(cherry picked from commit 75a931470e)
2019-02-19 11:00:38 +02:00
992a7bbad5 vendor: update bigcache
(cherry picked from commit 37e5a908e7)
2019-02-19 11:00:31 +02:00
a458153098 trie: fix error in node decoding (#19111) 2019-02-19 10:59:57 +02:00
fe5258b41e vendor: pull in upstream syscall fixes for non-linux/arm64
(cherry picked from commit 9d3ea8df1c)
2019-02-19 10:59:40 +02:00
d9be337669 vendor: update syscalls dependency
(cherry picked from commit dcc045f03c)
2019-02-19 10:59:24 +02:00
7bd6f39dc3 common/fdlimit: fix windows build (#19068)
(cherry picked from commit ba90a4aaa4)
2019-02-19 10:58:54 +02:00
b247052a64 build: avoid dput and upload with sftp directly (#19067)
(cherry picked from commit a8ddf7ad83)
2019-02-19 10:58:45 +02:00
276f824707 .travis.yml: fix upload destination (#19043)
(cherry picked from commit edf976ee8e)
2019-02-19 10:58:13 +02:00
048b463b30 common/fdlimit: cap on MacOS file limits, fixes #18994 (#19035)
* common/fdlimit: cap on MacOS file limits, fixes #18994

* common/fdlimit: fix Maximum-check to respect OPEN_MAX

* common/fdlimit: return error if OPEN_MAX is exceeded in Raise()

* common/fdlimit: goimports

* common/fdlimit: check value after setting fdlimit

* common/fdlimit: make comment a bit more descriptive

* cmd/utils: make fdlimit happy path a bit cleaner

(cherry picked from commit f48da43bae)
2019-02-19 10:57:49 +02:00
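
A minimal sketch of the macOS capping rule described above, assuming OPEN_MAX is 10240; the real fdlimit code goes through syscall.Getrlimit/Setrlimit and re-checks the value after setting it:

```go
package main

import "fmt"

// openMax stands in for Darwin's OPEN_MAX (10240). This shows only the
// capping rule from the commit, not the actual syscalls.
const openMax = 10240

func raise(requested, hardLimit uint64) uint64 {
	limit := requested
	if limit > hardLimit {
		limit = hardLimit // never exceed the kernel hard limit
	}
	if limit > openMax {
		limit = openMax // macOS refuses rlimits above OPEN_MAX
	}
	return limit
}

func main() {
	fmt.Println(raise(16384, 1<<63)) // capped to 10240 on macOS
}
```
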
9f5fb15097 build: use SFTP for launchpad uploads (#19037)
* build: use sftp for launchpad uploads

* .travis.yml: configure sftp export

* build: update CI docs

(cherry picked from commit 3de19c8b31)
2019-02-19 10:56:14 +02:00
2072c26a96 cmd, core, params: add support for Goerli
(cherry picked from commit b0ed083ead)
2019-02-19 10:53:47 +02:00
4da2092908 core: fix pruner panic when importing low-diff-large-sidechain 2019-02-09 17:45:23 +01:00
3ab9dcc3bd core: repro #18977 2019-02-09 17:44:15 +01:00
18f702faf7 cmd/puppeth: handle pre-set Petersburg number, save changed fork rules 2019-02-09 17:38:00 +01:00
3a95128b22 core: fix error in block iterator (#18986) 2019-02-09 17:36:20 +01:00
631e2f07f6 eth: make tracers respect pre- EIP 158/161 rule 2019-02-09 17:35:54 +01:00
7fa3509e2e params, swarm/version: Geth 1.8.22-stable, Swarm 0.3.10-stable 2019-01-31 11:52:18 +01:00
86ec742f97 p2p/discover: improve table addition code (#18974)
This change clears up confusion around the two ways in which nodes
can be added to the table.

When a neighbors packet is received as a reply to findnode, the nodes
contained in the reply are added as 'seen' entries if sufficient space
is available.

When a ping is received and the endpoint verification has taken place,
the remote node is added as a 'verified' entry or moved to the front of
the bucket if present. This also updates the node's IP address and port
if they have changed.
2019-01-31 11:51:13 +01:00
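
A rough illustration of the two addition paths: nodes from neighbors replies are appended as 'seen' only if space allows, while ping-verified nodes are inserted or moved to the front. Bucket logic is simplified and the names are not the real p2p/discover identifiers:

```go
package main

import "fmt"

type bucket struct{ entries []string }

// addSeen appends a node learned from a neighbors reply if there is room.
func (b *bucket) addSeen(id string, maxSize int) {
	for _, e := range b.entries {
		if e == id {
			return // already known
		}
	}
	if len(b.entries) < maxSize {
		b.entries = append(b.entries, id)
	}
}

// addVerified moves a ping-verified node to the front, inserting it if absent.
func (b *bucket) addVerified(id string, maxSize int) {
	for i, e := range b.entries {
		if e == id {
			copy(b.entries[1:i+1], b.entries[:i]) // shift others down...
			b.entries[0] = id                     // ...and move to front
			return
		}
	}
	if len(b.entries) < maxSize {
		b.entries = append([]string{id}, b.entries...)
	}
}

func main() {
	b := &bucket{}
	b.addSeen("a", 16)
	b.addSeen("b", 16)
	b.addVerified("b", 16)
	fmt.Println(b.entries) // [b a]
}
```
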
d9a07fba67 params: new CHTs (#18577) 2019-01-29 17:50:20 +01:00
4cd90e02e2 p2p/discover, p2p/enode: rework endpoint proof handling, packet logging (#18963)
This change resolves multiple issues around handling of endpoint proofs.
The proof is now done separately for each IP and completing the proof
requires a matching ping hash.

Also remove waitping because it's equivalent to sleep. waitping was
slightly more efficient, but that may cause issues with findnode if
packets are reordered and the remote end sees findnode before pong.

Logging of received packets was hitherto done after handling the packet,
which meant that sent replies were logged before the packet that
generated them. This change splits up packet handling into 'preverify'
and 'handle'. The error from 'preverify' is logged, but 'handle' happens
after the message is logged. This fixes the order. Packet logs now
contain the node ID.
2019-01-29 17:50:15 +01:00
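
The 'preverify'/'handle' split fixes log ordering because the inbound packet is logged between the two phases, before any reply is sent. A schematic sketch under that assumption:

```go
package main

import (
	"errors"
	"fmt"
)

type packet struct{ name, fromID string }

// preverify validates the packet without side effects.
func preverify(p packet) error {
	if p.fromID == "" {
		return errors.New("unknown sender")
	}
	return nil
}

// handle may send replies; its logs now always follow the packet's own log line.
func handle(p packet) {
	fmt.Printf(">> reply to %s\n", p.name)
}

func process(p packet) {
	err := preverify(p)
	fmt.Printf("<< %s id=%s err=%v\n", p.name, p.fromID, err) // log before replying
	if err == nil {
		handle(p)
	}
}

func main() {
	process(packet{name: "findnode", fromID: "a1b2"})
}
```
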
1f3dfed19e build: tweak debian source package build/upload options (#18962)
dput --passive should make repo pushes from Travis work again.
dput --no-upload-log works around an issue I had while uploading locally.

debuild -d says that debuild shouldn't check for build dependencies when
creating the source package. This option is needed to make builds work
in environments where the installed Go version doesn't match the
declared dependency in the source package.
2019-01-29 17:50:09 +01:00
2ae481ff6b travis, appveyor: bump to Go 1.11.5 (#18947) 2019-01-29 17:49:59 +01:00
c7664b0636 core, cmd/puppeth: implement constantinople fix, disable EIP-1283 (#18486)
This PR adds a new fork which disables EIP-1283. Internally it's called Petersburg,
but the genesis/config field is ConstantinopleFix.

The block numbers are:

    7280000 for Constantinople on Mainnet
    7280000 for ConstantinopleFix on Mainnet
    4939394 for ConstantinopleFix on Ropsten
    9999999 for ConstantinopleFix on Rinkeby (real number decided later)

This PR also defaults to using the same ConstantinopleFix number as whatever
Constantinople is set to. That is, it will default to mainnet behaviour if ConstantinopleFix
is not set. This means that for private networks which have already transitioned
to Constantinople, this PR will break the network unless ConstantinopleFix is
explicitly set!
2019-01-29 17:49:27 +01:00
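
The defaulting rule is easy to misread, so a sketch may help: when the Petersburg (ConstantinopleFix) block is left nil, it follows the Constantinople block, so the fix activates at the same height. Field and method names are simplified from params.ChainConfig:

```go
package main

import (
	"fmt"
	"math/big"
)

type chainConfig struct {
	ConstantinopleBlock *big.Int
	PetersburgBlock     *big.Int
}

// petersburgBlock returns the configured fix block, falling back to the
// Constantinople block when unset — the defaulting behaviour the commit warns about.
func (c *chainConfig) petersburgBlock() *big.Int {
	if c.PetersburgBlock != nil {
		return c.PetersburgBlock
	}
	return c.ConstantinopleBlock
}

func main() {
	cfg := &chainConfig{ConstantinopleBlock: big.NewInt(7280000)}
	fmt.Println(cfg.petersburgBlock()) // 7280000
}
```
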
9dc5d1a915 params, swarm: release Geth v1.8.21 and Swarm v0.3.9 2019-01-15 22:51:31 +02:00
c03f694be5 Merge pull request #18454 from karalabe/postpone-constantinople
params: postpone Constantinople due to net SSTORE reentrancy
2019-01-15 22:25:47 +02:00
2a2fd5adf8 params: postpone Constantinople due to net SSTORE reentrancy 2019-01-15 22:06:17 +02:00
115b1c38ac accounts/abi: Add tests for reflection ahead of refactor (#18434) 2019-01-15 16:45:52 +01:00
4aeeecfded swarm/pot: each() functions refactored (#18452) 2019-01-15 11:51:33 +01:00
1636d9574b swarm/pot: pot.remove fixed (#18431)
* swarm/pot: refactored pot.remove(), updated comments

* swarm/pot: comments updated
2019-01-11 20:42:33 +01:00
88168ff5c5 Stream subscriptions (#18355)
* swarm/network: eachBin now starts at kaddepth for nn

* swarm/network: fix Kademlia.EachBin

* swarm/network: fix kademlia.EachBin

* swarm/network: correct EachBin implementation according to requirements

* swarm/network: fewer addresses, simplified tests

* swarm: calc kad depth outside loop in EachBin test

* swarm/network: removed printResults

* swarm/network: cleanup imports

* swarm/network: remove kademlia.EachBin; fix RequestSubscriptions and add unit test

* swarm/network/stream: address PR comments

* swarm/network/stream: package-wide subscriptionFunc

* swarm/network/stream: refactor to kad.EachConn
2019-01-11 15:08:09 +01:00
d5cad488be core, eth: fix database version (#18429)
* core, eth: fix database version

* eth: polish error message
2019-01-11 13:49:12 +02:00
2eb838ed97 p2p/simulations: eliminate concept of pivot (#18426) 2019-01-11 10:23:45 +01:00
38cce9ac33 accounts/abi: Extra slice tests (#18424)
Co-authored-by: weimumu <934657014@qq.com>
2019-01-10 16:27:54 +01:00
7240f4d800 swarm/network: Rename minproxbinsize, add as member of simulation (#18408)
* swarm/network: Rename minproxbinsize, add as member of simulation

* swarm/network: Deactivate WaitTillHealthy, unreliable pending suggestpeer
2019-01-10 12:33:51 +01:00
7ca40306af accounts/abi: tuple support (#18406) 2019-01-10 09:59:37 +01:00
6df3e4eeb0 swarm/network: remove isproxbin bool from kad.Each* iterfunc (#18239)
* swarm/network, swarm/pss: remove isproxbin bool from kad.Each* iterfunc

* swarm/network: restore comment and unskip snapshot sync tests
2019-01-10 03:36:19 +01:00
d70c4faf20 swarm: Fix T.Fatal inside a goroutine in tests (#18409)
* swarm/storage: fix T.Fatal inside a goroutine

* swarm/network/simulation: fix T.Fatal inside a goroutine

* swarm/network/stream: fix T.Fatal inside a goroutine

* swarm/network/simulation: consistent failures in TestPeerEventsTimeout

* swarm/network/simulation: rename sendRunSignal to triggerSimulationRun
2019-01-09 07:05:55 +01:00
81f04fa606 github: remove swarm github codeowners (#18412) 2019-01-08 22:50:15 +01:00
ae857e74bf swarm, p2p/protocols: Stream accounting (#18337)
* swarm: completed 1st phase of swap accounting

* swarm, p2p/protocols: added stream pricing

* swarm/network/stream: gofmt simplify stream.go

* swarm: fixed review comments

* swarm: used snapshots for swap tests

* swarm: custom retrieve for swap (less cascaded requests at any one time)

* swarm: addressed PR comments

* swarm: log output formatting

* swarm: removed parallelism in swap tests

* swarm: swap tests simplification

* swarm: removed swap_test.go

* swarm/network/stream: added prefix space for comments

* swarm/network/stream: unit test for prices

* swarm/network/stream: don't hardcode price

* swarm/network/stream: fixed invalid price check
2019-01-08 00:59:00 +01:00
56a3f6c03c swarm/storage/mock/test: fix T.Fatal inside a goroutine (#18399) 2019-01-07 14:32:01 +01:00
356c49fa7e swarm: Shed Index and Uint64Field additions (#18398) 2019-01-07 13:20:11 +01:00
428eabe28d cmd/geth: support dumpconfig optionally saving to file (#18327)
* Changed dumpConfig function to optionally save to file

* Added O_TRUNC flag to file open and cleaned up code
2019-01-07 10:56:50 +02:00
e05d468075 internal/ethapi: ask transaction pool for pending nonce (#15794) 2019-01-07 10:47:11 +02:00
aca588a8e4 accounts/keystore: small code simplification (#18394) 2019-01-07 10:35:44 +02:00
fe03b76ffe A few minor code inspection fixes (#18393)
* swarm/network: fix code inspection problems

- typos
- redundant import alias

* p2p/simulations: fix code inspection problems

- typos
- unused function parameters
- redundant import alias
- code style issue: snake case

* swarm/network: fix unused method parameters inspections
2019-01-06 11:58:57 +01:00
072c95fb74 accounts/keystore: fix comment typo (#18395) 2019-01-05 21:27:57 +01:00
e8ff318205 eth/tracer: extend create2 (#18318)
* eth/tracer: extend create2

* eth/tracers: fix create2-flaw in prestate_tracer

* eth/tracers: fix test

* eth/tracers: update assets
2019-01-05 21:26:50 +01:00
c1c4301121 Merge pull request #18371 from jeremyschlatter/patch-1
core/types: update incorrect comment
2019-01-04 10:14:17 +02:00
391d4cb9b5 Merge pull request #18390 from realdave/remove-sha3-pkg
vendor, crypto, swarm: switch over to upstream sha3 package
2019-01-04 09:51:12 +02:00
3f421aca54 cmd/puppeth: fix panic error when export aleth genesis wo/ precompile-addresses (#18344)
* cmd/puppeth: fix panic error when export aleth genesis wo/ precompile-addresses

* cmd/puppeth: don't need to handle duplicate set
2019-01-04 09:48:15 +02:00
8ec344bf60 vendor: update the entire golang.org/x/crypto dependency 2019-01-04 09:26:07 +02:00
33d233d3e1 vendor, crypto, swarm: switch over to upstream sha3 package 2019-01-04 09:26:07 +02:00
49975264a8 swarm/docker: Dockerfile for swarm:edge docker image (#18386) 2019-01-03 15:32:58 +01:00
1ea5279d5d vendor: vendor/github.com/mattn/go-isatty - add missing files (reported by mksully22) (#18376) 2019-01-03 13:31:20 +01:00
27913dd226 accounts/abi/bind: add optional block number for calls (#17942) 2019-01-03 12:54:24 +01:00
ddaf48bf84 travis, appveyor: bump to Go 1.11.4 (#18314)
* travis, appveyor: bump to Go 1.11.4

* internal/build: revert comment changes
2019-01-03 11:32:12 +02:00
57a90ad450 build: add LGPL license at update-license.go (#18377)
* add LGPL license at update-license.go

* add empty line
2019-01-03 10:09:04 +02:00
1d284c201d swarm/storage: change Proximity function and add TestProximity test (#18379) 2019-01-03 06:17:59 +01:00
b025053ab0 rpc: Warn the user when the path name is too long for the Unix ipc endpoint (#18330) 2019-01-02 17:33:17 +01:00
9bfd0b60cc accounts/abi: fix case of generated java functions (#18372) 2019-01-02 10:22:10 +01:00
a4af734328 accounts/abi: change unpacking of abi fields w/ underscores (#16513)
* accounts/abi: fix name styling when unpacking abi fields w/ underscores

ABI fields with underscores that are being unpacked
into structs expect structs with following form:

int_one -> Int_one

whereas in abigen the generated structs are camelcased

int_one -> IntOne

so updated the unpack method to expect camelcased structs as well.
2018-12-29 11:32:58 +01:00
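
A toy version of the name mapping the fix expects, turning an ABI field like int_one into the Go struct field IntOne; the real matching lives in accounts/abi and handles more cases:

```go
package main

import (
	"fmt"
	"strings"
)

// toCamelCase shows the expected mapping: an ABI field "int_one" should
// match the struct field "IntOne" rather than "Int_one".
func toCamelCase(abiName string) string {
	parts := strings.Split(abiName, "_")
	for i, p := range parts {
		if p != "" {
			parts[i] = strings.ToUpper(p[:1]) + p[1:]
		}
	}
	return strings.Join(parts, "")
}

func main() {
	fmt.Println(toCamelCase("int_one"))   // IntOne
	fmt.Println(toCamelCase("gas_price")) // GasPrice
}
```
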
6537ab5dd3 core/types: update incorrect comment 2018-12-28 17:58:03 -08:00
735343430d fix string array unpack bug in accounts/abi (#18364) 2018-12-28 08:43:55 +01:00
9e9fc87e70 swarm: remove unused/dead code (#18351) 2018-12-23 17:31:32 +01:00
335760bf06 accounts/abi: Brings out the msg defined at require statement in SC function (#17328) 2018-12-22 11:39:08 +01:00
7df52e324c accounts/abi: add support for unpacking returned bytesN arrays (#15242) 2018-12-22 11:26:49 +01:00
5e4fd8e7db swarm/network: Revised depth and health for Kademlia (#18354)
* swarm/network: Revised depth calculation with tests

* swarm/network: WIP remove redundant "full" function

* swarm/network: WIP peerpot refactor

* swarm/network: Make test methods submethod of peerpot and embed kad

* swarm/network: Remove commented out code

* swarm/network: Rename health test functions

* swarm/network: Too many n's

* swarm/network: Change hive Healthy func to accept addresses

* swarm/network: Add Healthy proxy method for api in hive

* swarm/network: Skip failing test out of scope for PR

* swarm/network: Skip all tests dependent on SuggestPeers

* swarm/network: Remove commented code and useless kad Pof member

* swarm/network: Remove more unused code, add counter on depth test errors

* swarm/network: WIP Create Healthy assertion tests

* swarm/network: Roll back health related methods receiver change

* swarm/network: Hardwire network minproxbinsize in swarm sim

* swarm/network: Rework Health test to strict

Pending add test for saturation
And add test for as many as possible up to saturation

* swarm/network: Skip discovery tests (dependent on SuggestPeer)

* swarm/network: Remove useless minProxBinSize in stream

* swarm/network: Remove unnecessary testing.T param to assert health

* swarm/network: Implement t.Helper() in checkHealth

* swarm/network: Rename check back to assert now that we have helper magic

* swarm/network: Revert WaitTillHealthy change (deferred to nxt PR)

* swarm/network: Kademlia tests GotNN => ConnectNN

* swarm/network: Renames and comments

* swarm/network: Add comments
2018-12-22 06:53:30 +01:00
880de230b4 p2p/protocols: accounting metrics rpc (#18336)
* p2p/protocols: accounting metrics rpc added (#847)

* p2p/protocols: accounting api documentation added (#847)

* p2p/protocols: accounting api doc updated (#847)

* p2p/protocols: accounting api doc update (#847)

* p2p/protocols: accounting api doc update (#847)

* p2p/protocols: fix file is not gofmted

* fix lint error

* updated comments after review

* add account balance to rpc

* naming changed after review
2018-12-22 06:04:03 +01:00
81c3dc728f eth/downloader: progress in stateSync not used anymore (#17998) 2018-12-21 23:36:14 +01:00
ca7c13ba8f swarm/pss: forwarding function refactoring (#18353) 2018-12-21 18:04:18 +01:00
e1edfe0689 p2p/simulation: Test snapshot correctness and minimal benchmark (#18287)
* p2p/simulation: WIP minimal snapshot test

* p2p/simulation: Add snapshot create, load and verify to snapshot test

* build: add test tag for tests

* p2p/simulations, build: Revert travis change, build test sym always

* p2p/simulations: Add comments, timeout check on additional events

* p2p/simulation: Add benchmark template for minimal peer protocol init

* p2p/simulations: Remove unused code

* p2p/simulation: Correct timer reset

* p2p/simulations: Put snapshot check events in buffer and call blocking

* p2p/simulations: TestSnapshot fail if Load function returns early

* p2p/simulations: TestSnapshot wait for all connections before returning

* p2p/simulation: Revert to before wait for snap load (5e75594)

* p2p/simulations: add "conns after load" subtest to TestSnapshot

and nudge
2018-12-21 06:22:11 +01:00
27ce4eb78b core: sanitize more TxPoolConfig fields (#17210)
* core: sanitize more TxPoolConfig fields

* core: fix TestTransactionPendingMinimumAllowance
2018-12-20 14:00:58 +01:00
5f251a6448 downloader: fix edgecase where returned index is OOB for downloader (#18335)
* downloader: fix edgecase where returned index is OOB for downloader

* downloader: documentation

Co-Authored-By: holiman <martin@swende.se>
2018-12-20 10:46:08 +01:00
fe86a707d8 swarm/storage: remove unused methods from Chunk interface (#18283) 2018-12-18 15:25:02 +01:00
b01cfce362 swarm/pss: Reduce input vulnerabilities (#18304) 2018-12-18 15:23:32 +01:00
de4265fa02 swarm/network/simulation: commented out unreachable code to avoid vet errors (#18263) 2018-12-18 07:24:59 +01:00
90ea542e9e Update visualized snapshot test (#18286)
* swarm/network/stream: fix visualized_snapshot_sync_sim_test

* swarm/network/stream: updated visualized snapshot test; data in p2p event

* swarm/network/stream: cleanup visualized snapshot sync test

* swarm/network/stream: re-enable t.Skip for visualized test

* swarm/network/stream: addressed PR comments
2018-12-18 07:20:59 +01:00
472c23a801 p2p/simulation: move connection methods from swarm/network/simulation (#18323) 2018-12-17 12:19:01 +01:00
d322c9d550 swarm/storage/feed: remove unused code (#18324) 2018-12-17 11:32:55 +01:00
3ad73443c7 fix slice unpack bug in accounts/abi (#18321)
* fix slice unpack bug in accounts/abi
2018-12-17 09:50:52 +01:00
7dbb075c07 Change issue labels in bot configs to the new prefixed version (#18311) 2018-12-14 15:06:06 +01:00
aebf9e2fe7 .github: add @gballet as abi codeowner (#18306) 2018-12-14 15:05:22 +01:00
aad3c67a92 p2p/discv5: don't hash findnode target in lookup against table (#18309) 2018-12-14 14:55:51 +01:00
fe26b2f366 core/state: rename 'new' variable (#18301) 2018-12-14 14:55:03 +01:00
88d7d4fed4 Change issue labels in bot configs to the new prefixed version 2018-12-14 14:50:10 +01:00
9940d93a43 Comment error (#18303) 2018-12-14 11:15:31 +01:00
3796751efc rpc: add application/json-rpc as accepted content type, fixes #18293 (#18310) 2018-12-14 11:08:11 +01:00
e79821cabe accounts/abi: argument type and name were reversed (#17947)
argument type and name were reversed
2018-12-13 15:12:19 +01:00
e57e4571d3 crypto/secp256k1: Fix invalid document link (#18297) 2018-12-13 10:25:13 +01:00
b3be9b7cd8 usbwallet: check returned error when decoding hexstr (#18056)
* usbwallet: check returned error when decoding hexstr

* Update accounts/usbwallet/ledger.go

Co-Authored-By: CoreyLin <514971757@qq.com>

* usbwallet: check hex decode error
2018-12-13 10:21:52 +01:00
4e6f53ac33 swarm/storage: simplify ChunkValidator interface (#18285) 2018-12-12 16:22:17 +01:00
ebbf3dfafb swarm/shed: add metrics to each shed db (#18277)
* swarm/shed: add metrics to each shed db

* swarm/shed: push metrics prefix up

* swarm/shed: rename prefix to metricsPrefix

* swarm/shed: unexport Meter, remove Mutex for quit channel
2018-12-12 07:51:29 +01:00
1e190a3b1c params, swarm: begin Geth v1.9.0 family, Swarm v0.3.9 cycle 2018-12-11 14:23:57 +02:00
24d727b6d6 params, swarm: release Geth v1.8.20 and Swarm v0.3.8 2018-12-11 14:20:21 +02:00
83a9a73b89 cmd/geth, core, eth: implement Constantinople override flag (#18273)
* geth/core/eth: implement constantinople override flag

* les: implemnent constantinople override flag for les clients

* cmd/geth, eth, les: fix typo, move flag to experimentals
2018-12-11 14:19:03 +02:00
5584574217 Merge pull request #18281 from karalabe/puppeth-faucet
cmd/faucet, cmd/puppeth: fix enode and compose regressions, expose UDP
2018-12-11 13:55:10 +02:00
38c3d88cea cmd/puppeth: support latest docker compose, expose faucet UDP 2018-12-11 13:41:41 +02:00
69a8d9841a cmd/faucet: fix faucet static peer regression 2018-12-11 13:41:18 +02:00
bb724080ca cmd/swarm, metrics, swarm/api/client, swarm/storage, swarm/metrics, swarm/api/http: add instrumentation (#18274) 2018-12-11 09:21:58 +01:00
b2aac658b0 Merge pull request #18271 from karalabe/1.8.20-chts
params: update CHTs for the 1.8.20 release
2018-12-10 15:15:51 +02:00
9fe5d20011 Merge pull request #18028 from ryanschneider/blockhash-whitelist
cmd, eth: add support for `--whitelist <blocknum>=<hash>`
2018-12-10 15:10:35 +02:00
dd98d1da94 swarm/network: Correct ambiguity in compared addresses (#18251) 2018-12-10 14:56:01 +02:00
362e2ba792 params: update CHTs for the 1.8.20 release 2018-12-10 14:55:29 +02:00
31b3334922 cmd/utils, eth: minor polishes on whitelist code 2018-12-10 14:47:01 +02:00
48b70ecff1 cmd, eth: Add support for --whitelist <blocknum>=<hash>,... flag
* Rejects peers that respond with a different hash for any of the passed in block numbers.
* Meant for emergency situations when the network forks unexpectedly.
2018-12-10 14:30:06 +02:00
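
Conceptually the whitelist is a map from block number to the required header hash; a peer answering with a different hash for any entry is dropped. A hedged sketch, with placeholder hash strings:

```go
package main

import "fmt"

// whitelist maps block numbers to the header hashes this node insists on.
type whitelist map[uint64]string

// acceptPeer rejects a peer whose answers contradict any whitelist entry.
func (w whitelist) acceptPeer(answers map[uint64]string) bool {
	for number, want := range w {
		if got, ok := answers[number]; ok && got != want {
			fmt.Printf("whitelist mismatch at block %d: want %s, got %s\n", number, want, got)
			return false
		}
	}
	return true
}

func main() {
	w := whitelist{7280000: "0xaaaa"}
	fmt.Println(w.acceptPeer(map[uint64]string{7280000: "0xbbbb"})) // false: drop peer
}
```
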
c1e3fe6b14 ethereum: fix typo in interfaces.go (#18266)
* Fix typo in interfaces.go

* Update interfaces.go
2018-12-10 14:24:55 +02:00
2fdff33803 Merge pull request #18269 from Quasilyte/patch-1
light: fix duplicated argument in bytes.Equal call
2018-12-10 13:55:11 +02:00
da6e6e7971 light: fix duplicated argument in bytes.Equal call
Most probably a copy/paste kind of error.
Found with gocritic `dupArg` checker.
2018-12-10 14:29:34 +03:00
af8daf91a6 node, rpc: log cleanups in ipc listener function (#18124)
node,rpc: remove unused log in ipc listener function
2018-12-10 13:15:03 +02:00
fd66af5ee5 Merge pull request #17914 from holiman/block_analysis
core/vm, eth: add standard json tracing into filesystem dumps
2018-12-10 13:05:03 +02:00
0983d02aa9 eth, internal/web3ext: tiny polishes in tracers 2018-12-10 12:33:50 +02:00
42a914a84f cmd/evm, core/vm, eth: implement api methods to do stdjson dump to local filesystem 2018-12-10 12:33:50 +02:00
09d588e0da Merge pull request #18268 from karalabe/forkit
params: set mainnet and Rinkeby Constantinople fork blocks
2018-12-10 11:46:39 +02:00
6a1a4375c6 params: set mainnet and Rinkeby Constantinople fork blocks 2018-12-10 11:36:36 +02:00
dfa16a3e4e eth/tracers: fixed incorrect storage from prestate_tracer (#18253)
* eth: fixed incorrect storage from prestate_tracer

* eth/tracers: updated assets.go
2018-12-10 11:17:31 +02:00
c1d462ee5d cmd/puppeth: fix rogue quote in alethGenesisSpec JSON (#18262) 2018-12-10 11:16:19 +02:00
f32790fb05 node: warn when using deprecated config/resource files (#18199) 2018-12-07 15:43:27 +02:00
d2328b604a Merge pull request #18211 from karalabe/drop-fd-limit
cmd/utils: max out the OS file allowance, don't cap to 2K
2018-12-07 14:10:03 +02:00
661809714e swarm: snapshot load improvement (#18220)
* swarm/network: Hive - do not notify peer if discovery is disabled

* p2p/simulations: validate all connections on loading a snapshot

* p2p/simulations: track all connections on snapshot loading

* p2p/simulations: add snapshotLoadTimeout variable

* p2p/simulations: ignore control events in snapshot load

* p2p/simulations: simplify event loop synchronization

* p2p/simulations: return already connected error from Load function

* p2p/simulations: log warning on snapshot loading disconnection
2018-12-07 06:51:40 +01:00
de39513ced core, internal, eth, miner, les: Take VM config from BlockChain (#17955)
Until this commit, when sending an RPC request that called `NewEVM`, a blank `vm.Config`
was used to set the options, based on the default configuration. Any extra
configuration switches passed to the blockchain were ignored.

This PR adds a function to get the config from the blockchain, and this is what is now used
for RPC calls.

Some subsequent changes need to be made, see https://github.com/ethereum/go-ethereum/pull/17955#pullrequestreview-182237244
for the details of the discussion.
2018-12-06 14:34:49 +01:00
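
A schematic of the change: RPC call sites now build the EVM from the chain's stored VM options instead of a zero-value config. The types below are stand-ins; the accessor name follows the PR description:

```go
package main

import "fmt"

type vmConfig struct{ Debug bool }

// blockChain keeps the vm.Config it was started with.
type blockChain struct{ vmConfig vmConfig }

// GetVMConfig exposes the chain's VM options to RPC call sites.
func (bc *blockChain) GetVMConfig() *vmConfig { return &bc.vmConfig }

func newEVMConfig(bc *blockChain) vmConfig {
	return *bc.GetVMConfig() // was: a blank config that dropped chain switches
}

func main() {
	bc := &blockChain{vmConfig: vmConfig{Debug: true}}
	fmt.Println(newEVMConfig(bc))
}
```
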
3ac633ba84 swarm/api/http: add resetting timer metrics to requests (#18249) 2018-12-05 11:20:55 +01:00
b98d2e9a1c swarm/network/stream: Debug log instead of Warn for retrieval failure (#18246) 2018-12-04 18:29:51 +01:00
92639b676a Add packing for dynamic array and slice types (#18051)
* added tests for new abi encoding features (#4)

* added tests from bytes32[][] and string[]

* added offset to other types

* formatting

* Abi/dynamic types (#5)

* Revert "Abi/dynamic types (#5)" (#6)

This reverts commit dabca31d79.

* Abi/dynamic types (#7)

* some cleanup

* Apply suggestions from code review

apply suggestions

Co-Authored-By: vedhavyas <vedhavyas.singareddi@gmail.com>

* added better formatting (#8)

* review changes

* better comments
2018-12-04 15:27:55 +01:00
f74077b4c2 Merge pull request #18172 from holiman/puppeth_converter
cmd/puppeth: implement chainspec converters
2018-12-04 12:15:50 +02:00
d4415f5e40 cmd/puppeth: chain import/export via wizard, minor polishes 2018-12-04 12:12:40 +02:00
7a5c1b28dd whisperv6: remove duplicated code (#18015) 2018-12-03 14:15:22 +01:00
8698fbabf6 cmd/puppeth: implement chainspec converters 2018-12-03 12:34:41 +02:00
a3fd415c0f Merge pull request #18235 from karalabe/puppeth-enforce-lowercase
cmd/puppeth: enforce lowercase network names
2018-12-03 12:23:57 +02:00
4825d9c3dd cmd/puppeth: enforce lowercase network names 2018-12-03 12:17:08 +02:00
f850123ec7 Changed http:// to https:// on JSON-RPC link (#18224)
Changed http:// to https:// on JSON-RPC link in README.md
2018-12-02 13:03:31 +01:00
efe5886877 signer/core: Fixes typo of method name in comment. (#18222) 2018-12-02 13:00:42 +01:00
085f89172f swarm/pss: Add same api interface for all Send* methods (#18218) 2018-12-01 07:07:18 +01:00
54abb97e3b p2p: use errors.New instead of fmt.Errorf (#18193) 2018-11-30 22:38:37 +01:00
ef8ced4151 vendor: update github.com/karalabe/hid (#18213)
Fixes #15101 because hidapi is no longer being called from an
init function.
2018-11-30 11:22:53 +02:00
7e7781ffaa cmd/swarm: add flag for application name (swarm or swarm-private) (#18189)
* cmd/swarm: add flag for application name (swarm or swarm-private)

* cmd/swarm/swarm-smoke: return correct exit code

* cmd/swarm/swarm-smoke: remove colorable

* remove swarm/grafana_dashboards
2018-11-29 18:43:15 +02:00
cf62bd2e88 cmd/utils: max out the OS file allowance, don't cap to 2K 2018-11-29 12:47:29 +02:00
01371469e6 vendor: update leveldb (#18205) 2018-11-29 12:07:10 +02:00
32d35c9c08 accounts/keystore: delete the redundant keystore in filename (#17930)
* accounts/keystore: reduce file name length

* accounts/keystore: reduce code line width
2018-11-29 12:04:56 +02:00
a4428c505e mobile: added constructor for BigInts (#17828) 2018-11-29 12:02:31 +02:00
55a4ff806f remove a no-op line in the code (#17760) 2018-11-29 10:56:59 +01:00
8380a1303c vendor: update leveldb 2018-11-29 09:58:09 +01:00
7c657fc789 tests, core: update tests and make STATICCALL cause touch-delete (#18187) 2018-11-29 10:51:57 +02:00
3d21d455dc cmd/evm: commit statedb if dump is requested (#18208)
Add a call `statedb.Commit(true)` if the `Dump` flag is on, as otherwise the `storage` output in the dump is always empty.
2018-11-29 09:29:12 +01:00
3dba6a6d27 remove unrelated code 2018-11-28 22:42:07 +08:00
a7501d0c41 params, swarm: start Geth v1.8.20 and Swarm v0.3.8 release cycle 2018-11-28 14:44:00 +02:00
dae82f0985 params, swarm: release Geth v1.8.19 and Swarm v0.3.7 2018-11-28 14:42:37 +02:00
8fdbbef72d Merge pull request #18196 from karalabe/downloader-cht-fix
eth/downloader: fix light client cht binary search issue
2018-11-28 14:40:32 +02:00
8696986547 Merge pull request #18197 from karalabe/v1.8.19-chts
params: update CHTs for the v1.8.19 release
2018-11-28 13:54:32 +02:00
d606a7a46a params: update CHTs for the v1.8.19 release 2018-11-28 13:53:33 +02:00
174083c3ae eth/downloader: fix light client cht binary search issue 2018-11-28 13:46:13 +02:00
bfed28a421 core: more detailed metrics for block processing (#18119) 2018-11-28 10:29:05 +02:00
edc39aaedf p2p/discv5: gofmt 2018-11-27 14:50:47 +02:00
89fe24bbcc p2p/discv5: minor code simplification (#18188)
* Update net.go

simpler

* Update net.go
2018-11-27 14:00:57 +02:00
8b9f469419 p2p/protocols: fix minor comments typo (#18185) 2018-11-27 12:52:30 +02:00
695a5cce1e Increase bzz version (#18184)
* swarm/network/stream/: added stream protocol version match tests

* Increase BZZ version due to streamer version change; version tests

* swarm/network: increased hive and test protocol version
2018-11-26 19:34:40 +01:00
c207edf2a3 swarm: add database abstractions (shed package) (#18183) 2018-11-26 18:49:01 +01:00
4f0d978eaa cmd/swarm: update should error on manifest mismatch (#18047)
* cmd/swarm: fix ethersphere/go-ethereum#979:

update should error on manifest mismatch

* cmd/swarm: fixed comments and remove sprintf from log.Info

* cmd/swarm: remove unnecessary comment
2018-11-26 17:37:59 +01:00
1cd007ecae swarm/network: Correct neighborhood depth (#18066) 2018-11-26 17:13:59 +01:00
bba5fd8192 Accounting metrics reporter (#18136) 2018-11-26 17:05:18 +01:00
2714e8f091 Remove multihash from Swarm bzz:// for Feeds (#18175) 2018-11-26 16:10:22 +01:00
0699287440 tests: Add flag to use EVMC for state tests (#18084) 2018-11-26 16:09:32 +01:00
197d609b9a swarm/pss: Message handler refactor (#18169) 2018-11-26 13:52:04 +01:00
ca228569e4 light: odrTrie tryUpdate should use update (#18107)
TryUpdate did not call t.trie.TryUpdate(key, value); it called t.trie.TryDelete
instead. The update operation therefore simply deleted the corresponding entry,
which could still be retrieved later by ODR, but at the cost of further network overhead.
2018-11-26 13:27:49 +01:00
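
A minimal reproduction of the bug and fix, with an in-memory stand-in for the backing trie:

```go
package main

import "fmt"

// memTrie is a stand-in for the real light-package backing trie.
type memTrie map[string]string

func (m memTrie) TryUpdate(key, value []byte) error { m[string(key)] = string(value); return nil }
func (m memTrie) TryDelete(key []byte) error        { delete(m, string(key)); return nil }

type odrTrie struct{ trie memTrie }

// TryUpdate forwards to the backing trie's TryUpdate; the bug forwarded
// to TryDelete, silently turning every update into a delete.
func (t *odrTrie) TryUpdate(key, value []byte) error {
	return t.trie.TryUpdate(key, value) // was: t.trie.TryDelete(key)
}

func main() {
	t := &odrTrie{trie: memTrie{}}
	t.TryUpdate([]byte("acct"), []byte("balance=1"))
	fmt.Println(t.trie) // map[acct:balance=1]
}
```
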
f5e6634fd2 swarm/api: improve not found error msg (#18171) 2018-11-26 12:53:15 +01:00
93854bbad4 swarm/network/simulation: fix New function for-loop scope (#18161) 2018-11-26 12:39:38 +01:00
f0515800e6 les: fix fetcher syncing logic (#18072) 2018-11-26 13:34:33 +02:00
bb29d20828 Merge pull request #18179 from holiman/fix_tests
config: add constantinople block to testchainconfig
2018-11-26 11:33:09 +02:00
38592a13a3 fix mixHash/nonce for parity compatible network (#18166) 2018-11-26 09:59:04 +01:00
a5898ba621 config: add constantinople block to testchainconfig 2018-11-26 09:55:45 +01:00
2a113f6d72 core: return error if repair block failed (#18126)
* core: return error if repair block failed

* make error a bit shorter
2018-11-23 11:16:14 +02:00
b24ef5e05d eth: increase timeout in TestBroadcastBlock (#18064) 2018-11-23 11:14:09 +02:00
76f5f662cc cmd/swarm: FUSE do not require --ipcpath (#18112)
- Have `${DataDir}/bzzd.ipc` as IPC path default.
- Respect the `--datadir` flag.
- Keep only the global `--ipcpath` flag and drop the local `--ipcpath` flag
  as flags might overwrite each other. (Note: before global `--ipcpath`
  was ignored even if it was set)

fixes ethersphere#795
2018-11-23 01:32:34 +01:00
6b2cc8950e travis: increase open file limits (#18155) 2018-11-22 16:32:50 +02:00
2843001ac2 trie: fix overflow in write cache parent tracking (#18165)
trie/database: fix overflow in parent tracking
2018-11-22 15:14:31 +02:00
9d5e3e0637 params: add Constantinople block to AllXYZProtocolChanges (#18162)
* params: Add Constantinople block to AllCliqueProtocolChanges

* params: Add Constantinople block to AllEthashProtocolChanges
2018-11-22 15:03:50 +02:00
3ba0418a9a Merge pull request #17973 from holiman/splitter2
core: better side-chain importing
2018-11-22 15:01:10 +02:00
e0d091e090 core: better printout of receipts in bad block reports (#18156)
* core/blockchain: better printout of receipts in bad block reports

* fix spelling
2018-11-22 11:00:16 +02:00
070caec4bd swarm/network/stream: use swarm/mock/mem as mock global store (#18157) 2018-11-21 20:49:13 +01:00
4c181e4fb9 swarm/state: refactor InmemoryStore (#18143) 2018-11-21 14:36:56 +01:00
333b5fb123 core: polish side chain importer a bit 2018-11-21 13:19:56 +02:00
3fd87f2193 core: fix comment typo (#18144) 2018-11-21 12:52:02 +02:00
c7e522fd17 Update minimum required Go version in README.md (#18151) 2018-11-21 12:38:49 +02:00
5d80a1b665 whisper/mailserver: reduce the max number of opened files (#18142)
This should reduce the occurrences of travis failures on MacOS

Also fix some linter warnings
2018-11-20 20:14:37 +01:00
21dd59bd04 . 2018-11-20 22:16:40 +08:00
493903eede core: better side-chain importing 2018-11-20 12:28:43 +02:00
3d997b6dec whisper: log errors on failed tests (#18134)
Debug traces to investigate a travis issue on MacOS
2018-11-20 10:08:02 +01:00
d876f214e5 swarm/storage: move 'running migrations for' log line (#18120)
So that we only see the log message when we actually have to migrate.
2018-11-20 08:30:38 +01:00
7bf7bd2f50 internal/cmdtest: Expose process exit status and errors (#18046) 2018-11-20 08:23:43 +01:00
d31f1f4fdb cmd/swarm/swarm-smoke: update smoke tests to fit the new scheme for the k8s cluster (#18104) 2018-11-19 14:58:10 +01:00
6b6c4d1c27 cmd/swarm: speed up tests - use global cluster (#18129) 2018-11-19 14:57:22 +01:00
3333fe660f swarm/storage: speed up garbage collection and rpc tests (#18128) 2018-11-19 12:26:45 +01:00
51e2e78d26 swarm/api/http: change request served msg log level (#18127) 2018-11-18 10:54:03 +01:00
d136e985e8 trie: go fmt package 2018-11-16 16:35:39 +02:00
91c66d47ef Merge pull request #18085 from holiman/downloader_span
downloader: different sync strategy
2018-11-16 16:34:30 +02:00
accc0fab4f core, eth/downloader: fix ancestor lookup for fast sync 2018-11-16 13:21:20 +02:00
51b2f1620c downloader: different sync strategy 2018-11-16 11:54:36 +02:00
68be45e5f8 trie: return hasher to pool (#18116)
* trie: return hasher to pool

* trie: minor code formatting fix
2018-11-16 11:50:48 +02:00
ffe2fc3bc4 Swarm accounting (#18050)
* swarm: completed 1st phase of swap accounting

* swarm: swap accounting for swarm with p2p accounting

* swarm/swap: addressed PR comments

* swarm/swap: ignore ErrNotFound on stateStore.Get()

* swarm/swap: GetPeerBalance test; add TODO for chequebook API check

* swarm/network/stream: fix NewRegistry calls with new arguments

* swarm/swap: address @justelad's PR comments
2018-11-15 23:41:19 +01:00
324027640b swarm/network/simulation: use simulations.Event instead of p2p.PeerEvent (#18098) 2018-11-15 21:06:27 +01:00
b91766fe6d eth: fix comment typo (#18114)
* consensus/clique: fix comment typo

* eth,eth/downloader: fix comment typo
2018-11-15 16:31:24 +02:00
a6942b9f25 swarm/storage: Batched database migration (#18113) 2018-11-15 14:57:03 +01:00
17d67c5834 Merge pull request #18087 from karalabe/trie-read-cacher
cmd, core, eth, light, trie: add trie read caching layer
2018-11-15 14:42:19 +02:00
434dd5bc00 cmd, core, eth, light, trie: add trie read caching layer 2018-11-15 12:22:13 +02:00
14346e4ef9 internal: fix typo in comments (#18106)
Changed "signTransactions" to "signTransaction"
2018-11-15 11:11:14 +02:00
b8a2ac3fcf les: fix pubkey index typo (#18093) 2018-11-15 11:10:45 +02:00
9a000601c6 consensus/clique: fix comment typo (#18103) 2018-11-14 14:50:30 +02:00
23de6197f9 rpc: fix package doc typo (#18101)
Changed "send" to "send," in two places
2018-11-14 12:21:52 +02:00
698843b45f rpc: fix example typo (#18100)
whishes --> wishes
2018-11-14 12:21:10 +02:00
48b4e8069c params, swarm: begin Geth v1.8.19 and Swarm v0.3.7 cycle 2018-11-14 10:27:30 +02:00
58632d4402 params, swarm: release Geth v1.8.18 and Swarm v0.3.6 2018-11-14 10:25:19 +02:00
eb8fa3cc89 cmd/swarm, swarm/api/http, swarm/bmt, swarm/fuse, swarm/network/stream, swarm/storage, swarm/storage/encryption, swarm/testutil: use pseudo-random instead of crypto-random for test files content generation (#18083)
- Replace "crypto/rand" to "math/rand" for files content generation
- Remove swarm/network_test.go.Shuffle and swarm/btm/btm_test.go.Shuffle - because go1.9 support was dropped (see https://github.com/ethereum/go-ethereum/pull/17807 and comments to swarm/network_test.go.Shuffle)
2018-11-14 09:21:14 +01:00
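
Test content only needs to be arbitrary, not unpredictable, which is why math/rand suffices; a fixed seed even makes failures reproducible. A small sketch under those assumptions — the helper name is ours, not the swarm/testutil API:

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomTestData fills a buffer from math/rand instead of crypto/rand.
func randomTestData(seed int64, n int) []byte {
	data := make([]byte, n)
	rand.New(rand.NewSource(seed)).Read(data)
	return data
}

func main() {
	fmt.Printf("%x\n", randomTestData(42, 8)) // identical output on every run
}
```
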
cff97119a7 Merge pull request #18097 from karalabe/update-chts-2
params: update CHTs
2018-11-14 10:20:51 +02:00
cef7ed53bd params: update CHTs 2018-11-14 10:16:28 +02:00
c41e1bd1eb swarm/storage: fix garbage collector index skew (#18080)
On file access, LDBStore's tryAccessIdx() function created a faulty
GC Index Data entry because it did not index the ikey correctly.
That caused the chunk addresses/hashes to start with '00' and the last
two digits to be dropped, i.e. an incorrect chunk address.

Besides the fix, the commit also contains a schema change which will
run the CleanGCIndex() function to clean the GC index from erroneous
entries.

Note: CleanGCIndex() rebuilds the index from scratch which can take
a really-really long time with a huge DB (possibly an hour).
2018-11-13 15:22:53 +01:00
4fecc7a3b1 eth: fix minor grammar issue in comment (#18091) 2018-11-13 11:57:46 +02:00
588aa88121 github: format code owners file (#18090)
replace tabs by spaces in the code owners file
2018-11-13 11:02:04 +02:00
8080265f3f swarm/storage: fix access count on dbstore after cache hit (#17978)
Access count was not incremented when chunk was retrieved
from cache. So the garbage collector might have deleted the most
frequently accessed chunk from disk.

Co-authored-by: Ferenc Szabo <ferenc.szabo@ethereum.org>
2018-11-13 07:41:01 +01:00
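
The bug class is generic enough to sketch: if the access counter is bumped only on the disk path, cache-hot chunks look cold to the garbage collector. Illustrative types below, not the real LDBStore:

```go
package main

import "fmt"

type chunk struct{ accessCnt int }

type store struct {
	cache map[string]*chunk
	disk  map[string]*chunk
}

// get bumps the access counter on both paths; the bug skipped the bump on
// cache hits, so the GC evicted the most frequently used chunks first.
func (s *store) get(key string) *chunk {
	if c, ok := s.cache[key]; ok {
		c.accessCnt++ // previously skipped on the cache path
		return c
	}
	if c, ok := s.disk[key]; ok {
		c.accessCnt++
		s.cache[key] = c
		return c
	}
	return nil
}

func main() {
	s := &store{cache: map[string]*chunk{"a": {}}, disk: map[string]*chunk{}}
	s.get("a")
	s.get("a")
	fmt.Println(s.cache["a"].accessCnt) // 2
}
```
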
1212c7b844 core: fix default trie cache limit (#17860) 2018-11-12 18:06:34 +02:00
201a0bf181 p2p/simulations, swarm/network: Custom services in snapshot (#17991)
* p2p/simulations: Add custom services to simnodes + remove sim down conn objs

* p2p/simulation, swarm/network: Add selective services to discovery sim

* p2p/simulations, swarm/network: Remove useless comments

* p2p/simulations, swarm/network: Clean up mess from rebase

* p2p/simulation: Add sleep to prevent connect flakiness in http test

* p2p/simulations: added concurrent goroutines to prevent sleeps on simulation connect/disconnect

* p2p/simulations, swarm/network/simulations: address pr comments

* reinstated dummy service

* fixed http snapshot test
2018-11-12 14:57:17 +01:00
a0876f7433 Imply that SwarmApiFlag is the API endpoint to connect to, not to listen on (#18071) 2018-11-12 13:04:13 +01:00
1ff152f3a4 rawdb: remove unused parameter for WritePreimages func (#18059)
* rawdb: remove unused parameter for WritePreimages func and fix a
spelling mistake

* rawdb: update the doc for function WritePreimages
2018-11-09 12:51:07 +02:00
f574c4e74b metrics, p2p: add ephemeral registry (#18067)
* metrics, p2p: add ephemeral registry

* metrics: fix linter issue
2018-11-09 10:20:51 +01:00
870efeef01 core/state: remove lock (#18065)
The lock in StateDB is useless. It's only held in Copy, but Copy is safe
for concurrent use because all it does is read.
2018-11-08 21:37:19 +01:00
144c1c6c52 consensus: extend getWork API with block number (#18038) 2018-11-08 17:08:57 +02:00
b16cc501a8 ethclient: include block hash from FilterQuery (#17996)
ethereum/go-ethereum#16734 introduced BlockHash to the FilterQuery
struct. However, ethclient was not updated to include BlockHash in the actual
RPC request.
2018-11-08 13:26:47 +01:00
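
A simplified sketch of the repaired conversion: the RPC argument map must carry blockHash when FilterQuery.BlockHash is set. The mutual-exclusion check and field types are our simplification of the real toFilterArg:

```go
package main

import (
	"errors"
	"fmt"
)

// filterQuery is a trimmed stand-in for ethereum.FilterQuery.
type filterQuery struct {
	BlockHash string
	FromBlock string
	ToBlock   string
}

func toFilterArg(q filterQuery) (map[string]interface{}, error) {
	arg := map[string]interface{}{}
	if q.BlockHash != "" {
		if q.FromBlock != "" || q.ToBlock != "" {
			return nil, errors.New("cannot specify both BlockHash and FromBlock/ToBlock")
		}
		arg["blockHash"] = q.BlockHash // previously omitted from the request
		return arg, nil
	}
	arg["fromBlock"] = q.FromBlock
	arg["toBlock"] = q.ToBlock
	return arg, nil
}

func main() {
	arg, _ := toFilterArg(filterQuery{BlockHash: "0xabc"})
	fmt.Println(arg)
}
```
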
9313fa63f9 event/filter: delete unused package (#18063) 2018-11-08 14:26:29 +02:00
d0675e9d9c Merge pull request #17982 from holiman/polish_contantinople_extcodehash
core/vm: check empty in extcodehash
2018-11-08 14:26:04 +02:00
bd519ab8ae internal/web3ext: add eth.getProof (#18052) 2018-11-08 13:18:38 +01:00
1064e3283d common/compiler: capture runtime code and source maps (#18020) 2018-11-08 13:09:25 +01:00
a5dc087845 core/vm, eth/tracers: use pointer receiver for GetRefund (#18018) 2018-11-08 13:07:15 +01:00
212bf266c5 eth, p2p: fix comment typos (#18014) 2018-11-08 12:25:14 +01:00
c71e4fc4d5 p2p: fix comment typo (#18027) 2018-11-08 12:22:28 +01:00
968f6019d0 event, event/filter: minor code cleanup (#18061) 2018-11-08 12:17:01 +01:00
503993c819 p2p: use enode.ID type in metered connection (#17933)
Change the type of the metered connection's id field from string to enode.ID.
2018-11-08 12:11:20 +01:00
cf3b187bde swarm, cmd/swarm: address ineffectual assignments (#18048)
* swarm, cmd/swarm: address ineffectual assignments

* swarm/network: remove unused vars from testHandshake

* swarm/storage/feed: revert cursor changes
2018-11-07 20:39:08 +01:00
81533deae5 swarm/network: light nodes are not dialed, saved and requested from (#17975)
* RequestFromPeers does not use peers marked as lightnode

* fix warning about variable name

* write tests for RequestFromPeers

* lightnodes should be omitted from the addressbook

* resolve pr comments regarding logging, formatting and comments

* resolve pr comments regarding comments and added a missing newline

* add assertions to check peers in live connections
2018-11-07 20:33:36 +01:00
0bcff8f525 eth/downloader: speed up tests by generating chain only once (#17916)
* core: speed up GenerateChain

Use a mock implementation of ChainReader instead of creating
and destroying a BlockChain object for each generated block.

* eth/downloader: speed up tests by generating chain only once

This change reworks the downloader tests so they share a common test
blockchain instead of generating a chain in every test. The tests are
roughly twice as fast now.
2018-11-07 15:07:43 +01:00
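The pattern, sketched in test form with hypothetical helpers: build the chain once at package level and slice it per test instead of regenerating it each time.

```go
package sketch

import "testing"

// blocks stands in for a generated test chain; it is built once when the test
// binary starts and shared by every test. newTestBlocks is a hypothetical
// stand-in for core.GenerateChain-based setup.
var blocks = newTestBlocks(1024)

func newTestBlocks(n int) []int {
	out := make([]int, n)
	for i := range out {
		out[i] = i // placeholder "blocks"
	}
	return out
}

func TestSyncShort(t *testing.T) { runSync(t, blocks[:64]) }
func TestSyncFull(t *testing.T)  { runSync(t, blocks) }

func runSync(t *testing.T, chain []int) {
	if len(chain) == 0 {
		t.Fatal("empty chain")
	}
}
```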
36ca85fa1c swarm/api: Fix #18007, missing signature should return HTTP 400 (#18008) 2018-11-07 14:49:42 +01:00
b35165555d eth/downloader: remove the expired id directly (#17963) 2018-11-07 15:30:19 +02:00
5b74bb6445 signer: remove ineffectual assignments (#18049)
* signer: remove ineffectual assignments

* signer: remove ineffectual assignments
2018-11-07 14:55:54 +02:00
eea3ae42a3 core, eth/downloader: fix validation flaw, fix downloader printout flaw (#17974) 2018-11-07 14:47:11 +02:00
dc6648bb58 downloader: measure successful deliveries, not failed (#17983)
* downloader: measure successful deliveries, not failed

* downloader: fix typos
2018-11-07 14:18:07 +02:00
0fe0b8f7b9 p2p/protocols: use keyed fields for struct instantiation (#18017) 2018-11-07 13:22:31 +02:00
80e2f3aca4 travis, appveyor: bump to Go 1.11.2 (#18031) 2018-11-07 13:17:41 +02:00
e2640a96d4 miner: fix miner stress test (#18039) 2018-11-07 10:55:56 +02:00
79c7a69ac8 swarm: Better syncing and retrieval option definition (#17986)
* swarm: Better syncing and retrieval option definition

* swarm/network/stream: better comments

* swarm/network/stream: addressed PR comments
2018-11-07 00:04:18 +01:00
53eb4e0b0f swarm/api: unexport Respond methods (#18037) 2018-11-06 12:34:34 +01:00
baee850471 swarm: modify context key (#17925)
* swarm: modify context key

* gofmt sctx.go
2018-11-06 09:54:19 +01:00
126dfde6c9 cmd/swarm: auto resolve default path according to env flag (#17960) 2018-11-04 07:59:58 +01:00
f08f596a37 all: updated code owners file (#17987) 2018-10-31 23:36:33 +01:00
3e1cfbae93 cmd/swarm/swarm-smoke: fix loop variable capture in func (#17992) 2018-10-29 10:00:00 +01:00
54f650a3be swarm: clean up unused private types and functions (#17989)
* swarm: clean up unused private types and functions

Those were identified by a code inspection tool.

* swarm/storage: move/add Proximity GoDoc from deleted private function

The mentioned proximity() private function was deleted in:
1ca8fc1e6f
2018-10-27 16:18:42 +02:00
1b6fd032e3 core/vm: check empty in extcodehash 2018-10-26 08:56:37 +02:00
8ed4739176 p2p accounting (#17951)
* p2p/protocols: introduced protocol accounting

* p2p/protocols: added TestExchange simulation

* p2p/protocols: add accounting simulation

* p2p/protocols: remove unnecessary tests

* p2p/protocols: comments for accounting simulation

* p2p/protocols: addressed PR comments

* p2p/protocols: finalized accounting implementation

* p2p/protocols: removed unused code

* p2p/protocols: addressed @nonsense PR comments
2018-10-26 00:26:31 +02:00
80d3907767 cmd/clef: replace password arg with prompt (#17897)
* cmd/clef: replace password arg with prompt (#17829)

Entering passwords on the command line is not secure, as they are easy to recover from bash_history or the process table.
1. The clef command addpw was renamed to setpw to better describe its functionality
2. The <password> argument was removed and replaced with an interactive prompt

* cmd/clef: remove undeclared variable
2018-10-25 21:45:56 +02:00
6810933640 eth/downloader: SetBlocksIdle is not used (#17962)
__
 <(o )___
  ( ._> /
   `---'
2018-10-24 01:27:49 +02:00
7f22b59f87 core/state: simplify proof methods (#17965)
This fixes the import cycle build error in core/vm tests.
There is no need to refer to core/vm for a type definition.
2018-10-23 21:51:41 +02:00
4c0883e20d core/vm: adds refund as part of the json standard trace (#17910)
This adds the global accumulated refund counter to the standard
json output as a numeric json value. Previously this was not very
interesting since it was not used much, but with the new sstore
gas changes the value is a lot more interesting from a consensus
investigation perspective.
2018-10-23 16:28:18 +02:00
3088c122d8 eth/downloader: fix comment typos (#17956) 2018-10-23 13:21:16 +02:00
88b41a9e68 swarm/network/stream: disambiguate chunk delivery messages (retrieval… (#17920)
* swarm/network/stream: disambiguate chunk delivery messages (retrieval vs syncing)

* swarm/network/stream: addressed PR comments

* swarm/network/stream: stream protocol version change due to new message types in this PR
2018-10-21 09:30:41 +02:00
66debd91d9 swarm/api/http: remove ModTime=now for direct and multipart uploads (#17945) 2018-10-19 16:02:44 +02:00
75060ef96e cmd/bootnode: fix -writeaddress output (#17932) 2018-10-19 16:41:27 +03:00
6ff97bf2e5 accounts: wallet derivation path comment is mistaken (#17934) 2018-10-19 16:40:10 +03:00
d98c45f70f core: fix a typo (#17941) 2018-10-19 16:33:27 +03:00
aeb733623e swarm/network: disallow historical retrieval requests (#17936) 2018-10-19 10:50:25 +02:00
97fb08342d EIP-1186 eth_getProof (#17737)
* first impl of eth_getProof

* fixed docu

* added comments and refactored based on comments from holiman

* created structs

* handle errors correctly

* change Value to *hexutil.Big in order to have the same output as parity

* use ProofList as return type
2018-10-18 21:41:22 +02:00
cdf5982cfc swarm: Lightnode mode: disable sync, retrieve, subscription (#17899)
* swarm: Lightnode mode: disable sync, retrieve, subscription

* swarm/network/stream: assign error and check in one line

* swarm: restructured RegistryOption initializing

* swarm: empty commit to retrigger CI build

* swarm/network/stream: Added comments explaining RegistryOptions
2018-10-17 19:22:37 +02:00
4e693ad5a6 swarm/tracing: disable stdout logging for opentracing (#17931) 2018-10-17 14:46:59 +02:00
4466c7b971 metrics: added NewCounterForced (#17919) 2018-10-16 16:22:51 +02:00
2868acd80b core/types: fix comment for func SignatureValues (#17921) 2018-10-16 12:45:28 +02:00
6c313fff7b cmd/geth: don't set GOMAXPROCS by default (#17148)
This is no longer needed because Go uses all CPUs
by default. The change allows setting GOMAXPROCS in the environment if needed.
2018-10-16 02:02:53 +02:00
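A quick way to see the defaults in play; runtime.GOMAXPROCS(0) queries the current value without modifying it.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Since Go 1.5, GOMAXPROCS defaults to the number of CPUs, so geth no
	// longer needs to set it; exporting the GOMAXPROCS environment variable
	// still overrides the default.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0), "NumCPU:", runtime.NumCPU())
}
```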
a352de6a08 core/vm: add shortcuts for trivial exp cases (#16851) 2018-10-16 00:51:39 +02:00
6a7695e367 ethdb, rpc: support building on js/wasm (#17709)
The changes allow building WebAssembly applications which use ethclient.Client.
2018-10-16 00:47:25 +02:00
16e4d0e005 p2p: meter peer traffic, emit metered peer events (#17695)
This change extends the peer metrics collection:

- traces the life-cycle of the peers
- meters the peer traffic separately for every peer
- creates event feed for the peer events
- emits the peer events
2018-10-16 00:40:51 +02:00
331fa6d307 accounts/usbwallet: simplify code using -= operator (#17904) 2018-10-16 00:34:50 +02:00
3e92c853fb cmd/clef: fix typos in README (#17908) 2018-10-16 00:33:09 +02:00
60827dc50f tests: update tests, implement no-pow blocks (#17902)
This commit updates our tests with the latest and greatest from ethereum/tests.
It also contains an implementation of NoProof for blockchain tests.
2018-10-16 00:26:47 +02:00
2e98631c5e rpc: fix client shutdown hang when Close races with Unsubscribe (#17894)
Fixes #17837
2018-10-15 10:56:04 +02:00
6566a0a3b8 swarm/network/stream: generalise setting of next batch (#17818)
* swarm/network/stream: generalize SetNextBatch and add Server SessionIndex

* swarm/network/stream: fix a typo in comment

* swarm/network/stream: remove live argument from NewSwarmSyncerServer
2018-10-12 16:26:16 +02:00
dc3c3fb1e1 swarm/storage: Add accessCnt for GC (#17845) 2018-10-12 16:25:38 +02:00
862d6f2fbf cmd/swarm: Smoke test for Swarm Feed (#17892) 2018-10-12 16:24:00 +02:00
4868964bb9 cmd/swarm: split flags and cli command declarations to the relevant files (#17896) 2018-10-12 14:51:38 +02:00
6f607de5d5 p2p, p2p/discover: add signed ENR generation (#17753)
This PR adds enode.LocalNode and integrates it into the p2p
subsystem. This new object is the keeper of the local node
record. For now, a new version of the record is produced every
time the client restarts. We'll make it smarter to avoid that in
the future.

There are a couple of other changes in this commit: discovery now
waits for all of its goroutines at shutdown and the p2p server
now closes the node database after discovery has shut down. This
fixes a leveldb crash in tests. p2p server startup is faster
because it doesn't need to wait for the external IP query
anymore.
2018-10-12 11:47:24 +02:00
dcae0d348b p2p/simulations: fix a deadlock and clean up adapters (#17891)
This fixes a rare deadlock with the inproc adapter:

- A node is stopped, which acquires Network.lock.
- The protocol code being simulated (swarm/network in my case)
  waits for its goroutines to shut down.
- One of those goroutines calls into the simulation to add a peer,
  which waits for Network.lock.

The fix for the deadlock is really simple, just release the lock
before stopping the simulation node.

Other changes in this PR clean up the exec adapter so it reports
node startup errors better and remove the docker adapter because
it just adds overhead.

In the exec adapter, node information is now posted to a one-shot
server. This avoids log parsing and allows reporting startup
errors to the simulation host.

A small change in package node was needed because simulation
nodes use port zero. Node.{HTTP,WS}Endpoint now return the live
endpoints after startup by checking the TCP listener.
2018-10-11 20:32:14 +02:00
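The lock-scope fix, as a generic hypothetical sketch (names are stand-ins, not the simulations package's actual types):

```go
package sketch

import "sync"

type node struct{ stop func() error }

type Network struct {
	lock  sync.Mutex
	nodes map[string]*node
}

// Stop releases Network.lock before stopping the node: the node's shutdown
// path waits for protocol goroutines, which may call back into the network
// and try to take the same lock, so holding it across Stop can deadlock.
func (net *Network) Stop(id string) error {
	net.lock.Lock()
	n := net.nodes[id]
	net.lock.Unlock() // released first, per the fix described above

	if n == nil {
		return nil
	}
	return n.stop()
}
```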
f951e23fb5 Merge pull request #17887 from karalabe/warn-failed-account-access
internal/ethapi: warn on failed account accesses
2018-10-10 13:22:43 +03:00
aff421e78c internal/ethapi: warn on failed account accesses 2018-10-10 12:29:05 +03:00
4e474c74dc rpc: fix subscription corner case and speed up tests (#17874)
Notifier tracks whether subscriptions are 'active'. A subscription
becomes active when the subscription ID has been sent to the client. If
the client sends notifications in the request handler before the
subscription becomes active they are dropped. The tests tried to work
around this problem by always waiting 5s before sending the first
notification.

Fix it by buffering notifications until the subscription becomes active.
This speeds up all subscription tests.

Also fix TestSubscriptionMultipleNamespaces to wait for three messages
per subscription instead of six. The test now finishes just after all
notifications have been received and doesn't hit the 30s timeout anymore.
2018-10-09 16:34:24 +02:00
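A simplified, hypothetical sketch of the buffering approach: messages sent before the subscription ID reaches the client are queued, then flushed once the subscription is marked active.

```go
package sketch

import "sync"

type notifier struct {
	mu      sync.Mutex
	active  bool
	pending [][]byte
	send    func([]byte)
}

func (n *notifier) Notify(msg []byte) {
	n.mu.Lock()
	defer n.mu.Unlock()
	if !n.active {
		n.pending = append(n.pending, msg) // buffered instead of dropped
		return
	}
	n.send(msg)
}

// activate is called once the subscription ID has been sent to the client.
func (n *notifier) activate() {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.active = true
	for _, msg := range n.pending {
		n.send(msg)
	}
	n.pending = nil
}
```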
da290e9707 cmd/swarm: speed up tests (#17878)
These minor changes already shaved off around 30s.
2018-10-09 14:08:40 +02:00
0fe9a372b3 swarm, swarm/storage: lower constants for faster tests (#17876)
* swarm/storage: lower constants for faster tests

* swarm: reduce test size for TestLocalStoreAndRetrieve

* swarm: reduce nodes for dec_inc_node_count
2018-10-09 11:45:42 +02:00
d5c7a6056a cmd/clef: encrypt the master seed on disk (#17704)
* cmd/clef: encrypt master seed of clef

Signed-off-by: YaoZengzeng <yaozengzeng@zju.edu.cn>

* keystore: refactor for external use of encryption

* clef: utilize keystore encryption, check flags correctly

* clef: validate master password

* clef: add json wrapping around encrypted master seed
2018-10-09 11:05:41 +02:00
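A generic sketch of password-based encryption in this spirit (scrypt-derived key plus AES-GCM); the parameters are illustrative and this is not clef's exact on-disk format.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"

	"golang.org/x/crypto/scrypt"
)

func encryptSeed(seed, password []byte) (salt, nonce, ciphertext []byte, err error) {
	salt = make([]byte, 32)
	if _, err = io.ReadFull(rand.Reader, salt); err != nil {
		return
	}
	// Illustrative scrypt parameters; a real tool would pick and persist them.
	key, err := scrypt.Key(password, salt, 1<<18, 8, 1, 32)
	if err != nil {
		return
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err = io.ReadFull(rand.Reader, nonce); err != nil {
		return
	}
	ciphertext = gcm.Seal(nil, nonce, seed, nil) // authenticated encryption
	return
}

func main() {
	salt, nonce, ct, err := encryptSeed([]byte("master seed"), []byte("password"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("salt=%x nonce=%x ct=%x\n", salt, nonce, ct)
}
```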
ff5538ad4c params, swarm: begin Geth v1.8.18, Swarm v0.3.6 cycle 2018-10-09 10:37:49 +03:00
8bbe72075e params, swarm: release Geth v1.8.17 and Swarm v0.3.5 2018-10-09 10:35:31 +03:00
97b2806686 core/asm: Use hexadecimal addresses in assembly dumps (#17870) 2018-10-09 10:27:07 +03:00
11d0ff6578 Fix retrieval tests and simulation backends (#17723)
* swarm/network/stream: introduced visualized snapshot sync test

* swarm/network/stream: non-existing hash visualization sim

* swarm/network/stream: fixed retrieval tests; new backend for visualization

* swarm/network/stream: cleanup of visualized_snapshot_sync_sim_test.go

* swarm/network/stream: rebased PR on master

* swarm/network/stream: fixed loop logic in retrieval tests

* swarm/network/stream: fixed iterations for snapshot tests

* swarm/network/stream: address PR comments

* swarm/network/stream: addressed PR comments
2018-10-08 20:28:44 +02:00
72a076840b travis, build: speed up CI runs (#17854)
* travis: exclude non-test jobs for PRs

We don't usually look at these builders and not starting them
removes ~15min of build time.

* build: don't run vet before tests

Recent versions of Go run vet during 'go test' and we have
a dedicated lint job.

* build: use -timeout 5m for tests

Tests sometimes hang on Travis. CI runs are aborted after 10min with no
output. Adding the timeout means we get to see the stack trace for
timeouts.
2018-10-08 16:37:06 +02:00
459278cd57 miner: remove intermediate conversion to int in tests (#17853)
This fixes the tests on 32bit platforms.
2018-10-08 16:30:00 +02:00
cfcc47529d cmd/utils: fix bug when checking for flag value conflicts (#17803) 2018-10-08 17:08:56 +03:00
c5d34fc94e les, light: reduce les testing stress (#17867) 2018-10-08 16:52:23 +03:00
53634f1e04 trie: remove unused originalRoot field (#17862) 2018-10-08 13:16:16 +02:00
31c4e3a118 core/types: Log.Index is the index in block, not receipt (#17866) 2018-10-08 13:15:19 +02:00
1d3d4a4d57 core/vm: reuse Keccak-256 hashes across opcode executions (#17863) 2018-10-08 13:14:29 +02:00
c5cb214f68 swarm/storage/feed: Expose MaxUpdateDataLength constant (#17858) 2018-10-08 10:57:38 +02:00
f95811e65b cmd/abigen: support for --type flag with piped data (#17648) 2018-10-06 16:27:12 +02:00
5ed3960b9b accounts/abi/bind: stop using goimports in the binding generator (#17768) 2018-10-05 22:24:54 +02:00
5b0c9c8ae5 tests: use non-constantinople ropsten for difficulty tests (#17850)
This is a stopgap until new tests have been generated and imported.
2018-10-05 21:52:05 +02:00
58e868b759 core/vm : fix failing testcase (#17852)
* core/vm : fix failing testcase

* core/vm: fix nitpick
2018-10-05 19:02:06 +02:00
5d3b7bb023 Merge pull request #17839 from karalabe/downloader-invalid-hash-chain-fix
eth/downloader: fix invalid hash chain error due to head mini reorg
2018-10-05 11:03:38 +03:00
6ee3b26f44 eth/downloader: fix invalid hash chain error due to head mini reorg 2018-10-05 10:45:02 +03:00
092df3ab59 core/vm: SHA3 word cost for CREATE2 (#17812)
* core/vm: create2 address generation tests

* core/vm: per byte cost of CREATE2

* core/vm: fix linter issue in test
2018-10-05 10:32:35 +03:00
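In gas terms, the charge described above can be sketched as follows; params.Sha3WordGas is go-ethereum's 6-gas-per-word Keccak-256 constant, and the rounding mirrors the EVM's per-32-byte-word charging (a sketch, not the actual gas table wiring).

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/params"
)

// create2HashGas charges the Keccak-256 word cost over the init code, since
// CREATE2 hashes the full init code to derive the new contract's address.
func create2HashGas(initCodeLen uint64) uint64 {
	words := (initCodeLen + 31) / 32 // round up to 32-byte words
	return words * params.Sha3WordGas
}

func main() {
	fmt.Println(create2HashGas(100)) // 4 words * 6 gas = 24
}
```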
81375a3801 tests: do not exit early on log hash mismatch (#17844) 2018-10-05 08:35:31 +02:00
d79602d2d4 Merge pull request #17843 from karalabe/ropsten-block-and-chts
params: add ropsten fork delay, update les checkpoints
2018-10-04 22:10:53 +03:00
ff7fad18fb params: add ropsten fork delay, update les checkpoints 2018-10-04 19:14:53 +03:00
89a32451ae core/vm: faster create/create2 (#17806)
* core/vm/runtime: benchmark create/create2

* core/vm: do less hashing in CREATE2

* core/vm: avoid storing jumpdest analysis for initcode

* core/vm: avoid unnecessary lookups, remove unused fields

* core/vm: go formatting tests

* core/vm: save jumpdest analysis locally

* core/vm: use common.Hash instead of nil, fix review comments

* core/vm: removed type destinations

* core/vm: correct check for empty hash

* eth: more elegant api_tracer

* core/vm: address review concerns
2018-10-04 18:15:37 +03:00
8c63d0d2e4 swarm/storage: extract isValid. correctly remove invalid chunks from store on migration (#17835) 2018-10-04 18:13:48 +03:00
1895059119 p2p: add enode URL to PeerInfo (#17838) 2018-10-04 18:13:21 +03:00
127553253e Merge pull request #17801 from eosclassicteam/patch-1
Enable constantinople on Ropsten testnet
2018-10-04 12:36:15 +03:00
ff6e0351ab eth: fixed the minor typo inside the comments (#17830) 2018-10-04 12:35:24 +03:00
b8a0daf0cc cmd/puppeth: fix node URL in health check (#17802)
* cmd/puppeth: fix node URL in health check

* cmd/puppeth: set external IP for geth

* cmd/puppeth: fix enode cast issue
2018-10-04 12:34:49 +03:00
bfa0f96822 cmd/evm: fix state dump (#17832) 2018-10-04 10:22:41 +02:00
82a1c771ef cmd/swarm: disable tests under Windows until they are fixed (#17827) 2018-10-04 09:18:03 +02:00
9d06b2c5f3 core: use ChainHeadEvent subscription in the chain indexer (#17826) 2018-10-03 17:25:25 +03:00
e5677114dc Merge pull request #17796 from epiclabs-io/mru-feeds
swarm/storage/feeds: Renamed MRU to Swarm Feeds
2018-10-03 14:59:41 +02:00
303b99663e swarm: schemas and migrations (#17813) 2018-10-03 14:31:59 +02:00
14bef9a2db core: fix unnecessary ancestor lookup after a fast sync (#17825) 2018-10-03 13:42:19 +03:00
d3a773c284 travis, appveyor: bump to Go 1.11.1 (#17820) 2018-10-03 13:41:24 +03:00
de01178c18 swarm/storage/feed: Renamed package 2018-10-03 09:15:28 +02:00
696bc9b01c swarm/storage/feeds: renamed vars that can conflict with package name 2018-10-03 09:12:06 +02:00
58c0879c2f swarm/storage/feeds: removed capital Feed throughout 2018-10-03 09:12:06 +02:00
68b8088cb9 swarm: Changed owners. 2018-10-03 09:12:06 +02:00
b6ccc06cda swarm/storage/feeds: Final package rename and moved files 2018-10-03 09:12:06 +02:00
83705ef6aa swarm/storage/mru: Renamed rest of MRU references 2018-10-03 09:12:06 +02:00
b35622cf3c swarm/storage/mru: Renamed all comments to Feeds 2018-10-03 09:12:06 +02:00
f1e86ad9cf swarm/storage/mru: Renamed all identifiers to Feeds 2018-10-03 09:12:06 +02:00
bd1f7ebda2 cmd/swarm: fix appveyor build (#17808) 2018-10-02 14:59:58 +02:00
26a37c5351 travis.yml: remove Go 1.9 (#17807) 2018-10-02 11:26:32 +02:00
0bf3065fb4 Merge pull request #17771 from ethersphere/cmd-config-errors
swarm: handle errors in cmdLineOverride and envVarsOverride
2018-10-02 09:31:44 +02:00
83116a3479 Merge pull request #17799 from ethersphere/correct_swarm_version
cmd/swarm: correct swarm version on --help
2018-10-02 08:02:30 +02:00
2c8d5dec50 Merge pull request #17800 from ethersphere/disable_cmd_swarm_tests_on_win
cmd/swarm: disable export and upload tests on Windows
2018-10-01 20:49:08 +02:00
668c37fde1 params: enable constantinople on ropsten at 4.2M 2018-10-01 22:44:09 +09:00
b7bbe66b19 les: limit state ODR retrievals to the last 100 blocks (#17744) 2018-10-01 15:14:53 +02:00
96fd50be10 accounts/abi: fix panic in MethodById lookup. Fixes #17797 (#17798) 2018-10-01 14:17:36 +02:00
634e963f02 cmd/swarm: disable export and upload tests on Windows 2018-10-01 13:41:47 +02:00
dc5d643bb5 cmd/swarm, swarm: cross-platform Content-Type detection (#17782)
- Mime types generator (the standard "mime" package relies on system settings, see mime.osInitMime)
- Changed swarm/api.Upload:
    - simplify I/O throttling with a semaphore primitive and use the file name where possible
    - f.Close() must be called in a defer - otherwise a panic or a later-added early return will leak file descriptors
    - one error was suppressed
2018-10-01 13:39:39 +02:00
9a749dcde5 cmd/swarm: correct swarm version on --help 2018-10-01 13:28:07 +02:00
b69942befe core, internal/ethapi: add and use LRU cache for receipts (#17610) 2018-09-29 22:53:31 +02:00
86ec213076 core/types: make tx signature values optional in JSON (#17742) 2018-09-29 22:47:59 +02:00
3d782bc727 eth: broadcast blocks to at least 4 peers (#17725) 2018-09-29 22:17:06 +02:00
01d9f29805 cmd/swarm: remove swarm binary (#17784) 2018-09-29 22:15:32 +02:00
024b22c30e eth/downloader: use intermediate variable for better readability (#17510) 2018-09-29 22:13:39 +02:00
ffca6dfe01 core/types: fix typos (#17762) 2018-09-29 22:11:56 +02:00
107f556b2d internal/ethapi: add eth_chainId method (#17617)
This implements EIP-695.
2018-09-29 22:07:02 +02:00
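Querying the new method over raw RPC; rpc.Dial and Client.CallContext are the real APIs, while the endpoint is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	var id string // hex quantity per EIP-695, e.g. "0x1" on mainnet
	if err := client.CallContext(context.Background(), &id, "eth_chainId"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("chain id:", id)
}
```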
44eb69561a internal/debug: support color terminal for cygwin/msys2 (#17740)
- update go-colorable, go-isatty, go-runewidth packages
- use go-isatty instead of log/term and remove log/term package
2018-09-29 16:15:39 +02:00
d9e324a331 cmd/swarm: respect --loglevel in run_test helpers (#17739)
When CLI tests were spawning new nodes, the log level verbosity was
hard-coded as 6, so the Swarm process was always polluting the test
output with TRACE-level logs.

Now `go test -v ./cmd/swarm -loglevel 0` works as expected.
2018-09-28 22:49:15 +02:00
a5aaab2f22 accounts/abi/bind/backends: fix typo (#17749) 2018-09-28 22:47:46 +02:00
f1b9a3e2f4 contracts/ens: expose Add and SetAddr in ENS (#17661)
I am planning to use this to resolve names to user addresses for Swarm/MRU feeds.
2018-09-28 22:46:41 +02:00
79ca6c7a65 tests: update slow test lists, skip on windows/386 (#17758) 2018-09-28 22:23:47 +02:00
4d8c7248bd build: fix typo (#17773) 2018-09-28 20:05:46 +02:00
7e1c374dc6 swarm/storage: ensure 64bit hasherStore struct alignment (#17766) 2018-09-28 20:04:56 +02:00
7910dd5179 Merge pull request #17781 from ethersphere/trim_newline
cmd/swarm: trim new lines from files
2018-09-28 20:01:43 +02:00
0ee44e796a swarm/storage: make linter happy 2018-09-28 17:11:13 +02:00
bf37241eb5 cmd/swarm: fix TestConfigFileOverrides 2018-09-28 15:56:42 +02:00
d5837e84ff cmd/swarm: trim new lines from files 2018-09-28 14:38:05 +02:00
dcaabfe7f6 Clef: USB hw wallet support (#17756)
* signer: implement USB interaction with hw wallets

* signer: fix failing testcases
2018-09-28 12:47:57 +02:00
2c110c81ee Swarm MRUs: Adaptive frequency / Predictable lookups / API simplification (#17559)
* swarm/storage/mru: Adaptive Frequency

swarm/storage/mru/lookup: fixed getBaseTime
Added NewEpoch constructor

swarm/api/client: better error handling in GetResource()


swarm/storage/mru: Renamed structures.
Renamed ResourceMetadata to ResourceID. 
Renamed ResourceID.Name to ResourceID.Topic

swarm/storage/mru: Added binarySerializer interface and test tools

swarm/storage/mru/lookup: Changed base time to time and + marshallers

swarm/storage/mru:  Added ResourceID (former resourceMetadata)

swarm/storage/mru: Added ResourceViewId and serialization tests

swarm/storage/mru/lookup: fixed epoch unmarshaller. Added Epoch Equals

swarm/storage/mru: Fixes as per review comments

cmd/swarm: reworded resource create/update help text regarding topic

swarm/storage/mru: Added UpdateLookup and serializer tests

swarm/storage/mru: Added UpdateHeader, serializers and tests

swarm/storage/mru: changed UpdateAddr / epoch to Base()

swarm/storage/mru: Added resourceUpdate serializer and tests

swarm/storage/mru: Added SignedResourceUpdate tests and serializers

swarm/storage/mru/lookup: fixed GetFirstEpoch bug

swarm/storage/mru: refactor, comments, cleanup

Also added tests for Topic
swarm/storage/mru: handler tests pass

swarm/storage/mru: all resource package tests pass

swarm/storage/mru: resource test pass after adding
timestamp checking support

swarm/storage/mru: Added JSON serializers to ResourceIDView structures

swarm/storage/mru: Server, client, API tests pass

swarm/storage/mru: server test pass

swarm/storage/mru: Added topic length check

swarm/storage/mru: removed some literals,
improved "previous lookup" test case

swarm/storage/mru: some fixes and comments as per review

swarm/storage/mru: first working version without metadata chunk

swarm/storage/mru: Various fixes as per review

swarm/storage/mru: client test pass

swarm/storage/mru: resource query strings and manifest-less queries


swarm/storage/mru: simplify naming

swarm/storage/mru: first autofreq working version



swarm/storage/mru: renamed ToValues to AppendValues

swarm/resource/mru: Added ToValues / FromValues for URL query strings

swarm/storage/mru: Changed POST resource to work with query strings.
No more JSON.

swarm/storage/mru: removed resourceid

swarm/storage/mru: Opened up structures

swarm/storage/mru: Merged Request and SignedResourceUpdate

swarm/storage/mru: removed initial data from CLI resource create

swarm/storage/mru: Refactor Topic as a direct fixed-length array

swarm/storage/mru/lookup: Comprehensive GetNextLevel tests

swarm/storage/mru: Added comments

Added length checks in Topic
swarm/storage/mru: fixes in tests and some code comments

swarm/storage/mru/lookup: new optimized lookup algorithm

swarm/api: moved getResourceView to api out of server

swarm/storage/mru: Lookup algorithm working

swarm/storage/mru: comments and renamed NewLookupParams

Deleted commented code


swarm/storage/mru/lookup: renamed Epoch.LaterThan to After

swarm/storage/mru/lookup: Comments and tidying naming



swarm/storage/mru: fix lookup algorithm

swarm/storage/mru: exposed lookup hint
removed updateheader

swarm/storage/mru/lookup: changed GetNextEpoch for initial values

swarm/storage/mru: resource tests pass

swarm/storage/mru: valueSerializer interface and tests



swarm/storage/mru/lookup: Comments, improvements, fixes, more tests

swarm/storage/mru: renamed UpdateLookup to ID, LookupParams to Query

swarm/storage/mru: renamed query receiver var



swarm/cmd: MRU CLI tests

* cmd/swarm: remove rogue fmt

* swarm/storage/mru: Add version / header for future use

* swarm/storage/mru: Fixes/comments as per review

cmd/swarm: remove rogue fmt

swarm/storage/mru: Add version / header for future use

* swarm/storage/mru: fix linter errors

* cmd/swarm: Sped up TestCLIResourceUpdate
2018-09-28 12:07:17 +02:00
c63985d194 Merge branch 'master' into cmd-config-errors 2018-09-28 11:07:00 +02:00
0da3b17a11 Merge pull request #17747 from ethersphere/max-stream-peer-servers
Add stream peer servers limit
2018-09-28 11:04:07 +02:00
d8d8669271 swarm/network/stream: fix streamer test compilation issue (#17772) 2018-09-28 08:34:29 +02:00
bd1f74f654 cmd/swarm: handle errors in cmdLineOverride and envVarsOverride functions 2018-09-27 12:36:35 +02:00
86f68cf04f cmd/swarm: fail on SWARM_ENV_MAX_STREAM_PEER_SERVERS parsing error 2018-09-27 10:08:57 +02:00
a5e6bf7eef Merge branch 'master' into max-stream-peer-servers 2018-09-27 09:43:00 +02:00
e39a9b3480 Merge pull request #17755 from JekaMas/implement-home-directory-expansion
cmd/swarm: use expandPath for swarm cli path parameters
2018-09-27 07:10:22 +02:00
c3cfdfacd0 Merge pull request #17757 from ethersphere/retrieve-request-ttl-pr
swarm: prevent forever running retrieve request loops
2018-09-27 07:08:58 +02:00
4b6824e07b Merge pull request #17734 from frncmx/fix-dos-attack-invalid-hash-length
swarm/network/stream: fix DoS invalid offered hashes length
2018-09-26 12:44:42 +02:00
3f7acbbeb9 swarm: prevent forever running retrieve request loops 2018-09-26 11:34:40 +02:00
0d5e1e7bc9 swarm/network/stream: fix a typo in test comment 2018-09-26 11:30:45 +02:00
26cf866349 [ImgBot] optimizes images (#17741)
*Total -- 171.97kb -> 127.26kb (26%)

/swarm/api/testdata/test0/img/logo.png -- 17.71kb -> 4.02kb (77.29%)
/cmd/clef/sign_flow.png -- 35.54kb -> 20.27kb (42.98%)
/cmd/clef/docs/qubes/qrexec-example.png -- 18.66kb -> 15.79kb (15.4%)
/cmd/clef/docs/qubes/clef_qubes_http.png -- 13.97kb -> 11.95kb (14.44%)
/cmd/clef/docs/qubes/clef_qubes_qrexec.png -- 19.79kb -> 17.03kb (13.91%)
/cmd/clef/docs/qubes/qubes_newaccount-2.png -- 41.75kb -> 36.38kb (12.86%)
/cmd/clef/docs/qubes/qubes_newaccount-1.png -- 24.55kb -> 21.82kb (11.11%)
2018-09-26 10:39:39 +02:00
60577d48d5 Add Clef UI to README.md (#17763) 2018-09-26 10:38:09 +02:00
bf411a04ba cmd/clef: added more details to the clef tutorial (#17759)
* Added more details to the clef tutorial

* Fixed last issues with the comments on the clef tutorial
2018-09-26 10:36:13 +02:00
24349144b6 Merge branch 'master' into max-stream-peer-servers 2018-09-25 16:57:31 +02:00
7d56602391 swarm/api: fix TestDumpConfig 2018-09-25 16:32:11 +02:00
d3441ebb56 cmd/clef, signer: security fixes (#17554)
* signer: remove local path disclosure from extapi

* signer: show more data in cli ui

* rpc: make http server forward UA and Origin via Context

* signer, clef/core: ui changes + display UA and Origin

* signer: cliui - indicate less trust in remote headers, see https://github.com/ethereum/go-ethereum/issues/17637

* signer: prevent possibility swap KV-entries in aes_gcm storage, fixes #17635

* signer: remove ecrecover from external API

* signer,clef: default reject instead of warn + validate new passwords. fixes #17632 and #17631

* signer: check calldata length even if no ABI signature is present

* signer: fix failing testcase

* clef: remove account import from external api

* signer: allow space in passwords, improve error message

* signer/storage: fix typos
2018-09-25 15:54:58 +02:00
09dde380f9 cmd/swarm: use expandPath for swarm cli path parameters 2018-09-25 15:54:47 +03:00
a95a601f35 Polished clef tutorial (#17745) 2018-09-25 12:37:13 +02:00
d5db4f810e .github: add CONTRIBUTING.md (#17476)
The contributing instructions in the README are not in the GitHub contributing
guide, which means that people coming from the GitHub issues are less likely to
see them.
2018-09-25 12:33:18 +02:00
b66f793443 rpc: increase maxRequestContentLength size to 512kB (#17595) 2018-09-25 12:27:18 +02:00
6663e5da10 all: fix various comment typos (#17748) 2018-09-25 12:26:35 +02:00
30cd5c1854 all: new p2p node representation (#17643)
Package p2p/enode provides a generalized representation of p2p nodes
which can contain arbitrary information in key/value pairs. It is also
the new home for the node database. The "v4" identity scheme is also
moved here from p2p/enr to remove the dependency on Ethereum crypto from
that package.

Record signature handling is changed significantly. The identity scheme
registry is removed and acceptable schemes must be passed to any method
that needs identity. This means records must now be validated explicitly
after decoding.

The enode API is designed to make signature handling easy and safe: most
APIs around the codebase work with enode.Node, which is a wrapper around
a valid record. Going from enr.Record to enode.Node requires a valid
signature.

* p2p/discover: port to p2p/enode

This ports the discovery code to the new node representation in
p2p/enode. The wire protocol is unchanged, this can be considered a
refactoring change. The Kademlia table can now deal with nodes using an
arbitrary identity scheme. This requires a few incompatible API changes:

  - Table.Lookup is not available anymore. It used to take a public key
    as argument because v4 protocol requires one. Its replacement is
    LookupRandom.
  - Table.Resolve takes *enode.Node instead of NodeID. This is also for
    v4 protocol compatibility because nodes cannot be looked up by ID
    alone.
  - Types Node and NodeID are gone. Further commits in the series contain
    fixes all over the codebase to deal with those removals.

* p2p: port to p2p/enode and discovery changes

This adapts package p2p to the changes in p2p/discover. All uses of
discover.Node and discover.NodeID are replaced by their equivalents from
p2p/enode.

New API is added to retrieve the enode.Node instance of a peer. The
behavior of Server.Self with discovery disabled is improved. It now
tries much harder to report a working IP address, falling back to
127.0.0.1 if no suitable address can be determined through other means.
These changes were needed for tests of other packages later in the
series.

* p2p/simulations, p2p/testing: port to p2p/enode

No surprises here, mostly replacements of discover.Node, discover.NodeID
with their new equivalents. The 'interesting' API changes are:

 - testing.ProtocolSession tracks complete nodes, not just their IDs.
 - adapters.NodeConfig has a new method to create a complete node.

These changes were needed to make swarm tests work.

Note that the NodeID change makes the code incompatible with old
simulation snapshots.

* whisper/whisperv5, whisper/whisperv6: port to p2p/enode

This port was easy because whisper uses []byte for node IDs and
URL strings in the API.

* eth: port to p2p/enode

Again, easy to port because eth uses strings for node IDs and doesn't
care about node information in any way.

* les: port to p2p/enode

Apart from replacing discover.NodeID with enode.ID, most changes are in
the server pool code. It now deals with complete nodes instead
of (Pubkey, IP, Port) triples. The database format is unchanged for now,
but we should probably change it to use the node database later.

* node: port to p2p/enode

This change simply replaces discover.Node and discover.NodeID with their
new equivalents.

* swarm/network: port to p2p/enode

Swarm has its own node address representation, BzzAddr, containing both
an overlay address (the hash of a secp256k1 public key) and an underlay
address (enode:// URL).

There are no changes to the BzzAddr format in this commit, but certain
operations such as creating a BzzAddr from a node ID are now impossible
because node IDs aren't public keys anymore.

Most swarm-related changes in the series remove uses of
NewAddrFromNodeID, replacing it with NewAddr which takes a complete node
as argument. ToOverlayAddr is removed because we can just use the node
ID directly.
2018-09-25 00:59:00 +02:00
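A small example against the new package; enode.ParseV4 and the Node accessors are the real API, while the URL below is a placeholder that will fail to parse.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// Placeholder URL: a real one carries a 64-byte hex public key before '@'.
	url := "enode://<128-hex-char-pubkey>@10.3.58.6:30303?discport=30301"
	n, err := enode.ParseV4(url)
	if err != nil {
		log.Fatal(err) // the placeholder above will land here
	}
	// enode.Node wraps a validated record; IDs are no longer raw public keys.
	fmt.Println(n.ID(), n.IP(), n.TCP(), n.UDP())
}
```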
9e99a0c2b9 cmd/swarm, swarm: add stream peer servers limit 2018-09-24 17:56:00 +02:00
0ae462fb80 params, swarm: begin Geth v1.8.17, Swarm v0.3.5 cycle 2018-09-24 16:02:07 +03:00
477eb0933b params, swarm: release Geth v1.8.16, Swarm v0.3.4 2018-09-24 15:59:21 +03:00
1d9d3815e5 crypto/secp256k1: remove useless code (#17728)
`(void)data;` may cause a link error on Windows.
2018-09-21 21:42:02 +02:00
06d40d37b8 Merge pull request #17732 from karalabe/faucet-caching
cmd/faucet: cache internal state, avoid sync-trashing les
2018-09-21 14:20:46 +03:00
ee92bc537f Merge pull request #17383 from holiman/eip1283
Eip1283
2018-09-21 14:16:18 +03:00
d3f056bd68 swarm/network/stream: fix DoS invalid hash length (#927) 2018-09-21 12:56:43 +02:00
81080bf8cb core: fix a typo (#17733) 2018-09-21 13:45:42 +03:00
c528e3e3cf cmd/faucet: cache internal state, avoid sync-trashing les 2018-09-21 13:31:00 +03:00
1a16cc71c6 Merge pull request #17730 from karalabe/revert-go1.10-ppa
build: revert launchpad PPAs to Go 1.10
2018-09-21 12:34:22 +03:00
b0d60721f1 build: revert launchpad PPAs to Go 1.10 2018-09-21 11:31:23 +03:00
ab13cd9924 les: fix invalid delivery handling in retriever (#17727) 2018-09-21 10:59:21 +03:00
32c05e82a3 Merge pull request #17726 from karalabe/go-1.11-ppa-fix
build/deb: upgrade launchpad PPA sources to Go 1.11 too
2018-09-21 00:33:26 +03:00
5e32152c02 build/deb: upgrade launchpad PPA sources to Go 1.11 too 2018-09-21 00:33:00 +03:00
457e930f27 eth, miner: prefer locally generated uncles vs remote ones (#17715)
* core, eth: fix dependency cycle

* eth, miner: prefer locally generated uncles
2018-09-21 00:11:55 +03:00
ba0a8b7887 core, eth: fix dependency cycle (#17720) 2018-09-20 20:02:15 +03:00
f55c26ae6d Merge pull request #17719 from karalabe/update-chts
les, light, params: update light client CHTs
2018-09-20 15:10:04 +03:00
d6254f827b all: protect self-mined block during reorg (#17656) 2018-09-20 15:09:30 +03:00
af89093116 les, light, params: update light client CHTs 2018-09-20 14:14:48 +03:00
f89dce0126 Merge pull request #17718 from karalabe/chain-age-logs
common, core, light: add block age into info logs
2018-09-20 13:01:33 +03:00
0f2ba07c41 common, core, light: add block age into info logs 2018-09-20 12:56:35 +03:00
c37238cae9 les: fix retriever logic (#17705) 2018-09-20 10:46:39 +03:00
da29332c5f core/vm: add switches to select evm+ewasm interpreters (#17687)
Interpreter initialization is left to the PRs implementing them.
Options for external interpreters are passed after a colon in the
`--vm.ewasm` and `--vm.evm` switches.
2018-09-20 10:44:35 +03:00
3fec73500b cmd/evm: EVM prestate initialization (#17685)
* Bugfix #17216: evm loads prestate file properly now

* code gofmted
2018-09-20 08:24:53 +02:00
6975c72981 all: fix various comment typos (#17591)
* swarm: fixed comment typo
* eth: fixed comment typo
* cmd/puppeth: fixed comment typo
2018-09-19 18:10:40 +02:00
c35659c6a0 rpc: enable basic auth for websocket client (#17699) 2018-09-19 18:09:03 +02:00
6f004c46d5 accounts/keystore: double-check keystore file after creation (#17348) 2018-09-19 18:08:38 +02:00
16e95f33b7 whisper: Fix interpretation of to parameter in shh_requestMessages (#16996)
The argument is inclusive rather than exclusive, according to docs.
2018-09-19 17:44:30 +02:00
f5c7d1c8eb swarm/storage: Implement global timeout for fetcher (#17702) 2018-09-19 16:59:10 +02:00
736b45a876 Merge pull request #17701 from karalabe/go-1.11
travis, Dockerfile, appveyor, build: bump to Go 1.11
2018-09-19 15:08:42 +03:00
bd9d79adba cmd/geth: typo export -> import (#17703) 2018-09-19 13:29:40 +03:00
16bc8741bf abi, signer: fix nil dereference in #17633 (#17653)
* abi,signer: fix nil dereference in #17633

* signer/core: tiny typo fix in test error message
2018-09-19 12:07:53 +03:00
0b477712a1 consensus/clique: hide no transaction error (#17614) 2018-09-19 12:06:55 +03:00
faa69bea1c core, eth: fix goimports for Go 1.11 2018-09-19 11:47:09 +03:00
67c332e9b5 travis, Dockerfile, appveyor, build: bump to Go 1.11 2018-09-19 11:29:29 +03:00
360a72d54e tests: disable constantinople statetests 2018-09-19 10:05:44 +02:00
5d921fa3a0 core, params: polish net gas metering PR a bit 2018-09-18 16:29:51 +03:00
1f45ba9bb1 swarm/network: downgrade fetcher unable to request log message severity (#17692) 2018-09-18 14:54:52 +02:00
caa2c23a38 core,state: finish implementing Eip 1283 2018-09-18 13:08:32 +03:00
58374e28d9 core, state: initial implementation of Eip-1283 2018-09-18 13:08:28 +03:00
b8aa5980cf cmd/puppeth: fix comment typo (#17690)
* ethdb: unified code comment style.

* puppeth: it is unnecessary to alloc pre-funded to 256 addresses

* Revert "puppeth: it is unnecessary to alloc pre-funded to 256 addresses"

This reverts commit 5e04fbccf0.

* puppeth: fix comment typo

* Revert "ethdb: unified code comment style."

This reverts commit a581efb3f0.

* cmd/puppeth: fix comment typo
2018-09-18 11:30:39 +03:00
bd58098f2d swarm: Chunk refactor improvements (#17683)
* swarm/network: Protocol bump (for chunk refactor)

* swarm/network: Increase discovery and stream protocol version too

* swarm/network: Increase priority queue cap
2018-09-18 10:17:13 +02:00
5d1d1a808d consensus, ethdb, metrics: implement forced-meter (#17667) 2018-09-17 15:32:34 +03:00
41ac8dd803 Merge pull request #17675 from holiman/eip1234
Eip1234
2018-09-17 15:18:17 +03:00
c1345b0742 cmd/puppeth: fix comment typo (#17684)
* ethdb: unified code comment style.

* puppeth: it is unnecessary to alloc pre-funded to 256 addresses

* Revert "puppeth: it is unnecessary to alloc pre-funded to 256 addresses"

This reverts commit 5e04fbccf0.

* puppeth: fix comment typo

* Revert "ethdb: unified code comment style."

This reverts commit a581efb3f0.
2018-09-17 15:09:09 +03:00
7efb12d29b ethash: documentation + cleanup 2018-09-17 11:53:36 +02:00
cc21928e12 Merge pull request #17622 from karalabe/chain-maker-seal
consensus/clique, core: chain maker clique + error tests
2018-09-17 11:35:42 +03:00
3df7df0386 ethash: less copy-paste for EIP 1234 2018-09-15 23:54:16 +02:00
7c71e936a7 Merge pull request #17674 from eosclassicteam/discord
README: Change gitter badge to discord
2018-09-15 11:11:30 +03:00
d4a28a13ca les: fix distReq.sentChn double close bug (#17639) 2018-09-14 22:14:29 +02:00
86a03f97d3 all: simplify s[:] to s where s is a slice (#17673) 2018-09-14 22:07:13 +02:00
44a1764f9c README: Change gitter badge to discord 2018-09-14 22:16:10 +09:00
7bb95a9a64 Merge pull request #17652 from YaoZengzeng/file-permission
cmd/clef: fix incorrect file permissions for secrets.dat
2018-09-14 08:38:13 +02:00
72c820c49e core/vm: fix typo 'EVM EVM' ==> 'EVM' (#17654) 2018-09-13 12:48:15 +03:00
3ff2f75636 swarm: Chunk refactor (#17659)
Co-authored-by: Janos Guljas <janos@resenje.org>
Co-authored-by: Balint Gabor <balint.g@gmail.com>
Co-authored-by: Anton Evangelatov <anton.evangelatov@gmail.com>
Co-authored-by: Viktor Trón <viktor.tron@gmail.com>
2018-09-13 11:42:19 +02:00
ff3a5d24d2 swarm/storage: remove redundant increments for dataIdx and entryCnt (#17484)
* swarm/storage: remove redundant increments for dataIdx and entryCnt

* swarm/storage: add Delete to LDBStore

* swarm/storage: wait for garbage collection
2018-09-12 14:39:45 +02:00
0732617b65 consensus: implement Constantinople EIP 1234 2018-09-12 20:02:34 +09:00
bfce00385f Kademlia refactor (#17641)
* swarm/network: simplify kademlia/hive; rid interfaces

* swarm, swarm/network/stream, swarm/network/simulations, swarm/pss: adapt to new Kad API

* swarm/network: minor changes re review; add missing lock to NeighbourhoodDepthC
2018-09-12 11:24:56 +02:00
b06ff563a1 Merge pull request #17651 from ethersphere/wet-run-bug
cmd/swarm: password threw on upload manifest
2018-09-12 10:23:27 +02:00
b040b75075 cmd/clef: fix incorrect file permissions for secrets.dat
Signed-off-by: YaoZengzeng <yaozengzeng@zju.edu.cn>
2018-09-12 16:15:11 +08:00
933ebaa47e cmd/swarm: password threw on upload manifest 2018-09-12 08:25:04 +02:00
2d98099c25 rlp: fix comment typo (#17640) 2018-09-11 18:05:28 +03:00
4bb25042eb consensus/clique, core: chain maker clique + error tests 2018-09-11 16:40:00 +03:00
6dd87483d4 Encryption async api (#17603)
* swarm/storage/encryption: async segmentwise encryption/decryption

* swarm/storage: adapt hasherstore to encryption API change

* swarm/api: adapt RefEncryption for AC to new Encryption API

* swarm/storage/encryption: address review comments
2018-09-11 11:39:02 +02:00
10bac36647 Merge pull request #17620 from karalabe/clique-epoch-fix
consensus/clique: only trust snapshot for genesis or les checkpoint
2018-09-10 16:21:21 +03:00
0e32989a08 cmd/utils: typos in {Miner, MinerLegacy}GasPriceFlag (#17588) 2018-09-10 15:22:34 +03:00
bcfb7f58b9 consensus/clique: only trust snapshot for genesis or les checkpoint 2018-09-10 15:00:54 +03:00
ae992a5d73 core/vm: Hide read only flag from Interpreter interface (#17461)
Makes the Interpreter interface a bit more stateless and abstract.

Obviously this change is dictated by the EVMC design. EVMC tries to keep the responsibility for EVM features entirely inside the VMs, where feasible. This makes a VM "stateless": it does not need to pass any information between executions, since all information is included in the parameters of the execute function.
2018-09-07 18:13:25 +02:00
8b9b149d54 swarm/api/http: bzz-immutable wrong handler bug (#17602) 2018-09-07 15:23:29 +02:00
70d31fb278 cmd/swarm: added password to ACT (#17598) 2018-09-07 09:56:05 +02:00
580145e96d swarm/storage: added metrics for db entry count (#17589) 2018-09-06 12:11:38 +02:00
4c15ffffdd swarm/api/http: added a regression test for resolver bug from #17483 (#17502) 2018-09-06 12:08:39 +02:00
5918b88a8f cmd/swarm: added publisher key assertion to act tests (#17471) 2018-09-05 11:33:07 +02:00
8711e2b636 whisper: add light mode check to handshake (#16725) 2018-09-05 10:57:45 +02:00
cf33d8b83c core: fix typo in comment (#17586) 2018-09-05 10:29:51 +02:00
42bd67bd6f accounts/abi: fix unpacking of negative int256 (#17583) 2018-09-04 17:53:28 +02:00
beee7a52e0 cmd/swarm: added scaling test for ACT manifests (#17496) 2018-09-04 14:29:00 +02:00
661aa4dc20 .github: slight edits to No Response template (#17475)
The template was not grammatical to me as it was. I have edited the language a tiny bit to make the close comment more clear and less abrasive.
2018-09-04 14:13:50 +02:00
3e81840061 common: fix typo (#17582)
Fixes #17581
2018-09-04 14:12:16 +02:00
84084df26c cmd/ethkey: fix the README to match updated commands (#17332) 2018-09-04 13:45:27 +02:00
003e031994 cmd/faucet: remove trailing newline in password (#17558)
Fixes #17557
2018-09-04 13:16:49 +02:00
32f28a9360 core/vm, tests: update tests, enable constantinople statetests, fix SAR opcode (#17538)
This commit does a few things at once:

- Updates the tests to contain the latest data from ethereum/tests repo.
- Enables Constantinople state tests. This is needed to be able to
  fuzz-test the evm with constantinople rules.
- Fixes the error in opSAR that we've known about for some time. I was
  kind of saving it to see if we hit upon it with the random test
  generator, but it's difficult to both enable the tests and have the
  bug there -- we don't want to forget about it, so maybe it's better
  to just fix it.
2018-09-04 10:49:18 +02:00
6a33954731 core, eth, trie: use common/prque (#17508) 2018-09-03 17:33:21 +02:00
6fc8494620 mobile: add whisper client (#15922) 2018-09-03 17:30:47 +02:00
cc2b39bbd1 consensus/ethash: increase timeout in test (#17526)
This is an attempt to fix the flaky consensus/ethash tests under macOS.
2018-09-03 16:59:23 +02:00
c9a0b36a5f rpc: reset client write deadline after write (#17549)
This fixes an issue with websocket ping frame handling.
2018-09-03 16:56:30 +02:00
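The deadline pattern in generic form (a sketch over net.Conn, not the rpc package's actual code): set a deadline per write and clear it afterwards, so a connection that then sits idle, e.g. only answering pings, is not torn down by a stale deadline.

```go
package sketch

import (
	"net"
	"time"
)

func writeWithDeadline(conn net.Conn, msg []byte, timeout time.Duration) error {
	if err := conn.SetWriteDeadline(time.Now().Add(timeout)); err != nil {
		return err
	}
	_, err := conn.Write(msg)
	conn.SetWriteDeadline(time.Time{}) // zero time: no deadline
	return err
}
```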
e1c64a7d89 params: fix typo (#17552) 2018-09-03 16:52:32 +02:00
992b77992f consensus: fix comment typo (#17562) 2018-09-03 16:49:00 +02:00
5c0954afff p2p/discv5: make idx bounds checking more sound (#17571) 2018-09-03 16:47:20 +02:00
62e94895da params, swarm: begin geth v1.8.16 and swarm v0.3.4 cycle 2018-08-29 18:27:43 +03:00
89451f7c38 params, swarm: release geth v1.8.15 and swarm 0.3.3 2018-08-29 18:24:32 +03:00
f0242ee76d miner: keep the timestamp for resubmitted mining block (#17547) 2018-08-29 17:31:59 +03:00
a4bc2c31e1 Merge pull request #17540 from karalabe/miner-uncle-fix
miner: track uncles more aggressively
2018-08-29 13:59:15 +03:00
75ae5af62a whisper: fix loop in expire() (#17532) 2018-08-29 13:56:13 +03:00
9574968116 cmd/swarm: disable ACT tests on windows (#17536) 2018-08-29 13:52:21 +03:00
e29c2e4364 Merge pull request #17546 from karalabe/miner-max-limit
cmd, core, eth, miner, params: configurable gas floor and ceil
2018-08-29 13:49:59 +03:00
f751c6ed47 miner: track uncles more aggressively 2018-08-29 13:29:59 +03:00
e8f229b82e cmd, core, eth, miner, params: configurable gas floor and ceil 2018-08-29 12:40:12 +03:00
c1c003e4ff consensus, miner: stale block mining support (#17506)
* consensus, miner: stale block supporting

* consensus, miner: refactor seal signature

* cmd, consensus, eth: add miner noverify flag

* cmd, consensus, miner: polish
2018-08-28 16:59:05 +03:00
63352bf424 core: safe indexer operation when syncing starts before the checkpoint (#17511) 2018-08-28 10:31:34 +03:00
b69476b372 all: make indexer configurable (#17188) 2018-08-28 10:08:16 +03:00
c64d72bea2 consensus/ethash: remove unnecessary type declaration (#17529) 2018-08-28 10:05:25 +03:00
0bec85f2e2 core: fix typos in comment (#17531) 2018-08-28 10:04:33 +03:00
f236ac710e vendor: github.com/rjeczalik/notify update to master (#17527) 2018-08-28 10:03:33 +03:00
1acefafe22 swarm/api: fix typo (#17500) 2018-08-27 11:51:58 +03:00
f0488e80f7 signer/storage: fix typo (#17504) 2018-08-27 11:51:26 +03:00
d1aa605f1e all: remove the duplicate 'the' in annotations (#17509) 2018-08-27 11:49:29 +03:00
70398d300d trie: fix typo (#17498) 2018-08-24 21:08:48 +03:00
c134e00e48 Merge pull request #17494 from karalabe/mined-block-uncle-check
miner: differentiate between uncle and lost block
2018-08-23 16:03:10 +03:00
40a71f28cf miner: fix state commit, track old work packages too (#17490)
* miner: commit state which is relative with sealing result

* consensus, core, miner, mobile: introduce sealHash interface

* miner: evict pending task with threshold

* miner: go fmt
2018-08-23 16:02:57 +03:00
c3f7e3be3b core/statedb: deep copy logs (#17489) 2018-08-23 15:59:58 +03:00
1136269a79 miner: differentiate between uncle and lost block 2018-08-23 15:44:27 +03:00
67d6d0bb7d Merge pull request #17492 from karalabe/eth-miner-threads-defaults
cmd, eth: clean up miner startup API, drop noop config field
2018-08-23 14:17:12 +03:00
1e63a015a5 miner: add two stress tests based on clique and ethash 2018-08-23 14:09:45 +03:00
92381ee009 cmd, eth: clean up miner startup API, drop noop config field 2018-08-23 14:09:45 +03:00
1df1187d83 p2p: fix comment typo (#17491) 2018-08-23 11:47:43 +03:00
f34f361ca6 swarm/api/http: fixed resolver bug (#17483) 2018-08-22 17:53:33 +03:00
2993fb519d params, swarm: begin geth v1.8.15 and swarm v0.3.3 cycle 2018-08-22 11:42:03 +03:00
316fc7ecfc params, swarm: release Geth v1.8.14 and Swarm v0.3.2 2018-08-22 11:39:00 +03:00
af85d8e2b3 cmd, eth: apply default miner recommit setting (#17479) 2018-08-22 09:47:50 +03:00
6a8b47c880 Merge pull request #17472 from karalabe/txpool-locals
cmd, core, miner: add --txpool.locals and priority mining
2018-08-22 09:46:13 +03:00
e0d0e64ce2 cmd, core, miner: add --txpool.locals and priority mining 2018-08-22 09:43:57 +03:00
b2c644ffb5 cmd, eth, miner: make recommit configurable (#17444)
* cmd, eth, miner: make recommit configurable

* cmd, eth, les, miner: polish a bit

* miner: filter duplicate sealing work

* cmd: remove uncessary conversion

* miner: avoid microptimization in favor of cleaner code
2018-08-21 22:56:54 +03:00
522cfc68ff swarm: fix typos (#17473) 2018-08-21 21:13:33 +03:00
a063fe9b2d miner: fix uncle iteration logic (#17469) 2018-08-21 17:33:04 +03:00
1dcad8b2de Merge pull request #17466 from karalabe/rinkeby-light-snapshots
consensus/clique, light: light client snapshots on Rinkeby
2018-08-21 16:13:26 +03:00
86acdf1a5b vendor: update rjeczalik/notify so that it compiles on go1.11 (#17467) 2018-08-21 16:13:03 +03:00
9f036647e4 consensus/clique, light: light client snapshots on Rinkeby 2018-08-21 15:21:59 +03:00
355fc47d39 les: fix CHT field in nodeInfo (#17465) 2018-08-21 14:58:10 +03:00
76301ca051 eth: upgradedb subcommand was dropped (#17464) 2018-08-21 13:36:38 +03:00
c582667c9b swarm/network: bump bzz protocol version (#17449) 2018-08-21 11:34:40 +03:00
dcd97c41fa swarm, swarm/network, swarm/pss: log error and fix logs (#17410)
* swarm, swarm/network, swarm/pss: log error and fix logs

* swarm/pss: log compressed publickey
2018-08-21 11:34:10 +03:00
85c6a1c526 Merge pull request #17451 from karalabe/bn256-relicense
crypto/bn256: add missing license file, release wrapper in BSD-3
2018-08-21 11:33:33 +03:00
ecca49e078 Merge pull request #17460 from holiman/tracerfix
Ensure from < to when tracing chain
2018-08-21 11:32:50 +03:00
106d196ec4 eth: ensure from<to when tracing chain (credits Chen Nan via bugbounty) 2018-08-21 09:48:53 +02:00
a6d45a5d00 crypto/bn256: add missing license file, release wrapper in BSD-3 2018-08-20 18:05:06 +03:00
7d38d53ae4 cmd/puppeth: accept ssh identity in the server string (#17407)
* cmd/puppeth: Accept identity file in the server string with fallback to id_rsa

* cmd/puppeth: code polishes + fix health check double ports
2018-08-20 16:54:38 +03:00
1de9ada401 light: new CHTs (#17448) 2018-08-20 16:49:28 +03:00
c929030e28 core/types: fix docs about protected Vs (#17436) 2018-08-20 15:14:50 +03:00
87f294aa0b Merge pull request #17437 from hackmod/console-typo
console: fixed comment typo
2018-08-20 15:12:48 +03:00
68f0a414ea Merge pull request #17430 from karalabe/miner-notify-less-aggressive-test
consensus/ethash: reduce notify test aggressiveness
2018-08-20 15:11:49 +03:00
55d050ccd8 travis: remove brew update and osxfuse install (#17429) 2018-08-20 15:11:08 +03:00
a8aa89accb swarm/storage: cleanup task - remove bigger chunks (#17424) 2018-08-20 15:10:30 +03:00
c4078fc805 cmd/swarm: added swarm bootnodes (#17414) 2018-08-20 15:09:50 +03:00
d3488c1aff p2p: fix typo (#17446) 2018-08-20 15:07:21 +03:00
0fd02fe9cf console: fixed comment typo 2018-08-18 07:47:10 +09:00
251c868008 consensus/ethash: reduce notify test aggressiveness 2018-08-17 18:12:39 +03:00
99e1a5e0fb Merge pull request #17426 from karalabe/miner-fees-log-fix
miner: update mining log with correct fee calculation
2018-08-17 16:46:57 +03:00
22cd3f70a6 miner: update mining log with correct fee calculation 2018-08-17 15:47:14 +03:00
2695fa2213 les: fix crasher in NodeInfo when running as server (#17419)
* les: fix crasher in NodeInfo when running as server

The ProtocolManager computes CHT and Bloom trie roots by asking the
indexers for their current head. It tried to get the indexers from
LesOdr, but no LesOdr instance is created in server mode.

Attempt to fix this by moving the indexers, protocol creation and
NodeInfo to a new lesCommons struct which is embedded into both server
and client.

All this setup code should really be cleaned up, but this is just a
hotfix so we have to do that some other time.

* les: fix commons protocol maker
2018-08-17 13:21:53 +03:00
f44046a1c6 build: do not require ethereum-swarm deb when installing ethereum (#17425) 2018-08-17 11:33:39 +03:00
2a06791461 Merge pull request #17368 from karalabe/bn256-go1.11
crypto/bn256: fix issues caused by Go 1.11
2018-08-17 11:01:35 +03:00
60390878a5 accounts: fixed typo (#17421) 2018-08-17 10:38:02 +03:00
9bf6bb8f63 Merge pull request #17405 from karalabe/miner-remote-dag
consensus/ethash: use DAGs for remote mining, generate async
2018-08-16 15:25:25 +03:00
1d439b5e10 Merge pull request #17416 from karalabe/miner-details
miner: add gas and fee details to mining logs
2018-08-16 15:04:50 +03:00
62f5137a72 miner: add gas and fee details to mining logs 2018-08-16 14:49:00 +03:00
54216811a0 miner: regenerate mining work every 3 seconds (#17413)
* miner: regenerate mining work every 3 seconds

* miner: polish
2018-08-16 14:14:33 +03:00
5952d962dc Merge pull request #17412 from karalabe/puppeth-fix-dial-panic
cmd/puppeth: fix nil panic on disconnected stats gathering
2018-08-16 13:19:01 +03:00
3e21adc648 crypto/bn256: fix issues caused by Go 1.11 2018-08-16 11:02:16 +03:00
b24fb76a3a cmd/puppeth: fix nil panic on disconnected stats gathering 2018-08-16 09:41:16 +03:00
2cdf6ee7e0 light: CHT and bloom trie indexers working in light mode (#16534)
This PR enables the indexers to work in light client mode by
downloading a part of these tries (the Merkle proofs of the last
values of the last known section) in order to be able to add new
values and recalculate subsequent hashes. It also adds CHT data to
NodeInfo.
2018-08-15 22:25:46 +02:00
e8752f4e9f cmd/swarm, swarm: added access control functionality (#17404)
Co-authored-by: Janos Guljas <janos@resenje.org>
Co-authored-by: Anton Evangelatov <anton.evangelatov@gmail.com>
Co-authored-by: Balint Gabor <balint.g@gmail.com>
2018-08-15 17:41:52 +02:00
d8541a9f99 consensus/ethash: use DAGs for remote mining, generate async 2018-08-15 14:38:39 +03:00
040aa2bb10 miner: streaming uncle blocks (#17320)
* miner: stream uncle block

* miner: polish
2018-08-15 14:09:17 +03:00
e598ae5c01 Merge pull request #17402 from karalabe/deprecate-flags
cmd: polish miner flags, deprecate olds, add upgrade path
2018-08-15 12:09:34 +03:00
2a17fe2561 cmd: polish miner flags, deprecate olds, add upgrade path 2018-08-15 11:41:23 +03:00
212bba47ff backends: configurable gas limit to allow testing large contracts (#17358)
* backends: increase gaslimit in order to allow tests of large contracts

* backends: increase gaslimit in order to allow tests of large contracts

* backends: increase gaslimit in order to allow tests of large contracts
2018-08-15 10:15:42 +03:00
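A sketch assuming the post-change constructor shape, where the block gas limit is passed explicitly:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
)

func main() {
	addr := common.HexToAddress("0x0000000000000000000000000000000000000001")
	alloc := core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000000000000)}}
	// The block gas limit is now an argument, so tests that deploy large
	// contracts can raise it beyond the old hard-coded default.
	backend := backends.NewSimulatedBackend(alloc, 10000000)
	fmt.Println("backend ready:", backend != nil)
}
```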
b52bb31b76 p2p/discv5: add delay to refresh cycle when no seed nodes are found (#16994) 2018-08-14 22:59:18 +02:00
b2ddb1fcbf les: implement client connection logic (#16899)
This PR implements les.freeClientPool. It also adds a simulated clock
in common/mclock, which enables time-sensitive tests to run quickly
and still produce accurate results, and package common/prque which is
a generalised variant of prque that enables removing elements other
than the top one from the queue.

les.freeClientPool implements a client database that limits the
connection time of each client and manages accepting/rejecting
incoming connections and even kicking out some connected clients. The
pool calculates recent usage time for each known client (a value that
increases linearly when the client is connected and decreases
exponentially when not connected). Clients with lower recent usage are
preferred, unknown nodes have the highest priority. Already connected
nodes receive a small bias in their favor in order to avoid accepting
and instantly kicking out clients.

Note: the pool can use any string for client identification. Using
signature keys for that purpose would not make sense when being known
has a negative value for the client. Currently the LES protocol
manager uses IP addresses (without the port) to identify clients.
2018-08-14 22:44:46 +02:00
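A hypothetical sketch of the recency-weighted usage metric described above: usage grows linearly while a client is connected and decays exponentially after disconnection, so clients with lower values are preferred. The half-life constant and all type names are made up for the sketch; the real les.freeClientPool differs in detail.

```go
package main

import (
	"fmt"
	"math"
	"time"
)

const halfLife = time.Hour // decay half-life, chosen for the sketch

type clientUsage struct {
	usage     time.Duration // accumulated usage as of lastSeen
	lastSeen  time.Time
	connected bool
}

// usageAt returns the effective usage value at time t.
func (c *clientUsage) usageAt(t time.Time) time.Duration {
	dt := t.Sub(c.lastSeen)
	if c.connected {
		return c.usage + dt // linear growth while connected
	}
	// exponential decay while disconnected: halves every halfLife
	decay := math.Exp2(-float64(dt) / float64(halfLife))
	return time.Duration(float64(c.usage) * decay)
}

func main() {
	now := time.Now()
	c := &clientUsage{usage: 2 * time.Hour, lastSeen: now}
	fmt.Println(c.usageAt(now.Add(halfLife))) // ~1h after one half-life
}
```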
a1783d1697 miner: move agent logic to worker (#17351)
* miner: move agent logic to worker

* miner: polish

* core: persist block before reorg
2018-08-14 18:34:33 +03:00
e0e0e53401 crypto: change formula for create2 (#17393) 2018-08-14 18:30:42 +03:00
97887d98da swarm/network, swarm/storage: validate chunk size (#17397)
* swarm/network, swarm/storage: validate default chunk size

* swarm/bmt, swarm/network, swarm/storage: update BMT hash initialisation

* swarm/bmt: move segmentCount to tests

* swarm/chunk: change chunk.DefaultSize to be untyped const

* swarm/storage: add size validator

* swarm/storage: add chunk size validation to localstore

* swarm/storage: move validation from localstore to validator

* swarm/storage: global chunk rules in MRU
2018-08-14 16:03:56 +02:00
8a040de60b README.md: fix some typos (#17381)
Signed-off-by: YaoZengzeng <yaozengzeng@zju.edu.cn>
2018-08-14 14:25:36 +03:00
e07e507d1a whisper: fixed broken partial topic filtering
Changes in #15811 broke partial topic filtering. Re-enable it.
2018-08-13 16:27:25 +02:00
d8328a96b4 Merge pull request #17347 from karalabe/miner-notify
cmd, consensus/ethash, eth: miner push notifications
2018-08-13 17:03:16 +03:00
fb368723ac core: fix comment typo (#17376) 2018-08-13 11:40:52 +03:00
6d1e292eef Manifest cli fix and upload defaultpath only once (#17375)
* cmd/swarm: fix manifest subcommands and add tests

* cmd/swarm: manifest update: update default entry for non-encrypted uploads

* swarm/api: upload defaultpath file only once

* swarm/api/client: improve UploadDirectory default path handling

* cmd/swarm: support absolute and relative default path values

* cmd/swarm: fix a typo in test

* cmd/swarm: check encrypted uploads in manifest update tests
2018-08-10 16:12:55 +02:00
3ec5dda4d2 swarm/api/http: added logging to denote request ended (#17371) 2018-08-10 13:49:37 +02:00
f0998415ba cmd, consensus/ethash, eth: miner push notifications 2018-08-10 09:06:59 +03:00
45eaef2431 cmd/swarm: solve rare cases of using the same random port in tests (#17352) 2018-08-09 16:15:59 +02:00
3bcb501c8f swarm/api: close tar writer in GetDirectoryTar to flush and clean (#17339) 2018-08-09 16:15:07 +02:00
d3e4c2dcb0 cmd/swarm: disable TestCLISwarmFs fuse test on darwin (#17340) 2018-08-09 16:14:30 +02:00
7b5c375825 cmd/swarm: remove shadow err (#17360) 2018-08-09 12:37:00 +03:00
beade042d1 Merge pull request #17357 from karalabe/tracer-trie-deref-bug
eth, trie: fix tracer GC which accidentally pruned the metaroot
2018-08-09 12:35:12 +03:00
11bbc66082 eth, trie: fix tracer GC which accidentally pruned the metaroot 2018-08-09 12:33:30 +03:00
834057592f p2p/discv5: fix negative index after uint convert to int (#17274) 2018-08-09 10:03:42 +03:00
abbb219933 rpc: fix a subscription name (#17345) 2018-08-09 08:56:35 +03:00
8051a0768a trie: fix comment typo (#17350) 2018-08-08 16:08:40 +03:00
a1eb9c7d13 swarm/api/http: fixed list leaf links (#17342) 2018-08-08 09:33:06 +02:00
00e6da9704 swarm/bmt: ignore data longer than 4096 bytes in Hasher.Write (#17338) 2018-08-07 15:34:33 +02:00
9df16f3468 swarm: Added lightnode flag (#17291)
* swarm: Added lightnode flag

Added --lightnode command line parameter
Added LightNode to Handshake message

* swarm/config: Fixed variable naming

* cmd/swarm: Changed BoolTFlag to BoolFlag for SwarmLightNodeEnabled

* swarm/network: Changed logging

* swarm/network: Changed protocol version testing

* swarm/network: Renamed DefaultNetworkID variable to TestProtocolNetworkID

* swarm/network: Bumped protocol version

* swarm/network: Changed LightNode handshake test to table driven

* swarm/network: Changed back TestProtocolVersion to 5 for now

* swarm/network: Moved the test configuration inside the test function scope
2018-08-07 15:34:11 +02:00
8461fea44b whisper: remove unused error (#17315) 2018-08-07 15:16:56 +02:00
4bb2dc3d09 swarm/api/http: test fixes (#17334) 2018-08-07 13:40:38 +02:00
cf05ef9106 p2p, swarm, trie: avoid copying slices in loops (#17265) 2018-08-07 13:56:40 +03:00
de9b0660ac swarm/README: add more sections to easily onboard developers (#17333) 2018-08-07 12:56:01 +02:00
042191338d swarm/api/http: GET/PUT/PATCH/DELETE/POST multipart form unit tests. (#17277)
httpDo has a verbose option that dumps the HTTP request
2018-08-07 12:00:12 +02:00
93fe16b0a5 swarm/api/http: refactored http package (#17309) 2018-08-07 11:56:55 +02:00
64a4e89504 swarm/storage/mru: HOTFIX - fix panic in Handler.update (#17313) 2018-08-07 11:53:56 +02:00
eef65b20fc p2p: use safe atomic operations when changing connFlags (#17325) 2018-08-06 15:46:30 +03:00
c4df67461f Merge pull request #16333 from shazow/addremovetrustedpeer
rpc: Add admin_addTrustedPeer and admin_removeTrustedPeer.
2018-08-06 13:30:04 +02:00
941018b570 miner: separate state, receipts for different mining work (#17323) 2018-08-06 12:55:44 +03:00
a72ba5a55b cmd/swarm, swarm: various test fixes (#17299)
* swarm/network/simulation: increase the sleep duration for TestRun

* cmd/swarm, swarm: fix failing tests on mac

* cmd/swarm: update TestCLISwarmFs skip comment

* swarm/network/simulation: adjust disconnections on simulation close

* swarm/network/simulation: call cleanups after net shutdown
2018-08-06 12:33:22 +03:00
6711f098d5 core/vm: fix comment typo (#17319)
antything --> anything
:P
2018-08-06 11:42:49 +03:00
c376a5263f Merge pull request #17318 from ligi/fix_punctuation
Fix punctuation - closes #17317
2018-08-06 11:11:58 +03:00
2901b8b2d2 README: Fix punctuation - closes #17317 2018-08-05 15:40:22 +02:00
35fcd2f423 Merge pull request #17311 from karalabe/puppeth-graceful-stop
cmd/puppeth: graceful shutdown on redeploys
2018-08-03 13:26:39 +03:00
faf0e06ed8 cmd/puppeth: graceful shutdown on redeploys 2018-08-03 12:08:19 +03:00
51db5975cc consensus/ethash: move remote agent logic to ethash internal (#15853)
* consensus/ethash: start remote goroutine to handle remote mining

* consensus/ethash: expose remote miner api

* consensus/ethash: expose submitHashrate api

* miner, ethash: push empty block to sealer without waiting for execution

* consensus, internal: add getHashrate API for ethash

* consensus: add three methods to the consensus interface

* miner: expose consensus engine running status to miner

* eth, miner: specify etherbase when miner created

* miner: commit new work when consensus engine is started

* consensus, miner: fix some logic

* all: delete useless interfaces

* consensus: polish a bit
2018-08-03 11:33:37 +03:00
70176cda0e accounts/keystore: rename skipKeyFile to nonKeyFile to better reveal the function purpose (#17290) 2018-08-03 10:25:17 +03:00
d56fa8a659 Merge pull request #17310 from karalabe/mobile-nil-panic
mobile: fix missing return for CallMsg.SetTo(nil)
2018-08-03 09:37:57 +03:00
e4cb158d01 mobile: fix missing return for CallMsg.SetTo(nil) 2018-08-03 08:42:31 +03:00
0ab54de1a5 core/vm: update benchmarks for core/vm (#17308)
- Update benchmarks to use a pool of int pools.
  Otherwise benchmarks abort with a segmentation fault.

Signed-off-by: Hyung-Kyu Choi <hqueue@users.noreply.github.com>
2018-08-03 08:15:33 +03:00
adc2944b4c Merge pull request #17301 from karalabe/tests-enable-constantinople
tests: enable the Constantinople fork definition
2018-08-02 14:42:06 +03:00
16eaf2b158 Merge pull request #17302 from karalabe/revert-evm-nil-panic
Revert "cmd/evm: change error msg output to stderr (#17118)"
2018-08-01 19:43:01 +03:00
83e2761c3a Revert "cmd/evm: change error msg output to stderr (#17118)"
This reverts commit fb9f7261ec.
2018-08-01 19:09:08 +03:00
353a82385b tests: enable the Constantinople fork definition 2018-08-01 18:47:09 +03:00
46d4721519 build: explicitly name all packages to be cross-compiled (#17288) 2018-07-31 16:34:54 +03:00
454382e81a params, swarm/version: begin Geth v1.8.14, Swarm v0.3.2 cycle 2018-07-31 13:39:22 +03:00
225171a4bf params, swarm/version: release Geth v1.8.13, Swarm 0.3.1 2018-07-31 13:35:53 +03:00
702b8a7aec core/vm: fix typo in cryptographic hash function name (#17285) 2018-07-31 13:27:51 +03:00
5d7e18539e rpc: make HTTP RPC timeouts configurable, raise defaults (#17240)
* rpc: Make HTTP server timeout values configurable

* rpc: Remove flags for setting HTTP Timeouts, configuring via .toml is sufficient.

* rpc: Replace separate constants with a single default struct.

* rpc: Update HTTP Server Read and Write Timeouts to 30s.

* rpc: Remove redundant NewDefaultHTTPTimeouts function.

* rpc: document HTTPTimeouts.

* rpc: sanitize timeout values for library use
2018-07-31 12:16:14 +03:00
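A minimal sketch of the idea behind this change: expose the HTTP server timeouts as a config struct with sane defaults instead of hard-coded constants, and sanitize zero values back to the defaults. The 30s read/write defaults come from the commit description; the struct name, the 120s idle default, and the helper are assumptions of the sketch, not geth's exact rpc API.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// HTTPTimeouts mirrors the net/http server timeout knobs (illustrative).
type HTTPTimeouts struct {
	ReadTimeout  time.Duration
	WriteTimeout time.Duration
	IdleTimeout  time.Duration
}

var DefaultHTTPTimeouts = HTTPTimeouts{
	ReadTimeout:  30 * time.Second,
	WriteTimeout: 30 * time.Second,
	IdleTimeout:  120 * time.Second, // assumed default for this sketch
}

// newServer sanitizes zero values to defaults, as the commit suggests,
// so library users can't accidentally disable the timeouts.
func newServer(addr string, h http.Handler, t HTTPTimeouts) *http.Server {
	if t.ReadTimeout == 0 {
		t.ReadTimeout = DefaultHTTPTimeouts.ReadTimeout
	}
	if t.WriteTimeout == 0 {
		t.WriteTimeout = DefaultHTTPTimeouts.WriteTimeout
	}
	if t.IdleTimeout == 0 {
		t.IdleTimeout = DefaultHTTPTimeouts.IdleTimeout
	}
	return &http.Server{
		Addr:         addr,
		Handler:      h,
		ReadTimeout:  t.ReadTimeout,
		WriteTimeout: t.WriteTimeout,
		IdleTimeout:  t.IdleTimeout,
	}
}

func main() {
	srv := newServer(":8545", http.NotFoundHandler(), HTTPTimeouts{})
	fmt.Println(srv.ReadTimeout, srv.WriteTimeout, srv.IdleTimeout)
}
```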
c4a1d4fecf eth/filters: fix the block range assignment for log filter (#17284) 2018-07-31 12:10:38 +03:00
fb9f7261ec cmd/evm: change error msg output to stderr (#17118)
* cmd/evm: change error msg output to stderr

* cmd/evm: fix some linter errors
2018-07-31 10:48:27 +03:00
d927cbb638 Merge pull request #17282 from karalabe/trie-flushlist-fixes
trie: handle removing the freshest node too
2018-07-31 10:41:53 +03:00
2fbc454355 miner: fix state locking while writing to chain (issue #16933) (#17173) 2018-07-31 10:22:33 +03:00
d6efa69187 Merge netsim mig to master (#17241)
* swarm: merged stream-tests migration to develop

* swarm/network: expose simulation RandomUpNode to use in stream tests

* swarm/network: wait for subs in PeerEvents and fix stream.runSyncTest

* swarm: enforce waitkademlia for snapshot tests

* swarm: fixed syncer tests and snapshot_sync_test

* swarm: linting of simulation package

* swarm: address review comments

* swarm/network/stream: fix delivery_test bugs and refactor

* swarm/network/stream: addressed PR comments @janos

* swarm/network/stream: enforce waitKademlia, improve TestIntervals

* swarm/network/stream: TestIntervals not waiting for chunk to be stored
2018-07-30 22:55:25 +02:00
3ea8ac6a9a Merge pull request #17281 from karalabe/puppeth-cachewarn-fix
cmd/puppeth: force tiny memory for geth attach in id lookup
2018-07-30 16:32:10 +03:00
8a9c31a307 trie: handle removing the freshest node too 2018-07-30 16:31:17 +03:00
6380c06c65 Merge pull request #17279 from karalabe/puppeth-banlist-fix
cmd/puppeth: split banned ethstats addresses over columns
2018-07-30 16:15:35 +03:00
54d1111965 cmd/puppeth: force tiny memory for geth attach in id lookup 2018-07-30 16:09:19 +03:00
f00d0daf33 cmd/puppeth: split banned ethstats addresses over columns 2018-07-30 15:39:35 +03:00
2cffd4ff3c core: fix some small typos on comment code (#17278) 2018-07-30 14:10:48 +03:00
7b1aa64220 dashboard: append to proper slice (#17266) 2018-07-30 12:48:16 +03:00
8f4c4fea20 p2p: fix rare deadlock in Stop (#17260) 2018-07-30 12:44:17 +03:00
d42ce0f2c1 all: simplify switches (#17267)
* all: simplify switches

* silly mistake
2018-07-30 12:30:09 +03:00
273c7a9dc4 swarm/api: remove ioutil.ReadAll for HandleGetFiles (#17276) 2018-07-30 12:19:26 +03:00
a5d5609e38 build: rename swarm deb package to ethereum-swarm; change swarm deb version from 1.8.x to 0.3.x (#16988)
* build: add support for different package and binary names

* build: bump up copyright date

* build: change default PackageName to empty string

* build, internal, swarm: enhance build/release process

* build: hack ethereum-swarm as a "depends" in deb package

* build/ci: remove redundant variables

* build, cmd, mobile, params, swarm: remove VERSION file; rename Version to VersionMeta;

* internal: remove VERSION() method which reads VERSION file

* build: fix VersionFilePath to Version

* Makefile: remove clean_go_build_cache.sh until it works

* Makefile: revert removal of clean_go_build_cache.sh
2018-07-30 11:56:40 +03:00
93c0f1715d Merge pull request #17245 from karalabe/azure-deps-fixups
internal, vendor: update Azure blobstore API
2018-07-26 19:59:46 +03:00
d9575e92fc crypto/secp256k1: remove external LGPL dependencies (#17239) 2018-07-26 13:33:13 +02:00
11a402f747 core: report progress on log chain exports (#17066)
* core/blockchain: export progress

* core: polish up chain export progress report a bit
2018-07-26 14:26:24 +03:00
021d6fbbbb cmd: prevent accidental invalid commands (#17248)
* cmd: stop parsing bootnodes if one is invalid

* cmd/geth: require valid command as argument (or no arg)
2018-07-26 13:57:20 +03:00
feed8069a6 Merge pull request #17251 from karalabe/ppa-deprecate-artful
build: deprecate ubuntu artful, enable ubuntu cosmic
2018-07-26 13:34:38 +03:00
514022bde6 Merge pull request #17252 from karalabe/travis-debsrc-fix
build: noop clean during travis debsrc assembly step
2018-07-26 13:29:57 +03:00
ff22ec31b6 build: noop clean during travis debsrc assembly step 2018-07-26 13:26:53 +03:00
0598707129 build: deprecate ubuntu artful, enable ubuntu cosmic 2018-07-26 13:14:36 +03:00
a511f6b515 Merge pull request #17250 from karalabe/fix-clean
build: fix bash->sh function declaration
2018-07-26 13:05:45 +03:00
8b1e14b7f8 build: fix bash->sh function declaration 2018-07-26 13:01:00 +03:00
6c412e313c cmd/utils: fix comment typo (#17249)
cmd: Comment error
2018-07-26 10:59:05 +03:00
6b232ce325 internal, vendor: update Azure blobstore API 2018-07-25 18:04:43 +03:00
7abedf9bbb core/vm: support for multiple interpreters (#17093)
- Define an Interpreter interface
- One contract can call contracts from other interpreter types.
- Pass the interpreter to the operands instead of the evm.
  This is meant to prevent type assertions in operands.
2018-07-25 08:56:39 -04:00
27a278e6e3 core: fixed typo in addresssByHeartbeat (#17243) 2018-07-25 14:25:14 +03:00
a0bcb16875 Merge pull request #17244 from chainpunk/master
core: fix typo in comment code
2018-07-25 14:07:54 +03:00
bc0a43191e core: fix typo in comment code 2018-07-25 18:04:38 +07:00
1064b9e691 Merge pull request #17233 from ethersphere/swarm-readme
swarm: README.md
2018-07-25 05:38:14 +02:00
10780e8a00 core: fix txpool guarantee comment (#17214)
* fixed-typo

* core: fix txpool guarantee comment
2018-07-24 18:44:41 +03:00
2433349c80 core/vm, params: implement EXTCODEHASH opcode (#17202)
* core/vm, params: implement EXTCODEHASH opcode

* core, params: tiny fixes and polish

* core: add function description
2018-07-24 18:06:40 +03:00
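For reference, an illustrative sketch of the EXTCODEHASH semantics (EIP-1052): the opcode pushes the keccak256 hash of an account's code, zero for accounts that do not exist, and the hash of empty input for existing accounts without code. The account type and state lookup here are stand-ins, not geth's StateDB.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

type account struct {
	exists bool
	code   []byte
}

// extCodeHash models the opcode's result for one account.
func extCodeHash(a account) [32]byte {
	var h [32]byte
	if !a.exists {
		return h // non-existent account: zero hash
	}
	d := sha3.NewLegacyKeccak256()
	d.Write(a.code) // empty code hashes to keccak256("") = c5d246...
	copy(h[:], d.Sum(nil))
	return h
}

func main() {
	fmt.Printf("%x\n", extCodeHash(account{exists: true}))  // keccak256("")
	fmt.Printf("%x\n", extCodeHash(account{exists: false})) // all zeros
}
```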
58243b4d3e README: point Swarm brief to the Swarm README, instead of directly to docs 2018-07-24 16:55:07 +02:00
cab1cff11c core, crypto, params: implement CREATE2 evm instrction (#17196)
* core, crypto, params: implement CREATE2 evm instrction

* core/vm: add opcode to string mapping

* core: remove past fork checking

* core, crypto: use option2 to generate new address
2018-07-24 17:22:03 +03:00
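A standalone sketch of the CREATE2 address derivation as eventually standardized in EIP-1014: keccak256(0xff ++ caller ++ salt ++ keccak256(init_code)), keeping the low 20 bytes. The commits above implement (and then tweak) the formula inside geth; this version is for illustration only.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// keccak256 hashes the concatenation of its arguments.
func keccak256(data ...[]byte) []byte {
	h := sha3.NewLegacyKeccak256()
	for _, b := range data {
		h.Write(b)
	}
	return h.Sum(nil)
}

// create2Address derives the contract address for a CREATE2 deployment.
func create2Address(caller [20]byte, salt [32]byte, initCode []byte) [20]byte {
	digest := keccak256([]byte{0xff}, caller[:], salt[:], keccak256(initCode))
	var addr [20]byte
	copy(addr[:], digest[12:]) // low 20 bytes of the 32-byte hash
	return addr
}

func main() {
	var caller [20]byte
	var salt [32]byte
	fmt.Printf("%x\n", create2Address(caller, salt, []byte{0x00}))
}
```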
2909f6d7a2 common: add database/sql support for Hash and Address (#15541) 2018-07-24 15:15:07 +02:00
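A minimal sketch of what database/sql support means here: the type implements sql.Scanner (reading a column into the value) and driver.Valuer (writing it out as raw bytes). The standalone Address below is illustrative, not common.Address itself.

```go
package main

import (
	"database/sql/driver"
	"fmt"
)

type Address [20]byte

// Scan implements sql.Scanner, accepting the raw []byte column value.
func (a *Address) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return fmt.Errorf("can't scan %T into Address", src)
	}
	if len(b) != len(a) {
		return fmt.Errorf("can't scan []byte of len %d into Address, want %d", len(b), len(a))
	}
	copy(a[:], b)
	return nil
}

// Value implements driver.Valuer, storing the address as raw bytes.
func (a Address) Value() (driver.Value, error) {
	return a[:], nil
}

func main() {
	var a Address
	_ = a.Scan(make([]byte, 20))
	v, _ := a.Value()
	fmt.Printf("%x\n", v)
}
```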
d96ba77113 eth/filters: improve error message for invalid filter topics (#17234) 2018-07-24 15:12:49 +02:00
62467e4405 Merge pull request #17206 from hadv/master
consensus/clique: replace bubble sort by golang stable sort
2018-07-24 13:46:37 +03:00
d0082bb7ec core: fix comment typo (#17236) 2018-07-24 13:17:12 +03:00
21c059b67e Merge pull request #16734 from reductionista/eip234
eth/filters, interfaces: EIP-234 Add blockHash option to eth_getLogs
2018-07-24 12:52:16 +03:00
9e24491c65 core/bloombits, light: fix typos (#17235) 2018-07-24 11:24:27 +03:00
49f63deb24 consensus/clique: replace bubble sort by golang stable sort 2018-07-24 14:56:53 +07:00
b536460f8e Merge pull request #17231 from ethersphere/develop
swarm: client-side MRU signatures ; BMT fixes ; network simulation tests
2018-07-24 08:44:43 +02:00
afd8b84706 crypto/secp256k1: unify the package license to 3-Clause BSD (#17225)
Our original wrapper code had two parts. One taken from a third
party repository (who took it from upstream Go) licensed under
BSD-3. The second written by Jeff, Felix and Gustav, licensed
under LGPL. This made this package problematic to use from the
outside.

With the agreement of the original copyright holders, this commit
changes the license of the LGPL portions of the code to BSD-3:

---
I agree changing from LGPL to a BSD style license.

Jeff
---
Hey guys,

My preference would be to relicense to GNUBL, but I'm also OK with BSD.

Cheers,
Gustav
---
Felix Lange (fjl):
I would approve anything that makes our licensing less complicated
---
2018-07-24 02:47:47 +02:00
f6206efe5b consensus: move test-use-only var/func to test (#17004) 2018-07-24 02:14:15 +02:00
ae674a3660 Makefile: clean go build cache (#17079) 2018-07-24 02:11:51 +02:00
894022a3d5 cmd/geth: clean up call to SelfDerive (#16970) 2018-07-24 02:09:24 +02:00
68da9aa716 rpc: clean up check for missing methods/subscriptions on handler (#17145) 2018-07-24 02:00:55 +02:00
14bdcdeab4 swarm/testutil: remove EnableMetrics 2018-07-23 18:45:39 +02:00
fe6a9473dc p2p: token is useless in xxxEncHandshake (#17230) 2018-07-23 17:36:08 +02:00
427316a707 swarm/storage/mru: Client-side MRU signatures (#784)
* swarm/storage/mru: Add embedded publickey and remove ENS dep

This commit breaks swarm, swarm/api...
but tests in swarm/storage/mru pass

* swarm: Refactor swarm, swarm/api to mru changes, make tests pass

* swarm/storage/mru: Remove self from recv, remove test ens vldtr

* swarm/storage/mru: Remove redundant test, expose ResourceHash mthd

* swarm/storage/mru: Make HeaderGetter mandatory + godoc fixes

* swarm/storage: Remove validator prefix for metadata chunk

* swarm/storage/mru: Use Address instead of PublicKey

* swarm/storage/mru: Change index from name to metadata chunk addr

* swarm/storage/mru: Refactor swarm/api/... to MRU index changes

* swarm/storage/mru: Refactor cleanup

* swarm/storage/mru: Rebase cleanup

* swarm: Use constructor for GenericSigner MRU in swarm.go

* swarm/storage: Change to BMTHash for MRU hashing

* swarm/storage: Reduce loglevel on chunk validator logs

* swarm/storage/mru: Delint

* swarm: MRU Rebase cleanup

* swarm/storage/mru: client-side mru signatures

Rebase to PR #668 and fix all conflicts

* swarm/storage/mru: refactor and documentation

* swarm/resource/mru: error-checking tests for parseUpdate/newUpdateChunk

* swarm/storage/mru: Added resourcemetadata tests

* swarm/storage/mru: Added tests for UpdateRequest

* swarm/storage/mru: more test coverage for UpdateRequest and comments

* swarm/storage/mru: Avoid fake chunks in parseUpdate()

* swarm/storage/mru: Documented resource.go extensively

moved some functions where they make most sense

* swarm/storage/mru: increase test coverage for UpdateRequest and

variable name changes throughout to increase consistency

* swarm/storage/mru: moved default timestamp to NewCreateRequest-

* swarm/storage/mru: lookup refactor

* swarm/storage/mru: added comments and renamed raw flag to rawmru

* swarm/storage/mru: fix receiver typo

* swarm/storage/mru: refactored update chunk new/create

* swarm/storage/mru: refactored signature digest to avoid malleability

* swarm/storage/mru: optimize update data serialization

* swarm/storage/mru: refactor and cleanup

* swarm/storage/mru: add timestamp struct and serialization

* swarm/storage/mru: fix lint error and mark some old code for deletion

* swarm/storage/mru: remove unnecessary variable

* swarm/storage/mru: Added more comments throughout

* swarm/storage/mru: Refactored metadata chunk layout + extensive error...

* swarm/storage/mru: refactor cli parser
Changed resource info output to JSON

* swarm/storage/mru: refactor serialization for extensibility

refactored error messages to NewErrorf

* swarm/storage/mru: Moved Signature to resource_sign.
Check Sign errors in server tests

* swarm/storage/mru: Remove isSafeName() checks

* swarm/storage/mru: scrubbed off all references to "block" for time

* swarm/storage/mru: removed superfluous isSynced() call.

* swarm/storage/mru: remove isMultihash() and ToSafeName functions

* swarm/storage/mru: various fixes and comments

* swarm/storage/mru: decoupled cli for independent create/update
* Made resource name optional
* Removed unused LookupPrevious

* swarm/storage/mru: Decoupled resource create / update & refactor

* swarm/storage/mru: Fixed some comments as per issues raised in PR #743

* swarm/storage/mru: Cosmetic changes as per #743 comments

* swarm/storage/mru: refct request encoder/decoder > marshal/unmarshal

* swarm/storage/mru: Cosmetic changes as per review in #748

* swarm/storage/mru: removed timestamp proof placeholder

* swarm/storage/mru: cosmetic/doc/fixes changes as per comments in #704

* swarm/storage/mru: removed unnecessary check in Handler.update

* swarm/storage/mru: Implemented Marshaler/Unmarshaler iface in Request

* swarm/storage/mru: Fixed linter error

* swarm/storage/mru: removed redundant address in signature digest

* swarm/storage/mru: fixed bug: LookupLatestVersionInPeriod not working

* swarm/storage/mru: Unfold Request creation API for create or update+create
set common time source for mru package

* swarm/api/http: fix HandleGetResource error variable shadowed
when requesting a resource that does not exist

* swarm/storage/mru: Add simple check to detect duplicate updates

* swarm/storage/mru: moved Multihash() to the right place.

* cmd/swarm: remove unneeded clientaccountmanager.go

* swarm/storage/mru: Changed some comments as per reviews in #784

* swarm/storage/mru: Made SignedResourceUpdate.GetDigest() public

* swarm/storage/mru: cosmetic changes as per comments in #784

* cmd/swarm: Inverted --multihash flag default

* swarm/storage/mru: removed Verify from SignedResourceUpdate.fromChunk

* swarm/storage/mru: Moved validation code out of serializer
Cosmetic / comment changes

* swarm/storage/mru: Added unit tests for UpdateLookup

* swarm/storage/mru: Increased coverage of metadata serialization

* swarm/storage/mru: Increased test coverage of updateHeader serializers

* swarm/storage/mru: Add resourceUpdate serializer test
2018-07-23 15:33:33 +02:00
0647c4de7b swarm/api/http: http package refactoring 1/5 and 2/5 2018-07-23 15:33:32 +02:00
7ddc2c9e95 cmd/swarm: add implicit subcommand help (fix #786) (#788)
* cmd/swarm: add implicit subcommand help (fix #786)

* cmd/swarm: moved implicit help to a recursive func
2018-07-23 15:33:32 +02:00
dcaaa3c804 swarm: network simulation for swarm tests (#769)
* cmd/swarm: minor cli flag text adjustments

* cmd/swarm, swarm/storage, swarm: fix mingw on windows test issues

* cmd/swarm: support for smoke tests on the production swarm cluster

* cmd/swarm/swarm-smoke: simplify cluster logic as per suggestion

* changed colour of landing page

* landing page reacts to enter keypress

* swarm/api/http: sticky footer for swarm landing page using flex

* swarm/api/http: sticky footer for error pages and fix for multiple choices

* swarm: propagate ctx to internal apis (#754)

* swarm/simnet: add basic node/service functions

* swarm/netsim: add buckets for global state and kademlia health check

* swarm/netsim: Use sync.Map as bucket and provide cleanup function for...

* swarm, swarm/netsim: adjust SwarmNetworkTest

* swarm/netsim: fix tests

* swarm: added visualization option to sim net redesign

* swarm/netsim: support multiple services per node

* swarm/netsim: remove redundant return statement

* swarm/netsim: add comments

* swarm: shutdown HTTP in Simulation.Close

* swarm: sim HTTP server timeout

* swarm/netsim: add more simulation methods and peer events examples

* swarm/netsim: add WaitKademlia example

* swarm/netsim: fix comments

* swarm/netsim: terminate peer events goroutines on simulation done

* swarm, swarm/netsim: naming updates

* swarm/netsim: return not healthy kademlias on WaitTillHealthy

* swarm: fix WaitTillHealthy call in testSwarmNetwork

* swarm/netsim: allow bucket to have any type for a key

* swarm: Added snapshots to new netsim

* swarm/netsim: add more tests for bucket

* swarm/netsim: move http related things into separate files

* swarm/netsim: add AddNodeWithService option

* swarm/netsim: add more tests and Start* methods

* swarm/netsim: add peer events and kademlia tests

* swarm/netsim: fix some tests flakiness

* swarm/netsim: improve random nodes selection, fix TestStartStop* tests

* swarm/netsim: remove time measurement from TestClose to avoid flakiness

* swarm/netsim: builder pattern for netsim HTTP server (#773)

* swarm/netsim: add connect related tests

* swarm/netsim: add comment for TestPeerEvents

* swarm: rename netsim package to network/simulation
2018-07-23 15:33:25 +02:00
f5b128a5b3 swarm/fuse: Hotfix missing parentheses in statement 2018-07-23 15:31:02 +02:00
fd982d3f3b swarm/bmt: async section writer interface to BMT (#778)
- AsyncHasher implements AsyncWriter interface
 - add extra level for zerohashes in pool to lookup empty data hash
 - remove unused segment, hash and depth fields from Tree
 - Hash pkg function -> syncHash moved to test
 - add asyncHash helper func to tests using shuffle
 - add TestAsyncCorrectness to tests
 - add BenchmarkBMTAsync to tests
 - refactor benchmarks using subbenchmarks
 - improved comments
 - preinitialise base hashers on the nodes
2018-07-23 15:09:25 +02:00
526abe2736 eth/tracers: fix noop tracer (#17220) 2018-07-22 22:11:52 +03:00
8997efe31f rpc: fix missing parentheses in doc (#17224) 2018-07-22 22:09:45 +03:00
4ea2d707f9 gitter: change ethereum/swarm to ethersphere/orange-lounge 2018-07-21 00:47:46 +02:00
ee8877509a swarm/readme: add link to code review guidelines 2018-07-19 13:27:31 +02:00
763e64cad8 swarm: readme.md 2018-07-19 12:13:03 +02:00
040dd5bd5d accounts/abi: refactor Method#Sig() to use index in range iterator directly (#17198) 2018-07-19 11:42:47 +03:00
dcdd57df62 core, ethdb: two tiny fixes (#17183)
* ethdb: fix memory database

* core: fix bloombits checking

* core: minor polish
2018-07-18 13:41:36 +03:00
323428865f accounts: add unit tests for URL (#17182) 2018-07-18 11:32:49 +03:00
65c91ad5e7 p2p: correct comments typo (#17184) 2018-07-18 10:41:18 +03:00
5d30be412b all: switch out defunct set library to different one (#16873)
* keystore, ethash, eth, miner, rpc, whisperv6: tech debt with now defunct set.

* whisperv5: swap out gopkg.in/fatih/set.v0 with supported set
2018-07-16 10:54:19 +03:00
eb7f901289 dashboard: fix CSS, escape special HTML chars, clean up code (#17167)
* dashboard: fix CSS, escape special HTML chars, clean up code

* dashboard: change 0 to 1

* dashboard: add escape-html npm package
2018-07-16 10:43:58 +03:00
db5e403afe build: fix typo (#17175)
build: Fix a typo in ci.go
2018-07-16 10:36:21 +03:00
96116758d2 cmd/geth: fix golint issue (#17176) 2018-07-16 10:33:58 +03:00
65cebb7730 build: Fix a typo in ci.go 2018-07-15 14:13:11 +08:00
2e0391ea84 cmd/swarm: change version of swarm binary (#17174) 2018-07-13 22:53:02 +02:00
d483da766f swarm/network: bump up protocol versions due to wrapped message intro (#17170) 2018-07-13 17:40:55 +02:00
7c9314f231 swarm: integrate OpenTracing; propagate ctx to internal APIs (#17169)
* swarm: propagate ctx, enable opentracing

* swarm/tracing: log error when tracing is misconfigured
2018-07-13 17:40:28 +02:00
e1f1d3085c accounts, eth, les: blockhash based filtering on all code paths 2018-07-12 18:16:54 +03:00
96339daf40 eth/filters, ethereum: EIP-234 add blockHash param for eth_getLogs 2018-07-12 18:16:32 +03:00
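A usage sketch of the EIP-234 addition above: FilterQuery gains a BlockHash field that pins the log filter to one specific block instead of a FromBlock/ToBlock range (the two are mutually exclusive). The endpoint URL and the zero hash are placeholders for the example.

```go
package main

import (
	"context"
	"fmt"
	"log"

	ethereum "github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	hash := common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000000")
	logs, err := client.FilterLogs(context.Background(), ethereum.FilterQuery{
		BlockHash: &hash, // filter exactly this block; leave FromBlock/ToBlock unset
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("logs:", len(logs))
}
```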
f7d3678c28 swarm/api/http: http package refactoring 1/5 and 2/5 (#17164) 2018-07-12 15:42:45 +02:00
facf1bc9d6 consensus/ethash: fix the algorithm of fakeBlockNumber in comments (#17166)
correct the algorithm in the comments for fakeBlockNumber, from "min" to "max".
2018-07-12 13:32:23 +03:00
e8824f6e74 vendor, ethdb: resume write operation asap (#17144)
* vendor: update leveldb

* ethdb: remove useless warning log
2018-07-12 11:07:51 +03:00
a9835c1816 cmd, dashboard, log: log collection and exploration (#17097)
* cmd, dashboard, internal, log, node: logging feature

* cmd, dashboard, internal, log: requested changes

* dashboard, vendor: gofmt, govendor, use vendored file watcher

* dashboard, log: gofmt -s -w, goimports

* dashboard, log: gosimple
2018-07-11 10:59:04 +03:00
2eedbe799f cmd: typo fixed, isntance -> instance (#17149) 2018-07-09 17:34:59 +03:00
b3711af051 swarm: ctx propagation; bmt fixes; pss generic notification framework (#17150)
* cmd/swarm: minor cli flag text adjustments

* swarm/api/http: sticky footer for swarm landing page using flex

* swarm/api/http: sticky footer for error pages and fix for multiple choices

* cmd/swarm, swarm/storage, swarm: fix mingw on windows test issues

* cmd/swarm: update description of swarm cmd

* swarm: added network ID test

* cmd/swarm: support for smoke tests on the production swarm cluster

* cmd/swarm/swarm-smoke: simplify cluster logic as per suggestion

* swarm: propagate ctx to internal apis (#754)

* swarm/metrics: collect disk measurements

* swarm/bmt: fix io.Writer interface

  * Write now tolerates arbitrary variable buffers
  * added variable buffer tests
  * Write loop and finalise optimisation
  * refactor / rename
  * add tests for empty input

* swarm/pss: (UPDATE) Generic notifications package (#744)

swarm/pss: Generic package for creating pss notification svcs

* swarm: Adding context to more functions

* swarm/api: change colour of landing page in templates

* swarm/api: change landing page to react to enter keypress
2018-07-09 14:11:49 +02:00
30bdf817a0 core/types: polish TxDifference code and docs a bit (#17130)
* core: fix func TxDifference

fix a typo in func comment;
change named return to unnamed as there's explicit return in the body

* fix another typo in TxDifference
2018-07-09 11:48:54 +03:00
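A standalone sketch of what core/types.TxDifference computes: the transactions in a that are not in b, keyed by hash. A plain string stands in for the hash and transaction types here.

```go
package main

import "fmt"

type tx struct{ hash string }

// txDifference returns the transactions in a that are not present in b.
func txDifference(a, b []tx) []tx {
	keep := make([]tx, 0, len(a))
	remove := make(map[string]struct{}, len(b))
	for _, t := range b {
		remove[t.hash] = struct{}{}
	}
	for _, t := range a {
		if _, ok := remove[t.hash]; !ok {
			keep = append(keep, t)
		}
	}
	return keep
}

func main() {
	a := []tx{{"a"}, {"b"}, {"c"}}
	b := []tx{{"b"}}
	fmt.Println(txDifference(a, b)) // [{a} {c}]
}
```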
fbeb4f20f9 cmd/geth: fix usage formatting (#17136) 2018-07-09 11:41:28 +03:00
0b20b1a050 consensus/clique: fixed documentation copy-paste issue (#17137) 2018-07-09 11:39:43 +03:00
4dbefc1f25 cmd/geth: fixed comment typo (#17140) 2018-07-09 11:38:52 +03:00
dbae1dc7b3 rpc: fixed comment grammar issue (#17146) 2018-07-09 11:31:59 +03:00
3b07451564 params, VERSION: v1.8.13 unstable 2018-07-05 01:09:02 +02:00
37685930d9 params: v1.8.12 stable 2018-07-05 01:08:05 +02:00
51df1c1f20 les: add announcement safety check to light fetcher (#17034) 2018-07-04 13:40:20 +03:00
f524ec4326 light: new CHTs (#17124) 2018-07-04 12:41:17 +03:00
eb794af833 consensus/ethash: fixed documentation typo (#17121)
"proot-of-work" to "proof-of-work"
2018-07-04 11:20:58 +03:00
67a7857124 Merge pull request #17111 from karalabe/trie-memleak
trie: fix a temporary memory leak in the memcache
2018-07-03 17:20:58 +03:00
c73b654fd1 p2p/discover: move bond logic from table to transport (#17048)
* p2p/discover: move bond logic from table to transport

This commit moves node endpoint verification (bonding) from the table to
the UDP transport implementation. Previously, adding a node to the table
entailed pinging the node if needed. With this change, the ping-back
logic is embedded in the packet handler at a lower level.

It is easy to verify that the basic protocol is unchanged: we still
require a valid pong reply from the node before findnode is accepted.

The node database tracked the time of last ping sent to the node and
time of last valid pong received from the node. Node endpoints are
considered verified when a valid pong is received and the time of last
pong was called 'bond time'. The time of last ping sent was unused. In
this commit, the last ping database entry is repurposed to mean last
ping _received_. This entry is now used to track whether the node needs
to be pinged back.

The other big change is how nodes are added to the table. We used to add
nodes in Table.bond, which ran when a remote node pinged us or when we
encountered the node in a neighbors reply. The transport now adds to the
table directly after the endpoint is verified through ping. To ensure
that the Table can't be filled just by pinging the node repeatedly, we
retain the isInitDone check. During init, only nodes from neighbors
replies are added.

* p2p/discover: reduce findnode failure counter on success

* p2p/discover: remove unused parameter of loadSeedNodes

* p2p/discover: improve ping-back check and comments

* p2p/discover: add neighbors reply nodes always, not just during init
2018-07-03 16:24:12 +03:00
9da128db70 cmd/p2psim: add exit error output and exit code (#17116) 2018-07-03 15:14:57 +03:00
4e5d1f1c39 core/vm: reuse bigint pools across transactions (#17070)
* core/vm: A pool for int pools

* core/vm: fix rebase issue

* core/vm: push leftover stack items after execution, not before
2018-07-03 13:06:42 +03:00
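A rough sketch of the "pool for int pools" idea: each interpreter run borrows a big.Int pool, and finished pools are recycled across transactions instead of being garbage collected. The capacity limits and names below are chosen for the sketch, not copied from core/vm.

```go
package main

import (
	"fmt"
	"math/big"
)

// intPool recycles big.Ints within one interpreter run.
type intPool struct{ ints []*big.Int }

func (p *intPool) get() *big.Int {
	if n := len(p.ints); n > 0 {
		v := p.ints[n-1]
		p.ints = p.ints[:n-1]
		return v
	}
	return new(big.Int)
}

func (p *intPool) put(v *big.Int) { p.ints = append(p.ints, v) }

// intPoolPool recycles whole pools between interpreter runs.
type intPoolPool struct{ pools []*intPool }

func (pp *intPoolPool) get() *intPool {
	if n := len(pp.pools); n > 0 {
		p := pp.pools[n-1]
		pp.pools = pp.pools[:n-1]
		return p
	}
	return &intPool{}
}

func (pp *intPoolPool) put(p *intPool) {
	if len(pp.pools) < 25 { // bound retained pools (limit chosen for the sketch)
		pp.pools = append(pp.pools, p)
	}
}

func main() {
	var pp intPoolPool
	p := pp.get()
	x := p.get().SetInt64(42)
	p.put(x)
	pp.put(p)
	fmt.Println("recycled pools:", len(pp.pools))
}
```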
d57e85ecc9 node: documentation typo fix (#17113) 2018-07-03 11:42:18 +03:00
1990c9e621 cmd/geth: export metrics to InfluxDB (#16979)
* cmd/geth: add flags for metrics export

* cmd/geth: update usage fields for metrics flags

* metrics/influxdb: update reporter logger to adhere to geth logging convention
2018-07-02 15:51:02 +03:00
319098cc1c trie: fix a temporary memory leak in the memcache 2018-07-02 15:47:33 +03:00
223d943481 vendor: update docker/docker/pkg/reexec so that it compiles on OpenBSD (#17084) 2018-07-02 14:25:02 +03:00
8974e2e5e0 Merge pull request #17092 from pilu/master
remove formatting from ResettingTimer metrics if requested in raw format
2018-07-02 14:22:19 +03:00
a4a2343cdc ethdb, core: implement delete for db batch (#17101) 2018-07-02 11:16:30 +03:00
fdfd6d3c39 ethstats: comment minor correction (#17102)
spell correction from `repors` to `reports`
2018-06-29 15:15:38 +03:00
b5537c5601 node: remove formatting from ResettingTimer metrics if requested in raw 2018-06-27 11:43:49 +02:00
e916f9786d Merge pull request #17087 from OpenCommunityCoin/build/portable-shell
build: make build/goimports.sh more portable
2018-06-27 11:28:58 +03:00
909e968ebb build: make build/goimports.sh more portable 2018-06-26 22:04:27 +09:00
598f786aab core/vm: clear linter warnings (#17057)
* core/vm: clear linter warnings

* core/vm: review input

* core/vm.go: revert lint in noop as per request
2018-06-26 15:56:25 +03:00
461291882e whisper: Reduce message loop log from Warn to Info (#17055) 2018-06-26 04:31:05 -04:00
1f0f6f0272 swarm/pss: Hide big network tests under longrunning flag (#17074) 2018-06-25 16:11:47 +02:00
0a22ae5572 swarm/fuse: Disable fuse tests, they are flaky (#17072) 2018-06-25 14:04:01 +02:00
b0cfd9c786 Merge pull request #17054 from chfast/log-time-format
log: Change time format
2018-06-25 12:23:33 +03:00
6d8a1bfb08 log: Change time format
- Keep the trailing zeros.
- Limit precision to milliseconds.
2018-06-25 11:22:23 +02:00
4895665670 les: handle conn/disc/reg logic in the eventloop (#16981)
* les: handle conn/disc/reg logic in the eventloop

* les: try to dial before starting the eventloop

* les: handle disconnect logic more safely

* les: grammar fix
2018-06-25 11:52:24 +03:00
eaff89291c Merge pull request #17041 from ethersphere/swarm-network-rewrite-merge
Swarm POC3 - happy solstice
2018-06-21 23:00:43 +02:00
e187711c65 swarm: network rewrite merge 2018-06-21 21:10:31 +02:00
6209545083 p2p: Wrap conn.flags ops with atomic.Load/Store 2018-06-21 12:22:47 -04:00
193a402cc0 p2p: Test for peer.rw.flags race conditions 2018-06-21 12:22:47 -04:00
dcca66bce8 p2p: Cache inbound flag on Peer.isInbound to avoid a race 2018-06-21 12:22:47 -04:00
399aa710d5 p2p: Attempt to race check peer.Inbound() in TestServerDial 2018-06-21 12:22:47 -04:00
699794d88d p2p: More tests for AddTrustedPeer/RemoveTrustedPeer 2018-06-21 12:22:47 -04:00
773857a524 p2p: Test for MaxPeers=0 and TrustedPeer override 2018-06-21 12:21:48 -04:00
2a75fe3308 rpc: Add admin_addTrustedPeer and admin_removeTrustedPeer.
These RPC calls are analogous to Parity's parity_addReservedPeer and
parity_removeReservedPeer.

They are useful for adjusting the trusted peer set during runtime,
without requiring restarting the server.
2018-06-21 12:21:48 -04:00
d926bf2c7e trie: cache collapsed trie nodes, not rlp blobs (#16876)
The current trie memory database/cache that we do pruning on stores
trie nodes as binary rlp encoded blobs, and also stores the node
relationships/references for GC purposes. However, most of the trie
nodes (everything apart from a value node) are in essence just
collections of references.

This PR switches out the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure,
avoiding the need to track them a second time (expensive memory wise).
2018-06-21 11:28:05 +02:00
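A conceptual sketch of the change: keeping the collapsed (but not serialized) node structure means child references can be read straight out of the node instead of being tracked in a second map. The types below are simplified stand-ins for the trie package's internal node types.

```go
package main

import "fmt"

type node interface{}

type (
	hashNode  []byte // reference to a node stored elsewhere
	valueNode []byte // leaf payload

	fullNode struct {
		Children [17]node // 16 hex branches + value slot
	}

	shortNode struct {
		Key []byte
		Val node
	}
)

// childRefs walks a collapsed node and collects the hash references it
// holds, which is exactly the information GC needs.
func childRefs(n node) (refs []hashNode) {
	switch n := n.(type) {
	case *fullNode:
		for _, c := range n.Children {
			refs = append(refs, childRefs(c)...)
		}
	case *shortNode:
		refs = append(refs, childRefs(n.Val)...)
	case hashNode:
		refs = append(refs, n)
	}
	return refs
}

func main() {
	n := &fullNode{}
	n.Children[0] = hashNode{0xde, 0xad}
	n.Children[1] = &shortNode{Key: []byte{1}, Val: hashNode{0xbe, 0xef}}
	fmt.Println(len(childRefs(n))) // 2
}
```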
8db8d074e2 cmd/geth: remove the trailing "," from genesis config (#17028)
remove the trailing "," from the genesis config, which would cause a genesis config parse error.
2018-06-21 10:44:39 +03:00
1a70338734 mobile: correct comment typo in ethereum.go (#17040) 2018-06-21 10:35:35 +03:00
61a5976368 accounts: remove deadcode isSigned (#16990) 2018-06-20 11:46:29 +02:00
88c42ab4e7 Merge pull request #16954 from holiman/more_tracers
eth/tracers: fix err in 4byte, add some opcode analysis tools
2018-06-20 11:43:40 +03:00
4210dd1500 tracers: fix err in 4byte, add some opcode analysis tools 2018-06-20 11:42:57 +03:00
c971ab617d travis: use NDK 17b for Android archives (#17029) 2018-06-20 11:28:10 +03:00
f1986f86f2 signer: remove useless errorWrapper (#17003) 2018-06-19 14:48:10 +03:00
28aca90716 accounts/usbwallet: correct comment typo (#16998) 2018-06-19 14:43:20 +03:00
9b1536b26a core: remove dead code, limit test code scope (#17006)
* core: move test util var/func to test file

* core: remove useless func
2018-06-19 14:41:13 +03:00
3e57c33147 accounts/usbwallet: correct comment typo (#17008) 2018-06-19 14:36:35 +03:00
baa7eb901e mobile: correct comment typo in geth.go (#17021) 2018-06-19 14:35:59 +03:00
574378edb5 cmd: remove faucet/puppeth dead code (#16991)
* cmd/faucet: authGitHub is not used anymore

* cmd/puppeth: remove not used code
2018-06-15 11:14:55 +03:00
c95e4a80d1 accounts/keystore: assign schema as const instead of var (#16985) 2018-06-14 17:24:35 +03:00
897ea01d5f internal/debug: use pprof goroutine writer for debug_stacks (#16892)
* debug: Use pprof goroutine writer in debug.Stacks() to ensure all goroutines are captured.

* Up to a 64MB limit; the previous code only captured the first 1MB of goroutines.

* internal/debug: simplify stacks handler

* fix typo

* fix pointer receiver
2018-06-14 16:58:44 +03:00
ec192f18b4 core/asm: correct comments typo (#16974)
* core/asm/compiler: correct comments typo

* Correct comments typo
2018-06-14 16:24:35 +03:00
aa34173f13 common/number: delete unused package (#16983)
This package was meant to hold an improved 256 bit integer library, but
the effort was abandoned in 2015. AFAIK nothing ever used this package.
Time to say goodbye.
2018-06-14 15:10:34 +03:00
c343f75c26 bmt: fix package documentation comment (#16909) 2018-06-14 13:50:31 +02:00
52b1d09457 core: reduce nesting in transaction pool code (#16980) 2018-06-14 13:46:43 +03:00
9402f96597 eth: conform better to the golint standards (#16783)
* eth: made changes to conform better to the golint standards

* eth: fix comment nit
2018-06-14 13:14:52 +03:00
d0fd8d6fc2 common: all golint warnings removed (#16852)
* common: all golint warnings removed

* common: fixups
2018-06-14 12:52:50 +03:00
cfde0b5f52 Merge pull request #16977 from karalabe/go-1.10.3
travis, appveyor: update to Go 1.10.3
2018-06-14 12:51:50 +03:00
de06185fc3 travis, appveyor: update to Go 1.10.3 2018-06-14 12:46:49 +03:00
3fb5f3ae11 cmd/utils: fix NetworkId default when -dev is set (#16833)
Prior to this change, when geth was started with `geth -dev -rpc`,
it would report a network id of `1` in response to the `net_version` RPC
request. But the actual network id it used to verify transactions
was `1337`.

This change causes geth instead respond with `1337` to the `net_version`
RPC when geth is started with `geth -dev -rpc`.
2018-06-14 12:31:31 +03:00
e75d0a6e4c eth/filters: make filterLogs func more readable (#16920) 2018-06-14 12:27:02 +03:00
947e0afeb3 core/vm: optimize MSTORE and SLOAD (#16939)
* vm/test: add tests+benchmarks for mstore

* core/vm: less alloc and copying for mstore

* core/vm: less allocs in sload

* vm: check for errors more correctly
2018-06-14 12:23:37 +03:00
1836366ac1 all: library changes for swarm-network-rewrite (#16898)
This commit adds all changes needed for the merge of swarm-network-rewrite.
The changes:

- build: increase linter timeout
- contracts/ens: export ensNode
- log: add Output method and enable fractional seconds in format
- metrics: relax test timeout
- p2p: reduced some log levels, updates to simulation packages
- rpc: increased maxClientSubscriptionBuffer to 20000
2018-06-14 11:21:17 +02:00
591cef17d4 #15685 made peer_test.go more portable by using random free port instead of hardcoded port 30303 (#15687)
Improves test portability by resolving 127.0.0.1:0
to get a random free port instead of the hard coded one. Now
the test works if you have a running node on the same
interface already.

Fixes #15685
2018-06-14 10:54:00 +02:00
e33a5de454 console: correct some comments typo (#16971)
console/console: correct some comments typo
2018-06-14 11:35:20 +03:00
f04c0e341e core/asm: correct comments typo (#16975)
core/asm/lexer: correct comments typo
2018-06-14 11:32:19 +03:00
ea89f40f0d eth/fetcher: fix annotation (#16969) 2018-06-13 17:02:28 +03:00
1fc54d92ec internal/web3ext: fix method name for enabling mutex profiling (#16964) 2018-06-13 14:10:20 +02:00
8c4a7fa8d3 core: change comment to match code more closely (#16963) 2018-06-13 10:14:15 +03:00
423d4254f5 VERSION, params: begin v1.8.12 release cycle 2018-06-12 17:04:17 +03:00
dea1ce052a params: release go-ethereum v1.8.11 2018-06-12 17:02:14 +03:00
25982375a8 les: fix retriever logic (#16776)
This PR fixes a retriever logic bug. When a peer had a soft timeout
and then a response arrived, it always assumed it was the same peer
even though it could have been a later requested one that did not time
out at all yet. In this case the logic went to an illegal state and
deadlocked, causing a goroutine leak.

Fixes #16243 and replaces #16359.
Thanks to @riceke for finding the bug in the logic.
2018-06-12 15:58:47 +02:00
049f5b3572 core, eth, les: more efficient hash-based header chain retrieval (#16946) 2018-06-12 16:52:54 +03:00
0255951587 crypto: replace ToECDSAPub with error-checking func UnmarshalPubkey (#16932)
ToECDSAPub was unsafe because it returned a non-nil key with nil X, Y in
case of invalid input. This change replaces ToECDSAPub with
UnmarshalPubkey across the codebase.
2018-06-12 15:26:08 +02:00
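A usage sketch of the replacement API: unlike the old ToECDSAPub, which could hand back a key with nil X/Y on bad input, crypto.UnmarshalPubkey returns an explicit error that callers must check. GenerateKey and FromECDSAPub are used here only to build a valid round-trip input.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// A 65-byte uncompressed pubkey is expected; garbage input now fails loudly.
	if _, err := crypto.UnmarshalPubkey([]byte{0x04, 0x01}); err != nil {
		log.Println("rejected:", err)
	}

	key, _ := crypto.GenerateKey()
	pub := crypto.FromECDSAPub(&key.PublicKey)
	decoded, err := crypto.UnmarshalPubkey(pub)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("round-trip ok:", decoded.X.Cmp(key.PublicKey.X) == 0)
}
```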
85cd64df0e Merge pull request #16958 from karalabe/pending-account-fast
internal/ethapi: reduce pendingTransactions to O(txs+accs) from O(txs*accs)
2018-06-12 14:07:21 +03:00
9608ccf106 Merge pull request #16959 from karalabe/fix-linters
metrics: fix gofmt linter warnings
2018-06-12 14:04:59 +03:00
3f06da7b5f metrics: fix gofmt linter warnings 2018-06-12 14:02:36 +03:00
546d42179e les: pass server pool to protocol manager (#16947) 2018-06-12 14:00:52 +03:00
90829a04bf internal/ethapi: reduce pendingTransactions to O(txs+accs) from O(txs*accs) 2018-06-12 13:49:43 +03:00
f991995918 ethdb: gracefully handle quit channel (#16794)
* ethdb: gracefully handle quit channel

* ethdb: minor polish
2018-06-11 16:11:48 +03:00
aab7ab04b0 core/rawdb: wrap db key creations (#16914)
* core/rawdb: use wrappered helper to assemble key

* core/rawdb: wrappered helper to assemble key

* core/rawdb: rewrite the wrapper, pass common.Hash
2018-06-11 16:06:26 +03:00
43b940ec5a Merge pull request #16945 from karalabe/triedb-spurious-warning
trie: don't report the root flushlist as an alloc
2018-06-11 15:02:37 +03:00
b487bdf0ba metrics: removed repetitive calculations (#16944) 2018-06-11 14:45:25 +03:00
a3267ed929 trie: don't report the root flushlist as an alloc 2018-06-11 14:32:13 +03:00
9f7592c802 Merge pull request #16942 from karalabe/rpc-nil-reply
rpc: support returning nil pointer big.Ints (null)
2018-06-11 13:58:17 +03:00
99483e85b9 rpc: support returning nil pointer big.Ints (null) 2018-06-11 13:56:22 +03:00
1d666cf27e rpc: fix a comment typo (#16929) 2018-06-11 12:01:13 +03:00
eac16f9824 core: improve getBadBlocks to return full block rlp (#16902)
* core: improve getBadBlocks to return full block rlp

* core, eth, ethapi: changes to getBadBlocks formatting

* ethapi: address review concerns
2018-06-11 11:03:40 +03:00
69c52bde3f ethclient: fix RPC parse error of Parity response (#16924)
The error produced when using a Parity RPC was the following:

ERROR: transaction did not get mined: failed to get tx for txid 0xbdeb094b3278019383c8da148ff1cb5b5dbd61bf8731bc2310ac1b8ed0235226: json: cannot unmarshal non-string into Go struct field txExtraInfo.blockHash of type common.Hash
2018-06-11 10:41:09 +03:00
2977538ac0 light: new CHTs for mainnet and ropsten (#16926) 2018-06-11 10:38:05 +03:00
7f0726f706 metrics: return an empty snapshot for NilResettingTimer (#16930) 2018-06-11 10:31:55 +03:00
13af276418 cmd/ethkey: add command to change key passphrase (#16516)
This change introduces 

    ethkey changepassphrase <keyfile>

to change the passphrase of a key file.
2018-06-08 15:07:07 +02:00
ea06da0892 trie: avoid unnecessary slicing on shortnode decoding (#16917)
minor code optimization
2018-06-07 11:48:36 +03:00
feb6620c34 core: relax type requirement for bc in ApplyTransaction (#16901) 2018-06-07 10:34:24 +02:00
90b22773e9 cmd/puppeth: fixed a typo in a wizard input query (#16910) 2018-06-06 12:17:41 +03:00
9e4f96a1a6 whisper: re-insert #16757 that has been lost during a merge (#16889) 2018-06-05 16:25:16 +02:00
01a7e267dc Merge pull request #16882 from karalabe/streaming-ecrecover
core: concurrent background transaction sender ecrecover
2018-06-05 17:13:43 +03:00
e8ea5aa0d5 trie: reduce hasher allocations (#16896)
* trie: reduce hasher allocations

name    old time/op    new time/op    delta
Hash-8    4.05µs ±12%    3.56µs ± 9%  -12.13%  (p=0.000 n=20+19)

name    old alloc/op   new alloc/op   delta
Hash-8    1.30kB ± 0%    0.66kB ± 0%  -49.15%  (p=0.000 n=20+20)

name    old allocs/op  new allocs/op  delta
Hash-8      11.0 ± 0%       8.0 ± 0%  -27.27%  (p=0.000 n=20+20)

* trie: bump initial buffer cap in hasher
2018-06-05 15:06:29 +03:00
5bee5d69d7 vendor: added vendor packages necessary for the swarm-network-rewrite merge (#16792)
* vendor: added vendor packages necessary for the swarm-network-rewrite merge into ethereum master

* vendor: removed multihash deps
2018-06-05 12:40:21 +02:00
cbfb40b0aa params: fix golint warnings (#16853)
params: fix golint warnings
2018-06-05 12:31:34 +02:00
4cf2b4110e cmd/abigen: support for reading solc output from stdin (#16683)
Allow the --abi flag to be given - to indicate that it should read the
ABI information from standard input. It expects to read the solc output
with the --combined-json flag providing bin, abi, userdoc, devdoc, and
metadata, and works very similarly to the internal invocation of solc,
except it allows external invocation of solc.

This facilitates integration with more complex solc invocations, such
as invocations that require path remapping or --allow-paths tweaks.

Simple usage example:

    solc --combined-json bin,abi,userdoc,devdoc,metadata *.sol | abigen --abi -
2018-06-05 12:22:02 +02:00
0029a869f0 miner: don't call commitNewWork if it's a side block (#16751) 2018-06-05 12:10:09 +02:00
2ab24a2a8f core: concurrent background transaction sender ecrecover 2018-06-05 11:03:55 +03:00
400332b99d eth/tracers: fix minor off-by-one error (#16879)
* tracing: fix minor off-by-one error

* tracers: go generate
2018-06-05 10:27:07 +03:00
a5237a27ea les: add Skip overflow check to GetBlockHeadersMsg handler (#16891) 2018-06-05 10:23:00 +03:00
7a22e89080 Merge pull request #16894 from hadv/master
core: fix typo in comment code
2018-06-05 10:11:42 +03:00
e3a993d774 core: fix typo in comment code 2018-06-05 09:56:45 +07:00
ed40767355 Merge pull request #16800 from rjl493456442/memory_allowance_warining
cmd: cap cache size if it exceeds a reasonable range
2018-06-04 17:40:51 +03:00
a20cc75b4a cmd/geth: cap cache allowance 2018-06-04 13:46:39 +03:00
b659718fd0 Merge pull request #16880 from holiman/http_timeouts
rpc: set timeouts for http server, see #16859
2018-06-04 13:38:23 +03:00
be2aec092d metrics: expvar support for ResettingTimer (#16878)
* metrics: expvar support for ResettingTimer

* metrics: use integers for percentiles; remove Overall

* metrics: fix edge-case panic for index-out-of-range
2018-06-04 13:05:16 +03:00
17f80cc2e2 rpc: set timeouts for http server, see #16859 2018-06-04 11:41:55 +02:00
143c4341d8 core, eth, trie: streaming GC for the trie cache (#16810)
* core, eth, trie: streaming GC for the trie cache

* trie: track memcache statistics
2018-06-04 10:47:43 +03:00
3f33a7c8ce consensus/ethash: reduce keccak hash allocations (#16857)
Use Read instead of Sum to avoid internal allocations and
copying the state.

name                      old time/op  new time/op  delta
CacheGeneration-8          764ms ± 1%   579ms ± 1%  -24.22%  (p=0.000 n=20+17)
SmallDatasetGeneration-8  75.2ms ±12%  60.6ms ±10%  -19.37%  (p=0.000 n=20+20)
HashimotoLight-8          1.58ms ±11%  1.55ms ± 8%     ~     (p=0.322 n=20+19)
HashimotoFullSmall-8      4.90µs ± 1%  4.88µs ± 1%   -0.31%  (p=0.013 n=19+18)
2018-06-04 10:32:32 +03:00
c8dcb9584e rpc: use HTTP request context as top-level context (#16861) 2018-06-02 12:26:47 +02:00
af28d12847 console: squash golint warnings (#16836) 2018-05-31 13:59:08 +02:00
0ad32d3be7 ethstats: fix last golint warning (#16837) 2018-05-30 11:36:02 +03:00
68b0d30d4a VERSION, params: begin 1.8.11 release cycle 2018-05-30 11:04:45 +03:00
eae63c511c params: release Geth 1.8.10 hotfix 2018-05-30 11:00:07 +03:00
ca34e8230e Merge pull request #16843 from karalabe/txpool-fix-deadlock
core: fix transaction event asynchronicity
2018-05-30 10:45:02 +03:00
342ec83d67 core: fix transaction event asynchronicity 2018-05-30 10:14:00 +03:00
38c7eb0f26 trie: rename TrieSync to Sync and improve hexToKeybytes (#16804)
This removes a golint warning: type name will be used as trie.TrieSync by
other packages, and that stutters; consider calling this Sync.

In hexToKeybytes len(hex) is even and (even+1)/2 == even/2, remove the +1.
2018-05-29 17:48:43 +02:00
d51faee240 Merge pull request #16831 from abeln/patch-1
core/vm: fix typo in comment
2018-05-29 15:44:30 +03:00
426f62f1a8 core: improve test for TransactionPriceNonceSort (#16413) 2018-05-29 14:21:04 +02:00
7677ec1f34 p2p/discv5: add egress/ingress traffic metrics to discv5 udp transport (#16369) 2018-05-29 13:46:09 +02:00
d258eee211 core/vm: fix typo in comment 2018-05-29 13:22:00 +02:00
84f8c0cc1f common: improve documentation comments (#16701)
This commit adds many comments and removes unused code.
It also removes the EmptyHash function, which had some uses
but was silly.
2018-05-29 12:42:21 +02:00
998f6564b2 whisper/shhclient: update call to shh_post to expect string instead of bool (#16757)
Fixes #16756
2018-05-29 04:36:31 -04:00
40a2c52397 eth/fetcher: reuse variables for hash and number (#16819) 2018-05-29 10:57:08 +03:00
a9c6ef6905 ethereum: fix a typo in FilterQuery{} (#16827)
Fix a spelling mistake in comment
2018-05-29 10:44:06 +03:00
ccc0debb63 VERSION, params: begin 1.8.10 release cycle 2018-05-28 13:01:20 +03:00
ff9b14617e params: release go-ethereum v1.8.9 2018-05-28 12:58:22 +03:00
d6ed2f67a8 eth, node, trie: fix minor typos (#16802) 2018-05-24 15:55:20 +03:00
54294b45b1 Merge pull request #16803 from karalabe/trie-avoid-funccall
trie: cleaner logic, one less func call
2018-05-24 15:54:00 +03:00
d31802312a trie: cleaner logic, one less func call 2018-05-24 13:46:45 +03:00
55b579e02c core: use a wrapped map to remove contention in TxPool.Get. (#16670)
* core: use a wrapped `map` and `sync.RWMutex` for `TxPool.all` to remove contention in `TxPool.Get`.

* core: Remove redundant `txLookup.Find` and improve comments on txLookup methods.
2018-05-23 15:55:42 +03:00
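A sketch of the wrapped-map approach: a small type owning its own sync.RWMutex, so concurrent Get calls only take a read lock instead of contending on the pool's main mutex. The hash and transaction types are simplified placeholders here.

```go
package main

import (
	"fmt"
	"sync"
)

// txLookup wraps the map with its own lock, decoupling lookups from the
// pool-wide mutex.
type txLookup struct {
	lock sync.RWMutex
	all  map[string]string // hash -> tx (placeholder types)
}

func newTxLookup() *txLookup {
	return &txLookup{all: make(map[string]string)}
}

// Get only takes a read lock, so concurrent readers don't block each other.
func (t *txLookup) Get(hash string) (string, bool) {
	t.lock.RLock()
	defer t.lock.RUnlock()
	tx, ok := t.all[hash]
	return tx, ok
}

// Add takes the write lock for mutation.
func (t *txLookup) Add(hash, tx string) {
	t.lock.Lock()
	defer t.lock.Unlock()
	t.all[hash] = tx
}

func main() {
	l := newTxLookup()
	l.Add("0xabc", "tx1")
	tx, ok := l.Get("0xabc")
	fmt.Println(tx, ok)
}
```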
be22ee8dda core/vm: fix typo in instructions.go (#16788) 2018-05-23 15:02:10 +03:00
56de337e57 Merge pull request #16722 from karalabe/trie-iterator-proofs
trie: support proof generation from the iterator
2018-05-23 13:51:09 +03:00
c934c06cc1 trie: support proof generation from the iterator 2018-05-23 13:02:20 +03:00
fbf57d53e2 core/types: convert status type from uint to uint64 (#16784) 2018-05-23 11:10:24 +03:00
6ce21a4744 vendor, ethdb: print warning log if leveldb is performing compaction (#16766)
* vendor: update leveldb package

* ethdb: print warning log if db is performing compaction

* ethdb: update annotation and log
2018-05-22 11:12:52 +03:00
9af364e42b node: all golint warnings fixed (#16773)
* node: all golint warnings fixed

* node: rm per peter

* node: rm per peter
2018-05-22 10:29:41 +03:00
09d44247f7 log: fixes for golint warnings (#16775) 2018-05-22 10:28:43 +03:00
0fe47e98c4 trie: fixes to comply with golint (#16771) 2018-05-21 23:41:31 +03:00
415969f534 Merge pull request #16769 from karalabe/async-broadcasts
eth: propagate blocks and transactions async
2018-05-21 13:42:25 +03:00
d9cee2c172 eth: propagate blocks and transactions async 2018-05-21 11:32:42 +03:00
ab6bdbd9b0 Merge pull request #16758 from hadv/fix/typos
Fix some typos in comment code and output log
2018-05-19 19:40:55 +03:00
953b5ac015 Merge pull request #16720 from rjl493456442/PreTxsEvent
all: collate new transaction events together
2018-05-19 19:39:28 +03:00
f2fdb75dd9 core, consensus: fix some typos in comment code and output log 2018-05-19 15:44:36 +07:00
f9c456e02d Merge pull request #16753 from karalabe/go-1.10.2
travis, appveyor: bump Go release to 1.10.2
2018-05-18 13:18:54 +03:00
579bd0f9fb travis, appveyor: bump Go release to 1.10.2 2018-05-18 12:24:04 +03:00
49719e21bc core, eth: minor txpool event cleanups 2018-05-18 12:08:24 +03:00
a2e43d28d0 all: collate new transaction events together 2018-05-18 11:46:44 +03:00
6286c255f1 p2p/enr: updates for discovery v4 compatibility (#16679)
This applies spec changes from ethereum/EIPs#1049 and adds support for
pluggable identity schemes.

Some care has been taken to make the "v4" scheme standalone. It uses
public APIs only and could be moved out of package enr at any time.

A couple of minor changes were needed to make identity schemes work:

- The sequence number is now updated in Set instead of when signing.
- Record is now copy-safe, i.e. calling Set on a shallow copy doesn't
  modify the record it was copied from.
2018-05-17 15:11:27 +02:00
f6bc65fc68 Merge pull request #16739 from karalabe/android-trusty
travis: try to upgrade android builder to trusty
2018-05-14 16:56:02 +03:00
ff8a033f18 travis: try to upgrade android builder to trusty 2018-05-14 16:42:26 +03:00
247b5f0369 accounts/abi: allow abi: tags when unpacking structs
Go code users can now tag event struct members with `abi:` to specify which fields the event will be deserialized into.

See PR #16648 for details.
2018-05-14 14:47:31 +02:00
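An illustrative event struct using the new `abi:` tags; the event layout and field names are invented for the example. The tag tells the unpacker which ABI argument each Go field receives, so the Go names are free to differ from the Solidity ones.

```go
package main

import "fmt"

// TransferEvent is a hypothetical event binding: each tag names the ABI
// argument the field should be filled from during unpacking.
type TransferEvent struct {
	Sender    [20]byte `abi:"from"`  // ABI argument "from"
	Recipient [20]byte `abi:"to"`    // ABI argument "to"
	Amount    []byte   `abi:"value"` // ABI argument "value"
}

func main() {
	fmt.Printf("%+v\n", TransferEvent{})
}
```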
49ec4f0cd1 VERSION, params: start 1.8.9 release cycle 2018-05-14 13:16:55 +03:00
2688dab48c params: release go-ethereum v1.8.8 2018-05-14 13:14:17 +03:00
595b47e535 light: new CHT for mainnet and ropsten (#16736) 2018-05-14 12:23:58 +03:00
784aa83942 bmt: golint updates for this or self warning (#16628)
* bmt/*: golint updates for this or self warning

* Update bmt.go
2018-05-10 13:36:01 +03:00
fcc18f4c80 travis: use Android NDK 16b (#16562) 2018-05-10 13:33:13 +03:00
53a18d2e27 event: document select case slice use and add edge case test (#16680)
Feed keeps active subscription channels in a slice called 'f.sendCases'.
The Send method tracks the active cases in a local variable 'cases'
whose value is f.sendCases initially. 'cases' shrinks to a shorter
prefix of f.sendCases every time a send succeeds, moving the successful
case out of range of the active case list.

This can be confusing because the two slices share a backing array. Add
more comments to document what is going on. Also add a test for removing
a case that is in 'f.sendCases' but not 'cases'.
2018-05-10 13:26:36 +03:00
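The confusion being documented is ordinary Go slice aliasing; this stand-alone snippet (not geth code) reproduces it:

```go
package main

import "fmt"

func main() {
	sendCases := []string{"subA", "subB", "subC"}
	cases := sendCases                      // local view, same backing array
	cases[0], cases[2] = cases[2], cases[0] // move a finished case out of range
	cases = cases[:2]                       // shrink the active prefix

	fmt.Println(cases)     // [subC subB]
	fmt.Println(sendCases) // [subC subB subA]: the swap is visible here too
}
```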
7beccb29be all: get rid of error when creating memory database (#16716)
* all: get rid of error when creating mdb

* core: clean up variables definition

* all: inline mdb definition
2018-05-09 15:24:25 +03:00
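Call sites simplify accordingly; a sketch against the ethdb API of this era, with the key and value made up:

```go
package main

import (
	"log"

	"github.com/ethereum/go-ethereum/ethdb"
)

func main() {
	db := ethdb.NewMemDatabase() // was: db, err := ethdb.NewMemDatabase()
	defer db.Close()
	if err := db.Put([]byte("answer"), []byte("42")); err != nil {
		log.Fatal(err)
	}
}
```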
5dbd8b42a9 whisper/shhclient: update call to shh_generateSymKeyFromPassword to pass a string (#16668) 2018-05-09 13:40:59 +02:00
4e7dc34ff1 eth/filter: check nil pointer when unsubscribe (#16682)
* eth/filter: check nil pointer when unsubscribe

* eth/filters, accounts, rpc: abort system if subscribe failed

* eth/filter: add crit log before exit

* eth/filter, event: minor fixes
2018-05-09 11:29:25 +03:00
4747aad160 eth: golint fixes to variable names (#16711) 2018-05-09 10:59:00 +03:00
4ea493e7eb cmd: various golint fixes (#16700)
* cmd: various golint fixes

* cmd: update to pr change request

* cmd: update to pr change request
2018-05-09 10:38:03 +03:00
c60f6f6214 p2p: don't discard reason set by Disconnect (#16559)
Peer.run was discarding the reason for disconnection sent to the disc
channel by Disconnect.
2018-05-09 01:20:20 +02:00
ba975dc093 crypto: fix golint warnings (#16710) 2018-05-09 01:17:09 +02:00
eab6e5a317 build: specify the key to use when invoking gpg:sign-and-deploy-file (#16696) 2018-05-09 01:13:53 +02:00
c4a4613d95 p2p/simulations/adapters: fix websocket log line parsing in exec adapter (#16667) 2018-05-08 17:05:27 +02:00
fedae95015 eth/filters: derive FilterCriteria from ethereum.FilterQuery (#16629) 2018-05-08 14:39:15 +02:00
864e80a48f p2p: fix some golint warnings (#16577) 2018-05-08 13:08:43 +02:00
a42be3b78d rlp: fix some golint warnings (#16659) 2018-05-08 11:48:07 +02:00
6cf0ab38bd core/rawdb: separate raw database access to own package (#16666) 2018-05-07 14:35:06 +03:00
5463ed9996 mobile: add GetStatus Method for Receipt (#16598) 2018-05-07 11:32:51 +03:00
d7be5c6619 common: changed if-else blocks to conform with golint (#16656) 2018-05-07 11:26:39 +03:00
d2fe83dc5c whisper/mailserver: pass init error to the caller (#16671)
* whisper/mailserver: pass init error to the caller

* whisper/mailserver: add returns to fmt.Errorf

* whisper/mailserver: check err in mailserver init test
2018-05-04 12:10:18 +03:00
Eli
16f3c31773 signer: fix golint errors (#16653)
* signer/*: golint fixes

Specifically naming and comment formatting for documentation

* signer/*: fixed naming error crashing build

* signer/*: corrected error

* signer/core: fix tiny error whitespace

* signer/rules: fix test refactor
2018-05-04 11:04:17 +03:00
5b3af4c3d1 eth: golint updates for this or self warning (#16632)
* eth/*:golint updates for this or self warning

* eth/*: golint updates for this or self warning, pr updated per feedback
2018-05-03 15:15:33 +03:00
60b433ab84 event: golint updates for this or self warning (#16631)
* event/*: golint updates for this or self warning

* event/*: golint updates for this or self warning, pr updated per feedback
2018-05-03 14:54:36 +03:00
fd3da7c69d consensus/ethash: fixed typo (#16665) 2018-05-03 12:44:47 +03:00
cd9a1d5b37 metrics: golint updates for this or self warning (#16635)
* metrics/*: golint updates for this or self warning

* metrics/*: golint updates for this or self warning, updated pr from feedback
2018-05-03 12:43:59 +03:00
2ad511ce09 rpc: golint error with context as last parameter (#16657)
* rpc/*: golint error with context as last parameter

* Update json.go
2018-05-03 11:41:22 +03:00
541f299fbb accounts: changed if-else blocks to conform with golint (#16654) 2018-05-03 11:37:56 +03:00
7c02933275 les: changed if-else blocks to conform with golint (#16658) 2018-05-03 11:35:06 +03:00
f2447bd4c3 p2p: changed if-else blocks to conform with golint (#16660) 2018-05-03 11:33:39 +03:00
ea1724de1a log: changed if-else blocks to conform with golint (#16661) 2018-05-03 11:30:12 +03:00
577d375a0d VERSION, params: begin v1.8.8 release cycle 2018-05-02 14:04:09 +03:00
66432f3821 params: release geth 1.8.7 2018-05-02 14:01:38 +03:00
5d4d79ae26 cmd/clef: documentation about setup (#16568)
clef: documentation about setup
2018-05-02 13:31:05 +03:00
6a01363d1d Merge pull request #16644 from ligi/reduce_aar_size
build: Add ldflags "-s -w" when building aar
2018-05-02 12:23:24 +03:00
58c4e033f4 build: Add ldflags -s -w when building aar
Smaller size on mobile is always good.
Might also solve our Maven Central upload problem.
2018-05-02 11:21:24 +02:00
5449139ca2 Merge pull request #16569 from holiman/evm_blocknum
cmd/evm: use block number from genesis
2018-05-02 11:48:38 +03:00
579ac6287b Merge pull request #16576 from CrispinFlowerday/bugfix/local_underpriced_txs
core: ensure local transactions aren't discarded as underpriced
2018-05-02 11:34:47 +03:00
a7720b5926 core: golint updates for this or self warning (#16633) 2018-05-02 11:27:59 +03:00
670bae4cd3 internal: golint updates for this or self warning (#16634) 2018-05-02 11:26:21 +03:00
Eli
4a8d5d2b1e trie: golint iterator fixes (#16639) 2018-05-02 11:24:34 +03:00
Eli
d76c5ca532 tests: golint fixes for tests directory (#16640) 2018-05-02 11:20:19 +03:00
c1ea527573 accounts: golint updates for this or self warning (#16627) 2018-05-02 11:17:52 +03:00
8dfa4f46a9 evm/main: use blocknumber from genesis 2018-05-02 10:17:00 +02:00
0afd767537 core: ensure local transactions aren't discarded as underpriced
This fixes an issue where local transactions are discarded as
underpriced when the pool and queue are full.
2018-05-02 11:04:40 +03:00
448d17b8f7 Merge pull request #16630 from tstranex/master
vendor: Fix index out of range panic when size is bigger than 1 TiB
2018-05-02 10:58:14 +03:00
9922943b42 Merge pull request #16636 from reductionista/travis
travis.yml: remove obsolete brew-cask install
2018-05-02 10:41:50 +03:00
a1949d0788 vendor: fix leveldb crash when bigger than 1 TiB 2018-05-02 10:34:31 +03:00
Eli
9f6af6f812 whisper: Golint fixes in whisper packages (#16637) 2018-05-02 08:17:17 +02:00
ea171d5bd9 travis.yml: remove obsolete brew-cask install 2018-05-01 11:41:34 -07:00
1da33028ce Merge pull request #16588 from karalabe/tracer-dirty-fix
core, eth: fix tracer dirty finalization
2018-04-27 16:28:19 +03:00
7a7428a027 core, eth: fix tracer dirty finalization 2018-04-27 14:29:18 +03:00
cfe8f5fd94 trie: remove unused buf parameter (#16583) 2018-04-27 12:45:02 +03:00
852aa143ac cmd/utils: point users to --syncmode under DEPRECATED (#16572)
Indicate that --light and --fast options are replaced by --syncmode
2018-04-27 12:32:06 +03:00
b724d1aada core/state: cache missing storage entries (#16584) 2018-04-27 12:13:23 +03:00
86be91b3e2 core/types: avoid duplicating transactions on changing signer (#16435) 2018-04-24 10:39:03 +03:00
e7067be94f cmd/geth, mobile: add memsize to pprof server (#16532)
* cmd/geth, mobile: add memsize to pprof server

This is a temporary change, to be reverted before the next release.

* cmd/geth: fix variable name
2018-04-23 16:20:39 +03:00
9586f2acc7 VERSION, params: begin release cycle 1.8.7 2018-04-23 15:54:11 +03:00
12683feca7 params: release v1.8.6 to fix docker images 2018-04-23 15:52:02 +03:00
49371bf255 Dockerfile: drop legacy discovery v5 port mappings 2018-04-23 15:51:30 +03:00
16a78b095e Merge pull request #16552 from karalabe/revert-docker-user
Dockerfile: revert the user change PR that broke all APIs
2018-04-23 15:27:24 +03:00
96a6c8ba0a Dockerfile: revert the user change PR that broke all APIs 2018-04-23 15:26:15 +03:00
7d2c730acb Merge pull request #16551 from ethereum/revert-16477-puppeth-dockerfile-permission-fix
Revert "cmd/puppeth: fix node deploys for updated dockerfile user"
2018-04-23 15:24:17 +03:00
abd881f6d4 Revert "cmd/puppeth: fix node deploys for updated dockerfile user" 2018-04-23 15:23:56 +03:00
4f91831aec Merge pull request #16550 from ethereum/revert-16478-fix-alltools-dockerfile
Revert "Dockerfile.alltools: fix invalid command"
2018-04-23 15:23:47 +03:00
3f2583d6d1 Revert "Dockerfile.alltools: fix invalid command" 2018-04-23 15:23:14 +03:00
Vie
26a4dbb467 cmd/geth: update the copyright year in the geth command usage (#16537) 2018-04-23 13:45:43 +03:00
50aa1dcfda VERSION, params: begin Geth 1.8.6 release cycle 2018-04-23 10:06:45 +03:00
cbdaa0ca2a params: release Geth v1.8.5 - Dirty Derivative² 2018-04-23 10:02:34 +03:00
7cf83cee52 eth/downloader: fix for Issue #16539 (#16546) 2018-04-23 10:01:21 +03:00
744428cb03 vendor: update elastic/gosigar so that it compiles on OpenBSD (#16542) 2018-04-21 19:19:12 +02:00
b15eb665ee ethclient: add DialContext and Close (#16318)
DialContext allows users to pass a Context object for cancellation.
Close closes the underlying RPC connection.
2018-04-19 15:36:33 +02:00
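A usage sketch; the endpoint URL is a placeholder, and the trailing `NetworkID` call only shows the client in use:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Bound connection setup to five seconds via the context.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client, err := ethclient.DialContext(ctx, "ws://127.0.0.1:8546")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // tears down the underlying RPC connection

	id, err := client.NetworkID(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("network:", id)
}
```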
a16f12ba86 whisper/whisperv6: post returns the hash of sent message (#16495) 2018-04-19 15:34:24 +02:00
8feb31825e rpc: handle HTTP response error codes (#16500) 2018-04-19 15:32:43 +02:00
8f8774cf6d all: fix various typos (#16533)
* fix typo

* fix typo

* fix typo
2018-04-19 16:32:02 +03:00
dm4
c514fbccc0 core/asm: accept uppercase instructions (#16531) 2018-04-19 15:31:30 +02:00
52b046c9b6 rpc: clean up IPC handler (#16524)
This avoids logging accept errors on shutdown and removes
a bit of duplication. It also fixes some goimports lint warnings.
2018-04-18 12:27:20 +02:00
661f5f3dac cmd/utils: fix help template issue for subcommands (#16351) 2018-04-18 01:50:25 +02:00
dm4
49e38c970e core/asm: remove unused condition (#16487) 2018-04-18 01:08:31 +02:00
ba1030b6b8 build: enable goimports and varcheck linters (#16446) 2018-04-18 00:53:50 +02:00
7605e63cb9 VERSION, params: begin v1.8.5 release cycle 2018-04-17 11:26:06 +03:00
2423ae01e0 params: release Geth v1.8.4 2018-04-17 11:20:53 +03:00
92c6d13083 light: new CHTs (#16515) 2018-04-17 09:33:31 +03:00
ec3db0f56c cmd/clef, signer: initial poc of the standalone signer (#16154)
* signer: introduce external signer command

* cmd/signer, rpc: Implement new signer. Add info about remote user to Context

* signer: refactored request/response, made use of urfave.cli

* cmd/signer: Use common flags

* cmd/signer: methods to validate calldata against abi

* cmd/signer: work on abi parser

* signer: add mutex around UI

* cmd/signer: add json 4byte directory, remove passwords from api

* cmd/signer: minor changes

* cmd/signer: Use ErrRequestDenied, enable lightkdf

* cmd/signer: implement tests

* cmd/signer: made it possible for UI to modify tx parameters

* cmd/signer: refactors, removed channels in ui comms, added UI-api via stdin/out

* cmd/signer: Made lowercase json-definitions, added UI-signer test functionality

* cmd/signer: update documentation

* cmd/signer: fix bugs, improve abi detection, abi argument display

* cmd/signer: minor change in json format

* cmd/signer: rework json communication

* cmd/signer: implement mixcase addresses in API, fix json id bug

* cmd/signer: rename fromaccount, update pythonpoc with new json encoding format

* cmd/signer: make use of new abi interface

* signer: documentation

* signer/main: remove redundant option

* signer: implement audit logging

* signer: create package 'signer', minor changes

* common: add 0x-prefix to mixcaseaddress in json marshalling + validation

* signer, rules, storage: implement rules + ephemeral storage for signer rules

* signer: implement OnApprovedTx, change signing response (API BREAKAGE)

* signer: refactoring + documentation

* signer/rules: implement dispatching to next handler

* signer: docs

* signer/rules: hide json-conversion from users, ensure context is cleaned

* signer: docs

* signer: implement validation rules, change signature of call_info

* signer: fix log flaw with string pointer

* signer: implement custom 4byte database that saves submitted signatures

* signer/storage: implement aes-gcm-backed credential storage

* accounts: implement json unmarshalling of url

* signer: fix listresponse, fix gas->uint64

* node: make http/ipc start methods public

* signer: add ipc capability+review concerns

* accounts: correct docstring

* signer: address review concerns

* rpc: go fmt -s

* signer: review concerns+ baptize Clef

* signer,node: move Start-functions to separate file

* signer: formatting
2018-04-16 15:04:32 +03:00
de2a7bb764 eth/downloader: wait for all fetcher goroutines to exit before terminating (#16509) 2018-04-16 11:37:48 +03:00
6b2b328cdb ethdb: add leveldb write delay statistic (#16499) 2018-04-16 11:18:24 +03:00
2a1fc3d155 miner: remove contention on currentMu for pending data retrievals (#16497) 2018-04-16 10:56:20 +03:00
60516c83b0 Merge pull request #16494 from karalabe/txpool-stable-pricedelete
core: txpool stable underprice drop order, perf fixes
2018-04-12 13:19:06 +03:00
db48d312e4 core: txpool stable underprice drop order, perf fixes 2018-04-12 12:54:22 +03:00
7e911b8e47 Merge pull request #16491 from holiman/fix_copy_again
core/state: fix ripemd-cornercase in Copy
2018-04-12 10:54:29 +03:00
7205366c9f core/state: fix ripemd-cornercase in Copy 2018-04-11 15:03:49 +02:00
5a79aca8b9 Merge pull request #16485 from holiman/fixcopycopy
core/state: fix bug in copy of copy State
2018-04-11 12:00:17 +03:00
0c7b99b8cc core/state: fix bug in copy of copy State 2018-04-11 10:23:01 +02:00
e7cc5b4160 les: add ps.lock.Unlock() before return (#16360) 2018-04-11 11:02:33 +03:00
34ecb495b6 Merge pull request #16481 from karalabe/go1.10.1
travis, appveyor: bump to Go 1.10.1
2018-04-11 10:56:32 +03:00
2e247705cd travis.yml: add TEST_PACKAGES to speed up swarm testing (#16456)
This commit is meant to allow ecosystem projects such as ethersphere
to minimize CI build times by specifying an environment variable with
the packages to run tests on.

If the environment variable isn't defined the build script will test
all packages so this shouldn't affect the main go-ethereum repository.
2018-04-10 16:35:26 +02:00
95d5c22086 travis, appveyor: bump to Go 1.10.1 2018-04-10 17:07:58 +03:00
3caf16b15f core: remove stray account creations in state transition (#16470)
The 'from' and 'to' methods on StateTransitions are reader methods and
shouldn't have inadvertent side effects on state.

It is safe to remove the check in 'from' because account existence is
implicitly checked by the nonce and balance checks. If the account has
non-zero balance or nonce, it must exist. Even if the sender account has
nonce zero at the start of the state transition or no balance, the nonce
is incremented before execution and the account will be created at that
time.

It is safe to remove the check in 'to' because the EVM creates the
account if necessary.

Fixes #15119
2018-04-10 15:33:25 +02:00
30deb6067f build: add -e and -X flags to get more information on #16433 (#16443) 2018-04-10 14:32:48 +03:00
989ab26028 Merge pull request #16478 from karalabe/fix-alltools-dockerfile
Dockerfile.alltools: fix invalid command
2018-04-10 14:13:52 +03:00
c7ab3e5544 common: delete StringToAddress, StringToHash (#16436)
* common: delete StringToAddress, StringToHash

These functions are confusing because they don't parse hex, but use the
bytes of the string. This change removes them, replacing all uses of
StringToAddress(s) by BytesToAddress([]byte(s)).

* eth/filters: remove incorrect use of common.BytesToAddress
2018-04-10 14:12:07 +03:00
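A migration sketch; the string is illustrative:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// The deleted helpers used the string's raw bytes rather than
	// hex-decoding it, so the replacement preserves that behaviour.
	addr := common.BytesToAddress([]byte("raw-key")) // was common.StringToAddress("raw-key")
	hash := common.BytesToHash([]byte("raw-key"))    // was common.StringToHash("raw-key")
	fmt.Println(addr.Hex(), hash.Hex())
	// Hex strings still go through common.HexToAddress / common.HexToHash.
}
```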
29213b1f8f Dockerfile.alltools: fix invalid command 2018-04-10 14:02:56 +03:00
149f706fde Merge pull request #16477 from karalabe/puppeth-dockerfile-permission-fix
cmd/puppeth: fix node deploys for updated dockerfile user
2018-04-10 13:40:52 +03:00
7c1e9a5004 cmd/puppeth: fix node deploys for updated dockerfile user 2018-04-10 13:13:05 +03:00
39f4c80155 Merge pull request #15225 from holiman/test_removefrom_dirtyset
Change handling of dirty objects in state
2018-04-10 12:28:30 +03:00
8c31d2897b core: add blockchain benchmarks 2018-04-10 11:20:06 +02:00
14c9215dd3 state: handle nil in journal dirties 2018-04-10 11:20:02 +02:00
1100e8ba63 eth/downloader: flush state sync data before exit (#16280) 2018-04-09 14:46:27 +02:00
0fac705ed0 compression/rle: delete RLE compression (#16468) 2018-04-09 13:47:06 +02:00
315b9b18df ethclient: remove empty object in newHeads subscription call (#16454) 2018-04-09 12:40:58 +02:00
8de655ef3a bmt: fix comment typos (#16461) 2018-04-09 12:38:01 +02:00
dm4
3ebcf92b42 cmd/evm: print vm output when debug flag is on (#16326) 2018-04-06 12:43:36 +02:00
c43792a42c cmd/geth: update template for 'geth bug' command (#16350) 2018-04-06 12:42:11 +02:00
50dbe8e244 Dockerfile: use non-privileged user account (#16052) 2018-04-05 14:14:32 +02:00
ec8ee611ca core/types: remove String methods from struct types (#16205)
Most of these methods did not contain all the relevant information
inside the object and were not using a similar formatting type.
Moreover, the existence of a suboptimal String method breaks usage
with more advanced data dumping tools like go-spew.
2018-04-05 14:13:02 +02:00
1e248f3a6e README: change 'built in' to 'built-in' 2018-04-04 15:25:34 +02:00
6ab9f0a19f accounts/abi: improve test coverage (#16044) 2018-04-04 13:42:36 +02:00
7aad81f881 eth: fix typos (#16414) 2018-04-04 12:25:02 +02:00
2a4bd55b43 cmd/geth: remove relOracle variable (#16434) 2018-04-04 12:22:26 +02:00
5909482fb5 core/state: avoid redundant addition to code size cache (#16427) 2018-04-03 17:13:19 +02:00
d1af4e1a9e crypto/secp256k1: catch curve parameter parse errors (#16392) 2018-04-03 17:12:00 +02:00
6cdfb9a3eb .gitattributes: enable solidity highlighting on github (#16425) 2018-04-03 15:21:24 +02:00
a095b84ec5 travis.yml: remove sudo requirement for PPA and Azure purge builders (#16404)
This is supposed to fix the FTP upload issue according to
travis-ci/travis-ci#9391.
2018-03-28 15:35:41 +03:00
d985b9052a core/state: avoid linear overhead on journal dirty listing 2018-03-28 09:32:02 +03:00
958ed4f3d9 core/state: rework dirty handling to avoid quadratic overhead 2018-03-28 09:29:28 +03:00
1a8894b3d5 core/state: uniform parameter style (#16398)
- Uniform code style.
2018-03-28 09:26:37 +03:00
80449719bd whisper: fix issue in topic list copy (#16381)
- Fixes #16271. What was appended was a pointer to
an object that changes during the iteration.
- The topic is allocated as a 4-byte array, fill partial topics
with 0s. Partial topics are currently disabled, but would
crash as they rely on the presence of byte number 3.
2018-03-27 17:26:08 +02:00
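The first fix is the classic range-variable aliasing bug; a generic illustration (not the whisper code itself, assuming pre-Go-1.22 loop semantics):

```go
package main

import "fmt"

// TopicType mirrors whisper's 4-byte topic array for this illustration.
type TopicType [4]byte

func main() {
	topics := []TopicType{{1}, {2}, {3}}

	var bad []*TopicType
	for _, t := range topics {
		bad = append(bad, &t) // every entry points at the same loop variable
	}
	fmt.Println(*bad[0], *bad[1], *bad[2]) // [3 0 0 0] three times

	var good []*TopicType
	for _, t := range topics {
		t := t // fresh copy per iteration
		good = append(good, &t)
	}
	fmt.Println(*good[0], *good[1], *good[2]) // [1 0 0 0] [2 0 0 0] [3 0 0 0]
}
```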
45bd4fedde light: new CHT for ropsten (#16393) 2018-03-27 10:09:15 +03:00
d763e20d55 Merge pull request #16394 from hydai/fix_typo
core/vm: Fixed typos in core/vm/interpreter.go
2018-03-27 10:08:58 +03:00
6134990709 core/vm: Fixed typos in core/vm/interpreter.go 2018-03-27 12:29:04 +08:00
85ea9159d0 params, VERSION: v1.8.4 unstable 2018-03-26 18:09:56 +02:00
329ac18ef6 params: v1.8.3 stable 2018-03-26 18:09:06 +02:00
89cc604a50 light: new mainnet CHT (#16390) 2018-03-26 18:02:12 +03:00
cf799e5eaa whisper: switch all remaining components from v5 to v6 2018-03-26 16:36:14 +02:00
c053f1146d Merge pull request #16388 from hydai/fix_comments
core/vm: Fixed typos
2018-03-26 17:05:35 +03:00
c3dc814fea core/vm: Fixed typo in core/vm/evm.go 2018-03-26 21:40:00 +08:00
db9b2f5405 cmd/puppeth: add constraints to network name (#16336)
* cmd/puppeth: add constraints to network name

* cmd/puppeth: update usage of network arg

* cmd/puppeth: avoid package dependency on utils
2018-03-26 15:21:20 +03:00
e9b5e22ad1 rpc: limit chunked requests (#16343) 2018-03-26 14:46:37 +03:00
e506d384e9 core/state: fix typo (#16370) 2018-03-26 14:45:34 +03:00
dd708c1636 Merge pull request #16319 from rjl493456442/dump_preimages
cmd: implement preimage dump and import cmds
2018-03-26 14:45:01 +03:00
495bdb0c71 cmd: export preimages in RLP, support GZIP, uniform with block export 2018-03-26 14:08:01 +03:00
7c131f4d6d core/asm: fixed typo (posititon -> position) (#16366) 2018-03-26 13:48:39 +03:00
84c5db5409 core/vm: remove JIT VM codes (#16362) 2018-03-26 13:48:04 +03:00
23ac783332 ecies: drop randomness parameter from PrivateKey.Decrypt (#16374)
The parameter `rand` is unused in `PrivateKey.Decrypt`. Decryption in
the ECIES encryption scheme is deterministic, so randomness isn't
needed.
2018-03-26 13:46:18 +03:00
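A round-trip sketch of the new call shape; randomness is still needed for key generation and encryption, just no longer for decryption:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/crypto/ecies"
)

func main() {
	prv, err := ecies.GenerateKey(rand.Reader, ecies.DefaultCurve, nil)
	if err != nil {
		log.Fatal(err)
	}
	ct, err := ecies.Encrypt(rand.Reader, &prv.PublicKey, []byte("hello"), nil, nil)
	if err != nil {
		log.Fatal(err)
	}
	plain, err := prv.Decrypt(ct, nil, nil) // was: prv.Decrypt(rand.Reader, ct, nil, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", plain)
}
```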
e9a1d8de34 Merge pull request #16387 from karalabe/evm-polsihes
core: minor evm polishes and optimizations
2018-03-26 13:44:36 +03:00
b6b6f52ec8 cmd: implement preimage dump and import cmds 2018-03-26 12:51:46 +03:00
1fae50a199 core: minor evm polishes and optimizations 2018-03-26 12:28:46 +03:00
3d013c1939 whisper: some components are still using v5, switch to v6 2018-03-22 15:48:52 +01:00
933972d139 Merge pull request #16256 from epiclabs-io/unpack_one_arg_event
Fix issue unmarshaling single parameter events from abigen generated go code #16208
2018-03-21 10:11:37 +01:00
b1917ac9a3 build: add GOBIN to PATH for gomobile (#16344)
* build: add GOBIN to PATH for gomobile

* build: install gobind alongside gomobile
2018-03-20 02:00:45 +09:00
1203c6a237 crypto/bn256: full switchover to cloudflare's code (#16301)
* crypto/bn256: full switchover to cloudflare's code

* crypto/bn256: only use cloudflare for optimized architectures

* crypto/bn256: upstream fallback for non-optimized code

* .travis, build: drop support for Go 1.8 (need type aliases)

* crypto/bn256/cloudflare: enable curve mul lattice optimization
2018-03-20 01:13:54 +09:00
0965761a45 whisper: Notify Vlad and Guillaume of whisper PRs (#16340) 2018-03-19 22:27:46 +09:00
faed47b3c5 Merge pull request #15990 from markya0616/sim_backend_block_hash
accounts/abi, core: add AddTxWithChain in BlockGen for simulation
2018-03-19 08:29:54 +01:00
fe6cf00f48 miner: remove duplicated code (#15968) 2018-03-16 14:13:52 +02:00
322006d0f2 Merge pull request #16315 from karalabe/drop-vagrant
containers: drop vagrant support, no one's maintaining it
2018-03-15 18:59:50 +02:00
56e2376e69 containers: drop vagrant support, no one's maintaining it 2018-03-14 13:23:40 +02:00
a063876749 core/asm: fixed typo (labal -> label) (#16313) 2018-03-14 11:59:06 +02:00
62bc179bb9 Merge pull request #16310 from karalabe/websocket-request-limits
rpc: enforce the 128KB request limits on websockets too
2018-03-13 14:35:18 +02:00
555f42cfd8 rpc: enforce the 128KB request limits on websockets too 2018-03-13 13:55:26 +02:00
6a2d2869f6 github: more information bot configuration (#16298) 2018-03-12 10:56:59 +02:00
1488fdaf19 cmd/utils: fix maxpeers vs lightpeers logic (#16125) 2018-03-09 11:55:03 +02:00
77da203547 eth: update highest block we know during the sync if a higher one was found (#16283)
* eth: update highest block we know during the sync if a higher one was found

* eth: avoid useless sync in fast sync
2018-03-09 11:51:30 +02:00
307846d046 Merge pull request #16287 from razum2um/master
Allow any vhost for wallet deployed by puppeth
2018-03-09 11:06:18 +02:00
38e2071df8 Merge pull request #16289 from jeffwalsh/remove-add-std-arg
common/compiler: remove "--add-std" arg, deprecated in solidity 0.4.21
2018-03-09 11:05:09 +02:00
a25561dfb4 common/compiler: remove "--add-std" arg, deprecated in solidity 0.4.21 2018-03-08 18:05:56 -08:00
52697fb1b2 cmd/puppeth: allow any vhost in wallet 2018-03-09 00:02:31 +07:00
b2f53f9621 Merge pull request #16128 from karalabe/go1.10
travis, Dockerfile, appveyor, build: bump to Go 1.10
2018-03-08 17:26:14 +02:00
669aba8e2c travis, Dockerfile, appveyor, build: bump to Go 1.10 2018-03-08 16:34:26 +02:00
39c16c8a1e cmd, ethdb, vendor: integrate leveldb iostats (#16277)
* cmd, dashboard, ethdb, vendor: send iostats to dashboard

* ethdb: change names

* ethdb: handle parsing errors

* ethdb: handle iostats syntax error

* ethdb: r -> w
2018-03-08 14:59:00 +02:00
4871e25f5f core/vm: optimize eq, slt, sgt and iszero + tests (#16047)
* vm: optimize eq, slt, sgt and iszero + tests

* core/vm: fix error in slt/sgt, found by vmtests. Added testcase

* core/vm: make slt/sgt cleaner
2018-03-08 14:48:19 +02:00
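For reference, the semantics being optimized: SLT/SGT compare 256-bit words as two's-complement signed values. A naive sketch of SLT (not the optimized implementation, which avoids the copies):

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common/math"
)

// slt pins down EVM SLT semantics: reinterpret both words as signed
// two's-complement integers, then compare.
func slt(x, y *big.Int) *big.Int {
	xs := math.S256(new(big.Int).Set(x))
	ys := math.S256(new(big.Int).Set(y))
	if xs.Cmp(ys) < 0 {
		return big.NewInt(1)
	}
	return big.NewInt(0)
}

func main() {
	minusOne := math.MaxBig256 // 2^256-1, i.e. -1 in two's complement
	fmt.Println(slt(minusOne, big.NewInt(1))) // 1: -1 < 1
	fmt.Println(slt(big.NewInt(1), minusOne)) // 0
}
```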
85d5f2c661 Merge pull request #16285 from karalabe/fix-resend-optional-param
internal/ethapi: make resent gas params optional
2018-03-08 12:52:38 +02:00
28ef23f446 internal/ethapi: make resent gas params optional 2018-03-08 12:29:42 +02:00
704840a8ad cmd, dashboard: use webpack dev server, remove custom assets (#16263)
* cmd, dashboard: remove custom assets, webpack dev server

* dashboard: yarn commands, small fixes
2018-03-08 10:22:21 +02:00
3ec1b9a92d Merge pull request #16275 from karalabe/bump-duktape
vendor: bump duktape to get rid of build warning
2018-03-07 16:29:31 +02:00
fc1f3f2618 vendor: bump duktape to get rid of build warning 2018-03-07 14:47:09 +02:00
cddb529d70 accounts/abi: normalize method name to a camel-case string (#15976) 2018-03-07 14:34:53 +02:00
63687f04e4 core: check transaction/receipt count match when reconstructing blocks (#16272) 2018-03-07 12:05:14 +02:00
d43ffdbf6a Merge pull request #16240 from cuiweixie/txpool
core: should enqueue the invalid txs anyway
2018-03-07 11:45:28 +02:00
f6bef558aa eth: fixed typo (#16274) 2018-03-07 11:15:54 +02:00
2b5d1a4a4c core: update txpool tests for the removal fix 2018-03-07 10:58:11 +02:00
cui
f8601430fd core: should enqueue the invalid txs anyway
even if pending is empty we should enqueue the invalid txs
2018-03-07 10:07:50 +02:00
f1d440a437 whisper: final refactoring (#16259)
whisper: final refactoring
2018-03-06 23:37:43 +01:00
746392cfd2 swarm/storage: disable treechunker test while it is flaky (#16254) 2018-03-05 17:23:33 +01:00
60a999f238 accounts/abi: Modified unpackAtomic to accept struct lvalues 2018-03-05 16:01:40 +01:00
13b566e06e accounts/abi: Add one-parameter event test case from enriquefynn/unpack_one_arg_event 2018-03-05 16:00:03 +01:00
1548518644 VERSION, params: begin 1.8.3 release cycle 2018-03-05 15:52:21 +02:00
1e72271f57 accounts/abi: use unpackTuple to unpack event arguments
Events with just 1 argument fail before this change
2018-02-16 11:46:25 +01:00
c1d70ea970 accounts/abi, core: add AddTxWithChain in BlockGen for simulation 2018-01-29 18:47:08 +08:00
1,688 changed files with 278,161 additions and 81,656 deletions

.gitattributes vendored (1 line changed)

@ -1,2 +1,3 @@
# Auto detect text files and perform LF normalization
* text=auto
*.sol linguist-language=Solidity

.github/CODEOWNERS vendored (18 lines changed)

@ -1,11 +1,13 @@
# Lines starting with '#' are comments.
# Each line is a file pattern followed by one or more owners.
accounts/usbwallet @karalabe
consensus @karalabe
core/ @karalabe @holiman
eth/ @karalabe
les/ @zsfelfoldi
light/ @zsfelfoldi
mobile/ @karalabe
p2p/ @fjl @zsfelfoldi
accounts/usbwallet @karalabe
accounts/abi @gballet
consensus @karalabe
core/ @karalabe @holiman
eth/ @karalabe
les/ @zsfelfoldi
light/ @zsfelfoldi
mobile/ @karalabe
p2p/ @fjl @zsfelfoldi
whisper/ @gballet @gluk256

.github/CONTRIBUTING.md

@ -1,16 +1,40 @@
# Contributing
Thank you for considering to help out with the source code! We welcome
contributions from anyone on the internet, and are grateful for even the
smallest of fixes!
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a
pull request for the maintainers to review and merge into the main code base. If
you wish to submit more complex changes though, please check up with the core
devs first on [our gitter channel](https://gitter.im/ethereum/go-ethereum) to
ensure those changes are in line with the general philosophy of the project
and/or get some early feedback which can make both your efforts much lighter as
well as our review and merge procedures quick and simple.
## Coding guidelines
Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go
[formatting](https://golang.org/doc/effective_go.html#formatting) guidelines
(i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go
[commentary](https://golang.org/doc/effective_go.html#commentary) guidelines.
* Pull requests need to be based on and opened against the `master` branch.
* Commit messages should be prefixed with the package(s) they modify.
* E.g. "eth, rpc: make trace configs optional"
## Can I have feature X
Before you do a feature request please check and make sure that it isn't possible
through some other means. The JavaScript enabled console is a powerful feature
in the right hands. Please check our [Wiki page](https://github.com/ethereum/go-ethereum/wiki) for more info
Before you submit a feature request, please check and make sure that it isn't
possible through some other means. The JavaScript-enabled console is a powerful
feature in the right hands. Please check our
[Wiki page](https://github.com/ethereum/go-ethereum/wiki) for more info
and help.
## Contributing
## Configuration, dependencies, and tests
If you'd like to contribute to go-ethereum please fork, fix, commit and
send a pull request. Commits which do not comply with the coding standards
are ignored (use gofmt!).
See [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
for more details on configuring your environment, testing, and
dependency management.
Please see the [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
for more details on configuring your environment, managing project dependencies
and testing procedures.

.github/no-response.yml vendored (new file, 11 lines)

@ -0,0 +1,11 @@
# Number of days of inactivity before an Issue is closed for lack of response
daysUntilClose: 30
# Label requiring a response
responseRequiredLabel: "need:more-information"
# Comment to post when closing an Issue for lack of response. Set to `false` to disable
closeComment: >
This issue has been automatically closed because there has been no response
to our request for more information from the original author. With only the
information that is currently in the issue, we don't have enough information
to take action. Please reach out if you have more relevant information or
answers to our questions so that we can investigate further.

.github/stale.yml vendored (2 lines changed)

@ -7,7 +7,7 @@ exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
staleLabel: "status:inactive"
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had

.gitignore vendored (3 lines changed)

@ -42,3 +42,6 @@ profile.cov
/dashboard/assets/node_modules
/dashboard/assets/stats.json
/dashboard/assets/bundle.js
/dashboard/assets/package-lock.json
**/yarn-error.log

.travis.yml

@ -4,42 +4,47 @@ sudo: false
matrix:
include:
- os: linux
dist: trusty
dist: xenial
sudo: required
go: 1.8.x
go: 1.10.x
script:
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# These are the latest Go versions.
- os: linux
dist: trusty
dist: xenial
sudo: required
go: 1.9.x
go: 1.11.x
script:
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage
- go run build/ci.go test -coverage $TEST_PACKAGES
- os: osx
go: 1.9.x
go: 1.11.x
script:
- echo "Increase the maximum number of open file descriptors on macOS"
- NOFILE=20480
- sudo sysctl -w kern.maxfiles=$NOFILE
- sudo sysctl -w kern.maxfilesperproc=$NOFILE
- sudo launchctl limit maxfiles $NOFILE $NOFILE
- sudo launchctl limit maxfiles
- ulimit -S -n $NOFILE
- ulimit -n
- unset -f cd # workaround for https://github.com/travis-ci/travis-ci/issues/8703
- brew update
- brew install caskroom/cask/brew-cask
- brew cask install osxfuse
- go run build/ci.go install
- go run build/ci.go test -coverage
- go run build/ci.go test -coverage $TEST_PACKAGES
# This builder only tests code linters on latest version of Go
- os: linux
dist: trusty
go: 1.9.x
dist: xenial
go: 1.11.x
env:
- lint
git:
@ -47,14 +52,13 @@ matrix:
script:
- go run build/ci.go lint
# This builder does the Ubuntu PPA and Linux Azure uploads
- os: linux
dist: trusty
sudo: required
go: 1.9.x
# This builder does the Ubuntu PPA upload
- if: type = push
os: linux
dist: xenial
go: 1.11.x
env:
- ubuntu-ppa
- azure-linux
git:
submodules: false # avoid cloning ethereum/tests
addons:
@ -63,11 +67,29 @@ matrix:
- devscripts
- debhelper
- dput
- gcc-multilib
- fakeroot
- python-bzrlib
- python-paramiko
script:
- echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
- go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
# This builder does the Linux Azure uploads
- if: type = push
os: linux
dist: xenial
sudo: required
go: 1.11.x
env:
- azure-linux
git:
submodules: false # avoid cloning ethereum/tests
addons:
apt:
packages:
- gcc-multilib
script:
# Build for the primary platforms that Trusty can manage
- go run build/ci.go debsrc -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>" -upload ppa:ethereum/ethereum
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go install -arch 386
@ -87,11 +109,12 @@ matrix:
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
# This builder does the Linux Azure MIPS xgo uploads
- os: linux
dist: trusty
- if: type = push
os: linux
dist: xenial
services:
- docker
go: 1.9.x
go: 1.11.x
env:
- azure-linux-mips
git:
@ -114,8 +137,9 @@ matrix:
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
# This builder does the Android Maven and Azure uploads
- os: linux
dist: precise # Needed for the android tools
- if: type = push
os: linux
dist: xenial
addons:
apt:
packages:
@ -135,24 +159,25 @@ matrix:
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- curl https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz | tar -xz
- curl https://storage.googleapis.com/golang/go1.11.5.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go
script:
# Build the Android archive and upload it to Maven Central and Azure
- curl https://dl.google.com/android/repository/android-ndk-r15c-linux-x86_64.zip -o android-ndk-r15c.zip
- unzip -q android-ndk-r15c.zip && rm android-ndk-r15c.zip
- mv android-ndk-r15c $HOME
- export ANDROID_NDK=$HOME/android-ndk-r15c
- curl https://dl.google.com/android/repository/android-ndk-r17b-linux-x86_64.zip -o android-ndk-r17b.zip
- unzip -q android-ndk-r17b.zip && rm android-ndk-r17b.zip
- mv android-ndk-r17b $HOME
- export ANDROID_NDK=$HOME/android-ndk-r17b
- mkdir -p $GOPATH/src/github.com/ethereum
- ln -s `pwd` $GOPATH/src/github.com/ethereum
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
# This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
- os: osx
go: 1.9.x
- if: type = push
os: osx
go: 1.11.x
env:
- azure-osx
- azure-ios
@ -179,20 +204,13 @@ matrix:
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds
# This builder does the Azure archive purges to avoid accumulating junk
- os: linux
dist: trusty
sudo: required
go: 1.9.x
- if: type = cron
os: linux
dist: xenial
go: 1.11.x
env:
- azure-purge
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go purge -store gethstore/builds -days 14
notifications:
webhooks:
urls:
- https://webhooks.gitter.im/e/e09ccdce1048c5e03445
on_success: change
on_failure: always

AUTHORS

@ -171,3 +171,4 @@ xiekeyang <xiekeyang@users.noreply.github.com>
yoza <yoza.is12s@gmail.com>
ΞTHΞЯSPHΞЯΞ <{viktor.tron,nagydani,zsfelfoldi}@gmail.com>
Максим Чусовлянов <mchusovlianov@gmail.com>
Ralph Caraveo <deckarep@gmail.com>

Dockerfile

@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.9-alpine as builder
FROM golang:1.11-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers
@ -12,5 +12,5 @@ FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp 30304/udp
EXPOSE 8545 8546 30303 30303/udp
ENTRYPOINT ["geth"]

Dockerfile.alltools

@ -1,5 +1,5 @@
# Build Geth in a stock Go builder container
FROM golang:1.9-alpine as builder
FROM golang:1.11-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers
@ -12,4 +12,4 @@ FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/* /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp 30304/udp
EXPOSE 8545 8546 30303 30303/udp

Makefile

@ -37,7 +37,11 @@ ios:
test: all
build/env.sh go run build/ci.go test
lint: ## Run linters.
build/env.sh go run build/ci.go lint
clean:
./build/clean_go_build_cache.sh
rm -fr build/_workspace/pkg/ $(GOBIN)/*
# The devtools target installs tools required for 'go generate'.
@ -53,6 +57,9 @@ devtools:
@type "solc" 2> /dev/null || echo 'Please install solc'
@type "protoc" 2> /dev/null || echo 'Please install protoc'
swarm-devtools:
env GOBIN= go install ./cmd/swarm/mimegen
# Cross Compilation Targets (xgo)
geth-cross: geth-linux geth-darwin geth-windows geth-android geth-ios

README.md

@ -7,7 +7,7 @@ https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/6874
)](https://godoc.org/github.com/ethereum/go-ethereum)
[![Go Report Card](https://goreportcard.com/badge/github.com/ethereum/go-ethereum)](https://goreportcard.com/report/github.com/ethereum/go-ethereum)
[![Travis](https://travis-ci.org/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.org/ethereum/go-ethereum)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/ethereum/go-ethereum?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/nthXNEv)
Automated builds are available for stable releases and the unstable master branch.
Binary archives are published at https://geth.ethereum.org/downloads/.
@ -18,7 +18,7 @@ For prerequisites and detailed build instructions please read the
[Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum)
on the wiki.
Building geth requires both a Go (version 1.7 or later) and a C compiler.
Building geth requires both a Go (version 1.9 or later) and a C compiler.
You can install them using your favourite package manager.
Once the dependencies are installed, run
@ -34,13 +34,13 @@ The go-ethereum project comes with several wrappers/executables found in the `cm
| Command | Description |
|:----------:|-------------|
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default) archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `swarm` | swarm daemon and tools. This is the entrypoint for the swarm network. `swarm --help` for command line options and subcommands. See https://swarm-guide.readthedocs.io for swarm documentation. |
| `swarm` | Swarm daemon and tools. This is the entrypoint for the Swarm network. `swarm --help` for command line options and subcommands. See [Swarm README](https://github.com/ethereum/go-ethereum/tree/master/swarm) for more information. |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
## Running geth
@ -69,7 +69,7 @@ This command will:
* Start up Geth's built-in interactive [JavaScript console](https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://github.com/ethereum/wiki/wiki/JavaScript-API)
as well as Geth's own [management APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs).
This too is optional and if you leave it out you can always attach to an already running Geth instance
This tool is optional and if you leave it out you can always attach to an already running Geth instance
with `geth attach`.
### Full node on the Ethereum test network
@ -142,7 +142,7 @@ Do not forget `--rpcaddr 0.0.0.0`, if you want to access RPC from other containe
### Programmatically interfacing Geth nodes
As a developer, sooner rather than later you'll want to start interacting with Geth and the Ethereum
network via your own programs and not manually through the console. To aid this, Geth has built in
network via your own programs and not manually through the console. To aid this, Geth has built-in
support for a JSON-RPC based APIs ([standard APIs](https://github.com/ethereum/wiki/wiki/JSON-RPC) and
[Geth specific APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs)). These can be
exposed via HTTP, WebSockets and IPC (unix sockets on unix based platforms, and named pipes on Windows).
@ -168,7 +168,7 @@ HTTP based JSON-RPC API options:
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
You'll need to use your own programming environments' capabilities (libraries, tools, etc) to connect
via HTTP, WS or IPC to a Geth node configured with the above flags and you'll need to speak [JSON-RPC](http://www.jsonrpc.org/specification)
via HTTP, WS or IPC to a Geth node configured with the above flags and you'll need to speak [JSON-RPC](https://www.jsonrpc.org/specification)
on all transports. You can reuse the same connection for multiple requests!
**Note: Please understand the security implications of opening up an HTTP/WS based transport before

VERSION

@ -1 +0,0 @@
1.8.2

accounts/abi/abi.go

@ -58,13 +58,11 @@ func (abi ABI) Pack(name string, args ...interface{}) ([]byte, error) {
return nil, err
}
return arguments, nil
}
method, exist := abi.Methods[name]
if !exist {
return nil, fmt.Errorf("method '%s' not found", name)
}
arguments, err := method.Inputs.Pack(args...)
if err != nil {
return nil, err
@ -82,7 +80,7 @@ func (abi ABI) Unpack(v interface{}, name string, output []byte) (err error) {
// we need to decide whether we're calling a method or an event
if method, ok := abi.Methods[name]; ok {
if len(output)%32 != 0 {
return fmt.Errorf("abi: improperly formatted output")
return fmt.Errorf("abi: improperly formatted output: %s - Bytes: [%+v]", string(output), output)
}
return method.Outputs.Unpack(v, output)
} else if event, ok := abi.Events[name]; ok {
@ -137,6 +135,9 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
// MethodById looks up a method by the 4-byte id
// returns nil if none found
func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
if len(sigdata) < 4 {
return nil, fmt.Errorf("data too short (% bytes) for abi method lookup", len(sigdata))
}
for _, method := range abi.Methods {
if bytes.Equal(method.Id(), sigdata[:4]) {
return &method, nil

accounts/abi/abi_test.go

@ -22,11 +22,10 @@ import (
"fmt"
"log"
"math/big"
"reflect"
"strings"
"testing"
"reflect"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
@ -52,11 +51,14 @@ const jsondata2 = `
{ "type" : "function", "name" : "slice", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32[2]" } ] },
{ "type" : "function", "name" : "slice256", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "sliceAddress", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "address[]" } ] },
{ "type" : "function", "name" : "sliceMultiAddress", "constant" : false, "inputs" : [ { "name" : "a", "type" : "address[]" }, { "name" : "b", "type" : "address[]" } ] }
{ "type" : "function", "name" : "sliceMultiAddress", "constant" : false, "inputs" : [ { "name" : "a", "type" : "address[]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray", "constant" : false, "inputs" : [ { "name" : "a", "type" : "uint256[2][2]" }, { "name" : "b", "type" : "address[]" } ] },
{ "type" : "function", "name" : "nestedArray2", "constant" : false, "inputs" : [ { "name" : "a", "type" : "uint8[][2]" } ] },
{ "type" : "function", "name" : "nestedSlice", "constant" : false, "inputs" : [ { "name" : "a", "type" : "uint8[][]" } ] }
]`
func TestReader(t *testing.T) {
Uint256, _ := NewType("uint256")
Uint256, _ := NewType("uint256", nil)
exp := ABI{
Methods: map[string]Method{
"balance": {
@ -177,7 +179,7 @@ func TestTestSlice(t *testing.T) {
}
func TestMethodSignature(t *testing.T) {
String, _ := NewType("string")
String, _ := NewType("string", nil)
m := Method{"foo", false, []Argument{{"bar", String, false}, {"baz", String, false}}, nil}
exp := "foo(string,string)"
if m.Sig() != exp {
@ -189,12 +191,31 @@ func TestMethodSignature(t *testing.T) {
t.Errorf("expected ids to match %x != %x", m.Id(), idexp)
}
uintt, _ := NewType("uint256")
uintt, _ := NewType("uint256", nil)
m = Method{"foo", false, []Argument{{"bar", uintt, false}}, nil}
exp = "foo(uint256)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
}
// Method with tuple arguments
s, _ := NewType("tuple", []ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256[]"},
{Name: "c", Type: "tuple[]", Components: []ArgumentMarshaling{
{Name: "x", Type: "int256"},
{Name: "y", Type: "int256"},
}},
{Name: "d", Type: "tuple[2]", Components: []ArgumentMarshaling{
{Name: "x", Type: "int256"},
{Name: "y", Type: "int256"},
}},
})
m = Method{"foo", false, []Argument{{"s", s, false}, {"bar", String, false}}, nil}
exp = "foo((int256,int256[],(int256,int256)[],(int256,int256)[2]),string)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
}
}
func TestMultiPack(t *testing.T) {
@ -564,11 +585,13 @@ func TestBareEvents(t *testing.T) {
const definition = `[
{ "type" : "event", "name" : "balance" },
{ "type" : "event", "name" : "anon", "anonymous" : true},
{ "type" : "event", "name" : "args", "inputs" : [{ "indexed":false, "name":"arg0", "type":"uint256" }, { "indexed":true, "name":"arg1", "type":"address" }] }
{ "type" : "event", "name" : "args", "inputs" : [{ "indexed":false, "name":"arg0", "type":"uint256" }, { "indexed":true, "name":"arg1", "type":"address" }] },
{ "type" : "event", "name" : "tuple", "inputs" : [{ "indexed":false, "name":"t", "type":"tuple", "components":[{"name":"a", "type":"uint256"}] }, { "indexed":true, "name":"arg1", "type":"address" }] }
]`
arg0, _ := NewType("uint256")
arg1, _ := NewType("address")
arg0, _ := NewType("uint256", nil)
arg1, _ := NewType("address", nil)
tuple, _ := NewType("tuple", []ArgumentMarshaling{{Name: "a", Type: "uint256"}})
expectedEvents := map[string]struct {
Anonymous bool
@ -580,6 +603,10 @@ func TestBareEvents(t *testing.T) {
{Name: "arg0", Type: arg0, Indexed: false},
{Name: "arg1", Type: arg1, Indexed: true},
}},
"tuple": {false, []Argument{
{Name: "t", Type: tuple, Indexed: false},
{Name: "arg1", Type: arg1, Indexed: true},
}},
}
abi, err := JSON(strings.NewReader(definition))
@ -621,14 +648,16 @@ func TestBareEvents(t *testing.T) {
// TestUnpackEvent is based on this contract:
// contract T {
// event received(address sender, uint amount, bytes memo);
// event receivedAddr(address sender);
// function receive(bytes memo) external payable {
// received(msg.sender, msg.value, memo);
// receivedAddr(msg.sender);
// }
// }
// When receive("X") is called with sender 0x00... and value 1, it produces this tx receipt:
// receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
func TestUnpackEvent(t *testing.T) {
const abiJSON = `[{"constant":false,"inputs":[{"name":"memo","type":"bytes"}],"name":"receive","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"}]`
const abiJSON = `[{"constant":false,"inputs":[{"name":"memo","type":"bytes"}],"name":"receive","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"}],"name":"receivedAddr","type":"event"}]`
abi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
@ -644,17 +673,24 @@ func TestUnpackEvent(t *testing.T) {
}
type ReceivedEvent struct {
Address common.Address
Amount *big.Int
Memo []byte
Sender common.Address
Amount *big.Int
Memo []byte
}
var ev ReceivedEvent
err = abi.Unpack(&ev, "received", data)
if err != nil {
t.Error(err)
} else {
t.Logf("len(data): %d; received event: %+v", len(data), ev)
}
type ReceivedAddrEvent struct {
Sender common.Address
}
var receivedAddrEv ReceivedAddrEvent
err = abi.Unpack(&receivedAddrEv, "receivedAddr", data)
if err != nil {
t.Error(err)
}
}
@ -698,5 +734,14 @@ func TestABI_MethodById(t *testing.T) {
t.Errorf("Method %v (id %v) not 'findable' by id in ABI", name, common.ToHex(m.Id()))
}
}
// Also test empty
if _, err := abi.MethodById([]byte{0x00}); err == nil {
t.Errorf("Expected error, too short to decode data")
}
if _, err := abi.MethodById([]byte{}); err == nil {
t.Errorf("Expected error, too short to decode data")
}
if _, err := abi.MethodById(nil); err == nil {
t.Errorf("Expected error, nil is short to decode data")
}
}

accounts/abi/argument.go

@ -33,24 +33,27 @@ type Argument struct {
type Arguments []Argument
type ArgumentMarshaling struct {
Name string
Type string
Components []ArgumentMarshaling
Indexed bool
}
// UnmarshalJSON implements json.Unmarshaler interface
func (argument *Argument) UnmarshalJSON(data []byte) error {
var extarg struct {
Name string
Type string
Indexed bool
}
err := json.Unmarshal(data, &extarg)
var arg ArgumentMarshaling
err := json.Unmarshal(data, &arg)
if err != nil {
return fmt.Errorf("argument json err: %v", err)
}
argument.Type, err = NewType(extarg.Type)
argument.Type, err = NewType(arg.Type, arg.Components)
if err != nil {
return err
}
argument.Name = extarg.Name
argument.Indexed = extarg.Indexed
argument.Name = arg.Name
argument.Indexed = arg.Indexed
return nil
}
@ -85,7 +88,6 @@ func (arguments Arguments) isTuple() bool {
// Unpack performs the operation hexdata -> Go format
func (arguments Arguments) Unpack(v interface{}, data []byte) error {
// make sure the passed value is arguments pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
@ -97,59 +99,134 @@ func (arguments Arguments) Unpack(v interface{}, data []byte) error {
if arguments.isTuple() {
return arguments.unpackTuple(v, marshalledValues)
}
return arguments.unpackAtomic(v, marshalledValues)
return arguments.unpackAtomic(v, marshalledValues[0])
}
func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interface{}) error {
// unpack sets the unmarshalled value to go format.
// Note the dst here must be settable.
func unpack(t *Type, dst interface{}, src interface{}) error {
var (
dstVal = reflect.ValueOf(dst).Elem()
srcVal = reflect.ValueOf(src)
)
if t.T != TupleTy && !((t.T == SliceTy || t.T == ArrayTy) && t.Elem.T == TupleTy) {
return set(dstVal, srcVal)
}
switch t.T {
case TupleTy:
if dstVal.Kind() != reflect.Struct {
return fmt.Errorf("abi: invalid dst value for unpack, want struct, got %s", dstVal.Kind())
}
fieldmap, err := mapArgNamesToStructFields(t.TupleRawNames, dstVal)
if err != nil {
return err
}
for i, elem := range t.TupleElems {
fname := fieldmap[t.TupleRawNames[i]]
field := dstVal.FieldByName(fname)
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't found in the given value", t.TupleRawNames[i])
}
if err := unpack(elem, field.Addr().Interface(), srcVal.Field(i).Interface()); err != nil {
return err
}
}
return nil
case SliceTy:
if dstVal.Kind() != reflect.Slice {
return fmt.Errorf("abi: invalid dst value for unpack, want slice, got %s", dstVal.Kind())
}
slice := reflect.MakeSlice(dstVal.Type(), srcVal.Len(), srcVal.Len())
for i := 0; i < slice.Len(); i++ {
if err := unpack(t.Elem, slice.Index(i).Addr().Interface(), srcVal.Index(i).Interface()); err != nil {
return err
}
}
dstVal.Set(slice)
case ArrayTy:
if dstVal.Kind() != reflect.Array {
return fmt.Errorf("abi: invalid dst value for unpack, want array, got %s", dstVal.Kind())
}
array := reflect.New(dstVal.Type()).Elem()
for i := 0; i < array.Len(); i++ {
if err := unpack(t.Elem, array.Index(i).Addr().Interface(), srcVal.Index(i).Interface()); err != nil {
return err
}
}
dstVal.Set(array)
}
return nil
}
// unpackAtomic unpacks ( hexdata -> go ) a single value
func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues interface{}) error {
if arguments.LengthNonIndexed() == 0 {
return nil
}
argument := arguments.NonIndexed()[0]
elem := reflect.ValueOf(v).Elem()
if elem.Kind() == reflect.Struct {
fieldmap, err := mapArgNamesToStructFields([]string{argument.Name}, elem)
if err != nil {
return err
}
field := elem.FieldByName(fieldmap[argument.Name])
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't be found in the given value", argument.Name)
}
return unpack(&argument.Type, field.Addr().Interface(), marshalledValues)
}
return unpack(&argument.Type, elem.Addr().Interface(), marshalledValues)
}
// unpackTuple unpacks ( hexdata -> go ) a batch of values.
func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interface{}) error {
var (
value = reflect.ValueOf(v).Elem()
typ = value.Type()
kind = value.Kind()
)
if err := requireUnpackKind(value, typ, kind, arguments); err != nil {
return err
}
// If the output interface is a struct, make sure names don't collide
// If the interface is a struct, get the abi->struct_field mapping
var abi2struct map[string]string
if kind == reflect.Struct {
exists := make(map[string]bool)
for _, arg := range arguments {
field := capitalise(arg.Name)
if field == "" {
return fmt.Errorf("abi: purely underscored output cannot unpack to struct")
}
if exists[field] {
return fmt.Errorf("abi: multiple outputs mapping to the same struct field '%s'", field)
}
exists[field] = true
var (
argNames []string
err error
)
for _, arg := range arguments.NonIndexed() {
argNames = append(argNames, arg.Name)
}
abi2struct, err = mapArgNamesToStructFields(argNames, value)
if err != nil {
return err
}
}
for i, arg := range arguments.NonIndexed() {
reflectValue := reflect.ValueOf(marshalledValues[i])
switch kind {
case reflect.Struct:
name := capitalise(arg.Name)
for j := 0; j < typ.NumField(); j++ {
// TODO read tags: `abi:"fieldName"`
if typ.Field(j).Name == name {
if err := set(value.Field(j), reflectValue, arg); err != nil {
return err
}
}
field := value.FieldByName(abi2struct[arg.Name])
if !field.IsValid() {
return fmt.Errorf("abi: field %s can't be found in the given value", arg.Name)
}
if err := unpack(&arg.Type, field.Addr().Interface(), marshalledValues[i]); err != nil {
return err
}
case reflect.Slice, reflect.Array:
if value.Len() < i {
return fmt.Errorf("abi: insufficient number of arguments for unpack, want %d, got %d", len(arguments), value.Len())
}
v := value.Index(i)
if err := requireAssignable(v, reflectValue); err != nil {
if err := requireAssignable(v, reflect.ValueOf(marshalledValues[i])); err != nil {
return err
}
if err := set(v.Elem(), reflectValue, arg); err != nil {
if err := unpack(&arg.Type, v.Addr().Interface(), marshalledValues[i]); err != nil {
return err
}
default:
@ -157,31 +234,7 @@ func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interfa
}
}
return nil
}
// unpackAtomic unpacks ( hexdata -> go ) a single value
func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues []interface{}) error {
if len(marshalledValues) != 1 {
return fmt.Errorf("abi: wrong length, expected single value, got %d", len(marshalledValues))
}
elem := reflect.ValueOf(v).Elem()
reflectValue := reflect.ValueOf(marshalledValues[0])
return set(elem, reflectValue, arguments.NonIndexed()[0])
}
// Computes the full size of an array;
// i.e. counting nested arrays, which count towards size for unpacking.
func getArraySize(arr *Type) int {
size := arr.Size
// Arrays can be nested, with each element being the same size
arr = arr.Elem
for arr.T == ArrayTy {
// Keep multiplying by elem.Size while the elem is an array.
size *= arr.Size
arr = arr.Elem
}
// Now we have the full array size, including its children.
return size
}
// UnpackValues can be used to unpack ABI-encoded hexdata according to the ABI-specification,
@ -192,7 +245,7 @@ func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
virtualArgs := 0
for index, arg := range arguments.NonIndexed() {
marshalledValue, err := toGoType((index+virtualArgs)*32, arg.Type, data)
if arg.Type.T == ArrayTy {
if arg.Type.T == ArrayTy && !isDynamicType(arg.Type) {
// If we have a static array, like [3]uint256, these are coded
// just like uint256,uint256,uint256.
// This means that we need to add two 'virtual' arguments when
@ -203,7 +256,11 @@ func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
//
// Calculate the full array size to get the correct offset for the next argument.
// Decrement it by 1, as the normal index increment is still applied.
virtualArgs += getArraySize(&arg.Type) - 1
virtualArgs += getTypeSize(arg.Type)/32 - 1
} else if arg.Type.T == TupleTy && !isDynamicType(arg.Type) {
// If we have a static tuple, like (uint256, bool, uint256), these are
// coded just like uint256,bool,uint256
virtualArgs += getTypeSize(arg.Type)/32 - 1
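// e.g. a static tuple (uint256,bool,uint256) occupies 3*32 = 96 bytes,
// so getTypeSize/32 - 1 accounts for two extra 'virtual' arguments.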
}
if err != nil {
return nil, err
@ -233,11 +290,7 @@ func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
// input offset is the bytes offset for packed output
inputOffset := 0
for _, abiArg := range abiArgs {
if abiArg.Type.T == ArrayTy {
inputOffset += 32 * abiArg.Type.Size
} else {
inputOffset += 32
}
inputOffset += getTypeSize(abiArg.Type)
}
var ret []byte
for i, a := range args {
@ -247,14 +300,13 @@ func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
if err != nil {
return nil, err
}
// check for a slice type (string, bytes, slice)
if input.Type.requiresLengthPrefix() {
// calculate the offset
offset := inputOffset + len(variableInput)
// check for dynamic types
if isDynamicType(input.Type) {
// set the offset
ret = append(ret, packNum(reflect.ValueOf(offset))...)
// Append the packed output to the variable input. The variable input
// will be appended at the end of the input.
ret = append(ret, packNum(reflect.ValueOf(inputOffset))...)
// calculate next offset
inputOffset += len(packed)
// append to variable input
variableInput = append(variableInput, packed...)
} else {
// append the packed value to the input
@ -267,14 +319,13 @@ func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
return ret, nil
}
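To make the offset bookkeeping concrete, a hedged sketch of packing a static value followed by a dynamic one (error handling elided):

uintTy, _ := NewType("uint256", nil)
strTy, _ := NewType("string", nil)
args := Arguments{{Type: uintTy}, {Type: strTy}}
packed, _ := args.Pack(big.NewInt(1), "foobar")
// head: word 0 = 1 (static, inlined), word 1 = 0x40 (offset of the string tail)
// tail: word 2 = 6 (length), word 3 = "foobar" right-padded to 32 bytes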
// capitalise makes the first character of a string upper case, also removing any
// prefixing underscores from the variable names.
func capitalise(input string) string {
for len(input) > 0 && input[0] == '_' {
input = input[1:]
// ToCamelCase converts an under-score string to a camel-case string
func ToCamelCase(input string) string {
parts := strings.Split(input, "_")
for i, s := range parts {
if len(s) > 0 {
parts[i] = strings.ToUpper(s[:1]) + s[1:]
}
}
if len(input) == 0 {
return ""
}
return strings.ToUpper(input[:1]) + input[1:]
return strings.Join(parts, "")
}
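The newly exported helper behaves as follows (a quick sketch):

ToCamelCase("_value")    // "Value"
ToCamelCase("trans_fer") // "TransFer"
ToCamelCase("value")     // "Value"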

View File

@ -31,6 +31,7 @@ import (
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/bloombits"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
@ -64,11 +65,11 @@ type SimulatedBackend struct {
// NewSimulatedBackend creates a new binding backend using a simulated blockchain
// for testing purposes.
func NewSimulatedBackend(alloc core.GenesisAlloc) *SimulatedBackend {
database, _ := ethdb.NewMemDatabase()
genesis := core.Genesis{Config: params.AllEthashProtocolChanges, Alloc: alloc}
func NewSimulatedBackend(alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
database := ethdb.NewMemDatabase()
genesis := core.Genesis{Config: params.AllEthashProtocolChanges, GasLimit: gasLimit, Alloc: alloc}
genesis.MustCommit(database)
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{})
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{}, nil)
backend := &SimulatedBackend{
database: database,
@ -159,7 +160,7 @@ func (b *SimulatedBackend) StorageAt(ctx context.Context, contract common.Addres
// TransactionReceipt returns the receipt of a transaction.
func (b *SimulatedBackend) TransactionReceipt(ctx context.Context, txHash common.Hash) (*types.Receipt, error) {
receipt, _, _, _ := core.GetReceipt(b.database, txHash)
receipt, _, _, _ := rawdb.ReadReceipt(b.database, txHash)
return receipt, nil
}
@ -207,7 +208,7 @@ func (b *SimulatedBackend) PendingNonceAt(ctx context.Context, account common.Ad
}
// SuggestGasPrice implements ContractTransactor.SuggestGasPrice. Since the simulated
// chain doens't have miners, we just return a gas price of 1 for any call.
// chain doesn't have miners, we just return a gas price of 1 for any call.
func (b *SimulatedBackend) SuggestGasPrice(ctx context.Context) (*big.Int, error) {
return big.NewInt(1), nil
}
@ -307,9 +308,9 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() {
block.AddTx(tx)
block.AddTxWithChain(b.blockchain, tx)
}
block.AddTx(tx)
block.AddTxWithChain(b.blockchain, tx)
})
statedb, _ := b.blockchain.State()
@ -323,18 +324,24 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
//
// TODO(karalabe): Deprecate when the subscription one can return past data too.
func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.FilterQuery) ([]types.Log, error) {
// Initialize unset filter boundaries to run from genesis to chain head
from := int64(0)
if query.FromBlock != nil {
from = query.FromBlock.Int64()
var filter *filters.Filter
if query.BlockHash != nil {
// Block filter requested, construct a single-shot filter
filter = filters.NewBlockFilter(&filterBackend{b.database, b.blockchain}, *query.BlockHash, query.Addresses, query.Topics)
} else {
// Initialize unset filter boundaries to run from genesis to chain head
from := int64(0)
if query.FromBlock != nil {
from = query.FromBlock.Int64()
}
to := int64(-1)
if query.ToBlock != nil {
to = query.ToBlock.Int64()
}
// Construct the range filter
filter = filters.NewRangeFilter(&filterBackend{b.database, b.blockchain}, from, to, query.Addresses, query.Topics)
}
to := int64(-1)
if query.ToBlock != nil {
to = query.ToBlock.Int64()
}
// Construct and execute the filter
filter := filters.New(&filterBackend{b.database, b.blockchain}, from, to, query.Addresses, query.Topics)
// Run the filter and return all the logs
logs, err := filter.Logs(ctx)
if err != nil {
return nil, err
@ -429,12 +436,24 @@ func (fb *filterBackend) HeaderByNumber(ctx context.Context, block rpc.BlockNumb
return fb.bc.GetHeaderByNumber(uint64(block.Int64())), nil
}
func (fb *filterBackend) HeaderByHash(ctx context.Context, hash common.Hash) (*types.Header, error) {
return fb.bc.GetHeaderByHash(hash), nil
}
func (fb *filterBackend) GetReceipts(ctx context.Context, hash common.Hash) (types.Receipts, error) {
return core.GetBlockReceipts(fb.db, hash, core.GetBlockNumber(fb.db, hash)), nil
number := rawdb.ReadHeaderNumber(fb.db, hash)
if number == nil {
return nil, nil
}
return rawdb.ReadReceipts(fb.db, hash, *number), nil
}
func (fb *filterBackend) GetLogs(ctx context.Context, hash common.Hash) ([][]*types.Log, error) {
receipts := core.GetBlockReceipts(fb.db, hash, core.GetBlockNumber(fb.db, hash))
number := rawdb.ReadHeaderNumber(fb.db, hash)
if number == nil {
return nil, nil
}
receipts := rawdb.ReadReceipts(fb.db, hash, *number)
if receipts == nil {
return nil, nil
}
@ -445,7 +464,7 @@ func (fb *filterBackend) GetLogs(ctx context.Context, hash common.Hash) ([][]*ty
return logs, nil
}
func (fb *filterBackend) SubscribeTxPreEvent(ch chan<- core.TxPreEvent) event.Subscription {
func (fb *filterBackend) SubscribeNewTxsEvent(ch chan<- core.NewTxsEvent) event.Subscription {
return event.NewSubscription(func(quit <-chan struct{}) error {
<-quit
return nil
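With this change a caller can filter a single block by hash; a minimal sketch, where blockHash, contractAddr and backend are placeholders:

query := ethereum.FilterQuery{
	BlockHash: &blockHash, // single-shot block filter
	Addresses: []common.Address{contractAddr},
}
logs, err := backend.FilterLogs(context.Background(), query)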

View File

@ -36,10 +36,10 @@ type SignerFn func(types.Signer, common.Address, *types.Transaction) (*types.Tra
// CallOpts is the collection of options to fine tune a contract call request.
type CallOpts struct {
Pending bool // Whether to operate on the pending state or the last known one
From common.Address // Optional the sender address, otherwise the first account is used
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
Pending bool // Whether to operate on the pending state or the last known one
From common.Address // Optional the sender address, otherwise the first account is used
BlockNumber *big.Int // Optional the block number on which the call should be performed
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// TransactOpts is the collection of authorization data required to create a
@ -148,10 +148,10 @@ func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string,
}
}
} else {
output, err = c.caller.CallContract(ctx, msg, nil)
output, err = c.caller.CallContract(ctx, msg, opts.BlockNumber)
if err == nil && len(output) == 0 {
// Make sure we have a contract to operate on, and bail out otherwise.
if code, err = c.caller.CodeAt(ctx, c.address, nil); err != nil {
if code, err = c.caller.CodeAt(ctx, c.address, opts.BlockNumber); err != nil {
return err
} else if len(code) == 0 {
return ErrNoCode
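A hedged usage sketch of the new field (contract, result and holder are placeholders):

opts := &bind.CallOpts{BlockNumber: big.NewInt(1234567)}
err := contract.Call(opts, &result, "balanceOf", holder)
// A nil BlockNumber preserves the old behaviour of querying the latest state.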

View File

@ -0,0 +1,64 @@
package bind_test
import (
"context"
"math/big"
"testing"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/common"
)
type mockCaller struct {
codeAtBlockNumber *big.Int
callContractBlockNumber *big.Int
}
func (mc *mockCaller) CodeAt(ctx context.Context, contract common.Address, blockNumber *big.Int) ([]byte, error) {
mc.codeAtBlockNumber = blockNumber
return []byte{1, 2, 3}, nil
}
func (mc *mockCaller) CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) {
mc.callContractBlockNumber = blockNumber
return nil, nil
}
func TestPassingBlockNumber(t *testing.T) {
mc := &mockCaller{}
bc := bind.NewBoundContract(common.HexToAddress("0x0"), abi.ABI{
Methods: map[string]abi.Method{
"something": {
Name: "something",
Outputs: abi.Arguments{},
},
},
}, mc, nil, nil)
var ret string
blockNumber := big.NewInt(42)
bc.Call(&bind.CallOpts{BlockNumber: blockNumber}, &ret, "something")
if mc.callContractBlockNumber != blockNumber {
t.Fatalf("CallContract() was not passed the block number")
}
if mc.codeAtBlockNumber != blockNumber {
t.Fatalf("CodeAt() was not passed the block number")
}
bc.Call(&bind.CallOpts{}, &ret, "something")
if mc.callContractBlockNumber != nil {
t.Fatalf("CallContract() was passed a block number when it should not have been")
}
if mc.codeAtBlockNumber != nil {
t.Fatalf("CodeAt() was passed a block number when it should not have been")
}
}

View File

@ -23,13 +23,13 @@ package bind
import (
"bytes"
"fmt"
"go/format"
"regexp"
"strings"
"text/template"
"unicode"
"github.com/ethereum/go-ethereum/accounts/abi"
"golang.org/x/tools/imports"
)
// Lang is a target programming language selector to generate bindings for.
@ -145,9 +145,9 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
if err := tmpl.Execute(buffer, data); err != nil {
return "", err
}
// For Go bindings pass the code through goimports to clean it up and double check
// For Go bindings pass the code through gofmt to clean it up
if lang == LangGo {
code, err := imports.Process(".", buffer.Bytes(), nil)
code, err := format.Source(buffer.Bytes())
if err != nil {
return "", fmt.Errorf("%v\n%s", err, buffer)
}
@ -207,7 +207,7 @@ func bindTypeGo(kind abi.Type) string {
// The inner function of bindTypeGo, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the the translated type.
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeGo(stringKind string) (int, string) {
switch {
@ -255,7 +255,7 @@ func bindTypeJava(kind abi.Type) string {
// The inner function of bindTypeJava, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the the translated type.
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeJava(stringKind string) (int, string) {
switch {
@ -381,25 +381,23 @@ func namedTypeJava(javaKind string, solKind abi.Type) string {
// methodNormalizer is a name transformer that modifies Solidity method names to
// conform to target language naming conventions.
var methodNormalizer = map[Lang]func(string) string{
LangGo: capitalise,
LangGo: abi.ToCamelCase,
LangJava: decapitalise,
}
// capitalise makes the first character of a string upper case, also removing any
// prefixing underscores from the variable names.
// capitalise makes a camel-case string which starts with an upper case character.
func capitalise(input string) string {
for len(input) > 0 && input[0] == '_' {
input = input[1:]
}
if len(input) == 0 {
return ""
}
return strings.ToUpper(input[:1]) + input[1:]
return abi.ToCamelCase(input)
}
// decapitalise makes the first character of a string lower case.
// decapitalise makes a camel-case string which starts with a lower case character.
func decapitalise(input string) string {
return strings.ToLower(input[:1]) + input[1:]
if len(input) == 0 {
return input
}
goForm := abi.ToCamelCase(input)
return strings.ToLower(goForm[:1]) + goForm[1:]
}
// structured checks whether a list of ABI data types has enough information to
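A quick sketch of the two normalizers after this change (inputs illustrative):

capitalise("_owner_name")   // "OwnerName" (Go binding name)
decapitalise("_owner_name") // "ownerName" (Java binding name)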

File diff suppressed because one or more lines are too long

View File

@ -64,6 +64,30 @@ const tmplSourceGo = `
package {{.Package}}
import (
"math/big"
"strings"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/event"
)
// Reference imports to suppress errors if they are not otherwise used.
var (
_ = big.NewInt
_ = strings.NewReader
_ = ethereum.NotFound
_ = abi.U256
_ = bind.Bind
_ = common.Big1
_ = types.BloomLookup
_ = event.NewSubscription
)
{{range $contract := .Contracts}}
// {{.Type}}ABI is the input ABI used to generate the binding from.
const {{.Type}}ABI = "{{.InputABI}}"

View File

@ -53,9 +53,11 @@ var waitDeployedTests = map[string]struct {
func TestWaitDeployed(t *testing.T) {
for name, test := range waitDeployedTests {
backend := backends.NewSimulatedBackend(core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
})
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
}, 10000000,
)
// Create the transaction.
tx := types.NewContractCreation(0, big.NewInt(0), test.gas, big.NewInt(1), common.FromHex(test.code))

View File

@ -33,15 +33,15 @@ type Event struct {
Inputs Arguments
}
func (event Event) String() string {
inputs := make([]string, len(event.Inputs))
for i, input := range event.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Name, input.Type)
func (e Event) String() string {
inputs := make([]string, len(e.Inputs))
for i, input := range e.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", input.Name, input.Type)
inputs[i] = fmt.Sprintf("%v indexed %v", input.Type, input.Name)
}
}
return fmt.Sprintf("event %v(%v)", event.Name, strings.Join(inputs, ", "))
return fmt.Sprintf("event %v(%v)", e.Name, strings.Join(inputs, ", "))
}
// Id returns the canonical representation of the event's signature used by the

View File

@ -58,12 +58,28 @@ var jsonEventPledge = []byte(`{
"type": "event"
}`)
var jsonEventMixedCase = []byte(`{
"anonymous": false,
"inputs": [{
"indexed": false, "name": "value", "type": "uint256"
}, {
"indexed": false, "name": "_value", "type": "uint256"
}, {
"indexed": false, "name": "Value", "type": "uint256"
}],
"name": "MixedCase",
"type": "event"
}`)
// 1000000
var transferData1 = "00000000000000000000000000000000000000000000000000000000000f4240"
// "0x00Ce0d46d924CC8437c806721496599FC3FFA268", 2218516807680, "usd"
var pledgeData1 = "00000000000000000000000000ce0d46d924cc8437c806721496599fc3ffa2680000000000000000000000000000000000000000000000000000020489e800007573640000000000000000000000000000000000000000000000000000000000"
// 1000000,2218516807680,1000001
var mixedCaseData1 = "00000000000000000000000000000000000000000000000000000000000f42400000000000000000000000000000000000000000000000000000020489e8000000000000000000000000000000000000000000000000000000000000000f4241"
func TestEventId(t *testing.T) {
var table = []struct {
definition string
@ -71,12 +87,12 @@ func TestEventId(t *testing.T) {
}{
{
definition: `[
{ "type" : "event", "name" : "balance", "inputs": [{ "name" : "in", "type": "uint256" }] },
{ "type" : "event", "name" : "check", "inputs": [{ "name" : "t", "type": "address" }, { "name": "b", "type": "uint256" }] }
{ "type" : "event", "name" : "Balance", "inputs": [{ "name" : "in", "type": "uint256" }] },
{ "type" : "event", "name" : "Check", "inputs": [{ "name" : "t", "type": "address" }, { "name": "b", "type": "uint256" }] }
]`,
expectations: map[string]common.Hash{
"balance": crypto.Keccak256Hash([]byte("balance(uint256)")),
"check": crypto.Keccak256Hash([]byte("check(address,uint256)")),
"Balance": crypto.Keccak256Hash([]byte("Balance(uint256)")),
"Check": crypto.Keccak256Hash([]byte("Check(address,uint256)")),
},
},
}
@ -95,6 +111,39 @@ func TestEventId(t *testing.T) {
}
}
func TestEventString(t *testing.T) {
var table = []struct {
definition string
expectations map[string]string
}{
{
definition: `[
{ "type" : "event", "name" : "Balance", "inputs": [{ "name" : "in", "type": "uint256" }] },
{ "type" : "event", "name" : "Check", "inputs": [{ "name" : "t", "type": "address" }, { "name": "b", "type": "uint256" }] },
{ "type" : "event", "name" : "Transfer", "inputs": [{ "name": "from", "type": "address", "indexed": true }, { "name": "to", "type": "address", "indexed": true }, { "name": "value", "type": "uint256" }] }
]`,
expectations: map[string]string{
"Balance": "event Balance(uint256 in)",
"Check": "event Check(address t, uint256 b)",
"Transfer": "event Transfer(address indexed from, address indexed to, uint256 value)",
},
},
}
for _, test := range table {
abi, err := JSON(strings.NewReader(test.definition))
if err != nil {
t.Fatal(err)
}
for name, event := range abi.Events {
if event.String() != test.expectations[name] {
t.Errorf("expected string to be %s, got %s", test.expectations[name], event.String())
}
}
}
}
// TestEventMultiValueWithArrayUnpack verifies that array fields are counted correctly after parsing the array.
func TestEventMultiValueWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": false, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
@ -121,6 +170,27 @@ func TestEventTupleUnpack(t *testing.T) {
Value *big.Int
}
type EventTransferWithTag struct {
// this is valid because `value` is not exported,
// so value is only unmarshalled into `Value1`.
value *big.Int
Value1 *big.Int `abi:"value"`
}
type BadEventTransferWithSameFieldAndTag struct {
Value *big.Int
Value1 *big.Int `abi:"value"`
}
type BadEventTransferWithDuplicatedTag struct {
Value1 *big.Int `abi:"value"`
Value2 *big.Int `abi:"value"`
}
type BadEventTransferWithEmptyTag struct {
Value *big.Int `abi:""`
}
type EventPledge struct {
Who common.Address
Wad *big.Int
@ -133,9 +203,16 @@ func TestEventTupleUnpack(t *testing.T) {
Currency [3]byte
}
type EventMixedCase struct {
Value1 *big.Int `abi:"value"`
Value2 *big.Int `abi:"_value"`
Value3 *big.Int `abi:"Value"`
}
bigint := new(big.Int)
bigintExpected := big.NewInt(1000000)
bigintExpected2 := big.NewInt(2218516807680)
bigintExpected3 := big.NewInt(1000001)
addr := common.HexToAddress("0x00Ce0d46d924CC8437c806721496599FC3FFA268")
var testCases = []struct {
data string
@ -158,6 +235,34 @@ func TestEventTupleUnpack(t *testing.T) {
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into slice",
}, {
transferData1,
&EventTransferWithTag{},
&EventTransferWithTag{Value1: bigintExpected},
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into structure with abi: tag",
}, {
transferData1,
&BadEventTransferWithDuplicatedTag{},
&BadEventTransferWithDuplicatedTag{},
jsonEventTransfer,
"struct: abi tag in 'Value2' already mapped",
"Can not unpack ERC20 Transfer event with duplicated abi tag",
}, {
transferData1,
&BadEventTransferWithSameFieldAndTag{},
&BadEventTransferWithSameFieldAndTag{},
jsonEventTransfer,
"abi: multiple variables maps to the same abi field 'value'",
"Can not unpack ERC20 Transfer event with a field and a tag mapping to the same abi variable",
}, {
transferData1,
&BadEventTransferWithEmptyTag{},
&BadEventTransferWithEmptyTag{},
jsonEventTransfer,
"struct: abi tag in 'Value' is empty",
"Can not unpack ERC20 Transfer event with an empty tag",
}, {
pledgeData1,
&EventPledge{},
@ -216,6 +321,13 @@ func TestEventTupleUnpack(t *testing.T) {
jsonEventPledge,
"abi: cannot unmarshal tuple into map[string]interface {}",
"Can not unpack Pledge event into map",
}, {
mixedCaseData1,
&EventMixedCase{},
&EventMixedCase{Value1: bigintExpected, Value2: bigintExpected2, Value3: bigintExpected3},
jsonEventMixedCase,
"",
"Can unpack abi variables with mixed case",
}}
for _, tc := range testCases {
@ -227,7 +339,7 @@ func TestEventTupleUnpack(t *testing.T) {
assert.Nil(err, "Should be able to unpack event data.")
assert.Equal(tc.expected, tc.dest, tc.name)
} else {
assert.EqualError(err, tc.error)
assert.EqualError(err, tc.error, tc.name)
}
})
}
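A minimal sketch of the tag redirection these cases exercise, assuming parsedABI holds the parsed ERC20 ABI and data an event payload:

type Transfer struct {
	Amount *big.Int `abi:"value"` // redirect the ABI field "value" onto Amount
}
var out Transfer
err := parsedABI.Unpack(&out, "Transfer", data)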

View File

@ -47,10 +47,8 @@ type Method struct {
// Please note that "int" is substitute for its canonical representation "int256"
func (method Method) Sig() string {
types := make([]string, len(method.Inputs))
i := 0
for _, input := range method.Inputs {
for i, input := range method.Inputs {
types[i] = input.Type.String()
i++
}
return fmt.Sprintf("%v(%v)", method.Name, strings.Join(types, ","))
}
@ -58,14 +56,14 @@ func (method Method) Sig() string {
func (method Method) String() string {
inputs := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Name, input.Type)
inputs[i] = fmt.Sprintf("%v %v", input.Type, input.Name)
}
outputs := make([]string, len(method.Outputs))
for i, output := range method.Outputs {
outputs[i] = output.Type.String()
if len(output.Name) > 0 {
outputs[i] = fmt.Sprintf("%v ", output.Name)
outputs[i] += fmt.Sprintf(" %v", output.Name)
}
outputs[i] += output.Type.String()
}
constant := ""
if method.Const {

View File

@ -0,0 +1,61 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"strings"
"testing"
)
const methoddata = `
[
{ "type" : "function", "name" : "balance", "constant" : true },
{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "type" : "function", "name" : "transfer", "constant" : false, "inputs" : [ { "name" : "from", "type" : "address" }, { "name" : "to", "type" : "address" }, { "name" : "value", "type" : "uint256" } ], "outputs" : [ { "name" : "success", "type" : "bool" } ] }
]`
func TestMethodString(t *testing.T) {
var table = []struct {
method string
expectation string
}{
{
method: "balance",
expectation: "function balance() constant returns()",
},
{
method: "send",
expectation: "function send(uint256 amount) returns()",
},
{
method: "transfer",
expectation: "function transfer(address from, address to, uint256 value) returns(bool success)",
},
}
abi, err := JSON(strings.NewReader(methoddata))
if err != nil {
t.Fatal(err)
}
for _, test := range table {
got := abi.Methods[test.method].String()
if got != test.expectation {
t.Errorf("expected string to be %s, got %s", test.expectation, got)
}
}
}

View File

@ -25,35 +25,20 @@ import (
)
var (
big_t = reflect.TypeOf(&big.Int{})
derefbig_t = reflect.TypeOf(big.Int{})
uint8_t = reflect.TypeOf(uint8(0))
uint16_t = reflect.TypeOf(uint16(0))
uint32_t = reflect.TypeOf(uint32(0))
uint64_t = reflect.TypeOf(uint64(0))
int_t = reflect.TypeOf(int(0))
int8_t = reflect.TypeOf(int8(0))
int16_t = reflect.TypeOf(int16(0))
int32_t = reflect.TypeOf(int32(0))
int64_t = reflect.TypeOf(int64(0))
address_t = reflect.TypeOf(common.Address{})
int_ts = reflect.TypeOf([]int(nil))
int8_ts = reflect.TypeOf([]int8(nil))
int16_ts = reflect.TypeOf([]int16(nil))
int32_ts = reflect.TypeOf([]int32(nil))
int64_ts = reflect.TypeOf([]int64(nil))
bigT = reflect.TypeOf(&big.Int{})
derefbigT = reflect.TypeOf(big.Int{})
uint8T = reflect.TypeOf(uint8(0))
uint16T = reflect.TypeOf(uint16(0))
uint32T = reflect.TypeOf(uint32(0))
uint64T = reflect.TypeOf(uint64(0))
int8T = reflect.TypeOf(int8(0))
int16T = reflect.TypeOf(int16(0))
int32T = reflect.TypeOf(int32(0))
int64T = reflect.TypeOf(int64(0))
addressT = reflect.TypeOf(common.Address{})
)
// U256 converts a big.Int into a 256-bit EVM number.
func U256(n *big.Int) []byte {
return math.PaddedBigBytes(math.U256(n), 32)
}
// checks whether the given reflect value is signed. This also works for slices with a number type
func isSigned(v reflect.Value) bool {
switch v.Type() {
case int_ts, int8_ts, int16_ts, int32_ts, int64_ts, int_t, int8_t, int16_t, int32_t, int64_t:
return true
}
return false
}
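U256 itself is untouched; for reference, a one-line sketch of its output:

word := U256(big.NewInt(1)) // 32-byte big-endian word: 31 zero bytes, then 0x01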

View File

@ -19,7 +19,6 @@ package abi
import (
"bytes"
"math/big"
"reflect"
"testing"
)
@ -32,13 +31,3 @@ func TestNumberTypes(t *testing.T) {
t.Errorf("expected %x got %x", ubytes, unsigned)
}
}
func TestSigned(t *testing.T) {
if isSigned(reflect.ValueOf(uint(10))) {
t.Error("signed")
}
if !isSigned(reflect.ValueOf(int(10))) {
t.Error("not signed")
}
}

View File

@ -29,314 +29,601 @@ import (
func TestPack(t *testing.T) {
for i, test := range []struct {
typ string
input interface{}
output []byte
typ string
components []ArgumentMarshaling
input interface{}
output []byte
}{
{
"uint8",
nil,
uint8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint8[]",
nil,
[]uint8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16",
nil,
uint16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16[]",
nil,
[]uint16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32",
nil,
uint32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32[]",
nil,
[]uint32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64",
nil,
uint64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64[]",
nil,
[]uint64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256",
nil,
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256[]",
nil,
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8",
nil,
int8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8[]",
nil,
[]int8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16",
nil,
int16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16[]",
nil,
[]int16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32",
nil,
int32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32[]",
nil,
[]int32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64",
nil,
int64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64[]",
nil,
[]int64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256",
nil,
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256[]",
nil,
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"bytes1",
nil,
[1]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes2",
nil,
[2]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes3",
nil,
[3]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes4",
nil,
[4]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes5",
nil,
[5]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes6",
nil,
[6]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes7",
nil,
[7]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes8",
nil,
[8]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes9",
nil,
[9]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes10",
nil,
[10]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes11",
nil,
[11]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes12",
nil,
[12]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes13",
nil,
[13]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes14",
nil,
[14]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes15",
nil,
[15]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes16",
nil,
[16]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes17",
nil,
[17]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes18",
nil,
[18]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes19",
nil,
[19]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes20",
nil,
[20]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes21",
nil,
[21]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes22",
nil,
[22]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes23",
nil,
[23]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes24",
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes24",
nil,
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes25",
nil,
[25]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes26",
nil,
[26]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes27",
nil,
[27]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes28",
nil,
[28]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes29",
nil,
[29]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes30",
nil,
[30]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes31",
nil,
[31]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes32",
nil,
[32]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"uint32[2][3][4]",
nil,
[4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018"),
},
{
"address[]",
nil,
[]common.Address{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000"),
},
{
"bytes32[]",
nil,
[]common.Hash{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000201000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000"),
},
{
"function",
nil,
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"string",
nil,
"foobar",
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000006666f6f6261720000000000000000000000000000000000000000000000000000"),
},
{
"string[]",
nil,
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset 128 to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"string[2]",
nil,
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"bytes32[][]",
nil,
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
"bytes32[][2]",
nil,
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
"bytes32[3][2]",
nil,
[][]common.Hash{{{1}, {2}, {3}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
// static tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "int64"},
{Name: "b", Type: "int256"},
{Name: "c", Type: "int256"},
{Name: "d", Type: "bool"},
{Name: "e", Type: "bytes32[3][2]"},
},
struct {
A int64
B *big.Int
C *big.Int
D bool
E [][]common.Hash
}{1, big.NewInt(1), big.NewInt(-1), true, [][]common.Hash{{{1}, {2}, {3}}, {{3}, {4}, {5}}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001" + // struct[a]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // struct[c]
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[d]
"0100000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // struct[e] array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // struct[e] array[1][2]
},
{
// dynamic tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "string"},
{Name: "b", Type: "int64"},
{Name: "c", Type: "bytes"},
{Name: "d", Type: "string[]"},
{Name: "e", Type: "int256[]"},
{Name: "f", Type: "address[]"},
},
struct {
FieldA string `abi:"a"` // Test whether abi tag works
FieldB int64 `abi:"b"`
C []byte
D []string
E []*big.Int
F []common.Address
}{"foobar", 1, []byte{1}, []string{"foo", "bar"}, []*big.Int{big.NewInt(1), big.NewInt(-1)}, []common.Address{{1}, {2}}},
common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000000000c0" + // struct[a] offset
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[b]
"0000000000000000000000000000000000000000000000000000000000000100" + // struct[c] offset
"0000000000000000000000000000000000000000000000000000000000000140" + // struct[d] offset
"0000000000000000000000000000000000000000000000000000000000000220" + // struct[e] offset
"0000000000000000000000000000000000000000000000000000000000000280" + // struct[f] offset
"0000000000000000000000000000000000000000000000000000000000000006" + // struct[a] length
"666f6f6261720000000000000000000000000000000000000000000000000000" + // struct[a] "foobar"
"0000000000000000000000000000000000000000000000000000000000000001" + // struct[c] length
"0100000000000000000000000000000000000000000000000000000000000000" + // []byte{1}
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[d] length
"0000000000000000000000000000000000000000000000000000000000000040" + // foo offset
"0000000000000000000000000000000000000000000000000000000000000080" + // bar offset
"0000000000000000000000000000000000000000000000000000000000000003" + // foo length
"666f6f0000000000000000000000000000000000000000000000000000000000" + // foo
"0000000000000000000000000000000000000000000000000000000000000003" + // bar offset
"6261720000000000000000000000000000000000000000000000000000000000" + // bar
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[e] length
"0000000000000000000000000000000000000000000000000000000000000001" + // 1
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // -1
"0000000000000000000000000000000000000000000000000000000000000002" + // struct[f] length
"0000000000000000000000000100000000000000000000000000000000000000" + // common.Address{1}
"0000000000000000000000000200000000000000000000000000000000000000"), // common.Address{2}
},
{
// nested tuple
"tuple",
[]ArgumentMarshaling{
{Name: "a", Type: "tuple", Components: []ArgumentMarshaling{{Name: "a", Type: "uint256"}, {Name: "b", Type: "uint256[]"}}},
{Name: "b", Type: "int256[]"},
},
struct {
A struct {
FieldA *big.Int `abi:"a"`
B []*big.Int
}
B []*big.Int
}{
A: struct {
FieldA *big.Int `abi:"a"` // Test whether abi tag works for nested tuple
B []*big.Int
}{big.NewInt(1), []*big.Int{big.NewInt(1), big.NewInt(0)}},
B: []*big.Int{big.NewInt(1), big.NewInt(0)}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // a offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // b offset
"0000000000000000000000000000000000000000000000000000000000000001" + // a.a value
"0000000000000000000000000000000000000000000000000000000000000040" + // a.b offset
"0000000000000000000000000000000000000000000000000000000000000002" + // a.b length
"0000000000000000000000000000000000000000000000000000000000000001" + // a.b[0] value
"0000000000000000000000000000000000000000000000000000000000000000" + // a.b[1] value
"0000000000000000000000000000000000000000000000000000000000000002" + // b length
"0000000000000000000000000000000000000000000000000000000000000001" + // b[0] value
"0000000000000000000000000000000000000000000000000000000000000000"), // b[1] value
},
{
// tuple slice
"tuple[]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256[]"},
},
[]struct {
A *big.Int
B []*big.Int
}{
{big.NewInt(-1), []*big.Int{big.NewInt(1), big.NewInt(0)}},
{big.NewInt(1), []*big.Int{big.NewInt(2), big.NewInt(-1)}},
},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // tuple length
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000e0" + // tuple[1] offset
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].B length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].B[0] value
"0000000000000000000000000000000000000000000000000000000000000000" + // tuple[0].B[1] value
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A
"0000000000000000000000000000000000000000000000000000000000000040" + // tuple[1].B offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B length
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].B[0] value
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].B[1] value
},
{
// static tuple array
"tuple[2]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256"},
{Name: "b", Type: "int256"},
},
[2]struct {
A *big.Int
B *big.Int
}{
{big.NewInt(-1), big.NewInt(1)},
{big.NewInt(1), big.NewInt(-1)},
},
common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].a
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].b
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].a
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].b
},
{
// dynamic tuple array
"tuple[2]",
[]ArgumentMarshaling{
{Name: "a", Type: "int256[]"},
},
[2]struct {
A []*big.Int
}{
{[]*big.Int{big.NewInt(-1), big.NewInt(1)}},
{[]*big.Int{big.NewInt(1), big.NewInt(-1)}},
},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // tuple[0] offset
"00000000000000000000000000000000000000000000000000000000000000c0" + // tuple[1] offset
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[0].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[0].A length
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff" + // tuple[0].A[0]
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[0].A[1]
"0000000000000000000000000000000000000000000000000000000000000020" + // tuple[1].A offset
"0000000000000000000000000000000000000000000000000000000000000002" + // tuple[1].A length
"0000000000000000000000000000000000000000000000000000000000000001" + // tuple[1].A[0]
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), // tuple[1].A[1]
},
} {
typ, err := NewType(test.typ)
typ, err := NewType(test.typ, test.components)
if err != nil {
t.Fatalf("%v failed. Unexpected parse error: %v", i, err)
}
output, err := typ.pack(reflect.ValueOf(test.input))
if err != nil {
t.Fatalf("%v failed. Unexpected pack error: %v", i, err)
}
if !bytes.Equal(output, test.output) {
t.Errorf("%d failed. Expected bytes: '%x' Got: '%x'", i, test.output, output)
t.Errorf("input %d for typ: %v failed. Expected bytes: '%x' Got: '%x'", i, typ.String(), test.output, output)
}
}
}
@ -406,6 +693,59 @@ func TestMethodPack(t *testing.T) {
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
a := [2][2]*big.Int{{big.NewInt(1), big.NewInt(1)}, {big.NewInt(2), big.NewInt(0)}}
sig = abi.Methods["nestedArray"].Id()
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0xa0}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrC[:], 32)...)
sig = append(sig, common.LeftPadBytes(addrD[:], 32)...)
packed, err = abi.Pack("nestedArray", a, []common.Address{addrC, addrD})
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["nestedArray2"].Id()
sig = append(sig, common.LeftPadBytes([]byte{0x20}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x40}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x80}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
packed, err = abi.Pack("nestedArray2", [2][]uint8{{1}, {1}})
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["nestedSlice"].Id()
sig = append(sig, common.LeftPadBytes([]byte{0x20}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x02}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0x40}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{0xa0}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
packed, err = abi.Pack("nestedSlice", [][]uint8{{1, 2}, {1, 2}})
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
}
func TestPackNumber(t *testing.T) {

View File

@ -19,12 +19,13 @@ package abi
import (
"fmt"
"reflect"
"strings"
)
// indirect recursively dereferences the value until it reaches a non-pointer
// value or stops at a *big.Int
func indirect(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Ptr && v.Elem().Type() != derefbig_t {
if v.Kind() == reflect.Ptr && v.Elem().Type() != derefbigT {
return indirect(v.Elem())
}
return v
@ -36,26 +37,26 @@ func reflectIntKindAndType(unsigned bool, size int) (reflect.Kind, reflect.Type)
switch size {
case 8:
if unsigned {
return reflect.Uint8, uint8_t
return reflect.Uint8, uint8T
}
return reflect.Int8, int8_t
return reflect.Int8, int8T
case 16:
if unsigned {
return reflect.Uint16, uint16_t
return reflect.Uint16, uint16T
}
return reflect.Int16, int16_t
return reflect.Int16, int16T
case 32:
if unsigned {
return reflect.Uint32, uint32_t
return reflect.Uint32, uint32T
}
return reflect.Int32, int32_t
return reflect.Int32, int32T
case 64:
if unsigned {
return reflect.Uint64, uint64_t
return reflect.Uint64, uint64T
}
return reflect.Int64, int64_t
return reflect.Int64, int64T
}
return reflect.Ptr, big_t
return reflect.Ptr, bigT
}
// mustArrayToByteSlice creates a new byte slice with the exact same size as the given array value
@ -70,22 +71,36 @@ func mustArrayToByteSlice(value reflect.Value) reflect.Value {
//
// set is a bit more lenient when it comes to assignment and doesn't enforce
// as strict a ruleset as the bare `reflect` package does.
func set(dst, src reflect.Value, output Argument) error {
dstType := dst.Type()
srcType := src.Type()
func set(dst, src reflect.Value) error {
dstType, srcType := dst.Type(), src.Type()
switch {
case dstType.AssignableTo(srcType):
dst.Set(src)
case dstType.Kind() == reflect.Interface:
return set(dst.Elem(), src)
case dstType.Kind() == reflect.Ptr && dstType.Elem() != derefbigT:
return set(dst.Elem(), src)
case srcType.AssignableTo(dstType) && dst.CanSet():
dst.Set(src)
case dstType.Kind() == reflect.Ptr:
return set(dst.Elem(), src, output)
case dstType.Kind() == reflect.Slice && srcType.Kind() == reflect.Slice:
return setSlice(dst, src)
default:
return fmt.Errorf("abi: cannot unmarshal %v in to %v", src.Type(), dst.Type())
}
return nil
}
// setSlice attempts to assign src to dst when slices are not assignable by default
// e.g. src: [][]byte -> dst: [][15]byte
func setSlice(dst, src reflect.Value) error {
slice := reflect.MakeSlice(dst.Type(), src.Len(), src.Len())
for i := 0; i < src.Len(); i++ {
v := src.Index(i)
reflect.Copy(slice.Index(i), v)
}
dst.Set(slice)
return nil
}
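// Editor's sketch (not part of the diff): the copy loop above is what lets
// unpacking convert element-wise between slice types that plain reflection
// rejects as non-assignable, e.g. [][]byte -> [][2]byte. The example function
// name is hypothetical.
func exampleSetSlice() {
	src := reflect.ValueOf([][]byte{{1, 2}, {3, 4}})
	var dst [][2]byte
	dstV := reflect.ValueOf(&dst).Elem()
	out := reflect.MakeSlice(dstV.Type(), src.Len(), src.Len())
	for i := 0; i < src.Len(); i++ {
		reflect.Copy(out.Index(i), src.Index(i)) // byte-wise copy per element
	}
	dstV.Set(out) // dst == [][2]byte{{1, 2}, {3, 4}}
}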
// requireAssignable ensures that `dst` is a pointer and is not an interface.
func requireAssignable(dst, src reflect.Value) error {
if dst.Kind() != reflect.Ptr && dst.Kind() != reflect.Interface {
@ -110,3 +125,94 @@ func requireUnpackKind(v reflect.Value, t reflect.Type, k reflect.Kind,
}
return nil
}
// mapArgNamesToStructFields maps a slice of argument names to struct fields.
// First round: for each exportable field that carries an `abi:""` tag whose
// value appears in the given argument name list, pair the two together.
// Second round: for each argument name that has not yet been linked, find the
// struct field it is expected to map into; if that field exists and has not
// been used, pair them.
// Note: this function assumes the given value is a struct value.
func mapArgNamesToStructFields(argNames []string, value reflect.Value) (map[string]string, error) {
typ := value.Type()
abi2struct := make(map[string]string)
struct2abi := make(map[string]string)
// first round ~~~
for i := 0; i < typ.NumField(); i++ {
structFieldName := typ.Field(i).Name
// skip private struct fields.
if structFieldName[:1] != strings.ToUpper(structFieldName[:1]) {
continue
}
// skip fields that have no abi:"" tag.
var ok bool
var tagName string
if tagName, ok = typ.Field(i).Tag.Lookup("abi"); !ok {
continue
}
// check if tag is empty.
if tagName == "" {
return nil, fmt.Errorf("struct: abi tag in '%s' is empty", structFieldName)
}
// check which argument field matches with the abi tag.
found := false
for _, arg := range argNames {
if arg == tagName {
if abi2struct[arg] != "" {
return nil, fmt.Errorf("struct: abi tag in '%s' already mapped", structFieldName)
}
// pair them
abi2struct[arg] = structFieldName
struct2abi[structFieldName] = arg
found = true
}
}
// return an error if the tag matched none of the argument names.
if !found {
return nil, fmt.Errorf("struct: abi tag '%s' defined but not found in abi", tagName)
}
}
// second round ~~~
for _, argName := range argNames {
structFieldName := ToCamelCase(argName)
if structFieldName == "" {
return nil, fmt.Errorf("abi: purely underscored output cannot unpack to struct")
}
// this abi has already been paired, skip it... unless there exists another, yet unassigned
// struct field with the same field name. If so, raise an error:
// abi: [ { "name": "value" } ]
// struct { Value *big.Int , Value1 *big.Int `abi:"value"`}
if abi2struct[argName] != "" {
if abi2struct[argName] != structFieldName &&
struct2abi[structFieldName] == "" &&
value.FieldByName(structFieldName).IsValid() {
return nil, fmt.Errorf("abi: multiple variables maps to the same abi field '%s'", argName)
}
continue
}
// return an error if this struct field has already been paired.
if struct2abi[structFieldName] != "" {
return nil, fmt.Errorf("abi: multiple outputs mapping to the same struct field '%s'", structFieldName)
}
if value.FieldByName(structFieldName).IsValid() {
// pair them
abi2struct[argName] = structFieldName
struct2abi[structFieldName] = argName
} else {
// not paired, but annotate as used, to detect cases like
// abi : [ { "name": "value" }, { "name": "_value" } ]
// struct { Value *big.Int }
struct2abi[structFieldName] = argName
}
}
return abi2struct, nil
}
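// Editor's sketch (not part of the diff): how both pairing rounds play out
// for one untagged and one tagged field; the example function name is
// hypothetical.
func exampleMapArgNames() {
	m, _ := mapArgNamesToStructFields(
		[]string{"fieldA", "renamed"},
		reflect.ValueOf(struct {
			FieldA int
			Custom int `abi:"renamed"`
		}{}),
	)
	// Round 1 pairs "renamed" -> Custom via the tag; round 2 pairs
	// "fieldA" -> FieldA by CamelCase lookup.
	_ = m // m == map[string]string{"renamed": "Custom", "fieldA": "FieldA"}
}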

View File

@ -0,0 +1,191 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"reflect"
"testing"
)
type reflectTest struct {
name string
args []string
struc interface{}
want map[string]string
err string
}
var reflectTests = []reflectTest{
{
name: "OneToOneCorrespondance",
args: []string{"fieldA"},
struc: struct {
FieldA int `abi:"fieldA"`
}{},
want: map[string]string{
"fieldA": "FieldA",
},
},
{
name: "MissingFieldsInStruct",
args: []string{"fieldA", "fieldB"},
struc: struct {
FieldA int `abi:"fieldA"`
}{},
want: map[string]string{
"fieldA": "FieldA",
},
},
{
name: "MoreFieldsInStructThanArgs",
args: []string{"fieldA"},
struc: struct {
FieldA int `abi:"fieldA"`
FieldB int
}{},
want: map[string]string{
"fieldA": "FieldA",
},
},
{
name: "MissingFieldInArgs",
args: []string{"fieldA"},
struc: struct {
FieldA int `abi:"fieldA"`
FieldB int `abi:"fieldB"`
}{},
err: "struct: abi tag 'fieldB' defined but not found in abi",
},
{
name: "NoAbiDescriptor",
args: []string{"fieldA"},
struc: struct {
FieldA int
}{},
want: map[string]string{
"fieldA": "FieldA",
},
},
{
name: "NoArgs",
args: []string{},
struc: struct {
FieldA int `abi:"fieldA"`
}{},
err: "struct: abi tag 'fieldA' defined but not found in abi",
},
{
name: "DifferentName",
args: []string{"fieldB"},
struc: struct {
FieldA int `abi:"fieldB"`
}{},
want: map[string]string{
"fieldB": "FieldA",
},
},
{
name: "MultipleFields",
args: []string{"fieldA", "fieldB"},
struc: struct {
FieldA int `abi:"fieldA"`
FieldB int `abi:"fieldB"`
}{},
want: map[string]string{
"fieldA": "FieldA",
"fieldB": "FieldB",
},
},
{
name: "MultipleFieldsABIMissing",
args: []string{"fieldA", "fieldB"},
struc: struct {
FieldA int `abi:"fieldA"`
FieldB int
}{},
want: map[string]string{
"fieldA": "FieldA",
"fieldB": "FieldB",
},
},
{
name: "NameConflict",
args: []string{"fieldB"},
struc: struct {
FieldA int `abi:"fieldB"`
FieldB int
}{},
err: "abi: multiple variables maps to the same abi field 'fieldB'",
},
{
name: "Underscored",
args: []string{"_"},
struc: struct {
FieldA int
}{},
err: "abi: purely underscored output cannot unpack to struct",
},
{
name: "DoubleMapping",
args: []string{"fieldB", "fieldC", "fieldA"},
struc: struct {
FieldA int `abi:"fieldC"`
FieldB int
}{},
err: "abi: multiple outputs mapping to the same struct field 'FieldA'",
},
{
name: "AlreadyMapped",
args: []string{"fieldB", "fieldB"},
struc: struct {
FieldB int `abi:"fieldB"`
}{},
err: "struct: abi tag in 'FieldB' already mapped",
},
}
func TestReflectNameToStruct(t *testing.T) {
for _, test := range reflectTests {
t.Run(test.name, func(t *testing.T) {
m, err := mapArgNamesToStructFields(test.args, reflect.ValueOf(test.struc))
if len(test.err) > 0 {
if err == nil || err.Error() != test.err {
t.Fatalf("Invalid error: expected %v, got %v", test.err, err)
}
} else {
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
for fname := range test.want {
if m[fname] != test.want[fname] {
t.Fatalf("Incorrect value for field %s: expected %v, got %v", fname, test.want[fname], m[fname])
}
}
}
})
}
}

View File

@ -17,6 +17,7 @@
package abi
import (
"errors"
"fmt"
"reflect"
"regexp"
@ -32,6 +33,7 @@ const (
StringTy
SliceTy
ArrayTy
TupleTy
AddressTy
FixedBytesTy
BytesTy
@ -43,13 +45,16 @@ const (
// Type is the reflection of the supported argument type
type Type struct {
Elem *Type
Kind reflect.Kind
Type reflect.Type
Size int
T byte // Our own type checking
stringKind string // holds the unparsed string for deriving signatures
// Tuple-related fields
TupleElems []*Type // Type information of all tuple fields
TupleRawNames []string // Raw field name of all tuple fields
}
var (
@ -58,7 +63,7 @@ var (
)
// NewType creates a new reflection type of abi type given in t.
func NewType(t string) (typ Type, err error) {
func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
// check that array brackets are equal if they exist
if strings.Count(t, "[") != strings.Count(t, "]") {
return Type{}, fmt.Errorf("invalid arg type in abi")
@ -71,7 +76,7 @@ func NewType(t string) (typ Type, err error) {
if strings.Count(t, "[") != 0 {
i := strings.LastIndex(t, "[")
// recursively embed the type
embeddedType, err := NewType(t[:i])
embeddedType, err := NewType(t[:i], components)
if err != nil {
return Type{}, err
}
@ -87,6 +92,9 @@ func NewType(t string) (typ Type, err error) {
typ.Kind = reflect.Slice
typ.Elem = &embeddedType
typ.Type = reflect.SliceOf(embeddedType.Type)
if embeddedType.T == TupleTy {
typ.stringKind = embeddedType.stringKind + sliced
}
} else if len(intz) == 1 {
// is an array
typ.T = ArrayTy
@ -97,13 +105,21 @@ func NewType(t string) (typ Type, err error) {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
typ.Type = reflect.ArrayOf(typ.Size, embeddedType.Type)
if embeddedType.T == TupleTy {
typ.stringKind = embeddedType.stringKind + sliced
}
} else {
return Type{}, fmt.Errorf("invalid formatting of array type")
}
return typ, err
}
// parse the type and size of the abi-type.
parsedType := typeRegex.FindAllStringSubmatch(t, -1)[0]
matches := typeRegex.FindAllStringSubmatch(t, -1)
if len(matches) == 0 {
return Type{}, fmt.Errorf("invalid type '%v'", t)
}
parsedType := matches[0]
// varSize is the size of the variable
var varSize int
if len(parsedType[3]) > 0 {
@ -135,7 +151,7 @@ func NewType(t string) (typ Type, err error) {
typ.Type = reflect.TypeOf(bool(false))
case "address":
typ.Kind = reflect.Array
typ.Type = address_t
typ.Type = addressT
typ.Size = 20
typ.T = AddressTy
case "string":
@ -153,6 +169,40 @@ func NewType(t string) (typ Type, err error) {
typ.Size = varSize
typ.Type = reflect.ArrayOf(varSize, reflect.TypeOf(byte(0)))
}
case "tuple":
var (
fields []reflect.StructField
elems []*Type
names []string
expression string // canonical parameter expression
)
expression += "("
for idx, c := range components {
cType, err := NewType(c.Type, c.Components)
if err != nil {
return Type{}, err
}
if ToCamelCase(c.Name) == "" {
return Type{}, errors.New("abi: purely anonymous or underscored field is not supported")
}
fields = append(fields, reflect.StructField{
Name: ToCamelCase(c.Name), // reflect.StructOf will panic for any unexported field name, hence the CamelCase conversion.
Type: cType.Type,
})
elems = append(elems, &cType)
names = append(names, c.Name)
expression += cType.stringKind
if idx != len(components)-1 {
expression += ","
}
}
expression += ")"
typ.Kind = reflect.Struct
typ.Type = reflect.StructOf(fields)
typ.TupleElems = elems
typ.TupleRawNames = names
typ.T = TupleTy
typ.stringKind = expression
case "function":
typ.Kind = reflect.Array
typ.T = FunctionTy
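// Editor's sketch (not part of the diff), using this revision's two-argument
// NewType: the tuple branch above builds a reflect struct type and the
// canonical signature expression from the components, e.g.
//
//	tup, _ := NewType("tuple", []ArgumentMarshaling{
//		{Name: "a", Type: "int64"},
//		{Name: "b", Type: "address"},
//	})
//	// tup.Kind       == reflect.Struct
//	// tup.Type       == reflect.TypeOf(struct{ A int64; B common.Address }{})
//	// tup.stringKind == "(int64,address)"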
@ -173,28 +223,82 @@ func (t Type) String() (out string) {
func (t Type) pack(v reflect.Value) ([]byte, error) {
// dereference pointer first if it's a pointer
v = indirect(v)
if err := typeCheck(t, v); err != nil {
return nil, err
}
if t.T == SliceTy || t.T == ArrayTy {
var packed []byte
switch t.T {
case SliceTy, ArrayTy:
var ret []byte
if t.requiresLengthPrefix() {
// append length
ret = append(ret, packNum(reflect.ValueOf(v.Len()))...)
}
// calculate offset if any
offset := 0
offsetReq := isDynamicType(*t.Elem)
if offsetReq {
offset = getTypeSize(*t.Elem) * v.Len()
}
var tail []byte
for i := 0; i < v.Len(); i++ {
val, err := t.Elem.pack(v.Index(i))
if err != nil {
return nil, err
}
packed = append(packed, val...)
if !offsetReq {
ret = append(ret, val...)
continue
}
ret = append(ret, packNum(reflect.ValueOf(offset))...)
offset += len(val)
tail = append(tail, val...)
}
if t.T == SliceTy {
return packBytesSlice(packed, v.Len()), nil
} else if t.T == ArrayTy {
return packed, nil
return append(ret, tail...), nil
case TupleTy:
// (T1,...,Tk) for k >= 0 and any types T1, …, Tk
// enc(X) = head(X(1)) ... head(X(k)) tail(X(1)) ... tail(X(k))
// where X = (X(1), ..., X(k)) and head and tail are defined for Ti being a static
// type as
// head(X(i)) = enc(X(i)) and tail(X(i)) = "" (the empty string)
// and as
// head(X(i)) = enc(len(head(X(1)) ... head(X(k)) tail(X(1)) ... tail(X(i-1))))
// tail(X(i)) = enc(X(i))
// otherwise, i.e. if Ti is a dynamic type.
fieldmap, err := mapArgNamesToStructFields(t.TupleRawNames, v)
if err != nil {
return nil, err
}
// Calculate prefix occupied size.
offset := 0
for _, elem := range t.TupleElems {
offset += getTypeSize(*elem)
}
var ret, tail []byte
for i, elem := range t.TupleElems {
field := v.FieldByName(fieldmap[t.TupleRawNames[i]])
if !field.IsValid() {
return nil, fmt.Errorf("field %s for tuple not found in the given struct", t.TupleRawNames[i])
}
val, err := elem.pack(field)
if err != nil {
return nil, err
}
if isDynamicType(*elem) {
ret = append(ret, packNum(reflect.ValueOf(offset))...)
tail = append(tail, val...)
offset += len(val)
} else {
ret = append(ret, val...)
}
}
return append(ret, tail...), nil
default:
return packElement(t, v), nil
}
return packElement(t, v), nil
}
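// Editor's note (not part of the diff): packing a dynamic tuple through the
// branch above yields the spec's head/tail layout. For (uint256 a, bytes b)
// with a=1, b=0xbeef, the tuple body is:
//
//	word 0: head(a) = enc(1)
//	word 1: head(b) = 0x40, the offset of b's tail within the tuple body
//	word 2: tail(b) length = 2
//	word 3: tail(b) data   = beef00...00 (right-padded to 32 bytes)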
// requiresLengthPrefix returns whether the type requires any sort of length prefixing.
@ -202,3 +306,47 @@ func (t Type) pack(v reflect.Value) ([]byte, error) {
func (t Type) requiresLengthPrefix() bool {
return t.T == StringTy || t.T == BytesTy || t.T == SliceTy
}
// isDynamicType returns true if the type is dynamic.
// The following types are called “dynamic”:
// * bytes
// * string
// * T[] for any T
// * T[k] for any dynamic T and any k >= 0
// * (T1,...,Tk) if Ti is dynamic for some 1 <= i <= k
func isDynamicType(t Type) bool {
if t.T == TupleTy {
for _, elem := range t.TupleElems {
if isDynamicType(*elem) {
return true
}
}
return false
}
return t.T == StringTy || t.T == BytesTy || t.T == SliceTy || (t.T == ArrayTy && isDynamicType(*t.Elem))
}
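// Editor's note (not part of the diff): applying the rules above,
//
//	bytes, string, uint256[]     -> dynamic
//	(uint256,bytes), bytes[2]    -> dynamic (one dynamic component suffices)
//	uint256[2], (uint256,bool)   -> static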
// getTypeSize returns the size that this type needs to occupy.
// We distinguish static and dynamic types. Static types are encoded in-place
// and dynamic types are encoded at a separately allocated location after the
// current block.
// So for a static variable, the size returned represents the size that the
// variable actually occupies.
// For a dynamic variable, the returned size is a fixed 32 bytes, which is
// used to store the location reference pointing to the actual value storage.
func getTypeSize(t Type) int {
if t.T == ArrayTy && !isDynamicType(*t.Elem) {
// Recursively calculate type size if it is a nested array or a static tuple
if t.Elem.T == ArrayTy || t.Elem.T == TupleTy {
return t.Size * getTypeSize(*t.Elem)
}
return t.Size * 32
} else if t.T == TupleTy && !isDynamicType(t) {
total := 0
for _, elem := range t.TupleElems {
total += getTypeSize(*elem)
}
return total
}
return 32
}
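// Editor's note (not part of the diff): sizes as computed above (with the
// static-tuple recursion included),
//
//	getTypeSize(uint256)              = 32
//	getTypeSize(uint256[2])           = 2 * 32 = 64
//	getTypeSize((uint256,uint256)[2]) = 2 * 64 = 128
//	getTypeSize(bytes), (T[])         = 32 (the head slot holding the offset)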

View File

@ -32,72 +32,75 @@ type typeWithoutStringer Type
// Tests that all allowed types get recognized by the type parser.
func TestTypeRegexp(t *testing.T) {
tests := []struct {
blob string
kind Type
blob string
components []ArgumentMarshaling
kind Type
}{
{"bool", Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}},
{"bool[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool(nil)), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][][2]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][2][]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", Type{Kind: reflect.Int8, Type: int8_t, Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", Type{Kind: reflect.Int16, Type: int16_t, Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", Type{Kind: reflect.Int32, Type: int32_t, Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", Type{Kind: reflect.Int64, Type: int64_t, Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8_t, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8_t, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16_t, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16_t, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32_t, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32_t, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64_t, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64_t, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", Type{Kind: reflect.Uint8, Type: uint8_t, Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", Type{Kind: reflect.Uint16, Type: uint16_t, Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", Type{Kind: reflect.Uint32, Type: uint32_t, Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", Type{Kind: reflect.Uint64, Type: uint64_t, Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8_t, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8_t, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16_t, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16_t, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32_t, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32_t, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64_t, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64_t, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", Type{Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]*big.Int{}), Size: 2, Elem: &Type{Kind: reflect.Ptr, Type: big_t, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}},
{"bytes[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]byte{}), Elem: &Type{Kind: reflect.Slice, Type: reflect.TypeOf([]byte{}), T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]byte{}), Elem: &Type{T: BytesTy, Type: reflect.TypeOf([]byte{}), Kind: reflect.Slice, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][32]byte{}), Elem: &Type{Kind: reflect.Array, Type: reflect.TypeOf([32]byte{}), T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][32]byte{}), Elem: &Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}},
{"string[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]string{}), Elem: &Type{Kind: reflect.String, Type: reflect.TypeOf(""), T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]string{}), Elem: &Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}, stringKind: "string[2]"}},
{"address", Type{Kind: reflect.Array, Type: address_t, Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: address_t, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: address_t, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
{"bool", nil, Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}},
{"bool[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool(nil)), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", nil, Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][][2]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][2][]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", nil, Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", nil, Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", nil, Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", nil, Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", nil, Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", nil, Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", nil, Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", nil, Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", nil, Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", nil, Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", nil, Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", nil, Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]*big.Int{}), Size: 2, Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", nil, Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}},
{"bytes[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]byte{}), Elem: &Type{Kind: reflect.Slice, Type: reflect.TypeOf([]byte{}), T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]byte{}), Elem: &Type{T: BytesTy, Type: reflect.TypeOf([]byte{}), Kind: reflect.Slice, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][32]byte{}), Elem: &Type{Kind: reflect.Array, Type: reflect.TypeOf([32]byte{}), T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][32]byte{}), Elem: &Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", nil, Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}},
{"string[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]string{}), Elem: &Type{Kind: reflect.String, Type: reflect.TypeOf(""), T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]string{}), Elem: &Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}, stringKind: "string[2]"}},
{"address", nil, Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", nil, Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", nil, Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
// TODO when fixed types are implemented properly
// {"fixed", Type{}},
// {"fixed128x128", Type{}},
// {"fixed[]", Type{}},
// {"fixed[2]", Type{}},
// {"fixed128x128[]", Type{}},
// {"fixed128x128[2]", Type{}},
// {"fixed", nil, Type{}},
// {"fixed128x128", nil, Type{}},
// {"fixed[]", nil, Type{}},
// {"fixed[2]", nil, Type{}},
// {"fixed128x128[]", nil, Type{}},
// {"fixed128x128[2]", nil, Type{}},
{"tuple", []ArgumentMarshaling{{Name: "a", Type: "int64"}}, Type{Kind: reflect.Struct, T: TupleTy, Type: reflect.TypeOf(struct{ A int64 }{}), stringKind: "(int64)",
TupleElems: []*Type{{Kind: reflect.Int64, T: IntTy, Type: reflect.TypeOf(int64(0)), Size: 64, stringKind: "int64"}}, TupleRawNames: []string{"a"}}},
}
for _, tt := range tests {
typ, err := NewType(tt.blob)
typ, err := NewType(tt.blob, tt.components)
if err != nil {
t.Errorf("type %q: failed to parse type string: %v", tt.blob, err)
}
@ -109,151 +112,170 @@ func TestTypeRegexp(t *testing.T) {
func TestTypeCheck(t *testing.T) {
for i, test := range []struct {
typ string
input interface{}
err string
typ string
components []ArgumentMarshaling
input interface{}
err string
}{
{"uint", big.NewInt(1), "unsupported arg type: uint"},
{"int", big.NewInt(1), "unsupported arg type: int"},
{"uint256", big.NewInt(1), ""},
{"uint256[][3][]", [][3][]*big.Int{{{}}}, ""},
{"uint256[][][3]", [3][][]*big.Int{{{}}}, ""},
{"uint256[3][][]", [][][3]*big.Int{{{}}}, ""},
{"uint256[3][3][3]", [3][3][3]*big.Int{{{}}}, ""},
{"uint8[][]", [][]uint8{}, ""},
{"int256", big.NewInt(1), ""},
{"uint8", uint8(1), ""},
{"uint16", uint16(1), ""},
{"uint32", uint32(1), ""},
{"uint64", uint64(1), ""},
{"int8", int8(1), ""},
{"int16", int16(1), ""},
{"int32", int32(1), ""},
{"int64", int64(1), ""},
{"uint24", big.NewInt(1), ""},
{"uint40", big.NewInt(1), ""},
{"uint48", big.NewInt(1), ""},
{"uint56", big.NewInt(1), ""},
{"uint72", big.NewInt(1), ""},
{"uint80", big.NewInt(1), ""},
{"uint88", big.NewInt(1), ""},
{"uint96", big.NewInt(1), ""},
{"uint104", big.NewInt(1), ""},
{"uint112", big.NewInt(1), ""},
{"uint120", big.NewInt(1), ""},
{"uint128", big.NewInt(1), ""},
{"uint136", big.NewInt(1), ""},
{"uint144", big.NewInt(1), ""},
{"uint152", big.NewInt(1), ""},
{"uint160", big.NewInt(1), ""},
{"uint168", big.NewInt(1), ""},
{"uint176", big.NewInt(1), ""},
{"uint184", big.NewInt(1), ""},
{"uint192", big.NewInt(1), ""},
{"uint200", big.NewInt(1), ""},
{"uint208", big.NewInt(1), ""},
{"uint216", big.NewInt(1), ""},
{"uint224", big.NewInt(1), ""},
{"uint232", big.NewInt(1), ""},
{"uint240", big.NewInt(1), ""},
{"uint248", big.NewInt(1), ""},
{"int24", big.NewInt(1), ""},
{"int40", big.NewInt(1), ""},
{"int48", big.NewInt(1), ""},
{"int56", big.NewInt(1), ""},
{"int72", big.NewInt(1), ""},
{"int80", big.NewInt(1), ""},
{"int88", big.NewInt(1), ""},
{"int96", big.NewInt(1), ""},
{"int104", big.NewInt(1), ""},
{"int112", big.NewInt(1), ""},
{"int120", big.NewInt(1), ""},
{"int128", big.NewInt(1), ""},
{"int136", big.NewInt(1), ""},
{"int144", big.NewInt(1), ""},
{"int152", big.NewInt(1), ""},
{"int160", big.NewInt(1), ""},
{"int168", big.NewInt(1), ""},
{"int176", big.NewInt(1), ""},
{"int184", big.NewInt(1), ""},
{"int192", big.NewInt(1), ""},
{"int200", big.NewInt(1), ""},
{"int208", big.NewInt(1), ""},
{"int216", big.NewInt(1), ""},
{"int224", big.NewInt(1), ""},
{"int232", big.NewInt(1), ""},
{"int240", big.NewInt(1), ""},
{"int248", big.NewInt(1), ""},
{"uint30", uint8(1), "abi: cannot use uint8 as type ptr as argument"},
{"uint8", uint16(1), "abi: cannot use uint16 as type uint8 as argument"},
{"uint8", uint32(1), "abi: cannot use uint32 as type uint8 as argument"},
{"uint8", uint64(1), "abi: cannot use uint64 as type uint8 as argument"},
{"uint8", int8(1), "abi: cannot use int8 as type uint8 as argument"},
{"uint8", int16(1), "abi: cannot use int16 as type uint8 as argument"},
{"uint8", int32(1), "abi: cannot use int32 as type uint8 as argument"},
{"uint8", int64(1), "abi: cannot use int64 as type uint8 as argument"},
{"uint16", uint16(1), ""},
{"uint16", uint8(1), "abi: cannot use uint8 as type uint16 as argument"},
{"uint16[]", []uint16{1, 2, 3}, ""},
{"uint16[]", [3]uint16{1, 2, 3}, ""},
{"uint16[]", []uint32{1, 2, 3}, "abi: cannot use []uint32 as type [0]uint16 as argument"},
{"uint16[3]", [3]uint32{1, 2, 3}, "abi: cannot use [3]uint32 as type [3]uint16 as argument"},
{"uint16[3]", [4]uint16{1, 2, 3}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"uint16[3]", []uint16{1, 2, 3}, ""},
{"uint16[3]", []uint16{1, 2, 3, 4}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"address[]", []common.Address{{1}}, ""},
{"address[1]", []common.Address{{1}}, ""},
{"address[1]", [1]common.Address{{1}}, ""},
{"address[2]", [1]common.Address{{1}}, "abi: cannot use [1]array as type [2]array as argument"},
{"bytes32", [32]byte{}, ""},
{"bytes31", [31]byte{}, ""},
{"bytes30", [30]byte{}, ""},
{"bytes29", [29]byte{}, ""},
{"bytes28", [28]byte{}, ""},
{"bytes27", [27]byte{}, ""},
{"bytes26", [26]byte{}, ""},
{"bytes25", [25]byte{}, ""},
{"bytes24", [24]byte{}, ""},
{"bytes23", [23]byte{}, ""},
{"bytes22", [22]byte{}, ""},
{"bytes21", [21]byte{}, ""},
{"bytes20", [20]byte{}, ""},
{"bytes19", [19]byte{}, ""},
{"bytes18", [18]byte{}, ""},
{"bytes17", [17]byte{}, ""},
{"bytes16", [16]byte{}, ""},
{"bytes15", [15]byte{}, ""},
{"bytes14", [14]byte{}, ""},
{"bytes13", [13]byte{}, ""},
{"bytes12", [12]byte{}, ""},
{"bytes11", [11]byte{}, ""},
{"bytes10", [10]byte{}, ""},
{"bytes9", [9]byte{}, ""},
{"bytes8", [8]byte{}, ""},
{"bytes7", [7]byte{}, ""},
{"bytes6", [6]byte{}, ""},
{"bytes5", [5]byte{}, ""},
{"bytes4", [4]byte{}, ""},
{"bytes3", [3]byte{}, ""},
{"bytes2", [2]byte{}, ""},
{"bytes1", [1]byte{}, ""},
{"bytes32", [33]byte{}, "abi: cannot use [33]uint8 as type [32]uint8 as argument"},
{"bytes32", common.Hash{1}, ""},
{"bytes31", common.Hash{1}, "abi: cannot use common.Hash as type [31]uint8 as argument"},
{"bytes31", [32]byte{}, "abi: cannot use [32]uint8 as type [31]uint8 as argument"},
{"bytes", []byte{0, 1}, ""},
{"bytes", [2]byte{0, 1}, "abi: cannot use array as type slice as argument"},
{"bytes", common.Hash{1}, "abi: cannot use array as type slice as argument"},
{"string", "hello world", ""},
{"string", string(""), ""},
{"string", []byte{}, "abi: cannot use slice as type string as argument"},
{"bytes32[]", [][32]byte{{}}, ""},
{"function", [24]byte{}, ""},
{"bytes20", common.Address{}, ""},
{"address", [20]byte{}, ""},
{"address", common.Address{}, ""},
{"uint", nil, big.NewInt(1), "unsupported arg type: uint"},
{"int", nil, big.NewInt(1), "unsupported arg type: int"},
{"uint256", nil, big.NewInt(1), ""},
{"uint256[][3][]", nil, [][3][]*big.Int{{{}}}, ""},
{"uint256[][][3]", nil, [3][][]*big.Int{{{}}}, ""},
{"uint256[3][][]", nil, [][][3]*big.Int{{{}}}, ""},
{"uint256[3][3][3]", nil, [3][3][3]*big.Int{{{}}}, ""},
{"uint8[][]", nil, [][]uint8{}, ""},
{"int256", nil, big.NewInt(1), ""},
{"uint8", nil, uint8(1), ""},
{"uint16", nil, uint16(1), ""},
{"uint32", nil, uint32(1), ""},
{"uint64", nil, uint64(1), ""},
{"int8", nil, int8(1), ""},
{"int16", nil, int16(1), ""},
{"int32", nil, int32(1), ""},
{"int64", nil, int64(1), ""},
{"uint24", nil, big.NewInt(1), ""},
{"uint40", nil, big.NewInt(1), ""},
{"uint48", nil, big.NewInt(1), ""},
{"uint56", nil, big.NewInt(1), ""},
{"uint72", nil, big.NewInt(1), ""},
{"uint80", nil, big.NewInt(1), ""},
{"uint88", nil, big.NewInt(1), ""},
{"uint96", nil, big.NewInt(1), ""},
{"uint104", nil, big.NewInt(1), ""},
{"uint112", nil, big.NewInt(1), ""},
{"uint120", nil, big.NewInt(1), ""},
{"uint128", nil, big.NewInt(1), ""},
{"uint136", nil, big.NewInt(1), ""},
{"uint144", nil, big.NewInt(1), ""},
{"uint152", nil, big.NewInt(1), ""},
{"uint160", nil, big.NewInt(1), ""},
{"uint168", nil, big.NewInt(1), ""},
{"uint176", nil, big.NewInt(1), ""},
{"uint184", nil, big.NewInt(1), ""},
{"uint192", nil, big.NewInt(1), ""},
{"uint200", nil, big.NewInt(1), ""},
{"uint208", nil, big.NewInt(1), ""},
{"uint216", nil, big.NewInt(1), ""},
{"uint224", nil, big.NewInt(1), ""},
{"uint232", nil, big.NewInt(1), ""},
{"uint240", nil, big.NewInt(1), ""},
{"uint248", nil, big.NewInt(1), ""},
{"int24", nil, big.NewInt(1), ""},
{"int40", nil, big.NewInt(1), ""},
{"int48", nil, big.NewInt(1), ""},
{"int56", nil, big.NewInt(1), ""},
{"int72", nil, big.NewInt(1), ""},
{"int80", nil, big.NewInt(1), ""},
{"int88", nil, big.NewInt(1), ""},
{"int96", nil, big.NewInt(1), ""},
{"int104", nil, big.NewInt(1), ""},
{"int112", nil, big.NewInt(1), ""},
{"int120", nil, big.NewInt(1), ""},
{"int128", nil, big.NewInt(1), ""},
{"int136", nil, big.NewInt(1), ""},
{"int144", nil, big.NewInt(1), ""},
{"int152", nil, big.NewInt(1), ""},
{"int160", nil, big.NewInt(1), ""},
{"int168", nil, big.NewInt(1), ""},
{"int176", nil, big.NewInt(1), ""},
{"int184", nil, big.NewInt(1), ""},
{"int192", nil, big.NewInt(1), ""},
{"int200", nil, big.NewInt(1), ""},
{"int208", nil, big.NewInt(1), ""},
{"int216", nil, big.NewInt(1), ""},
{"int224", nil, big.NewInt(1), ""},
{"int232", nil, big.NewInt(1), ""},
{"int240", nil, big.NewInt(1), ""},
{"int248", nil, big.NewInt(1), ""},
{"uint30", nil, uint8(1), "abi: cannot use uint8 as type ptr as argument"},
{"uint8", nil, uint16(1), "abi: cannot use uint16 as type uint8 as argument"},
{"uint8", nil, uint32(1), "abi: cannot use uint32 as type uint8 as argument"},
{"uint8", nil, uint64(1), "abi: cannot use uint64 as type uint8 as argument"},
{"uint8", nil, int8(1), "abi: cannot use int8 as type uint8 as argument"},
{"uint8", nil, int16(1), "abi: cannot use int16 as type uint8 as argument"},
{"uint8", nil, int32(1), "abi: cannot use int32 as type uint8 as argument"},
{"uint8", nil, int64(1), "abi: cannot use int64 as type uint8 as argument"},
{"uint16", nil, uint16(1), ""},
{"uint16", nil, uint8(1), "abi: cannot use uint8 as type uint16 as argument"},
{"uint16[]", nil, []uint16{1, 2, 3}, ""},
{"uint16[]", nil, [3]uint16{1, 2, 3}, ""},
{"uint16[]", nil, []uint32{1, 2, 3}, "abi: cannot use []uint32 as type [0]uint16 as argument"},
{"uint16[3]", nil, [3]uint32{1, 2, 3}, "abi: cannot use [3]uint32 as type [3]uint16 as argument"},
{"uint16[3]", nil, [4]uint16{1, 2, 3}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"uint16[3]", nil, []uint16{1, 2, 3}, ""},
{"uint16[3]", nil, []uint16{1, 2, 3, 4}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"address[]", nil, []common.Address{{1}}, ""},
{"address[1]", nil, []common.Address{{1}}, ""},
{"address[1]", nil, [1]common.Address{{1}}, ""},
{"address[2]", nil, [1]common.Address{{1}}, "abi: cannot use [1]array as type [2]array as argument"},
{"bytes32", nil, [32]byte{}, ""},
{"bytes31", nil, [31]byte{}, ""},
{"bytes30", nil, [30]byte{}, ""},
{"bytes29", nil, [29]byte{}, ""},
{"bytes28", nil, [28]byte{}, ""},
{"bytes27", nil, [27]byte{}, ""},
{"bytes26", nil, [26]byte{}, ""},
{"bytes25", nil, [25]byte{}, ""},
{"bytes24", nil, [24]byte{}, ""},
{"bytes23", nil, [23]byte{}, ""},
{"bytes22", nil, [22]byte{}, ""},
{"bytes21", nil, [21]byte{}, ""},
{"bytes20", nil, [20]byte{}, ""},
{"bytes19", nil, [19]byte{}, ""},
{"bytes18", nil, [18]byte{}, ""},
{"bytes17", nil, [17]byte{}, ""},
{"bytes16", nil, [16]byte{}, ""},
{"bytes15", nil, [15]byte{}, ""},
{"bytes14", nil, [14]byte{}, ""},
{"bytes13", nil, [13]byte{}, ""},
{"bytes12", nil, [12]byte{}, ""},
{"bytes11", nil, [11]byte{}, ""},
{"bytes10", nil, [10]byte{}, ""},
{"bytes9", nil, [9]byte{}, ""},
{"bytes8", nil, [8]byte{}, ""},
{"bytes7", nil, [7]byte{}, ""},
{"bytes6", nil, [6]byte{}, ""},
{"bytes5", nil, [5]byte{}, ""},
{"bytes4", nil, [4]byte{}, ""},
{"bytes3", nil, [3]byte{}, ""},
{"bytes2", nil, [2]byte{}, ""},
{"bytes1", nil, [1]byte{}, ""},
{"bytes32", nil, [33]byte{}, "abi: cannot use [33]uint8 as type [32]uint8 as argument"},
{"bytes32", nil, common.Hash{1}, ""},
{"bytes31", nil, common.Hash{1}, "abi: cannot use common.Hash as type [31]uint8 as argument"},
{"bytes31", nil, [32]byte{}, "abi: cannot use [32]uint8 as type [31]uint8 as argument"},
{"bytes", nil, []byte{0, 1}, ""},
{"bytes", nil, [2]byte{0, 1}, "abi: cannot use array as type slice as argument"},
{"bytes", nil, common.Hash{1}, "abi: cannot use array as type slice as argument"},
{"string", nil, "hello world", ""},
{"string", nil, string(""), ""},
{"string", nil, []byte{}, "abi: cannot use slice as type string as argument"},
{"bytes32[]", nil, [][32]byte{{}}, ""},
{"function", nil, [24]byte{}, ""},
{"bytes20", nil, common.Address{}, ""},
{"address", nil, [20]byte{}, ""},
{"address", nil, common.Address{}, ""},
{"bytes32[]]", nil, "", "invalid arg type in abi"},
{"invalidType", nil, "", "unsupported arg type: invalidType"},
{"invalidSlice[]", nil, "", "unsupported arg type: invalidSlice"},
// simple tuple
{"tuple", []ArgumentMarshaling{{Name: "a", Type: "uint256"}, {Name: "b", Type: "uint256"}}, struct {
A *big.Int
B *big.Int
}{}, ""},
// tuple slice
{"tuple[]", []ArgumentMarshaling{{Name: "a", Type: "uint256"}, {Name: "b", Type: "uint256"}}, []struct {
A *big.Int
B *big.Int
}{}, ""},
// tuple array
{"tuple[2]", []ArgumentMarshaling{{Name: "a", Type: "uint256"}, {Name: "b", Type: "uint256"}}, []struct {
A *big.Int
B *big.Int
}{{big.NewInt(0), big.NewInt(0)}, {big.NewInt(0), big.NewInt(0)}}, ""},
} {
typ, err := NewType(test.typ)
typ, err := NewType(test.typ, test.components)
if err != nil && len(test.err) == 0 {
t.Fatal("unexpected parse error:", err)
} else if err != nil && len(test.err) != 0 {

View File

@ -25,8 +25,17 @@ import (
"github.com/ethereum/go-ethereum/common"
)
var (
maxUint256 = big.NewInt(0).Add(
big.NewInt(0).Exp(big.NewInt(2), big.NewInt(256), nil),
big.NewInt(-1))
maxInt256 = big.NewInt(0).Add(
big.NewInt(0).Exp(big.NewInt(2), big.NewInt(255), nil),
big.NewInt(-1))
)
// readInteger reads the integer from b based on its ABI type and reflect kind
func readInteger(kind reflect.Kind, b []byte) interface{} {
func readInteger(typ byte, kind reflect.Kind, b []byte) interface{} {
switch kind {
case reflect.Uint8:
return b[len(b)-1]
@ -45,7 +54,20 @@ func readInteger(kind reflect.Kind, b []byte) interface{} {
case reflect.Int64:
return int64(binary.BigEndian.Uint64(b[len(b)-8:]))
default:
return new(big.Int).SetBytes(b)
// the only case left for integer is int256/uint256.
// big.Int.SetBytes can't tell on its own whether a number is negative or positive.
// On the EVM, if the returned number > max int256, it is negative.
ret := new(big.Int).SetBytes(b)
if typ == UintTy {
return ret
}
if ret.Cmp(maxInt256) > 0 {
ret.Add(maxUint256, big.NewInt(0).Neg(ret))
ret.Add(ret, big.NewInt(1))
ret.Neg(ret)
}
return ret
}
}
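// Editor's sketch (not part of the diff): the adjustment above is the usual
// two's-complement reinterpretation; for ret > maxInt256 it computes
// ret - 2**256. An equivalent formulation (hypothetical helper name):
func twosComplementSketch(b []byte) *big.Int {
	v := new(big.Int).SetBytes(b)
	if v.Cmp(maxInt256) > 0 {
		v.Sub(v, new(big.Int).Lsh(big.NewInt(1), 256)) // subtract 2**256
	}
	return v
}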
@ -93,17 +115,6 @@ func readFixedBytes(t Type, word []byte) (interface{}, error) {
}
func getFullElemSize(elem *Type) int {
//all other should be counted as 32 (slices have pointers to respective elements)
size := 32
//arrays wrap it, each element being the same size
for elem.T == ArrayTy {
size *= elem.Size
elem = elem.Elem
}
return size
}
// iteratively unpack elements
func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error) {
if size < 0 {
@ -128,13 +139,9 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
// Arrays have packed elements, resulting in longer unpack steps.
// Slices have just 32 bytes per element (pointing to the contents).
elemSize := 32
if t.T == ArrayTy {
elemSize = getFullElemSize(t.Elem)
}
elemSize := getTypeSize(*t.Elem)
for i, j := start, 0; j < size; i, j = i+elemSize, j+1 {
inter, err := toGoType(i, *t.Elem, output)
if err != nil {
return nil, err
@ -148,6 +155,36 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
return refSlice.Interface(), nil
}
func forTupleUnpack(t Type, output []byte) (interface{}, error) {
retval := reflect.New(t.Type).Elem()
virtualArgs := 0
for index, elem := range t.TupleElems {
marshalledValue, err := toGoType((index+virtualArgs)*32, *elem, output)
if elem.T == ArrayTy && !isDynamicType(*elem) {
// If we have a static array, like [3]uint256, it is encoded inline,
// just like uint256,uint256,uint256.
// This means that we need to add two 'virtual' arguments when
// we count the index from now on.
//
// Array values nested multiple levels deep are also encoded inline:
// [2][3]uint256: uint256,uint256,uint256,uint256,uint256,uint256
//
// Calculate the full array size to get the correct offset for the next argument.
// Decrement it by 1, as the normal index increment is still applied.
virtualArgs += getTypeSize(*elem)/32 - 1
} else if elem.T == TupleTy && !isDynamicType(*elem) {
// If we have a static tuple, like (uint256, bool, uint256), it is
// encoded inline, just like uint256,bool,uint256
virtualArgs += getTypeSize(*elem)/32 - 1
}
if err != nil {
return nil, err
}
retval.Field(index).Set(reflect.ValueOf(marshalledValue))
}
return retval.Interface(), nil
}
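// Editor's note (not part of the diff): for the static tuple
// (uint256 a, uint256[3] b, uint256 c) the fields are encoded inline:
//
//	word 0      -> a
//	words 1..3  -> b, encoded like three consecutive uint256s
//	word 4      -> c
//
// After decoding b, virtualArgs += getTypeSize(uint256[3])/32 - 1 = 2, so c
// (index 2) is correctly read at word (2+2) = 4.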
// toGoType parses the output bytes and recursively assigns the value of these bytes
// into a go type with accordance with the ABI spec.
func toGoType(index int, t Type, output []byte) (interface{}, error) {
@ -156,14 +193,14 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
}
var (
returnOutput []byte
begin, end int
err error
returnOutput []byte
begin, length int
err error
)
// if we require a length prefix, find the beginning word and size returned.
if t.requiresLengthPrefix() {
begin, end, err = lengthPrefixPointsTo(index, output)
begin, length, err = lengthPrefixPointsTo(index, output)
if err != nil {
return nil, err
}
@ -172,14 +209,28 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
}
switch t.T {
case TupleTy:
if isDynamicType(t) {
begin, err := tuplePointsTo(index, output)
if err != nil {
return nil, err
}
return forTupleUnpack(t, output[begin:])
} else {
return forTupleUnpack(t, output[index:])
}
case SliceTy:
return forEachUnpack(t, output, begin, end)
return forEachUnpack(t, output[begin:], 0, length)
case ArrayTy:
return forEachUnpack(t, output, index, t.Size)
if isDynamicType(*t.Elem) {
offset := int64(binary.BigEndian.Uint64(returnOutput[len(returnOutput)-8:]))
return forEachUnpack(t, output[offset:], 0, t.Size)
}
return forEachUnpack(t, output[index:], 0, t.Size)
case StringTy: // variable arrays are written at the end of the return bytes
return string(output[begin : begin+end]), nil
return string(output[begin : begin+length]), nil
case IntTy, UintTy:
return readInteger(t.Kind, returnOutput), nil
return readInteger(t.T, t.Kind, returnOutput), nil
case BoolTy:
return readBool(returnOutput)
case AddressTy:
@ -187,7 +238,7 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
case HashTy:
return common.BytesToHash(returnOutput), nil
case BytesTy:
return output[begin : begin+end], nil
return output[begin : begin+length], nil
case FixedBytesTy:
return readFixedBytes(t, returnOutput)
case FunctionTy:
@ -228,3 +279,17 @@ func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err
length = int(lengthBig.Uint64())
return
}
// tuplePointsTo resolves the location reference for a dynamic tuple.
func tuplePointsTo(index int, output []byte) (start int, err error) {
offset := big.NewInt(0).SetBytes(output[index : index+32])
outputLen := big.NewInt(int64(len(output)))
if offset.Cmp(outputLen) > 0 {
return 0, fmt.Errorf("abi: cannot marshal in to go slice: offset %v would go over slice boundary (len=%v)", offset, outputLen)
}
if offset.BitLen() > 63 {
return 0, fmt.Errorf("abi offset larger than int64: %v", offset)
}
return int(offset.Uint64()), nil
}
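A minimal illustration of what tuplePointsTo guards against (toy head word, not from the diff): the 32-byte big-endian offset must fit in an int64 and stay within the output before the tail is sliced.

package main

import (
    "fmt"
    "math/big"
)

func main() {
    output := make([]byte, 96)
    output[31] = 0x40 // head word: tuple body starts at byte 0x40
    offset := new(big.Int).SetBytes(output[:32])
    if offset.BitLen() > 63 || offset.Cmp(big.NewInt(int64(len(output)))) > 0 {
        panic("offset out of range")
    }
    fmt.Println("tuple tail starts at byte", offset.Int64()) // 64
}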


@ -56,6 +56,23 @@ var unpackTests = []unpackTest{
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: true,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000000",
want: false,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000001000000000001",
want: false,
err: "abi: improperly encoded boolean value",
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000003",
want: false,
err: "abi: improperly encoded boolean value",
},
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
@ -100,6 +117,11 @@ var unpackTests = []unpackTest{
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int256"}]`,
enc: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
want: big.NewInt(-1),
},
{
def: `[{"type": "address"}]`,
enc: "0000000000000000000000000100000000000000000000000000000000000000",
@ -151,9 +173,14 @@ var unpackTests = []unpackTest{
// multi dimensional, if these pass, all types that don't require length prefix should pass
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000E0000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [][]uint8{{1, 2}, {1, 2, 3}},
},
{
def: `[{"type": "uint8[2][2]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
@ -161,7 +188,7 @@ var unpackTests = []unpackTest{
},
{
def: `[{"type": "uint8[][2]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001",
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001",
want: [2][]uint8{{1}, {1}},
},
{
@ -169,6 +196,11 @@ var unpackTests = []unpackTest{
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
@ -214,6 +246,26 @@ var unpackTests = []unpackTest{
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
{
def: `[{"type": "string[4]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000000548656c6c6f0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005576f726c64000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b476f2d657468657265756d0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000",
want: [4]string{"Hello", "World", "Go-ethereum", "Ethereum"},
},
{
def: `[{"type": "string[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b676f2d657468657265756d000000000000000000000000000000000000000000",
want: []string{"Ethereum", "go-ethereum"},
},
{
def: `[{"type": "bytes[]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000003f0f0f000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003f0f0f00000000000000000000000000000000000000000000000000000000000",
want: [][]byte{{0xf0, 0xf0, 0xf0}, {0xf0, 0xf0, 0xf0}},
},
{
def: `[{"type": "uint256[2][][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000e00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000c8000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000003e80000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000c8000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000003e8",
want: [][][2]*big.Int{{{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}, {{big.NewInt(1), big.NewInt(200)}, {big.NewInt(1), big.NewInt(1000)}}},
},
{
def: `[{"type": "int8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
@ -273,6 +325,53 @@ var unpackTests = []unpackTest{
Int2 *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"int_one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int__one","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one_","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
}{big.NewInt(1)},
},
{
def: `[{"name":"int_one","type":"int256"}, {"name":"intone","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
Intone *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"___","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
IntOne *big.Int
Intone *big.Int
}{},
err: "abi: purely underscored output cannot unpack to struct",
},
{
def: `[{"name":"int_one","type":"int256"},{"name":"IntOne","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: multiple outputs mapping to the same struct field 'IntOne'",
},
{
def: `[{"name":"int","type":"int256"},{"name":"Int","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
@ -337,6 +436,55 @@ func TestUnpack(t *testing.T) {
}
}
func TestUnpackSetDynamicArrayOutput(t *testing.T) {
abi, err := JSON(strings.NewReader(`[{"constant":true,"inputs":[],"name":"testDynamicFixedBytes15","outputs":[{"name":"","type":"bytes15[]"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"testDynamicFixedBytes32","outputs":[{"name":"","type":"bytes32[]"}],"payable":false,"stateMutability":"view","type":"function"}]`))
if err != nil {
t.Fatal(err)
}
var (
marshalledReturn32 = common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000230783132333435363738393000000000000000000000000000000000000000003078303938373635343332310000000000000000000000000000000000000000")
marshalledReturn15 = common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000230783031323334350000000000000000000000000000000000000000000000003078393837363534000000000000000000000000000000000000000000000000")
out32 [][32]byte
out15 [][15]byte
)
// test 32
err = abi.Unpack(&out32, "testDynamicFixedBytes32", marshalledReturn32)
if err != nil {
t.Fatal(err)
}
if len(out32) != 2 {
t.Fatalf("expected array with 2 values, got %d", len(out32))
}
expected := common.Hex2Bytes("3078313233343536373839300000000000000000000000000000000000000000")
if !bytes.Equal(out32[0][:], expected) {
t.Errorf("expected %x, got %x\n", expected, out32[0])
}
expected = common.Hex2Bytes("3078303938373635343332310000000000000000000000000000000000000000")
if !bytes.Equal(out32[1][:], expected) {
t.Errorf("expected %x, got %x\n", expected, out32[1])
}
// test 15
err = abi.Unpack(&out15, "testDynamicFixedBytes15", marshalledReturn15)
if err != nil {
t.Fatal(err)
}
if len(out15) != 2 {
t.Fatalf("expected array with 2 values, got %d", len(out15))
}
expected = common.Hex2Bytes("307830313233343500000000000000")
if !bytes.Equal(out15[0][:], expected) {
t.Errorf("expected %x, got %x\n", expected, out15[0])
}
expected = common.Hex2Bytes("307839383736353400000000000000")
if !bytes.Equal(out15[1][:], expected) {
t.Errorf("expected %x, got %x\n", expected, out15[1])
}
}
type methodMultiOutput struct {
Int *big.Int
String string
@ -440,6 +588,68 @@ func TestMultiReturnWithArray(t *testing.T) {
}
}
func TestMultiReturnWithStringArray(t *testing.T) {
const definition = `[{"name" : "multi", "outputs": [{"name": "","type": "uint256[3]"},{"name": "","type": "address"},{"name": "","type": "string[2]"},{"name": "","type": "bool"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000005c1b78ea0000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000001a055690d9db80000000000000000000000000000ab1257528b3782fb40d7ed5f72e624b744dffb2f00000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001048656c6c6f2c20457468657265756d2100000000000000000000000000000000"))
temp, _ := big.NewInt(0).SetString("30000000000000000000", 10)
ret1, ret1Exp := new([3]*big.Int), [3]*big.Int{big.NewInt(1545304298), big.NewInt(6), temp}
ret2, ret2Exp := new(common.Address), common.HexToAddress("ab1257528b3782fb40d7ed5f72e624b744dffb2f")
ret3, ret3Exp := new([2]string), [2]string{"Ethereum", "Hello, Ethereum!"}
ret4, ret4Exp := new(bool), false
if err := abi.Unpack(&[]interface{}{ret1, ret2, ret3, ret4}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
t.Error("big.Int array result", *ret1, "!= Expected", ret1Exp)
}
if !reflect.DeepEqual(*ret2, ret2Exp) {
t.Error("address result", *ret2, "!= Expected", ret2Exp)
}
if !reflect.DeepEqual(*ret3, ret3Exp) {
t.Error("string array result", *ret3, "!= Expected", ret3Exp)
}
if !reflect.DeepEqual(*ret4, ret4Exp) {
t.Error("bool result", *ret4, "!= Expected", ret4Exp)
}
}
func TestMultiReturnWithStringSlice(t *testing.T) {
const definition = `[{"name" : "multi", "outputs": [{"name": "","type": "string[]"},{"name": "","type": "uint256[]"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040")) // output[0] offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000120")) // output[1] offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // output[0] length
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040")) // output[0][0] offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000080")) // output[0][1] offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000008")) // output[0][0] length
buff.Write(common.Hex2Bytes("657468657265756d000000000000000000000000000000000000000000000000")) // output[0][0] value
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000b")) // output[0][1] length
buff.Write(common.Hex2Bytes("676f2d657468657265756d000000000000000000000000000000000000000000")) // output[0][1] value
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // output[1] length
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000064")) // output[1][0] value
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000065")) // output[1][1] value
ret1, ret1Exp := new([]string), []string{"ethereum", "go-ethereum"}
ret2, ret2Exp := new([]*big.Int), []*big.Int{big.NewInt(100), big.NewInt(101)}
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
t.Error("string slice result", *ret1, "!= Expected", ret1Exp)
}
if !reflect.DeepEqual(*ret2, ret2Exp) {
t.Error("uint256 slice result", *ret2, "!= Expected", ret2Exp)
}
}
func TestMultiReturnWithDeeplyNestedArray(t *testing.T) {
// Similar to TestMultiReturnWithArray, but with a special case in mind:
// values of nested static arrays count towards the size as well, and any element following
@ -729,6 +939,108 @@ func TestUnmarshal(t *testing.T) {
}
}
func TestUnpackTuple(t *testing.T) {
const simpleTuple = `[{"name":"tuple","constant":false,"outputs":[{"type":"tuple","name":"ret","components":[{"type":"int256","name":"a"},{"type":"int256","name":"b"}]}]}]`
abi, err := JSON(strings.NewReader(simpleTuple))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // ret[a] = 1
buff.Write(common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")) // ret[b] = -1
v := struct {
Ret struct {
A *big.Int
B *big.Int
}
}{Ret: struct {
A *big.Int
B *big.Int
}{new(big.Int), new(big.Int)}}
err = abi.Unpack(&v, "tuple", buff.Bytes())
if err != nil {
t.Error(err)
} else {
if v.Ret.A.Cmp(big.NewInt(1)) != 0 {
t.Errorf("unexpected value unpacked: want %x, got %x", 1, v.Ret.A)
}
if v.Ret.B.Cmp(big.NewInt(-1)) != 0 {
t.Errorf("unexpected value unpacked: want %x, got %x", v.Ret.B, -1)
}
}
// Test nested tuple
const nestedTuple = `[{"name":"tuple","constant":false,"outputs":[
{"type":"tuple","name":"s","components":[{"type":"uint256","name":"a"},{"type":"uint256[]","name":"b"},{"type":"tuple[]","name":"c","components":[{"name":"x", "type":"uint256"},{"name":"y","type":"uint256"}]}]},
{"type":"tuple","name":"t","components":[{"name":"x", "type":"uint256"},{"name":"y","type":"uint256"}]},
{"type":"uint256","name":"a"}
]}]`
abi, err = JSON(strings.NewReader(nestedTuple))
if err != nil {
t.Fatal(err)
}
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000080")) // s offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000000")) // t.X = 0
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // t.Y = 1
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // a = 1
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // s.A = 1
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000060")) // s.B offset
buff.Write(common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000000000c0")) // s.C offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // s.B length
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // s.B[0] = 1
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // s.B[0] = 2
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // s.C length
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // s.C[0].X
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // s.C[0].Y
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // s.C[1].X
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // s.C[1].Y
type T struct {
X *big.Int `abi:"x"`
Z *big.Int `abi:"y"` // Test whether the abi tag works.
}
type S struct {
A *big.Int
B []*big.Int
C []T
}
type Ret struct {
FieldS S `abi:"s"`
FieldT T `abi:"t"`
A *big.Int
}
var ret Ret
var expected = Ret{
FieldS: S{
A: big.NewInt(1),
B: []*big.Int{big.NewInt(1), big.NewInt(2)},
C: []T{
{big.NewInt(1), big.NewInt(2)},
{big.NewInt(2), big.NewInt(1)},
},
},
FieldT: T{
big.NewInt(0), big.NewInt(1),
},
A: big.NewInt(1),
}
err = abi.Unpack(&ret, "tuple", buff.Bytes())
if err != nil {
t.Error(err)
}
if !reflect.DeepEqual(ret, expected) {
t.Error("unexpected unpack value")
}
}
func TestOOMMaliciousInput(t *testing.T) {
oomTests := []unpackTest{
{


@ -106,7 +106,7 @@ type Wallet interface {
// or optionally with the aid of any location metadata from the embedded URL field.
//
// If the wallet requires additional authentication to sign the request (e.g.
// a password to decrypt the account, or a PIN code to verify the transaction),
// an AuthNeededError instance will be returned, containing infos for the user
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignTxWithPassphrase, or by other means (e.g. unlock


@ -30,8 +30,8 @@ import (
var DefaultRootDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}
// DefaultBaseDerivationPath is the base path from which custom derivation endpoints
// are incremented. As such, the first account will be at m/44'/60'/0'/0/0, the second
// at m/44'/60'/0'/0/1, etc.
var DefaultBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}
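As a quick illustration of how these components read (a standalone sketch, not the accounts package's own path stringer; 0x80000000 marks hardened levels):

package main

import "fmt"

func main() {
    base := []uint32{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}
    s := "m"
    for _, c := range base {
        if c >= 0x80000000 {
            s += fmt.Sprintf("/%d'", c-0x80000000)
        } else {
            s += fmt.Sprintf("/%d", c)
        }
    }
    fmt.Println(s) // m/44'/60'/0'/0/0
}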
// DefaultLedgerBaseDerivationPath is the base path from which custom derivation endpoints


@ -27,10 +27,10 @@ import (
"sync"
"time"
mapset "github.com/deckarep/golang-set"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/fatih/set.v0"
)
// Minimum amount of time between cache reloads. This limit applies if the platform does
@ -79,7 +79,7 @@ func newAccountCache(keydir string) (*accountCache, chan struct{}) {
keydir: keydir,
byAddr: make(map[common.Address][]accounts.Account),
notify: make(chan struct{}, 1),
fileC: fileCache{all: mapset.NewThreadUnsafeSet()},
}
ac.watcher = newWatcher(ac)
return ac, ac.notify
@ -237,7 +237,7 @@ func (ac *accountCache) scanAccounts() error {
log.Debug("Failed to reload keystore contents", "err", err)
return err
}
if creates.Cardinality() == 0 && deletes.Cardinality() == 0 && updates.Cardinality() == 0 {
return nil
}
// Create a helper method to scan the contents of the key files
@ -265,22 +265,25 @@ func (ac *accountCache) scanAccounts() error {
case (addr == common.Address{}):
log.Debug("Failed to decode keystore key", "path", path, "err", "missing or zero address")
default:
return &accounts.Account{
Address: addr,
URL: accounts.URL{Scheme: KeyStoreScheme, Path: path},
}
}
return nil
}
// Process all the file diffs
start := time.Now()
for _, p := range creates.ToSlice() {
if a := readAccount(p.(string)); a != nil {
ac.add(*a)
}
}
for _, p := range deletes.ToSlice() {
ac.deleteByFile(p.(string))
}
for _, p := range updates.ToSlice() {
path := p.(string)
ac.deleteByFile(path)
if a := readAccount(path); a != nil {


@ -24,20 +24,20 @@ import (
"sync"
"time"
mapset "github.com/deckarep/golang-set"
"github.com/ethereum/go-ethereum/log"
set "gopkg.in/fatih/set.v0"
)
// fileCache is a cache of files seen during scan of keystore.
type fileCache struct {
all mapset.Set // Set of all files from the keystore folder
lastMod time.Time // Last time instance when a file was modified
mu sync.RWMutex
}
// scan performs a new scan on the given directory, compares against the already
// cached filenames, and returns file sets: creates, deletes, updates.
func (fc *fileCache) scan(keyDir string) (mapset.Set, mapset.Set, mapset.Set, error) {
t0 := time.Now()
// List all the files from the keystore folder
@ -51,14 +51,14 @@ func (fc *fileCache) scan(keyDir string) (set.Interface, set.Interface, set.Inte
defer fc.mu.Unlock()
// Iterate all the files and gather their metadata
all := mapset.NewThreadUnsafeSet()
mods := mapset.NewThreadUnsafeSet()
var newLastMod time.Time
for _, fi := range files {
path := filepath.Join(keyDir, fi.Name())
// Skip any non-key files from the folder
if nonKeyFile(fi) {
log.Trace("Ignoring file on account scan", "path", path)
continue
}
@ -76,9 +76,9 @@ func (fc *fileCache) scan(keyDir string) (set.Interface, set.Interface, set.Inte
t2 := time.Now()
// Update the tracked files and return the three sets
deletes := fc.all.Difference(all) // Deletes = previous - current
creates := all.Difference(fc.all) // Creates = current - previous
updates := mods.Difference(creates) // Updates = modified - creates
fc.all, fc.lastMod = all, newLastMod
t3 := time.Now()
@ -88,8 +88,8 @@ func (fc *fileCache) scan(keyDir string) (set.Interface, set.Interface, set.Inte
return creates, deletes, updates, nil
}
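The set algebra above maps directly onto github.com/deckarep/golang-set; a minimal sketch of the calls the new code relies on (Difference, Cardinality, ToSlice), with made-up file names:

package main

import (
    "fmt"

    mapset "github.com/deckarep/golang-set"
)

func main() {
    previous := mapset.NewThreadUnsafeSet()
    previous.Add("a.key")
    previous.Add("b.key")
    current := mapset.NewThreadUnsafeSet()
    current.Add("b.key")
    current.Add("c.key")
    creates := current.Difference(previous) // current - previous
    deletes := previous.Difference(current) // previous - current
    fmt.Println(creates.Cardinality(), creates.ToSlice()) // 1 [c.key]
    fmt.Println(deletes.Cardinality(), deletes.ToSlice()) // 1 [a.key]
}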
// nonKeyFile ignores editor backups, hidden files and folders/symlinks.
func nonKeyFile(fi os.FileInfo) bool {
// Skip editor backups and UNIX-style hidden files.
if strings.HasSuffix(fi.Name(), "~") || strings.HasPrefix(fi.Name(), ".") {
return true


@ -66,19 +66,19 @@ type plainKeyJSON struct {
type encryptedKeyJSONV3 struct {
Address string `json:"address"`
Crypto CryptoJSON `json:"crypto"`
Id string `json:"id"`
Version int `json:"version"`
}
type encryptedKeyJSONV1 struct {
Address string `json:"address"`
Crypto CryptoJSON `json:"crypto"`
Id string `json:"id"`
Version string `json:"version"`
}
type CryptoJSON struct {
Cipher string `json:"cipher"`
CipherText string `json:"ciphertext"`
CipherParams cipherparamsJSON `json:"cipherparams"`
@ -171,7 +171,10 @@ func storeNewKey(ks keyStore, rand io.Reader, auth string) (*Key, accounts.Accou
if err != nil {
return nil, accounts.Account{}, err
}
a := accounts.Account{
Address: key.Address,
URL: accounts.URL{Scheme: KeyStoreScheme, Path: ks.JoinPath(keyFileName(key.Address))},
}
if err := ks.StoreKey(a.URL.Path, key, auth); err != nil {
zeroKey(key.PrivateKey)
return nil, a, err
@ -179,26 +182,34 @@ func storeNewKey(ks keyStore, rand io.Reader, auth string) (*Key, accounts.Accou
return key, a, err
}
func writeTemporaryKeyFile(file string, content []byte) (string, error) {
// Create the keystore directory with appropriate permissions
// in case it is not present yet.
const dirPerm = 0700
if err := os.MkdirAll(filepath.Dir(file), dirPerm); err != nil {
return "", err
}
// Atomic write: create a temporary hidden file first
// then move it into place. TempFile assigns mode 0600.
f, err := ioutil.TempFile(filepath.Dir(file), "."+filepath.Base(file)+".tmp")
if err != nil {
return "", err
}
if _, err := f.Write(content); err != nil {
f.Close()
os.Remove(f.Name())
return "", err
}
f.Close()
return f.Name(), nil
}
func writeKeyFile(file string, content []byte) error {
name, err := writeTemporaryKeyFile(file, content)
if err != nil {
return err
}
return os.Rename(name, file)
}
// keyFileName implements the naming convention for keyfiles:
@ -216,5 +227,6 @@ func toISO8601(t time.Time) string {
} else {
tz = fmt.Sprintf("%03d00", offset/3600)
}
return fmt.Sprintf("%04d-%02d-%02dT%02d-%02d-%02d.%09d%s", t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond(), tz)
return fmt.Sprintf("%04d-%02d-%02dT%02d-%02d-%02d.%09d%s",
t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond(), tz)
}
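For reference, a sketch of the keyfile name this timestamp feeds into (the address below is made up; in a UTC zone the tz suffix renders as "Z"):

package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Date(2019, 2, 20, 10, 42, 2, 0, time.UTC)
    ts := fmt.Sprintf("%04d-%02d-%02dT%02d-%02d-%02d.%09dZ",
        t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond())
    // keyFileName joins the timestamp and the hex address with double dashes.
    fmt.Println("UTC--" + ts + "--0123456789abcdef0123456789abcdef01234567")
}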


@ -50,7 +50,7 @@ var (
var KeyStoreType = reflect.TypeOf(&KeyStore{})
// KeyStoreScheme is the protocol scheme prefixing account and wallet URLs.
const KeyStoreScheme = "keystore"
// Maximum time between wallet refreshes (if filesystem notifications don't work).
const walletRefreshCycle = 3 * time.Second
@ -78,7 +78,7 @@ type unlocked struct {
// NewKeyStore creates a keystore for the given directory.
func NewKeyStore(keydir string, scryptN, scryptP int) *KeyStore {
keydir, _ = filepath.Abs(keydir)
ks := &KeyStore{storage: &keyStorePassphrase{keydir, scryptN, scryptP, false}}
ks.init(keydir)
return ks
}


@ -28,18 +28,19 @@ package keystore
import (
"bytes"
"crypto/aes"
crand "crypto/rand"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/crypto/randentropy"
"github.com/pborman/uuid"
"golang.org/x/crypto/pbkdf2"
"golang.org/x/crypto/scrypt"
@ -72,6 +73,10 @@ type keyStorePassphrase struct {
keysDirPath string
scryptN int
scryptP int
// skipKeyFileVerification disables the security feature which reads and
// decrypts any newly created keyfiles. This should be 'false' in all
// cases except tests -- setting this to 'true' is not recommended.
skipKeyFileVerification bool
}
func (ks keyStorePassphrase) GetKey(addr common.Address, filename, auth string) (*Key, error) {
@ -93,7 +98,7 @@ func (ks keyStorePassphrase) GetKey(addr common.Address, filename, auth string)
// StoreKey generates a key, encrypts with 'auth' and stores in the given directory
func StoreKey(dir, auth string, scryptN, scryptP int) (common.Address, error) {
_, a, err := storeNewKey(&keyStorePassphrase{dir, scryptN, scryptP, false}, rand.Reader, auth)
return a.Address, err
}
@ -102,33 +107,54 @@ func (ks keyStorePassphrase) StoreKey(filename string, key *Key, auth string) er
if err != nil {
return err
}
// Write into temporary file
tmpName, err := writeTemporaryKeyFile(filename, keyjson)
if err != nil {
return err
}
if !ks.skipKeyFileVerification {
// Verify that we can decrypt the file with the given password.
_, err = ks.GetKey(key.Address, tmpName, auth)
if err != nil {
msg := "An error was encountered when saving and verifying the keystore file. \n" +
"This indicates that the keystore is corrupted. \n" +
"The corrupted file is stored at \n%v\n" +
"Please file a ticket at:\n\n" +
"https://github.com/ethereum/go-ethereum/issues." +
"The error was : %s"
return fmt.Errorf(msg, tmpName, err)
}
}
return os.Rename(tmpName, filename)
}
func (ks keyStorePassphrase) JoinPath(filename string) string {
if filepath.IsAbs(filename) {
return filename
}
return filepath.Join(ks.keysDirPath, filename)
}
// EncryptDataV3 encrypts the data given as 'data' with the password 'auth'.
func EncryptDataV3(data, auth []byte, scryptN, scryptP int) (CryptoJSON, error) {
salt := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
panic("reading from crypto/rand failed: " + err.Error())
}
derivedKey, err := scrypt.Key(auth, salt, scryptN, scryptR, scryptP, scryptDKLen)
if err != nil {
return CryptoJSON{}, err
}
encryptKey := derivedKey[:16]
iv := make([]byte, aes.BlockSize) // 16
if _, err := io.ReadFull(rand.Reader, iv); err != nil {
panic("reading from crypto/rand failed: " + err.Error())
}
cipherText, err := aesCTRXOR(encryptKey, data, iv)
if err != nil {
return CryptoJSON{}, err
}
mac := crypto.Keccak256(derivedKey[16:32], cipherText)
@ -138,12 +164,11 @@ func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
scryptParamsJSON["p"] = scryptP
scryptParamsJSON["dklen"] = scryptDKLen
scryptParamsJSON["salt"] = hex.EncodeToString(salt)
cipherParamsJSON := cipherparamsJSON{
IV: hex.EncodeToString(iv),
}
cryptoStruct := CryptoJSON{
Cipher: "aes-128-ctr",
CipherText: hex.EncodeToString(cipherText),
CipherParams: cipherParamsJSON,
@ -151,6 +176,17 @@ func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
KDFParams: scryptParamsJSON,
MAC: hex.EncodeToString(mac),
}
return cryptoStruct, nil
}
// EncryptKey encrypts a key using the specified scrypt parameters into a json
// blob that can be decrypted later on.
func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
keyBytes := math.PaddedBigBytes(key.PrivateKey.D, 32)
cryptoStruct, err := EncryptDataV3(keyBytes, []byte(auth), scryptN, scryptP)
if err != nil {
return nil, err
}
encryptedKeyJSONV3 := encryptedKeyJSONV3{
hex.EncodeToString(key.Address[:]),
cryptoStruct,
@ -198,42 +234,48 @@ func DecryptKey(keyjson []byte, auth string) (*Key, error) {
}, nil
}
func DecryptDataV3(cryptoJson CryptoJSON, auth string) ([]byte, error) {
if cryptoJson.Cipher != "aes-128-ctr" {
return nil, fmt.Errorf("Cipher not supported: %v", cryptoJson.Cipher)
}
mac, err := hex.DecodeString(cryptoJson.MAC)
if err != nil {
return nil, err
}
iv, err := hex.DecodeString(cryptoJson.CipherParams.IV)
if err != nil {
return nil, err
}
cipherText, err := hex.DecodeString(cryptoJson.CipherText)
if err != nil {
return nil, err
}
derivedKey, err := getKDFKey(cryptoJson, auth)
if err != nil {
return nil, err
}
calculatedMAC := crypto.Keccak256(derivedKey[16:32], cipherText)
if !bytes.Equal(calculatedMAC, mac) {
return nil, ErrDecrypt
}
plainText, err := aesCTRXOR(derivedKey[:16], cipherText, iv)
if err != nil {
return nil, err
}
return plainText, err
}
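A round-trip usage sketch for the two newly exported helpers; the very light scrypt parameters (N=2, P=1) are only to keep the example fast and are not suitable for real keys.

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/accounts/keystore"
)

func main() {
    blob, pass := []byte("secret payload"), []byte("password")
    cj, err := keystore.EncryptDataV3(blob, pass, 2, 1)
    if err != nil {
        panic(err)
    }
    plain, err := keystore.DecryptDataV3(cj, string(pass))
    if err != nil {
        panic(err)
    }
    fmt.Println(string(plain)) // secret payload
}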
func decryptKeyV3(keyProtected *encryptedKeyJSONV3, auth string) (keyBytes []byte, keyId []byte, err error) {
if keyProtected.Version != version {
return nil, nil, fmt.Errorf("Version not supported: %v", keyProtected.Version)
}
keyId = uuid.Parse(keyProtected.Id)
plainText, err := DecryptDataV3(keyProtected.Crypto, auth)
if err != nil {
return nil, nil, err
}
@ -274,7 +316,7 @@ func decryptKeyV1(keyProtected *encryptedKeyJSONV1, auth string) (keyBytes []byt
return plainText, keyId, err
}
func getKDFKey(cryptoJSON CryptoJSON, auth string) ([]byte, error) {
authArray := []byte(auth)
salt, err := hex.DecodeString(cryptoJSON.KDFParams["salt"].(string))
if err != nil {

View File

@ -56,7 +56,6 @@ func (ks keyStorePlain) StoreKey(filename string, key *Key, auth string) error {
func (ks keyStorePlain) JoinPath(filename string) string {
if filepath.IsAbs(filename) {
return filename
}
return filepath.Join(ks.keysDirPath, filename)
}


@ -37,7 +37,7 @@ func tmpKeyStoreIface(t *testing.T, encrypted bool) (dir string, ks keyStore) {
t.Fatal(err)
}
if encrypted {
ks = &keyStorePassphrase{d, veryLightScryptN, veryLightScryptP, true}
} else {
ks = &keyStorePlain{d}
}
@ -191,7 +191,7 @@ func TestV1_1(t *testing.T) {
func TestV1_2(t *testing.T) {
t.Parallel()
ks := &keyStorePassphrase{"testdata/v1", LightScryptN, LightScryptP}
ks := &keyStorePassphrase{"testdata/v1", LightScryptN, LightScryptP, true}
addr := common.HexToAddress("cb61d5a9c4896fb9658090b597ef0e7be6f7b67e")
file := "testdata/v1/cb61d5a9c4896fb9658090b597ef0e7be6f7b67e/cb61d5a9c4896fb9658090b597ef0e7be6f7b67e"
k, err := ks.GetKey(addr, file, "g")


@ -38,7 +38,13 @@ func importPreSaleKey(keyStore keyStore, keyJSON []byte, password string) (accou
return accounts.Account{}, nil, err
}
key.Id = uuid.NewRandom()
a := accounts.Account{
Address: key.Address,
URL: accounts.URL{
Scheme: KeyStoreScheme,
Path: keyStore.JoinPath(keyFileName(key.Address)),
},
}
err = keyStore.StoreKey(a.URL.Path, key, password)
return a, key, err
}


@ -52,8 +52,8 @@ func (w *keystoreWallet) Status() (string, error) {
// is no connection or decryption step necessary to access the list of accounts.
func (w *keystoreWallet) Open(passphrase string) error { return nil }
// Close implements accounts.Wallet, but is a noop for plain wallets since there
// is no meaningful open operation.
func (w *keystoreWallet) Close() error { return nil }
// Accounts implements accounts.Wallet, returning an account list consisting of
@ -84,10 +84,7 @@ func (w *keystoreWallet) SelfDerive(base accounts.DerivationPath, chain ethereum
// able to sign via our shared keystore backend).
func (w *keystoreWallet) SignHash(account accounts.Account, hash []byte) ([]byte, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
@ -100,10 +97,7 @@ func (w *keystoreWallet) SignHash(account accounts.Account, hash []byte) ([]byte
// be able to sign via our shared keystore backend).
func (w *keystoreWallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
@ -114,10 +108,7 @@ func (w *keystoreWallet) SignTx(account accounts.Account, tx *types.Transaction,
// given hash with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignHashWithPassphrase(account accounts.Account, passphrase string, hash []byte) ([]byte, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
@ -128,10 +119,7 @@ func (w *keystoreWallet) SignHashWithPassphrase(account accounts.Account, passph
// transaction with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignTxWithPassphrase(account accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Make sure the requested account is contained within
if !w.Contains(account) {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign


@ -74,6 +74,22 @@ func (u URL) MarshalJSON() ([]byte, error) {
return json.Marshal(u.String())
}
// UnmarshalJSON parses url.
func (u *URL) UnmarshalJSON(input []byte) error {
var textURL string
err := json.Unmarshal(input, &textURL)
if err != nil {
return err
}
url, err := parseURL(textURL)
if err != nil {
return err
}
u.Scheme = url.Scheme
u.Path = url.Path
return nil
}
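With both JSON methods in place, accounts.URL round-trips through encoding/json as a plain string; a small usage sketch:

package main

import (
    "encoding/json"
    "fmt"

    "github.com/ethereum/go-ethereum/accounts"
)

func main() {
    var u accounts.URL
    if err := json.Unmarshal([]byte(`"keystore:///tmp/key"`), &u); err != nil {
        panic(err)
    }
    fmt.Println(u.Scheme, u.Path) // keystore /tmp/key
}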
// Cmp compares x and y and returns:
//
// -1 if x < y

accounts/url_test.go (new file)

@ -0,0 +1,96 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"testing"
)
func TestURLParsing(t *testing.T) {
url, err := parseURL("https://ethereum.org")
if err != nil {
t.Errorf("unexpected error: %v", err)
}
if url.Scheme != "https" {
t.Errorf("expected: %v, got: %v", "https", url.Scheme)
}
if url.Path != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "ethereum.org", url.Path)
}
_, err = parseURL("ethereum.org")
if err == nil {
t.Error("expected err, got: nil")
}
}
func TestURLString(t *testing.T) {
url := URL{Scheme: "https", Path: "ethereum.org"}
if url.String() != "https://ethereum.org" {
t.Errorf("expected: %v, got: %v", "https://ethereum.org", url.String())
}
url = URL{Scheme: "", Path: "ethereum.org"}
if url.String() != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "ethereum.org", url.String())
}
}
func TestURLMarshalJSON(t *testing.T) {
url := URL{Scheme: "https", Path: "ethereum.org"}
json, err := url.MarshalJSON()
if err != nil {
t.Errorf("unexpcted error: %v", err)
}
if string(json) != "\"https://ethereum.org\"" {
t.Errorf("expected: %v, got: %v", "\"https://ethereum.org\"", string(json))
}
}
func TestURLUnmarshalJSON(t *testing.T) {
url := &URL{}
err := url.UnmarshalJSON([]byte("\"https://ethereum.org\""))
if err != nil {
t.Errorf("unexpcted error: %v", err)
}
if url.Scheme != "https" {
t.Errorf("expected: %v, got: %v", "https", url.Scheme)
}
if url.Path != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "https", url.Path)
}
}
func TestURLComparison(t *testing.T) {
tests := []struct {
urlA URL
urlB URL
expect int
}{
{URL{"https", "ethereum.org"}, URL{"https", "ethereum.org"}, 0},
{URL{"http", "ethereum.org"}, URL{"https", "ethereum.org"}, -1},
{URL{"https", "ethereum.org/a"}, URL{"https", "ethereum.org"}, 1},
{URL{"https", "abc.org"}, URL{"https", "ethereum.org"}, -1},
}
for i, tt := range tests {
result := tt.urlA.Cmp(tt.urlB)
if result != tt.expect {
t.Errorf("test %d: cmp mismatch: expected: %d, got: %d", i, tt.expect, result)
}
}
}


@ -127,7 +127,7 @@ func (hub *Hub) refreshWallets() {
// breaking the Ledger protocol if that is waiting for user confirmation. This
// is a bug acknowledged at Ledger, but it won't be fixed on old devices so we
// need to prevent concurrent comms ourselves. The more elegant solution would
// be to ditch enumeration in favor of hotplug events, but that don't work yet
// on Windows so if we need to hack it anyway, this is more elegant for now.
hub.commsLock.Lock()
if hub.commsPend > 0 { // A confirmation is pending, don't refresh


@ -36,7 +36,7 @@ func Type(msg proto.Message) uint16 {
}
// Name returns the friendly message type name of a specific protocol buffer
// type number.
func Name(kind uint16) string {
name := MessageType_name[int32(kind)]
if len(name) < 12 {


@ -53,11 +53,9 @@ const (
ledgerOpGetConfiguration ledgerOpcode = 0x06 // Returns specific wallet application configuration
ledgerP1DirectlyFetchAddress ledgerParam1 = 0x00 // Return address directly from the wallet
ledgerP1ConfirmFetchAddress ledgerParam1 = 0x01 // Require a user confirmation before returning the address
ledgerP1InitTransactionData ledgerParam1 = 0x00 // First transaction data block for signing
ledgerP1ContTransactionData ledgerParam1 = 0x80 // Subsequent transaction data block for signing
ledgerP2DiscardAddressChainCode ledgerParam2 = 0x00 // Do not return the chain code along with the address
ledgerP2ReturnAddressChainCode ledgerParam2 = 0x01 // Also return the chain code along with the address
)
// errLedgerReplyInvalidHeader is the error message returned by a Ledger data exchange
@ -259,7 +257,9 @@ func (w *ledgerDriver) ledgerDerive(derivationPath []uint32) (common.Address, er
// Decode the hex string into an Ethereum address and return
var address common.Address
if _, err = hex.Decode(address[:], hexstr); err != nil {
return common.Address{}, err
}
return address, nil
}
@ -304,7 +304,7 @@ func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction
for i, component := range derivationPath {
binary.BigEndian.PutUint32(path[1+4*i:], component)
}
// Create the transaction RLP based on whether legacy or EIP155 signing was requested
var (
txrlp []byte
err error
@ -352,7 +352,7 @@ func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
signature[64] -= byte(chainID.Uint64()*2 + 35)
}
signed, err := tx.WithSignature(signer, signature)
if err != nil {


@ -221,7 +221,7 @@ func (w *trezorDriver) trezorSign(derivationPath []uint32, tx *types.Transaction
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
signature[64] -= byte(chainID.Uint64()*2 + 35)
}
// Inject the final signature into the transaction and sanity check the sender
signed, err := tx.WithSignature(signer, signature)


@ -99,7 +99,7 @@ type wallet struct {
//
// As such, a hardware wallet needs two locks to function correctly. A state
// lock can be used to protect the wallet's software-side internal state, which
// must not be held exclusively during hardware communication. A communication
// lock can be used to achieve exclusive access to the device itself, this one
// however should allow "skipping" waiting for operations that might want to
// use the device, but can live without too (e.g. account self-derivation).


@ -23,8 +23,8 @@ environment:
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://storage.googleapis.com/golang/go1.11.5.windows-%GETH_ARCH%.zip
- 7z x go1.11.5.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version


@ -1,561 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package bmt provides a binary merkle tree implementation
package bmt
import (
"fmt"
"hash"
"io"
"strings"
"sync"
"sync/atomic"
)
/*
Binary Merkle Tree Hash is a hash function over arbitrary datachunks of limited size
It is defined as the root hash of the binary merkle tree built over fixed size segments
of the underlying chunk using any base hash function (e.g. Keccak-256 SHA3)
It is used as the chunk hash function in swarm which in turn is the basis for the
128 branching swarm hash http://swarm-guide.readthedocs.io/en/latest/architecture.html#swarm-hash
The BMT is optimal for providing compact inclusion proofs, i.e. prove that a
segment is a substring of a chunk starting at a particular offset
The size of the underlying segments is fixed at 32 bytes (called the resolution
of the BMT hash), the EVM word size to optimize for on-chain BMT verification
as well as the hash size optimal for inclusion proofs in the merkle tree of the swarm hash.
Two implementations are provided:
* RefHasher is optimized for code simplicity and meant as a reference implementation
* Hasher is optimized for speed taking advantage of concurrency with minimalistic
control structure to coordinate the concurrent routines
It implements the ChunkHash interface as well as the go standard hash.Hash interface
*/
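A toy version of the scheme this comment describes (standalone; SHA-256 as the base hash and a power-of-two segment count assumed; not the swarm implementation being removed here):

package main

import (
    "crypto/sha256"
    "fmt"
)

// bmtRoot pairs adjacent 32-byte segments and hashes their concatenation,
// level by level, until a single root remains.
func bmtRoot(segments [][]byte) []byte {
    level := segments
    for len(level) > 1 {
        next := make([][]byte, len(level)/2)
        for i := range next {
            h := sha256.Sum256(append(append([]byte{}, level[2*i]...), level[2*i+1]...))
            next[i] = h[:]
        }
        level = next
    }
    return level[0]
}

func main() {
    segs := make([][]byte, 4)
    for i := range segs {
        s := make([]byte, 32)
        s[0] = byte(i)
        segs[i] = s
    }
    fmt.Printf("root: %x\n", bmtRoot(segs))
}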
const (
// DefaultSegmentCount is the maximum number of segments of the underlying chunk
DefaultSegmentCount = 128 // Should be equal to storage.DefaultBranches
// DefaultPoolSize is the maximum number of bmt trees used by the hashers, i.e,
// the maximum number of concurrent BMT hashing operations performed by the same hasher
DefaultPoolSize = 8
)
// BaseHasher is a hash.Hash constructor function used for the base hash of the BMT.
type BaseHasher func() hash.Hash
// Hasher a reusable hasher for fixed maximum size chunks representing a BMT
// implements the hash.Hash interface
// reuse pool of Tree-s for amortised memory allocation and resource control
// supports order-agnostic concurrent segment writes
// as well as sequential read and write
// can not be called concurrently on more than one chunk
// can be further appended after Sum
// Reset gives back the Tree to the pool and guaranteed to leave
// the tree and itself in a state reusable for hashing a new chunk
type Hasher struct {
pool *TreePool // BMT resource pool
bmt *Tree // prebuilt BMT resource for flowcontrol and proofs
blocksize int // segment size (size of hash) also for hash.Hash
count int // segment count
size int // for hash.Hash same as hashsize
cur int // cursor position for rightmost currently open chunk
segment []byte // the rightmost open segment (not complete)
depth int // index of last level
result chan []byte // result channel
hash []byte // to record the result
max int32 // max segments for SegmentWriter interface
blockLength []byte // The block length that needs to be added in Sum
}
// New creates a reusable Hasher
// implements the hash.Hash interface
// pulls a new Tree from a resource pool for hashing each chunk
func New(p *TreePool) *Hasher {
return &Hasher{
pool: p,
depth: depth(p.SegmentCount),
size: p.SegmentSize,
blocksize: p.SegmentSize,
count: p.SegmentCount,
result: make(chan []byte),
}
}
// Node is a reusable segment hasher representing a node in a BMT
// it allows for continued writes after a Sum
// and is left in completely reusable state after Reset
type Node struct {
level, index int // position of node for information/logging only
initial bool // first and last node
root bool // whether the node is root to a smaller BMT
isLeft bool // whether it is left side of the parent double segment
unbalanced bool // indicates if a node has only the left segment
parent *Node // BMT connections
state int32 // atomic increment impl concurrent boolean toggle
left, right []byte
}
// NewNode constructor for segment hasher nodes in the BMT
func NewNode(level, index int, parent *Node) *Node {
return &Node{
parent: parent,
level: level,
index: index,
initial: index == 0,
isLeft: index%2 == 0,
}
}
// TreePool provides a pool of Trees used as resources by Hasher
// a Tree popped from the pool is guaranteed to have clean state
// for hashing a new chunk
// Hasher Reset releases the Tree to the pool
type TreePool struct {
lock sync.Mutex
c chan *Tree
hasher BaseHasher
SegmentSize int
SegmentCount int
Capacity int
count int
}
// NewTreePool creates a Tree pool with hasher, segment size, segment count and capacity
// on GetTree it reuses free Trees or creates a new one if size is not reached
func NewTreePool(hasher BaseHasher, segmentCount, capacity int) *TreePool {
return &TreePool{
c: make(chan *Tree, capacity),
hasher: hasher,
SegmentSize: hasher().Size(),
SegmentCount: segmentCount,
Capacity: capacity,
}
}
// Drain drains the pool until it has no more than n resources
func (self *TreePool) Drain(n int) {
self.lock.Lock()
defer self.lock.Unlock()
for len(self.c) > n {
<-self.c
self.count--
}
}
// Reserve blocks until it can return an available Tree;
// it reuses free Trees or creates a new one if capacity is not reached
func (self *TreePool) Reserve() *Tree {
self.lock.Lock()
defer self.lock.Unlock()
var t *Tree
if self.count == self.Capacity {
return <-self.c
}
select {
case t = <-self.c:
default:
t = NewTree(self.hasher, self.SegmentSize, self.SegmentCount)
self.count++
}
return t
}
// Release gives a Tree back to the pool.
// The Tree is guaranteed to be in a reusable state
// and does not need locking.
func (self *TreePool) Release(t *Tree) {
self.c <- t // can never block: the pool never creates more than Capacity Trees
}
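// Illustrative Reserve/Release pairing outside of Hasher (a sketch added
// for clarity, not in the original source):
//
//	t := pool.Reserve() // blocks once Capacity Trees are outstanding
//	// ... use t ...
//	pool.Release(t) // hand the Tree back; never blocks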
// Tree is a reusable control structure representing a BMT
// organised in a binary tree
// Hasher uses a TreePool to pick one for each chunk hash
// the Tree is 'locked' while not in the pool
type Tree struct {
leaves []*Node
}
// Draw draws the BMT (badly)
func (self *Tree) Draw(hash []byte, d int) string {
var left, right []string
var anc []*Node
for i, n := range self.leaves {
left = append(left, fmt.Sprintf("%v", hashstr(n.left)))
if i%2 == 0 {
anc = append(anc, n.parent)
}
right = append(right, fmt.Sprintf("%v", hashstr(n.right)))
}
anc = self.leaves
var hashes [][]string
for l := 0; len(anc) > 0; l++ {
var nodes []*Node
hash := []string{""}
for i, n := range anc {
hash = append(hash, fmt.Sprintf("%v|%v", hashstr(n.left), hashstr(n.right)))
if i%2 == 0 && n.parent != nil {
nodes = append(nodes, n.parent)
}
}
hash = append(hash, "")
hashes = append(hashes, hash)
anc = nodes
}
hashes = append(hashes, []string{"", fmt.Sprintf("%v", hashstr(hash)), ""})
total := 60
del := " "
var rows []string
for i := len(hashes) - 1; i >= 0; i-- {
var textlen int
hash := hashes[i]
for _, s := range hash {
textlen += len(s)
}
if total < textlen {
total = textlen + len(hash)
}
delsize := (total - textlen) / (len(hash) - 1)
if delsize > len(del) {
delsize = len(del)
}
row := fmt.Sprintf("%v: %v", len(hashes)-i-1, strings.Join(hash, del[:delsize]))
rows = append(rows, row)
}
rows = append(rows, strings.Join(left, " "))
rows = append(rows, strings.Join(right, " "))
return strings.Join(rows, "\n") + "\n"
}
// NewTree initialises the Tree by building up the nodes of a BMT
// segment size is stipulated to be the size of the hash
// segmentCount needs to be a positive integer; it does not need to be
// a power of two and can even be an odd number
// segmentSize * segmentCount determines the maximum chunk size
// hashed using the tree
func NewTree(hasher BaseHasher, segmentSize, segmentCount int) *Tree {
n := NewNode(0, 0, nil)
n.root = true
prevlevel := []*Node{n}
// iterate over the levels, creating 2^level nodes at each
level := 1
count := 2
for d := 1; d <= depth(segmentCount); d++ {
nodes := make([]*Node, count)
for i := 0; i < len(nodes); i++ {
parent := prevlevel[i/2]
t := NewNode(level, i, parent)
nodes[i] = t
}
prevlevel = nodes
level++
count *= 2
}
// the datanode level is the nodes on the last level: these are the leaves
return &Tree{
leaves: prevlevel,
}
}
// methods needed by hash.Hash
// Size returns the digest size
func (self *Hasher) Size() int {
return self.size
}
// BlockSize returns the block size
func (self *Hasher) BlockSize() int {
return self.blocksize
}
// Sum returns the hash of the buffer
// hash.Hash interface Sum method appends the byte slice to the underlying
// data before it calculates and returns the hash of the chunk
func (self *Hasher) Sum(b []byte) (r []byte) {
t := self.bmt
i := self.cur
n := t.leaves[i]
j := i
// must run strictly before all nodes calculate
// datanodes are guaranteed to have a parent
if len(self.segment) > self.size && i > 0 && n.parent != nil {
n = n.parent
} else {
i *= 2
}
d := self.finalise(n, i)
self.writeSegment(j, self.segment, d)
c := <-self.result
self.releaseTree()
// sha3(length + BMT(pure_chunk))
if self.blockLength == nil {
return c
}
res := self.pool.hasher()
res.Reset()
res.Write(self.blockLength)
res.Write(c)
return res.Sum(nil)
}
// Hasher implements the SwarmHash interface
// Hash waits for the hasher result and returns it
// caller must call this on a BMT Hasher being written to
func (self *Hasher) Hash() []byte {
return <-self.result
}
// Hasher implements the io.Writer interface
// Write fills the buffer to hash;
// with every completed full segment it launches a hasher goroutine
// that propagates up the BMT
func (self *Hasher) Write(b []byte) (int, error) {
l := len(b)
if l <= 0 {
return 0, nil
}
s := self.segment
i := self.cur
count := (self.count + 1) / 2
need := self.count*self.size - self.cur*2*self.size
size := self.size
if need > size {
size *= 2
}
if l < need {
need = l
}
// calculate the missing part needed to complete the currently open segment
rest := size - len(s)
if need < rest {
rest = need
}
s = append(s, b[:rest]...)
need -= rest
// read full segments and the last possibly partial segment
for need > 0 && i < count-1 {
// push all finished segments we read
self.writeSegment(i, s, self.depth)
need -= size
if need < 0 {
size += need
}
s = b[rest : rest+size]
rest += size
i++
}
self.segment = s
self.cur = i
// otherwise, we can assume len(s) == 0, so the whole buffer is read and the chunk is not yet full
return l, nil
}
// Hasher implements the io.ReaderFrom interface
// ReadFrom reads from an io.Reader and appends the data to hash using Write;
// it reads until the chunk to hash reaches maximum length or the reader reaches EOF.
// The caller must Reset the hasher prior to the call.
func (self *Hasher) ReadFrom(r io.Reader) (m int64, err error) {
bufsize := self.size*self.count - self.size*self.cur - len(self.segment)
buf := make([]byte, bufsize)
var read int
for {
var n int
n, err = r.Read(buf)
read += n
if err == io.EOF || read == len(buf) {
hash := self.Sum(buf[:n])
if read == len(buf) {
err = NewEOC(hash)
}
break
}
if err != nil {
break
}
n, err = self.Write(buf[:n])
if err != nil {
break
}
}
return int64(read), err
}
// Reset needs to be called before writing to the hasher
func (self *Hasher) Reset() {
self.getTree()
self.blockLength = nil
}
// Hasher implements the SwarmHash interface
// ResetWithLength needs to be called before writing to the hasher
// the argument is supposed to be the byte slice binary representation of
// the length of the data subsumed under the hash
func (self *Hasher) ResetWithLength(l []byte) {
self.Reset()
self.blockLength = l
}
// releaseTree gives the Tree back to the pool whereby it unlocks;
// it resets the tree, segment and cursor
func (self *Hasher) releaseTree() {
if self.bmt != nil {
n := self.bmt.leaves[self.cur]
for ; n != nil; n = n.parent {
n.unbalanced = false
if n.parent != nil {
n.root = false
}
}
self.pool.Release(self.bmt)
self.bmt = nil
}
self.cur = 0
self.segment = nil
}
func (self *Hasher) writeSegment(i int, s []byte, d int) {
h := self.pool.hasher()
n := self.bmt.leaves[i]
if len(s) > self.size && n.parent != nil {
go func() {
h.Reset()
h.Write(s)
s = h.Sum(nil)
if n.root {
self.result <- s
return
}
self.run(n.parent, h, d, n.index, s)
}()
return
}
go self.run(n, h, d, i*2, s)
}
func (self *Hasher) run(n *Node, h hash.Hash, d int, i int, s []byte) {
isLeft := i%2 == 0
for {
if isLeft {
n.left = s
} else {
n.right = s
}
if !n.unbalanced && n.toggle() {
return
}
if !n.unbalanced || !isLeft || i == 0 && d == 0 {
h.Reset()
h.Write(n.left)
h.Write(n.right)
s = h.Sum(nil)
} else {
s = append(n.left, n.right...)
}
self.hash = s
if n.root {
self.result <- s
return
}
isLeft = n.isLeft
n = n.parent
i++
}
}
// getTree obtains a BMT resource by reserving one from the pool
func (self *Hasher) getTree() *Tree {
if self.bmt != nil {
return self.bmt
}
t := self.pool.Reserve()
self.bmt = t
return t
}
// toggle is an atomic bool toggle implementing a concurrent reusable 2-state object;
// atomic AddInt32 with %2 implements the atomic bool toggle.
// It returns true if the caller just put the node in the active/waiting state.
func (self *Node) toggle() bool {
return atomic.AddInt32(&self.state, 1)%2 == 1
}
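// Illustrative timeline (added for clarity, not in the original source):
// two sibling writers deliver the left and right hashes to the same node.
//
//	first := n.toggle()  // true: the first arrival stores its hash and returns
//	second := n.toggle() // false: the second arrival combines left+right and ascends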
// hashstr abbreviates a hash to its first 4 bytes for Draw output
func hashstr(b []byte) string {
end := len(b)
if end > 4 {
end = 4
}
return fmt.Sprintf("%x", b[:end])
}
// depth returns the index of the last level of a BMT with n segments
func depth(n int) (d int) {
for l := (n - 1) / 2; l > 0; l /= 2 {
d++
}
return d
}
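// Worked examples, computed from the depth loop above (added for illustration):
//
//	depth(2) == 0   // a single root node holds both segments
//	depth(4) == 1
//	depth(128) == 6 // 64 leaf nodes, each holding two segments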
// finalise follows the zigzag path on the tree belonging
// to the final data segment
func (self *Hasher) finalise(n *Node, i int) (d int) {
isLeft := i%2 == 0
for {
// when the final segment's path goes via left segments,
// the incoming data is pushed to the parent upon pulling the left;
// we do not need to toggle the state since this condition is
// detectable
n.unbalanced = isLeft
n.right = nil
if n.initial {
n.root = true
return d
}
isLeft = n.isLeft
n = n.parent
d++
}
}
// EOC (end of chunk) implements the error interface
type EOC struct {
Hash []byte // read the hash of the chunk off the error
}
// Error returns the error string
func (self *EOC) Error() string {
return fmt.Sprintf("hasher limit reached, chunk hash: %x", self.Hash)
}
// NewEOC creates new end of chunk error with the hash
func NewEOC(hash []byte) *EOC {
return &EOC{hash}
}
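// Usage sketch for EOC (added for illustration; pool and reader are assumed
// to exist): ReadFrom signals a full chunk by returning *EOC carrying the
// chunk hash.
//
//	bmt := New(pool)
//	bmt.Reset()
//	n, err := bmt.ReadFrom(reader)
//	if eoc, ok := err.(*EOC); ok {
//		_ = eoc.Hash // the chunk is full; its hash can be read off the error
//	}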


@ -1,85 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// simple non-concurrent reference implementation for hash-size segment based
// Binary Merkle tree hash on arbitrary but fixed maximum chunk size
//
// This implementation does not take advantage of any parallelism and uses
// far more memory than necessary, but it is easy to see that it is correct.
// It can be used for generating test cases for optimized implementations.
// See the testHasherCorrectness function in bmt_test.go.
package bmt
import (
"hash"
)
// RefHasher is the non-optimized, easy-to-read reference implementation of BMT
type RefHasher struct {
span int
section int
cap int
h hash.Hash
}
// NewRefHasher returns a new RefHasher
func NewRefHasher(hasher BaseHasher, count int) *RefHasher {
h := hasher()
hashsize := h.Size()
maxsize := hashsize * count
c := 2
for ; c < count; c *= 2 {
}
if c > 2 {
c /= 2
}
return &RefHasher{
section: 2 * hashsize,
span: c * hashsize,
cap: maxsize,
h: h,
}
}
// Hash returns the BMT hash of the byte slice
// implements the SwarmHash interface
func (rh *RefHasher) Hash(d []byte) []byte {
if len(d) > rh.cap {
d = d[:rh.cap]
}
return rh.hash(d, rh.span)
}
func (rh *RefHasher) hash(d []byte, s int) []byte {
l := len(d)
left := d
var right []byte
if l > rh.section {
for ; s >= l; s /= 2 {
}
left = rh.hash(d[:s], s)
right = d[s:]
if l-s > rh.section/2 {
right = rh.hash(right, s)
}
}
defer rh.h.Reset()
rh.h.Write(left)
rh.h.Write(right)
h := rh.h.Sum(nil)
return h
}
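// Worked example (added for illustration): hashing 100 bytes with count = 128
// (span starts at 64*hashsize = 2048). The loop halves s until s = 64 < 100, so
//
//	hash(d, 2048) == sha3(hash(d[:64], 64), hash(d[64:], 64))
//	              == sha3(sha3(d[:64]), sha3(d[64:]))
//
// matching the [97,128] case exercised by TestRefHasher below.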


@ -1,481 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bmt
import (
"bytes"
crand "crypto/rand"
"fmt"
"hash"
"io"
"math/rand"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/ethereum/go-ethereum/crypto/sha3"
)
const (
maxproccnt = 8
)
// TestRefHasher tests that the RefHasher computes the expected BMT hash for
// all data lengths between 0 and 256 bytes
func TestRefHasher(t *testing.T) {
hashFunc := sha3.NewKeccak256
sha3 := func(data ...[]byte) []byte {
h := hashFunc()
for _, v := range data {
h.Write(v)
}
return h.Sum(nil)
}
// the test struct is used to specify the expected BMT hash for data
// lengths between "from" and "to"
type test struct {
from int64
to int64
expected func([]byte) []byte
}
var tests []*test
// all lengths in [0,64] should be:
//
// sha3(data)
//
tests = append(tests, &test{
from: 0,
to: 64,
expected: func(data []byte) []byte {
return sha3(data)
},
})
// all lengths in [65,96] should be:
//
// sha3(
// sha3(data[:64])
// data[64:]
// )
//
tests = append(tests, &test{
from: 65,
to: 96,
expected: func(data []byte) []byte {
return sha3(sha3(data[:64]), data[64:])
},
})
// all lengths in [97,128] should be:
//
// sha3(
// sha3(data[:64])
// sha3(data[64:])
// )
//
tests = append(tests, &test{
from: 97,
to: 128,
expected: func(data []byte) []byte {
return sha3(sha3(data[:64]), sha3(data[64:]))
},
})
// all lengths in [129,160] should be:
//
// sha3(
// sha3(
// sha3(data[:64])
// sha3(data[64:128])
// )
// data[128:]
// )
//
tests = append(tests, &test{
from: 129,
to: 160,
expected: func(data []byte) []byte {
return sha3(sha3(sha3(data[:64]), sha3(data[64:128])), data[128:])
},
})
// all lengths in [161,192] should be:
//
// sha3(
// sha3(
// sha3(data[:64])
// sha3(data[64:128])
// )
// sha3(data[128:])
// )
//
tests = append(tests, &test{
from: 161,
to: 192,
expected: func(data []byte) []byte {
return sha3(sha3(sha3(data[:64]), sha3(data[64:128])), sha3(data[128:]))
},
})
// all lengths in [193,224] should be:
//
// sha3(
// sha3(
// sha3(data[:64])
// sha3(data[64:128])
// )
// sha3(
// sha3(data[128:192])
// data[192:]
// )
// )
//
tests = append(tests, &test{
from: 193,
to: 224,
expected: func(data []byte) []byte {
return sha3(sha3(sha3(data[:64]), sha3(data[64:128])), sha3(sha3(data[128:192]), data[192:]))
},
})
// all lengths in [225,256] should be:
//
// sha3(
// sha3(
// sha3(data[:64])
// sha3(data[64:128])
// )
// sha3(
// sha3(data[128:192])
// sha3(data[192:])
// )
// )
//
tests = append(tests, &test{
from: 225,
to: 256,
expected: func(data []byte) []byte {
return sha3(sha3(sha3(data[:64]), sha3(data[64:128])), sha3(sha3(data[128:192]), sha3(data[192:])))
},
})
// run the tests
for _, x := range tests {
for length := x.from; length <= x.to; length++ {
t.Run(fmt.Sprintf("%d_bytes", length), func(t *testing.T) {
data := make([]byte, length)
if _, err := io.ReadFull(crand.Reader, data); err != nil && err != io.EOF {
t.Fatal(err)
}
expected := x.expected(data)
actual := NewRefHasher(hashFunc, 128).Hash(data)
if !bytes.Equal(actual, expected) {
t.Fatalf("expected %x, got %x", expected, actual)
}
})
}
}
}
func testDataReader(l int) (r io.Reader) {
return io.LimitReader(crand.Reader, int64(l))
}
func TestHasherCorrectness(t *testing.T) {
err := testHasher(testBaseHasher)
if err != nil {
t.Fatal(err)
}
}
func testHasher(f func(BaseHasher, []byte, int, int) error) error {
tdata := testDataReader(4128)
data := make([]byte, 4128)
tdata.Read(data)
hasher := sha3.NewKeccak256
size := hasher().Size()
counts := []int{1, 2, 3, 4, 5, 8, 16, 32, 64, 128}
var err error
for _, count := range counts {
max := count * size
incr := 1
for n := 0; n <= max+incr; n += incr {
err = f(hasher, data, n, count)
if err != nil {
return err
}
}
}
return nil
}
func TestHasherReuseWithoutRelease(t *testing.T) {
testHasherReuse(1, t)
}
func TestHasherReuseWithRelease(t *testing.T) {
testHasherReuse(maxproccnt, t)
}
func testHasherReuse(i int, t *testing.T) {
hasher := sha3.NewKeccak256
pool := NewTreePool(hasher, 128, i)
defer pool.Drain(0)
bmt := New(pool)
for i := 0; i < 500; i++ {
n := rand.Intn(4096)
tdata := testDataReader(n)
data := make([]byte, n)
tdata.Read(data)
err := testHasherCorrectness(bmt, hasher, data, n, 128)
if err != nil {
t.Fatal(err)
}
}
}
func TestHasherConcurrency(t *testing.T) {
hasher := sha3.NewKeccak256
pool := NewTreePool(hasher, 128, maxproccnt)
defer pool.Drain(0)
wg := sync.WaitGroup{}
cycles := 100
wg.Add(maxproccnt * cycles)
errc := make(chan error)
for p := 0; p < maxproccnt; p++ {
for i := 0; i < cycles; i++ {
go func() {
bmt := New(pool)
n := rand.Intn(4096)
tdata := testDataReader(n)
data := make([]byte, n)
tdata.Read(data)
err := testHasherCorrectness(bmt, hasher, data, n, 128)
wg.Done()
if err != nil {
errc <- err
}
}()
}
}
go func() {
wg.Wait()
close(errc)
}()
var err error
select {
case <-time.NewTimer(5 * time.Second).C:
err = fmt.Errorf("timed out")
case err = <-errc:
}
if err != nil {
t.Fatal(err)
}
}
func testBaseHasher(hasher BaseHasher, d []byte, n, count int) error {
pool := NewTreePool(hasher, count, 1)
defer pool.Drain(0)
bmt := New(pool)
return testHasherCorrectness(bmt, hasher, d, n, count)
}
func testHasherCorrectness(bmt hash.Hash, hasher BaseHasher, d []byte, n, count int) (err error) {
data := d[:n]
rbmt := NewRefHasher(hasher, count)
exp := rbmt.Hash(data)
timeout := time.NewTimer(time.Second)
c := make(chan error)
go func() {
bmt.Reset()
bmt.Write(data)
got := bmt.Sum(nil)
if !bytes.Equal(got, exp) {
c <- fmt.Errorf("wrong hash: expected %x, got %x", exp, got)
}
close(c)
}()
select {
case <-timeout.C:
err = fmt.Errorf("BMT hash calculation timed out")
case err = <-c:
}
return err
}
func BenchmarkSHA3_4k(t *testing.B) { benchmarkSHA3(4096, t) }
func BenchmarkSHA3_2k(t *testing.B) { benchmarkSHA3(4096/2, t) }
func BenchmarkSHA3_1k(t *testing.B) { benchmarkSHA3(4096/4, t) }
func BenchmarkSHA3_512b(t *testing.B) { benchmarkSHA3(4096/8, t) }
func BenchmarkSHA3_256b(t *testing.B) { benchmarkSHA3(4096/16, t) }
func BenchmarkSHA3_128b(t *testing.B) { benchmarkSHA3(4096/32, t) }
func BenchmarkBMTBaseline_4k(t *testing.B) { benchmarkBMTBaseline(4096, t) }
func BenchmarkBMTBaseline_2k(t *testing.B) { benchmarkBMTBaseline(4096/2, t) }
func BenchmarkBMTBaseline_1k(t *testing.B) { benchmarkBMTBaseline(4096/4, t) }
func BenchmarkBMTBaseline_512b(t *testing.B) { benchmarkBMTBaseline(4096/8, t) }
func BenchmarkBMTBaseline_256b(t *testing.B) { benchmarkBMTBaseline(4096/16, t) }
func BenchmarkBMTBaseline_128b(t *testing.B) { benchmarkBMTBaseline(4096/32, t) }
func BenchmarkRefHasher_4k(t *testing.B) { benchmarkRefHasher(4096, t) }
func BenchmarkRefHasher_2k(t *testing.B) { benchmarkRefHasher(4096/2, t) }
func BenchmarkRefHasher_1k(t *testing.B) { benchmarkRefHasher(4096/4, t) }
func BenchmarkRefHasher_512b(t *testing.B) { benchmarkRefHasher(4096/8, t) }
func BenchmarkRefHasher_256b(t *testing.B) { benchmarkRefHasher(4096/16, t) }
func BenchmarkRefHasher_128b(t *testing.B) { benchmarkRefHasher(4096/32, t) }
func BenchmarkHasher_4k(t *testing.B) { benchmarkHasher(4096, t) }
func BenchmarkHasher_2k(t *testing.B) { benchmarkHasher(4096/2, t) }
func BenchmarkHasher_1k(t *testing.B) { benchmarkHasher(4096/4, t) }
func BenchmarkHasher_512b(t *testing.B) { benchmarkHasher(4096/8, t) }
func BenchmarkHasher_256b(t *testing.B) { benchmarkHasher(4096/16, t) }
func BenchmarkHasher_128b(t *testing.B) { benchmarkHasher(4096/32, t) }
func BenchmarkHasherNoReuse_4k(t *testing.B) { benchmarkHasherReuse(1, 4096, t) }
func BenchmarkHasherNoReuse_2k(t *testing.B) { benchmarkHasherReuse(1, 4096/2, t) }
func BenchmarkHasherNoReuse_1k(t *testing.B) { benchmarkHasherReuse(1, 4096/4, t) }
func BenchmarkHasherNoReuse_512b(t *testing.B) { benchmarkHasherReuse(1, 4096/8, t) }
func BenchmarkHasherNoReuse_256b(t *testing.B) { benchmarkHasherReuse(1, 4096/16, t) }
func BenchmarkHasherNoReuse_128b(t *testing.B) { benchmarkHasherReuse(1, 4096/32, t) }
func BenchmarkHasherReuse_4k(t *testing.B) { benchmarkHasherReuse(16, 4096, t) }
func BenchmarkHasherReuse_2k(t *testing.B) { benchmarkHasherReuse(16, 4096/2, t) }
func BenchmarkHasherReuse_1k(t *testing.B) { benchmarkHasherReuse(16, 4096/4, t) }
func BenchmarkHasherReuse_512b(t *testing.B) { benchmarkHasherReuse(16, 4096/8, t) }
func BenchmarkHasherReuse_256b(t *testing.B) { benchmarkHasherReuse(16, 4096/16, t) }
func BenchmarkHasherReuse_128b(t *testing.B) { benchmarkHasherReuse(16, 4096/32, t) }
// benchmarks the minimum hashing time for a balanced (for simplicity) BMT
// by doing count/segmentsize parallel hashings of 2*segmentsize bytes,
// on maxproccnt goroutines, each reusing the base hasher;
// the premise is that this is the minimum computation needed for a BMT,
// therefore this serves as a theoretical optimum for concurrent implementations
func benchmarkBMTBaseline(n int, t *testing.B) {
tdata := testDataReader(64)
data := make([]byte, 64)
tdata.Read(data)
hasher := sha3.NewKeccak256
t.ReportAllocs()
t.ResetTimer()
for i := 0; i < t.N; i++ {
count := int32((n-1)/hasher().Size() + 1)
wg := sync.WaitGroup{}
wg.Add(maxproccnt)
var i int32
for j := 0; j < maxproccnt; j++ {
go func() {
defer wg.Done()
h := hasher()
for atomic.AddInt32(&i, 1) < count {
h.Reset()
h.Write(data)
h.Sum(nil)
}
}()
}
wg.Wait()
}
}
func benchmarkHasher(n int, t *testing.B) {
tdata := testDataReader(n)
data := make([]byte, n)
tdata.Read(data)
size := 1
hasher := sha3.NewKeccak256
segmentCount := 128
pool := NewTreePool(hasher, segmentCount, size)
bmt := New(pool)
t.ReportAllocs()
t.ResetTimer()
for i := 0; i < t.N; i++ {
bmt.Reset()
bmt.Write(data)
bmt.Sum(nil)
}
}
func benchmarkHasherReuse(poolsize, n int, t *testing.B) {
tdata := testDataReader(n)
data := make([]byte, n)
tdata.Read(data)
hasher := sha3.NewKeccak256
segmentCount := 128
pool := NewTreePool(hasher, segmentCount, poolsize)
cycles := 200
t.ReportAllocs()
t.ResetTimer()
for i := 0; i < t.N; i++ {
wg := sync.WaitGroup{}
wg.Add(cycles)
for j := 0; j < cycles; j++ {
bmt := New(pool)
go func() {
defer wg.Done()
bmt.Reset()
bmt.Write(data)
bmt.Sum(nil)
}()
}
wg.Wait()
}
}
func benchmarkSHA3(n int, t *testing.B) {
data := make([]byte, n)
tdata := testDataReader(n)
tdata.Read(data)
hasher := sha3.NewKeccak256
h := hasher()
t.ReportAllocs()
t.ResetTimer()
for i := 0; i < t.N; i++ {
h.Reset()
h.Write(data)
h.Sum(nil)
}
}
func benchmarkRefHasher(n int, t *testing.B) {
data := make([]byte, n)
tdata := testDataReader(n)
tdata.Read(data)
hasher := sha3.NewKeccak256
rbmt := NewRefHasher(hasher, 128)
t.ReportAllocs()
t.ResetTimer()
for i := 0; i < t.N; i++ {
rbmt.Hash(data)
}
}


@ -2,37 +2,39 @@
Tagged releases and develop branch commits are available as installable Debian packages
for Ubuntu. Packages are built for all Ubuntu versions which are supported by
Canonical:
- Trusty Tahr (14.04 LTS)
- Xenial Xerus (16.04 LTS)
- Yakkety Yak (16.10)
- Zesty Zapus (17.04)
Canonical.
Packages of develop branch commits have suffix -unstable and cannot be installed alongside
the stable version. Switching between release streams requires user intervention.
## Launchpad
The packages are built and served by launchpad.net. We generate a Debian source package
for each distribution and upload it. Their builder picks up the source package, builds it
and installs the new version into the PPA repository. Launchpad requires a valid signature
by a team member for source package uploads. The signing key is stored in an environment
variable which Travis CI makes available to certain builds.
by a team member for source package uploads.
The signing key is stored in an environment variable which Travis CI makes available to
certain builds. Since Travis CI doesn't support FTP, SFTP is used to transfer the
packages. To set this up yourself, you need to create a Launchpad user and add a GPG key
and SSH key to it. Then encode both keys as base64 and configure 'secret' environment
variables `PPA_SIGNING_KEY` and `PPA_SSH_KEY` on Travis.
We want to build go-ethereum with the most recent version of Go, irrespective of the Go
version that is available in the main Ubuntu repository. In order to make this possible,
our PPA depends on the ~gophers/ubuntu/archive PPA. Our source package build-depends on
golang-1.9, which is co-installable alongside the regular golang package. PPA dependencies
golang-1.10, which is co-installable alongside the regular golang package. PPA dependencies
can be edited at https://launchpad.net/%7Eethereum/+archive/ubuntu/ethereum/+edit-dependencies
## Building Packages Locally (for testing)
You need to run Ubuntu to do test packaging.
Add the gophers PPA and install Go 1.9 and Debian packaging tools:
Add the gophers PPA and install Go 1.10 and Debian packaging tools:
$ sudo apt-add-repository ppa:gophers/ubuntu/archive
$ sudo apt-get update
$ sudo apt-get install build-essential golang-1.9 devscripts debhelper
$ sudo apt-get install build-essential golang-1.10 devscripts debhelper python-bzrlib python-paramiko
Create the source packages:


@ -26,7 +26,7 @@ Available commands are:
install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables
test [ -coverage ] [ packages... ] -- runs the tests
lint -- runs certain pre-selected linters
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artefacts
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artifacts
importkeys -- imports signing keys from env
debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package
nsis -- creates a Windows NSIS installer
@ -59,6 +59,8 @@ import (
"time"
"github.com/ethereum/go-ethereum/internal/build"
"github.com/ethereum/go-ethereum/params"
sv "github.com/ethereum/go-ethereum/swarm/version"
)
var (
@ -77,52 +79,84 @@ var (
executablePath("geth"),
executablePath("puppeth"),
executablePath("rlpdump"),
executablePath("swarm"),
executablePath("wnode"),
}
// Files that end up in the swarm*.zip archive.
swarmArchiveFiles = []string{
"COPYING",
executablePath("swarm"),
}
// A debian package is created for all executables listed here.
debExecutables = []debExecutable{
{
Name: "abigen",
BinaryName: "abigen",
Description: "Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages.",
},
{
Name: "bootnode",
BinaryName: "bootnode",
Description: "Ethereum bootnode.",
},
{
Name: "evm",
BinaryName: "evm",
Description: "Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode.",
},
{
Name: "geth",
BinaryName: "geth",
Description: "Ethereum CLI client.",
},
{
Name: "puppeth",
BinaryName: "puppeth",
Description: "Ethereum private network manager.",
},
{
Name: "rlpdump",
BinaryName: "rlpdump",
Description: "Developer utility tool that prints RLP structures.",
},
{
Name: "swarm",
Description: "Ethereum Swarm daemon and tools",
},
{
Name: "wnode",
BinaryName: "wnode",
Description: "Ethereum Whisper diagnostic tool",
},
}
// A debian package is created for all executables listed here.
debSwarmExecutables = []debExecutable{
{
BinaryName: "swarm",
PackageName: "ethereum-swarm",
Description: "Ethereum Swarm daemon and tools",
},
}
debEthereum = debPackage{
Name: "ethereum",
Version: params.Version,
Executables: debExecutables,
}
debSwarm = debPackage{
Name: "ethereum-swarm",
Version: sv.Version,
Executables: debSwarmExecutables,
}
// Debian meta packages to build and push to Ubuntu PPA
debPackages = []debPackage{
debSwarm,
debEthereum,
}
// Packages to be cross-compiled by the xgo command
allCrossCompiledArchiveFiles = append(allToolsArchiveFiles, swarmArchiveFiles...)
// Distros for which packages are created.
// Note: vivid is unsupported because there is no golang-1.6 package for it.
// Note: wily is unsupported because it was officially deprecated on launchpad.
// Note: yakkety is unsupported because it was officially deprecated on launchpad.
// Note: zesty is unsupported because it was officially deprecated on launchpad.
debDistros = []string{"trusty", "xenial", "artful", "bionic"}
// Note: artful is unsupported because it was officially deprecated on launchpad.
debDistros = []string{"trusty", "xenial", "bionic", "cosmic"}
)
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@ -182,13 +216,13 @@ func doInstall(cmdline []string) {
// Check Go version. People regularly open issues about compilation
// failure with outdated Go. This should save them the trouble.
if !strings.Contains(runtime.Version(), "devel") {
// Figure out the minor version number since we can't textually compare (1.10 < 1.8)
// Figure out the minor version number since we can't textually compare (1.10 < 1.9)
var minor int
fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
if minor < 8 {
if minor < 9 {
log.Println("You have Go version", runtime.Version())
log.Println("go-ethereum requires at least Go version 1.8 and cannot")
log.Println("go-ethereum requires at least Go version 1.9 and cannot")
log.Println("be compiled with an earlier version. Please upgrade your Go installation.")
os.Exit(1)
}
@ -262,16 +296,6 @@ func goTool(subcmd string, args ...string) *exec.Cmd {
func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd {
cmd := build.GoTool(subcmd, args...)
if subcmd == "build" || subcmd == "install" || subcmd == "test" {
// Go CGO has a Windows linker error prior to 1.8 (https://github.com/golang/go/issues/8756).
// Work around issue by allowing multiple definitions for <1.8 builds.
var minor int
fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
if runtime.GOOS == "windows" && minor < 8 {
cmd.Args = append(cmd.Args, []string{"-ldflags", "-extldflags -Wl,--allow-multiple-definition"}...)
}
}
cmd.Env = []string{"GOPATH=" + build.GOPATH()}
if arch == "" || arch == runtime.GOARCH {
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
@ -296,9 +320,7 @@ func goToolArch(arch string, cc string, subcmd string, args ...string) *exec.Cmd
// "tests" also includes static analysis tools such as vet.
func doTest(cmdline []string) {
var (
coverage = flag.Bool("coverage", false, "Whether to record code coverage")
)
coverage := flag.Bool("coverage", false, "Whether to record code coverage")
flag.CommandLine.Parse(cmdline)
env := build.Env()
@ -308,14 +330,11 @@ func doTest(cmdline []string) {
}
packages = build.ExpandPackagesNoVendor(packages)
// Run analysis tools before the tests.
build.MustRun(goTool("vet", packages...))
// Run the actual tests.
gotest := goTool("test", buildFlags(env)...)
// Test a single package at a time. CI builders are slow
// and some tests run into timeouts under load.
gotest.Args = append(gotest.Args, "-p", "1")
gotest := goTool("test", buildFlags(env)...)
gotest.Args = append(gotest.Args, "-p", "1", "-timeout", "5m")
if *coverage {
gotest.Args = append(gotest.Args, "-covermode=atomic", "-cover")
}
@ -339,7 +358,11 @@ func doLint(cmdline []string) {
// Run fast linters batched together
configs := []string{
"--vendor",
"--tests",
"--deadline=2m",
"--disable-all",
"--enable=goimports",
"--enable=varcheck",
"--enable=vet",
"--enable=gofmt",
"--enable=misspell",
@ -350,13 +373,12 @@ func doLint(cmdline []string) {
// Run slow linters one by one
for _, linter := range []string{"unconvert", "gosimple"} {
configs = []string{"--vendor", "--deadline=10m", "--disable-all", "--enable=" + linter}
configs = []string{"--vendor", "--tests", "--deadline=10m", "--disable-all", "--enable=" + linter}
build.MustRunCommand(filepath.Join(GOBIN, "gometalinter.v2"), append(configs, packages...)...)
}
}
// Release Packaging
func doArchive(cmdline []string) {
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture cross packaging")
@ -376,10 +398,14 @@ func doArchive(cmdline []string) {
}
var (
env = build.Env()
base = archiveBasename(*arch, env)
geth = "geth-" + base + ext
alltools = "geth-alltools-" + base + ext
env = build.Env()
basegeth = archiveBasename(*arch, params.ArchiveVersion(env.Commit))
geth = "geth-" + basegeth + ext
alltools = "geth-alltools-" + basegeth + ext
baseswarm = archiveBasename(*arch, sv.ArchiveVersion(env.Commit))
swarm = "swarm-" + baseswarm + ext
)
maybeSkipArchive(env)
if err := build.WriteArchive(geth, gethArchiveFiles); err != nil {
@ -388,14 +414,17 @@ func doArchive(cmdline []string) {
if err := build.WriteArchive(alltools, allToolsArchiveFiles); err != nil {
log.Fatal(err)
}
for _, archive := range []string{geth, alltools} {
if err := build.WriteArchive(swarm, swarmArchiveFiles); err != nil {
log.Fatal(err)
}
for _, archive := range []string{geth, alltools, swarm} {
if err := archiveUpload(archive, *upload, *signer); err != nil {
log.Fatal(err)
}
}
}
func archiveBasename(arch string, env build.Environment) string {
func archiveBasename(arch string, archiveVersion string) string {
platform := runtime.GOOS + "-" + arch
if arch == "arm" {
platform += os.Getenv("GOARM")
@ -406,28 +435,14 @@ func archiveBasename(arch string, env build.Environment) string {
if arch == "ios" {
platform = "ios-all"
}
return platform + "-" + archiveVersion(env)
}
func archiveVersion(env build.Environment) string {
version := build.VERSION()
if isUnstableBuild(env) {
version += "-unstable"
}
if env.Commit != "" {
version += "-" + env.Commit[:8]
}
return version
return platform + "-" + archiveVersion
}
func archiveUpload(archive string, blobstore string, signer string) error {
// If signing was requested, generate the signature files
if signer != "" {
pgpkey, err := base64.StdEncoding.DecodeString(os.Getenv(signer))
if err != nil {
return fmt.Errorf("invalid base64 %s", signer)
}
if err := build.PGPSignFile(archive, archive+".asc", string(pgpkey)); err != nil {
key := getenvBase64(signer)
if err := build.PGPSignFile(archive, archive+".asc", string(key)); err != nil {
return err
}
}
@ -467,11 +482,11 @@ func maybeSkipArchive(env build.Environment) {
}
// Debian Packaging
func doDebianSource(cmdline []string) {
var (
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ppa:ethereum/ethereum")`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
)
@ -481,35 +496,69 @@ func doDebianSource(cmdline []string) {
maybeSkipArchive(env)
// Import the signing key.
if b64key := os.Getenv("PPA_SIGNING_KEY"); b64key != "" {
key, err := base64.StdEncoding.DecodeString(b64key)
if err != nil {
log.Fatal("invalid base64 PPA_SIGNING_KEY")
}
if key := getenvBase64("PPA_SIGNING_KEY"); len(key) > 0 {
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
}
// Create the packages.
for _, distro := range debDistros {
meta := newDebMetadata(distro, *signer, env, now)
pkgdir := stageDebianSource(*workdir, meta)
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc")
debuild.Dir = pkgdir
build.MustRun(debuild)
// Create Debian packages and upload them
for _, pkg := range debPackages {
for _, distro := range debDistros {
meta := newDebMetadata(distro, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables)
pkgdir := stageDebianSource(*workdir, meta)
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc", "-d", "-Zxz")
debuild.Dir = pkgdir
build.MustRun(debuild)
changes := fmt.Sprintf("%s_%s_source.changes", meta.Name(), meta.VersionString())
changes = filepath.Join(*workdir, changes)
if *signer != "" {
build.MustRunCommand("debsign", changes)
}
if *upload != "" {
build.MustRunCommand("dput", *upload, changes)
var (
basename = fmt.Sprintf("%s_%s", meta.Name(), meta.VersionString())
source = filepath.Join(*workdir, basename+".tar.xz")
dsc = filepath.Join(*workdir, basename+".dsc")
changes = filepath.Join(*workdir, basename+"_source.changes")
)
if *signer != "" {
build.MustRunCommand("debsign", changes)
}
if *upload != "" {
ppaUpload(*workdir, *upload, *sshUser, []string{source, dsc, changes})
}
}
}
}
func ppaUpload(workdir, ppa, sshUser string, files []string) {
p := strings.Split(ppa, "/")
if len(p) != 2 {
log.Fatal("-upload PPA name must contain single /")
}
if sshUser == "" {
sshUser = p[0]
}
incomingDir := fmt.Sprintf("~%s/ubuntu/%s", p[0], p[1])
// Create the SSH identity file if it doesn't exist.
var idfile string
if sshkey := getenvBase64("PPA_SSH_KEY"); len(sshkey) > 0 {
idfile = filepath.Join(workdir, "sshkey")
if _, err := os.Stat(idfile); os.IsNotExist(err) {
ioutil.WriteFile(idfile, sshkey, 0600)
}
}
// Upload
dest := sshUser + "@ppa.launchpad.net"
if err := build.UploadSFTP(idfile, dest, incomingDir, files); err != nil {
log.Fatal(err)
}
}
func getenvBase64(variable string) []byte {
dec, err := base64.StdEncoding.DecodeString(os.Getenv(variable))
if err != nil {
log.Fatal("invalid base64 " + variable)
}
return []byte(dec)
}
func makeWorkdir(wdflag string) string {
var err error
if wdflag != "" {
@ -530,9 +579,17 @@ func isUnstableBuild(env build.Environment) bool {
return true
}
type debPackage struct {
Name string // the name of the Debian package to produce, e.g. "ethereum", or "ethereum-swarm"
Version string // the clean version of the debPackage, e.g. 1.8.12 or 0.3.0, without any metadata
Executables []debExecutable // executables to be included in the package
}
type debMetadata struct {
Env build.Environment
PackageName string
// go-ethereum version being built. Note that this
// is not the debian package version. The package version
// is constructed by VersionString.
@ -544,21 +601,33 @@ type debMetadata struct {
}
type debExecutable struct {
Name, Description string
PackageName string
BinaryName string
Description string
}
func newDebMetadata(distro, author string, env build.Environment, t time.Time) debMetadata {
// Package returns the name of the package if present, or
// fallbacks to BinaryName
func (d debExecutable) Package() string {
if d.PackageName != "" {
return d.PackageName
}
return d.BinaryName
}
func newDebMetadata(distro, author string, env build.Environment, t time.Time, name string, version string, exes []debExecutable) debMetadata {
if author == "" {
// No signing key, use default author.
author = "Ethereum Builds <fjl@ethereum.org>"
}
return debMetadata{
PackageName: name,
Env: env,
Author: author,
Distro: distro,
Version: build.VERSION(),
Version: version,
Time: t.Format(time.RFC1123Z),
Executables: debExecutables,
Executables: exes,
}
}
@ -566,9 +635,9 @@ func newDebMetadata(distro, author string, env build.Environment, t time.Time) d
// on all executable packages.
func (meta debMetadata) Name() string {
if isUnstableBuild(meta.Env) {
return "ethereum-unstable"
return meta.PackageName + "-unstable"
}
return "ethereum"
return meta.PackageName
}
// VersionString returns the debian version of the packages.
@ -595,9 +664,9 @@ func (meta debMetadata) ExeList() string {
// ExeName returns the package name of an executable package.
func (meta debMetadata) ExeName(exe debExecutable) string {
if isUnstableBuild(meta.Env) {
return exe.Name + "-unstable"
return exe.Package() + "-unstable"
}
return exe.Name
return exe.Package()
}
// ExeConflicts returns the content of the Conflicts field
@ -612,7 +681,7 @@ func (meta debMetadata) ExeConflicts(exe debExecutable) string {
// be preferred and the conflicting files should be handled via
// alternates. We might do this eventually but using a conflict is
// easier now.
return "ethereum, " + exe.Name
return "ethereum, " + exe.Package()
}
return ""
}
@ -629,24 +698,23 @@ func stageDebianSource(tmpdir string, meta debMetadata) (pkgdir string) {
// Put the debian build files in place.
debian := filepath.Join(pkgdir, "debian")
build.Render("build/deb.rules", filepath.Join(debian, "rules"), 0755, meta)
build.Render("build/deb.changelog", filepath.Join(debian, "changelog"), 0644, meta)
build.Render("build/deb.control", filepath.Join(debian, "control"), 0644, meta)
build.Render("build/deb.copyright", filepath.Join(debian, "copyright"), 0644, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.rules", filepath.Join(debian, "rules"), 0755, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.changelog", filepath.Join(debian, "changelog"), 0644, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.control", filepath.Join(debian, "control"), 0644, meta)
build.Render("build/deb/"+meta.PackageName+"/deb.copyright", filepath.Join(debian, "copyright"), 0644, meta)
build.RenderString("8\n", filepath.Join(debian, "compat"), 0644, meta)
build.RenderString("3.0 (native)\n", filepath.Join(debian, "source/format"), 0644, meta)
for _, exe := range meta.Executables {
install := filepath.Join(debian, meta.ExeName(exe)+".install")
docs := filepath.Join(debian, meta.ExeName(exe)+".docs")
build.Render("build/deb.install", install, 0644, exe)
build.Render("build/deb.docs", docs, 0644, exe)
build.Render("build/deb/"+meta.PackageName+"/deb.install", install, 0644, exe)
build.Render("build/deb/"+meta.PackageName+"/deb.docs", docs, 0644, exe)
}
return pkgdir
}
// Windows installer
func doWindowsInstaller(cmdline []string) {
// Parse the flags and make skip installer generation on PRs
var (
@ -696,11 +764,11 @@ func doWindowsInstaller(cmdline []string) {
// Build the installer. This assumes that all the needed files have been previously
// built (don't mix building and packaging to keep cross compilation complexity to a
// minimum).
version := strings.Split(build.VERSION(), ".")
version := strings.Split(params.Version, ".")
if env.Commit != "" {
version[2] += "-" + env.Commit[:8]
}
installer, _ := filepath.Abs("geth-" + archiveBasename(*arch, env) + ".exe")
installer, _ := filepath.Abs("geth-" + archiveBasename(*arch, params.ArchiveVersion(env.Commit)) + ".exe")
build.MustRunCommand("makensis.exe",
"/DOUTPUTFILE="+installer,
"/DMAJORVERSION="+version[0],
@ -736,9 +804,9 @@ func doAndroidArchive(cmdline []string) {
log.Fatal("Please ensure ANDROID_NDK points to your Android NDK")
}
// Build the Android archive and Maven resources
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile"))
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("init", "--ndk", os.Getenv("ANDROID_NDK")))
build.MustRun(gomobileTool("bind", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
build.MustRun(gomobileTool("bind", "-ldflags", "-s -w", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
if *local {
// If we're building locally, copy bundle to build dir and skip Maven
@ -752,7 +820,7 @@ func doAndroidArchive(cmdline []string) {
maybeSkipArchive(env)
// Sign and upload the archive to Azure
archive := "geth-" + archiveBasename("android", env) + ".aar"
archive := "geth-" + archiveBasename("android", params.ArchiveVersion(env.Commit)) + ".aar"
os.Rename("geth.aar", archive)
if err := archiveUpload(archive, *upload, *signer); err != nil {
@ -762,22 +830,22 @@ func doAndroidArchive(cmdline []string) {
os.Rename(archive, meta.Package+".aar")
if *signer != "" && *deploy != "" {
// Import the signing key into the local GPG instance
if b64key := os.Getenv(*signer); b64key != "" {
key, err := base64.StdEncoding.DecodeString(b64key)
if err != nil {
log.Fatalf("invalid base64 %s", *signer)
}
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
key := getenvBase64(*signer)
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
keyID, err := build.PGPKeyID(string(key))
if err != nil {
log.Fatal(err)
}
// Upload the artifacts to Sonatype and/or Maven Central
repo := *deploy + "/service/local/staging/deploy/maven2"
if meta.Develop {
repo = *deploy + "/content/repositories/snapshots"
}
build.MustRunCommand("mvn", "gpg:sign-and-deploy-file",
build.MustRunCommand("mvn", "gpg:sign-and-deploy-file", "-e", "-X",
"-settings=build/mvn.settings", "-Durl="+repo, "-DrepositoryId=ossrh",
"-Dgpg.keyname="+keyID,
"-DpomFile="+meta.Package+".pom", "-Dfile="+meta.Package+".aar")
}
}
@ -787,9 +855,10 @@ func gomobileTool(subcmd string, args ...string) *exec.Cmd {
cmd.Args = append(cmd.Args, args...)
cmd.Env = []string{
"GOPATH=" + build.GOPATH(),
"PATH=" + GOBIN + string(os.PathListSeparator) + os.Getenv("PATH"),
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") {
continue
}
cmd.Env = append(cmd.Env, e)
@ -831,7 +900,7 @@ func newMavenMetadata(env build.Environment) mavenMetadata {
}
}
// Render the version and package strings
version := build.VERSION()
version := params.Version
if isUnstableBuild(env) {
version += "-SNAPSHOT"
}
@ -856,9 +925,9 @@ func doXCodeFramework(cmdline []string) {
env := build.Env()
// Build the iOS XCode framework
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile"))
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("init"))
bind := gomobileTool("bind", "--target", "ios", "--tags", "ios", "-v", "github.com/ethereum/go-ethereum/mobile")
bind := gomobileTool("bind", "-ldflags", "-s -w", "--target", "ios", "--tags", "ios", "-v", "github.com/ethereum/go-ethereum/mobile")
if *local {
// If we're building locally, use the build folder and stop afterwards
@ -866,7 +935,7 @@ func doXCodeFramework(cmdline []string) {
build.MustRun(bind)
return
}
archive := "geth-" + archiveBasename("ios", env)
archive := "geth-" + archiveBasename("ios", params.ArchiveVersion(env.Commit))
if err := os.Mkdir(archive, os.ModePerm); err != nil {
log.Fatal(err)
}
@ -922,7 +991,7 @@ func newPodMetadata(env build.Environment, archive string) podMetadata {
}
}
}
version := build.VERSION()
version := params.Version
if isUnstableBuild(env) {
version += "-unstable." + env.Buildnum
}
@ -952,7 +1021,7 @@ func doXgo(cmdline []string) {
if *alltools {
args = append(args, []string{"--dest", GOBIN}...)
for _, res := range allToolsArchiveFiles {
for _, res := range allCrossCompiledArchiveFiles {
if strings.HasPrefix(res, GOBIN) {
// Binary tool found, cross build it explicitly
args = append(args, "./"+filepath.Join("cmd", filepath.Base(res)))
@ -991,7 +1060,7 @@ func xgoTool(args []string) *exec.Cmd {
func doPurge(cmdline []string) {
var (
store = flag.String("store", "", `Destination from where to purge archives (usually "gethstore/builds")`)
limit = flag.Int("days", 30, `Age threshold above which to delete unstalbe archives`)
limit = flag.Int("days", 30, `Age threshold above which to delete unstable archives`)
)
flag.CommandLine.Parse(cmdline)
@ -1018,23 +1087,14 @@ func doPurge(cmdline []string) {
}
for i := 0; i < len(blobs); i++ {
for j := i + 1; j < len(blobs); j++ {
iTime, err := time.Parse(time.RFC1123, blobs[i].Properties.LastModified)
if err != nil {
log.Fatal(err)
}
jTime, err := time.Parse(time.RFC1123, blobs[j].Properties.LastModified)
if err != nil {
log.Fatal(err)
}
if iTime.After(jTime) {
if blobs[i].Properties.LastModified.After(blobs[j].Properties.LastModified) {
blobs[i], blobs[j] = blobs[j], blobs[i]
}
}
}
// Filter out all archives more recent that the given threshold
for i, blob := range blobs {
timestamp, _ := time.Parse(time.RFC1123, blob.Properties.LastModified)
if time.Since(timestamp) < time.Duration(*limit)*24*time.Hour {
if time.Since(blob.Properties.LastModified) < time.Duration(*limit)*24*time.Hour {
blobs = blobs[:i]
break
}

build/clean_go_build_cache.sh (new executable file, 19 lines)

@ -0,0 +1,19 @@
#!/bin/sh
# Cleaning the Go cache only makes sense if we actually have Go installed... or
# if Go is actually callable. This does not hold true during deb packaging, so
# we need an explicit check to avoid build failures.
if ! command -v go > /dev/null; then
exit
fi
version_gt() {
test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"
}
golang_version=$(go version |cut -d' ' -f3 |sed 's/go//')
# Clean go build cache when go version is greater than or equal to 1.10
if !(version_gt 1.10 $golang_version); then
go clean -cache
fi


@ -1 +0,0 @@
build/bin/{{.Name}} usr/bin


@ -1,13 +0,0 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
override_dh_auto_build:
build/env.sh /usr/lib/go-1.9/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:
%:
dh $@


@ -0,0 +1,19 @@
Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.10
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git
Vcs-Browser: https://github.com/ethereum/go-ethereum
{{range .Executables}}
Package: {{$.ExeName .}}
Conflicts: {{$.ExeConflicts .}}
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Built-Using: ${misc:Built-Using}
Description: {{.Description}}
{{.Description}}
{{end}}


@ -1,4 +1,4 @@
Copyright 2016 The go-ethereum Authors
Copyright 2018 The go-ethereum Authors
go-ethereum is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by


@ -0,0 +1 @@
build/bin/{{.BinaryName}} usr/bin


@ -0,0 +1,13 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
override_dh_auto_build:
build/env.sh /usr/lib/go-1.10/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:
%:
dh $@


@ -0,0 +1,5 @@
{{.Name}} ({{.VersionString}}) {{.Distro}}; urgency=low
* git build of {{.Env.Commit}}
-- {{.Author}} {{.Time}}


@ -2,7 +2,7 @@ Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.9
Build-Depends: debhelper (>= 8.0.0), golang-1.10
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git
@ -11,8 +11,8 @@ Vcs-Browser: https://github.com/ethereum/go-ethereum
Package: {{.Name}}
Architecture: any
Depends: ${misc:Depends}, {{.ExeList}}
Description: Meta-package to install geth and other tools
Meta-package to install geth and other tools
Description: Meta-package to install geth, swarm, and other tools
Meta-package to install geth, swarm and other tools
{{range .Executables}}
Package: {{$.ExeName .}}


@ -0,0 +1,14 @@
Copyright 2018 The go-ethereum Authors
go-ethereum is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
go-ethereum is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.


@ -0,0 +1 @@
AUTHORS


@ -0,0 +1 @@
build/bin/{{.BinaryName}} usr/bin


@ -0,0 +1,13 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
override_dh_auto_build:
build/env.sh /usr/lib/go-1.10/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:
%:
dh $@

build/goimports.sh (new executable file, 18 lines)

@ -0,0 +1,18 @@
#!/bin/sh
find_files() {
find . ! \( \
\( \
-path '.github' \
-o -path './build/_workspace' \
-o -path './build/bin' \
-o -path './crypto/bn256' \
-o -path '*/vendor/*' \
\) -prune \
\) -name '*.go'
}
GOFMT="gofmt -s -w"
GOIMPORTS="goimports -w"
find_files | xargs $GOFMT
find_files | xargs $GOIMPORTS


@ -1,3 +1,19 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// +build none
/*


@ -29,7 +29,7 @@ import (
)
var (
abiFlag = flag.String("abi", "", "Path to the Ethereum contract ABI json to bind")
abiFlag = flag.String("abi", "", "Path to the Ethereum contract ABI json to bind, - for STDIN")
binFlag = flag.String("bin", "", "Path to the Ethereum contract bytecode (generate deploy method)")
typFlag = flag.String("type", "", "Struct name for the binding (default = package name)")
@ -75,16 +75,27 @@ func main() {
bins []string
types []string
)
if *solFlag != "" {
if *solFlag != "" || (*abiFlag == "-" && *pkgFlag == "") {
// Generate the list of types to exclude from binding
exclude := make(map[string]bool)
for _, kind := range strings.Split(*excFlag, ",") {
exclude[strings.ToLower(kind)] = true
}
contracts, err := compiler.CompileSolidity(*solcFlag, *solFlag)
if err != nil {
fmt.Printf("Failed to build Solidity contract: %v\n", err)
os.Exit(-1)
var contracts map[string]*compiler.Contract
var err error
if *solFlag != "" {
contracts, err = compiler.CompileSolidity(*solcFlag, *solFlag)
if err != nil {
fmt.Printf("Failed to build Solidity contract: %v\n", err)
os.Exit(-1)
}
} else {
contracts, err = contractsFromStdin()
if err != nil {
fmt.Printf("Failed to read input ABIs from STDIN: %v\n", err)
os.Exit(-1)
}
}
// Gather all non-excluded contract for binding
for name, contract := range contracts {
@ -100,7 +111,13 @@ func main() {
}
} else {
// Otherwise load up the ABI, optional bytecode and type name from the parameters
abi, err := ioutil.ReadFile(*abiFlag)
var abi []byte
var err error
if *abiFlag == "-" {
abi, err = ioutil.ReadAll(os.Stdin)
} else {
abi, err = ioutil.ReadFile(*abiFlag)
}
if err != nil {
fmt.Printf("Failed to read input ABI: %v\n", err)
os.Exit(-1)
@ -138,3 +155,11 @@ func main() {
os.Exit(-1)
}
}
func contractsFromStdin() (map[string]*compiler.Contract, error) {
bytes, err := ioutil.ReadAll(os.Stdin)
if err != nil {
return nil, err
}
return compiler.ParseCombinedJSON(bytes, "", "", "", "")
}


@ -29,6 +29,7 @@ import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/nat"
"github.com/ethereum/go-ethereum/p2p/netutil"
)
@ -37,7 +38,7 @@ func main() {
var (
listenAddr = flag.String("addr", ":30301", "listen address")
genKey = flag.String("genkey", "", "generate a node key")
writeAddr = flag.Bool("writeaddress", false, "write out the node's pubkey hash and quit")
writeAddr = flag.Bool("writeaddress", false, "write out the node's public key and quit")
nodeKeyFile = flag.String("nodekey", "", "private key filename")
nodeKeyHex = flag.String("nodekeyhex", "", "private key as hex (for testing)")
natdesc = flag.String("nat", "none", "port mapping mechanism (any|none|upnp|pmp|extip:<IP>)")
@ -85,7 +86,7 @@ func main() {
}
if *writeAddr {
fmt.Printf("%v\n", discover.PubkeyID(&nodeKey.PublicKey))
fmt.Printf("%x\n", crypto.FromECDSAPub(&nodeKey.PublicKey)[1:])
os.Exit(0)
}
@ -118,16 +119,17 @@ func main() {
}
if *runv5 {
if _, err := discv5.ListenUDP(nodeKey, conn, realaddr, "", restrictList); err != nil {
if _, err := discv5.ListenUDP(nodeKey, conn, "", restrictList); err != nil {
utils.Fatalf("%v", err)
}
} else {
db, _ := enode.OpenDB("")
ln := enode.NewLocalNode(db, nodeKey)
cfg := discover.Config{
PrivateKey: nodeKey,
AnnounceAddr: realaddr,
NetRestrict: restrictList,
PrivateKey: nodeKey,
NetRestrict: restrictList,
}
if _, err := discover.ListenUDP(conn, cfg); err != nil {
if _, err := discover.ListenUDP(conn, ln, cfg); err != nil {
utils.Fatalf("%v", err)
}
}

1 cmd/clef/4byte.json Normal file

File diff suppressed because one or more lines are too long

878 cmd/clef/README.md Normal file
View File

@ -0,0 +1,878 @@
Clef
----
Clef can be used to sign transactions and data and is meant as a replacement for geth's account management.
This allows DApps not to depend on geth's account management. When a DApp wants to sign data it can send the data to
the signer, the signer will then provide the user with context and ask the user for permission to sign the data. If
the user grants the signing request the signer will send the signature back to the DApp.
This setup allows a DApp to connect to a remote Ethereum node and send transactions that are locally signed. This can
help in situations when a DApp is connected to a remote node because a local Ethereum node is not available or not
synchronised with the chain, or because a particular Ethereum node has no built-in (or only limited) account management.
Clef can run as a daemon on the same machine, off a USB stick like the [USB armory](https://inversepath.com/usbarmory),
or in a separate VM in a [QubesOS](https://www.qubes-os.org/)-type OS setup.
Check out
* the [tutorial](tutorial.md) for some concrete examples on how the signer works.
* the [setup docs](docs/setup.md) for some information on how to configure it to work on QubesOS or USBArmory.
## Command line flags
Clef accepts the following command line options:
```
COMMANDS:
init Initialize the signer, generate secret storage
attest Attest that a js-file is to be used
setpw    Store a credential for a keystore file
help Shows a list of commands or help for one command
GLOBAL OPTIONS:
--loglevel value log level to emit to the screen (default: 4)
--keystore value Directory for the keystore (default: "$HOME/.ethereum/keystore")
--configdir value Directory for clef configuration (default: "$HOME/.clef")
--networkid value Network identifier (integer, 1=Frontier, 2=Morden (disused), 3=Ropsten, 4=Rinkeby) (default: 1)
--lightkdf Reduce key-derivation RAM & CPU usage at some expense of KDF strength
--nousb Disables monitoring for and managing USB hardware wallets
--rpcaddr value HTTP-RPC server listening interface (default: "localhost")
--rpcport value HTTP-RPC server listening port (default: 8550)
--signersecret value A file containing the password used to encrypt signer credentials, e.g. keystore credentials and ruleset hash
--4bytedb value File containing 4byte-identifiers (default: "./4byte.json")
--4bytedb-custom value File used for writing new 4byte-identifiers submitted via API (default: "./4byte-custom.json")
--auditlog value File used to emit audit logs. Set to "" to disable (default: "audit.log")
--rules value Enable rule-engine (default: "rules.json")
--stdio-ui             Use STDIN/STDOUT as a channel for an external UI. This means that STDIN/STDOUT is used for RPC-communication with e.g. a graphical user interface, and can be used when the signer is started by an external process.
--stdio-ui-test Mechanism to test interface between signer and UI. Requires 'stdio-ui'.
--help, -h show help
--version, -v print the version
```
Example:
```
clef --keystore /my/keystore --networkid 4
```
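A fuller invocation combining several of the flags listed above (a sketch; the paths are placeholders, and enabling the HTTP listener additionally needs the `--rpc` switch registered in `main.go`):
```bash
# Sketch: run Clef with an explicit keystore, config dir and rule file.
clef --keystore /my/keystore \
     --configdir /my/clef-config \
     --rpc --rpcaddr localhost --rpcport 8550 \
     --rules rules.json
```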
## Security model
The security model of the signer is as follows:
* One critical component (the signer binary / daemon) is responsible for handling cryptographic operations: signing, private keys, encryption/decryption of keystore files.
* The signer binary has a well-defined 'external' API.
* The 'external' API is considered UNTRUSTED.
* The signer binary also communicates with whatever process invoked the binary, via stdin/stdout.
* This channel is considered 'trusted'. Over this channel, approvals and passwords are communicated.
The general flow for signing a transaction using e.g. geth is as follows:
![image](sign_flow.png)
In this case, `geth` would be started with `--externalsigner=http://localhost:8550` and would relay requests to `eth.sendTransaction`.
## TODOs
Some snags and todos
* [ ] The signer should take a startup param "--no-change", for UIs that do not contain the capability
to perform changes to things, only approve/deny. Such a UI should be able to start the signer in
a more secure mode by telling it that it only wants approve/deny capabilities.
* [x] It would be nice if the signer could collect new 4byte-ids / method selectors, and have a
secondary database for those (`4byte_custom.json`). Users could then (optionally) submit their collections for
inclusion upstream.
* It should be possible to configure the signer to check if an account is indeed known to it, before
passing on to the UI. The reason it currently does not is that it would make it possible to enumerate
accounts if it immediately returned "unknown account".
* [x] It should be possible to configure the signer to auto-allow listing (certain) accounts, instead of asking every time.
* [x] Upon startup, the signer should spit out some info to the caller (particularly important when executed in `stdio-ui`-mode),
invoking methods with the following info:
* [x] Version info about the signer
* [x] Address of API (http/ipc)
* [ ] List of known accounts
* [ ] Have a default timeout on signing operations, so that if the user has not answered within e.g. 60 seconds, the request is rejected.
* [ ] `account_signRawTransaction`
* [ ] `account_bulkSignTransactions([] transactions)` should
* only exist if enabled via config/flag
* only allow non-data-sending transactions
* all txs must use the same `from`-account
* let the user confirm, showing
* the total amount
* the number of unique recipients
* Geth todos
- The signer should pass the `Origin` header as call-info to the UI. As of right now, the way that info about the request is
put together is a bit of a hack into the http server. This could probably be greatly improved.
- Relay: Geth should be started with `geth --external_signer localhost:8550`.
- Currently, the Geth APIs use `common.Address` in the arguments to transaction submission (e.g. the `to` field). This
type is 20 bytes, and is incapable of carrying checksum information. The signer uses `common.MixedcaseAddress`, which
retains the original input.
- The Geth api should switch to use the same type, and relay `to`-account verbatim to the external api.
* [x] Storage
* [x] An encrypted key-value storage should be implemented
* See [rules.md](rules.md) for more info about this.
* Another potential thing to introduce is pairing.
* To prevent spurious requests which users just accept, implement a way to "pair" the caller with the signer (external API).
* Thus geth/mist/cpp would cryptographically handshake and afterwards the caller would be allowed to make signing requests.
* This feature would make the addition of rules less dangerous.
* Wallets / accounts. Add API methods for wallets.
## Communication
### External API
The signer listens to HTTP requests on `rpcaddr`:`rpcport`, with the same JSONRPC standard as Geth. The messages are
expected to be JSON [jsonrpc 2.0 standard](http://www.jsonrpc.org/specification).
Some of these calls can require user interaction. Clients must be aware that responses
may be delayed significantly or may never be received if a user decides to ignore the confirmation request.
The External API is **untrusted**: it does not accept credentials over this API, nor does it expect
that requests have any authority.
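For example, assuming the default `localhost:8550` endpoint, an external-API request can be issued with plain `curl` (a sketch; the call blocks until the user approves or denies it in the UI):
```bash
# Ask the signer for the accounts it manages; the response arrives only
# after the user has approved the listing request in the UI.
curl -H "Content-Type: application/json" -X POST \
  --data '{"id": 1, "jsonrpc": "2.0", "method": "account_list", "params": []}' \
  http://localhost:8550
```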
### UI API
The signer has one native console-based UI, for operation without any standalone tools.
However, there is also an API to communicate with an external UI. To enable that UI,
the signer needs to be executed with the `--stdio-ui` option, which allocates the
`stdin`/`stdout` for the UI-api.
An example (insecure) proof-of-concept has been implemented in `pythonsigner.py`.
The model is as follows:
* The user starts the UI app (`pythonsigner.py`).
* The UI app starts the `signer` with `--stdio-ui`, and listens to the
process output for confirmation-requests.
* The `signer` opens the external http api.
* When the `signer` receives requests, it sends a `jsonrpc` request via `stdout`.
* The UI app prompts the user accordingly, and responds to the `signer`
* The `signer` signs (or not), and responds to the original request.
## External API
See the [external api changelog](extapi_changelog.md) for information about changes to this API.
### Encoding
- number: positive integers that are hex encoded
- data: hex encoded data
- string: ASCII string
All hex encoded values must be prefixed with `0x`.
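As a sketch, the three encodings side by side in one (hypothetical) parameter object:
```json
{
  "value": "0x1234",
  "data": "0xaabbccdd",
  "method": "transfer(uint256,address)"
}
```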
## Methods
### account_new
#### Create new password protected account
The signer will generate a new private key, encrypt it according to the [web3 keystore spec](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) and store it in the keystore directory.
The client is responsible for creating a backup of the keystore. If the keystore is lost there is no method of retrieving lost accounts.
#### Arguments
None
#### Result
- address [string]: account address that is derived from the generated key
- url [string]: location of the keyfile
#### Sample call
```json
{
"id": 0,
"jsonrpc": "2.0",
"method": "account_new",
"params": []
}
{
"id": 0,
"jsonrpc": "2.0",
"result": {
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"url": "keystore:///my/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
}
```
### account_list
#### List available accounts
List all accounts that this signer currently manages
#### Arguments
None
#### Result
- array with account records:
- account.address [string]: account address that is derived from the generated key
- account.type [string]: type of the account
- account.url [string]: location of the account
#### Sample call
```json
{
"id": 1,
"jsonrpc": "2.0",
"method": "account_list"
}
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"address": "0xafb2f771f58513609765698f65d3f2f0224a956f",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T07-26-47.162109726Z--afb2f771f58513609765698f65d3f2f0224a956f"
},
{
"address": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T08-40-15.419655028Z--bea9183f8f4f03d427f6bcea17388bdff1cab133"
}
]
}
```
### account_signTransaction
#### Sign transactions
Signs a transaction and responds with the signed transaction in RLP-encoded form.
#### Arguments
1. transaction object:
- `from` [address]: account to send the transaction from
- `to` [address]: receiver account. If omitted or `0x`, will cause contract creation.
- `gas` [number]: maximum amount of gas to burn
- `gasPrice` [number]: gas price
- `value` [number:optional]: amount of Wei to send with the transaction
- `data` [data:optional]: input data
- `nonce` [number]: account nonce
2. method signature [string:optional]
- The method signature, if present, is to aid decoding the calldata. Should consist of `methodname(paramtype,...)`, e.g. `transfer(uint256,address)`. The signer may use this data to parse the supplied calldata and show the result to the user. The data, however, is considered totally untrusted, and reliability is not expected.
#### Result
- signed transaction in RLP encoded form [data]
#### Sample call
```json
{
"id": 2,
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db",
"gas": "0x55555",
"gasPrice": "0x1234",
"input": "0xabcd",
"nonce": "0x0",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x1234"
}
]
}
```
Response
```json
{
"jsonrpc": "2.0",
"id": 67,
"error": {
"code": -32000,
"message": "Request denied"
}
}
```
#### Sample call with ABI-data
```json
{
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"gas": "0x333",
"gasPrice": "0x1",
"nonce": "0x0",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
},
"safeSend(address)"
],
"id": 67
}
```
Response
```json
{
"jsonrpc": "2.0",
"id": 67,
"result": {
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1",
"gas": "0x333",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"input": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
}
```
Bash example:
```bash
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
{"jsonrpc":"2.0","id":67,"result":{"raw":"0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","tx":{"nonce":"0x0","gasPrice":"0x1","gas":"0x333","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0","value":"0x0","input":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012","v":"0x26","r":"0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e","s":"0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","hash":"0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"}}}
```
### account_sign
#### Sign data
Signs a chunk of data and returns the calculated signature.
#### Arguments
- account [address]: account to sign with
- data [data]: data to sign
#### Result
- calculated signature [data]
#### Sample call
```json
{
"id": 3,
"jsonrpc": "2.0",
"method": "account_sign",
"params": [
"0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db",
"0xaabbccdd"
]
}
```
Response
```json
{
"id": 3,
"jsonrpc": "2.0",
"result": "0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
}
```
### account_ecRecover
#### Recover address
Derive the address of the account that was used to sign data, given the data and the signature.
#### Arguments
- data [data]: data that was signed
- signature [data]: the signature to verify
#### Result
- derived account [address]
#### Sample call
```json
{
"id": 4,
"jsonrpc": "2.0",
"method": "account_ecRecover",
"params": [
"0xaabbccdd",
"0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
]
}
```
Response
```json
{
"id": 4,
"jsonrpc": "2.0",
"result": "0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db"
}
```
### account_import
#### Import account
Import a private key into the keystore. The imported key is expected to be encrypted according to the web3 keystore
format.
#### Arguments
- account [object]: key in [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) (retrieved with account_export)
#### Result
- imported key [object]:
- key.address [address]: address of the imported key
- key.type [string]: type of the account
- key.url [string]: key URL
#### Sample call
```json
{
"id": 6,
"jsonrpc": "2.0",
"method": "account_import",
"params": [
{
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
}
]
}
```
Response
```json
{
"id": 6,
"jsonrpc": "2.0",
"result": {
"address": "0xc7412fc59930fd90099c917a50e5f11d0934b2f5",
"type": "account",
"url": "keystore:///tmp/keystore/UTC--2017-08-24T11-00-42.032024108Z--c7412fc59930fd90099c917a50e5f11d0934b2f5"
}
}
```
### account_export
#### Export account from keystore
Export a private key from the keystore. The exported private key is encrypted with the original passphrase. When the
key is imported later this passphrase is required.
#### Arguments
- account [address]: export private key that is associated with this account
#### Result
- exported key, see [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) for
more information
#### Sample call
```json
{
"id": 5,
"jsonrpc": "2.0",
"method": "account_export",
"params": [
"0xc7412fc59930fd90099c917a50e5f11d0934b2f5"
]
}
```
Response
```json
{
"id": 5,
"jsonrpc": "2.0",
"result": {
"address": "c7412fc59930fd90099c917a50e5f11d0934b2f5",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {
"iv": "401c39a7c7af0388491c3d3ecb39f532"
},
"ciphertext": "eb045260b18dd35cd0e6d99ead52f8fa1e63a6b0af2d52a8de198e59ad783204",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9a657e3618527c9b5580ded60c12092e5038922667b7b76b906496f021bb841a"
},
"mac": "880dc10bc06e9cec78eb9830aeb1e7a4a26b4c2c19615c94acb632992b952806"
},
"id": "09bccb61-b8d3-4e93-bf4f-205a8194f0b9",
"version": 3
}
}
```
## UI API
These methods need to be implemented by a UI listener.
By starting the signer with the switch `--stdio-ui-test`, the signer will invoke all known methods, and expect the UI to respond with
denials. This can be used during development to ensure that the API is (at least somewhat) correctly implemented.
See `pythonsigner`, which can be invoked via `python3 pythonsigner.py test` to perform the 'denial-handshake-test'.
All methods in this API use object-based parameters, so that there can be no mix-ups of parameters: each piece of data is accessed by key.
See the [ui api changelog](intapi_changelog.md) for information about changes to this API.
Note: a slight deviation from the `json` standard is in place: every request and response should be confined to a single line.
Whereas the `json` specification allows for linebreaks, linebreaks __should not__ be used in this communication channel, to make
things simpler for both parties.
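For example, the `ShowInfo` call that is shown pretty-printed later in this document would travel over the channel as a single line:
```json
{"jsonrpc": "2.0", "id": 9, "method": "ShowInfo", "params": [{"text": "Tests completed"}]}
```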
### ApproveTx
Invoked when there's a transaction for approval.
#### Sample call
Here's a method invocation:
```bash
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
```
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ApproveTx",
"params": [
{
"transaction": {
"from": "0x0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"to": "0x0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"gas": "0x333",
"gasPrice": "0x1",
"value": "0x0",
"nonce": "0x0",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"input": null
},
"call_info": [
{
"type": "WARNING",
"message": "Invalid checksum on to-address"
},
{
"type": "Info",
"message": "safeSend(address: 0x0000000000000000000000000000000000000012)"
}
],
"meta": {
"remote": "127.0.0.1:48486",
"local": "localhost:8550",
"scheme": "HTTP/1.1"
}
}
]
}
```
The same method invocation, but with invalid data:
```bash
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000002000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
```
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ApproveTx",
"params": [
{
"transaction": {
"from": "0x0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"to": "0x0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"gas": "0x333",
"gasPrice": "0x1",
"value": "0x0",
"nonce": "0x0",
"data": "0x4401a6e40000000000000002000000000000000000000000000000000000000000000012",
"input": null
},
"call_info": [
{
"type": "WARNING",
"message": "Invalid checksum on to-address"
},
{
"type": "WARNING",
"message": "Transaction data did not match ABI-interface: WARNING: Supplied data is stuffed with extra data. \nWant 0000000000000002000000000000000000000000000000000000000000000012\nHave 0000000000000000000000000000000000000000000000000000000000000012\nfor method safeSend(address)"
}
],
"meta": {
"remote": "127.0.0.1:48492",
"local": "localhost:8550",
"scheme": "HTTP/1.1"
}
}
]
}
```
One with a missing `to`-field, but no `data`:
```json
{
"jsonrpc": "2.0",
"id": 3,
"method": "ApproveTx",
"params": [
{
"transaction": {
"from": "",
"to": null,
"gas": "0x0",
"gasPrice": "0x0",
"value": "0x0",
"nonce": "0x0",
"data": null,
"input": null
},
"call_info": [
{
"type": "CRITICAL",
"message": "Tx will create contract with empty code!"
}
],
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveExport
Invoked when a request to export an account has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 7,
"method": "ApproveExport",
"params": [
{
"address": "0x0000000000000000000000000000000000000000",
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveListing
Invoked when a request for account listing has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 5,
"method": "ApproveListing",
"params": [
{
"accounts": [
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-20T14-44-54.089682944Z--123409812340981234098123409812deadbeef42",
"address": "0x123409812340981234098123409812deadbeef42"
},
{
"type": "Account",
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-23T21-59-03.199240693Z--cafebabedeadbeef34098123409812deadbeef42",
"address": "0xcafebabedeadbeef34098123409812deadbeef42"
}
],
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveSignData
Invoked when a request to sign data has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 4,
"method": "ApproveSignData",
"params": [
{
"address": "0x123409812340981234098123409812deadbeef42",
"raw_data": "0x01020304",
"message": "\u0019Ethereum Signed Message:\n4\u0001\u0002\u0003\u0004",
"hash": "0x7e3a4e7a9d1744bc5c675c25e1234ca8ed9162bd17f78b9085e48047c15ac310",
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ShowInfo
The UI should show the info to the user. Does not expect a response.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 9,
"method": "ShowInfo",
"params": [
{
"text": "Tests completed"
}
]
}
```
### ShowError
The UI should show the error to the user. Does not expect a response.
```json
{
"jsonrpc": "2.0",
"id": 2,
"method": "ShowError",
"params": [
{
"text": "Testing 'ShowError'"
}
]
}
```
### OnApproved
`OnApprovedTx` is called when a transaction has been approved and signed. The call contains the return value that will be sent to the external caller. The return value from this method is ignored - the reason for having this callback is to allow the ruleset to keep track of approved transactions.
When implementing rate-limited rules, this callback should be used.
TLDR; Use this method to keep track of signed transactions, instead of using the data in `ApproveTx`.
### OnSignerStartup
This method provides the UI with information about what API version the signer uses (both internal and external), as well as build info and external API endpoints,
in key/value form.
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "OnSignerStartup",
"params": [
{
"info": {
"extapi_http": "http://localhost:8550",
"extapi_ipc": null,
"extapi_version": "2.0.0",
"intapi_version": "1.2.0"
}
}
]
}
```
### Rules for UI apis
A UI should conform to the following rules.
* A UI MUST NOT load any external resources that were not embedded/part of the UI package.
* For example, not load icons, stylesheets from the internet
* Not load files from the filesystem, unless they reside in the same local directory (e.g. config files)
* A Graphical UI MUST show the blocky-identicon for ethereum addresses.
* A UI MUST display an appropriate warning if the destination account is formatted with an invalid checksum.
* A UI MUST NOT open any ports or services
* The signer opens the public port
* A UI SHOULD verify the permissions on the signer binary, and refuse to execute or warn if permissions allow non-user write.
* A UI SHOULD inform the user about the `SHA256` or `MD5` hash of the binary being executed
* A UI SHOULD NOT maintain a secondary storage of data, e.g. list of accounts
* The signer provides accounts
* A UI SHOULD, to the best extent possible, use static linking / bundling, so that required libraries are bundled
along with the UI.
### UI Implementations
There are a couple of implementations of a UI. We'll try to keep this list up to date.
| Name | Repo | UI type| No external resources| Blocky support| Verifies permissions | Hash information | No secondary storage | Statically linked| Can modify parameters|
| ---- | ---- | -------| ---- | ---- | ---- |---- | ---- | ---- | ---- |
| QtSigner| https://github.com/holiman/qtsigner/| Python3/QT-based| :+1:| :+1:| :+1:| :+1:| :+1:| :x: | :+1: (partially)|
| GtkSigner| https://github.com/holiman/gtksigner| Python3/GTK-based| :+1:| :x:| :x:| :+1:| :+1:| :x: | :x: |
| Frame | https://github.com/floating/frame/commits/go-signer| Electron-based| :x:| :x:| :x:| :x:| ?| :x: | :x: |
| Clef UI| https://github.com/kyokan/clef-ui| Golang/QT-based| :+1:| :+1:| :x:| :+1:| :+1:| :x: | :+1: (approve tx only)|

Binary file not shown (image, 12 KiB)

Binary file not shown (image, 17 KiB)

Binary file not shown (image, 16 KiB)

View File

@ -0,0 +1,23 @@
"""
This implements a dispatcher which listens to localhost:8550, and proxies
requests via qrexec to the service qubes.EthSign on a target domain
"""
import http.server
import socketserver,subprocess
PORT=8550
TARGET_DOMAIN= 'debian-work'
class Dispatcher(http.server.BaseHTTPRequestHandler):
def do_POST(self):
post_data = self.rfile.read(int(self.headers['Content-Length']))
p = subprocess.Popen(['/usr/bin/qrexec-client-vm',TARGET_DOMAIN,'qubes.Clefsign'],stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output = p.communicate(post_data)[0]
self.wfile.write(output)
with socketserver.TCPServer(("",PORT), Dispatcher) as httpd:
print("Serving at port", PORT)
httpd.serve_forever()

View File

@ -0,0 +1,16 @@
#!/bin/bash
SIGNER_BIN="/home/user/tools/clef/clef"
SIGNER_CMD="/home/user/tools/gtksigner/gtkui.py -s $SIGNER_BIN"
# Start clef if not already started
if [ ! -S /home/user/.clef/clef.ipc ]; then
$SIGNER_CMD &
sleep 1
fi
# Should be started by now
if [ -S /home/user/.clef/clef.ipc ]; then
# Post incoming request to HTTP channel
curl -H "Content-Type: application/json" -X POST -d @- http://localhost:8550 2>/dev/null
fi

Binary file not shown (image, 22 KiB)

Binary file not shown (image, 36 KiB)

198 cmd/clef/docs/setup.md Normal file
View File

@ -0,0 +1,198 @@
# Setting up Clef
This document describes how Clef can be used in a more secure manner than executing it from your everyday laptop,
in order to ensure that the keys remain safe in the event that your computer should get compromised.
## Qubes OS
### Background
The Qubes operating system is based around virtual machines (qubes), where a set of virtual machines are configured, typically for
different purposes such as:
- personal
- Your personal email, browsing etc
- work
- Work email etc
- vault
- a VM without network access, where gpg-keys and/or keepass credentials are stored.
A couple of dedicated virtual machines handle externalities:
- sys-net provides networking to all other (network-enabled) machines
- sys-firewall handles firewall rules
- sys-usb handles USB devices, and can map usb-devices to certain qubes.
The goal of this document is to describe how we can set up Clef to provide secure transaction
signing from a `vault` VM, to another networked qube which runs DApps.
### Setup
There are two ways that this can be achieved: integrated via Qubes or integrated via networking.
#### 1. Qubes Integrated
Qubes provides a facility for inter-qube communication via `qrexec`. A qube can make a cross-qube RPC request
to another qube. The OS then asks the user if the call is permitted.
![Example](qubes/qrexec-example.png)
A policy-file can be created to allow such interaction. On the `target` domain, a service is invoked which can read the
`stdin` from the `client` qube.
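As a sketch, such a policy file lives in dom0 and follows the usual Qubes RPC policy format (the qube names here are placeholders):
```bash
# /etc/qubes-rpc/policy/qubes.Clefsign (in dom0)
# <source-qube>  <target-qube>  <action>
work  debian-work  ask
```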
This is how [Split GPG](https://www.qubes-os.org/doc/split-gpg/) is implemented. We can set up Clef the same way:
##### Server
![Clef via qrexec](qubes/clef_qubes_qrexec.png)
On the `target` qubes, we need to define the rpc service.
[qubes.Clefsign](qubes/qubes.Clefsign):
```bash
#!/bin/bash
SIGNER_BIN="/home/user/tools/clef/clef"
SIGNER_CMD="/home/user/tools/gtksigner/gtkui.py -s $SIGNER_BIN"
# Start clef if not already started
if [ ! -S /home/user/.clef/clef.ipc ]; then
$SIGNER_CMD &
sleep 1
fi
# Should be started by now
if [ -S /home/user/.clef/clef.ipc ]; then
# Post incoming request to HTTP channel
curl -H "Content-Type: application/json" -X POST -d @- http://localhost:8550 2>/dev/null
fi
```
This RPC service is not complete (see notes about HTTP headers below), but works as a proof-of-concept.
It will forward the data received on `stdin` (forwarded by the OS) to Clef's HTTP channel.
It would have been possible to send data directly to the `/home/user/.clef/clef.ipc`
socket via e.g. `nc -U /home/user/.clef/clef.ipc`, but the reason for sending the request
data over `HTTP` instead of `IPC` is that we want the ability to forward `HTTP` headers.
To enable the service:
``` bash
sudo cp qubes.Clefsign /etc/qubes-rpc/
sudo chmod +x /etc/qubes-rpc/qubes.Clefsign
```
This setup uses [gtksigner](https://github.com/holiman/gtksigner), which is a very minimal GTK-based UI that works well
with minimal requirements.
##### Client
On the `client` qube, we need to create a listener which will receive the request from the Dapp, and proxy it.
[qubes-client.py](qubes/client/qubes-client.py):
```python
"""
This implements a dispatcher which listens to localhost:8550, and proxies
requests via qrexec to the service qubes.EthSign on a target domain
"""
import http.server
import socketserver,subprocess
PORT=8550
TARGET_DOMAIN= 'debian-work'
class Dispatcher(http.server.BaseHTTPRequestHandler):
def do_POST(self):
post_data = self.rfile.read(int(self.headers['Content-Length']))
p = subprocess.Popen(['/usr/bin/qrexec-client-vm',TARGET_DOMAIN,'qubes.Clefsign'],stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output = p.communicate(post_data)[0]
self.wfile.write(output)
with socketserver.TCPServer(("",PORT), Dispatcher) as httpd:
print("Serving at port", PORT)
httpd.serve_forever()
```
#### Testing
To test the flow, if we have set up `debian-work` as the `target`, we can do
```bash
$ cat newaccnt.json
{ "id": 0, "jsonrpc": "2.0","method": "account_new","params": []}
$ cat newaccnt.json | qrexec-client-vm debian-work qubes.Clefsign
```
This should pop up first a dialog to allow the IPC call:
![one](qubes/qubes_newaccount-1.png)
Followed by a GTK-dialog to approve the operation
![two](qubes/qubes_newaccount-2.png)
To test the full flow, we use the client wrapper. Start it on the `client` qube:
```
[user@work qubes]$ python3 qubes-client.py
```
Make the request over http (`client` qube):
```
[user@work clef]$ cat newaccnt.json | curl -X POST -d @- http://localhost:8550
```
And it should show the same popups again.
##### Pros and cons
The benefits of this setup are:
- This is the qubes-os intended model for inter-qube communication, and thus benefits from qubes-os dialogs and policies for user approval
However, it comes with a couple of drawbacks:
- The client-side proxy (`qubes-client.py`) must forward the http request via RPC to the `target` qube. When doing so, the proxy
will either drop important headers, or replace them.
- The `Host` header is most likely `localhost`
- The `Origin` header must be forwarded
- Information about the remote ip must be added as an `X-Forwarded-For` header. However, Clef cannot always trust an `XFF` header,
since malicious clients may lie about `XFF` in order to fool the http server into believing the request comes from another address.
- Even with a policy in place to allow rpc-calls between `caller` and `target`, there will be several popups:
- One qubes-specific where the user specifies the `target` vm
- One clef-specific to approve the transaction
#### 2. Network integrated
The second way to set up Clef on a qubes system is to allow networking, and have Clef listen to a port which is accessible
from other qubes.
![Clef via http](qubes/clef_qubes_http.png)
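A sketch of such a setup, using the flags Clef registers in `cmd/clef/main.go` below (the 10.137.x.x addresses are placeholders for your inter-qube network):
```bash
# On the vault qube: have Clef listen on the inter-qube network interface.
clef --rpc --rpcaddr 10.137.0.10 --rpcport 8550

# On a client qube: point the DApp (or geth) at the vault qube.
curl -H "Content-Type: application/json" -X POST \
  --data '{"id": 0, "jsonrpc": "2.0", "method": "account_new", "params": []}' \
  http://10.137.0.10:8550
```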
## USBArmory
The [USB armory](https://inversepath.com/usbarmory) is an open source hardware design with an 800 MHz ARM processor. It is a pocket-size
computer. When inserted into a laptop, it identifies itself as a USB network interface, basically adding another network
to your computer. Over this new network interface, you can SSH into the device.
Running Clef off a USB armory means that you can use the armory as a very versatile offline computer, which only
ever connects to a local network between your computer and the device itself.
Needless to say, while this model should be fairly secure against remote attacks, an attacker with physical access
to the USB Armory would trivially be able to extract the contents of the device filesystem.

View File

@ -0,0 +1,32 @@
### Changelog for external API
#### 4.0.0
* The external `account_Ecrecover`-method was removed.
* The external `account_Import`-method was removed.
#### 3.0.0
* The external `account_List`-method was changed to not expose `url`, which contained info about the local filesystem. It now returns only a list of addresses.
#### 2.0.0
* Commit `73abaf04b1372fa4c43201fb1b8019fe6b0a6f8d`, move `from` into `transaction` object in `signTransaction`. This
makes `account_signTransaction` identical to the old `eth_signTransaction`.
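In other words, since 2.0.0 the `from` field travels inside the transaction object rather than as a separate parameter (a sketch, mirroring the `account_signTransaction` samples in the README):
```json
{"jsonrpc": "2.0", "id": 1, "method": "account_signTransaction",
 "params": [{"from": "0x82A2A876D39022B3019932D30Cd9c97ad5616813", "to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "gas": "0x333", "gasPrice": "0x123", "value": "0x10", "nonce": "0x0"}]}
```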
#### 1.0.0
Initial release.
### Versioning
The API uses [semantic versioning](https://semver.org/).
TLDR; Given a version number MAJOR.MINOR.PATCH, increment the:
* MAJOR version when you make incompatible API changes,
* MINOR version when you add functionality in a backwards-compatible manner, and
* PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

View File

@ -0,0 +1,105 @@
### Changelog for internal API (ui-api)
#### 3.0.0
* Make use of `OnInputRequired(info UserInputRequest)` for obtaining master password during startup
#### 2.1.0
* Add `OnInputRequired(info UserInputRequest)` to internal API. This method is used when Clef needs user input, e.g. passwords.
The following structures are used:
```golang
UserInputRequest struct {
Prompt string `json:"prompt"`
Title string `json:"title"`
IsPassword bool `json:"isPassword"`
}
UserInputResponse struct {
Text string `json:"text"`
}
```
#### 2.0.0
* Modify how `call_info` on a transaction is conveyed. New format:
```
{
"jsonrpc": "2.0",
"id": 2,
"method": "ApproveTx",
"params": [
{
"transaction": {
"from": "0x82A2A876D39022B3019932D30Cd9c97ad5616813",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"gas": "0x333",
"gasPrice": "0x123",
"value": "0x10",
"nonce": "0x0",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"input": null
},
"call_info": [
{
"type": "WARNING",
"message": "Invalid checksum on to-address"
},
{
"type": "WARNING",
"message": "Tx contains data, but provided ABI signature could not be matched: Did not match: test (0 matches)"
}
],
"meta": {
"remote": "127.0.0.1:54286",
"local": "localhost:8550",
"scheme": "HTTP/1.1"
}
}
]
}
```
#### 1.2.0
* Add `OnSignerStartup` method, to provide the UI with information about what API version
the signer uses (both internal and external) as well as build info and external API endpoints.
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "OnSignerStartup",
"params": [
{
"info": {
"extapi_http": "http://localhost:8550",
"extapi_ipc": null,
"extapi_version": "2.0.0",
"intapi_version": "1.2.0"
}
}
]
}
```
#### 1.1.0
* Add `OnApproved` method
#### 1.0.0
Initial release.
### Versioning
The API uses [semantic versioning](https://semver.org/).
TLDR; Given a version number MAJOR.MINOR.PATCH, increment the:
* MAJOR version when you make incompatible API changes,
* MINOR version when you add functionality in a backwards-compatible manner, and
* PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

735 cmd/clef/main.go Normal file
View File

@ -0,0 +1,735 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// signer is a utility that can be used to sign transactions and
// arbitrary data.
package main
import (
"bufio"
"context"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"os/signal"
"os/user"
"path/filepath"
"runtime"
"strings"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/signer/core"
"github.com/ethereum/go-ethereum/signer/rules"
"github.com/ethereum/go-ethereum/signer/storage"
"gopkg.in/urfave/cli.v1"
)
// ExternalAPIVersion -- see extapi_changelog.md
const ExternalAPIVersion = "4.0.0"
// InternalAPIVersion -- see intapi_changelog.md
const InternalAPIVersion = "3.0.0"
const legalWarning = `
WARNING!
Clef is alpha software, and not yet publicly released. This software has _not_ been audited, and there
are no guarantees about the workings of this software. It may contain severe flaws. You should not use this software
unless you agree to take full responsibility for doing so, and know what you are doing.
TLDR; THIS IS NOT PRODUCTION-READY SOFTWARE!
`
var (
logLevelFlag = cli.IntFlag{
Name: "loglevel",
Value: 4,
Usage: "log level to emit to the screen",
}
advancedMode = cli.BoolFlag{
Name: "advanced",
Usage: "If enabled, issues warnings instead of rejections for suspicious requests. Default off",
}
keystoreFlag = cli.StringFlag{
Name: "keystore",
Value: filepath.Join(node.DefaultDataDir(), "keystore"),
Usage: "Directory for the keystore",
}
configdirFlag = cli.StringFlag{
Name: "configdir",
Value: DefaultConfigDir(),
Usage: "Directory for Clef configuration",
}
rpcPortFlag = cli.IntFlag{
Name: "rpcport",
Usage: "HTTP-RPC server listening port",
Value: node.DefaultHTTPPort + 5,
}
signerSecretFlag = cli.StringFlag{
Name: "signersecret",
Usage: "A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash",
}
dBFlag = cli.StringFlag{
Name: "4bytedb",
Usage: "File containing 4byte-identifiers",
Value: "./4byte.json",
}
customDBFlag = cli.StringFlag{
Name: "4bytedb-custom",
Usage: "File used for writing new 4byte-identifiers submitted via API",
Value: "./4byte-custom.json",
}
auditLogFlag = cli.StringFlag{
Name: "auditlog",
Usage: "File used to emit audit logs. Set to \"\" to disable",
Value: "audit.log",
}
ruleFlag = cli.StringFlag{
Name: "rules",
Usage: "Enable rule-engine",
Value: "rules.json",
}
stdiouiFlag = cli.BoolFlag{
Name: "stdio-ui",
Usage: "Use STDIN/STDOUT as a channel for an external UI. " +
"This means that an STDIN/STDOUT is used for RPC-communication with a e.g. a graphical user " +
"interface, and can be used when Clef is started by an external process.",
}
testFlag = cli.BoolFlag{
Name: "stdio-ui-test",
Usage: "Mechanism to test interface between Clef and UI. Requires 'stdio-ui'.",
}
app = cli.NewApp()
initCommand = cli.Command{
Action: utils.MigrateFlags(initializeSecrets),
Name: "init",
Usage: "Initialize the signer, generate secret storage",
ArgsUsage: "",
Flags: []cli.Flag{
logLevelFlag,
configdirFlag,
},
Description: `
The init command generates a master seed which Clef can use to store credentials and data needed for
the rule-engine to work.`,
}
attestCommand = cli.Command{
Action: utils.MigrateFlags(attestFile),
Name: "attest",
Usage: "Attest that a js-file is to be used",
ArgsUsage: "<sha256sum>",
Flags: []cli.Flag{
logLevelFlag,
configdirFlag,
signerSecretFlag,
},
Description: `
The attest command stores the sha256 of the rule.js-file that you want to use for automatic processing of
incoming requests.
Whenever you make an edit to the rule file, you need to use attestation to tell
Clef that the file is 'safe' to execute.`,
}
setCredentialCommand = cli.Command{
Action: utils.MigrateFlags(setCredential),
Name: "setpw",
Usage: "Store a credential for a keystore file",
ArgsUsage: "<address>",
Flags: []cli.Flag{
logLevelFlag,
configdirFlag,
signerSecretFlag,
},
Description: `
The setpw command stores a password for a given address (keyfile). If you enter a blank passphrase, it will
remove any stored credential for that address (keyfile)
`,
}
)
func init() {
app.Name = "Clef"
app.Usage = "Manage Ethereum account operations"
app.Flags = []cli.Flag{
logLevelFlag,
keystoreFlag,
configdirFlag,
utils.NetworkIdFlag,
utils.LightKDFFlag,
utils.NoUSBFlag,
utils.RPCListenAddrFlag,
utils.RPCVirtualHostsFlag,
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.RPCEnabledFlag,
rpcPortFlag,
signerSecretFlag,
dBFlag,
customDBFlag,
auditLogFlag,
ruleFlag,
stdiouiFlag,
testFlag,
advancedMode,
}
app.Action = signer
app.Commands = []cli.Command{initCommand, attestCommand, setCredentialCommand}
}
func main() {
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
func initializeSecrets(c *cli.Context) error {
if err := initialize(c); err != nil {
return err
}
configDir := c.GlobalString(configdirFlag.Name)
masterSeed := make([]byte, 256)
num, err := io.ReadFull(rand.Reader, masterSeed)
if err != nil {
return err
}
if num != len(masterSeed) {
return fmt.Errorf("failed to read enough random")
}
n, p := keystore.StandardScryptN, keystore.StandardScryptP
if c.GlobalBool(utils.LightKDFFlag.Name) {
n, p = keystore.LightScryptN, keystore.LightScryptP
}
text := "The master seed of clef is locked with a password. Please give a password. Do not forget this password."
var password string
for {
password = getPassPhrase(text, true)
if err := core.ValidatePasswordFormat(password); err != nil {
fmt.Printf("invalid password: %v\n", err)
} else {
break
}
}
cipherSeed, err := encryptSeed(masterSeed, []byte(password), n, p)
if err != nil {
return fmt.Errorf("failed to encrypt master seed: %v", err)
}
err = os.Mkdir(configDir, 0700)
if err != nil && !os.IsExist(err) {
return err
}
location := filepath.Join(configDir, "masterseed.json")
if _, err := os.Stat(location); err == nil {
return fmt.Errorf("file %v already exists, will not overwrite", location)
}
err = ioutil.WriteFile(location, cipherSeed, 0400)
if err != nil {
return err
}
fmt.Printf("A master seed has been generated into %s\n", location)
fmt.Printf(`
This is required to be able to store credentials, such as :
* Passwords for keystores (used by rule engine)
* Storage for javascript rules
* Hash of rule-file
You should treat that file with utmost secrecy, and make a backup of it.
NOTE: This file does not contain your accounts. Those need to be backed up separately!
`)
return nil
}
func attestFile(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires an argument.")
}
if err := initialize(ctx); err != nil {
return err
}
stretchedKey, err := readMasterKey(ctx, nil)
if err != nil {
utils.Fatalf(err.Error())
}
configDir := ctx.GlobalString(configdirFlag.Name)
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
confKey := crypto.Keccak256([]byte("config"), stretchedKey)
// Initialize the encrypted storages
configStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "config.json"), confKey)
val := ctx.Args().First()
configStorage.Put("ruleset_sha256", val)
log.Info("Ruleset attestation updated", "sha256", val)
return nil
}
func setCredential(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires an address to be passed as an argument.")
}
if err := initialize(ctx); err != nil {
return err
}
address := ctx.Args().First()
password := getPassPhrase("Enter a passphrase to store with this address.", true)
stretchedKey, err := readMasterKey(ctx, nil)
if err != nil {
utils.Fatalf(err.Error())
}
configDir := ctx.GlobalString(configdirFlag.Name)
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
pwkey := crypto.Keccak256([]byte("credentials"), stretchedKey)
// Initialize the encrypted storages
pwStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "credentials.json"), pwkey)
pwStorage.Put(address, password)
log.Info("Credential store updated", "key", address)
return nil
}
func initialize(c *cli.Context) error {
// Set up the logger to print everything
logOutput := os.Stdout
if c.GlobalBool(stdiouiFlag.Name) {
logOutput = os.Stderr
// If using the stdioui, we can't do the 'confirm'-flow
fmt.Fprint(logOutput, legalWarning)
} else {
if !confirm(legalWarning) {
return fmt.Errorf("aborted by user")
}
}
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(c.Int(logLevelFlag.Name)), log.StreamHandler(logOutput, log.TerminalFormat(true))))
return nil
}
func signer(c *cli.Context) error {
if err := initialize(c); err != nil {
return err
}
var (
ui core.SignerUI
)
if c.GlobalBool(stdiouiFlag.Name) {
log.Info("Using stdin/stdout as UI-channel")
ui = core.NewStdIOUI()
} else {
log.Info("Using CLI as UI-channel")
ui = core.NewCommandlineUI()
}
fourByteDb := c.GlobalString(dBFlag.Name)
fourByteLocal := c.GlobalString(customDBFlag.Name)
db, err := core.NewAbiDBFromFiles(fourByteDb, fourByteLocal)
if err != nil {
utils.Fatalf(err.Error())
}
log.Info("Loaded 4byte db", "signatures", db.Size(), "file", fourByteDb, "local", fourByteLocal)
var (
api core.ExternalAPI
)
configDir := c.GlobalString(configdirFlag.Name)
if stretchedKey, err := readMasterKey(c, ui); err != nil {
log.Info("No master seed provided, rules disabled", "error", err)
} else {
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
// Generate domain specific keys
pwkey := crypto.Keccak256([]byte("credentials"), stretchedKey)
jskey := crypto.Keccak256([]byte("jsstorage"), stretchedKey)
confkey := crypto.Keccak256([]byte("config"), stretchedKey)
// Initialize the encrypted storages
pwStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "credentials.json"), pwkey)
jsStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "jsstorage.json"), jskey)
configStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "config.json"), confkey)
//Do we have a rule-file?
ruleJS, err := ioutil.ReadFile(c.GlobalString(ruleFlag.Name))
if err != nil {
log.Info("Could not load rulefile, rules not enabled", "file", "rulefile")
} else {
hasher := sha256.New()
hasher.Write(ruleJS)
shasum := hasher.Sum(nil)
storedShasum := configStorage.Get("ruleset_sha256")
if storedShasum != hex.EncodeToString(shasum) {
log.Info("Could not validate ruleset hash, rules not enabled", "got", hex.EncodeToString(shasum), "expected", storedShasum)
} else {
// Initialize rules
ruleEngine, err := rules.NewRuleEvaluator(ui, jsStorage, pwStorage)
if err != nil {
utils.Fatalf(err.Error())
}
ruleEngine.Init(string(ruleJS))
ui = ruleEngine
log.Info("Rule engine configured", "file", c.String(ruleFlag.Name))
}
}
}
apiImpl := core.NewSignerAPI(
c.GlobalInt64(utils.NetworkIdFlag.Name),
c.GlobalString(keystoreFlag.Name),
c.GlobalBool(utils.NoUSBFlag.Name),
ui, db,
c.GlobalBool(utils.LightKDFFlag.Name),
c.GlobalBool(advancedMode.Name))
api = apiImpl
// Audit logging
if logfile := c.GlobalString(auditLogFlag.Name); logfile != "" {
api, err = core.NewAuditLogger(logfile, api)
if err != nil {
utils.Fatalf(err.Error())
}
log.Info("Audit logs configured", "file", logfile)
}
// register signer API with server
var (
extapiURL = "n/a"
ipcapiURL = "n/a"
)
rpcAPI := []rpc.API{
{
Namespace: "account",
Public: true,
Service: api,
Version: "1.0"},
}
if c.GlobalBool(utils.RPCEnabledFlag.Name) {
vhosts := splitAndTrim(c.GlobalString(utils.RPCVirtualHostsFlag.Name))
cors := splitAndTrim(c.GlobalString(utils.RPCCORSDomainFlag.Name))
// start http server
httpEndpoint := fmt.Sprintf("%s:%d", c.GlobalString(utils.RPCListenAddrFlag.Name), c.Int(rpcPortFlag.Name))
listener, _, err := rpc.StartHTTPEndpoint(httpEndpoint, rpcAPI, []string{"account"}, cors, vhosts, rpc.DefaultHTTPTimeouts)
if err != nil {
utils.Fatalf("Could not start RPC api: %v", err)
}
extapiURL = fmt.Sprintf("http://%s", httpEndpoint)
log.Info("HTTP endpoint opened", "url", extapiURL)
defer func() {
listener.Close()
log.Info("HTTP endpoint closed", "url", httpEndpoint)
}()
}
if !c.GlobalBool(utils.IPCDisabledFlag.Name) {
if c.IsSet(utils.IPCPathFlag.Name) {
ipcapiURL = c.GlobalString(utils.IPCPathFlag.Name)
} else {
ipcapiURL = filepath.Join(configDir, "clef.ipc")
}
listener, _, err := rpc.StartIPCEndpoint(ipcapiURL, rpcAPI)
if err != nil {
utils.Fatalf("Could not start IPC api: %v", err)
}
log.Info("IPC endpoint opened", "url", ipcapiURL)
defer func() {
listener.Close()
log.Info("IPC endpoint closed", "url", ipcapiURL)
}()
}
if c.GlobalBool(testFlag.Name) {
log.Info("Performing UI test")
go testExternalUI(apiImpl)
}
ui.OnSignerStartup(core.StartupInfo{
Info: map[string]interface{}{
"extapi_version": ExternalAPIVersion,
"intapi_version": InternalAPIVersion,
"extapi_http": extapiURL,
"extapi_ipc": ipcapiURL,
},
})
abortChan := make(chan os.Signal, 1) // buffered so signal.Notify cannot drop the signal
signal.Notify(abortChan, os.Interrupt)
sig := <-abortChan
log.Info("Exiting...", "signal", sig)
return nil
}
// splitAndTrim splits input separated by a comma
// and trims excessive white space from the substrings.
func splitAndTrim(input string) []string {
result := strings.Split(input, ",")
for i, r := range result {
result[i] = strings.TrimSpace(r)
}
return result
}
// DefaultConfigDir is the default config directory to use for the vaults and other
// persistence requirements.
func DefaultConfigDir() string {
// Try to place the data folder in the user's home dir
home := homeDir()
if home != "" {
if runtime.GOOS == "darwin" {
return filepath.Join(home, "Library", "Signer")
} else if runtime.GOOS == "windows" {
return filepath.Join(home, "AppData", "Roaming", "Signer")
} else {
return filepath.Join(home, ".clef")
}
}
// As we cannot guess a stable location, return empty and handle later
return ""
}
func homeDir() string {
if home := os.Getenv("HOME"); home != "" {
return home
}
if usr, err := user.Current(); err == nil {
return usr.HomeDir
}
return ""
}
func readMasterKey(ctx *cli.Context, ui core.SignerUI) ([]byte, error) {
var (
file string
configDir = ctx.GlobalString(configdirFlag.Name)
)
if ctx.GlobalIsSet(signerSecretFlag.Name) {
file = ctx.GlobalString(signerSecretFlag.Name)
} else {
file = filepath.Join(configDir, "masterseed.json")
}
if err := checkFile(file); err != nil {
return nil, err
}
cipherKey, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
var password string
// If ui is not nil, get the password from ui.
if ui != nil {
resp, err := ui.OnInputRequired(core.UserInputRequest{
Title: "Master Password",
Prompt: "Please enter the password to decrypt the master seed",
IsPassword: true})
if err != nil {
return nil, err
}
password = resp.Text
} else {
password = getPassPhrase("Decrypt master seed of clef", false)
}
masterSeed, err := decryptSeed(cipherKey, password)
if err != nil {
return nil, fmt.Errorf("failed to decrypt the master seed of clef")
}
if len(masterSeed) < 256 {
return nil, fmt.Errorf("master seed of insufficient length, expected >255 bytes, got %d", len(masterSeed))
}
// Create vault location
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), masterSeed)[:10]))
err = os.Mkdir(vaultLocation, 0700)
if err != nil && !os.IsExist(err) {
return nil, err
}
return masterSeed, nil
}
// checkFile is a convenience function to check if a file
// * exists
// * is mode 0400
func checkFile(filename string) error {
info, err := os.Stat(filename)
if err != nil {
return fmt.Errorf("failed stat on %s: %v", filename, err)
}
// Check the unix permission bits
if info.Mode().Perm()&0377 != 0 {
return fmt.Errorf("file (%v) has insecure file permissions (%v)", filename, info.Mode().String())
}
return nil
}
// confirm displays a text and asks for user confirmation
func confirm(text string) bool {
fmt.Print(text)
fmt.Printf("\nEnter 'ok' to proceed:\n>")
text, err := bufio.NewReader(os.Stdin).ReadString('\n')
if err != nil {
log.Crit("Failed to read user input", "err", err)
}
if text := strings.TrimSpace(text); text == "ok" {
return true
}
return false
}
func testExternalUI(api *core.SignerAPI) {
ctx := context.WithValue(context.Background(), "remote", "clef binary")
ctx = context.WithValue(ctx, "scheme", "in-proc")
ctx = context.WithValue(ctx, "local", "main")
errs := make([]string, 0)
api.UI.ShowInfo("Testing 'ShowInfo'")
api.UI.ShowError("Testing 'ShowError'")
checkErr := func(method string, err error) {
if err != nil && err != core.ErrRequestDenied {
errs = append(errs, fmt.Sprintf("%v: %v", method, err.Error()))
}
}
var err error
_, err = api.SignTransaction(ctx, core.SendTxArgs{From: common.MixedcaseAddress{}}, nil)
checkErr("SignTransaction", err)
_, err = api.Sign(ctx, common.MixedcaseAddress{}, common.Hex2Bytes("01020304"))
checkErr("Sign", err)
_, err = api.List(ctx)
checkErr("List", err)
_, err = api.New(ctx)
checkErr("New", err)
_, err = api.Export(ctx, common.Address{})
checkErr("Export", err)
_, err = api.Import(ctx, json.RawMessage{})
checkErr("Import", err)
api.UI.ShowInfo("Tests completed")
if len(errs) > 0 {
log.Error("Got errors")
for _, e := range errs {
log.Error(e)
}
} else {
log.Info("No errors")
}
}

// getPassPhrase retrieves the password associated with clef, either fetched
// from a list of preloaded passphrases, or requested interactively from the user.
// TODO: there are many `getPassPhrase` functions, it would be better to abstract them into one.
func getPassPhrase(prompt string, confirmation bool) string {
	fmt.Println(prompt)
	password, err := console.Stdin.PromptPassword("Passphrase: ")
	if err != nil {
		utils.Fatalf("Failed to read passphrase: %v", err)
	}
	if confirmation {
		confirm, err := console.Stdin.PromptPassword("Repeat passphrase: ")
		if err != nil {
			utils.Fatalf("Failed to read passphrase confirmation: %v", err)
		}
		if password != confirm {
			utils.Fatalf("Passphrases do not match")
		}
	}
	return password
}

type encryptedSeedStorage struct {
	Description string              `json:"description"`
	Version     int                 `json:"version"`
	Params      keystore.CryptoJSON `json:"params"`
}

// encryptSeed uses a similar scheme as the keystore uses, but with a different wrapping,
// to encrypt the master seed
func encryptSeed(seed []byte, auth []byte, scryptN, scryptP int) ([]byte, error) {
	cryptoStruct, err := keystore.EncryptDataV3(seed, auth, scryptN, scryptP)
	if err != nil {
		return nil, err
	}
	return json.Marshal(&encryptedSeedStorage{"Clef seed", 1, cryptoStruct})
}

// decryptSeed decrypts the master seed
func decryptSeed(keyjson []byte, auth string) ([]byte, error) {
	var encSeed encryptedSeedStorage
	if err := json.Unmarshal(keyjson, &encSeed); err != nil {
		return nil, err
	}
	if encSeed.Version != 1 {
		log.Warn(fmt.Sprintf("unsupported encryption format of seed: %d, operation will likely fail", encSeed.Version))
	}
	seed, err := keystore.DecryptDataV3(encSeed.Params, auth)
	if err != nil {
		return nil, err
	}
	return seed, err
}
/**
//Create Account
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_new","params":["test"],"id":67}' localhost:8550
// List accounts
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_list","params":[""],"id":67}' http://localhost:8550/
// Make Transaction
// safeSend(0x12)
// 4401a6e40000000000000000000000000000000000000000000000000000000000000012
// supplied abi
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x82A2A876D39022B3019932D30Cd9c97ad5616813","gas":"0x333","gasPrice":"0x123","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x10", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"test"],"id":67}' http://localhost:8550/
// Not supplied
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x82A2A876D39022B3019932D30Cd9c97ad5616813","gas":"0x333","gasPrice":"0x123","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x10", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"}],"id":67}' http://localhost:8550/
// Sign data
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_sign","params":["0x694267f14675d7e1b9494fd8d72fefe1755710fa","bazonk gaz baz"],"id":67}' http://localhost:8550/
**/

cmd/clef/pythonsigner.py Normal file

@@ -0,0 +1,179 @@
import sys, subprocess

from tinyrpc.transports import ServerTransport
from tinyrpc.protocols.jsonrpc import JSONRPCProtocol
from tinyrpc.dispatch import public, RPCDispatcher
from tinyrpc.server import RPCServer

""" This is a POC example of how to write a custom UI for Clef. The UI starts the
clef process with the '--stdio-ui' option, and communicates with clef using standard input / output.
Standard input / output is a relatively secure way to communicate, as it does not require opening any ports
or IPC files. Needless to say, it does not protect against memory inspection mechanisms where an attacker
can access process memory."""

try:
    import urllib.parse as urlparse
except ImportError:
    import urllib as urlparse


class StdIOTransport(ServerTransport):
    """ Uses standard input/output for RPC """
    def receive_message(self):
        return None, urlparse.unquote(sys.stdin.readline())

    def send_reply(self, context, reply):
        print(reply)


class PipeTransport(ServerTransport):
    """ Uses a pipe for RPC """
    def __init__(self, input, output):
        self.input = input
        self.output = output

    def receive_message(self):
        data = self.input.readline()
        print(">> {}".format(data))
        return None, urlparse.unquote(data)

    def send_reply(self, context, reply):
        print("<< {}".format(reply))
        self.output.write(reply)
        self.output.write("\n")
class StdIOHandler():
    def __init__(self):
        pass

    @public
    def ApproveTx(self, req):
        """
        Example request:
        {
            "jsonrpc": "2.0",
            "method": "ApproveTx",
            "params": [{
                "transaction": {
                    "to": "0xae967917c465db8578ca9024c205720b1a3651A9",
                    "gas": "0x333",
                    "gasPrice": "0x123",
                    "value": "0x10",
                    "data": "0xd7a5865800000000000000000000000000000000000000000000000000000000000000ff",
                    "nonce": "0x0"
                },
                "from": "0xAe967917c465db8578ca9024c205720b1a3651A9",
                "call_info": "Warning! Could not validate ABI-data against calldata\nSupplied ABI spec does not contain method signature in data: 0xd7a58658",
                "meta": {
                    "remote": "127.0.0.1:34572",
                    "local": "localhost:8550",
                    "scheme": "HTTP/1.1"
                }
            }],
            "id": 1
        }

        :param transaction: transaction info
        :param call_info: info about the call, e.g. whether the ABI data could be validated against the calldata
        :param meta: metadata about the request, e.g. where the call comes from
        :return:
        """
        transaction = req.get('transaction')
        _from = req.get('from')
        call_info = req.get('call_info')
        meta = req.get('meta')

        return {
            "approved": False,
            # "transaction": transaction,
            # "from": _from,
            # "password": None,
        }
    @public
    def ApproveSignData(self, req):
        """ Example request
        """
        return {"approved": False, "password": None}

    @public
    def ApproveExport(self, req):
        """ Example request
        """
        return {"approved": False}

    @public
    def ApproveImport(self, req):
        """ Example request
        """
        return {"approved": False, "old_password": "", "new_password": ""}

    @public
    def ApproveListing(self, req):
        """ Example request
        """
        return {'accounts': []}

    @public
    def ApproveNewAccount(self, req):
        """
        Example request
        :return:
        """
        return {"approved": False,
                # "password": ""
                }

    @public
    def ShowError(self, message={}):
        """
        Example request:
        {"jsonrpc":"2.0","method":"ShowError","params":{"message":"Testing 'ShowError'"},"id":1}

        :param message: to show
        :return: nothing
        """
        if 'text' in message.keys():
            sys.stderr.write("Error: {}\n".format(message['text']))
        return

    @public
    def ShowInfo(self, message={}):
        """
        Example request:
        {"jsonrpc":"2.0","method":"ShowInfo","params":{"message":"Testing 'ShowInfo'"},"id":0}

        :param message: to display
        :return: nothing
        """
        if 'text' in message.keys():
            sys.stdout.write("Info: {}\n".format(message['text']))
        return
def main(args):
    cmd = ["./clef", "--stdio-ui"]
    if len(args) > 0 and args[0] == "test":
        cmd.extend(["--stdio-ui-test"])
    print("cmd: {}".format(" ".join(cmd)))

    dispatcher = RPCDispatcher()
    dispatcher.register_instance(StdIOHandler(), '')
    # line buffered
    p = subprocess.Popen(cmd, bufsize=1, universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    rpc_server = RPCServer(
        PipeTransport(p.stdout, p.stdin),
        JSONRPCProtocol(),
        dispatcher
    )
    rpc_server.serve_forever()


if __name__ == '__main__':
    main(sys.argv[1:])

cmd/clef/rules.md Normal file

@@ -0,0 +1,236 @@
# Rules

The `signer` binary contains a ruleset engine, implemented with [OttoVM](https://github.com/robertkrimen/otto).

It enables use cases like the following:

* I want to auto-approve transactions with contract `CasinoDapp`, with up to `0.05 ether` in value, up to a maximum of `1 ether` per 24h period.
* I want to auto-approve transactions to contract `EthAlarmClock` with `data`=`0xdeadbeef`, if `value=0`, `gas < 44k` and `gasPrice < 40Gwei`.

The two main features that are required for this to work well are:

1. Rule implementation: how to create, manage and interpret rules in a flexible but secure manner.
2. Credential management: how to provide auto-unlock without exposing keys unnecessarily.

The sections below deal with both of them.

## Rule Implementation
A ruleset file is implemented as a `js` file. Under the hood, the ruleset-engine is a `SignerUI`, implementing the same methods as the `json-rpc` methods
defined in the UI protocol. Example:
```javascript
function asBig(str) {
    if (str.slice(0, 2) == "0x") { return new BigNumber(str.slice(2), 16) }
    return new BigNumber(str)
}

// Approve transactions to a certain contract if the value is below a certain limit
function ApproveTx(req) {
    var limit = asBig("0xb1a2bc2ec50000")
    var value = asBig(req.transaction.value);
    if (req.transaction.to.toLowerCase() == "0xae967917c465db8578ca9024c205720b1a3651a9"
        && value.lt(limit)) {
        return "Approve"
    }
    // If we return "Reject", it will be rejected.
    // By not returning anything, the request is passed to the next UI, for manual processing
}

// Approve listings if the request was made from IPC
function ApproveListing(req) {
    if (req.metadata.scheme == "ipc") { return "Approve" }
}
```
Whenever the external API is called (and the ruleset is enabled), the `signer` calls the UI, which is an instance of a ruleset-engine. The ruleset-engine
invokes the corresponding method. In doing so, there are three possible outcomes:
1. JS returns "Approve"
* Auto-approve request
2. JS returns "Reject"
* Auto-reject request
3. Error occurs, or something else is returned
* Pass on to `next` ui: the regular UI channel.
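For illustration, a minimal sketch exercising all three outcomes in one rule (the rule itself is hypothetical, not one of the shipped examples):

```javascript
// Hypothetical rule demonstrating the three possible outcomes.
function ApproveTx(req) {
    if (req.transaction.value == "0x0") { return "Approve" } // 1. auto-approve
    if (req.transaction.value == "0x1") { return "Reject" }  // 2. auto-reject
    // 3. Anything else (no return value, another string, or a runtime error)
    //    passes the request on to the next UI for manual processing.
}
```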
A more advanced example can be found below, "Example 1: ruleset for a rate-limited window", using `storage` to `Put` and `Get` `string`s by key.
* At the time of writing, storage only exists as an ephemeral unencrypted implementation, to be used during testing.
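For reference, a minimal sketch of that `storage` API, using the same `Put`/`Get` calls as Example 1 below (the `txcount` key is a hypothetical example):

```javascript
// Persist a counter across rule invocations; values are plain strings,
// and Get returns "" for keys that have never been set.
function OnApprovedTx(resp) {
    var stored = storage.Get("txcount");
    var count = (stored == "") ? 0 : parseInt(stored);
    storage.Put("txcount", String(count + 1));
}
```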
### Things to note
The Otto vm has a few [caveats](https://github.com/robertkrimen/otto):
* "use strict" will parse, but does nothing.
* The regular expression engine (re2/regexp) is not fully compatible with the ECMA5 specification.
* Otto targets ES5. ES6 features (e.g. Typed Arrays) are not supported.

Additionally, a few more constraints have been added:

* The rule execution cannot load external javascript files.
* The only preloaded library is [`bignumber.js`](https://github.com/MikeMcl/bignumber.js) version `2.0.3`. This version is fairly old, and is not aligned with the documentation at the github repository.
* Each invocation is made in a fresh virtual machine. This means that you cannot store data in global variables between invocations. This is a deliberate choice -- if you want to store data, use the disk-backed `storage`, since rules should not rely on ephemeral data.
* Javascript API parameters are _always_ an object. This is also a design choice, to ensure that parameters are accessed by _key_ and not by order. This is to prevent mistakes due to missing parameters or parameter changes.
* The JS engine has access to `storage` and `console`.
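A short sketch of what key-based access means for a rule author (a hypothetical rule; the `address`/`message` fields are the ones used by `ApproveSignData` in the tutorial):

```javascript
// Parameters arrive as one object and are read by key, never by position.
function ApproveSignData(req) {
    if (req.address === undefined || req.message === undefined) {
        return // missing fields simply fall through to manual processing
    }
    if (req.message.indexOf("bazonk") >= 0) { return "Approve" }
}
```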
#### Security considerations
##### Security of ruleset
Some security precautions can be taken, such as:

* Never load `ruleset.js` unless the file is `readonly` (`r-??-??-?`). If the user wishes to modify the ruleset, he must make it writeable and then set it back to readonly.
* This is to prevent attacks where files are dropped on the users disk.
* Since we're going to have to have some form of secure storage (not defined in this section), we could also store the `sha3` of the `ruleset.js` file in there.
* If the user wishes to modify the ruleset, he'd then have to perform e.g. `signer --attest /path/to/ruleset --credential <creds>`
##### Security of implementation
The drawback of this very flexible solution is that the `signer` needs to contain a javascript engine. This is pretty simple to implement, since it's already
implemented for `geth`. There are no known security vulnerabilities in it, nor have we had any security problems with it so far.

The javascript engine would be an added attack surface; but if the validation of `rulesets` is done properly (with hash-based attestation), the actual javascript cannot be considered
an attack surface -- if an attacker can control the ruleset, a much simpler attack would be to implement an "always-approve" rule instead of exploiting the js vm. The only benefit
to be gained from attacking the actual `signer` process from the `js` side would be if it could somehow extract cryptographic keys from memory.
##### Security in usability
Javascript is flexible, but also easy to get wrong, especially when users assume that `js` can handle large integers natively. Typical errors
include trying to multiply `gasCost` with `gas` without using `BigNumber`s.

It's unclear whether any other DSL could be more secure, since there's always the possibility of erroneously implementing a rule.
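To make the pitfall concrete, a small sketch (the gas values are hypothetical; `BigNumber` is the preloaded `bignumber.js`):

```javascript
// Native JS numbers are IEEE-754 doubles: integers above 2^53 silently round.
console.log(9007199254740993 == 9007199254740992); // true(!) -- both round to 2^53
// The preloaded BigNumber keeps full precision:
console.log(new BigNumber("9007199254740993").equals(new BigNumber("9007199254740992"))); // false
// So compute e.g. gasCost = gas * gasPrice via BigNumber, not native arithmetic:
var cost = new BigNumber("186a0", 16).times(new BigNumber("2e90edd000", 16));
```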
## Credential management
The ability to auto-approve transactions means that the signer needs to have the necessary credentials to decrypt keyfiles. These passwords are hereafter called `ksp` (keystore pass).
### Example implementation
Upon startup of the signer, the signer is given a switch: `--seed <path/to/masterseed>`
The `seed` contains a blob of bytes, which is the master seed for the `signer`.
The `signer` uses the `seed` to:
* Generate the `path` where the settings are stored.
* `./settings/1df094eb-c2b1-4689-90dd-790046d38025/vault.dat`
* `./settings/1df094eb-c2b1-4689-90dd-790046d38025/rules.js`
* Generate the encryption password for `vault.dat`.
The `vault.dat` would be an encrypted container storing the following information:
* `ksp` entries
* `sha256` hash of `rules.js`
* Information about paired callers (not yet specified)
### Security considerations
This would leave it up to the user to ensure that the `path/to/masterseed` is handled in a secure way. It's difficult to get around this, although one could
imagine leveraging OS-level keychains where supported. The setup is however in general similar to how ssh-keys are stored in `.ssh/`.
# Implementation status
This is now implemented (with ephemeral non-encrypted storage for now, so not yet enabled).
## Example 1: ruleset for a rate-limited window
```javascript
function big(str) {
    if (str.slice(0, 2) == "0x") { return new BigNumber(str.slice(2), 16) }
    return new BigNumber(str)
}

// Time window: 1 week
var window = 1000 * 3600 * 24 * 7;

// Limit: 1 ether
var limit = new BigNumber("1e18");

function isLimitOk(transaction) {
    var value = big(transaction.value)
    // Start of our window function
    var windowstart = new Date().getTime() - window;

    var txs = [];
    var stored = storage.Get('txs');
    if (stored != "") {
        txs = JSON.parse(stored)
    }
    // First, remove all that have passed out of the time-window
    var newtxs = txs.filter(function(tx) { return tx.tstamp > windowstart });
    console.log(txs, newtxs.length);

    // Secondly, aggregate the current sum
    var sum = new BigNumber(0)
    sum = newtxs.reduce(function(agg, tx) { return big(tx.value).plus(agg) }, sum);
    console.log("ApproveTx > Sum so far", sum);
    console.log("ApproveTx > Requested", value.toNumber());

    // Would we exceed the weekly limit?
    return sum.plus(value).lt(limit)
}

function ApproveTx(r) {
    if (isLimitOk(r.transaction)) {
        return "Approve"
    }
    return "Nope"
}

/**
 * OnApprovedTx(str) is called when a transaction has been approved and signed. The parameter
 * 'response_str' contains the return value that will be sent to the external caller.
 * The return value from this method is ignored - the reason for having this callback is to allow the
 * ruleset to keep track of approved transactions.
 *
 * When implementing rate-limited rules, this callback should be used.
 * If a rule responds with neither 'Approve' nor 'Reject' - the tx goes to manual processing. If the user
 * then accepts the transaction, this method will be called.
 *
 * TL;DR: Use this method to keep track of signed transactions, instead of using the data in ApproveTx.
 */
function OnApprovedTx(resp) {
    var value = big(resp.tx.value)
    var txs = []
    // Load stored transactions
    var stored = storage.Get('txs');
    if (stored != "") {
        txs = JSON.parse(stored)
    }
    // Add this to the storage
    txs.push({ tstamp: new Date().getTime(), value: value });
    storage.Put("txs", JSON.stringify(txs));
}
```
## Example 2: allow destination
```javascript
function ApproveTx(r) {
    if (r.transaction.to.toLowerCase() == "0x0000000000000000000000000000000000001337") { return "Approve" }
    if (r.transaction.to.toLowerCase() == "0x000000000000000000000000000000000000dead") { return "Reject" }
    // Otherwise goes to manual processing
}
```
## Example 3: Allow listing
```javascript
function ApproveListing() {
    return "Approve"
}
```

cmd/clef/sign_flow.png Normal file (binary image, 20 KiB, not shown)

cmd/clef/tutorial.md Normal file

@@ -0,0 +1,211 @@
## Initializing the signer
First, initialize the master seed.
```text
#./signer init
WARNING!
The signer is alpha software, and not yet publically released. This software has _not_ been audited, and there
are no guarantees about the workings of this software. It may contain severe flaws. You should not use this software
unless you agree to take full responsibility for doing so, and know what you are doing.
TLDR; THIS IS NOT PRODUCTION-READY SOFTWARE!
Enter 'ok' to proceed:
>ok
A master seed has been generated into /home/martin/.signer/secrets.dat
This is required to be able to store credentials, such as :
* Passwords for keystores (used by rule engine)
* Storage for javascript rules
* Hash of rule-file
You should treat that file with utmost secrecy, and make a backup of it.
NOTE: This file does not contain your accounts. Those need to be backed up separately!
```
(for readability purposes, we'll remove the WARNING printout in the rest of this document)
## Creating rules
Now, you can create a rule-file. Note that it is not mandatory to use predefined rules, but it's really handy.
```javascript
function ApproveListing() {
    return "Approve"
}
```
Get the `sha256` hash. If you have openssl, you can do `openssl sha256 rules.js`...
```text
#sha256sum rules.js
6c21d1737429d6d4f2e55146da0797782f3c0a0355227f19d702df377c165d72 rules.js
```
...now `attest` the file...
```text
#./signer attest 6c21d1737429d6d4f2e55146da0797782f3c0a0355227f19d702df377c165d72
INFO [02-21|12:14:38] Ruleset attestation updated sha256=6c21d1737429d6d4f2e55146da0797782f3c0a0355227f19d702df377c165d72
```
...and (this is required only for non-production versions) load a mock-up `4byte.json` by copying the file from the source to your current working directory:
```text
#cp $GOPATH/src/github.com/ethereum/go-ethereum/cmd/clef/4byte.json $PWD
```
At this point, we can start the signer with the rule-file:
```text
#./signer --rules rules.js --rpc
INFO [09-25|20:28:11.866] Using CLI as UI-channel
INFO [09-25|20:28:11.876] Loaded 4byte db signatures=5509 file=./4byte.json
INFO [09-25|20:28:11.877] Rule engine configured file=./rules.js
DEBUG[09-25|20:28:11.877] FS scan times list=100.781µs set=13.253µs diff=5.761µs
DEBUG[09-25|20:28:11.884] Ledger support enabled
DEBUG[09-25|20:28:11.888] Trezor support enabled
INFO [09-25|20:28:11.888] Audit logs configured file=audit.log
DEBUG[09-25|20:28:11.888] HTTP registered namespace=account
INFO [09-25|20:28:11.890] HTTP endpoint opened url=http://localhost:8550
DEBUG[09-25|20:28:11.890] IPC registered namespace=account
INFO [09-25|20:28:11.890] IPC endpoint opened url=<nil>
------- Signer info -------
* extapi_version : 2.0.0
* intapi_version : 2.0.0
* extapi_http : http://localhost:8550
* extapi_ipc : <nil>
```
Any list-requests will now be auto-approved by our rule-file.
## Under the hood
While doing the operations above, these files have been created:
```text
#ls -laR ~/.signer/
/home/martin/.signer/:
total 16
drwx------ 3 martin martin 4096 feb 21 12:14 .
drwxr-xr-x 71 martin martin 4096 feb 21 12:12 ..
drwx------ 2 martin martin 4096 feb 21 12:14 43f73718397aa54d1b22
-rwx------ 1 martin martin 256 feb 21 12:12 secrets.dat
/home/martin/.signer/43f73718397aa54d1b22:
total 12
drwx------ 2 martin martin 4096 feb 21 12:14 .
drwx------ 3 martin martin 4096 feb 21 12:14 ..
-rw------- 1 martin martin 159 feb 21 12:14 config.json
#cat /home/martin/.signer/43f73718397aa54d1b22/config.json
{"ruleset_sha256":{"iv":"6v4W4tfJxj3zZFbl","c":"6dt5RTDiTq93yh1qDEjpsat/tsKG7cb+vr3sza26IPL2fvsQ6ZoqFx++CPUa8yy6fD9Bbq41L01ehkKHTG3pOAeqTW6zc/+t0wv3AB6xPmU="}}
```
In `~/.signer`, the `secrets.dat` file was created, containing the `master_seed`.
The `master_seed` was then used to derive a few other things:
- `vault_location`: in this case `43f73718397aa54d1b22`.
  - Thus, if you use a different `master_seed`, a different `vault_location` will be used, so separate seeds do not conflict with each other.
  - Example: `signer --signersecret /path/to/afile ...`
- `config.json` which is the encrypted key/value storage for configuration data, containing the key `ruleset_sha256`.
## Adding credentials
In order to make more useful rules like signing transactions, the signer needs access to the passwords needed to unlock keystores.
```text
#./signer addpw "0x694267f14675d7e1b9494fd8d72fefe1755710fa" "test_password"
INFO [02-21|13:43:21] Credential store updated key=0x694267f14675d7e1b9494fd8d72fefe1755710fa
```
## More advanced rules
Now let's update the rules to make use of credentials:
```javascript
function ApproveListing() {
    return "Approve"
}

function ApproveSignData(r) {
    if (r.address.toLowerCase() == "0x694267f14675d7e1b9494fd8d72fefe1755710fa") {
        if (r.message.indexOf("bazonk") >= 0) {
            return "Approve"
        }
        return "Reject"
    }
    // Otherwise goes to manual processing
}
```
In this example:
* Any requests to sign data with the account `0x694...` will be
  * auto-approved if the message contains `bazonk`,
  * auto-rejected if it does not.
* Any other signing-requests will be passed along for manual approve/reject.

_Note: make sure that `0x694...` is an account you have access to. You can create it either via clef or the traditional account CLI tool. If the latter was chosen, make sure both clef and geth use the same keystore by specifying `--keystore path/to/your/keystore` when running clef._
Attest the new file...
```text
#sha256sum rules.js
2a0cb661dacfc804b6e95d935d813fd17c0997a7170e4092ffbc34ca976acd9f rules.js
#./signer attest 2a0cb661dacfc804b6e95d935d813fd17c0997a7170e4092ffbc34ca976acd9f
INFO [02-21|14:36:30] Ruleset attestation updated sha256=2a0cb661dacfc804b6e95d935d813fd17c0997a7170e4092ffbc34ca976acd9f
```
And start the signer:
```
#./signer --rules rules.js --rpc
INFO [09-25|21:02:16.450] Using CLI as UI-channel
INFO [09-25|21:02:16.466] Loaded 4byte db signatures=5509 file=./4byte.json
INFO [09-25|21:02:16.467] Rule engine configured file=./rules.js
DEBUG[09-25|21:02:16.468] FS scan times list=1.45262ms set=21.926µs diff=6.944µs
DEBUG[09-25|21:02:16.473] Ledger support enabled
DEBUG[09-25|21:02:16.475] Trezor support enabled
INFO [09-25|21:02:16.476] Audit logs configured file=audit.log
DEBUG[09-25|21:02:16.476] HTTP registered namespace=account
INFO [09-25|21:02:16.478] HTTP endpoint opened url=http://localhost:8550
DEBUG[09-25|21:02:16.478] IPC registered namespace=account
INFO [09-25|21:02:16.478] IPC endpoint opened url=<nil>
------- Signer info -------
* extapi_version : 2.0.0
* intapi_version : 2.0.0
* extapi_http : http://localhost:8550
* extapi_ipc : <nil>
```
And then test signing, once with `bazonk` and once without:
```
#curl -H "Content-Type: application/json" -X POST --data "{\"jsonrpc\":\"2.0\",\"method\":\"account_sign\",\"params\":[\"0x694267f14675d7e1b9494fd8d72fefe1755710fa\",\"0x$(xxd -pu <<< ' bazonk baz gaz')\"],\"id\":67}" http://localhost:8550/
{"jsonrpc":"2.0","id":67,"result":"0x93e6161840c3ae1efc26dc68dedab6e8fc233bb3fefa1b4645dbf6609b93dace160572ea4ab33240256bb6d3dadb60dcd9c515d6374d3cf614ee897408d41d541c"}
#curl -H "Content-Type: application/json" -X POST --data "{\"jsonrpc\":\"2.0\",\"method\":\"account_sign\",\"params\":[\"0x694267f14675d7e1b9494fd8d72fefe1755710fa\",\"0x$(xxd -pu <<< ' bonk baz gaz')\"],\"id\":67}" http://localhost:8550/
{"jsonrpc":"2.0","id":67,"error":{"code":-32000,"message":"Request denied"}}
```
Meanwhile, in the signer output:
```text
INFO [02-21|14:42:41] Op approved
INFO [02-21|14:42:56] Op rejected
```
The signer also stores all traffic over the external API in a log file. The last 4 lines show the two requests and their responses:
```text
#tail -n 4 audit.log
t=2018-02-21T14:42:41+0100 lvl=info msg=Sign api=signer type=request metadata="{\"remote\":\"127.0.0.1:49706\",\"local\":\"localhost:8550\",\"scheme\":\"HTTP/1.1\"}" addr="0x694267f14675d7e1b9494fd8d72fefe1755710fa [chksum INVALID]" data=202062617a6f6e6b2062617a2067617a0a
t=2018-02-21T14:42:42+0100 lvl=info msg=Sign api=signer type=response data=93e6161840c3ae1efc26dc68dedab6e8fc233bb3fefa1b4645dbf6609b93dace160572ea4ab33240256bb6d3dadb60dcd9c515d6374d3cf614ee897408d41d541c error=nil
t=2018-02-21T14:42:56+0100 lvl=info msg=Sign api=signer type=request metadata="{\"remote\":\"127.0.0.1:49708\",\"local\":\"localhost:8550\",\"scheme\":\"HTTP/1.1\"}" addr="0x694267f14675d7e1b9494fd8d72fefe1755710fa [chksum INVALID]" data=2020626f6e6b2062617a2067617a0a
t=2018-02-21T14:42:56+0100 lvl=info msg=Sign api=signer type=response data= error="Request denied"
```

@@ -21,21 +21,33 @@ Private key information can be printed by using the `--private` flag;
make sure to use this feature with great caution!
### `ethkey sign <keyfile> <message/file>`
### `ethkey signmessage <keyfile> <message/file>`
Sign the message with a keyfile.
It is possible to refer to a file containing the message.
To sign a message contained in a file, use the `--msgfile` flag.
### `ethkey verify <address> <signature> <message/file>`
### `ethkey verifymessage <address> <signature> <message/file>`
Verify the signature of the message.
It is possible to refer to a file containing the message.
To verify a message contained in a file, use the `--msgfile` flag.
### `ethkey changepassphrase <keyfile>`
Change the passphrase of a keyfile.
Use the `--newpasswordfile` flag to point to the new password file.
## Passphrases
For every command that uses a keyfile, you will be prompted to provide the
passphrase for decrypting the keyfile. To avoid this message, it is possible
to pass the passphrase by using the `--passphrase` flag pointing to a file that
to pass the passphrase by using the `--passwordfile` flag pointing to a file that
contains the passphrase.
## JSON
In case you need the output in JSON format, use the `--json` flag.

@@ -0,0 +1,72 @@
package main

import (
	"fmt"
	"io/ioutil"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/keystore"
	"github.com/ethereum/go-ethereum/cmd/utils"
	"gopkg.in/urfave/cli.v1"
)

var newPassphraseFlag = cli.StringFlag{
	Name:  "newpasswordfile",
	Usage: "the file that contains the new passphrase for the keyfile",
}

var commandChangePassphrase = cli.Command{
	Name:      "changepassphrase",
	Usage:     "change the passphrase on a keyfile",
	ArgsUsage: "<keyfile>",
	Description: `
Change the passphrase of a keyfile.`,
	Flags: []cli.Flag{
		passphraseFlag,
		newPassphraseFlag,
	},
	Action: func(ctx *cli.Context) error {
		keyfilepath := ctx.Args().First()

		// Read key from file.
		keyjson, err := ioutil.ReadFile(keyfilepath)
		if err != nil {
			utils.Fatalf("Failed to read the keyfile at '%s': %v", keyfilepath, err)
		}

		// Decrypt key with passphrase.
		passphrase := getPassphrase(ctx)
		key, err := keystore.DecryptKey(keyjson, passphrase)
		if err != nil {
			utils.Fatalf("Error decrypting key: %v", err)
		}

		// Get a new passphrase.
		fmt.Println("Please provide a new passphrase")
		var newPhrase string
		if passFile := ctx.String(newPassphraseFlag.Name); passFile != "" {
			content, err := ioutil.ReadFile(passFile)
			if err != nil {
				utils.Fatalf("Failed to read new passphrase file '%s': %v", passFile, err)
			}
			newPhrase = strings.TrimRight(string(content), "\r\n")
		} else {
			newPhrase = promptPassphrase(true)
		}

		// Encrypt the key with the new passphrase.
		newJson, err := keystore.EncryptKey(key, newPhrase, keystore.StandardScryptN, keystore.StandardScryptP)
		if err != nil {
			utils.Fatalf("Error encrypting with new passphrase: %v", err)
		}

		// Then write the new keyfile in place of the old one.
		if err := ioutil.WriteFile(keyfilepath, newJson, 0600); err != nil {
			utils.Fatalf("Error writing new keyfile to disk: %v", err)
		}

		// Don't print anything. Just return successfully,
		// producing a positive exit code.
		return nil
	},
}

@@ -90,7 +90,7 @@ If you want to encrypt an existing private key, it can be specified by setting
}
// Encrypt key with passphrase.
passphrase := getPassPhrase(ctx, true)
passphrase := promptPassphrase(true)
keyjson, err := keystore.EncryptKey(key, passphrase, keystore.StandardScryptN, keystore.StandardScryptP)
if err != nil {
utils.Fatalf("Error encrypting key: %v", err)

Some files were not shown because too many files have changed in this diff.