Compare commits


95 Commits

Author SHA1 Message Date
Péter Szilágyi
4bcc0a37ab
Merge pull request #19473 from karalabe/geth-1.8.27
[1.8.27 backport] eth, les, light: enforce CHT checkpoints on fast-sync too
2019-04-17 15:47:54 +03:00
Péter Szilágyi
b5f92e66c6
params, swarm: release Geth v1.8.27 (noop Swarm v0.3.15) 2019-04-17 14:57:38 +03:00
Péter Szilágyi
d8787230fa
eth, les, light: enforce CHT checkpoints on fast-sync too 2019-04-17 14:56:58 +03:00
Péter Szilágyi
cdae1c59ab
Merge pull request #19437 from zsfelfoldi/fix-sendtx
les: fix SendTx cost calculation and verify cost table
2019-04-10 15:45:13 +03:00
Péter Szilágyi
0b00e19ed9
params, swarm: release Geth v1.8.26 (+noop Swarm v0.3.14) 2019-04-10 15:38:01 +03:00
Zsolt Felfoldi
c8d8126bd0 les: check required message types in cost table 2019-04-10 13:12:46 +02:00
Zsolt Felfoldi
0de9f32ae8 les: backported new SendTx cost calculation 2019-04-10 13:12:42 +02:00
Péter Szilágyi
14ae1246b7
Merge pull request #19416 from jmcnevin/cli-fix
Revert flag removal
2019-04-09 11:47:09 +03:00
Péter Szilágyi
dc59af8622
params, swarm: hotfix Geth v1.8.25 release to restore rpc flags 2019-04-09 10:58:00 +03:00
Jeremy McNevin
45730cfab3
cmd/geth: fix accidental --rpccorsdomain and --rpcvhosts removal 2019-04-09 10:56:50 +03:00
Péter Szilágyi
4e13a09c50
Merge pull request #19370 from karalabe/geth-1.8.24
Backport PR for the v1.8.24 maintenance release
2019-04-08 16:16:05 +03:00
Péter Szilágyi
009d2fe2d6
params, swarm: release Geth v1.8.24 (noop Swarm 0.3.12) 2019-04-08 16:06:59 +03:00
Martin Holst Swende
e872ba7a9e
eth, les, geth: implement cli-configurable global gas cap for RPC calls (#19401)
* eth, les, geth: implement cli-configurable global gas cap for RPC calls

* graphql, ethapi: place gas cap in DoCall

* ethapi: reformat log message
2019-04-08 15:15:13 +03:00
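The gas-cap change above can be sketched as a simple clamp applied before simulating a call. This is an illustrative sketch, not geth's actual code: the function name and shape are assumptions, though the commit notes the real clamp sits in DoCall.

```go
package main

import "fmt"

// capGas clamps a caller-supplied gas value for an RPC call (eth_call,
// eth_estimateGas) to a node-operator-configured global cap, protecting
// the node from having to simulate absurdly expensive calls.
// A cap of 0 means "no cap", matching a typical CLI-flag default.
func capGas(requested, globalCap uint64) uint64 {
	if globalCap != 0 && requested > globalCap {
		return globalCap
	}
	return requested
}

func main() {
	fmt.Println(capGas(50000000, 25000000)) // capped to 25000000
	fmt.Println(capGas(100000, 25000000))   // unchanged
	fmt.Println(capGas(50000000, 0))        // cap disabled
}
```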
Felix Lange
9d9c6b5847
p2p/discover: bump failure counter only if no nodes were provided (#19362)
This resolves a minor issue where neighbors responses containing less
than 16 nodes would bump the failure counter, removing the node. One
situation where this can happen is a private deployment where the total
number of extant nodes is less than 16.

Issue found by @jsying.
2019-04-08 14:35:50 +03:00
Péter Szilágyi
8ca6454807
params: set Rinkeby Petersburg fork block (4th May, 2019) 2019-04-08 12:14:05 +03:00
Péter Szilágyi
0e63a70505
core: minor code polishes + rebase fixes 2019-04-08 12:04:31 +03:00
rjl493456442
f1b00cffc8
core: re-omit new log event when logs rebirth 2019-04-08 12:02:15 +03:00
Péter Szilágyi
442320a8ae
travis: update builders to xenial to shadow Go releases 2019-04-08 12:00:42 +03:00
Martin Holst Swende
af401d03a3
all: simplify timestamps to uint64 (#19372)
* all: simplify timestamps to uint64

* tests: update definitions

* clef, faucet, mobile: leftover uint64 fixups

* ethash: fix tests

* graphql: update schema for timestamp

* ethash: remove unused variable
2019-04-08 12:00:42 +03:00
Péter Szilágyi
80a2a35bc3
trie: there's no point in retrieving the metaroot 2019-04-08 12:00:42 +03:00
Péter Szilágyi
fca5f9fd6f
common/fdlimit: fix macos file descriptors for Go 1.12 2019-04-02 13:14:21 +03:00
Péter Szilágyi
38c30f8dd8
light, params: update CHTs, integrate CHT for Goerli too 2019-04-02 12:10:06 +03:00
Péter Szilágyi
c942700427
Merge pull request #19029 from holiman/update1.8
Update1.8
2019-02-20 10:48:12 +02:00
Péter Szilágyi
cde35439e0
params, swarm: release Geth v1.8.23, Swarm v0.3.11 2019-02-20 10:42:02 +02:00
Anton Evangelatov
4f908db69e
cmd/utils: allow for multiple influxdb tags (#18520)
This PR is replacing the metrics.influxdb.host.tag cmd-line flag with metrics.influxdb.tags - a comma-separated key/value tags, that are passed to the InfluxDB reporter, so that we can index measurements with multiple tags, and not just one host tag.

This will be useful for Swarm, where we want to index measurements not just with the host tag, but also with bzzkey and git commit version (for long-running deployments).

(cherry picked from commit 21acf0bc8d4f179397bb7d06d6f36df3cbee4a8e)
2019-02-19 17:34:48 +01:00
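The comma-separated key/value format the new metrics.influxdb.tags flag accepts can be parsed as sketched below. The function name and the choice to silently skip malformed entries are assumptions for illustration, not geth's exact implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags turns "k1=v1,k2=v2,..." into a tag map suitable for an
// InfluxDB reporter, so measurements can be indexed by several tags
// (host, bzzkey, git commit) instead of a single host tag.
func parseTags(s string) map[string]string {
	tags := make(map[string]string)
	for _, pair := range strings.Split(s, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 || kv[0] == "" {
			continue // skip malformed entries in this sketch
		}
		tags[kv[0]] = kv[1]
	}
	return tags
}

func main() {
	fmt.Println(parseTags("host=node-1,bzzkey=ab12,commit=4f908db"))
}
```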
Jerzy Lasyk
320d132925
swarm/metrics: Send the accounting registry to InfluxDB (#18470)
(cherry picked from commit f28da4f602fcd17624cf6d40d070253dd6663121)
2019-02-19 17:34:42 +01:00
lash
7ae2a7bd84
swarm: Reinstate Pss Protocol add call through swarm service (#19117)
* swarm: Reinstate Pss Protocol add call through swarm service

* swarm: Even less self

(cherry picked from commit d88c6ce6b058ccd04b03d079d486b1d55fe5ef61)
2019-02-19 13:18:10 +01:00
Kiel barry
fd34bf594c
contracts/*: golint updates for this or self warning
(cherry picked from commit 53b823afc8c24337290ba2e7889c2dde496e9272)
2019-02-19 13:18:02 +01:00
holisticode
996230174c
cmd/swarm/swarm-smoke: Trigger chunk debug on timeout (#19101)
* cmd/swarm/swarm-smoke: first version trigger has-chunks on timeout

* cmd/swarm/swarm-smoke: finalize trigger to chunk debug

* cmd/swarm/swarm-smoke: fixed httpEndpoint for trigger

* cmd/swarm/swarm-smoke: port

* cmd/swarm/swarm-smoke: ws not rpc

* cmd/swarm/swarm-smoke: added debug output

* cmd/swarm/swarm-smoke: addressed PR comments

* cmd/swarm/swarm-smoke: renamed track-timeout and track-chunks

(cherry picked from commit 62d7688d0a7ddbdb5d7167b264e0ea617578b60d)
2019-02-19 13:11:53 +01:00
Ferenc Szabo
8857707606
p2p, swarm: fix node up races by granular locking (#18976)
* swarm/network: DRY out repeated giga comment

I not necessarily agree with the way we wait for event propagation.
But I truly disagree with having duplicated giga comments.

* p2p/simulations: encapsulate Node.Up field so we avoid data races

The Node.Up field was accessed concurrently without "proper" locking.
There was a lock on Network and that was used sometimes to access
the  field. Other times the locking was missed and we had
a data race.

For example: https://github.com/ethereum/go-ethereum/pull/18464
The case above was solved, but there were still intermittent/hard to
reproduce races. So let's solve the issue permanently.

resolves: ethersphere/go-ethereum#1146

* p2p/simulations: fix unmarshal of simulations.Node

Making Node.Up field private in 13292ee897e345045fbfab3bda23a77589a271c1
broke TestHTTPNetwork and TestHTTPSnapshot. Because the default
UnmarshalJSON does not handle unexported fields.

Important: The fix is partial and not proper to my taste. But I cut
scope as I think the fix may require a change to the current
serialization format. New ticket:
https://github.com/ethersphere/go-ethereum/issues/1177

* p2p/simulations: Add a sanity test case for Node.Config UnmarshalJSON

* p2p/simulations: revert back to defer Unlock() pattern for Network

It's a good patten to call `defer Unlock()` right after `Lock()` so
(new) error cases won't miss to unlock. Let's get back to that pattern.

The patten was abandoned in 85a79b3ad3c5863f8612d25c246bcfad339f36b7,
while fixing a data race. That data race does not exist anymore,
since the Node.Up field got hidden behind its own lock.

* p2p/simulations: consistent naming for test providers Node.UnmarshalJSON

* p2p/simulations: remove JSON annotation from private fields of Node

As unexported fields are not serialized.

* p2p/simulations: fix deadlock in Network.GetRandomDownNode()

Problem: GetRandomDownNode() locks -> getDownNodeIDs() ->
GetNodes() tries to lock -> deadlock

On Network type, unexported functions must assume that `net.lock`
is already acquired and should not call exported functions which
might try to lock again.

* p2p/simulations: ensure method conformity for Network

Connect* methods were moved to p2p/simulations.Network from
swarm/network/simulation. However these new methods did not follow
the pattern of Network methods, i.e., all exported method locks
the whole Network either for read or write.

* p2p/simulations: fix deadlock during network shutdown

`TestDiscoveryPersistenceSimulationSimAdapter` often got into deadlock.
The execution was stuck on two locks, i.e, `Kademlia.lock` and
`p2p/simulations.Network.lock`. Usually the test got stuck once in each
20 executions with high confidence.

`Kademlia` was stuck in `Kademlia.EachAddr()` and `Network` in
`Network.Stop()`.

Solution: in `Network.Stop()` `net.lock` must be released before
calling `node.Stop()` as stopping a node (somehow - I did not find
the exact code path) causes `Network.InitConn()` to be called from
`Kademlia.SuggestPeer()` and that blocks on `net.lock`.

Related ticket: https://github.com/ethersphere/go-ethereum/issues/1223

* swarm/state: simplify if statement in DBStore.Put()

* p2p/simulations: remove faulty godoc from private function

The comment started with the wrong method name.

The method is simple and self explanatory. Also, it's private.
=> Let's just remove the comment.

(cherry picked from commit 50b872bf05b8644f14b9bea340092ced6968dd59)
2019-02-19 13:11:52 +01:00
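The locking convention this commit settles on — exported methods take the lock with `defer Unlock()` so no error path can miss the release, while unexported helpers assume the lock is already held and never call back into exported methods — can be sketched as follows. The type and fields are illustrative stand-ins, not the actual p2p/simulations code.

```go
package main

import (
	"fmt"
	"sync"
)

// Network illustrates the convention: all exported methods lock the
// whole Network; unexported methods assume net.lock is held.
type Network struct {
	lock  sync.RWMutex
	nodes []string
	up    map[string]bool
}

// GetNodes is exported, so it locks for the whole call.
func (n *Network) GetNodes() []string {
	n.lock.RLock()
	defer n.lock.RUnlock()
	return append([]string(nil), n.nodes...)
}

// GetRandomDownNode locks once, then only calls unexported helpers.
// Calling GetNodes from here instead would re-acquire the lock and can
// deadlock once a writer is queued -- the bug described above.
func (n *Network) GetRandomDownNode() string {
	n.lock.RLock()
	defer n.lock.RUnlock()
	for _, id := range n.getNodeIDs() {
		if !n.up[id] {
			return id
		}
	}
	return ""
}

// getNodeIDs is unexported: it assumes n.lock is already held.
func (n *Network) getNodeIDs() []string {
	return n.nodes
}

func main() {
	net := &Network{
		nodes: []string{"a", "b", "c"},
		up:    map[string]bool{"a": true, "b": false, "c": true},
	}
	fmt.Println(net.GetRandomDownNode())
}
```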
gluk256
d6c1fcbe04
swarm/pss: refactoring (#19110)
* swarm/pss: split pss and keystore

* swarm/pss: moved whisper to keystore

* swarm/pss: goimports fixed

(cherry picked from commit 12ca3b172a7e1b2b63ef2369e8dc37c75144c81f)
2019-02-19 13:11:52 +01:00
Elad
79cac793c0
swarm/storage/netstore: add fetcher cancellation on shutdown (#19049)
swarm/network/stream: remove netstore internal wg
swarm/network/stream: run individual tests with t.Run

(cherry picked from commit 3ee09ba03511ad9a49e37c58f0c35b9c9771dd6f)
2019-02-19 13:11:52 +01:00
holisticode
5de6b6b529
swarm/network: Saturation check for healthy networks (#19071)
* swarm/network: new saturation for  implementation

* swarm/network: re-added saturation func in Kademlia as it is used elsewhere

* swarm/network: saturation with higher MinBinSize

* swarm/network: PeersPerBin with depth check

* swarm/network: edited tests to pass new saturated check

* swarm/network: minor fix saturated check

* swarm/network/simulations/discovery: fixed renamed RPC call

* swarm/network: renamed to isSaturated and returns bool

* swarm/network: early depth check

(cherry picked from commit 2af24724dd5f3ab1994001854eb32c6a19f9f64a)
2019-02-19 13:11:52 +01:00
Elad
3d2bedf8d0
swarm/storage: fix influxdb gc metrics report (#19102)
(cherry picked from commit 5b8ae7885eaa033aaf1fb1d5959b7f1c86761d6d)
2019-02-19 13:11:52 +01:00
Janoš Guljaš
8ea3d8ad90
swarm: fix network/stream data races (#19051)
* swarm/network/stream: newStreamerTester cleanup only if err is nil

* swarm/network/stream: raise newStreamerTester waitForPeers timeout

* swarm/network/stream: fix data races in GetPeerSubscriptions

* swarm/storage: prevent data race on LDBStore.batchesC

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461775049

* swarm/network/stream: fix TestGetSubscriptionsRPC data race

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461768477

* swarm/network/stream: correctly use Simulation.Run callback

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461783804

* swarm/network: protect addrCountC in Kademlia.AddrCountC function

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-462273444

* p2p/simulations: fix a deadlock calling getRandomNode with lock

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-462317407

* swarm/network/stream: terminate disconnect goruotines in tests

* swarm/network/stream: reduce memory consumption when testing data races

* swarm/network/stream: add watchDisconnections helper function

* swarm/network/stream: add concurrent counter for tests

* swarm/network/stream: rename race/norace test files and use const

* swarm/network/stream: remove watchSim and its panic

* swarm/network/stream: pass context in watchDisconnections

* swarm/network/stream: add concurrent safe bool for watchDisconnections

* swarm/storage: fix LDBStore.batchesC data race by not closing it

(cherry picked from commit 3fd6db2bf63ce90232de445c7f33943406a5e634)
2019-02-19 13:11:52 +01:00
Elad
a0127019c3
swarm: fix uptime gauge update goroutine leak by introducing cleanup functions (#19040)
(cherry picked from commit d596bea2d501d20b92e0fd4baa8bba682157dfa7)
2019-02-19 13:11:51 +01:00
holisticode
7a333e4104
swarm/storage: fix HashExplore concurrency bug ethersphere#1211 (#19028)
* swarm/storage: fix HashExplore concurrency bug ethersphere#1211

*  swarm/storage: lock as value not pointer

* swarm/storage: wait for  to complete

* swarm/storage: fix linter problems

* swarm/storage: append to nil slice

(cherry picked from commit 3d22a46c94f1d842dbada665b36a453362adda74)
2019-02-19 13:11:51 +01:00
gluk256
799fe99537
swarm/pss: mutex lifecycle fixed (#19045)
(cherry picked from commit b30109df3c7c56cb0d1752fc03f478474c3c190a)
2019-02-19 13:11:51 +01:00
Rafael Matias
3b02b0ba4b
swarm/docker: add global-store and split docker images (#19038)
(cherry picked from commit 6cb7d52a29c68cdc4eafabb6dfe7594c288d151e)
2019-02-19 13:11:51 +01:00
Janoš Guljaš
85217b08bd
cmd/swarm/global-store: global store cmd (#19014)
(cherry picked from commit 33d0a0efa61fed2b16797fd12161519943943282)
2019-02-19 13:11:51 +01:00
Ferenc Szabo
dcff622d43
swarm: CI race detector test adjustments (#19017)
(cherry picked from commit 27e3f968194e2723279b60f71c79d4da9fc7577f)
2019-02-19 13:11:51 +01:00
Anton Evangelatov
a3db00f270
swarm/network: refactor simulation tests bootstrap (#18975)
(cherry picked from commit 597597e8b27ee60a25b4533771702892e72898a5)
2019-02-19 13:11:50 +01:00
holisticode
769e43e334
swarm: GetPeerSubscriptions RPC (#18972)
(cherry picked from commit 43e1b7b124d2bcfba98fbe54972a35c022d85bf2)
2019-02-19 13:11:50 +01:00
gluk256
8d8ddea1a3
swarm/pss: transition to whisper v6 (#19023)
(cherry picked from commit cde02e017ef2fb254f9b91888f4a14645c24890a)
2019-02-19 13:09:10 +01:00
lash
068725c5b0
swarm/network, swarm/storage: Preserve opentracing contexts (#19022)
(cherry picked from commit 0c10d376066cb7e57d3bfc03f950c7750cd90640)
2019-02-19 13:09:09 +01:00
Ferenc Szabo
710775f435
swarm/network: fix data race in fetcher_test.go (#18469)
(cherry picked from commit 19bfcbf9117f39f54f698a0953534d90c08e9930)
2019-02-19 13:09:09 +01:00
lash
0fd0108507
swarm/pss: Remove pss service leak in test (#18992)
(cherry picked from commit 7c60d0a6a2d3925c2862cbbb188988475619fd0d)
2019-02-19 13:06:14 +01:00
Ferenc Szabo
3c62cc6bba
swarm/storage: fix test timeout with -race by increasing mget timeout
(cherry picked from commit 1c3aa8d9b12d6104ccddecc1711bc6be2f5b269d)
2019-02-19 13:06:14 +01:00
Janoš Guljaš
333b1bfb6c
swarm/storage/localstore: new localstore package (#19015)
(cherry picked from commit 4f3d22f06c546f36487b33dfb6b5cb4df3ecf073)
2019-02-19 13:06:14 +01:00
holisticode
d1ace4f344
swarm: Debug API and HasChunks() API endpoint (#18980)
(cherry picked from commit 41597c2856d6ac7328baca1340c3e36ab0edd382)
2019-02-19 13:06:13 +01:00
Anton Evangelatov
637a75d61a
cmd/swarm/swarm-smoke: refactor generateEndpoints (#19006)
(cherry picked from commit d212535ddd5bf63a0c0b194525246480ae46c537)
2019-02-19 13:05:55 +01:00
Anton Evangelatov
355d55bd34
cmd/swarm/swarm-smoke: remove wrong metrics (#18970)
(cherry picked from commit c5c9cef5c0baf1652b6642858ad2426794823699)
2019-02-19 13:05:37 +01:00
Elad
7038b5734c
cmd/swarm/swarm-smoke: sliding window test (#18967)
(cherry picked from commit b91bf08876ca4da0c2a843a9ed3e88d64427cfb8)
2019-02-19 13:05:26 +01:00
holisticode
1ecf2860cf
cmd/swarm: hashes command (#19008)
(cherry picked from commit 7f55b0cbd8618a1b0de8d7e37d2b0143ebae4abf)
2019-02-19 12:57:53 +01:00
holisticode
034f65e9e8
swarm/storage: Get all chunk references for a given file (#19002)
(cherry picked from commit 3eff652a7b606f25d43bef6ccb998b8e306f8a75)
2019-02-19 12:57:53 +01:00
lash
607a1968e6
swarm/network: Remove extra random peer, connect test sanity, comments (#18964)
(cherry picked from commit f9401ae011ddf7f8d2d95020b7446c17f8d98dc1)
2019-02-19 12:57:53 +01:00
Janoš Guljaš
3f54994db0
swarm: fix flaky delivery tests (#18971)
(cherry picked from commit 592bf6a59cac9697f0491b24e5093cb759d7e44c)
2019-02-19 12:57:53 +01:00
Elad
2695aa9e0d
p2p/testing, swarm: remove unused testing.T in protocol tester (#18500)
(cherry picked from commit 2abeb35d5425d72c2f7fdfe4209f7a94fac52a8e)
2019-02-19 12:56:31 +01:00
gluk256
e247dcc141
swarm/version: commit version added (#18510)
(cherry picked from commit ad13d2d407d2f614c39af92430fda0a926da2a8a)
2019-02-19 12:56:31 +01:00
Janoš Guljaš
b774d0a507
swarm: fix a data race on startTime (#18511)
(cherry picked from commit fa34429a2695f57bc0a96cd78f25e86700d8ee44)
2019-02-19 12:56:30 +01:00
Anton Evangelatov
4976fcc91a
swarm: bootnode-mode, new bootnodes and no p2p package discovery (#18498)
(cherry picked from commit bbd120354a8d226b446591eeda9f9462cb9b690a)
2019-02-19 12:56:30 +01:00
Anton Evangelatov
878aa58ec6
cmd/swarm: use resetting timer to measure fetch time (#18474)
(cherry picked from commit a0b0db63055e1dd350215f9fe04b0abf19f3488a)
2019-02-19 12:56:30 +01:00
Elad
475a0664c5
p2p/simulations: fix data race on swarm/network/simulations (#18464)
(cherry picked from commit 85a79b3ad3c5863f8612d25c246bcfad339f36b7)
2019-02-19 12:56:30 +01:00
holisticode
4625b1257f
cmd/swarm/swarm-smoke: use ResettingTimer instead of Counters for times (#18479)
(cherry picked from commit 560957799a089042e471320d179ef2e96caf4f8d)
2019-02-19 12:56:30 +01:00
Elad
21d54bcaac
cmd/swarm/swarm-snapshot: disable tests on windows (#18478)
(cherry picked from commit 632135ce4c1d8d3d9a36771aab4137260018e84b)
2019-02-19 12:56:30 +01:00
holisticode
7383db4dac
Upload speed (#18442)
(cherry picked from commit 257bfff316e4efb8952fbeb67c91f86af579cb0a)
2019-02-19 12:56:30 +01:00
Elad
afb65f6ace
swarm/network: fix data race warning on TestBzzHandshakeLightNode (#18459)
(cherry picked from commit 81e26d5a4837077d5fff17e7b461061b134a4a00)
2019-02-19 12:55:18 +01:00
Viktor Trón
1f1c751b6e
swarm/network: rewrite of peer suggestion engine, fix skipped tests (#18404)
* swarm/network: fix skipped tests related to suggestPeer

* swarm/network: rename depth to radius

* swarm/network: uncomment assertHealth and improve comments

* swarm/network: remove commented code

* swarm/network: kademlia suggestPeer algo correction

* swarm/network: kademlia suggest peer

 * simplify suggest Peer code
 * improve peer suggestion algo
 * add comments
 * kademlia testing improvements
   * assertHealth -> checkHealth (test helper)
   * testSuggestPeer -> checkSuggestPeer (test helper)
   * remove testSuggestPeerBug and TestKademliaCase

* swarm/network: kademlia suggestPeer cleanup, improved comments

* swarm/network: minor comment, discovery test default arg

(cherry picked from commit bcb2594151c849d65108dd94e54b69067d117d7d)
2019-02-19 12:55:07 +01:00
Elad
a3f31f51f3
cmd/swarm/swarm-snapshot: swarm snapshot generator (#18453)
* cmd/swarm/swarm-snapshot: add binary to create network snapshots

* cmd/swarm/swarm-snapshot: refactor and extend tests

* p2p/simulations: remove unused triggerChecks func and fix linter

* internal/cmdtest: raise the timeout for killing TestCmd

* cmd/swarm/swarm-snapshot: add more comments and other minor adjustments

* cmd/swarm/swarm-snapshot: remove redundant check in createSnapshot

* cmd/swarm/swarm-snapshot: change comment wording

* p2p/simulations: revert Simulation.Run from master

https://github.com/ethersphere/go-ethereum/pull/1077/files#r247078904

* cmd/swarm/swarm-snapshot: address pr comments

* swarm/network/simulations/discovery: removed snapshot write to file

* cmd/swarm/swarm-snapshot, swarm/network/simulations: removed redundant connection event check, fixed lint error

(cherry picked from commit 34f11e752f61b81c13cdde0649a3c7b14f801c69)
2019-02-19 12:54:56 +01:00
Janoš Guljaš
e63995b3f3
swarm/network: fix data race in TestNetworkID test (#18460)
(cherry picked from commit 96c7c18b184ae894f1c6bd5fbfc45fbcfa9ace77)
2019-02-19 12:54:10 +01:00
Janoš Guljaš
dd3e894747
swarm/storage: fix mockNetFetcher data races (#18462)
fixes: ethersphere/go-ethereum#1117
(cherry picked from commit f728837ee6b48a2413437f54057b4552b7e77494)
2019-02-19 12:54:10 +01:00
Péter Szilágyi
df355eceb4
build: explicitly force .xz compression (old debuild picks gzip) (#19118)
(cherry picked from commit c0b9c763bb1572c202a60b82e7dcdc48dc3c280a)
2019-02-19 11:00:46 +02:00
Péter Szilágyi
84cb00a94d
travis.yml: add launchpad SSH public key (#19115)
(cherry picked from commit 75a931470ee006623f7f172d2a50e7723ca26187)
2019-02-19 11:00:38 +02:00
Martin Holst Swende
992a7bbad5
vendor: update bigcache
(cherry picked from commit 37e5a908e7368d84beef14a3ee8c534f34aa636f)
2019-02-19 11:00:31 +02:00
Martin Holst Swende
a458153098
trie: fix error in node decoding (#19111) 2019-02-19 10:59:57 +02:00
Péter Szilágyi
fe5258b41e
vendor: pull in upstream syscall fixes for non-linux/arm64
(cherry picked from commit 9d3ea8df1c70be24e5814e8338dfc9078b8ccafe)
2019-02-19 10:59:40 +02:00
Péter Szilágyi
d9be337669
vendor: update syscalls dependency
(cherry picked from commit dcc045f03c7c933dcdc7302f0338cbbfef7398ea)
2019-02-19 10:59:24 +02:00
Felix Lange
7bd6f39dc3
common/fdlimit: fix windows build (#19068)
(cherry picked from commit ba90a4aaa42428fc5f38c4869455db5a51565714)
2019-02-19 10:58:54 +02:00
Felix Lange
b247052a64
build: avoid dput and upload with sftp directly (#19067)
(cherry picked from commit a8ddf7ad8393cff80848b193c698ce5e6440e061)
2019-02-19 10:58:45 +02:00
Felix Lange
276f824707
.travis.yml: fix upload destination (#19043)
(cherry picked from commit edf976ee8e7e1561cf11cbdc5a0c5edb497dda34)
2019-02-19 10:58:13 +02:00
Martin Holst Swende
048b463b30
common/fdlimit: cap on MacOS file limits, fixes #18994 (#19035)
* common/fdlimit: cap on MacOS file limits, fixes #18994

* common/fdlimit: fix Maximum-check to respect OPEN_MAX

* common/fdlimit: return error if OPEN_MAX is exceeded in Raise()

* common/fdlimit: goimports

* common/fdlimit: check value after setting fdlimit

* common/fdlimit: make comment a bit more descriptive

* cmd/utils: make fdlimit happy path a bit cleaner

(cherry picked from commit f48da43bae183a04a23d298cb1790d2f8d2cec51)
2019-02-19 10:57:49 +02:00
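The clamping logic of the fdlimit fix can be sketched as below. The constant value 10240 is macOS's OPEN_MAX; the function shape and names are assumptions for illustration — the real code performs the actual setrlimit syscall and re-reads the limit afterwards to verify it took effect.

```go
package main

import "fmt"

// openMax is the hard ceiling macOS enforces on RLIMIT_NOFILE:
// setrlimit rejects any value above it.
const openMax = 10240

// raise computes the file-descriptor limit that can actually be set:
// capped first by the OS hard limit, then by OPEN_MAX on macOS.
func raise(request, hardLimit uint64) (uint64, error) {
	if hardLimit < request {
		request = hardLimit // cannot exceed the OS hard limit
	}
	if request > openMax {
		request = openMax // macOS refuses anything above OPEN_MAX
	}
	if request == 0 {
		return 0, fmt.Errorf("file descriptor limit would be zero")
	}
	// A real implementation would now call Setrlimit and check the
	// value the kernel actually accepted.
	return request, nil
}

func main() {
	got, _ := raise(16384, 1<<40)
	fmt.Println(got) // clamped to 10240 on macOS
}
```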
Felix Lange
9f5fb15097
build: use SFTP for launchpad uploads (#19037)
* build: use sftp for launchpad uploads

* .travis.yml: configure sftp export

* build: update CI docs

(cherry picked from commit 3de19c8b31ab975eed1f7f276d31761f7f8b9af9)
2019-02-19 10:56:14 +02:00
Péter Szilágyi
2072c26a96
cmd, core, params: add support for Goerli
(cherry picked from commit b0ed083ead2d58cc25754eacdb48046eb2bc81cb)
2019-02-19 10:53:47 +02:00
Péter Szilágyi
4da2092908
core: fix pruner panic when importing low-diff-large-sidechain 2019-02-09 17:45:23 +01:00
Martin Holst Swende
3ab9dcc3bd
core: repro #18977 2019-02-09 17:44:15 +01:00
Péter Szilágyi
18f702faf7
cmd/puppeth: handle pre-set Petersburg number, save changed fork rules 2019-02-09 17:38:00 +01:00
Martin Holst Swende
3a95128b22
core: fix error in block iterator (#18986) 2019-02-09 17:36:20 +01:00
Martin Holst Swende
631e2f07f6
eth: make tracers respect pre- EIP 158/161 rule 2019-02-09 17:35:54 +01:00
Felix Lange
7fa3509e2e params, swarm/version: Geth 1.8.22-stable, Swarm 0.3.10-stable 2019-01-31 11:52:18 +01:00
Felix Lange
86ec742f97 p2p/discover: improve table addition code (#18974)
This change clears up confusion around the two ways in which nodes
can be added to the table.

When a neighbors packet is received as a reply to findnode, the nodes
contained in the reply are added as 'seen' entries if sufficient space
is available.

When a ping is received and the endpoint verification has taken place,
the remote node is added as a 'verified' entry or moved to the front of
the bucket if present. This also updates the node's IP address and port
if they have changed.
2019-01-31 11:51:13 +01:00
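The two addition paths the commit distinguishes can be sketched with a toy bucket: findnode results are appended as "seen" only if space is free, while a ping-verified node is moved (or inserted) at the front. The bucket type and method names are illustrative, not p2p/discover's own.

```go
package main

import "fmt"

// bucketSize matches the Kademlia bucket capacity used by discovery.
const bucketSize = 16

type bucket struct{ entries []string }

// addSeen appends a node learned from a neighbors reply, without
// displacing anything, and only if the bucket has room.
func (b *bucket) addSeen(id string) {
	for _, e := range b.entries {
		if e == id {
			return
		}
	}
	if len(b.entries) < bucketSize {
		b.entries = append(b.entries, id)
	}
}

// addVerified moves a node that completed endpoint verification to the
// front of the bucket, or inserts it at the front if absent and there
// is room.
func (b *bucket) addVerified(id string) {
	for i, e := range b.entries {
		if e == id {
			copy(b.entries[1:i+1], b.entries[:i])
			b.entries[0] = id
			return
		}
	}
	if len(b.entries) < bucketSize {
		b.entries = append([]string{id}, b.entries...)
	}
}

func main() {
	b := &bucket{}
	b.addSeen("n1")
	b.addSeen("n2")
	b.addVerified("n2") // n2 proved liveness, moves to front
	fmt.Println(b.entries)
}
```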
Felföldi Zsolt
d9a07fba67 params: new CHTs (#18577) 2019-01-29 17:50:20 +01:00
Felix Lange
4cd90e02e2 p2p/discover, p2p/enode: rework endpoint proof handling, packet logging (#18963)
This change resolves multiple issues around handling of endpoint proofs.
The proof is now done separately for each IP and completing the proof
requires a matching ping hash.

Also remove waitping because it's equivalent to sleep. waitping was
slightly more efficient, but that may cause issues with findnode if
packets are reordered and the remote end sees findnode before pong.

Logging of received packets was hitherto done after handling the packet,
which meant that sent replies were logged before the packet that
generated them. This change splits up packet handling into 'preverify'
and 'handle'. The error from 'preverify' is logged, but 'handle' happens
after the message is logged. This fixes the order. Packet logs now
contain the node ID.
2019-01-29 17:50:15 +01:00
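The preverify/handle split that fixes the log ordering can be sketched like this: verification errors are logged, then the inbound packet is logged, and only then is the handler run, so any reply it sends appears after the packet that caused it. The packet type and handlers below are illustrative, not the real discovery code.

```go
package main

import "fmt"

type packet struct {
	kind string
	from string
}

var log []string

// preverify checks the packet before anything is acted upon.
func preverify(p packet) error {
	if p.from == "" {
		return fmt.Errorf("unknown sender")
	}
	return nil
}

// handle may send replies; they are logged after the inbound packet.
func handle(p packet) {
	if p.kind == "findnode" {
		log = append(log, "<< neighbors to "+p.from)
	}
}

// process enforces the ordering: preverify, log inbound, then handle.
func process(p packet) {
	if err := preverify(p); err != nil {
		log = append(log, ">> "+p.kind+" (dropped: "+err.Error()+")")
		return
	}
	log = append(log, ">> "+p.kind+" from "+p.from)
	handle(p)
}

func main() {
	process(packet{kind: "findnode", from: "n1"})
	fmt.Println(log)
}
```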
Felix Lange
1f3dfed19e build: tweak debian source package build/upload options (#18962)
dput --passive should make repo pushes from Travis work again.
dput --no-upload-log works around an issue I had while uploading locally.

debuild -d says that debuild shouldn't check for build dependencies when
creating the source package. This option is needed to make builds work
in environments where the installed Go version doesn't match the
declared dependency in the source package.
2019-01-29 17:50:09 +01:00
Samuel Marks
2ae481ff6b travis, appveyor: bump to Go 1.11.5 (#18947) 2019-01-29 17:49:59 +01:00
Martin Holst Swende
c7664b0636 core, cmd/puppeth: implement constantinople fix, disable EIP-1283 (#18486)
This PR adds a new fork which disables EIP-1283. Internally it's called Petersburg,
but the genesis/config field is ConstantinopleFix.

The block numbers are:

    7280000 for Constantinople on Mainnet
    7280000 for ConstantinopleFix on Mainnet
    4939394 for ConstantinopleFix on Ropsten
    9999999 for ConstantinopleFix on Rinkeby (real number decided later)

This PR also defaults to using the same ConstantinopleFix number as whatever
Constantinople is set to. That is, it will default to mainnet behaviour if ConstantinopleFix
is not set. This means that for private networks which have already transitioned
to Constantinople, this PR will break the network unless ConstantinopleFix is
explicitly set!
2019-01-29 17:49:27 +01:00
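The defaulting rule described above — an unset ConstantinopleFix (Petersburg) block inherits whatever Constantinople is set to — can be sketched as follows. Field names mirror the commit's description, not geth's exact config structs.

```go
package main

import "fmt"

type chainConfig struct {
	ConstantinopleBlock *uint64
	PetersburgBlock     *uint64 // aka ConstantinopleFix
}

// isPetersburg reports whether EIP-1283 is disabled at the given block.
// If no Petersburg number is configured, it defaults to the
// Constantinople number -- which is why already-transitioned private
// networks break unless ConstantinopleFix is set explicitly.
func (c *chainConfig) isPetersburg(block uint64) bool {
	fork := c.PetersburgBlock
	if fork == nil {
		fork = c.ConstantinopleBlock
	}
	return fork != nil && *fork <= block
}

func main() {
	n := uint64(7280000)
	cfg := &chainConfig{ConstantinopleBlock: &n} // PetersburgBlock unset
	fmt.Println(cfg.isPetersburg(7280000))
	fmt.Println(cfg.isPetersburg(7279999))
}
```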
450 changed files with 94270 additions and 17812 deletions


@@ -4,7 +4,7 @@ sudo: false
 matrix:
   include:
     - os: linux
-      dist: trusty
+      dist: xenial
       sudo: required
       go: 1.10.x
       script:

@@ -16,7 +16,7 @@ matrix:
     # These are the latest Go versions.
     - os: linux
-      dist: trusty
+      dist: xenial
       sudo: required
       go: 1.11.x
       script:

@@ -43,7 +43,7 @@ matrix:
     # This builder only tests code linters on latest version of Go
     - os: linux
-      dist: trusty
+      dist: xenial
       go: 1.11.x
       env:
         - lint

@@ -55,7 +55,7 @@ matrix:
     # This builder does the Ubuntu PPA upload
     - if: type = push
       os: linux
-      dist: trusty
+      dist: xenial
       go: 1.11.x
       env:
         - ubuntu-ppa

@@ -68,13 +68,16 @@ matrix:
             - debhelper
             - dput
             - fakeroot
+            - python-bzrlib
+            - python-paramiko
       script:
-        - go run build/ci.go debsrc -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>" -upload ppa:ethereum/ethereum
+        - echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
+        - go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
 
     # This builder does the Linux Azure uploads
     - if: type = push
       os: linux
-      dist: trusty
+      dist: xenial
       sudo: required
       go: 1.11.x
       env:

@@ -108,7 +111,7 @@ matrix:
     # This builder does the Linux Azure MIPS xgo uploads
     - if: type = push
       os: linux
-      dist: trusty
+      dist: xenial
       services:
         - docker
       go: 1.11.x

@@ -136,7 +139,7 @@ matrix:
     # This builder does the Android Maven and Azure uploads
     - if: type = push
       os: linux
-      dist: trusty
+      dist: xenial
       addons:
         apt:
           packages:

@@ -156,7 +159,7 @@ matrix:
       git:
         submodules: false # avoid cloning ethereum/tests
       before_install:
-        - curl https://storage.googleapis.com/golang/go1.11.4.linux-amd64.tar.gz | tar -xz
+        - curl https://storage.googleapis.com/golang/go1.11.5.linux-amd64.tar.gz | tar -xz
         - export PATH=`pwd`/go/bin:$PATH
         - export GOROOT=`pwd`/go
         - export GOPATH=$HOME/go

@@ -203,7 +206,7 @@ matrix:
     # This builder does the Azure archive purges to avoid accumulating junk
     - if: type = cron
       os: linux
-      dist: trusty
+      dist: xenial
       go: 1.11.x
       env:
         - azure-purge


@@ -23,8 +23,8 @@ environment:
 install:
   - git submodule update --init
   - rmdir C:\go /s /q
-  - appveyor DownloadFile https://storage.googleapis.com/golang/go1.11.4.windows-%GETH_ARCH%.zip
-  - 7z x go1.11.4.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
+  - appveyor DownloadFile https://storage.googleapis.com/golang/go1.11.5.windows-%GETH_ARCH%.zip
+  - 7z x go1.11.5.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
   - go version
   - gcc --version


@@ -7,11 +7,18 @@ Canonical.
 Packages of develop branch commits have suffix -unstable and cannot be installed alongside
 the stable version. Switching between release streams requires user intervention.
 
+## Launchpad
+
 The packages are built and served by launchpad.net. We generate a Debian source package
 for each distribution and upload it. Their builder picks up the source package, builds it
 and installs the new version into the PPA repository. Launchpad requires a valid signature
-by a team member for source package uploads. The signing key is stored in an environment
-variable which Travis CI makes available to certain builds.
+by a team member for source package uploads.
+
+The signing key is stored in an environment variable which Travis CI makes available to
+certain builds. Since Travis CI doesn't support FTP, SFTP is used to transfer the
+packages. To set this up yourself, you need to create a Launchpad user and add a GPG key
+and SSH key to it. Then encode both keys as base64 and configure 'secret' environment
+variables `PPA_SIGNING_KEY` and `PPA_SSH_KEY` on Travis.
 
 We want to build go-ethereum with the most recent version of Go, irrespective of the Go
 version that is available in the main Ubuntu repository. In order to make this possible,

@@ -27,7 +34,7 @@ Add the gophers PPA and install Go 1.10 and Debian packaging tools:
 
     $ sudo apt-add-repository ppa:gophers/ubuntu/archive
     $ sudo apt-get update
-    $ sudo apt-get install build-essential golang-1.10 devscripts debhelper
+    $ sudo apt-get install build-essential golang-1.10 devscripts debhelper python-bzrlib python-paramiko
 
 Create the source packages:


@@ -441,11 +441,8 @@ func archiveBasename(arch string, archiveVersion string) string {
 func archiveUpload(archive string, blobstore string, signer string) error {
 	// If signing was requested, generate the signature files
 	if signer != "" {
-		pgpkey, err := base64.StdEncoding.DecodeString(os.Getenv(signer))
-		if err != nil {
-			return fmt.Errorf("invalid base64 %s", signer)
-		}
-		if err := build.PGPSignFile(archive, archive+".asc", string(pgpkey)); err != nil {
+		key := getenvBase64(signer)
+		if err := build.PGPSignFile(archive, archive+".asc", string(key)); err != nil {
 			return err
 		}
 	}
@@ -488,7 +485,8 @@ func maybeSkipArchive(env build.Environment) {
 func doDebianSource(cmdline []string) {
 	var (
 		signer  = flag.String("signer", "", `Signing key name, also used as package author`)
-		upload  = flag.String("upload", "", `Where to upload the source package (usually "ppa:ethereum/ethereum")`)
+		upload  = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
+		sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
 		workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
 		now     = time.Now()
 	)
@@ -498,11 +496,7 @@ func doDebianSource(cmdline []string) {
 	maybeSkipArchive(env)
 	// Import the signing key.
-	if b64key := os.Getenv("PPA_SIGNING_KEY"); b64key != "" {
-		key, err := base64.StdEncoding.DecodeString(b64key)
-		if err != nil {
-			log.Fatal("invalid base64 PPA_SIGNING_KEY")
-		}
+	if key := getenvBase64("PPA_SIGNING_KEY"); len(key) > 0 {
 		gpg := exec.Command("gpg", "--import")
 		gpg.Stdin = bytes.NewReader(key)
 		build.MustRun(gpg)
@@ -513,22 +507,58 @@ func doDebianSource(cmdline []string) {
 	for _, distro := range debDistros {
 		meta := newDebMetadata(distro, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables)
 		pkgdir := stageDebianSource(*workdir, meta)
-		debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc")
+		debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc", "-d", "-Zxz")
 		debuild.Dir = pkgdir
 		build.MustRun(debuild)
-		changes := fmt.Sprintf("%s_%s_source.changes", meta.Name(), meta.VersionString())
-		changes = filepath.Join(*workdir, changes)
+		var (
+			basename = fmt.Sprintf("%s_%s", meta.Name(), meta.VersionString())
+			source   = filepath.Join(*workdir, basename+".tar.xz")
+			dsc      = filepath.Join(*workdir, basename+".dsc")
+			changes  = filepath.Join(*workdir, basename+"_source.changes")
+		)
 		if *signer != "" {
 			build.MustRunCommand("debsign", changes)
 		}
 		if *upload != "" {
-			build.MustRunCommand("dput", *upload, changes)
+			ppaUpload(*workdir, *upload, *sshUser, []string{source, dsc, changes})
 		}
 	}
 }
 
+func ppaUpload(workdir, ppa, sshUser string, files []string) {
+	p := strings.Split(ppa, "/")
+	if len(p) != 2 {
+		log.Fatal("-upload PPA name must contain single /")
+	}
+	if sshUser == "" {
+		sshUser = p[0]
+	}
+	incomingDir := fmt.Sprintf("~%s/ubuntu/%s", p[0], p[1])
+	// Create the SSH identity file if it doesn't exist.
+	var idfile string
+	if sshkey := getenvBase64("PPA_SSH_KEY"); len(sshkey) > 0 {
+		idfile = filepath.Join(workdir, "sshkey")
+		if _, err := os.Stat(idfile); os.IsNotExist(err) {
+			ioutil.WriteFile(idfile, sshkey, 0600)
+		}
+	}
+	// Upload
+	dest := sshUser + "@ppa.launchpad.net"
+	if err := build.UploadSFTP(idfile, dest, incomingDir, files); err != nil {
+		log.Fatal(err)
+	}
+}
+
+func getenvBase64(variable string) []byte {
+	dec, err := base64.StdEncoding.DecodeString(os.Getenv(variable))
+	if err != nil {
+		log.Fatal("invalid base64 " + variable)
+	}
+	return []byte(dec)
+}
+
 func makeWorkdir(wdflag string) string {
 	var err error
 	if wdflag != "" {
@@ -800,15 +830,10 @@ func doAndroidArchive(cmdline []string) {
 	os.Rename(archive, meta.Package+".aar")
 	if *signer != "" && *deploy != "" {
 		// Import the signing key into the local GPG instance
-		b64key := os.Getenv(*signer)
-		key, err := base64.StdEncoding.DecodeString(b64key)
-		if err != nil {
-			log.Fatalf("invalid base64 %s", *signer)
-		}
+		key := getenvBase64(*signer)
 		gpg := exec.Command("gpg", "--import")
 		gpg.Stdin = bytes.NewReader(key)
 		build.MustRun(gpg)
 		keyID, err := build.PGPKeyID(string(key))
 		if err != nil {
 			log.Fatal(err)


@@ -579,7 +579,7 @@ func (f *faucet) loop() {
 	go func() {
 		for head := range update {
 			// New chain head arrived, query the current stats and stream to clients
-			timestamp := time.Unix(head.Time.Int64(), 0)
+			timestamp := time.Unix(int64(head.Time), 0)
 			if time.Since(timestamp) > time.Hour {
 				log.Warn("Skipping faucet refresh, head too old", "number", head.Number, "hash", head.Hash(), "age", common.PrettyAge(timestamp))
 				continue

@@ -372,7 +372,7 @@ func copyDb(ctx *cli.Context) error {
 	chain, chainDb := utils.MakeChain(ctx, stack)
 	syncmode := *utils.GlobalTextMarshaler(ctx, utils.SyncModeFlag.Name).(*downloader.SyncMode)
-	dl := downloader.New(syncmode, chainDb, new(event.TypeMux), chain, nil, nil)
+	dl := downloader.New(syncmode, 0, chainDb, new(event.TypeMux), chain, nil, nil)
 	// Create a source peer to satisfy downloader requests from
 	db, err := ethdb.NewLDBDatabase(ctx.Args().First(), ctx.GlobalInt(utils.CacheFlag.Name), 256)


@@ -38,7 +38,7 @@ import (
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/metrics"
 	"github.com/ethereum/go-ethereum/node"
-	"gopkg.in/urfave/cli.v1"
+	cli "gopkg.in/urfave/cli.v1"
 )
 
 const (
@@ -121,6 +121,7 @@ var (
 		utils.DeveloperPeriodFlag,
 		utils.TestnetFlag,
 		utils.RinkebyFlag,
+		utils.GoerliFlag,
 		utils.VMEnableDebugFlag,
 		utils.NetworkIdFlag,
 		utils.ConstantinopleOverrideFlag,
@@ -149,6 +150,7 @@ var (
 		utils.WSAllowedOriginsFlag,
 		utils.IPCDisabledFlag,
 		utils.IPCPathFlag,
+		utils.RPCGlobalGasCap,
 	}
 
 	whisperFlags = []cli.Flag{
@@ -164,7 +166,7 @@ var (
 		utils.MetricsInfluxDBDatabaseFlag,
 		utils.MetricsInfluxDBUsernameFlag,
 		utils.MetricsInfluxDBPasswordFlag,
-		utils.MetricsInfluxDBHostTagFlag,
+		utils.MetricsInfluxDBTagsFlag,
 	}
 )


@@ -26,7 +26,7 @@ import (
 	"github.com/ethereum/go-ethereum/cmd/utils"
 	"github.com/ethereum/go-ethereum/internal/debug"
-	"gopkg.in/urfave/cli.v1"
+	cli "gopkg.in/urfave/cli.v1"
 )
 
 // AppHelpTemplate is the test template for the default, global app help topic.
@@ -74,6 +74,7 @@ var AppHelpFlagGroups = []flagGroup{
 		utils.NetworkIdFlag,
 		utils.TestnetFlag,
 		utils.RinkebyFlag,
+		utils.GoerliFlag,
 		utils.SyncModeFlag,
 		utils.GCModeFlag,
 		utils.EthStatsURLFlag,
@@ -152,6 +153,7 @@ var AppHelpFlagGroups = []flagGroup{
 		utils.RPCListenAddrFlag,
 		utils.RPCPortFlag,
 		utils.RPCApiFlag,
+		utils.RPCGlobalGasCap,
 		utils.WSEnabledFlag,
 		utils.WSListenAddrFlag,
 		utils.WSPortFlag,
@@ -229,7 +231,7 @@ var AppHelpFlagGroups = []flagGroup{
 		utils.MetricsInfluxDBDatabaseFlag,
 		utils.MetricsInfluxDBUsernameFlag,
 		utils.MetricsInfluxDBPasswordFlag,
-		utils.MetricsInfluxDBHostTagFlag,
+		utils.MetricsInfluxDBTagsFlag,
 		},
 	},
 	{


@@ -245,6 +245,7 @@ type parityChainSpec struct {
 		EIP1014Transition        hexutil.Uint64 `json:"eip1014Transition"`
 		EIP1052Transition        hexutil.Uint64 `json:"eip1052Transition"`
 		EIP1283Transition        hexutil.Uint64 `json:"eip1283Transition"`
+		EIP1283DisableTransition hexutil.Uint64 `json:"eip1283DisableTransition"`
 	} `json:"params"`
 
 	Genesis struct {
@@ -347,6 +348,11 @@ func newParityChainSpec(network string, genesis *core.Genesis, bootnodes []strin
 	if num := genesis.Config.ConstantinopleBlock; num != nil {
 		spec.setConstantinople(num)
 	}
+	// ConstantinopleFix (remove eip-1283)
+	if num := genesis.Config.PetersburgBlock; num != nil {
+		spec.setConstantinopleFix(num)
+	}
 	spec.Params.MaximumExtraDataSize = (hexutil.Uint64)(params.MaximumExtraDataSize)
 	spec.Params.MinGasLimit = (hexutil.Uint64)(params.MinGasLimit)
 	spec.Params.GasLimitBoundDivisor = (math2.HexOrDecimal64)(params.GasLimitBoundDivisor)
@@ -441,6 +447,10 @@ func (spec *parityChainSpec) setConstantinople(num *big.Int) {
 	spec.Params.EIP1283Transition = n
 }
 
+func (spec *parityChainSpec) setConstantinopleFix(num *big.Int) {
+	spec.Params.EIP1283DisableTransition = hexutil.Uint64(num.Uint64())
+}
+
 // pyEthereumGenesisSpec represents the genesis specification format used by the
 // Python Ethereum implementation.
 type pyEthereumGenesisSpec struct {


@@ -632,6 +632,7 @@ func deployDashboard(client *sshClient, network string, conf *config, config *da
 		"Spurious":          conf.Genesis.Config.EIP155Block,
 		"Byzantium":         conf.Genesis.Config.ByzantiumBlock,
 		"Constantinople":    conf.Genesis.Config.ConstantinopleBlock,
+		"ConstantinopleFix": conf.Genesis.Config.PetersburgBlock,
 	})
 	files[filepath.Join(workdir, "index.html")] = indexfile.Bytes()


@@ -222,10 +222,18 @@ func (w *wizard) manageGenesis() {
 		fmt.Println()
 		fmt.Printf("Which block should Constantinople come into effect? (default = %v)\n", w.conf.Genesis.Config.ConstantinopleBlock)
 		w.conf.Genesis.Config.ConstantinopleBlock = w.readDefaultBigInt(w.conf.Genesis.Config.ConstantinopleBlock)
+		if w.conf.Genesis.Config.PetersburgBlock == nil {
+			w.conf.Genesis.Config.PetersburgBlock = w.conf.Genesis.Config.ConstantinopleBlock
+		}
+		fmt.Println()
+		fmt.Printf("Which block should Constantinople-Fix (remove EIP-1283) come into effect? (default = %v)\n", w.conf.Genesis.Config.PetersburgBlock)
+		w.conf.Genesis.Config.PetersburgBlock = w.readDefaultBigInt(w.conf.Genesis.Config.PetersburgBlock)
 
 		out, _ := json.MarshalIndent(w.conf.Genesis.Config, "", "  ")
 		fmt.Printf("Chain configuration updated:\n\n%s\n", out)
+		w.conf.flush()
 
 	case "2":
 		// Save whatever genesis configuration we currently have
 		fmt.Println()


@@ -17,61 +17,8 @@
 package main
 
 var SwarmBootnodes = []string{
-	// Foundation Swarm Gateway Cluster
-	"enode://e5c6f9215c919a5450a7b8c14c22535607b69f2c8e1e7f6f430cb25d7a2c27cd1df4c4f18ad7c1d7e5162e271ffcd3f20b1a1467fb6e790e7d727f3b2193de97@52.232.7.187:30399",
-	"enode://9b2fe07e69ccc7db5fef15793dab7d7d2e697ed92132d6e9548218e68a34613a8671ad03a6658d862b468ed693cae8a0f8f8d37274e4a657ffb59ca84676e45b@52.232.7.187:30400",
-	"enode://76c1059162c93ef9df0f01097c824d17c492634df211ef4c806935b349082233b63b90c23970254b3b7138d630400f7cf9b71e80355a446a8b733296cb04169a@52.232.7.187:30401",
-	"enode://ce46bbe2a8263145d65252d52da06e000ad350ed09c876a71ea9544efa42f63c1e1b6cc56307373aaad8f9dd069c90d0ed2dd1530106200e16f4ca681dd8ae2d@52.232.7.187:30402",
-	"enode://f431e0d6008a6c35c6e670373d828390c8323e53da8158e7bfc43cf07e632cc9e472188be8df01decadea2d4a068f1428caba769b632554a8fb0607bc296988f@52.232.7.187:30403",
-	"enode://174720abfff83d7392f121108ae50ea54e04889afe020df883655c0f6cb95414db945a0228d8982fe000d86fc9f4b7669161adc89cd7cd56f78f01489ab2b99b@52.232.7.187:30404",
-	"enode://2ae89be4be61a689b6f9ecee4360a59e185e010ab750f14b63b4ae43d4180e872e18e3437d4386ce44875dc7cc6eb761acba06412fe3178f3dac1dab3b65703e@52.232.7.187:30405",
-	"enode://24abebe1c0e6d75d6052ce3219a87be8573fd6397b4cb51f0773b83abba9b3d872bfb273cdc07389715b87adfac02f5235f5241442c5089802cbd8d42e310fce@52.232.7.187:30406",
-	"enode://d08dfa46bfbbdbcaafbb6e34abee4786610f6c91e0b76d7881f0334ac10dda41d8c1f2b6eedffb4493293c335c0ad46776443b2208d1fbbb9e1a90b25ee4eef2@52.232.7.187:30407",
-	"enode://8d95eb0f837d27581a43668ed3b8783d69dc4e84aa3edd7a0897e026155c8f59c8702fdc0375ee7bac15757c9c78e1315d9b73e4ce59c936db52ea4ae2f501c7@52.232.7.187:30408",
-	"enode://a5967cc804aebd422baaaba9f06f27c9e695ccab335b61088130f8cbe64e3cdf78793868c7051dfc06eecfe844fad54bc7f6dfaed9db3c7ecef279cb829c25fb@52.232.7.187:30409",
-	"enode://5f00134d81a8f2ebcc46f8766f627f492893eda48138f811b7de2168308171968f01710bca6da05764e74f14bae41652f554e6321f1aed85fa3461e89d075dbf@52.232.7.187:30410",
-	"enode://b2142b79b01a5aa66a5e23cc35e78219a8e97bc2412a6698cee24ae02e87078b725d71730711bd62e25ff1aa8658c6633778af8ac14c63814a337c3dd0ebda9f@52.232.7.187:30411",
-	"enode://1ffa7651094867d6486ce3ef46d27a052c2cb968b618346c6df7040322c7efc3337547ba85d4cbba32e8b31c42c867202554735c06d4c664b9afada2ed0c4b3c@52.232.7.187:30412",
-	"enode://129e0c3d5f5df12273754f6f703d2424409fa4baa599e0b758c55600169313887855e75b082028d2302ec034b303898cd697cc7ae8256ba924ce927510da2c8d@52.232.7.187:30413",
-	"enode://419e2dc0d2f5b022cf16b0e28842658284909fa027a0fbbb5e2b755e7f846ea02a8f0b66a7534981edf6a7bcf8a14855344c6668e2cd4476ccd35a11537c9144@52.232.7.187:30414",
-	"enode://23d55ad900583231b91f2f62e3f72eb498b342afd58b682be3af052eed62b5651094471065981de33d8786f075f05e3cca499503b0ac8ae84b2a06e99f5b0723@52.232.7.187:30415",
-	"enode://bc56e4158c00e9f616d7ea533def20a89bef959df4e62a768ff238ff4e1e9223f57ecff969941c20921bad98749baae311c0fbebce53bf7bbb9d3dc903640990@52.232.7.187:30416",
-	"enode://433ce15199c409875e7e72fffd69fdafe746f17b20f0d5555281722a65fde6c80328fab600d37d8624509adc072c445ce0dad4a1c01cff6acf3132c11d429d4d@52.232.7.187:30417",
-	"enode://632ee95b8f0eac51ef89ceb29313fef3a60050181d66a6b125583b1a225a7694b252edc016efb58aa3b251da756cb73280842a022c658ed405223b2f58626343@52.232.7.187:30418",
-	"enode://4a0f9bcff7a4b9ee453fb298d0fb222592efe121512e30cd72fef631beb8c6a15153a1456eb073ee18551c0e003c569651a101892dc4124e90b933733a498bb5@52.232.7.187:30419",
-	"enode://f0d80fbc72d16df30e19aac3051eb56a7aff0c8367686702e01ea132d8b0b3ee00cadd6a859d2cca98ec68d3d574f8a8a87dba2347ec1e2818dc84bc3fa34fae@52.232.7.187:30420",
-	"enode://a199146906e4f9f2b94b195a8308d9a59a3564b92efaab898a4243fe4c2ad918b7a8e4853d9d901d94fad878270a2669d644591299c3d43de1b298c00b92b4a7@52.232.7.187:30421",
-	"enode://052036ea8736b37adbfb684d90ce43e11b3591b51f31489d7c726b03618dea4f73b1e659deb928e6bf40564edcdcf08351643f42db3d4ca1c2b5db95dad59e94@52.232.7.187:30422",
-	"enode://460e2b8c6da8f12fac96c836e7d108f4b7ec55a1c64631bb8992339e117e1c28328fee83af863196e20af1487a655d13e5ceba90e980e92502d5bac5834c1f71@52.232.7.187:30423",
-	"enode://6d2cdd13741b2e72e9031e1b93c6d9a4e68de2844aa4e939f6a8a8498a7c1d7e2ee4c64217e92a6df08c9a32c6764d173552810ef1bd2ecb356532d389dd2136@52.232.7.187:30424",
-	"enode://62105fc25ce2cd5b299647f47eaa9211502dc76f0e9f461df915782df7242ac3223e3db04356ae6ed2977ccac20f0b16864406e9ca514a40a004cb6a5d0402aa@52.232.7.187:30425",
-	"enode://e0e388fc520fd493c33f0ce16685e6f98fb6aec28f2edc14ee6b179594ee519a896425b0025bb6f0e182dd3e468443f19c70885fbc66560d000093a668a86aa8@52.232.7.187:30426",
-	"enode://63f3353a72521ea10022127a4fe6b4acbef197c3fe668fd9f4805542d8a6fcf79f6335fbab62d180a35e19b739483e740858b113fdd7c13a26ad7b4e318a5aef@52.232.7.187:30427",
-	"enode://33a42b927085678d4aefd4e70b861cfca6ef5f6c143696c4f755973fd29e64c9e658cad57a66a687a7a156da1e3688b1fbdd17bececff2ee009fff038fa5666b@52.232.7.187:30428",
-	"enode://259ab5ab5c1daee3eab7e3819ab3177b82d25c29e6c2444fdd3f956e356afae79a72840ccf2d0665fe82c81ebc3b3734da1178ac9fd5d62c67e674b69f86b6be@52.232.7.187:30429",
-	"enode://558bccad7445ce3fd8db116ed6ab4aed1324fdbdac2348417340c1764dc46d46bffe0728e5b7d5c36f12e794c289f18f57f08f085d2c65c9910a5c7a65b6a66a@52.232.7.187:30430",
-	"enode://abe60937a0657ffded718e3f84a32987286983be257bdd6004775c4b525747c2b598f4fac49c8de324de5ce75b22673fa541a7ce2d555fb7f8ca325744ae3577@52.232.7.187:30431",
-	"enode://bce6f0aaa5b230742680084df71d4f026b3eff7f564265599216a1b06b765303fdc9325de30ffd5dfdaf302ce4b14322891d2faea50ce2ca298d7409f5858339@52.232.7.187:30432",
-	"enode://21b957c4e03277d42be6660730ec1b93f540764f26c6abdb54d006611139c7081248486206dfbf64fcaffd62589e9c6b8ea77a5297e4b21a605f1bcf49483ed0@52.232.7.187:30433",
-	"enode://ff104e30e64f24c3d7328acee8b13354e5551bc8d60bb25ecbd9632d955c7e34bb2d969482d173355baad91c8282f8b592624eb3929151090da3b4448d4d58fb@52.232.7.187:30434",
-	"enode://c76e2b5f81a521bceaec1518926a21380a345df9cf463461562c6845795512497fb67679e155fc96a74350f8b78de8f4c135dd52b106dbbb9795452021d09ea5@52.232.7.187:30435",
-	"enode://3288fd860105164f3e9b69934c4eb18f7146cfab31b5a671f994e21a36e9287766e5f9f075aefbc404538c77f7c2eb2a4495020a7633a1c3970d94e9fa770aeb@52.232.7.187:30436",
-	"enode://6cea859c7396d46b20cfcaa80f9a11cd112f8684f2f782f7b4c0e1e0af9212113429522075101923b9b957603e6c32095a6a07b5e5e35183c521952ee108dfaf@52.232.7.187:30437",
-	"enode://f628ec56e4ca8317cc24cc4ac9b27b95edcce7b96e1c7f3b53e30de4a8580fe44f2f0694a513bdb0a431acaf2824074d6ace4690247bbc34c14f426af8c056ea@52.232.7.187:30438",
-	"enode://055ec8b26fc105c4f97970a1cce9773a5e34c03f511b839db742198a1c571e292c54aa799e9afb991cc8a560529b8cdf3e0c344bc6c282aff2f68eec59361ddf@52.232.7.187:30439",
-	"enode://48cb0d430c328974226aa33a931d8446cd5a8d40f3ead8f4ce7ad60faa1278192eb6d58bed91258d63e81f255fc107eec2425ce2ae8b22350dd556076e160610@52.232.7.187:30440",
-	"enode://3fadb7af7f770d5ffc6b073b8d42834bebb18ce1fe8a4fe270d2b799e7051327093960dc61d9a18870db288f7746a0e6ea2a013cd6ab0e5f97ca08199473aace@52.232.7.187:30441",
-	"enode://a5d7168024c9992769cf380ffa559a64b4f39a29d468f579559863814eb0ae0ed689ac0871a3a2b4c78b03297485ec322d578281131ef5d5c09a4beb6200a97a@52.232.7.187:30442",
-	"enode://9c57744c5b2c2d71abcbe80512652f9234d4ab041b768a2a886ab390fe6f184860f40e113290698652d7e20a8ac74d27ac8671db23eb475b6c5e6253e4693bf8@52.232.7.187:30443",
-	"enode://daca9ff0c3176045a0e0ed228dee00ec86bc0939b135dc6b1caa23745d20fd0332e1ee74ad04020e89df56c7146d831a91b89d15ca3df05ba7618769fefab376@52.232.7.187:30444",
-	"enode://a3f6af59428cb4b9acb198db15ef5554fa43c2b0c18e468a269722d64a27218963a2975eaf82750b6262e42192b5e3669ea51337b4cda62b33987981bc5e0c1a@52.232.7.187:30445",
-	"enode://fe571422fa4651c3354c85dac61911a6a6520dd3c0332967a49d4133ca30e16a8a4946fa73ca2cb5de77917ea701a905e1c3015b2f4defcd53132b61cc84127a@52.232.7.187:30446",
-	// Mainframe
-	"enode://ee9a5a571ea6c8a59f9a8bb2c569c865e922b41c91d09b942e8c1d4dd2e1725bd2c26149da14de1f6321a2c6fdf1e07c503c3e093fb61696daebf74d6acd916b@54.186.219.160:30399",
-	"enode://a03f0562ecb8a992ad5242345535e73483cdc18ab934d36bf24b567d43447c2cea68f89f1d51d504dd13acc30f24ebce5a150bea2ccb1b722122ce4271dc199d@52.67.248.147:30399",
-	"enode://e2cbf9eafd85903d3b1c56743035284320695e0072bc8d7396e0542aa5e1c321b236f67eab66b79c2f15d4447fa4bbe74dd67d0467da23e7eb829f60ec8a812b@13.58.169.1:30399",
-	"enode://8b8c6bda6047f1cad9fab2db4d3d02b7aa26279902c32879f7bcd4a7d189fee77fdc36ee151ce6b84279b4792e72578fd529d2274d014132465758fbfee51cee@13.209.13.15:30399",
-	"enode://63f6a8818927e429585287cf2ca0cb9b11fa990b7b9b331c2962cdc6f21807a2473b26e8256225c26caff70d7218e59586d704d49061452c6852e382c885d03c@35.154.106.174:30399",
-	"enode://ed4bd3b794ed73f18e6dcc70c6624dfec63b5654f6ab54e8f40b16eff8afbd342d4230e099ddea40e84423f81b2d2ea79799dc345257b1fec6f6c422c9d008f7@52.213.20.99:30399",
+	// EF Swarm Bootnode - AWS - eu-central-1
+	"enode://4c113504601930bf2000c29bcd98d1716b6167749f58bad703bae338332fe93cc9d9204f08afb44100dc7bea479205f5d162df579f9a8f76f8b402d339709023@3.122.203.99:30301",
+	// EF Swarm Bootnode - AWS - us-west-2
+	"enode://89f2ede3371bff1ad9f2088f2012984e280287a4e2b68007c2a6ad994909c51886b4a8e9e2ecc97f9910aca538398e0a5804b0ee80a187fde1ba4f32626322ba@52.35.212.179:30301",
 }


@@ -79,8 +79,10 @@ const (
 	SWARM_ENV_STORE_PATH           = "SWARM_STORE_PATH"
 	SWARM_ENV_STORE_CAPACITY       = "SWARM_STORE_CAPACITY"
 	SWARM_ENV_STORE_CACHE_CAPACITY = "SWARM_STORE_CACHE_CAPACITY"
+	SWARM_ENV_BOOTNODE_MODE        = "SWARM_BOOTNODE_MODE"
 	SWARM_ACCESS_PASSWORD          = "SWARM_ACCESS_PASSWORD"
 	SWARM_AUTO_DEFAULTPATH         = "SWARM_AUTO_DEFAULTPATH"
+	SWARM_GLOBALSTORE_API          = "SWARM_GLOBALSTORE_API"
 	GETH_ENV_DATADIR               = "GETH_DATADIR"
 )
@@ -164,10 +166,9 @@ func configFileOverride(config *bzzapi.Config, ctx *cli.Context) (*bzzapi.Config
 	return config, err
 }
 
-//override the current config with whatever is provided through the command line
-//most values are not allowed a zero value (empty string), if not otherwise noted
+// cmdLineOverride overrides the current config with whatever is provided through the command line
+// most values are not allowed a zero value (empty string), if not otherwise noted
 func cmdLineOverride(currentConfig *bzzapi.Config, ctx *cli.Context) *bzzapi.Config {
 	if keyid := ctx.GlobalString(SwarmAccountFlag.Name); keyid != "" {
 		currentConfig.BzzAccount = keyid
 	}
@@ -258,14 +259,21 @@ func cmdLineOverride(currentConfig *bzzapi.Config, ctx *cli.Context) *bzzapi.Con
 		currentConfig.LocalStoreParams.CacheCapacity = storeCacheCapacity
 	}
 
+	if ctx.GlobalIsSet(SwarmBootnodeModeFlag.Name) {
+		currentConfig.BootnodeMode = ctx.GlobalBool(SwarmBootnodeModeFlag.Name)
+	}
+	if ctx.GlobalIsSet(SwarmGlobalStoreAPIFlag.Name) {
+		currentConfig.GlobalStoreAPI = ctx.GlobalString(SwarmGlobalStoreAPIFlag.Name)
+	}
 	return currentConfig
 }
 
-//override the current config with whatver is provided in environment variables
-//most values are not allowed a zero value (empty string), if not otherwise noted
+// envVarsOverride overrides the current config with whatver is provided in environment variables
+// most values are not allowed a zero value (empty string), if not otherwise noted
 func envVarsOverride(currentConfig *bzzapi.Config) (config *bzzapi.Config) {
 	if keyid := os.Getenv(SWARM_ENV_ACCOUNT); keyid != "" {
 		currentConfig.BzzAccount = keyid
 	}
@@ -364,6 +372,18 @@ func envVarsOverride(currentConfig *bzzapi.Config) (config *bzzapi.Config) {
 		currentConfig.Cors = cors
 	}
 
+	if bm := os.Getenv(SWARM_ENV_BOOTNODE_MODE); bm != "" {
+		bootnodeMode, err := strconv.ParseBool(bm)
+		if err != nil {
+			utils.Fatalf("invalid environment variable %s: %v", SWARM_ENV_BOOTNODE_MODE, err)
+		}
+		currentConfig.BootnodeMode = bootnodeMode
+	}
+	if api := os.Getenv(SWARM_GLOBALSTORE_API); api != "" {
+		currentConfig.GlobalStoreAPI = api
+	}
 	return currentConfig
 }

cmd/swarm/explore.go (new file)

@ -0,0 +1,59 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Command bzzhash computes a swarm tree hash.
package main
import (
"context"
"fmt"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/swarm/storage"
"gopkg.in/urfave/cli.v1"
)
var hashesCommand = cli.Command{
Action: hashes,
CustomHelpTemplate: helpTemplate,
Name: "hashes",
Usage: "print all hashes of a file to STDOUT",
ArgsUsage: "<file>",
Description: "Prints all hashes of a file to STDOUT",
}
func hashes(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 1 {
utils.Fatalf("Usage: swarm hashes <file name>")
}
f, err := os.Open(args[0])
if err != nil {
utils.Fatalf("Error opening file " + args[0])
}
defer f.Close()
fileStore := storage.NewFileStore(&storage.FakeChunkStore{}, storage.NewFileStoreParams())
refs, err := fileStore.GetAllReferences(context.TODO(), f, false)
if err != nil {
utils.Fatalf("%v\n", err)
} else {
for _, r := range refs {
fmt.Println(r.String())
}
}
}


@@ -156,6 +156,10 @@ var (
 		Name:  "compressed",
 		Usage: "Prints encryption keys in compressed form",
 	}
+	SwarmBootnodeModeFlag = cli.BoolFlag{
+		Name:  "bootnode-mode",
+		Usage: "Run Swarm in Bootnode mode",
+	}
 	SwarmFeedNameFlag = cli.StringFlag{
 		Name:  "name",
 		Usage: "User-defined name for the new feed, limited to 32 characters. If combined with topic, it will refer to a subtopic with this name",
@@ -172,4 +176,9 @@ var (
 		Name:  "user",
 		Usage: "Indicates the user who updates the feed",
 	}
+	SwarmGlobalStoreAPIFlag = cli.StringFlag{
+		Name:   "globalstore-api",
+		Usage:  "URL of the Global Store API provider (only for testing)",
+		EnvVar: SWARM_GLOBALSTORE_API,
+	}
 )


@ -0,0 +1,100 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"net"
"net/http"
"os"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm/storage/mock"
"github.com/ethereum/go-ethereum/swarm/storage/mock/db"
"github.com/ethereum/go-ethereum/swarm/storage/mock/mem"
cli "gopkg.in/urfave/cli.v1"
)
// startHTTP starts a global store with HTTP RPC server.
// It is used for "http" cli command.
func startHTTP(ctx *cli.Context) (err error) {
server, cleanup, err := newServer(ctx)
if err != nil {
return err
}
defer cleanup()
listener, err := net.Listen("tcp", ctx.String("addr"))
if err != nil {
return err
}
log.Info("http", "address", listener.Addr().String())
return http.Serve(listener, server)
}
// startWS starts a global store with WebSocket RPC server.
// It is used for "websocket" cli command.
func startWS(ctx *cli.Context) (err error) {
server, cleanup, err := newServer(ctx)
if err != nil {
return err
}
defer cleanup()
listener, err := net.Listen("tcp", ctx.String("addr"))
if err != nil {
return err
}
origins := ctx.StringSlice("origins")
log.Info("websocket", "address", listener.Addr().String(), "origins", origins)
return http.Serve(listener, server.WebsocketHandler(origins))
}
// newServer creates a global store and returns its RPC server.
// Returned cleanup function should be called only if err is nil.
func newServer(ctx *cli.Context) (server *rpc.Server, cleanup func(), err error) {
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(ctx.Int("verbosity")), log.StreamHandler(os.Stdout, log.TerminalFormat(false))))
cleanup = func() {}
var globalStore mock.GlobalStorer
dir := ctx.String("dir")
if dir != "" {
dbStore, err := db.NewGlobalStore(dir)
if err != nil {
return nil, nil, err
}
cleanup = func() {
dbStore.Close()
}
globalStore = dbStore
log.Info("database global store", "dir", dir)
} else {
globalStore = mem.NewGlobalStore()
log.Info("in-memory global store")
}
server = rpc.NewServer()
if err := server.RegisterName("mockStore", globalStore); err != nil {
cleanup()
return nil, nil, err
}
return server, cleanup, nil
}


@ -0,0 +1,191 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"context"
"io/ioutil"
"net"
"net/http"
"os"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rpc"
mockRPC "github.com/ethereum/go-ethereum/swarm/storage/mock/rpc"
)
// TestHTTP_InMemory tests in-memory global store that exposes
// HTTP server.
func TestHTTP_InMemory(t *testing.T) {
testHTTP(t, true)
}
// TestHTTP_Database tests global store with persisted database
// that exposes HTTP server.
func TestHTTP_Database(t *testing.T) {
dir, err := ioutil.TempDir("", "swarm-global-store-")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
// create a fresh global store
testHTTP(t, true, "--dir", dir)
// check that data saved by the previous global store instance is still accessible
testHTTP(t, false, "--dir", dir)
}
// testHTTP starts the global store binary with an HTTP server
// and validates that it can store and retrieve data.
// If put is false, no data will be stored, only retrieved,
// giving the possibility to check if data is present in the
// storage directory.
func testHTTP(t *testing.T, put bool, args ...string) {
addr := findFreeTCPAddress(t)
testCmd := runGlobalStore(t, append([]string{"http", "--addr", addr}, args...)...)
defer testCmd.Interrupt()
client, err := rpc.DialHTTP("http://" + addr)
if err != nil {
t.Fatal(err)
}
// wait until the global store process has started, as
// rpc.DialHTTP does not actually connect
for i := 0; i < 1000; i++ {
_, err = http.DefaultClient.Get("http://" + addr)
if err == nil {
break
}
time.Sleep(10 * time.Millisecond)
}
if err != nil {
t.Fatal(err)
}
store := mockRPC.NewGlobalStore(client)
defer store.Close()
node := store.NewNodeStore(common.HexToAddress("123abc"))
wantKey := "key"
wantValue := "value"
if put {
err = node.Put([]byte(wantKey), []byte(wantValue))
if err != nil {
t.Fatal(err)
}
}
gotValue, err := node.Get([]byte(wantKey))
if err != nil {
t.Fatal(err)
}
if string(gotValue) != wantValue {
t.Errorf("got value %s for key %s, want %s", string(gotValue), wantKey, wantValue)
}
}
// TestWebsocket_InMemory tests in-memory global store that exposes
// WebSocket server.
func TestWebsocket_InMemory(t *testing.T) {
testWebsocket(t, true)
}
// TestWebsocket_Database tests global store with persisted database
// that exposes a WebSocket server.
func TestWebsocket_Database(t *testing.T) {
dir, err := ioutil.TempDir("", "swarm-global-store-")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
// create a fresh global store
testWebsocket(t, true, "--dir", dir)
// check that data saved by the previous global store instance is still accessible
testWebsocket(t, false, "--dir", dir)
}
// testWebsocket starts global store binary with WebSocket server
// and validates that it can store and retrieve data.
// If put is false, no data will be stored, only retrieved,
// giving the possibility to check if data is present in the
// storage directory.
func testWebsocket(t *testing.T, put bool, args ...string) {
addr := findFreeTCPAddress(t)
testCmd := runGlobalStore(t, append([]string{"ws", "--addr", addr}, args...)...)
defer testCmd.Interrupt()
var client *rpc.Client
var err error
// wait until global store process is started
for i := 0; i < 1000; i++ {
client, err = rpc.DialWebsocket(context.Background(), "ws://"+addr, "")
if err == nil {
break
}
time.Sleep(10 * time.Millisecond)
}
if err != nil {
t.Fatal(err)
}
store := mockRPC.NewGlobalStore(client)
defer store.Close()
node := store.NewNodeStore(common.HexToAddress("123abc"))
wantKey := "key"
wantValue := "value"
if put {
err = node.Put([]byte(wantKey), []byte(wantValue))
if err != nil {
t.Fatal(err)
}
}
gotValue, err := node.Get([]byte(wantKey))
if err != nil {
t.Fatal(err)
}
if string(gotValue) != wantValue {
t.Errorf("got value %s for key %s, want %s", string(gotValue), wantKey, wantValue)
}
}
// findFreeTCPAddress returns a local address (IP:Port) on which
// the global store can listen.
func findFreeTCPAddress(t *testing.T) (addr string) {
t.Helper()
listener, err := net.Listen("tcp", "")
if err != nil {
t.Fatal(err)
}
defer listener.Close()
return listener.Addr().String()
}


@ -0,0 +1,104 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/log"
cli "gopkg.in/urfave/cli.v1"
)
var gitCommit string // Git SHA1 commit hash of the release (set via linker flags)
func main() {
err := newApp().Run(os.Args)
if err != nil {
log.Error(err.Error())
os.Exit(1)
}
}
// newApp constructs a new instance of Swarm Global Store.
// Method Run is called on it in the main function and in tests.
func newApp() (app *cli.App) {
app = utils.NewApp(gitCommit, "Swarm Global Store")
app.Name = "global-store"
// app flags (for all commands)
app.Flags = []cli.Flag{
cli.IntFlag{
Name: "verbosity",
Value: 3,
Usage: "verbosity level",
},
}
app.Commands = []cli.Command{
{
Name: "http",
Aliases: []string{"h"},
Usage: "start swarm global store with http server",
Action: startHTTP,
// Flags only for the "http" command.
// Allow app flags to be specified after the
// command argument.
Flags: append(app.Flags,
cli.StringFlag{
Name: "dir",
Value: "",
Usage: "data directory",
},
cli.StringFlag{
Name: "addr",
Value: "0.0.0.0:3033",
Usage: "address to listen for http connection",
},
),
},
{
Name: "websocket",
Aliases: []string{"ws"},
Usage: "start swarm global store with websocket server",
Action: startWS,
// Flags only for the "websocket" command.
// Allow app flags to be specified after the
// command argument.
Flags: append(app.Flags,
cli.StringFlag{
Name: "dir",
Value: "",
Usage: "data directory",
},
cli.StringFlag{
Name: "addr",
Value: "0.0.0.0:3033",
Usage: "address to listen for websocket connection",
},
cli.StringSliceFlag{
Name: "origins",
Value: &cli.StringSlice{"*"},
Usage: "websocket origins",
},
),
},
}
return app
}


@ -0,0 +1,49 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"os"
"testing"
"github.com/docker/docker/pkg/reexec"
"github.com/ethereum/go-ethereum/internal/cmdtest"
)
func init() {
reexec.Register("swarm-global-store", func() {
if err := newApp().Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
os.Exit(0)
})
}
func runGlobalStore(t *testing.T, args ...string) *cmdtest.TestCmd {
tt := cmdtest.NewTestCmd(t, nil)
tt.Run("swarm-global-store", args...)
return tt
}
func TestMain(m *testing.M) {
if reexec.Init() {
return
}
os.Exit(m.Run())
}


@ -39,13 +39,16 @@ import (
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/node"
 	"github.com/ethereum/go-ethereum/p2p/enode"
+	"github.com/ethereum/go-ethereum/rpc"
 	"github.com/ethereum/go-ethereum/swarm"
 	bzzapi "github.com/ethereum/go-ethereum/swarm/api"
 	swarmmetrics "github.com/ethereum/go-ethereum/swarm/metrics"
+	"github.com/ethereum/go-ethereum/swarm/storage/mock"
+	mockrpc "github.com/ethereum/go-ethereum/swarm/storage/mock/rpc"
 	"github.com/ethereum/go-ethereum/swarm/tracing"
 	sv "github.com/ethereum/go-ethereum/swarm/version"
-	"gopkg.in/urfave/cli.v1"
+	cli "gopkg.in/urfave/cli.v1"
 )

 const clientIdentifier = "swarm"
@ -66,9 +69,10 @@ OPTIONS:
 {{end}}{{end}}
 `

-var (
-	gitCommit string // Git SHA1 commit hash of the release (set via linker flags)
-)
+// Git SHA1 commit hash of the release (set via linker flags)
+// this variable will be assigned if corresponding parameter is passed with install, but not with test
+// e.g.: go install -ldflags "-X main.gitCommit=ed1312d01b19e04ef578946226e5d8069d5dfd5a" ./cmd/swarm
+var gitCommit string

 //declare a few constant error messages, useful for later error check comparisons in test
 var (
@ -89,6 +93,7 @@ var defaultNodeConfig = node.DefaultConfig
 // This init function sets defaults so cmd/swarm can run alongside geth.
 func init() {
+	sv.GitCommit = gitCommit
 	defaultNodeConfig.Name = clientIdentifier
 	defaultNodeConfig.Version = sv.VersionWithCommit(gitCommit)
 	defaultNodeConfig.P2P.ListenAddr = ":30399"
@ -140,6 +145,8 @@ func init() {
 		dbCommand,
 		// See config.go
 		DumpConfigCommand,
+		// hashesCommand
+		hashesCommand,
 	}

 	// append a hidden help subcommand to all commands that have subcommands
@ -154,7 +161,6 @@ func init() {
 		utils.BootnodesFlag,
 		utils.KeyStoreDirFlag,
 		utils.ListenPortFlag,
-		utils.NoDiscoverFlag,
 		utils.DiscoveryV5Flag,
 		utils.NetrestrictFlag,
 		utils.NodeKeyFileFlag,
@ -187,10 +193,13 @@ func init() {
 		SwarmUploadDefaultPath,
 		SwarmUpFromStdinFlag,
 		SwarmUploadMimeType,
+		// bootnode mode
+		SwarmBootnodeModeFlag,
 		// storage flags
 		SwarmStorePath,
 		SwarmStoreCapacity,
 		SwarmStoreCacheCapacity,
+		SwarmGlobalStoreAPIFlag,
 	}
 	rpcFlags := []cli.Flag{
 		utils.WSEnabledFlag,
@ -227,12 +236,17 @@ func main() {
 func keys(ctx *cli.Context) error {
 	privateKey := getPrivKey(ctx)
-	pub := hex.EncodeToString(crypto.FromECDSAPub(&privateKey.PublicKey))
+	pubkey := crypto.FromECDSAPub(&privateKey.PublicKey)
+	pubkeyhex := hex.EncodeToString(pubkey)
 	pubCompressed := hex.EncodeToString(crypto.CompressPubkey(&privateKey.PublicKey))
+	bzzkey := crypto.Keccak256Hash(pubkey).Hex()
+
 	if !ctx.Bool(SwarmCompressedFlag.Name) {
-		fmt.Println(fmt.Sprintf("publicKey=%s", pub))
+		fmt.Println(fmt.Sprintf("bzzkey=%s", bzzkey[2:]))
+		fmt.Println(fmt.Sprintf("publicKey=%s", pubkeyhex))
 	}
 	fmt.Println(fmt.Sprintf("publicKeyCompressed=%s", pubCompressed))

 	return nil
 }
@ -272,6 +286,10 @@ func bzzd(ctx *cli.Context) error {
 	setSwarmBootstrapNodes(ctx, &cfg)
 	//setup the ethereum node
 	utils.SetNodeConfig(ctx, &cfg)
+
+	//always disable discovery from p2p package - swarm discovery is done with the `hive` protocol
+	cfg.P2P.NoDiscovery = true
+
 	stack, err := node.New(&cfg)
 	if err != nil {
 		utils.Fatalf("can't create node: %v", err)
@ -294,6 +312,15 @@ func bzzd(ctx *cli.Context) error {
 		stack.Stop()
 	}()

+	// add swarm bootnodes, because swarm doesn't use p2p package's discovery discv5
+	go func() {
+		s := stack.Server()
+		for _, n := range cfg.P2P.BootstrapNodes {
+			s.AddPeer(n)
+		}
+	}()
+
 	stack.Wait()
 	return nil
 }
@ -301,8 +328,18 @@ func bzzd(ctx *cli.Context) error {
 func registerBzzService(bzzconfig *bzzapi.Config, stack *node.Node) {
 	//define the swarm service boot function
 	boot := func(_ *node.ServiceContext) (node.Service, error) {
-		// In production, mockStore must be always nil.
-		return swarm.NewSwarm(bzzconfig, nil)
+		var nodeStore *mock.NodeStore
+		if bzzconfig.GlobalStoreAPI != "" {
+			// connect to global store
+			client, err := rpc.Dial(bzzconfig.GlobalStoreAPI)
+			if err != nil {
+				return nil, fmt.Errorf("global store: %v", err)
+			}
+			globalStore := mockrpc.NewGlobalStore(client)
+			// create a node store for this swarm key on global store
+			nodeStore = globalStore.NewNodeStore(common.HexToAddress(bzzconfig.BzzKey))
+		}
+		return swarm.NewSwarm(bzzconfig, nodeStore)
 	}
 	//register within the ethereum node
 	if err := stack.Register(boot); err != nil {
@ -428,5 +465,5 @@ func setSwarmBootstrapNodes(ctx *cli.Context, cfg *node.Config) {
 		}
 		cfg.P2P.BootstrapNodes = append(cfg.P2P.BootstrapNodes, node)
 	}
+	log.Debug("added default swarm bootnodes", "length", len(cfg.P2P.BootstrapNodes))
 }


@ -254,7 +254,6 @@ func existingTestNode(t *testing.T, dir string, bzzaccount string) *testNode {
 	node.Cmd = runSwarm(t,
 		"--port", p2pPort,
 		"--nat", "extip:127.0.0.1",
-		"--nodiscover",
 		"--datadir", dir,
 		"--ipcpath", conf.IPCPath,
 		"--ens-api", "",
@ -330,7 +329,6 @@ func newTestNode(t *testing.T, dir string) *testNode {
 	node.Cmd = runSwarm(t,
 		"--port", p2pPort,
 		"--nat", "extip:127.0.0.1",
-		"--nodiscover",
 		"--datadir", dir,
 		"--ipcpath", conf.IPCPath,
 		"--ens-api", "",


@ -2,13 +2,10 @@ package main

 import (
 	"bytes"
-	"context"
 	"crypto/md5"
 	"fmt"
 	"io"
 	"io/ioutil"
-	"net/http"
-	"net/http/httptrace"
 	"os"
 	"os/exec"
 	"strings"
@ -19,12 +16,8 @@ import (
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/metrics"
-	"github.com/ethereum/go-ethereum/swarm/api/client"
-	"github.com/ethereum/go-ethereum/swarm/spancontext"
 	"github.com/ethereum/go-ethereum/swarm/storage/feed"
 	"github.com/ethereum/go-ethereum/swarm/testutil"
-	colorable "github.com/mattn/go-colorable"
-	opentracing "github.com/opentracing/opentracing-go"
 	"github.com/pborman/uuid"
 	cli "gopkg.in/urfave/cli.v1"
 )
@ -33,34 +26,28 @@ const (
 	feedRandomDataLength = 8
 )

-func cliFeedUploadAndSync(c *cli.Context) error {
-	metrics.GetOrRegisterCounter("feed-and-sync", nil).Inc(1)
-
-	log.Root().SetHandler(log.CallerFileHandler(log.LvlFilterHandler(log.Lvl(verbosity), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true)))))
-
+func feedUploadAndSyncCmd(ctx *cli.Context, tuid string) error {
 	errc := make(chan error)
+
 	go func() {
-		errc <- feedUploadAndSync(c)
+		errc <- feedUploadAndSync(ctx, tuid)
 	}()

 	select {
 	case err := <-errc:
 		if err != nil {
-			metrics.GetOrRegisterCounter("feed-and-sync.fail", nil).Inc(1)
+			metrics.GetOrRegisterCounter(fmt.Sprintf("%s.fail", commandName), nil).Inc(1)
 		}
 		return err
 	case <-time.After(time.Duration(timeout) * time.Second):
-		metrics.GetOrRegisterCounter("feed-and-sync.timeout", nil).Inc(1)
+		metrics.GetOrRegisterCounter(fmt.Sprintf("%s.timeout", commandName), nil).Inc(1)
 		return fmt.Errorf("timeout after %v sec", timeout)
 	}
 }

-// TODO: retrieve with manifest + extract repeating code
-func feedUploadAndSync(c *cli.Context) error {
-	defer func(now time.Time) { log.Info("total time", "time", time.Since(now), "size (kb)", filesize) }(time.Now())
-
-	generateEndpoints(scheme, cluster, appName, from, to)
-
-	log.Info("generating and uploading feeds to " + endpoints[0] + " and syncing")
+func feedUploadAndSync(c *cli.Context, tuid string) error {
+	log.Info("generating and uploading feeds to " + httpEndpoint(hosts[0]) + " and syncing")

 	// create a random private key to sign updates with and derive the address
 	pkFile, err := ioutil.TempFile("", "swarm-feed-smoke-test")
@ -114,7 +101,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	// create feed manifest, topic only
 	var out bytes.Buffer
-	cmd := exec.Command("swarm", "--bzzapi", endpoints[0], "feed", "create", "--topic", topicHex, "--user", userHex)
+	cmd := exec.Command("swarm", "--bzzapi", httpEndpoint(hosts[0]), "feed", "create", "--topic", topicHex, "--user", userHex)
 	cmd.Stdout = &out
 	log.Debug("create feed manifest topic cmd", "cmd", cmd)
 	err = cmd.Run()
@ -129,7 +116,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	out.Reset()

 	// create feed manifest, subtopic only
-	cmd = exec.Command("swarm", "--bzzapi", endpoints[0], "feed", "create", "--name", subTopicHex, "--user", userHex)
+	cmd = exec.Command("swarm", "--bzzapi", httpEndpoint(hosts[0]), "feed", "create", "--name", subTopicHex, "--user", userHex)
 	cmd.Stdout = &out
 	log.Debug("create feed manifest subtopic cmd", "cmd", cmd)
 	err = cmd.Run()
@ -144,7 +131,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	out.Reset()

 	// create feed manifest, merged topic
-	cmd = exec.Command("swarm", "--bzzapi", endpoints[0], "feed", "create", "--topic", topicHex, "--name", subTopicHex, "--user", userHex)
+	cmd = exec.Command("swarm", "--bzzapi", httpEndpoint(hosts[0]), "feed", "create", "--topic", topicHex, "--name", subTopicHex, "--user", userHex)
 	cmd.Stdout = &out
 	log.Debug("create feed manifest mergetopic cmd", "cmd", cmd)
 	err = cmd.Run()
@ -170,7 +157,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	dataHex := hexutil.Encode(data)

 	// update with topic
-	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, dataHex)
+	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", httpEndpoint(hosts[0]), "feed", "update", "--topic", topicHex, dataHex)
 	cmd.Stdout = &out
 	log.Debug("update feed manifest topic cmd", "cmd", cmd)
 	err = cmd.Run()
@ -181,7 +168,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	out.Reset()

 	// update with subtopic
-	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--name", subTopicHex, dataHex)
+	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", httpEndpoint(hosts[0]), "feed", "update", "--name", subTopicHex, dataHex)
 	cmd.Stdout = &out
 	log.Debug("update feed manifest subtopic cmd", "cmd", cmd)
 	err = cmd.Run()
@ -192,7 +179,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	out.Reset()

 	// update with merged topic
-	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, "--name", subTopicHex, dataHex)
+	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", httpEndpoint(hosts[0]), "feed", "update", "--topic", topicHex, "--name", subTopicHex, dataHex)
 	cmd.Stdout = &out
 	log.Debug("update feed manifest merged topic cmd", "cmd", cmd)
 	err = cmd.Run()
@ -206,14 +193,14 @@ func feedUploadAndSync(c *cli.Context) error {
 	// retrieve the data
 	wg := sync.WaitGroup{}
-	for _, endpoint := range endpoints {
+	for _, host := range hosts {
 		// raw retrieve, topic only
 		for _, hex := range []string{topicHex, subTopicOnlyHex, mergedSubTopicHex} {
 			wg.Add(1)
 			ruid := uuid.New()[:8]
 			go func(hex string, endpoint string, ruid string) {
 				for {
-					err := fetchFeed(hex, userHex, endpoint, dataHash, ruid)
+					err := fetchFeed(hex, userHex, httpEndpoint(host), dataHash, ruid)
 					if err != nil {
 						continue
 					}
@ -221,20 +208,18 @@ func feedUploadAndSync(c *cli.Context) error {
 					wg.Done()
 					return
 				}
-			}(hex, endpoint, ruid)
+			}(hex, httpEndpoint(host), ruid)
 		}
 	}
 	wg.Wait()
 	log.Info("all endpoints synced random data successfully")

 	// upload test file
-	seed := int(time.Now().UnixNano() / 1e6)
-	log.Info("feed uploading to "+endpoints[0]+" and syncing", "seed", seed)
+	log.Info("feed uploading to "+httpEndpoint(hosts[0])+" and syncing", "seed", seed)

 	randomBytes := testutil.RandomBytes(seed, filesize*1000)

-	hash, err := upload(&randomBytes, endpoints[0])
+	hash, err := upload(randomBytes, httpEndpoint(hosts[0]))
 	if err != nil {
 		return err
 	}
@ -243,15 +228,12 @@ func feedUploadAndSync(c *cli.Context) error {
 		return err
 	}
 	multihashHex := hexutil.Encode(hashBytes)
-	fileHash, err := digest(bytes.NewReader(randomBytes))
-	if err != nil {
-		return err
-	}
+	fileHash := h.Sum(nil)

 	log.Info("uploaded successfully", "hash", hash, "digest", fmt.Sprintf("%x", fileHash))

 	// update file with topic
-	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, multihashHex)
+	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", httpEndpoint(hosts[0]), "feed", "update", "--topic", topicHex, multihashHex)
 	cmd.Stdout = &out
 	err = cmd.Run()
 	if err != nil {
@ -261,7 +243,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	out.Reset()

 	// update file with subtopic
-	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--name", subTopicHex, multihashHex)
+	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", httpEndpoint(hosts[0]), "feed", "update", "--name", subTopicHex, multihashHex)
 	cmd.Stdout = &out
 	err = cmd.Run()
 	if err != nil {
@ -271,7 +253,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	out.Reset()

 	// update file with merged topic
-	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, "--name", subTopicHex, multihashHex)
+	cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", httpEndpoint(hosts[0]), "feed", "update", "--topic", topicHex, "--name", subTopicHex, multihashHex)
 	cmd.Stdout = &out
 	err = cmd.Run()
 	if err != nil {
@ -282,7 +264,7 @@ func feedUploadAndSync(c *cli.Context) error {
 	time.Sleep(3 * time.Second)

-	for _, endpoint := range endpoints {
+	for _, host := range hosts {
 		// manifest retrieve, topic only
 		for _, url := range []string{manifestWithTopic, manifestWithSubTopic, manifestWithMergedTopic} {
@ -290,7 +272,7 @@ func feedUploadAndSync(c *cli.Context) error {
 			ruid := uuid.New()[:8]
 			go func(url string, endpoint string, ruid string) {
 				for {
-					err := fetch(url, endpoint, fileHash, ruid)
+					err := fetch(url, endpoint, fileHash, ruid, "")
 					if err != nil {
 						continue
 					}
@ -298,7 +280,7 @@ func feedUploadAndSync(c *cli.Context) error {
 					wg.Done()
 					return
 				}
-			}(url, endpoint, ruid)
+			}(url, httpEndpoint(host), ruid)
 		}
 	}

@ -307,60 +289,3 @@ func feedUploadAndSync(c *cli.Context) error {
 	return nil
 }
-
-func fetchFeed(topic string, user string, endpoint string, original []byte, ruid string) error {
-	ctx, sp := spancontext.StartSpan(context.Background(), "feed-and-sync.fetch")
-	defer sp.Finish()
-
-	log.Trace("sleeping", "ruid", ruid)
-	time.Sleep(3 * time.Second)
-
-	log.Trace("http get request (feed)", "ruid", ruid, "api", endpoint, "topic", topic, "user", user)
-
-	var tn time.Time
-	reqUri := endpoint + "/bzz-feed:/?topic=" + topic + "&user=" + user
-	req, _ := http.NewRequest("GET", reqUri, nil)
-
-	opentracing.GlobalTracer().Inject(
-		sp.Context(),
-		opentracing.HTTPHeaders,
-		opentracing.HTTPHeadersCarrier(req.Header))
-
-	trace := client.GetClientTrace("feed-and-sync - http get", "feed-and-sync", ruid, &tn)
-
-	req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
-	transport := http.DefaultTransport
-
-	//transport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
-
-	tn = time.Now()
-	res, err := transport.RoundTrip(req)
-	if err != nil {
-		log.Error(err.Error(), "ruid", ruid)
-		return err
-	}
-
-	log.Trace("http get response (feed)", "ruid", ruid, "api", endpoint, "topic", topic, "user", user, "code", res.StatusCode, "len", res.ContentLength)
-
-	if res.StatusCode != 200 {
-		return fmt.Errorf("expected status code %d, got %v (ruid %v)", 200, res.StatusCode, ruid)
-	}
-
-	defer res.Body.Close()
-
-	rdigest, err := digest(res.Body)
-	if err != nil {
-		log.Warn(err.Error(), "ruid", ruid)
-		return err
-	}
-
-	if !bytes.Equal(rdigest, original) {
-		err := fmt.Errorf("downloaded imported file md5=%x is not the same as the generated one=%x", rdigest, original)
-		log.Warn(err.Error(), "ruid", ruid)
-		return err
-	}
-
-	log.Trace("downloaded file matches random file", "ruid", ruid, "len", res.ContentLength)
-
-	return nil
-}


@ -37,18 +37,16 @@ var (
) )
var ( var (
endpoints []string allhosts string
includeLocalhost bool hosts []string
cluster string
appName string
scheme string
filesize int filesize int
syncDelay int syncDelay int
from int httpPort int
to int wsPort int
verbosity int verbosity int
timeout int timeout int
single bool single bool
trackTimeout int
) )
func main() { func main() {
@ -59,39 +57,22 @@ func main() {
app.Flags = []cli.Flag{ app.Flags = []cli.Flag{
cli.StringFlag{ cli.StringFlag{
Name: "cluster-endpoint", Name: "hosts",
Value: "prod", Value: "",
Usage: "cluster to point to (prod or a given namespace)", Usage: "comma-separated list of swarm hosts",
Destination: &cluster, Destination: &allhosts,
},
cli.StringFlag{
Name: "app",
Value: "swarm",
Usage: "application to point to (swarm or swarm-private)",
Destination: &appName,
}, },
cli.IntFlag{ cli.IntFlag{
Name: "cluster-from", Name: "http-port",
Value: 8501, Value: 80,
Usage: "swarm node (from)", Usage: "http port",
Destination: &from, Destination: &httpPort,
}, },
cli.IntFlag{ cli.IntFlag{
Name: "cluster-to", Name: "ws-port",
Value: 8512, Value: 8546,
Usage: "swarm node (to)", Usage: "ws port",
Destination: &to, Destination: &wsPort,
},
cli.StringFlag{
Name: "cluster-scheme",
Value: "http",
Usage: "http or https",
Destination: &scheme,
},
cli.BoolFlag{
Name: "include-localhost",
Usage: "whether to include localhost:8500 as an endpoint",
Destination: &includeLocalhost,
}, },
cli.IntFlag{ cli.IntFlag{
Name: "filesize", Name: "filesize",
@ -122,6 +103,12 @@ func main() {
Usage: "whether to fetch content from a single node or from all nodes", Usage: "whether to fetch content from a single node or from all nodes",
Destination: &single, Destination: &single,
}, },
cli.IntFlag{
Name: "track-timeout",
Value: 5,
Usage: "timeout in seconds to wait for GetAllReferences to return",
Destination: &trackTimeout,
},
} }
app.Flags = append(app.Flags, []cli.Flag{ app.Flags = append(app.Flags, []cli.Flag{
@ -130,7 +117,7 @@ func main() {
swarmmetrics.MetricsInfluxDBDatabaseFlag, swarmmetrics.MetricsInfluxDBDatabaseFlag,
swarmmetrics.MetricsInfluxDBUsernameFlag, swarmmetrics.MetricsInfluxDBUsernameFlag,
swarmmetrics.MetricsInfluxDBPasswordFlag, swarmmetrics.MetricsInfluxDBPasswordFlag,
swarmmetrics.MetricsInfluxDBHostTagFlag, swarmmetrics.MetricsInfluxDBTagsFlag,
}...) }...)
app.Flags = append(app.Flags, tracing.Flags...) app.Flags = append(app.Flags, tracing.Flags...)
@ -140,13 +127,25 @@ func main() {
Name: "upload_and_sync", Name: "upload_and_sync",
Aliases: []string{"c"}, Aliases: []string{"c"},
Usage: "upload and sync", Usage: "upload and sync",
Action: cliUploadAndSync, Action: wrapCliCommand("upload-and-sync", uploadAndSyncCmd),
}, },
{ {
Name: "feed_sync", Name: "feed_sync",
Aliases: []string{"f"}, Aliases: []string{"f"},
Usage: "feed update generate, upload and sync", Usage: "feed update generate, upload and sync",
Action: cliFeedUploadAndSync, Action: wrapCliCommand("feed-and-sync", feedUploadAndSyncCmd),
+},
+{
+Name: "upload_speed",
+Aliases: []string{"u"},
+Usage: "measure upload speed",
+Action: wrapCliCommand("upload-speed", uploadSpeedCmd),
+},
+{
+Name: "sliding_window",
+Aliases: []string{"s"},
+Usage: "measure network aggregate capacity",
+Action: wrapCliCommand("sliding-window", slidingWindowCmd),
},
}
@@ -177,13 +176,14 @@ func emitMetrics(ctx *cli.Context) error {
database = ctx.GlobalString(swarmmetrics.MetricsInfluxDBDatabaseFlag.Name)
username = ctx.GlobalString(swarmmetrics.MetricsInfluxDBUsernameFlag.Name)
password = ctx.GlobalString(swarmmetrics.MetricsInfluxDBPasswordFlag.Name)
-hosttag = ctx.GlobalString(swarmmetrics.MetricsInfluxDBHostTagFlag.Name)
+tags = ctx.GlobalString(swarmmetrics.MetricsInfluxDBTagsFlag.Name)
)
-return influxdb.InfluxDBWithTagsOnce(gethmetrics.DefaultRegistry, endpoint, database, username, password, "swarm-smoke.", map[string]string{
-"host": hosttag,
-"version": gitCommit,
-"filesize": fmt.Sprintf("%v", filesize),
-})
+tagsMap := utils.SplitTagsFlag(tags)
+tagsMap["version"] = gitCommit
+tagsMap["filesize"] = fmt.Sprintf("%v", filesize)
+return influxdb.InfluxDBWithTagsOnce(gethmetrics.DefaultRegistry, endpoint, database, username, password, "swarm-smoke.", tagsMap)
}
return nil


@@ -0,0 +1,131 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"fmt"
"math/rand"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/swarm/testutil"
"github.com/pborman/uuid"
cli "gopkg.in/urfave/cli.v1"
)
type uploadResult struct {
hash string
digest []byte
}
func slidingWindowCmd(ctx *cli.Context, tuid string) error {
errc := make(chan error)
go func() {
errc <- slidingWindow(ctx, tuid)
}()
select {
case err := <-errc:
if err != nil {
metrics.GetOrRegisterCounter(fmt.Sprintf("%s.fail", commandName), nil).Inc(1)
}
return err
case <-time.After(time.Duration(timeout) * time.Second):
metrics.GetOrRegisterCounter(fmt.Sprintf("%s.timeout", commandName), nil).Inc(1)
return fmt.Errorf("timeout after %v sec", timeout)
}
}
func slidingWindow(ctx *cli.Context, tuid string) error {
hashes := []uploadResult{} // swarm hashes of the uploads
nodes := len(hosts)
const iterationTimeout = 30 * time.Second
log.Info("sliding window test started", "tuid", tuid, "nodes", nodes, "filesize(kb)", filesize, "timeout", timeout)
uploadedBytes := 0
networkDepth := 0
errored := false
outer:
for {
log.Info("uploading to "+httpEndpoint(hosts[0])+" and syncing", "seed", seed)
t1 := time.Now()
randomBytes := testutil.RandomBytes(seed, filesize*1000)
hash, err := upload(randomBytes, httpEndpoint(hosts[0]))
if err != nil {
log.Error(err.Error())
return err
}
metrics.GetOrRegisterResettingTimer("sliding-window.upload-time", nil).UpdateSince(t1)
fhash, err := digest(bytes.NewReader(randomBytes))
if err != nil {
log.Error(err.Error())
return err
}
log.Info("uploaded successfully", "hash", hash, "digest", fmt.Sprintf("%x", fhash), "sleeping", syncDelay)
hashes = append(hashes, uploadResult{hash: hash, digest: fhash})
time.Sleep(time.Duration(syncDelay) * time.Second)
uploadedBytes += filesize * 1000
for i, v := range hashes {
timeout := time.After(time.Duration(timeout) * time.Second)
errored = false
inner:
for {
select {
case <-timeout:
errored = true
log.Error("error retrieving hash. timeout", "hash idx", i, "err", err)
metrics.GetOrRegisterCounter("sliding-window.single.error", nil).Inc(1)
break inner
default:
idx := 1 + rand.Intn(len(hosts)-1)
ruid := uuid.New()[:8]
start := time.Now()
err := fetch(v.hash, httpEndpoint(hosts[idx]), v.digest, ruid, "")
if err != nil {
continue inner
}
metrics.GetOrRegisterResettingTimer("sliding-window.single.fetch-time", nil).UpdateSince(start)
break inner
}
}
if errored {
break outer
}
networkDepth = i
metrics.GetOrRegisterGauge("sliding-window.network-depth", nil).Update(int64(networkDepth))
}
}
log.Info("sliding window test finished", "errored?", errored, "networkDepth", networkDepth, "networkDepth(kb)", networkDepth*filesize)
log.Info("stats", "uploadedFiles", len(hashes), "uploadedKb", uploadedBytes/1000, "filesizeKb", filesize)
return nil
}


@@ -19,91 +19,122 @@ package main
import (
"bytes"
"context"
-"crypto/md5"
-crand "crypto/rand"
-"errors"
"fmt"
-"io"
"io/ioutil"
"math/rand"
-"net/http"
-"net/http/httptrace"
"os"
"sync"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
+"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm/api"
-"github.com/ethereum/go-ethereum/swarm/api/client"
-"github.com/ethereum/go-ethereum/swarm/spancontext"
+"github.com/ethereum/go-ethereum/swarm/storage"
"github.com/ethereum/go-ethereum/swarm/testutil"
-opentracing "github.com/opentracing/opentracing-go"
"github.com/pborman/uuid"
cli "gopkg.in/urfave/cli.v1"
)
-func generateEndpoints(scheme string, cluster string, app string, from int, to int) {
-if cluster == "prod" {
-for port := from; port < to; port++ {
-endpoints = append(endpoints, fmt.Sprintf("%s://%v.swarm-gateways.net", scheme, port))
-}
-} else {
-for port := from; port < to; port++ {
-endpoints = append(endpoints, fmt.Sprintf("%s://%s-%v-%s.stg.swarm-gateways.net", scheme, app, port, cluster))
-}
-}
-if includeLocalhost {
-endpoints = append(endpoints, "http://localhost:8500")
-}
-}
-func cliUploadAndSync(c *cli.Context) error {
-log.PrintOrigins(true)
-log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(verbosity), log.StreamHandler(os.Stdout, log.TerminalFormat(true))))
-metrics.GetOrRegisterCounter("upload-and-sync", nil).Inc(1)
+func uploadAndSyncCmd(ctx *cli.Context, tuid string) error {
+randomBytes := testutil.RandomBytes(seed, filesize*1000)
errc := make(chan error)
go func() {
-errc <- uploadAndSync(c)
+errc <- uplaodAndSync(ctx, randomBytes, tuid)
}()
select {
case err := <-errc:
if err != nil {
-metrics.GetOrRegisterCounter("upload-and-sync.fail", nil).Inc(1)
+metrics.GetOrRegisterCounter(fmt.Sprintf("%s.fail", commandName), nil).Inc(1)
}
return err
case <-time.After(time.Duration(timeout) * time.Second):
-metrics.GetOrRegisterCounter("upload-and-sync.timeout", nil).Inc(1)
-return fmt.Errorf("timeout after %v sec", timeout)
+metrics.GetOrRegisterCounter(fmt.Sprintf("%s.timeout", commandName), nil).Inc(1)
+e := fmt.Errorf("timeout after %v sec", timeout)
+// trigger debug functionality on randomBytes
+err := trackChunks(randomBytes[:])
+if err != nil {
+e = fmt.Errorf("%v; triggerChunkDebug failed: %v", e, err)
+}
+return e
}
}
-func uploadAndSync(c *cli.Context) error {
-defer func(now time.Time) {
-totalTime := time.Since(now)
-log.Info("total time", "time", totalTime, "kb", filesize)
-metrics.GetOrRegisterCounter("upload-and-sync.total-time", nil).Inc(int64(totalTime))
-}(time.Now())
-generateEndpoints(scheme, cluster, appName, from, to)
-seed := int(time.Now().UnixNano() / 1e6)
-log.Info("uploading to "+endpoints[0]+" and syncing", "seed", seed)
-randomBytes := testutil.RandomBytes(seed, filesize*1000)
+func trackChunks(testData []byte) error {
+log.Warn("Test timed out; running chunk debug sequence")
+addrs, err := getAllRefs(testData)
+if err != nil {
+return err
+}
+log.Trace("All references retrieved")
+// has-chunks
+for _, host := range hosts {
+httpHost := fmt.Sprintf("ws://%s:%d", host, 8546)
+log.Trace("Calling `Has` on host", "httpHost", httpHost)
+rpcClient, err := rpc.Dial(httpHost)
+if err != nil {
+log.Trace("Error dialing host", "err", err)
+return err
+}
+log.Trace("rpc dial ok")
+var hasInfo []api.HasInfo
+err = rpcClient.Call(&hasInfo, "bzz_has", addrs)
+if err != nil {
+log.Trace("Error calling host", "err", err)
+return err
+}
+log.Trace("rpc call ok")
+count := 0
+for _, info := range hasInfo {
+if !info.Has {
+count++
+log.Error("Host does not have chunk", "host", httpHost, "chunk", info.Addr)
+}
+}
+if count == 0 {
+log.Info("Host reported to have all chunks", "host", httpHost)
+}
+}
+return nil
+}
+func getAllRefs(testData []byte) (storage.AddressCollection, error) {
+log.Trace("Getting all references for given root hash")
+datadir, err := ioutil.TempDir("", "chunk-debug")
+if err != nil {
+return nil, fmt.Errorf("unable to create temp dir: %v", err)
+}
+defer os.RemoveAll(datadir)
+fileStore, err := storage.NewLocalFileStore(datadir, make([]byte, 32))
+if err != nil {
+return nil, err
+}
+ctx, cancel := context.WithTimeout(context.Background(), time.Duration(trackTimeout)*time.Second)
+defer cancel()
+reader := bytes.NewReader(testData)
+return fileStore.GetAllReferences(ctx, reader, false)
+}
+func uplaodAndSync(c *cli.Context, randomBytes []byte, tuid string) error {
+log.Info("uploading to "+httpEndpoint(hosts[0])+" and syncing", "tuid", tuid, "seed", seed)
t1 := time.Now()
-hash, err := upload(&randomBytes, endpoints[0])
+hash, err := upload(randomBytes, httpEndpoint(hosts[0]))
if err != nil {
log.Error(err.Error())
return err
}
-metrics.GetOrRegisterCounter("upload-and-sync.upload-time", nil).Inc(int64(time.Since(t1)))
+t2 := time.Since(t1)
+metrics.GetOrRegisterResettingTimer("upload-and-sync.upload-time", nil).Update(t2)
fhash, err := digest(bytes.NewReader(randomBytes))
if err != nil {
@@ -111,147 +142,53 @@ func uploadAndSync(c *cli.Context) error {
return err
}
-log.Info("uploaded successfully", "hash", hash, "digest", fmt.Sprintf("%x", fhash))
+log.Info("uploaded successfully", "tuid", tuid, "hash", hash, "took", t2, "digest", fmt.Sprintf("%x", fhash))
time.Sleep(time.Duration(syncDelay) * time.Second)
wg := sync.WaitGroup{}
if single {
-rand.Seed(time.Now().UTC().UnixNano())
-randIndex := 1 + rand.Intn(len(endpoints)-1)
+randIndex := 1 + rand.Intn(len(hosts)-1)
ruid := uuid.New()[:8]
wg.Add(1)
go func(endpoint string, ruid string) {
for {
start := time.Now()
-err := fetch(hash, endpoint, fhash, ruid)
-fetchTime := time.Since(start)
+err := fetch(hash, endpoint, fhash, ruid, tuid)
if err != nil {
continue
}
+ended := time.Since(start)
-metrics.GetOrRegisterMeter("upload-and-sync.single.fetch-time", nil).Mark(int64(fetchTime))
+metrics.GetOrRegisterResettingTimer("upload-and-sync.single.fetch-time", nil).Update(ended)
+log.Info("fetch successful", "tuid", tuid, "ruid", ruid, "took", ended, "endpoint", endpoint)
wg.Done()
return
}
-}(endpoints[randIndex], ruid)
+}(httpEndpoint(hosts[randIndex]), ruid)
} else {
-for _, endpoint := range endpoints {
+for _, endpoint := range hosts[1:] {
ruid := uuid.New()[:8]
wg.Add(1)
go func(endpoint string, ruid string) {
for {
start := time.Now()
-err := fetch(hash, endpoint, fhash, ruid)
-fetchTime := time.Since(start)
+err := fetch(hash, endpoint, fhash, ruid, tuid)
if err != nil {
continue
}
+ended := time.Since(start)
-metrics.GetOrRegisterMeter("upload-and-sync.each.fetch-time", nil).Mark(int64(fetchTime))
+metrics.GetOrRegisterResettingTimer("upload-and-sync.each.fetch-time", nil).Update(ended)
+log.Info("fetch successful", "tuid", tuid, "ruid", ruid, "took", ended, "endpoint", endpoint)
wg.Done()
return
}
-}(endpoint, ruid)
+}(httpEndpoint(endpoint), ruid)
}
}
wg.Wait()
-log.Info("all endpoints synced random file successfully")
+log.Info("all hosts synced random file successfully")
return nil
}
-// fetch is getting the requested `hash` from the `endpoint` and compares it with the `original` file
-func fetch(hash string, endpoint string, original []byte, ruid string) error {
-ctx, sp := spancontext.StartSpan(context.Background(), "upload-and-sync.fetch")
-defer sp.Finish()
-log.Trace("sleeping", "ruid", ruid)
-time.Sleep(3 * time.Second)
-log.Trace("http get request", "ruid", ruid, "api", endpoint, "hash", hash)
-var tn time.Time
-reqUri := endpoint + "/bzz:/" + hash + "/"
-req, _ := http.NewRequest("GET", reqUri, nil)
-opentracing.GlobalTracer().Inject(
-sp.Context(),
-opentracing.HTTPHeaders,
-opentracing.HTTPHeadersCarrier(req.Header))
-trace := client.GetClientTrace("upload-and-sync - http get", "upload-and-sync", ruid, &tn)
-req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
-transport := http.DefaultTransport
-//transport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
-tn = time.Now()
-res, err := transport.RoundTrip(req)
-if err != nil {
-log.Error(err.Error(), "ruid", ruid)
-return err
-}
-log.Trace("http get response", "ruid", ruid, "api", endpoint, "hash", hash, "code", res.StatusCode, "len", res.ContentLength)
-if res.StatusCode != 200 {
-err := fmt.Errorf("expected status code %d, got %v", 200, res.StatusCode)
-log.Warn(err.Error(), "ruid", ruid)
-return err
-}
-defer res.Body.Close()
-rdigest, err := digest(res.Body)
-if err != nil {
-log.Warn(err.Error(), "ruid", ruid)
-return err
-}
-if !bytes.Equal(rdigest, original) {
-err := fmt.Errorf("downloaded imported file md5=%x is not the same as the generated one=%x", rdigest, original)
-log.Warn(err.Error(), "ruid", ruid)
-return err
-}
-log.Trace("downloaded file matches random file", "ruid", ruid, "len", res.ContentLength)
-return nil
-}
-// upload is uploading a file `f` to `endpoint` via the `swarm up` cmd
-func upload(dataBytes *[]byte, endpoint string) (string, error) {
-swarm := client.NewClient(endpoint)
-f := &client.File{
-ReadCloser: ioutil.NopCloser(bytes.NewReader(*dataBytes)),
-ManifestEntry: api.ManifestEntry{
-ContentType: "text/plain",
-Mode: 0660,
-Size: int64(len(*dataBytes)),
-},
-}
-// upload data to bzz:// and retrieve the content-addressed manifest hash, hex-encoded.
-return swarm.Upload(f, "", false)
-}
-func digest(r io.Reader) ([]byte, error) {
-h := md5.New()
-_, err := io.Copy(h, r)
-if err != nil {
-return nil, err
-}
-return h.Sum(nil), nil
-}
-// generates random data in heap buffer
-func generateRandomData(datasize int) ([]byte, error) {
-b := make([]byte, datasize)
-c, err := crand.Read(b)
-if err != nil {
-return nil, err
-} else if c != datasize {
-return nil, errors.New("short read")
-}
-return b, nil
-}


@@ -0,0 +1,73 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"fmt"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/swarm/testutil"
cli "gopkg.in/urfave/cli.v1"
)
func uploadSpeedCmd(ctx *cli.Context, tuid string) error {
log.Info("uploading to "+hosts[0], "tuid", tuid, "seed", seed)
randomBytes := testutil.RandomBytes(seed, filesize*1000)
errc := make(chan error)
go func() {
errc <- uploadSpeed(ctx, tuid, randomBytes)
}()
select {
case err := <-errc:
if err != nil {
metrics.GetOrRegisterCounter(fmt.Sprintf("%s.fail", commandName), nil).Inc(1)
}
return err
case <-time.After(time.Duration(timeout) * time.Second):
metrics.GetOrRegisterCounter(fmt.Sprintf("%s.timeout", commandName), nil).Inc(1)
// trigger debug functionality on randomBytes
return fmt.Errorf("timeout after %v sec", timeout)
}
}
func uploadSpeed(c *cli.Context, tuid string, data []byte) error {
t1 := time.Now()
hash, err := upload(data, hosts[0])
if err != nil {
log.Error(err.Error())
return err
}
metrics.GetOrRegisterCounter("upload-speed.upload-time", nil).Inc(int64(time.Since(t1)))
fhash, err := digest(bytes.NewReader(data))
if err != nil {
log.Error(err.Error())
return err
}
log.Info("uploaded successfully", "hash", hash, "digest", fmt.Sprintf("%x", fhash))
return nil
}


@@ -0,0 +1,235 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"context"
"crypto/md5"
crand "crypto/rand"
"errors"
"fmt"
"io"
"io/ioutil"
"math/rand"
"net/http"
"net/http/httptrace"
"os"
"strings"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/swarm/api"
"github.com/ethereum/go-ethereum/swarm/api/client"
"github.com/ethereum/go-ethereum/swarm/spancontext"
opentracing "github.com/opentracing/opentracing-go"
"github.com/pborman/uuid"
cli "gopkg.in/urfave/cli.v1"
)
var (
commandName = ""
seed = int(time.Now().UTC().UnixNano())
)
func init() {
rand.Seed(int64(seed))
}
func httpEndpoint(host string) string {
return fmt.Sprintf("http://%s:%d", host, httpPort)
}
func wsEndpoint(host string) string {
return fmt.Sprintf("ws://%s:%d", host, wsPort)
}
func wrapCliCommand(name string, command func(*cli.Context, string) error) func(*cli.Context) error {
return func(ctx *cli.Context) error {
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(verbosity), log.StreamHandler(os.Stdout, log.TerminalFormat(false))))
// test uuid
tuid := uuid.New()[:8]
commandName = name
hosts = strings.Split(allhosts, ",")
defer func(now time.Time) {
totalTime := time.Since(now)
log.Info("total time", "tuid", tuid, "time", totalTime, "kb", filesize)
metrics.GetOrRegisterResettingTimer(name+".total-time", nil).Update(totalTime)
}(time.Now())
log.Info("smoke test starting", "tuid", tuid, "task", name, "timeout", timeout)
metrics.GetOrRegisterCounter(name, nil).Inc(1)
return command(ctx, tuid)
}
}
func fetchFeed(topic string, user string, endpoint string, original []byte, ruid string) error {
ctx, sp := spancontext.StartSpan(context.Background(), "feed-and-sync.fetch")
defer sp.Finish()
log.Trace("sleeping", "ruid", ruid)
time.Sleep(3 * time.Second)
log.Trace("http get request (feed)", "ruid", ruid, "api", endpoint, "topic", topic, "user", user)
var tn time.Time
reqUri := endpoint + "/bzz-feed:/?topic=" + topic + "&user=" + user
req, _ := http.NewRequest("GET", reqUri, nil)
opentracing.GlobalTracer().Inject(
sp.Context(),
opentracing.HTTPHeaders,
opentracing.HTTPHeadersCarrier(req.Header))
trace := client.GetClientTrace("feed-and-sync - http get", "feed-and-sync", ruid, &tn)
req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
transport := http.DefaultTransport
//transport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
tn = time.Now()
res, err := transport.RoundTrip(req)
if err != nil {
log.Error(err.Error(), "ruid", ruid)
return err
}
log.Trace("http get response (feed)", "ruid", ruid, "api", endpoint, "topic", topic, "user", user, "code", res.StatusCode, "len", res.ContentLength)
if res.StatusCode != 200 {
return fmt.Errorf("expected status code %d, got %v (ruid %v)", 200, res.StatusCode, ruid)
}
defer res.Body.Close()
rdigest, err := digest(res.Body)
if err != nil {
log.Warn(err.Error(), "ruid", ruid)
return err
}
if !bytes.Equal(rdigest, original) {
err := fmt.Errorf("downloaded imported file md5=%x is not the same as the generated one=%x", rdigest, original)
log.Warn(err.Error(), "ruid", ruid)
return err
}
log.Trace("downloaded file matches random file", "ruid", ruid, "len", res.ContentLength)
return nil
}
// fetch gets the requested `hash` from the `endpoint` and compares it with the `original` file
func fetch(hash string, endpoint string, original []byte, ruid string, tuid string) error {
ctx, sp := spancontext.StartSpan(context.Background(), "upload-and-sync.fetch")
defer sp.Finish()
log.Info("http get request", "tuid", tuid, "ruid", ruid, "endpoint", endpoint, "hash", hash)
var tn time.Time
reqUri := endpoint + "/bzz:/" + hash + "/"
req, _ := http.NewRequest("GET", reqUri, nil)
opentracing.GlobalTracer().Inject(
sp.Context(),
opentracing.HTTPHeaders,
opentracing.HTTPHeadersCarrier(req.Header))
trace := client.GetClientTrace(commandName+" - http get", commandName, ruid, &tn)
req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
transport := http.DefaultTransport
//transport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
tn = time.Now()
res, err := transport.RoundTrip(req)
if err != nil {
log.Error(err.Error(), "ruid", ruid)
return err
}
log.Info("http get response", "tuid", tuid, "ruid", ruid, "endpoint", endpoint, "hash", hash, "code", res.StatusCode, "len", res.ContentLength)
if res.StatusCode != 200 {
err := fmt.Errorf("expected status code %d, got %v", 200, res.StatusCode)
log.Warn(err.Error(), "ruid", ruid)
return err
}
defer res.Body.Close()
rdigest, err := digest(res.Body)
if err != nil {
log.Warn(err.Error(), "ruid", ruid)
return err
}
if !bytes.Equal(rdigest, original) {
err := fmt.Errorf("downloaded imported file md5=%x is not the same as the generated one=%x", rdigest, original)
log.Warn(err.Error(), "ruid", ruid)
return err
}
log.Trace("downloaded file matches random file", "ruid", ruid, "len", res.ContentLength)
return nil
}
// upload an arbitrary byte slice as a plaintext file to `endpoint` using the api client
func upload(data []byte, endpoint string) (string, error) {
swarm := client.NewClient(endpoint)
f := &client.File{
ReadCloser: ioutil.NopCloser(bytes.NewReader(data)),
ManifestEntry: api.ManifestEntry{
ContentType: "text/plain",
Mode: 0660,
Size: int64(len(data)),
},
}
// upload data to bzz:// and retrieve the content-addressed manifest hash, hex-encoded.
return swarm.Upload(f, "", false)
}
func digest(r io.Reader) ([]byte, error) {
h := md5.New()
_, err := io.Copy(h, r)
if err != nil {
return nil, err
}
return h.Sum(nil), nil
}
// generates random data in heap buffer
func generateRandomData(datasize int) ([]byte, error) {
b := make([]byte, datasize)
c, err := crand.Read(b)
if err != nil {
return nil, err
} else if c != datasize {
return nil, errors.New("short read")
}
return b, nil
}


@@ -0,0 +1,157 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethereum/go-ethereum/swarm/network"
"github.com/ethereum/go-ethereum/swarm/network/simulation"
cli "gopkg.in/urfave/cli.v1"
)
// create is used as the entry function for "create" app command.
func create(ctx *cli.Context) error {
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(ctx.Int("verbosity")), log.StreamHandler(os.Stdout, log.TerminalFormat(true))))
if len(ctx.Args()) < 1 {
return errors.New("argument should be the filename to verify or write-to")
}
filename, err := touchPath(ctx.Args()[0])
if err != nil {
return err
}
return createSnapshot(filename, ctx.Int("nodes"), strings.Split(ctx.String("services"), ","))
}
// createSnapshot creates a new snapshot on filesystem with provided filename,
// number of nodes and service names.
func createSnapshot(filename string, nodes int, services []string) (err error) {
log.Debug("create snapshot", "filename", filename, "nodes", nodes, "services", services)
sim := simulation.New(map[string]simulation.ServiceFunc{
"bzz": func(ctx *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
addr := network.NewAddr(ctx.Config.Node())
kad := network.NewKademlia(addr.Over(), network.NewKadParams())
hp := network.NewHiveParams()
hp.KeepAliveInterval = time.Duration(200) * time.Millisecond
hp.Discovery = true // discovery must be enabled when creating a snapshot
config := &network.BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: hp,
}
return network.NewBzz(config, kad, nil, nil, nil), nil, nil
},
})
defer sim.Close()
_, err = sim.AddNodes(nodes)
if err != nil {
return fmt.Errorf("add nodes: %v", err)
}
err = sim.Net.ConnectNodesRing(nil)
if err != nil {
return fmt.Errorf("connect nodes: %v", err)
}
ctx, cancelSimRun := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancelSimRun()
if _, err := sim.WaitTillHealthy(ctx); err != nil {
return fmt.Errorf("wait for healthy kademlia: %v", err)
}
var snap *simulations.Snapshot
if len(services) > 0 {
// If service names are provided, include them in the snapshot.
// But check whether the "bzz" service is among them: if not, remove it
// from the snapshot, as it exists on snapshot creation.
var removeServices []string
var wantBzz bool
for _, s := range services {
if s == "bzz" {
wantBzz = true
break
}
}
if !wantBzz {
removeServices = []string{"bzz"}
}
snap, err = sim.Net.SnapshotWithServices(services, removeServices)
} else {
snap, err = sim.Net.Snapshot()
}
if err != nil {
return fmt.Errorf("create snapshot: %v", err)
}
jsonsnapshot, err := json.Marshal(snap)
if err != nil {
return fmt.Errorf("json encode snapshot: %v", err)
}
return ioutil.WriteFile(filename, jsonsnapshot, 0666)
}
// touchPath creates an empty file and all subdirectories
// that are missing.
func touchPath(filename string) (string, error) {
if path.IsAbs(filename) {
if _, err := os.Stat(filename); err == nil {
// path exists, overwrite
return filename, nil
}
}
d, f := path.Split(filename)
dir, err := filepath.Abs(filepath.Dir(os.Args[0]))
if err != nil {
return "", err
}
_, err = os.Stat(path.Join(dir, filename))
if err == nil {
// path exists, overwrite
return filename, nil
}
dirPath := path.Join(dir, d)
filePath := path.Join(dirPath, f)
if d != "" {
err = os.MkdirAll(dirPath, os.ModeDir)
if err != nil {
return "", err
}
}
return filePath, nil
}


@@ -0,0 +1,143 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"runtime"
"sort"
"strconv"
"strings"
"testing"
"github.com/ethereum/go-ethereum/p2p/simulations"
)
// TestSnapshotCreate is a high level e2e test that tests for snapshot generation.
// It runs a few "create" commands with different flag values and loads generated
// snapshot files to validate their content.
func TestSnapshotCreate(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
for _, v := range []struct {
name string
nodes int
services string
}{
{
name: "defaults",
},
{
name: "more nodes",
nodes: defaultNodes + 5,
},
{
name: "services",
services: "stream,pss,zorglub",
},
{
name: "services with bzz",
services: "bzz,pss",
},
} {
t.Run(v.name, func(t *testing.T) {
t.Parallel()
file, err := ioutil.TempFile("", "swarm-snapshot")
if err != nil {
t.Fatal(err)
}
defer os.Remove(file.Name())
if err = file.Close(); err != nil {
t.Error(err)
}
args := []string{"create"}
if v.nodes > 0 {
args = append(args, "--nodes", strconv.Itoa(v.nodes))
}
if v.services != "" {
args = append(args, "--services", v.services)
}
testCmd := runSnapshot(t, append(args, file.Name())...)
testCmd.ExpectExit()
if code := testCmd.ExitStatus(); code != 0 {
t.Fatalf("command exit code %v, expected 0", code)
}
f, err := os.Open(file.Name())
if err != nil {
t.Fatal(err)
}
defer func() {
err := f.Close()
if err != nil {
t.Error("closing snapshot file", "err", err)
}
}()
b, err := ioutil.ReadAll(f)
if err != nil {
t.Fatal(err)
}
var snap simulations.Snapshot
err = json.Unmarshal(b, &snap)
if err != nil {
t.Fatal(err)
}
wantNodes := v.nodes
if wantNodes == 0 {
wantNodes = defaultNodes
}
gotNodes := len(snap.Nodes)
if gotNodes != wantNodes {
t.Errorf("got %v nodes, want %v", gotNodes, wantNodes)
}
if len(snap.Conns) == 0 {
t.Error("no connections in a snapshot")
}
var wantServices []string
if v.services != "" {
wantServices = strings.Split(v.services, ",")
} else {
wantServices = []string{"bzz"}
}
// sort service names so they can be comparable
// as strings to every node sorted services
sort.Strings(wantServices)
for i, n := range snap.Nodes {
gotServices := n.Node.Config.Services
sort.Strings(gotServices)
if fmt.Sprint(gotServices) != fmt.Sprint(wantServices) {
t.Errorf("got services %v for node %v, want %v", gotServices, i, wantServices)
}
}
})
}
}


@@ -0,0 +1,82 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/log"
cli "gopkg.in/urfave/cli.v1"
)
var gitCommit string // Git SHA1 commit hash of the release (set via linker flags)
// default value for "create" command --nodes flag
const defaultNodes = 10
func main() {
err := newApp().Run(os.Args)
if err != nil {
log.Error(err.Error())
os.Exit(1)
}
}
// newApp constructs a new instance of the Swarm Snapshot Utility.
// Method Run is called on it in the main function and in tests.
func newApp() (app *cli.App) {
app = utils.NewApp(gitCommit, "Swarm Snapshot Utility")
app.Name = "swarm-snapshot"
app.Usage = ""
// app flags (for all commands)
app.Flags = []cli.Flag{
cli.IntFlag{
Name: "verbosity",
Value: 1,
Usage: "verbosity level",
},
}
app.Commands = []cli.Command{
{
Name: "create",
Aliases: []string{"c"},
Usage: "create a swarm snapshot",
Action: create,
// Flags only for "create" command.
// Allow app flags to be specified after the
// command argument.
Flags: append(app.Flags,
cli.IntFlag{
Name: "nodes",
Value: defaultNodes,
Usage: "number of nodes",
},
cli.StringFlag{
Name: "services",
Value: "bzz",
Usage: "comma separated list of services to boot the nodes with",
},
),
},
}
return app
}


@@ -0,0 +1,49 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"os"
"testing"
"github.com/docker/docker/pkg/reexec"
"github.com/ethereum/go-ethereum/internal/cmdtest"
)
func init() {
reexec.Register("swarm-snapshot", func() {
if err := newApp().Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
os.Exit(0)
})
}
func runSnapshot(t *testing.T, args ...string) *cmdtest.TestCmd {
tt := cmdtest.NewTestCmd(t, nil)
tt.Run("swarm-snapshot", args...)
return tt
}
func TestMain(m *testing.M) {
if reexec.Init() {
return
}
os.Exit(m.Run())
}


@@ -57,7 +57,7 @@ import (
 	"github.com/ethereum/go-ethereum/p2p/netutil"
 	"github.com/ethereum/go-ethereum/params"
 	whisper "github.com/ethereum/go-ethereum/whisper/whisperv6"
-	"gopkg.in/urfave/cli.v1"
+	cli "gopkg.in/urfave/cli.v1"
 )

 var (
@@ -140,6 +140,10 @@ var (
 		Name:  "rinkeby",
 		Usage: "Rinkeby network: pre-configured proof-of-authority test network",
 	}
+	GoerliFlag = cli.BoolFlag{
+		Name:  "goerli",
+		Usage: "Görli network: pre-configured proof-of-authority test network",
+	}
 	ConstantinopleOverrideFlag = cli.Uint64Flag{
 		Name:  "override.constantinople",
 		Usage: "Manually specify constantinople fork-block, overriding the bundled setting",
@@ -407,6 +411,10 @@ var (
 		Name:  "vmdebug",
 		Usage: "Record information useful for VM and contract debugging",
 	}
+	RPCGlobalGasCap = cli.Uint64Flag{
+		Name:  "rpc.gascap",
+		Usage: "Sets a cap on gas that can be used in eth_call/estimateGas",
+	}
 	// Logging and debug settings
 	EthStatsURLFlag = cli.StringFlag{
 		Name:  "ethstats",
@@ -614,14 +622,14 @@ var (
 		Usage: "Password to authorize access to the database",
 		Value: "test",
 	}
-	// The `host` tag is part of every measurement sent to InfluxDB. Queries on tags are faster in InfluxDB.
-	// It is used so that we can group all nodes and average a measurement across all of them, but also so
-	// that we can select a specific node and inspect its measurements.
+	// Tags are part of every measurement sent to InfluxDB. Queries on tags are faster in InfluxDB.
+	// For example `host` tag could be used so that we can group all nodes and average a measurement
+	// across all of them, but also so that we can select a specific node and inspect its measurements.
 	// https://docs.influxdata.com/influxdb/v1.4/concepts/key_concepts/#tag-key
-	MetricsInfluxDBHostTagFlag = cli.StringFlag{
-		Name:  "metrics.influxdb.host.tag",
-		Usage: "InfluxDB `host` tag attached to all measurements",
-		Value: "localhost",
+	MetricsInfluxDBTagsFlag = cli.StringFlag{
+		Name:  "metrics.influxdb.tags",
+		Usage: "Comma-separated InfluxDB tags (key/values) attached to all measurements",
+		Value: "host=localhost",
 	}
 	EWASMInterpreterFlag = cli.StringFlag{
@@ -647,6 +655,9 @@ func MakeDataDir(ctx *cli.Context) string {
 		if ctx.GlobalBool(RinkebyFlag.Name) {
 			return filepath.Join(path, "rinkeby")
 		}
+		if ctx.GlobalBool(GoerliFlag.Name) {
+			return filepath.Join(path, "goerli")
+		}
 		return path
 	}
 	Fatalf("Cannot determine default data directory, please set manually (--datadir)")
@@ -701,6 +712,8 @@ func setBootstrapNodes(ctx *cli.Context, cfg *p2p.Config) {
 		urls = params.TestnetBootnodes
 	case ctx.GlobalBool(RinkebyFlag.Name):
 		urls = params.RinkebyBootnodes
+	case ctx.GlobalBool(GoerliFlag.Name):
+		urls = params.GoerliBootnodes
 	case cfg.BootstrapNodes != nil:
 		return // already set, don't apply defaults.
 	}
@@ -728,6 +741,8 @@ func setBootstrapNodesV5(ctx *cli.Context, cfg *p2p.Config) {
 		}
 	case ctx.GlobalBool(RinkebyFlag.Name):
 		urls = params.RinkebyBootnodes
+	case ctx.GlobalBool(GoerliFlag.Name):
+		urls = params.GoerliBootnodes
 	case cfg.BootstrapNodesV5 != nil:
 		return // already set, don't apply defaults.
 	}
@@ -836,10 +851,11 @@ func makeDatabaseHandles() int {
 	if err != nil {
 		Fatalf("Failed to retrieve file descriptor allowance: %v", err)
 	}
-	if err := fdlimit.Raise(uint64(limit)); err != nil {
+	raised, err := fdlimit.Raise(uint64(limit))
+	if err != nil {
 		Fatalf("Failed to raise file descriptor allowance: %v", err)
 	}
-	return limit / 2 // Leave half for networking and other stuff
+	return int(raised / 2) // Leave half for networking and other stuff
 }

 // MakeAddress converts an account specified directly as a hex encoded string or
@@ -980,7 +996,6 @@ func SetNodeConfig(ctx *cli.Context, cfg *node.Config) {
 	setHTTP(ctx, cfg)
 	setWS(ctx, cfg)
 	setNodeUserIdent(ctx, cfg)
-
 	setDataDir(ctx, cfg)

 	if ctx.GlobalIsSet(KeyStoreDirFlag.Name) {
@@ -1004,6 +1019,8 @@ func setDataDir(ctx *cli.Context, cfg *node.Config) {
 		cfg.DataDir = filepath.Join(node.DefaultDataDir(), "testnet")
 	case ctx.GlobalBool(RinkebyFlag.Name):
 		cfg.DataDir = filepath.Join(node.DefaultDataDir(), "rinkeby")
+	case ctx.GlobalBool(GoerliFlag.Name):
+		cfg.DataDir = filepath.Join(node.DefaultDataDir(), "goerli")
 	}
 }
@@ -1160,7 +1177,7 @@ func SetShhConfig(ctx *cli.Context, stack *node.Node, cfg *whisper.Config) {
 // SetEthConfig applies eth-related command line flags to the config.
 func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *eth.Config) {
 	// Avoid conflicting network flags
-	checkExclusive(ctx, DeveloperFlag, TestnetFlag, RinkebyFlag)
+	checkExclusive(ctx, DeveloperFlag, TestnetFlag, RinkebyFlag, GoerliFlag)
 	checkExclusive(ctx, LightServFlag, SyncModeFlag, "light")
 	ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
@@ -1243,6 +1260,9 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *eth.Config) {
 	if ctx.GlobalIsSet(EVMInterpreterFlag.Name) {
 		cfg.EVMInterpreter = ctx.GlobalString(EVMInterpreterFlag.Name)
 	}
+	if ctx.GlobalIsSet(RPCGlobalGasCap.Name) {
+		cfg.RPCGasCap = new(big.Int).SetUint64(ctx.GlobalUint64(RPCGlobalGasCap.Name))
+	}
 	// Override any default configs for hard coded networks.
 	switch {
@@ -1256,6 +1276,11 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *eth.Config) {
 			cfg.NetworkId = 4
 		}
 		cfg.Genesis = core.DefaultRinkebyGenesisBlock()
+	case ctx.GlobalBool(GoerliFlag.Name):
+		if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
+			cfg.NetworkId = 5
+		}
+		cfg.Genesis = core.DefaultGoerliGenesisBlock()
 	case ctx.GlobalBool(DeveloperFlag.Name):
 		if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
 			cfg.NetworkId = 1337
@@ -1360,18 +1385,35 @@ func SetupMetrics(ctx *cli.Context) {
 			database = ctx.GlobalString(MetricsInfluxDBDatabaseFlag.Name)
 			username = ctx.GlobalString(MetricsInfluxDBUsernameFlag.Name)
 			password = ctx.GlobalString(MetricsInfluxDBPasswordFlag.Name)
-			hosttag  = ctx.GlobalString(MetricsInfluxDBHostTagFlag.Name)
 		)

 		if enableExport {
+			tagsMap := SplitTagsFlag(ctx.GlobalString(MetricsInfluxDBTagsFlag.Name))
+
 			log.Info("Enabling metrics export to InfluxDB")
-			go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, database, username, password, "geth.", map[string]string{
-				"host": hosttag,
-			})
+
+			go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, database, username, password, "geth.", tagsMap)
 		}
 	}
 }

+func SplitTagsFlag(tagsFlag string) map[string]string {
+	tags := strings.Split(tagsFlag, ",")
+	tagsMap := map[string]string{}
+
+	for _, t := range tags {
+		if t != "" {
+			kv := strings.Split(t, "=")
+
+			if len(kv) == 2 {
+				tagsMap[kv[0]] = kv[1]
+			}
+		}
+	}
+	return tagsMap
+}
+
 // MakeChainDatabase open an LevelDB using the flags passed to the client and will hard crash if it fails.
 func MakeChainDatabase(ctx *cli.Context, stack *node.Node) ethdb.Database {
 	var (
@@ -1396,6 +1438,8 @@ func MakeGenesis(ctx *cli.Context) *core.Genesis {
 		genesis = core.DefaultTestnetGenesisBlock()
 	case ctx.GlobalBool(RinkebyFlag.Name):
 		genesis = core.DefaultRinkebyGenesisBlock()
+	case ctx.GlobalBool(GoerliFlag.Name):
+		genesis = core.DefaultGoerliGenesisBlock()
 	case ctx.GlobalBool(DeveloperFlag.Name):
 		Fatalf("Developer chains are ephemeral")
 	}
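The new `SplitTagsFlag` parsing is easy to exercise in isolation. A minimal sketch reimplementing its logic outside the `utils` package (helper name `splitTags` is ours): comma-separated `key=value` pairs, with empty or malformed entries silently dropped.

```go
package main

import (
	"fmt"
	"strings"
)

// splitTags mirrors utils.SplitTagsFlag: split on commas, keep only entries
// that are exactly one key and one value separated by '='.
func splitTags(tagsFlag string) map[string]string {
	tagsMap := map[string]string{}
	for _, t := range strings.Split(tagsFlag, ",") {
		if t == "" {
			continue // empty entry, e.g. from a trailing comma or empty flag
		}
		kv := strings.Split(t, "=")
		if len(kv) == 2 {
			tagsMap[kv[0]] = kv[1]
		}
	}
	return tagsMap
}

func main() {
	fmt.Println(splitTags("host=localhost,bzzkey=123")) // both tags kept
	fmt.Println(splitTags("smth=smthelse=123"))         // malformed entry dropped
}
```

The same cases appear in the `Test_SplitTagsFlag` table below, including the double-`=` "garbage" input that yields an empty map.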

cmd/utils/flags_test.go (new file, 64 lines)

@@ -0,0 +1,64 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Package utils contains internal helper functions for go-ethereum commands.
package utils
import (
"reflect"
"testing"
)
func Test_SplitTagsFlag(t *testing.T) {
tests := []struct {
name string
args string
want map[string]string
}{
{
"2 tags case",
"host=localhost,bzzkey=123",
map[string]string{
"host": "localhost",
"bzzkey": "123",
},
},
{
"1 tag case",
"host=localhost123",
map[string]string{
"host": "localhost123",
},
},
{
"empty case",
"",
map[string]string{},
},
{
"garbage",
"smth=smthelse=123",
map[string]string{},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := SplitTagsFlag(tt.args); !reflect.DeepEqual(got, tt.want) {
t.Errorf("SplitTagsFlag() = %v, want %v", got, tt.want)
}
})
}
}


@@ -0,0 +1,71 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package fdlimit
import "syscall"
// hardlimit is the number of file descriptors allowed at max by the kernel.
const hardlimit = 10240
// Raise tries to maximize the file descriptor allowance of this process
// to the maximum hard-limit allowed by the OS.
// Returns the size it was set to (may differ from the desired 'max')
func Raise(max uint64) (uint64, error) {
// Get the current limit
var limit syscall.Rlimit
if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
return 0, err
}
// Try to update the limit to the max allowance
limit.Cur = limit.Max
if limit.Cur > max {
limit.Cur = max
}
if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
return 0, err
}
// MacOS can silently apply further caps, so retrieve the actually set limit
if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
return 0, err
}
return limit.Cur, nil
}
// Current retrieves the number of file descriptors allowed to be opened by this
// process.
func Current() (int, error) {
var limit syscall.Rlimit
if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
return 0, err
}
return int(limit.Cur), nil
}
// Maximum retrieves the maximum number of file descriptors this process is
// allowed to request for itself.
func Maximum() (int, error) {
// Retrieve the maximum allowed by dynamic OS limits
var limit syscall.Rlimit
if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
return 0, err
}
// Cap it to OPEN_MAX (10240) because macos is a special snowflake
if limit.Max > hardlimit {
limit.Max = hardlimit
}
return int(limit.Max), nil
}
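The core of `Raise` above, stripped of the syscalls, is a clamp: the new soft limit is the hard limit, capped at the caller's requested maximum. A minimal sketch of just that logic (function name `clampFDLimit` is ours, not geth's):

```go
package main

import "fmt"

// clampFDLimit mirrors the limit selection in fdlimit.Raise: start from the
// kernel hard limit and cap it at the requested maximum.
func clampFDLimit(hard, requested uint64) uint64 {
	cur := hard
	if cur > requested {
		cur = requested
	}
	return cur
}

func main() {
	fmt.Println(clampFDLimit(10240, 2048))   // request below the hard limit wins
	fmt.Println(clampFDLimit(10240, 999999)) // hard limit caps the request
}
```

The real function then calls `Setrlimit` with this value and re-reads the limit, because macOS can silently apply further caps.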


@@ -26,11 +26,11 @@ import "syscall"

 // Raise tries to maximize the file descriptor allowance of this process
 // to the maximum hard-limit allowed by the OS.
-func Raise(max uint64) error {
+func Raise(max uint64) (uint64, error) {
 	// Get the current limit
 	var limit syscall.Rlimit
 	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
-		return err
+		return 0, err
 	}
 	// Try to update the limit to the max allowance
 	limit.Cur = limit.Max
@@ -38,9 +38,12 @@ func Raise(max uint64) error {
 		limit.Cur = int64(max)
 	}
 	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
-		return err
+		return 0, err
 	}
-	return nil
+	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
+		return 0, err
+	}
+	return limit.Cur, nil
 }

 // Current retrieves the number of file descriptors allowed to be opened by this


@@ -36,7 +36,7 @@ func TestFileDescriptorLimits(t *testing.T) {
 	if limit, err := Current(); err != nil || limit <= 0 {
 		t.Fatalf("failed to retrieve file descriptor limit (%d): %v", limit, err)
 	}
-	if err := Raise(uint64(target)); err != nil {
+	if _, err := Raise(uint64(target)); err != nil {
 		t.Fatalf("failed to raise file allowance")
 	}
 	if limit, err := Current(); err != nil || limit < target {


@@ -14,7 +14,7 @@
 // You should have received a copy of the GNU Lesser General Public License
 // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

-// +build linux darwin netbsd openbsd solaris
+// +build linux netbsd openbsd solaris

 package fdlimit
@@ -22,11 +22,12 @@ import "syscall"

 // Raise tries to maximize the file descriptor allowance of this process
 // to the maximum hard-limit allowed by the OS.
-func Raise(max uint64) error {
+// Returns the size it was set to (may differ from the desired 'max')
+func Raise(max uint64) (uint64, error) {
 	// Get the current limit
 	var limit syscall.Rlimit
 	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
-		return err
+		return 0, err
 	}
 	// Try to update the limit to the max allowance
 	limit.Cur = limit.Max
@@ -34,9 +35,13 @@ func Raise(max uint64) error {
 		limit.Cur = max
 	}
 	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
-		return err
+		return 0, err
 	}
-	return nil
+	// MacOS can silently apply further caps, so retrieve the actually set limit
+	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
+		return 0, err
+	}
+	return limit.Cur, nil
 }

 // Current retrieves the number of file descriptors allowed to be opened by this


@@ -16,28 +16,31 @@
 package fdlimit

-import "errors"
+import "fmt"

+// hardlimit is the number of file descriptors allowed at max by the kernel.
+const hardlimit = 16384

 // Raise tries to maximize the file descriptor allowance of this process
 // to the maximum hard-limit allowed by the OS.
-func Raise(max uint64) error {
+func Raise(max uint64) (uint64, error) {
 	// This method is NOP by design:
 	//  * Linux/Darwin counterparts need to manually increase per process limits
 	//  * On Windows Go uses the CreateFile API, which is limited to 16K files, non
 	//    changeable from within a running process
 	// This way we can always "request" raising the limits, which will either have
 	// or not have effect based on the platform we're running on.
-	if max > 16384 {
-		return errors.New("file descriptor limit (16384) reached")
+	if max > hardlimit {
+		return hardlimit, fmt.Errorf("file descriptor limit (%d) reached", hardlimit)
 	}
-	return nil
+	return max, nil
 }

 // Current retrieves the number of file descriptors allowed to be opened by this
 // process.
 func Current() (int, error) {
 	// Please see Raise for the reason why we use hard coded 16K as the limit
-	return 16384, nil
+	return hardlimit, nil
 }

 // Maximum retrieves the maximum number of file descriptors this process is
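The Windows `Raise` is a NOP by design, so its contract is purely arithmetic: requests within the 16K cap pass through, requests above it return the cap plus an error. A self-contained sketch reimplementing it outside the package:

```go
package main

import "fmt"

const hardlimit = 16384 // Windows CreateFile cap, as in the diff above

// raise mirrors fdlimit.Raise on Windows: nothing is actually changed, the
// request is only validated against the fixed cap.
func raise(max uint64) (uint64, error) {
	if max > hardlimit {
		return hardlimit, fmt.Errorf("file descriptor limit (%d) reached", hardlimit)
	}
	return max, nil
}

func main() {
	got, err := raise(1024)
	fmt.Println(got, err) // request within the cap is returned unchanged
	got, err = raise(20000)
	fmt.Println(got, err) // over the cap: capped value plus an error
}
```

Returning `hardlimit` (rather than 0) alongside the error matches the changed callers, which now divide the returned value to size their handle budget.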


@@ -279,7 +279,7 @@ func (c *Clique) verifyHeader(chain consensus.ChainReader, header *types.Header,
 	number := header.Number.Uint64()
 	// Don't waste time checking blocks from the future
-	if header.Time.Cmp(big.NewInt(time.Now().Unix())) > 0 {
+	if header.Time > uint64(time.Now().Unix()) {
 		return consensus.ErrFutureBlock
 	}
 	// Checkpoint blocks need to enforce zero beneficiary
@@ -351,7 +351,7 @@ func (c *Clique) verifyCascadingFields(chain consensus.ChainReader, header *type
 	if parent == nil || parent.Number.Uint64() != number-1 || parent.Hash() != header.ParentHash {
 		return consensus.ErrUnknownAncestor
 	}
-	if parent.Time.Uint64()+c.config.Period > header.Time.Uint64() {
+	if parent.Time+c.config.Period > header.Time {
 		return ErrInvalidTimestamp
 	}
 	// Retrieve the snapshot needed to verify this header and cache it
@@ -570,9 +570,9 @@ func (c *Clique) Prepare(chain consensus.ChainReader, header *types.Header) erro
 	if parent == nil {
 		return consensus.ErrUnknownAncestor
 	}
-	header.Time = new(big.Int).Add(parent.Time, new(big.Int).SetUint64(c.config.Period))
-	if header.Time.Int64() < time.Now().Unix() {
-		header.Time = big.NewInt(time.Now().Unix())
+	header.Time = parent.Time + c.config.Period
+	if header.Time < uint64(time.Now().Unix()) {
+		header.Time = uint64(time.Now().Unix())
 	}
 	return nil
 }
@@ -637,7 +637,7 @@ func (c *Clique) Seal(chain consensus.ChainReader, block *types.Block, results c
 		}
 	}
 	// Sweet, the protocol permits us to sign the block, wait for our time
-	delay := time.Unix(header.Time.Int64(), 0).Sub(time.Now()) // nolint: gosimple
+	delay := time.Unix(int64(header.Time), 0).Sub(time.Now()) // nolint: gosimple
 	if header.Difficulty.Cmp(diffNoTurn) == 0 {
 		// It's not our turn explicitly to sign, delay it a bit
 		wiggle := time.Duration(len(snap.Signers)/2+1) * wiggleTime


@@ -716,7 +716,7 @@ func TestConcurrentDiskCacheGeneration(t *testing.T) {
 		Difficulty: big.NewInt(167925187834220),
 		GasLimit:   4015682,
 		GasUsed:    0,
-		Time:       big.NewInt(1488928920),
+		Time:       1488928920,
 		Extra:      []byte("www.bw.com"),
 		MixDigest:  common.HexToHash("0x3e140b0784516af5e5ec6730f2fb20cca22f32be399b9e4ad77d32541f798cd0"),
 		Nonce:      types.EncodeNonce(0xf400cd0006070c49),


@@ -63,7 +63,6 @@ var (
 // codebase, inherently breaking if the engine is swapped out. Please put common
 // error types into the consensus package.
 var (
-	errLargeBlockTime    = errors.New("timestamp too big")
 	errZeroBlockTime     = errors.New("timestamp equals parent's")
 	errTooManyUncles     = errors.New("too many uncles")
 	errDuplicateUncle    = errors.New("duplicate uncle")
@@ -242,20 +241,16 @@ func (ethash *Ethash) verifyHeader(chain consensus.ChainReader, header, parent *
 		return fmt.Errorf("extra-data too long: %d > %d", len(header.Extra), params.MaximumExtraDataSize)
 	}
 	// Verify the header's timestamp
-	if uncle {
-		if header.Time.Cmp(math.MaxBig256) > 0 {
-			return errLargeBlockTime
-		}
-	} else {
-		if header.Time.Cmp(big.NewInt(time.Now().Add(allowedFutureBlockTime).Unix())) > 0 {
+	if !uncle {
+		if header.Time > uint64(time.Now().Add(allowedFutureBlockTime).Unix()) {
 			return consensus.ErrFutureBlock
 		}
 	}
-	if header.Time.Cmp(parent.Time) <= 0 {
+	if header.Time <= parent.Time {
 		return errZeroBlockTime
 	}
 	// Verify the block's difficulty based in it's timestamp and parent's difficulty
-	expected := ethash.CalcDifficulty(chain, header.Time.Uint64(), parent)
+	expected := ethash.CalcDifficulty(chain, header.Time, parent)
 	if expected.Cmp(header.Difficulty) != 0 {
 		return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, expected)
@@ -349,7 +344,7 @@ func makeDifficultyCalculator(bombDelay *big.Int) func(time uint64, parent *type
 	//        ) + 2^(periodCount - 2)
 	bigTime := new(big.Int).SetUint64(time)
-	bigParentTime := new(big.Int).Set(parent.Time)
+	bigParentTime := new(big.Int).SetUint64(parent.Time)
 	// holds intermediate values to make the algo easier to read & audit
 	x := new(big.Int)
@@ -408,7 +403,7 @@ func calcDifficultyHomestead(time uint64, parent *types.Header) *big.Int {
 	//        ) + 2^(periodCount - 2)
 	bigTime := new(big.Int).SetUint64(time)
-	bigParentTime := new(big.Int).Set(parent.Time)
+	bigParentTime := new(big.Int).SetUint64(parent.Time)
 	// holds intermediate values to make the algo easier to read & audit
 	x := new(big.Int)
@@ -456,7 +451,7 @@ func calcDifficultyFrontier(time uint64, parent *types.Header) *big.Int {
 	bigParentTime := new(big.Int)
 	bigTime.SetUint64(time)
-	bigParentTime.Set(parent.Time)
+	bigParentTime.SetUint64(parent.Time)
 	if bigTime.Sub(bigTime, bigParentTime).Cmp(params.DurationLimit) < 0 {
 		diff.Add(parent.Difficulty, adjust)
@@ -558,7 +553,7 @@ func (ethash *Ethash) Prepare(chain consensus.ChainReader, header *types.Header)
 	if parent == nil {
 		return consensus.ErrUnknownAncestor
 	}
-	header.Difficulty = ethash.CalcDifficulty(chain, header.Time.Uint64(), parent)
+	header.Difficulty = ethash.CalcDifficulty(chain, header.Time, parent)
 	return nil
 }


@@ -76,7 +76,7 @@ func TestCalcDifficulty(t *testing.T) {
 		number := new(big.Int).Sub(test.CurrentBlocknumber, big.NewInt(1))
 		diff := CalcDifficulty(config, test.CurrentTimestamp, &types.Header{
 			Number:     number,
-			Time:       new(big.Int).SetUint64(test.ParentTimestamp),
+			Time:       test.ParentTimestamp,
 			Difficulty: test.ParentDifficulty,
 		})
 		if diff.Cmp(test.CurrentDifficulty) != 0 {


@@ -27,40 +27,40 @@ const Version = "1.0"

 var errNoChequebook = errors.New("no chequebook")

-type Api struct {
+type API struct {
 	chequebookf func() *Chequebook
 }

-func NewApi(ch func() *Chequebook) *Api {
-	return &Api{ch}
+func NewAPI(ch func() *Chequebook) *API {
+	return &API{ch}
 }

-func (self *Api) Balance() (string, error) {
-	ch := self.chequebookf()
+func (a *API) Balance() (string, error) {
+	ch := a.chequebookf()
 	if ch == nil {
 		return "", errNoChequebook
 	}
 	return ch.Balance().String(), nil
 }

-func (self *Api) Issue(beneficiary common.Address, amount *big.Int) (cheque *Cheque, err error) {
-	ch := self.chequebookf()
+func (a *API) Issue(beneficiary common.Address, amount *big.Int) (cheque *Cheque, err error) {
+	ch := a.chequebookf()
 	if ch == nil {
 		return nil, errNoChequebook
 	}
 	return ch.Issue(beneficiary, amount)
 }

-func (self *Api) Cash(cheque *Cheque) (txhash string, err error) {
-	ch := self.chequebookf()
+func (a *API) Cash(cheque *Cheque) (txhash string, err error) {
+	ch := a.chequebookf()
 	if ch == nil {
 		return "", errNoChequebook
 	}
 	return ch.Cash(cheque)
 }

-func (self *Api) Deposit(amount *big.Int) (txhash string, err error) {
-	ch := self.chequebookf()
+func (a *API) Deposit(amount *big.Int) (txhash string, err error) {
+	ch := a.chequebookf()
 	if ch == nil {
 		return "", errNoChequebook
 	}


@@ -75,8 +75,8 @@ type Cheque struct {
 	Sig         []byte // signature Sign(Keccak256(contract, beneficiary, amount), prvKey)
 }
-func (self *Cheque) String() string {
-	return fmt.Sprintf("contract: %s, beneficiary: %s, amount: %v, signature: %x", self.Contract.Hex(), self.Beneficiary.Hex(), self.Amount, self.Sig)
+func (ch *Cheque) String() string {
+	return fmt.Sprintf("contract: %s, beneficiary: %s, amount: %v, signature: %x", ch.Contract.Hex(), ch.Beneficiary.Hex(), ch.Amount, ch.Sig)
 }
 type Params struct {
@@ -109,8 +109,8 @@ type Chequebook struct {
 	log log.Logger // contextual logger with the contract address embedded
 }
-func (self *Chequebook) String() string {
-	return fmt.Sprintf("contract: %s, owner: %s, balance: %v, signer: %x", self.contractAddr.Hex(), self.owner.Hex(), self.balance, self.prvKey.PublicKey)
+func (chbook *Chequebook) String() string {
+	return fmt.Sprintf("contract: %s, owner: %s, balance: %v, signer: %x", chbook.contractAddr.Hex(), chbook.owner.Hex(), chbook.balance, chbook.prvKey.PublicKey)
 }
 // NewChequebook creates a new Chequebook.
@@ -148,12 +148,12 @@ func NewChequebook(path string, contractAddr common.Address, prvKey *ecdsa.Priva
 	return
 }
-func (self *Chequebook) setBalanceFromBlockChain() {
-	balance, err := self.backend.BalanceAt(context.TODO(), self.contractAddr, nil)
+func (chbook *Chequebook) setBalanceFromBlockChain() {
+	balance, err := chbook.backend.BalanceAt(context.TODO(), chbook.contractAddr, nil)
 	if err != nil {
 		log.Error("Failed to retrieve chequebook balance", "err", err)
 	} else {
-		self.balance.Set(balance)
+		chbook.balance.Set(balance)
 	}
 }
@@ -187,19 +187,19 @@ type chequebookFile struct {
 }
 // UnmarshalJSON deserialises a chequebook.
-func (self *Chequebook) UnmarshalJSON(data []byte) error {
+func (chbook *Chequebook) UnmarshalJSON(data []byte) error {
 	var file chequebookFile
 	err := json.Unmarshal(data, &file)
 	if err != nil {
 		return err
 	}
-	_, ok := self.balance.SetString(file.Balance, 10)
+	_, ok := chbook.balance.SetString(file.Balance, 10)
 	if !ok {
 		return fmt.Errorf("cumulative amount sent: unable to convert string to big integer: %v", file.Balance)
 	}
-	self.contractAddr = common.HexToAddress(file.Contract)
+	chbook.contractAddr = common.HexToAddress(file.Contract)
 	for addr, sent := range file.Sent {
-		self.sent[common.HexToAddress(addr)], ok = new(big.Int).SetString(sent, 10)
+		chbook.sent[common.HexToAddress(addr)], ok = new(big.Int).SetString(sent, 10)
 		if !ok {
 			return fmt.Errorf("beneficiary %v cumulative amount sent: unable to convert string to big integer: %v", addr, sent)
 		}
@@ -208,14 +208,14 @@ func (self *Chequebook) UnmarshalJSON(data []byte) error {
 }
 // MarshalJSON serialises a chequebook.
-func (self *Chequebook) MarshalJSON() ([]byte, error) {
+func (chbook *Chequebook) MarshalJSON() ([]byte, error) {
 	var file = &chequebookFile{
-		Balance:  self.balance.String(),
-		Contract: self.contractAddr.Hex(),
-		Owner:    self.owner.Hex(),
+		Balance:  chbook.balance.String(),
+		Contract: chbook.contractAddr.Hex(),
+		Owner:    chbook.owner.Hex(),
 		Sent:     make(map[string]string),
 	}
-	for addr, sent := range self.sent {
+	for addr, sent := range chbook.sent {
 		file.Sent[addr.Hex()] = sent.String()
 	}
 	return json.Marshal(file)
@@ -223,67 +223,67 @@ func (self *Chequebook) MarshalJSON() ([]byte, error) {
 // Save persists the chequebook on disk, remembering balance, contract address and
 // cumulative amount of funds sent for each beneficiary.
-func (self *Chequebook) Save() (err error) {
-	data, err := json.MarshalIndent(self, "", " ")
+func (chbook *Chequebook) Save() (err error) {
+	data, err := json.MarshalIndent(chbook, "", " ")
 	if err != nil {
 		return err
 	}
-	self.log.Trace("Saving chequebook to disk", self.path)
-	return ioutil.WriteFile(self.path, data, os.ModePerm)
+	chbook.log.Trace("Saving chequebook to disk", chbook.path)
+	return ioutil.WriteFile(chbook.path, data, os.ModePerm)
 }
 // Stop quits the autodeposit go routine to terminate
-func (self *Chequebook) Stop() {
-	defer self.lock.Unlock()
-	self.lock.Lock()
-	if self.quit != nil {
-		close(self.quit)
-		self.quit = nil
+func (chbook *Chequebook) Stop() {
+	defer chbook.lock.Unlock()
+	chbook.lock.Lock()
+	if chbook.quit != nil {
+		close(chbook.quit)
+		chbook.quit = nil
 	}
 }
 // Issue creates a cheque signed by the chequebook owner's private key. The
 // signer commits to a contract (one that they own), a beneficiary and amount.
-func (self *Chequebook) Issue(beneficiary common.Address, amount *big.Int) (ch *Cheque, err error) {
-	defer self.lock.Unlock()
-	self.lock.Lock()
+func (chbook *Chequebook) Issue(beneficiary common.Address, amount *big.Int) (ch *Cheque, err error) {
+	defer chbook.lock.Unlock()
+	chbook.lock.Lock()
 	if amount.Sign() <= 0 {
 		return nil, fmt.Errorf("amount must be greater than zero (%v)", amount)
 	}
-	if self.balance.Cmp(amount) < 0 {
-		err = fmt.Errorf("insufficient funds to issue cheque for amount: %v. balance: %v", amount, self.balance)
+	if chbook.balance.Cmp(amount) < 0 {
+		err = fmt.Errorf("insufficient funds to issue cheque for amount: %v. balance: %v", amount, chbook.balance)
 	} else {
 		var sig []byte
-		sent, found := self.sent[beneficiary]
+		sent, found := chbook.sent[beneficiary]
 		if !found {
 			sent = new(big.Int)
-			self.sent[beneficiary] = sent
+			chbook.sent[beneficiary] = sent
 		}
 		sum := new(big.Int).Set(sent)
 		sum.Add(sum, amount)
-		sig, err = crypto.Sign(sigHash(self.contractAddr, beneficiary, sum), self.prvKey)
+		sig, err = crypto.Sign(sigHash(chbook.contractAddr, beneficiary, sum), chbook.prvKey)
 		if err == nil {
 			ch = &Cheque{
-				Contract:    self.contractAddr,
+				Contract:    chbook.contractAddr,
 				Beneficiary: beneficiary,
 				Amount:      sum,
 				Sig:         sig,
 			}
 			sent.Set(sum)
-			self.balance.Sub(self.balance, amount) // subtract amount from balance
+			chbook.balance.Sub(chbook.balance, amount) // subtract amount from balance
 		}
 	}
 	// auto deposit if threshold is set and balance is less then threshold
 	// note this is called even if issuing cheque fails
 	// so we reattempt depositing
-	if self.threshold != nil {
-		if self.balance.Cmp(self.threshold) < 0 {
-			send := new(big.Int).Sub(self.buffer, self.balance)
-			self.deposit(send)
+	if chbook.threshold != nil {
+		if chbook.balance.Cmp(chbook.threshold) < 0 {
+			send := new(big.Int).Sub(chbook.buffer, chbook.balance)
+			chbook.deposit(send)
 		}
 	}
@@ -291,8 +291,8 @@ func (self *Chequebook) Issue(beneficiary common.Address, amount *big.Int) (ch *
 }
 // Cash is a convenience method to cash any cheque.
-func (self *Chequebook) Cash(ch *Cheque) (txhash string, err error) {
-	return ch.Cash(self.session)
+func (chbook *Chequebook) Cash(ch *Cheque) (txhash string, err error) {
+	return ch.Cash(chbook.session)
 }
 // data to sign: contract address, beneficiary, cumulative amount of funds ever sent
@@ -309,73 +309,73 @@ func sigHash(contract, beneficiary common.Address, sum *big.Int) []byte {
 }
 // Balance returns the current balance of the chequebook.
-func (self *Chequebook) Balance() *big.Int {
-	defer self.lock.Unlock()
-	self.lock.Lock()
-	return new(big.Int).Set(self.balance)
+func (chbook *Chequebook) Balance() *big.Int {
+	defer chbook.lock.Unlock()
+	chbook.lock.Lock()
+	return new(big.Int).Set(chbook.balance)
 }
 // Owner returns the owner account of the chequebook.
-func (self *Chequebook) Owner() common.Address {
-	return self.owner
+func (chbook *Chequebook) Owner() common.Address {
+	return chbook.owner
 }
 // Address returns the on-chain contract address of the chequebook.
-func (self *Chequebook) Address() common.Address {
-	return self.contractAddr
+func (chbook *Chequebook) Address() common.Address {
+	return chbook.contractAddr
 }
 // Deposit deposits money to the chequebook account.
-func (self *Chequebook) Deposit(amount *big.Int) (string, error) {
-	defer self.lock.Unlock()
-	self.lock.Lock()
-	return self.deposit(amount)
+func (chbook *Chequebook) Deposit(amount *big.Int) (string, error) {
+	defer chbook.lock.Unlock()
+	chbook.lock.Lock()
+	return chbook.deposit(amount)
 }
 // deposit deposits amount to the chequebook account.
 // The caller must hold self.lock.
-func (self *Chequebook) deposit(amount *big.Int) (string, error) {
+func (chbook *Chequebook) deposit(amount *big.Int) (string, error) {
 	// since the amount is variable here, we do not use sessions
-	depositTransactor := bind.NewKeyedTransactor(self.prvKey)
+	depositTransactor := bind.NewKeyedTransactor(chbook.prvKey)
 	depositTransactor.Value = amount
-	chbookRaw := &contract.ChequebookRaw{Contract: self.contract}
+	chbookRaw := &contract.ChequebookRaw{Contract: chbook.contract}
 	tx, err := chbookRaw.Transfer(depositTransactor)
 	if err != nil {
-		self.log.Warn("Failed to fund chequebook", "amount", amount, "balance", self.balance, "target", self.buffer, "err", err)
+		chbook.log.Warn("Failed to fund chequebook", "amount", amount, "balance", chbook.balance, "target", chbook.buffer, "err", err)
 		return "", err
 	}
 	// assume that transaction is actually successful, we add the amount to balance right away
-	self.balance.Add(self.balance, amount)
-	self.log.Trace("Deposited funds to chequebook", "amount", amount, "balance", self.balance, "target", self.buffer)
+	chbook.balance.Add(chbook.balance, amount)
+	chbook.log.Trace("Deposited funds to chequebook", "amount", amount, "balance", chbook.balance, "target", chbook.buffer)
 	return tx.Hash().Hex(), nil
 }
 // AutoDeposit (re)sets interval time and amount which triggers sending funds to the
 // chequebook. Contract backend needs to be set if threshold is not less than buffer, then
 // deposit will be triggered on every new cheque issued.
-func (self *Chequebook) AutoDeposit(interval time.Duration, threshold, buffer *big.Int) {
-	defer self.lock.Unlock()
-	self.lock.Lock()
-	self.threshold = threshold
-	self.buffer = buffer
-	self.autoDeposit(interval)
+func (chbook *Chequebook) AutoDeposit(interval time.Duration, threshold, buffer *big.Int) {
+	defer chbook.lock.Unlock()
+	chbook.lock.Lock()
+	chbook.threshold = threshold
+	chbook.buffer = buffer
+	chbook.autoDeposit(interval)
 }
 // autoDeposit starts a goroutine that periodically sends funds to the chequebook
 // contract caller holds the lock the go routine terminates if Chequebook.quit is closed.
-func (self *Chequebook) autoDeposit(interval time.Duration) {
-	if self.quit != nil {
-		close(self.quit)
-		self.quit = nil
+func (chbook *Chequebook) autoDeposit(interval time.Duration) {
+	if chbook.quit != nil {
+		close(chbook.quit)
+		chbook.quit = nil
 	}
 	// if threshold >= balance autodeposit after every cheque issued
-	if interval == time.Duration(0) || self.threshold != nil && self.buffer != nil && self.threshold.Cmp(self.buffer) >= 0 {
+	if interval == time.Duration(0) || chbook.threshold != nil && chbook.buffer != nil && chbook.threshold.Cmp(chbook.buffer) >= 0 {
 		return
 	}
 	ticker := time.NewTicker(interval)
-	self.quit = make(chan bool)
-	quit := self.quit
+	chbook.quit = make(chan bool)
+	quit := chbook.quit
 	go func() {
 		for {
@@ -383,15 +383,15 @@ func (self *Chequebook) autoDeposit(interval time.Duration) {
 			case <-quit:
 				return
 			case <-ticker.C:
-				self.lock.Lock()
-				if self.balance.Cmp(self.buffer) < 0 {
-					amount := new(big.Int).Sub(self.buffer, self.balance)
-					txhash, err := self.deposit(amount)
+				chbook.lock.Lock()
+				if chbook.balance.Cmp(chbook.buffer) < 0 {
+					amount := new(big.Int).Sub(chbook.buffer, chbook.balance)
+					txhash, err := chbook.deposit(amount)
 					if err == nil {
-						self.txhash = txhash
+						chbook.txhash = txhash
 					}
 				}
-				self.lock.Unlock()
+				chbook.lock.Unlock()
 			}
 		}
 	}()
@@ -409,21 +409,21 @@ func NewOutbox(chbook *Chequebook, beneficiary common.Address) *Outbox {
 }
 // Issue creates cheque.
-func (self *Outbox) Issue(amount *big.Int) (swap.Promise, error) {
-	return self.chequeBook.Issue(self.beneficiary, amount)
+func (o *Outbox) Issue(amount *big.Int) (swap.Promise, error) {
+	return o.chequeBook.Issue(o.beneficiary, amount)
 }
 // AutoDeposit enables auto-deposits on the underlying chequebook.
-func (self *Outbox) AutoDeposit(interval time.Duration, threshold, buffer *big.Int) {
-	self.chequeBook.AutoDeposit(interval, threshold, buffer)
+func (o *Outbox) AutoDeposit(interval time.Duration, threshold, buffer *big.Int) {
+	o.chequeBook.AutoDeposit(interval, threshold, buffer)
 }
 // Stop helps satisfy the swap.OutPayment interface.
-func (self *Outbox) Stop() {}
+func (o *Outbox) Stop() {}
 // String implements fmt.Stringer.
-func (self *Outbox) String() string {
-	return fmt.Sprintf("chequebook: %v, beneficiary: %s, balance: %v", self.chequeBook.Address().Hex(), self.beneficiary.Hex(), self.chequeBook.Balance())
+func (o *Outbox) String() string {
+	return fmt.Sprintf("chequebook: %v, beneficiary: %s, balance: %v", o.chequeBook.Address().Hex(), o.beneficiary.Hex(), o.chequeBook.Balance())
 }
 // Inbox can deposit, verify and cash cheques from a single contract to a single
@@ -474,55 +474,55 @@ func NewInbox(prvKey *ecdsa.PrivateKey, contractAddr, beneficiary common.Address
 	return
 }
-func (self *Inbox) String() string {
-	return fmt.Sprintf("chequebook: %v, beneficiary: %s, balance: %v", self.contract.Hex(), self.beneficiary.Hex(), self.cheque.Amount)
+func (i *Inbox) String() string {
+	return fmt.Sprintf("chequebook: %v, beneficiary: %s, balance: %v", i.contract.Hex(), i.beneficiary.Hex(), i.cheque.Amount)
 }
 // Stop quits the autocash goroutine.
-func (self *Inbox) Stop() {
-	defer self.lock.Unlock()
-	self.lock.Lock()
-	if self.quit != nil {
-		close(self.quit)
-		self.quit = nil
+func (i *Inbox) Stop() {
+	defer i.lock.Unlock()
+	i.lock.Lock()
+	if i.quit != nil {
+		close(i.quit)
+		i.quit = nil
 	}
 }
 // Cash attempts to cash the current cheque.
-func (self *Inbox) Cash() (txhash string, err error) {
-	if self.cheque != nil {
-		txhash, err = self.cheque.Cash(self.session)
-		self.log.Trace("Cashing in chequebook cheque", "amount", self.cheque.Amount, "beneficiary", self.beneficiary)
-		self.cashed = self.cheque.Amount
+func (i *Inbox) Cash() (txhash string, err error) {
+	if i.cheque != nil {
+		txhash, err = i.cheque.Cash(i.session)
+		i.log.Trace("Cashing in chequebook cheque", "amount", i.cheque.Amount, "beneficiary", i.beneficiary)
+		i.cashed = i.cheque.Amount
 	}
 	return
 }
 // AutoCash (re)sets maximum time and amount which triggers cashing of the last uncashed
 // cheque if maxUncashed is set to 0, then autocash on receipt.
-func (self *Inbox) AutoCash(cashInterval time.Duration, maxUncashed *big.Int) {
-	defer self.lock.Unlock()
-	self.lock.Lock()
-	self.maxUncashed = maxUncashed
-	self.autoCash(cashInterval)
+func (i *Inbox) AutoCash(cashInterval time.Duration, maxUncashed *big.Int) {
+	defer i.lock.Unlock()
+	i.lock.Lock()
+	i.maxUncashed = maxUncashed
+	i.autoCash(cashInterval)
 }
 // autoCash starts a loop that periodically clears the last cheque
 // if the peer is trusted. Clearing period could be 24h or a week.
 // The caller must hold self.lock.
-func (self *Inbox) autoCash(cashInterval time.Duration) {
-	if self.quit != nil {
-		close(self.quit)
-		self.quit = nil
+func (i *Inbox) autoCash(cashInterval time.Duration) {
+	if i.quit != nil {
+		close(i.quit)
+		i.quit = nil
 	}
 	// if maxUncashed is set to 0, then autocash on receipt
-	if cashInterval == time.Duration(0) || self.maxUncashed != nil && self.maxUncashed.Sign() == 0 {
+	if cashInterval == time.Duration(0) || i.maxUncashed != nil && i.maxUncashed.Sign() == 0 {
 		return
 	}
 	ticker := time.NewTicker(cashInterval)
-	self.quit = make(chan bool)
-	quit := self.quit
+	i.quit = make(chan bool)
+	quit := i.quit
 	go func() {
 		for {
@@ -530,14 +530,14 @@ func (self *Inbox) autoCash(cashInterval time.Duration) {
 			case <-quit:
 				return
 			case <-ticker.C:
-				self.lock.Lock()
-				if self.cheque != nil && self.cheque.Amount.Cmp(self.cashed) != 0 {
-					txhash, err := self.Cash()
+				i.lock.Lock()
+				if i.cheque != nil && i.cheque.Amount.Cmp(i.cashed) != 0 {
+					txhash, err := i.Cash()
 					if err == nil {
-						self.txhash = txhash
+						i.txhash = txhash
 					}
 				}
-				self.lock.Unlock()
+				i.lock.Unlock()
 			}
 		}
 	}()
@@ -545,56 +545,56 @@ func (self *Inbox) autoCash(cashInterval time.Duration) {
 // Receive is called to deposit the latest cheque to the incoming Inbox.
 // The given promise must be a *Cheque.
-func (self *Inbox) Receive(promise swap.Promise) (*big.Int, error) {
+func (i *Inbox) Receive(promise swap.Promise) (*big.Int, error) {
 	ch := promise.(*Cheque)
-	defer self.lock.Unlock()
-	self.lock.Lock()
+	defer i.lock.Unlock()
+	i.lock.Lock()
 	var sum *big.Int
-	if self.cheque == nil {
+	if i.cheque == nil {
 		// the sum is checked against the blockchain once a cheque is received
-		tally, err := self.session.Sent(self.beneficiary)
+		tally, err := i.session.Sent(i.beneficiary)
 		if err != nil {
 			return nil, fmt.Errorf("inbox: error calling backend to set amount: %v", err)
 		}
 		sum = tally
 	} else {
-		sum = self.cheque.Amount
+		sum = i.cheque.Amount
 	}
-	amount, err := ch.Verify(self.signer, self.contract, self.beneficiary, sum)
+	amount, err := ch.Verify(i.signer, i.contract, i.beneficiary, sum)
 	var uncashed *big.Int
 	if err == nil {
-		self.cheque = ch
-		if self.maxUncashed != nil {
-			uncashed = new(big.Int).Sub(ch.Amount, self.cashed)
-			if self.maxUncashed.Cmp(uncashed) < 0 {
-				self.Cash()
+		i.cheque = ch
+		if i.maxUncashed != nil {
+			uncashed = new(big.Int).Sub(ch.Amount, i.cashed)
+			if i.maxUncashed.Cmp(uncashed) < 0 {
+				i.Cash()
 			}
 		}
-		self.log.Trace("Received cheque in chequebook inbox", "amount", amount, "uncashed", uncashed)
+		i.log.Trace("Received cheque in chequebook inbox", "amount", amount, "uncashed", uncashed)
 	}
 	return amount, err
 }
 // Verify verifies cheque for signer, contract, beneficiary, amount, valid signature.
-func (self *Cheque) Verify(signerKey *ecdsa.PublicKey, contract, beneficiary common.Address, sum *big.Int) (*big.Int, error) {
-	log.Trace("Verifying chequebook cheque", "cheque", self, "sum", sum)
+func (ch *Cheque) Verify(signerKey *ecdsa.PublicKey, contract, beneficiary common.Address, sum *big.Int) (*big.Int, error) {
+	log.Trace("Verifying chequebook cheque", "cheque", ch, "sum", sum)
 	if sum == nil {
 		return nil, fmt.Errorf("invalid amount")
 	}
-	if self.Beneficiary != beneficiary {
-		return nil, fmt.Errorf("beneficiary mismatch: %v != %v", self.Beneficiary.Hex(), beneficiary.Hex())
+	if ch.Beneficiary != beneficiary {
+		return nil, fmt.Errorf("beneficiary mismatch: %v != %v", ch.Beneficiary.Hex(), beneficiary.Hex())
 	}
-	if self.Contract != contract {
-		return nil, fmt.Errorf("contract mismatch: %v != %v", self.Contract.Hex(), contract.Hex())
+	if ch.Contract != contract {
+		return nil, fmt.Errorf("contract mismatch: %v != %v", ch.Contract.Hex(), contract.Hex())
 	}
-	amount := new(big.Int).Set(self.Amount)
+	amount := new(big.Int).Set(ch.Amount)
 	if sum != nil {
 		amount.Sub(amount, sum)
 		if amount.Sign() <= 0 {
@@ -602,7 +602,7 @@ func (self *Cheque) Verify(signerKey *ecdsa.PublicKey, contract, beneficiary com
 		}
 	}
-	pubKey, err := crypto.SigToPub(sigHash(self.Contract, beneficiary, self.Amount), self.Sig)
+	pubKey, err := crypto.SigToPub(sigHash(ch.Contract, beneficiary, ch.Amount), ch.Sig)
 	if err != nil {
 		return nil, fmt.Errorf("invalid signature: %v", err)
 	}
@@ -621,9 +621,9 @@ func sig2vrs(sig []byte) (v byte, r, s [32]byte) {
 }
 // Cash cashes the cheque by sending an Ethereum transaction.
-func (self *Cheque) Cash(session *contract.ChequebookSession) (string, error) {
-	v, r, s := sig2vrs(self.Sig)
-	tx, err := session.Cash(self.Beneficiary, self.Amount, v, r, s)
+func (ch *Cheque) Cash(session *contract.ChequebookSession) (string, error) {
+	v, r, s := sig2vrs(ch.Sig)
+	tx, err := session.Cash(ch.Beneficiary, ch.Amount, v, r, s)
 	if err != nil {
 		return "", err
 	}

View File

@@ -205,22 +205,22 @@ func (_Chequebook *ChequebookCallerSession) Sent(arg0 common.Address) (*big.Int,
 // Cash is a paid mutator transaction binding the contract method 0xfbf788d6.
 //
 // Solidity: function cash(beneficiary address, amount uint256, sig_v uint8, sig_r bytes32, sig_s bytes32) returns()
-func (_Chequebook *ChequebookTransactor) Cash(opts *bind.TransactOpts, beneficiary common.Address, amount *big.Int, sig_v uint8, sig_r [32]byte, sig_s [32]byte) (*types.Transaction, error) {
-	return _Chequebook.contract.Transact(opts, "cash", beneficiary, amount, sig_v, sig_r, sig_s)
+func (_Chequebook *ChequebookTransactor) Cash(opts *bind.TransactOpts, beneficiary common.Address, amount *big.Int, sigV uint8, sigR [32]byte, sigS [32]byte) (*types.Transaction, error) {
+	return _Chequebook.contract.Transact(opts, "cash", beneficiary, amount, sigV, sigR, sigS)
 }
 // Cash is a paid mutator transaction binding the contract method 0xfbf788d6.
 //
 // Solidity: function cash(beneficiary address, amount uint256, sig_v uint8, sig_r bytes32, sig_s bytes32) returns()
-func (_Chequebook *ChequebookSession) Cash(beneficiary common.Address, amount *big.Int, sig_v uint8, sig_r [32]byte, sig_s [32]byte) (*types.Transaction, error) {
-	return _Chequebook.Contract.Cash(&_Chequebook.TransactOpts, beneficiary, amount, sig_v, sig_r, sig_s)
+func (_Chequebook *ChequebookSession) Cash(beneficiary common.Address, amount *big.Int, sigV uint8, sigR [32]byte, sigS [32]byte) (*types.Transaction, error) {
+	return _Chequebook.Contract.Cash(&_Chequebook.TransactOpts, beneficiary, amount, sigV, sigR, sigS)
 }
 // Cash is a paid mutator transaction binding the contract method 0xfbf788d6.
 //
 // Solidity: function cash(beneficiary address, amount uint256, sig_v uint8, sig_r bytes32, sig_s bytes32) returns()
-func (_Chequebook *ChequebookTransactorSession) Cash(beneficiary common.Address, amount *big.Int, sig_v uint8, sig_r [32]byte, sig_s [32]byte) (*types.Transaction, error) {
-	return _Chequebook.Contract.Cash(&_Chequebook.TransactOpts, beneficiary, amount, sig_v, sig_r, sig_s)
+func (_Chequebook *ChequebookTransactorSession) Cash(beneficiary common.Address, amount *big.Int, sigV uint8, sigR [32]byte, sigS [32]byte) (*types.Transaction, error) {
+	return _Chequebook.Contract.Cash(&_Chequebook.TransactOpts, beneficiary, amount, sigV, sigR, sigS)
 }
 // Kill is a paid mutator transaction binding the contract method 0x41c0e1b5.
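The three `Cash` bindings above all take the signature pre-split into `sigV`, `sigR`, `sigS`, which is the job of the `sig2vrs` helper the cheque code calls before invoking them. A standalone sketch of that split, assuming the usual 65-byte `[R || S || V]` signature layout and the +27 recovery-id convention (this is an illustrative reconstruction, not the exact geth helper):

```go
package main

import "fmt"

// sig2vrs splits a 65-byte [R || S || V] signature into the v, r, s values
// the cash(...) contract method expects. The +27 adjustment is the common
// Ethereum convention for the recovery id.
func sig2vrs(sig []byte) (v byte, r, s [32]byte, err error) {
	if len(sig) != 65 {
		return 0, r, s, fmt.Errorf("signature must be 65 bytes, got %d", len(sig))
	}
	copy(r[:], sig[:32])   // first 32 bytes: R
	copy(s[:], sig[32:64]) // next 32 bytes: S
	v = sig[64] + 27       // final byte: recovery id, shifted to 27/28
	return v, r, s, nil
}

func main() {
	sig := make([]byte, 65)
	sig[0], sig[32], sig[64] = 0xaa, 0xbb, 1 // fake signature bytes
	v, r, s, _ := sig2vrs(sig)
	fmt.Printf("v=%d r[0]=%#x s[0]=%#x\n", v, r[0], s[0]) // v=28 r[0]=0xaa s[0]=0xbb
}
```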

View File

@@ -227,10 +227,10 @@ func (_ENS *ENSCallerSession) Resolver(node [32]byte) (common.Address, error) {
 	return _ENS.Contract.Resolver(&_ENS.CallOpts, node)
 }
-// Ttl is a free data retrieval call binding the contract method 0x16a25cbd.
+// TTL is a free data retrieval call binding the contract method 0x16a25cbd.
 //
 // Solidity: function ttl(node bytes32) constant returns(uint64)
-func (_ENS *ENSCaller) Ttl(opts *bind.CallOpts, node [32]byte) (uint64, error) {
+func (_ENS *ENSCaller) TTL(opts *bind.CallOpts, node [32]byte) (uint64, error) {
 	var (
 		ret0 = new(uint64)
 	)
@@ -239,18 +239,18 @@ func (_ENS *ENSCaller) Ttl(opts *bind.CallOpts, node [32]byte) (uint64, error) {
 	return *ret0, err
 }
-// Ttl is a free data retrieval call binding the contract method 0x16a25cbd.
+// TTL is a free data retrieval call binding the contract method 0x16a25cbd.
 //
 // Solidity: function ttl(node bytes32) constant returns(uint64)
-func (_ENS *ENSSession) Ttl(node [32]byte) (uint64, error) {
-	return _ENS.Contract.Ttl(&_ENS.CallOpts, node)
+func (_ENS *ENSSession) TTL(node [32]byte) (uint64, error) {
+	return _ENS.Contract.TTL(&_ENS.CallOpts, node)
 }
-// Ttl is a free data retrieval call binding the contract method 0x16a25cbd.
+// TTL is a free data retrieval call binding the contract method 0x16a25cbd.
 //
 // Solidity: function ttl(node bytes32) constant returns(uint64)
-func (_ENS *ENSCallerSession) Ttl(node [32]byte) (uint64, error) {
-	return _ENS.Contract.Ttl(&_ENS.CallOpts, node)
+func (_ENS *ENSCallerSession) TTL(node [32]byte) (uint64, error) {
+	return _ENS.Contract.TTL(&_ENS.CallOpts, node)
 }
 // SetOwner is a paid mutator transaction binding the contract method 0x5b0fc9c3.
@@ -682,7 +682,7 @@ func (it *ENSNewTTLIterator) Close() error {
 // ENSNewTTL represents a NewTTL event raised by the ENS contract.
 type ENSNewTTL struct {
 	Node [32]byte
-	Ttl  uint64
+	TTL  uint64
 	Raw  types.Log // Blockchain specific contextual infos
 }

View File

@@ -35,7 +35,7 @@ var (
 	TestNetAddress = common.HexToAddress("0x112234455c3a32fd11230c42e7bccd4a84e02010")
 )
-// swarm domain name registry and resolver
+// ENS is the swarm domain name registry and resolver
 type ENS struct {
 	*contract.ENSSession
 	contractBackend bind.ContractBackend
@@ -48,7 +48,6 @@ func NewENS(transactOpts *bind.TransactOpts, contractAddr common.Address, contra
 	if err != nil {
 		return nil, err
 	}
 	return &ENS{
 		&contract.ENSSession{
 			Contract: ens,
@@ -60,27 +59,24 @@ func NewENS(transactOpts *bind.TransactOpts, contractAddr common.Address, contra
 // DeployENS deploys an instance of the ENS nameservice, with a 'first-in, first-served' root registrar.
 func DeployENS(transactOpts *bind.TransactOpts, contractBackend bind.ContractBackend) (common.Address, *ENS, error) {
-	// Deploy the ENS registry.
+	// Deploy the ENS registry
 	ensAddr, _, _, err := contract.DeployENS(transactOpts, contractBackend)
 	if err != nil {
 		return ensAddr, nil, err
 	}
 	ens, err := NewENS(transactOpts, ensAddr, contractBackend)
 	if err != nil {
 		return ensAddr, nil, err
 	}
-	// Deploy the registrar.
+	// Deploy the registrar
 	regAddr, _, _, err := contract.DeployFIFSRegistrar(transactOpts, contractBackend, ensAddr, [32]byte{})
 	if err != nil {
 		return ensAddr, nil, err
 	}
-	// Set the registrar as owner of the ENS root.
+	// Set the registrar as owner of the ENS root
 	if _, err = ens.SetOwner([32]byte{}, regAddr); err != nil {
 		return ensAddr, nil, err
 	}
 	return ensAddr, ens, nil
 }
@@ -89,10 +85,9 @@ func ensParentNode(name string) (common.Hash, common.Hash) {
 	label := crypto.Keccak256Hash([]byte(parts[0]))
 	if len(parts) == 1 {
 		return [32]byte{}, label
-	} else {
-		parentNode, parentLabel := ensParentNode(parts[1])
-		return crypto.Keccak256Hash(parentNode[:], parentLabel[:]), label
 	}
+	parentNode, parentLabel := ensParentNode(parts[1])
+	return crypto.Keccak256Hash(parentNode[:], parentLabel[:]), label
 }
 func EnsNode(name string) common.Hash {
@@ -100,111 +95,101 @@ func EnsNode(name string) common.Hash {
return crypto.Keccak256Hash(parentNode[:], parentLabel[:]) return crypto.Keccak256Hash(parentNode[:], parentLabel[:])
} }
func (self *ENS) getResolver(node [32]byte) (*contract.PublicResolverSession, error) { func (ens *ENS) getResolver(node [32]byte) (*contract.PublicResolverSession, error) {
resolverAddr, err := self.Resolver(node) resolverAddr, err := ens.Resolver(node)
if err != nil { if err != nil {
return nil, err return nil, err
} }
resolver, err := contract.NewPublicResolver(resolverAddr, ens.contractBackend)
resolver, err := contract.NewPublicResolver(resolverAddr, self.contractBackend)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return &contract.PublicResolverSession{ return &contract.PublicResolverSession{
Contract: resolver, Contract: resolver,
TransactOpts: self.TransactOpts, TransactOpts: ens.TransactOpts,
}, nil }, nil
} }
func (self *ENS) getRegistrar(node [32]byte) (*contract.FIFSRegistrarSession, error) { func (ens *ENS) getRegistrar(node [32]byte) (*contract.FIFSRegistrarSession, error) {
registrarAddr, err := self.Owner(node) registrarAddr, err := ens.Owner(node)
if err != nil { if err != nil {
return nil, err return nil, err
} }
registrar, err := contract.NewFIFSRegistrar(registrarAddr, ens.contractBackend)
registrar, err := contract.NewFIFSRegistrar(registrarAddr, self.contractBackend)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return &contract.FIFSRegistrarSession{ return &contract.FIFSRegistrarSession{
Contract: registrar, Contract: registrar,
TransactOpts: self.TransactOpts, TransactOpts: ens.TransactOpts,
}, nil }, nil
} }
// Resolve is a non-transactional call that returns the content hash associated with a name. // Resolve is a non-transactional call that returns the content hash associated with a name.
func (self *ENS) Resolve(name string) (common.Hash, error) { func (ens *ENS) Resolve(name string) (common.Hash, error) {
node := EnsNode(name) node := EnsNode(name)
resolver, err := self.getResolver(node) resolver, err := ens.getResolver(node)
if err != nil { if err != nil {
return common.Hash{}, err return common.Hash{}, err
} }
ret, err := resolver.Content(node) ret, err := resolver.Content(node)
if err != nil { if err != nil {
return common.Hash{}, err return common.Hash{}, err
} }
return common.BytesToHash(ret[:]), nil return common.BytesToHash(ret[:]), nil
} }
// Addr is a non-transactional call that returns the address associated with a name. // Addr is a non-transactional call that returns the address associated with a name.
func (self *ENS) Addr(name string) (common.Address, error) { func (ens *ENS) Addr(name string) (common.Address, error) {
node := EnsNode(name) node := EnsNode(name)
resolver, err := self.getResolver(node) resolver, err := ens.getResolver(node)
if err != nil { if err != nil {
return common.Address{}, err return common.Address{}, err
} }
ret, err := resolver.Addr(node) ret, err := resolver.Addr(node)
if err != nil { if err != nil {
return common.Address{}, err return common.Address{}, err
} }
return common.BytesToAddress(ret[:]), nil return common.BytesToAddress(ret[:]), nil
} }
// SetAddress sets the address associated with a name. Only works if the caller // SetAddress sets the address associated with a name. Only works if the caller
// owns the name, and the associated resolver implements a `setAddress` function. // owns the name, and the associated resolver implements a `setAddress` function.
func (self *ENS) SetAddr(name string, addr common.Address) (*types.Transaction, error) { func (ens *ENS) SetAddr(name string, addr common.Address) (*types.Transaction, error) {
node := EnsNode(name) node := EnsNode(name)
resolver, err := self.getResolver(node) resolver, err := ens.getResolver(node)
if err != nil { if err != nil {
return nil, err return nil, err
} }
opts := ens.TransactOpts
opts := self.TransactOpts
opts.GasLimit = 200000 opts.GasLimit = 200000
return resolver.Contract.SetAddr(&opts, node, addr) return resolver.Contract.SetAddr(&opts, node, addr)
} }
// Register registers a new domain name for the caller, making them the owner of the new name. // Register registers a new domain name for the caller, making them the owner of the new name.
// Only works if the registrar for the parent domain implements the FIFS registrar protocol. // Only works if the registrar for the parent domain implements the FIFS registrar protocol.
func (self *ENS) Register(name string) (*types.Transaction, error) { func (ens *ENS) Register(name string) (*types.Transaction, error) {
parentNode, label := ensParentNode(name) parentNode, label := ensParentNode(name)
registrar, err := self.getRegistrar(parentNode) registrar, err := ens.getRegistrar(parentNode)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return registrar.Contract.Register(&self.TransactOpts, label, self.TransactOpts.From) return registrar.Contract.Register(&ens.TransactOpts, label, ens.TransactOpts.From)
} }
// SetContentHash sets the content hash associated with a name. Only works if the caller // SetContentHash sets the content hash associated with a name. Only works if the caller
// owns the name, and the associated resolver implements a `setContent` function. // owns the name, and the associated resolver implements a `setContent` function.
func (self *ENS) SetContentHash(name string, hash common.Hash) (*types.Transaction, error) { func (ens *ENS) SetContentHash(name string, hash common.Hash) (*types.Transaction, error) {
node := EnsNode(name) node := EnsNode(name)
resolver, err := self.getResolver(node) resolver, err := ens.getResolver(node)
if err != nil { if err != nil {
return nil, err return nil, err
} }
opts := ens.TransactOpts
opts := self.TransactOpts
opts.GasLimit = 200000 opts.GasLimit = 200000
return resolver.Contract.SetContent(&opts, node, hash) return resolver.Contract.SetContent(&opts, node, hash)
} }
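The `ensParentNode`/`EnsNode` recursion earlier in this file splits a name on the first dot, hashes the label, and folds it into the parent node's hash. A minimal runnable sketch of that namehash shape, using the standard library's sha256 as a stand-in for Keccak256 (so the actual hash values differ from real ENS):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// node computes an ENS-style namehash with the same recursion as
// ensParentNode/EnsNode: split on the first dot, hash the label, and
// fold it into the parent node's hash. sha256 stands in for Keccak256
// so this runs with the standard library only; real ENS hashes differ.
func node(name string) [32]byte {
	if name == "" {
		return [32]byte{} // the root node is all zeroes
	}
	parts := strings.SplitN(name, ".", 2)
	label := sha256.Sum256([]byte(parts[0]))
	parent := [32]byte{}
	if len(parts) > 1 {
		parent = node(parts[1])
	}
	return sha256.Sum256(append(parent[:], label[:]...))
}

func main() {
	fmt.Printf("%x\n", node("swarm.eth"))
}
```

The recursion bottoms out at the empty name (the all-zero root node), which is why the FIFS registrar above is registered as owner of `[32]byte{}`.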


@@ -267,9 +267,9 @@ func (bc *BlockChain) loadLastState() error {
blockTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64()) blockTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
fastTd := bc.GetTd(currentFastBlock.Hash(), currentFastBlock.NumberU64()) fastTd := bc.GetTd(currentFastBlock.Hash(), currentFastBlock.NumberU64())
log.Info("Loaded most recent local header", "number", currentHeader.Number, "hash", currentHeader.Hash(), "td", headerTd, "age", common.PrettyAge(time.Unix(currentHeader.Time.Int64(), 0))) log.Info("Loaded most recent local header", "number", currentHeader.Number, "hash", currentHeader.Hash(), "td", headerTd, "age", common.PrettyAge(time.Unix(int64(currentHeader.Time), 0)))
log.Info("Loaded most recent local full block", "number", currentBlock.Number(), "hash", currentBlock.Hash(), "td", blockTd, "age", common.PrettyAge(time.Unix(currentBlock.Time().Int64(), 0))) log.Info("Loaded most recent local full block", "number", currentBlock.Number(), "hash", currentBlock.Hash(), "td", blockTd, "age", common.PrettyAge(time.Unix(int64(currentBlock.Time()), 0)))
log.Info("Loaded most recent local fast block", "number", currentFastBlock.Number(), "hash", currentFastBlock.Hash(), "td", fastTd, "age", common.PrettyAge(time.Unix(currentFastBlock.Time().Int64(), 0))) log.Info("Loaded most recent local fast block", "number", currentFastBlock.Number(), "hash", currentFastBlock.Hash(), "td", fastTd, "age", common.PrettyAge(time.Unix(int64(currentFastBlock.Time()), 0)))
return nil return nil
} }
@@ -894,7 +894,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
context := []interface{}{ context := []interface{}{
"count", stats.processed, "elapsed", common.PrettyDuration(time.Since(start)), "count", stats.processed, "elapsed", common.PrettyDuration(time.Since(start)),
"number", head.Number(), "hash", head.Hash(), "age", common.PrettyAge(time.Unix(head.Time().Int64(), 0)), "number", head.Number(), "hash", head.Hash(), "age", common.PrettyAge(time.Unix(int64(head.Time()), 0)),
"size", common.StorageSize(bytes), "size", common.StorageSize(bytes),
} }
if stats.ignored > 0 { if stats.ignored > 0 {
@@ -972,11 +972,16 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
triedb.Cap(limit - ethdb.IdealBatchSize) triedb.Cap(limit - ethdb.IdealBatchSize)
} }
// Find the next state trie we need to commit // Find the next state trie we need to commit
header := bc.GetHeaderByNumber(current - triesInMemory) chosen := current - triesInMemory
chosen := header.Number.Uint64()
// If we exceeded our time allowance, flush an entire trie to disk // If we exceeded our time allowance, flush an entire trie to disk
if bc.gcproc > bc.cacheConfig.TrieTimeLimit { if bc.gcproc > bc.cacheConfig.TrieTimeLimit {
// If the header is missing (canonical chain behind), we're reorging a low
// diff sidechain. Suspend committing until this operation is completed.
header := bc.GetHeaderByNumber(chosen)
if header == nil {
log.Warn("Reorg in progress, trie commit postponed", "number", chosen)
} else {
// If we're exceeding limits but haven't reached a large enough memory gap, // If we're exceeding limits but haven't reached a large enough memory gap,
// warn the user that the system is becoming unstable. // warn the user that the system is becoming unstable.
if chosen < lastWrite+triesInMemory && bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit { if chosen < lastWrite+triesInMemory && bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit {
@@ -987,6 +992,7 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
lastWrite = chosen lastWrite = chosen
bc.gcproc = 0 bc.gcproc = 0
} }
}
// Garbage collect anything below our required write retention // Garbage collect anything below our required write retention
for !bc.triegc.Empty() { for !bc.triegc.Empty() {
root, number := bc.triegc.Pop() root, number := bc.triegc.Pop()
@@ -1052,8 +1058,8 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
// accepted for future processing, and returns an error if the block is too far // accepted for future processing, and returns an error if the block is too far
// ahead and was not added. // ahead and was not added.
func (bc *BlockChain) addFutureBlock(block *types.Block) error { func (bc *BlockChain) addFutureBlock(block *types.Block) error {
max := big.NewInt(time.Now().Unix() + maxTimeFutureBlocks) max := uint64(time.Now().Unix() + maxTimeFutureBlocks)
if block.Time().Cmp(max) > 0 { if block.Time() > max {
return fmt.Errorf("future block timestamp %v > allowed %v", block.Time(), max) return fmt.Errorf("future block timestamp %v > allowed %v", block.Time(), max)
} }
bc.futureBlocks.Add(block.Hash(), block) bc.futureBlocks.Add(block.Hash(), block)
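The future-block hunk above swaps a `big.Int` comparison for plain `uint64` arithmetic, part of the header-timestamp type change in this release. A self-contained sketch of the check (the 30-second `maxTimeFutureBlocks` value is assumed here for illustration):

```go
package main

import (
	"fmt"
	"time"
)

// maxTimeFutureBlocks mirrors the future-block allowance in seconds;
// the exact value is assumed for illustration.
const maxTimeFutureBlocks = 30

// allowedFuture reports whether a block timestamp is acceptable, using
// the plain uint64 comparison that replaced the old big.Int Cmp call.
func allowedFuture(blockTime uint64, now int64) bool {
	max := uint64(now + maxTimeFutureBlocks)
	return blockTime <= max
}

func main() {
	now := time.Now().Unix()
	fmt.Println(allowedFuture(uint64(now), now))    // within the allowance
	fmt.Println(allowedFuture(uint64(now)+60, now)) // too far in the future
}
```

Comparing native integers also sidesteps the allocations `big.Int.Cmp` needed on every block import.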
@@ -1136,7 +1142,7 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, []
switch { switch {
// First block is pruned, insert as sidechain and reorg only if TD grows enough // First block is pruned, insert as sidechain and reorg only if TD grows enough
case err == consensus.ErrPrunedAncestor: case err == consensus.ErrPrunedAncestor:
return bc.insertSidechain(it) return bc.insertSidechain(block, it)
// First block is future, shove it (and all children) to the future queue (unknown ancestor) // First block is future, shove it (and all children) to the future queue (unknown ancestor)
case err == consensus.ErrFutureBlock || (err == consensus.ErrUnknownAncestor && bc.futureBlocks.Contains(it.first().ParentHash())): case err == consensus.ErrFutureBlock || (err == consensus.ErrUnknownAncestor && bc.futureBlocks.Contains(it.first().ParentHash())):
@@ -1278,7 +1284,7 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, []
// //
// The method writes all (header-and-body-valid) blocks to disk, then tries to // The method writes all (header-and-body-valid) blocks to disk, then tries to
// switch over to the new chain if the TD exceeded the current chain. // switch over to the new chain if the TD exceeded the current chain.
func (bc *BlockChain) insertSidechain(it *insertIterator) (int, []interface{}, []*types.Log, error) { func (bc *BlockChain) insertSidechain(block *types.Block, it *insertIterator) (int, []interface{}, []*types.Log, error) {
var ( var (
externTd *big.Int externTd *big.Int
current = bc.CurrentBlock().NumberU64() current = bc.CurrentBlock().NumberU64()
@@ -1287,7 +1293,7 @@ func (bc *BlockChain) insertSidechain(it *insertIterator) (int, []interface{}, [
// Since we don't import them here, we expect ErrUnknownAncestor for the remaining // Since we don't import them here, we expect ErrUnknownAncestor for the remaining
// ones. Any other errors means that the block is invalid, and should not be written // ones. Any other errors means that the block is invalid, and should not be written
// to disk. // to disk.
block, err := it.current(), consensus.ErrPrunedAncestor err := consensus.ErrPrunedAncestor
for ; block != nil && (err == consensus.ErrPrunedAncestor); block, err = it.next() { for ; block != nil && (err == consensus.ErrPrunedAncestor); block, err = it.next() {
// Check the canonical state root for that number // Check the canonical state root for that number
if number := block.NumberU64(); current >= number { if number := block.NumberU64(); current >= number {
@@ -1317,7 +1323,7 @@ func (bc *BlockChain) insertSidechain(it *insertIterator) (int, []interface{}, [
if err := bc.WriteBlockWithoutState(block, externTd); err != nil { if err := bc.WriteBlockWithoutState(block, externTd); err != nil {
return it.index, nil, nil, err return it.index, nil, nil, err
} }
log.Debug("Inserted sidechain block", "number", block.Number(), "hash", block.Hash(), log.Debug("Injected sidechain block", "number", block.Number(), "hash", block.Hash(),
"diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)), "diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)),
"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()), "txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
"root", block.Root()) "root", block.Root())
@@ -1385,21 +1391,25 @@ func (bc *BlockChain) insertSidechain(it *insertIterator) (int, []interface{}, [
return 0, nil, nil, nil return 0, nil, nil, nil
} }
// reorgs takes two blocks, an old chain and a new chain and will reconstruct the blocks and inserts them // reorg takes two blocks, an old chain and a new chain, and will reconstruct the
// to be part of the new canonical chain and accumulates potential missing transactions and post an // blocks and insert them to be part of the new canonical chain, accumulating
// event about them // potential missing transactions and posting an event about them.
func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error { func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
var ( var (
newChain types.Blocks newChain types.Blocks
oldChain types.Blocks oldChain types.Blocks
commonBlock *types.Block commonBlock *types.Block
deletedTxs types.Transactions deletedTxs types.Transactions
addedTxs types.Transactions
deletedLogs []*types.Log deletedLogs []*types.Log
rebirthLogs []*types.Log
// collectLogs collects the logs that were generated during the // collectLogs collects the logs that were generated during the
// processing of the block that corresponds with the given hash. // processing of the block that corresponds with the given hash.
// These logs are later announced as deleted. // These logs are later announced as deleted or reborn
collectLogs = func(hash common.Hash) { collectLogs = func(hash common.Hash, removed bool) {
// Coalesce logs and set 'Removed'.
number := bc.hc.GetBlockNumber(hash) number := bc.hc.GetBlockNumber(hash)
if number == nil { if number == nil {
return return
@@ -1407,53 +1417,60 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
receipts := rawdb.ReadReceipts(bc.db, hash, *number) receipts := rawdb.ReadReceipts(bc.db, hash, *number)
for _, receipt := range receipts { for _, receipt := range receipts {
for _, log := range receipt.Logs { for _, log := range receipt.Logs {
del := *log l := *log
del.Removed = true if removed {
deletedLogs = append(deletedLogs, &del) l.Removed = true
deletedLogs = append(deletedLogs, &l)
} else {
rebirthLogs = append(rebirthLogs, &l)
}
} }
} }
} }
) )
// Reduce the longer chain to the same number as the shorter one
// first reduce whoever is higher bound
if oldBlock.NumberU64() > newBlock.NumberU64() { if oldBlock.NumberU64() > newBlock.NumberU64() {
// reduce old chain // Old chain is longer, gather all transactions and logs as deleted ones
for ; oldBlock != nil && oldBlock.NumberU64() != newBlock.NumberU64(); oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1) { for ; oldBlock != nil && oldBlock.NumberU64() != newBlock.NumberU64(); oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1) {
oldChain = append(oldChain, oldBlock) oldChain = append(oldChain, oldBlock)
deletedTxs = append(deletedTxs, oldBlock.Transactions()...) deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
collectLogs(oldBlock.Hash(), true)
collectLogs(oldBlock.Hash())
} }
} else { } else {
// reduce new chain and append new chain blocks for inserting later on // New chain is longer, stash all blocks away for subsequent insertion
for ; newBlock != nil && newBlock.NumberU64() != oldBlock.NumberU64(); newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1) { for ; newBlock != nil && newBlock.NumberU64() != oldBlock.NumberU64(); newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1) {
newChain = append(newChain, newBlock) newChain = append(newChain, newBlock)
} }
} }
if oldBlock == nil { if oldBlock == nil {
return fmt.Errorf("Invalid old chain") return fmt.Errorf("invalid old chain")
} }
if newBlock == nil { if newBlock == nil {
return fmt.Errorf("Invalid new chain") return fmt.Errorf("invalid new chain")
} }
// Both sides of the reorg are at the same number, reduce both until the common
// ancestor is found
for { for {
// If the common ancestor was found, bail out
if oldBlock.Hash() == newBlock.Hash() { if oldBlock.Hash() == newBlock.Hash() {
commonBlock = oldBlock commonBlock = oldBlock
break break
} }
// Remove an old block as well as stash away a new block
oldChain = append(oldChain, oldBlock) oldChain = append(oldChain, oldBlock)
newChain = append(newChain, newBlock)
deletedTxs = append(deletedTxs, oldBlock.Transactions()...) deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
collectLogs(oldBlock.Hash()) collectLogs(oldBlock.Hash(), true)
oldBlock, newBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1), bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1) newChain = append(newChain, newBlock)
// Step back with both chains
oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1)
if oldBlock == nil { if oldBlock == nil {
return fmt.Errorf("Invalid old chain") return fmt.Errorf("invalid old chain")
} }
newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1)
if newBlock == nil { if newBlock == nil {
return fmt.Errorf("Invalid new chain") return fmt.Errorf("invalid new chain")
} }
} }
// Ensure the user sees large reorgs // Ensure the user sees large reorgs
@@ -1468,35 +1485,46 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
log.Error("Impossible reorg, please file an issue", "oldnum", oldBlock.Number(), "oldhash", oldBlock.Hash(), "newnum", newBlock.Number(), "newhash", newBlock.Hash()) log.Error("Impossible reorg, please file an issue", "oldnum", oldBlock.Number(), "oldhash", oldBlock.Hash(), "newnum", newBlock.Number(), "newhash", newBlock.Hash())
} }
// Insert the new chain, taking care of the proper incremental order // Insert the new chain, taking care of the proper incremental order
var addedTxs types.Transactions
for i := len(newChain) - 1; i >= 0; i-- { for i := len(newChain) - 1; i >= 0; i-- {
// insert the block in the canonical way, re-writing history // Insert the block in the canonical way, re-writing history
bc.insert(newChain[i]) bc.insert(newChain[i])
// write lookup entries for hash based transaction/receipt searches
// Collect reborn logs due to chain reorg (except head block (reverse order))
if i != 0 {
collectLogs(newChain[i].Hash(), false)
}
// Write lookup entries for hash based transaction/receipt searches
rawdb.WriteTxLookupEntries(bc.db, newChain[i]) rawdb.WriteTxLookupEntries(bc.db, newChain[i])
addedTxs = append(addedTxs, newChain[i].Transactions()...) addedTxs = append(addedTxs, newChain[i].Transactions()...)
} }
// calculate the difference between deleted and added transactions // When transactions get deleted from the database, the receipts that were
diff := types.TxDifference(deletedTxs, addedTxs) // created in the fork must also be deleted
// When transactions get deleted from the database that means the
// receipts that were created in the fork must also be deleted
batch := bc.db.NewBatch() batch := bc.db.NewBatch()
for _, tx := range diff { for _, tx := range types.TxDifference(deletedTxs, addedTxs) {
rawdb.DeleteTxLookupEntry(batch, tx.Hash()) rawdb.DeleteTxLookupEntry(batch, tx.Hash())
} }
batch.Write() batch.Write()
// If any logs need to be fired, do it now. In theory we could avoid creating
// this goroutine if there are no events to fire, but realistically that only
// ever happens if we're reorging empty blocks, which will only happen on idle
// networks where performance is not an issue either way.
//
// TODO(karalabe): Can we get rid of the goroutine somehow to guarantee correct
// event ordering?
go func() {
if len(deletedLogs) > 0 { if len(deletedLogs) > 0 {
go bc.rmLogsFeed.Send(RemovedLogsEvent{deletedLogs}) bc.rmLogsFeed.Send(RemovedLogsEvent{deletedLogs})
}
if len(rebirthLogs) > 0 {
bc.logsFeed.Send(rebirthLogs)
} }
if len(oldChain) > 0 { if len(oldChain) > 0 {
go func() {
for _, block := range oldChain { for _, block := range oldChain {
bc.chainSideFeed.Send(ChainSideEvent{Block: block}) bc.chainSideFeed.Send(ChainSideEvent{Block: block})
} }
}()
} }
}()
return nil return nil
} }
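The cleanup step above removes lookup entries for `types.TxDifference(deletedTxs, addedTxs)`, i.e. transactions that existed only on the old chain. A hash-keyed set difference of that shape (string hashes stand in for real transaction hashes):

```go
package main

import "fmt"

// txDifference returns the entries of a that do not appear in b, keyed
// by hash; string hashes stand in for real transaction hashes.
func txDifference(a, b []string) []string {
	keep := make(map[string]struct{}, len(b))
	for _, h := range b {
		keep[h] = struct{}{}
	}
	var diff []string
	for _, h := range a {
		if _, ok := keep[h]; !ok {
			diff = append(diff, h)
		}
	}
	return diff
}

func main() {
	deleted := []string{"0xa", "0xb", "0xc"}
	added := []string{"0xb"}
	fmt.Println(txDifference(deleted, added)) // [0xa 0xc]
}
```

Transactions present on both sides of the reorg keep their lookup entries; only the ones unique to the abandoned fork are purged.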


@@ -60,7 +60,7 @@ func (st *insertStats) report(chain []*types.Block, index int, cache common.Stor
"elapsed", common.PrettyDuration(elapsed), "mgasps", float64(st.usedGas) * 1000 / float64(elapsed), "elapsed", common.PrettyDuration(elapsed), "mgasps", float64(st.usedGas) * 1000 / float64(elapsed),
"number", end.Number(), "hash", end.Hash(), "number", end.Number(), "hash", end.Hash(),
} }
if timestamp := time.Unix(end.Time().Int64(), 0); time.Since(timestamp) > time.Minute { if timestamp := time.Unix(int64(end.Time()), 0); time.Since(timestamp) > time.Minute {
context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...) context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...)
} }
context = append(context, []interface{}{"cache", cache}...) context = append(context, []interface{}{"cache", cache}...)
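This hunk reflects the header timestamp changing from `*big.Int` to `uint64`; feeding it to `time.Unix` now requires an explicit `int64` cast instead of `.Int64()`. A minimal sketch of the conversion:

```go
package main

import (
	"fmt"
	"time"
)

// headerAge converts a uint64 header timestamp to a time.Time; the
// explicit int64 cast is the pattern the diff introduces in the logs.
func headerAge(headerTime uint64) time.Time {
	return time.Unix(int64(headerTime), 0).UTC()
}

func main() {
	// 1555000000 is an arbitrary April 2019 timestamp.
	fmt.Println(headerAge(1555000000))
}
```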
@@ -111,14 +111,6 @@ func (it *insertIterator) next() (*types.Block, error) {
return it.chain[it.index], it.validator.ValidateBody(it.chain[it.index]) return it.chain[it.index], it.validator.ValidateBody(it.chain[it.index])
} }
// current returns the current block that's being processed.
func (it *insertIterator) current() *types.Block {
if it.index < 0 || it.index+1 >= len(it.chain) {
return nil
}
return it.chain[it.index]
}
// previous returns the previous block was being processed, or nil // previous returns the previous block was being processed, or nil
func (it *insertIterator) previous() *types.Block { func (it *insertIterator) previous() *types.Block {
if it.index < 1 { if it.index < 1 {


@@ -884,7 +884,6 @@ func TestChainTxReorgs(t *testing.T) {
} }
func TestLogReorgs(t *testing.T) { func TestLogReorgs(t *testing.T) {
var ( var (
key1, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291") key1, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
addr1 = crypto.PubkeyToAddress(key1.PublicKey) addr1 = crypto.PubkeyToAddress(key1.PublicKey)
@@ -930,6 +929,213 @@ func TestLogReorgs(t *testing.T) {
} }
} }
func TestLogRebirth(t *testing.T) {
var (
key1, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
addr1 = crypto.PubkeyToAddress(key1.PublicKey)
db = ethdb.NewMemDatabase()
// this code generates a log
code = common.Hex2Bytes("60606040525b7f24ec1d3ff24c2f6ff210738839dbc339cd45a5294d85c79361016243157aae7b60405180905060405180910390a15b600a8060416000396000f360606040526008565b00")
gspec = &Genesis{Config: params.TestChainConfig, Alloc: GenesisAlloc{addr1: {Balance: big.NewInt(10000000000000)}}}
genesis = gspec.MustCommit(db)
signer = types.NewEIP155Signer(gspec.Config.ChainID)
newLogCh = make(chan bool)
)
// listenNewLog checks whether the number of received logs equals the expected count.
listenNewLog := func(sink chan []*types.Log, expect int) {
cnt := 0
for {
select {
case logs := <-sink:
cnt += len(logs)
case <-time.NewTimer(5 * time.Second).C:
// new logs timeout
newLogCh <- false
return
}
if cnt == expect {
break
} else if cnt > expect {
// redundant logs received
newLogCh <- false
return
}
}
select {
case <-sink:
// redundant logs received
newLogCh <- false
case <-time.NewTimer(100 * time.Millisecond).C:
newLogCh <- true
}
}
blockchain, _ := NewBlockChain(db, nil, gspec.Config, ethash.NewFaker(), vm.Config{}, nil)
defer blockchain.Stop()
logsCh := make(chan []*types.Log)
blockchain.SubscribeLogsEvent(logsCh)
rmLogsCh := make(chan RemovedLogsEvent)
blockchain.SubscribeRemovedLogsEvent(rmLogsCh)
chain, _ := GenerateChain(params.TestChainConfig, genesis, ethash.NewFaker(), db, 2, func(i int, gen *BlockGen) {
if i == 1 {
tx, err := types.SignTx(types.NewContractCreation(gen.TxNonce(addr1), new(big.Int), 1000000, new(big.Int), code), signer, key1)
if err != nil {
t.Fatalf("failed to create tx: %v", err)
}
gen.AddTx(tx)
}
})
// Spawn a goroutine to receive log events
go listenNewLog(logsCh, 1)
if _, err := blockchain.InsertChain(chain); err != nil {
t.Fatalf("failed to insert chain: %v", err)
}
if !<-newLogCh {
t.Fatalf("failed to receive new log event")
}
// Generate long reorg chain
forkChain, _ := GenerateChain(params.TestChainConfig, genesis, ethash.NewFaker(), db, 2, func(i int, gen *BlockGen) {
if i == 1 {
tx, err := types.SignTx(types.NewContractCreation(gen.TxNonce(addr1), new(big.Int), 1000000, new(big.Int), code), signer, key1)
if err != nil {
t.Fatalf("failed to create tx: %v", err)
}
gen.AddTx(tx)
// Higher block difficulty
gen.OffsetTime(-9)
}
})
// Spawn a goroutine to receive log events
go listenNewLog(logsCh, 1)
if _, err := blockchain.InsertChain(forkChain); err != nil {
t.Fatalf("failed to insert forked chain: %v", err)
}
if !<-newLogCh {
t.Fatalf("failed to receive new log event")
}
// Ensure removedLog events received
select {
case ev := <-rmLogsCh:
if len(ev.Logs) == 0 {
t.Error("expected logs")
}
case <-time.NewTimer(1 * time.Second).C:
t.Fatal("timeout: no RemovedLogsEvent was sent")
}
newBlocks, _ := GenerateChain(params.TestChainConfig, chain[len(chain)-1], ethash.NewFaker(), db, 1, func(i int, gen *BlockGen) {})
go listenNewLog(logsCh, 1)
if _, err := blockchain.InsertChain(newBlocks); err != nil {
t.Fatalf("failed to insert forked chain: %v", err)
}
// Ensure removedLog events received
select {
case ev := <-rmLogsCh:
if len(ev.Logs) == 0 {
t.Error("expected logs")
}
case <-time.NewTimer(1 * time.Second).C:
t.Fatal("timeout: no RemovedLogsEvent was sent")
}
// Rebirth logs should emit a newLogEvent
if !<-newLogCh {
t.Fatalf("failed to receive new log event")
}
}
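The rebirth test above exercises the reorg's `collectLogs(hash, removed)` helper: logs from the old chain are copied with `Removed` set, while logs reborn on the new chain are announced unmarked. A minimal sketch of that marking logic (a stripped-down `Log` type stands in for `types.Log`):

```go
package main

import "fmt"

// Log is a stripped-down stand-in for types.Log; only Removed matters here.
type Log struct {
	Data    string
	Removed bool
}

// collect copies logs, marking the copies Removed when they came from the
// old chain, as the reorg's collectLogs(hash, removed) helper does.
func collect(src []Log, removed bool) []Log {
	out := make([]Log, 0, len(src))
	for _, l := range src {
		c := l // copy, so the stored log itself is never mutated
		c.Removed = removed
		out = append(out, c)
	}
	return out
}

func main() {
	logs := []Log{{Data: "transfer"}}
	fmt.Println(collect(logs, true)[0].Removed, collect(logs, false)[0].Removed)
}
```

Copying before setting the flag matters: the receipts stay untouched in the database, and only the event payload carries the `Removed` marker.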
func TestSideLogRebirth(t *testing.T) {
var (
key1, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
addr1 = crypto.PubkeyToAddress(key1.PublicKey)
db = ethdb.NewMemDatabase()
// this code generates a log
code = common.Hex2Bytes("60606040525b7f24ec1d3ff24c2f6ff210738839dbc339cd45a5294d85c79361016243157aae7b60405180905060405180910390a15b600a8060416000396000f360606040526008565b00")
gspec = &Genesis{Config: params.TestChainConfig, Alloc: GenesisAlloc{addr1: {Balance: big.NewInt(10000000000000)}}}
genesis = gspec.MustCommit(db)
signer = types.NewEIP155Signer(gspec.Config.ChainID)
newLogCh = make(chan bool)
)
// listenNewLog checks whether the number of received logs equals the expected count.
listenNewLog := func(sink chan []*types.Log, expect int) {
cnt := 0
for {
select {
case logs := <-sink:
cnt += len(logs)
case <-time.NewTimer(5 * time.Second).C:
// new logs timeout
newLogCh <- false
return
}
if cnt == expect {
break
} else if cnt > expect {
// redundant logs received
newLogCh <- false
return
}
}
select {
case <-sink:
// redundant logs received
newLogCh <- false
case <-time.NewTimer(100 * time.Millisecond).C:
newLogCh <- true
}
}
blockchain, _ := NewBlockChain(db, nil, gspec.Config, ethash.NewFaker(), vm.Config{}, nil)
defer blockchain.Stop()
logsCh := make(chan []*types.Log)
blockchain.SubscribeLogsEvent(logsCh)
chain, _ := GenerateChain(params.TestChainConfig, genesis, ethash.NewFaker(), db, 2, func(i int, gen *BlockGen) {
if i == 1 {
// Higher block difficulty
gen.OffsetTime(-9)
}
})
if _, err := blockchain.InsertChain(chain); err != nil {
t.Fatalf("failed to insert forked chain: %v", err)
}
// Generate side chain with lower difficulty
sideChain, _ := GenerateChain(params.TestChainConfig, genesis, ethash.NewFaker(), db, 2, func(i int, gen *BlockGen) {
if i == 1 {
tx, err := types.SignTx(types.NewContractCreation(gen.TxNonce(addr1), new(big.Int), 1000000, new(big.Int), code), signer, key1)
if err != nil {
t.Fatalf("failed to create tx: %v", err)
}
gen.AddTx(tx)
}
})
if _, err := blockchain.InsertChain(sideChain); err != nil {
t.Fatalf("failed to insert forked chain: %v", err)
}
// Generate a new block based on side chain
newBlocks, _ := GenerateChain(params.TestChainConfig, sideChain[len(sideChain)-1], ethash.NewFaker(), db, 1, func(i int, gen *BlockGen) {})
go listenNewLog(logsCh, 1)
if _, err := blockchain.InsertChain(newBlocks); err != nil {
t.Fatalf("failed to insert forked chain: %v", err)
}
// Rebirth logs should emit a newLogEvent
if !<-newLogCh {
t.Fatalf("failed to receive new log event")
}
}
func TestReorgSideEvent(t *testing.T) { func TestReorgSideEvent(t *testing.T) {
var ( var (
db = ethdb.NewMemDatabase() db = ethdb.NewMemDatabase()
@@ -1483,3 +1689,58 @@ func BenchmarkBlockChain_1x1000Executions(b *testing.B) {
benchmarkLargeNumberOfValueToNonexisting(b, numTxs, numBlocks, recipientFn, dataFn) benchmarkLargeNumberOfValueToNonexisting(b, numTxs, numBlocks, recipientFn, dataFn)
} }
// Tests that importing a very large side fork, which is larger than the canon chain,
// but where the per-block difficulty is kept low: this means that it will not
// overtake the 'canon' chain until it is about 200 blocks past the canon head.
//
// Details at:
// - https://github.com/ethereum/go-ethereum/issues/18977
// - https://github.com/ethereum/go-ethereum/pull/18988
func TestLowDiffLongChain(t *testing.T) {
// Generate a canonical chain to act as the main dataset
engine := ethash.NewFaker()
db := ethdb.NewMemDatabase()
genesis := new(Genesis).MustCommit(db)
// We must use a pretty long chain to ensure that the fork doesn't overtake us
// until after at least 128 blocks post tip
blocks, _ := GenerateChain(params.TestChainConfig, genesis, engine, db, 6*triesInMemory, func(i int, b *BlockGen) {
b.SetCoinbase(common.Address{1})
b.OffsetTime(-9)
})
// Import the canonical chain
diskdb := ethdb.NewMemDatabase()
new(Genesis).MustCommit(diskdb)
chain, err := NewBlockChain(diskdb, nil, params.TestChainConfig, engine, vm.Config{}, nil)
if err != nil {
t.Fatalf("failed to create tester chain: %v", err)
}
if n, err := chain.InsertChain(blocks); err != nil {
t.Fatalf("block %d: failed to insert into chain: %v", n, err)
}
// Generate fork chain, starting from an early block
parent := blocks[10]
fork, _ := GenerateChain(params.TestChainConfig, parent, engine, db, 8*triesInMemory, func(i int, b *BlockGen) {
b.SetCoinbase(common.Address{2})
})
// And now import the fork
if i, err := chain.InsertChain(fork); err != nil {
t.Fatalf("block %d: failed to insert into chain: %v", i, err)
}
head := chain.CurrentBlock()
if got := fork[len(fork)-1].Hash(); got != head.Hash() {
t.Fatalf("head wrong, expected %x got %x", head.Hash(), got)
}
// Sanity check that all the canonical numbers are present
header := chain.CurrentHeader()
for number := head.NumberU64(); number > 0; number-- {
if hash := chain.GetHeaderByNumber(number).Hash(); hash != header.Hash() {
t.Fatalf("header %d: canonical hash mismatch: have %x, want %x", number, hash, header.Hash())
}
header = chain.GetHeader(header.ParentHash, number-1)
}
}


@@ -149,12 +149,12 @@ func (b *BlockGen) PrevBlock(index int) *types.Block {
 // associated difficulty. It's useful to test scenarios where forking is not
 // tied to chain length directly.
 func (b *BlockGen) OffsetTime(seconds int64) {
-	b.header.Time.Add(b.header.Time, new(big.Int).SetInt64(seconds))
-	if b.header.Time.Cmp(b.parent.Header().Time) <= 0 {
+	b.header.Time += uint64(seconds)
+	if b.header.Time <= b.parent.Header().Time {
 		panic("block time out of range")
 	}
 	chainreader := &fakeChainReader{config: b.config}
-	b.header.Difficulty = b.engine.CalcDifficulty(chainreader, b.header.Time.Uint64(), b.parent.Header())
+	b.header.Difficulty = b.engine.CalcDifficulty(chainreader, b.header.Time, b.parent.Header())
 }
 // GenerateChain creates a chain of n blocks. The first block's
@@ -225,20 +225,20 @@ func GenerateChain(config *params.ChainConfig, parent *types.Block, engine conse
 }
 func makeHeader(chain consensus.ChainReader, parent *types.Block, state *state.StateDB, engine consensus.Engine) *types.Header {
-	var time *big.Int
-	if parent.Time() == nil {
-		time = big.NewInt(10)
-	} else {
-		time = new(big.Int).Add(parent.Time(), big.NewInt(10)) // block time is fixed at 10 seconds
-	}
+	var time uint64
+	if parent.Time() == 0 {
+		time = 10
+	} else {
+		time = parent.Time() + 10 // block time is fixed at 10 seconds
+	}
 	return &types.Header{
 		Root:       state.IntermediateRoot(chain.Config().IsEIP158(parent.Number())),
 		ParentHash: parent.Hash(),
 		Coinbase:   parent.Coinbase(),
-		Difficulty: engine.CalcDifficulty(chain, time.Uint64(), &types.Header{
+		Difficulty: engine.CalcDifficulty(chain, time, &types.Header{
 			Number:     parent.Number(),
-			Time:       new(big.Int).Sub(time, big.NewInt(10)),
+			Time:       time - 10,
 			Difficulty: parent.Difficulty(),
 			UncleHash:  parent.UncleHash(),
 		}),
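The switch from `*big.Int` to `uint64` timestamps means `OffsetTime` now leans on two's-complement wraparound: adding `uint64(seconds)` for a negative `seconds` behaves like subtraction modulo 2^64. A minimal standalone sketch of that arithmetic (the helper name is illustrative, not from the diff):

```go
package main

import "fmt"

// offsetTime mimics the new uint64 arithmetic in BlockGen.OffsetTime:
// a negative int64 offset converts to a very large uint64, and the
// wrapping addition is equivalent to subtracting its magnitude.
func offsetTime(timestamp uint64, seconds int64) uint64 {
	return timestamp + uint64(seconds)
}

func main() {
	fmt.Println(offsetTime(1000, -9)) // wraps around to 991
	fmt.Println(offsetTime(1000, 10)) // plain addition: 1010
}
```

This is why the diff can keep `OffsetTime(seconds int64)` unchanged in signature while the underlying field becomes unsigned.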


@@ -51,7 +51,7 @@ func NewEVMContext(msg Message, header *types.Header, chain ChainContext, author
 		Origin:      msg.From(),
 		Coinbase:    beneficiary,
 		BlockNumber: new(big.Int).Set(header.Number),
-		Time:        new(big.Int).Set(header.Time),
+		Time:        new(big.Int).SetUint64(header.Time),
 		Difficulty:  new(big.Int).Set(header.Difficulty),
 		GasLimit:    header.GasLimit,
 		GasPrice:    new(big.Int).Set(msg.GasPrice()),


@@ -157,7 +157,6 @@ func SetupGenesisBlockWithOverride(db ethdb.Database, genesis *Genesis, constant
 	if genesis != nil && genesis.Config == nil {
 		return params.AllEthashProtocolChanges, common.Hash{}, errGenesisNoConfig
 	}
 	// Just commit the new block if there is no stored genesis block.
 	stored := rawdb.ReadCanonicalHash(db, 0)
 	if (stored == common.Hash{}) {
@@ -183,6 +182,7 @@ func SetupGenesisBlockWithOverride(db ethdb.Database, genesis *Genesis, constant
 	newcfg := genesis.configOrDefault(stored)
 	if constantinopleOverride != nil {
 		newcfg.ConstantinopleBlock = constantinopleOverride
+		newcfg.PetersburgBlock = constantinopleOverride
 	}
 	storedcfg := rawdb.ReadChainConfig(db, stored)
 	if storedcfg == nil {
@@ -243,7 +243,7 @@ func (g *Genesis) ToBlock(db ethdb.Database) *types.Block {
 	head := &types.Header{
 		Number:     new(big.Int).SetUint64(g.Number),
 		Nonce:      types.EncodeNonce(g.Nonce),
-		Time:       new(big.Int).SetUint64(g.Timestamp),
+		Time:       g.Timestamp,
 		ParentHash: g.ParentHash,
 		Extra:      g.ExtraData,
 		GasLimit:   g.GasLimit,
@@ -339,6 +339,18 @@ func DefaultRinkebyGenesisBlock() *Genesis {
 	}
 }
+// DefaultGoerliGenesisBlock returns the Görli network genesis block.
+func DefaultGoerliGenesisBlock() *Genesis {
+	return &Genesis{
+		Config:     params.GoerliChainConfig,
+		Timestamp:  1548854791,
+		ExtraData:  hexutil.MustDecode("0x22466c6578692069732061207468696e6722202d204166726900000000000000e0a2bd4258d2768837baa26a28fe71dc079f84c70000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
+		GasLimit:   10485760,
+		Difficulty: big.NewInt(1),
+		Alloc:      decodePrealloc(goerliAllocData),
+	}
+}
 // DeveloperGenesisBlock returns the 'geth --dev' genesis block. Note, this must
 // be seeded with the
 func DeveloperGenesisBlock(period uint64, faucet common.Address) *Genesis {

File diff suppressed because one or more lines are too long


@@ -286,7 +286,7 @@ func (hc *HeaderChain) InsertHeaderChain(chain []*types.Header, writeHeader WhCa
 		"count", stats.processed, "elapsed", common.PrettyDuration(time.Since(start)),
 		"number", last.Number, "hash", last.Hash(),
 	}
-	if timestamp := time.Unix(last.Time.Int64(), 0); time.Since(timestamp) > time.Minute {
+	if timestamp := time.Unix(int64(last.Time), 0); time.Since(timestamp) > time.Minute {
 		context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...)
 	}
 	if stats.ignored > 0 {


@@ -79,7 +79,7 @@ type Header struct {
 	Number      *big.Int    `json:"number"           gencodec:"required"`
 	GasLimit    uint64      `json:"gasLimit"         gencodec:"required"`
 	GasUsed     uint64      `json:"gasUsed"          gencodec:"required"`
-	Time        *big.Int    `json:"timestamp"        gencodec:"required"`
+	Time        uint64      `json:"timestamp"        gencodec:"required"`
 	Extra       []byte      `json:"extraData"        gencodec:"required"`
 	MixDigest   common.Hash `json:"mixHash"`
 	Nonce       BlockNonce  `json:"nonce"`
@@ -91,7 +91,7 @@ type headerMarshaling struct {
 	Number   *hexutil.Big
 	GasLimit hexutil.Uint64
 	GasUsed  hexutil.Uint64
-	Time     *hexutil.Big
+	Time     hexutil.Uint64
 	Extra    hexutil.Bytes
 	Hash     common.Hash `json:"hash"` // adds call to Hash() in MarshalJSON
 }
@@ -105,7 +105,7 @@ func (h *Header) Hash() common.Hash {
 // Size returns the approximate memory used by all internal contents. It is used
 // to approximate and limit the memory consumption of various caches.
 func (h *Header) Size() common.StorageSize {
-	return common.StorageSize(unsafe.Sizeof(*h)) + common.StorageSize(len(h.Extra)+(h.Difficulty.BitLen()+h.Number.BitLen()+h.Time.BitLen())/8)
+	return common.StorageSize(unsafe.Sizeof(*h)) + common.StorageSize(len(h.Extra)+(h.Difficulty.BitLen()+h.Number.BitLen())/8)
 }
 func rlpHash(x interface{}) (h common.Hash) {
@@ -221,9 +221,6 @@ func NewBlockWithHeader(header *Header) *Block {
 // modifying a header variable.
 func CopyHeader(h *Header) *Header {
 	cpy := *h
-	if cpy.Time = new(big.Int); h.Time != nil {
-		cpy.Time.Set(h.Time)
-	}
 	if cpy.Difficulty = new(big.Int); h.Difficulty != nil {
 		cpy.Difficulty.Set(h.Difficulty)
 	}
@@ -286,7 +283,7 @@ func (b *Block) Number() *big.Int { return new(big.Int).Set(b.header.Number) }
 func (b *Block) GasLimit() uint64     { return b.header.GasLimit }
 func (b *Block) GasUsed() uint64      { return b.header.GasUsed }
 func (b *Block) Difficulty() *big.Int { return new(big.Int).Set(b.header.Difficulty) }
-func (b *Block) Time() *big.Int       { return new(big.Int).Set(b.header.Time) }
+func (b *Block) Time() uint64         { return b.header.Time }
 func (b *Block) NumberU64() uint64      { return b.header.Number.Uint64() }
 func (b *Block) MixDigest() common.Hash { return b.header.MixDigest }


@@ -48,7 +48,7 @@ func TestBlockEncoding(t *testing.T) {
 	check("Root", block.Root(), common.HexToHash("ef1552a40b7165c3cd773806b9e0c165b75356e0314bf0706f279c729f51e017"))
 	check("Hash", block.Hash(), common.HexToHash("0a5843ac1cb04865017cb35a57b50b07084e5fcee39b5acadade33149f4fff9e"))
 	check("Nonce", block.Nonce(), uint64(0xa13a5a8c8f2bb1c4))
-	check("Time", block.Time(), big.NewInt(1426516743))
+	check("Time", block.Time(), uint64(1426516743))
 	check("Size", block.Size(), common.StorageSize(len(blockEnc)))
 	tx1 := NewTransaction(0, common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"), big.NewInt(10), 50000, big.NewInt(10), nil)


@@ -27,7 +27,7 @@ func (h Header) MarshalJSON() ([]byte, error) {
 		Number      *hexutil.Big   `json:"number"           gencodec:"required"`
 		GasLimit    hexutil.Uint64 `json:"gasLimit"         gencodec:"required"`
 		GasUsed     hexutil.Uint64 `json:"gasUsed"          gencodec:"required"`
-		Time        *hexutil.Big   `json:"timestamp"        gencodec:"required"`
+		Time        hexutil.Uint64 `json:"timestamp"        gencodec:"required"`
 		Extra       hexutil.Bytes  `json:"extraData"        gencodec:"required"`
 		MixDigest   common.Hash    `json:"mixHash"`
 		Nonce       BlockNonce     `json:"nonce"`
@@ -45,7 +45,7 @@ func (h Header) MarshalJSON() ([]byte, error) {
 	enc.Number = (*hexutil.Big)(h.Number)
 	enc.GasLimit = hexutil.Uint64(h.GasLimit)
 	enc.GasUsed = hexutil.Uint64(h.GasUsed)
-	enc.Time = (*hexutil.Big)(h.Time)
+	enc.Time = hexutil.Uint64(h.Time)
 	enc.Extra = h.Extra
 	enc.MixDigest = h.MixDigest
 	enc.Nonce = h.Nonce
@@ -67,7 +67,7 @@ func (h *Header) UnmarshalJSON(input []byte) error {
 		Number      *hexutil.Big    `json:"number"           gencodec:"required"`
 		GasLimit    *hexutil.Uint64 `json:"gasLimit"         gencodec:"required"`
 		GasUsed     *hexutil.Uint64 `json:"gasUsed"          gencodec:"required"`
-		Time        *hexutil.Big    `json:"timestamp"        gencodec:"required"`
+		Time        *hexutil.Uint64 `json:"timestamp"        gencodec:"required"`
 		Extra       *hexutil.Bytes  `json:"extraData"        gencodec:"required"`
 		MixDigest   *common.Hash    `json:"mixHash"`
 		Nonce       *BlockNonce     `json:"nonce"`
@@ -123,7 +123,7 @@ func (h *Header) UnmarshalJSON(input []byte) error {
 	if dec.Time == nil {
 		return errors.New("missing required field 'timestamp' for Header")
 	}
-	h.Time = (*big.Int)(dec.Time)
+	h.Time = uint64(*dec.Time)
 	if dec.Extra == nil {
 		return errors.New("missing required field 'extraData' for Header")
 	}


@@ -121,7 +121,9 @@ func gasSStore(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, m
 		current = evm.StateDB.GetState(contract.Address(), common.BigToHash(x))
 	)
 	// The legacy gas metering only takes into consideration the current state
-	if !evm.chainRules.IsConstantinople {
+	// Legacy rules should be applied if we are in Petersburg (removal of EIP-1283)
+	// OR Constantinople is not active
+	if evm.chainRules.IsPetersburg || !evm.chainRules.IsConstantinople {
 		// This checks for 3 scenario's and calculates gas accordingly:
 		//
 		// 1. From a zero-value address to a non-zero value (NEW VALUE)
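The Petersburg gate above reduces to a single boolean: legacy SSTORE metering applies whenever Petersburg is active or Constantinople is not, because Petersburg exists precisely to remove EIP-1283. A minimal truth-table sketch (hypothetical helper name, extracted from the condition in the diff):

```go
package main

import "fmt"

// useLegacySStoreGas mirrors the condition in gasSStore: EIP-1283 net gas
// metering is only active on chains that have Constantinople but have NOT
// adopted Petersburg (which removed EIP-1283).
func useLegacySStoreGas(isPetersburg, isConstantinople bool) bool {
	return isPetersburg || !isConstantinople
}

func main() {
	fmt.Println(useLegacySStoreGas(false, false)) // pre-Constantinople: true
	fmt.Println(useLegacySStoreGas(false, true))  // Constantinople only: false (EIP-1283 live)
	fmt.Println(useLegacySStoreGas(true, true))   // Petersburg: true (EIP-1283 removed)
}
```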


@@ -34,7 +34,11 @@ type JSONLogger struct {
 // NewJSONLogger creates a new EVM tracer that prints execution steps as JSON objects
 // into the provided stream.
 func NewJSONLogger(cfg *LogConfig, writer io.Writer) *JSONLogger {
-	return &JSONLogger{json.NewEncoder(writer), cfg}
+	l := &JSONLogger{json.NewEncoder(writer), cfg}
+	if l.cfg == nil {
+		l.cfg = &LogConfig{}
+	}
+	return l
 }
 func (l *JSONLogger) CaptureStart(from common.Address, to common.Address, create bool, input []byte, gas uint64, value *big.Int) error {
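The constructor fix above is the standard Go pattern of defaulting a nil options struct so later field accesses cannot panic. A generic standalone sketch of the same pattern (the `Config`/`Logger` types here are hypothetical stand-ins for `LogConfig`/`JSONLogger`):

```go
package main

import "fmt"

// Config is a hypothetical options struct; callers may pass nil for defaults.
type Config struct {
	Verbose bool
}

type Logger struct {
	cfg *Config
}

// NewLogger normalizes a nil config to an empty one, so methods can
// dereference l.cfg unconditionally without nil checks at every call site.
func NewLogger(cfg *Config) *Logger {
	l := &Logger{cfg: cfg}
	if l.cfg == nil {
		l.cfg = &Config{}
	}
	return l
}

func main() {
	l := NewLogger(nil)        // would panic on first cfg access without the default
	fmt.Println(l.cfg.Verbose) // safe: prints the zero value, false
}
```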


@@ -213,6 +213,10 @@ func (b *EthAPIBackend) AccountManager() *accounts.Manager {
 	return b.eth.AccountManager()
 }
+func (b *EthAPIBackend) RPCGasCap() *big.Int {
+	return b.eth.config.RPCGasCap
+}
 func (b *EthAPIBackend) BloomStatus() (uint64, uint64) {
 	sections, _, _ := b.eth.bloomIndexer.Sections()
 	return params.BloomBitsBlocks, sections


@@ -214,7 +214,8 @@ func (api *PrivateDebugAPI) traceChain(ctx context.Context, start, end *types.Bl
 					log.Warn("Tracing failed", "hash", tx.Hash(), "block", task.block.NumberU64(), "err", err)
 					break
 				}
-				task.statedb.Finalise(true)
+				// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
+				task.statedb.Finalise(api.eth.blockchain.Config().IsEIP158(task.block.Number()))
 				task.results[i] = &txTraceResult{Result: res}
 			}
 			// Stream the result back to the user or abort on teardown
@@ -506,7 +507,8 @@ func (api *PrivateDebugAPI) traceBlock(ctx context.Context, block *types.Block,
 				break
 			}
 			// Finalize the state so any modifications are written to the trie
-			statedb.Finalise(true)
+			// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
+			statedb.Finalise(vmenv.ChainConfig().IsEIP158(block.Number()))
 		}
 		close(jobs)
 		pend.Wait()
@@ -608,7 +610,8 @@ func (api *PrivateDebugAPI) standardTraceBlockToFile(ctx context.Context, block
 			return dumps, err
 		}
 		// Finalize the state so any modifications are written to the trie
-		statedb.Finalise(true)
+		// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
+		statedb.Finalise(vmenv.ChainConfig().IsEIP158(block.Number()))
 		// If we've traced the transaction we were looking for, abort
 		if tx.Hash() == txHash {
@@ -799,7 +802,8 @@ func (api *PrivateDebugAPI) computeTxEnv(blockHash common.Hash, txIndex int, ree
 			return nil, vm.Context{}, nil, fmt.Errorf("transaction %#x failed: %v", tx.Hash(), err)
 		}
 		// Ensure any modifications are committed to the state
-		statedb.Finalise(true)
+		// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
+		statedb.Finalise(vmenv.ChainConfig().IsEIP158(block.Number()))
 	}
 	return nil, vm.Context{}, nil, fmt.Errorf("transaction index %d out of range for block %#x", txIndex, blockHash)
 }


@@ -135,6 +135,9 @@ type Config struct {
 	// Constantinople block override (TODO: remove after the fork)
 	ConstantinopleOverride *big.Int
+	// RPCGasCap is the global gas cap for eth-call variants.
+	RPCGasCap *big.Int `toml:",omitempty"`
 }
 type configMarshaling struct {


@@ -75,6 +75,7 @@ var (
 	errUnknownPeer    = errors.New("peer is unknown or unhealthy")
 	errBadPeer        = errors.New("action from bad peer ignored")
 	errStallingPeer   = errors.New("peer is stalling")
+	errUnsyncedPeer   = errors.New("unsynced peer")
 	errNoPeers        = errors.New("no peers to keep download active")
 	errTimeout        = errors.New("timeout")
 	errEmptyHeaderSet = errors.New("empty header set by peer")
@@ -99,6 +100,7 @@ type Downloader struct {
 	mode SyncMode       // Synchronisation mode defining the strategy used (per sync cycle)
 	mux  *event.TypeMux // Event multiplexer to announce sync operation events
+	checkpoint uint64 // Checkpoint block number to enforce head against (e.g. fast sync)
 	genesis    uint64 // Genesis block number to limit sync to (e.g. light client CHT)
 	queue      *queue // Scheduler for selecting the hashes to download
 	peers      *peerSet // Set of active peers from which download can proceed
@@ -205,15 +207,15 @@ type BlockChain interface {
 }
 // New creates a new downloader to fetch hashes and blocks from remote peers.
-func New(mode SyncMode, stateDb ethdb.Database, mux *event.TypeMux, chain BlockChain, lightchain LightChain, dropPeer peerDropFn) *Downloader {
+func New(mode SyncMode, checkpoint uint64, stateDb ethdb.Database, mux *event.TypeMux, chain BlockChain, lightchain LightChain, dropPeer peerDropFn) *Downloader {
 	if lightchain == nil {
 		lightchain = chain
 	}
 	dl := &Downloader{
 		mode:        mode,
 		stateDB:     stateDb,
 		mux:         mux,
+		checkpoint:  checkpoint,
 		queue:       newQueue(),
 		peers:       newPeerSet(),
 		rttEstimate: uint64(rttMaxEstimate),
@@ -326,7 +328,7 @@ func (d *Downloader) Synchronise(id string, head common.Hash, td *big.Int, mode
 	case nil:
 	case errBusy:
-	case errTimeout, errBadPeer, errStallingPeer,
+	case errTimeout, errBadPeer, errStallingPeer, errUnsyncedPeer,
 		errEmptyHeaderSet, errPeersUnavailable, errTooOld,
 		errInvalidAncestor, errInvalidChain:
 		log.Warn("Synchronisation failed, dropping peer", "peer", id, "err", err)
@@ -577,6 +579,10 @@ func (d *Downloader) fetchHeight(p *peerConnection) (*types.Header, error) {
 		return nil, errBadPeer
 	}
 	head := headers[0]
+	if d.mode == FastSync && head.Number.Uint64() < d.checkpoint {
+		p.log.Warn("Remote head below checkpoint", "number", head.Number, "hash", head.Hash())
+		return nil, errUnsyncedPeer
+	}
 	p.log.Debug("Remote head header identified", "number", head.Number, "hash", head.Hash())
 	return head, nil
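The enforcement itself comes down to the single comparison added in `fetchHeight`: during fast sync, a peer whose advertised head is below the trusted checkpoint is rejected outright. A standalone sketch of that rule (hypothetical function, extracted from the condition in the diff):

```go
package main

import (
	"errors"
	"fmt"
)

var errUnsyncedPeer = errors.New("unsynced peer")

// validateHead mirrors the fetchHeight check: in fast-sync mode, a peer
// whose head number is below the trusted checkpoint cannot serve the sync
// (it is either lagging or attempting a cheap eclipse with a short chain).
func validateHead(fastSync bool, headNumber, checkpoint uint64) error {
	if fastSync && headNumber < checkpoint {
		return errUnsyncedPeer
	}
	return nil
}

func main() {
	fmt.Println(validateHead(true, 100, 1000))  // fast sync below checkpoint: rejected
	fmt.Println(validateHead(false, 100, 1000)) // full sync is unaffected
	fmt.Println(validateHead(true, 2000, 1000)) // head past checkpoint: accepted
}
```

Full and light sync are untouched by this check, which is why only the fast-sync test variants below expect `errUnsyncedPeer`.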


@@ -26,7 +26,7 @@ import (
 	"testing"
 	"time"
-	ethereum "github.com/ethereum/go-ethereum"
+	"github.com/ethereum/go-ethereum"
 	"github.com/ethereum/go-ethereum/common"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/ethdb"
@@ -73,7 +73,8 @@ func newTester() *downloadTester {
 	}
 	tester.stateDb = ethdb.NewMemDatabase()
 	tester.stateDb.Put(testGenesis.Root().Bytes(), []byte{0x00})
-	tester.downloader = New(FullSync, tester.stateDb, new(event.TypeMux), tester, nil, tester.dropPeer)
+	tester.downloader = New(FullSync, 0, tester.stateDb, new(event.TypeMux), tester, nil, tester.dropPeer)
 	return tester
 }
@@ -1049,6 +1050,7 @@ func testBlockHeaderAttackerDropping(t *testing.T, protocol int) {
 		{errUnknownPeer, false},   // Peer is unknown, was already dropped, don't double drop
 		{errBadPeer, true},        // Peer was deemed bad for some reason, drop it
 		{errStallingPeer, true},   // Peer was detected to be stalling, drop it
+		{errUnsyncedPeer, true},   // Peer was detected to be unsynced, drop it
 		{errNoPeers, false},       // No peers to download from, soft race, no issue
 		{errTimeout, true},        // No hashes received in due time, drop the peer
 		{errEmptyHeaderSet, true}, // No headers were returned as a response, drop as it's a dead end
@@ -1567,3 +1569,39 @@ func TestRemoteHeaderRequestSpan(t *testing.T) {
 		}
 	}
 }
// Tests that peers below a pre-configured checkpoint block are prevented from
// being fast-synced from, avoiding potential cheap eclipse attacks.
func TestCheckpointEnforcement62(t *testing.T) { testCheckpointEnforcement(t, 62, FullSync) }
func TestCheckpointEnforcement63Full(t *testing.T) { testCheckpointEnforcement(t, 63, FullSync) }
func TestCheckpointEnforcement63Fast(t *testing.T) { testCheckpointEnforcement(t, 63, FastSync) }
func TestCheckpointEnforcement64Full(t *testing.T) { testCheckpointEnforcement(t, 64, FullSync) }
func TestCheckpointEnforcement64Fast(t *testing.T) { testCheckpointEnforcement(t, 64, FastSync) }
func TestCheckpointEnforcement64Light(t *testing.T) { testCheckpointEnforcement(t, 64, LightSync) }
func testCheckpointEnforcement(t *testing.T, protocol int, mode SyncMode) {
t.Parallel()
// Create a new tester with a particular hard coded checkpoint block
tester := newTester()
defer tester.terminate()
tester.downloader.checkpoint = uint64(fsMinFullBlocks) + 256
chain := testChainBase.shorten(int(tester.downloader.checkpoint) - 1)
// Attempt to sync with the peer and validate the result
tester.newPeer("peer", protocol, chain)
var expect error
if mode == FastSync {
expect = errUnsyncedPeer
}
if err := tester.sync("peer", nil, mode); err != expect {
t.Fatalf("block sync error mismatch: have %v, want %v", err, expect)
}
if mode == FastSync {
assertOwnChain(t, tester, 1)
} else {
assertOwnChain(t, tester, chain.len())
}
}


@ -28,7 +28,6 @@ import (
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus" "github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/eth/downloader" "github.com/ethereum/go-ethereum/eth/downloader"
@ -55,7 +54,7 @@ const (
) )
var ( var (
daoChallengeTimeout = 15 * time.Second // Time allowance for a node to reply to the DAO handshake challenge syncChallengeTimeout = 15 * time.Second // Time allowance for a node to reply to the sync progress challenge
) )
// errIncompatibleConfig is returned if the requested protocols and configs are // errIncompatibleConfig is returned if the requested protocols and configs are
@ -72,6 +71,9 @@ type ProtocolManager struct {
fastSync uint32 // Flag whether fast sync is enabled (gets disabled if we already have blocks) fastSync uint32 // Flag whether fast sync is enabled (gets disabled if we already have blocks)
acceptTxs uint32 // Flag whether we're considered synchronised (enables transaction processing) acceptTxs uint32 // Flag whether we're considered synchronised (enables transaction processing)
checkpointNumber uint64 // Block number for the sync progress validator to cross reference
checkpointHash common.Hash // Block hash for the sync progress validator to cross reference
txpool txPool txpool txPool
blockchain *core.BlockChain blockchain *core.BlockChain
chainconfig *params.ChainConfig chainconfig *params.ChainConfig
@ -126,6 +128,11 @@ func NewProtocolManager(config *params.ChainConfig, mode downloader.SyncMode, ne
if mode == downloader.FastSync { if mode == downloader.FastSync {
manager.fastSync = uint32(1) manager.fastSync = uint32(1)
} }
// If we have trusted checkpoints, enforce them on the chain
if checkpoint, ok := params.TrustedCheckpoints[blockchain.Genesis().Hash()]; ok {
manager.checkpointNumber = (checkpoint.SectionIndex+1)*params.CHTFrequencyClient - 1
manager.checkpointHash = checkpoint.SectionHead
}
// Initiate a sub-protocol for every implemented version we can handle // Initiate a sub-protocol for every implemented version we can handle
manager.SubProtocols = make([]p2p.Protocol, 0, len(ProtocolVersions)) manager.SubProtocols = make([]p2p.Protocol, 0, len(ProtocolVersions))
for i, version := range ProtocolVersions { for i, version := range ProtocolVersions {
@ -165,7 +172,7 @@ func NewProtocolManager(config *params.ChainConfig, mode downloader.SyncMode, ne
return nil, errIncompatibleConfig return nil, errIncompatibleConfig
} }
// Construct the different synchronisation mechanisms // Construct the different synchronisation mechanisms
manager.downloader = downloader.New(mode, chaindb, manager.eventMux, blockchain, nil, manager.removePeer) manager.downloader = downloader.New(mode, manager.checkpointNumber, chaindb, manager.eventMux, blockchain, nil, manager.removePeer)
validator := func(header *types.Header) error { validator := func(header *types.Header) error {
return engine.VerifyHeader(blockchain, header, true) return engine.VerifyHeader(blockchain, header, true)
@ -291,22 +298,22 @@ func (pm *ProtocolManager) handle(p *peer) error {
// after this will be sent via broadcasts. // after this will be sent via broadcasts.
pm.syncTransactions(p) pm.syncTransactions(p)
// If we're DAO hard-fork aware, validate any remote peer with regard to the hard-fork // If we have a trusted CHT, reject all peers below that (avoid fast sync eclipse)
if daoBlock := pm.chainconfig.DAOForkBlock; daoBlock != nil { if pm.checkpointHash != (common.Hash{}) {
// Request the peer's DAO fork header for extra-data validation // Request the peer's checkpoint header for chain height/weight validation
if err := p.RequestHeadersByNumber(daoBlock.Uint64(), 1, 0, false); err != nil { if err := p.RequestHeadersByNumber(pm.checkpointNumber, 1, 0, false); err != nil {
return err return err
} }
// Start a timer to disconnect if the peer doesn't reply in time // Start a timer to disconnect if the peer doesn't reply in time
p.forkDrop = time.AfterFunc(daoChallengeTimeout, func() { p.syncDrop = time.AfterFunc(syncChallengeTimeout, func() {
p.Log().Debug("Timed out DAO fork-check, dropping") p.Log().Warn("Checkpoint challenge timed out, dropping", "addr", p.RemoteAddr(), "type", p.Name())
pm.removePeer(p.id) pm.removePeer(p.id)
}) })
// Make sure it's cleaned up if the peer dies off // Make sure it's cleaned up if the peer dies off
defer func() { defer func() {
if p.forkDrop != nil { if p.syncDrop != nil {
p.forkDrop.Stop() p.syncDrop.Stop()
p.forkDrop = nil p.syncDrop = nil
} }
}() }()
} }
@ -438,41 +445,33 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
if err := msg.Decode(&headers); err != nil { if err := msg.Decode(&headers); err != nil {
return errResp(ErrDecode, "msg %v: %v", msg, err) return errResp(ErrDecode, "msg %v: %v", msg, err)
} }
// If no headers were received, but we're expecting a DAO fork check, maybe it's that // If no headers were received, but we're expecting a checkpoint header, consider it that
if len(headers) == 0 && p.forkDrop != nil { if len(headers) == 0 && p.syncDrop != nil {
// Possibly an empty reply to the fork header checks, sanity check TDs // Stop the timer either way, decide later to drop or not
verifyDAO := true p.syncDrop.Stop()
p.syncDrop = nil
// If we already have a DAO header, we can check the peer's TD against it. If // If we're doing a fast sync, we must enforce the checkpoint block to avoid
// the peer's ahead of this, it too must have a reply to the DAO check // eclipse attacks. Unsynced nodes are welcome to connect after we're done
if daoHeader := pm.blockchain.GetHeaderByNumber(pm.chainconfig.DAOForkBlock.Uint64()); daoHeader != nil { // joining the network
if _, td := p.Head(); td.Cmp(pm.blockchain.GetTd(daoHeader.Hash(), daoHeader.Number.Uint64())) >= 0 { if atomic.LoadUint32(&pm.fastSync) == 1 {
verifyDAO = false p.Log().Warn("Dropping unsynced node during fast sync", "addr", p.RemoteAddr(), "type", p.Name())
} return errors.New("unsynced node cannot serve fast sync")
}
// If we're seemingly on the same chain, disable the drop timer
if verifyDAO {
p.Log().Debug("Seems to be on the same side of the DAO fork")
p.forkDrop.Stop()
p.forkDrop = nil
return nil
} }
} }
// Filter out any explicitly requested headers, deliver the rest to the downloader // Filter out any explicitly requested headers, deliver the rest to the downloader
filter := len(headers) == 1 filter := len(headers) == 1
if filter { if filter {
// If it's a potential DAO fork check, validate against the rules // If it's a potential sync progress check, validate the content and advertised chain weight
if p.forkDrop != nil && pm.chainconfig.DAOForkBlock.Cmp(headers[0].Number) == 0 { if p.syncDrop != nil && headers[0].Number.Uint64() == pm.checkpointNumber {
// Disable the fork drop timer // Disable the sync drop timer
p.forkDrop.Stop() p.syncDrop.Stop()
p.forkDrop = nil p.syncDrop = nil
// Validate the header and either drop the peer or continue // Validate the header and either drop the peer or continue
if err := misc.VerifyDAOHeaderExtraData(pm.chainconfig, headers[0]); err != nil { if headers[0].Hash() != pm.checkpointHash {
p.Log().Debug("Verified to be on the other side of the DAO fork, dropping") return errors.New("checkpoint hash mismatch")
return err
} }
p.Log().Debug("Verified to be on the same side of the DAO fork")
return nil return nil
} }
// Otherwise if it's a whitelisted block, validate against the set // Otherwise if it's a whitelisted block, validate against the set
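The checkpoint-challenge handling above reduces to three rules: an empty reply is fatal only during fast sync, a header at the checkpoint height must match the trusted hash, and anything else is left to the downloader. A standalone sketch of that decision (the `header` struct and `evaluateChallenge` are illustrative stand-ins, not geth's types):

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-in for the handler's view of a block header.
type header struct {
	number uint64
	hash   string
}

// evaluateChallenge models the decision the new handleMsg code makes when a
// checkpoint challenge response arrives: empty replies are fatal only during
// fast sync, and a header at the checkpoint height must match the trusted hash.
func evaluateChallenge(headers []header, checkpointNumber uint64, checkpointHash string, fastSync bool) error {
	if len(headers) == 0 {
		// Unsynced peers cannot serve the checkpoint data needed for fast sync.
		if fastSync {
			return errors.New("unsynced node cannot serve fast sync")
		}
		return nil
	}
	if len(headers) == 1 && headers[0].number == checkpointNumber {
		if headers[0].hash != checkpointHash {
			return errors.New("checkpoint hash mismatch")
		}
	}
	return nil
}

func main() {
	fmt.Println(evaluateChallenge(nil, 99, "abc", true))
	fmt.Println(evaluateChallenge([]header{{99, "abc"}}, 99, "abc", false))
}
```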

@ -449,48 +449,95 @@ func testGetReceipt(t *testing.T, protocol int) {
} }
} }
// Tests that post eth protocol handshake, DAO fork-enabled clients also execute // Tests that post eth protocol handshake, clients perform a mutual checkpoint
// a DAO "challenge" verifying each others' DAO fork headers to ensure they're on // challenge to validate each other's chains. Hash mismatches, or missing ones
// compatible chains. // during a fast sync should lead to the peer getting dropped.
func TestDAOChallengeNoVsNo(t *testing.T) { testDAOChallenge(t, false, false, false) } func TestCheckpointChallenge(t *testing.T) {
func TestDAOChallengeNoVsPro(t *testing.T) { testDAOChallenge(t, false, true, false) } tests := []struct {
func TestDAOChallengeProVsNo(t *testing.T) { testDAOChallenge(t, true, false, false) } syncmode downloader.SyncMode
func TestDAOChallengeProVsPro(t *testing.T) { testDAOChallenge(t, true, true, false) } checkpoint bool
func TestDAOChallengeNoVsTimeout(t *testing.T) { testDAOChallenge(t, false, false, true) } timeout bool
func TestDAOChallengeProVsTimeout(t *testing.T) { testDAOChallenge(t, true, true, true) } empty bool
match bool
drop bool
}{
// If checkpointing is not enabled locally, don't challenge and don't drop
{downloader.FullSync, false, false, false, false, false},
{downloader.FastSync, false, false, false, false, false},
{downloader.LightSync, false, false, false, false, false},
func testDAOChallenge(t *testing.T, localForked, remoteForked bool, timeout bool) { // If checkpointing is enabled locally and remote response is empty, only drop during fast sync
// Reduce the DAO handshake challenge timeout {downloader.FullSync, true, false, true, false, false},
if timeout { {downloader.FastSync, true, false, true, false, true}, // Special case, fast sync, unsynced peer
defer func(old time.Duration) { daoChallengeTimeout = old }(daoChallengeTimeout) {downloader.LightSync, true, false, true, false, false},
daoChallengeTimeout = 500 * time.Millisecond
// If checkpointing is enabled locally and remote response mismatches, always drop
{downloader.FullSync, true, false, false, false, true},
{downloader.FastSync, true, false, false, false, true},
{downloader.LightSync, true, false, false, false, true},
// If checkpointing is enabled locally and remote response matches, never drop
{downloader.FullSync, true, false, false, true, false},
{downloader.FastSync, true, false, false, true, false},
{downloader.LightSync, true, false, false, true, false},
// If checkpointing is enabled locally and remote times out, always drop
{downloader.FullSync, true, true, false, true, true},
{downloader.FastSync, true, true, false, true, true},
{downloader.LightSync, true, true, false, true, true},
} }
// Create a DAO aware protocol manager for _, tt := range tests {
t.Run(fmt.Sprintf("sync %v checkpoint %v timeout %v empty %v match %v", tt.syncmode, tt.checkpoint, tt.timeout, tt.empty, tt.match), func(t *testing.T) {
testCheckpointChallenge(t, tt.syncmode, tt.checkpoint, tt.timeout, tt.empty, tt.match, tt.drop)
})
}
}
func testCheckpointChallenge(t *testing.T, syncmode downloader.SyncMode, checkpoint bool, timeout bool, empty bool, match bool, drop bool) {
// Reduce the checkpoint handshake challenge timeout
defer func(old time.Duration) { syncChallengeTimeout = old }(syncChallengeTimeout)
syncChallengeTimeout = 250 * time.Millisecond
// Initialize a chain and generate a fake CHT if checkpointing is enabled
var ( var (
evmux = new(event.TypeMux)
pow = ethash.NewFaker()
db = ethdb.NewMemDatabase() db = ethdb.NewMemDatabase()
config = &params.ChainConfig{DAOForkBlock: big.NewInt(1), DAOForkSupport: localForked} config = new(params.ChainConfig)
gspec = &core.Genesis{Config: config} genesis = (&core.Genesis{Config: config}).MustCommit(db)
genesis = gspec.MustCommit(db)
) )
blockchain, err := core.NewBlockChain(db, nil, config, pow, vm.Config{}, nil) // If checkpointing is enabled, create and inject a fake CHT and the corresponding
// challenge response.
var response *types.Header
if checkpoint {
index := uint64(rand.Intn(500))
number := (index+1)*params.CHTFrequencyClient - 1
response = &types.Header{Number: big.NewInt(int64(number)), Extra: []byte("valid")}
cht := &params.TrustedCheckpoint{
SectionIndex: index,
SectionHead: response.Hash(),
}
params.TrustedCheckpoints[genesis.Hash()] = cht
defer delete(params.TrustedCheckpoints, genesis.Hash())
}
// Create a checkpoint aware protocol manager
blockchain, err := core.NewBlockChain(db, nil, config, ethash.NewFaker(), vm.Config{}, nil)
if err != nil { if err != nil {
t.Fatalf("failed to create new blockchain: %v", err) t.Fatalf("failed to create new blockchain: %v", err)
} }
pm, err := NewProtocolManager(config, downloader.FullSync, DefaultConfig.NetworkId, evmux, new(testTxPool), pow, blockchain, db, nil) pm, err := NewProtocolManager(config, syncmode, DefaultConfig.NetworkId, new(event.TypeMux), new(testTxPool), ethash.NewFaker(), blockchain, db, nil)
if err != nil { if err != nil {
t.Fatalf("failed to start test protocol manager: %v", err) t.Fatalf("failed to start test protocol manager: %v", err)
} }
pm.Start(1000) pm.Start(1000)
defer pm.Stop() defer pm.Stop()
// Connect a new peer and check that we receive the DAO challenge // Connect a new peer and check that we receive the checkpoint challenge
peer, _ := newTestPeer("peer", eth63, pm, true) peer, _ := newTestPeer("peer", eth63, pm, true)
defer peer.close() defer peer.close()
if checkpoint {
challenge := &getBlockHeadersData{ challenge := &getBlockHeadersData{
Origin: hashOrNumber{Number: config.DAOForkBlock.Uint64()}, Origin: hashOrNumber{Number: response.Number.Uint64()},
Amount: 1, Amount: 1,
Skip: 0, Skip: 0,
Reverse: false, Reverse: false,
@ -500,28 +547,33 @@ func testDAOChallenge(t *testing.T, localForked, remoteForked bool, timeout bool
} }
// Create a block to reply to the challenge if no timeout is simulated // Create a block to reply to the challenge if no timeout is simulated
if !timeout { if !timeout {
blocks, _ := core.GenerateChain(&params.ChainConfig{}, genesis, ethash.NewFaker(), db, 1, func(i int, block *core.BlockGen) { if empty {
if remoteForked { if err := p2p.Send(peer.app, BlockHeadersMsg, []*types.Header{}); err != nil {
block.SetExtra(params.DAOForkBlockExtra)
}
})
if err := p2p.Send(peer.app, BlockHeadersMsg, []*types.Header{blocks[0].Header()}); err != nil {
t.Fatalf("failed to answer challenge: %v", err) t.Fatalf("failed to answer challenge: %v", err)
} }
time.Sleep(100 * time.Millisecond) // Sleep to avoid the verification racing with the drops } else if match {
} else { if err := p2p.Send(peer.app, BlockHeadersMsg, []*types.Header{response}); err != nil {
// Otherwise wait until the test timeout passes t.Fatalf("failed to answer challenge: %v", err)
time.Sleep(daoChallengeTimeout + 500*time.Millisecond)
}
// Verify that depending on fork side, the remote peer is maintained or dropped
if localForked == remoteForked && !timeout {
if peers := pm.peers.Len(); peers != 1 {
t.Fatalf("peer count mismatch: have %d, want %d", peers, 1)
} }
} else { } else {
if err := p2p.Send(peer.app, BlockHeadersMsg, []*types.Header{{Number: response.Number}}); err != nil {
t.Fatalf("failed to answer challenge: %v", err)
}
}
}
}
// Wait until the test timeout passes to ensure proper cleanup
time.Sleep(syncChallengeTimeout + 100*time.Millisecond)
// Verify that the remote peer is maintained or dropped
if drop {
if peers := pm.peers.Len(); peers != 0 { if peers := pm.peers.Len(); peers != 0 {
t.Fatalf("peer count mismatch: have %d, want %d", peers, 0) t.Fatalf("peer count mismatch: have %d, want %d", peers, 0)
} }
} else {
if peers := pm.peers.Len(); peers != 1 {
t.Fatalf("peer count mismatch: have %d, want %d", peers, 1)
}
} }
} }

@ -79,7 +79,7 @@ type peer struct {
rw p2p.MsgReadWriter rw p2p.MsgReadWriter
version int // Protocol version negotiated version int // Protocol version negotiated
forkDrop *time.Timer // Timed connection dropper if forks aren't validated in time syncDrop *time.Timer // Timed connection dropper if sync progress isn't validated in time
head common.Hash head common.Hash
td *big.Int td *big.Int

@ -188,14 +188,12 @@ func (pm *ProtocolManager) synchronise(peer *peer) {
atomic.StoreUint32(&pm.fastSync, 1) atomic.StoreUint32(&pm.fastSync, 1)
mode = downloader.FastSync mode = downloader.FastSync
} }
if mode == downloader.FastSync { if mode == downloader.FastSync {
// Make sure the peer's total difficulty we are synchronizing is higher. // Make sure the peer's total difficulty we are synchronizing is higher.
if pm.blockchain.GetTdByHash(pm.blockchain.CurrentFastBlock().Hash()).Cmp(pTd) >= 0 { if pm.blockchain.GetTdByHash(pm.blockchain.CurrentFastBlock().Hash()).Cmp(pTd) >= 0 {
return return
} }
} }
// Run the sync cycle, and disable fast sync if we've gone past the pivot block // Run the sync cycle, and disable fast sync if we've gone past the pivot block
if err := pm.downloader.Synchronise(peer.id, pHead, pTd, mode); err != nil { if err := pm.downloader.Synchronise(peer.id, pHead, pTd, mode); err != nil {
return return

@ -557,7 +557,7 @@ func (s *Service) assembleBlockStats(block *types.Block) *blockStats {
Number: header.Number, Number: header.Number,
Hash: header.Hash(), Hash: header.Hash(),
ParentHash: header.ParentHash, ParentHash: header.ParentHash,
Timestamp: header.Time, Timestamp: new(big.Int).SetUint64(header.Time),
Miner: author, Miner: author,
GasUsed: header.GasUsed, GasUsed: header.GasUsed,
GasLimit: header.GasLimit, GasLimit: header.GasLimit,

@ -177,3 +177,34 @@ func ExpandPackagesNoVendor(patterns []string) []string {
} }
return patterns return patterns
} }
// UploadSFTP uploads files to a remote host using the sftp command line tool.
// The destination host may be specified either as [user@]host or as a URI in
// the form sftp://[user@]host[:port].
func UploadSFTP(identityFile, host, dir string, files []string) error {
sftp := exec.Command("sftp")
sftp.Stdout = nil
sftp.Stderr = os.Stderr
if identityFile != "" {
sftp.Args = append(sftp.Args, "-i", identityFile)
}
sftp.Args = append(sftp.Args, host)
fmt.Println(">>>", strings.Join(sftp.Args, " "))
if *DryRunFlag {
return nil
}
stdin, err := sftp.StdinPipe()
if err != nil {
return fmt.Errorf("can't create stdin pipe for sftp: %v", err)
}
if err := sftp.Start(); err != nil {
return err
}
in := io.MultiWriter(stdin, os.Stdout)
for _, f := range files {
fmt.Fprintln(in, "put", f, path.Join(dir, filepath.Base(f)))
}
stdin.Close()
return sftp.Wait()
}
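`UploadSFTP` drives the `sftp` binary in batch mode, writing one `put` command per file to its stdin, with each file landing in `dir` under its base name. That command mapping, extracted into a hypothetical pure helper so it can be seen in isolation:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
	"strings"
)

// sftpBatch is a hypothetical helper reproducing the batch commands
// UploadSFTP streams to sftp's stdin: one "put" line per file, targeting
// the remote dir joined with the file's base name.
func sftpBatch(dir string, files []string) string {
	var b strings.Builder
	for _, f := range files {
		fmt.Fprintln(&b, "put", f, path.Join(dir, filepath.Base(f)))
	}
	return b.String()
}

func main() {
	fmt.Print(sftpBatch("/uploads", []string{"build/geth.tar.gz", "build/geth.tar.gz.sig"}))
}
```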

@ -683,7 +683,7 @@ type CallArgs struct {
Data hexutil.Bytes `json:"data"` Data hexutil.Bytes `json:"data"`
} }
func (s *PublicBlockChainAPI) doCall(ctx context.Context, args CallArgs, blockNr rpc.BlockNumber, timeout time.Duration) ([]byte, uint64, bool, error) { func (s *PublicBlockChainAPI) doCall(ctx context.Context, args CallArgs, blockNr rpc.BlockNumber, timeout time.Duration, globalGasCap *big.Int) ([]byte, uint64, bool, error) {
defer func(start time.Time) { log.Debug("Executing EVM call finished", "runtime", time.Since(start)) }(time.Now()) defer func(start time.Time) { log.Debug("Executing EVM call finished", "runtime", time.Since(start)) }(time.Now())
state, header, err := s.b.StateAndHeaderByNumber(ctx, blockNr) state, header, err := s.b.StateAndHeaderByNumber(ctx, blockNr)
@ -700,14 +700,18 @@ func (s *PublicBlockChainAPI) doCall(ctx context.Context, args CallArgs, blockNr
} }
} }
// Set default gas & gas price if none were set // Set default gas & gas price if none were set
gas, gasPrice := uint64(args.Gas), args.GasPrice.ToInt() gas := uint64(args.Gas)
if gas == 0 { if gas == 0 {
gas = math.MaxUint64 / 2 gas = math.MaxUint64 / 2
} }
if globalGasCap != nil && globalGasCap.Uint64() < gas {
log.Warn("Caller gas above allowance, capping", "requested", gas, "cap", globalGasCap)
gas = globalGasCap.Uint64()
}
gasPrice := args.GasPrice.ToInt()
if gasPrice.Sign() == 0 { if gasPrice.Sign() == 0 {
gasPrice = new(big.Int).SetUint64(defaultGasPrice) gasPrice = new(big.Int).SetUint64(defaultGasPrice)
} }
// Create new call message // Create new call message
msg := types.NewMessage(addr, args.To, 0, args.Value.ToInt(), gas, gasPrice, args.Data, false) msg := types.NewMessage(addr, args.To, 0, args.Value.ToInt(), gas, gasPrice, args.Data, false)
@ -748,7 +752,7 @@ func (s *PublicBlockChainAPI) doCall(ctx context.Context, args CallArgs, blockNr
// Call executes the given transaction on the state for the given block number. // Call executes the given transaction on the state for the given block number.
// It doesn't make any changes in the state/blockchain and is useful to execute and retrieve values. // It doesn't make any changes in the state/blockchain and is useful to execute and retrieve values.
func (s *PublicBlockChainAPI) Call(ctx context.Context, args CallArgs, blockNr rpc.BlockNumber) (hexutil.Bytes, error) { func (s *PublicBlockChainAPI) Call(ctx context.Context, args CallArgs, blockNr rpc.BlockNumber) (hexutil.Bytes, error) {
result, _, _, err := s.doCall(ctx, args, blockNr, 5*time.Second) result, _, _, err := s.doCall(ctx, args, blockNr, 5*time.Second, s.b.RPCGasCap())
return (hexutil.Bytes)(result), err return (hexutil.Bytes)(result), err
} }
@ -771,13 +775,18 @@ func (s *PublicBlockChainAPI) EstimateGas(ctx context.Context, args CallArgs) (h
} }
hi = block.GasLimit() hi = block.GasLimit()
} }
gasCap := s.b.RPCGasCap()
if gasCap != nil && hi > gasCap.Uint64() {
log.Warn("Caller gas above allowance, capping", "requested", hi, "cap", gasCap)
hi = gasCap.Uint64()
}
cap = hi cap = hi
// Create a helper to check if a gas allowance results in an executable transaction // Create a helper to check if a gas allowance results in an executable transaction
executable := func(gas uint64) bool { executable := func(gas uint64) bool {
args.Gas = hexutil.Uint64(gas) args.Gas = hexutil.Uint64(gas)
_, _, failed, err := s.doCall(ctx, args, rpc.PendingBlockNumber, 0) _, _, failed, err := s.doCall(ctx, args, rpc.PendingBlockNumber, 0, gasCap)
if err != nil || failed { if err != nil || failed {
return false return false
} }
@ -795,7 +804,7 @@ func (s *PublicBlockChainAPI) EstimateGas(ctx context.Context, args CallArgs) (h
// Reject the transaction as invalid if it still fails at the highest allowance // Reject the transaction as invalid if it still fails at the highest allowance
if hi == cap { if hi == cap {
if !executable(hi) { if !executable(hi) {
return 0, fmt.Errorf("gas required exceeds allowance or always failing transaction") return 0, fmt.Errorf("gas required exceeds allowance (%d) or always failing transaction", cap)
} }
} }
return hexutil.Uint64(hi), nil return hexutil.Uint64(hi), nil
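The gas-cap change clamps both an explicit `eth_call` gas value and the `EstimateGas` upper bound to an operator-configured limit. A minimal sketch of the clamp (names illustrative; in geth the cap is a `*big.Int` and `nil` disables it, so zero plays the "no cap" role here):

```go
package main

import "fmt"

// capGas sketches the clamp added to doCall/EstimateGas: requests above the
// node operator's global cap are reduced to it. Zero means "no cap set".
func capGas(requested, globalCap uint64) uint64 {
	if globalCap != 0 && requested > globalCap {
		return globalCap
	}
	return requested
}

func main() {
	fmt.Println(capGas(10000000, 5000000)) // clamped to the cap
	fmt.Println(capGas(21000, 5000000))    // under the cap, passes through
	fmt.Println(capGas(10000000, 0))       // no cap configured
}
```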
@ -882,7 +891,7 @@ func RPCMarshalBlock(b *types.Block, inclTx bool, fullTx bool) (map[string]inter
"size": hexutil.Uint64(b.Size()), "size": hexutil.Uint64(b.Size()),
"gasLimit": hexutil.Uint64(head.GasLimit), "gasLimit": hexutil.Uint64(head.GasLimit),
"gasUsed": hexutil.Uint64(head.GasUsed), "gasUsed": hexutil.Uint64(head.GasUsed),
"timestamp": (*hexutil.Big)(head.Time), "timestamp": hexutil.Uint64(head.Time),
"transactionsRoot": head.TxHash, "transactionsRoot": head.TxHash,
"receiptsRoot": head.ReceiptHash, "receiptsRoot": head.ReceiptHash,
} }

@ -44,6 +44,7 @@ type Backend interface {
ChainDb() ethdb.Database ChainDb() ethdb.Database
EventMux() *event.TypeMux EventMux() *event.TypeMux
AccountManager() *accounts.Manager AccountManager() *accounts.Manager
RPCGasCap() *big.Int // global gas cap for eth_call over rpc: DoS protection
// BlockChain API // BlockChain API
SetHead(number uint64) SetHead(number uint64)

@ -187,6 +187,10 @@ func (b *LesApiBackend) AccountManager() *accounts.Manager {
return b.eth.accountManager return b.eth.accountManager
} }
func (b *LesApiBackend) RPCGasCap() *big.Int {
return b.eth.config.RPCGasCap
}
func (b *LesApiBackend) BloomStatus() (uint64, uint64) { func (b *LesApiBackend) BloomStatus() (uint64, uint64) {
if b.eth.bloomIndexer == nil { if b.eth.bloomIndexer == nil {
return 0, 0 return 0, 0

@ -153,9 +153,12 @@ func NewProtocolManager(chainConfig *params.ChainConfig, indexerConfig *light.In
if disableClientRemovePeer { if disableClientRemovePeer {
removePeer = func(id string) {} removePeer = func(id string) {}
} }
if lightSync { if lightSync {
manager.downloader = downloader.New(downloader.LightSync, chainDb, manager.eventMux, nil, blockchain, removePeer) var checkpoint uint64
if cht, ok := params.TrustedCheckpoints[blockchain.Genesis().Hash()]; ok {
checkpoint = (cht.SectionIndex+1)*params.CHTFrequencyClient - 1
}
manager.downloader = downloader.New(downloader.LightSync, checkpoint, chainDb, manager.eventMux, nil, blockchain, removePeer)
manager.peers.notify((*downloaderPeerNotify)(manager)) manager.peers.notify((*downloaderPeerNotify)(manager))
manager.fetcher = newLightFetcher(manager) manager.fetcher = newLightFetcher(manager)
} }
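The light client derives the checkpoint block number from the CHT section index: each section covers `CHTFrequencyClient` blocks, so the last block of section `i` sits at `(i+1)*frequency - 1`. A sketch of that expression, assuming the 32768-block client-side section size in use at the time of this release:

```go
package main

import "fmt"

// chtFrequencyClient assumes the 32768-block client-side CHT section size;
// the real value lives in the params package.
const chtFrequencyClient = 32768

// checkpointNumber mirrors the expression in the diff above: the number of
// the last block covered by CHT section index i.
func checkpointNumber(sectionIndex uint64) uint64 {
	return (sectionIndex+1)*chtFrequencyClient - 1
}

func main() {
	fmt.Println(checkpointNumber(0)) // last block of the first section
	fmt.Println(checkpointNumber(1))
}
```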
@ -324,7 +327,11 @@ func (pm *ProtocolManager) handle(p *peer) error {
} }
} }
var reqList = []uint64{GetBlockHeadersMsg, GetBlockBodiesMsg, GetCodeMsg, GetReceiptsMsg, GetProofsV1Msg, SendTxMsg, SendTxV2Msg, GetTxStatusMsg, GetHeaderProofsMsg, GetProofsV2Msg, GetHelperTrieProofsMsg} var (
reqList = []uint64{GetBlockHeadersMsg, GetBlockBodiesMsg, GetCodeMsg, GetReceiptsMsg, GetProofsV1Msg, SendTxMsg, SendTxV2Msg, GetTxStatusMsg, GetHeaderProofsMsg, GetProofsV2Msg, GetHelperTrieProofsMsg}
reqListV1 = []uint64{GetBlockHeadersMsg, GetBlockBodiesMsg, GetCodeMsg, GetReceiptsMsg, GetProofsV1Msg, SendTxMsg, GetHeaderProofsMsg}
reqListV2 = []uint64{GetBlockHeadersMsg, GetBlockBodiesMsg, GetCodeMsg, GetReceiptsMsg, SendTxV2Msg, GetTxStatusMsg, GetProofsV2Msg, GetHelperTrieProofsMsg}
)
// handleMsg is invoked whenever an inbound message is received from a remote // handleMsg is invoked whenever an inbound message is received from a remote
// peer. The remote connection is torn down upon returning any error. // peer. The remote connection is torn down upon returning any error.

@ -508,8 +508,9 @@ func TestTransactionStatusLes2(t *testing.T) {
test := func(tx *types.Transaction, send bool, expStatus txStatus) { test := func(tx *types.Transaction, send bool, expStatus txStatus) {
reqID++ reqID++
if send { if send {
cost := peer.GetRequestCost(SendTxV2Msg, 1) enc, _ := rlp.EncodeToBytes(types.Transactions{tx})
sendRequest(peer.app, SendTxV2Msg, reqID, cost, types.Transactions{tx}) cost := peer.GetTxRelayCost(1, len(enc))
sendRequest(peer.app, SendTxV2Msg, reqID, cost, rlp.RawValue(enc))
} else { } else {
cost := peer.GetRequestCost(GetTxStatusMsg, 1) cost := peer.GetRequestCost(GetTxStatusMsg, 1)
sendRequest(peer.app, GetTxStatusMsg, reqID, cost, []common.Hash{tx.Hash()}) sendRequest(peer.app, GetTxStatusMsg, reqID, cost, []common.Hash{tx.Hash()})

@ -42,6 +42,11 @@ var (
const maxResponseErrors = 50 // number of invalid responses tolerated (makes the protocol less brittle but still avoids spam) const maxResponseErrors = 50 // number of invalid responses tolerated (makes the protocol less brittle but still avoids spam)
// if the total encoded size of a sent transaction batch is over txSizeCostLimit
// per transaction then the request cost is calculated as proportional to the
// encoded size instead of the transaction count
const txSizeCostLimit = 0x4000
const ( const (
announceTypeNone = iota announceTypeNone = iota
announceTypeSimple announceTypeSimple
@ -163,7 +168,41 @@ func (p *peer) GetRequestCost(msgcode uint64, amount int) uint64 {
p.lock.RLock() p.lock.RLock()
defer p.lock.RUnlock() defer p.lock.RUnlock()
cost := p.fcCosts[msgcode].baseCost + p.fcCosts[msgcode].reqCost*uint64(amount) costs := p.fcCosts[msgcode]
if costs == nil {
return 0
}
cost := costs.baseCost + costs.reqCost*uint64(amount)
if cost > p.fcServerParams.BufLimit {
cost = p.fcServerParams.BufLimit
}
return cost
}
func (p *peer) GetTxRelayCost(amount, size int) uint64 {
p.lock.RLock()
defer p.lock.RUnlock()
var msgcode uint64
switch p.version {
case lpv1:
msgcode = SendTxMsg
case lpv2:
msgcode = SendTxV2Msg
default:
panic(nil)
}
costs := p.fcCosts[msgcode]
if costs == nil {
return 0
}
cost := costs.baseCost + costs.reqCost*uint64(amount)
sizeCost := costs.baseCost + costs.reqCost*uint64(size)/txSizeCostLimit
if sizeCost > cost {
cost = sizeCost
}
if cost > p.fcServerParams.BufLimit { if cost > p.fcServerParams.BufLimit {
cost = p.fcServerParams.BufLimit cost = p.fcServerParams.BufLimit
} }
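The backported SendTx cost rule above charges by transaction count but switches to a size-proportional cost once the batch averages more than `txSizeCostLimit` encoded bytes per transaction, then clamps to the buffer limit. The count-vs-size part as a standalone sketch (`base` and `perReq` stand in for the peer's negotiated cost-table entries `baseCost`/`reqCost`):

```go
package main

import "fmt"

// Per-transaction size threshold from the diff: above this average encoded
// size, cost scales with bytes rather than transaction count.
const txSizeCostLimit = 0x4000

// txRelayCost sketches the new SendTx cost calculation: the larger of the
// count-based and size-based cost wins.
func txRelayCost(base, perReq uint64, amount, size int) uint64 {
	cost := base + perReq*uint64(amount)
	sizeCost := base + perReq*uint64(size)/txSizeCostLimit
	if sizeCost > cost {
		cost = sizeCost
	}
	return cost
}

func main() {
	fmt.Println(txRelayCost(100, 10, 2, 1000))  // small batch: count-based
	fmt.Println(txRelayCost(100, 10, 1, 1<<20)) // huge tx: size-based
}
```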
@ -307,9 +346,9 @@ func (p *peer) RequestTxStatus(reqID, cost uint64, txHashes []common.Hash) error
return sendRequest(p.rw, GetTxStatusMsg, reqID, cost, txHashes) return sendRequest(p.rw, GetTxStatusMsg, reqID, cost, txHashes)
} }
// SendTxStatus sends a batch of transactions to be added to the remote transaction pool. // SendTxs sends a batch of transactions to be added to the remote transaction pool.
func (p *peer) SendTxs(reqID, cost uint64, txs types.Transactions) error { func (p *peer) SendTxs(reqID, cost uint64, txs rlp.RawValue) error {
p.Log().Debug("Fetching batch of transactions", "count", len(txs)) p.Log().Debug("Fetching batch of transactions", "size", len(txs))
switch p.version { switch p.version {
case lpv1: case lpv1:
return p2p.Send(p.rw, SendTxMsg, txs) // old message format does not include reqID return p2p.Send(p.rw, SendTxMsg, txs) // old message format does not include reqID
@ -485,6 +524,20 @@ func (p *peer) Handshake(td *big.Int, head common.Hash, headNum uint64, genesis
p.fcServerParams = params p.fcServerParams = params
p.fcServer = flowcontrol.NewServerNode(params) p.fcServer = flowcontrol.NewServerNode(params)
p.fcCosts = MRC.decode() p.fcCosts = MRC.decode()
var checkList []uint64
switch p.version {
case lpv1:
checkList = reqListV1
case lpv2:
checkList = reqListV2
default:
panic(nil)
}
for _, msgCode := range checkList {
if p.fcCosts[msgCode] == nil {
return errResp(ErrUselessPeer, "peer does not support message %d", msgCode)
}
}
} }
p.headInfo = &announceData{Td: rTd, Hash: rHash, Number: rNum} p.headInfo = &announceData{Td: rTd, Hash: rHash, Number: rNum}

@ -21,6 +21,7 @@ import (
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/rlp"
) )
type ltrInfo struct { type ltrInfo struct {
@ -113,21 +114,22 @@ func (self *LesTxRelay) send(txs types.Transactions, count int) {
for p, list := range sendTo { for p, list := range sendTo {
pp := p pp := p
ll := list ll := list
enc, _ := rlp.EncodeToBytes(ll)
reqID := genReqID() reqID := genReqID()
rq := &distReq{ rq := &distReq{
getCost: func(dp distPeer) uint64 { getCost: func(dp distPeer) uint64 {
peer := dp.(*peer) peer := dp.(*peer)
return peer.GetRequestCost(SendTxMsg, len(ll)) return peer.GetTxRelayCost(len(ll), len(enc))
}, },
canSend: func(dp distPeer) bool { canSend: func(dp distPeer) bool {
return dp.(*peer) == pp return dp.(*peer) == pp
}, },
request: func(dp distPeer) func() { request: func(dp distPeer) func() {
peer := dp.(*peer) peer := dp.(*peer)
cost := peer.GetRequestCost(SendTxMsg, len(ll)) cost := peer.GetTxRelayCost(len(ll), len(enc))
peer.fcServer.QueueRequest(reqID, cost) peer.fcServer.QueueRequest(reqID, cost)
return func() { peer.SendTxs(reqID, cost, ll) } return func() { peer.SendTxs(reqID, cost, enc) }
}, },
} }
self.reqDist.queue(rq) self.reqDist.queue(rq)

@ -100,7 +100,7 @@ func NewLightChain(odr OdrBackend, config *params.ChainConfig, engine consensus.
if bc.genesisBlock == nil { if bc.genesisBlock == nil {
return nil, core.ErrNoGenesis return nil, core.ErrNoGenesis
} }
if cp, ok := trustedCheckpoints[bc.genesisBlock.Hash()]; ok { if cp, ok := params.TrustedCheckpoints[bc.genesisBlock.Hash()]; ok {
bc.addTrustedCheckpoint(cp) bc.addTrustedCheckpoint(cp)
} }
if err := bc.loadLastState(); err != nil { if err := bc.loadLastState(); err != nil {
@ -157,7 +157,7 @@ func (self *LightChain) loadLastState() error {
// Issue a status log and return // Issue a status log and return
header := self.hc.CurrentHeader() header := self.hc.CurrentHeader()
headerTd := self.GetTd(header.Hash(), header.Number.Uint64()) headerTd := self.GetTd(header.Hash(), header.Number.Uint64())
log.Info("Loaded most recent local header", "number", header.Number, "hash", header.Hash(), "td", headerTd, "age", common.PrettyAge(time.Unix(header.Time.Int64(), 0))) log.Info("Loaded most recent local header", "number", header.Number, "hash", header.Hash(), "td", headerTd, "age", common.PrettyAge(time.Unix(int64(header.Time), 0)))
return nil return nil
} }
@ -488,7 +488,7 @@ func (self *LightChain) SyncCht(ctx context.Context) bool {
// Ensure the chain didn't move past the latest block while retrieving it // Ensure the chain didn't move past the latest block while retrieving it
if self.hc.CurrentHeader().Number.Uint64() < header.Number.Uint64() { if self.hc.CurrentHeader().Number.Uint64() < header.Number.Uint64() {
log.Info("Updated latest header based on CHT", "number", header.Number, "hash", header.Hash(), "age", common.PrettyAge(time.Unix(header.Time.Int64(), 0))) log.Info("Updated latest header based on CHT", "number", header.Number, "hash", header.Hash(), "age", common.PrettyAge(time.Unix(int64(header.Time), 0)))
self.hc.SetCurrentHeader(header) self.hc.SetCurrentHeader(header)
} }
return true return true

@ -104,13 +104,6 @@ var (
} }
) )
// trustedCheckpoints associates each known checkpoint with the genesis hash of the chain it belongs to
var trustedCheckpoints = map[common.Hash]*params.TrustedCheckpoint{
params.MainnetGenesisHash: params.MainnetTrustedCheckpoint,
params.TestnetGenesisHash: params.TestnetTrustedCheckpoint,
params.RinkebyGenesisHash: params.RinkebyTrustedCheckpoint,
}
var ( var (
ErrNoTrustedCht = errors.New("no trusted canonical hash trie") ErrNoTrustedCht = errors.New("no trusted canonical hash trie")
ErrNoTrustedBloomTrie = errors.New("no trusted bloom trie") ErrNoTrustedBloomTrie = errors.New("no trusted bloom trie")

@ -314,6 +314,7 @@ func (r *PrefixedRegistry) UnregisterAll() {
var ( var (
DefaultRegistry = NewRegistry() DefaultRegistry = NewRegistry()
EphemeralRegistry = NewRegistry() EphemeralRegistry = NewRegistry()
AccountingRegistry = NewRegistry() // registry used in swarm
) )
// Call the given function for each registered metric. // Call the given function for each registered metric.

@ -823,8 +823,8 @@ func (w *worker) commitNewWork(interrupt *int32, noempty bool, timestamp int64)
tstart := time.Now() tstart := time.Now()
parent := w.chain.CurrentBlock() parent := w.chain.CurrentBlock()
if parent.Time().Cmp(new(big.Int).SetInt64(timestamp)) >= 0 { if parent.Time() >= uint64(timestamp) {
timestamp = parent.Time().Int64() + 1 timestamp = int64(parent.Time() + 1)
} }
// this will ensure we're not going off too far in the future // this will ensure we're not going off too far in the future
if now := time.Now().Unix(); timestamp > now+1 { if now := time.Now().Unix(); timestamp > now+1 {
@ -839,7 +839,7 @@ func (w *worker) commitNewWork(interrupt *int32, noempty bool, timestamp int64)
Number: num.Add(num, common.Big1), Number: num.Add(num, common.Big1),
GasLimit: core.CalcGasLimit(parent, w.gasFloor, w.gasCeil), GasLimit: core.CalcGasLimit(parent, w.gasFloor, w.gasCeil),
Extra: w.extra, Extra: w.extra,
Time: big.NewInt(timestamp), Time: uint64(timestamp),
} }
// Only set the coinbase if our consensus engine is running (avoid spurious block rewards) // Only set the coinbase if our consensus engine is running (avoid spurious block rewards)
if w.isRunning() { if w.isRunning() {

@@ -109,7 +109,7 @@ func (h *Header) GetDifficulty() *BigInt { return &BigInt{h.header.Difficulty} }
 func (h *Header) GetNumber() int64    { return h.header.Number.Int64() }
 func (h *Header) GetGasLimit() int64  { return int64(h.header.GasLimit) }
 func (h *Header) GetGasUsed() int64   { return int64(h.header.GasUsed) }
-func (h *Header) GetTime() int64      { return h.header.Time.Int64() }
+func (h *Header) GetTime() int64      { return int64(h.header.Time) }
 func (h *Header) GetExtra() []byte    { return h.header.Extra }
 func (h *Header) GetMixDigest() *Hash { return &Hash{h.header.MixDigest} }
 func (h *Header) GetNonce() *Nonce    { return &Nonce{h.header.Nonce} }
@@ -180,7 +180,7 @@ func (b *Block) GetDifficulty() *BigInt { return &BigInt{b.block.Difficulty()} }
 func (b *Block) GetNumber() int64    { return b.block.Number().Int64() }
 func (b *Block) GetGasLimit() int64  { return int64(b.block.GasLimit()) }
 func (b *Block) GetGasUsed() int64   { return int64(b.block.GasUsed()) }
-func (b *Block) GetTime() int64      { return b.block.Time().Int64() }
+func (b *Block) GetTime() int64      { return int64(b.block.Time()) }
 func (b *Block) GetExtra() []byte    { return b.block.Extra() }
 func (b *Block) GetMixDigest() *Hash { return &Hash{b.block.MixDigest()} }
 func (b *Block) GetNonce() int64     { return int64(b.block.Nonce()) }


@@ -34,6 +34,7 @@ import (
 type node struct {
 	enode.Node
 	addedAt        time.Time // time when the node was added to the table
+	livenessChecks uint      // how often liveness was checked
 }
 
 type encPubkey [64]byte


@@ -75,6 +75,8 @@ type Table struct {
 	net        transport
 	refreshReq chan chan struct{}
 	initDone   chan struct{}
+	closeOnce  sync.Once
 	closeReq   chan struct{}
 	closed     chan struct{}
@@ -180,16 +182,14 @@ func (tab *Table) ReadRandomNodes(buf []*enode.Node) (n int) {
 // Close terminates the network listener and flushes the node database.
 func (tab *Table) Close() {
+	tab.closeOnce.Do(func() {
 	if tab.net != nil {
 		tab.net.close()
 	}
-	// Wait for loop to end.
-	select {
-	case <-tab.closed:
-		// already closed.
-	case tab.closeReq <- struct{}{}:
-		<-tab.closed // wait for refreshLoop to end.
-	}
+		close(tab.closeReq)
+		<-tab.closed // wait for refreshLoop to end.
+	})
 }
 // setFallbackNodes sets the initial points of contact. These nodes
@@ -290,37 +290,45 @@ func (tab *Table) lookup(targetKey encPubkey, refreshIfEmpty bool) []*node {
 			// we have asked all closest nodes, stop the search
 			break
 		}
-		// wait for the next reply
-		for _, n := range <-reply {
-			if n != nil && !seen[n.ID()] {
-				seen[n.ID()] = true
-				result.push(n, bucketSize)
+		select {
+		case nodes := <-reply:
+			for _, n := range nodes {
+				if n != nil && !seen[n.ID()] {
+					seen[n.ID()] = true
+					result.push(n, bucketSize)
+				}
 			}
+		case <-tab.closeReq:
+			return nil // shutdown, no need to continue.
 		}
 		pendingQueries--
 	}
 	return result.entries
 }
 
 func (tab *Table) findnode(n *node, targetKey encPubkey, reply chan<- []*node) {
-	fails := tab.db.FindFails(n.ID())
+	fails := tab.db.FindFails(n.ID(), n.IP())
 	r, err := tab.net.findnode(n.ID(), n.addr(), targetKey)
-	if err != nil || len(r) == 0 {
+	if err == errClosed {
+		// Avoid recording failures on shutdown.
+		reply <- nil
+		return
+	} else if len(r) == 0 {
 		fails++
-		tab.db.UpdateFindFails(n.ID(), fails)
+		tab.db.UpdateFindFails(n.ID(), n.IP(), fails)
 		log.Trace("Findnode failed", "id", n.ID(), "failcount", fails, "err", err)
 		if fails >= maxFindnodeFailures {
 			log.Trace("Too many findnode failures, dropping", "id", n.ID(), "failcount", fails)
 			tab.delete(n)
 		}
 	} else if fails > 0 {
-		tab.db.UpdateFindFails(n.ID(), fails-1)
+		tab.db.UpdateFindFails(n.ID(), n.IP(), fails-1)
 	}
 
 	// Grab as many nodes as possible. Some of them might not be alive anymore, but we'll
 	// just remove those again during revalidation.
 	for _, n := range r {
-		tab.add(n)
+		tab.addSeenNode(n)
 	}
 	reply <- r
 }
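The lookup rewrite above swaps a bare channel receive for a select that also watches `closeReq`, so a shutdown can no longer strand the loop waiting for replies that will never arrive. That pattern in isolation, with string IDs standing in for the real node type:

```go
package main

import "fmt"

// collect drains reply batches until the pending queries are done, but
// bails out immediately if closeReq is closed -- the same shape as the
// lookup loop in the diff above.
func collect(reply <-chan []string, closeReq <-chan struct{}, pendingQueries int) []string {
	seen := make(map[string]bool)
	var result []string
	for pendingQueries > 0 {
		select {
		case nodes := <-reply:
			for _, n := range nodes {
				if !seen[n] {
					seen[n] = true
					result = append(result, n)
				}
			}
		case <-closeReq:
			return nil // shutdown, no need to continue
		}
		pendingQueries--
	}
	return result
}

func main() {
	reply := make(chan []string, 2)
	reply <- []string{"a", "b"}
	reply <- []string{"b", "c"}
	fmt.Println(collect(reply, make(chan struct{}), 2)) // deduplicated: [a b c]
}
```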
@@ -329,7 +337,7 @@ func (tab *Table) refresh() <-chan struct{} {
 	done := make(chan struct{})
 	select {
 	case tab.refreshReq <- done:
-	case <-tab.closed:
+	case <-tab.closeReq:
 		close(done)
 	}
 	return done
@@ -433,9 +441,9 @@ func (tab *Table) loadSeedNodes() {
 	seeds = append(seeds, tab.nursery...)
 	for i := range seeds {
 		seed := seeds[i]
-		age := log.Lazy{Fn: func() interface{} { return time.Since(tab.db.LastPongReceived(seed.ID())) }}
+		age := log.Lazy{Fn: func() interface{} { return time.Since(tab.db.LastPongReceived(seed.ID(), seed.IP())) }}
 		log.Trace("Found seed node in database", "id", seed.ID(), "addr", seed.addr(), "age", age)
-		tab.add(seed)
+		tab.addSeenNode(seed)
 	}
 }
@@ -458,16 +466,17 @@ func (tab *Table) doRevalidate(done chan<- struct{}) {
 	b := tab.buckets[bi]
 	if err == nil {
 		// The node responded, move it to the front.
-		log.Debug("Revalidated node", "b", bi, "id", last.ID())
-		b.bump(last)
+		last.livenessChecks++
+		log.Debug("Revalidated node", "b", bi, "id", last.ID(), "checks", last.livenessChecks)
+		tab.bumpInBucket(b, last)
 		return
 	}
 	// No reply received, pick a replacement or delete the node if there aren't
 	// any replacements.
 	if r := tab.replace(b, last); r != nil {
-		log.Debug("Replaced dead node", "b", bi, "id", last.ID(), "ip", last.IP(), "r", r.ID(), "rip", r.IP())
+		log.Debug("Replaced dead node", "b", bi, "id", last.ID(), "ip", last.IP(), "checks", last.livenessChecks, "r", r.ID(), "rip", r.IP())
 	} else {
-		log.Debug("Removed dead node", "b", bi, "id", last.ID(), "ip", last.IP())
+		log.Debug("Removed dead node", "b", bi, "id", last.ID(), "ip", last.IP(), "checks", last.livenessChecks)
 	}
 }
@@ -502,7 +511,7 @@ func (tab *Table) copyLiveNodes() {
 	now := time.Now()
 	for _, b := range &tab.buckets {
 		for _, n := range b.entries {
-			if now.Sub(n.addedAt) >= seedMinTableTime {
+			if n.livenessChecks > 0 && now.Sub(n.addedAt) >= seedMinTableTime {
 				tab.db.UpdateNode(unwrapNode(n))
 			}
 		}
@@ -518,9 +527,11 @@ func (tab *Table) closest(target enode.ID, nresults int) *nodesByDistance {
 	close := &nodesByDistance{target: target}
 	for _, b := range &tab.buckets {
 		for _, n := range b.entries {
-			close.push(n, nresults)
+			if n.livenessChecks > 0 {
+				close.push(n, nresults)
+			}
 		}
 	}
 	return close
 }
@@ -540,12 +551,12 @@ func (tab *Table) bucket(id enode.ID) *bucket {
 	return tab.buckets[d-bucketMinDistance-1]
 }
 
-// add attempts to add the given node to its corresponding bucket. If the bucket has space
-// available, adding the node succeeds immediately. Otherwise, the node is added if the
-// least recently active node in the bucket does not respond to a ping packet.
+// addSeenNode adds a node which may or may not be live to the end of a bucket. If the
+// bucket has space available, adding the node succeeds immediately. Otherwise, the node is
+// added to the replacements list.
 //
 // The caller must not hold tab.mutex.
-func (tab *Table) add(n *node) {
+func (tab *Table) addSeenNode(n *node) {
 	if n.ID() == tab.self().ID() {
 		return
 	}
@@ -553,39 +564,67 @@ func (tab *Table) addSeenNode(n *node) {
 	tab.mutex.Lock()
 	defer tab.mutex.Unlock()
 	b := tab.bucket(n.ID())
-	if !tab.bumpOrAdd(b, n) {
-		// Node is not in table. Add it to the replacement list.
+	if contains(b.entries, n.ID()) {
+		// Already in bucket, don't add.
+		return
+	}
+	if len(b.entries) >= bucketSize {
+		// Bucket full, maybe add as replacement.
 		tab.addReplacement(b, n)
+		return
+	}
+	if !tab.addIP(b, n.IP()) {
+		// Can't add: IP limit reached.
+		return
+	}
+	// Add to end of bucket:
+	b.entries = append(b.entries, n)
+	b.replacements = deleteNode(b.replacements, n)
+	n.addedAt = time.Now()
+	if tab.nodeAddedHook != nil {
+		tab.nodeAddedHook(n)
 	}
 }
 
-// addThroughPing adds the given node to the table. Compared to plain
-// 'add' there is an additional safety measure: if the table is still
-// initializing the node is not added. This prevents an attack where the
-// table could be filled by just sending ping repeatedly.
+// addVerifiedNode adds a node whose existence has been verified recently to the front of a
+// bucket. If the node is already in the bucket, it is moved to the front. If the bucket
+// has no space, the node is added to the replacements list.
+//
+// There is an additional safety measure: if the table is still initializing the node
+// is not added. This prevents an attack where the table could be filled by just sending
+// ping repeatedly.
 //
 // The caller must not hold tab.mutex.
-func (tab *Table) addThroughPing(n *node) {
+func (tab *Table) addVerifiedNode(n *node) {
 	if !tab.isInitDone() {
 		return
 	}
-	tab.add(n)
-}
+	if n.ID() == tab.self().ID() {
+		return
+	}
 
-// stuff adds nodes the table to the end of their corresponding bucket
-// if the bucket is not full. The caller must not hold tab.mutex.
-func (tab *Table) stuff(nodes []*node) {
 	tab.mutex.Lock()
 	defer tab.mutex.Unlock()
-	for _, n := range nodes {
-		if n.ID() == tab.self().ID() {
-			continue // don't add self
-		}
-		b := tab.bucket(n.ID())
-		if len(b.entries) < bucketSize {
-			tab.bumpOrAdd(b, n)
-		}
+	b := tab.bucket(n.ID())
+	if tab.bumpInBucket(b, n) {
+		// Already in bucket, moved to front.
+		return
+	}
+	if len(b.entries) >= bucketSize {
+		// Bucket full, maybe add as replacement.
+		tab.addReplacement(b, n)
+		return
+	}
+	if !tab.addIP(b, n.IP()) {
+		// Can't add: IP limit reached.
+		return
+	}
+	// Add to front of bucket.
+	b.entries, _ = pushNode(b.entries, n, bucketSize)
+	b.replacements = deleteNode(b.replacements, n)
+	n.addedAt = time.Now()
+	if tab.nodeAddedHook != nil {
+		tab.nodeAddedHook(n)
 	}
 }
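The split above distinguishes merely-seen nodes (appended to the back of a bucket, never displacing anyone) from liveness-verified nodes (moved or inserted at the front). A simplified sketch of the two insertion paths, ignoring IP limits and replacement lists and using string IDs for a hypothetical bucket:

```go
package main

import "fmt"

const bucketSize = 3 // the real table uses 16 entries per bucket

type bucket struct{ entries []string }

func (b *bucket) contains(id string) bool {
	for _, e := range b.entries {
		if e == id {
			return true
		}
	}
	return false
}

// addSeen appends an unverified node to the back, never displacing anyone.
func (b *bucket) addSeen(id string) {
	if b.contains(id) || len(b.entries) >= bucketSize {
		return // already present, or full: would go to replacements
	}
	b.entries = append(b.entries, id)
}

// addVerified moves a live node to the front, or inserts it at the front
// if there is room.
func (b *bucket) addVerified(id string) {
	for i, e := range b.entries {
		if e == id {
			copy(b.entries[1:], b.entries[:i])
			b.entries[0] = id
			return
		}
	}
	if len(b.entries) >= bucketSize {
		return // full: would go to replacements
	}
	b.entries = append([]string{id}, b.entries...)
}

func main() {
	b := &bucket{}
	b.addSeen("n1")
	b.addSeen("n2")
	b.addVerified("n2")
	fmt.Println(b.entries) // [n2 n1]
}
```

Keeping unverified nodes at the back means `closest` (which now filters on `livenessChecks > 0`) and the seed-persistence path never serve nodes that were merely advertised by a peer.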
@@ -657,12 +696,21 @@ func (tab *Table) replace(b *bucket, last *node) *node {
 	return r
 }
 
-// bump moves the given node to the front of the bucket entry list
+// bumpInBucket moves the given node to the front of the bucket entry list
 // if it is contained in that list.
-func (b *bucket) bump(n *node) bool {
+func (tab *Table) bumpInBucket(b *bucket, n *node) bool {
 	for i := range b.entries {
 		if b.entries[i].ID() == n.ID() {
-			// move it to the front
+			if !n.IP().Equal(b.entries[i].IP()) {
+				// Endpoint has changed, ensure that the new IP fits into table limits.
+				tab.removeIP(b, b.entries[i].IP())
+				if !tab.addIP(b, n.IP()) {
+					// It doesn't, put the previous one back.
+					tab.addIP(b, b.entries[i].IP())
+					return false
+				}
+			}
+			// Move it to the front.
 			copy(b.entries[1:], b.entries[:i])
 			b.entries[0] = n
 			return true
@@ -671,29 +719,20 @@ func (tab *Table) bumpInBucket(b *bucket, n *node) bool {
 	return false
 }
 
-// bumpOrAdd moves n to the front of the bucket entry list or adds it if the list isn't
-// full. The return value is true if n is in the bucket.
-func (tab *Table) bumpOrAdd(b *bucket, n *node) bool {
-	if b.bump(n) {
-		return true
-	}
-	if len(b.entries) >= bucketSize || !tab.addIP(b, n.IP()) {
-		return false
-	}
-	b.entries, _ = pushNode(b.entries, n, bucketSize)
-	b.replacements = deleteNode(b.replacements, n)
-	n.addedAt = time.Now()
-	if tab.nodeAddedHook != nil {
-		tab.nodeAddedHook(n)
-	}
-	return true
-}
-
 func (tab *Table) deleteInBucket(b *bucket, n *node) {
 	b.entries = deleteNode(b.entries, n)
 	tab.removeIP(b, n.IP())
 }
 
+func contains(ns []*node, id enode.ID) bool {
+	for _, n := range ns {
+		if n.ID() == id {
+			return true
+		}
+	}
+	return false
+}
+
 // pushNode adds n to the front of list, keeping at most max items.
 func pushNode(list []*node, n *node, max int) ([]*node, *node) {
 	if len(list) < max {
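`bumpInBucket` above now re-checks the IP limit when a known node shows up with a changed endpoint: the old IP is released, the new one is tried, and the old one is restored if the new address would blow the limit. The rollback idea in isolation, with a toy counting set standing in for the real `netutil.DistinctNetSet` (which limits addresses per subnet, not just a flat total):

```go
package main

import "fmt"

// ipSet is a toy stand-in for netutil.DistinctNetSet: it admits at most
// `limit` addresses in total.
type ipSet struct {
	limit int
	ips   map[string]bool
}

func (s *ipSet) add(ip string) bool {
	if s.ips[ip] {
		return true
	}
	if len(s.ips) >= s.limit {
		return false
	}
	s.ips[ip] = true
	return true
}

func (s *ipSet) remove(ip string) { delete(s.ips, ip) }

// swapIP replaces oldIP with newIP, putting oldIP back if the new
// address would exceed the limit -- the same shape as bumpInBucket.
func swapIP(s *ipSet, oldIP, newIP string) bool {
	s.remove(oldIP)
	if !s.add(newIP) {
		s.add(oldIP) // it doesn't fit, put the previous one back
		return false
	}
	return true
}

func main() {
	s := &ipSet{limit: 1, ips: map[string]bool{"10.0.0.1": true}}
	fmt.Println(swapIP(s, "10.0.0.1", "10.0.0.2")) // true: swap succeeds
}
```

The remove-try-restore sequence keeps the invariant that `checkIPLimitInvariant` (added in the tests below) verifies: the limit set holds exactly the IPs of the nodes currently in the table.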


@@ -30,6 +30,7 @@ import (
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/p2p/enode"
 	"github.com/ethereum/go-ethereum/p2p/enr"
+	"github.com/ethereum/go-ethereum/p2p/netutil"
 )
 
 func TestTable_pingReplace(t *testing.T) {
@@ -50,8 +51,8 @@ func TestTable_pingReplace(t *testing.T) {
 func testPingReplace(t *testing.T, newNodeIsResponding, lastInBucketIsResponding bool) {
 	transport := newPingRecorder()
 	tab, db := newTestTable(transport)
-	defer tab.Close()
 	defer db.Close()
+	defer tab.Close()
 
 	<-tab.initDone
@@ -64,7 +65,7 @@ func testPingReplace(t *testing.T, newNodeIsResponding, lastInBucketIsResponding
 	// its bucket if it is unresponsive. Revalidate again to ensure that
 	transport.dead[last.ID()] = !lastInBucketIsResponding
 	transport.dead[pingSender.ID()] = !newNodeIsResponding
-	tab.add(pingSender)
+	tab.addSeenNode(pingSender)
 	tab.doRevalidate(make(chan struct{}, 1))
 	tab.doRevalidate(make(chan struct{}, 1))
@@ -114,10 +115,14 @@ func TestBucket_bumpNoDuplicates(t *testing.T) {
 	}
 	prop := func(nodes []*node, bumps []int) (ok bool) {
+		tab, db := newTestTable(newPingRecorder())
+		defer db.Close()
+		defer tab.Close()
+
 		b := &bucket{entries: make([]*node, len(nodes))}
 		copy(b.entries, nodes)
 		for i, pos := range bumps {
-			b.bump(b.entries[pos])
+			tab.bumpInBucket(b, b.entries[pos])
 			if hasDuplicates(b.entries) {
 				t.Logf("bucket has duplicates after %d/%d bumps:", i+1, len(bumps))
 				for _, n := range b.entries {
@@ -126,6 +131,7 @@ func TestBucket_bumpNoDuplicates(t *testing.T) {
 				return false
 			}
 		}
+		checkIPLimitInvariant(t, tab)
 		return true
 	}
 	if err := quick.Check(prop, cfg); err != nil {
@@ -137,33 +143,51 @@ func TestBucket_bumpNoDuplicates(t *testing.T) {
 func TestTable_IPLimit(t *testing.T) {
 	transport := newPingRecorder()
 	tab, db := newTestTable(transport)
-	defer tab.Close()
 	defer db.Close()
+	defer tab.Close()
 
 	for i := 0; i < tableIPLimit+1; i++ {
 		n := nodeAtDistance(tab.self().ID(), i, net.IP{172, 0, 1, byte(i)})
-		tab.add(n)
+		tab.addSeenNode(n)
 	}
 	if tab.len() > tableIPLimit {
 		t.Errorf("too many nodes in table")
 	}
+	checkIPLimitInvariant(t, tab)
 }
 
 // This checks that the per-bucket IP limit is applied correctly.
 func TestTable_BucketIPLimit(t *testing.T) {
 	transport := newPingRecorder()
 	tab, db := newTestTable(transport)
-	defer tab.Close()
 	defer db.Close()
+	defer tab.Close()
 
 	d := 3
 	for i := 0; i < bucketIPLimit+1; i++ {
 		n := nodeAtDistance(tab.self().ID(), d, net.IP{172, 0, 1, byte(i)})
-		tab.add(n)
+		tab.addSeenNode(n)
 	}
 	if tab.len() > bucketIPLimit {
 		t.Errorf("too many nodes in table")
 	}
+	checkIPLimitInvariant(t, tab)
+}
+
+// checkIPLimitInvariant checks that ip limit sets contain an entry for every
+// node in the table and no extra entries.
+func checkIPLimitInvariant(t *testing.T, tab *Table) {
+	t.Helper()
+
+	tabset := netutil.DistinctNetSet{Subnet: tableSubnet, Limit: tableIPLimit}
+	for _, b := range tab.buckets {
+		for _, n := range b.entries {
+			tabset.Add(n.IP())
+		}
+	}
+	if tabset.String() != tab.ips.String() {
+		t.Errorf("table IP set is incorrect:\nhave: %v\nwant: %v", tab.ips, tabset)
+	}
 }
 func TestTable_closest(t *testing.T) {
@@ -173,9 +197,9 @@ func TestTable_closest(t *testing.T) {
 		// for any node table, Target and N
 		transport := newPingRecorder()
 		tab, db := newTestTable(transport)
-		defer tab.Close()
 		defer db.Close()
-		tab.stuff(test.All)
+		defer tab.Close()
+		fillTable(tab, test.All)
 
 		// check that closest(Target, N) returns nodes
 		result := tab.closest(test.Target, test.N).entries
@@ -234,13 +258,13 @@ func TestTable_ReadRandomNodesGetAll(t *testing.T) {
 	test := func(buf []*enode.Node) bool {
 		transport := newPingRecorder()
 		tab, db := newTestTable(transport)
-		defer tab.Close()
 		defer db.Close()
+		defer tab.Close()
 
 		<-tab.initDone
 		for i := 0; i < len(buf); i++ {
 			ld := cfg.Rand.Intn(len(tab.buckets))
-			tab.stuff([]*node{nodeAtDistance(tab.self().ID(), ld, intIP(ld))})
+			fillTable(tab, []*node{nodeAtDistance(tab.self().ID(), ld, intIP(ld))})
 		}
 		gotN := tab.ReadRandomNodes(buf)
 		if gotN != tab.len() {
@@ -272,16 +296,82 @@ func (*closeTest) Generate(rand *rand.Rand, size int) reflect.Value {
 		N:      rand.Intn(bucketSize),
 	}
 	for _, id := range gen([]enode.ID{}, rand).([]enode.ID) {
-		n := enode.SignNull(new(enr.Record), id)
-		t.All = append(t.All, wrapNode(n))
+		r := new(enr.Record)
+		r.Set(enr.IP(genIP(rand)))
+		n := wrapNode(enode.SignNull(r, id))
+		n.livenessChecks = 1
+		t.All = append(t.All, n)
 	}
 	return reflect.ValueOf(t)
 }
+func TestTable_addVerifiedNode(t *testing.T) {
+	tab, db := newTestTable(newPingRecorder())
+	<-tab.initDone
+	defer db.Close()
+	defer tab.Close()
+
+	// Insert two nodes.
+	n1 := nodeAtDistance(tab.self().ID(), 256, net.IP{88, 77, 66, 1})
+	n2 := nodeAtDistance(tab.self().ID(), 256, net.IP{88, 77, 66, 2})
+	tab.addSeenNode(n1)
+	tab.addSeenNode(n2)
+
+	// Verify bucket content:
+	bcontent := []*node{n1, n2}
+	if !reflect.DeepEqual(tab.bucket(n1.ID()).entries, bcontent) {
+		t.Fatalf("wrong bucket content: %v", tab.bucket(n1.ID()).entries)
+	}
+
+	// Add a changed version of n2.
+	newrec := n2.Record()
+	newrec.Set(enr.IP{99, 99, 99, 99})
+	newn2 := wrapNode(enode.SignNull(newrec, n2.ID()))
+	tab.addVerifiedNode(newn2)
+
+	// Check that bucket is updated correctly.
+	newBcontent := []*node{newn2, n1}
+	if !reflect.DeepEqual(tab.bucket(n1.ID()).entries, newBcontent) {
+		t.Fatalf("wrong bucket content after update: %v", tab.bucket(n1.ID()).entries)
+	}
+	checkIPLimitInvariant(t, tab)
+}
+
+func TestTable_addSeenNode(t *testing.T) {
+	tab, db := newTestTable(newPingRecorder())
+	<-tab.initDone
+	defer db.Close()
+	defer tab.Close()
+
+	// Insert two nodes.
+	n1 := nodeAtDistance(tab.self().ID(), 256, net.IP{88, 77, 66, 1})
+	n2 := nodeAtDistance(tab.self().ID(), 256, net.IP{88, 77, 66, 2})
+	tab.addSeenNode(n1)
+	tab.addSeenNode(n2)
+
+	// Verify bucket content:
+	bcontent := []*node{n1, n2}
+	if !reflect.DeepEqual(tab.bucket(n1.ID()).entries, bcontent) {
+		t.Fatalf("wrong bucket content: %v", tab.bucket(n1.ID()).entries)
+	}
+
+	// Add a changed version of n2.
+	newrec := n2.Record()
+	newrec.Set(enr.IP{99, 99, 99, 99})
+	newn2 := wrapNode(enode.SignNull(newrec, n2.ID()))
+	tab.addSeenNode(newn2)
+
+	// Check that bucket content is unchanged.
+	if !reflect.DeepEqual(tab.bucket(n1.ID()).entries, bcontent) {
+		t.Fatalf("wrong bucket content after update: %v", tab.bucket(n1.ID()).entries)
+	}
+	checkIPLimitInvariant(t, tab)
+}
 func TestTable_Lookup(t *testing.T) {
 	tab, db := newTestTable(lookupTestnet)
-	defer tab.Close()
 	defer db.Close()
+	defer tab.Close()
 
 	// lookup on empty table returns no nodes
 	if results := tab.lookup(lookupTestnet.target, false); len(results) > 0 {
@@ -289,8 +379,9 @@ func TestTable_Lookup(t *testing.T) {
 	}
 	// seed table with initial node (otherwise lookup will terminate immediately)
 	seedKey, _ := decodePubkey(lookupTestnet.dists[256][0])
-	seed := wrapNode(enode.NewV4(seedKey, net.IP{}, 0, 256))
-	tab.stuff([]*node{seed})
+	seed := wrapNode(enode.NewV4(seedKey, net.IP{127, 0, 0, 1}, 0, 256))
+	seed.livenessChecks = 1
+	fillTable(tab, []*node{seed})
 
 	results := tab.lookup(lookupTestnet.target, true)
 	t.Logf("results:")
@@ -531,7 +622,6 @@ func (tn *preminedTestnet) findnode(toid enode.ID, toaddr *net.UDPAddr, target e
 }
 
 func (*preminedTestnet) close() {}
-func (*preminedTestnet) waitping(from enode.ID) error { return nil }
 func (*preminedTestnet) ping(toid enode.ID, toaddr *net.UDPAddr) error { return nil }
 
 // mine generates a testnet struct literal with nodes at
@@ -578,6 +668,12 @@ func gen(typ interface{}, rand *rand.Rand) interface{} {
 	return v.Interface()
 }
 
+func genIP(rand *rand.Rand) net.IP {
+	ip := make(net.IP, 4)
+	rand.Read(ip)
+	return ip
+}
+
 func quickcfg() *quick.Config {
 	return &quick.Config{
 		MaxCount: 5000,


@@ -83,6 +83,14 @@ func fillBucket(tab *Table, n *node) (last *node) {
 	return b.entries[bucketSize-1]
 }
 
+// fillTable adds nodes the table to the end of their corresponding bucket
+// if the bucket is not full. The caller must not hold tab.mutex.
+func fillTable(tab *Table, nodes []*node) {
+	for _, n := range nodes {
+		tab.addSeenNode(n)
+	}
+}
+
 type pingRecorder struct {
 	mu           sync.Mutex
 	dead, pinged map[enode.ID]bool
@@ -109,10 +117,6 @@ func (t *pingRecorder) findnode(toid enode.ID, toaddr *net.UDPAddr, target encPu
 	return nil, nil
 }
 
-func (t *pingRecorder) waitping(from enode.ID) error {
-	return nil // remote always pings
-}
-
 func (t *pingRecorder) ping(toid enode.ID, toaddr *net.UDPAddr) error {
 	t.mu.Lock()
 	defer t.mu.Unlock()
@@ -141,15 +145,6 @@ func hasDuplicates(slice []*node) bool {
 	return false
 }
 
-func contains(ns []*node, id enode.ID) bool {
-	for _, n := range ns {
-		if n.ID() == id {
-			return true
-		}
-	}
-	return false
-}
-
 func sortedByDistanceTo(distbase enode.ID, slice []*node) bool {
 	var last enode.ID
 	for i, e := range slice {


@@ -67,6 +67,8 @@ const (
 // RPC request structures
 type (
 	ping struct {
+		senderKey *ecdsa.PublicKey // filled in by preverify
+
 		Version    uint
 		From, To   rpcEndpoint
 		Expiration uint64
@@ -155,8 +157,13 @@ func nodeToRPC(n *node) rpcNode {
 	return rpcNode{ID: ekey, IP: n.IP(), UDP: uint16(n.UDP()), TCP: uint16(n.TCP())}
 }
 
+// packet is implemented by all protocol messages.
 type packet interface {
-	handle(t *udp, from *net.UDPAddr, fromKey encPubkey, mac []byte) error
+	// preverify checks whether the packet is valid and should be handled at all.
+	preverify(t *udp, from *net.UDPAddr, fromID enode.ID, fromKey encPubkey) error
+	// handle handles the packet.
+	handle(t *udp, from *net.UDPAddr, fromID enode.ID, mac []byte)
+	// name returns the name of the packet for logging purposes.
 	name() string
 }
@@ -177,43 +184,48 @@ type udp struct {
 	tab *Table
 	wg  sync.WaitGroup
 
-	addpending chan *pending
+	addReplyMatcher chan *replyMatcher
 	gotreply        chan reply
 	closing         chan struct{}
 }
 
 // pending represents a pending reply.
 //
-// some implementations of the protocol wish to send more than one
-// reply packet to findnode. in general, any neighbors packet cannot
+// Some implementations of the protocol wish to send more than one
+// reply packet to findnode. In general, any neighbors packet cannot
 // be matched up with a specific findnode packet.
 //
-// our implementation handles this by storing a callback function for
-// each pending reply. incoming packets from a node are dispatched
-// to all the callback functions for that node.
-type pending struct {
+// Our implementation handles this by storing a callback function for
+// each pending reply. Incoming packets from a node are dispatched
+// to all callback functions for that node.
+type replyMatcher struct {
 	// these fields must match in the reply.
 	from  enode.ID
+	ip    net.IP
 	ptype byte
 
 	// time when the request must complete
 	deadline time.Time
 
-	// callback is called when a matching reply arrives. if it returns
-	// true, the callback is removed from the pending reply queue.
-	// if it returns false, the reply is considered incomplete and
-	// the callback will be invoked again for the next matching reply.
-	callback func(resp interface{}) (done bool)
+	// callback is called when a matching reply arrives. If it returns matched == true, the
+	// reply was acceptable. The second return value indicates whether the callback should
+	// be removed from the pending reply queue. If it returns false, the reply is considered
+	// incomplete and the callback will be invoked again for the next matching reply.
+	callback replyMatchFunc
 
 	// errc receives nil when the callback indicates completion or an
 	// error if no further reply is received within the timeout.
 	errc chan<- error
 }
 
+type replyMatchFunc func(interface{}) (matched bool, requestDone bool)
+
 type reply struct {
 	from  enode.ID
+	ip    net.IP
 	ptype byte
-	data  interface{}
+	data  packet
 
 	// loop indicates whether there was
 	// a matching request by sending on this channel.
 	matched chan<- bool
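The `replyMatcher` rewrite above splits the old boolean callback into `(matched, requestDone)`, so a neighbors packet can be accepted as a valid partial reply without completing the request. A minimal sketch of that dispatch logic, with hypothetical types in place of the real udp loop:

```go
package main

import "fmt"

// matchFunc mirrors replyMatchFunc: matched says the reply was acceptable,
// requestDone says the matcher can be retired from the queue.
type matchFunc func(data interface{}) (matched bool, requestDone bool)

type matcher struct {
	callback matchFunc
	done     bool
}

// dispatch feeds a reply to every live matcher; matchers whose callback
// reports requestDone are retired, the rest stay queued for more replies.
func dispatch(ms []*matcher, data interface{}) bool {
	anyMatched := false
	for _, m := range ms {
		if m.done {
			continue
		}
		matched, done := m.callback(data)
		anyMatched = anyMatched || matched
		m.done = done
	}
	return anyMatched
}

func main() {
	nreceived := 0
	// Accept every batch, but only finish after 4 nodes -- like the
	// findnode matcher waiting across several neighbors packets.
	m := &matcher{callback: func(data interface{}) (bool, bool) {
		nreceived += len(data.([]string))
		return true, nreceived >= 4
	}}
	fmt.Println(dispatch([]*matcher{m}, []string{"a", "b"}), m.done) // true false
	fmt.Println(dispatch([]*matcher{m}, []string{"c", "d"}), m.done) // true true
}
```

With the single-boolean version there was no way to say "this reply belongs to me but I need more"; the two-value form lets `handleReply` report acceptance to the packet handler while keeping the matcher alive.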
@@ -254,7 +266,7 @@ func newUDP(c conn, ln *enode.LocalNode, cfg Config) (*Table, *udp, error) {
 		db:              ln.Database(),
 		closing:         make(chan struct{}),
 		gotreply:        make(chan reply),
-		addpending:      make(chan *pending),
+		addReplyMatcher: make(chan *replyMatcher),
 	}
 	tab, err := newTable(udp, ln.Database(), cfg.Bootnodes)
 	if err != nil {
@@ -304,35 +316,37 @@ func (t *udp) sendPing(toid enode.ID, toaddr *net.UDPAddr, callback func()) <-ch
 		errc <- err
 		return errc
 	}
-	errc := t.pending(toid, pongPacket, func(p interface{}) bool {
-		ok := bytes.Equal(p.(*pong).ReplyTok, hash)
-		if ok && callback != nil {
+	// Add a matcher for the reply to the pending reply queue. Pongs are matched if they
+	// reference the ping we're about to send.
+	errc := t.pending(toid, toaddr.IP, pongPacket, func(p interface{}) (matched bool, requestDone bool) {
+		matched = bytes.Equal(p.(*pong).ReplyTok, hash)
+		if matched && callback != nil {
 			callback()
 		}
-		return ok
+		return matched, matched
 	})
+	// Send the packet.
 	t.localNode.UDPContact(toaddr)
-	t.write(toaddr, req.name(), packet)
+	t.write(toaddr, toid, req.name(), packet)
 	return errc
 }
 
-func (t *udp) waitping(from enode.ID) error {
-	return <-t.pending(from, pingPacket, func(interface{}) bool { return true })
-}
-
 // findnode sends a findnode request to the given node and waits until
 // the node has sent up to k neighbors.
 func (t *udp) findnode(toid enode.ID, toaddr *net.UDPAddr, target encPubkey) ([]*node, error) {
 	// If we haven't seen a ping from the destination node for a while, it won't remember
 	// our endpoint proof and reject findnode. Solicit a ping first.
-	if time.Since(t.db.LastPingReceived(toid)) > bondExpiration {
+	if time.Since(t.db.LastPingReceived(toid, toaddr.IP)) > bondExpiration {
 		t.ping(toid, toaddr)
-		t.waitping(toid)
+		// Wait for them to ping back and process our pong.
+		time.Sleep(respTimeout)
 	}
 
+	// Add a matcher for 'neighbours' replies to the pending reply queue. The matcher is
+	// active until enough nodes have been received.
 	nodes := make([]*node, 0, bucketSize)
 	nreceived := 0
-	errc := t.pending(toid, neighborsPacket, func(r interface{}) bool {
+	errc := t.pending(toid, toaddr.IP, neighborsPacket, func(r interface{}) (matched bool, requestDone bool) {
 		reply := r.(*neighbors)
 		for _, rn := range reply.Nodes {
 			nreceived++
@ -343,22 +357,22 @@ func (t *udp) findnode(toid enode.ID, toaddr *net.UDPAddr, target encPubkey) ([]
} }
nodes = append(nodes, n) nodes = append(nodes, n)
} }
return nreceived >= bucketSize return true, nreceived >= bucketSize
}) })
t.send(toaddr, findnodePacket, &findnode{ t.send(toaddr, toid, findnodePacket, &findnode{
Target: target, Target: target,
Expiration: uint64(time.Now().Add(expiration).Unix()), Expiration: uint64(time.Now().Add(expiration).Unix()),
}) })
return nodes, <-errc return nodes, <-errc
} }
-// pending adds a reply callback to the pending reply queue.
-// see the documentation of type pending for a detailed explanation.
-func (t *udp) pending(id enode.ID, ptype byte, callback func(interface{}) bool) <-chan error {
+// pending adds a reply matcher to the pending reply queue.
+// see the documentation of type replyMatcher for a detailed explanation.
+func (t *udp) pending(id enode.ID, ip net.IP, ptype byte, callback replyMatchFunc) <-chan error {
 	ch := make(chan error, 1)
-	p := &pending{from: id, ptype: ptype, callback: callback, errc: ch}
+	p := &replyMatcher{from: id, ip: ip, ptype: ptype, callback: callback, errc: ch}
 	select {
-	case t.addpending <- p:
+	case t.addReplyMatcher <- p:
 		// loop will handle it
 	case <-t.closing:
 		ch <- errClosed
@@ -366,10 +380,12 @@ func (t *udp) pending(id enode.ID, ptype byte, callback func(interface{}) bool)
 	return ch
 }
-func (t *udp) handleReply(from enode.ID, ptype byte, req packet) bool {
+// handleReply dispatches a reply packet, invoking reply matchers. It returns
+// whether any matcher considered the packet acceptable.
+func (t *udp) handleReply(from enode.ID, fromIP net.IP, ptype byte, req packet) bool {
 	matched := make(chan bool, 1)
 	select {
-	case t.gotreply <- reply{from, ptype, req, matched}:
+	case t.gotreply <- reply{from, fromIP, ptype, req, matched}:
 		// loop will handle it
 		return <-matched
 	case <-t.closing:
@@ -385,7 +401,7 @@ func (t *udp) loop() {
 	var (
 		plist        = list.New()
 		timeout      = time.NewTimer(0)
-		nextTimeout  *pending // head of plist when timeout was last reset
+		nextTimeout  *replyMatcher // head of plist when timeout was last reset
 		contTimeouts = 0 // number of continuous timeouts to do NTP checks
 		ntpWarnTime  = time.Unix(0, 0)
 	)
@@ -399,7 +415,7 @@ func (t *udp) loop() {
 		// Start the timer so it fires when the next pending reply has expired.
 		now := time.Now()
 		for el := plist.Front(); el != nil; el = el.Next() {
-			nextTimeout = el.Value.(*pending)
+			nextTimeout = el.Value.(*replyMatcher)
 			if dist := nextTimeout.deadline.Sub(now); dist < 2*respTimeout {
 				timeout.Reset(dist)
 				return
@@ -420,25 +436,23 @@ func (t *udp) loop() {
 		select {
 		case <-t.closing:
 			for el := plist.Front(); el != nil; el = el.Next() {
-				el.Value.(*pending).errc <- errClosed
+				el.Value.(*replyMatcher).errc <- errClosed
 			}
 			return

-		case p := <-t.addpending:
+		case p := <-t.addReplyMatcher:
 			p.deadline = time.Now().Add(respTimeout)
 			plist.PushBack(p)

 		case r := <-t.gotreply:
-			var matched bool
+			var matched bool // whether any replyMatcher considered the reply acceptable.
 			for el := plist.Front(); el != nil; el = el.Next() {
-				p := el.Value.(*pending)
-				if p.from == r.from && p.ptype == r.ptype {
-					matched = true
-					// Remove the matcher if its callback indicates
-					// that all replies have been received. This is
-					// required for packet types that expect multiple
-					// reply packets.
-					if p.callback(r.data) {
+				p := el.Value.(*replyMatcher)
+				if p.from == r.from && p.ptype == r.ptype && p.ip.Equal(r.ip) {
+					ok, requestDone := p.callback(r.data)
+					matched = matched || ok
+					// Remove the matcher if callback indicates that all replies have been received.
+					if requestDone {
 						p.errc <- nil
 						plist.Remove(el)
 					}
@@ -453,7 +467,7 @@ func (t *udp) loop() {
 			// Notify and remove callbacks whose deadline is in the past.
 			for el := plist.Front(); el != nil; el = el.Next() {
-				p := el.Value.(*pending)
+				p := el.Value.(*replyMatcher)
 				if now.After(p.deadline) || now.Equal(p.deadline) {
 					p.errc <- errTimeout
 					plist.Remove(el)
@@ -504,17 +518,17 @@ func init() {
 	}
 }

-func (t *udp) send(toaddr *net.UDPAddr, ptype byte, req packet) ([]byte, error) {
+func (t *udp) send(toaddr *net.UDPAddr, toid enode.ID, ptype byte, req packet) ([]byte, error) {
 	packet, hash, err := encodePacket(t.priv, ptype, req)
 	if err != nil {
 		return hash, err
 	}
-	return hash, t.write(toaddr, req.name(), packet)
+	return hash, t.write(toaddr, toid, req.name(), packet)
 }

-func (t *udp) write(toaddr *net.UDPAddr, what string, packet []byte) error {
+func (t *udp) write(toaddr *net.UDPAddr, toid enode.ID, what string, packet []byte) error {
 	_, err := t.conn.WriteToUDP(packet, toaddr)
-	log.Trace(">> "+what, "addr", toaddr, "err", err)
+	log.Trace(">> "+what, "id", toid, "addr", toaddr, "err", err)
 	return err
 }
@@ -573,13 +587,19 @@ func (t *udp) readLoop(unhandled chan<- ReadPacket) {
 }

 func (t *udp) handlePacket(from *net.UDPAddr, buf []byte) error {
-	packet, fromID, hash, err := decodePacket(buf)
+	packet, fromKey, hash, err := decodePacket(buf)
 	if err != nil {
 		log.Debug("Bad discv4 packet", "addr", from, "err", err)
 		return err
 	}
-	err = packet.handle(t, from, fromID, hash)
-	log.Trace("<< "+packet.name(), "addr", from, "err", err)
+	fromID := fromKey.id()
+	if err == nil {
+		err = packet.preverify(t, from, fromID, fromKey)
+	}
+	log.Trace("<< "+packet.name(), "id", fromID, "addr", from, "err", err)
+	if err == nil {
+		packet.handle(t, from, fromID, hash)
+	}
 	return err
 }
@@ -615,54 +635,67 @@ func decodePacket(buf []byte) (packet, encPubkey, []byte, error) {
 	return req, fromKey, hash, err
 }

-func (req *ping) handle(t *udp, from *net.UDPAddr, fromKey encPubkey, mac []byte) error {
+// Packet Handlers
+
+func (req *ping) preverify(t *udp, from *net.UDPAddr, fromID enode.ID, fromKey encPubkey) error {
 	if expired(req.Expiration) {
 		return errExpired
 	}
 	key, err := decodePubkey(fromKey)
 	if err != nil {
-		return fmt.Errorf("invalid public key: %v", err)
+		return errors.New("invalid public key")
 	}
-	t.send(from, pongPacket, &pong{
+	req.senderKey = key
+	return nil
+}
+
+func (req *ping) handle(t *udp, from *net.UDPAddr, fromID enode.ID, mac []byte) {
+	// Reply.
+	t.send(from, fromID, pongPacket, &pong{
 		To:         makeEndpoint(from, req.From.TCP),
 		ReplyTok:   mac,
 		Expiration: uint64(time.Now().Add(expiration).Unix()),
 	})
-	n := wrapNode(enode.NewV4(key, from.IP, int(req.From.TCP), from.Port))
-	t.handleReply(n.ID(), pingPacket, req)
-	if time.Since(t.db.LastPongReceived(n.ID())) > bondExpiration {
-		t.sendPing(n.ID(), from, func() { t.tab.addThroughPing(n) })
+
+	// Ping back if our last pong on file is too far in the past.
+	n := wrapNode(enode.NewV4(req.senderKey, from.IP, int(req.From.TCP), from.Port))
+	if time.Since(t.db.LastPongReceived(n.ID(), from.IP)) > bondExpiration {
+		t.sendPing(fromID, from, func() {
+			t.tab.addVerifiedNode(n)
+		})
 	} else {
-		t.tab.addThroughPing(n)
+		t.tab.addVerifiedNode(n)
 	}
+
+	// Update node database and endpoint predictor.
+	t.db.UpdateLastPingReceived(n.ID(), from.IP, time.Now())
 	t.localNode.UDPEndpointStatement(from, &net.UDPAddr{IP: req.To.IP, Port: int(req.To.UDP)})
-	t.db.UpdateLastPingReceived(n.ID(), time.Now())
-	return nil
 }
 func (req *ping) name() string { return "PING/v4" }

-func (req *pong) handle(t *udp, from *net.UDPAddr, fromKey encPubkey, mac []byte) error {
+func (req *pong) preverify(t *udp, from *net.UDPAddr, fromID enode.ID, fromKey encPubkey) error {
 	if expired(req.Expiration) {
 		return errExpired
 	}
-	fromID := fromKey.id()
-	if !t.handleReply(fromID, pongPacket, req) {
+	if !t.handleReply(fromID, from.IP, pongPacket, req) {
 		return errUnsolicitedReply
 	}
-	t.localNode.UDPEndpointStatement(from, &net.UDPAddr{IP: req.To.IP, Port: int(req.To.UDP)})
-	t.db.UpdateLastPongReceived(fromID, time.Now())
 	return nil
 }

+func (req *pong) handle(t *udp, from *net.UDPAddr, fromID enode.ID, mac []byte) {
+	t.localNode.UDPEndpointStatement(from, &net.UDPAddr{IP: req.To.IP, Port: int(req.To.UDP)})
+	t.db.UpdateLastPongReceived(fromID, from.IP, time.Now())
+}
+
 func (req *pong) name() string { return "PONG/v4" }
-func (req *findnode) handle(t *udp, from *net.UDPAddr, fromKey encPubkey, mac []byte) error {
+func (req *findnode) preverify(t *udp, from *net.UDPAddr, fromID enode.ID, fromKey encPubkey) error {
 	if expired(req.Expiration) {
 		return errExpired
 	}
-	fromID := fromKey.id()
-	if time.Since(t.db.LastPongReceived(fromID)) > bondExpiration {
+	if time.Since(t.db.LastPongReceived(fromID, from.IP)) > bondExpiration {
 		// No endpoint proof pong exists, we don't process the packet. This prevents an
 		// attack vector where the discovery protocol could be used to amplify traffic in a
 		// DDOS attack. A malicious actor would send a findnode request with the IP address
@@ -671,43 +704,50 @@ func (req *findnode) handle(t *udp, from *net.UDPAddr, fromKey encPubkey, mac []
 		// findnode) to the victim.
 		return errUnknownNode
 	}
+	return nil
+}
+
+func (req *findnode) handle(t *udp, from *net.UDPAddr, fromID enode.ID, mac []byte) {
+	// Determine closest nodes.
 	target := enode.ID(crypto.Keccak256Hash(req.Target[:]))
 	t.tab.mutex.Lock()
 	closest := t.tab.closest(target, bucketSize).entries
 	t.tab.mutex.Unlock()

-	p := neighbors{Expiration: uint64(time.Now().Add(expiration).Unix())}
-	var sent bool
 	// Send neighbors in chunks with at most maxNeighbors per packet
 	// to stay below the 1280 byte limit.
+	p := neighbors{Expiration: uint64(time.Now().Add(expiration).Unix())}
+	var sent bool
 	for _, n := range closest {
 		if netutil.CheckRelayIP(from.IP, n.IP()) == nil {
 			p.Nodes = append(p.Nodes, nodeToRPC(n))
 		}
 		if len(p.Nodes) == maxNeighbors {
-			t.send(from, neighborsPacket, &p)
+			t.send(from, fromID, neighborsPacket, &p)
 			p.Nodes = p.Nodes[:0]
 			sent = true
 		}
 	}
 	if len(p.Nodes) > 0 || !sent {
-		t.send(from, neighborsPacket, &p)
+		t.send(from, fromID, neighborsPacket, &p)
 	}
-	return nil
 }

 func (req *findnode) name() string { return "FINDNODE/v4" }
-func (req *neighbors) handle(t *udp, from *net.UDPAddr, fromKey encPubkey, mac []byte) error {
+func (req *neighbors) preverify(t *udp, from *net.UDPAddr, fromID enode.ID, fromKey encPubkey) error {
 	if expired(req.Expiration) {
 		return errExpired
 	}
-	if !t.handleReply(fromKey.id(), neighborsPacket, req) {
+	if !t.handleReply(fromID, from.IP, neighborsPacket, req) {
 		return errUnsolicitedReply
 	}
 	return nil
 }

+func (req *neighbors) handle(t *udp, from *net.UDPAddr, fromID enode.ID, mac []byte) {
+}
+
 func (req *neighbors) name() string { return "NEIGHBORS/v4" }

 func expired(ts uint64) bool {


@@ -19,6 +19,7 @@ package discover
 import (
 	"bytes"
 	"crypto/ecdsa"
+	crand "crypto/rand"
 	"encoding/binary"
 	"encoding/hex"
 	"errors"
@@ -57,6 +58,7 @@ type udpTest struct {
 	t                   *testing.T
 	pipe                *dgramPipe
 	table               *Table
+	db                  *enode.DB
 	udp                 *udp
 	sent                [][]byte
 	localkey, remotekey *ecdsa.PrivateKey
@@ -71,22 +73,32 @@ func newUDPTest(t *testing.T) *udpTest {
 		remotekey:  newkey(),
 		remoteaddr: &net.UDPAddr{IP: net.IP{10, 0, 1, 99}, Port: 30303},
 	}
-	db, _ := enode.OpenDB("")
-	ln := enode.NewLocalNode(db, test.localkey)
+	test.db, _ = enode.OpenDB("")
+	ln := enode.NewLocalNode(test.db, test.localkey)
 	test.table, test.udp, _ = newUDP(test.pipe, ln, Config{PrivateKey: test.localkey})
 	// Wait for initial refresh so the table doesn't send unexpected findnode.
 	<-test.table.initDone
 	return test
 }

+func (test *udpTest) close() {
+	test.table.Close()
+	test.db.Close()
+}
+
 // handles a packet as if it had been sent to the transport.
 func (test *udpTest) packetIn(wantError error, ptype byte, data packet) error {
-	enc, _, err := encodePacket(test.remotekey, ptype, data)
+	return test.packetInFrom(wantError, test.remotekey, test.remoteaddr, ptype, data)
+}
+
+// handles a packet as if it had been sent to the transport by the key/endpoint.
+func (test *udpTest) packetInFrom(wantError error, key *ecdsa.PrivateKey, addr *net.UDPAddr, ptype byte, data packet) error {
+	enc, _, err := encodePacket(key, ptype, data)
 	if err != nil {
 		return test.errorf("packet (%d) encode error: %v", ptype, err)
 	}
 	test.sent = append(test.sent, enc)
-	if err = test.udp.handlePacket(test.remoteaddr, enc); err != wantError {
+	if err = test.udp.handlePacket(addr, enc); err != wantError {
 		return test.errorf("error mismatch: got %q, want %q", err, wantError)
 	}
 	return nil
@@ -94,19 +106,19 @@ func (test *udpTest) packetIn(wantError error, ptype byte, data packet) error {
 // waits for a packet to be sent by the transport.
 // validate should have type func(*udpTest, X) error, where X is a packet type.
-func (test *udpTest) waitPacketOut(validate interface{}) ([]byte, error) {
+func (test *udpTest) waitPacketOut(validate interface{}) (*net.UDPAddr, []byte, error) {
 	dgram := test.pipe.waitPacketOut()
-	p, _, hash, err := decodePacket(dgram)
+	p, _, hash, err := decodePacket(dgram.data)
 	if err != nil {
-		return hash, test.errorf("sent packet decode error: %v", err)
+		return &dgram.to, hash, test.errorf("sent packet decode error: %v", err)
 	}
 	fn := reflect.ValueOf(validate)
 	exptype := fn.Type().In(0)
 	if reflect.TypeOf(p) != exptype {
-		return hash, test.errorf("sent packet type mismatch, got: %v, want: %v", reflect.TypeOf(p), exptype)
+		return &dgram.to, hash, test.errorf("sent packet type mismatch, got: %v, want: %v", reflect.TypeOf(p), exptype)
 	}
 	fn.Call([]reflect.Value{reflect.ValueOf(p)})
-	return hash, nil
+	return &dgram.to, hash, nil
 }
 func (test *udpTest) errorf(format string, args ...interface{}) error {
@@ -125,7 +137,7 @@ func (test *udpTest) errorf(format string, args ...interface{}) error {
 func TestUDP_packetErrors(t *testing.T) {
 	test := newUDPTest(t)
-	defer test.table.Close()
+	defer test.close()

 	test.packetIn(errExpired, pingPacket, &ping{From: testRemote, To: testLocalAnnounced, Version: 4})
 	test.packetIn(errUnsolicitedReply, pongPacket, &pong{ReplyTok: []byte{}, Expiration: futureExp})
@@ -136,7 +148,7 @@ func TestUDP_packetErrors(t *testing.T) {
 func TestUDP_pingTimeout(t *testing.T) {
 	t.Parallel()
 	test := newUDPTest(t)
-	defer test.table.Close()
+	defer test.close()

 	toaddr := &net.UDPAddr{IP: net.ParseIP("1.2.3.4"), Port: 2222}
 	toid := enode.ID{1, 2, 3, 4}
@@ -148,7 +160,7 @@ func TestUDP_pingTimeout(t *testing.T) {
 func TestUDP_responseTimeouts(t *testing.T) {
 	t.Parallel()
 	test := newUDPTest(t)
-	defer test.table.Close()
+	defer test.close()

 	rand.Seed(time.Now().UnixNano())
 	randomDuration := func(max time.Duration) time.Duration {
@@ -166,20 +178,20 @@ func TestUDP_responseTimeouts(t *testing.T) {
 		// with ptype <= 128 will not get a reply and should time out.
 		// For all other requests, a reply is scheduled to arrive
 		// within the timeout window.
-		p := &pending{
+		p := &replyMatcher{
 			ptype:    byte(rand.Intn(255)),
-			callback: func(interface{}) bool { return true },
+			callback: func(interface{}) (bool, bool) { return true, true },
 		}
 		binary.BigEndian.PutUint64(p.from[:], uint64(i))
 		if p.ptype <= 128 {
 			p.errc = timeoutErr
-			test.udp.addpending <- p
+			test.udp.addReplyMatcher <- p
 			nTimeouts++
 		} else {
 			p.errc = nilErr
-			test.udp.addpending <- p
+			test.udp.addReplyMatcher <- p
 			time.AfterFunc(randomDuration(60*time.Millisecond), func() {
-				if !test.udp.handleReply(p.from, p.ptype, nil) {
+				if !test.udp.handleReply(p.from, p.ip, p.ptype, nil) {
 					t.Logf("not matched: %v", p)
 				}
 			})
@@ -220,7 +232,7 @@ func TestUDP_responseTimeouts(t *testing.T) {
 func TestUDP_findnodeTimeout(t *testing.T) {
 	t.Parallel()
 	test := newUDPTest(t)
-	defer test.table.Close()
+	defer test.close()

 	toaddr := &net.UDPAddr{IP: net.ParseIP("1.2.3.4"), Port: 2222}
 	toid := enode.ID{1, 2, 3, 4}
@@ -236,50 +248,65 @@ func TestUDP_findnodeTimeout(t *testing.T) {
 func TestUDP_findnode(t *testing.T) {
 	test := newUDPTest(t)
-	defer test.table.Close()
+	defer test.close()

 	// put a few nodes into the table. their exact
 	// distribution shouldn't matter much, although we need to
 	// take care not to overflow any bucket.
 	nodes := &nodesByDistance{target: testTarget.id()}
-	for i := 0; i < bucketSize; i++ {
+	live := make(map[enode.ID]bool)
+	numCandidates := 2 * bucketSize
+	for i := 0; i < numCandidates; i++ {
 		key := newkey()
-		n := wrapNode(enode.NewV4(&key.PublicKey, net.IP{10, 13, 0, 1}, 0, i))
-		nodes.push(n, bucketSize)
+		ip := net.IP{10, 13, 0, byte(i)}
+		n := wrapNode(enode.NewV4(&key.PublicKey, ip, 0, 2000))
+		// Ensure half of table content isn't verified live yet.
+		if i > numCandidates/2 {
+			n.livenessChecks = 1
+			live[n.ID()] = true
+		}
+		nodes.push(n, numCandidates)
 	}
-	test.table.stuff(nodes.entries)
+	fillTable(test.table, nodes.entries)

 	// ensure there's a bond with the test node,
 	// findnode won't be accepted otherwise.
 	remoteID := encodePubkey(&test.remotekey.PublicKey).id()
-	test.table.db.UpdateLastPongReceived(remoteID, time.Now())
+	test.table.db.UpdateLastPongReceived(remoteID, test.remoteaddr.IP, time.Now())

 	// check that closest neighbors are returned.
-	test.packetIn(nil, findnodePacket, &findnode{Target: testTarget, Expiration: futureExp})
 	expected := test.table.closest(testTarget.id(), bucketSize)
+	test.packetIn(nil, findnodePacket, &findnode{Target: testTarget, Expiration: futureExp})
 	waitNeighbors := func(want []*node) {
 		test.waitPacketOut(func(p *neighbors) {
 			if len(p.Nodes) != len(want) {
 				t.Errorf("wrong number of results: got %d, want %d", len(p.Nodes), bucketSize)
 			}
-			for i := range p.Nodes {
-				if p.Nodes[i].ID.id() != want[i].ID() {
-					t.Errorf("result mismatch at %d:\n  got:  %v\n  want: %v", i, p.Nodes[i], expected.entries[i])
+			for i, n := range p.Nodes {
+				if n.ID.id() != want[i].ID() {
+					t.Errorf("result mismatch at %d:\n  got:  %v\n  want: %v", i, n, expected.entries[i])
+				}
+				if !live[n.ID.id()] {
+					t.Errorf("result includes dead node %v", n.ID.id())
 				}
 			}
 		})
 	}
-	waitNeighbors(expected.entries[:maxNeighbors])
-	waitNeighbors(expected.entries[maxNeighbors:])
+	// Receive replies.
+	want := expected.entries
+	if len(want) > maxNeighbors {
+		waitNeighbors(want[:maxNeighbors])
+		want = want[maxNeighbors:]
+	}
+	waitNeighbors(want)
 }
 func TestUDP_findnodeMultiReply(t *testing.T) {
 	test := newUDPTest(t)
-	defer test.table.Close()
+	defer test.close()

 	rid := enode.PubkeyToIDV4(&test.remotekey.PublicKey)
-	test.table.db.UpdateLastPingReceived(rid, time.Now())
+	test.table.db.UpdateLastPingReceived(rid, test.remoteaddr.IP, time.Now())

 	// queue a pending findnode request
 	resultc, errc := make(chan []*node), make(chan error)
@@ -329,11 +356,40 @@ func TestUDP_findnodeMultiReply(t *testing.T) {
 	}
 }

+func TestUDP_pingMatch(t *testing.T) {
+	test := newUDPTest(t)
+	defer test.close()
+
+	randToken := make([]byte, 32)
+	crand.Read(randToken)
+
+	test.packetIn(nil, pingPacket, &ping{From: testRemote, To: testLocalAnnounced, Version: 4, Expiration: futureExp})
+	test.waitPacketOut(func(*pong) error { return nil })
+	test.waitPacketOut(func(*ping) error { return nil })
+	test.packetIn(errUnsolicitedReply, pongPacket, &pong{ReplyTok: randToken, To: testLocalAnnounced, Expiration: futureExp})
+}
+
+func TestUDP_pingMatchIP(t *testing.T) {
+	test := newUDPTest(t)
+	defer test.close()
+
+	test.packetIn(nil, pingPacket, &ping{From: testRemote, To: testLocalAnnounced, Version: 4, Expiration: futureExp})
+	test.waitPacketOut(func(*pong) error { return nil })
+
+	_, hash, _ := test.waitPacketOut(func(*ping) error { return nil })
+	wrongAddr := &net.UDPAddr{IP: net.IP{33, 44, 1, 2}, Port: 30000}
+	test.packetInFrom(errUnsolicitedReply, test.remotekey, wrongAddr, pongPacket, &pong{
+		ReplyTok:   hash,
+		To:         testLocalAnnounced,
+		Expiration: futureExp,
+	})
+}
+
 func TestUDP_successfulPing(t *testing.T) {
 	test := newUDPTest(t)
 	added := make(chan *node, 1)
 	test.table.nodeAddedHook = func(n *node) { added <- n }
-	defer test.table.Close()
+	defer test.close()

 	// The remote side sends a ping packet to initiate the exchange.
 	go test.packetIn(nil, pingPacket, &ping{From: testRemote, To: testLocalAnnounced, Version: 4, Expiration: futureExp})
@@ -356,7 +412,7 @@ func TestUDP_successfulPing(t *testing.T) {
 	})

 	// remote is unknown, the table pings back.
-	hash, _ := test.waitPacketOut(func(p *ping) error {
+	_, hash, _ := test.waitPacketOut(func(p *ping) error {
 		if !reflect.DeepEqual(p.From, test.udp.ourEndpoint()) {
 			t.Errorf("got ping.From %#v, want %#v", p.From, test.udp.ourEndpoint())
 		}
@@ -510,7 +566,12 @@ type dgramPipe struct {
 	cond    *sync.Cond
 	closing chan struct{}
 	closed  bool
-	queue   [][]byte
+	queue   []dgram
+}
+
+type dgram struct {
+	to   net.UDPAddr
+	data []byte
 }

 func newpipe() *dgramPipe {
@@ -531,7 +592,7 @@ func (c *dgramPipe) WriteToUDP(b []byte, to *net.UDPAddr) (n int, err error) {
 	if c.closed {
 		return 0, errors.New("closed")
 	}
-	c.queue = append(c.queue, msg)
+	c.queue = append(c.queue, dgram{*to, b})
 	c.cond.Signal()
 	return len(b), nil
 }
@@ -556,7 +617,7 @@ func (c *dgramPipe) LocalAddr() net.Addr {
 	return &net.UDPAddr{IP: testLocal.IP, Port: int(testLocal.UDP)}
 }

-func (c *dgramPipe) waitPacketOut() []byte {
+func (c *dgramPipe) waitPacketOut() dgram {
 	c.mu.Lock()
 	defer c.mu.Unlock()
 	for len(c.queue) == 0 {


@@ -21,11 +21,11 @@ import (
 	"crypto/rand"
 	"encoding/binary"
 	"fmt"
+	"net"
 	"os"
 	"sync"
 	"time"

-	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/syndtr/goleveldb/leveldb"
 	"github.com/syndtr/goleveldb/leveldb/errors"
@@ -38,23 +38,30 @@ import (
 // Keys in the node database.
 const (
 	dbVersionKey = "version" // Version of the database to flush if changes
-	dbItemPrefix = "n:"      // Identifier to prefix node entries with
+	dbNodePrefix   = "n:" // Identifier to prefix node entries with
+	dbLocalPrefix  = "local:"
+	dbDiscoverRoot = "v4"

-	dbDiscoverRoot      = ":discover"
-	dbDiscoverSeq       = dbDiscoverRoot + ":seq"
-	dbDiscoverPing      = dbDiscoverRoot + ":lastping"
-	dbDiscoverPong      = dbDiscoverRoot + ":lastpong"
-	dbDiscoverFindFails = dbDiscoverRoot + ":findfail"
-	dbLocalRoot         = ":local"
-	dbLocalSeq          = dbLocalRoot + ":seq"
+	// These fields are stored per ID and IP, the full key is "n:<ID>:v4:<IP>:findfail".
+	// Use nodeItemKey to create those keys.
+	dbNodeFindFails = "findfail"
+	dbNodePing      = "lastping"
+	dbNodePong      = "lastpong"
+	dbNodeSeq       = "seq"
+
+	// Local information is keyed by ID only, the full key is "local:<ID>:seq".
+	// Use localItemKey to create those keys.
+	dbLocalSeq = "seq"
 )

-var (
+const (
 	dbNodeExpiration = 24 * time.Hour // Time after which an unseen node should be dropped.
 	dbCleanupCycle   = time.Hour      // Time period for running the expiration task.
-	dbVersion        = 7
+	dbVersion        = 8
 )

+var zeroIP = make(net.IP, 16)
+
 // DB is the node database, storing previously seen nodes and any collected metadata about
 // them for QoS purposes.
 type DB struct {
@@ -119,27 +126,58 @@ func newPersistentDB(path string) (*DB, error) {
 	return &DB{lvl: db, quit: make(chan struct{})}, nil
 }

-// makeKey generates the leveldb key-blob from a node id and its particular
-// field of interest.
-func makeKey(id ID, field string) []byte {
-	if (id == ID{}) {
-		return []byte(field)
-	}
-	return append([]byte(dbItemPrefix), append(id[:], field...)...)
+// nodeKey returns the database key for a node record.
+func nodeKey(id ID) []byte {
+	key := append([]byte(dbNodePrefix), id[:]...)
+	key = append(key, ':')
+	key = append(key, dbDiscoverRoot...)
+	return key
 }

-// splitKey tries to split a database key into a node id and a field part.
-func splitKey(key []byte) (id ID, field string) {
-	// If the key is not of a node, return it plainly
-	if !bytes.HasPrefix(key, []byte(dbItemPrefix)) {
-		return ID{}, string(key)
+// splitNodeKey returns the node ID of a key created by nodeKey.
+func splitNodeKey(key []byte) (id ID, rest []byte) {
+	if !bytes.HasPrefix(key, []byte(dbNodePrefix)) {
+		return ID{}, nil
 	}
-	// Otherwise split the id and field
-	item := key[len(dbItemPrefix):]
+	item := key[len(dbNodePrefix):]
 	copy(id[:], item[:len(id)])
-	field = string(item[len(id):])
+	return id, item[len(id)+1:]
+}

-	return id, field
+// nodeItemKey returns the database key for a node metadata field.
+func nodeItemKey(id ID, ip net.IP, field string) []byte {
+	ip16 := ip.To16()
+	if ip16 == nil {
+		panic(fmt.Errorf("invalid IP (length %d)", len(ip)))
+	}
+	return bytes.Join([][]byte{nodeKey(id), ip16, []byte(field)}, []byte{':'})
+}
+
+// splitNodeItemKey returns the components of a key created by nodeItemKey.
+func splitNodeItemKey(key []byte) (id ID, ip net.IP, field string) {
+	id, key = splitNodeKey(key)
+	// Skip discover root.
+	if string(key) == dbDiscoverRoot {
+		return id, nil, ""
+	}
+	key = key[len(dbDiscoverRoot)+1:]
+	// Split out the IP.
+	ip = net.IP(key[:16])
+	if ip4 := ip.To4(); ip4 != nil {
+		ip = ip4
+	}
+	key = key[16+1:]
+	// Field is the remainder of key.
+	field = string(key)
+	return id, ip, field
+}
+
+// localItemKey returns the key of a local node item.
+func localItemKey(id ID, field string) []byte {
+	key := append([]byte(dbLocalPrefix), id[:]...)
+	key = append(key, ':')
+	key = append(key, field...)
+	return key
 }
// fetchInt64 retrieves an integer associated with a particular key. // fetchInt64 retrieves an integer associated with a particular key.
@ -181,7 +219,7 @@ func (db *DB) storeUint64(key []byte, n uint64) error {
 // Node retrieves a node with a given id from the database.
 func (db *DB) Node(id ID) *Node {
-	blob, err := db.lvl.Get(makeKey(id, dbDiscoverRoot), nil)
+	blob, err := db.lvl.Get(nodeKey(id), nil)
 	if err != nil {
 		return nil
 	}
@@ -207,15 +245,15 @@ func (db *DB) UpdateNode(node *Node) error {
 	if err != nil {
 		return err
 	}
-	if err := db.lvl.Put(makeKey(node.ID(), dbDiscoverRoot), blob, nil); err != nil {
+	if err := db.lvl.Put(nodeKey(node.ID()), blob, nil); err != nil {
 		return err
 	}
-	return db.storeUint64(makeKey(node.ID(), dbDiscoverSeq), node.Seq())
+	return db.storeUint64(nodeItemKey(node.ID(), zeroIP, dbNodeSeq), node.Seq())
 }

 // NodeSeq returns the stored record sequence number of the given node.
 func (db *DB) NodeSeq(id ID) uint64 {
-	return db.fetchUint64(makeKey(id, dbDiscoverSeq))
+	return db.fetchUint64(nodeItemKey(id, zeroIP, dbNodeSeq))
 }
 // Resolve returns the stored record of the node if it has a larger sequence
@@ -227,15 +265,17 @@ func (db *DB) Resolve(n *Node) *Node {
 	return db.Node(n.ID())
 }

-// DeleteNode deletes all information/keys associated with a node.
-func (db *DB) DeleteNode(id ID) error {
-	deleter := db.lvl.NewIterator(util.BytesPrefix(makeKey(id, "")), nil)
-	for deleter.Next() {
-		if err := db.lvl.Delete(deleter.Key(), nil); err != nil {
-			return err
-		}
+// DeleteNode deletes all information associated with a node.
+func (db *DB) DeleteNode(id ID) {
+	deleteRange(db.lvl, nodeKey(id))
+}
+
+func deleteRange(db *leveldb.DB, prefix []byte) {
+	it := db.NewIterator(util.BytesPrefix(prefix), nil)
+	defer it.Release()
+	for it.Next() {
+		db.Delete(it.Key(), nil)
 	}
-	return nil
 }
 // ensureExpirer is a small helper method ensuring that the data expiration
@@ -259,9 +299,7 @@ func (db *DB) expirer() {
 	for {
 		select {
 		case <-tick.C:
-			if err := db.expireNodes(); err != nil {
-				log.Error("Failed to expire nodedb items", "err", err)
-			}
+			db.expireNodes()
 		case <-db.quit:
 			return
 		}
 	}
 }

@@ -269,71 +307,85 @@ func (db *DB) expirer() {
 // expireNodes iterates over the database and deletes all nodes that have not
-// been seen (i.e. received a pong from) for some allotted time.
-func (db *DB) expireNodes() error {
-	threshold := time.Now().Add(-dbNodeExpiration)
-
-	// Find discovered nodes that are older than the allowance
-	it := db.lvl.NewIterator(nil, nil)
+// been seen (i.e. received a pong from) for some time.
+func (db *DB) expireNodes() {
+	it := db.lvl.NewIterator(util.BytesPrefix([]byte(dbNodePrefix)), nil)
 	defer it.Release()
+	if !it.Next() {
+		return
+	}

-	for it.Next() {
-		// Skip the item if not a discovery node
-		id, field := splitKey(it.Key())
-		if field != dbDiscoverRoot {
-			continue
+	var (
+		threshold    = time.Now().Add(-dbNodeExpiration).Unix()
+		youngestPong int64
+		atEnd        = false
+	)
+	for !atEnd {
+		id, ip, field := splitNodeItemKey(it.Key())
+		if field == dbNodePong {
+			time, _ := binary.Varint(it.Value())
+			if time > youngestPong {
+				youngestPong = time
+			}
+			if time < threshold {
+				// Last pong from this IP older than threshold, remove fields belonging to it.
+				deleteRange(db.lvl, nodeItemKey(id, ip, ""))
+			}
 		}
-		// Skip the node if not expired yet (and not self)
-		if seen := db.LastPongReceived(id); seen.After(threshold) {
-			continue
+		atEnd = !it.Next()
+		nextID, _ := splitNodeKey(it.Key())
+		if atEnd || nextID != id {
+			// We've moved beyond the last entry of the current ID.
+			// Remove everything if there was no recent enough pong.
+			if youngestPong > 0 && youngestPong < threshold {
+				deleteRange(db.lvl, nodeKey(id))
+			}
+			youngestPong = 0
 		}
-		// Otherwise delete all associated information
-		db.DeleteNode(id)
 	}
-	return nil
 }
 // LastPingReceived retrieves the time of the last ping packet received from
 // a remote node.
-func (db *DB) LastPingReceived(id ID) time.Time {
-	return time.Unix(db.fetchInt64(makeKey(id, dbDiscoverPing)), 0)
+func (db *DB) LastPingReceived(id ID, ip net.IP) time.Time {
+	return time.Unix(db.fetchInt64(nodeItemKey(id, ip, dbNodePing)), 0)
 }

 // UpdateLastPingReceived updates the last time we tried contacting a remote node.
-func (db *DB) UpdateLastPingReceived(id ID, instance time.Time) error {
-	return db.storeInt64(makeKey(id, dbDiscoverPing), instance.Unix())
+func (db *DB) UpdateLastPingReceived(id ID, ip net.IP, instance time.Time) error {
+	return db.storeInt64(nodeItemKey(id, ip, dbNodePing), instance.Unix())
 }

 // LastPongReceived retrieves the time of the last successful pong from remote node.
-func (db *DB) LastPongReceived(id ID) time.Time {
+func (db *DB) LastPongReceived(id ID, ip net.IP) time.Time {
 	// Launch expirer
 	db.ensureExpirer()
-	return time.Unix(db.fetchInt64(makeKey(id, dbDiscoverPong)), 0)
+	return time.Unix(db.fetchInt64(nodeItemKey(id, ip, dbNodePong)), 0)
 }

 // UpdateLastPongReceived updates the last pong time of a node.
-func (db *DB) UpdateLastPongReceived(id ID, instance time.Time) error {
-	return db.storeInt64(makeKey(id, dbDiscoverPong), instance.Unix())
+func (db *DB) UpdateLastPongReceived(id ID, ip net.IP, instance time.Time) error {
+	return db.storeInt64(nodeItemKey(id, ip, dbNodePong), instance.Unix())
 }

 // FindFails retrieves the number of findnode failures since bonding.
-func (db *DB) FindFails(id ID) int {
-	return int(db.fetchInt64(makeKey(id, dbDiscoverFindFails)))
+func (db *DB) FindFails(id ID, ip net.IP) int {
+	return int(db.fetchInt64(nodeItemKey(id, ip, dbNodeFindFails)))
 }

 // UpdateFindFails updates the number of findnode failures since bonding.
-func (db *DB) UpdateFindFails(id ID, fails int) error {
-	return db.storeInt64(makeKey(id, dbDiscoverFindFails), int64(fails))
+func (db *DB) UpdateFindFails(id ID, ip net.IP, fails int) error {
+	return db.storeInt64(nodeItemKey(id, ip, dbNodeFindFails), int64(fails))
 }

 // LocalSeq retrieves the local record sequence counter.
 func (db *DB) localSeq(id ID) uint64 {
-	return db.fetchUint64(makeKey(id, dbLocalSeq))
+	return db.fetchUint64(nodeItemKey(id, zeroIP, dbLocalSeq))
 }

 // storeLocalSeq stores the local record sequence counter.
 func (db *DB) storeLocalSeq(id ID, n uint64) {
-	db.storeUint64(makeKey(id, dbLocalSeq), n)
+	db.storeUint64(nodeItemKey(id, zeroIP, dbLocalSeq), n)
 }
 // QuerySeeds retrieves random nodes to be used as potential seed nodes
@@ -355,14 +407,14 @@ seek:
 		ctr := id[0]
 		rand.Read(id[:])
 		id[0] = ctr + id[0]%16
-		it.Seek(makeKey(id, dbDiscoverRoot))
+		it.Seek(nodeKey(id))

 		n := nextNode(it)
 		if n == nil {
 			id[0] = 0
 			continue seek // iterator exhausted
 		}
-		if now.Sub(db.LastPongReceived(n.ID())) > maxAge {
+		if now.Sub(db.LastPongReceived(n.ID(), n.IP())) > maxAge {
 			continue seek
 		}
 		for i := range nodes {
@@ -379,8 +431,8 @@ seek:
 // database entries.
 func nextNode(it iterator.Iterator) *Node {
 	for end := false; !end; end = !it.Next() {
-		id, field := splitKey(it.Key())
-		if field != dbDiscoverRoot {
+		id, rest := splitNodeKey(it.Key())
+		if string(rest) != dbDiscoverRoot {
 			continue
 		}
 		return mustDecodeNode(id[:], it.Value())


@@ -28,42 +28,54 @@ import (
 	"time"
 )
-var nodeDBKeyTests = []struct {
-	id    ID
-	field string
-	key   []byte
-}{
-	{
-		id:    ID{},
-		field: "version",
-		key:   []byte{0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e}, // field
-	},
-	{
-		id:    HexID("51232b8d7821617d2b29b54b81cdefb9b3e9c37d7fd5f63270bcc9e1a6f6a439"),
-		field: ":discover",
-		key: []byte{
-			0x6e, 0x3a, // prefix
-			0x51, 0x23, 0x2b, 0x8d, 0x78, 0x21, 0x61, 0x7d, // node id
-			0x2b, 0x29, 0xb5, 0x4b, 0x81, 0xcd, 0xef, 0xb9, //
-			0xb3, 0xe9, 0xc3, 0x7d, 0x7f, 0xd5, 0xf6, 0x32, //
-			0x70, 0xbc, 0xc9, 0xe1, 0xa6, 0xf6, 0xa4, 0x39, //
-			0x3a, 0x64, 0x69, 0x73, 0x63, 0x6f, 0x76, 0x65, 0x72, // field
-		},
-	},
-}
+var keytestID = HexID("51232b8d7821617d2b29b54b81cdefb9b3e9c37d7fd5f63270bcc9e1a6f6a439")

-func TestDBKeys(t *testing.T) {
-	for i, tt := range nodeDBKeyTests {
-		if key := makeKey(tt.id, tt.field); !bytes.Equal(key, tt.key) {
-			t.Errorf("make test %d: key mismatch: have 0x%x, want 0x%x", i, key, tt.key)
-		}
-		id, field := splitKey(tt.key)
-		if !bytes.Equal(id[:], tt.id[:]) {
-			t.Errorf("split test %d: id mismatch: have 0x%x, want 0x%x", i, id, tt.id)
-		}
-		if field != tt.field {
-			t.Errorf("split test %d: field mismatch: have 0x%x, want 0x%x", i, field, tt.field)
-		}
-	}
-}
+func TestDBNodeKey(t *testing.T) {
+	enc := nodeKey(keytestID)
+	want := []byte{
+		'n', ':',
+		0x51, 0x23, 0x2b, 0x8d, 0x78, 0x21, 0x61, 0x7d, // node id
+		0x2b, 0x29, 0xb5, 0x4b, 0x81, 0xcd, 0xef, 0xb9, //
+		0xb3, 0xe9, 0xc3, 0x7d, 0x7f, 0xd5, 0xf6, 0x32, //
+		0x70, 0xbc, 0xc9, 0xe1, 0xa6, 0xf6, 0xa4, 0x39, //
+		':', 'v', '4',
+	}
+	if !bytes.Equal(enc, want) {
+		t.Errorf("wrong encoded key:\ngot  %q\nwant %q", enc, want)
+	}
+	id, _ := splitNodeKey(enc)
+	if id != keytestID {
+		t.Errorf("wrong ID from splitNodeKey")
+	}
+}
+
+func TestDBNodeItemKey(t *testing.T) {
+	wantIP := net.IP{127, 0, 0, 3}
+	wantField := "foobar"
+	enc := nodeItemKey(keytestID, wantIP, wantField)
+	want := []byte{
+		'n', ':',
+		0x51, 0x23, 0x2b, 0x8d, 0x78, 0x21, 0x61, 0x7d, // node id
+		0x2b, 0x29, 0xb5, 0x4b, 0x81, 0xcd, 0xef, 0xb9, //
+		0xb3, 0xe9, 0xc3, 0x7d, 0x7f, 0xd5, 0xf6, 0x32, //
+		0x70, 0xbc, 0xc9, 0xe1, 0xa6, 0xf6, 0xa4, 0x39, //
+		':', 'v', '4', ':',
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // IP
+		0x00, 0x00, 0xff, 0xff, 0x7f, 0x00, 0x00, 0x03, //
+		':', 'f', 'o', 'o', 'b', 'a', 'r',
+	}
+	if !bytes.Equal(enc, want) {
+		t.Errorf("wrong encoded key:\ngot  %q\nwant %q", enc, want)
+	}
+	id, ip, field := splitNodeItemKey(enc)
+	if id != keytestID {
+		t.Errorf("splitNodeItemKey returned wrong ID: %v", id)
+	}
+	if !bytes.Equal(ip, wantIP) {
+		t.Errorf("splitNodeItemKey returned wrong IP: %v", ip)
+	}
+	if field != wantField {
+		t.Errorf("splitNodeItemKey returned wrong field: %q", field)
+	}
+}
@@ -113,33 +125,33 @@ func TestDBFetchStore(t *testing.T) {
 	defer db.Close()

 	// Check fetch/store operations on a node ping object
-	if stored := db.LastPingReceived(node.ID()); stored.Unix() != 0 {
+	if stored := db.LastPingReceived(node.ID(), node.IP()); stored.Unix() != 0 {
 		t.Errorf("ping: non-existing object: %v", stored)
 	}
-	if err := db.UpdateLastPingReceived(node.ID(), inst); err != nil {
+	if err := db.UpdateLastPingReceived(node.ID(), node.IP(), inst); err != nil {
 		t.Errorf("ping: failed to update: %v", err)
 	}
-	if stored := db.LastPingReceived(node.ID()); stored.Unix() != inst.Unix() {
+	if stored := db.LastPingReceived(node.ID(), node.IP()); stored.Unix() != inst.Unix() {
 		t.Errorf("ping: value mismatch: have %v, want %v", stored, inst)
 	}
 	// Check fetch/store operations on a node pong object
-	if stored := db.LastPongReceived(node.ID()); stored.Unix() != 0 {
+	if stored := db.LastPongReceived(node.ID(), node.IP()); stored.Unix() != 0 {
 		t.Errorf("pong: non-existing object: %v", stored)
 	}
-	if err := db.UpdateLastPongReceived(node.ID(), inst); err != nil {
+	if err := db.UpdateLastPongReceived(node.ID(), node.IP(), inst); err != nil {
 		t.Errorf("pong: failed to update: %v", err)
 	}
-	if stored := db.LastPongReceived(node.ID()); stored.Unix() != inst.Unix() {
+	if stored := db.LastPongReceived(node.ID(), node.IP()); stored.Unix() != inst.Unix() {
 		t.Errorf("pong: value mismatch: have %v, want %v", stored, inst)
 	}
 	// Check fetch/store operations on a node findnode-failure object
-	if stored := db.FindFails(node.ID()); stored != 0 {
+	if stored := db.FindFails(node.ID(), node.IP()); stored != 0 {
 		t.Errorf("find-node fails: non-existing object: %v", stored)
 	}
-	if err := db.UpdateFindFails(node.ID(), num); err != nil {
+	if err := db.UpdateFindFails(node.ID(), node.IP(), num); err != nil {
 		t.Errorf("find-node fails: failed to update: %v", err)
 	}
-	if stored := db.FindFails(node.ID()); stored != num {
+	if stored := db.FindFails(node.ID(), node.IP()); stored != num {
 		t.Errorf("find-node fails: value mismatch: have %v, want %v", stored, num)
 	}
 	// Check fetch/store operations on an actual node object
@@ -256,7 +268,7 @@ func testSeedQuery() error {
 		if err := db.UpdateNode(seed.node); err != nil {
 			return fmt.Errorf("node %d: failed to insert: %v", i, err)
 		}
-		if err := db.UpdateLastPongReceived(seed.node.ID(), seed.pong); err != nil {
+		if err := db.UpdateLastPongReceived(seed.node.ID(), seed.node.IP(), seed.pong); err != nil {
 			return fmt.Errorf("node %d: failed to insert bondTime: %v", i, err)
 		}
 	}
@@ -323,8 +335,10 @@ func TestDBPersistency(t *testing.T) {
 var nodeDBExpirationNodes = []struct {
-	node *Node
-	pong time.Time
-	exp  bool
+	node      *Node
+	pong      time.Time
+	storeNode bool
+	exp       bool
 }{
+	// Node has new enough pong time and isn't expired:
 	{
 		node: NewV4(
 			hexPubkey("8d110e2ed4b446d9b5fb50f117e5f37fb7597af455e1dab0e6f045a6eeaa786a6781141659020d38bdc5e698ed3d4d2bafa8b5061810dfa63e8ac038db2e9b67"),
@@ -332,15 +346,77 @@ var nodeDBExpirationNodes = []struct {
 			30303,
 			30303,
 		),
+		storeNode: true,
 		pong: time.Now().Add(-dbNodeExpiration + time.Minute),
 		exp:  false,
-	}, {
+	},
+	// Node with pong time before expiration is removed:
+	{
 		node: NewV4(
 			hexPubkey("913a205579c32425b220dfba999d215066e5bdbf900226b11da1907eae5e93eb40616d47412cf819664e9eacbdfcca6b0c6e07e09847a38472d4be46ab0c3672"),
 			net.IP{127, 0, 0, 2},
 			30303,
 			30303,
 		),
+		storeNode: true,
+		pong: time.Now().Add(-dbNodeExpiration - time.Minute),
+		exp:  true,
+	},
+	// Just pong time, no node stored:
+	{
+		node: NewV4(
+			hexPubkey("b56670e0b6bad2c5dab9f9fe6f061a16cf78d68b6ae2cfda3144262d08d97ce5f46fd8799b6d1f709b1abe718f2863e224488bd7518e5e3b43809ac9bd1138ca"),
+			net.IP{127, 0, 0, 3},
+			30303,
+			30303,
+		),
+		storeNode: false,
+		pong: time.Now().Add(-dbNodeExpiration - time.Minute),
+		exp:  true,
+	},
+	// Node with multiple pong times, all older than expiration.
+	{
+		node: NewV4(
+			hexPubkey("29f619cebfd32c9eab34aec797ed5e3fe15b9b45be95b4df3f5fe6a9ae892f433eb08d7698b2ef3621568b0fb70d57b515ab30d4e72583b798298e0f0a66b9d1"),
+			net.IP{127, 0, 0, 4},
+			30303,
+			30303,
+		),
+		storeNode: true,
+		pong: time.Now().Add(-dbNodeExpiration - time.Minute),
+		exp:  true,
+	},
+	{
+		node: NewV4(
+			hexPubkey("29f619cebfd32c9eab34aec797ed5e3fe15b9b45be95b4df3f5fe6a9ae892f433eb08d7698b2ef3621568b0fb70d57b515ab30d4e72583b798298e0f0a66b9d1"),
+			net.IP{127, 0, 0, 5},
+			30303,
+			30303,
+		),
+		storeNode: false,
+		pong: time.Now().Add(-dbNodeExpiration - 2*time.Minute),
+		exp:  true,
+	},
+	// Node with multiple pong times, one newer, one older than expiration.
+	{
+		node: NewV4(
+			hexPubkey("3b73a9e5f4af6c4701c57c73cc8cfa0f4802840b24c11eba92aac3aef65644a3728b4b2aec8199f6d72bd66be2c65861c773129039bd47daa091ca90a6d4c857"),
+			net.IP{127, 0, 0, 6},
+			30303,
+			30303,
+		),
+		storeNode: true,
+		pong: time.Now().Add(-dbNodeExpiration + time.Minute),
+		exp:  false,
+	},
+	{
+		node: NewV4(
+			hexPubkey("3b73a9e5f4af6c4701c57c73cc8cfa0f4802840b24c11eba92aac3aef65644a3728b4b2aec8199f6d72bd66be2c65861c773129039bd47daa091ca90a6d4c857"),
+			net.IP{127, 0, 0, 7},
+			30303,
+			30303,
+		),
+		storeNode: false,
 		pong: time.Now().Add(-dbNodeExpiration - time.Minute),
 		exp:  true,
 	},
@@ -350,23 +426,39 @@ func TestDBExpiration(t *testing.T) {
 	db, _ := OpenDB("")
 	defer db.Close()

-	// Add all the test nodes and set their last pong time
+	// Add all the test nodes and set their last pong time.
 	for i, seed := range nodeDBExpirationNodes {
-		if err := db.UpdateNode(seed.node); err != nil {
-			t.Fatalf("node %d: failed to insert: %v", i, err)
+		if seed.storeNode {
+			if err := db.UpdateNode(seed.node); err != nil {
+				t.Fatalf("node %d: failed to insert: %v", i, err)
+			}
 		}
-		if err := db.UpdateLastPongReceived(seed.node.ID(), seed.pong); err != nil {
+		if err := db.UpdateLastPongReceived(seed.node.ID(), seed.node.IP(), seed.pong); err != nil {
 			t.Fatalf("node %d: failed to update bondTime: %v", i, err)
 		}
 	}
-	// Expire some of them, and check the rest
-	if err := db.expireNodes(); err != nil {
-		t.Fatalf("failed to expire nodes: %v", err)
-	}
+
+	db.expireNodes()
+
+	// Check that expired entries have been removed.
+	unixZeroTime := time.Unix(0, 0)
 	for i, seed := range nodeDBExpirationNodes {
 		node := db.Node(seed.node.ID())
-		if (node == nil && !seed.exp) || (node != nil && seed.exp) {
-			t.Errorf("node %d: expiration mismatch: have %v, want %v", i, node, seed.exp)
+		pong := db.LastPongReceived(seed.node.ID(), seed.node.IP())
+		if seed.exp {
+			if seed.storeNode && node != nil {
+				t.Errorf("node %d (%s) shouldn't be present after expiration", i, seed.node.ID().TerminalString())
+			}
+			if !pong.Equal(unixZeroTime) {
+				t.Errorf("pong time %d (%s %v) shouldn't be present after expiration", i, seed.node.ID().TerminalString(), seed.node.IP())
+			}
+		} else {
+			if seed.storeNode && node == nil {
+				t.Errorf("node %d (%s) should be present after expiration", i, seed.node.ID().TerminalString())
+			}
+			if !pong.Equal(seed.pong.Truncate(1 * time.Second)) {
+				t.Errorf("pong time %d (%s) should be %v after expiration, but is %v", i, seed.node.ID().TerminalString(), seed.pong, pong)
+			}
 		}
 	}
 }


@@ -27,23 +27,21 @@ var (
 	// All metrics are cumulative

 	// total amount of units credited
-	mBalanceCredit metrics.Counter
+	mBalanceCredit = metrics.NewRegisteredCounterForced("account.balance.credit", metrics.AccountingRegistry)
 	// total amount of units debited
-	mBalanceDebit metrics.Counter
+	mBalanceDebit = metrics.NewRegisteredCounterForced("account.balance.debit", metrics.AccountingRegistry)
 	// total amount of bytes credited
-	mBytesCredit metrics.Counter
+	mBytesCredit = metrics.NewRegisteredCounterForced("account.bytes.credit", metrics.AccountingRegistry)
 	// total amount of bytes debited
-	mBytesDebit metrics.Counter
+	mBytesDebit = metrics.NewRegisteredCounterForced("account.bytes.debit", metrics.AccountingRegistry)
 	// total amount of credited messages
-	mMsgCredit metrics.Counter
+	mMsgCredit = metrics.NewRegisteredCounterForced("account.msg.credit", metrics.AccountingRegistry)
 	// total amount of debited messages
-	mMsgDebit metrics.Counter
+	mMsgDebit = metrics.NewRegisteredCounterForced("account.msg.debit", metrics.AccountingRegistry)
 	// how many times local node had to drop remote peers
-	mPeerDrops metrics.Counter
+	mPeerDrops = metrics.NewRegisteredCounterForced("account.peerdrops", metrics.AccountingRegistry)
 	// how many times local node overdrafted and dropped
-	mSelfDrops metrics.Counter
-
-	MetricsRegistry metrics.Registry
+	mSelfDrops = metrics.NewRegisteredCounterForced("account.selfdrops", metrics.AccountingRegistry)
 )
 // Prices defines how prices are being passed on to the accounting instance
@@ -110,24 +108,13 @@ func NewAccounting(balance Balance, po Prices) *Accounting {
 	return ah
 }

-// SetupAccountingMetrics creates a separate registry for p2p accounting metrics;
+// SetupAccountingMetrics uses a separate registry for p2p accounting metrics;
 // this registry should be independent of any other metrics as it persists at different endpoints.
-// It also instantiates the given metrics and starts the persisting go-routine which
+// It also starts the persisting go-routine which
 // at the passed interval writes the metrics to a LevelDB
 func SetupAccountingMetrics(reportInterval time.Duration, path string) *AccountingMetrics {
-	// create an empty registry
-	MetricsRegistry = metrics.NewRegistry()
-	// instantiate the metrics
-	mBalanceCredit = metrics.NewRegisteredCounterForced("account.balance.credit", MetricsRegistry)
-	mBalanceDebit = metrics.NewRegisteredCounterForced("account.balance.debit", MetricsRegistry)
-	mBytesCredit = metrics.NewRegisteredCounterForced("account.bytes.credit", MetricsRegistry)
-	mBytesDebit = metrics.NewRegisteredCounterForced("account.bytes.debit", MetricsRegistry)
-	mMsgCredit = metrics.NewRegisteredCounterForced("account.msg.credit", MetricsRegistry)
-	mMsgDebit = metrics.NewRegisteredCounterForced("account.msg.debit", MetricsRegistry)
-	mPeerDrops = metrics.NewRegisteredCounterForced("account.peerdrops", MetricsRegistry)
-	mSelfDrops = metrics.NewRegisteredCounterForced("account.selfdrops", MetricsRegistry)
 	// create the DB and start persisting
-	return NewAccountingMetrics(MetricsRegistry, reportInterval, path)
+	return NewAccountingMetrics(metrics.AccountingRegistry, reportInterval, path)
 }
 // Send takes a peer, a size and a msg and


@@ -423,3 +423,17 @@ func (p *Peer) Handshake(ctx context.Context, hs interface{}, verify func(interf
 	}
 	return rhs, nil
 }
+
+// HasCap returns true if Peer has a capability
+// with provided name.
+func (p *Peer) HasCap(capName string) (yes bool) {
+	if p == nil || p.Peer == nil {
+		return false
+	}
+	for _, c := range p.Caps() {
+		if c.Name == capName {
+			return true
+		}
+	}
+	return false
+}


@@ -142,9 +142,9 @@ func newProtocol(pp *p2ptest.TestPeerPool) func(*p2p.Peer, p2p.MsgReadWriter) er
 	}
 }

-func protocolTester(t *testing.T, pp *p2ptest.TestPeerPool) *p2ptest.ProtocolTester {
+func protocolTester(pp *p2ptest.TestPeerPool) *p2ptest.ProtocolTester {
 	conf := adapters.RandomNodeConfig()
-	return p2ptest.NewProtocolTester(t, conf.ID, 2, newProtocol(pp))
+	return p2ptest.NewProtocolTester(conf.ID, 2, newProtocol(pp))
 }
 func protoHandshakeExchange(id enode.ID, proto *protoHandshake) []p2ptest.Exchange {
@@ -173,7 +173,7 @@ func protoHandshakeExchange(id enode.ID, proto *protoHandshake) []p2ptest.Exchan
 func runProtoHandshake(t *testing.T, proto *protoHandshake, errs ...error) {
 	pp := p2ptest.NewTestPeerPool()
-	s := protocolTester(t, pp)
+	s := protocolTester(pp)
 	// TODO: make this more than one handshake
 	node := s.Nodes[0]
 	if err := s.TestExchanges(protoHandshakeExchange(node.ID(), proto)...); err != nil {
@@ -250,7 +250,7 @@ func TestProtocolHook(t *testing.T) {
 	}
 	conf := adapters.RandomNodeConfig()
-	tester := p2ptest.NewProtocolTester(t, conf.ID, 2, runFunc)
+	tester := p2ptest.NewProtocolTester(conf.ID, 2, runFunc)
 	err := tester.TestExchanges(p2ptest.Exchange{
 		Expects: []p2ptest.Expect{
 			{
@@ -389,7 +389,7 @@ func moduleHandshakeExchange(id enode.ID, resp uint) []p2ptest.Exchange {
 func runModuleHandshake(t *testing.T, resp uint, errs ...error) {
 	pp := p2ptest.NewTestPeerPool()
-	s := protocolTester(t, pp)
+	s := protocolTester(pp)
 	node := s.Nodes[0]
 	if err := s.TestExchanges(protoHandshakeExchange(node.ID(), &protoHandshake{42, "420"})...); err != nil {
 		t.Fatal(err)
@@ -469,7 +469,7 @@ func testMultiPeerSetup(a, b enode.ID) []p2ptest.Exchange {
 func runMultiplePeers(t *testing.T, peer int, errs ...error) {
 	pp := p2ptest.NewTestPeerPool()
-	s := protocolTester(t, pp)
+	s := protocolTester(pp)
 	if err := s.TestExchanges(testMultiPeerSetup(s.Nodes[0].ID(), s.Nodes[1].ID())...); err != nil {
 		t.Fatal(err)


@@ -43,21 +43,27 @@ func TestReporter(t *testing.T) {
 	metrics := SetupAccountingMetrics(reportInterval, filepath.Join(dir, "test.db"))
 	log.Debug("Done.")

-	//do some metrics
+	//change metrics
 	mBalanceCredit.Inc(12)
 	mBytesCredit.Inc(34)
 	mMsgDebit.Inc(9)

+	//store expected metrics
+	expectedBalanceCredit := mBalanceCredit.Count()
+	expectedBytesCredit := mBytesCredit.Count()
+	expectedMsgDebit := mMsgDebit.Count()
+
 	//give the reporter time to write the metrics to DB
 	time.Sleep(20 * time.Millisecond)

-	//set the metrics to nil - this effectively simulates the node having shut down...
-	mBalanceCredit = nil
-	mBytesCredit = nil
-	mMsgDebit = nil
-
 	//close the DB also, or we can't create a new one
 	metrics.Close()

+	//clear the metrics - this effectively simulates the node having shut down...
+	mBalanceCredit.Clear()
+	mBytesCredit.Clear()
+	mMsgDebit.Clear()
+
 	//setup the metrics again
 	log.Debug("Setting up metrics second time")
 	metrics = SetupAccountingMetrics(reportInterval, filepath.Join(dir, "test.db"))
@@ -65,13 +71,13 @@ func TestReporter(t *testing.T) {
 	log.Debug("Done.")

 	//now check the metrics, they should have the same value as before "shutdown"
-	if mBalanceCredit.Count() != 12 {
-		t.Fatalf("Expected counter to be %d, but is %d", 12, mBalanceCredit.Count())
+	if mBalanceCredit.Count() != expectedBalanceCredit {
+		t.Fatalf("Expected counter to be %d, but is %d", expectedBalanceCredit, mBalanceCredit.Count())
 	}
-	if mBytesCredit.Count() != 34 {
-		t.Fatalf("Expected counter to be %d, but is %d", 23, mBytesCredit.Count())
+	if mBytesCredit.Count() != expectedBytesCredit {
+		t.Fatalf("Expected counter to be %d, but is %d", expectedBytesCredit, mBytesCredit.Count())
 	}
-	if mMsgDebit.Count() != 9 {
-		t.Fatalf("Expected counter to be %d, but is %d", 9, mMsgDebit.Count())
+	if mMsgDebit.Count() != expectedMsgDebit {
+		t.Fatalf("Expected counter to be %d, but is %d", expectedMsgDebit, mMsgDebit.Count())
 	}
 }


@@ -32,6 +32,9 @@ var (
 // It is useful when constructing a chain network topology
 // when Network adds and removes nodes dynamically.
 func (net *Network) ConnectToLastNode(id enode.ID) (err error) {
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
 	ids := net.getUpNodeIDs()
 	l := len(ids)
 	if l < 2 {
@@ -41,29 +44,35 @@ func (net *Network) ConnectToLastNode(id enode.ID) (err error) {
 	if last == id {
 		last = ids[l-2]
 	}
-	return net.connect(last, id)
+	return net.connectNotConnected(last, id)
 }

 // ConnectToRandomNode connects the node with provided NodeID
 // to a random node that is up.
 func (net *Network) ConnectToRandomNode(id enode.ID) (err error) {
-	selected := net.GetRandomUpNode(id)
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
+	selected := net.getRandomUpNode(id)
 	if selected == nil {
 		return ErrNodeNotFound
 	}
-	return net.connect(selected.ID(), id)
+	return net.connectNotConnected(selected.ID(), id)
 }

 // ConnectNodesFull connects all nodes one to another.
 // It provides a complete connectivity in the network
 // which should be rarely needed.
 func (net *Network) ConnectNodesFull(ids []enode.ID) (err error) {
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
 	if ids == nil {
 		ids = net.getUpNodeIDs()
 	}
 	for i, lid := range ids {
 		for _, rid := range ids[i+1:] {
-			if err = net.connect(lid, rid); err != nil {
+			if err = net.connectNotConnected(lid, rid); err != nil {
 				return err
 			}
 		}
 	}
@@ -74,12 +83,19 @@ func (net *Network) ConnectNodesFull(ids []enode.ID) (err error) {
 // ConnectNodesChain connects all nodes in a chain topology.
 // If ids argument is nil, all nodes that are up will be connected.
 func (net *Network) ConnectNodesChain(ids []enode.ID) (err error) {
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
+	return net.connectNodesChain(ids)
+}
+
+func (net *Network) connectNodesChain(ids []enode.ID) (err error) {
 	if ids == nil {
 		ids = net.getUpNodeIDs()
 	}
 	l := len(ids)
 	for i := 0; i < l-1; i++ {
-		if err := net.connect(ids[i], ids[i+1]); err != nil {
+		if err := net.connectNotConnected(ids[i], ids[i+1]); err != nil {
 			return err
 		}
 	}
@@ -89,6 +105,9 @@ func (net *Network) ConnectNodesChain(ids []enode.ID) (err error) {
 // ConnectNodesRing connects all nodes in a ring topology.
 // If ids argument is nil, all nodes that are up will be connected.
 func (net *Network) ConnectNodesRing(ids []enode.ID) (err error) {
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
 	if ids == nil {
 		ids = net.getUpNodeIDs()
 	}
@@ -96,15 +115,18 @@ func (net *Network) ConnectNodesRing(ids []enode.ID) (err error) {
 	if l < 2 {
 		return nil
 	}
-	if err := net.ConnectNodesChain(ids); err != nil {
+	if err := net.connectNodesChain(ids); err != nil {
 		return err
 	}
-	return net.connect(ids[l-1], ids[0])
+	return net.connectNotConnected(ids[l-1], ids[0])
 }

 // ConnectNodesStar connects all nodes into a star topology
 // If ids argument is nil, all nodes that are up will be connected.
 func (net *Network) ConnectNodesStar(ids []enode.ID, center enode.ID) (err error) {
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
 	if ids == nil {
 		ids = net.getUpNodeIDs()
 	}
@@ -112,16 +134,15 @@ func (net *Network) ConnectNodesStar(ids []enode.ID, center enode.ID
 	for _, id := range ids {
 		if center == id {
 			continue
 		}
-		if err := net.connect(center, id); err != nil {
+		if err := net.connectNotConnected(center, id); err != nil {
 			return err
 		}
 	}
 	return nil
 }

-// connect connects two nodes but ignores already connected error.
-func (net *Network) connect(oneID, otherID enode.ID) error {
+func (net *Network) connectNotConnected(oneID, otherID enode.ID) error {
+	return ignoreAlreadyConnectedErr(net.connect(oneID, otherID))
return ignoreAlreadyConnectedErr(net.Connect(oneID, otherID))
} }
func ignoreAlreadyConnectedErr(err error) error { func ignoreAlreadyConnectedErr(err error) error {
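The hunks above follow a standard Go locking convention: exported methods acquire the network mutex, and unexported counterparts (`connect`, `connectNodesChain`, `connectNotConnected`) assume the caller already holds it, since `sync.Mutex` is not reentrant and re-locking from inside a locked method would deadlock. A minimal standalone sketch of that convention (the `registry` type and its names are illustrative, not part of the simulations package):

```go
package main

import (
	"fmt"
	"sync"
)

// registry demonstrates the exported-locks / unexported-assumes-lock
// convention: Add takes the mutex once, then composes the unexported
// helper freely, because add never tries to re-acquire the lock.
type registry struct {
	mu    sync.Mutex
	items map[string]bool
}

// Add is the public entry point: it locks, then delegates.
func (r *registry) Add(ids ...string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for _, id := range ids {
		r.add(id)
	}
}

// add must only be called with r.mu held.
func (r *registry) add(id string) {
	r.items[id] = true
}

func main() {
	r := &registry{items: make(map[string]bool)}
	r.Add("a", "b", "a")
	fmt.Println(len(r.items)) // prints 2
}
```

If `Add` instead called another exported, self-locking method, the second `Lock` would block forever; splitting each operation into a locking wrapper and a lock-free body is what lets the `ConnectNodes*` helpers call one another safely.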


@@ -100,7 +100,7 @@ func ControlEvent(v interface{}) *Event {
 func (e *Event) String() string {
 	switch e.Type {
 	case EventTypeNode:
-		return fmt.Sprintf("<node-event> id: %s up: %t", e.Node.ID().TerminalString(), e.Node.Up)
+		return fmt.Sprintf("<node-event> id: %s up: %t", e.Node.ID().TerminalString(), e.Node.Up())
 	case EventTypeConn:
 		return fmt.Sprintf("<conn-event> nodes: %s->%s up: %t", e.Conn.One.TerminalString(), e.Conn.Other.TerminalString(), e.Conn.Up)
 	case EventTypeMsg:


@@ -421,14 +421,15 @@ type expectEvents struct {
 }
 
 func (t *expectEvents) nodeEvent(id string, up bool) *Event {
-	return &Event{
-		Type: EventTypeNode,
-		Node: &Node{
-			Config: &adapters.NodeConfig{
-				ID: enode.HexID(id),
-			},
-			Up: up,
-		},
+	node := Node{
+		Config: &adapters.NodeConfig{
+			ID: enode.HexID(id),
+		},
+		up: up,
+	}
+	return &Event{
+		Type: EventTypeNode,
+		Node: &node,
 	}
 }
@@ -480,6 +481,7 @@ loop:
 }
 
 func (t *expectEvents) expect(events ...*Event) {
+	t.Helper()
 	timeout := time.After(10 * time.Second)
 	i := 0
 	for {
@@ -501,8 +503,8 @@ func (t *expectEvents) expect(events ...*Event) {
 			if event.Node.ID() != expected.Node.ID() {
 				t.Fatalf("expected node event %d to have id %q, got %q", i, expected.Node.ID().TerminalString(), event.Node.ID().TerminalString())
 			}
-			if event.Node.Up != expected.Node.Up {
-				t.Fatalf("expected node event %d to have up=%t, got up=%t", i, expected.Node.Up, event.Node.Up)
+			if event.Node.Up() != expected.Node.Up() {
+				t.Fatalf("expected node event %d to have up=%t, got up=%t", i, expected.Node.Up(), event.Node.Up())
 			}
 		case EventTypeConn:


@@ -90,15 +90,12 @@ func TestMocker(t *testing.T) {
 		for {
 			select {
 			case event := <-events:
-				//if the event is a node Up event only
-				if event.Node != nil && event.Node.Up {
+				if isNodeUp(event) {
 					//add the correspondent node ID to the map
 					nodemap[event.Node.Config.ID] = true
 					//this means all nodes got a nodeUp event, so we can continue the test
 					if len(nodemap) == nodeCount {
 						nodesComplete = true
-						//wait for 3s as the mocker will need time to connect the nodes
-						//time.Sleep( 3 *time.Second)
 					}
 				} else if event.Conn != nil && nodesComplete {
 					connCount += 1
@@ -169,3 +166,7 @@ func TestMocker(t *testing.T) {
 		t.Fatalf("Expected empty list of nodes, got: %d", len(nodesInfo))
 	}
 }
+
+func isNodeUp(event *Event) bool {
+	return event.Node != nil && event.Node.Up()
+}


@@ -136,7 +136,7 @@ func (net *Network) Config() *NetworkConfig {
 // StartAll starts all nodes in the network
 func (net *Network) StartAll() error {
 	for _, node := range net.Nodes {
-		if node.Up {
+		if node.Up() {
 			continue
 		}
 		if err := net.Start(node.ID()); err != nil {
@@ -149,7 +149,7 @@ func (net *Network) StartAll() error {
 // StopAll stops all nodes in the network
 func (net *Network) StopAll() error {
 	for _, node := range net.Nodes {
-		if !node.Up {
+		if !node.Up() {
 			continue
 		}
 		if err := net.Stop(node.ID()); err != nil {
@@ -174,7 +174,7 @@ func (net *Network) startWithSnapshots(id enode.ID, snapshots map[string][]byte)
 	if node == nil {
 		return fmt.Errorf("node %v does not exist", id)
 	}
-	if node.Up {
+	if node.Up() {
 		return fmt.Errorf("node %v already up", id)
 	}
 	log.Trace("Starting node", "id", id, "adapter", net.nodeAdapter.Name())
@@ -182,10 +182,10 @@ func (net *Network) startWithSnapshots(id enode.ID, snapshots map[string][]byte)
 		log.Warn("Node startup failed", "id", id, "err", err)
 		return err
 	}
-	node.Up = true
+	node.SetUp(true)
 	log.Info("Started node", "id", id)
-	net.events.Send(NewEvent(node))
+	ev := NewEvent(node)
+	net.events.Send(ev)
 
 	// subscribe to peer events
 	client, err := node.Client()
@@ -210,12 +210,14 @@ func (net *Network) watchPeerEvents(id enode.ID, events chan *p2p.PeerEvent, sub
 		// assume the node is now down
 		net.lock.Lock()
 		defer net.lock.Unlock()
+
 		node := net.getNode(id)
 		if node == nil {
 			return
 		}
-		node.Up = false
-		net.events.Send(NewEvent(node))
+		node.SetUp(false)
+		ev := NewEvent(node)
+		net.events.Send(ev)
 	}()
 	for {
 		select {
@@ -251,34 +253,57 @@ func (net *Network) watchPeerEvents(id enode.ID, events chan *p2p.PeerEvent, sub
 // Stop stops the node with the given ID
 func (net *Network) Stop(id enode.ID) error {
-	net.lock.Lock()
-	node := net.getNode(id)
-	if node == nil {
-		return fmt.Errorf("node %v does not exist", id)
-	}
-	if !node.Up {
-		return fmt.Errorf("node %v already down", id)
-	}
-	node.Up = false
-	net.lock.Unlock()
-
-	err := node.Stop()
-	if err != nil {
-		net.lock.Lock()
-		node.Up = true
-		net.lock.Unlock()
+	// IMPORTANT: node.Stop() must NOT be called under net.lock as
+	// node.Reachable() closure has a reference to the network and
+	// calls net.InitConn() what also locks the network. => DEADLOCK
+	// That holds until the following ticket is not resolved:
+	var err error
+
+	node, err := func() (*Node, error) {
+		net.lock.Lock()
+		defer net.lock.Unlock()
+
+		node := net.getNode(id)
+		if node == nil {
+			return nil, fmt.Errorf("node %v does not exist", id)
+		}
+		if !node.Up() {
+			return nil, fmt.Errorf("node %v already down", id)
+		}
+		node.SetUp(false)
+		return node, nil
+	}()
+	if err != nil {
+		return err
+	}
+
+	err = node.Stop() // must be called without net.lock
+
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
+	if err != nil {
+		node.SetUp(true)
 		return err
 	}
 	log.Info("Stopped node", "id", id, "err", err)
-	net.events.Send(ControlEvent(node))
+	ev := ControlEvent(node)
+	net.events.Send(ev)
 	return nil
 }
 
 // Connect connects two nodes together by calling the "admin_addPeer" RPC
 // method on the "one" node so that it connects to the "other" node
 func (net *Network) Connect(oneID, otherID enode.ID) error {
+	net.lock.Lock()
+	defer net.lock.Unlock()
+
+	return net.connect(oneID, otherID)
+}
+
+func (net *Network) connect(oneID, otherID enode.ID) error {
 	log.Debug("Connecting nodes with addPeer", "id", oneID, "other", otherID)
-	conn, err := net.InitConn(oneID, otherID)
+	conn, err := net.initConn(oneID, otherID)
 	if err != nil {
 		return err
 	}
@@ -376,6 +401,14 @@ func (net *Network) GetNode(id enode.ID) *Node {
 	return net.getNode(id)
 }
 
+func (net *Network) getNode(id enode.ID) *Node {
+	i, found := net.nodeMap[id]
+	if !found {
+		return nil
+	}
+	return net.Nodes[i]
+}
+
 // GetNode gets the node with the given name, returning nil if the node does
 // not exist
 func (net *Network) GetNodeByName(name string) *Node {
@@ -398,28 +431,29 @@ func (net *Network) GetNodes() (nodes []*Node) {
 	net.lock.RLock()
 	defer net.lock.RUnlock()
 
-	nodes = append(nodes, net.Nodes...)
-	return nodes
+	return net.getNodes()
 }
 
-func (net *Network) getNode(id enode.ID) *Node {
-	i, found := net.nodeMap[id]
-	if !found {
-		return nil
-	}
-	return net.Nodes[i]
+func (net *Network) getNodes() (nodes []*Node) {
+	nodes = append(nodes, net.Nodes...)
+	return nodes
 }
 
 // GetRandomUpNode returns a random node on the network, which is running.
 func (net *Network) GetRandomUpNode(excludeIDs ...enode.ID) *Node {
 	net.lock.RLock()
 	defer net.lock.RUnlock()
+	return net.getRandomUpNode(excludeIDs...)
+}
+
+// GetRandomUpNode returns a random node on the network, which is running.
+func (net *Network) getRandomUpNode(excludeIDs ...enode.ID) *Node {
 	return net.getRandomNode(net.getUpNodeIDs(), excludeIDs)
 }
 
 func (net *Network) getUpNodeIDs() (ids []enode.ID) {
 	for _, node := range net.Nodes {
-		if node.Up {
+		if node.Up() {
 			ids = append(ids, node.ID())
 		}
 	}
@@ -434,8 +468,8 @@ func (net *Network) GetRandomDownNode(excludeIDs ...enode.ID) *Node {
 }
 
 func (net *Network) getDownNodeIDs() (ids []enode.ID) {
-	for _, node := range net.GetNodes() {
-		if !node.Up {
+	for _, node := range net.getNodes() {
+		if !node.Up() {
 			ids = append(ids, node.ID())
 		}
 	}
@@ -449,7 +483,7 @@ func (net *Network) getRandomNode(ids []enode.ID, excludeIDs []enode.ID) *Node {
 	if l == 0 {
 		return nil
 	}
-	return net.GetNode(filtered[rand.Intn(l)])
+	return net.getNode(filtered[rand.Intn(l)])
 }
 
 func filterIDs(ids []enode.ID, excludeIDs []enode.ID) []enode.ID {
@@ -527,6 +561,10 @@ func (net *Network) getConn(oneID, otherID enode.ID) *Conn {
 func (net *Network) InitConn(oneID, otherID enode.ID) (*Conn, error) {
 	net.lock.Lock()
 	defer net.lock.Unlock()
+	return net.initConn(oneID, otherID)
+}
+
+func (net *Network) initConn(oneID, otherID enode.ID) (*Conn, error) {
 	if oneID == otherID {
 		return nil, fmt.Errorf("refusing to connect to self %v", oneID)
 	}
@@ -584,8 +622,21 @@ type Node struct {
 	// Config if the config used to created the node
 	Config *adapters.NodeConfig `json:"config"`
 
-	// Up tracks whether or not the node is running
-	Up bool `json:"up"`
+	// up tracks whether or not the node is running
+	up   bool
+	upMu sync.RWMutex
+}
+
+func (n *Node) Up() bool {
+	n.upMu.RLock()
+	defer n.upMu.RUnlock()
+	return n.up
+}
+
+func (n *Node) SetUp(up bool) {
+	n.upMu.Lock()
+	defer n.upMu.Unlock()
+	n.up = up
 }
 
 // ID returns the ID of the node
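Replacing the exported `Up` field with an unexported flag behind accessor methods is the core data-race fix here: every read goes through `RLock` and every write through `Lock`, so concurrent readers never block each other while writers still get exclusive access. A standalone sketch of the accessor pattern (the `status` type is illustrative, not the simulations `Node`):

```go
package main

import (
	"fmt"
	"sync"
)

// status mirrors the Node.up / Up() / SetUp() pattern from the diff:
// the flag is unexported and only reachable through accessors guarded
// by a sync.RWMutex, so no caller can race on the raw field.
type status struct {
	mu sync.RWMutex
	up bool
}

// Up takes a read lock: many goroutines may read concurrently.
func (s *status) Up() bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.up
}

// SetUp takes the write lock: writers are exclusive.
func (s *status) SetUp(up bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.up = up
}

func main() {
	var s status
	var wg sync.WaitGroup
	// Concurrent readers and writers; run with `go run -race` to verify
	// the accessors keep the race detector quiet.
	for i := 0; i < 100; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); s.SetUp(true) }()
		go func() { defer wg.Done(); _ = s.Up() }()
	}
	wg.Wait()
	fmt.Println(s.Up()) // prints true
}
```

Making the field unexported is what gives the compiler-enforced guarantee: code outside the package physically cannot bypass the mutex, which a documented-but-exported field could not ensure.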
@@ -619,10 +670,29 @@ func (n *Node) MarshalJSON() ([]byte, error) {
 	}{
 		Info:   n.NodeInfo(),
 		Config: n.Config,
-		Up:     n.Up,
+		Up:     n.Up(),
 	})
 }
 
+// UnmarshalJSON implements json.Unmarshaler interface so that we don't lose
+// Node.up status. IMPORTANT: The implementation is incomplete; we lose
+// p2p.NodeInfo.
+func (n *Node) UnmarshalJSON(raw []byte) error {
+	// TODO: How should we turn back NodeInfo into n.Node?
+	// Ticket: https://github.com/ethersphere/go-ethereum/issues/1177
+	node := struct {
+		Config *adapters.NodeConfig `json:"config,omitempty"`
+		Up     bool                 `json:"up"`
+	}{}
+	if err := json.Unmarshal(raw, &node); err != nil {
+		return err
+	}
+	n.SetUp(node.Up)
+	n.Config = node.Config
+	return nil
+}
+
 // Conn represents a connection between two nodes in the network
 type Conn struct {
 	// One is the node which initiated the connection
@@ -642,10 +712,10 @@ type Conn struct {
 // nodesUp returns whether both nodes are currently up
 func (c *Conn) nodesUp() error {
-	if !c.one.Up {
+	if !c.one.Up() {
 		return fmt.Errorf("one %v is not up", c.One)
 	}
-	if !c.other.Up {
+	if !c.other.Up() {
 		return fmt.Errorf("other %v is not up", c.Other)
 	}
 	return nil
@@ -717,7 +787,7 @@ func (net *Network) snapshot(addServices []string, removeServices []string) (*Sn
 	}
 	for i, node := range net.Nodes {
 		snap.Nodes[i] = NodeSnapshot{Node: *node}
-		if !node.Up {
+		if !node.Up() {
 			continue
 		}
 		snapshots, err := node.Snapshots()
@@ -772,7 +842,7 @@ func (net *Network) Load(snap *Snapshot) error {
 		if _, err := net.NewNodeWithConfig(n.Node.Config); err != nil {
 			return err
 		}
-		if !n.Node.Up {
+		if !n.Node.Up() {
 			continue
 		}
 		if err := net.startWithSnapshots(n.Node.Config.ID, n.Snapshots); err != nil {
@@ -844,7 +914,7 @@ func (net *Network) Load(snap *Snapshot) error {
 	// Start connecting.
 	for _, conn := range snap.Conns {
-		if !net.GetNode(conn.One).Up || !net.GetNode(conn.Other).Up {
+		if !net.GetNode(conn.One).Up() || !net.GetNode(conn.Other).Up() {
 			//in this case, at least one of the nodes of a connection is not up,
 			//so it would result in the snapshot `Load` to fail
 			continue
@@ -898,7 +968,7 @@ func (net *Network) executeControlEvent(event *Event) {
 }
 
 func (net *Network) executeNodeEvent(e *Event) error {
-	if !e.Node.Up {
+	if !e.Node.Up() {
 		return net.Stop(e.Node.ID())
 	}

Some files were not shown because too many files have changed in this diff.